IPIE Publishes First Report by its Scientific Panel on Global Standards for AI Audits: Global Approaches to Auditing Artificial Intelligence
August 25, 2024
The International Panel on the Information Environment (IPIE), an independent global scientific body dedicated to providing actionable, evidence-based knowledge about threats to our information landscape, has published Global Approaches to Auditing Artificial Intelligence: A Literature Review, the first in a series of publications from its Scientific Panel on Global Standards for AI Audits.
This report explains what artificial intelligence (AI) system audits are and why they matter: audits assess whether AI systems ‘engender the outcomes they are expected to and/or whether they have significant – possibly adverse – societal and technological impacts.’
Wendy Hui Kyong Chun, Scientific Panel Chair, notes that “AI audits are an important means of assessing the global impacts of AI systems, but the audit ecosystem is extremely diverse and follows several different guidelines, frameworks, rules, and standards. This literature review, which was shaped by leading experts in the field, will help us parse these approaches and understand what really matters to global standards for AI audits.”
Following an extensive and collaborative consultation process with Scientific Panel members, IPIE scientists developed this foundational review, which lays the groundwork for further publications by the Panel on AI auditing due for release later in 2024. The report identifies and analyzes existing approaches to AI auditing as conducted by regulators and industry players, and highlights post-audit enforcement mechanisms. It also examines key variables in the audit process (who, what and when, why, how, and what next) and considers how these might shape or influence an audit.
The report identifies three key takeaways for global policymakers as they develop and implement audit systems for the rapidly changing world of AI.
First, to fully assess the benefits and risks associated with AI in all of its current and future iterations, a trustworthy audit ecosystem must be established that includes input from internal, external, and community auditors.
- Internal (and specialized second-party) audits can assess the governance of AI systems for compliance with national and global regulations. However, access to the data used in an audit, as well as the results of that audit, may be limited to the organization being audited, which in turn can restrict transparency and accountability and damage public trust.
- Trust could be built through other types of audits, such as independent assessments conducted by academic researchers, civil society actors, and impacted communities.
Second, to facilitate stringent and trustworthy external audits, auditors should have better access to the data, modelling, and documentation held by developers. While internal auditors are typically granted greater access than external auditors, developers may place controls on both, constraining the outcomes of certain auditing processes.
Finally, audit regimes must account for the global impacts of AI systems that are currently audited against national or regional criteria. To date, most audits are conducted in the ‘global north’ (primarily North America and Europe), published in English, and focused on effects in the global north. However, AI systems are deployed globally and often have social and environmental impacts in the global south that may differ substantially from those identified in a north-focused audit process. This is not to suggest that audits be fully standardized (indeed, it is important that they retain the flexibility needed to address national and local concerns), but a standard set of auditing protocols should be developed from which more tailored audit systems can be created.
Later this year, the IPIE’s Scientific Panel on Global Standards for AI Audits will publish a set of global protocols for AI audits for discussion by global policymakers, as well as a separate report on the provenance of data used by AI and the impact of this provenance on system bias.
About the IPIE
The IPIE is an independent and global science organization committed to providing the most actionable scientific knowledge about threats to the world’s information environment. Based in Switzerland, the IPIE provides policymakers, industry, and civil society with independent scientific assessments of the global information environment by organizing, evaluating, and elevating research, with the broad aim of improving that environment. Hundreds of researchers from around the world contribute to the IPIE’s reports.