IPIE

Press Release - Artificial Intelligence: It's Time for Proactive Oversight, Say Global Experts


December 18, 2024 – Zürich, Switzerland – An international panel of AI experts and academics has declared that the time for passive oversight of AI systems is over. The International Panel on the Information Environment (IPIE) is issuing a call for the implementation of its newly developed, comprehensive Global Framework for AI Audits to provide greater transparency and assurance to the public that AI systems are safe, fair, and effective.

As artificial intelligence becomes deeply embedded in our lives, addressing its potential to perpetuate biases, exacerbate inequalities, and cause environmental and other harms has never been more critical. Increasingly, governments, regulators, industry, civil society, and researchers are turning to audits to assess an AI system's safety. Audits are a vehicle for establishing public trust in these systems and the corporations that deploy them. Yet this explosion in AI auditing has taken place without any consistent standard to guide it.

The IPIE's Scientific Panel on Global Standards for AI Audits has produced a Global Framework for AI Audits, based on independent scientific consensus on what makes an audit effective and trustworthy. The panel brought together a diverse group of experts, from computer scientists who have defined the field of algorithmic auditing to specialists in data journalism and AI ethics.

After nine months of deliberation by the world's experts in advanced AI auditing techniques, the panel has identified the criteria recognized by the global scientific community as essential for consistent assessment of AI systems' risks and impacts.

Professor Wendy Hui Kyong Chun (Simon Fraser University, Canada), who led the IPIE panel, said: "Current auditing processes fail to capture the global and local impacts of these technologies, or whether their promises translate into real benefits. Just as society did not limit audits of oil or tobacco companies to self-compliance exercises, the catastrophic consequences of past negligence, like the Gulf oil spill or the health impacts of smoking, demonstrate the dangers of insufficient oversight."

The panel found that AI auditing must not be reduced to compliance checks. The current auditing ecosystem lacks universally accepted standards, leaving critical gaps in our ability to assess AI's full risks. The panel's final report emphasizes that audits should not only verify claims made by AI developers but also set new standards for responsible AI use.

The global effects of AI demand a global response. Audits must span the entire AI supply chain—from rare-earth mineral extraction in Africa to AI model development in Asia.  

"The IPIE Global AI Auditing Framework can provide a critical safeguard against the profound risks these technologies may pose to our societies and the environment," said panel member Alondra Nelson, the Harold F. Linder Professor at the Institute for Advanced Study, who serves on the United Nations AI Advisory Body and previously served as acting director of the White House Office of Science and Technology Policy and as its principal deputy director for science and society.

“Without an independent, globally comprehensive auditing ecosystem, we risk allowing AI use to accelerate without guardrails and further entrench inequality, exploitation, and harm. This Framework is a pivotal step toward ensuring AI serves all of humanity, not just those who build or profit from it.” 

Key features of the Framework include: 

  1. Auditor Independence and Qualifications – Auditors must be independent and highly qualified, with the expertise to assess AI's complex social, environmental, and technical risks.
  2. Access to Documentation and Data – Auditors must have full access to AI models, data, and documentation of associated risks to evaluate the true impact of AI on communities, the environment, and labor forces.
  3. Global Perspective – AI's effects are felt worldwide, especially in the Global South. Audits must account for diverse cultural, social, and environmental contexts to ensure inclusivity and fairness.
  4. Post-Audit Transparency – Audit findings and recommendations must be publicly available, ensuring accountability and allowing affected communities to have a voice in follow-up actions.

The Global Framework for AI Audits is intended for use by federal and state government agencies and bodies, AI developers and their parent companies, civil society organizations, academic researchers, and legal and public advocates, setting a standard for appropriate risk and impact reviews of this quickly developing technology.

ENDS

For more information on the Panel members, the Global AI Auditing Framework and Summary for Policy Makers, and the full report, visit the panel's page.

For media inquiries, interviews, or more information, please contact: Press@IPIE.info

About the IPIE

The International Panel on the Information Environment (IPIE) is an independent and global science organization providing scientific knowledge about the health of the world's information environment. Based in Switzerland, the IPIE offers policymakers, industry, and civil society actionable scientific assessments about threats to the information environment, including AI bias, algorithmic manipulation, and disinformation. The IPIE is the only scientific body systematically organizing, evaluating, and elevating research with the broad aim of improving the global information environment. Hundreds of researchers worldwide contribute to the IPIE's reports.
