AI Audits Key to Governance: IPIE's Response to the UN's Governing AI for Humanity Report

March 2024 - The UN Secretary-General's AI Advisory Body launched its interim report, Governing AI for Humanity, in December 2023. The report calls for closer alignment between international norms and how AI is developed and rolled out. The AI Advisory Body was convened to analyze and advance recommendations for international AI governance and has called for open consultation and feedback on the report.

The IPIE is uniquely positioned to respond to the report as the only scientific, systemic, and global organization established to address the degradation of the information environment. Led by 300 top scientists from different disciplines across 60 countries, the IPIE delivers neutral assessments of the condition of the information environment, empowering policymakers to take meaningful action.

The feedback comes from the IPIE's newly established Scientific Panel on Global Standards for AI Audits, which grew out of increasing interest in the possibilities and challenges AI poses to the information environment and its regulation. The Panel is building consensus on how global AI auditing standards can be operationalized and aggregating applied research into viable global standards that can feed into regulatory frameworks in need of authoritative testimony, such as this UN report.

The IPIE’s Response

The scientific community welcomes the report and shares the UN's assessment that properly managed AI can bring significant benefits to society, and that the development of risk mitigation systems is critical to avoiding its massive potential for harm.

The IPIE urges any UN-led approach to AI governance to place AI audits at its center, both to assess the opportunities and benefits of AI and to fully measure the risks it poses. AI audits are key to enforcing regulation (and yet receive no mention in the Interim Report).

These audits must be based on a universally agreed set of principles and parameters, yet flexible enough to allow for local utility while addressing global harms.

Although many actors are developing AI audits, there is no consensus on what should be audited, by whom, or at which stage of the system's life cycle. Government audits generally focus on legal and regulatory compliance, business audits use a risk-mitigation framework, and civil society audits focus on societal good versus harm.

The IPIE recommends that audits be understood as going beyond the transparency of AI models. Audits are crucial to meaningful public accountability in several ways; they can:

  • operationalize policy and regulatory aims, 
  • reveal if the opportunities afforded by AI are being realized,
  • identify an AI system’s harms at both local and global levels.

The IPIE also recommends that audits be used to:

  • Ensure public accountability rather than merely testing adherence to technical performance criteria or regulatory compliance. This would also involve safe harbor protections for independent auditors pressing for such accountability, such as academics and journalists.
  • Help the UN assess ‘AI’s impact on a variety of global economic, social, health, security, and cultural conditions’ and ensure alignment with ‘universal respect for, and enforcement of, human rights and the rule of law’. This includes ensuring that AI systems adhere to principles of transparency, accountability, privacy, and fairness.

Building Global Scientific Consensus Through the IPIE

The IPIE is fulfilling the report’s stated need for an IPCC-like body to lead an “independent, expert-led process that unlocks scientific, evidence-based insights”. 

As well as producing reports related to global standards for AI auditing, the IPIE aims to identify the biases within existing auditing structures. The IPIE's Scientific Panel on Global Standards for AI Audits builds scientific consensus regarding international collaboration on data, compute, and talent to help ensure AI systems support the SDGs, international human rights norms, and modern ESG goals.

Our reporting and peer review process aggregates knowledge from hundreds of colleagues around the world, in addition to consultations with experts from civil society, auditing organizations, and corporations. Two of the technical reports we will publish this year (a literature review of current AI auditing systems, and a report on data provenance related to AI systems and researcher access to that data) speak directly to this function.

A critical range of challenges to the SDGs and international human rights norms sits within the broader global information environment. AI and machine learning systems belong to a family of emerging and frontier technologies that require applied scientific research on their social impact.

The IPIE maintains its funding independence from national governments. This allows the IPIE to function as a neutral and nonpartisan endeavor that can work across privacy regulations, data sharing rules, economic pressures and protectionism, and the current global political environment.

At the same time, the IPIE is crucially global in perspective and involves researchers from around the world. This is especially important in considering the impacts of AI systems: the data driving AI innovation is often extracted from the Global South, the environmental costs of intensive compute are borne globally, and the impacts of AI will be felt by citizens and consumers around the world.

The IPIE functions as an active steward of the global scientific consensus on matters related to the global information environment, including the impacts of AI systems, and is happy to assist the UN AI Advisory Body, or any future multilateral organization governed by member states, through this independent scientific work.

To view the IPIE’s full response to the UN Advisory Body on AI, please click here.
