The World Ahead | AI regulation in 2024

A global agency to oversee AI is a tall order

Setting one up will be as complex as the technology itself

Illustration: a wind-up toy robot with a ball and chain attached to its legs (Ben Denzer)

By Ludwig Siegele


International bodies often start small. The International Civil Aviation Organisation (ICAO), established in 1944, held decades of discussions before it began to set air-traffic rules. In 1952 the European Organisation for Nuclear Research, or CERN, started life in unused offices at the University of Copenhagen. And until 1979 the International Atomic Energy Agency (IAEA), the world’s nuclear watchdog, was based in a hotel in Vienna.

These three organisations, each embodying a different way to govern a powerful technology, are now the preferred templates for a new global entity. The ICAO is mainly a standards-setter; CERN is a research outfit; the IAEA is a nuclear cop. Over the coming year, the world’s governments are expected to decide what kind of global body they want to regulate another technology: artificial intelligence (AI).

Discussion of AI often blurs three types of risk. AI-powered software that, say, interprets medical images may not be perfectly accurate. Large language models (LLMs), which power “generative AI” services such as ChatGPT, may display prejudice or bias. And some fear that the most powerful “frontier models” could be used to create pathogens or cyber-weapons, and might lead to superhuman “artificial general intelligence” that could even threaten humanity’s survival.

National laws might be able to deal with simpler AI applications and LLMs, but frontier models may require global rules—and an international body to oversee them. Microsoft, for instance, has advocated for an agency similar to the ICAO; OpenAI has called for “an IAEA for superintelligence”; AI researchers, meanwhile, are keener on a CERN-like entity. A compromise would be to create something akin to the Intergovernmental Panel on Climate Change, which keeps the world abreast of research into global warming and develops ways to gauge its impact. Ursula von der Leyen, the president of the European Commission, has endorsed the idea, as has a group of tech executives.

Yet this is unlikely to be the last word. An International Panel on AI Safety, as some call it, could lead to the creation of other global organisations. Based on research about the international institutions spawned by other major technologies, the authors of a recent research paper imagine an entire constellation of bodies. These range from an “AI Safety Project” for risk research to a “Commission on Frontier AI” to build consensus around critical questions. As Margaret Levi of Stanford University, one of the authors, puts it: “a single institution cannot do it all.” Expect to have to learn the meaning of even more acronyms.

Ludwig Siegele, European business editor, The Economist, Berlin


This article appeared in the International section of the print edition of The World Ahead 2024 under the headline “AI’s regulatory challenge”
