


By Gideon Lichfield
CITRIS Tech Policy Fellow
Goldman School of Public Policy
University of California, Berkeley
Introduction
February brings the AI Action Summit in Paris, the biggest global AI gathering since the 2023 summit in the UK. Half the people I know are going and the other half are having FOMO. One of the buzzwords there will be "public AI" – basically, the notion that governments need to build their own publicly owned and controlled AI infrastructure to serve societal goals instead of profit motives. (EDIT: It doesn't have to be just governments.)
If that made you snooze, I'm not surprised. Even "public AI" is hardly a rallying cry like "free healthcare" or "save the planet." Frankly, the people pushing it haven't done a great job of explaining it in terms anyone but a policy wonk would care about. That's in part because, surprisingly, there's no really good historical parallel – no "like X, but for AI." Certainly not a bridge.
But I think it's a potentially important idea. So I'm going to try.
The TL;DR: Imagine an entire parallel ecosystem of AI tech, but built and owned by governments. By not being beholden to profit, it might provide things the private AI giants have no incentive to create, like AI tailored for doing research on hard social problems and cutting-edge science, or for serving specific geographic regions and languages. It wouldn't spew misinformation and hate speech, and would generally try to support a healthy public sphere. And affordable access for all would be guaranteed. Nice idea – but will anyone pay to build it?
First: The Problem with Private AI
Historically, infrastructure development – think roads, railways, electricity, telephony – has gone like this. A company, or a few of them, start building a thing. But it's not economic to build the thing everywhere, so access is patchy. If there's a monopoly, the thing is expensive. If there's competition, there may be incompatible versions of the thing, like different railway gauges. Perhaps the thing is also dangerous.
Eventually, the government realizes that society really needs the thing. So it steps in to expand access, standardize requirements, reduce prices, and impose safety regulations. It might, for instance, nationalize the thing (electricity in the UK), create incentives for more people to build the thing (electricity in the US), break up monopolies (Standard Oil), or regulate them (railroads). That's how the thing goes from being just a thing to being infrastructure.
But with 21st-century digital services – think social media, cloud computing, online publishing, AI – things become infrastructure faster than the government can get its socks on.[1] The internet makes access to the thing almost ubiquitous. The price of the thing often starts at zero (aside from the hidden price of providing your personal data). The variations are a feature, not a bug – choose the version you like best. And because AI especially is a general-purpose technology, lots of people think it could be dangerous, but nobody can agree on exactly how, or whether the dangers outweigh the benefits.
So now the thing is already infrastructure, and it's cheap for users, and choice is good, and the risks are impossible to pin down. So what kind of intervention can a government even make?
This of course is a story that suits tech companies very nicely. In fact, there are lots of problems with letting a handful of secretive, immensely rich organizations set the rules for a powerful general-purpose technology, which I probably don't need to elaborate for readers of Futurepolis.[2]
Most people think the solution to these problems is regulation. Governments impose all sorts of rules on physical infrastructure companies. If the water supply is contaminated, the water company (hopefully) gets it in the neck. But that works only because those things have easily identifiable failure modes – water gets contaminated, bridges collapse, trains get derailed. It's much harder to regulate a technology like AI that could take any number of forms and be used in any number of ways.
Another oft-touted solution is open-source or open-weight[3] AI models – alternatives to proprietary ones like ChatGPT – which anyone can freely use and adapt. These do have the effect of taking power away from the big AI companies. But they don't ensure that AI is being used for the public good – they just allow more people to create their own AI products, which could be exactly as bad for all the same reasons.
There's also much talk about democratic AI governance, which essentially means getting AI companies, funders, and regulators to listen to citizens and incorporate their ideas as to what "good" AI is. This is great, but it does kinda depend on all those institutions being willing to play ball.
Hence the movement for public AI.
What Would Public AI Consist Of?
The idea is for governments to build publicly owned versions of the key components of the AI stack, chiefly:
- datacenters that public sector organizations and researchers as well as small businesses can use for training and running models
- training datasets that anyone can use – what some are calling an "open Library of Alexandria" – which don't contain junk data, aren't stolen from copyrighted sources, and can be tailored to specific cultural contexts or uses (e.g. for climate modeling)
- truly open-source foundation models that countries, research labs and companies can adapt and build on, trained on reliable data and with democratic values built in
- standards, goals, and governance mechanisms to guide the development of AI in a socially beneficial direction
What Would We Get Out of It?
This is where the lack of a straightforward historical parallel makes it a little hard to explain the point of public AI. When governments have built public infrastructure in the past, it's usually been to fill a gap left by the private sector. Here the idea is to build a whole alternative system to ones that already exist. Sort of as if you built an entire railroad network alongside the existing one instead of just subsidizing branch lines to remote places.
So you can't just say this is "bridges, but for AI" or "power grids, but for AI." However, there are a few different "X for AI" metaphors that can at least explain different facets of it.
- It's a BBC or PBS for AI. Private media outlets are free to take whatever positions they want, including pushing misinformation or specific political views. But many governments created public broadcasters whose mission includes creating shared values and understandings and a healthy civil society. (And sometimes more than that: the BBC actually spurred the adoption of radio in Britain, because its funding was tied to the number of radios sold.) In the same way, public AI could create AI services that promote democratic values and a healthy public discourse instead of being used to spread misinformation or hate.
- It's a CERN or DARPA for AI. Many of the US's biggest technological innovations in the 20th century came out of the research labs at firms like AT&T, Xerox, and IBM. But those firms still had profits in their sights, not societal goals. DARPA, however, funds research that's crucial to US national security. CERN pools billions of dollars' worth of research funding that individual countries wouldn't be able to muster alone. Public AI could do the same, giving scientists the means to do cutting-edge research and develop AI models for uses the private sector might not. For example, a specialist medical AI for public-health research, a housing AI to help solve problems of affordable housing, or a legal AI to improve the justice system.
- It's a Post Office for AI. If DHL or FedEx stop serving certain areas or jack up their prices, the postal service ensures everyone will still have an affordable way to send mail. Right now, anyone who wants to use AI for free has a plethora of options, but will they always? Just look at Twitter to see how a platform can change radically when a new owner takes over. Public AI would ensure that the public always has access to high-quality AI services for free or at a guaranteed low cost.
- It's public utilities for AI. A private company has one goal: to make money. It can be regulated against causing harms – pollution, for instance, or dangerous products – but it can't be forced to do good. A public utility can be. A public power utility, for example, may have to make enough money not only to cover its own costs but to help maintain the electricity grid. A water utility may be required not just to provide clean water but to fund the sewage system or provide irrigation for public parks. In the same way, a public AI utility might be obliged to encode democratic values in its models or support the creation of public datasets. (Here's a slide deck with some useful diagrams for this.)
- It's public libraries for AI. The library system ensures anyone can have access to knowledge. Public AI ensures anyone can have access to AI.
There are probably some other metaphors you could pick. Again, though, the point is that public AI is all these things. That's what I think makes it hard to get a handle on, because there's no one description that covers it.
How about a Supermarket Metaphor?
OK, bear with me. Perhaps the best way to explain public AI is something like this.
There are lots of supermarkets, but a lot of the food they sell is unhealthy or produced in unsustainable ways, and in some places there are food deserts where one chain has a monopoly. So what if the government set up a whole alternative supermarket chain that sold only organic and local produce, foods with low sugar content, nothing too processed, etc., at cost prices, with branches everywhere, and also offered cooking and nutrition classes for free? The economies of scale of that chain would shift incentives for the food industry and boost sustainable and healthy food production, so even the private supermarkets would end up changing what they sell. And all of it would ultimately lead to better health outcomes, lower healthcare costs, higher tax revenues (because healthy people can work), and less environmental damage, more than compensating for any initial outlay on the supermarkets.
In this metaphor, the origins of the food – organic, low-sugar, etc. – are the training data. The food itself is AI models. The government supermarkets are the datacenters and other physical infrastructure. The private supermarkets and the food industry are the AI private sector. And so on.
When you put it this way, it sounds crazy. No government in its right mind would do this for supermarkets. But maybe they should?
Anyway, I don't know what to call it. But there's gotta be something more thrilling than "public AI."
But Will It Be Built?
There are scattered efforts to build different parts of the public AI stack in different countries. For example, a project called OpenEuroLLM wants to build foundation models for various European languages. There's Euro Stack, which wants to build "a complete digital ecosystem" for Europe. There are national AI projects in places like Sweden, Switzerland, Singapore, and a couple in the US (though what will become of them in the Trump administration is anyone's guess). There are reports that an as-yet unnamed foundation for AI in the public interest will be launched in Paris next week.
But it's nothing even remotely on the scale of what the private sector is doing. Some people are all abuzz about a report from a few days ago that the EU has decided to invest $56 million in an open-source European model (presumably OpenEuroLLM, though the article doesn't say). Some point out that China's DeepSeek reportedly trained its R1 model, which took the world by storm a couple of weeks ago, for just $6 million.
But that number is probably a massive underestimate. Meanwhile, the $56 million is less than a tenth of what Mistral, Europe's biggest homegrown AI company, raised in a funding round last summer. It's roughly one six-hundredth of the €30-35 billion that one study estimated it would cost to build a "CERN for AI" (and that just in the first three years). Never mind the tens or perhaps even hundreds of billions the US is supposedly planning to throw at the "Stargate" project, though I think we should take those claims with a giant heap of salt.
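If you want to sanity-check those ratios, here's a quick back-of-the-envelope sketch in Python. The figures are the ones cited above; the size of Mistral's round (roughly €600 million) is my assumption for illustration, and currencies are mixed as loosely here as in the comparisons themselves.

```python
# Rough arithmetic behind the funding comparisons above.
# Assumption: Mistral's summer 2024 round was ~600 million (EUR);
# USD/EUR are mixed loosely, as in the prose.

eu_investment = 56e6                  # EU's reported open-source model investment
mistral_round = 600e6                 # Mistral's funding round (assumed size)
cern_for_ai_estimates = (30e9, 35e9)  # "CERN for AI" cost estimate, first 3 years

print(f"EU investment vs. Mistral round: {eu_investment / mistral_round:.1%}")
for cost in cern_for_ai_estimates:
    ratio = cost / eu_investment
    print(f"A {cost / 1e9:.0f}B 'CERN for AI' is ~{ratio:,.0f}x the EU investment")
```

Run it and you get about 9% of Mistral's round, and a "CERN for AI" somewhere between ~540x and ~630x the EU's outlay – hence "roughly one six-hundredth."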
Still, people in the Public AI Network, a loose coalition of policy people and researchers working on this – and whom I must thank for helping me gain whatever meagre understanding I have of this topic – are planning to propose a handful of "moonshots" at the Paris summit next week. The open-source LLM is the main one, followed by the "library of Alexandria" (massive public datasets), the "CERN for AI" (i.e., massive computing infrastructure), and some frameworks for governance and regulation. I wish them luck… and a better name too.
Links
More about why finding the right name matters. "Jenga politics," "reverse hockey-stick dismantlement," "arson"… danah boyd tries to find framings to capture the first weeks of the Trump administration. (apophenia)
And yet more. A study of hundreds of words and phrases that Americans of different political persuasions use in different ways. The conclusion: "Americans seem to speak two different 'languages'" composed of the same words. And a tool that helps you filter out politically charged language from your own writing. (Better Conflict Bulletin)
Europe's first Gen-Z revolution. Protests led by Serbian youth forced the prime minister out of power, and they organized themselves using modern technological tools for direct democracy – a first for the country. (Marija Gavrilov via Exponential View)
Track the gutting of the US administrative state. Follow Henry Farrell's list of experts on BlueSky who are keeping tabs. (BlueSky via Programmable Mutter)
How to actually save $2 trillion. It's not even a crazy figure to shoot for. You just have to understand that government employees aren't the source of the waste, they're the solution to it – which, of course, Musk and Trump won't. (Prospect, and summarized on Pluralistic)
Notes
[1] In a short but excellent essay, Robin Berjon argues that the proliferation of digital infrastructure faster than governments can react to it "may be the biggest but least recognised shock delivered by the internet."
[2] But in case I do:
- Enshittification. There's a general trend that private-sector tech services get worse as they consolidate their hold on a market, because they lose the incentive to attract more users.
- Lock-in. When the service does go to shit, companies try to keep their users from leaving – for example, by making it hard to export their data.
- Perverse incentives. The big AI companies' primary goal isn't actually to serve their users' needs. It's to build the biggest, baddest models possible, in the race to get (or so they hope) to artificial general intelligence. This makes them do things like suck up vast amounts of training data that may be copyrighted, or just downright trash.
- Lowest common denominator. Nonetheless, private-sector firms do want to get as many users as possible along the way. So they value general-purpose usability over more specific applications aimed at pressing social problems.
- Cultural biases. The big LLMs are trained disproportionately on English-language data. While they can answer you in any language you like, those answers are just translations: the underlying content is biased towards what it would be for English-speakers. For other users, it may be culturally insensitive or just plain wrong. That's because there's simply less training data available in other languages, and moreover, companies see a diminishing return in training models to serve smaller cultural groups.
- No values. LLMs will happily spit out misinformation, hate speech, and justifications for Nazism, if that's in their training data. Their creators may impose content moderation – or they may not. Just as Meta recently decided to stop fact-checking and allow some forms of hate speech it previously blocked, AI companies may decide it's in their commercial interests to let their models run riot.
- No accountability. Nobody really knows what goes into the building of models like ChatGPT – what data they were trained on or how that data was digested. Relying on them is like trusting a bridge to be safe without knowing what materials the construction company used. If the bridge collapses, who's going to take the blame?
- Concentration of wealth and power. Companies valued at hundreds of billions of dollars with a stranglehold on a technology can make governments do their bidding.
- Other negative externalities. Such as climate impacts from building massive datacenters.
[3] DeepSeek's and Meta's models, for example, are open-weight, despite often being described as open-source. The difference is that open-weight models don't disclose the data they were trained on or details of the training process, only the outcome – as if you published the blueprints for a building but not the details of the construction process or where the materials came from. More details on this distinction here and here.
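To make that distinction tangible, here's a minimal Python sketch of what an open-weight release does and doesn't let you do. It assumes the Hugging Face transformers library, and the model ID is a hypothetical placeholder, not any particular release.

```python
# Minimal sketch: what "open-weight" gives you in practice.
# Assumes Hugging Face transformers; the model ID is a placeholder.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "some-org/open-weight-model"  # hypothetical open-weight release

# You CAN download the finished weights and run (or fine-tune) the model...
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

inputs = tokenizer("Public AI is", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))

# ...but you CANNOT inspect what went into it: the training corpus, the
# data filtering, and the training code usually aren't published. A truly
# open-source release would ship those too, so the model could be audited
# and reproduced from scratch.
```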
This article was first published by Futurepolis on 7 February 2025 under the title "So what, kinda like a bridge, but for AI".
Originally published by the World Economic Forum, 02.11.2025, under the terms of a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International license.


