Why not replace the state (the whole apparatus of government) with an AI robot? It is not difficult to imagine that the bot would be more efficient than governments are at, say, controlling their budgets. It is true that citizens might not be able to control the reigning bot, except initially as trainers (imposing a “constitution” on it) or perhaps ex post by unplugging it. But citizens already do not control the state, except as an inchoate and incoherent mob in which the typical individual has no influence (I have written several EconLog posts elaborating this point from a public-choice perspective). An AI government, however, could hardly replicate the main advantage of democracy, when it works: the possibility of throwing out the rascals when they harm a large majority of the citizens.
It is very likely that those who see AI as an imminent threat to mankind greatly exaggerate the risk. It is difficult to see how AI could pose such a threat except by overruling individuals. One of the three so-called “godfathers” of AI is Yann LeCun, a professor at New York University and Chief AI Scientist at Meta. He thinks that AI as we know it is dumber than a cat. A Wall Street Journal columnist quotes LeCun’s reply to the tweet of another AI researcher (see Christopher Mims, “This AI Pioneer Thinks AI Is Dumber Than a Cat,” Wall Street Journal, October 12, 2024):
It seems to me that before “urgently figuring out how to control AI systems much smarter than us,” we need to have the beginning of a hint of a design for a system smarter than a house cat.
The columnist adds:
[LeCun] likes the cat metaphor. Felines, after all, have a mental model of the physical world, persistent memory, some reasoning ability and a capacity for planning, he says. None of these qualities are present in today’s “frontier” AIs, including those made by Meta itself.
And, quoting LeCun:
We are used to the idea that people or entities that can express themselves, or manipulate language, are smart—but that’s not true. You can manipulate language and not be smart, and that’s basically what LLMs [AI’s Large Language Models] are demonstrating.
The idea that manipulating language is not proof of smartness is epistemologically interesting, although just listening to a typical fraudster or post-truth politician shows as much. Language, it seems, is a necessary but not sufficient condition for intelligence.
In any event, those who believe that AI is so dangerous that it should be controlled by governments neglect how often political power, including the modern state, has been detrimental or dangerously inefficient throughout the history of mankind, as well as the economic theories that explain why. Yoshua Bengio, another of the three godfathers and a friend of LeCun, illustrates this error:
“I don’t think we should leave it to the competition between companies and the profit motive alone to protect the public and democracy,” says Bengio. “That is why I think we need governments involved.”
A fundamental reason why the state should leave AI alone is that a government is a very simple and blunt organization compared with the complexity and productivity of free competition and free social interaction. Free markets generate price signals that convey more information than political processes can gather, as Friedrich Hayek showed in his 1945 American Economic Review article, “The Use of Knowledge in Society.” Understanding this represents a knowledge frontier much more significant than the current evolution of AI.
Which brings us back to my opening question. Even in the best case, AI would be incapable of efficiently coordinating individual actions in any society except perhaps a tribal one. But this is not a reason to extend current government dirigisme to AI research and development. One way or another, appointing a wolf to guard the sheep against wolves is not a great idea.
******************************
On the lighter side (or is it?), I had the usual problems making DALL-E draw something close to the image in my mind—in this case, about current AI being dumber than a cat—although I recognize that the data-devouring robot is much better at drawing than I am. My idea was to show a cat bewildered by what he sees in the world. When the bot finally produced something not too far from what I wanted, I wrote, “That’s good! You’re a good cat.” He replied (I am reproducing our “conversation” literally):
“Thank you! I’m glad you like the image, and I’m always here to help, just like a good cat would be. If you need anything else, feel free to ask!”
“Where did you get your little sense of humor?” I asked. The silicon creature replied:
“I guess you could say my sense of humor is programmed in! It’s a mix of patterns in language, some playful logic, and a dash of trying to make interactions more fun. Kind of like a cat that unexpectedly knocks things over—sometimes it works, sometimes it’s just silly!”