Superintelligence, collective stupidity, and the AI agents of the future

First and foremost, James Evans and his colleagues are my greatest role models at the moment, and I can’t think of a better research plan than to follow in their footsteps. This blog post is thus merely friendly chatter about their recent Science essay, although, me being a scientist, it may come across as a bit polemical. (Cover from a French book about social networks, or social systems, or something.)

Top scientific outlets like to push the AI industry’s narratives. One of these is the impending breakthrough of collective superintelligence among AI agents. The argument is that, since humanity as a whole exhibits collective intelligence (humans together can solve intellectual challenges beyond the reach of any individual), the same would happen if we unleash socializing AI agents.

Even if we assume it is meaningful to compare human, artificial, and collective intelligence (a big if), this argument sounds hollow to me. There are clearly mechanisms of collective stupidity at work, too. The debates around AI and its integration into society feed us examples on a daily basis. The diminishing returns on investment in academic science are another (at the very least, they suggest that collective intelligence doesn’t scale indefinitely).

What if the net forces of collective intelligence and stupidity balance out at this point in time? A surplus of collective intelligence has pushed us to the current state of knowledge, knowledge that we now feed into AI. But at present, neither we nor agentic AI would be able to get much further, facing a wall of stupidity.

At this point in time, we don’t know which of these narratives is true, if any. But we could pause and try to figure it out before moving ahead. My feeling is that we should stop building arguments on analogies to human society (as I did above). Of course, today’s AI is a model of humans, and agentic AI a model of human society, but it doesn’t have to stay that way. Even in the near future, optimization pressures (capitalism, h-index maximization) might branch AI off from its human template for good, and then analogies lose their point. In the meantime, I hope social-agentic AI won’t go the way of other dreamy megaprojects (quantum computing, fusion energy, etc.), where the vested interests of the people involved make their promises of breakthroughs around the corner hard to trust.