A double-bill blog post with snarky comments about two trending ideas that I’m not much of an expert on.
The state of disruption
In a much-discussed paper from the beginning of the year, Park, Leahey, and Funk proclaimed that science is becoming less and less disruptive. This was somehow seen as a worrying trend, but imagine the opposite: that most papers completely tore down the foundations of all previous science and instantly set hordes of scientists off in a never-previously-imagined direction. What’d be good about that? I don’t want a fresh start every other day. And didn’t we agree to make being a scientist less stressful?
Anyway, a week or so ago, a preprint argued that Park et al.’s finding was a consequence of the average disruption index being sensitive to the increasing rates of new papers and citations. This doesn’t mean the index is useless since its primary purpose is to compare papers in the same (citation) network. Actually, I like it. It does a good job of capturing the “fresh start” aspect of disruptive papers.
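For the curious, here is roughly how the index works, as I understand it (the CD index of Funk and Owen-Smith, which Park et al. build on): for a focal paper, count the later papers that cite it but none of its references (the “fresh start” crowd), those that cite it together with its references, and those that cite the references while ignoring the focal paper. A minimal sketch, with function name, data layout, and toy example my own:

```python
def disruption_index(focal, references, citations):
    """Sketch of the CD/disruption index for a single focal paper.

    references[p]: set of papers that p cites.
    citations[p]:  set of papers that cite p.
    Returns a value in [-1, 1]; near 1 means citers ignore the focal
    paper's references ("fresh start"), near -1 means they cite them
    alongside it (consolidating rather than disruptive).
    """
    refs = references.get(focal, set())
    citers = citations.get(focal, set())
    ref_citers = set().union(*(citations.get(r, set()) for r in refs)) if refs else set()

    n_i = sum(1 for p in citers if not (references.get(p, set()) & refs))  # cite focal only
    n_j = len(citers) - n_i                                                # cite focal and >=1 ref
    n_k = len(ref_citers - citers - {focal})                               # cite refs, not focal

    total = n_i + n_j + n_k
    return (n_i - n_j) / total if total else 0.0


# Toy citation network: "focal" cites "a" and "b"; "x" and "y" cite "focal".
references = {"focal": {"a", "b"}, "x": {"focal"}, "y": {"focal", "a"}, "z": {"a"}}
citations = {"focal": {"x", "y"}, "a": {"focal", "y", "z"}, "b": {"focal"}}
print(disruption_index("focal", references, citations))  # (1 - 1) / (1 + 1 + 1) = 0.0
```

The denominator is what makes the index sensitive to how densely the surrounding network is cited, which is the crux of the preprint’s objection to using its average over time.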
If the name of the game is to be first with a new way of thinking, or even ahead of one’s time, rather than to instantly start a new trend, then maybe we should look for late-blooming papers instead. Think of how the Eiffel Tower defied the contemporary sense of beauty with proportions that were unusual at the time, made possible by new materials and engineering techniques. It’s painful to rethink one’s worldview. True novelties are eo ipso demanding. Many are instantly forgotten; some eventually get their due citations. Such “sleeping beauties” are less likely to be the product of a hype cycle, and thus more interesting than what the disruption index captures.
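There is a standard way to quantify this: the “beauty coefficient” of Ke et al. (2015), which compares a paper’s actual yearly citation curve against a straight line drawn from its first-year count to its peak-year count; the longer and deeper the sleep before the peak, the larger the coefficient. A minimal sketch of that idea, with the function name and toy numbers my own:

```python
def beauty_coefficient(yearly_citations):
    """Sketch of the "beauty coefficient" (Ke et al., 2015).

    yearly_citations: citation counts per year, starting at the
    publication year. Sums, up to the peak year, how far the actual
    curve sags below the straight line from year 0 to the peak,
    normalized by the citations received each year.
    """
    c = list(yearly_citations)
    t_m = max(range(len(c)), key=lambda t: c[t])   # year of maximum citations
    if t_m == 0:
        return 0.0                                 # peaks immediately: no sleep at all
    c0, cm = c[0], c[t_m]
    line = lambda t: (cm - c0) / t_m * t + c0      # reference straight line
    return sum((line(t) - c[t]) / max(1, c[t]) for t in range(t_m + 1))


print(beauty_coefficient([0, 0, 0, 1, 0, 2, 30]))  # long sleep, late spike -> large B
print(beauty_coefficient([10, 8, 5, 3, 1, 0, 0]))  # instant hit -> 0.0
```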
Why does everything have to become conscious all the time?
Topic no. 2 to whine about is why oh why people have to discuss AI by means of anthropomorphisms and human analogies. Machines are different, so pushing slightly-off analogies will just lead to misunderstanding. The habit never seems more bizarre than when the topic of conscious LLMs, AGIs, and whatnot comes up.
We don’t have to be so deep: let’s say consciousness is whatever is turned on by the alarm clock in the morning, i.e., whatever lets you think in ways you couldn’t while you were sleeping. Obviously, human consciousness is not necessarily a prerequisite for other aspects of “machine intelligence.” It’s not a contradiction to say, “The robot behaved like a human, but we don’t know if it is conscious or not.” We need to be conscious to mimic ChatGPT, but whether the reverse is true, false, or a misguided question doesn’t matter, because it can’t contribute to our understanding of AI. It’s like asking whether love feels exactly the same for someone else. One can run through the symptoms (throbbing heartbeats, sleepless nights, F R David’s “Words” in the headphones), but that’s as far as one can get. Just as we’ve learned to be OK with that situation, we should drop the is-AI-conscious-or-not discussion.
Consciousness is special because we know it very concretely; intelligence doesn’t switch off regularly in the same way. Therefore, if a machine were conscious, we would have a reason to feel close to it. This is probably why people repeatedly get the idea that sufficiently complex stuff could become conscious. The Jesuit priest Pierre Teilhard de Chardin’s noosphere is maybe my favorite theory in that direction. Not that I believe in it, but it’s beautiful, intelligent, and . . . hmm, disruptive.