What I was gonna say (Tractatus edition)

This is a post about how AI (if used to the best of our abilities) might rid science of its knowledge memes, which are prone to become factoids or to overshadow more important results.

While adding slides to my keynote talk at the inaugural Cudan conference on cultural data analytics, I wanted to say something about AI and the human way of pinning knowledge on memey tidbits. So here is what I planned to say in 45 seconds or so (before I ran out of time). It’s numbered in Tractatus style (so 2.140 is the 140th comment on proposition 2).

1. As humans, it’s in our nature to want to understand everything.

1.1. In Buckminster Fuller’s words (from Operating Manual for Spaceship Earth): “Nothing seems to be more prominent about human life than its wanting to understand all and put everything together.”

2. Humans prefer knowledge in units of very particular forms.

2.1. Reality is not bounded by these forms.

2.2. One example is analogies.

2.2.1. For example, the quite thoroughly debunked “wood wide web” (the idea that trees in a forest are connected by mycorrhizal networks the way the World Wide Web connects information on the Internet), which is still presented as a fascinating truth.

2.3. Another example is cyclic behavior.

2.3.1. And, of course, trends, correlations, simple causalities, etc.

2.3.1.1. But maybe not systems diagrams, stat-mech models, economic models, kinship algebras, what have you . . ah, it’s not black and white, of course. Anyway . .

2.3.2. Like economic cycles.

2.3.2.1. There are a bunch of different ones, from the Kondratiev waves (multi-decade, technologically driven stages) to the pork cycle (price/production, supply/demand stuff).

2.3.2.2. They are popular (I think) because they go along with nice narratives, typically involving competing effects that kick in with some time lag.

2.3.2.3. Sometimes theory even presents them as more complicated than plain sinusoids.

(Figure: theorized non-sinusoidal economic cycles not matching reality, at its resolution. I’m not saying Kondratiev waves are a bad theory, but that cycles are a memey type of knowledge: humans like cycles.)

2.3.2.4. Knowing about business cycles and then confronted with empirical data, it takes some integrity not to see ghosts (a sketch of how easily they appear follows below). Cycles, as explanations, are more attractive than reality . . (perhaps).
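
To make this concrete, here is a minimal sketch of how easily the ghosts appear; it assumes nothing beyond NumPy, and the seed, the series length, and the random-walk setup are my own illustrative choices, not real economic data. A random walk piles its spectral power at low frequencies, so a periodogram will almost always offer a plausible-looking long “cycle”:

```python
import numpy as np

rng = np.random.default_rng(42)            # hypothetical seed, purely illustrative
walk = np.cumsum(rng.normal(size=512))     # a random walk: no cycle by construction

# Detrend linearly, then inspect the periodogram, as a cycle-hunter would
t = np.arange(walk.size)
detrended = walk - np.polyval(np.polyfit(t, walk, 1), t)
power = np.abs(np.fft.rfft(detrended)) ** 2
freqs = np.fft.rfftfreq(walk.size)

peak = freqs[1:][np.argmax(power[1:])]     # skip the zero-frequency bin
print(f"'dominant cycle': period ~{1 / peak:.0f} steps, in cycle-free data")
```

Rerun it with another seed and the “period” wanders around, which is the tell that it was never there.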

2.3.3. The “strength of weak ties” is another example.

2.3.3.1. Of course, it was a great discovery that changed the way we think about social networks. But what is its status in the entirety of human knowledge today? If social network theory teaches from the get-go that people use different connections differently, then the question becomes how and when they use which contact: information not included in the “strength of weak ties” dictum (but, to some extent, in Granovetter’s papers).

2.3.3.2. Knowledge memes don’t necessarily belong to a dodgy scientific hinterland.

3. AI does not always (or even usually) produce human knowledge.

3.1. Somewhere between PCA and deep learning, we lose the sense of what’s going on and thus the feeling of understanding (something important to us (1.1)).
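
To illustrate the PCA end of that spectrum, here is a hedged sketch (NumPy only, toy data of my own invention): a leading PCA component is a short recipe over named features that a human can read directly, which is exactly the kind of statement deeper models stop handing us.

```python
import numpy as np

rng = np.random.default_rng(0)                    # hypothetical toy data
X = rng.normal(size=(500, 4))
X[:, 2] = X[:, 0] + 0.1 * rng.normal(size=500)    # plant one simple dependency

# PCA via SVD: the leading component is a statement a human can read off
Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
print(np.round(Vt[0], 2))  # ~[0.71, 0, 0.71, 0] up to sign: features 0 and 2 move together

# A deep net fit to the same data would encode the same fact smeared across
# thousands of weights, with no single line to point at and understand.
```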

3.1.1. Yes, explainable AI is an honorable attempt to rectify this. Still, at least those XAI methods that come up with alternative, human-style ways (2) of arriving at a machine’s conclusion are insufficient for the purpose of (1.1).

3.2. It is, of course, useful in many ways. For bookkeeping-like tasks, such as summarizing texts or corpora, I think we would be OK with black boxes along the way. Things are not black and white here.

3.3. It is not bad that machines don’t bend to attractive factoids (2). Even humans can gain that type of non-memey understanding, only it doesn’t qualify as knowledge, because it cannot be communicated.

3.3.1. We should refrain from modeling AI on human understanding and texts. It’s better if it can discover the world ab initio.

3.3.1.1. After all, LLMs are fed all the stupidity of human text, and still we call them smart.

4. At the moment, I have no clue what would be a good general practice for AI’s use in science.

4.1. Memey knowledge units might prevail. If they are not misleading, why not let them stay?

4.2. There are more elaborate theories and models that are not seductive in their form and are presumably closer to reality (2.3.1.1). These are not affected by the above reasoning and should remain a cornerstone in theory/knowledge building.

4.2.1. There are also truths that are, by nature, of an attractive form to humans. A pendulum can be modeled as a pendulum, etc.

4.3. A good scenario would be if AI made people disenchanted with memey knowledge.

4.4. A bad scenario would be that humans lose their drive for comprehensive knowledge when AI debunks our favorite knowledge memes.

4.4.1. But that won’t happen because Bucky Fuller was always right

4.4.1.1. and always wore three watches.
