Human black boxes

Just some plain reflections that must have been expressed better by someone else, somewhere else.[1]

AIs are often criticized for being black boxes: good at predicting, but bad at explaining. They get it right, but we don't know why. That AIs are black boxes[2] doesn't mean that humans are not (an opinion I often come across[3]). Indeed, think of soccer coaches who have watched 10,000+ hours of the game. They have a pretty good sense of what's happening on the pitch without necessarily being able to express it in simple terms. This, of course, extends beyond sports maniacs. Aren't our everyday lives filled with such black-box situations? Maybe humans are even more black boxes than AIs are.

The above observation is related to the almost-cliché[4] that what one can't explain, one doesn't understand. One must stretch the meaning of "understand" pretty far to agree with that. In my vocabulary, the soccer coach of our example "understands" soccer, but not all of this understanding can be transformed into knowledge (that can be communicated through language). Human language is an astounding phenomenon. Knowledge sharing, which (kinda) elevates humanity to a hyper-intelligent superorganism, is something even more. At the same time, the discussion above leaves me melancholic over how our language fetters us. What would the world be like if we could share all of our understanding? Or even just a larger fraction? It is almost ironic that the pinnacles of AI in 2023, language models, are focused on producing the very thing that sets our boundaries.[5]

Footnotes

  1. The cover illustration is by Midjourney, as can be seen from the subtle misunderstanding of what boxes and soccer coaches look like. Whether this has some profound connection to the rest of the post remains unclear.
  2. Yes, there is explainable AI. Some of it is good, at least the part that is AI that can be explained, rather than AI that produces something that sounds like an explanation (but actually comes from an unexplainable black box).
  3. It makes more sense to complain when comparing AIs to scientific models (statistical, conceptual, whatever) that are designed, at least in part, to convey knowledge.
  4. Often attributed to Feynman, but I guess someone else must have said it before.
  5. And, of course, the question looms of what AI could be if we didn't insist on pegging it to the outputs of our weak and insufficient language. (Well, there are also AI approaches that don't do that, but anyway.)
