Beyond the Naive View of Information: Here's What We're Up Against
Harari's sobering thoughts on the future of post-AI humanity
You’ve likely already read Sapiens, so you’ll know that Harari has a talent for keeping it breezy and light, despite traveling across so many historical contexts and not sacrificing the truth of their complexity (as we expect from great historians!).
NEXUS is another feat that manages to convey the full depth of the stakes we’re looking at without any loss in the precision and clarity needed for understanding it.
His central argument is not about diluting the very real threats we face due to unregulated AI in algorithmic networks (he fully concurs about the height and depth of the risks). He wants everyone to understand what the naive view of information is, so that we can collective refuse to hold that view any longer.
And then he wants us to take more responsibility for consciously shaping our information networks (if we don’t, we will be completely shaped by them in ways that we can neither foresee, let alone control).
Let’s dig in to the details.
The naive view of information is something like the naive view of mass opinion: that more is better; that more information will naturally lead to producing outcomes of truth and greater truth (just as the naive view of crowdsourcing anything assumes it will lead to more wisdom or the “right” or “best” answer).
On the harsh contrary, Harari effectively shows the dangers of that assumption. Taking an historical view based on other networks throughout human history (yes, they’re all relevant) — it’s just as likely, if not more likely, that long before information serves truth it’s going to be put in the service of power.
And now that networks have gone digital and algorithmic — even before introducing the self-supervised learning aspect of artificial intelligence (or “alien intelligence”) — there is actually no good reason to assume that it’s going to be any different.
Indeed, the danger is even greater with these that the networks will come to make their own autonomous decisions about the balance of power in ways that humans can’t predict or gain leverage over.
The Real Black Box? “Inter-Computer Realities"
Diving right in, I think that one of the book’s most crucial contributions to the public discussion is that it brings up the topic of “inter-computer realities” and their potential to cause radical change or consequences on the human world, even and especially because we might not understand them.
Compare this for a moment with “intersubjective realities” in the human world. Most of our social world is a fabric of such constructs — money; time; capital; religions; “race” (it’s a social construct/ reality, not a biological one) — and so on.
Take any human mythology (e.g., Santa Claus) or personage with much mythology surrounding them (e.g., Jesus, Buddha), or social concepts such as the “dollar” (or renminbi, etc.). Think about how much control they actually have over human actions insofar as humans assent to belief in these entities and the systems they’re a part of (major sports leagues are another great example here).
Now imagine that artificially-intelligent computer networks start doing something similar (they already have, to a small degree). They might create unique “intercomputer” entities that humans don’t have access to. They might decide amongst themselves about the creation of certain contracts or concepts or virtual entities that make exchange and communication between them easier. There may be an intercomputational economy that will forever remain a black box to us, but it could dictate important actions or the availability of such actions for the computer network we happen to have access to.
We could be denied certain information or opportunities or facts based on some intercomputational social status that we had no role in creating.
And this is just one example — the phenomenon could develop in all kinds of directions.
Harari’s account of everything that could fall out from this is so sobering that I will point you to the exact pages, because if there’s only one section you could read in the book, it should be this one: pages 284-301.
Just as the law of the jungle is a myth, so also is the idea that the
arc of history bends towards justice. History is a radically open arc, one that
can bend in many directions and reach very different destinations (403).
AI Is ‘Already a Problem’: Regulation Can’t Wait
Just as with personal human boundaries which enable healthy relationships to flourish rather than block them out or inhibit them, so too could regulation allow us to cultivate algorithms in a way that allows human networks to flourish in healthy ways.
But it’s more important or deeper than that, because regulation would really just mean having a deeper understanding of the power of these AI-infused networks and the potential for alien kinds of chaos they are likely to create without it.
In a sense, the LLMs are already under certain types of regulations imposed by their creators — choices to emphasize one thing over another thing, and so on.
Harari shows precisely the scope and nuance of greater understanding that should be guiding autonomous intelligence development — and if only computer engineers or software developers are involved, or even just neuroscientists or cognitive psychologists — this isn’t enough.
His strongest point is that the solution to the problem of AI — yes, it’s already a problem — isn’t going to be technological, but political.
In order to learn anything, an organism or a network must have a goal, which provides the context for meaning that enables learning to happen.
But who is programming the goals?
They go beyond “recognize a face” and “get driver to destination entered.”
As these networks grow (which intrinsically means they will grow in power), each goal will need to refer to a higher goal that pertains to a larger context.
Competing goals are already an issue within human individuals and amongst human societies. How will different AI networks resolve their own competing goals?
What should be the highest goal(s), against which they weigh various decisions?
The rabbit hole goes very deep here, and it’s why some of the earliest programs in cognitive science paired up with philosophy as much as they thought they should with mathematics.
Regulation is a no-brainer, but just like synthetic food testing shouldn’t be handled by the corporations (or their boards) trying to sell that food, so too this regulation can’t be entirely left up to the CEOs and members of the AI corporations.
One obvious short-term antidote to all of this is that we need more widespread education not only on how LLMs actually work (this is still not common knowledge), but about their current GAPING security holes as well as larger social risks.
It’s not fearmongering. It’s not hyperbole. It’s not paranoia.
How many people do you think you know who understand how prompt injection attacks are happening? Do you think this is the only security problem, or is it likely there are others that haven’t reached popular attention yet?
Ultimately, I think there’s a good chance we’ll look back at this moment the way we now look back on Big Oil in the 1970s and 80s. They had knowledge they chose not to act on. Streetcars were removed from Detroit.
And so it seems like it’s going to be with the types of feed-forward risks that are now in play because of LLMs in the form of “search” engines and so on.
I highly encourage you to read this book if any of the above still sounds vague or abstract.
Let me know what you think!
NB: I typically write these book reviews as part of my Book of the Month series for paid subscribers, but I felt that this text has such broad relevance and importance that I’m keeping this one public (I’ll probably be writing a second more in-depth piece anyway, since there’s so much to cover). If you’d like to support my work I’d really appreciate having another monthly subscriber!
I’m an authenticity and consciousness educator with special concerns about the cultural directions of technology and cultivating higher development in gifted and empathic individuals. If this is you, you’re welcome to join my private community, and you might be interested in my guidebooks for helping you tap into your gifts and implement a “reset” on how you’re directing your own life: https://stan.store/alignedselfmedia Doors open soon on a new course!