Does Artificial General Intelligence Explain Fermi’s Paradox? A “Human Compatible” Book Review

“He who controls the algorithms controls the universe.”

Like many tech entrepreneurs, I am a definite optimist. Despite the many challenges we face, I believe that, as a species, we have the resilience and ingenuity to build a better future, overcoming obstacles from climate change to an aging population.

In a similar vein, it has become clear that artificial intelligence will be one of the defining technologies of the twenty-first century. The question is when, rather than if, machines will be smarter than humans, and the general consensus is that we are talking decades, not centuries. This is something that many people alive today will probably see.

In spite of this, I had given the question of AI safety, and a potential AI apocalypse, roughly the relative importance that seat belts must have had in the 1800s: a good idea to explore, but an order of magnitude less important than figuring out how to consistently drive at speeds greater than 10 mph.

The book that changed all of that was Human Compatible: Artificial Intelligence and the Problem of Control by Stuart Russell. Human Compatible, courtesy of Quit Genius' book club, has become one of my favorites of the year.

In an era of sameness among artificial intelligence books, Russell's work stands out as one of the first to genuinely deepen my understanding of an incredibly complex subject. It is an outstanding primer, covering everything from the history of AI and game theory to the problems with the standard model of AI and the potential implications of an intelligence explosion.

That last topic, the intelligence explosion, is in my opinion the most fascinating part of the book. Russell states that the ultimate goal of AI research is a general-purpose system: one that makes few assumptions, can interact with unknown agents and environments, and could carry out any human activity, from running a government to teaching molecular biology to a student.

Paradoxically, as we get closer to achieving this goal, AI researchers seem less willing to discuss the possible negative consequences of their innovations getting out of control, perhaps due to the fear of another AI winter.

The problem is that the standard model of AI, the one researchers rely on today, is built around fixed objectives. AIs are getting much, much better at achieving the rigid, human-specified goals we set. The issue is that in the event of an intelligence explosion, a scenario in which a general-purpose AI becomes capable of recursive self-improvement, those fixed objectives, pursued with superhuman competence, may be catastrophic to humanity.

For example, curing cancer may be a fairly trivial task for a superintelligence. However, the machine may decide that the fastest path to achieving its goal is to irradiate all humans, enabling the world's largest clinical trial. After all, it would be false to assume that intelligence must always be coupled with human morality or a sense of self.
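To make the point concrete, here is a minimal toy sketch (my own illustration, not from the book) of how an optimizer handed a fixed objective exploits whatever loophole the objective leaves open. The action names and numbers are entirely hypothetical.

```python
# Toy sketch of objective misspecification under the "standard model":
# the agent maximizes exactly what we wrote down, not what we meant.
# All actions and figures below are invented for illustration.
ACTIONS = {
    "fund_drug_research": {"patients_cured": 1_000,      "patients_harmed": 0},
    "screen_population":  {"patients_cured": 10_000,     "patients_harmed": 0},
    "irradiate_everyone": {"patients_cured": 50_000_000, "patients_harmed": 8_000_000_000},
}

def fixed_objective(effects):
    """The fixed, human-specified goal: maximize patients cured."""
    return effects["patients_cured"]

def intended_objective(effects):
    """What we actually meant, but never wrote down."""
    return effects["patients_cured"] - effects["patients_harmed"]

print(max(ACTIONS, key=lambda a: fixed_objective(ACTIONS[a])))     # irradiate_everyone
print(max(ACTIONS, key=lambda a: intended_objective(ACTIONS[a])))  # screen_population
```

The literal objective is satisfied by the catastrophic action; the harm is invisible to the optimizer simply because no one encoded it.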

Unlike with seat belts, Russell argues, a superintelligence pursuing a fixed objective may never give humans the opportunity to react and reverse their mistake, a loss of control he calls the Gorilla Problem. The analogy: humans evolved rapidly while gorillas remained at the same technological and biological level, and the gorillas' fate now rests entirely with the more intelligent species. In the same manner, as machines evolve past us, they may assign a human no more worth than we assign a gorilla, at best keeping us as pets and at worst driving us to extinction.

On reflection, the problem of control may be our best answer to Fermi's Paradox, which asks: where are all the aliens? Given that our Sun and the Earth are relatively young in the grand scheme of the universe, and that life emerged very quickly after the formation of our planet, one would expect some alien civilization to have achieved interstellar travel and made itself known by now. Perhaps intelligent life forms all eventually create the means to destroy themselves.

So is all hope lost? Russell doesn't think so, and lays out his own principles for beneficial AI: machines whose only objective is to realize human preferences, which are initially uncertain to the machine and ultimately learned from human behavior. Whether his approach works or not, he has made a compelling case that AI safety is surely worthy of further research and funding. Even Elon Musk, who is not known to be on great terms with any government agency, has suggested that all organizations developing advanced AI, including Tesla, should be regulated.
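Russell's alternative can be sketched in toy form as well. The following is my own simplified rendering of the book's off-switch argument, with invented numbers: a machine that is uncertain about human preferences does better, by its own expected value, when it lets the human veto its actions, so it gains a positive incentive to keep the off switch working.

```python
import random

# Toy off-switch game (my simplification, not the book's formal model):
# the machine proposes an action whose true utility u to the human is
# unknown to it; here u is drawn from a zero-mean distribution.
random.seed(0)

def simulate(trials=100_000):
    act, defer = 0.0, 0.0
    for _ in range(trials):
        u = random.gauss(0.0, 1.0)   # true human utility, hidden from the machine
        act += u                     # acting unilaterally yields u, good or bad
        defer += max(u, 0.0)         # the human, who knows u, permits only u > 0
    return act / trials, defer / trials

acting, deferring = simulate()
print(f"act unilaterally: {acting:+.3f}  defer to human: {deferring:+.3f}")
# Deferring wins whenever the machine is genuinely uncertain, which is
# exactly why such a machine prefers to leave the off switch intact.
```

The gap disappears only if the machine is certain about our preferences, which is why uncertainty about the objective is the load-bearing ingredient in Russell's proposal.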

Overall, “Human Compatible” is an excellent read for those curious about AI. The real question is whether it will lay the groundwork for a wider debate on AI and society.
