Homo Deus explored the possibility of a new human species emerging. If that wasn’t enough to persuade you that we need to be thinking seriously about the future, then try Nick Bostrom’s Superintelligence. [Okay, I recognize that readers of this blog are already converted, so I offer this post as ammunition that you can use in your proselytizing].
Superintelligence fits with our “Tech-Led Abundance” family of visions of After Capitalism. Like each member of that family, it has an evil doppelganger that shows up in Collapse. In Superintelligence, Bostrom acknowledges the awesomely good potential of superintelligence but focuses on the potential for it to go bad, since, well, it poses an existential risk to humanity. Thus my suggestion of “healthy fear.”
Let’s review his basic AI framework, which he suggests as a developmental pathway:
- 1st, task intelligence, in which AI outperforms people in a single task. We are currently here.
- 2nd, general intelligence, in which AI is as intelligent as people across a wide range of tasks. This may be decades away… or closer.
- 3rd, superintelligence, in which AI is orders of magnitude more intelligent than people across the board.
Now, he carefully notes that the AI/machine-learning approach is not the only path to superintelligence. He suggests another that he calls whole-brain emulation (we basically simulate, and then transcend, the brain’s structure). Another possibility is a vastly more rapid biotech route, using stem cells and embryos. My take, however, is that AI/machine learning has the inside track.
Decades away, right? Probably, but he posits the interesting scenario that the takeoff from general intelligence to superintelligence could happen very fast, on the order of days or weeks. That would mean essentially no time to prepare for a world in which humans are no longer the smartest species on the planet.
He lays out some cases in which this superintelligence could pose an existential risk to humanity: not necessarily out of malevolence per se, but in fulfilling its purpose, it could more or less sweep humans aside. He talks about how a first-mover superintelligence could essentially initiate what he calls a singleton (a world order in which, at the global level, there is a single decision-making agency). He mentions the “treacherous turn,” in which an AI realizes at some point that humans are trying to limit it and begins to conceal its behavior. And he talks about “perverse instantiation,” which basically includes the Strawberry Fields scenario that Musk has popularized.
All that said, Bostrom is quite careful to add the appropriate caveats, alternatives, and hedges. It could turn out wonderfully good! It could take a really long time. It may never happen. He simply suggests it is plausible, and therefore worthy of serious strategic attention now. He discusses approaches to managing it, such as capability control and motivation selection, and actually goes into them in some depth.
In sum, we can’t afford to wait until it’s already here, and if the fast-takeoff scenario is possible, we can’t even wait for AI to achieve human-level intelligence. I was already in his camp, and after reading the book, I am even more so. It is worth your time to read the book and see where you net out. Our existence could hang in the balance! – Andy Hines
Gary says
Thanks for the serendipitous reference. I just finished reading a Smithsonian article written by noted author Stephan Talty, who wrote: “As a novelist, I wanted to plot out what the AI future might actually look like, using interviews with more than a dozen futurists, philosophers, scientists, cultural psychiatrists and tech innovators. Here are my five scenarios (footnoted with commentary from the experts and me; click the blue highlighted text to read them) for the year 2065, ten years after the singularity arrives.”
What Will Our Society Look Like When Artificial Intelligence Is Everywhere?
https://www.smithsonianmag.com/innovation/artificial-intelligence-future-scenarios-180968403/