The Mutual Enhancement Society: Superintelligence in Machines…*and* Humans?

Photo credit: JD Hancock / Foter / CC BY
Reading Nick Bostrom’s Superintelligence, and having read James Barrat’s Our Final Invention, as well as consuming a lot of other writings on the dangers of rapidly advancing artificial intelligence, I was beginning to feel fairly confident that unless civilization collapsed relatively soon, more or less upending most technological progress, humanity was indeed doomed to become the slaves to, or fuel of, our software overlords. It is a rather simple equation, after all, isn’t it? Once the machines undergo a superintelligence explosion, there’s really nothing stopping them from taking over the world, and quite possibly, everything else.

You can imagine, then, how evocative this piece in Nautilus by Stephen Hsu was, an article that explains that actually, it’s going to be okay. Not because the machines won’t become super-advanced – they certainly will – but because humans (or some humans) will advance right along with them. For what the Bostroms and Barrats of the world may not be taking into account is the rapid advance of human genetic modification, which will allow for augmentations to human intelligence that we, with our normal brains, can’t even imagine. Writes Hsu, “The answer to the question ‘Will AI or genetic modification have the greater impact in the year 2050?’ is yes.”

First off, Hsu posits that humans of “normal” intelligence (meaning unmodified at the genetic level, not dudes off the street) may not even be capable of creating an artificial intelligence sufficiently advanced to undergo the kind of explosion of power that thinkers like Bostrom foresee. “While one can imagine a researcher ‘getting lucky’ by stumbling on an architecture or design whose performance surpasses her own capability to understand it,” he writes, “it is hard to imagine systematic improvements without deeper comprehension.”

It’s not until we really start tinkering with our own software that we’ll have the ability to construct something as astonishingly complex as a true artificial superintelligence. And it’s important to note that there is no expectation on Hsu’s part that this augmentation of the human mind will be something enjoyed by the species as a whole. Just as only a tiny handful of humans had the intellectual gifts sufficient to invent computing and discover quantum mechanics (Turings and Einsteins and whatnot), so will it be for the future few who are able to have their brains genetically enhanced, such that they reach IQs in the 1000s, and truly have the ability to devise, construct, and nurture an artificial intelligence.

It is a comforting thought. Well, more comforting than our extinction by a disinterested AI. But not entirely comforting, because it means that a tiny handful of people will have such phenomenal intelligence, unpossessed by the vast majority of the species, that they will likely be as hard to trust or control as a superintelligent computer bent on our eradication. Just how much will these folks care about democracy or the greater good when they have an IQ of 1500 and can grasp concepts and scenarios unfathomable to the unenhanced?

But let’s say this advancement is largely benign. Hsu doesn’t end with “don’t worry, the humans got this,” but rather goes into a line of thought I hadn’t (but perhaps should have) expected: merging.

Rather than the standard science-fiction scenario of relatively unchanged, familiar humans interacting with ever-improving computer minds, we will experience a future with a diversity of both human and machine intelligences. For the first time, sentient beings of many different types will interact collaboratively to create ever greater advances, both through standard forms of communication and through new technologies allowing brain interfaces. We may even see human minds uploaded into cyberspace, with further hybridization to follow in the purely virtual realm. These uploaded minds could combine with artificial algorithms and structures to produce an unknowable but humanlike consciousness. …

New gods will arise, as mysterious and familiar as the old.

We’re now in transhumanist, Kurzweil territory. He’s not using the word “Singularity,” but he’s just shy of it, talking about human and computer “minds” melding with each other in cyberspace. And of course he even references “gods.”

This strikes me, a person of limited, unmodified intelligence, as naïve. I’ve criticized transhumanists like Zoltan Istvan for this Pollyanna view of our relationship with artificial intelligences. Where those who think like Istvan assume the superintelligent machines will “teach us” how to improve our lot, Hsu posits that we will grow in concert with the machines, and benefit each other through mutual advancement. But what makes him so certain this advancement will remain in parallel? At some point, the AIs will pass a threshold, after which they will be able to take care of and enhance themselves, and then it won’t matter if our IQs are 1000 or 5000, as the machines blast past those numbers exponentially in a matter of, what, days? Hours?

And then, how much will they care about the well-being of their human pals? I don’t see why we should assume they’ll take us along with them.

But, what do I know? Very, very little.
