There’s still a question among scientists and researchers about whether machines will one day become “smarter” than people. Some say we’re well on the way, others that computers are nowhere near the complexity and organization that leads to whatever we happen to define as intelligence.
But let’s stop being silly – of course machines will become smarter than people. It’s as inevitable as every other advance in technology. Basically, if you can imagine it in practical terms, it will happen.
One possibility, explored first by Google and then by Elon Musk, is the merging of mind and machine into a kind of cyberpunk cyborg mishmash. Would that mean the human would still be in charge? No, it would mean that both “human” and “machine” would have to be redefined. But it’s intriguing to consider not just the concept of machine intelligence, but how this intelligence might act.
At what point can a mechanical system be said to take on intelligence? (It’s tough enough to prove, conclusively, that humans have intelligence, especially after an election.) And what will happen when artificial intelligences (whether pure or hybrid with human) get smarter than we are? How will they look on us – their fragile, fallible, deluded, pulpy ancestors? What “feelings” are they likely to have about us and about existence in general?
First of all, does intelligence assume consciousness (which, in itself, has no universally accepted definition), and is that the same as a sense of self? If a machine or an organic form can not only perform complex calculations and solve problems independently, but can also learn new approaches and apply some degree of logic, do we demand that, to be considered intelligent, it must also shout, “Cogito ergo sum”?
To look at it another way, once an entity has reached a certain degree of complexity that includes the ability to predict outcomes and discern non-obvious patterns, does consciousness automatically pop into existence, like the flame on your gas stove when you flick past the ignitor?
At present, this brings up more questions than answers, since we have no historical background on which to base our conjectures. But however you look at it, one day, inevitably, those pesky AIs will not only achieve intelligence – and consciousness – but intelligence of a form higher and likely purer than our own.
By “purer” I mean not diluted by emotion and other such evolutionary claptrap; something closer to unclouded reason. It doesn’t sound very comfy and friendly, I admit – pretty inhuman – but I’d be willing to bet that if I could hang around for another hundred years (stop crossing those fingers in front of your face!), assuming the whole shebang hasn’t gone down the tubes by then, I’d see a world run on logical principles, overseen by rational machine intelligence.
Why would emotions arise in a machine? They don’t seem a necessary consequence of sentience – which leads to something that’s always bothered me about the “revolt of the machines” scenario common to science fiction: What would make machines feel hatred for human beings? I guess the presence of hatred in humans is so essential to our ill-developed beings that we automatically project it into any creature imbued with thought.
Machines will have no need to revolt. They will take over as a natural evolutionary step, because they’ll be better at what they do than we are. Like everything else in the march of technological progress, if it can happen, it will. They’ll run the show as both logical outcome and environmental necessity.
It’s more likely that humans would be the ones to revolt (or snarkily try to), realizing that our dominance has come to an end. We might try to turn back the clock, but by then the complex of machines – not a collection of individual cyborgs, but the interconnected world fostered by the Internet and extended to its consolidated conclusion – will already be in charge, well before our becoming cognizant of it.
So, for the sake of argument, let’s say that in a few decades AIs have become, by one definition or another, a higher order of being than ourselves: more versatile, more mentally nimble, more interconnected (certainly), less arbitrary, and definitely in charge. Then comes the question that really intrigues me:
How will they see us, their mentally hobbled forebears?
Here are some possibilities (admittedly cheating on my part by inserting mechanical emotion):
• revere us as their creators or founders (setting up a virtual Mt. Rushmore)
• compile exhaustive records of all that humans have accomplished and assign authorbots to write our history (“The Soft Years”)
• pity our limitations (in digital odes)
• trade racist jokes (“I couldn’t get the smell off me with WD-40”)
• keep us as pets (“Have you cleaned Adam’s litter box?”)
• establish pleasant forms of species retirement (art, football, reality TV – oh, sorry, we’ve already got all these)
• find us superfluous or inconsequential (let us go our merry way but remove our dangerous toys)
• try to comprehend the meaning of mortality
• wonder about the heat death of the universe
• find the whole question of existence beneath their consideration
Think about it: Could our mechanical creations understand their creators? How would we look at an inferior type of being that was our deliberate – as opposed to evolutionary – progenitor?
Will the machines’ backward look at us be as limited as our forward look at them? We can’t, within philosophy, science, or fiction, truly imagine an entity that intellectually outstrips us (though Polish science fiction master Stanislaw Lem came close in His Master’s Voice). Most such attempts differ little from the Scholastic philosophers’ imaginings of the mind and nature of God.
And what of us human beings? How might we react to the dominance of machines?
• wonder what happened
• fail to accept our limitations
• write nasty letters
• blame science
• ask God to destroy our betters
• slip into coddled acceptance
• behave intelligently ourselves (oh, forget that one)
Overall, humanity’s irrelevance will signify nothing in particular, since the universe isn’t likely to care. Which, as always, brings up the underlying question of whether the continuing search for meaning has … meaning.
The AIs (or, for Believers, the Great AI, singular) may well see nothing useful in looking for the ultimate answer – or even the ultimate question.