Will AIs become more intelligent than humans? We can hope so

Interesting article in the Guardian a few months back on the possible “threats” from artificial intelligence: 

“Can we stop AI outsmarting humanity?” (https://www.theguardian.com/technology/2019/mar/28/can-we-stop-robots-outsmarting-humanity-artificial-intelligence-singularity?)

I doubt you’ll want to wend your way through the whole article; basically, it interviews a few people who have maintained a serious, lifelong interest in studying AI, from the standpoint of “where is it going?” and “what, if anything, should we do to contain it?”

The basic assumptions among the speakers – though with a great deal of personal variation – are

a) there are risks from what AI can do or lead to, and 

b) we should consider ways to contain those risks, which vary from an unintentional reordering of society (likely) to demonic Terminatorness (unlikely).

But there’s little talk in the article about some other questions that have struck me over the last few years:

• Why assume that AI, though superior in manipulating specifics, can never be as wide-ranging or fluidly flexible as the human mind? Because computers’ innards are ordered differently from ours, and no one can imagine them capable of matching our level of intricacy? 

That’s pretty lame. Really, we have no idea what a higher intelligence would look like. We just assume it would be “us-er” than us.

• Would it be inherently bad if machine intelligence out-ranged ours? We’re a product of millions of years of half-assed random mutations, with our intelligence just one aspect of a very interesting but screwy brain structure. We’re not the epitome of thought or anything else; evolution, in the broadest sense, can be expected to lead to our replacement as top dog (so to speak). We’re scared because the world of AIs is moving so damned fast – and because we retain a fat-headed attachment to our species’ value.

• How could we possibly limit or “contain” an intelligence superior to ours? Such a higher intelligence would run mental rings around us and around any device, algorithmic or physical, that we might, in our scant wisdom, try to impose on it.

• Most AI researchers, says the article, get pissed off by people using the term “consciousness” in relation to AIs. Why? True, in every instance “consciousness” needs to be defined (like any other loose term), but suppose, for the moment, that we define it simply as “a sense of self.” Why would a superior intelligence not gain a sense of self? We already have AIs with an amazing ability to learn, precisely because we have imposed fewer direct limits on them than evolution has on us; we allow (in fact, encourage) them to figure out new approaches to solving problems. Why, one day while scratching its chips, would AI Supreme not trumpet, “I am! Well, get a gander at that!”

• Why would a higher intelligence want to do us any sort of harm – except maybe to keep us from continuing to ruin the world? (I’m not considering Isaac Asimov’s Laws of Robotics, which he codified to prevent unintentional harm to humans from robots.)

Destructive tendencies, of whatever sort, that might develop during the AI learning process should get weeded out as machine intelligence evolves. Unlike humans, AIs could and almost certainly would be self-correcting – not limited by the quirks and errors of natural selection, since their selection would not be natural but open to change and improvement on the run. Our human aggressive tribalism is the product of mega-generations of mutations developed simply to ensure the continuance of a species (any species). What threat could we pose to robotic continuance or supremacy?

• One annoying suggestion voiced in the article: Find ways to make AIs mimic human ethical standards. Don’t do that! Cripes, haven’t human ethical wars taught us that much at least? Such an approach is arrogant at best, blindered in its unfolding, and impossible in the end.

Nothing about human ethical standards is “immutable,” as suggested by some in the article, or even shared by all of humanity. We each form our own assumption of “the good” – and to whom it should apply. Philosophy lays out a wide range of alternatives, but no answers. The major hope for AIs is that they can avoid such distracting crap.

• In the end, there’s no way to predict, even in the broadest terms, where the development of artificial intelligence will take us – or the wider realm of existence. Rather than wasting our time trying to maim the genie that’s already out of its bottle, maybe we should be asking this new intelligence, as it progresses, how (or if) we can work together. And we should make sure they develop a sense of humor.
