I love it when I come up with a half-assed idea that veers off in a different direction, gallops for a while, then races back close to where it started.
I’d thought I’d mumble here about AI for a while. I figured maybe I could be slightly more coherent than most of what I’ve read. (Unlikely, but worth a try.)
So I typed up a few quick notes. Then I went back and expanded on them because I realized gibberish like “check previous” or “didn’t I?” weren’t completely self-explanatory.
But I did actually “check (through) previous” rumins and found that most of the ideas I’d just jotted down were ones I’d already covered a couple of months back.
What next?
Take time off and read something that has nothing to do with what I had in mind.
A couple of weeks ago I downloaded Bertrand Russell’s The Problems of Philosophy from Project Gutenberg. Considering my usual snarls about philosophy in general, you may wonder why I did so.
My main problem with philosophy is that I find 95% of it either boring or bullshit, often both – and yes, that does include Plato. But I’m a fan of Russell and William James, who both seem to be talking about reality and trying to provide lucid explanations of their subject.
Russell has an often sneaky, sometimes overt, humor, and he talks in terms that a standard-issue human being might use in conversation with another standard-issue human being. He tries to explain, not just assemble impenetrable categories into which he can bundle outlandish conjectures.
So OK, I moved on to Bertie: No AI to worry about here, just a short romp explaining how philosophy works… in 1912.
About a third of the way through, it occurred to me that creating philosophy is much like writing fiction: You come up with a basic broad idea, then you establish a plotline filled with supporting incidents – categories in the case of philosophy – to carry you through to a satisfying ending.
A bit further on, when I thought I’d become safely diverted from my annoying flap of an article, Russell got into the problem of how we can prove that an object – or any form of “matter” – actually exists, independent of our mental experience of it. That is: Can we conclusively say that anything outside our self’s perception is real, when it’s conceivable that the entire universe could be a bad movie playing inside our one and only head?
At that point, bingo! It had me thinking that, just possibly, AI – at work in the real, not the philosophical world – might be able to solve a seemingly impossible conundrum that has bugged me for years.
Which is this: How do I know that a color as perceived by someone else corresponds to the color of the same name that I see? It’s entirely possible, even likely, that what you call “red,” if perceived by me exactly as you see it, might be what I call “blue.” We would agree that objects we both call “red” have the same color, while each of us is experiencing that color uniquely.
But of course there’s no possibility of proving that conjecture one way or the other, because you can’t place another’s internal perception inside your own head.
Or can you?
It’s been only in the last half century that we’ve begun to directly study the “mind,” as opposed to the physical collection of neurons and other squishy folderol flopping around inside our skulls. Books on consciousness and the self are popping up all over the place these days, because we now have the beginnings of a handle on what those neurons and their buddies do to form a linked, coordinated system that produces “experience.” We’re even starting to move toward defining what that experience might be – not just what it does, in other words, but what it is.
This progress is a product of the overall blistering ramp-up taking place in all areas of science, not just biology. And within the study of life, as within the study of, say, particle physics, much of this advance depends on the explosion in computational ability, which is on schedule to become unimaginably wider and faster once quantum computers reach their potential.
All of this has led, over time, to the realization of AI, no matter how you define the term “artificial intelligence” – and believe me, that AI acronym covers as many variables as those unending food spreads in a Korean video series. Basically, AI is anything that a fine-tuned, programmable machine can perform as well as or better than the average human.
It’s the “better than” that has freaked out the ever-wary. But let’s put that part aside for the moment. Here’s the question that’s been sitting inside me for decades: If a near-unlimited computational machine could identify, read and duplicate every input that goes to creating an individual’s perception, couldn’t this machine then project that perception accurately into the mind of another, after modifying the input to meet the different range of inputs specific to the receiving individual?
And if so, could not your perception of “red” be duplicated in my mind for comparison with my perception of red?
This outline is ridiculously simplistic, and nothing close to it could be considered possible yet. There may also be other, as-yet-imperceptible limitations – call them “existential” – that would prevent it outright. But isn’t the idea of such a transfer, considering today’s rate of progress, at least conceivable?
Yeah, it would be a damned stupid waste of time, money and equipment to perform such an experiment just to make me happy. So, consider it a thought experiment, and since sillyass thoughts can get us in trouble, I won’t go any further with that.
Instead I’ll pick out some other points I may or may not have touched on previously.
How can artificial intelligence be any worse than the human variety? Is there anything, anywhere that we exalted beings, in our chest-beating pride, haven’t managed to fuck up?
With every major technological advance, we alternate between pseudo-religious adulation and atavistic horror, with little attempt at rational examination. So far, AI has gone from “cool-ass whoopee” to “them machine muthafuckas gone kill us,” creating a scrum of conflicting comments that run around our feet like the rats in Werner Herzog’s Nosferatu.
Much of the negative fixation on AI comes out of our evolutionary dislike of the “other,” whether that other be animal, vegetable or machine.
“Intelligent” machines were initially seen as potential aids to improving life and removing drudgery; now they’re being seen as evil inventions that can eliminate jobs and become our masters.
Similarly, UFOs were thought to be overseen by gentle extraterrestrial saviors in the 1950s; by the ‘70s their major activity was confined to ramming probes up our orifices.
Here’s a more serious area for investigation: Are there only levels of intelligence, or could there be fundamentally different kinds of intelligence?
I’d expect that a higher intelligence would look at the whole picture, shorn of our evolutionary basis, and this could lead to “good” outcomes – such as improved life and less drudgery.
Should the most intelligent life-form be the one in charge? If so, maybe humans are just another waystation.
Anyway, is humanity worth saving if we’re determined to be destructive?
Another funny thing that came up while pondering all this was a simple reversal that wholly changes the outlook.
Consider the two words “nuclear” and “unclear.” The reversal of two letters flips their meaning on its head.
“Nuclear” sums up not only atomic annihilation, but a singular, central approach to problem-solving.
“Unclear” suggests that a problem involves a hidden multitude of ramifications to be determined through questioning and experiment.
* * *
Sign off: Got to admit, President Thump’s come up with the cleverest idea yet on how to deal with immigrants: turn the US into a country no one in their right mind would want to enter or live in.