What We Think a Mind Can Be, Jamming, 31/08/2015

I had a really interesting conversation on Twitter this weekend with Steve Fuller, Thomas Basbøll, and Phil Sandifer, as well as a new online acquaintance. We were chatting about the nature of intelligence, mind, and the Turing Test. It got me thinking about my own relationship with philosophy of mind back when I was finishing my undergrad and my master’s.

Until the end of 2008, I had wanted to specialize in philosophy of mind in my academic career. I eventually decided against it because I found the style of writing and discourse in that sub-discipline unproductive. I've written about why I left philosophy of mind before.

Artificial intelligence is a concept that I find entirely ordinary, I think because I've grown up around stories of intelligent robots all my life. That's true of everyone now.

The way people thought about and discussed the field's problems didn’t actually move their conversations forward. A new idea would just split off into a new sub-sub-discipline, and the writers who didn't want to go in that direction would carry on as usual.

Come to think of it, that’s a problem with the academic humanities in general. You can build a whole career just by declaring and arguing that everyone more innovative than you is wrong. I’ve met some old farts like that in academia. I never met Jerry Fodor, but I found most of his work had this character.

Complaining aside, it was an interesting conversation about whether humans could ever develop an artificial intelligence on the same level as humanity, with similar powers. Steve, being the optimistic transhumanist that he is, was sure that we could. Thomas was sure that we couldn’t, and even doubted whether any machine could be intelligent, because of what’s “on the inside.”

I agreed with Steve that this isn't really an argument against the possibility of AI. We don't know what's “on the inside” of other humans either. Philosophy has called this question The Problem of Other Minds™, and no one has ever conclusively solved that either. It’s one of those philosophical problems arising from such a radical skepticism that you can only walk away or let it consume you.

For someone raised on as much science-fiction as I have been, The Problem of Other Minds applies to machines and artificial intelligences (real, imagined, and hypothetical) even more than to humans. I’m sure we’ve all met people of whom we’ve concluded, quite reasonably, that there’s nothing going on in there.

I suppose my own thoughts are agnostic. Most of the attempts to conceive of AI scientifically revolve around computers and the Turing Test. But I don’t think human intelligence works anything like a computer. Human thought is completely different from serial processing, so developing artificial intelligence from computer technology is probably impossible.

I don't think anyone ever developed a Turing Test scenario where the AI punches you in the face, steals your wallet, and spends all your cash on drugs and beer. That's why I think the creators of Futurama are genuine innovators in how we conceive of AI in our culture.

But that doesn't mean that I think artificial intelligence is impossible. I’ve always suspected that true machine intelligence, on a parallel with human intelligence, would develop through robotics, because robots have to perceive and move in the world. Humans have very plastic brains, which develop through worldly perception and interaction. This goes beyond learning. I'm talking about the actual development of our brains and cognitive capacities.

I suspect having a memory capable of narrative formation (and not just factual recall) is also essential to intelligence, because our self-consciousness and self-identity are inherently narrative. Our character is a story we tell about ourselves, and how we became who we are. We change our character when we change that story.

This is why I’m ultimately an agnostic on whether we'll ever develop an artificial intelligence in the fullest sense of the term. I don't know if any of our technology is capable of creating a robot whose brain physically develops through its worldly activity, and whose memory works by forming narratives.

All of our existing computer and robotics technology, to my knowledge, makes brains (or central processing units) that are complete and don’t develop after activation. And computer memory is entirely about precise functional recall, not the development of narrative and character.
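
As a crude illustration of what I mean by precise functional recall, here's a minimal sketch in Python. Everything in it (the names, the toy "memory") is just my own example, not a description of any real system; the point is only that a conventional memory store gives back exactly what was written, under exactly the key that was used, and nothing else.

    # Toy illustration of "precise functional recall": a conventional memory
    # store returns exactly the value that was written under exactly the key
    # you ask for, and knows nothing it was never explicitly told.
    memory = {}

    def remember(key, value):
        memory[key] = value        # stored verbatim; never reinterpreted later

    def recall(key):
        return memory.get(key)     # exact match or nothing at all

    remember("year I left philosophy of mind", 2008)
    print(recall("year I left philosophy of mind"))  # 2008, exactly as stored
    print(recall("why I left philosophy of mind"))   # None: no story fills the gap

Human memory, by contrast, seems to rework and reconnect what it keeps every time we retell the story of who we are.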

I believe that it’s possible to make something that works this way. I believe this is true because we exist, and we work that way. But there's no reason why making such an artificial intelligence is inevitable, any more than anything that happens is inevitable. As it is, humanity may simply never get around to it in the time we have left, however much or little it may be.
