Science writers who say machines have feelings…lack intelligence.

I saw an article by Peter Dockrill with the headline, “Artificial intelligence should be protected by human rights, says Oxford mathematician”.

The subtitle is: “Machines Have Feelings Too”.

Regarding the potential dangers of robots and computers, Peter asks: “But do robots need protection from us too?” Peter is apparently a “science and humor writer”. I think he should stick with just one genre.

Just more click-bait.

There are too many articles on the internet with headlines like this. They are usually covered with obnoxious, eye-jabbing ads, flitting in front of my face like giant colorful moths. It’s a carnival – through and through.

I could easily include any number of articles about the “terrifying” future of AI, “emotional machines”, “robot ethics”, and other cartoon-like dilutions of otherwise thoughtful, well-crafted science fiction.

Good science fiction is better than bad science journalism.


Here’s Ben Goldacre:

[Screenshot: a quote from Ben Goldacre]

Now, back to this silly subject of machines having feelings:

Some of my previous articles express my thoughts on the future of AI, such as:

No Rafi. The Brain is not a Computer

The Singularity is Just One in a Series

Why Nick Bostrom is Wrong About the Dangers of Artificial Intelligence

Intelligence is NOT One-Dimensional

I think we should be working to fix our own emotional mess, instead of trying to make vague, naive predictions about machines having feelings. Machines will – eventually – have something analogous to animal motivation and human states of mind, but by then the human world will look so different that the current conversation will be laughable.

Right now, I am in favor of keeping the “feelings” on the human side of the equation.

We’re still too emotionally messed up to be worrying about how to tend to our machines’ feelings. Let’s fix our own feelings first before giving them to our machines. We still have that choice.

And now, more stupidity from Meghan Neal:

“Computers are already faster than us, more efficient, and can do our jobs better.”

Wow Meghan, you sure do like computers, don’t you?

I personally have more hope, respect, and optimism for our species.

In this article, Meghan makes sweeping statements about machines with feelings, including how “feeling” computers are being used to improve education.

The “feeling” robots she is referring to are machines with a gimmick – they are brain-dead automatons with faces attached to them. Many savvy futurists suggest that true AI will not result from humans trying to make machines act like humans.  That’s anthropomorphism. Programming pre-defined body language in an unthinking robot makes for interesting and insightful experimentation in human-machine interaction. But please! Don’t tell me that these machines have “feelings”.

This article says: “When Nao is sad, he hunches his shoulders forward and looks down. When he’s happy, he raises his arms, angling for a hug. When frightened, Nao cowers, and he stays like that until he is soothed with some gentle strokes on his head.”
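
Strip away the cuteness, and what’s under the hood is roughly a lookup table. Here is a minimal sketch in Python – the gesture names are invented for illustration, not Nao’s actual API:

```python
# A minimal sketch of scripted robot "feelings" (illustrative only;
# the gesture names are invented, not Nao's actual API).

CANNED_GESTURES = {
    "sad":        ["hunch_shoulders", "look_down"],
    "happy":      ["raise_arms", "angle_for_hug"],
    "frightened": ["cower", "await_head_strokes"],
}

def display_emotion(label: str) -> None:
    # Nothing is felt anywhere in here: it is a table lookup
    # followed by motor playback.
    for gesture in CANNED_GESTURES.get(label, ["idle"]):
        print(f"executing gesture: {gesture}")

display_emotion("sad")  # the robot "is sad", i.e. it plays two animations
```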

 

Pardon me while I projectile vomit.

Any time you are trying to compare human intelligence with computers, consider what Marvin Minsky once said:

[Screenshot: a quote from Marvin Minsky]

No Rafi. The brain is not a computer.

Rafi Letzter wrote an article called “If you think your brain is more than a computer, you must accept this fringe idea in physics”.


The article states the view of computer scientist Scott Aaronson: “…because the brain exists inside the universe, and because computers can simulate the entire universe given enough power, your entire brain can be simulated in a computer.”

Who the fuck said computers can simulate the entire universe?

That is a huge assumption. It’s also wrong.

We should always look closely at the assumptions that people use to build theories. If it could be proven that computers can simulate the entire universe, then this theory would be slightly easier to swallow.

By the way, a computer cannot simulate the entire universe because it would have to simulate itself simulating itself simulating itself.
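
Here is a toy Python illustration of that regress (purely illustrative, of course): if the simulated universe must contain the simulator, the simulation never bottoms out.

```python
import sys

def simulate_universe(depth: int = 0) -> None:
    # The universe contains the computer running this simulation,
    # so a complete simulation must include the simulator's own run
    # of the simulation... and so on, forever.
    print(f"simulating universe at nesting depth {depth}")
    simulate_universe(depth + 1)

sys.setrecursionlimit(50)  # keep the inevitable failure small
try:
    simulate_universe()
except RecursionError:
    print("the simulation never bottoms out")
```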

The human brain is capable of computation, and that’s why humans are able to invent computers.

The very question as to whether the brain “is a computer” is wrong-headed. Does the brain use computation? Of course it does (among other things). Is the brain a computer? Of course it isn’t.

The Miracle of My Hippocampus – and other Situated Mental Organs

I’m not very good at organizing.

The pile of papers, files, receipts, and other stuff and shit accumulating on my desk at home has grown to huge proportions. So today I decided to put it all into several boxes and bring it to the co-working space – where I could spend the afternoon going through it and pulling the items apart. I’m in the middle of doing that now. I’m feeling fairly productive, actually.

Some items go into the trash bin; some go to recycling; most of them get separated into piles where they will be stashed away into a file cabinet after I get home. At the moment, I have a substantial number of mini-piles. These accumulate as I sift through the boxes and decide where to put the items.

Here’s the amazing thing: when I pull an item out of the box, say, a bill from Verizon, I am supposed to put that bill onto the Verizon pile, along with the other Verizon bills that I have pulled out. When this happens, my eye and mind automatically gravitate towards the area on the table where I have been putting the Verizon bills. I’m not entirely conscious of this gravitation to that area.

Gravity Fields in my Brain

What causes this gravitation? What is happening in my brain that causes me to look over to that area of the table? It seems that my brain is building a spatial map of categories for the various things I’m pulling out of the box. I am not aware of it, and this is amazing to me – I just instinctively look over to the area on the table with the pile of Verizon bills, and…et voilà – there it is.

Other things happen too. As this map takes shape in my mind (and on the table), priorities line up in my subconscious. New connections get made and old connections get revived. Rummaging through this box has a therapeutic effect.

The fact that my eye and mind know where to look on the table is really not such a miracle, actually. It’s just my brain doing its job. The brain has many maps – spatial, temporal, etc. – that help connect and organize domains of information. One part of the brain – the hippocampus – is associated with spatial memory.
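
Just for fun, here is a loose sketch in Python of that sorting behavior (all names invented): a category gets a spot on the table the first time it shows up, and every later item in that category “gravitates” to the same spot.

```python
import itertools

class TableMap:
    """Spatial memory for piles: category -> remembered spot on the table."""

    def __init__(self):
        self._positions = {}
        self._next_spot = itertools.count(1)

    def place(self, category: str) -> int:
        # First encounter: assign a fresh spot. Afterwards, the same
        # category always resolves to the same place -- no searching.
        if category not in self._positions:
            self._positions[category] = next(self._next_spot)
        return self._positions[category]

table = TableMap()
for item, category in [("Verizon bill #1", "Verizon"),
                       ("grocery receipt", "receipts"),
                       ("Verizon bill #2", "Verizon"),
                       ("tax form", "taxes")]:
    print(f"{item} -> spot {table.place(category)}")
```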


User Interface Design, The Brain, Space, and Time

I could easily collect numerous examples of software user interfaces that do a poor job of tapping the innate power of our spatial brains. These problematic user interfaces invoke the classic bouts of confusion, frustration, undiscoverability, and steep learning curves that we bitch about when comparing software interfaces.

This is why I am a strong proponent of Body Language (see my article about body language in web site design) as a paradigm for user interaction design. Similar to the body language that we produce naturally when we are communicating face-to-face, user interfaces should be designed with the understanding that information is communicated in space and in time (situated in the world). There is great benefit for designers to have some understanding of this aspect of natural language.

Okay, back to my pile of papers: I am fascinated with my unconscious ability to locate these piles as I sift through my stuff. It reminds me of why I like to use the fingers of my hand to “store” a handful of information pieces. I can recall these items later once they have been stored in my fingers (the thumb is usually saved for the most important item).

Body Maps, Brain, and Memory


Last night I was walking with my friend Eddie (a fellow graduate of the MIT Media Lab, where the late Marvin Minsky taught). Eddie told me that he once heard Marvin telling people how he liked to remember the topics of an upcoming lecture: he would place the various topics onto his body parts.

…similar to the way the ancient Greeks learned to remember stuff.

During the lecture, Marvin would shift his focus to his left shoulder, his hand, his right index finger, etc., in order to recall various topics or concepts. Marvin was tapping the innate spatial organs in his brain to remember the key topics in his lecture.
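
The trick reduces to a tiny lookup structure. A sketch in Python – the topics are invented placeholders, not Marvin’s actual lecture notes:

```python
# Body parts as memory pegs: pin each topic to a location, then walk
# the locations in order during the talk. Topics here are placeholders.

BODY_PEGS = ["left shoulder", "left hand", "right index finger",
             "right shoulder", "forehead"]

def pin_topics(topics):
    """Assign each lecture topic to the next body part, in order."""
    return list(zip(BODY_PEGS, topics))

for peg, topic in pin_topics(["frames", "K-lines", "society of mind",
                              "common sense", "emotions"]):
    print(f"focus on {peg} -> recall {topic!r}")
```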

My Extended BodyMap

My body. My home town. My bed. My shoes. My wife. My community. The piles in my home office. These things in my life all occupy a place in the world. And these places are mapped in my brain to events that have happened in the past – or that happen on a regular basis. My brain is the product of countless generations of Darwinian iteration over billions of years.

All of this happened in space and time – in ecologies, animal communities, among collaborative workspaces.

Even the things that have no implicit place and time (such as the many virtualized aspects of our lives on the internet)…even these things occupy a place and time in my mind.

Intelligence has a body. Information is situated.

Hail to Thee Oh Hippocampus. And all the venerated bodymaps. For you keep our flitting minds tethered to the world.

You offer guidance to bewildered designers – who seek the way – the way that has been forged over billions of years of intertwingled DNA formation…resulting in our spatially and temporally-situated brains.


We must not let the no-place, no-time, any-place, any-time quality of the internet deplete us of our natural spacetime mapping abilities. In the future, this might be seen as one of the greatest challenges of our current digital age.


Questioning the Answer


Have you ever found yourself searching and searching and searching for an answer to a question? You explore all perspectives. You look at it from many points of view. Time drags on – you are still searching – climbing into your mind’s attic for new insights in hopes of finding it.

You pause and ask yourself: Uh, what exactly was the question? Now you try to articulate the question, and then you realize that you never really knew what the question was. So then you try to come up with the right question.

Having shifted gears, it doesn’t take you long to find it – it pops out crystal clear. And just as soon as the question comes, the answer comes along right after it. You find yourself in a new place of understanding, and you realize: everything happened in exactly the right order.


—–

Why Nick Bostrom is Wrong About the Dangers of Artificial Intelligence

Nick Bostrom is a philosopher who is known for his work on the dangers of AI in the future. Many other notable people, including Stephen Hawking, Elon Musk, and Bill Gates, have commented on the existential threats posed by a future AI. This is an important subject to discuss, but I believe that there are many careless assumptions being made as far as what AI actually is, and what it will become.

Yea yea, there’s Terminator, Her, Ex Machina, and so many other science fiction films that touch upon deep and relevant themes about our relationship with autonomous technology. Good stuff to think about (and entertaining). But AI is much more boring than what we see in the movies. AI can be found distributed in little bits and pieces in cars, mobile phones, social media sites, hospitals…just about anywhere that software can run and where people need some help making decisions or getting new ideas.

John McCarthy, who coined the term “Artificial Intelligence” in 1956, said something that is totally relevant today: “as soon as it works, no one calls it AI anymore.” Given how poorly defined AI is – how easily its definition morphs – it is curious how excited some people get about its existential dangers. Perhaps these people are afraid of AI precisely because they do not know what it is.

Elon Musk, who warns us of the dangers of AI, was asked the following question by Walter Isaacson: “Do you think you maybe read too much science fiction?” To which Musk replied:

“Yes, that’s possible”… “Probably.”

Should We Be Terrified?

In an article with the very subtle title, “You Should Be Terrified of Superintelligent Machines“, Bostrom says this:

“An AI whose sole final goal is to count the grains of sand on Boracay would care instrumentally about its own survival in order to accomplish this.”

Point taken. If we built an intelligent machine to do that, we might get what we asked for. Fifty years later we might be telling it, “we were just kidding! It was a joke. Hahahah. Please stop now. Please?” It will push us out of the way and keep counting…and it just might kill us if we try to stop it.

Part of Bostrom’s argument is that if we build machines to achieve goals in the future, then these machines will “want” to survive in order to achieve those goals.

“Want?”

Bostrom warns against anthropomorphizing AI. Amen! In a TED Talk, he even shows a picture of the typical scary AI robot – like so many that have been polluting the airwaves of late. He discounts this as anthropomorphizing AI.


And yet Bostrom frequently refers to what an AI “wants” to do, the AI’s “preferences”, “goals”, even “values”. How can anyone be certain that an AI can have what we call “values” in any way that we can recognize as such? In other words, are we able to talk about “values” in any other context than a human one?

From my experience in developing AI-related code for the past 20 years, I can say this with some confidence: it is senseless to talk about software having anything like “values”. By the time something vaguely resembling “value” emerges in AI-driven technology, humans will be so intertwingled with it that they will not be able to separate themselves from it.

It will not be easy – or possible – to distinguish our values from “its” values. In fact, it is quite possible that we won’t refer to it as “it”. “It” will be “us”.

Bostrom’s fear sounds like fear of the Other.

That Disembodied Thing Again

Let’s step out of the ivory tower for a moment. I want to know how that AI machine on Boracay is going to actually go about counting grains of sand.

Many people who talk about AI refer to amazing physical feats that an AI would supposedly be able to accomplish. But they often leave out the part about “how” this is done. We cannot separate the AI (running software) from the physical machinery that has an effect on the world – any more than we can talk about what a brain can do once it has been taken out of one’s head and placed on a table.


It can jiggle. That’s about it.

Once again, the Cartesian separation of mind and body rears its ugly head – as it were – and deludes people into thinking that they can talk about intelligence in the absence of a physical body. Intelligence doesn’t exist outside of its physical manifestation. Can’t happen. Never has happened. Never will happen.

Ray Kurzweil predicted that by 2023 a $1,000 laptop would have the computing power and storage capacity of a human brain. When put in these terms, it sounds quite plausible. But if you were to extrapolate that to make the assumption that a laptop in 2023 will be “intelligent” you would be making a mistake.
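
The plausibility comes from simple exponential arithmetic. Here is a back-of-envelope reconstruction in Python; every number in it is an assumption (a ~10^16 calculations-per-second estimate for the brain, a 1999 baseline, a yearly price-performance doubling), which is rather the point:

```python
# Back-of-envelope behind Kurzweil-style predictions. All figures are
# assumptions for illustration: the brain estimate, the baseline, and
# the doubling time are each contestable.

BRAIN_CPS = 1e16   # rough "calculations per second" estimate for a brain
cps = 1e9          # assumed: ~10^9 ops/sec for $1,000 of compute in 1999
year = 1999

while cps < BRAIN_CPS:
    year += 1      # assume price-performance doubles every year
    cps *= 2

print(f"$1,000 of compute matches the brain estimate around {year}")
# -> around 2023. Change any assumption and the date swings by a decade.
```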

Many people who talk about AI make reference to computational speed and bandwidth. Kurzweil helped to popularize a trend of plotting computer performance alongside human intelligence, which perpetuates computationalism. Your brain doesn’t just run on electricity: synapse behavior is electrochemical. Your brain is soaking in chemicals provided by this thing called the bloodstream – and these chemicals have a lot to do with desire and value. And… surprise! Your body is soaking in these same chemicals.

Intelligence resides in the bodymind. Always has, always will.

So, when there’s a lot of talk about AI and hardly any mention of the physical technology that actually does something, you should be skeptical.

Bostrom asks: when will we have achieved human-level machine intelligence? And he defines this as the ability “to perform almost any job at least as well as a human”.


Intelligence is Multi-Multi-Multi-Dimensional

Bostrom plots a one-dimensional line which includes a mouse, a chimp, a stupid human, and a smart human. And he considers how AI is traveling along this line, and how it will fly past humans.


Intelligence is not one dimensional. It’s already a bit of a simplification to plot mice and chimps on the same line – as if there were some single number that you could extract from each and compute which is greater.

Charles Darwin is often credited with saying: “It is not the strongest of the species that survives, nor the most intelligent that survives. It is the one that is most adaptable to change.”

Is a bat smarter than a mouse? Bats see poorly (dumber?), but their sense of echolocation is miraculous (smarter?).


Is an autistic savant who can compose complicated algorithms but can’t hold a conversation smarter than a charismatic but dyslexic soccer coach who inspires kids to be their best? Intelligence is not one-dimensional, and this is ESPECIALLY true when comparing AI to humans. Plotting them both on a single one-dimensional line is not just an oversimplification. By plotting AI on the same line as human intelligence, Bostrom is committing anthropomorphism.
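
To make the complaint precise: if intelligence is a vector of abilities rather than a scalar, many pairs are simply incomparable – neither one dominates the other. A small Python sketch, with invented dimensions and scores:

```python
# With multi-dimensional scores, "which is smarter?" often has no answer:
# neither animal dominates the other on every axis. Dimensions and
# numbers below are invented for illustration.

def dominates(a, b):
    """True if a is at least as good everywhere and strictly better somewhere."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

# axes: (echolocation, vision, social inference)
bat   = (0.9, 0.1, 0.3)
mouse = (0.0, 0.6, 0.4)

print(dominates(bat, mouse))  # False
print(dominates(mouse, bat))  # False -- a partial order, not a line
```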

AI cannot be compared apples-to-apples to human intelligence because it emerges from human intelligence. Emergent phenomena by their nature operate on a different plane than what they emerge from.

WE HAVE ONLY OURSELVES TO FEAR BECAUSE WE ARE INSEPARABLE FROM OUR AI

We and our AI grow together, side by side. AI evolves with us, for us, in us. It will change us as much as we change it. This is the posthuman condition. You probably have a smart phone (you might even be reading this article on it). Can you imagine what life was like before the internet? For half of my life, there was no internet, and yet I can’t imagine not having the internet as a part of my brain. And I mean that literally. If you think this is far-reaching, just wait another 5 years. Our reliance on the internet, self-driving cars, automated this, automated that, will increase beyond our imaginations.

Posthumanism is pulling us into the future. That train has left the station.

But…all these technologies that are so folded-in to our daily lives are primarily about enhancing our own abilities. They are not about becoming conscious or having “values”. For the most part, the AI that is growing around us is highly-distributed, and highly-integrated with our activities – OUR values.

I predict that Siri will not turn into a conscious being with morals, emotions, and selfish ambitions…although others are not quite so sure. Okay – I take it back; Siri might have a bit of a bias towards Apple, Inc. Ya think?

Giant Killer Robots

There is one important caveat to my argument. Even though I believe that the future of AI will not be characterized by a frightening army of robots with agendas, we could potentially face a real threat: if military robots that are ordered to kill and destroy – and that use AI and sophisticated sensor fusion to outsmart their foes – were to get out of hand, then things could get ugly.

But with the exception of weapon-based AI that is housed in autonomous mobile robots, the future of AI will be mostly custodial, highly distributed, and integrated with our own lives; our clothes, houses, cars, and communications. We will not be able to separate it from ourselves – increasingly over time. We won’t see it as “other” – we might just see ourselves as having more abilities than we did before.

Those abilities could include a better capacity to kill each other, but also a better capacity to compose music, build sustainable cities, educate kids, and nurture the environment.

If my interpretation is correct, then Bostrom’s alarm bells might be better aimed at ourselves. And in that case, what’s new? We have always had the capacity to create love and beauty … and death and destruction.

To quote David Byrne: “Same as it ever was”.

Maybe Our AI Will Evolve to Protect Us And the Planet

Here’s a more positive future to contemplate:

AI will not become more human-like – which is analogous to how the body of an animal does not look like the cells that it is made of.

Billions of years ago, single cells decided to come together in order to make bodies, so they could do more using teamwork. Some of these cells were probably worried about the bodies “taking over”. And oh did they! But, these bodies also did their little cells a favor: they kept them alive and provided them with nutrition. Win-win baby!

To conclude, I disagree with Bostrom: we should not be terrified.

Terror is counter-productive to human progress.

The Body Language of a Happy Lizard

I love watching my dog greet us when we come home after being out of the house for several hours. His body language displays a mix of running in circles, panting, bobbing his head up and down, wagging his tail vigorously, wagging his body vigorously, yapping, yipping, barking, doing the down-dog, shaking off, and finally, jumping into our laps. All of this activity is followed by a lot of licking.

There was a time not long ago when people routinely asked, “do animals have intelligence?” and “do animals have emotions?” People who are still asking whether animals have intelligence and emotions seriously need to go to a doctor to get their mirror neurons polished. We realize now that these are useless, pointless questions.

Deconstructing Intelligence

The change of heart about animal intelligence is not just because of results from animal research: it’s also due to a softening of the definition of intelligence. People now discuss artificial intelligence at the dinner table. We often hear ourselves saying things like “your computer wants you to change the filename”, or “self-driving cars in the future will have to be very intelligent”.

The concept of intelligence is working its way into so many non-human realms, both technological and animal. We talk about the “intelligence of nature”, the “wisdom of crowds”, and other attributions of intelligence that reside in places other than individual human skulls.


Can a Lizard Actually Be “Happy”? 

I want to say a few things about emotions.

The problem with asking questions like “can a lizard be happy?” lies in the dependency on words like “happy”, “sad”, and “jealous”. It is futile to try to fit a complex dynamic of brain chemistry, neural firing, and semiosis between interacting animals into a box with a label on it. Researchers doing work on animal and human emotion should avoid using words for emotions. Just the idea of trying to capture something as visceral, somatic, and, um…wordless as an emotion in a single word is counterproductive. Can you even claim that you are feeling one emotion at a time? No: emotions ebb and flow, they overlap, they are fluid – ephemeral. Like memory itself, as soon as you start to study your own emotions, they change.

And besides; words for emotions differ among languages. While English may be the official language of science, it does not mean that its words for emotions are more accurate.

Alas…since I’m using words to write this article (!) I have to eat my words. I guess I would have to give the following answer to the question, “can a lizard be happy?”

Yes. Kind of.

The thing is: it’s not as easy to detect a happy lizard as it is to detect a happy dog. Let’s compare these animals:

HUMAN        DOG         COW           BIRD         LIZARD         WORM

This list is roughly ordered by how similar the animal is to humans in terms of intelligent body language. Dogs share a great deal of the body language that we associate with emotions. Dogs are especially good at expressing shame. (Do cats feel less shame than dogs? They don’t appear to show it as much as dogs, but we shouldn’t immediately jump to conclusions because we can’t see it in terms of familiar body language signals).

On the surface, a cow may appear placid and relaxed…in that characteristic bovine way. But an experienced veterinarian or rancher can easily detect a stressed-out cow. As we move farther away from humans in this list of animals, the body language cues become harder and harder to detect. In the simpler animals, do we even know if these emotions exist at all? Again…that may be the wrong question to ask.


It would be wrong of me to assume that there are no emotional signals being generated by an insect, just because I can’t see them.

[Image: ants communicating via touch]

Ant body language is just not something I am familiar with. The more foreign the animal, the more difficult it is for us humans to attribute “intelligence” or “emotion” to it.

Zoosemiotics may help to disambiguate these problematic definitions, and place the gaze where it may be more productive.

I would conclude that we need to continue to remove those anthropocentric biases that have gotten in the way of science throughout our history.

When we have adequately removed those biases regarding intelligence and emotion, we may more easily see the rich signaling that goes on between all animals on this planet. We will begin to see more clearly a kind of super-intelligence that permeates the biosphere. Our paltry words will step aside to reveal a bigger vista.

I have never taken LSD or ayahuasca, but I’ve heard from those who have that they’ve seen this super-intelligence. Perhaps these chemicals are one way of removing that bias, and taking a peek at that which binds us with all of nature.

But short of using chemicals… I guess some good unbiased science, an open mind, and a lot of compassion for our non-human friends can help us see farther – to see beyond our own body language.