I saw an article by Peter Dockrill with the headline, “Artificial intelligence should be protected by human rights, says Oxford mathematician”.
The subtitle is: “Machines Have Feelings Too”.
Regarding the potential dangers of robots and computers, Peter asks: “But do robots need protection from us too?” Peter is apparently a “science and humor writer”. I think he should stick with just one genre.
Just more click-bait.
There are too many articles on the internet with headlines like this. They are usually covered with obnoxious, eye-jabbing ads, flitting in front of my face like giant colorful moths. It’s a carnival – through and through.
I could easily cite any number of articles about the “terrifying” future of AI, “emotional machines”, “robot ethics”, and other cartoon-like dilutions of otherwise thoughtful, well-crafted science fiction.
Good science fiction is better than bad science journalism.
Here’s Ben Goldacre:
Now, back to this silly subject of machines having feelings:
Some of my previous articles express my thoughts on the future of AI, such as:
I think we should be working to fix our own emotional mess, instead of trying to make vague, naive predictions about machines having feelings. Machines will – eventually – have something analogous to animal motivation and human states of mind, but by then the human world will look so different that the current conversation will be laughable.
Right now, I am in favor of keeping the “feelings” on the human side of the equation.
We’re still too emotionally messed up to be worrying about how to tend to our machines’ feelings. Let’s fix our own feelings before giving them to our machines. We still have that choice.
And now, more stupidity from Meghan Neal:
“Computers are already faster than us, more efficient, and can do our jobs better.”
Wow Meghan, you sure do like computers, don’t you?
I personally have more hope, respect, and optimism for our species.
In this article, Meghan makes sweeping statements about machines with feelings, including how “feeling” computers are being used to improve education.
The “feeling” robots she is referring to are machines with a gimmick – they are brain-dead automatons with faces attached to them. Many savvy futurists suggest that true AI will not result from humans trying to make machines act like humans. That’s anthropomorphism. Programming pre-defined body language in an unthinking robot makes for interesting and insightful experimentation in human-machine interaction. But please! Don’t tell me that these machines have “feelings”.
This article says: “When Nao is sad, he hunches his shoulders forward and looks down. When he’s happy, he raises his arms, angling for a hug. When frightened, Nao cowers, and he stays like that until he is soothed with some gentle strokes on his head.”
Pardon me while I projectile vomit.
Any time you are tempted to compare human intelligence with machine intelligence, consider what Marvin once said: