Maybe I’m obsessing over a tiny bit of language here, but I really believe that the language we use has a large impact on the way we think about things, and thus, the way we go about solving problems. Take the concept of “gene” for example.
Everything I’ve learned about genetics tells me that there is no clear, obvious separation between genes and environment. It’s like the boundary of the Mandelbrot Set.
If you try to untangle the source of a trait to determine whether it comes from genes or from environment (nature vs. nurture), you usually fail. That’s because the interaction of genes with the environment really is like the boundary of the Mandelbrot Set: you can keep zooming in, but you’ll never find the boundary.
And this is fundamental to how nature operates.
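The zooming claim above can be made literal with a few lines of code. Here is a minimal escape-time sketch (Python, purely for illustration): as you approach a point on the boundary of the Mandelbrot Set, such as c = -0.75, the iteration takes longer and longer to decide whether the point is inside or outside. The boundary never resolves, no matter how far you zoom.

```python
def escape_time(c, max_iter=100000):
    """Iterate z -> z^2 + c from z = 0; return how many steps it
    takes for |z| to exceed 2 (i.e., for c to be ruled "outside"),
    or max_iter if it never escapes."""
    z = 0j
    for n in range(max_iter):
        if abs(z) > 2:
            return n
        z = z * z + c
    return max_iter

# Approach the boundary point c = -0.75 from above. The closer we
# get, the more iterations it takes to settle "inside or outside".
for eps in (0.1, 0.01, 0.001, 0.0001):
    print(eps, escape_time(-0.75 + eps * 1j))
```

The escape counts grow without bound as eps shrinks, which is exactly the point: near the boundary, "gene or environment?" style binary questions take arbitrarily long to answer.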
Nature versus nurture debates assume that variation in a trait is primarily due to either genetic differences or environmental differences. However, the current scientific opinion holds that neither genetic differences nor environmental differences are solely responsible for producing phenotypic variation, and that virtually all traits are influenced by both genetic and environmental differences.
It is rarely productive to talk about a “gene” in the singular. “Genes” is almost always a plural concept, and the reason is that the interaction of genes and environment (the fundamental basis for evolution) goes all the way down to the level of the genes themselves. In other words:
At a Basic Level: Genes are the environment for genes.
The way a gene is expressed is influenced by the other genes that take part in the choreography of expression.
I originally learned this from reading Richard Dawkins’ The Selfish Gene. From the point of view of the single gene, being the most atomic unit of selection, EVERYTHING other than itself…constitutes the environment. That includes other genes.
So, when you hear a science writer claiming that “Researchers determine that there is no ‘math gene’…”, you should conclude that the author is (1) correct, and (2) ignorant about biology.
Of course there is no math gene. Math skill (or any skill) grows out of a tangled interaction of inherited instinct (genetic makeup) and environmental factors (experience, learning, outside influences). The “nature vs. nurture” debate is counter-productive. The question should not be about determining which is the cause. It should be about determining the way these two factors come together to continually bring the natural world into being.
Because it’s a tangled hierarchy of influences, people get uncomfortable. Science is supposed to untangle these things, right? Not always. Science can help us understand that tangled hierarchies are actually the norm. That’s nature.
This is not to say that there are no culprit genes for certain diseases or observable traits. They do in fact exist in certain cases. For instance, there do exist “single-gene disorders”. But these are usually mutations – deviations from an otherwise typical situation.
John Oliver recently made a compelling rant against science journalism, and how perfectly valid science often gets trivialized, simplified, and even rendered false…for mass consumption.
There is no single bullet theory in nature. Science writers should spend less time looking for a simple story to catch people’s eye with a punchy headline. Nature is complex…like the Mandelbrot Set. And that’s awesome.
I have been thinking about the uncanny valley for decades. Here are some things I’ve written on the subject:
Over time, animated filmmakers have become more savvy about the uncanny problem, and they are generally getting better at avoiding the creeps. According to this article, Disney learned its lesson the hard way…
“And that’s why realism-fetishizing technology like motion capture is much more susceptible to creeping us out than more “primitive” or stylized animation: it’s only when you’re purporting to offer that level of detail in the first place that you can totally, utterly screw it up.”
Despite the fact that animators are more savvy about the Valley, I still can’t help but notice a nagging, low-grade fever of optical realism that has crept into the lineup of popular animated characters (even as the accidental monsters get shuffled off to quarantine). Consumers of animated films may be unaware of it…because it has become normalized. The realism has increased, bit by bit, so that now we have quivering hair follicles, sparkling teeth, and eyeballs reflecting the light of the environment.
Imagine if our favorite classic characters were rendered like this.
But the discomfort we call the uncanny valley doesn’t only occur when the thin veneer of visual realism unexpectedly reveals a mindless robot where “nobody is home”. The phenomenon could be seen in a larger context: it is caused by the clash of any two aspects of an artificial character that operate at incompatible levels of realism. For instance…
Can Animals Become Too Human?
I recently saw Zootopia. I really enjoyed it. Great film. But I must say, I did catch a glimpse of the Valley. There’s no denying it.
I also recently saw Guardians of the Galaxy, with Rocket Raccoon, who exhibits two very different kinds of realism: (1) Raccoon! (2) A tough guy with attitude – and a very human intelligence.
Can contradictory behavioral realism create a different sort of valley? Technology for character animation has enabled a much higher level of expressivity than has ever been possible, with fine detail in subtle eye and mouth movements. One might conclude that since behavioral realism has caught up with visual realism, the uncanny valley should now be a thing of the past. But then again, that depends on whether the behavior and the visuals apply to the same species!
Nothing abnormal about a cartoony raccoon throwin’ shapes and talkin’ tough. But when this animal is rendered in a hyper-realistic manner, AND evoking high-res human expression, things start to feel odd.
Pandas, ants, lobsters, bison, eels … in order for all of these various animals to assume the range of human emotion needed to deliver a clever line, they have to be equipped with a face with all the expected degrees of freedom. The result is what I call “rubber mask syndrome”.
One example is the characters in Antz, whose faces stretch in very un-ant-like ways in order to express very human-like things. More and more animals (unlikely animals even) are being added to the cast of movie stars. They are snarky, sly, witty, sexy, clever…and oh so human. It has all gotten a little weird if you ask me.
I saw an article by Peter Dockrill with the headline, “Artificial intelligence should be protected by human rights, says Oxford mathematician”.
The subtitle is: “Machines Have Feelings Too”.
Regarding the potential dangers of robots and computers, Peter asks: “But do robots need protection from us too?” Peter is apparently a “science and humor writer”. I think he should stick with just one genre.
Just more click-bait.
There are too many articles on the internet with headlines like this. They are usually covered with obnoxious, eye-jabbing ads, flitting in front of my face like giant colorful moths. It’s a carnival – through and through.
I could easily include any number of articles about the “terrifying” future of AI, “emotional machines”, “robot ethics”, and other cartoon-like dilutions of otherwise thoughtful well-crafted science fiction.
Good science fiction is better than bad science journalism.
Here’s Ben Goldacre:
Now, back to this silly subject of machines having feelings:
Some of my previous articles express my thoughts on the future of AI, such as:
I think we should be working to fix our own emotional mess, instead of trying to make vague, naive predictions about machines having feelings. Machines will – eventually – have something analogous to animal motivation and human states of mind, but by then the human world will look so different that the current conversation will be laughable.
Right now, I am in favor of keeping the “feelings” on the human side of the equation.
We’re still too emotionally messed up to be worrying about how to tend to our machines’ feelings. Let’s fix our own feelings first before giving them to our machines. We still have that choice.
And now, more stupidity from Meghan Neal:
“Computers are already faster than us, more efficient, and can do our jobs better.”
Wow Meghan, you sure do like computers, don’t you?
I personally have more hope, respect, and optimism for our species.
In this article, Meghan makes sweeping statements about machines with feelings, including how “feeling” computers are being used to improve education.
The “feeling” robots she is referring to are machines with a gimmick – they are brain-dead automatons with faces attached to them. Many savvy futurists suggest that true AI will not result from humans trying to make machines act like humans. That’s anthropomorphism. Programming pre-defined body language in an unthinking robot makes for interesting and insightful experimentation in human-machine interaction. But please! Don’t tell me that these machines have “feelings”.
This article says: “When Nao is sad, he hunches his shoulders forward and looks down. When he’s happy, he raises his arms, angling for a hug. When frightened, Nao cowers, and he stays like that until he is soothed with some gentle strokes on his head.”
Pardon me while I projectile vomit.
Any time you are trying to compare human intelligence with computers, consider what Marvin once said:
I remember several decades ago learning that we were at the beginning of an information revolution. The idea, as I understood it, was that many things are moving towards a digital economy; even wars will become information-based.
The information revolution takes over where the industrial revolution left off.
I am seeing an even bigger picture emerging – it is consistent with the evolution of the universe and Earth’s biosphere.
At the moment, I can hear a bird of prey (I think it’s a falcon) that comes around this neighborhood every year about this time and calls from the treetops. When I think about the amount of effort that birds put into mating calls and other kinds of communication, I am reminded of how large a role information plays in the biological world. The variety and vigor of birdsong is amazing. From an evolutionary point of view, one has to assume that there is great selective pressure behind such energetic, organized sound.
This is just a speck of dust in comparison to the evolution of communication in our own species, for whom information is a major driver in our activities. Our faces have evolved to give and receive a very high bandwidth of information between each other (Compare the faces of primates to those of less complex animals and notice the degree to which the face is optimized for giving and receiving information).
Our brains have grown to massive proportions (relatively-speaking) to account for the role that information plays in the way our species survives on the planet.
Now: onto the future of information…
Beaming New Parts to the Space Station
Guess which is more expensive:
- Sending a rocket to the space station with a new part to repair an old one.
- Beaming up the instructions to build the part on an on-board 3D printer.
You guessed it.
And this is where some people see society going in general. 3D printing will revolutionize society in a big way. Fewer moving atoms, more moving bits.
To what degree will the manipulation of bits become more important than the manipulation of atoms?
Not Just a Revolution: Evolution
My sense is that the information revolution is not merely one in a series of human eras: it is the overall trend of life on Earth. We humans are the agents of the latest push in this overall trend.
Some futurists predict that nanotechnology will make it possible to infuse information processing into materials, giving rise to programmable matter. Ray Kurzweil predicts that the deep nano-mingling of matter and information will be the basis for a super-intelligence that can spread throughout the universe.
For now, let’s ride this information wave and try to use the weightlessness of bits to make life better for all people (and all life-forms) on Earth – not just a powerful few.
Rafi Letzter wrote an article called “If you think your brain is more than a computer, you must accept this fringe idea in physics”.
The article states the view of computer scientist Scott Aaronson: “…because the brain exists inside the universe, and because computers can simulate the entire universe given enough power, your entire brain can be simulated in a computer.”
Who the fuck said computers can simulate the entire universe?
That is a huge assumption. It’s also wrong.
We need to always look closely at the assumptions that people use to build theories. If it could be proven that computers can simulate the entire universe, then this theory would be slightly easier to swallow.
The human brain is capable of computation, and that’s why humans are able to invent computers.
The very question as to whether the brain “is a computer” is wrong-headed. Does the brain use computation? Of course it does (among other things). Is the brain a computer? Of course it isn’t.
I’m reading Kurzweil’s The Singularity is Near.
It occurs to me that the transition the human race is about to experience is similar to other major transitions that are often described as epochs – paradigm shifts in which a new structure emerges over a previous structure. There are six key epochs that Kurzweil describes. (The first four are not unlike epochal stages described by Terrence Deacon and others.)
- Physics and Chemistry
- Biology and DNA
- Brains
- Technology
- Human Intelligence Merges with Human Technology
- Cosmic Intelligence
When a new epoch comes into being, the agents of that new epoch don’t necessarily eradicate, overcome, usurp, reduce, or impede the agents of the previous epoch. Every epoch stands on the shoulders of the last. This is one reason not to fear the Singularity…as if it is going to destroy us or render us un-human. In fact, epoch number 5 may allow us to become more human (a characterization that we could only truly make after the fact – not from our current vantage point).
I like to think of “human” as a verb: as a shift from animal to post-human, because it characterizes our nature of always striving for something more.
There are debates raging on whether the Singularity is good or bad for humanity. One way to avoid endless debate is to do the existential act: to make an attempt at determining the fate of humanity, rather than sit passively and make predictions. As Alan Kay famously said, “the best way to predict the future is to invent it”. We should try to guide the direction of the next epoch as much as we can while we are still the ones in charge.
In a previous article I wrote that criticizes some predictions by Nick Bostrom, I compare our upcoming epochal shift to a shift that happened in the past, when multi-cellular beings evolved. Consider:
Maybe Our AI Will Evolve to Protect Us And the Planet
Billions of years ago, single cells decided to come together in order to make bodies, so they could do more using teamwork. Some of these cells were probably worried about the bodies “taking over”. And oh did they! But, these bodies also did their little cells a favor: they kept them alive and provided them with nutrition. Win-win baby!
I am not a full-fledged Singularitarian. I prefer to stay agnostic as long as I can. It’s not just a human story: our Singularity is just the one that is happening to us at the moment.
Similarly, the emergence of previous epochs may have been experienced as Singularities to those that came before.