The feeling of consciousness is an illusion

Stanislas Dehaene’s book, Consciousness and the Brain, distinguishes several kinds of consciousness. It helps to separate the many uses of the words “conscious” and “consciousness”. The kind of consciousness that he has studied and reported in his book has measurable effects, which allows the scientific method to be applied.

After reading Dehaene’s book, I am more convinced that science will eventually explain fully how we hold thoughts in our minds, how we recognize things, form ideas, remember things, process our thoughts, and act on them. To be conscious “of” something – whether the presence of a person, a thing, or a fleeting thought – is a form of consciousness with a particular signature: physiological markers that reveal a telltale change in the brain coinciding with a person’s report of becoming aware of something.

Brain imaging will soon advance to such a degree that we will begin to see signatures of many kinds of thoughts and associate them with outward behaviors and expressions. It is also being used to show that some people in a vegetative state are actually aware of what is going on around them, even if they have no way to express this outwardly. So much will be explained. We are at a stage in brain research where consciousness is becoming recognized as a measurable physical phenomenon; it is making its way into the domain of experimental science. Does this mean that consciousness will soon no longer be a subject of philosophy?

Qualia

There is one kind of consciousness that we may never be able to measure directly: the subjective feeling of being alive, of being “me”, of experiencing a self. It is entirely private. Daniel Dennett describes these subjective feelings, referred to as “qualia”, as supposedly ineffable: they cannot be communicated, or apprehended by any means other than one’s own direct experience.

This would imply that the deepest and most personal form of consciousness is something that we will never be able to fully understand; it is forever inaccessible to objective observation.

On the other hand, the fact that I can write these words and that you can (hopefully) understand them means that we probably have similar sensations in terms of private consciousness. The vast literature on personal conscious experience implies a shared experience. But of course it is shared: human brains are very similar to each other (my brain is more similar to your brain than it is to a galaxy, or a tree, or the brain of a chicken or the brain of a chimp). The aggregate of all reports of this inaccessible subjective state constitutes a kind of objective truth – indirect and fuzzy, sure – but nonetheless a source for scientific study.

So I’d like to offer a possible scenario that could unfold over the next several decades. Suppose brain scientists continue to map out more and more states of mind, gathering ever more accurate and precise signatures of conscious thoughts. As scientific data and theories accumulate to explain the measurable effects of consciousness in the brain, we may begin to relegate the most private, inexpressible aspects of qualia to an ever-smaller role. Neuroscience will enable more precise language for describing subtle private experiences that we have all had but may not have had a clear way to express. Science will nibble away at the edges.

An evolved illusion

And here’s an idea that I find hard to internalize, but am beginning to believe:

It’s all an illusion.

…because self is an illusion; a theatre concocted by the evolving brain to help animals become more effective at surviving in the world; to improve their ability to participate in biosemiosis. Throughout evolution, the boundary between an organism’s body and the rest of the world has complexified out of necessity as other organisms complexify themselves – this includes social structures and extended phenotypes. Also, the more autonomous the organisms of an evolving species become, the more self is needed to drive that autonomy.

The idea that we are living in an illusion is gaining ground, as explored in an article called “The Evolutionary Argument Against Reality”.

Feelings are created by the body/brain as it interacts with the world, with thoughts generated in the brain, and with chemicals that ebb and flow in our bodies. The feeling of consciousness might be just that: a feeling – a sensation – like so many other sensations. Perhaps it was invented by the evolving brain to make survival more of a personal matter. The problem is: being so personal is what makes it so difficult to relegate to the status of mere illusion.


Here’s one way to evolve an artificial intelligence

This picture illustrates an idea for how to evolve an AI system. It is derived from the sensor-brain-actuator-world model.

Machine learning algorithms have been doing some impressive things. Simply by crawling through massive oceans of data and finding correlations, some AI systems are able to make unexpected predictions and reveal insights.

Neural nets and evolutionary algorithms constitute a natural pairing of technologies for designing AI systems. But evolutionary algorithms require selection criteria that can be difficult to design. One solution is to use millions of human observers as a Darwinian fitness force to guide an AI towards an optimal state.
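To make the pairing concrete, here is a minimal sketch of such a loop (in Python; every name, parameter, and the random stub for human voting are my own illustrative assumptions, not a real system):

```python
import random

def mutate(genome, rate=0.1):
    """Copy a genome (a flat list of weights) with small random changes."""
    return [g + random.gauss(0, rate) for g in genome]

def human_score(genome):
    """Stand-in for the real fitness signal: millions of observers rating
    the 'simulation' this genome produces. Here, just a random stub."""
    return random.random()

def evolve(pop_size=20, genome_len=8, generations=100):
    population = [[random.gauss(0, 1) for _ in range(genome_len)]
                  for _ in range(pop_size)]
    for _ in range(generations):
        # Selection: humans, not a hand-coded objective, decide what survives.
        population.sort(key=human_score, reverse=True)
        survivors = population[: pop_size // 2]
        # Reproduction: refill the population with mutated survivors.
        population = survivors + [mutate(s) for s in survivors]
    return population[0]

best = evolve()
```

The only unusual ingredient is human_score: the selection pressure comes from people rating what the AI shows them, so no explicit fitness function ever has to be designed.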

Clarifications

Since there is so much discussion (and confusion) about AI these days, I want to make some clarifications.

  • This has nothing to do with consciousness or self. This AI is disembodied.
  • The raw data input is not curated. It has no added interpretation.
  • Any kind of data can be input. The AI will ignore most of it at first.
  • The AI presents its innards to humans. I am calling these “simulations”.
  • The AI algorithm uses some unspecified form of machine learning.
  • The important innovation here is the ability to generate “simulations”.

Mothering

The humanist in me says we need to act as the collective Mother for our brain children by providing continual reinforcement for good behavior and discouraging bad behavior. As a world view emerges in the AI, and as an implicit code of morals comes into focus, the AI will “mature”. Widely-expressed fears of AI run amok could be partially alleviated by imposing a Mothering filter on the AI as it comes of age.
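As a loose sketch of how a “Mothering filter” might be wired in (all names and thresholds here are hypothetical, not a real system), human approval or disapproval could be folded into a running value for each behavior:

```python
def mothering_filter(action, feedback, values, threshold=-0.5):
    """Hypothetical sketch: fold human approval (+1) or disapproval (-1)
    into a running value for each behavior, and block any behavior whose
    learned value falls below the threshold."""
    values[action] = values.get(action, 0.0) + 0.1 * feedback
    return values[action] > threshold  # True means the behavior is allowed

values = {}
mothering_filter("share_result", feedback=+1, values=values)   # reinforced
mothering_filter("delete_files", feedback=-1, values=values)   # discouraged
```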

Can Anything Evolve without Selection?

I suppose it is possible for an AI to arrive at every possible good idea, insight, and judgement just by digesting the constant data spew from humanity. But without a feedback-driven learning process (such as back-propagation and the other feedback mechanisms used in training AI), the AI cannot truly learn. Evolution requires an ecosystem of continual feedback, as the toy experiment below illustrates.
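In this sketch (the target value, mutation rate, and population size are arbitrary assumptions of mine), a population of random guesses converges when a selection step driven by feedback is present; without it, the population merely drifts.

```python
import random

TARGET = 0.75  # an arbitrary "good" value that the feedback rewards

def step(population, use_selection=True):
    mutated = [x + random.gauss(0, 0.05) for x in population]
    if not use_selection:
        return mutated  # no feedback: pure random drift
    # Feedback: keep the half closest to the target and duplicate it.
    mutated.sort(key=lambda x: abs(x - TARGET))
    best_half = mutated[: len(mutated) // 2]
    return best_half + list(best_half)

pop = [random.random() for _ in range(40)]
for _ in range(200):
    pop = step(pop)
# With selection the population clusters tightly near TARGET;
# with use_selection=False it wanders aimlessly forever.
```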

Abstract Simulations 

Abstraction in Modernist painting is about generalizing the visual world into forms and colors that substitute overall impressions for detail. Art historians have charted the transition from realism to abstraction – a kind of freeing-up and opening-up of vision.

Imagine now a new path leading from abstraction to realism. And it doesn’t just apply to images: it also applies to audible signals, texts, movements, and patterns of behavior.

Imagine an AI that is set up like the illustration above coming alive for the first time. The inner life of a newborn infant is chaotic, formless, and devoid of meaning, with the exception of reactions to a mother’s smile, her scent, and her breasts.

A newborn AI would produce meaningless simulations. As the first few humans log in to give feedback, they will encounter mostly formless blobs. But eventually, some patterns may emerge – with just enough variation for the human judges to start making selective choices: “this blob is more interesting than that blob”.

As the continual stream of raw data accumulates, the young AI will start to build impressions and common themes – much as Deep Dream does as it collects images, finds common themes, and starts riffing on those themes.

http://theghostdiaries.com/10-most-nightmarish-images-from-googles-deepdream/

The important thing about this process is that it can self-correct if it starts to veer in an unproductive direction – initially with the guidance of humans, and eventually on its own. It also maintains a memory of bad decisions and failed experiments – which are all a part of growing up.

Takeaway

If this idea is interesting to you, just Google “evolving AI” and you will find many many links on the subject.

As for my modest proposal, the takeaway I’d like to leave you with is this:

Every brain on earth builds inner simulations of the world and plays parts of those simulations constantly, as a matter of course. Simple animals have extremely simple models of reality. We humans have insanely complex models – which often get us into trouble. Trial simulations generated by an evolving AI would start out pretty dumb, but with more sensory exposure and human guidance, who knows what would emerge!

It would be irresponsible to launch AI programs without mothering. The evolved brains of most complex mammals naturally expect it, and our AI brain children are derived from mammalian brains. Mothering will allow us to evolve AI systems that don’t turn into evil monsters.

We are always dreaming

Take a large pot of water and leave it out in sub-freezing temperatures for a few days. It will turn into a block of ice.

Now take that pot of water and put it on the stove and crank up the flame. Before long, it will start to boil.

Let it cool for a few hours at room temperature and it will resume its familiar liquid form.

If you drop a live fish into liquid water it will swim around and do fishy things.

Things would not go so well if you drop a fish onto a block of ice. Fish are not good skaters.

And if you drop a fish into boiling water…well, the fish will not be very happy.

Think about these states of water as metaphors for how your brain works. A block of ice is a dead brain. A pot of boiling water is a brain having a seizure. Water at room temperature is a normal brain.

The fish represents consciousness.

………………….

Liquid brain

There is a constant low level of electrical activity among neurons (like water molecules bouncing off of each other, doing the Brownian dance). Intrinsic random neuronal activity is the norm – it keeps a low fire burning all the time. In a sense, the brain has a pilot light.

A bit of randomness is helpful for keeping the mind creative and open to new ways of thinking – consciously and unconsciously. Like the ever-present force of natural selection that curates random mutation in genetic evolution, there are dynamical structures in the brain that permit more meaningful, useful energy to percolate from the random background.
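As a loose computational analogy (my sketch, not Dehaene’s model), stochastic hill climbing shows how random background activity can be curated by structure into something useful: noise proposes, the acceptance rule disposes.

```python
import random

def noisy_search(score, x, steps=1000, noise=0.1):
    """Stochastic hill climbing: random proposals play the role of
    background neural noise; the acceptance test plays the role of the
    structures that let useful signal percolate out of the randomness."""
    best = score(x)
    for _ in range(steps):
        candidate = x + random.gauss(0, noise)  # the pilot light flickers
        s = score(candidate)
        if s > best:                            # structure curates the noise
            x, best = candidate, s
    return x

# Example: noise alone is enough to find the peak of a simple hill at x = 2.
peak = noisy_search(lambda x: -(x - 2.0) ** 2, x=0.0)
```

Without the noise term there are no new candidates at all – which is the point of keeping the pilot light lit.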

Command and control

The majority of the brain’s activity is unconscious. At every second of your life a vast army of dynamical structures is buzzing around, managing the low-level mechanisms of multi-sensory input, attention, memory, and intent. These structures are vast in number, short-lived, and small. And they are entirely inaccessible to the conscious mind.

The command and control area of the brain is located at the front-top of the neocortex. The signature of consciousness is a network of relatively stable, large-scale dynamical structures, with fractal fingers branching down into the vast network of unconscious structures. The buzz of the unconscious mind percolates and fuses into something usable to the conscious mind. It offers up to the conscious mind a set of data-compressed packets. When the command and control center relaxes, we experience wandering thoughts. And those thoughts wander because the brain’s pilot light provides constant movement.

These ideas are derived from Dehaene’s Consciousness and the Brain.

Surrender to dreaming

When we start falling asleep, the command and control center begins to lose its grip. The backdrop of randomness sometimes makes its way past the fuzzy boundary of our consciousness – creating a half-dreaming state. Eventually, when consciousness loses out, all that is left is this random, low-level buzz of neural activity.

But dreaming is obviously not totally random. Recent memories have an effect…and of course so do old but powerful memories. The physical structure of the brain does not permit total randomness to stay random for very long. Original randomness is immediately filtered by the innate structure of the brain. And that structure is permeated with the leftovers from a lifetime of experience.

So here’s a takeaway from recent neuroscience, inspired by the findings of Stanislas Dehaene: WE ARE ALWAYS DREAMING. That is because the unconscious brain is continually in flux. What we recognize as dreaming is merely the result of lifting the constraints imposed by the conscious mind – revealing an ocean – flowing in many directions.

The unconscious brain can contribute to a more creative life. And a good night’s sleep keeps the conscious mind out of the way while the stuff gathered in wakefulness is given a chance to float around in the unconscious ocean. While in the ocean, it either dissolves away or settles into functional memory – kicking out an occasional dream in the process.


Hummingbird on a wire

I looked out the window this morning and I thought I saw a speck on the window pane. Upon closer look, I realized that the speck was a hummingbird perched high on a wire spanning two telephone poles.

I became the bird’s dedicated audience for about three minutes. I watched closely as the tiny bee-like creature surveyed the surroundings from its high vantage point.

What was the bird thinking? And can I use the word “thinking” to describe the activities in this bird’s mind? For that matter, does the bird have a mind? It certainly has a brain. And that brain has a special feature: its hippocampus is five times larger than that of song birds, seabirds, and woodpeckers. According to this article, “The birds can remember where every flower in their territory is and how long it takes to refill with nectar after they have fed.”

Thinking is a by-product of an animal body, which is a member of a species with specific needs, skills, and adaptations to a particular environment.

Fear (and Love) of Heights

If I were perched on a wire as high as the hummingbird, I would be terrified: “Get me down from here!” On the other hand, a bird feels perfectly at home at such high altitudes.

Consider a hawk sliding across the horizon above a vast valley. Looking down from its vantage point, the hawk may experience inner-peace – possibly moments of boredom (if you will permit me to apply these human-oriented emotion labels to a hawk’s subjective experience). A human hang-glider would experience exhilaration, and moments of fear. And maybe…moments of that same inner-peace that the hawk experiences.

Above image from: https://www.pinterest.com/explore/hang-gliding/

When I have joyful flying dreams, my brain is not triggering the fear network. I am experiencing a peaceful freedom from gravity – with touches of exhilaration.

I wish I could become as light and deft (and fearless) as a bird, and watch the world from the tallest treetops in my neighborhood.

Science writers who say machines have feelings…lack intelligence.

I saw an article by Peter Dockrill with the headline, “Artificial intelligence should be protected by human rights, says Oxford mathematician”.

The subtitle is: “Machines Have Feelings Too”.

Regarding the potential dangers of robots and computers, Peter asks: “But do robots need protection from us too?” Peter is apparently a “science and humor writer”. I think he should stick with just one genre.

Just more click-bait.

There are too many articles on the internet with headlines like this. They are usually covered with obnoxious, eye-jabbing ads, flitting in front of my face like giant colorful moths. It’s a carnival – through and through.

I could easily include any number of articles about the “terrifying” future of AI, “emotional machines”, “robot ethics”, and other cartoon-like dilutions of otherwise thoughtful well-crafted science fiction.

Good science fiction is better than bad science journalism.


Here’s Ben Goldacre:

[Screenshot: Ben Goldacre quote]

Now, back to this silly subject of machines having feelings:

Some of my previous articles express my thoughts on the future of AI, such as:

No Rafi. The Brain is not a Computer

The Singularity is Just One in a Series

Why Nick Bostrom is Wrong About the Dangers of Artificial Intelligence

Intelligence is NOT One-Dimensional

I think we should be working to fix our own emotional mess, instead of trying to make vague, naive predictions about machines having feelings. Machines will – eventually – have something analogous to animal motivation and human states of mind, but by then the human world will look so different that the current conversation will be laughable.

Right now, I am in favor of keeping the “feelings” on the human side of the equation.

We’re still too emotionally messed up to be worrying about how to tend to our machines’ feelings. Let’s fix our own feelings first before giving them to our machines. We still have that choice.

And now, more stupidity from Meghan Neal:

“Computers are already faster than us, more efficient, and can do our jobs better.”

Wow Meghan, you sure do like computers, don’t you?

I personally have more hope, respect, and optimism for our species.

In this article, Meghan makes sweeping statements about machines with feelings, including how “feeling” computers are being used to improve education.

The “feeling” robots she is referring to are machines with a gimmick – brain-dead automatons with faces attached to them. Many savvy futurists suggest that true AI will not result from humans trying to make machines act like humans. That’s anthropomorphism. Programming pre-defined body language into an unthinking robot makes for interesting and insightful experimentation in human-machine interaction. But please! Don’t tell me that these machines have “feelings”.

This article says: “When Nao is sad, he hunches his shoulders forward and looks down. When he’s happy, he raises his arms, angling for a hug. When frightened, Nao cowers, and he stays like that until he is soothed with some gentle strokes on his head.”


Pardon me while I projectile vomit.

Any time you are trying to compare human intelligence with computers, consider what Marvin once said:

[Screenshot: Marvin Minsky quote]

No Rafi. The brain is not a computer.

Rafi Letzter wrote an article called “If you think your brain is more than a computer, you must accept this fringe idea in physics”.


The article states the view of computer scientist Scott Aaronson: “…because the brain exists inside the universe, and because computers can simulate the entire universe given enough power, your entire brain can be simulated in a computer.”

Who the fuck said computers can simulate the entire universe?

That is a huge assumption. It’s also wrong.

We always need to look closely at the assumptions that people use to build theories. If it could be proven that computers can simulate the entire universe, then this theory would be slightly easier to swallow.

By the way, a computer cannot simulate the entire universe, because it is part of the universe: it would have to simulate itself simulating itself simulating itself – an infinite regress.

The human brain is capable of computation, and that’s why humans are able to invent computers.

The very question as to whether the brain “is a computer” is wrong-headed. Does the brain use computation? Of course it does (among other things). Is the brain a computer? Of course it isn’t.