Redefining consciousness in order to solve the Big Question

Consciousness is an emergent property of evolution. As with all products of evolution, we can gather evidence to build theories and explanations.

We should avoid (or postpone) the problem of subjective experience (qualia); we should intentionally remove the question of personal experience and switch to scientifically observable evidence.

This idea was proposed by Stanislas Dehaene, in his book Consciousness and the Brain.

(image from http://www.brainfacts.org/neuroscience-in-society/supporting-research/2014/book-review-consciousness-and-the-brain)

A variation/interpretation of this idea is to redefine consciousness to be a property of living things or complex adaptive systems in general where certain common behaviors are exhibited. In the case of a wildcat hunting a rodent, with the implications of recognition, focus, attention, and other factors, we might be able to collect a set of markers of this kind of consciousness. There would not be a single marker, and we would not expect these markers to be consistent in all species, because consciousness could come in varying degrees, kinds, and loci.

In terms of degree, a snake probably has “less consciousness” than a fox. And a fox probably has “less consciousness” than a human. And all of these animals have “more consciousness” than a carrot.

But it may not be a matter of degree – perhaps it is more a matter of kind. (Is it possible to map raccoon-like consciousness to dolphin-like consciousness?)

Or it could be more a matter of locus (if there is anything like consciousness among ants – can it be found in a single ant’s brain? Or is it more likely to be distributed among a swarm of ants?)

Brain imaging has become a powerful tool for using evidence-based science to get at the problem.

(image from https://www.lesswrong.com/posts/x4n4jcoDP7xh5LWLq/book-summary-consciousness-and-the-brain)

There’s an old gem of wisdom: if a Big Question defies the Big Answer, you might need to change the Question. Consciousness may need to be unshackled from subjectivity in order to be redefined using scientific evidence. As a consequence, there may be new and better ways to understand subjective experience.

Our subjective experience causes us to resist the act of defining consciousness based on evidence, because subjective experience is precious and tied to the self, which wants to be immortal.

When the answer to the Big Question comes, it might have two possible effects: (1) It might be unsavory and counterintuitive – similar to the way quantum physics is counterintuitive – but nonetheless indisputable and scientifically verified; or (2) It might unleash an orchestra of language, mental tools, metaphors, and intuitions, forming a major advance in human knowledge and understanding – not unlike the theory of natural selection itself.


Deconstructing Agnosticism

 

Take a random phrase from the left column, a random phrase from the middle column, and a random phrase from the right column. Combine them to construct a question about your belief in God. How many possible questions can you construct?

The answer is 1080. And that doesn’t include the many, many other possible phrases you might want to include in this list. This illustrates the expansiveness of questioning everything. Since “God” is difficult to define, and since there are many ways to represent, understand, and experience God, one can’t truly answer the question “do you believe in God?” unless the asker and the answerer share the same sense of what they are talking about.
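The arithmetic here is just the Cartesian product of the three columns: the number of questions is the product of the column sizes. A minimal sketch in Python, assuming hypothetical column sizes of 10, 12, and 9 (the actual phrases appear in the table image from the original post; these placeholders are chosen only so the count matches 1080):

```python
from itertools import product

# Hypothetical phrase columns -- the real ones appear in the post's
# table image. Sizes 10 x 12 x 9 are placeholders that multiply to 1080.
left   = [f"left-phrase-{i}"   for i in range(10)]
middle = [f"middle-phrase-{i}" for i in range(12)]
right  = [f"right-phrase-{i}"  for i in range(9)]

# Every question is one phrase from each column, in order.
questions = [" ".join(parts) for parts in product(left, middle, right)]
print(len(questions))  # 10 * 12 * 9 = 1080
```

Adding even one phrase to any column multiplies the total by the new column size, which is why the space of possible questions grows so quickly.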

One conclusion from this exploration is that we cannot escape the realm of words and language in the effort to articulate the nature of our beliefs. Can anyone think about belief without using some form of (internal or external) language?

Is belief naturally binary (I do believe vs. I don’t believe)? If it is not binary, can it be called a “belief”? Cultural/social forces and neural structures may cause a predisposition towards binarism in beliefs. In any case, I suspect that it is good to subdue these tendencies, for matters of intelligence as well as for social ease.

In my opinion (which could always change), agnosticism is (1) a good way to exercise one’s own intellectual agility, and (2) socially productive; it helps you hear and accept other people’s many kinds of beliefs, non-beliefs, assumed beliefs and believed assumptions.

True agnostics are not compelled to agree or disagree. In terms of epistemology, they are incapable of doing either.

No doubt, for many people, belief and faith are passionate and deeply-felt, and so it may not be easy to take such a dispassionate attitude. But as long as people are using language to question and express belief, the mechanics of logic necessarily come into play. 

In that case, the art of living may be the wordless expression that escapes the realm of agreement and disagreement.  Thus, God (or the absence of God) is best expressed in terms of how we live rather than what we say.

The feeling of consciousness is an illusion

Stanislas Dehaene’s book, Consciousness and the Brain, identifies various kinds of consciousness. It helps to separate the various uses of the words “conscious” and “consciousness”. The kind of consciousness that he has studied and reported in his book has measurable effects. This allows the scientific method to be applied.

After reading Dehaene’s book, I am more convinced that science will eventually fully explain how we hold thoughts in our minds, how we recognize things, form ideas, remember things, process our thoughts, and act on them. To be conscious “of” something – whether it be the presence of a person, a thing, or a fleeting thought – is a form of consciousness that can have a particular signature – physiological markers that demonstrate a telltale change in the brain that coincide with a person reporting on becoming aware of something.

Brain imaging will soon advance to such a degree that we will begin to see signatures of many kinds of thoughts and associate them with outward behaviors and expressions. It is also being used to show that some people who are in a vegetative state are actually aware of what is going on, even if they have no way to express this fact outwardly. So much will be explained. We are at a stage in brain research where consciousness is becoming recognized as a measurable physical phenomenon. It is making its way into the domain of experimental science. Does this mean that consciousness will soon no longer be a subject of philosophy?

Qualia

There is one kind of consciousness which we may never be able to directly measure. And that is the subjective feeling of being alive, of being “me”, and experiencing a self. It is entirely private. Daniel Dennett suggests that these subjective feelings, which are referred to as “qualia”, are ineffable: they cannot be communicated, or apprehended by any other means than one’s own direct experience.

This would imply that the deepest and most personal form of consciousness is something that we will never be able to fully understand; it is forever inaccessible to objective observation.

On the other hand, the fact that I can write these words and that you can (hopefully) understand them means that we probably have similar sensations in terms of private consciousness. The vast literature on personal consciousness experience implies a shared experience. But of course it is shared: human brains are very similar to each other (my brain is more similar to your brain than it is to a galaxy, or a tree, or the brain of a chicken or the brain of a chimp). The aggregate of all reports of this inaccessible subjective state constitutes a kind of objective truth – indirect and fuzzy, sure – but nonetheless a source for scientific study.

So I’d like to offer a possible scenario that could unfold over the next several decades. What if brain scientists continue to map out more and more states of mind, gathering ever more accurate and precise signatures of conscious thoughts? As more scientific data and theories accumulate to explain the measurable effects of consciousness in the brain, we may begin to relegate the most private, inexpressible aspects of qualia to an ever-smaller status. Neuroscience will enable more precise language for describing subtle private experiences that we have all had but may not have had a clear way to express. Science will nibble away at the edges.

An evolved illusion

And here’s an idea that I find hard to internalize, but am beginning to believe:

It’s all an illusion.

…because self is an illusion; a theatre concocted by the evolving brain to help animals become more effective at surviving in the world; to improve their ability to participate in biosemiosis. Throughout evolution, the boundary between an organism’s body and the rest of the world has complexified out of necessity as other organisms complexify themselves – this includes social structures and extended phenotypes. Also, the more autonomous the organisms of an evolving species become, the more self is needed to drive that autonomy.

The idea that we are living in an illusion is gaining ground, as explored in an article called “The Evolutionary Argument Against Reality”.

Feelings are created by the body/brain as it interacts with the world, with thoughts generated in the brain, and with chemicals that ebb and flow in our bodies. The feeling of consciousness might be just that: a feeling – a sensation – like so many other sensations. Perhaps it was invented by the evolving brain to make it more of a personal matter. The problem is: being so personal is what makes it so difficult to relegate to the status of mere illusion.

Here’s one way to evolve an artificial intelligence

This picture illustrates an idea for how to evolve an AI system. It is derived from the sensor-brain-actuator-world model.

Machine learning algorithms have been doing some impressive things. Simply by crawling through massive oceans of data and finding correlations, some AI systems are able to make unexpected predictions and reveal insights.

Neural nets and evolutionary algorithms constitute a natural pairing of technologies for designing AI systems. But evolutionary algorithms require selection criteria that can be difficult to design. One solution is to use millions of human observers as a Darwinian fitness force to guide an AI towards an optimal state.
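The loop described above can be sketched in a few lines. This is a minimal, illustrative version only: the “genome” is just a list of numbers standing in for an AI’s parameters, and the millions of human observers are simulated by a stand-in fitness function (here pretending judges prefer values near zero); in the real proposal, that score would come from aggregated human votes.

```python
import random

random.seed(42)  # deterministic run, for illustration only

GENOME_LEN = 8
POP_SIZE = 20

def random_genome():
    return [random.uniform(-1, 1) for _ in range(GENOME_LEN)]

def mutate(genome, rate=0.1):
    # Small random variation, like mutation in genetic evolution.
    return [g + random.gauss(0, rate) for g in genome]

def human_fitness(genome):
    # Stand-in for aggregated human judgments ("this blob is more
    # interesting than that blob"). Here we simply pretend the judges
    # prefer genomes whose values sit close to zero.
    return -sum(g * g for g in genome)

population = [random_genome() for _ in range(POP_SIZE)]
for generation in range(50):
    # Rank by (simulated) human preference; keep the better half.
    ranked = sorted(population, key=human_fitness, reverse=True)
    survivors = ranked[: POP_SIZE // 2]
    # Refill the population with mutated copies of the survivors.
    population = survivors + [mutate(p) for p in survivors]

best = max(population, key=human_fitness)
```

The design point is that no selection criterion has to be hand-coded: the `human_fitness` stub is the only piece that changes when real human judges replace the simulation.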

Clarifications

Since there is so much discussion (and confusion) on AI these days, I want to make some clarifications.

  • This has nothing to do with consciousness or self. This AI is disembodied.
  • The raw data input is not curated. It has no added interpretation.
  • Any kind of data can be input. The AI will ignore most of it at first.
  • The AI presents its innards to humans. I am calling these “simulations”.
  • The AI algorithm uses some unspecified form of machine learning.
  • The important innovation here is the ability to generate “simulations”.

Mothering

The humanist in me says we need to act as the collective Mother for our brain children by providing continual reinforcement for good behavior and discouraging bad behavior. As a world view emerges in the AI, and as an implicit code of morals comes into focus, the AI will “mature”. Widely-expressed fears of AI run amok could be partially alleviated by imposing a Mothering filter on the AI as it comes of age.

Can Anything Evolve without Selection?

I suppose it is possible for an AI to arrive at every possible good idea, insight, and judgement just by digesting the constant data spew from humanity. But without a feedback-driven learning process (such as back-propagation and the other feedback mechanisms used in training AI), the AI cannot truly learn in an ecosystem of continual feedback.

Abstract Simulations 

Abstraction in Modernist painting is about generalizing the visual world into forms and colors that substitute overall impressions for detail. Art historians have charted the transition from realism to abstraction – a kind of freeing-up and opening-up of vision.

Imagine now a new path leading from abstraction to realism. And it doesn’t just apply to images: it also applies to audible signals, texts, movements, and patterns of behavior.

Imagine an AI that is set up like the illustration above coming alive for the first time. The inner life of a newborn infant is chaotic, formless, and devoid of meaning, with the exception of reactions to a mother’s smile, her scent, and her breasts.

A newborn AI would produce meaningless simulations. As the first few humans log in to give feedback, they will encounter mostly formless blobs. But eventually, some patterns may emerge – with just enough variation for the human judges to start making selective choices: “this blob is more interesting than that blob”.

As the continual stream of raw data accumulates, the AI will start to build impressions and common themes – like what Deep Dream does as it collects images, finds common themes, and starts riffing on them.

http://theghostdiaries.com/10-most-nightmarish-images-from-googles-deepdream/

The important thing about this process is that it can self-correct if it starts to veer in an unproductive direction – initially with the guidance of humans and eventually on its own. It also maintains a memory of bad decisions, and failed experiments – which are all a part of growing up.

Takeaway

If this idea is interesting to you, just Google “evolving AI” and you will find many, many links on the subject.

As for my modest proposal, the takeaway I’d like to leave you with is this:

Every brain on earth builds inner simulations of the world and replays parts of those simulations constantly as a matter of course. Simple animals have extremely simple models of reality. We humans have insanely complex models – which often get us into trouble. Trial simulations generated by an evolving AI would start out pretty dumb, but with more sensory exposure and human guidance, who knows what would emerge!

It would be irresponsible to launch AI programs without mothering. The evolved brains of most complex mammals naturally expect this. Our AI brain children are naturally derived from a mammalian brain. Mothering will allow us to evolve AI systems that don’t turn into evil monsters.

We are always dreaming

Take a large pot of water and leave it out in sub-freezing temperatures for a few days. It will turn into a block of ice.

Now take that pot of water and put it on the stove and crank up the flame. Before long, it will start to boil.

Let it cool for a few hours at room temperature and it will resume its familiar liquid form.

If you drop a live fish into liquid water it will swim around and do fishy things.

Things would not go so well if you dropped a fish onto a block of ice. Fish are not good skaters.

And if you drop a fish into boiling water…well, the fish will not be very happy.

Think about these states of water as metaphors for how your brain works. A block of ice is a dead brain. A pot of boiling water is a brain having a seizure. Water at room temperature is a normal brain.

The fish represents consciousness.

………………….

Liquid brain

There is a constant low level of electrical activity among neurons (like water molecules bouncing off of each other, doing the Brownian dance). Intrinsic random neuronal activity is the norm – it keeps a low fire burning all the time. In a sense, the brain has a pilot light.

A bit of randomness is helpful for keeping the mind creative and open to new ways of thinking – consciously and unconsciously. Like the ever-present force of natural selection that curates random mutation in genetic evolution, there are dynamical structures in the brain that permit more meaningful, useful energy to percolate from the random background.

Command and control

The majority of the brain’s activity is unconscious. At every second of your life, a vast army of dynamical structures is buzzing around, managing the low-level mechanisms of multi-sensory input, attention, memory, and intent. These structures are numerous, short-lived, and small. And they are entirely inaccessible to the conscious mind.

The command and control area of the brain is located at the front-top of the neocortex. The signature of consciousness is a network of relatively stable, large-scale dynamical structures, with fractal fingers branching down into the vast network of unconscious structures. The buzz of the unconscious mind percolates and fuses into something usable to the conscious mind. It offers up to the conscious mind a set of data-compressed packets. When the command and control center relaxes, we experience wandering thoughts. And those thoughts wander because the brain’s pilot light provides constant movement.

These ideas are derived from Dehaene’s Consciousness and the Brain.

Surrender to dreaming

When we start falling asleep, the command and control center begins to lose its grip. The backdrop of randomness sometimes makes its way past the fuzzy boundary of our consciousness – creating a half-dreaming state. Eventually, when consciousness loses out, all that is left is this random, low-level buzz of neural activity.

But dreaming is obviously not totally random. Recent memories have an effect…and of course so do old but powerful memories. The physical structure of the brain does not permit total randomness to stay random for very long. Original randomness is immediately filtered by the innate structure of the brain. And that structure is permeated with the leftovers from a lifetime of experience.

So here’s a takeaway from recent neuroscience, inspired by the findings of Stanislas Dehaene: WE ARE ALWAYS DREAMING. That is because the unconscious brain is continually in flux. What we recognize as dreaming is merely the result of lifting the constraints imposed by the conscious mind – revealing an ocean – flowing in many directions.

The unconscious brain can contribute to a more creative life. And a good night’s sleep keeps the conscious mind out of the way while the stuff gathered in wakefulness is given a chance to float around in the unconscious ocean. While in the ocean, it either dissolves away or settles into functional memory – kicking out an occasional dream in the process.

 

Hummingbird on a wire

I looked out the window this morning and I thought I saw a speck on the window pane. Upon closer look, I realized that the speck was a hummingbird perched high on a wire spanning two telephone poles.

I became the bird’s dedicated audience for about three minutes. I watched closely as the tiny bee-like creature surveyed the surroundings from its high vantage point.

What was the bird thinking? And can I use the word “thinking” to describe the activities in this bird’s mind? For that matter, does the bird have a mind? It certainly has a brain. And that brain has a special feature: its hippocampus is five times larger than that of song birds, seabirds, and woodpeckers. According to this article, “The birds can remember where every flower in their territory is and how long it takes to refill with nectar after they have fed.”

Thinking is a by-product of an animal body, which is a member of a species with specific needs, skills, and adaptations to a particular environment.

Fear (and Love) of Heights

If I were perched on a wire as high as the hummingbird, I would be terrified: “Get me down from here!” On the other hand, a bird feels perfectly at home at such high altitudes.

Consider a hawk sliding across the horizon above a vast valley. Looking down from its vantage point, the hawk may experience inner-peace – possibly moments of boredom (if you will permit me to apply these human-oriented emotion labels to a hawk’s subjective experience). A human hang-glider would experience exhilaration, and moments of fear. And maybe…moments of that same inner-peace that the hawk experiences.

Above image from: https://www.pinterest.com/explore/hang-gliding/

When I have joyful flying dreams, my brain is not triggering the fear network. I am experiencing a peaceful freedom from gravity – with touches of exhilaration.

I wish I could become as light and deft (and fearless) as a bird, and watch the world from the tallest treetops in my neighborhood.