This sentence makes sense, BUT…

I am reading a non-fiction book. Some of the sentences that the author writes make no sense to me. I have noticed several times that the author uses the words “but”, “however”, “yet”, and other conjunctions in ways that leave me scratching my neocortex.

The conjunction “and” is rather harmless. You could say:

“The pig is hungry and I will be home by 7:30pm”.

Admittedly, this is a strange sentence. But it doesn’t contradict itself. It’s just strange.

Now, what if I say:

“The pig is hungry but I will be home by 7:30pm.”

This has a different meaning. There is an implied contradiction or conflict between the two sides of the “but”.

But Glue

Imagine having a tube of “but glue” that you can use to attach separate statements or clauses. The statement above has two parts that were connected with but glue. The problem is that the second half of the sentence doesn’t contradict, correct, or oppose the first half of the sentence. Compare that with this statement:

“The jug is intended to store water, but you can fill it with apple juice”.

…which makes more sense.

To be fair, it is possible that given a particular situation, the first sentence might make sense. Consider: “The pig is hungry and usually gets fed at 5:30pm, but she’ll be okay because I’ll be home by 7:30pm to feed her.”

If the reader were already familiar with the context, the backstory, and the implications, it might make sense.

But… out of context, it makes no sense.

Missing Context

Back to the book I mentioned earlier: the problem is that the author is not able to craft clear context for his general reader. The author frequently drills down into his smallish area of expertise, or his momentary logical twiddling, with its particular conflictual dynamics. There may be clashing assumptions or contradictions in his context, but he has not successfully expressed those opposing assumptions. Either he is too absorbed in his arcane world of logic, or he is just not that good at crafting an argument or illuminating new ways of thinking about a familiar subject.

Bad Buts

“But” can also be used to manipulate an argument to the benefit of the but-wielder. Beware of buts that subtly change the subject, creating the false appearance of a contradiction…when there is none.

A friend of mine was caught doing this. She may not even have been aware of it, but she evoked butness to apparently weaken my argument. I called her on it.

Another friend of mine avoids gluing sentences together with “No…but”. She much prefers “Yes…and”.

No But Yes And

Attention to buts and ands amounts to more than just a nerdy obsession with grammar. It can illuminate large-scale philosophies on life, and the contradictions within it. Or the lack of contradictions.

Now…I am not an expert on grammar or good writing…

but there’s one thing I can say for sure:

this coffee sure smells good!

Very large numbers are not numbers: Infinity does not exist

(this blog post was originally published at https://eyemath.wordpress.com/. It has been moved to this blog – with slight changes.)

Remember Nietzsche’s famous announcement, “God is dead”? In the domain of mathematics, Nietzsche’s announcement could just as well refer to infinity.

There are some philosophers who are putting up a major challenge to the Platonic stronghold on math: Brian Rotman, author of Ad Infinitum, is one of them. I am currently reading his book. I thought of waiting until I was finished with the book before writing this blog post, but I decided to go ahead and splurt out my thoughts.

————————

Charles Petzold gives a good review of Rotman’s book here.

Petzold says:

“We begin counting 1, 2, 3, and we can go on as long as we want.

That’s not true, of course. “We” simply cannot continue counting “as long as we want” because “We” (meaning “I” the author and “you” the reader) will someday die — probably in the middle of reciting a very long (but undoubtedly finite) number.

What the sentence really means is that some abstract ideal “somebody” can continue counting, but that’s not true either: Counting is a temporal process, and at some point everybody will be gone in a heat-dead universe. There will be no one left to count. Even long before that time, counting will be limited by the resources of the universe, which contains only a finite number of elementary particles and a finite amount of energy to increment from one integer to the next.”

Is Math a Human Activity or Eternal Truth?

Before continuing on to infinity (which is impossible, of course), I want to bring up a related topic that Rotman addresses: the nature of math itself. My thoughts at the moment are these:

You (reader) and I (writer) have brains that are almost identical, as far as objects in the universe go. We share common genes and language, and we are vehicles that carry human culture. We cannot think without language. “Language speaks man” – Heidegger.

Since we have not encountered any aliens, it is not possible for us to have an alien’s brain planted into our skulls so that we can experience what “logic”, “reality”, or “mathematical truth” feels like to that alien (yes, I used the word “feel”). Indeed, that alien brain might harbor the same concept that our brains do – that 2+2=4 – but it might not. In fact, who is to say that the notion of “adding” means anything to the alien? Or the concept of “equality”? And who is to say that the alien uses language by putting symbols together into a one-dimensional string?

More to the point: would that alien brain have the same concept of infinity as our brains?

It is quite possible that we can never know the answers to these questions, because we cannot leave our brains; we cannot escape the structure of our language, which defines our process of thinking. We cannot see “our” math from outside the box. That is why we cannot believe in any other math.

So, to answer the question: “Is math a human activity or eternal truth?” – I don’t know. Neither do you. No one can know the answer, unless or until we encounter a non-human intelligence that either speaks an identical mathematical truth – or doesn’t.

Big Numbers are Patterns

My book, Divisor Drips and Square Root Waves, explores the notion of really large numbers as characterized by pattern rather than size (the size of the number referring to where it sits in the countable ordering of other numbers on the 1D number line). In this book, I explore the patterns of the neighborhoods of large numbers in terms of their divisors.

This is a decidedly visual/spatial attitude toward number, whereby number-theoretical ideas emerge from the contemplation of spatial patterning.

The number:

80658175170943878571660636856403766975289505440883277824000000000000

doesn’t seem to have much meaning. But when you consider that it is the number of ways in which you can arrange a single deck of cards, it suddenly has a short expression. In fact it can be expressed simply as 52 factorial, or “52!”.

So, by expressing this number with only three symbols: “5”, “2”, and “!”, we have a way to think about this really big-ass number in an elegant, meaningful way.
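
In fact, this is easy to verify for yourself. Here is a minimal sketch in Python (whose integers are arbitrary-precision, so nothing overflows; the variable name is my own):

```python
import math

# 52! -- the number of distinct orderings of a standard deck of cards
deck_orderings = math.factorial(52)

print(deck_orderings)
# 80658175170943878571660636856403766975289505440883277824000000000000

print(len(str(deck_orderings)))
# 68 -- a 68-digit number, expressed above with just three symbols: 5, 2, !
```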

We are still a LONG way from infinity.

Now, one argument in favor of infinity goes like this: you can always add 1 to any number. So, you could add 1 to 52! making it 80658175170943878571660636856403766975289505440883277824000000000001.

Indeed, you can add 1 to the estimated number of atoms in the universe to generate the number 10^80 + 1. But the countability of that number is still in question. Sure, you can always add 1 to a number, but can you add enough 1's to 10^80 to reach 10^800?
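
Just to put a rough scale on that question, here is a small back-of-the-envelope sketch in Python (my own illustration, using the commonly cited estimate of 10^80 atoms in the observable universe):

```python
atoms = 10 ** 80        # rough estimate of atoms in the observable universe
target = 10 ** 800

gap = target - atoms
print(len(str(gap)))    # 800 -- the gap is itself an 800-digit number
```

Adding 1's across an 800-digit gap is not a count that any physical process could ever finish.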

Are we getting closer to infinity? No my dear. Long way to go.

Long way to “go”?  What does “go” mean?

Bigger numbers require more exponents (or whatever notational schemes are used to express bigness with few symbols – Rotman refers to hyper-exponents, and hyper-hyper-exponents, and further symbolic manipulations that become increasingly hard to think about or use).

These contraptions are looking less and less like everyday numbers. In building such contraptions in hopes of approaching some vantage point from which to sniff infinity, one finds a dissipative effect – the landscape becomes ever more choppy.

No surprise: infinity is not a number.

Infinity is an idea. Really really big numbers – beyond Rotman’s “realizable” limit – are not countable or cognizable. The bigger the number, the less number-like it is. There’s no absolute cut-off point. There is just a gradual dissipation of realizability, countability, and utility.

Where Mathematics Comes From

Rotman suggests taking God out of mathematics and putting the body back in. The body (and the brain and mind that emerged from it) constitutes the origins of math. While math requires abstractions, there can be no abstraction without some concrete embodiment that provides the origin of that abstraction. Math did not come from “out there”.

That is the challenge that some thinkers, such as Rotman, are proposing. People trained in mathematics, and especially people who do a lot of math, are guaranteed to have a hard time with this. Platonic truth is built into their belief structure. The more math they do, the more they believe that mathematical truth is discovered, not generated.

I am sympathetic to this mindset. The more relationships that I find in mathematics, the harder it is to believe that I am just making it up. And for that reason, I personally have a softer version of this belief: Math did not emerge from human brains only. Human brains evolved in Earth’s biosphere – which is already an information-dense ecosystem, where the concept of number – and some fundamental primitive math concepts – had already emerged. This is explained in my article:

The Evolution of Mathematics on Planet Earth

I have some sympathy with Roger Penrose: when I explore the Mandelbrot Set, I have to ask myself, “who the hell made this thing!” Certainly no mathematician!

After all, the Mandelbrot Set has an infinite amount of fractal detail.
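
If you have never generated it yourself, it is worth seeing how little machinery all that detail comes from. Here is a minimal escape-time sketch in Python (the grid size and iteration cap are arbitrary choices of mine):

```python
# Minimal escape-time rendering of the Mandelbrot Set:
# iterate z = z*z + c and watch whether z escapes the circle |z| = 2.
def mandelbrot_rows(width=80, height=32, max_iter=40):
    for row in range(height):
        line = ""
        for col in range(width):
            # map the character grid onto the complex plane
            c = complex(-2.5 + 3.5 * col / width, -1.25 + 2.5 * row / height)
            z = 0j
            for _ in range(max_iter):
                z = z * z + c
                if abs(z) > 2.0:   # escaped: c is outside the set
                    line += " "
                    break
            else:
                line += "*"        # never escaped: c is (probably) inside the set
        yield line

for line in mandelbrot_rows():
    print(line)
```

Two lines of actual math; an infinitely deep boundary. Zoom in anywhere along the edge and new structure keeps appearing, at every scale.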

But then again, no human (or alien) will ever experience this infinity.

An intelligent car that can’t communicate with its driver is a dumbass car.

(image from https://www.tribuneindia.com/2014/20141116/spectrum/motor.htm)

Let’s talk about body language.

A key property of body language is that it is almost always unconscious to both giver and receiver.

Image from: https://talbotspy.org/but-his-tail-was-wagging-understanding-dog-body-language-part-1/

This is not a problem in itself – in fact, it’s actually really good that body language happens mostly unconsciously. Body language is necessarily unconscious. The flood of signals from a talking body is vast, high-bandwidth, high-rate, and highly parallel. It must bypass the higher brain in order to do its work. The higher brain is too busy making decisions and trying to be rational to be bothered with such things.

The problem with the backchannel nature of body language is that it is often in competition with explicit, linear verbal language, which is a pushy tyrant. There are too many pushy tyrants in the tech industry that are poor at social signaling. Body language tends to be relegated to a lower priority in many areas of digital technology, including the design of software interfaces, productivity tools, kitchen appliances…and cars. This is just one symptom of the lack of diversity in the tech industry.

High tech culture is obsessed with metrics; seeking to measure as much as possible, to be data-driven, to have tangible results and ways of measuring success. This obsession with data is a mistake. Tossing out what can’t be measured or converted into data is a very big mistake. And the digitally-designed world we live in suffers as a result. Let me try to explain what I mean by all this….

A computer on wheels

The automobile was invented in the industrial age – an age defined by energy, force, mechanics, chemistry, electricity, and physicality.

We are now fumbling through the information age.

Apple Inc. has managed to reduce the thickness of laptop computers – they have become so thin that you can cut steak with them. But it should come as no surprise that the surface areas of laptop screens and keyboards have not shrunk in proportion to the miniaturization of computer chips. There is a simple reason for this: human eyes and hands are still the same size. This will never change.

The same applies to communication. The more digital our machines become, the more we have to communicate with them, and they, in turn, have to communicate with us.

An old-fashioned industrial-age car comes with natural affordances: communication happens simply as a result of the physical nature of knobs, wheels, wires, engine sounds, torques, and forces. There are many sensory stimuli that the driver sees, feels, hears, and smells – and often they are unconscious to the driver, or just above the level of consciousness.

Driving is a very different experience now. It is turning into a video game…a simulation. There is a disconnect between driver and car that seems to be growing wider.

That’s not necessarily a bad thing. But here’s the problem:

Body language between driver and car has become an arbitrary plaything, mediated by cockeyed displays and confusing controls. It is up to the whims of user interface designers – too many of whom have their heads up their asses. Idiots who call themselves designers live under the illusion that they can invent visual language on the fly and shove it into our long-lived lives, expecting their clever interfaces to fall naturally into use.

Or maybe they don’t actually think this – but don’t care anyway, because they are paid well. I’m not sure which is worse.

According to Matt Bubbers:

There’s nothing wrong with the volume knob. It does not need reinvention, nor disruption, nor innovation. The volume knob is perfect the way it is. Or, rather, the way it was.

Get into a new car in 2018 and you’re faced with a multitude of complicated ways to adjust the stereo volume: buttons or dials on the steering wheel, voice commands that rarely work, fiddly non-buttons on the centre panel, touchscreens that take your eyes off the road, even gesture controls that make you wave your hand as if you’re conducting a symphony.

Cars are too complicated. The volume knob is indicative of the problem. Call it feature bloat or mission creep: Cars are trying to do more, but they’re not doing it all well. These infotainment features can be distracting, therefore dangerous, and they cost money.

A new generation of digital designer is out of touch with nature. It is infuriating, because here we are, fumbling to bake a cake, turn on the AC, or change a channel on the TV: “Now, which of these 2,458 buttons on this TV remote do I need to press in order to change the channel?…”

“Oh shit – no wonder I’m confused: this is the remote control for the gas fireplace! Is that why it’s so hot in here?”

Driving under the influence of icons

Britannia Rescue, a firm providing a breakdown service in England, conducted a survey, interviewing over 2,000 drivers. The findings are quite startling: more than 52 per cent of drivers cannot correctly identify 16 of the most common symbols.

Interpreting a bunch of unfamiliar icons invented by out-of-touch dweebs is not how we should be interacting with our technology – especially as technology sinks deeper into our lives.

Just this morning, my 87-year-old mother and I spent about a half-hour trying to figure out how to set my sister’s high-tech oven to bake. To make matters worse, my mother, who is visually-impaired, can’t even feel the controls – the entire interface consists of a dim visual glow behind slick glass. We eventually had to read a manual. WE HAD TO READ A FUCKING MANUAL TO FIGURE OUT HOW TO TURN ON AN OVEN. How long must this insanity go on?

A car’s manual should be the car itself

Dan Carney, in the article, Complex car controls equal confused drivers, quotes Consumer Reports:  “You shouldn’t have to read the owner’s manual to figure out how to use the shifter.”

He says, “The BMW iDrive had a controller for functions like the radio and air conditioning that was so baffling that it forced drivers to take their eyes off the road.”

My Prius experience

My first experience with a Prius was not pleasant. Now, I am not expecting many of you to agree with my criticism, and I know that there are many happy Prius owners who claim that you can just ignore the geeky stuff if you don’t like it.

I don’t like it, and I can’t ignore it.

I have found that most people have a higher tolerance for figuring out technology than I do. It’s not for lack of intelligence or education; it’s more that I am impatient with stupid design. It makes me irate, because these fumblings are entirely unnecessary.

We all suffer because of the whims of irresponsible designers, supposedly working in teams that include human factors engineers and ergonomics engineers, who I assume are asleep on the job.

I place the blame for my impatience squarely on Donald Norman, whose book, “The Design of Everyday Things” implored readers to stop blaming themselves for their constant fumbling with technology. The source of the problem is irresponsible design. He converted me profoundly. And now I am a tech curmudgeon. Thanks Don.

I once had to borrow a friend’s Prius because my Honda Fit was in the shop. I liked the energy-saving aspects and the overall comfort, but the dashboard was unfamiliar. My hippocampi threw up their hands in despair. What’s worse: after parking the car in a parking lot, I put the key in my pocket and decided to check the doors to make sure they were locked. The doors were not locking. Why? I tried locking the doors many times, but every time I walked around to the other side, the opposite door would unlock. I called the owner and told her that I was not able to lock the car. She said, “Oh – that’s because the doors automatically – and magically – unlock when you walk up to them. Smart, eh?”

Hello? Discoverability? 

Thank you Prius for not telling me about your clever trick. You are one step ahead of me! Perhaps I should just stop trying to understand what the fuck you are doing and just bow to your vast intelligence. You win, Prius.

My Honda Fit is relatively simple, compared to many cars these days. But it does things that infuriate me. It decides that I want the back window wipers to turn on when the front wipers are on and I happen to be backing up. It took my car mechanic to explain the non-brilliance of this behavior. Thanks, Honda, for taking away my choice in the matter. My car also decides to turn on the interior light at night after I have parked the car. I have to wait a long time for the light to go out. My car knows how long. I am not privy to this duration. What if I don’t want strangers to see me – because I’d like to finish picking my nose before getting out?

Whether or not strangers can see me picking my nose is no longer my choice. My car has made this decision for me. Sure: I could reach up to the ceiling and turn off the light – but then I will forget to turn it on again when I actually need it. This never used to be so complicated.

Smart = dumb

I have come to the conclusion that a car without any computers is neither smart nor dumb. It has no brain, and so it cannot even try to be intelligent. On the other hand, if a car has computational processing, then it has an opportunity to be either smart or dumb. Most cars these days are poor communicators, so I call them dumb.

The decider

Another gripe: sometimes my door locks when I’m driving, and then I have to unlock it to get out – but not always. There is no rhyme or reason (that I am aware of) for when and why this happens. Yes…I know – some of you will probably offer to enlighten me with a comment. But the fact that this has to be learned in the first place is what bugs me. I would prefer one of two things: (1) My car makes it apparent to me why it is making decisions for me, or (2) it stays out of the way and lets me be the decision-maker.

Am I old-fashioned? If wanting to be in charge of basic things like locking doors and turning on lights makes me old-fashioned, then…yes, I’m old-fashioned.

(image from http://www.thehogring.com/2013/07/31/10-most-ridiculous-dashboards-of-all-time/)

Confessions of an MIT luddite

People mistake me for a techie because of my degree. And then they are shocked at how critical I am of technology. The truth is that I am a design nerd rather than a computer nerd. I have nothing against information technology – after all, I write software, and love it. I just want the technology that I rely on to be better at communicating. For example: why do gas pumps still have a one-word vocabulary? …

Beep.

Okay, I’m a neo-luddite. There, I said it. And I will remain a neo-luddite as long as the tech industry continues to ignore a billion years of evolution, which gave us – and the rest of the living world – the means to generate signals and interpret signals – the body language of the biosphere that keeps the living world buzzing along.

This natural flow of communication among organisms is a wonderful thing to behold. It happens on a level of sophistication that makes ovens and VCRs look like retardation in a box.

But then again, the evolution of information technology is extremely short compared to the evolution of natural language, which has kept the social ecosystems of Homo sapiens running for a very long time.

Perhaps I am thrashing in the midst of the Singularity, and I should just give up – because that’s what you do in the Singularity.

But I would still like to understand what the future of communication will look like. This is especially important as more and more communication is done between people and machines. At the moment, I am still a little hesitant to call it “communication”.

Anorexic Typeface Syndrome

As a design thinker of the Don Norman ilk, I place ample blame for human error on negligent or arrogant design from trend-setters who seem to be more intent on presenting slick, featureless interfaces than providing the natural affordances expected by human eyes, brains, and hands. Take typefaces for instance.

As the pixel resolution of our computer displays becomes higher, the arrogant designers who get hired by trend-setting corporations find it necessary to choose typefaces that are as thin as possible, because…thin is in!

Well, I have something to say about that: Anorexia Kills!

How about people’s ability to fucking read? I kind of like it when I can read. And I don’t like it when I am made to feel like an 87-year-old who needs a magnifying glass (like my mother – who is especially challenged when she has to actually read words on an iPad).

And it’s not just the rapidograph-like spiderweb of fonts that are becoming so hard to read. Designers are now fucking with contrast:

“There’s a widespread movement in design circles to reduce the contrast between text and background, making type harder to read. Apple is guilty. Google is, too. So is Twitter.”

—Kevin Marks WIRED: https://www.wired.com/2016/10/how-the-web-became-unreadable/

Sarah Knapton: https://www.telegraph.co.uk/science/2016/10/23/internet-is-becoming-unreadable-because-of-a-trend-towards-light/

I was in Boulder, looking for the Apple store so I could buy a new MacBook Pro. I had a hard time finding the store because the symbol I was looking for was barely visible from a distance unless I was looking straight at it.

Apple is becoming less interested in helping us be productive than in being the most slickly designed thing in the room – or the mall. Apple originally earned a reputation for good user-interface design. But the capitalist engine of unlimited growth, and the subsequent need to differentiate among the competition, has created a pathology. It has created a race to the bottom. At that bottom…our senses are being starved.

I have similar thoughts on the way physical Apple products have become so thin as to be almost dangerous – in this blog post.

It may be my imagination, but since buying my new MacBook, this very blog post seems harder to read. Did WordPress go on a font diet? Or is Apple the culprit? Check out this screenshot of this blog post as I am seeing it on my MacBook:

You may have heard the saying: “good design should be invisible”.

“Design should help people and be a silent ambassador to your business. Good designs are those that go unnoticed, that are experienced, that are invisible; bad designs are everywhere and stand out like a sore thumb.”

To say that good design should be invisible does not mean eliminating as many features as possible from a visual interface – causing it to become a wisp of gossamer that requires squinting. The human senses naturally rely on signals – we are accustomed to a high rate and high density of signals from our workable environments.

Okay, the trend away from serif to sans serif was reasonable. But I have a request of Apple and other design trend-setters: please stop eroding away at what few features remain.

Anorexia Kills!

How much negentropy is Earth capable of?

Negentropy is the opposite of entropy. It refers to an increase in order, complexity, and usefulness, while entropy refers to the decay of order or the tendency for a system to become random and useless.

The universe as a whole tends toward total entropy, or heat death. This does not mean that ALL parts of the universe are becoming less ordered. There can be isolated parts of the universe that are actually increasing in order, becoming more organized and workable. The best example of this is our home: planet Earth.

A miracle of 7,000,000,000,000,000,000,000,000,000 atoms

I was walking from my bedroom to my bathroom this morning, pondering the miracle of my body purposefully moving itself from one place in the universe to another. Consider the atoms that make up my body; they are assembled in just the right way to construct a human capable of locomotion. It is a miracle. Of course, the atoms themselves are not the driving force of this capability. The driving force is a collaboration of emergent systems (molecules, tissues, electrochemical activity, signals between organs, and of course, a brain – which evolved in the context of a complex planet, with other brains in societies, and with an ever-complexifying backdrop of shared information).

It’s a curious thing: planet Earth – with its vast oceans, atmosphere, ecosystems and organisms – is determined to go against the overall tendency in the universe to decay towards the inevitable doom of heat death.

While walking the seven billion billion billion atoms of my body to the bathroom, I considered how far the negentropic urge of our planet could possibly push itself, in a universe that generally tries to ruin the party; a universe that will ultimately win. The seven billion billion billion atoms currently in my body will eventually be strewn throughout a dead universe. At that point there will be nothing that can re-assemble them into anything useful.

How not to ruin a party

The party is not over; there is ample reason to believe that Earth is not done yet. Earth generated a biosphere – the only spherical ecosystem we know of – which produced animals and humans, and most recently, post-biological systems (technology and AI). I would not dismiss entirely the notion that Earth really wants us to invent AI, and to allow it to take over – because our AI could ultimately help Earth stay healthy, and continue its negentropic party. We humans (in our old, biological manifestation) are not capable of taking care of our own planet. Left to our own primitive survival devices, we are only capable of exploiting its resources. It is only through our post-human systems that we will be able to give Earth the leverage it needs to continue its negentropic quest.

This is another way of saying that the solutions to climate change and mass extinction will require massive social movements, corporate and governmental leadership, global-scale technologies, and other trans-human-scale systems that far exceed the mental capacities of a single human brain. It is possible that the ultimate victory of AI will be to save ourselves from an angry Mother on the verge of committing infanticide.

In the meanwhile, Earth may decide that it needs to get rid of the majority of the human population; just another reason to reconsider the urge to make babies.

But just how far can Earth’s negentropic party extend? As Earth’s most potent agents of negentropy, we humans are preparing to tap the moon, asteroids, and other planets for resources. Will we eventually be able to develop energy shields to deflect renegade asteroids? Will our robots continue to colonize the solar system? How far will Earth’s panspermia extend?

There are plenty of science fiction stories and hypothetical explorations that offer exciting and illuminating possible answers to these questions; I will not attempt to venture beyond my level of knowledge in this area. All I will say is…I think there are two possible futures for us humans:

(1) Earth will decide it has had enough of climate change, and smack us down with rising oceans and chaotic storms, causing disease, mass migrations, and war, resulting in our ultimate demise (Earth will be fine after a brief recovery period).

(2) We will evolve a new layer of the biosphere – built of technology and AI – and this will regulate our destructive instincts, thus allowing Earth to stay healthy and to keep complexifying. It will allow Earth to reconsider what it currently sees as a cancer on its skin – and to see us as agents of health.

In the case of future (2), we will lose some of our autonomy – but it just might be a comfortable existence in the long run – because Earth will be better off – and it will want to keep us around. Eventually, the panspermic negentropic party will not be our own – we will be just one of the intermediate layers of emergence emanating from the planet. We will become mere organs of an extended post-Earth ecosystem that continues to defy the general entropy of the universe…at least for a few billion more years.

The feeling of consciousness is an illusion

Stanislas Dehaene’s book, Consciousness and the Brain, identifies various kinds of consciousness. It helps to separate the various uses of the words “conscious” and “consciousness”. The kind of consciousness that he has studied and reported in his book has measurable effects. This allows the scientific method to be applied.

After reading Dehaene’s book, I am more convinced that science will eventually fully explain how we hold thoughts in our minds, how we recognize things, form ideas, remember things, process our thoughts, and act on them. To be conscious “of” something – whether it be the presence of a person, a thing, or a fleeting thought – is a form of consciousness that can have a particular signature: physiological markers that demonstrate a telltale change in the brain that coincides with a person reporting on becoming aware of something.

Brain imaging will soon advance to such a degree that we will begin to see signatures of many kinds of thoughts and associate them with outward behaviors and expressions. It is also being used to show that some people who are in a vegetative state are actually aware of what is going on, even if they have no way to express this fact outwardly. So much will be explained. We are at a stage in brain research where consciousness is becoming recognized as a measurable physical phenomenon. It is making its way into the domain of experimental science. Does this mean that consciousness will soon no longer be a subject of philosophy?

Qualia

There is one kind of consciousness which we may never be able to directly measure. And that is the subjective feeling of being alive, of being “me”, and experiencing a self. It is entirely private. Daniel Dennett suggests that these subjective feelings, which are referred to as “qualia”, are ineffable: they cannot be communicated, or apprehended by any other means than one’s own direct experience.

This would imply that the deepest and most personal form of consciousness is something that we will never be able to fully understand; it is forever inaccessible to objective observation.

On the other hand, the fact that I can write these words and that you can (hopefully) understand them means that we probably have similar sensations in terms of private consciousness. The vast literature on personal consciousness experience implies a shared experience. But of course it is shared: human brains are very similar to each other (my brain is more similar to your brain than it is to a galaxy, or a tree, or the brain of a chicken or the brain of a chimp). The aggregate of all reports of this inaccessible subjective state constitutes a kind of objective truth – indirect and fuzzy, sure – but nonetheless a source for scientific study.

So I’d like to offer a possible scenario that could unfold over the next several decades. What if brain scientists continue to map out more and more states of mind, gathering more accurate and precise signatures of conscious thoughts? As more scientific data and theories accumulate to explain the measurable effects of consciousness in the brain, we may begin to relegate the most private, inexpressible aspects of qualia to an increasingly smaller status. Neuroscience will enable more precise language to describe subtle private experiences that we have all had but may not have had a clear way to express. Science will nibble away at the edges.

An evolved illusion

And here’s an idea that I find hard to internalize, but am beginning to believe:

It’s all an illusion.

…because self is an illusion; a theatre concocted by the evolving brain to help animals become more effective at surviving in the world; to improve their ability to participate in biosemiosis. Throughout evolution, the boundary between an organism’s body and the rest of the world has complexified out of necessity as other organisms complexify themselves – this includes social structures and extended phenotypes. Also, the more autonomous the organisms of an evolving species become, the more self is needed to drive that autonomy.

The idea that we are living in an illusion is gaining ground, as explored in an article called “The Evolutionary Argument Against Reality”.

Feelings are created by the body/brain as it interacts with the world, with thoughts generated in the brain, and with chemicals that ebb and flow in our bodies. The feeling of consciousness might be just that: a feeling – a sensation – like so many other sensations. Perhaps it was invented by the evolving brain to make it more of a personal matter. The problem is: being so personal is what makes it so difficult to relegate to the status of mere illusion.

The sleeping sponge: on the evolution of waking up

From the book Wide Awake at 3:00am, I learned that researchers had come up with an answer to a common question: “Why do we sleep?”

It’s a valid question. What’s the actual purpose of sleep? Why would nature favor having the majority of animal species waste several hours each day in a state of unconsciousness, getting nothing done, and becoming vulnerable to predators?

The answer the researchers came up with required turning the question on its head: “Why should any living thing bother waking up at all?” Perhaps sleep is the normal state of all life, and wakefulness is just some aberration – a phenomenon that evolved later – as a part-time activity to more efficiently pursue food and sex.

As a lover of naps and hater of alarm clocks, I kind of like this idea.

I recall reading somewhere that sponges are “always asleep”. But I also read recently that sponges “never sleep”. Rather than go back and do more research to clear up this issue, I shall instead declare that the problem lies in the definition of “sleep”.

If you’re a sponge, you have no neurons. Having no neurons is a good indication that you have no brain. And no brain means no dreaming. Sponges are not like us in that they are sessile: they have no motility (except in the larval stage, when genetic dispersal occurs). If you don’t have to get up and go to work, why bother having a brain? Brains provide inner-representations of the outside world – used to navigate unpredictable terrains. Sponges just sit there at the bottom of the ocean and collect ambient nutrition. A task so easy that anyone can do it in their sleep.

Brains for Movement

The evolution of mobility required not only the direct control of muscles but also representations of reality that determined when and how those muscles get activated. Brains evolved in order for animals to move.

Long ago, there was no such thing as “waking up”. Until brains came along and gave organisms a reason to get off their asses and get a job. Perhaps asses and jobs had to evolve as well. But let’s not get too technical here.

It is possible that the binary states of wakefulness and sleep were not invented by brains themselves, but earlier in evolutionary history, by simple neuronal networks that generate sleep-like dynamics. Given that every location on Earth other than the poles has been cycling between day and night since before life emerged, it makes sense that organic periods would emerge to harmonize with this cycle.

Perhaps the very process of storing representations of reality – no matter how small or simple – requires a periodic cycle – as indicated by research finding that sleep is required for brains to prune useless memories and absorb useful ones.

My takeaway from all of this is that I have an organ that likes to make me do complicated things for many hours each day: sixteen to be exact. That’s a long time each day being on the move and getting worked up about other brains that are wreaking havoc on the world, such as the shriveled-up shitball inside of Donald Trump’s skull.

Before I die, I will thank my brain for collecting a massive library of memories that fueled a lifetime of dreams. And then I will say goodnight to my brain, and get back to sleep.