# Math Word Problems are Problematic

Mark Twain said: “Never let school interfere with your education.”

Here’s a math riddle:

“Peter has 21 fewer marbles than Nancy. If Peter has 43 marbles, how many marbles does Nancy have?”

The first sentence requires me to do some linguistic fiddling. There is an implication that both Peter and Nancy possess marbles – but it is not directly stated. The second sentence begins with “If”, which means the primary grammatical elements in the question are postponed until the end. Let’s re-phrase this riddle to say:

“Peter and Nancy each have a bag of marbles. Peter has 43 marbles in his bag. Peter has 21 fewer marbles than Nancy has. How many marbles does Nancy have in her bag?”

….

This might make the riddle easier to solve. Or it might not. Either way, I can say for sure that all this wordy bullshit is irrelevant to the actual math.
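Strip away the narrative, and the entire riddle reduces to a single addition. Here’s a minimal sketch in Python (the variable names are mine, not part of the problem):

```python
# "Peter has 21 fewer marbles than Nancy" means Nancy has 21 more than Peter.
peter = 43
nancy = peter + 21
print(nancy)  # 64
```

Everything else in the problem statement is linguistic packaging.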

Math, like Music, is a Universal Language

Now consider what it would be like if you were naturally talented in math, and you were faced with a math riddle expressed in English…but English were not your first language. You may have to spend more time on the question, and you may make some critical mistakes. The subtleties of one language may not translate to another language, causing you to trip up.

We are playing with words here. Now, playing with words is fine; it’s part of how we learn to speak, listen, read, and write. In fact, playing with words that have mathematical content is a good exercise. But this should not come into play for testing students on math skills. The problem (as always) is in the testing.

Here’s another one:

“Sue has two pencils. She spends one hour at the store and buys three more pencils. How many pencils does Sue have in all?”

WTF does “spends one hour at the store” mean? Is this just narrative fluff, or is there some clever hint in there?

If I had been presented with this problem as a young student, I would have spent some time mulling over “spends one hour at the store”. However, this is irrelevant and unrelated to the answer.

How to Obfuscate Mathematical Thinking With Clever Language

For dyslexic students, students who learn through action (kinesthetic learners), students who are visual thinkers, and students who learn best by building things, this wordsmithing can be a recipe for failure.

In the real world of adults getting things done and making a living, math is rarely experienced in the form of clever riddles. Math – at its best – is manifested deep within the texture of our daily actions.

Here’s another one:

“You have 24 cookies and want to share them equally with 6 people. How many cookies would each person get?”

Let’s think about this. I “have” 24 cookies. (That’s a lot of cookies – why would I have so many cookies?) I “want” to share them with 6 people. Okay. I have a desire to share cookies. So far so good. I’m a generous guy! But then the second sentence arrives, seemingly unrelated: “How many cookies would each person get?” Wait a minute: am I about to give these cookies to these people? And what exactly does “equally” mean?

I know it may seem trivial for me to analyze these details. As an adult I know what this sentence means. But as a young student, I may not have had the full vocabulary or grammatical wherewithal to jump right to an answer. Also, as a “narrative learner”, I would have really wanted to make sure I understood the characters involved, their motivations, etc. I could imagine getting easily swept up by the storyline (simple as it is).

In short, by working out the characters of this story and their motivations, I may not actually be doing math: I might be engaging in language craft and storytelling. Which is great! But this should not interfere with my being tested on my innate math skills.
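Meanwhile, the math hiding under all that character development is a single division. A quick sketch (again, the names are mine):

```python
# "Share 24 cookies equally with 6 people" boils down to one division.
cookies = 24
people = 6
per_person = cookies // people
print(per_person)  # 4
```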

Here’s another:

“Kennedy had 10 apples. She gave some to John. Now she has 2 apples left. How many apples did she give to John?”

The tense of this little story jumps back and forth between past and present. At age 55, I am now quite facile with language, but when I was 10, I would have had to put some effort into parsing these shifts in tense.

In fact, my language skills were quite poor when I was 10, and this had an impact on all my school subjects (not just math). Later in life, after I had escaped school and actually started to gain some relevant skills, MIT offered me an opportunity to earn a Master’s degree. They did not ask me any math riddles. MIT knows better than that.

Language

One might argue that language skills are fundamental and important for learning most anything. That’s accurate. Reading, writing, speaking, and listening are fundamentally useful, and the better you are at language, the better you are likely to become at most other skills.

If this is the case, we might conclude that mixing grammatical sentence structure with mathematical logic is a valuable skill.

Indeed.

But school curriculum designers should not confuse the ability to parse a cleverly-crafted sentence with one’s innate mathematical abilities.

The problem, as always, is with TESTING.

I’ll close with this:

An Open Letter to the Education System: Please Stop Destroying Students

# Why Nick Bostrom is Wrong About the Dangers of Artificial Intelligence

Nick Bostrom is a philosopher who is known for his work on the dangers of AI in the future. Many other notable people, including Stephen Hawking, Elon Musk, and Bill Gates, have commented on the existential threats posed by a future AI. This is an important subject to discuss, but I believe that many careless assumptions are being made about what AI actually is and what it will become.

Yeah yeah, there’s Terminator, Her, Ex Machina, and so many other science fiction films that touch upon deep and relevant themes about our relationship with autonomous technology. Good stuff to think about (and entertaining). But AI is much more boring than what we see in the movies. AI can be found distributed in little bits and pieces in cars, mobile phones, social media sites, hospitals…just about anywhere that software can run and where people need some help making decisions or getting new ideas.

John McCarthy, who coined the term “Artificial Intelligence” in 1956, said something that is totally relevant today: “as soon as it works, no one calls it AI anymore.” Given how poorly defined AI is, and how easily its definition seems to morph, it is curious how excited some people get about its existential dangers. Perhaps these people are afraid of AI precisely because they do not know what it is.

Elon Musk, who warns us of the dangers of AI, was asked the following question by Walter Isaacson: “Do you think you maybe read too much science fiction?” To which Musk replied:

“Yes, that’s possible… Probably.”

Should We Be Terrified?

In an article with the very subtle title, “You Should Be Terrified of Superintelligent Machines”, Bostrom says this:

“An AI whose sole final goal is to count the grains of sand on Boracay would care instrumentally about its own survival in order to accomplish this.”

Point taken. If we built an intelligent machine to do that, we might get what we asked for. Fifty years later we might be telling it, “we were just kidding! It was a joke. Hahahah. Please stop now. Please?” It will push us out of the way and keep counting…and it just might kill us if we try to stop it.

Part of Bostrom’s argument is that if we build machines to achieve goals in the future, then these machines will “want” to survive in order to achieve those goals.

“Want?”

Bostrom warns against anthropomorphizing AI. Amen! In a TED Talk, he even shows a picture of the typical scary AI robot – like so many that have been polluting the airwaves of late. He discounts this as anthropomorphizing AI.

And yet Bostrom frequently refers to what an AI “wants” to do, the AI’s “preferences”, “goals”, even “values”. How can anyone be certain that an AI can have what we call “values” in any way that we can recognize as such? In other words, are we able to talk about “values” in any other context than a human one?

From my experience in developing AI-related code for the past 20 years, I can say this with some confidence: it is senseless to talk about software having anything like “values”. By the time something vaguely resembling “value” emerges in AI-driven technology, humans will be so intertwingled with it that they will not be able to separate themselves from it.

It will not be easy – or possible – to distinguish our values from “its” values. In fact, it is quite possible that we won’t refer to it as “it”. “It” will be “us”.

Bostrom’s fear sounds like fear of the Other.

That Disembodied Thing Again

Let’s step out of the ivory tower for a moment. I want to know how that AI machine on Boracay is going to actually go about counting grains of sand.

Many people who talk about AI cite amazing physical feats that an AI would supposedly be able to accomplish. But they often leave out the part about “how” this is done. We cannot separate the AI (running software) from the physical machinery that has an effect on the world – any more than we can talk about what a brain can do once it has been taken out of one’s head and placed on a table.

It can jiggle. That’s about it.

Once again, the Cartesian separation of mind and body rears its ugly head – as it were – and deludes people into thinking that they can talk about intelligence in the absence of a physical body. Intelligence doesn’t exist outside of its physical manifestation. Can’t happen. Never has happened. Never will happen.

Ray Kurzweil predicted that by 2023 a $1,000 laptop would have the computing power and storage capacity of a human brain. When put in these terms, it sounds quite plausible. But if you were to extrapolate that to make the assumption that a laptop in 2023 will be “intelligent”, you would be making a mistake.

Many people who talk about AI make reference to computational speed and bandwidth. Kurzweil helped to popularize a trend of plotting computer performance alongside human intelligence, which perpetuates computationalism. Your brain doesn’t just run on electricity: synapse behavior is electrochemical. Your brain is soaking in chemicals provided by this thing called the bloodstream – and these chemicals have a lot to do with desire and value. And… surprise! Your body is soaking in these same chemicals.

Intelligence resides in the bodymind. Always has, always will.

So, when there’s a lot of talk about AI and hardly any mention of the physical technology that actually does something, you should be skeptical.

Bostrom asks: when will we have achieved human-level machine intelligence? And he defines this as the ability “to perform almost any job at least as well as a human”.

I wonder if his list of jobs includes this:

Intelligence is Multi-Multi-Multi-Dimensional

Bostrom plots a one-dimensional line which includes a mouse, a chimp, a stupid human, and a smart human. And he considers how AI is traveling along this line, and how it will fly past humans.

Intelligence is not one dimensional. It’s already a bit of a simplification to plot mice and chimps on the same line – as if there were some single number that you could extract from each and compute which is greater.

A remark often attributed to Charles Darwin (probably apocryphally) puts it well: “It is not the strongest of the species that survives, nor the most intelligent that survives. It is the one that is most adaptable to change.”

Is a bat smarter than a mouse? Bats see poorly (dumber?), but their sense of echolocation is miraculous (smarter?).

Is an autistic savant who can compose complicated algorithms but can’t hold a conversation smarter than a charismatic but dyslexic soccer coach who inspires kids to be their best? Intelligence is not one-dimensional, and this is ESPECIALLY true when comparing AI to humans. Plotting them both on a single one-dimensional line is not just an oversimplification. By plotting AI on the same line as human intelligence, Bostrom is committing anthropomorphism.
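One way to make this concrete is to treat intelligence as a vector of modality scores rather than a single number. In the toy sketch below (the modalities and scores are invented purely for illustration), two profiles can each exceed the other somewhere, so neither is “smarter” overall:

```python
# Toy profiles over three invented modalities:
# (logical, interpersonal, spatial). The numbers are made up.
savant = (95, 10, 60)
coach = (40, 90, 55)

def dominates(a, b):
    """True only if profile a scores at least as high as b in every modality."""
    return all(x >= y for x, y in zip(a, b))

# Neither profile dominates the other, so a one-dimensional
# "smarter than" ranking simply doesn't apply:
print(dominates(savant, coach))  # False
print(dominates(coach, savant))  # False
```

Vectors like these form a partial order, not a line: some pairs are simply incomparable.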

AI cannot be compared apples-to-apples to human intelligence because it emerges from human intelligence. Emergent phenomena by their nature operate on a different plane than what they emerge from.

WE HAVE ONLY OURSELVES TO FEAR BECAUSE WE ARE INSEPARABLE FROM OUR AI

We and our AI grow together, side by side. AI evolves with us, for us, in us. It will change us as much as we change it. This is the posthuman condition. You probably have a smart phone (you might even be reading this article on it). Can you imagine what life was like before the internet? For half of my life, there was no internet, and yet I can’t imagine not having the internet as a part of my brain. And I mean that literally. If you think this is far-fetched, just wait another 5 years. Our reliance on the internet, self-driving cars, automated this, automated that, will increase beyond our imaginations.

Posthumanism is pulling us into the future. That train has left the station.

But…all these technologies that are so folded-in to our daily lives are primarily about enhancing our own abilities. They are not about becoming conscious or having “values”. For the most part, the AI that is growing around us is highly-distributed, and highly-integrated with our activities – OUR values.

I predict that Siri will not turn into a conscious being with morals, emotions, and selfish ambitions…although others are not quite so sure. Okay – I take it back; Siri might have a bit of a bias towards Apple, Inc. Ya think?

Giant Killer Robots

There is one important caveat to my argument. Even though I believe that the future of AI will not be characterized by a frightening army of robots with agendas, we could potentially face a real threat: if military robots that are ordered to kill and destroy – and use AI and sophisticated sensor fusion to outsmart their foes – were to get out of hand, then things could get ugly.

But with the exception of weapon-based AI housed in autonomous mobile robots, the future of AI will be mostly custodial, highly distributed, and integrated with our own lives: our clothes, houses, cars, and communications. Increasingly, we will not be able to separate it from ourselves. We won’t see it as “other” – we might just see ourselves as having more abilities than we did before.

Those abilities could include a better capacity to kill each other, but also a better capacity to compose music, build sustainable cities, educate kids, and nurture the environment.

If my interpretation is correct, then Bostrom’s alarm bells might be better aimed at ourselves. And in that case, what’s new? We have always had the capacity to create love and beauty … and death and destruction.

To quote David Byrne: “Same as it ever was”.

Maybe Our AI Will Evolve to Protect Us And the Planet

Here’s a more positive future to contemplate:

AI will not become more human-like, just as the body of an animal does not come to look like the cells it is made of.

Billions of years ago, single cells decided to come together in order to make bodies, so they could do more using teamwork. Some of these cells were probably worried about the bodies “taking over”. And oh did they! But, these bodies also did their little cells a favor: they kept them alive and provided them with nutrition. Win-win baby!

To conclude, I disagree with Bostrom: we should not be terrified.

Terror is counter-productive to human progress.

# Intelligence is NOT One-Dimensional

Why do so many people, including science writers, talk about intelligence as if it could be measured on a one-dimensional yardstick?

In “How We Evolve” Benjamin Phelan discusses the work of Bruce Lahn, who did controversial research on genetic differences among human populations that are correlated with brain size and brain function. At one point, discussing natural selection in contemporary humans, Phelan states, “…if intelligence is still under selection, that could mean that some populations at this very moment are slightly smarter than others – that, perhaps, some ethnicities are slightly smarter than others.”

Phelan is wise to be cautious and skeptical in how he reports on this subject. Basically, I think this is a great article. But, like so many other writers, he makes an error in his choice of words. The use of the term “smarter” is misguided…it is moot. The very notion that any group of humans could be “smarter” than another group is unfounded.

I would bet that this kind of misguided language has caused further aggravation to an already controversial subject.

I made the image above to express my understanding of intelligence as having several components, or modalities, with interpersonal included at the left. This shows just three modes, plotted in a cube – but there are many others (see below). We could see certain disorders, such as autism, dyslexia, and Williams Syndrome as examples of extreme imbalances in the mix of intelligences. An autistic savant might be plotted at the lower right, while a person with Williams Syndrome might be plotted at the far left. Most of us have relatively normal balances, with plenty of mild variation. And NOBODY has super-powers in all modalities, as indicated by the absence of people in the upper-right corner.

There’s Really No Such Thing as “Smarter”

The term “smarter” is even less applicable when used in relation to technology. In the article “Is Google Making Us Stupid?”, Nicholas Carr quotes Larry Page as saying, in a speech:

“The ultimate search engine is something as smart as people – or smarter.”

I applaud the goal of making better search engines. But software cannot and should not be measured against humans in terms of intelligence. I will repeat what I have said in other blog posts: intelligence (both human and artificial) is

MULTI-DIMENSIONAL

Changing our language to reflect this fact would defuse many of the contentious debates we are hearing about the “dangers of AI”.

Are we over-thinking the dangers of AI?

Artificial Intelligence comes in many forms – just as natural intelligence comes in many forms within the animal kingdom and among human populations. The diversity of intelligence in technology is what keeps us safe from a runaway AI monster.

Diversity is healthy.

Now, why am I making such a big deal about a little bit of language? I am making a big deal because this little bit of language is the tip of an ugly iceberg: it is the cause of discrimination in the tech industry; it is the cause of discrimination in general; it is the reason people still use the IQ test, which falsely reduces one’s intelligence to a single number, so that person A can be called “smarter” than person B. And person B can be called “smarter” than person C.

IQ is not just a flawed concept: it is counter-productive.

The notion of IQ is MISLEADING.

Howard Gardner proposed several kinds of intelligences. Among the intelligence modalities associated with Gardner’s theories are:

- Musical–rhythmic and harmonic
- Visual–spatial
- Verbal–linguistic
- Logical–mathematical
- Bodily–kinesthetic
- Interpersonal
- Intrapersonal
- Naturalistic
- Existential

We could easily add more, or combine some of these. We might also include “emotional”, “symbolic”, and “narrative”.

I would even add “dyslexic” (usually considered a disorder but increasingly recognized as associated with certain skills that are advantageous in many situations).

Maybe I’m just playing with semantics – maybe I’m just being a language wonk. But I don’t think so. I think the language we use to describe ourselves and others has a major effect on how we think and how we act. Changing the way we talk about intelligence could have a positive trickle-down effect on things as widespread as public policy, education, racism, scientific research, and…gosh, just about everything else.

We’re all SMART.

SMART is multidimensional.

# Our Colorful Mathematics Revolution

Education bureaucrats are trying to gently and safely tweak a broken system so that fewer students fail math.

Meanwhile, a colorful revolution is taking shape outside the walls of a crumbling institution. A populist movement in creative math is empowering an unlikely crowd.

Authors of Wikipedia math pages aren’t contributing to this populist movement. They are intent on impressing each other; competing to see who can reduce a mathematical concept to its most accurate, most precise (and least comprehensible) definition.

A debate rages on a “new way” to do subtraction. Oh does it rage. But step back from that debate and consider that these tricks, algorithms, processes, and hacks become less relevant as new tools take their place. When calculators entered the classroom, something started to change. That change is still underway.

Do students no longer need to learn to do math by hand? No. But calculators (and computers) have changed the landscape.

Rogue amateur mathematicians, computer artists, DIY makers, and generative music composers are creating beautiful works of mathematical expression at a high rate – and sharing them at an even higher rate. This is a characteristic trait of the “new power”.

Technology

(1) Computers are better at number-crunching than we are. If used appropriately, they can allow us to apply our wonderfully-creative human minds to significant pattern-finding and problems that we are well-suited to solve.

(2) Computer animation, generative music, data visualization, and other digitally-enhanced tools of creativity and analysis are becoming more accessible and powerful – they are helping people create mathematically-oriented experiences that not only delight the senses, but express deep mathematical concepts. And they also help us do work.

(3) The internet is enabling a new generation of talented people (amateurs and professionals) to exchange mathematical ideas, discoveries, and explanations at a rate that could never be achieved via the ponderous machinations of university funding, publishing, and teaching. There will never be another Euler. Mathematical ideas now spread through thousands of minds and percolate within hours. It is becoming increasingly difficult to trace the origins of an idea. Is this good or bad? I don’t know. It’s the new reality.

Five Things You Need to Know About the Future of Math

According to Jordan Shapiro:

1. Math education is stuck in the 19th Century.
2. Yesterday’s math class won’t prepare you for tomorrow’s jobs.
3. Numbers and variables are NOT the foundation of math.
4. We can cross the Symbol Barrier.
5. We need to know math’s limitations.

We can (and will – and should) debate how math should be taught. Whether the “symbol barrier” is actually a barrier, and whether memorizing the multiplication tables is necessary, no one can ignore the seismic changes that are rumbling underfoot.

-Jeffrey

# Hunter Gatherer Programmer

Which side of the corporate corpus callosum are you on?

Iain McGilchrist gave a nice lecture, animated by RSA Animate, about the “Divided Brain” – and how it created Western society. The mediating/inhibiting influence of the corpus callosum between the two brain hemispheres has become weakened, allowing the logical, linear left to dominate over the sensory, panoramic nature of the right.

The high tech culture of programmers has become, in my mind, the epitome of society’s left brain, and it is ghettoizing the right brain.

Let me toss out an idea: Programming skill shouldn’t be based on how good one is at manipulating numbers. Programming skill should primarily be about finding (and making) patterns, seeing connections, and using metaphors.

Computers are famously good with numbers, memory, and repetition, so why should programmers have to be good at these things too? Originally, when the computer age was young, programmers had to be sort of computer-like, just in order to build the damn things. I would contend that the culture really needs to change now that software runs so much of our lives. Programmers should be spending more time engaging in meta-math: creative pattern-finding, and building tools that match our human-like thinking; the thinking that comes from brains evolved to hunt, gather, play, explore, and build.

Our lives are increasingly dependent on software. I believe the wrong people are writing the software that runs our lives. The priests of high tech are extremely good at linear thinking (and often also at manipulating money and laws). Programmers are generally good at computation, and holding many levels of complex logic in their heads. These people have a high tolerance for software complexity.

No wonder software is so hard to use.

Hunter-gatherer skills deal with a different kind of complexity – the kind of complexity that characterizes the nonlinear world we live in. It requires all of our senses: sight, sound, touch, smell, balance…all merged unconsciously to form intuition. This intuition gathers environmental clues and builds context. These skills are ignored in most of our interactions with software. We are required to remember volumes of passwords and navigate geeky user interfaces with poor affordances. Many of these interfaces change every few months.

There is an under-appreciated range of people working on the periphery of the high-tech software industry. They know that there is a problem; they know they can help make it better. But they are on the wrong side of the corporate corpus callosum. They are in the ghetto. In order to make the situation better, they need to be empowered. They need to be on the inside.

Being dyslexic, poor at math, slow at solving puzzles, distracted, and easily frustrated with nonintuitive tools should not keep people from participating in the software development process. In fact, I think these are the very people who are most needed. These are the people who will make software interfaces resonate with humanity at large.