Math Word Problems are Problematic

Mark Twain is often credited with saying: “Never let schooling interfere with your education.”

Here’s a math riddle:

“Peter has 21 fewer marbles than Nancy. If Peter has 43 marbles, how many marbles does Nancy have?”

The first sentence requires me to do some linguistic fiddling. It implies that both Peter and Nancy possess marbles – but this is never stated directly. The second sentence begins with “If”, which postpones the actual question until the very end. Let’s rephrase the riddle:

“Peter and Nancy each have a bag of marbles. Peter has 43 marbles in his bag. Peter has 21 fewer marbles than Nancy has. How many marbles does Nancy have in her bag?”


This might make the riddle easier to solve. Or it might not. Either way, I can say for sure that all this wordy bullshit is irrelevant to the actual math.
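To see just how little math is hiding under the wordplay, strip the story away and the whole riddle is one line of algebra:

$$P = N - 21, \quad P = 43 \;\Rightarrow\; N = 43 + 21 = 64$$

Notice the trap: the word “fewer” nudges a young reader toward computing 43 − 21 = 22, when the answer actually requires addition. That trap is linguistic, not mathematical.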

Math, like Music, is a Universal Language

Now consider what it would be like if you were naturally talented in math, and you were faced with a math riddle expressed in English…but English were not your first language. You may have to spend more time on the question, and you may make some critical mistakes. The subtleties of one language may not translate to another language, causing you to trip up.

We are playing with words here. Now, playing with words is fine; it’s part of how we learn to speak, listen, read, and write. In fact, playing with words that have mathematical content is a good exercise. But this should not come into play for testing students on math skills. The problem (as always) is in the testing.

Here’s another one:

“Sue has two pencils. She spends one hour at the store and buys three more pencils. How many pencils does Sue have in all?”


WTF does “spends one hour at the store” mean? Is this just narrative fluff, or is there some clever hint in there?

If I had been presented with this problem as a young student, I would have spent some time mulling over “spends one hour at the store”. However, this is irrelevant and unrelated to the answer.
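Stripped of the shopping trip, the arithmetic is a single addition – the hour never enters into it at all:

$$2 + 3 = 5$$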

How to Obfuscate Mathematical Thinking With Clever Language

For dyslexic students, students who learn through action (kinesthetic learners), students who are visual thinkers, and students who learn best by building things, this wordsmithing can be a recipe for failure.

In the real world of adults getting things done and making a living, math is rarely experienced in the form of clever riddles. Math – at its best – is manifested deep within the texture of our daily actions.

Here’s another one:

“You have 24 cookies and want to share them equally with 6 people. How many cookies would each person get?”

Let’s think about this. I “have” 24 cookies. (That’s a lot of cookies – why would I have so many cookies?) I “want” to share them with 6 people. Okay. I have a desire to share cookies. So far so good. I’m a generous guy! But then the second sentence appears unrelated: “How many cookies would each person get?” Wait a minute: am I about to give these cookies to these people? And what exactly does “equally” mean?
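And that ambiguity is real, not just pedantic. If “share with 6 people” means six recipients besides me, versus six sharers including me, the two readings give different answers – and the second isn’t even a whole number (the intended answer is presumably 4):

$$24 \div 6 = 4 \qquad \text{vs.} \qquad 24 \div 7 \approx 3.43$$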

I know it may seem trivial for me to analyze these details. As an adult, I know what this sentence means. But as a young student, I may not have had the vocabulary or grammatical wherewithal to jump straight to an answer. Also, as a “narrative learner”, I would have really wanted to make sure I understood the characters involved, their motivations, and so on. I can imagine getting easily swept up in the storyline (simple as it is).

In short, by working out the characters of this story and their motivations, I may not actually be doing math: I might be engaging in language craft and storytelling. Which is great! But this should not interfere with my being tested on my innate math skills.

Here’s another:

“Kennedy had 10 apples. She gave some to John. Now she has 2 apples left. How many apples did she give to John?”

The tense of this little story jumps back and forth between past and present. At age 55, I am now quite facile with language, but when I was 10, I would have had to put some effort into parsing these shifts in tense.
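Underneath the tense-hopping, it’s one subtraction – a classic “missing addend” problem:

$$10 - x = 2 \;\Rightarrow\; x = 10 - 2 = 8$$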


In fact, my language skills were quite poor when I was 10, and this had an impact on all my school subjects (not just math). Later in life, after I had escaped school and actually started to gain some relevant skills, MIT offered me an opportunity to earn a Master’s degree. They did not ask me any math riddles. MIT knows better than that.

Language

One might argue that language skills are fundamental and important for learning most anything. That’s accurate. Reading, writing, speaking, and listening are fundamentally useful, and the better you are at language, the better you are likely to become at most other skills.

If this is the case, we might conclude that mixing grammatical sentence structure with mathematical logic is a valuable skill.

Indeed.

But school curriculum designers should not confuse the ability to parse a cleverly-crafted sentence with one’s innate mathematical abilities.

The problem, as always, is with TESTING.

I’ll close with this:

An Open Letter to the Education System: Please Stop Destroying Students

Why Nick Bostrom is Wrong About the Dangers of Artificial Intelligence

Nick Bostrom is a philosopher known for his work on the future dangers of AI. Many other notable people, including Stephen Hawking, Elon Musk, and Bill Gates, have commented on the existential threats posed by a future AI. This is an important subject to discuss, but I believe many careless assumptions are being made about what AI actually is, and what it will become.

Yeah, yeah – there’s Terminator, Her, Ex Machina, and so many other science fiction films that touch upon deep and relevant themes about our relationship with autonomous technology. Good stuff to think about (and entertaining). But AI is much more boring than what we see in the movies. AI can be found distributed in little bits and pieces in cars, mobile phones, social media sites, hospitals…just about anywhere that software can run and where people need some help making decisions or getting new ideas.

John McCarthy, who coined the term “Artificial Intelligence” in 1956, said something that is totally relevant today: “As soon as it works, no one calls it AI anymore.” Given how poorly defined AI is, and how easily its definition seems to morph, it is curious how excited some people get about its existential dangers. Perhaps these people are afraid of AI precisely because they do not know what it is.

Elon Musk, who warns us of the dangers of AI, was asked the following question by Walter Isaacson: “Do you think you maybe read too much science fiction?” To which Musk replied:

“Yes, that’s possible.” … “Probably.”

Should We Be Terrified?

In an article with the very subtle title, “You Should Be Terrified of Superintelligent Machines”, Bostrom says this:

“An AI whose sole final goal is to count the grains of sand on Boracay would care instrumentally about its own survival in order to accomplish this.”

Point taken. If we built an intelligent machine to do that, we might get what we asked for. Fifty years later we might be telling it, “We were just kidding! It was a joke. Hahahah. Please stop now. Please?” It will push us out of the way and keep counting…and it just might kill us if we try to stop it.

Part of Bostrom’s argument is that if we build machines to achieve goals in the future, then these machines will “want” to survive in order to achieve those goals.

“Want?”

Bostrom warns against anthropomorphizing AI. Amen! In a TED Talk, he even shows a picture of the typical scary AI robot – like so many that have been polluting the airwaves of late – and dismisses it as anthropomorphism.


And yet Bostrom frequently refers to what an AI “wants” to do, the AI’s “preferences”, “goals”, even “values”. How can anyone be certain that an AI can have what we call “values” in any way that we can recognize as such? In other words, are we able to talk about “values” in any other context than a human one?

From my experience in developing AI-related code for the past 20 years, I can say this with some confidence: it is senseless to talk about software having anything like “values”. By the time something vaguely resembling “value” emerges in AI-driven technology, humans will be so intertwingled with it that they will not be able to separate themselves from it.

It will not be easy – or even possible – to distinguish our values from “its” values. In fact, it is quite possible that we won’t refer to it as “it”. “It” will be “us”.

Bostrom’s fear sounds like fear of the Other.

That Disembodied Thing Again

Let’s step out of the ivory tower for a moment. I want to know how that AI machine on Boracay is going to actually go about counting grains of sand.

Many people who talk about AI refer to amazing physical feats that an AI would supposedly be able to accomplish. But they often leave out the part about how this is done. We cannot separate the AI (running software) from the physical machinery that has an effect on the world – any more than we can talk about what a brain can do once it has been taken out of one’s head and placed on a table.


It can jiggle. That’s about it.

Once again, the Cartesian separation of mind and body rears its ugly head – as it were – and deludes people into thinking that they can talk about intelligence in the absence of a physical body. Intelligence doesn’t exist outside of its physical manifestation. Can’t happen. Never has happened. Never will happen.

Ray Kurzweil predicted that by 2023 a $1,000 laptop would have the computing power and storage capacity of a human brain. When put in these terms, it sounds quite plausible. But if you were to extrapolate that to make the assumption that a laptop in 2023 will be “intelligent” you would be making a mistake.
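For what it’s worth, here is roughly the back-of-envelope arithmetic behind such predictions. The figures are Kurzweil’s commonly cited estimates – treat them as assumptions, since estimates of the brain’s “computing power” vary by orders of magnitude:

$$\underbrace{10^{11}}_{\text{neurons}} \times \underbrace{10^{3}}_{\text{connections each}} \times \underbrace{200}_{\text{calc/sec}} \approx 2 \times 10^{16} \ \text{calculations per second}$$

Matching that throughput is a hardware milestone; calling the machine that reaches it “intelligent” is an entirely different claim.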

Many people who talk about AI make reference to computational speed and bandwidth. Kurzweil helped popularize the trend of plotting computer performance alongside human intelligence, which perpetuates computationalism. Your brain doesn’t just run on electricity: synapse behavior is electrochemical. Your brain is soaking in chemicals provided by this thing called the bloodstream – and these chemicals have a lot to do with desire and value. And… surprise! Your body is soaking in those same chemicals.

Intelligence resides in the bodymind. Always has, always will.

So, when there’s a lot of talk about AI and hardly any mention of the physical technology that actually does something, you should be skeptical.

Bostrom asks: when will we have achieved human-level machine intelligence? And he defines this as the ability “to perform almost any job at least as well as a human”.

I wonder what jobs his list includes.


Intelligence is Multi-Multi-Multi-Dimensional

Bostrom plots a one-dimensional line which includes a mouse, a chimp, a stupid human, and a smart human. And he considers how AI is traveling along this line, and how it will fly past humans.


Intelligence is not one dimensional. It’s already a bit of a simplification to plot mice and chimps on the same line – as if there were some single number that you could extract from each and compute which is greater.

Charles Darwin is often credited with saying: “It is not the strongest of the species that survives, nor the most intelligent that survives. It is the one that is most adaptable to change.”

Is a bat smarter than a mouse? Bats see poorly (dumber?), but their sense of echolocation is miraculous (smarter?).


Is an autistic savant who can compose complicated algorithms but can’t hold a conversation smarter than a charismatic but dyslexic soccer coach who inspires kids to be their best? Intelligence is not one-dimensional, and this is ESPECIALLY true when comparing AI to humans. Plotting them both on a single one-dimensional line is not just an oversimplification: by putting AI on the same line as human intelligence, Bostrom is committing exactly the anthropomorphism he warns against.
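Here’s a toy way to see the problem, with made-up numbers purely for illustration. Score each animal on two axes, say vision and echolocation:

$$\text{mouse} = (5,\ 0), \qquad \text{bat} = (1,\ 9)$$

The mouse is higher on the first axis and the bat on the second, so neither dominates. There is no ranking until you pick a weighting, $s = w_1 v + w_2 e$ – and different weightings flip the order. A multi-dimensional quantity has only a partial order; collapsing it onto one line is a choice, not a fact.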

AI cannot be compared apples-to-apples to human intelligence because it emerges from human intelligence. Emergent phenomena by their nature operate on a different plane than what they emerge from.

We Have Only Ourselves to Fear, Because We Are Inseparable From Our AI

We and our AI grow together, side by side. AI evolves with us, for us, in us. It will change us as much as we change it. This is the posthuman condition. You probably have a smartphone (you might even be reading this article on it). Can you imagine what life was like before the internet? For half of my life there was no internet, and yet I can’t imagine not having it as a part of my brain. And I mean that literally. If you think this is far-fetched, just wait another 5 years. Our reliance on the internet, self-driving cars, automated this, automated that, will increase beyond our imaginations.

Posthumanism is pulling us into the future. That train has left the station.

But…all these technologies that are so folded into our daily lives are primarily about enhancing our own abilities. They are not about becoming conscious or having “values”. For the most part, the AI that is growing around us is highly distributed and highly integrated with our activities – OUR values.

I predict that Siri will not turn into a conscious being with morals, emotions, and selfish ambitions…although others are not quite so sure. Okay – I take it back; Siri might have a bit of a bias towards Apple, Inc. Ya think?

Giant Killer Robots

There is one important caveat to my argument. Even though I believe the future of AI will not be characterized by a frightening army of robots with agendas, we could face a real threat: if military robots that are ordered to kill and destroy – and that use AI and sophisticated sensor fusion to outsmart their foes – were to get out of hand, things could get ugly.

But with the exception of weapon-based AI housed in autonomous mobile robots, the future of AI will be mostly custodial: highly distributed and integrated with our own lives – our clothes, houses, cars, and communications. Over time, we will be less and less able to separate it from ourselves. We won’t see it as “other”; we might just see ourselves as having more abilities than we had before.

Those abilities could include a better capacity to kill each other, but also a better capacity to compose music, build sustainable cities, educate kids, and nurture the environment.

If my interpretation is correct, then Bostrom’s alarm bells might be better aimed at ourselves. And in that case, what’s new? We have always had the capacity to create love and beauty … and death and destruction.

To quote David Byrne: “Same as it ever was”.

Maybe Our AI Will Evolve to Protect Us And the Planet

Here’s a more positive future to contemplate:

AI will not become more human-like – just as the body of an animal does not come to look like the cells it is made of.

Billions of years ago, single cells decided to come together in order to make bodies, so they could do more using teamwork. Some of these cells were probably worried about the bodies “taking over”. And oh, did they! But these bodies also did their little cells a favor: they kept them alive and provided them with nutrition. Win-win, baby!

To conclude, I disagree with Bostrom: we should not be terrified.

Terror is counter-productive to human progress.