The Metric is the Message

Yesterday’s post by Dan Meyer hit all the right buttons for a comment frenzy.

Even worse, at this moment in history, computers are not a natural working medium for mathematics.

For instance: think of a fraction in your head.

Say it out loud. That’s simple.

Write it on paper. Still simple.

Now communicate that fraction so a computer can understand and grade it. Click open the tools palette. Click the fraction button. Click in the numerator. Press the “4″ key. Click in the denominator. Press the “9″ key.

I’m sympathetic to anything that brings media-studies analysis to bear on our teaching techniques, but the mixed targets and straw-man examples confuse me.  The first example sounds like a struggle with typesetting mathematics in Word (or some other word processor).  Yet with software that actually understands fractions, I can get away with typing “4/9” and I’m done.  Simple, and at least as quick as pen and paper.

The key there is which software we’re talking about.  Some software is horrible at communicating mathematics.  This can even vary wildly between versions; Word 2007 is significantly more useless than Word 2010 when it comes to typing in math equations.*  On the other hand, something from Wolfram is going to get math because it was built with math as its focus.  If you give Wolfram Alpha “4/9”, it will not only make it look right, but it understands it as an exact fraction in lowest terms and will inform you that 0.444444444… is only an approximation.
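The same exactness shows up in programming languages themselves. As a small illustration (using Python’s standard fractions module – my own example, nothing Wolfram-specific), a rational type keeps 4/9 exact where floating point can only approximate it:

```python
from fractions import Fraction

# Exact rational arithmetic: 4/9 stays 4/9, never a rounded decimal.
x = Fraction(4, 9)
print(x)        # 4/9
print(x * 9)    # 4 -- exact, no rounding error

# Floating point can only approximate the same value.
print(4 / 9)    # 0.4444444444444444
```

The point isn’t the library; it’s that software built to represent a fraction as a fraction “gets” the math, while software that only sees a string of characters doesn’t.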

So maybe we can’t treat all software as equal when thinking about it as a medium?

On the other hand, there are things which even W/A will only understand if typed in a cryptic format, or with excessive parentheses to keep things unambiguous.  I don’t know if that’s really getting to the heart of what Dan’s complaint is about, though.
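To make the parentheses complaint concrete, here’s a hedged sketch (again in Python, my own example) of how linear, typed-in notation forces a choice that handwriting leaves to the reader:

```python
from fractions import Fraction

# Typed "4/9+2" is unambiguous to a parser: it means (4/9) + 2.
a = Fraction(4, 9) + 2      # 22/9

# But a student who wrote the 9+2 under the fraction bar meant 4/(9+2),
# which in linear form needs explicit parentheses.
b = Fraction(4, 9 + 2)      # 4/11

print(a, b)  # 22/9 4/11
```

The computer isn’t wrong in either case – but the student has to learn a second notation just to say what they already knew how to write.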

Do you want to know where this post became useless to Silicon Valley’s entrepreneurs, venture capitalists, and big thinkers? Right where I said, “Computers are not a natural working medium for mathematics.” They understand computers and they understand how to turn computers into money so they are understandably interested in problems whose solutions require computers.

Okay, so full disclosure: I have a Computer Engineering degree and I like doing recreational programming from time to time.  However, I’ve also tried to teach kids with little-to-no computing background how to do math in GeoGebra, and watched them flounder, and I get that there’s truth in here.

The thing that confuses me is that mathematics is the medium by which computers operate.  This doesn’t mean that the reverse is true, but it does mean that it’s weird to me to think that computers are a bad tool for doing math when that’s the very domain in which they live and breathe.  (Or one branch of mathematics, at least.)

So we’re closer to, but not quite at, Dan’s real point: computers are lousy at assessing mathematical ability.  His strongest examples, the ones carrying his message, are not really about computers as a communication medium but about computers as an assessment machine.  This is where it starts to actually make sense to me.  I don’t see any reason why a computer would be a poor medium for communicating mathematical reasoning to an actual human teacher – kids can type their math essays as well as they type their English ones.  Where this falls apart is when we use computers as more than a medium – as an analysis tool to do the assessing for us.

Assessment defines what students are told is valuable about a subject.  If all you assess is computation, students will get the message that computation is all that matters in mathematics.  And computers are only good at assessing what they’re already good at doing – computation.

If we tell students that math ability is only about doing what computers can already do better, we’re clearly not going to convince them of the importance of math.

So, I don’t think this discussion has ruled out computers as a medium to communicate mathematics.  But it definitely highlights that in education, just as in applied mathematics and engineering, we can’t expect a computer to do the thinking for us.


* Word 2010, in fact, does let me type “4/9” and typeset it properly – if I’ve made an Equation object, which is an extra click.


8 thoughts on “The Metric is the Message”

  1. Pingback: dy/dan » Blog Archive » What Silicon Valley Gets Wrong About Math Education Again And Again

  2. +1 josh g. for your followup post, it’s great. I like how you reframe the subject. Hardware and software combinations create different media, and misuse of them could be the problem.

    One thing you don’t talk about in your reply post, though, is the people/money aspect.

    “Do you want to know where this post became useless to Silicon Valley’s entrepreneurs, venture capitalists, and big thinkers?”

    Dan is correctly implying that some people just want to turn math into money (to the detriment of teachers and students), and that’s where the real fighting will be. Dan was dismissing current computers, maybe with the goal of dismissing bad policy.

    Carl from BuzzMath

  3. The ‘new style’ equation editing is in Word 2007, and there’s no need to click anything. Type [Alt]+= then just type your expression. So

    [Alt]+= 4/9 [space]

    will give you a nicely formatted fraction.

    It’s Word 2003 that has the clunky Equation Editor object that I always hated using.

    • Hrm, okay. I had trouble this year where equations made in PPT2010 wouldn’t display properly in PPT2007 … at least I think it was 2007, maybe it was actually 2003, so whatever.

      Also, thanks, I hadn’t actually found that keyboard shortcut yet!

  4. Your distinction between communicating and assessing is worthwhile, but I don’t think it’s quite that clear. The computer can assess mathematics just fine, provided that you’re communicating in a dialect it can respond to. The tile pattern essay is a great example, in that you could change the question to “recreate in LOGO/Scratch” and have a learning artifact that shows similar (and I’d argue equivalent) levels of mathematical understanding.

    That’s not a computation problem. Even though the computer is performing computations, the intellectual challenge is full of those other math verbs – inspect, analyze, reduce, abstract, model, describe.

    Computers are bad at ASSESSING human-to-human mathematical COMMUNICATION.

    I’m not trying to (completely) handwave away the whole other raft of skills that’s implied by this comparison, or suggest that most individuals could somehow achieve fluency in mathematics or “computational thinking” without passing through a whole lot of natural language teaching and learning. But one strength of computing is that it’s a discipline where communicating and assessing are often synonymous.

    • I totally don’t buy that a computer could do even a mediocre job of assessing how well a student recreated that tiled pattern in LOGO or Scratch. I’m sure it would be able to tell if you got it *exactly* right, but that’s easy. I would be surprised if it could do a reasonable (read as: pedagogically sound) job of telling the difference between “basically got it right except for one small mistake made throughout”, or “squished the shape a bit but still pulled off the most difficult part of using function calls to create the repeated pattern”, or “had no clue and flailed wildly”.

      This isn’t to say it isn’t possible to write some kind of evaluating algorithm that assigns part marks based on … something. I just doubt it’s going to be as meaningful as what happens when a knowledgeable teacher takes the time to make that decision. (Not because it’s technically impossible, but because the cost-benefit analysis means basically no one’s going to do it. It’s a Hard Problem and therefore expensive to solve)
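To make that concrete, here’s a hedged sketch (hypothetical helper names – not any real Scratch or LOGO API) of the easy part: an exact-match grader that compares the line segments a student’s program drew against a reference drawing. It can say right or wrong, but nothing in it distinguishes “one small mistake made throughout” from “flailed wildly”:

```python
# Hypothetical exact-match grader. A "drawing" is just a list of line
# segments ((x1, y1), (x2, y2)) produced by a turtle-style program.

def grade_exact(student_segments, reference_segments, tol=1e-6):
    """Return True only if the drawings match segment-for-segment.
    All-or-nothing: no partial credit of any kind."""
    if len(student_segments) != len(reference_segments):
        return False
    for s, r in zip(sorted(student_segments), sorted(reference_segments)):
        for (sx, sy), (rx, ry) in zip(s, r):
            if abs(sx - rx) > tol or abs(sy - ry) > tol:
                return False
    return True

reference = [((0, 0), (1, 0)), ((1, 0), (1, 1))]
perfect   = [((1, 0), (1, 1)), ((0, 0), (1, 0))]    # same shape, drawn in a different order
squished  = [((0, 0), (1, 0)), ((1, 0), (1, 0.9))]  # "almost" right

print(grade_exact(perfect, reference))   # True
print(grade_exact(squished, reference))  # False -- same verdict as random scribbling
```

The squished drawing and a blank screen get the identical grade, which is exactly the pedagogical problem: the interesting distinctions all live in the gap this function can’t see.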

      • Thanks for clarifying that distinction. I wasn’t considering a system where the computer was in any way involved with producing fine-grained distinctions of “almost” right. Rather, I think that an exercise which asks the student to recreate that tile diagram will produce an artifact that would reveal, to a human observer, as much insight and nuance about the student’s mathematical understanding as a written response to the Common Core question.

        Although it wasn’t what I had in mind, I’m sure there’s a fair amount of mechanically observable information in student responses. Symmetry along various axes. Draw order and construction pattern. But since I’m not a Valley VC funder, those are really a low-order concern. Instead, I’m excited by the fact that a Scratch/LOGO solution would be testable and observable by the student themselves. My experience is that this feedback loop is far richer than what most students experience with “explain it” math problems of similar complexity.

        But, yes, I don’t see any fully automated solution that can address the richness of meaning in student responses. My best hope is that easily shareable electronic responses, in whatever form, would give rise to a StackExchange level of distributed human response.

      • I think we’re on the same page.
