The Three Dimensions of Math Edtech
Reflections on four math education conferences in four weeks.
I have spent the last month traveling between several different math education conferences—the national conference in DC, a state conference in Montana, and a regional conference in Southern California. At each conference, I spent a bunch of time in the exhibit halls where vendors sell print-based math curriculum, physical manipulatives, financial services for teachers, shirts with mathy puns, and, of course, math education technology.
To an outside observer, someone new to math, education, or technology, all of the math edtech products probably look the same. They're all trying to help kids learn math with technology, after all! To the trained eye, however, they differ in three key ways, which I'll outline below.
Full disclosure: my own company, Amplify, was a vendor at two of these conferences and has multiple math edtech products that would land at different points along each dimension. I'm trying to be descriptive here rather than (per usual) judgy, trying to help a lost tourist find their restaurant rather than directing them to my favorite cafe.
Mathematical Creativity
On average, how many different representations of mathematical thought does the product invite in each lesson or unit? Some math edtech products lean very heavily on numerical response and multiple choice items. Does the product invite other representations like voice recordings, sketches, graphs, written responses, card sorts, constructive geometry, etc.?
Data Resolution
Is student data available to teachers at high resolution? Low resolution? Let's say the highest resolution—the clearest expression of a single student's thought—is found via an interview with the student or through close examination of their work on a piece of paper. At the other end of the spectrum, you have low resolution displays like the amount of time spent by an entire school system on a particular program or the number of students scoring at proficient or above on a particular standard. Each resolution invites very different kinds of action on behalf of a student.
Enriched Feedback
Whenever I see some new math edtech, I perform the same check: I get a question wrong. I try to get it wrong in a way that is thoughtful, misapplying some formula or overgeneralizing an idea in ways that are common among kids. My question, then, is: how many bits of information does the software give me about my answer? Is it one bit of information—right or wrong? Or does it offer me multiple bits of information, perhaps written or visual feedback about my answer?
There are other useful dimensions, like the amount of social contact between students that the technology facilitates, or how tightly the technology integrates caregivers into student learning. But if you plot every math edtech product in every vendor hall at every conference I have attended this fall along those first three axes, you won't find any two products in the same location. Hope this helps you make your way!
Dan,
First, I have been following your work for years and have been inspired by it, both in my role as teacher and teacher educator.
Second, I really appreciate the distinction you have drawn here (and elsewhere) about low and high resolution data.
Thanks so much m
Thank you Dan! Love your take on enriched feedback. I tested out resources like Khan SAT Practice by developing a realistic persona based on my students and staying in character while taking an entire test. (I had to take the test on Saturday with kids for a work assignment, not fun.) It's a great way to evaluate the feedback delivered by the product, and helped me train teachers to interpret that feedback and act on it constructively with students. Hope to see you keynote at a conference again soon.