If we only measure the obvious we fail to capture value…
I’ve been reflecting on this while casting an eye over the graphs and stats for our nclmoocs, and considering how my own activity would be measured. On that basis, this last year would define me as a MOOC dropout. Those analysing my activity would see: three weeks of good scores on Data to Insights, then nothing; sporadic access to Udacity’s Data Analysis with R, engagement with the tests but no predictable sign of ever reaching the end; and of course, because I haven’t reached the end, I haven’t filled out any kind of post-course survey.
The data defines me as a dropout who has lost interest, or time, or both. The reality is hugely different: I’ve found both of these courses to be extremely well thought out, hugely interesting and very useful. Indeed, part of the reason for my slow progress is the way they have stimulated me to think differently and come up with ideas for meaningful representations. Rather than moving on to the next exercise, I have been using what I have learned for real. In this respect my engagement (learning) should surely be in the “success” box.
But there is absolutely no way for the course designers to capture this, especially as my reflections come months after any formal evaluation would be penned. And here is the rub: these courses have a set of learning outcomes and a curriculum (whether explicit or implicit), but it is really hard to find out what my learning goals were and how they changed. Viewing my apparent disengagement as failure completely misses the value that I would place on these courses.
Hurrah for MOOCs: I love learning!