Testing Times

On to the use of MCQs formatively. I enjoyed reading Ekins's (2007) discussion of OpenMark and took up the encouragement to have a go… happily misreading the scales on graphs in their test questions in "How's your mathematics?".

Lots of good things here:

  • targeted feedback – different hints depending on the nature of your incorrect answer;
  • fewer marks awarded for second attempts;
  • targeted resources at the end of the test to help students follow up areas of weakness;
  • videoing current students "thinking aloud" as they do the questions as a way of improving quality;
  • using question statistics to improve test quality.

MCQs are common fare in VLEs. Blackboard allows me to give feedback at question level, permit multiple attempts, and, following our summer upgrades, will give tutors a neat "item analysis" of each question.
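For readers curious what an "item analysis" typically boils down to, here is a minimal sketch of the two classic statistics: facility (the proportion of students answering correctly) and a simple discrimination index comparing the top and bottom thirds of the cohort. This is an illustration of the general technique, not Blackboard's actual report or calculation.

```python
from typing import List

def item_analysis(scores: List[List[int]]) -> List[dict]:
    """scores[s][q] is 1 if student s answered question q correctly, else 0."""
    n_students = len(scores)
    n_questions = len(scores[0])
    totals = [sum(row) for row in scores]

    # Rank students by total score and split into top/bottom thirds.
    ranked = sorted(range(n_students), key=lambda s: totals[s], reverse=True)
    third = max(1, n_students // 3)
    top, bottom = ranked[:third], ranked[-third:]

    report = []
    for q in range(n_questions):
        facility = sum(scores[s][q] for s in range(n_students)) / n_students
        discrimination = (sum(scores[s][q] for s in top) -
                          sum(scores[s][q] for s in bottom)) / third
        report.append({"question": q + 1,
                       "facility": round(facility, 2),
                       "discrimination": round(discrimination, 2)})
    return report

# Example: six students, three questions.
responses = [[1, 1, 0], [1, 0, 0], [1, 1, 1], [0, 0, 0], [1, 1, 1], [1, 0, 1]]
for row in item_analysis(responses):
    print(row)
```

A question that nearly everyone gets right (facility close to 1) or that strong and weak students get right equally often (discrimination close to 0) is a candidate for rewriting.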

What we struggle to do so readily is provide feedback to individual students based on their performance on groups of themed questions (e.g. "on q15-20 [molarities] you scored less than average… please review Resources XX and YY"). A sketch of the idea follows.
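Something like the following would do it: group question marks by theme and point any student scoring below the cohort average on a theme to a follow-up resource. The theme mapping and resource name here are invented placeholders, not real course materials or any VLE's actual API.

```python
from statistics import mean

# Hypothetical mapping of question numbers to a theme and a follow-up resource.
THEMES = {"molarities": {"questions": range(15, 21), "resource": "Resource XX"}}

def themed_feedback(student_scores: dict, cohort_scores: list) -> list:
    """student_scores maps question number -> mark; cohort_scores is a list of
    such dicts, one per student in the class."""
    messages = []
    for theme, info in THEMES.items():
        qs = list(info["questions"])
        student_total = sum(student_scores.get(q, 0) for q in qs)
        cohort_avg = mean(sum(s.get(q, 0) for q in qs) for s in cohort_scores)
        if student_total < cohort_avg:
            messages.append(
                f"On questions {qs[0]}-{qs[-1]} [{theme}] you scored "
                f"{student_total} against a class average of {cohort_avg:.1f}; "
                f"please review {info['resource']}.")
    return messages

# Tiny example cohort of two students.
cohort = [{15: 1, 16: 1, 17: 0, 18: 1, 19: 1, 20: 1},
          {15: 0, 16: 0, 17: 1, 18: 0, 19: 1, 20: 0}]
print(themed_feedback(cohort[1], cohort))
```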

I also had a peek at Anderson (2009), who used online quizzes in a Business Finance course. Mastery of these quizzes was a strong predictor of performance in later summative tests – a lever that Anderson then used to encourage new students to engage. A further benefit arose from his ability to monitor areas of weakness and target them in face-to-face sessions. He quotes one of his students:

“Hamish pitched the sessions at exactly the right level for the class – he seemed to know which areas and formulas would cause the most concern.”

Both of these authors point out the up-front investment required to author these useful student resources. The challenge remains how to factor this investment into workload models that favour contact time. (Perhaps one answer is to make more use of student-generated question banks, à la StudyMate/Peerwise.)

Finally, on my e-assessment rounds, I had a read of what our own School of Maths and Statistics have been up to with Numbas, their own in-house (and open-source) testing engine. Their Computer Based Assessments (CBAs) are designed so that the questions are variable (each attempt uses different numbers and parameters), meaning a test can be run for one week in "open" mode and then for one week in "test" mode. Students therefore have many opportunities to visit the problems and their model solutions before doing the test for real. One criticism of CBAs is that they don't give partial credit for a "correct method"; Numbas provides functionality to insert partial answers. Parker (2013) admits that these CBAs can't easily assess deeper learning, but he does point out the real value they have for learning, helping students to practise standard techniques. He even hints that maybe they should not be used summatively at all!
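To make the "variable questions plus partial credit" idea concrete, here is an illustrative sketch in the same spirit, not Numbas itself or its actual question format: a question whose numbers are regenerated from a seed on each attempt, and a marking function that awards half credit for a correct intermediate step.

```python
import random

def make_question(seed: int) -> dict:
    """Generate one randomised variant of a simple linear-equation question."""
    rng = random.Random(seed)
    a, b = rng.randint(2, 9), rng.randint(1, 20)
    x = rng.randint(1, 12)
    return {"prompt": f"Solve {a}x + {b} = {a * x + b}",
            "intermediate": a * x,   # the value of ax after rearranging
            "answer": x}

def mark(question: dict, intermediate: float, final: float) -> float:
    """Award 0.5 for the correct rearrangement step, 0.5 for the final answer."""
    score = 0.0
    if intermediate == question["intermediate"]:
        score += 0.5
    if final == question["answer"]:
        score += 0.5
    return score

# "Open" week: students can regenerate as many practice variants as they like.
practice = make_question(seed=random.randrange(10**6))
# "Test" week: a fixed seed per student makes the attempt reproducible.
test = make_question(seed=12345)
print(test["prompt"], mark(test, test["intermediate"], test["answer"]))
```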

Ekins, J. (2007) 'The use of interactive on-line formative quizzes in Mathematics', paper presented at the 11th International Computer Assisted Assessment Conference, 10–11 July 2007, pp. 163–75.

Anderson, H. (2009) 'Formative Assessment: Evaluating the Effectiveness of On-line Quizzes in a Core Business Finance Course', Massey University College of Business Research Paper, no. 2.

Parker, N. (2013) 'An Analysis of Computer-Based Assessment in the School of Mathematics and Statistics', Numbas Blog.
