This weekend has seen me happily engaged in the creation of a PechaKucha (my first!)
I’ve seen one “live” and watched one online… here’s the recipe I’ve used to work up a draft…
- Work out that I need a word count of about 800-1000 (MAX)
- Generate some ideas on a trusty mindmap
- Dictate my way around my map creating a script (Dragon NaturallySpeaking is grand for this)
- Sigh at having too much content
- Generate 20 slides and reorder ideas
- Use Audacity to generate 20-second chunks
- Add the exported MP3 to the timed PowerPoint
- Generate bullet points for my speaker notes
… now all I need to do is practise.
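For the curious, the word-budget arithmetic behind that recipe can be sketched in a few lines of Python (the 130 words-per-minute speaking pace is my assumption, not gospel):

```python
# PechaKucha format: 20 slides, 20 seconds of narration each.
slides = 20
seconds_per_slide = 20
total_seconds = slides * seconds_per_slide  # 400 seconds, about 6.7 minutes

# A comfortable scripted speaking pace is roughly 120-150 words per minute;
# 130 wpm is an assumed mid-range figure.
words_per_minute = 130
word_budget = total_seconds / 60 * words_per_minute

print(f"{total_seconds} s of narration, word budget ~{round(word_budget)}")
# 400 s of narration, word budget ~867
```

Which is how you land in the 800-1000 word range above.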
Which leads me to think about “equivalence” – methinks this task has to be equivalent (at least in time) to writing and referencing 1200 words. Goodness knows how you would mark it though!
Shirley Williams has been providing some excellent behind-the-scenes posts for #flmobigame, which I have been enjoying immensely. Musing on Shirley’s posts, I realise that I am now both an example of the degree-owning student AND the one who is now running out of sync.
OK, I am only one week behind. I had hoped to catch up quickly (I am for/do/while/until savvy) but had what could only be described as an unscheduled-learning-opportunity, posed by pasting the week 4 code into the wrong place. The compiler of course complained, but it took me the entire evening to work it out, after I had revisited the week 1 video AGAIN. The comments section showed one student with the exact same error, but sadly no explanation of the resolution.
I know from experience that stupid coding errors create chaos. In an office or face-to-face learning environment it’s easy to look pleadingly at a buddy at the next computer and ask for fresh eyes to point out your error. Asynchronously this is harder, particularly if you have doubts: “Is there someone helpful online when I am?” “How much info do they need to be able to help?” And perhaps more tellingly, “am I willing to put my struggle out there in the sure knowledge that the ‘Doh!’ moment will follow?” There’s a degree of confidence (robustness?) required, which is minimised by good and early examples of peer and professional support. It’s here that the open education/experienced online learners have a role to play alongside the course team in setting the tone.
I’ve been impressed both on this MOOC and on H817Open about how a small number of people can do just this. The Reading course team have been great, modelling how to get students to share code and contribute to bug finding. Indeed, I do believe I heard Karsten on the video referring to bug fixing as fun.
As someone who has dropped out of previous MOOCs, being behind now could put me at risk of being yet another non-completion statistic. But, I would like to catch up and complete it, not just because of some academic interest in all things MOOC, but because I have got a bit hooked:
- I am producing something (it won’t go on the fridge but I will inflict it on passing friends)
- the course isn’t at all earnest, but fun (let’s remember learning can be fun)
- it’s not taken over my life (cf MA-ODE)
- I get a (perhaps worrying) sense of achievement from answering the quizzes and ticking off my progress
- the skill (a bit of Java) was something I wanted to learn; this is infinitely better than a teach-yourself book and the “evented-ness” will keep me going.
Posted in MOOC
What a glorious idea FutureLearn – to construct courses on the basis of 4 hours’ study a week – I need something to do now that The Great British Bake Off has ended and I still have a slight post-H817 void. It’s been fun to start a gentle 7-week journey with the UniOfReading on FutureLearn – to grapple with a bit of Java.
The good
- Like the platform – nice and clean, not too much going on, works neatly on mobile or desktop
- Friendly approachable videos
- I did (sadly) enjoy sliding the sliders to check off when I had done a bit
- Lots of great peer support going on even on day one
The not so good
- (Personal preference warning:) I do hate trying to follow long video instructions! Video is such a good medium for an overview, but then I really want a static list of things to do. I was hopeful about the transcript… it got me a certain distance until “copy these files to here”… then it was back to pausing and restarting the video.
- Muddle: unsurprisingly I got into exactly the same sort of muddle I got into circa 1983. This is nothing to do with the course delivery – more my personal wiring which appears not to have improved significantly since I battled with Pascal on CP/M. A major difference this time though is that undoubtedly my muddle and thrice re-visitation of the video will be recorded for future analysis.
So that’s week 1 done in about the length of time it would take me to cook a carrot cake and clear up. Seems like I still have enough time to watch “Strictly: It Takes Two”.
(With apologies to Simon Nelson “Going into an online environment to learn is fun, social, an alternative to television“)
We had a great group on “Openness and Innovation in e-learning” and the bonus badge above was a genius idea birthed by David in a midnight Facebook support group. Although H817 is now past, I find myself still intrigued by OpenBadges – my reading has left me persuaded that OpenBadges will continue to gain both traction and meaning. What I am less sure about is their place in UK HE. I’m diving into #OpenBadgesMOOC to have a chance to think about it some more.
Before I get in too far here are a few things that are rattling around my head at the moment:
- The HEAR is relatively new – an institutionally produced record of modules, grades, assessments, plus non-credit involvements that the institution can validate. As I understand it non-credit involvements could include things like widening participation activities, being a course rep etc. You could argue that this record is part of the missing set of micro-credentials – so is there any sense in a direct mapping of badges to events already in the HEAR? Perhaps so if pick-n-mix curation of these entries is valuable in the future.
- Similarly, I am sceptical about too many badges being generated by Blackboard events, like doing or passing a test – surely this would be replicating HEAR entries, but at an even finer granularity?
- Mozilla’s position paper for OpenBadges uses the term “Conversation starters” and it is this aspect of OpenBadges that I find most intriguing. One of the things resonating from Alt-C this week is that many students working on university engagement projects need a bit of help identifying the skills they have displayed and developed in those settings. Maybe this is an area where badges representing skills and professional qualities would be pertinent? In my mind there’s a crossover with eportfolio use, especially where the eportfolio is being used to record and evidence graduate skills. Could the action of issuing badges create student-student and student-tutor conversations around skills, and in so doing help students better articulate these to future employers?
- Badges have the potential to point to evidence rather than claims. But to do this, evidence URLs need some permanence. For badges to have some currency post-graduation we can’t link them to ePortfolios that no longer exist or to the walled VLE.
- Can we plumb into externally badged standards (eg Mozilla’s Web Literacy) to provide recognition for some of the foundational digital literacy skills, or should we create our own? Is the “fun” profile of OpenBadges a better match for digital literacy skills that are often informally learned?
- Are badges appropriate in Staff Development? (Maybe the “challenge” approach of P2PU would work really nicely with some edtech eg: set up a PeerMark assignment in Blackboard en route to “VLE Master” badge)
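On the evidence point: under the hood an Open Badges assertion is just a small piece of JSON whose evidence field points at a URL. A minimal sketch as a Python dict (field names follow the Mozilla Open Badges 1.0 assertion spec as I understand it; all the names and URLs here are hypothetical):

```python
# A hypothetical Open Badges 1.0 assertion, built as a Python dict.
# The 'evidence' field is where a badge points to proof rather than a claim -
# if that URL dies along with a retired ePortfolio, the badge loses its currency.
assertion = {
    "uid": "2013-demo-0001",
    "recipient": {"type": "email", "hashed": False,
                  "identity": "student@example.ac.uk"},
    "badge": "https://example.ac.uk/badges/course-rep.json",  # the badge class
    "evidence": "https://example.ac.uk/portfolio/jo/course-rep-report",
    "issuedOn": 1380000000,  # Unix timestamp
    "verify": {"type": "hosted",
               "url": "https://example.ac.uk/assertions/2013-demo-0001.json"},
}
print(assertion["evidence"])
```

The permanence worry above is exactly that evidence URL: it lives outside the badge and outside the issuer’s control.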
So lots of possibilities and questions, and I’m looking forward to exploring this a bit more. As a backdrop to the start of the MOOC it was helpful to listen to colleagues from Sheffield talking positively about badge pilots in HE (Open Badges in Higher Education – Perception and Potential). From that session I am still chuckling about the Borders College students issuing badges to their lecturers. Much to learn…
This week for me has included some reading on badges (building up to an EMA), and I couldn’t resist penning a few informal words on the way.
Firstly, well done Mozilla on a brilliant job of “bootstrapping” OpenBadges. Secondly, huzzah to the OU for being innovative and giving me a chance to get my first few.
I appreciated the honesty of the OLDSMOOC folk “….none of us are very sure what the impact of using badges will be…but we have thought carefully about the approach we have used“. The post-course evaluation has some interesting comments on badges (from 17 respondents) some enjoying the fun, some valuing the award, others seemingly patronised by the scouting overtones. Cross (2013) tells us that the badge numbers represent 30-50% of the active course applicants.
What I’ve done in the graph above is to plot the effort/engagement badges awarded for OLDSMOOC (their blue badges) against those awarded on H817Open. My goal was to visualise the rate of decay for both. Of course these aren’t directly comparable (and you should never compare siblings):
- On H817Open the first two badges were effort (finish a task) and the last one was achievement (a task & the other two badges) – but they were evenly spread across the course.
- H817Open had a subset of students who had paid to do the free course (and had already carved out study time). I would have expected to see this group represented disproportionately in the badge roll call, but I don’t recognise that many H817 names. (There again there was the small distraction of a pressing 3,000 word essay.)
I can’t conclude that much here, but I was surprised to count up the H817Open badges and see that 57 got the first badge (for Activity 7) and nearly half of these made it through to the creative reflection at the end. In post essay fatigue I certainly wouldn’t have bothered with the last activity unless some reward was involved.
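For what it’s worth, the decay comparison I’m gesturing at is easy to compute once you have the badge counts. A sketch with made-up numbers – only the 57 first-badge figure and the “nearly half” come from the counts above; the middle value is a placeholder:

```python
# Badge award counts in course order. 57 and 27 echo the H817Open figures
# ("57 got the first badge... nearly half made it through to the end");
# 42 is a placeholder for the middle badge.
h817open = [57, 42, 27]

# Retention between consecutive badges: what fraction of badge n's holders
# went on to earn badge n+1. A shallower decay here than in overall course
# engagement would hint at badges acting as an extrinsic motivator.
retention = [later / earlier for earlier, later in zip(h817open, h817open[1:])]
print([round(r, 2) for r in retention])  # [0.74, 0.64]
```

Run the same calculation over the OLDSMOOC blue-badge counts and you have two decay curves to set side by side.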
If badges were an extrinsic motivator for a subset of students, I would expect the rate of decay for badge awards to be less than the overall engagement decay. Hmmm, I’d better stop now – this was just a minor bloggage to clear synapses for the EMA. TBC
Cross, Simon (2013). Evaluation of the OLDS MOOC curriculum design course: participant perspectives, expectations and experiences. OLDS MOOC Project, Milton Keynes.
The focus of REAP was to “re-engineer” assessment practices. While individual academics can bring about positive changes through adjustments to individual modules, it is clear that harnessing a broader set of principles (and indeed they are meant to be complementary) requires a more radical look at module and programme design. I’ve blogged earlier about participating in a “Train the Trainers” session using the Assessment and Feedback resources of the Viewpoints project. Just recently, though, I’ve enjoyed reading through the evaluation report written by the ‘father of REAP’, David Nicol, from which I have borrowed a few images and jotted a few more notes.
The Viewpoints approach is essentially as follows:
- A group (eg a module team) get together round a table and write out their design challenge on their A0 sheet placed horizontally on the table.
- They are given a pack of cards with summary principles on one side and examples on the reverse, and are encouraged to place cards on the timeline and move them around. Discussion ensues, eg “are we giving our students enough time to reflect on feedback?” when studying a card like “provide opportunities to act on feedback”
- As discussion continues the group move cards around the timeline – the focus at this stage is aspirational
- At a suitable point the facilitator encourages the group to turn over the cards, consider real examples, and begin to pin down what this may mean in practice for their situation
- The output of this session lends itself to being photographed, and this is then a “shared visual representation” – a shortcut to the discussions, evaluations and desired outcomes.
There are so many good things here.
- The process encourages course designers to take a learner focus
- Design groups can bring together the views of experienced and inexperienced staff and students too could meaningfully contribute
- During discussions the group engage in a deep way with the principles (it’s an authentic, social-constructivist setting for coming to joint understanding) or, as Nicol puts it, Viewpoints has heightened the usefulness of the principles by repackaging them as “tangible social objects”
- By placing the principles at the heart of dialogue and discussion they become ideas and concepts that can be considered and evaluated without threat or judgement. (Nicol is an advocate of casting educational principles as “rhetorical resources” which catalyse dialogue and can more easily diffuse through organisations.)
And… usefully for me as I consider TMA4, the evaluations show that a number of the principles had high currency: ‘clarify good performance’, ‘time on task’, ‘act on feedback’, ‘reflection and self-assessment’ and ‘motivational beliefs’.
[As an aside, I loved the fact that the project team started out to design an online resource, but ended up with a much more collaborative, timeline-based activity in which teams engaged with REAP principles with the aid of A0 sheets, cards and post-it notes.]
Nicol, D. (2012). Transformational Change in Teaching and Learning: Recasting the Educational Discourse. Evaluation of the Viewpoints Project.
JISC (2012). Design Studio: Assessment and Feedback Printable Cards.
It has been difficult to focus on studying when it has been so wonderfully warm here in Blighty. However, OpenMentor and our forum discussions have got me thinking.
While I see some benefits in the “let’s do feedback better” focus that tools like OpenMentor bring, I do have a sense of unease that a lot of effort is going into perfecting the mechanics and standardisation of feedback without an equal emphasis on the more difficult question of “how do we help and encourage students to do something with it?”. Crisp’s (2007) small-scale study makes for tutor-depressing reading. There’s little evidence that students make changes in their next submissions as a result of feedback. When Nicol (2010) looks at student dissatisfaction with feedback he says:
“When students complain that feedback comments do not meet their needs, this is as much a symptom of a failure of dialogue as it is a symptom of weaknesses in the quality of the comments” (italics mine!)
Of course those of you working with smaller groups or individuals may not have this “dialogue failure”. Nicol discusses the need for students to become “active constructors of feedback information”. The implication being that feedback needs to be not just received, but analysed, discussed, and connected with prior understanding. What’s really challenging is how to do this. Take this module for example. We’ve all agreed that feedback on assignments is good – and I know that many of my peers are much more grown-up, self-directed learners than I am, who do take feedback forward. But I’ve been wondering how deeper reflection on feedback could be encouraged and embedded.
Here are a few things that I, for example, would value:
- Generic tutor feedback sessions (eg. what made a good TMA2, what common strengths and weaknesses were there).
- Peer review – post-submission, swap assignments with one or two others for review and comments – then compare peers’ comments a few weeks later with tutor-marked copies.
- Have exemplars available before the assignment and critique the extent to which they met the assessment criteria.
Crisp, B. R. (2007). Is it worth the effort? How feedback influences students’ subsequent submission of assessable work. Assessment & Evaluation in Higher Education, 32(5), 571-581.
Nicol, D. (2010). From monologue to dialogue: improving written feedback processes in mass higher education. Assessment & Evaluation in Higher Education, 35(5), 501-517.
Posted in H817