Here’s the big problem with formal education. I currently *should* be polishing off 6,000 words as the last bit of hoop-jumping for OU H800, but I’ve gone and got myself interested in Twitter.
I’m freshly back from Alt-C, where there was much tweeting in evidence, none of which, I’m afraid to say, was from me. I have enough trouble taking notes in meetings and presentations without dividing my attention for any significant monitoring of the back channel. Being able to go a step further and live-tweet with the right hashtag deserves the same respect as getting round the Great North Run in under two hours.
I’ve been discovering that poor old Twitter search can’t fully digest and process what’s going on for much more than a week or so. So I’ve used The Archivist to dump the search results for #altc2012 to a file, to see what volume of tweets got the hashtag trending (as reported by @jamesclay and others).
Here’s what I see after an Excel import of the .txt file, a pivot table, and a bit of =FREQUENCY() [I know I must be doing this the long way round, but as I said I'm supposed to "be studying" rather than "learning", and maybe I'll visit the TAGS project another time]:
- Total number of tweets = 1499 (hmm, that sounds a bit too close to the 1500 limit <- methinks there must have been more!)
- Total number of unique tweeters = 321
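The pivot-table step above boils down to counting tweets per user. Here’s a minimal Python sketch of that counting, for anyone avoiding Excel; the sample rows and user names are made up for illustration (the real input would be parsed from The Archivist’s .txt dump).

```python
from collections import Counter

# Hypothetical sample of (user, tweet) rows standing in for the archive export
rows = [
    ("alice", "Great keynote #altc2012"),
    ("bob", "Wifi holding up so far #altc2012"),
    ("alice", "Session 2 starting #altc2012"),
    ("carol", "Hello from the back row #altc2012"),
]

# One Counter entry per user = the pivot table's "count of tweets" column
tweets_per_user = Counter(user for user, _ in rows)

total_tweets = sum(tweets_per_user.values())  # 1499 in the real dump
unique_tweeters = len(tweets_per_user)        # 321 in the real dump
```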
But the way the tweets are distributed across tweeters is far from uniform: it’s nowhere near 5 tweets per person. 180 people sent just one tweet, while one user managed a whopping 130. Here’s the frequency mapped to bins matching powers of 2.
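For the curious, the power-of-2 binning that =FREQUENCY() did can be sketched in a few lines of Python. The per-user counts below are invented stand-ins, not the real pivot-table output:

```python
from collections import Counter

# Hypothetical tweets-per-user counts; the real values come from the pivot table
counts_per_user = [1, 1, 1, 2, 3, 5, 9, 17, 130]

def power_of_two_bin(n):
    """Return the upper edge of n's bin, where bin edges are 1, 2, 4, 8, ..."""
    edge = 1
    while edge < n:
        edge *= 2
    return edge

# histogram[1] = people who tweeted exactly once, histogram[2] = twice,
# histogram[4] = 3-4 times, histogram[8] = 5-8 times, and so on
histogram = Counter(power_of_two_bin(n) for n in counts_per_user)
```

With the real data, histogram[1] would come out at 180, matching the single-tweet crowd above.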
So, even if I’ve not got all the tweets, it’s not particularly surprising that the tweet-power is coming from super-tweeters. Now what would be fun would be to look at visualisations of the connections from the top tweeters and see where that gets us. But then I’d be straying into NodeXL… and I’m still avoiding my text-based assignment.