Slaughterhouse-Five

Slaughterhouse-Five by Kurt Vonnegut

My rating: 3 of 5 stars

This is a difficult story to decipher, but I think that was thematically intentional. Slaughterhouse-Five is a novel that purports to be about the fire bombing of Dresden in the Second World War, but very rarely addresses the issue directly. It skirts around this kernel through the mechanic of the protagonist becoming “unstuck in time”: in essence, his mind wanders to reliving events that have already happened or are yet to happen. Combine this with a sprinkling of alien abduction and you’ve got a story that leaps from the sombre to the surreal in a matter of sentences.

It is the context of this non-linear narrative that explains the lack of cohesion; in trying to tell the story of the Dresden fire bombing, the narrator tells us about everything else. This approach, I am fairly certain, is intended to mimic the conversational style, and perhaps even thought processes, of individuals who lived through and participated in such atrocities. It may even be reflective of what we now call post-traumatic stress disorder (PTSD), an affliction little recognised until many years after the great conflicts of the 20th Century.

As a novel, Slaughterhouse-Five reads very quickly. This made it feel short, but there is no lack of story here. The language is very plain and, for the most part, quite simple. In fact, at one point I read a section of conversation out loud and couldn't help but feel it sounded like I was reading a children's book (though maybe that's the only time I practise reading books out loud!) This is not a criticism, just an observation as to why I read it so quickly. Perhaps it also represents the everyday men who were sent to war, deliberately contrasting them with the highly educated and verbose politicians who decided to send them?

While Slaughterhouse-Five didn’t move me as deeply as perhaps it should have, it is very readable and gives excellent insight into the darkest side of a war that many of us are too many generations away from to know about. I expect I’ll read it again at some point, to better understand it.

View all my reviews


Research grants yield Twitter followers?

I went to a nice little event on Monday evening, organised by the data.ac.uk guys at the University of Southampton. Effectively, the aim was to coerce some people with ideas and people with skills into building some little tools, applications and visualisations using data gathered about institutions under the .ac.uk second-level domain (largely UK universities and other entities related to research or academic pursuits).

I personally had a really rewarding time for a number of reasons. A handful of our first year undergraduates (from a class I teach called "Space Cadets") attended, contributed and seemed to enjoy themselves. I also learned about the research topic of one of our new PhD students (Johanna) and was impressed at how pertinent and insightful her topic is, and at how she was using this event as a way to gather preliminary data. I also got to catch up with some friends I haven't seen for a while, like Marja and Colin. And all this while almost hacking something together!

Full disclosure: we didn't quite finish what we aimed to do within the time, but I managed to pull it together in the pub afterwards 🙂
Gateway to Research homepage
So what did we do? Well, we started off looking at the data on Gateway to Research, as we were going to see if we could link it to news stories on university RSS feeds (do universities publish many stories about their research?). Organically, this evolved into looking at their Twitter feeds instead (as the data.ac.uk Observatory already scrapes the Twitter account from each homepage). As a simple goal, we wondered if there's any observable link between the number of Twitter followers and the number of research grants awarded.

By the end of the session we'd just about extracted all the relevant data (name of university, Twitter account, followers and number of research grants – 4 bits of data from 4 independent data sources) and displayed it as a list. We were somewhat hampered by my poor decision to attempt this in JavaScript, as the Same Origin policy made it impossible to AJAX data from the live APIs (why make your data available in JSON then not allow me to access it in JavaScript, I say)*. However, a quick rewrite into PHP got us back on track.

As I said, we weren’t quite done, as I wanted to visualise this data somehow, as well as fix a few bugs. In the pub, I tried to make use of the (unfortunately deprecated) Google Image Chart API, but it was capping at some weird values. To resolve this, I outputted the data as CSV and imported into Google Sheets and generated the graph manually (hack events require cutting some corners and thinking on your feet!) This is what we got:

grants vs followers

This is the number of research grants a university has had funded against number of Twitter followers on the first Twitter account on their homepage. It’s on a log scale.

The grants vs followers data in a Google spreadsheet, in case you want to look.

What does it tell us? Well it says that the more successful research universities also have more people listening to them on social media. Is this what we would have guessed anyway? It’s easy to say yes in hindsight, but it’s nice to have some numbers to support it. Of course, I’ve not yet run the correlation to see if this is a significant relationship; that’ll come with a bit more time.

Perhaps more importantly, it has helped us identify some quirks in the data and the nuances of how to handle it. For example, the Observatory will record all Twitter handles referenced on the homepage. If there's a widget displaying a Twitter feed on the homepage, it will include all accounts @replied to and retweeted. It also stores the date of each observation as the name of a property in an object, which is hard to sort, so it's difficult to get the latest observation (clearly this requires a smidgen of preprocessing). We spotted these by delving into a couple of the outliers, and interestingly, cleaning up the data moved them closer to the centre of the cluster of points.
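
As an illustration of that preprocessing, here is a minimal sketch of pulling out the most recent observation from an object keyed by date (the property names and date format are assumptions, not the Observatory's actual schema):

// observations keyed by date string, e.g. {"2014-11-03": {...}, "2014-11-10": {...}}
function latestObservation(observations) {
	// ISO-style date strings sort chronologically when sorted as plain strings
	var dates = Object.keys(observations).sort();
	return observations[dates[dates.length - 1]];
}

var obs = {"2014-11-03": {followers: 12000}, "2014-11-10": {followers: 12150}};
console.log(latestObservation(obs));	// {followers: 12150}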

To conclude, the event was a great success. I think the 2-hour hack might be the perfect format for exploratory data hacks. It's demoralising to spend a day or three hacking and have nothing to show for it; spending an hour or three and having a result (even a small one) is massively rewarding. I hope to tidy up this code, check the details of the data (especially what grants GtR includes) and do some stats on it. We've observed there's some link (though no inference about the cause of that link) between research funding and the social media popularity of universities. I became a bit more confident in working with data within a time constraint, and had fun doing it!

Resources


* I realise now that what I needed was JSONP. Unfortunately, GtR doesn't support that anyway. I could have used a JSON proxy (e.g. JSONProxy, or written my own in PHP) but I didn't think of that until the day after the hack! At least learning has happened 🙂
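
For anyone unfamiliar with it, this is roughly what the JSONP approach looks like with jQuery (the endpoint here is hypothetical, since GtR doesn't actually offer a callback parameter):

// jQuery injects a <script> tag and the server wraps its JSON in a callback,
// sidestepping the Same Origin policy (only works if the API supports it)
$.ajax({
	url: 'http://example.org/api/projects',
	dataType: 'jsonp',
	success: function(data) {
		console.log(data);
	}
});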

New Year’s Day 2015 parkrun double finder

I got a comment on the post about my NYD double finder from last year asking whether I was going to do one for this year. This inspired me to run the code again, and it seems to have worked. There's now a list of parkrun doubles that are (just about) feasible to do on 1st January 2015:

parkrun double finder NYD 2015

Please be aware that it may be physically impossible or unsafe for you to attempt some of these doubles. A double is included in the list if the second parkrun starts at least 13 minutes plus a Google-estimated driving time after the first one. This means you'd have to run a world-record 5km, and hit no traffic on the drive to the next one.
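
In code, the inclusion rule is roughly this (a sketch of the logic rather than the exact script):

// firstStart and secondStart are Date objects for the two advertised start times;
// drivingMinutes is the Google-estimated driving time between the venues
function isFeasibleDouble(firstStart, secondStart, drivingMinutes) {
	var gapMinutes = (secondStart - firstStart) / 60000;	// milliseconds to minutes
	return gapMinutes >= 13 + drivingMinutes;	// 13 minutes is roughly a world-record 5km
}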

Therefore, please check whether the double you want to do is realistic. Work out roughly how long you think you'll take to run the first parkrun, then allow time to get your breath back, get your barcode scanned, walk to your car and drive safely to the next parkrun.

If this is useful to you, or you think there's a possible double missing, or you have any other feedback, please comment below. I love to hear back from anyone who makes use of this 🙂

Rendering template with array of data in SammyJS

In my first play around with SammyJS, I was severely restricted for time, so just had to hack it to work. However, there were a couple of things I couldn’t work out how to do that bothered me, so I thought I’d figure out what was going on, and maybe try to fix anything that’s broken (be it buggy code, or lacking documentation).

My main concern was that, after having added things to the DOM using SammyJS, I couldn't seem to select those things using jQuery. I was also concerned that if I ran the same loop twice (to print different bits of the array) the output was unintentionally interleaved. Finally, I couldn't really get my head around how to insert some HTML into the DOM, then perform my main loop. It turns out these were all a little interrelated 🙂

What I Was Doing

In my initial hack, I had a number of routes that looked a bit like this:

this.get('#/drunkbeers', function(context) {
	context.app.swap('');

	context.render('browse-header.template', {category: 'Drunk Beers'}).appendTo(context.$element());

	context.render('link.csv.template').appendTo('#accordion');

	// Display drunk
	$.each(this.items, function(i, item) {
		if(localStorage[item.id+'_drunk'] == "true") {
			var id = item.id;
			
			var wantclass = 'btn-default';
			if(localStorage[id+'_want'] == "true")
				wantclass = 'btn-primary';

			var drunkclass = 'btn-default';
			if(localStorage[id+'_drunk'] == "true")
				drunkclass = 'btn-success';

			var unavailableclass = 'btn-default';
			if(localStorage[id+'_unavailable'] == "true")
				unavailableclass = 'btn-warning';

			var notes = localStorage[id+'_notes'];
			if(notes == undefined) notes = "";

			context.render('beer.template', {beer: item, wantclass: wantclass, drunkclass: drunkclass, unavailableclass: unavailableclass, notes: notes})
				//.appendTo(context.$element());
				.appendTo('#accordion');
		}
	});

});

This first uses two very simple templates to render some header information, then iterates over all the items in an array (loaded elsewhere from a JSON file), and if an item is marked as "drunk" in localStorage, renders it using a template. The issues here are that if I write a jQuery select after the iteration, it doesn't find any of the things added to the DOM, and because I used $.each() to go through all the items in the array, I can't use .then() afterwards to force ordering on the actions, which I believe is what was causing the interleaving of two array outputs.

How to use renderEach()

In my investigations into how to do this right, I noticed the example on the front page and its line this.load('posts.json').renderEach('post.mustache').swap(); The important thing here is renderEach(), which we assume is probably rendering the template post.mustache for every item in posts.json. But what if we've already loaded posts.json and stored it in a variable (as recommended in the tutorial)? SammyJS certainly doesn't attach .renderEach() to the Array prototype (though maybe it should?)

Let’s take a look at the documentation for renderEach():

sammyjs - renderEach documentation

 

Well, the example sort of makes sense. Run renderEach(template_location, array_of_data) and it will render the template for each item in the array. But that doesn’t sync with the arguments listed. What is name for? And why isn’t it passed in the examples?

Also, what about that this.load('posts.json').renderEach('post.mustache')? How does it get the data when nothing is passed as the 2nd or 3rd argument? And what if your array isn't associative: what variable name is each item put in (important for referring to it in the template)?

Previous Data

It turns out that whenever you call one of these functions in a RenderContext, it pushes the result of the previous function out of the content property and into the previous_content property. If data is not set, and previous_content is an array, then it will use previous_content in .renderEach(). This is how the output of this.load('posts.json') gets into .renderEach().
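
As far as I can tell, that makes these two forms equivalent (assuming this.items holds the same array the JSON file would have loaded):

// 1. let renderEach() pick the array up from previous_content
this.load('posts.json').renderEach('post.mustache').swap();

// 2. pass an already-loaded array in explicitly as the data argument
this.renderEach('post.mustache', this.items).swap();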

Reference Variable Name

If the array (either from previous_content or data) is just pure values (e.g. ["pete", 15, true, "hey", "there", 22]) then it's unclear what variable name each value will be passed in as to the templating engine, so it will be hard to refer to it in the template. If this is the case, .renderEach() lets you provide a string name for this variable in the 2nd argument, name. If you set this, make sure you use the same name as in your template 🙂

Order of Arguments

If your array already has a key for each value (e.g. because it’s an array of objects – [{x: “pete”}, {x: 15}, {x: true}, {x: “hey”}, {x: “there”}, {x: 22}]), then you may omit the name argument. SammyJS detects this (because argument 2 is now an array, instead of a string), and shifts all the arguments along one. It’s times like these that you wish JavaScript had named parameters!
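
Putting those two cases together, the calls look something like this (item.template is a made-up template that refers to a variable called x):

// plain values: provide a name so the template can refer to each value
this.renderEach('item.template', 'x', ["pete", 15, true, "hey", "there", 22]).swap();

// array of objects: the keys already name the values, so the name argument is omitted
this.renderEach('item.template', [{x: "pete"}, {x: 15}, {x: true}]).swap();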

Updated DOM

Interestingly, now that we’re using .renderEach() to render the items of the array, instead of $.each(), it means we can tag a .then() on the end. Now, inside the callback provided to .then(), the DOM is up-to-date and any jQuery select you perform will work. Hooray!
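
For example, something like the following now behaves as you'd expect, because the callback only runs once the rendered content is actually in the DOM (a simplified sketch; the real beer.template expects a few more variables than this):

this.renderEach('beer.template', this.items)
	.appendTo('#accordion')
	.then(function() {
		// the rendered elements exist by the time this runs, so jQuery can find them
		console.log($('#accordion').children().length);
	});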

Also, there seems to be another way to at least attach event handlers to the dynamically added elements, called event delegation:

$('#main').on('click', 'li', function() {
	console.log(this);
});

However, this doesn’t really work for other jQuery actions (not really sure why), so it’s easier to just use .then() where possible.

Printing to Document

Using templates is excellent for situations where you need to repeat the same format of information lots of times, but when you just want to print a line of HTML, it's a bit laborious. However, when you realise that this.$element() (or context.$element(), not sure which is more appropriate when) works just like the jQuery function $, and therefore you can call regular jQuery functions, such as .append(), it becomes a little easier:

context.$element().append('<a href="#">bob</a><ul></ul>');

Confusingly, you can't use .then() after this, because it's a jQuery function. However, it seems you don't need to, presumably as this has its effect and returns immediately (whereas the rendering operations must work asynchronously).

Contribution

I made an amendment to the documentation of EventContext.renderEach(), and submitted a pull request, so hopefully this very powerful function will be a little clearer to future users (if it gets accepted and merged in)!

Lessons

What were the main things I’ve learned here?

  • You can treat this.$element or context.$element like $ or jQuery (use it to select things).
  • .renderEach() is tricksy but powerful once you understand what the arguments do!
  • If you're generating parts of the DOM through SammyJS rendering, you must use .then() to ensure your code sees the latest version of the DOM, else your jQuery selections may occur before the rendering has completed.

Offline mobile web app with SammyJS

Last week I went to the Great British Beer Festival (gbbf.org.uk) and to help me track the beers I wanted from the 900 or so draught and bottle beers and ciders available, I built a little web app.

The GBBF beer selector doesn’t really work on mobile…

Great British Beer Festival do provide a beer selector on their website, but much like most organisations who make a web-thing for marketing, they’ve completely missed the point: people want to peruse the list while they’re *at* the beer festival, and their site is rather unfriendly to use on smartphones. What is needed is a mobile web app.

Last year a few of us had a go at this. My approach was to build a jQuery bookmarklet on top of their beer list, to allow the user to mark beers as “wanted”, “drunk” and “unavailable”. The advantage of the bookmarklet approach was that I didn’t have to make a copy of their data (and all the IP issues that incurs), and I was just augmenting what was already there (so hopefully would be less work). The disadvantage was that it required me to be online to access their website, and the jQuery to filter through their rather div-heavy DOM was slow on my Nexus One!

This year my aim was to build a small, simple web app in as few files as possible, so that I could download it onto my phone and use it offline (i.e. in airplane mode). This would involve getting the data to store in the web app (I have a lot of screen-scraping PHP code already) and then building a single page web app to let the user browse the beers and mark them as “wanted”, “drunk”, “unavailable” and make notes.

Getting the data

Unfortunately, the Great British Beer Festival is not an open data organisation (other than accidentally leaving an Excel file of the real ales on their site). This means I had to screen-scrape the website. I've done a lot of this in some other projects, so I just adapted the PHP code I already had. This involved reading every row of the table on the GBBF site into an associative array, then spitting it back out as JSON ready to be loaded into my JavaScript web app.

This took 3 or so hours the night before I was meant to be going to the beer festival. I was running out of time, but I intentionally wanted to restrict how long I could spend on this 🙂 I went to bed and decided to do the front end of the web app in the morning.

Single Page App framework

Despite never having built an offline web app, I knew I'd need a framework to handle the displaying of different "pages", because there would in fact only be one web page. There are a bunch out there, but I wanted something that I could quickly pick up and build something with in about 3 hours: that significantly reduces the field! One thing that annoys me about software frameworks is how ridiculously complicated they are to start with. You typically have to read several days' worth of tutorials, which explain in excruciating detail all the quirks and inconsistencies of that framework (as if you're going to remember any of it)!

SammyJS Hello, World: Basically all you need for a simple client-side web app!

With PHP I spent a while trying Zend, CodeIgniter and CakePHP, all of which suffer from this flaw, before stumbling upon FatFreeFramework (F3), which is now my PHP framework of choice. I've recently been learning Django, which is a little bit heavy, but I've not investigated simpler Python frameworks yet (though I hear Flask or Bottle are the likely candidates). Now it was time to find one for JavaScript. My initial searches brought up things like AngularJS, EmberJS and BackboneJS, but as always the ones people shout about most are the ones they spend 40 hours a week working with in their job, so they know all the ins and outs. Eventually I found SammyJS, whose homepage has a Hello, World of only 9 lines of code, which instantly showed that it did everything I needed: handle a GET request, load data from a JSON file, and render all the items with a template. Perfect!

I started by following the first part of the SammyJS tutorial, which walks through loading from JSON, displaying items using templates, and then displaying other pages. I implemented what I was reading, but using my data rather than their example data. This quickly got me a base for what I needed: listing all the beers that are available.

I then added the ability to search by style. This was just the code for displaying all the beers, with an if statement to only display ones of a chosen style; the style itself was defined by a URL route parameter. After repeating this for country, I realised that the code is the same for bar, brewery, country and style, so I created a JSON file for each of these and generalised the code to load the JSON file for whichever category is in the URL.
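
The shape of those filtering routes was something like this (a simplified sketch; the field names on each beer are assumptions):

// one route covers bar, brewery, country and style
this.get('#/:category/:value', function(context) {
	var category = this.params['category'];	// e.g. 'style'
	var value = this.params['value'];		// e.g. 'Stout'

	context.app.swap('');
	context.render('browse-header.template', {category: value}).appendTo(context.$element());

	// only display beers whose field for this category matches the value in the URL
	$.each(this.items, function(i, item) {
		if (item[category] == value) {
			context.render('beer.template', {beer: item}).appendTo('#accordion');
		}
	});
});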

I wanted to make it look not-too-terrible, so went to put in jQuery Mobile to make it really look like a mobile app. However, when I added it in, it started constantly reloading the root of the web server! It turns out that jQuery Mobile is more than just a set of widgets, it is also a framework for single-page apps (if only I'd known)! Anyway, that broke everything, so I immediately got rid of it and dropped in Bootstrap, so I could give the lists of links and buttons a more user-friendly look and feel.

Finally, I had to add the functionality for letting the user select which beers they want to try, have drunk and any notes they want to make. I had written most of this last year, using jQuery and HTML5 localStorage, so it should have been a pretty easy job to port it over. However, SammyJS doesn’t seem to let you use regular jQuery, which is a bit of a pain. I ended up using regular onclick and onblur attributes in the HTML to call functions I defined in my JavaScript. This got the job done, but it was a bit disappointing to not be able to attach the events programmatically.

While I’m on the subject, HTML5 localStorage is really lovely. It is so much nicer to use than cookies, and just involves reading and writing values in an object that acts like an associative array called localStorage, and that gets magically saved by the browser between visits to the page. Simple!
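
For example, marking a beer as drunk is just one read and one write on that object (toggleDrunk is an illustrative name, but the keys follow the same id+'_drunk' convention as the code above):

function toggleDrunk(id) {
	var key = id + '_drunk';
	// localStorage only stores strings, hence the "true"/"false" values
	localStorage[key] = (localStorage[key] == "true") ? "false" : "true";
}

// wired up from the rendered HTML, e.g. <button onclick="toggleDrunk('123')">Drunk</button>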

I ran into another peculiarity of SammyJS when trying to display wanted beers as a separate list from unwanted beers, on the same page. I tried duplicating the loop, just with a different if statement in each. However, this still displayed some of the unwanted beers interleaved with the wanted beers. I think this is because some of the actions in SammyJS are based on asynchronous tasks (like loading a JSON file), so you're meant to chain them using .next(). However, some of the SammyJS functions don't seem to return an object that has .next(), so I was unable to chain in the way I wanted. A related problem meant that I found it difficult to insert text into the page before loading the JSON file. I'm sure I can figure out how to fix both these problems, I just didn't have the time to do so during the development of this app!

The last thing I did before running out the door to get the train to London was to put it online and test it briefly on my smartphone. It looked like it worked, and the only thing I needed to tweak was the CSS to increase the size of the text a little and set the viewport width.

<meta name="viewport" content="width=device-width, initial-scale=1">

Webserver on my phone

A web server on your Android phone!

On the train, I tested whether I could save the app and use it offline. My very quick search indicated that in Android you could save a bookmark to the home screen for offline use. This did not work. I’d hoped that the (almost) single file aspect would mean Chrome would easily cache it and give me access to it, but as soon as I turned on airplane mode, Chrome immediately refused to even load any page (because it’s offline)!

I tried downloading all the files from the web server (luckily I had left a zip of the directory on the server), and using the file:/// protocol but this seemed to stop any of the JavaScript working.

My final chance was to download a web server to run on my phone and host the files locally. There is a web server called kWS on the Play Store, which is free and did exactly what I needed! It's also one of the few apps I've ever installed that wanted permission to access only 2 things (files and network, obviously for a web server). Once set up, with the files in /sdcard/htdocs/, I could access my app at localhost:8080. Brilliant!

Offline App Cache

The palaver with having to host a web server on my phone got me thinking: given a settings file at a certain location relative to the web app (say app.offline in the same directory as the home page), I could build an app that downloads all the appropriate files, and then hosts them locally on a web server on the phone. This seemed a little overkill, as someone must have thought of this before, so I hunted around a little.

And of course this is already a solved problem! I first found an Adobe blog about taking web apps offline, and once I knew the name of the technique ("app cache manifest") I managed to find an HTML5rocks application cache tutorial which explained a few of the nuances, and an offline HTML5 tutorial that explained some of the tricks to debugging. It's dead easy: create a manifest file which lists the files that are required offline (in my case index.html, app.js and a few .json data files) and reference it in the <html> tag of your pages. The only downside is that you do have to manually list every file you want offline (wildcards don't really work – they can if you have directory listing switched on, but that didn't work for me), but you could write a server-side script to generate this if you have lots of files.
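
To give an idea of the scale, the whole thing is only a few lines: a manifest along these lines (the .appcache filename and the exact .json names are illustrative), referenced from the opening <html> tag.

CACHE MANIFEST
# v1 - change this comment whenever a file changes, to force browsers to re-fetch

index.html
app.js
beers.json

<!-- in index.html -->
<html manifest="gbbf.appcache">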

It's quite useful to know that in Chrome you can go to chrome://appcache-internals/ to see whether your app has been cached, and to clear the cache after you've updated your app. The console in Chrome Inspector is also helpful, as it tells you each of the files it is caching, and if it fails at any of them, gives you an error message.

Summary

There we go. In less than a day's work I managed to learn a client-side app framework, build an app to meet my needs and figure out how to get it to work offline. Hopefully, this will be useful practice for future app development! Of course, some of this has built on previous skills I have learned (PHP web site scraping, jQuery and localStorage), but I'm glad I managed to find the tools to use quite easily, wrangle them into what I wanted and have the chance to share those tools here!

SammyJS is pretty lovely. Unfortunately it seems to become a bit of a pain to do things that don’t fit exactly in its view of the world, but it does do those basics elegantly enough that I’d like to use it again. localStorage is brilliant for storing simple information between uses of the app. The application cache is such an easy way to save your app offline and keep using it when not connected to a network. The bits for building offline web apps are all there, and they’re really easy to pick up. My advice is to start with these, and learn something more complex when you need it.

My code is at https://github.com/rprince/gbbf2014-webapp in case you want to check it out (note you'll need to get SammyJS as I've not figured out the licensing yet). You can check out a working version of it here: http://users.ecs.soton.ac.uk/rfp07r/gbbf/2014/.

Thunder Run 2014

I have once again returned from Thunder Run. My blog post about Thunder Run 2013 was quite well received, and served as a reference when planning this year's event for at least a couple of my friends, as well as myself. I thought I should do the same again, but I'll try not to repeat too much of what I said last year!

The event was, like last year, excellent. The overall setup was pretty much identical. The course was almost identical, give or take a slight adjustment at about 9k, to take you along the east of that bit of the campsite (rather than the west, which was a bit too boggy last year). The vendors by the start/finish were largely the same (Buff, Adidas, ice cream van, fish and chips, cycling accessories), with the addition of an "Alpine" food stall selling tartiflette, hot chocolate and mulled wine(!!). The staff in the main food tent seemed a little green to start with, but were fairly well drilled by the end of 24 hours of serving hungry runners! I also heard that they ran out of jacket potatoes and pasta with meatballs in the middle of the night, but this didn't seem to be an ongoing complaint, so I assume they rectified this.

The nutrition stand was there again, though with Osmo hydration and Honey Stinger energy bars (rather than Clif bars). They still did the unlimited refills of hydration powder, which was excellent, and there was even a choice of bottle this year 🙂

parkrun

http://connect.garmin.com/activity/551377976

LRR Thunder Runners at Conkers parkrun (thanks to @runbeckrun for the photo)

The nearest parkrun to Thunder Run is Conkers parkrun. As parkrunners tend to like a bit of tourism (visiting other parkruns around the world), it’s become a bit of a tradition for parkrunners who are in the area for Thunder Run to visit Conkers on the Saturday morning. I went last year, but for some reason didn’t write about it in the blog post.

This year there was a much larger crowd of us (2 packed cars) which made for a very fun and social parkrun (we stopped for a lot of photos, especially with the sign-holding volunteers at Conkers). It’s a lovely park and well worth checking out if you’re in the area on a Saturday morning.

Lap 1

http://connect.garmin.com/activity/551377994

At 2pm, it was hot! I tried not to go too hard, as I knew I had lots of running left ahead of me, but probably went up the first hill a bit too fast, then definitely went down the first descent too fast!

Stopped for a drink at the water station, then continued running. Second half of the lap was probably a bit cooler (more tree cover provided more shade), but I was still slower, due to the drink stop and having gone out a bit hard in the first half.

Embarrassingly, I fell over on a completely flat piece of ground, just past the 4 mile mark. It was in one of the forests, so I guess there might have been a tree root or something, but it didn't feel like I'd tripped on anything that hard. I think I just got lazy with my feet, didn't lift them high enough and clipped the front edge of a dip in the mud. One thing's for certain: for the rest of that lap my feet were exaggeratedly high! The kind runner ahead of me stopped and called back to check I was ok; as I rolled onto my back and sighed, I yelled "I'm ok, thanks" and realised I should get up and running. I overtook the friendly runner not long after and thanked him again: I had a little scuff on my left knee (I normally do, I'm a goalkeeper) and was covered in dusty mud, but other than that I was fine.

Lap 2

http://connect.garmin.com/activity/551378017

By 8pm, the weather had cooled a little, with a little more cloud cover. The organisers announced that anyone going out after 8.20pm had to wear a head torch. I took mine with me, but it was still light enough to easily recognise the course.

An absolutely beautiful shot of the Thunder Run campsite. (Thanks to James Saunders for the photo!)

This was probably my most comfortable lap of the weekend: not too hot, good light, I was beginning to anticipate which bit of the route was coming next and I wasn't too tired yet. I kinda knew this was my best opportunity for a fast lap, as I was less tired than I would be on Lap 4! I pushed a little harder than Lap 1, and did end up going faster by 45 seconds, though it felt harder than that. I'll put this down to lack of endurance training (or any training)!

It started raining lightly towards the end of the lap, which was nice and cooling, but also meant my clothes that were drying on the gazebo got wet again!

Lap 3

http://connect.garmin.com/activity/551378037

My third lap was my only one in total darkness. Despite only about 4 hours' sleep, I woke up pretty easily and prepared for the lap. It definitely took half a kilometre or so to get going. I'd done some dynamic stretching, but should probably have gone for a little jog. The head torch was good; I didn't have to adjust it much. It was still quite warm (I wore a t-shirt rather than a vest). An uneventful lap, really.

Lap 4

http://connect.garmin.com/activity/551378064

Feeling fatigued by now. Chest felt tight (in a muscular, rather than breathing way) for the first half a mile, presumably because my arms and shoulders were tired.

Took the first few hills relatively easy, though was still overtaking.

Was exhausted by 5km, so decided to walk to the water station and then up Conti hill. From there I ran the rest, but not quick. It was not as hot as Lap 1, but I was just knackered!

It was quite a nice feeling to finish this lap, have a shower and get some food, knowing I was done for the day. I was pleased to have completed one more lap than last year.

Shoes

Last year, my major mistake was not taking a suitable pair of trail shoes. I assumed the weather would stay good (because it was summer), but the torrential storm created a running surface that my Vibram FiveFingers Trek Sports just couldn't handle.

I did a bunch of research into minimalist trail shoes and went to Thunder Run 2014 with much more appropriate footwear.

Fellow Brontophobic and blogger, Tamsyn. (Thanks to James Saunders for the photo!)

Firstly, I was given a pair of Inov-8 Bare-Grip 200 for Christmas. These have some proper lugs on them, almost the size of football boot studs, and more of them! I ran in them at a couple of very cross-country parkruns on New Year's Day (Lloyd Park and Roundshaw Downs). I am very confident I would have survived last year's Thunder Run with these. Unfortunately I didn't get an opportunity to wear them this year, as it barely rained!

More recently, I bought a pair of Vibram FiveFingers Spyridons from Sport Pursuit. My mistake was that I did not make it to any CC6s or RR10s this year (mostly due to injury), so did not wear them in before Thunder Run. I ended up wearing the Spyridons for every lap of Thunder Run this year (I wore my Trek Sports for parkrun), so I ended up with a blood blister on my second toe, despite covering my toes in Vaseline. The Spyridons are a lovely minimalist trainer. While they felt a tad stiffer and heavier than my other FiveFingers, they had a bit more protection on the sole, so tree roots, stones and even bits of half-buried brick were no problem at all. Grip wise, I was happy enough as I had no problems with sliding sideways or down hills, though the ground was 95% dry for all of my laps. I think they would have coped with a bit more rain, but I have no evidence of whether they would have survived last year!

Nutrition

I followed a fairly similar approach to last year: cook lots of pasta on Friday night, and save the rest for mini-meals after each lap. I also had cereal bars, malt loaf, peanut butter, bananas and jelly babies as energy snacks in preparation for a lap. I never felt short of energy, though I think that's more to do with lack of stamina meaning I never dared run too fast.

Also, as I was feeling a bit more comfortable with Thunder Run as a whole, I ate some of the catered food before and during the event (last year I wouldn't have dared, as it was untested). A late-night tartiflette, following my second lap, did a very good job of warming me up and filling me up 🙂

Fitness and Performance

I didn't train for this Thunder Run. In fact, I've barely been training at all. I still don't feel confident that I'm over my injury. I seem to have got past the glute medius problem, though that might just be because it isn't being aggravated while I'm not running very much. However, the front of my left hip still gets tight when running, which I understand is linked to having very tight adductors. I try to stretch every day to improve this, but it's slow and uneven progress.

Lordshill Road Runner soloists Jim and Rob.
(Thanks to James Saunders for the photo!)

My training recently has consisted of 1 x Run Camp, 1 x parkrun and occasionally 1 x another run each week. No long runs, no track sessions, no hill work. I want to get back into my training, but I'm worried about getting re-injured. Also, now I'm out of the habit of going to certain training sessions, it's hard to find where to fit them back in (even though I don't seem to be doing anything particularly important with that time).

Nevertheless, I’m still happy with my performance at Thunder Run this year. All of my laps were under 60 minutes, and I did 4 of them. That was effectively my goal last year, which I failed at massively due to lack of preparation for the conditions! Now I’ve had a simple, straightforward, successful Thunder Run I can start planning ahead to a more daring goal next year. 5 laps? 4 laps under 50 minutes each? Who knows? I’ll see how training goes!

Reflection on tips from last year

My Thunder Run blog post last year had some tips for myself this year. Stupidly, I didn't listen to all of my own advice!

I didn't take 4 of everything to cope with all the laps. I had enough technical t-shirts and vests to have a new one every lap, but I only have 3 pairs of running shorts (so took them all), 3 appropriate pairs of shoes, 2 base layers and 1 towel. To be fair, multiple pairs of shoes are only really required if it's raining a lot, but in the heat my base layers and shorts got very sweaty, which meant trying to rinse and dry them between laps.

I didn’t take a clothes horse (too unwieldy to carry) and forgot to take coat hangers (I don’t have many spare). I hung things from the gazebo, but got caught out when it started raining while I was out for my 2nd lap and everything got wet again (it was almost dry when I headed out).

I didn't take enough cash. There were many more Buffs that I wanted to buy, and they didn't even bother bringing a card machine this year! I had enough for any food I wanted to buy, but I still felt like I was having to be careful about what I spent, which is a distraction from running!

I also failed to acquire a camping chair, but there was nearly always someone not sitting at any one time, so there was often a spare. This is a bad thing to rely on though, so I’ll sort this out next year.

Tips for next year

These are additional tips for myself, as I failed to heed my own advice from last year and I wish I had done the following:

  • Take coat hangers
  • Take more cash
  • Get a camping chair
  • Get more base layers
  • Get more shorts
  • Get another towel
  • Go for a short jog before each lap.

Of course, last year's tips still stand true:

  • Take 1 of everything that you need for each lap you hope to do. Ideally this would include:
    • base layer
    • vest/shirt
    • shorts
    • shoes
    • socks
    • towel
  • Figure out a way of getting things dry, even when it’s raining. Maybe a clothes horse or a few coat hangers if your tent is tall enough.
  • Have good trail shoes. Try them out in the CC6 races to make sure they’ll be adequate.
  • Cook up your night-before-race meal (e.g. pasta) and have enough left over to snack on after each lap. Other pre-race snacks (e.g. energy bars, cereal bars, jelly babies, peanut butter sandwiches) are handy as well.
  • Take some cash for food and gear at the site. Some of the vendors have card machines, but mobile phone signal was appalling so they didn’t work!
  • Camping chairs and gazebos are very useful.
  • A tent you can stand up in is great.

And some I didn’t think to add last year, but might be useful to someone who has never been before, or may not camp very often:

  • Some way of transferring water in bulk from the water tanker to your campsite. A couple of the 5L bottles from the supermarket work nicely.
  • Get a good head torch, charge it well before going.
  • Have another torch for moving around the tent and campsite at night (so you don’t run out the batteries in your head torch).
  • An inflatable mattress is helpful.
  • A fairly light sleeping bag will do. It's summer, so it is still quite warm at night. Having a jumper, hoodie or towel close by can come in handy as extra warmth if it happens to get unexpectedly cooler while you're sleeping.
  • Take your parkrun barcode #dfyb!

Next year I’ll rewrite this as a checklist.

Overall

Great Thunder Run. Thanks to all my Brontophobics team mates, and to my fellow Lordshill Road Runners. It was an excellent weekend away, and I can’t wait till next year! I’d recommend it to anyone who enjoys the social side of running, though I think it’s best if you can go with a team who have some experienced Thunder Runners to help you settle in.

WAISfest 2014

The last weekend of July 2014 was dominated by WAISfest, the research festival held by my research group WAIS. This year I was heavily involved in organising the event, after we managed to convince the academics to relinquish organisational control to the people of the lab, who make up more than three-quarters of the participants of WAISfest. I decided to blog about the event as a way of capturing the essence of the event, for future members of WAIS.

Day 1

Kick Off

At 9.30am on Thursday, fuelled by pastries and coffee, 55 or so members of WAIS arrived in a lecture theatre to kick off WAISfest. Our new head of group, Luc Moreau, gave an introduction to the idea of WAISfest and thanked the organisers and everyone taking part. Then I sped through a reminder of the schedule, pointing out when I expected people to meet back up, when the social events were, and when the opportunities for free food were 🙂 There were a record 11 themes this year. Their titles were:

Charlie pitching Implicitly-crowdsourced sensing

Checking the pulse (Thur 2pm)

After I returned from lunch, I spent about a quarter of an hour buzzing round all the groups I could find to see how they were getting on. Unfortunately a lot of them had disappeared for a late lunch.

  • Visualising Impact – I had a chat with Will about the impact tracker his company has built, and whether it could be used to track WAISfest. It was a little bit late to be asking the question, now the event had already started, but he did give me access to the system, so I intend to test whether I can capture interesting information in it, with scope to get everyone to use it next time.
  • Coffee Room – there were remnants of 2 groups in the coffee room. The crowdsourcing open data group had mostly headed off to manually gather some data, but I did find their plans on the whiteboard. The other group were the one printing receipts for transactions where you exchange data for a product. At the time, they were having some issues because their thermal printer hadn't been delivered yet and they couldn't source one locally, but they were coming up with some novel solutions, such as printing to a web-connected thermal printer and watching on the web-cam!
  • Linking accessibility data – seemed engrossed, so I didn't bother them too much. They seemed to have a lot of data to be working with, so were busy!
  • Location-aware narrative – this group seem to do well every WAISfest, and everyone looked busy doing something. I believe they're heading out to build a narrative of the Southampton City Walls tomorrow, which should be exciting: field trip!
  • Model Internet – working hard as well. They had calculated that actually modelling packets moving around even a small corner of the Internet would be way too intensive, so have rescoped to simulate traffic levels at various parts of the Internet, which should make the task more manageable.
  • Recognising sites from afar – my group, so I know how we were going! We had gathered screenshots of the 50 most visited sites in the UK, and were discussing the effect of shrinking them and a protocol for doing an in-person experiment (can people recognise the sites on a screen from different distances?).

I didn't really get a chance to check up on the groups again because I was working on my theme, but I'll do another summary after the Friday lunchtime status update.

Boardgames (social)

Thursday evening consisted of an opportunity to play some boardgames, an event organised by Jonny. There were about a dozen people there, so we split into 2 groups. I think the other group played Once Upon a Time and War on Terror. Charlie, Matt, Jonny and I played Robinson Crusoe, which was interesting, though difficult. It's a collaborative game (I like these a lot), but gruellingly hard. The premise is that all the players have been shipwrecked on an island, and you have to take various actions to survive (by getting food and shelter). We played one of the 6 scenarios (purportedly the easiest!), which required us to collect enough wood to build a big fire, and survive long enough for a ship to sail past (which was actually fixed at round 12). We lost because one of our players "died" in round 7, because the weather had turned inclement and we didn't have enough food and shelter to help us through. It felt as if the game was balanced a little on the difficult side, but it may also be that the amount of randomness (there is a lot of dice-rolling, as well as a lot of shuffled decks of cards to draw from) means that it's only possible to win if you get near-perfect dice rolls. Nevertheless, it was intriguing enough to make me want to play it again and get closer to winning 🙂

Day 2

Quite a quiet day. I was busy running an experiment, so didn’t get a chance to float around the groups too much. We did have a lunch and status update session, which seemed to go quite well. We videoed it:

BBQ

Sunday of WAISfest weekend saw the annual WAIS barbecue! Stalwarts and contemporary members of the lab, along with their families, joined together to eat, drink and play. This year our venue was the BBQ pit between South Hill and Hartley Grove halls, which has a huge open, grassy area providing us with plenty of room for frisbee and football. With the addition of Chris' gazebo and a couple of picnic blankets and camping chairs, we had the perfect environment for a WAIS BBQ!

Photos: Charlie, Jon and Phil at the BBQ; disposable BBQs; a smoky BBQ.

Disaster almost struck when it turned out that the BBQ pit did not have a shelf for charcoal. Luckily, before the ravenous hordes started tucking into raw meat, Chris produced a portable BBQ and Mike sped off to B&Q to pick up 12 disposable barbecues for £24 (bargain)! Before too long, the frisbee pitch was filled with smoke and the smell of chargrilled food 🙂 While the BBQ situation was being resolved, Yvonne, Pla and Priyanka handmade a fabulous selection of salads to accompany the meat. These were all amazing, so we must heartily thank these three for the delicious salads!

Day 3

Narrative, Model Internet, Alan Walks Wales, MOOC Observatory, Linking people: almost done!

The final day of WAISfest had a similar feel to the second day, with maybe a soupçon of panic as the 4pm wrap-up deadline approached. Just after lunch, I managed to do a short visit to all the groups I could find, to remind them about the wrap up. Everyone seemed reasonably calm, though many reported there were things they didn't have time to finish. The wrap up was run fairly ruthlessly, with 5 minutes per group (getting warnings at 2 minutes to go and 30 seconds to go). Each group reported back their successes and findings, as well as their experiences of attempting their task and how they might continue this work in the future.

COMING SOON: In another blog post, I will post videos of the wrap ups with a little summary of each.

Immediately after the wrap up finished, everyone headed back over to the Building 32 Coffee Room for pizza and an opportunity to discuss each other's results (as there was no time for questions between presentations)! Lots of talking went on here, and everyone seemed in good spirits, so I think overall WAISfest did a good job of letting our researchers work on something a little different and engage in conversation with some new people. In my mind at least, that's a success!

My response to “What Happened to Video Game Piracy?”

I read this article called “What Happened to Video Game Piracy?” on Communications of the ACM, and was a little disappointed that it seemed to miss a couple of key points, so I have addressed them here.

For a start, due to technical limitations, music was the first to go down the well, and the games industry learned from that. In 1999, when Napster launched, home internet speeds made it barely feasible to pirate music, let alone a video game. This meant that by the time home broadband was viable, the games industry could learn from the mistakes of the music industry.

The younger games industry also seemed more receptive to new business models. id had already demonstrated in the 90s that shareware still worked, with DOOM, following on from that distribution model's popularity in home PC gaming in the 80s. The free and open source movement, the sharing of game code through magazines in the 80s and the culture of demos all embedded in the industry the mindset that money could be made despite giving something away for free.

The music industry couldn't grasp this and pushed back too hard. Although iTunes was released before Steam, it was under the restrictive control the music companies demanded. It was not the product consumers wanted, at a price the market considered fair (non-physical MP3 albums costing more than the CD!!). Steam did the opposite: it offered gamers what they wanted, at a competitive price, and the service continually improved (Steam sales, pre-order downloads, achievements).

This is not to say that piracy does not exist in the games industry. The market is split: consoles have little piracy, because modchips were made illegal and the industry clamped down on them; the PC market still has elements of piracy, and this is unfortunately affecting companies that only release on PC. If the music industry had a continually improving technology, then perhaps most people would listen to music on their equivalent of a games console, and piracy could be controlled better. But it doesn't; it's a static technology. The playing of music has not significantly changed since the MP3 became popular over 15 years ago, and it is unclear whether the music industry (rather than the tech industry on its behalf) is investing anything to change this.

In effect, the answer isn't solely that there are consoles, a different market and some legislation. The industries themselves have taken different approaches to handling the problem. The games industry embraced downloading as a distribution method, rather than fearing it, and gave gamers what they wanted sooner than the music industry did.

The Cathedral and the Bazaar

The Cathedral & the Bazaar: Musings on Linux and Open Source by an Accidental Revolutionary by Eric S. Raymond
My rating: 5 of 5 stars

The Cathedral & the Bazaar is absolutely fundamental reading for any computer scientist who wishes to have anywhere near a reasonable discussion about the state of software development today. It should really also be required reading in management, entrepreneurship and politics, as it outlines some interesting human motivations that, if embraced, could do great good for the world. The open source model of software development should not be feared or abused (as the immediate human nature responses appear to default to), but understood, respected and aspired to in all areas.

This paper copy of the book is a collection of essays by Eric S. Raymond, which fit together rather fluently. Each essay is available for free on Raymond's website, but I think reading any in isolation would make far less sense than the whole, which this book presents. "A Brief History of Hackerdom" nicely sets the scene, "The Cathedral and the Bazaar" presents the central thesis explaining why the open source model produces better software than the closed source model, and "Homesteading the Noosphere" explains the human motivations behind the entire culture (relating it back to John Locke's philosophy on property). "The Magic Cauldron" discusses the novel business models that are now possible due to open source software and why they are more sustainable than those relied on by closed source software, while "Revenge of the Hackers" explains how Raymond went from regular open source contributor to spokesperson for the community. These only make sense in the context of each other, and each one piles on more convincing evidence to support the model. You must read them all, because they are worth more than the sum of their parts.

Being around real hackers, open source developers, computer science academics, or even reading the tech community news gets you part of the way to understanding the culture and ethics behind open source. However, Raymond has thought so hard about this, lived through it and communicated it so succinctly that it is a crime not to read this so you can fully understand why an innovation in a relatively small technology community is going to have long lasting and important effects on the whole of society.

Raymond’s ego rises to the surface on occasion, but forgive him that and look past it into the principles of these essays. If you have any interest in software development at all, you must read this. I am frankly embarrassed that I haven’t read it before now, so don’t make the same mistake as me!

View all my reviews

The Invisible Man

The Invisible Man by H.G. Wells

My rating: 4 of 5 stars

The Invisible Man is the story Frankenstein would have been if Mary Shelley had been a more experienced author. HG Wells masterfully leads us through a mysterious build up, an action-packed middle and a treacherous finale, all in less than 200 pages. Much like Frankenstein, it is a story which I have seen adaptations of throughout popular culture, but only just got around to reading the original. It is refreshing to read the source material for these much used tropes.

There are few characters in The Invisible Man. Everything revolves around the man the book is named after, but part of the enjoyment is the slow exposition of his history and motivations. Very few others are introduced in any detail, but that's fine, as they are just there to move the story along. The aim here is to keep the reader guessing for as long as possible, and The Invisible Man does this very well.

Definitely worth reading: it won't take long and you will enjoy it. Solid HG Wells, and a much better mystery/horror sci-fi than Frankenstein.

View all my reviews