I wrote a short piece on Medium about why I think freely available company records are a good thing for public interest reporting – you can read it here.
I’ve written previously about the scraper I’ve produced, which grabs Ofsted inspection ratings for open and closed free schools. (See here for a blogpost on the Python back-end, and here for a blogpost on the original front-end.)
Briefly, my reason for doing this was that there’s no easy way to get up-to-date Ofsted ratings for a particular group of schools, such as free schools. And in the absence of a track record of exam results by which to judge the new schools, Ofsted ratings offer one of the few ways we have of assessing how they’re performing.
Updated, 9 May 2015: See bottom of the post for the changes.
I finished the last blogpost having scraped the Ofsted ratings with Python, using Scraperwiki to turn the data into a JSON API. So, what to do with the output of my scraper?
Well, my ambitions were fairly modest as far as presenting or visualising the data went. For now at least, all I wanted to produce was a basic table of all inspection ratings. Importantly though, it had to update automatically as new schools were inspected and additional ratings came through.
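As a rough illustration of that pipeline, here is a minimal Python sketch of consuming a ScraperWiki-style JSON feed and rendering it as a plain HTML table. The endpoint URL and the field names (`school`, `rating`) are invented for the example and won't match the scraper's real schema; the point is simply that a table built from the live feed picks up new inspections automatically.

```python
import json
from urllib.request import urlopen

# Hypothetical ScraperWiki JSON endpoint -- the real URL will differ.
API_URL = "https://example.com/ofsted-free-schools.json"

def fetch_ratings(url=API_URL):
    """Fetch the scraper's output as a list of dicts."""
    with urlopen(url) as resp:
        return json.load(resp)

def ratings_table(rows):
    """Render inspection ratings as a plain HTML table.

    Because the table is rebuilt from the feed each time,
    newly inspected schools appear without manual updates.
    """
    cells = "".join(
        "<tr><td>{}</td><td>{}</td></tr>".format(
            row.get("school", ""), row.get("rating", "")
        )
        for row in rows
    )
    return ("<table><tr><th>School</th><th>Rating</th></tr>"
            "{}</table>".format(cells))
```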
One of the projects I’ve spent quite a bit of time working on over the past year has been the specialist free schools news site I co-founded, EverythingFreeSchools [NB: as of September 2014 this site is no longer live. The free school Ofsted ratings resource I’ve developed can now be found here.]
The site has performed well, building a regular readership of people with an interest in the policy. And besides the news stories the site has broken, one thing that has proved popular is the data resources – such as this look at how many teachers without Qualified Teacher Status each free school used.
Back in late February, four others and I – two journalists, two devs – took part in Build The News, a hackathon put on by the Times/Sunday Times's digital news team.
(Somewhat unexpectedly, that is. The tool we built – a Twitter search tool, designed to let journalists covering a breaking news story find original information amid the noise of a thousand retweets – worked for the first time roughly five seconds before we had to present it at the end of the weekend.)
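The core idea – surfacing each piece of information once, rather than a thousand times – can be sketched in a few lines of Python. This is a toy version only: the field names (`text`, `retweeted`) are illustrative, not the real Twitter API schema, and the actual tool does considerably more.

```python
def originals_only(tweets):
    """Filter a list of tweet dicts down to original posts.

    Drops explicit retweets and verbatim duplicate texts, so a
    journalist monitoring a breaking story sees each claim once.
    """
    seen = set()
    keep = []
    for tweet in tweets:
        text = tweet.get("text", "").strip()
        # Skip explicit and manual ("RT @...") retweets.
        if tweet.get("retweeted") or text.startswith("RT @"):
            continue
        # Skip verbatim duplicates, ignoring case.
        key = text.lower()
        if key in seen:
            continue
        seen.add(key)
        keep.append(tweet)
    return keep
```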
Off the back of this, we were invited in to spend last week developing the tool in The Times's offices, supported and encouraged by the digital news team.
A full account of our week can be found in this blogpost I wrote for the digital news team’s Tumblr, but, in short, it was great.
It gave us a brilliant opportunity to really crack on with developing the tool, and – I’m still not quite sure how Nick Petrie wangled this – we also got facetime with a huge number of reporters, social media journalists and executives across the News UK titles, who contributed really valuable thoughts on use cases.
So what happens next? Well, we all have various commitments, so it wouldn't be an exaggeration to say that, as a group, we've barely had a moment to consider that question. Putting our heads together and working out where we see Low Pass going is probably the first step.
Hopefully it won’t be too long before we can roll it out in some form, though.
The tool has been developed on an open-source basis to date, and is on GitHub.
Time to introduce something I’ve been working on, and that has been occupying considerable amounts of my time for the last few months.
EverythingFreeSchools.co.uk is a news site that I co-founded with a number of others in October last year.
As a site, we cover the Government's flagship education policy of free schools, and have set out to do so in a way that is balanced and incisive. As well as straight news stories, we'll also be bringing more analytical pieces and data-driven stories, trying to understand what impact the policy is having.
The site builds upon my interest in education, and public policy more generally, and it’s been pleasing to see the response it has received so far.
Further to my post a few days ago, five more former Met Police buildings have been put up for sale with Knight Frank.
The buildings – four former police stations and one residential site – don’t appear on Zoopla yet (as the others did), but the sale brochures can be found here:
- Former Brentford police station
- Former Kenley police station
- Former Norbury police station
- Former Sydenham police station
- 1-8 Park Close, Brook Street, Windsor
Bidding for all sites closes on 18 October.
UPDATE, 28 February 2014:
Slow to post this, but a number of other police stations went on sale with a bid date of 25 October last year. The sites were as follows:
- Former Brockley police station
- Former Chadwell Heath police station
- Former East Dulwich police station
- Former Streatham police station
- Former Tabor Grove police station
- Former Wanstead police station
- Former Wealdstone police station
- Former West Drayton police station
- Former Harold Hill police station
I’ll be updating once more is known about the sales.
In March this year the Mayor’s Office for Policing and Crime (MOPAC) announced that they would be closing a number of police stations as part of moves to save a forecast £60m, with 29 of the buildings to be sold off.
A few days ago I noticed that one of the sites earmarked for disposal – the former Hackney Central police station – had gone on the market with estate agents Knight Frank, resulting in this story for the Hackney Citizen.
The turnaround has been quick – less than a month between the site closing its doors as a police station and being put up for sale.
And a quick browse on property site Zoopla turns up a number of other former police stations that have been put on the market with similar speed:
Another short post, just to pull together the output of a couple of weeks spent working with Trinity Mirror Regionals’ recently established data journalism unit.
The unit have adopted a smart operating model that makes the most of the reach of the Regionals group of papers, and the fact that, when it comes to dealing with national datasets, story ideas can be worked up efficiently by one central team before being offered to local teams.
Similarly, if something works well as an FOI request in one part of the country then in many cases it will work well elsewhere too.
The focus of my work with the unit was therefore on supporting the whole Regionals family. I did, though, find some data-led story ideas that suited the Manchester Evening News, and – coming from that part of the country – couldn't resist writing those myself. Stories as follows:
- an FOI-led splash on mobile phone thefts;
- an investigation into access to NHS dentists across Greater Manchester; and
- a story about a local council completing the digital equivalent of a vanishing trick – disappearing entirely from Google’s search listings.
There also seems to be widespread acceptance across the group of the benefits a good visualisation can bring to a web story, so I produced a couple of interactive maps, used by the MEN and the Newcastle Chronicle.
Another short post, just to link to some things I’ve done in the last two weeks – spent working with the digital news team of the Times and Sunday Times.
Much of my time was spent on a number of longer-term data journalism projects which – until publication – I won’t be posting about.
Needless to say, though, heavy use was made of Google Fusion Tables and ScraperWiki, as well as healthy amounts of Excel. The fortnight gave me the best reason I've had to date to experiment with scraping, so that was a really good thing.
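For a flavour of the kind of scraping involved, here is a minimal, standard-library-only Python sketch that pulls the text out of every `<td>` cell in a page. Real jobs on ScraperWiki would fetch live pages and cope with far messier markup; this just shows the basic extract step.

```python
from html.parser import HTMLParser

class CellScraper(HTMLParser):
    """Collect the text of every <td> cell in an HTML page."""

    def __init__(self):
        super().__init__()
        self.in_cell = False
        self.cells = []

    def handle_starttag(self, tag, attrs):
        if tag == "td":
            self.in_cell = True

    def handle_endtag(self, tag):
        if tag == "td":
            self.in_cell = False

    def handle_data(self, data):
        # Only keep non-empty text found inside a cell.
        if self.in_cell and data.strip():
            self.cells.append(data.strip())

def scrape_cells(html):
    """Return the stripped text of each table cell, in order."""
    parser = CellScraper()
    parser.feed(html)
    return parser.cells
```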
I also wrote a number of online news stories, ranging from the serious to the thoroughly silly season. Links to most of the articles I wrote are here:
All behind the great paywall of Wapping, but the first three paragraphs are yours for nothing.