Wednesday, 17 December 2014

The Era of Radical Concrete meets Twitter

Back in 2008, when I moved into my office at the University of Sheffield, there were three mashed up boxes sitting under the desk. My curiosity got the better of me and I spent the next few hours looking through thousands of slides in hanging folders. This turned out to be the image archive of Jimmy James, the first Professor of Planning in our Department and former Chief Planner in England.

In the summer of 2013, after securing funding from the University of Sheffield Alumni Fund, I was able to hire Joe Carr and Philip Brown (recent graduates) to digitise the entire collection and upload it to our dedicated JR James Archive Flickr site. After more than 2 million views and a year online, the BBC magazine did a piece on the archive, called 'The era of radical concrete', which created huge interest and led to follow-ups from residents of the places featured in the images. But one image in particular puzzled us... It was simply labelled 'Housing Scotland' (see below for the original slide and the online scanned image) and we just couldn't find an exact location for it, but we thought it was in the New Town of Cumbernauld, 15 miles or so from Glasgow.

Note the 'MAY 67' date stamp

Are you the boy? Let us know!
These images were used in teaching planners and urbanists for many years but some of them were filed in the wrong place when we found the slides. We did well to track down the locations of most images, since between us Philip, Joe and I have a pretty good knowledge of most of the UK, and we had assumed the one above was Cumbernauld because that's where it appeared in the archive.

The New Towns section of the original archive

However, our interest was piqued when a commenter on Flickr said it definitely wasn't Cumbernauld. We tried a few times on Twitter, but to no avail. Even @MunicipalDreams couldn't help us. So, I tried one last time yesterday on Twitter and after 17 retweets we had suggestions that it was definitely NOT South Africa (!), but it might be 'somewhere in Cumbernauld', Falkirk, Dyce, Cwmbran, Killingworth, Peterlee or Saltcoats - among other places. I spent half the night looking round Craigavon, Crawley and Harlow, just to be sure. Nothing.

But this is where Twitter and the power of the crowd came to the fore. Actually, I should say this is when the determination of Michael Coates came to the fore, because ... wait for it ... he found where the photo was taken. It is in fact Abbots View in Haddington, East Lothian - as you can see from the embedded Street View image below. I've shown the 2008 Street View here as it is a better match for location and light but this is definitely it. The perspective of the Google cameras is a little different from Jimmy's original lens but it's a clear match so I declare Michael Coates winner of the Internet today and forever, though the Visual Resources Centre at Manchester Metropolitan University also deserve credit for joining the hunt.




So, that's one mystery solved. But what about the other mystery? Who is the boy on the bike?

By my estimation he'd be about 50 now and if anyone tracks him down we'll really be impressed. In the meantime, if you haven't already looked at the JR James Archive, what are you waiting for?


Thursday, 11 December 2014

Are you STILL here? Big Data's bad smell

Rambling introduction
Big data, big data, big data, big data. Yes, we're all probably sick to death of hearing the term 'big data' now* and somewhere along the line the meaning has disappeared and whatever feelings the term originally engendered have morphed into angst, disillusionment, embarrassment and general scoffing. Maybe this is because the famous 3 Vs of big data (vanity, vanity and vanity**) have not actually produced many good examples that can help explain why and how big data is useful. Actually, this kind of thing is pretty common in tech, academic, business and all sorts of other fields. A new concept comes along, people start jumping on the bandwagon, people start jumping off the bandwagon, the bandwagon crashes and burns, and everyone says 'I was never on the bandwagon and, anyway, it was going in the wrong direction'. Despite the sarcasm and general rambling nature of this opening paragraph, below the floppy disk I'm going to make the case for not abandoning 'big data' at all - even if we do decide to stop using the term. I may also throw in even more bad metaphors, analogies and bad writing.

A 'big data' enabled storage device

Shark Jumping
In 2011 an article appeared on dbms2.com saying 'big data' had 'jumped the shark' (see below, and here for definition) and it makes for interesting reading because much of what it says is true. However, like most articles about big data, the core of the critique is not always directed towards the data or the analytical process but towards the hype. The comments section of this article is also very interesting because Doug Laney, the originator of the much-cited 3Vs concept in big data, has a few things to say. Fast forward to September 2014 and Techsling ask whether big data has jumped the shark (conclusion seems to be yes, probably), whereas in 2013 Wired said not to worry because big data had definitely not jumped the shark. However, my favourite article in this vein has to be the Syncsort piece entitled 'has big data nuked the fridge', which actually contains a lot of common sense from a real 'big data' person. As for me, I completely agree that the hype has gone far too far. However, let's keep working on large datasets with powerful machines and then let people know when we get some useful or interesting results. And let's not use big data as an excuse for clever, anti-social people to avoid speaking to real people.

Big data didn't make it

Voices of reason
When you are immersed in hype, it's very important to find some voices of reason. Two prominent voices that I like the sound of are Rob Kitchin and David Lazer. Rob Kitchin is a professor at Maynooth University in Ireland and has written extensively on the need to approach big data sensibly and with a healthy degree of critique; most notably in his 2014 book The Data Revolution. My personal favourite is his piece from June 2014 entitled 'Big Data, new epistemologies and paradigm shifts' where he explores Anderson's 'end of theory' piece and argues both that big data is disruptive but also that there is 'an urgent need for wider critical reflection'. Rob's stance is particularly interesting from a social science perspective but actually I find his conclusions resonate much more widely.

My other voice of reason in big data is David Lazer, professor in Political Science and Computer and Information Science at Northeastern University and Visiting Scholar at the Kennedy School at Harvard, who wrote a great piece with colleagues on 'The Parable of Google Flu: Traps in Big Data Analysis' for Science in March 2014. Most people with an interest in big data probably know the story of Google Flu Trends because it made headlines for the wrong reasons in February 2013. Lazer et al. use this story to bring some reason to the big data debate and critique 'big data hubris'. Interestingly, they also talk about the need to incorporate 'small data':

"However, traditional “small data” often offer information that is not contained (or containable) in big data, and the very factors that have enabled big data are enabling more traditional data collection."

The conclusion of the Lazer et al. piece is not that we should abandon big data but rather that we need to understand what the recent data revolution means and then use innovative analytics to move towards a clearer understanding of our world.


Big dog, small dog
I have posted a photo below of a big dog and a small dog. They are both dogs. I can see why some people would get excited about big dogs. They can fetch bigger sticks. They can keep burglars at bay more easily and they are stronger, but they do take up more space. But the small dog has a really loud bark, can go places big dogs can't, knows just as much as the big dog and takes up less space in your house. They do require different approaches in relation to being looked after, but that's another issue altogether. Someone has even produced a nice visual representation of different kinds of big and small dogs.

They are both dogs


How to proceed
Apologies for the big dog, small dog nonsense above but I've been sucked into the big data debate over the past few years and it always makes me think of this. I don't even have a dog. But I do have lots of data and a fancy computer and this is what I have in common with lots of other people who are 'doing big data'. So, to conclude and by way of trying to say something useful about big data, here are a few final bullet points...

  • Let's accept that the hype around big data has gone too far and put that to one side. It's not novel or useful to say that 'big data has jumped the shark', 'big data is all hype', 'big data is dead' or other similar comments. The people who are working with big data and have a critical mind (Kitchin, Lazer et al.) already know all this.
  • Let's try to take a more nuanced approach to understanding what big data is***, what it is not and what it can and cannot do - along the lines of what Kitchin refers to as a 'contextually nuanced epistemology'.
  • We ought to understand that the reason 'big data' emerged was because of enhanced processing power in computers which arrived at roughly the same time as access to very large datasets. This created the ability to ask questions of data that we previously could not answer because of problems of 'small tools'. But it hasn't really led to many transformative developments that people know about. This needs to change.
  • Let's start with big questions rather than big data. A very obvious point but the criticism that big data so far has been a solution in search of a problem is in some cases justified. 
  • Let's let the term 'big data' fade into the distance and keep working with large datasets and powerful computers on big societal challenges that we need to find the answers to (i.e. keep doing big data but stop calling it that).
  • Finally, let's keep in mind this statement from a Financial Times Magazine piece on big data from early 2014: "a theory-free analysis of mere correlations is inevitably fragile. If you have no idea what is behind a correlation, you have no idea what might cause that correlation to break down".



* or 1, 3, 5, 7 or 10 years ago depending upon how far ahead of the curve you are.
** I think that's right...
*** data is, data are... I like is, even if some say it's wrong
This blog is written in a somewhat rambling, tongue-in-cheek style just to make a point

Wednesday, 3 December 2014

WIMD 2014 Shapefiles and Maps

The Welsh Index of Multiple Deprivation 2014 was published on 26 November, using more up-to-date and improved indicators. The Welsh Government have provided some nice interactive mapping, but James Trimble's version is, I think, even better. I've done basic interactive versions in the past, but today I just want to share the raw GIS data and a few maps, for anyone who is interested in either looking more closely at their area or doing a bit of spatial analysis themselves. First of all, here are a few basic WIMD maps, clipped to building outlines (click images to enlarge).








I made these in QGIS, using an automated atlas production technique I've described on the blog before. If you are looking to produce some WIMD maps, you might want to try this method. I've also produced some WIMD 2014 maps in standard choropleth format, as you can see below. These are okay in densely populated areas but I find them misleading for larger rural areas where there are not many people.
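If you'd rather script the clipping step than do it in QGIS, here's a rough sketch of the same idea in Python with geopandas. The file names and the rank column are placeholders rather than the actual WIMD field names, so adjust to whatever you download.

```python
# Sketch: clip WIMD 2014 areas to building outlines so the colours only appear
# where buildings actually are. File and column names are placeholders.
import geopandas as gpd

wimd = gpd.read_file("wimd2014_lsoa.shp")           # small areas with WIMD ranks
buildings = gpd.read_file("building_outlines.shp")  # building footprint polygons

# Both layers need to share a coordinate reference system before overlaying
buildings = buildings.to_crs(wimd.crs)

# Keep only the parts of each area covered by a building,
# carrying the WIMD attributes over to the clipped pieces
clipped = gpd.overlay(buildings, wimd, how="intersection")

# Quick check: shade the clipped buildings by the (placeholder) rank column
ax = clipped.plot(column="wimd_rank", cmap="viridis", linewidth=0)
ax.set_axis_off()
```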




I know a lot of people across the public sector in particular will be looking closely at the data, so I thought it would be helpful to make the underlying shapefiles publicly available since the data are open. If you click on the link below you'll be taken to a download page via Google Drive. Any questions, then feel free to get in touch.
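And if you grab the shapefile and just want a quick first look before doing anything fancier, something along these lines should work. Again this is a hedged sketch in Python/geopandas rather than QGIS, and the filename and rank column are assumptions about the downloaded data.

```python
# Quick choropleth from the downloaded WIMD 2014 shapefile (names are assumptions).
import geopandas as gpd
import matplotlib.pyplot as plt

wimd = gpd.read_file("wimd2014_lsoa.shp")
print(wimd.columns)   # check what the rank/decile fields are actually called

fig, ax = plt.subplots(figsize=(8, 10))
wimd.plot(column="wimd_rank", cmap="RdYlBu", linewidth=0.1, edgecolor="grey", ax=ax)
ax.set_axis_off()
ax.set_title("WIMD 2014 - overall rank")
plt.savefig("wimd2014_rank.png", dpi=300, bbox_inches="tight")
```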



Thursday, 13 November 2014

The Urban Fabric of Los Angeles

I've recently become a bit obsessed with looking at the urban fabric of different cities. I've looked at building footprints in English cities and Scottish cities, and collected data on a number of US cities, including San Francisco, New York, Chicago and Los Angeles. I've blogged and tweeted about this quite a bit over the past few months and the reason I'm so fascinated by it is that it provides some really interesting visual insights into the spatial structure of cities. I also find it fascinating to make visual comparisons between cities, as in my 'Urban Fabric of English Cities' blog post from October 2014. Now I'm going to look at Los Angeles. I've only been there once but I found it fascinating in so many ways (urban structure; fragmented political, social and racial geography; freeways; congestion; In-N-Out Burger...). Famously, Los Angeles County is also the most populous of the 3,144 counties of the United States, with a population of just over 10 million. The first image below shows all 3,000,000 or so buildings in LA County - all 4,000 square miles of it (for comparison, London is 607 square miles).
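For anyone who wants to try something similar with the building outlines data linked at the bottom of this post, here's a rough sketch of how a footprint map like this can be drawn in Python with geopandas and matplotlib. The file names are placeholders and you'll need plenty of memory for 3 million polygons.

```python
# Sketch of an 'urban fabric' map: every footprint drawn as a tiny black polygon
# with no outline. File names are placeholders for the LA County data.
import geopandas as gpd
import matplotlib.pyplot as plt

buildings = gpd.read_file("la_county_building_outlines.shp")   # ~3 million polygons
city = gpd.read_file("city_of_la_boundary.shp").to_crs(buildings.crs)

fig, ax = plt.subplots(figsize=(12, 12))
buildings.plot(ax=ax, color="black", linewidth=0)      # fill only - no edges at this scale
city.boundary.plot(ax=ax, color="red", linewidth=0.8)  # highlight the City of LA
ax.set_axis_off()
plt.savefig("la_urban_fabric.png", dpi=300, bbox_inches="tight")
```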

Los Angeles County - population 10 million (higher res)

Anyone who knows anything about American cities will understand that the concept of a 'city' and a 'boundary' is not necessarily as straightforward as it might appear - well, at least not to me as a British person. The City of Los Angeles is a rather odd shape and has a population of about 3.8 million. The image below shows the urban fabric (building footprints) for the City of Los Angeles.

The City of Los Angeles - population 3.8 million (full resolution)

Finally, I've produced a map showing the wider Los Angeles metro area (within Los Angeles County), with the urban footprint of the City of Los Angeles highlighted. The wider metro area extends beyond the boundary of Los Angeles County and has about 13 million people. What's the point of all this? Partly a mapping experiment and partly to illustrate the idiosyncrasies of political and administrative boundaries in urban areas compared to the underlying urban fabric. This is very acutely demonstrated in the case of Los Angeles.

The City of Los Angeles in context (higher res)

If you like these images and want higher resolution versions, just get in touch via twitter - @undertheraedar - or e-mail. I've linked to a high res version for the LA city map.

Data source: Los Angeles County - Countywide Building Outlines

Friday, 7 November 2014

Automatic map production with QGIS

This longer post explains how to automate map production in QGIS using the atlas generation tool. It's based on QGIS 2.4 but will work in later versions and some earlier versions too. I've found this tool immensely useful and a great time saver so I'm sharing the method here. Similar outcomes are possible with ArcMap's Data Driven Pages but I find that the rendering quality of QGIS is better, so I use this approach. Before going any further, here are some example outputs from the atlas tool - they show some recent mayoral election results from Toronto, which I saw on twitter via Patrick Cain. You can download the PDF mapbook and the individual image files below.

Download the complete map book of results (150MB PDF)

Download individual image files for the whole city (58MB)

I'll explain how you can produce multiple maps and also how I achieved some of the effects in the image above. This tutorial tells you how to produce one map per page. For multiple maps per page (as above) I'll say more at the end. There are two data layers in my map - one layer for the 44 wards of Toronto and one for the election results. The election results were posted on CartoDB by Zack Taylor from the University of Toronto Scarborough with his interactive map so that's the data I'm using here. I've used this dataset because a) it's very interesting and b) I wanted to compare a small number of variables across single areas of a city and produce a map book from it.

I added my layers to QGIS and then styled them as I wanted. Basically, this involved copying the ward boundary layer and making the top copy a hollow black outline by changing the fill to 'No Brush' and setting a black outline of about 0.5 width. Here's what I did to create the glow effect around each ward... On the copy of the ward layer I symbolised it using the 'Inverted Polygons' renderer and then in the Sub-renderer options in the same window I selected 'Rule-based' (screenshot below).




I then clicked to edit the rule so that my map would look as I wanted it to when I moved to the atlas production phase in the Print Composer later on. You just need to select the colour patch and then click on Edit Rule (the little pencil and paper icon, as shown below). I wanted a glow effect around each individual ward when the atlas zoomed to that feature, so using tips from Nyall Dawson on shapeburst fill styles in QGIS 2.4, Hugo Mercier on inverted polygons and Nathan Woodrow on highlighting current atlas features, I managed to achieve this - note the text in the Filter box in the second image below. You'll see that in the second screenshot I've symbolised the layer using a black to white shapeburst (this creates the black glow effect outside the active polygon, since we're using inverted polygon symbology).

The final thing I did was to go back to the Fill colour patch on the second image below and change the transparency to 33%. This means that you'll be able to see surrounding areas but with a 'lights off' effect. You might find that when you do this your layer completely disappears but don't worry; it will come back on when you move to the Print Composer.




I then symbolised my Toronto 2014 mayoral election results layer using a graduated colour scheme using 'Pretty Breaks' so that it had breaks at 10%, 20%, 30% and so on. I then made sure I saved my project (!) and moved to the print composer (CTRL+P) and added a map to my page. 
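As a footnote for anyone working outside QGIS, here's a trivial, hedged illustration of what breaks at 10%, 20%, 30% and so on amount to, using made-up vote shares in Python/pandas rather than anything from the Toronto data itself.

```python
# Toy example of 10% class breaks; the vote shares below are made up.
import pandas as pd

share = pd.Series([12.4, 37.8, 55.1, 68.9, 23.0])   # hypothetical % vote shares
breaks = list(range(0, 101, 10))                     # 0, 10, 20, ... 100
print(pd.cut(share, bins=breaks))                    # assigns each value to a class
```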

This is where it gets a little confusing if you're new to it. Once the map is added, go to the Atlas generation tab on the right and click the 'Generate an atlas' box. You then need to set a 'Coverage layer'. This serves as the positioning device for each atlas page, so that when you produce an atlas QGIS will zoom to the extent of each feature to create a map for that area. So, in this example, I specified the coverage layer as the Ward boundary layer (the top one) because I wanted a map for each ward. Still on the Atlas generation tab, go down to Output and you can set to sort the atlas pages by features from your coverage layer. In my case I ticked the 'Sort by' box and used the NAME field so that my atlas would be organised alphabetically by neighbourhood name. Nothing will happen yet. I then went back to the Item properties tab and scrolled down until I could see 'Controlled by atlas' and then clicked that box. 

There are a number of options here but I chose a 'Margin around feature' of 25%. This means each map will zoom to your area and leave a nice margin around the ward - useful if you want to leave space for other map elements such as a legend. To finally make something happen, go to the Atlas menu and then select 'Preview Atlas' - the map will then zoom to your first feature - as you can see in the example below. If you then click on the blue Atlas preview arrows you can preview successive pages and the map will zoom to each feature in turn. You can also see that the inverted polygon glow is now active only for each active atlas feature (that's what the $id = $atlasfeatureid filter does).



I then added a text item to serve as a title, but I wanted this to change automatically as I moved through the atlas. So, once I had added the text item, I deleted the default text, clicked on 'Insert an expression' below the text box and used the Function list to insert the 'NAME' field in the Expression box (as below). What this means is that, as you go through each successive page of the atlas, QGIS will enter the name of the ward currently in view. A good tip here is to size your text box and font so that the longest name in your dataset still fits within the box. You can spot mistakes in the Atlas preview but if you're generating hundreds of pages this is not always practical.



If you close your project and open it up again, remember that you'll have to go to Atlas, Preview Atlas again. Now you just need to take some time to add any other map items you want, such as a legend, north arrow, and so on. Scale bars are a bit problematic but I don't normally use them for this kind of map. At this stage you'll want to check the Composition tab to check on the export resolution. Obviously, the higher it is, the longer it will take to export.

Finally, go back to the Atlas generation tab and look down to the Output options. By default, QGIS will call your individual map exports 'output_1', 'output_2' and so on but you can click the little expression button here and choose to name your individual files using a field from your Coverage layer. I did this when I exported my maps by using the NAME field and each file now has the name of a Toronto neighbourhood: immensely useful when you're generating hundreds of maps.

If you tick 'Single file export when possible' you'll get (e.g.) one big PDF rather than individual files for each area in your Coverage layer. I chose this option in the example at the start with 4 maps per page, but it creates large file sizes.

That's pretty much it. I have not explained all the basics here because I'm really aiming this at people with experience of QGIS, but feel free to get in touch if you need any tips. This kind of thing can also be done programmatically in R, but since I don't have the coding skills of Alex Singleton et al. I'm using QGIS.
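For what it's worth, here's a rough sketch of what that scripted route could look like in Python with geopandas and matplotlib (rather than R or PyQGIS). The file names, the NAME field and the pct_tory column are stand-ins for the Toronto layers described above, and it only reproduces the zoom-to-feature-with-a-margin loop, not the glow styling.

```python
# Sketch of an 'atlas' loop: one exported map per ward, named after the ward.
# File and column names are stand-ins; the QGIS styling above is not reproduced.
import geopandas as gpd
import matplotlib.pyplot as plt

wards = gpd.read_file("toronto_wards.shp")
results = gpd.read_file("toronto_mayoral_results.shp").to_crs(wards.crs)

for _, ward in wards.sort_values("NAME").iterrows():
    fig, ax = plt.subplots(figsize=(8, 8))
    results.plot(column="pct_tory", cmap="Blues", ax=ax)   # e.g. % voting Tory
    gpd.GeoSeries([ward.geometry], crs=wards.crs).boundary.plot(
        ax=ax, color="black", linewidth=1)
    # zoom to the ward with a margin, like 'Margin around feature' in QGIS
    minx, miny, maxx, maxy = ward.geometry.bounds
    pad_x, pad_y = (maxx - minx) * 0.25, (maxy - miny) * 0.25
    ax.set_xlim(minx - pad_x, maxx + pad_x)
    ax.set_ylim(miny - pad_y, maxy + pad_y)
    ax.set_title(ward["NAME"])
    ax.set_axis_off()
    fig.savefig(f"{ward['NAME']}.png", dpi=150, bbox_inches="tight")
    plt.close(fig)
```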

Finally, here's the zipped folder (58MB) containing 44 neighbourhood maps of the Toronto 2014 Mayoral Elections - with 4 maps on each page relating to % voting Ford, Tory, Chow and Other candidates. Feel free to use these as you wish. I didn't do this as a Toronto mapping project but the results do look quite nice in my opinion.

People in this ward seem to like John Tory


Want multiple maps on each atlas page, each of which show a different variable?
To achieve the effect I have in the images at the top of this post, all I did was create a smaller map item in Print Composer using the symbology I wanted for the electoral results layer (e.g. % voting Ford), add a legend for that frame and then lock the layers for that map item (see below). I would then create a duplicate of the electoral results layer, add a new map frame to Print Composer and repeat the process. Just re-symbolising the same layer will not work. Once I am happy with how a map looks, I simply lock the layers on Item properties so that it doesn't change. I wanted to show a little inset map to indicate the general location of the ward within the city of Toronto and I did this in a very similar way. The main difference here is that in the 'Controlled by atlas' options I used a fixed scale.




Notes: apologies if I've made any of this too complicated or if I've missed anything. Do get in touch if you notice any errors. I'm sorry if the map projection is not the one Toronto natives would use or if I've made any other foreigner gaffes. The point of this was really to demonstrate the technique using an interesting recent dataset so hopefully I've achieved that.


Tuesday, 4 November 2014

Manchester: a "Northern Powerhouse"

The announcement yesterday from the Chancellor that we need a "Northern Powerhouse" in England was greeted with much enthusiasm - and also a good bit of cynicism, given that it's not far from the General Election. After the announcement, the BBC asked whether more English cities should be like Manchester. What I wanted to know was how this nascent northern powerhouse compared to the already established southern one (London, I think). We all know the numbers - well most do - but what about spatial scale, density and visual comparisons? Given a previous bit of mapping on the urban fabric of England and the fact that I've been thinking about the 'underbounded' nature of the City of Manchester (as opposed to the city region), I thought I'd go a bit further with this post.

The first image here shows how Greater London compares to Greater Manchester - they're mapped at the same scale - and I've also put the Greater Manchester boundary over Greater London and vice versa, just to show that they're not that radically different in size.

Greater Manchester - a large urban area (higher res version)

I've also produced the same image without the respective boundaries overlaid on each city, just to provide a quick visual comparison at the same scale.

Click for higher resolution image

Finally, in order to demonstrate the way in which the City of Manchester is really 'underbounded' in relation to its wider city region, I've shown just the urban fabric of the City of Manchester (i.e. building footprints) in relation to the ten local authority areas of Greater Manchester, beside Greater Manchester as a whole. 

Click for bigger version

I have some even higher resolution images if anyone is interested - if so, feel free to get in touch via e-mail or twitter (@undertheraedar). I just wanted to produce these images to illustrate the fact that Greater Manchester is actually very big (about 80% of the size of Greater London in area) though of course it has far fewer people (about 33% of the population). It does seem like the most likely 'northern powerhouse' in England and it will be very interesting to see how things pan out over the coming years.


Sunday, 26 October 2014

How we read maps and dataviz - new research needed?

There's a fairly long academic tradition of looking at how humans interact with maps but, in my view, there is a need to revisit such research in relation to the new wave of digital mapping and dataviz currently available online. Some of it is fantastic and some less so, but this isn't about being critical of the bad stuff. Instead, I'm hoping others will share what they've been doing or what they've seen (via @undertheraedar) to try to understand the effect of new dataviz/mapping on how we perceive/read maps - and what impact this might have on cognition/understanding of underlying issues. 

Early last year I had some discussions about this with a very helpful colleague in psychology at Sheffield - Megan Freeth - and I gave her one of my blog images to test with her eye tracking technology. The results are shown below, in sequence (click to enlarge). I've also put them together in a slide show if you want to download them all at once.


The original 3D image


Scan path from first 10 seconds of map viewing


Scan path for one minute of map viewing


Heat map showing areas focused on most


'Region of interest' analysis

I'm aware that I am probably just not up to date with the kind of research being done in this area. I do know of people across the world who have worked in these fields - e.g. Alan M. MacEachren and others at the GeoVISTA Center at Penn State, and this study from Brodersen et al at Risø National Laboratory in Denmark - but I'm not aware of what's been done in the last 4 or 5 years in particular to help us understand the effects of new approaches to mapping and visualisation on cognition and perception.

Are we understanding more because of the new wave of mapping and dataviz? Are we understanding less? Are we just enjoying how things look and being wowed by the technology more than we are critically engaging with the underlying content? Has the method become the message?

I'm as guilty as anyone of posting maps and images on twitter and this blog without necessarily thinking too much, though my aim is always to inform and engage - but as the protagonist in David Lodge's Changing Places says, "Every decoding is another encoding" and my visual 'decodings' of spatial data will always be 'encoded' by the viewer in ways I might not have expected - or even want. It's always interesting to see how people interpret things and whether this aligns with what we'd hoped. This perception issue might also come up tomorrow when one of my maps appears in the new HS2 report in the UK - we'll see.

Anyway, thoughts and insights welcome via @undertheraedar.