Sunday, 26 October 2014

How we read maps and dataviz - new research needed?

There's a fairly long academic tradition of looking at how humans interact with maps but, in my view, there is a need to revisit such research in relation to the new wave of digital mapping and dataviz currently available online. Some of it is fantastic and some less so, but this isn't about being critical of the bad stuff. Instead, I'm hoping others will share what they've been doing or what they've seen (via @undertheraedar) to try to understand the effect of new dataviz/mapping on how we perceive/read maps - and what impact this might have on cognition/understanding of underlying issues. 

Early last year I had some discussions about this with a very helpful colleague in psychology at Sheffield - Megan Freeth - and I gave her one of my blog images to test with her eye tracking technology. The results are shown below, in sequence (click to enlarge). I've also put them together in a slide show if you want to download them all at once.


The original 3D image


Scan path from first 10 seconds of map viewing


Scan path for one minute of map viewing


Heat map showing areas focused on most


'Region of interest' analysis

I'm probably just not up to date with the research being done in this area, and I should say that I know of people across the world who have worked in these fields - e.g. Alan M. MacEachren and others at the GeoVISTA Center at Penn State, and this study from Brodersen et al at Risø National Laboratory in Denmark - but I'm not aware of what's been done in the last 4 or 5 years in particular to help us understand the effects of new approaches to mapping and visualisation on cognition and perception.

Are we understanding more because of the new wave of mapping and dataviz? Are we understanding less? Are we just enjoying how things look and being wowed by the technology more than we are critically engaging with the underlying content? Has the method become the message?

I'm as guilty as anyone of posting maps and images on twitter and this blog without necessarily thinking too much, though my aim is always to inform and engage. But as Morris Zapp says in David Lodge's Small World, "Every decoding is another encoding", and my visual 'decodings' of spatial data will always be 'encoded' by the viewer in ways I might not have expected - or even want. It's always interesting to see how people interpret things and whether this aligns with what we'd hoped. This perception issue might also come up tomorrow when one of my maps appears in the new HS2 report in the UK - we'll see.

Anyway, thoughts and insights welcome via @undertheraedar.



Thursday, 16 October 2014

The Urban Fabric of English Cities

[now updated, thanks to @udlondon - scroll to bottom of page]
Inspired by some mapping in the US by Seth Kadish, the availability of new GIS open data, and the fact that I love looking at patterns of urban form, structure and density, I have created a comparative graphic showing the building footprints of nine English cities, with London at the centre (just because it's biggest). I have done this in a very simple way, with all cities mapped at a scale of 1:125,000 in the full size versions (which are massive), plus one small scale bar and a little explanatory text. Here's what it looks like:

The urban fabric of English cities (black/red, medium res)

This graphic does a good job - in my view - of demonstrating the compactness or otherwise of the cities in question. It also illustrates how tightly-bounded some places are and how under-bounded others are. For example, Liverpool is very dense and compact in contrast to Leeds but this really is a boundary effect because the size of the local authorities differs so much. The urban area of 'Liverpool' extends far beyond the boundaries of the local authority area, which is what I show above. I wanted to compare the local authority areas rather than the wider city-region because I wanted to highlight this boundedness issue and compare like with like in terms of formal administrative areas. London is obviously a bit different so I've shown the 33 constituent parts of Greater London.

Take a closer look at the graphic by clicking on the two larger images below - one in white and one in black. They are both just a bit bigger than A0 paper size in their full size versions in the zipped folder below, so if you want to take a really close look, download them. I've also uploaded smaller-sized versions in the same folder. I deliberately didn't include more information on the graphic itself, but at the bottom of the post you'll see the population of each city in 2011 (which relates to the individual city images), plus its urban area and metropolitan area population. The population of Greater London in 2011 was 8.2 million (compared to 4.4 million for the other eight cities combined). The cities I selected are the English members of the Core Cities group, which now also includes Glasgow and Cardiff.


Click here for a full screen white version


Click here for a full screen black version

Download a zipped folder with black and white versions in different sizes.

Update: the @udlondon people got in touch via twitter to show their attempt at fitting the core cities inside the London boundary - as below - so this inspired me to try the same with the original data. The first image below is the original @udlondon artwork and the next one is my attempt using GIS. Finally, as a reminder that nothing is ever really new, I have added a similar map we found in the JR James urban image archive, which we launched last year. This version has 13 different cities.

A manual approach to GIS!

My attempt at the same thing, using QGIS - full size

Some of the boundaries were a bit different in those days


City          Population (2011)   Urban area   Metropolitan area
Birmingham            1,085,400    2,440,986           3,683,000
Bristol                 428,200      587,400           1,041,000
Leeds                   751,500    1,499,465           2,302,000
Liverpool               466,400      816,216           2,241,000
Manchester              503,100    2,240,230           2,556,000
Newcastle               280,200      879,996           1,599,000
Nottingham              305,700      666,358           1,543,000
Sheffield               552,700      640,720           1,569,000


Totals: the population of the 8 city local authority areas is 4.4 million, for their urban areas it is 9.8 million and for their metropolitan areas it is 16.5 million. I may compare metropolitan areas next time, but mapping this is a little more time consuming.

Saturday, 11 October 2014

Flow mapping with QGIS

[Now updated with sample data file - see Step 1.]
I've written quite a bit about flow mapping with GIS in the past, including on this blog and in a couple of academic papers. Previously, I'd used ArcView 3.2, ArcGIS 9 or 10 and MapInfo. MapInfo in particular has been my 'go to' GIS for mapping large flow matrices, thanks to a very short line of MapBasic code explained to me by Ed Ferrari. Others, such as James Cheshire, have used R to great effect, but this post is instead about flow mapping with QGIS, whose capabilities in this area have really impressed me. I've posted many of my QGIS flow maps on twitter, but in this post I want to explain a little bit about the method so others can experiment with their own data. Here's an example of a flow map created in QGIS - though in this case it's not a very satisfying result because of population distribution, county shape and so on*.

US county to county commuting

So, to the method. If you want to create these kinds of maps in QGIS, it's mostly about data preparation. I should also add that I currently use QGIS 2.4, but I believe the method is the same in any version. Here are the ingredients you need.

1. A file with some kind of flow data, such as commuting, migration, flight paths, trade flows or similar. There should be columns with an origin x coordinate, origin y coordinate, destination x coordinate, destination y coordinate, some other number (such as total commuters) and any other attributes your dataset has (such as area codes and names). Here's an example csv file of global airline flows, if you want to experiment - it's the one from the screenshots below. I put it together using data from OpenFlights - by combining the airports.dat and routes.dat files. 
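If you'd rather script this step than do it by hand, here's a minimal sketch of how an airline flows file like this could be assembled with pandas. The column positions for airports.dat and routes.dat are my reading of the OpenFlights documentation rather than anything guaranteed, so check them against your own copy of the files.

```python
import pandas as pd

# airports.dat has no header row; keep the airport ID and its coordinates
# (ID in column 0, latitude in column 6, longitude in column 7 - my reading
# of the OpenFlights docs, so check against your own copy of the file)
airports = pd.read_csv("airports.dat", header=None)[[0, 6, 7]]
airports.columns = ["airport_id", "lat", "lon"]

# routes.dat: the numeric origin/destination airport IDs (columns 3 and 5);
# '\N' marks missing IDs, hence the coercion and dropna
routes = pd.read_csv("routes.dat", header=None)[[3, 5]]
routes.columns = ["origin_id", "dest_id"]
routes = routes.apply(pd.to_numeric, errors="coerce").dropna().astype(int)

# attach origin coordinates, then destination coordinates
flows = (routes
         .merge(airports, left_on="origin_id", right_on="airport_id")
         .rename(columns={"lon": "origin_x", "lat": "origin_y"})
         .merge(airports, left_on="dest_id", right_on="airport_id")
         .rename(columns={"lon": "dest_x", "lat": "dest_y"}))

# collapse to one row per origin-destination pair, with a simple flight count
flows = (flows.groupby(["origin_x", "origin_y", "dest_x", "dest_y"])
              .size().reset_index(name="flights"))
flows.to_csv("airline_flows.csv", index=False)
```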

2. Once you have a file with the above ingredients, you then need to create a new column containing a WKT linestring for each row, in the format LINESTRING (origin_x origin_y, destination_x destination_y) - as you can see below. You don't actually need to call the column 'Geom' as I have, but when you import the file into QGIS it will ask you which column is the geometry one. You can create the new column in Excel using the 'concatenate' function - if you're not familiar with it, there are loads of explainers online - or script it, as in the sketch after the screenshot below.

This bit probably takes the most time
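For anyone who prefers a script to Excel, here's a minimal pandas sketch of the same step. The column names follow the hypothetical airline_flows.csv built in the sketch under step 1 - substitute your own.

```python
import pandas as pd

flows = pd.read_csv("airline_flows.csv")

# build the WKT linestring: LINESTRING (origin_x origin_y, dest_x dest_y)
flows["Geom"] = ("LINESTRING (" +
                 flows["origin_x"].astype(str) + " " + flows["origin_y"].astype(str) +
                 ", " +
                 flows["dest_x"].astype(str) + " " + flows["dest_y"].astype(str) + ")")

flows.to_csv("airline_flows_wkt.csv", index=False)
```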

3. Once you have your data in this format, you need to save it as a CSV so it's ready to import into QGIS. From within QGIS, you simply click on the 'Add Delimited Text Layer' button (the one that looks like a comma) and then make sure your settings look like the example below.

Make sure you click the right import button
Import CSV dialogue in QGIS - should be on WKT

4. Once you've done this, you simply click OK and wait a few seconds for QGIS to ask which CRS (coordinate reference system) you want to use. Select your preferred option, wait a few more seconds, and QGIS will display the results of the import. You can then right-click on the new layer and save it as a shapefile, or another format of your choice. In the screenshot example above, the file with c.60,000 airline flows took only about 10 seconds to appear on my fairly average PC running 64-bit Windows 7. I also tried it with 2.4 million lines and it only took about a minute. In my experience, ArcGIS normally can't cope with that many flows, and while MapInfo will handle them, it takes longer. QGIS also renders the result more nicely because it handles transparency in a more sophisticated way - with hundreds of thousands of flows you usually have to set the layer transparency to 90% or higher.
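If you'd rather script the import than click through the dialogue, the same thing can be done from the QGIS Python console. This is a sketch against the QGIS 3 API (an assumption on my part, since the screenshots above are from 2.4, where some class names differ); the file name and 'Geom' field follow the examples above.

```python
from qgis.core import QgsVectorLayer, QgsProject

# delimited text provider URI: comma-delimited file, WKT geometry in 'Geom',
# coordinates assumed to be lon/lat (EPSG:4326)
uri = ("file:///path/to/airline_flows_wkt.csv"
       "?delimiter=,&wktField=Geom&crs=EPSG:4326")

layer = QgsVectorLayer(uri, "airline_flows", "delimitedtext")
if layer.isValid():
    QgsProject.instance().addMapLayer(layer)
else:
    print("Layer failed to load - check the path and the wktField name")
```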

The results, once you've done a bit of symbolisation and layer ordering, will look like some of the examples below.

Rail flows


All commuter flows


Bus flows - no labels, obviously

* I'm still trying to make sense of the US county to county flow map. The spatial structure of the counties and the distribution of the population make it more difficult to filter, so the above example is just a very rough (and not very satisfying) example.

Wednesday, 3 September 2014

A national map of cycling to work

I've recently been doing some visualisation work with the newly released Census commuting data from 2011. I've produced maps of all travel to work, and of travel by car, train and bus. I've now done a map of cycling to work (below). This map is particularly interesting not only for the patterns it reveals but also for the strange long-distance flows we can see. I'm certainly not saying that anyone actually commutes by bike between Manchester and Bristol, as the map might suggest. Click on the big version and have a look around to see if you can spot anything interesting or particularly unexpected. A version with some place name labels can be found here.
This data comes from Question 41 of the 2011 Census form, which asked people to say how they 'usually' travelled to work in relation to the mode of transport which accounted for the largest part, by distance, of their journey. The results can look quite beautiful on a map, but they can also be confusing. Look closely at the map above and you'll ask yourself why there are so many long distance cyclists in England and Wales. More seriously, you might begin to question the validity of the data, the honesty of respondents or some other aspect of the results. 

The ability to interrogate data in this way is one of the strengths of visualising large datasets: we can often immediately identify anomalous patterns or results that confound expectations or are just plain wrong. I'm not entirely sure what's going on with the long-distance flows. Perhaps some people take their bike on the train and ticked the 'bike' option, despite the train journey being longer. Perhaps some people live in one part of the country during the week and cycle to work there, but their usual address - the one registered as their residence on the Census form - is elsewhere. I'm only speculating, but these are possible explanations.

In the image below, I've filtered the data so that only flows of 2 or more are shown. This significantly reduces the visual clutter, but also draws out stronger long distance connections between places such as Bristol and Manchester, and indeed Manchester and lots of other places. Take a closer look by clicking the link below this map. I've added some place names to this map to help with orientation.

Go to the full size version
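If you want to apply the same kind of 'flows of 2 or more' filter to your own data before mapping it, here's a minimal pandas sketch - the file and column names are hypothetical, so substitute whatever your own data uses.

```python
import pandas as pd

flows = pd.read_csv("cycling_flows.csv")      # hypothetical file name
filtered = flows[flows["flow"] >= 2]          # 'flow' = commuter count column
filtered.to_csv("cycling_flows_2plus.csv", index=False)
```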

I'd be keen to hear different interpretations of the data. You get similar results when you map the 'walk to work' data, so there's definitely something interesting going on with how people have answered the Census question and with the data we have to work with. I'm certainly not saying it's 'wrong', more that we need to understand exactly what it tells us. For now, I'll leave it at that.



N.B. Why didn't I include Scotland and Northern Ireland? The data are not out yet. It's not some ploy to exclude anyone and I know the blog title says 'national' so forgive me if that threw you. I intend to expand the analysis in due course.

Tuesday, 26 August 2014

Why you should start using QGIS

I've been a user of GIS since the late 1990s and in that time have mostly used ESRI software, such as ArcView 3.2 and ArcGIS versions 8 to 10. The first piece of GIS software I ever used was MapInfo 5 and I continue to use it now and again (in version 9.5 or above) - mostly for manipulating large datasets with hundreds of thousands or millions of records. I still really like both of these for different reasons so this post is definitely not a proprietary-GIS-bashing piece. It's just an encouragement to current GIS users to take a serious look at QGIS if you haven't already. I've been using it on and off for a couple of years and in that time have seen serious improvements. Most recently, I've done a good bit of mapping with it - as in the example below (commuter flows in Scotland, in case you're asking).

A flow map made in QGIS 2.4

There are many reasons to start using QGIS. The most obvious one might be that it is a free, open source, cross-platform GIS that can do many things as well as or better than paid-for software. Take a look at the QGIS Flickr map showcase for some more examples. Of course, it is possible to make stunning maps with other open source packages such as R, but the learning curve is really steep and many people don't have the time or inclination to get into it.

If I were to pick my four favourite features of QGIS, I'd have to go with the following:

1. The high quality map rendering and symbology options available to you - for example, QGIS handles layer and feature transparency in such a way that you can produce really attractive maps. QGIS also ships with so many nice looking, sensible colour schemes by default that producing quality maps is much easier. Anita Graser (QGIS author and guru) highlighted the way QGIS integrated ColorBrewer at version 1.4, for example. With version 2.4, you can also automatically invert colour schemes - the lack of which was one of the rare things that frustrated me in previous versions.


Flow map layout created in QGIS 2.4


2. The Processing Toolbox, which gives you access to a huge range of spatial analysis and data management tools. See the screenshot below for how it looks. If you want to add x,y coordinates to a polygon layer, this can be done really simply here, alongside plenty of other geocomputation tasks (e.g. calculating areas, line lengths and so on), and beyond the basics there are far more complex operations too. A small scripted equivalent of the basic tasks follows the screenshot below.

The Processing Toolbox in QGIS 2.4
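Here's the scripted equivalent mentioned above: a minimal PyQGIS sketch that adds area and centroid x,y fields to a polygon layer. It's written against the QGIS 3 Python console (an assumption relative to the 2.4 screenshots in this post) and the shapefile path is hypothetical.

```python
from qgis.core import QgsVectorLayer, QgsField
from PyQt5.QtCore import QVariant

layer = QgsVectorLayer("path/to/polygons.shp", "polygons", "ogr")  # hypothetical path

# add three new fields: polygon area plus centroid x and y
layer.startEditing()
for name in ("area", "cx", "cy"):
    layer.addAttribute(QgsField(name, QVariant.Double))
layer.updateFields()

for feat in layer.getFeatures():
    geom = feat.geometry()
    centroid = geom.centroid().asPoint()
    feat["area"] = geom.area()      # in layer CRS units (e.g. square metres)
    feat["cx"] = centroid.x()
    feat["cy"] = centroid.y()
    layer.updateFeature(feat)

layer.commitChanges()
```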

3. QGIS Plugins - which really are fantastic. The one I probably use the most is OpenLayers, which allows you to add a large number of different base layers to your QGIS map - from Google streets and Bing Roads to OpenStreetMap and Stamen Toner layers. As I write, there are currently 214 available plugins listed in QGIS 2.4. Another fantastically useful plugin is Table Manager, which allows you to very quickly change field headers in attribute tables.

The Plugins menu in QGIS 2.4

4. Flow mapping in QGIS. This is something I've done a lot of over the years, but recently I've been blown away by the simplicity and elegance of the way QGIS can convert massive CSV files into large flow maps. MapInfo has served me very well in the past - and is still amazing when you use a single line of MapBasic - and ArcGIS has improved recently, but it still has a way to go. All you need to do in QGIS is format a CSV file with one Geom field containing WKT linestrings - the LINESTRING keyword plus x, y coordinate pairs - formatted as in the image below. Once you import this file using the Add Delimited Text Layer tool, the job is done. The results - following a bit of styling - can be amazing. What I love even more about this is that I picked up the tip from a StackExchange post by a 73 year old retiree! Isn't the internet amazing?


This csv file is easily turned into images like the one below

Travel to work flows - car and train

As I've been writing this I've mentally added several more things to the list, but I'll stick with the above as my favourite four for now. I do, however, also love the labelling options, the coordinate system selector, the vector tools and the fact that the user community is so helpful. I still find the Print Composer a bit fiddly for laying out maps, but this is a relatively minor issue.

I expect I'll always take a portfolio approach to working with GIS software and continue to use QGIS alongside ArcGIS and MapInfo, but I'd be a bit lost without QGIS now. I'm probably quite behind the curve with all this and should have got into QGIS a long time ago, but it's still relatively early days in the mass take-up of the software, though some UK councils are now big users.

So, why should you start using QGIS? Because it's absolutely fantastic, really powerful and pretty straightforward to learn if you already know your way around another GIS. Oh, and it's free - though it costs money to develop so you can always donate here.

Friday, 18 July 2014

Mapping Blight in the Motor City

In my preparations for the launch of our MSc in Applied GIS, I've been putting together lots of case studies of GIS in action. Luckily for me, this has coincided with the launch of the Motor City Mapping project in Detroit, part of a wider attempt by the city to understand and prevent urban blight. One part of this project has produced an amazing survey dataset covering nearly 380,000 land parcels in the city. An overview is provided by Motor City Mapping in the following graphic.

Source: www.motorcitymapping.org

This data was generated by survey staff over a short period during winter 2013/14 and is probably the most detailed parcel-level city survey carried out in recent times. For more about the project, take a look at the short video below. One great feature - in addition to all the rest - is that the final dataset contains a link to the photo taken of each land parcel by the survey staff (residents of Detroit surveyed their own neighbourhoods). The entire dataset is pretty big - close to 1GB - but it can be downloaded via this page and used in your GIS. This direct link worked for me.


The image below shows you what it looks like when you map the data using the land use category. 

Link to bigger version

Finally, since they very cleverly included a photo url for each land parcel in a separate column, I decided to extract a small area and put it in a web map using CartoDB, so you can click each land parcel and see what it looks like, along with some of its characteristics. I extracted the data for Grand Boulevard since it's an important street in Detroit's history, home to landmarks such as Lee Plaza, Motown Records and Henry Ford Hospital. Click on the image below to go to the full size version. You'll see that I've coloured the map by building condition - mostly good on Grand Boulevard - and when you click on a land parcel you'll see an image of what's on it plus details about condition, occupancy and use. I also included the date the survey was carried out.

Full screen version

This is all part of a wider city planning project called 'Time to End Blight', and you can read more about it on their web pages. The report is an impressive piece of work produced at a really difficult time in Detroit's history, so it's great to see so many people coming together for this. If you have any interest in cities, urban blight, regeneration or revitalisation, then I suggest you take a closer look at the report and its recommendations in particular.


Friday, 23 May 2014

The Wonderful World of Open Access

I'm one of the editors of an open access journal, but that's not what this post is about. Instead, it's about the wider world of open access, which I've blogged about before, with some charts and stats. The web is full of opinions on open access, with comments from sceptics, advocates and others somewhere in between. I'm really excited by open access publishing, but of course - like any publishing model - it's not perfect. Over the past few years I've been trying to learn a lot more about it and to really understand what the open access landscape looks like.

In doing so, I've become pretty familiar with where to find information and for this purpose DOAJ is my first port of call. What you find out very quickly is that there are literally thousands of open access journals - about 10,000 - and that they constitute a very diverse, colourful group. One way to demonstrate this is to look at the metadata on the DOAJ website. Here you can find, amongst other things, a list of URLs. So, I took a screenshot of them all - the results of which you can see here (or by clicking the image below - it takes a while to load).



Why on earth did I do this? Partly as a little spare time project to see how easily it could be done but mostly because I wanted a quick way to see what all the websites looked like - i.e. how many are full-blown fancy websites backed by international publishing houses and how many are more small-scale ventures. It also allows you to more easily identify families of open access journals (scroll down and you'll see quite a bit of this). This doesn't necessarily say anything about the quality of journals (that's for readers to decide) but it does provide a visual overview in a more accessible way. Looking through the full list of 10,000 websites would take a little longer! I used a Firefox extension for this task, and it did take quite a while. The DOAJ spreadsheet I used is from late in 2013 so some more recent journals are not included. To finish with, here are some of my favourites...


'Fast Capitalism' - I love the name and the musical intro:

'Studies in Social Justice' - nice cover shot:

'International Journal of Dental Clinics' - so many languages:

'Reading in a Foreign Language' - I just like this idea:

Not sure what caught my eye about this one, but I like it: