Data Visualisation: Lonely Hearts Club

My data visualisation life outside work is missing something. I’m lonely. The hours I spend hunched over the PC visualising data remain unfulfilling. When I’m not “vizzing” the rest of my time is spent on social media networks with other single vizzers. We all pretend we’re happy being single, but deep down I know many of us aren’t. I think it’s important to talk about the loneliness.

You see I’ve spent years now without an audience. At first it was fun, I had the freedom to do what I wanted when I wanted; I didn’t have to worry about pleasing the other half. I spent so many weekends on the equivalent of a boys night out, visualising random datasets, where I splurged out having fun and not really caring about the consequences. Usually I was in the company of lots of other singles and we had a blast. I even had a few meaningless relationships out of those nights, I hope they prepared me for what it’s like to be in a real relationship but I worry they taught me bad habits. After all those nights were all about impressing my mates, not my prospective partner, and so while the results were impressive I’m not sure either of us got any long term value out of the fling.


Having an audience, so we’re told, is the norm. Articles everywhere tell us how to keep our audience when we’ve found her, but there’s never any clue in them about how to find one in the first place. “Know your audience” everyone says, and every time I hear that a little piece of me dies because I know so many people who don’t have one.

A life in Data Visualisation without an audience is hard. I try my best but I end up vomiting data points and facts onto a page in an attempt to make something meaningful. I make them engaging, I add pictures and I try to piece a story together, but if I’m honest it’s nothing more than a bit of data porn. Something I know my fellow singles will find entertaining, briefly, but that will be quickly binned as they click on looking for something a little bit more hardcore.

Recently I’ve been attending a few singles nights with the aim of finding a long-term partner / audience. Last weekend I was at #OpenDataCamp where I made an appeal for an audience, a user, someone, anyone who I could work with to help solve real issues with visualisation. Yes, I know they’d give me problems and challenges, but I want to do something meaningful; I think I’m ready for some commitment. Maybe I came across as desperate, because no one was interested. Still, it was fun; I met plenty of people looking for the same thing as me from a slightly different angle. They had the data but also no audience…some even suggested that if I found someone then they could join us in a threesome. I liked the sound of that, but perhaps having three in the relationship will only complicate things more….


Ultimately I guess everyone wants to settle down like me, but many of my older friends have settled into the single life as permanent bachelors. Some of them I never hear from; it’s really sad to see people disappear because they couldn’t find an audience. I wonder where they go? Maybe they found one and never told me…. Others are happy telling others how to have productive relationships without having one themselves. Still others have taken themselves off the market and thrown themselves into work where they can have real relationships; again, we don’t see them much anymore. Yes, some of the old timers still join us on boys’ nights out, but if I’m honest it’s a bit sad seeing them out with the young crowd. I don’t want to be one of them; I want to have a meaningful relationship with someone I can commit to. Even if it’s just short term I want it to be meaningful. I hope there’s still time, I think I have a lot to offer if I meet the right partner.

If you know anyone who can be my audience let me know, I’d love to meet one and try and work together to create something special.


Why we’re going to #opendatacamp

On Saturday and Sunday fellow Tableau Zen Master Rob Radburn and I will be attending Open Data Camp in Cardiff.

So why are we spending a Saturday and Sunday in Cardiff away from our families and spending a small fortune on hotels?

Well, sometimes data visualisation can be frustrating. We’re both prominent members of the Tableau Community and we’ve spent countless hours producing visualisations for our own projects as well as community initiatives such as Makeover Monday and Iron Viz. There’s lots of fun and reward in this work, both personally and professionally, so why is it frustrating? Well, shouldn’t there be more to data visualisation than just producing visualisations for consumption on Twitter? How do we produce something meaningful and useful (long term) through data and visualisations?

Open Data seems a suitable answer, however with so many data sets, potential questions and applications it’s hard to know where to start. The open data community have done a great job of securing access to many important datasets, but I’ve seen few useful visualisations / applications of open data in the UK beyond a handful of key datasets. How do we do more?

Tableau Public on the other hand has done a fantastic job of ensuring free access to data visualisation for all, but few in the community have worked with the open data community to enable the delivery of open data through the platform.

Rob and I are hoping that our pitch at Open Data Camp will facilitate a discussion around bridging the gap between the Tableau Community and the Open Data Community. On one side we have a heap of engaged and talented data viz practitioners on Tableau Public looking for problems; on the other, a ton of data with people screaming for help understanding it…on the face of it there seem to be some exciting possibilities, we just need to pick through them.

Oh and while we’re there if anyone wants us to pitch a Tableau Introduction and / or Intro to Data Visualisation we’d be happy to facilitate a discussion around that too.

Would love your thoughts

Chris and Rob

Using Inspect / Javascript to scrape data from visualisations online

My last post talked about making over this visualisation from The Guardian:


What I haven’t explained is how I found the data. That is what I intend to outline in this post. These skills are very useful whenever you need to find the data behind visualisations or tables you come across online.

The first step when trying to download data for any visualisation online is to check how it is made. It may simply be a graphic (in which case extraction may be hard, unless it is a chart you can unplot using WebPlotDigitiser), but interactive visualisations are typically built with javascript, unless they use a bespoke product such as Tableau.

Assuming it is interactive, you can start to explore by right-clicking on the image and choosing Inspect (in Chrome; other browsers have similar developer tools).


I was treated to this view:


I don’t know much about coding, but it looks like the view is being built from a series of paths. How might it be doing this? We can find out by digging deeper; let’s visit the Sources tab:


Our job on this tab is to look for anything unusual outside the typical javascript libraries (you learn these by being curious and looking at lots of sites). The first file gay-rights-united-states looks suspect but as can be seen from the image above it is empty.

Scrolling down, see below, we find there is an embedded file / folder (flat.html) and in that is something new: all.js and main.js….


Investigating all.js reveals nothing much, but main.js shows us something very interesting on line 8. JACKPOT! A Google sheet containing the full dataset.
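If you’d rather not eyeball every file, a quick regex over any script source you’ve saved can surface embedded spreadsheet or data URLs. A minimal sketch (the source string below is made up, not the Guardian’s actual code):

```javascript
// Scan a saved script's text for embedded data URLs (sample string is hypothetical)
const source =
  "var url = 'https://docs.google.com/spreadsheets/d/abc123/pub?output=csv';";

// Crude matcher: grab anything that looks like an http(s) URL
const urls = source.match(/https?:\/\/[^'"\s]+/g);
console.log(urls);
```

Anything pointing at docs.google.com or a .csv / .json endpoint is usually worth opening in a browser.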


And we can start vizzing! (btw I transposed this for my visualisation to get a column per right).

Advanced Interrogation using Javascript

Now part way through my visualisation I realised I needed to show the text items the Guardian had on their site but these weren’t included in the dataset.


I decided to check the javascript code to see where this was created and whether I could decipher it. Looking through main.js I found this snippet:

function populateHoverBox (type, position){

 var overviewObj = {
  'state' : stateData[position].state
 };

 if(stateData[position]['marriage'] != ''){
  overviewObj.marriage = 'key-marriage'
  overviewObj.marriagetext = 'Allows same-sex marriage.'
 } else if(stateData[position]['union'] != '' && stateData[position]['marriageban'] != ''){
  overviewObj.marriage = 'key-marriage-ban'
  overviewObj.marriagetext = 'Allows civil unions; does not allow same-sex marriage.'
 } else if(stateData[position]['union'] != '' ){
  overviewObj.marriage = 'key-union'
  overviewObj.marriagetext = 'Allows civil unions.'
 } else if(stateData[position]['dpartnership'] != '' && stateData[position]['marriageban'] != ''){
  overviewObj.marriage = 'key-marriage-ban'
  overviewObj.marriagetext = 'Allows domestic partnerships; does not allow same-sex marriage.'
 } else if(stateData[position]['dpartnership'] != ''){
  overviewObj.marriage = 'key-union'
  overviewObj.marriagetext = 'Allows domestic partnerships.'
 } else if (stateData[position]['marriageban'] != ''){
  overviewObj.marriage = 'key-ban'
  overviewObj.marriagetext = 'Same-sex marriage is illegal or banned.'
 } else {
  overviewObj.marriagetext = 'No action taken.'
  overviewObj.marriage = 'key-none'
 }

…and it continued for another 100-odd lines of code. This wasn’t going to be as easy as I hoped. Any other options? Well, what if I could extract the contents of overviewObj and write it out to a file?

I tried a “Watch” using the developer tools, but the variable went out of scope each time I hovered, so that wouldn’t be useful. I’d therefore try saving flat.html locally and outputting a file with the contents to my local drive….

As I say, I’m no coder (but perhaps more comfortable than some), and so I googled (and googled) and eventually stumbled on this post.

I therefore added the function to my local main.js and added a line in the populateHoverBox function….okay so maybe I can code a tiny bit….

var str = JSON.stringify(overviewObj); // serialise the hover object to a JSON string
download(str, stateData[position].state + '.txt', 'text/plain'); // save it as <State>.txt

In theory this should serialise the overviewObj to a string (according to google!) and then download the resulting data to a file called <State>.txt
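A minimal sketch of what those two lines do, with a hypothetical overviewObj standing in for the one populateHoverBox builds:

```javascript
// Hypothetical stand-in for the object built inside populateHoverBox
var overviewObj = {
  state: 'California',
  marriage: 'key-marriage',
  marriagetext: 'Allows same-sex marriage.'
};

// Serialise to a JSON string, exactly as the added line does
var str = JSON.stringify(overviewObj);
console.log(str);
```

In the browser, the download() helper from the post then saves that string as California.txt.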

Now for the test…..


BOOM, BOOM and BOOM again!

Each file is a JSON file


Now to copy the files out from the downloads folder, remove any duplicates, and combine using Alteryx.


As you can see, using a wildcard input on the resulting json files and a transpose was simple.


Finally to combine with the google sheet (called “Extract” below) and the hexmap data (Sheet 1) in Tableau…..


Not the most straightforward data extract I’ve done, but I thought it was worth blogging about so others could see that extracting data from visualisations online is possible.

You can see the resulting visualisation in my previous post.


No one taught me this method, and I have never been taught how to code. The techniques described here are simply the result of continuous curiosity and exploration of how interactive tables and visualisations are built.

I have used similar techniques in other places to extract data visualisations, but no two methods are the same, nor can a generic tutorial be written. Simply have curiosity and patience and explore everything.


Combining Multiple Hexmaps using Segments

After my #Data16 talk Chad Skelton challenged me to do a simple remake of the Guardian sunburst-type visualisation that I critiqued in my Sealed with a KISS talk (which you can now watch live at this link).

The original visualisation is shown below:


While initially engaging, I find this view complex to read and extracting any useful information involves several round trips to the legend. The circular format makes the visualisation appealing while sacrificing simple comprehension. Could I do better though?

Chad suggested small multiple maps and I agreed this might be the simplest approach but I was not happy with the resulting maps:



Alaska and Hawaii, why do you ruin my maps? The Data Duo have several solutions, and my favourite is the tile map.

Thankfully Zen Master Matt Chambers has made Tile Maps very easy in this post and so I followed the instructions, joining the Excel file he provided onto my data and giving a much more visually appealing and informative result. The resulting visualisation is below (click for an interactive version):


However I still wasn’t satisfied with this visualisation; it has several problems:

  • it separates out the variables per state, meaning the viewer still has a lot of work to do to compare each state’s full rights.
  • it still requires the use of the legend to fully understand.
  • the hover action reveals extra info, meaning the user has to drag around to reveal the story.
  • the legend is squashed due to space.

How to solve these issues? I spent a while pondering and eventually found a possible answer: I could use a single map but split each hexagon into segments (ignoring marriage as it is allowed in all states – another solution would have been to cut out a dot in the middle for a seventh segment).

To do this I’d need to split each hexagon into segments, so I took out my drawing package and created six shapes:

These six shapes have transparent backgrounds and, importantly, when combined create a single hexagon.
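The segment shapes themselves are nothing magic: each one is just the wedge between the hexagon’s centre and a pair of adjacent vertices. A sketch of the geometry (assuming a hexagon of radius 1 centred at the origin):

```javascript
// Build the six wedge polygons of a hexagon: centre + two adjacent vertices each
function hexSegments(radius) {
  const vertices = [];
  for (let i = 0; i < 6; i++) {
    const angle = (Math.PI / 3) * i; // vertices at 60-degree steps
    vertices.push([radius * Math.cos(angle), radius * Math.sin(angle)]);
  }
  // Each segment is a triangle: origin, vertex i, vertex i+1 (wrapping round)
  return vertices.map((v, i) => [[0, 0], v, vertices[(i + 1) % 6]]);
}

const segments = hexSegments(1);
console.log(segments.length); // 6 wedges
```

Draw the six wedges with transparent backgrounds and, together, they tile the full hexagon.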

Now with these shapes I can use a dimension (such as Group below) on shape, and then use colour to combine each hexagon into different segment colours on the map (using Matt’s method and data for the hex positions).


Using this technique I therefore created the visualisation below (click for interactive version):


Using this method it would be possible to combine 3, 6, 9 or 12 (or possibly more) dimensions on a single map by segmenting the hexagons. Similarly using a circle in the middle would allow 4 or 7 dimensions.

I’m not sure how applicable this type of method is to other visualisations but please let me know if you use it as I’d love to see some more examples.

MM Week 44: Scottish Index of Multiple Deprivation

This week’s Makeover Monday (week 44) focuses on the Scottish Index of Multiple Deprivation.


Barcode charts like this can be useful for seeing small patterns in data, but the visualisation has some issues.

What works well

  • It shows all the data in a single view with no clicking / interaction
  • Density of lines shows where most areas lie e.g. Glasgow and North Lanarkshire can quickly be seen as having lots of areas among the most deprived
  • It is simple and eye catching

What doesn’t work as well

  • No indication of population in each area
  • Areas tend to blur together
  • It may be overly simple for the audience

In my first attempt to solve these problems I addressed the second issue above using a jitter (via the random() function):
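The jitter idea itself is trivial: keep each mark’s deprivation rank, but give it a small random offset on the other axis so overlapping marks spread apart. A javascript sketch of the idea (not the Tableau calculation itself; the data is hypothetical):

```javascript
// Spread overlapping marks: keep x (the rank), assign a small random y offset
function jitter(points, spread) {
  return points.map(p => ({ x: p.x, y: (Math.random() - 0.5) * spread }));
}

const marks = jitter([{ x: 1 }, { x: 1 }, { x: 2 }], 0.8);
console.log(marks.length); // one jittered mark per input point
```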


However it still didn’t address the population issue and given the vast majority of points had similar population with a few outliers (see below) I wondered whether to even address the issue.


Then I realised I could perhaps go back to the original and simply expand on it with a box plot (adding a sort for clarity):


Voila, a simple makeover that improves the original and adds meaning and understanding while staying true to the aims of the original. Time for dinner.

Done and dusted…wasn’t I? If I had any sense I would be but I wanted to find out more about the population of each area. Were the more populated areas also the more deprived?

There have been multiple discussions this week on Twitter about people stepping beyond what Makeover Monday was intended to be. However, there was a story to tell here and I dwelled on it over dinner; with the recent debates about the aims of Makeover Monday (and data visualisation generally) swirling in my head, I wondered what I should do.

I wondered about the rights and wrongs of continuing with a more complex visualisation. Should I finish here and show how simple Makeover Monday can be? Or should I satisfy my natural curiosity and investigate a chart that, while perhaps more complex, might show ways of presenting data that others hadn’t considered….

I had the data bug and I wanted to tell a story even if it meant diving a bit deeper and perhaps breaking the “rules” of Makeover Monday and spending longer on the visualisation. I caved in and went beyond a simple makeover….sorry Andy K.

Perhaps a scatter plot might work best, focusing on the median deprivation of a given area (most deprived at the top, by reversing the Rank axis):



Meh, it’s simple but hides a lot of the detail. I added each Data Area and it got too messy as a scatter – but how about a Pareto type chart…


So we can see from the running sum of population (ordered by the most deprived areas first) that lots of people live in deprived areas in Glasgow, but we also see the shape of the other lines is lost given so many people live in Glasgow.

So I added a secondary percent of total, not too complex….this is still within the Desktop II course for Tableau.
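For anyone unfamiliar with the table calculations involved, here is a sketch of the maths: a running sum of population (ordered most deprived first) converted to a percent of total. The populations are made up for illustration:

```javascript
// Running sum of population, expressed as a percent of the grand total
function cumulativePercent(populations) {
  const total = populations.reduce((a, b) => a + b, 0);
  let running = 0;
  return populations.map(p => {
    running += p;
    return (100 * running) / total;
  });
}

console.log(cumulativePercent([50, 30, 20])); // [50, 80, 100]
```

Plotting that curve per council area is what lets the line shapes be compared regardless of how many people live in each.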


Now we were getting somewhere. I can see from the shape of the line whether areas have high proportions of more or less deprived people. Time to add some annotation and explanation…as well as a focus on the 15% most deprived, as in the original.

Click on the image below to go to the interactive version. This took me around 3 hours to build following some experimenting with commenting and drop lines that took me down blind (but fun) alleys before I wound back to this.



Makeover Monday is good fun, I happened to have a bit more time tonight and I got the data bug. I could have produced the slightly improved visualisation and stuck with it, but that’s not how storytelling goes. We see different angles and viewpoints, constraining myself to too narrow a viewpoint felt like I was ignoring an itch that just needed scratching.

I’m glad I scratched it. I’m happy with my visualisation but I offer the following critique:

What works well:

  • it’s more engaging than the original, while it is more complex I hope the annotations offer enough detail to help draw the viewer in and get them exploring.
  • the purple labels show the user the legend at the same time as describing the data.
  • there is a story for the user to explore as they click, pop-up text adds extra details.
  • it adds context about population within areas.

What doesn’t work well:

  • the user is required to explore with clicks rather than simply scanning the image – a small concession given the improvement in engagement I hope I have made.
  • the visualisation takes some understanding; percent of total cumulative population is a hard concept that many of the public simply won’t understand. The audience for this visualisation is therefore slightly more academic than the original. Would I say this is suitable for publishing on the original site? On balance I probably would. The original website is text / table heavy and clearly intended for researchers, not the public, and therefore the audience can be expected to be willing to take longer to understand the detail.

Comment and critique welcomed and encouraged please.

Makeover Monday Week 43: US National Debt


This week’s Makeover Monday tackles National Debt. Let’s start by looking at the original visualisation.

Apparently the US National Debt is one-third of the global total. Showing these two values in a pie chart is a good idea as it quickly shows the proportions involved. However, the pie chart chosen has a strange thin white slice between the two colours and a black crescent / shadow effect on its outside edge, which add no real value (in fact the white slice added a bit of confusion for me).

The visualisation then goes on to show $19.5 trillion in proportion to several other (equally meaningless) large figures. The figures do add some perspective on just how big that number is, and the use of $100 billion blocks in the unit chart does allow an easy comparison. One slightly critical point, if we were to pick holes in the visualisation, is that half-way through the view starts showing shaded blocks to compare to the $19.5 trillion, whereas before it doesn’t.


with shaded blocks


no shaded blocks

Achieving consistency is important in data visualisation as it lets the reader know what to expect and gives them a consistent view each time to aid comparisons. So making a design decision to add shaded blocks across each comparison would perhaps have been a better choice as opposed to switching half way through.

Visualising Small Data

The dataset provided for the week’s makeover has just two rows, showing the debt for each area (US and Rest of the World).


Clearly this presents a visualisation challenge. Visualising small datasets is hard, as there are limited choices. One can attempt to include secondary datasets to show the numbers in context, as the original author has done, but another, simpler choice might be to show them relative to each other – similar to the original’s pie chart. One might even attempt to show how the data corresponds to the population of the US or the world, bringing the figure down to something manageable (in the US the debt is a more comprehensible $61,000 per head).

Before we attempt to visualise something, though, we need to think about the audience and the message we want to convey. Are we simply trying to show the figures without any comment? Do we want to focus on how large they are? Or are we commenting on how large the US debt is relative to the rest of the world and making a social / political point?

With a dataset so small, any editorial comment is difficult. For example, we have no context on the direction of movement of these figures. The US might be quickly bringing its debt under control while the ROW’s grows, or the opposite might be true. The ROW figure might be dominated by other developed countries, or might be shared equally. How can we comment without further analysis of temporal change or the context of this figure?

If we can’t comment editorially then we are left with simply showing how huge these numbers are. My criticism of the original is that the numbers it shows in comparison are equally huge, and equally incomprehensible for a lay person. Given this visualisation is published on the Visual Capitalist website, perhaps their audience is more familiar with global oil production or the size of companies, but for any visualisation published away from the site a more meaningful figure is needed. Personally I think the amount per head is an especially powerful metaphor. In the US, $61,000 per person would be required to clear the debt; the rest of the world would just have to pay a little over $5.
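The per-head figure is simple division; the population figure below is a rough ballpark assumption on my part, not something in the dataset:

```javascript
// Rough per-head calculation (population is an assumed ballpark figure)
const usDebt = 19.5e12;      // $19.5 trillion
const usPopulation = 320e6;  // roughly 320 million people
const perHead = usDebt / usPopulation;
console.log(Math.round(perHead)); // roughly $61,000 per person
```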

To Visualise or not to Visualise

Now there is an important decision here, how to effectively show those figures in context. However with such small data is there any point in doing so? Everyone can quickly see $5 is much less than $61,000 – we don’t need a bar chart or bubble to show that, and we certainly don’t need a unit chart or anything even more complex. This is the problem with small datasets, any visual comparison is slightly academic given we can quickly mentally interpret the numbers.

One might be tempted to argue that a data visualisation is needed to engage our audience. Perhaps a beautiful and engaging data visual might do a good job of this, but so would the use of non-data images like the one below.


Defining Data Visualisation

Makeover Monday is a weekly social data project, should a visual that includes only text be included?

What if the pile of dollars in the image above had exactly 61,000 dollar bills? Would that make it any more of a data visualisation than one that contained a random amount? What if, instead, we added a unit chart with 12,200 units of $5 bills? These accompanying items don’t help us visualise the difference any better than the text. One could argue that where the main purpose of a visualisation isn’t to inform or add meaning or context, and it is instead used as a way of engaging the user, it becomes no different to any other image used in this way. Therefore adding more data-related visuals to the above text wouldn’t make the image any more of a data visualisation than the one above.

Semantic arguments that attempt to define data visualisation are interesting but academic. Ultimately each project that uses data does so because it needs to inform its audience, and it is the success of that transaction from author to audience that determines how successful the project is.

So should we define a data visualisation as more (or less) successful because of its accompanying “window decoration” (or lack thereof)? In my opinion yes. Accompanying visuals and text help provide information to the audience and can help speed up the transfer of information by giving visual and textual clues.

Do charts / visuals that make no attempt (or poor attempts) to inform the audience add any more value to a data visualisation project simply because they use data? In my opinion, no. This isn’t the same thing as saying they have no value, but simply producing a beautiful unit chart, say, with the data for this Makeover Monday project would add no intrinsic extra value in educating the audience, and therefore would be no more valuable than any other picture or image.

Is the above image a successful Data Visualisation? Let’s wait and see on that one. I’m intrigued to see what the community makes of a purely text based “visualisation”.

Does it do a better job of informing the audience than the original? Again this is hard to answer, but I believe I understand more about the size of the debt when it is visualised in terms of dollars per head. By bringing these numbers down to values I understand, I didn’t need to add any more visualisation elements in the way the original author did; therefore you might say mine is more successful because it manages to pass across information in a simpler, more succinct transaction.

UK Netflix Movie Finder

Click on the image below to see my submission for the “Mobile” IronViz contest – it should work on mobile, tablet and desktop.



I’ll be honest, I didn’t start this viz until the Friday night before the contest ended on Sunday. My wife was out Friday and Saturday evenings, so I knew I had a few hours…however, with little time, I didn’t want to waste it producing a visualisation of something trivial. Instead I wanted to produce a visualisation of something useful, an “app”, something I’d use.

For a long time now I’ve wished I could find something that would save me the job of hunting down decent movies on Netflix. I have a Netflix subscription, but sometimes hidden gems can be hard to root out. I have a good track record of finding good movies, though it takes me a long time hunting through reviews online as I look through Netflix. It’s become a running joke that I’ll spend longer picking a movie than actually watching it.

If only there was an app or website that would give me both combined…..

Friday Night is Data Night

So, with my idea in mind, I spent Friday night trying to get data…after some googling there weren’t many easy options, so I found myself installing Python to try a script to scrape Netflix, but hours went by without luck (all while watching Butch Cassidy and the Sundance Kid on Netflix). The lesson: don’t try and learn Python in an hour.

Head in hands and running out of ideas, I googled some more and found – it was purpose built to do what I needed and had all the data I needed in its search (wish I’d found this site ages ago!), but how to get at the data?

Brute force searching seemed the best option so I ran a search for Netflix UK movies and then scrolled down and down and down to populate the dynamic page…then Ctrl+A, Ctrl+C and Ctrl+V gave:


Ouch….I also took the HTML source for use later. Time for sleep….

Saturday Night Feels Alright

Now the previous night had proved a mixed bag, so I turned to my trusty companion Alteryx to solve my data woes:


What does this spaghetti do? Well, it takes the pretty horrible format txt file and turns it into rows and columns of proper data. The trick was to assign a row ID to each row, restarting at each “Play Trailer” section marking a new movie. Then I simply needed to crosstab and rename the data. It also pulls out the movie images from the html source using Regex and finds their URLs. It combines the two and then splits out multiple genres and casts / directors into separate fields (in the end this last step wasn’t needed, but I thought it might be; without it the Alteryx module is massively simplified, removing the whole last row).
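Outside Alteryx, the row-grouping trick can be sketched in javascript: treat each “Play Trailer” line as the start of a new movie record (the sample lines below are hypothetical, not the real copied text):

```javascript
// Group the pasted lines into one record per movie, split on "Play Trailer"
const lines = [
  'Play Trailer', 'The Godfather', '1972', 'Crime',
  'Play Trailer', 'Chinatown', '1974', 'Mystery'
];

const movies = [];
lines.forEach(line => {
  if (line === 'Play Trailer') {
    movies.push([]);                        // marker: start a new record
  } else if (movies.length) {
    movies[movies.length - 1].push(line);   // append to the current record
  }
});

console.log(movies.length); // 2 movie records
```

From there, the crosstab step is just turning each record’s lines into named columns.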

Then it was on to Tableau…I decided to design for mobile first and, over a couple of hours, designed a few initial drafts of pages. Then, as it was getting late, I posted them to my colleagues for comment.

Next morning, while I was getting the kids ready to head out, the comments started coming back:



I love that I have access to such a great and diverse range of opinions and talent from my peers at The Information Lab. As you can see I got loads of useful feedback – if you want to make a visualisation better just share it as much as possible, ideally on a collaboration tool with image commenting so people can highlight their comments with the corresponding piece of the image.

Sunday Night Polish

Tonight, Sunday, was all about acting on the feedback and building the Desktop and Tablet views. I designed a background theme for the visualisation (using a very quick piece of Photoshop work with the Netflix logo) which I incorporated into the Desktop version – the black left panel – but I soon realised that there were some limitations with the Device Designer in this initial 10.0 version of Tableau.

Firstly, changing the background of Filters and Parameters alters them for ALL devices. Ouch. That meant ones overlaid on black looked odd on mobile when overlaid on white. Normally a quick solution would be to add a Container and colour it black, but in device mode you can’t format objects…grrr. I was getting frustrated due to my lack of knowledge of this new feature and its limitations.

It was hard work fitting the phone layout to different sized phones. Lack of real estate means having to compromise on design vs functionality. All aspects of the visualisation, Text, Filters, Logos need justifying in terms of space. I loved the challenge that working on mobile provided and I hope it makes people entering the competition focus on simple (KISS) visualisations.

I ended up working on the smallest device and then checking it resized okay onto larger screens. As you can see below the differences are quite big depending on the phone.

In the end I decided the best approach was to switch my designs to Floating to overcome this limitation…while not ideal, it did allow me to work around most of the problems. However, images needed some tweaking as they expand / contract using Fit Width / Fit All.

Anyway…I got it done so I’m happy…and before midnight too. All in all I remain pleased with what was just around 12 hours’ work!