Spring’s Got Speed

I must admit, when Iron Viz was announced for this qualifier I had mixed feelings. On the positive side, I know nature and animals: as a photographer and bird watcher I’ve spent years gathering data on them. I even run a website dedicated to sightings in my local area (sadly it’s been neglected for years, but it still gets used occasionally). On the negative side, I knew I’d be facing competition from a host of amazing-looking visualisations using striking imagery and engaging stories.

Did I want to compete against that? With precious little time for vizzing lately (especially at the end of the quarter – you can tell the Tableau Public team don’t work in sales!) I only wanted to participate in Iron Viz if I could be competitive, and for those who don’t know me, I like to be competitive…

So, as you perhaps guessed, my competitive edge won and I pulled some late hours and risked the wrath of my family to get something that did the competition justice.

A Note on Visualisations and Creating a Brand

I’ve noted above that I expected a lot of pictures and text from people in this qualifier; after all, Jonni Walker has built his own brand around animal visualisations, stock photography and black backgrounds. However, I have my own style of visualisation. I’m not Jonni Walker – what he does is amazing, but there’s only room for so many “Jonni Walker” vizzes, and I couldn’t replicate what he does if I tried.

In the past I’ve deliberately combined analytics and design, treading the fine line between best practice and metaphor, staying away from photographs and external embellishments and preferring my visualisations to speak for themselves through their colours and data. The subject this time was tricky though…was it possible to produce an animal visualisation without pictures?

Finding a Subject

I could turn this section into a blog post of its own! I trawled the internet for data and subjects over several days. Some of the potential subjects:

  • Homing Pigeons (did you know their sense of smell affects their direction?)
  • Poo (size of animal to size of poo) – this was my boys’ favourite
  • Eggs (had I found this data I’d have been gazumped: http://vis.sciencemag.org/eggs/)
  • Zebra Migration
  • Sightings Data of Butterflies

I simply couldn’t find data to do any of these justice, and at points I was verging on writing scientific papers. I was running out of ideas when I found this website: http://www.naturescalendar.org.uk/ – and I dropped them an email to see if I could use their data. Phenology (the study of nature’s cycles) has always interested me, and getting my hands on the data would be fantastic. There was even a tantalising mention of “measuring the speed of spring” on their news site, with some numbers attached but no mention of the methodology…

Now, I’m impatient, so rather than wait for a reply I used a few “dark art” techniques to scrape a lot of the data out of their Flash map, using a combination of tools including Alteryx.

Thankfully a few days later they came back and (after a short discussion) said I’d be able to use it, so all ended well.

Measuring the Speed of Spring

Now I had the data, working out how to measure my own “speed of spring” was difficult. Several options presented themselves, but all had drawbacks. The data is crowd-sourced from the public – mainly by people who can be trusted, but amateur outliers could affect the result (do you want to declare spring has hit Scotland based on one sighting?). The sheer number of recorders in, say, the South East could also affect any analysis, as could the lack of them in, say, Scotland. Given we expect spring to move from south to north, that could seriously sway the results.

In the end I played between two methods:

  1. A moving average of the centroid of all sightings, tracking its rate of movement
  2. A more complex method involving drawing rings round each sighting and then tracking the overall spread of clusters of sightings across the UK.

In the end I opted for the latter, as the former was too heavily weighted by the number of sightings in the south.

Very briefly, I’ll outline my methodology, built in Alteryx:

  1. Split the country into 10 mile grids and assign sightings to these based on location
  2. For each grid position, calculate the contribution to the surrounding grids within 50 miles using the formula 1 - (Distance/50), where Distance is the distance of the grid from the source grid.
  3. Calculate the overall “heat” (i.e. local and surrounding “adjusted” sightings) in each grid cell
  4. Group cells together into tiles dependent on their “heat”
  5. Draw polygons based on each set of tiles
  6. Keep the polygon closest to the “average” grouping i.e. ignoring outliers beyond the standard deviation

I then ran the above algorithm for each week (assigning all sightings so far that year to the week), and for each event and species.
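The steps above can be sketched in a few lines. Here is a minimal illustration of steps 1–3 in Python (the real workflow was built in Alteryx; the mile-based coordinates and simple dictionary grid are simplifying assumptions of mine):

```python
from math import hypot

def heat_grid(sightings, cell=10, radius=50):
    """Spread each sighting's 'heat' into nearby grid cells.

    sightings: list of (x, y) positions in miles.
    Returns {(gx, gy): heat}, where each sighting contributes
    1 - distance/radius to every cell centre within `radius` miles.
    """
    heat = {}
    reach = radius // cell  # how many cells out a sighting can influence
    for x, y in sightings:
        gx, gy = int(x // cell), int(y // cell)
        for dx in range(-reach, reach + 1):
            for dy in range(-reach, reach + 1):
                d = hypot(dx * cell, dy * cell)  # distance between cell centres
                if d < radius:
                    key = (gx + dx, gy + dy)
                    heat[key] = heat.get(key, 0.0) + (1 - d / radius)
    return heat
```

Steps 4–6 then threshold this “heat” surface into tiles and draw polygons round them, keeping the polygon nearest the average grouping.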

These polygons are what you see on the first screen in the visualisation and show the spread of the sightings. I picked out the more interesting spreads for the visualisation from the many species and events in the data.

Small Multi Map

The above process was all coded in Alteryx.

Alteryx

If you look closely there’s a blue dot which calls this batch process:

Alteryx Batch

which in turn calls the HeatMap macro. Phew, thank god for Alteryx!

Now to calculate the speed – well, the rate of change of area, if you want to be pedantic! Simple Tableau lookups helped me here: I could export the area from Alteryx and then compare this week’s area to last week’s. The “overall speed” was then an average across all the weeks (taking artistic licence here, but given the overall likely accuracy of the result this approximation was okay in my book).
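In spirit, that week-on-week comparison amounts to the following (a Python sketch of the Tableau lookup logic; the numbers are invented):

```python
def weekly_speed(areas):
    """Week-on-week change in polygon area: a rough 'speed of spread'.

    areas: cumulative polygon areas, one per week.
    Returns the per-week deltas (what a lookup against the prior
    week gives in Tableau) and their average, the 'overall speed'.
    """
    deltas = [b - a for a, b in zip(areas, areas[1:])]
    overall = sum(deltas) / len(deltas) if deltas else 0.0
    return deltas, overall
```

Note that averaging the deltas telescopes to (last - first) / (weeks - 1), which is part of the artistic licence mentioned above.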

Iterate, Feedback, Repeat

I won’t go into detail on all the ideas I had with this data for visualisation but the screenshots will show some of what I produced through countless evenings and nights.

 

“Good vizzes don’t happen by themselves they’re crowdsourced”

I couldn’t have produced this visualisation without the help of many. Special mentions go to:

Rob Radburn for endless Direct Messages, a train journey and lots of ideas and feedback.

Dave Kirk for his feedback in person with Elena Hristzova at the Alteryx event midweek and also for the endless DMs.

Lorna Eden put me on the right path when I was feeling lost on Friday night with a great idea about navigating from the front to the back page (I was going to layer each in sections).

Also everyone else mentioned in the credits in the viz for their messages on Twitter DM and via our internal company chat (I’m so lucky to have a great team of Tableau experts to call on as needed).

Difficulties

Getting things to disappear is a nightmare! Any Actions and Containers need to be in astrological alignment….

Concentrating on one story is hard – it took supreme will and effort to focus on just one aspect of this data.

Size – I needed so much space to tell the story that this viz kept expanding to fit its different elements. I hope the size fits most screens.

The Result

Click below to see the result:

Final.png

 

Why we’re going to #opendatacamp

On Saturday and Sunday fellow Tableau Zen Master Rob Radburn and I will be attending Open Data Camp in Cardiff.

So why are we spending a Saturday and Sunday in Cardiff away from our families and spending a small fortune on hotels?

 
Well, sometimes data visualisation can be frustrating. We’re both prominent members of the Tableau Community and we’ve spent countless hours producing visualisations for our own projects as well as community initiatives such as Makeover Monday and Iron Viz. There’s lots of fun and reward in this work, both personally and professionally – so why is it frustrating? Well, shouldn’t there be more to data visualisation than producing visualisations for consumption on Twitter? How do we produce something meaningful and useful (long term) through data and visualisation?

Open Data seems a suitable answer, but with so many datasets, potential questions and applications it’s hard to know where to start. The open data community have done a great job of securing access to many important datasets, but beyond a few key ones I’ve seen little useful visualisation or application of open datasets in the UK. How do we do more?

Tableau Public, on the other hand, has done a fantastic job of ensuring free access to data visualisation for all, but few in the community have worked with the open data community to enable the delivery of open data through the platform.

Rob and I are hoping that our pitch at Open Data Camp will facilitate a discussion around bridging the gap between the Tableau Community and the Open Data Community. On one side we have a heap of engaged and talented data viz practitioners on Tableau Public looking for problems; on the other, a ton of data and people screaming for help understanding it. On the face of it there seem to be some exciting possibilities – we just need to pick through them.

Oh and while we’re there if anyone wants us to pitch a Tableau Introduction and / or Intro to Data Visualisation we’d be happy to facilitate a discussion around that too.

Would love your thoughts

Chris and Rob

MM Week 44: Scottish Index of Multiple Deprivation

This week’s Makeover Monday (week 44) focuses on the Scottish Index of Multiple Deprivation.

2016-10-30_21-54-04

Barcode charts like this can be useful for seeing small patterns in data, but the visualisation has some issues.

What works well

  • It shows all the data in a single view with no clicking / interaction
  • Density of lines shows where most areas lie e.g. Glasgow and North Lanarkshire can quickly be seen as having lots of areas among the most deprived
  • It is simple and eye catching

What doesn’t work as well

  • No indication of population in each area
  • Areas tend to blur together
  • It may be overly simple for the audience

In my first attempt to solve these problems I addressed the second issue above using a jitter (via the random() function):

2016-10-30_22-05-26

However it still didn’t address the population issue and given the vast majority of points had similar population with a few outliers (see below) I wondered whether to even address the issue.

2016-10-30_22-08-40

Then I realised I could perhaps go back to the original and simply expand on it with a box plot (adding a sort for clarity):

2016-10-30_22-16-23.jpg

Voila, a simple makeover that improves the original and adds meaning and understanding while staying true to the aims of the original. Time for dinner.

Done and dusted…wasn’t I? If I had any sense I would have been, but I wanted to find out more about the population of each area. Were the more populated areas also the more deprived?

There have been multiple discussions on Twitter this week about people stepping beyond what Makeover Monday was intended to be. However, there was a story to tell here and I dwelled on it over dinner; with the recent debates about the aims of Makeover Monday (and data visualisation generally) swirling in my head, I wondered what I should do.

I wondered about the rights and wrongs of continuing with a more complex visualisation: should I finish here and show how simple Makeover Monday can be? Or should I satisfy my natural curiosity and investigate a chart that, while perhaps more complex, might show ways of presenting data that others hadn’t considered?

I had the data bug and I wanted to tell a story even if it meant diving a bit deeper and perhaps breaking the “rules” of Makeover Monday and spending longer on the visualisation. I caved in and went beyond a simple makeover….sorry Andy K.

Perhaps a scatter plot might work best, focusing on the median deprivation of a given area (most deprived at the top, by reversing the Rank axis):

2016-10-30_22-11-22

 

Meh – it’s simple but hides a lot of the detail. I added each Data Area and it got too messy as a scatter. But how about a Pareto-type chart…

2016-10-30_22-23-23.jpg

So we can see from the running sum of population (ordered by the most deprived areas first) that lots of people live in deprived areas in Glasgow, but we also see the shape of the other lines is lost given so many people live in Glasgow.

So I added a secondary percent of total – not too complex; this is still within the Desktop II course for Tableau.

2016-10-30_22-26-03.jpg

Now we were getting somewhere. I could see from the shape of each line whether areas have high proportions of more or less deprived people. Time to add some annotation and explanation, as well as focus on the 15% most deprived, as in the original.
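For clarity, the running percent-of-total behind that chart can be sketched like this (a Python illustration of the table calculation; the ranks and populations are made up):

```python
def cumulative_share(zones):
    """Cumulative percent of total population, most deprived first.

    zones: list of (deprivation_rank, population) for one council
    area, where a lower rank means more deprived.
    Returns the running population share after each data zone.
    """
    zones = sorted(zones)  # most deprived (lowest rank) first
    total = sum(pop for _, pop in zones)
    running, shares = 0, []
    for _, pop in zones:
        running += pop
        shares.append(running / total)
    return shares
```

An area whose line rises steeply at the start has a high proportion of its people in its most deprived zones.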

Click on the image below to go to the interactive version. This took me around 3 hours to build following some experimenting with commenting and drop lines that took me down blind (but fun) alleys before I wound back to this.

2016-10-30_21-51-53

Conclusion

Makeover Monday is good fun, I happened to have a bit more time tonight and I got the data bug. I could have produced the slightly improved visualisation and stuck with it, but that’s not how storytelling goes. We see different angles and viewpoints, constraining myself to too narrow a viewpoint felt like I was ignoring an itch that just needed scratching.

I’m glad I scratched it. I’m happy with my visualisation but I offer the following critique:

What works well:

  • it’s more engaging than the original; while it is more complex, I hope the annotations offer enough detail to help draw the viewer in and get them exploring.
  • the purple labels show the user the legend at the same time as describing the data.
  • there is a story for the user to explore as they click, pop-up text adds extra details.
  • it adds context about population within areas.

What doesn’t work well:

  • the user is required to explore with clicks rather than simply scanning the image – a small concession given the improvement in engagement I hope I have made.
  • the visualisation takes some understanding – cumulative percent of total population is a hard concept that many of the public simply won’t grasp. The audience for this visualisation is therefore slightly more academic than the original’s. Would I say this is suitable for publishing on the original site? On balance I probably would. The original website is text/table heavy and clearly intended for researchers rather than the public, so the audience can be expected to be willing to take longer to understand the detail.

Comments and critique welcomed and encouraged.

Makeover Monday Week 43: US National Debt

2016-10-23_11-50-50

This week’s Makeover Monday tackles National Debt. Let’s start by looking at the original visualisation.

Apparently the US National Debt is one-third of the global total. Showing these two values in a pie chart is a good idea, as it quickly shows the proportions involved. However, the pie chart chosen does have a strange thin white slice between the two colours and a black crescent/shadow effect on its outside edge, neither of which adds any real value (in fact the white slice added a bit of confusion for me).

The visualisation then goes on to show $19.5 trillion in proportion to several other (equally meaningless) large figures. The figures do add some perspective on just how big that number is, and the use of $100 billion blocks in the unit chart does allow an easy comparison. One slightly critical observation, if we were to pick holes in the visualisation, is that halfway through, the view starts showing shaded blocks to compare against the $19.5 trillion, whereas before it doesn’t.

2016-10-23_12-02-50

with shaded blocks

2016-10-23_12-03-18

no shaded blocks

Achieving consistency is important in data visualisation, as it lets readers know what to expect and gives them a consistent view each time to aid comparisons. Adding shaded blocks across every comparison would therefore perhaps have been a better design choice than switching halfway through.

Visualising Small Data

The dataset provided for the week’s makeover has just two rows, showing the debt for each area (US and Rest of the World).

2016-10-23_12-10-08

Clearly this presents a visualisation challenge. Visualising small datasets is hard, as there are limited choices. One can attempt to include secondary datasets to show the numbers in context, as the original author has done, but another, simpler choice might be to show them relative to each other – similar to the original’s pie chart. One might even attempt to show how the data corresponds to the population of the US or the world, bringing the figure down to something manageable (in the US the debt is a more comprehensible $61,000 per head).

Before we attempt to visualise anything, though, we need to think about the audience and the message we want to convey. Are we simply trying to show the figures without any comment? Do we want to focus on how large they are? Or are we commenting on how large the US debt is relative to the rest of the world and making a social/political point?

With a dataset this small, though, any editorial comment is difficult. For example, we have no context on the direction of movement of these figures. The US might be quickly bringing its debt under control while the ROW’s grows, or the opposite might be true. The ROW figure might be dominated by other developed countries, or might be shared equally. How can we comment without further analysis of temporal change or the context of this figure?

If we can’t comment editorially then we are left with simply showing how huge these numbers are. My criticism of the original is that the numbers it shows for comparison are equally huge, and equally incomprehensible to a lay person. Given this visualisation is published on the Visual Capitalist website, perhaps their audience is more familiar with global oil production or the size of companies, but for any visualisation published away from the site a more meaningful figure is needed. Personally I think the amount per head is an especially powerful metaphor. In the US, $61,000 per person would be required to clear the debt, while the rest of the world would just have to pay a little over $5.
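The per-head figure is just debt divided by population. A quick sketch (the population figure below is a rough assumption of mine, not part of the two-row dataset):

```python
def per_head(debt, population):
    """Bring a trillion-scale debt figure down to a human scale."""
    return debt / population

# Roughly $61,000 per head for the US:
# $19.5 trillion spread over approximately 320 million people.
us_share = per_head(19.5e12, 320e6)
```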

To Visualise or not to Visualise

Now there is an important decision here: how to effectively show those figures in context. But with such small data, is there any point in doing so? Everyone can quickly see $5 is much less than $61,000 – we don’t need a bar chart or bubble to show that, and we certainly don’t need a unit chart or anything more complex. This is the problem with small datasets: any visual comparison is slightly academic, given we can quickly interpret the numbers mentally.

One might be tempted to argue that a data visualisation is needed to engage our audience. Perhaps a beautiful and engaging data visual might do a good job of this, but so would the use of non-data images like the one below.

us-national-debt

Defining Data Visualisation

Makeover Monday is a weekly social data project, should a visual that includes only text be included?

What if the pile of dollars in the image above contained exactly 61,000 dollar bills – would that make it any more of a data visualisation than one containing a random amount? What if, instead, we added a unit chart with 12,200 units of $5 bills? These accompanying items don’t help us visualise the difference any better than the text does. One could argue that where the main purpose of a visual isn’t to inform or add meaning or context, and it is instead used as a way of engaging the user, it becomes no different to any other image used in this way. Adding more data-related visuals to the text above therefore wouldn’t make the image any more of a data visualisation than it already is.

Semantic arguments that attempt to define data visualisation are interesting but academic. Ultimately, each project that uses data does so because it needs to inform its audience, and it is the success of that transaction from author to audience that determines how successful the project is.

So should we define a data visualisation as more (or less) successful because of its accompanying “window decoration” (or lack thereof)? In my opinion yes. Accompanying visuals and text help provide information to the audience and can help speed up the transfer of information by giving visual and textual clues.

Do charts / visuals that make no attempt (or poor attempts) to inform the audience add any more value to a data visualisation project simply because they use data? In my opinion, no. This isn’t the same thing as saying they have no value, but simply producing a beautiful unit chart, say, with the data for this Makeover Monday project would add no intrinsic extra value in educating the audience, and would therefore be no more valuable than any other picture or image.

Is the above image a successful Data Visualisation? Let’s wait and see on that one. I’m intrigued to see what the community makes of a purely text based “visualisation”.

Does it do a better job of informing the audience than the original? Again this is hard to answer, but I believe I understand more about the size of the debt when it is visualised in terms of dollars per head. By bringing these numbers down to values I understand, I didn’t need to add any more visualisation elements in the way the original author did, so you might say mine is more successful because it passes across information in a simpler, more succinct transaction.

UK Netflix Movie Finder

Click on the image below to see my submission for the “Mobile” IronViz contest – it should work on mobile, tablet and desktop.

desktop

Ideation

I’ll be honest, I didn’t start this viz until the Friday night before the contest ended on Sunday. My wife was out on the Friday and Saturday evenings, so I knew I had a few hours…but with little time I didn’t want to waste it producing a visualisation of something trivial. Instead I wanted to produce something useful – an “app”, something I’d actually use.

For a long time now I’ve wished for something that would save me the job of hunting down decent movies on Netflix. I have a Netflix subscription, but sometimes hidden gems can be hard to root out. I have a good track record of finding good movies, though it takes me a long time to hunt through reviews online as I browse Netflix. It’s become a running joke that I’ll spend longer picking a movie than actually watching it.

If only there was an app or website that would give me both combined…..

Friday Night is Data Night

So, with my idea in mind, I spent Friday night trying to get data. After some googling there weren’t many easy options, so I found myself installing Python to try a script to scrape Netflix – but the hours went by without luck (all while watching Butch Cassidy and the Sundance Kid on Netflix). The lesson: don’t try to learn Python in an hour.

Head in hands and running out of ideas, I googled some more and found www.cinesift.com – it was purpose-built to do what I needed and had all the data I needed in its search (wish I’d found this site ages ago!). But how to get at the data?

Brute force searching seemed the best option so I ran a search for Netflix UK movies and then scrolled down and down and down to populate the dynamic page…then Ctrl+A, Ctrl+C and Ctrl+V gave:

films-text

Ouch….I also took the HTML source for use later. Time for sleep….

Saturday Night Feels Alright

Now the previous night had proved a mixed bag, so I turned to my trusty companion Alteryx to solve my data woes:

alteryx

What does this spaghetti do? Well, it takes the pretty horrible text file and turns it into rows and columns of proper data. The trick was to assign a row ID to each row, restarting at the “Play Trailer” section that marks a new movie; then I simply needed to crosstab and rename the data. It also pulls the movie images out of the HTML source using Regex and finds their URLs. It combines the two and then splits multiple genres and casts / directors into separate fields (in the end this last step wasn’t needed, but I thought it might be – without it the Alteryx module is massively simplified, removing the whole last row).
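If you’d rather not reach for Alteryx, the restarting-row-ID trick translates to a few lines of Python (the “Play Trailer” marker comes from the scraped text; the record layout here is simplified):

```python
def group_movies(lines, marker="Play Trailer"):
    """Split a flat scrape into one record per movie.

    A new record starts each time the marker line appears,
    mirroring the restarting row ID in the Alteryx workflow.
    """
    movies, current = [], None
    for line in lines:
        if line.strip() == marker:
            current = []  # marker opens a new movie record
            movies.append(current)
        elif current is not None:
            current.append(line.strip())
    return movies
```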

Then it was over to Tableau. I decided to design for mobile first and, over a couple of hours, quickly produced a few initial drafts of pages; then, as it was getting late, I posted them to my colleagues for comment.

Next morning, while I was getting the kids ready to head out, the comments started coming back:

comments.png

 

I love that I have access to such a great and diverse range of opinions and talent from my peers at The Information Lab. As you can see I got loads of useful feedback – if you want to make a visualisation better just share it as much as possible, ideally on a collaboration tool with image commenting so people can highlight their comments with the corresponding piece of the image.

Sunday Night Polish

Tonight, Sunday, was all about acting on the feedback and building the Desktop and Tablet views. I designed a background theme for the visualisation (a very quick piece of Photoshop work with the Netflix logo), which I incorporated into the Desktop version as the black left panel – but I soon realised there were some limitations with the Device Designer in this initial 10.0 release of Tableau.

Firstly, changing the background of Filters and Parameters alters them for ALL devices. Ouch. That meant ones overlaid on black looked odd on mobile when overlaid on white. Normally a quick solution would be to add a Container and colour it black, but in device mode you can’t format objects…grrr. I was getting frustrated by my lack of knowledge of this new feature and its limitations.

It was hard work fitting the phone layout to different-sized phones. The lack of real estate means compromising between design and functionality: every aspect of the visualisation – text, filters, logos – needs justifying in terms of space. I loved the challenge that working on mobile provided, and I hope it makes people entering the competition focus on simple (KISS) visualisations.

I ended up working on the smallest device and then checking it resized okay onto larger screens. As you can see below the differences are quite big depending on the phone.

In the end I decided the best approach was to switch my designs to Floating to overcome this limitation…while not ideal it did allow me to work round most of the problems. However images needed some tweaking as they expand / contract using Fit Width / Fit All.

Anyway, I got it done, so I’m happy – and before midnight too. All in all I remain pleased with what was just around 12 hours’ work!


“Fitted” Gantts in Tableau

The Challenge

During Makeover Monday this week (week 22) I came across a problem: I needed to produce a Gantt chart for a huge number of overlapping dates. A Gantt was really the only way to go with start and end dates in the data (in the back of my head I’m thinking Mr Cotgreave will be loving this data, given his fascination with the Chart of Biography by Priestley), and I was fixated on showing the data that way (I blame Andy), but everything I tried in Tableau left me frustrated.

Jittering left wide areas of open space and no room for labels, and even zooming into one area would leave lots of the data hidden.

2016-05-30_21-58-59

I knew what I wanted to do: neatly stack / fit the bars in a decent arrangement to optimise the space and show as much data as possible at the top of the viz. The original author linked in the makeover had done just that:

Now, Makeover Monday usually has a self-imposed “rule” that I tend to adhere to – spend an hour or less (if I didn’t stick to this I could spend hours) – but here I was, half an hour in, without any real inspiration except an idea I knew wasn’t possible in Tableau. It was a challenge, and to hell with the rules, I do like a challenge – especially given the public holiday in the UK meant I had a little time.

The Algorithm

So I turned to Alteryx – but how to stack the bars neatly?

Firstly I needed a clean dataset, so I fixed some of the problems in the data (blank “To” dates and negative dates) using a few formulas, and then summarised the data to just a Name, From and To date for each life.

Algorithm-wise, I wanted to create a set of discrete bins, or slots, for the data. Each slot would be filled as follows:

  1. Grab the earliest born person who hasn’t been assigned a slot
  2. Assign them to a slot
  3. Find the next person born after they die, and assign them to the same slot
  4. Repeat until present day

In theory this would fill up one line of the Gantt. Then I could start again with the remaining people.
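The same greedy packing can be sketched outside Alteryx like so (the names and dates below are invented for illustration):

```python
def assign_slots(people):
    """Greedy 'fitted Gantt' packing.

    people: list of (name, born, died). Each pass fills one slot with
    non-overlapping lifespans: take the earliest unassigned birth,
    then the next person born after that death, and so on.
    """
    remaining = sorted(people, key=lambda p: p[1])  # earliest born first
    slots = []
    while remaining:
        slot, last_death, rest = [], None, []
        for name, born, died in remaining:
            if last_death is None or born > last_death:
                slot.append(name)                    # fits in this slot
                last_death = died
            else:
                rest.append((name, born, died))      # try a later slot
        slots.append(slot)
        remaining = rest
    return slots
```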

An iterative macro would be needed, because I would step through the data, then perform a loop on the remainder. First, though, I realised I needed a scaffold dataset, since I needed every year from the first person (3100 BC) to the present day.

I used the Generate Rows tool to create a row per year, and then joined it to my Name, Birth, Year data to create a module that looked like:

2016-05-30_22-10-07

Data:

2016-05-30_22-11-17

I’d fill the “slot” variable in my iterative process. So next up my iterative macro.

Translating the above algorithm I came up with a series of multi-row formula:

2016-05-30_21-29-41.png

The first multi-row formula would assign the first person in the dataset a counter, which would count down from their age. Once it hit zero it would stay at zero until a new person was born, at which time it would start counting down from their age.

The second multi-row formula would then look for counters that had just started, to work out who had been “assigned” in this slot, and give them the iteration number of the macro – i.e. the first run sees everyone assigned going into slot 1, the second into slot 2, etc.
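Those two multi-row formulas can be mimicked directly. A sketch of one macro iteration, assuming the scaffold is an ordered list of (year, name, age) rows with name set to None in years where no unassigned person is born:

```python
def one_slot(scaffold):
    """One macro iteration: the countdown-counter logic.

    scaffold: ordered (year, name, age) rows, with name=None where
    no unassigned person is born that year. Returns the names this
    slot claims.
    """
    counter, claimed = 0, []
    for _year, name, age in scaffold:
        if counter == 0 and name is not None:
            claimed.append(name)  # counter restarts from this person's age
            counter = age
        elif counter > 0:
            counter -= 1          # still inside the previous lifespan
    return claimed
```

Each iteration runs this over the still-unassigned people, yielding slot 1, slot 2, and so on.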

Perfect! Now to run it and attach the results to the original data:

2016-05-30_22-19-25

Easy peasy Alteryx-squeezy. That took me 30 mins or so, really not a long time (but then I have been using Alteryx longer than you….practice makes perfect my friend).

The Viz

So now back to Tableau:

2016-05-30_22-23-24

Neat – progress! Look at how cool those fitted Gantt bars look. Now what…

Well, I needed to label each Gantt bar with the individual’s name, but to do that I really had to make my viz wide enough to give each one space…

2016-05-30_22-25-06

The labelling above is on a dashboard at the maximum 4,000 pixels wide…we need wider! But how? Tableau won’t let me…

Let’s break out the XML (kids don’t try this at home). Opening up the .twb in Notepad and….

2016-05-30_22-27-58

I changed the highlighted widths and, lo and behold, back in Tableau – super wide!

Now I can label the points but what do I want to show – those Domain colours look garish….

So I highlighted Gender and….pop. Out came the women from history – nice story I think to myself. I decided not to add a commentary, what the viewer takes from it is up to them (for me I see very few women in comparison to men).

Other decisions

  • I decided to reverse the axis to show the latest data first and make the reader scroll right for the past; mainly I did this because the later data is more interesting
  • I decided to zoom in at the top of the viz, generally I expect viewers won’t scroll down to the data below but while I toyed with removing it I decided that leaving it was a slightly better option. The top “slots” I’m showing are arbitrarily chosen but I feel this doesn’t spoil the story.
  • I decided to add a parameter to highlight anything the user chose (Gender or Occupation) – tying it into the title too.
  • I fixed AD / BC on the axis using a custom format

2016-05-30_22-41-28

Conclusion

So I spent a couple of hours in total on this, way more than I’d planned today, but that’s what I love about Makeover Monday – it sets me challenges I’d never have had if I hadn’t been playing with the data. I’ve not seen this done in Tableau before, so it was a fun challenge to set myself.

Click on the image below for the final viz

2016-05-30_21-17-19


My Top 10 Blogging Tips

In this, my first blog post of 2015, I want to talk about blogging and offer some tips for those new to blogging in the Tableau or Alteryx community.

Introduction

Last year I wrote 40 blog posts, spread across this site and The Information Lab blog. I covered a range of subjects, from simple visualisations through to explainers of specific functionality (e.g. Tableau permissions), and I also wrote a few opinion / commentary pieces. I’ve also just started a new site to host my more specific “BI and the Business User” blog posts. All this adds up to a lot of words, and I’ve learnt a lot, so here are some of my top tips for blogging in the Tableau or Alteryx space.

(image by Alex Martinez - click for details / licence)

Tip #1

Do it for the love of it

This is the single most important tip; if you read no further, please take this in. If you don’t feel it – don’t post it.

Don’t start blogging for any reason other than because you enjoy it. Blogging because you want to be the next Tableau Zen Master or the next Alteryx ACE, because you want to change careers, or even because you want to impress that hot girl you saw at TCC isn’t going to work in anything but the short term. Your enthusiasm will soon fizzle out, the time between posts will get longer, and sooner rather than later you’ll stop. Unless you are very, very single-minded, I guarantee this will happen, and you’ll be disappointed with yourself for trying.

Likewise with a given post: don’t post to get views, retweets or likes; just post what interests you – the rest will come as a result of that.

Blogging shouldn’t be a chore; you’re choosing to spend your spare time doing it, after all. If it’s a chore, go and do something else – you owe yourself that.

Tip #2

Don’t Set Targets

Targets are unhelpful when you start blogging, and may put you under undue pressure to post just to meet them. How do you know you’ll have enough time to post, say, once a week before you’ve tried? Believe me, it’s harder than it sounds.

Instead of setting targets, just post when you have time (or when you have made time), and keep a backlog of subjects to ensure you make the most of that time.

Tip #3

Keep a notebook for subjects

A virtual or physical notebook can really help record all those ideas you have for blogs. Those tips you come across in the course of a working day, or in a conversation with a colleague or Twitter friend, need recording quickly so they don’t get lost. Sometimes a series will pop out naturally from this backlog and you can string together a set of posts; other times you’ll find yourself with a spare 30 minutes and be able to pick a short subject and get a draft done there and then. Unless you record your ideas, those opportunities can go begging.

I currently have about 20 ideas stored away; some will never get written, some I plan to write about next week, and others will probably be covered by other bloggers before I find time. Before I started recording them, I’d sit down to write a blog and wonder what to write about.

Tip #4

Think carefully about where the time will come from

I estimate that on average I spend about a day in total on each of my blog posts; some have taken significantly more, others much less, but roughly it’s a day. From the inception of an idea, to building a viz or module, through tidying it and making it public, and then writing the actual text for the blog (not to mention proof-reading and editing), there’s a lot of work. So for me that’s about 40 posts x 8 hours = 320 hours of work over the last year – roughly 6 hours a week.

I’m fortunate that I also blog for work, so some of those hours can happen in my working day, but more often than not even the “work” blogs are done in my personal time (after all, I enjoy it). Therefore you have to find time in your week to fit in those hours – for me that’s on trains or in hotels, or while my wife is out in the evenings. Everyone is different, but try to think about where your time will come from; I know some people blog in their lunch hours, others on their daily commute, and I imagine others are doing it off the side of their desks at work. Regardless of where you find the time, be sure to acknowledge it needs to be spent – there’s no shortcut.

If that time is coming out of your family time, then I’d also recommend you speak to your partner and explain your motivation – if your partner isn’t from the community, they’re unlikely to understand why you’re spending time away from them to write a blog post.

Tip #5

Get social

You’re going to need readers from somewhere. “Build it and they will come” doesn’t work if no-one can find it, so do some signposting. Get a Twitter account and start posting links to your own posts, as well as to other blogs you read – that way others won’t think twice about retweeting your content. Don’t be afraid to share a post several times through the day either; individual tweets can quickly get lost, and you need to cater for different timezones. Google+, LinkedIn and Facebook are also great ways to connect and share your posts.

Tip #6

You don’t need a niche, but it helps

For the last year I’ve blogged about anything and everything; whether I have expertise or not, I’ve felt I can still offer something by sharing my thoughts. This approach has worked for me, but after 40 posts in the last year I’ve started feeling that a bit more specialism might help me focus. I’ve noticed other bloggers take the same approach too, moving from general to specialist over time.

Having a “specialism” (not necessarily one you’re an expert in, more just a specific subject) helps you find your space in the community. Consider your likes and passions, look at what others post, and look for gaps. Finding that specialism can help you stand out and give your readers a reason to seek you out on specific subject areas.

Tip #7

Craft blog posts around any community themes

Your posts will get extra attention if you tap into current affairs, or post around specific themes in the community; e.g. if it’s Tableau Politics Month, then a viz and post around politics is clearly going to get more publicity from the Tableau Public team.

Tip #8

Use your page stats

All blogging platforms will give you information on your views and most popular posts, so use that information. What worked well? What didn’t? Be critical, analyse the posts, and make sure you learn something.

Did you share your post on Twitter on a particular day or time? Did you get a retweet from a specific person? Did your subject tie in with a given theme? Is it a particularly useful tip you wrote about? Whatever the stats tell you worked, do more of it! Likewise, if something didn’t get many views, use it as a learning experience, or simply ask other bloggers – they’ll be more than happy to offer hints, tips and some friendly critique.


Tip #9

Revel in the rewards

If you followed my first tip and you’re doing it for the love of it, that doesn’t mean you can’t enjoy all the views, retweets and feedback you’ll get. The communities we blog in are some of the greatest around, and all content is welcomed, from beginners to the experienced – you’re sure to get some shares and feedback.

For me this is the lifeblood of the whole experience, those retweets and mentions help justify what I enjoy, and give me motivation to keep writing about the things I enjoy the most.

Tip #10

Enjoy it!

Walk to your local coffee shop and grab a wedge of cake; head to the local pub and grab a beer; or head to your office with a glass of wine. Whatever works for you. I cannot stress enough that blogging shouldn’t be a chore.

Your Tips?

What motivates you to blog? What works for you? Do you have a schedule, or like me do you blog when you find time? I’d love to hear your thoughts.