MeasureCamp London VIII
On Saturday 5 March I attended my first industry event as a UX designer. Although the event wasn't directly about UX, I believe that digital analytics is a crucial part of the UX process that can sometimes be undervalued.
MeasureCamp is an 'unconference', which means that the schedule is created on the day and all the speakers are attendees who propose sessions that they may or may not have prepared. This could be anything from a typical presentation for a room of 100 to a group discussion with a few others.
This format, together with the fact that there were only around 250 participants, meant the atmosphere was extremely collaborative and accessible to everyone, even newbies like me!
Accurately measuring engagement with attention analytics
The first session I went to was by Ed Brocklebank about how Google Analytics does such a 'crap' job of measuring content engagement through the ‘time on page’ metric.
This is because Google Analytics only calculates it as the time between the pageview hits sent to its servers. In practice this means it has no idea whether you have left the page open in a background tab or stopped reading, and it cannot measure time at all on the last page of a user's session (the exit page), because there is no subsequent pageview to compare against. Any use of this metric therefore needs to be taken with a pretty large dose of skepticism.
So what's the solution? Ed proposed a metric he termed 'attention minutes', which involves sending a hit every 5 seconds while the user is active on the page. This makes a lot of sense and doesn't seem too difficult to implement, although the 500 hit per session limit on Google Analytics is something to consider - at one hit every 5 seconds you would reach it in under 45 minutes of active reading, before counting any other hits.
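To make that concrete, here is a rough sketch of how a heartbeat like this could be wired up in the browser. This is my own illustration rather than Ed's implementation, and `sendAttentionHit` is a placeholder for whatever analytics call you actually use (for example an event hit to Google Analytics).

```typescript
const HEARTBEAT_MS = 5000;   // one 'attention' hit every 5 seconds
const IDLE_LIMIT_MS = 30000; // treat the user as inactive after 30s with no input

let lastActivity = Date.now();

// Any interaction counts as activity
['mousemove', 'keydown', 'scroll', 'touchstart'].forEach((evt) =>
  document.addEventListener(evt, () => { lastActivity = Date.now(); }, { passive: true })
);

// Placeholder: swap in your real analytics call here
function sendAttentionHit(seconds: number): void {
  console.log(`attention heartbeat: +${seconds}s`);
}

setInterval(() => {
  const tabVisible = document.visibilityState === 'visible';
  const userActive = Date.now() - lastActivity < IDLE_LIMIT_MS;
  if (tabVisible && userActive) {
    sendAttentionHit(HEARTBEAT_MS / 1000); // credit another 5 seconds of attention
  }
}, HEARTBEAT_MS);
```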
Overall, it was a very good discussion around a topic that I personally did not know was so broken, with clear implications for better understanding content engagement.
Accurate testing & forecasting for stakeholders
The next session, presented by Rupert Bowater, was a little more specific but essentially had three clear messages:
- run multiple cycles of your test to increase accuracy
- weight results of the test across the year by understanding the wider context in which the test took place
- give the results as a range rather than an exact figure to manage expectations on the forecast (see the sketch after this list)
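As a simple illustration of that last point, here is a sketch (my own example, not from the talk) of turning a raw conversion rate into a range using a normal-approximation confidence interval, so stakeholders see "4.4%-5.2%" rather than a falsely precise "4.8%".

```typescript
// Report a conversion rate as a range rather than a single figure,
// using a normal-approximation 95% confidence interval (z = 1.96).
function conversionRateRange(conversions: number, visitors: number, z = 1.96): [number, number] {
  const p = conversions / visitors;
  const stdErr = Math.sqrt((p * (1 - p)) / visitors);
  return [p - z * stdErr, p + z * stdErr];
}

// e.g. 480 conversions from 10,000 visitors
const [low, high] = conversionRateRange(480, 10_000);
console.log(`${(low * 100).toFixed(1)}%-${(high * 100).toFixed(1)}%`); // roughly 4.4%-5.2%
```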
I don’t think much of what was said was groundbreaking in this talk, but there were some solid recommendations to bear in mind.
Session replay is vital!
The session before lunch was always going to be a good one. I had attended the training workshops the day before and gone to the afternoon session held by Craig Sullivan, so I knew it would be entertaining. What I really like about his delivery is not just the no-nonsense approach but the clear, actionable insights you can take away from it. As someone new to this game, it is incredibly useful to hear how someone with vast experience has solved the same problems and is passionate about sharing this with everyone else.
This session covered the use of replay tools that allow you to grab videos of real user behaviour on a website without the need to set up user tests. This allows you to observe actual visitors in huge numbers to find the real problems.
“Who cares what the 4% of users who are converting do, let’s find out why the other 96% don’t!”
As well as replay tools, he also covered VOC (voice of customer) tools. This was interesting because I have always been skeptical of these - mainly because I personally always ignore questions that appear on websites I use. My concern would be that only a certain type of user replies to these, so there is a danger of skewing your design to appeal only to them.
However, as Craig pointed out to me afterwards when I asked about this, it’s another piece in the puzzle that helps you understand the users. I think it’s important to bear this in mind for all types of analysis - rather than blindly follow the feedback, you need to consider the context and limitations of each type to provide as clear an overall picture as you can.
Google Analytics integration with Google Sheets
After lunch I attended a very practical session run by Edward Upton about how to use the Google Analytics add-on for Google Sheets to create custom dashboards. It was an interesting session which gave me loads of potential use cases - in fact I had chatted with Ed during lunch about my surprise at how little data insight I have come across, so anything that makes it easier for others to access their data in a clear, accessible format is useful.
Google Analytics device detection is broken!
The second session I attended by Craig Sullivan was about analysing device usage to discover potential experience issues. He had covered some of this the day before, but this was extremely enlightening for me because I had always ignored the device usage reports in Google Analytics and dismissed them as useless for any meaningful analysis.
Craig provided a model for how to perform this analysis and also clarified the bits where Google Analytics was falling short and what we can trust from those metrics. Again, the most insightful bit of the session (beyond the practical takeaways) was the idea of not blindly looking at the numbers:
“You also need to consider the person behind the device — Mac laptop users might have more money!”
It never occurred to me that broken device experiences would be an issue - after all, that is what browser testing is for. However, he gave examples of some huge companies he has helped that you would never have thought would be affected. In fact, many of them found bugs through this method that were costing them more in missed conversions than their annual IT department budgets.
Google Analytics automated tools for CRO & more
The following session (again by Craig) was a discussion about a tool he is building that will identify the broken device experiences described above without the need for hours of data analysis. He outlined the current plans for it and asked people for suggestions and thoughts. Very much looking forward to this tool being released later in the year (hopefully with my suggestion of integrating session replay tools!).
Measuring gaps in the user experience
One of the few explicitly UX sessions of the day (although I would argue the whole point of digital analytics is to inform UX) was given by Charles Meaden.
His talk served as a good, practical guide for where to find usability issues. Following on from Craig earlier, he said the first step of any site analysis should be looking at device usage to highlight potential issues.
Another point I took away from this session was a way of understanding how users navigate a site through Google Analytics. This is something I had to try and analyse for a recent IA review, and I fudged together my own way of looking at it. Charles' method wasn't very different from mine, but much simpler! To work out how often a page is reached through the site's navigation rather than as a landing page, simply subtract 'entrances' from 'unique pageviews'.
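A quick illustration of that calculation, using entirely made-up page paths and figures:

```typescript
// unique pageviews minus entrances = views reached via internal navigation
interface PageStats {
  page: string;
  uniquePageviews: number;
  entrances: number;
}

// Hypothetical figures as they might appear in a Google Analytics report
const report: PageStats[] = [
  { page: '/pricing', uniquePageviews: 1200, entrances: 400 },
  { page: '/contact', uniquePageviews: 300, entrances: 250 },
];

for (const row of report) {
  const internalViews = row.uniquePageviews - row.entrances;
  console.log(`${row.page}: ${internalViews} views came from site navigation`);
}
```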
Compared to What?
The final session of the day was a more strategic talk about how we practice data analytics that I will sum up in a few key takeaways:
- trust the data: we know its limitations, but perfect data is impossible, so believe in your findings
- the more we focus on granular data, the less interesting it is to the business
- data doesn't do anything on its own; actions are what matter - any data analysis must come with insights, which in turn must come with recommendations for the business
I think that last point is a fitting way to end my summary of the day. There is no point doing analysis for the sake of it; it must aim to help the user or the business achieve their goals.