In my last couple of posts I discussed salience models and similar techniques: methods that are frequently used as a proxy for eye-tracking and sometimes even passed off as eye-tracking itself. They aren’t the only offenders, of course. Mouse tracking is another method, built on the fundamentally incorrect assumption that tracking the mouse on a desktop is equivalent to tracking the gaze. In their Google paper, Rodden & Fu (2007) conclude that “Without the eye data, we cannot determine the exact sequence of events”, and that’s because we simply do not move the mouse everywhere we look. There is clearly a correlation between clicks and eye-movements, but it’s all the good stuff in between the clicks (and even after the clicks, if a page is slow to load) that you’re potentially missing, so you need to track and analyse BOTH if you want the full story.
Web-cam tracking is another methodology that, while at least attempting to track the eyes, has until very recently had very low levels of accuracy and precision, in part because of its reliance on visible light for eye-detection. Another issue is that it’s frequently used for self-administered tests, which introduce a huge number of unknown potential confounds to a study and, at the very least, require much, much bigger samples to separate the signal behaviours of interest from the noise. Interestingly, web-cam eye-tracking is just about getting up to scratch at precisely the time that infra-red eye-tracking is becoming cheap enough to equip large panels with affordable eye-trackers for home testing, and with eye-tracking now being built into some laptops it may well be that web-cam tracking will be confined to studies where sub-degree accuracy is not so important.
But I really wanted to end the year with a post on something a little more positive than methodology bashing! After all, as I mentioned to a colleague last week, some of the worst eye-tracking research I’ve ever seen was done with actual, research-grade eye-trackers! And that’s because good research is not JUST about the tools you choose; after all, you can hand me a Stradivarius, but the moment I start playing it’s still going to sound like a cat in a food processor!
Good research starts with the decisions you make before you collect any data at all and, in many cases, some bad, or lazy, decision making can invalidate the results from the outset. So, to end the year, here’s my suggestion for 5 simple New Year’s Resolutions for those of you who ARE doing eye-tracking research and want to start getting some real value from your efforts. Don’t worry, there’s nothing too challenging here, and I promise not to mention “looking beyond the heat-map”… doh! These tips are simply steps you can take before you hand a single consent form to your first participant, and yet all five of them will pay back big time!
My first resolution is to define specific research questions, and it stems from the many vague research briefs I see. Statements like “we want to understand shopper behaviour” or “we want to know what our customers think of our new app” abound, and of course anyone doing research has these types of desires, but when designing any study, not just an eye-tracking one, it really pays to break this down into what you actually mean. For example, are you interested in how quickly and easily your products are located or recognised on a shelf? Are you interested in what products are being compared with yours and whether the messaging on the packaging is facilitating decision making or even being seen? Are you interested in whether those shelf wobblers that work so well in peripheral vision are priming a shopper to find your product, or possibly occluding your packages when the shopper gets within reaching distance?
The importance of defining your questions to this level of specificity is that it will help you with the next 3 resolutions and, moreover, it is essential in answering the most important question for ANY eye-tracking study: is eye-tracking going to tell me anything useful? If you’re trying to use eye-tracking to tell you how shoppers FEEL about a brand or what they RECALL about a website, in all honesty, you’re probably going to be disappointed. Eye-tracking alone cannot answer questions like these. It can help you to understand whether the visual signals needed to elicit emotional responses, or to be encoded in memory, are actually being seen by the participant at the right time or for long enough, but you will need to supplement your eye-tracking with other methodologies to get those answers completely, and in some cases this might remove the need for eye-tracking altogether. It is for this reason that I always encourage OUR customers to involve us early in the study design phase, so we can identify precisely what they will get out of a study and set expectations appropriately.
Anyone familiar with eye-tracking, or this blog, will know by now how important the wording of a task can be in an eye-tracking study, and getting that wording right is my second resolution. For some people this sensitivity might be a reason to avoid eye-tracking altogether; after all, if a methodology is that sensitive it can be a little scary. But I want to reassure you that task instructions are nothing to be afraid of: the strong association between task and eye-movements can be used to your advantage when designing a study. For example, asking a shopper to “find <your product>” is perfect when you’re comparing different pack designs or planograms, since it’s a very focussed instruction and will really help to identify the differences between your designs/layouts. It doesn’t matter that you’ve directed the participant in a task like this, because you direct everyone equally. However, if you want to know whether an unprompted shopper would ever see your product in the first place, it’s a terrible task: you’ve both primed them with the product or brand name and ensured that they will just keep going until they find it, something that most shoppers wouldn’t do in the real world.
Describing a scenario can be a great way to test more realistic behaviour because it puts the participant in an appropriate state of mind. Then, by changing even a couple of words in the scenario, you can test how some of the biases your shoppers bring to the shelf are affecting whether they think of your product as “best in class” or “value for money” or “new and innovative”. With eye-tracking the task description is your friend, but only if you treat it well. Take it out for a spin by testing it on a couple of people first to make sure it’s being understood the way you want it to be understood, and NOT the way you, as a market researcher/designer, think it SHOULD be understood.
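To make the “change a couple of words” idea concrete, here is a minimal Python sketch of counterbalancing scenario wordings across participants so each framing is tested equally often. The scenario texts, function name and IDs are my own illustrative assumptions, not from any particular study or eye-tracking toolkit:

```python
import itertools

# Hypothetical scenario wordings that differ by only a couple of words;
# these texts are illustrative examples, not real study materials.
SCENARIOS = [
    "You are shopping for the best-in-class shampoo for yourself.",
    "You are shopping for a value-for-money shampoo for yourself.",
]

def assign_scenarios(participant_ids):
    """Cycle through the scenario variants in order of arrival, so each
    wording is shown to an equal (or near-equal) number of participants."""
    variants = itertools.cycle(SCENARIOS)
    return {pid: next(variants) for pid in participant_ids}
```

With a balanced assignment like this you can then compare gaze metrics, such as time to first fixation on your pack, between the two framings.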
If there’s one thing I know as a vision scientist, it’s that a slight tweak to a visual stimulus can result in a big change in perception and subsequent behaviour. With a lot of commercial research, the stimuli have already been iterated several times by a design agency and maybe even focus-grouped to be the best they can be, but, and this is my third resolution, that doesn’t mean you can get away with not thinking about the stimuli for your study before you plug in the eye-tracker! There are many different ways you can present those stimuli, and it’s important to think about the consequences of your presentation method. If you’re doing findability/planogram testing, are you going to use real products and fixtures, projected CGI images, or, these days, even immersive virtual reality? Have you allowed for different directions of approach to your fixture? Have you accounted for the luminance differences between eye-level and bottom shelf in your CGI? If you are testing full-size pack designs on screen, have you considered reducing the effects of central bias by jittering your start position/pack placement? I could go on and on with examples, but the take-home message here is clear: you need to pick a presentation method that’s appropriate to those focussed research questions from resolution number 1. If you’re interested in findability, context needs to be built around your product, and as realistically as possible; if you’re interested in messaging, then the pack image needs to be big enough for the participant to be able to read it; and so on. If the presentation method does not support the task you’re asking your participants to complete, then you’re simply not testing what you think you’re testing, and it will show in your results.
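As a minimal sketch of the jittering idea, assuming an on-screen presentation in pixel coordinates (the function name, screen size, pack size and jitter range below are my own illustrative assumptions, not taken from any eye-tracking package):

```python
import random

def jittered_position(screen_w=1920, screen_h=1080,
                      pack_w=400, pack_h=600, max_jitter=150):
    """Return a random (x, y) top-left position for the pack image,
    offset from screen centre by up to max_jitter pixels in each axis.

    Varying the placement across trials reduces the central bias that
    arises when stimuli always appear exactly where the eyes start."""
    centre_x = (screen_w - pack_w) // 2
    centre_y = (screen_h - pack_h) // 2
    x = centre_x + random.randint(-max_jitter, max_jitter)
    y = centre_y + random.randint(-max_jitter, max_jitter)
    # Clamp so the pack always stays fully on screen.
    x = max(0, min(x, screen_w - pack_w))
    y = max(0, min(y, screen_h - pack_h))
    return x, y
```

Calling this once per trial, rather than reusing a fixed centre position, means first fixations on the pack reflect the stimulus rather than the participant’s default gaze position.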
Recruitment fees and incentive payments probably account for the biggest single cost in most decent-sized eye-tracking studies, and here I’m talking about ones where the sample size is actually bigger than the research team! The recruitment fees in particular can be quite high because studies typically require specific participant attributes, including not being a regular research participant. The relatively high cost of recruitment sometimes leads researchers to try to avoid this spend and to use other, seemingly cheaper, techniques for accruing participants, which brings me to my fourth resolution: don’t cut corners on recruitment.
Street recruitment is one example, and in some cases it is unavoidable or even desirable. However, eye-tracking places extra constraints on participant selection, and street recruitment makes it impossible to separate the screener questions from the data-collection session and so avoid priming participant behaviour, which is why I recommend avoiding it for eye-tracking studies if at all possible. It is, however, vastly superior to other cost-avoidance methods such as recruiting from within the company. We know from the cognitive-bias literature that we humans cannot avoid being influenced by prior knowledge when it comes to decision making, and hey, let’s face it, that’s probably at least part of what you are testing in your study! Any internal employee is considerably more focused on your brand and products than almost anyone you could recruit from the street, so the money you save by adopting this approach you are almost certainly throwing away in terms of ROI on the research.
OK, so maybe I misled you a little when I said all 5 resolutions could be applied before you start data collection, because the one exception is this: don’t wait until the day the research starts to do a run-through of the study. There are several reasons for this, which include testing how long it takes to run a single participant through the protocol, your familiarity with the equipment and software (which may well have changed since you last used it), access to the necessary power sources at the data-collection location, and detection of possible WiFi interference, to name but four!
For me the real reason for doing this is that I have never yet run an eye-tracking study where every participant turned up on time and there were no issues with calibration. Putting your participant at ease is one of the most important aspects of good data collection, and if you’re getting stressed because you’re running behind schedule, you can be absolutely certain that your participant will pick up on this. The best way to avoid schedule impact from the unexpected is to have tried out everything beforehand. If something goes wrong after that, and it still might, you can at least tell yourself you did your best, and you won’t be adding to your stress with a bit of self-flagellation!
So there you go. Five golden rules to try out in the New Year that will help you eye-track like a pro. Of course, you could simply call in a pro, and we’ll always be happy to help!
Happy Christmas to all our customers (past, present and future) and especially to those of you who regularly read the blog and Tweets. We finally have a new website under construction where these posts will hopefully look a lot prettier in 2017 and you’ll even be able to join in on the discussion! Till then happy holidays!
Contact us using the form below or by calling us on +44 (0) 118 900 0820.