Research, in context!

25 May 2017 / Tim Holmes

In my previous post I talked about the role that context plays in influencing behaviour and decision making. Not just the visual context or environment, but the mindset and mood of the person making the decision. Neuroscience and behavioural economics tell us that most of these influences operate at an unconscious level, meaning that slight changes in them can nudge behaviour without any awareness. This is not news, or at least it shouldn’t be, to marketers, advertisers and designers, who regularly attempt to employ these effects. Of course the same unconscious biases that trigger decision making at the point of sale also operate in the boardroom, and so the belief that someone will correctly answer a question about their behavioural motivation (Fundamental Attribution Error / Choice-supportive Bias), or even resist unconscious cues from the researcher conducting the study (Observer-expectancy Effect), often results in an over-reliance on self-report measures when testing designs using qualitative methods like focus groups.

Many of the companies I work with understand the unreliability of such sources of information and have begun using additional methods like eye-tracking and other biometric measurements to record unconsciously generated signals that can even be analysed without the need for any participant discussion at all. This is the world of consumer neuroscience, or so-called “neuromarketing”, and it is filled with potential, but it doesn’t mean the influence of context can be ignored. In fact it makes the context even more important, since it is almost certainly directly influencing every unconscious data point you record. In this post I’m going to discuss some of the limitations contextual cues can impose on eye-tracking studies in particular, and I’m going to introduce a new way to mitigate some of them, one that can generate insights that were previously difficult or prohibitively expensive to obtain.

Much of the commercial eye-tracking going on in research agencies and retail labs up and down the country is concerned with how the shopper navigates a store, searches a category or fixture and ultimately makes a decision to purchase.  Let’s stop to think about this for a minute.  Even if the shopper has a clear idea of exactly what they are looking for, they will be seeing the product in a number of different contexts as they complete their journey:

  • At a distance they will see their product surrounded by many related and semi-related products (the category), but they will not be able to read prices or labels and will be reliant on signage, large product blocks and sign-post brands to recognise the category at all.
  • As they approach the fixture their visual field will become increasingly dominated by related products, and they will be able to recognise different brands based only on colour, shape and large design elements. Recognition and familiarity will start to come into play now, but their attention will also be highly influenced by visual salience in the form of brightness, contrast, colour and motion, which will involuntarily guide their gaze to the most conspicuous regions of the fixture (a toy sketch of such a salience map follows this list).
  • Once at the fixture, the shopper will typically turn to face it, changing their perspective, and by now the top and bottom shelves will be outside of the visual field unless the shopper moves their head up or down. This contributes to the well-known eye-level placement advantage, but adjacent shelves and fixtures can still influence attention through salience, which operates well in peripheral vision, albeit less in response to colour than to contrast, brightness and movement. It’s also worth stating something “obvious” here: packs on lower shelves are now presenting their tops to the shopper rather than their fronts, which can radically alter their ability to attract attention and influence behaviour.
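To make the salience idea concrete, here is a minimal sketch of a centre-surround contrast map for a shelf photo. It is a crude proxy only: full salience models also weight colour opponency, orientation and motion, and the filename and blur scales here are illustrative assumptions rather than anything from an actual study.

```python
import cv2
import numpy as np

# Minimal sketch: a crude luminance-contrast "salience" map for a shelf photo.
# Real salience models also weight colour opponency, orientation and motion;
# this simply highlights regions that differ strongly from their surroundings.
# "shelf.jpg" is a placeholder filename, not from the original post.

img = cv2.imread("shelf.jpg")
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY).astype(np.float32)

# Centre-surround contrast: difference between a fine and a coarse blur.
fine = cv2.GaussianBlur(gray, (0, 0), sigmaX=2)
coarse = cv2.GaussianBlur(gray, (0, 0), sigmaX=16)
salience = np.abs(fine - coarse)

# Normalise to 0-255 so the map can be viewed or overlaid as a heat-map.
salience = cv2.normalize(salience, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
cv2.imwrite("salience_map.png", salience)
```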

Of course even this is a simplistic view of context, because the journey the shopper took to arrive at that fixture will have exposed them to different, and potentially competing, products from other categories – chilled versus ambient, for example – and so the route, and importantly the direction of approach to the fixture, also form part of the context. Even standing in front of a single bay, where all those sought-after emotional associations with the brand start to compete with pricing and promotions, the type of fixture can have an effect, with scanning behaviour differing reliably between in-aisle bays and aisle-end gondolas.

All of this presents a problem for the creatives designing the packaging, as well as for the market researchers testing it, because it suggests that the only way to understand these effects is to research in an actual store. And in some ways that’s true! But of course that’s not always easy or cheap to do, and the dynamic nature of retail environments makes it extremely difficult to run a controlled study. This in turn means larger sample sizes and still more expense. So what’s the answer?

Mock stores and screen-based virtual reality both offer a partial solution to this problem, but come with some significant limitations. Mock stores are rarely large enough, or stocked well enough, to trigger real-world shopping behaviour. Meanwhile screen-based VR limits depth cues and viewing perspective, is frequently “on rails”, further constraining shopper behaviour, and the edges of the screen are a constant reminder that it just isn’t real, meaning the participant is always aware that they are taking part in a study and being observed!

Head-mounted VR (yes, the stuff of gaming on headsets like Oculus Rift and HTC Vive) presents a whole new way to conduct this type of research, one that is fully immersive, encourages natural behaviour and costs a fraction of most retail labs. In fact it’s portable enough that you could even intercept participants entering an actual store, bringing all that “shopper mission” context directly into your research, as well as the sounds and smells of the store, which would further enhance presence in the VR world.

Of course I wouldn’t even be mentioning VR if it wasn’t possible to collect eye-tracking, and other biometric, data in these environments. In fact, the headset makes a natural home for eye-tracking and actually removes one of the concerns of real-world studies, namely that the visible presence of an eye-tracker can itself influence behaviour, since most participants are completely unaware of the additional technology built directly into the VR headset to record their eye movements.

There is, of course, a choice of eye-trackers and VR headsets, and right now it’s still unclear which platforms will dominate the market in future, so investment in such technology might feel like a bit of a risk, especially given the lack of analytic support for VR in any of the major eye-tracking manufacturers’ software. This is the problem that we at Acuity Intelligence have been working on for the past year or more and, at the Marketing Week Insight Show, we recently launched AcuityVR, an eye-tracker- and VR-headset-agnostic platform for collecting and automatically analysing eye-movement data in immersive virtual reality.

The software leverages the data from the VR (Unity) environment to generate visualisations and analytics for everything in that environment, and in the Shopper Edition this includes signage, pricing, fixtures, SKUs and everything in between! What this means is goodbye manual coding, hasta la vista drawing dynamic areas of interest (AOIs) and adios waiting for your results, because they’re ready the moment you finish collecting the data! Heat-maps and opacity maps are instantly available, mapped to actual products and surfaces, so if a shopper picks a product off the shelf for close-up inspection, that data is mapped over the entire package: front, back and, yes, that top they couldn’t read when looking down at the bottom shelf!
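To give a feel for why that is possible, here is a minimal sketch, in Python rather than Unity, of the kind of aggregation that falls out once each gaze sample already records which scene object the gaze ray landed on (something a raycast against the 3D models can report). The GazeSample record and field names are illustrative assumptions, not the AcuityVR data format.

```python
from collections import defaultdict
from dataclasses import dataclass

# Sketch: when every gaze sample carries the name of the scene object it hit,
# dwell time per SKU is a simple aggregation rather than hand-drawn AOIs.

@dataclass
class GazeSample:
    timestamp: float      # seconds since recording start
    hit_object: str       # name of the scene object the gaze ray intersected

def dwell_per_object(samples: list[GazeSample]) -> dict[str, float]:
    """Sum the time between consecutive samples against the object being looked at."""
    dwell = defaultdict(float)
    for prev, curr in zip(samples, samples[1:]):
        dwell[prev.hit_object] += curr.timestamp - prev.timestamp
    return dict(dwell)

# Hypothetical samples for illustration only.
samples = [
    GazeSample(0.00, "shelf_edge_label"),
    GazeSample(0.02, "sku_pasta_500g"),
    GazeSample(0.30, "sku_pasta_500g"),
    GazeSample(0.32, "promo_sign"),
]
print(dwell_per_object(samples))
```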

In my last post I mentioned the large real-world study we presented with Premier Foods at the Retail Design Expo, and one of the challenges of that project was providing a detailed map of each shopper’s journey through the large stores we were collecting data in. AcuityVR does all of that automatically too, giving you direction of travel, dwell time and orientation data for every participant, at every stage of their journey, at the click of a button.
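As a rough illustration of how such journey metrics can be derived, here is a sketch that turns a simple log of head positions over time into dwell time per store zone and instantaneous direction of travel. The zoning function and the sample track are invented for the example; a real deployment would use the store’s actual floor plan and the data the VR environment already exposes.

```python
import math
from collections import defaultdict

# Sketch: per-zone dwell time and direction of travel from logged positions.
# Each sample is (timestamp_seconds, x_metres, y_metres); values are made up.
track = [(0.0, 0.0, 0.0), (1.0, 0.8, 0.1), (2.0, 1.6, 0.3), (3.0, 1.7, 1.2)]

def zone_for(x, y):
    """Toy zoning: split the aisle into two hypothetical areas."""
    return "ambient_aisle" if x < 1.5 else "chilled_fixture"

dwell = defaultdict(float)
for (t0, x0, y0), (t1, x1, y1) in zip(track, track[1:]):
    dwell[zone_for(x0, y0)] += t1 - t0
    # Direction of travel between consecutive samples, in degrees.
    heading = math.degrees(math.atan2(y1 - y0, x1 - x0))
    print(f"{t0:.1f}s -> {t1:.1f}s heading {heading:5.1f} deg")

print(dict(dwell))
```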

Now, I’m not going to pretend that VR is a perfect proxy for real-world research; it isn’t. There are still some issues around locomotion and haptic feedback that need work, and, trust me, there are plenty of people working feverishly to solve these problems right now. But shopper research in immersive VR offers some exciting opportunities, including the introduction of more agile testing methodologies like those used in interface/app development and UX research, which will enable much faster iteration towards optimal designs that don’t just look great, but can actually withstand the challenges of real-world contextual effects.

If you want to know more about AcuityVR click over to the microsite, or better still, hit me up using the contact form, LinkedIn or Twitter to find out more and get a demo.  You won’t regret it, and it might just save your brand a whole heap of money!

