Crystal Eye-ball!

4 January 2016 / Tim Holmes

If you’d asked me last year to guess the hottest trends for Christmas 2015, I would probably have nailed the Star Wars success but completely overlooked the exploding hoverboards and the underperforming iPad Pro, so maybe I’m not the best person to be writing a list of technology-related predictions for 2016. Being the cautious scientist that I am, I have decided to restrict the scope of my list to developments related to eye-tracking, and so I feel a little more confident in my psychic abilities! Thus, having given in to convention with a “review of the year” post to end 2015, I now give you my preview of what I think will be the dominant themes in 2016.

Virtual Reality will become a reality!

We keep hearing that “2016 will be the year of virtual reality”, and if you’re headed to the massive Consumer Electronics Show (CES2016) in Las Vegas this week, you are probably going to have VR, and its cousin AR (Augmented Reality), rammed down your throat, or at least placed in front of your eyes, at every turn! For those who have not been paying attention over the past couple of years, 2016 is the year that Facebook-owned Oculus will finally launch their first commercial VR unit, the Oculus Rift, and whilst others like Samsung (powered by Oculus), HTC and Google Cardboard may have made it to market first, the Oculus Rift is set to become the iPhone of VR headsets, with an already extensive developer community thanks to the number of pre-launch dev kits that have been released. Of course, one of the main focuses of VR for 2016 will be entertainment, whether in the form of gaming or other immersive experiences such as live concerts and virtual travel, but the arrival of high-quality, affordable VR opens up a realm of new possibilities for research, especially when combined with eye-tracking.

As I mentioned in my review of 2015, we are starting to see greater adoption of head-mounted (glasses-type) eye-tracking in both academic and commercial research. This technology has made it possible to liberate participants from chin rests and screen-based paradigms and to study behavioural responses in the real world. Collecting data in the real world brings with it many challenges, in particular the ability to control the many influences on attention which might confound a study looking to contrast the performance of two different package designs, for example. Many commercial researchers already use large-format, screen-based VR solutions, such as those provided by companies like Fifth Dimension and Kantar (formerly Red Dot Square) but, and I speak from personal experience here, large-format, flat displays do not provide the same immersive experience that head-mounted 3D VR does. In addition, imagine being able to make on-the-fly changes to a store layout, lighting, signage and POS to explore how they affect the customer journey. Of course, these benefits apply well beyond shopper research, to gaming, wayfinding, sports and driving research. Most eye-tracker manufacturers are beavering away to integrate their head-mounted eye-tracking platforms with one or more VR systems, with Oculus widely regarded as the gold standard. As such, 2016 will see rapid development in this area, which I’ll keep you posted on here and via Twitter.

But just before we move on, it would be wrong of me to leave you with the impression that I think eye-tracking and VR is the answer to your research prayers. Firstly, VR becomes far less affordable the moment you throw a research-grade eye-tracker into the mix, along with the high-spec computer you are going to need to run all this! Secondly, as I’ve mentioned before, many of the analytics associated with head-mounted eye-tracking are not yet where they need to be, in terms of both event detection (fixations and saccades) and visualisation of results. Finally, there is one area where VR still falls short of real-world research, and that’s the means by which the wearer moves through, and interacts with, the environment. Joysticks, game controllers, mice and touchpads are all fine for gaming interfaces, where the player buys into the hardware and invests time to become reflexive in using the interface, but for shopper/market research, unnatural interfaces are a significant confound since they place increased cognitive load on the participant, distract attention and slow down decision making. As such, I think there needs to be significant development in the human/machine interface before VR will become a true substitute for real-world research.
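To make the event-detection point a little more concrete, here is a minimal sketch of the classic dispersion-threshold (I-DT) approach to fixation detection. The thresholds and the (time, x, y) sample format are purely illustrative, and the algorithms shipped by manufacturers for head-mounted data are considerably more sophisticated, which is precisely why I say the analytics are not yet where they need to be:

```python
def detect_fixations(samples, max_dispersion=1.0, min_duration=0.1):
    """Dispersion-threshold (I-DT) fixation detection, simplified.

    samples: list of (t, x, y) gaze samples, t in seconds, x/y in degrees.
    Returns a list of (start_t, end_t, centroid_x, centroid_y) fixations.
    """
    fixations = []
    i, n = 0, len(samples)
    while i < n:
        j = i
        # Grow the window while the gaze dispersion stays within threshold.
        while j + 1 < n:
            window = samples[i:j + 2]
            xs = [s[1] for s in window]
            ys = [s[2] for s in window]
            if (max(xs) - min(xs)) + (max(ys) - min(ys)) > max_dispersion:
                break
            j += 1
        duration = samples[j][0] - samples[i][0]
        if duration >= min_duration:
            xs = [s[1] for s in samples[i:j + 1]]
            ys = [s[2] for s in samples[i:j + 1]]
            fixations.append((samples[i][0], samples[j][0],
                              sum(xs) / len(xs), sum(ys) / len(ys)))
            i = j + 1  # start the next window after this fixation
        else:
            i += 1  # too short to be a fixation; slide the window on
    return fixations
```

Even this toy version shows the problem with head-mounted data: the wearer’s head moves, so dispersion in scene-camera coordinates mixes eye movements with head movements, and a naive threshold misclassifies events.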

Getting ahead with head-mounted

Last year we started a real-world eye-tracking course which focussed on head-mounted eye-tracking technologies and methods, and we are running it again in April this year. The course proved very popular, and with good reason: head-mounted (glasses-type) eye-tracking has become significantly easier, cheaper and less invasive in the past couple of years, thanks to companies like Tobii, SMI, Ergoneers and ASL. Having said that, the challenges of analysing those recordings that are now so easy to make are only just beginning to be addressed. Until late last year, if you wanted to extract any detailed analytics from a glasses-based study you faced the unenviable task of manually mapping each fixation to a series of static reference images in order to compare the amount of time participants spent looking at product A vs. product B, for example. As 2015 ended, Tobii and SMI both announced automated solutions to this problem, which use sophisticated image-processing algorithms to recognise features from the reference image in the scene-camera video, which in turn allows the fixations to be accurately mapped to the reference image. The impact of this advancement is clear, if a little overstated by some of the manufacturers, since it brings down both the cost of glasses research and the elapsed time between completing data collection and delivering insights. The upshot will be much more glasses-based commercial research in 2016.
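To give a flavour of what that automated mapping involves: once the image processing has matched features between a scene-camera frame and the reference image and estimated the geometric transform relating them (a planar homography), projecting each fixation across is straightforward. The sketch below assumes the homography is already known; in real systems it is re-estimated per frame from matched keypoints (e.g. with ORB or SIFT features), and the function name and coordinate conventions here are mine, not any manufacturer’s API:

```python
def map_gaze_to_reference(gaze_xy, homography):
    """Project a gaze point from scene-camera pixel coordinates onto the
    reference image via a 3x3 planar homography (row-major nested lists).

    In a real pipeline the homography comes from feature matching between
    the scene frame and the reference image; here it is supplied directly.
    """
    x, y = gaze_xy
    h = homography
    # Homogeneous projection: divide through by the third coordinate.
    w = h[2][0] * x + h[2][1] * y + h[2][2]
    u = (h[0][0] * x + h[0][1] * y + h[0][2]) / w
    v = (h[1][0] * x + h[1][1] * y + h[1][2]) / w
    return (u, v)
```

The hard part, of course, is estimating that homography reliably from a shaky, motion-blurred scene video, which is exactly where the manufacturers’ algorithms earn their keep.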

Now, as I already mentioned, the manufacturers still have some way to go here. Firstly, the current event-detection algorithms and visualisations are somewhat lacking when it comes to working with glasses data. Moreover, the automated coding tools only work in certain situations: whilst they work well for shopper research in a fairly stable environment (i.e. one in which the shelves remain fairly constant throughout), they are less suited to dynamic environments where the objects of interest move relative to each other in the scene, for example hazard detection in driving. So, there’s still plenty of room for development here in 2016.

Faster, cheaper, smaller

As a rule, we expect technology to get faster, cheaper and smaller with each new generation. I’d throw in smarter too, but sometimes that lags behind! Eye-tracking is certainly no exception, and in 2016 it’s safe to say we will see faster eye-trackers, especially glasses-type, where Tobii currently lead the race with their upgrade to 100Hz just before Christmas, but SMI have clearly signalled an intention to go much faster with their 250Hz HMD kit being launched at CES this week. Speaking as someone who spent part of last year wrestling with 60Hz data from participants who were tracking objects moving at more than 80mph, this move to higher-frequency head-mounted trackers can only be a good thing. But a heads-up to the manufacturers: the scene cameras need to be upgraded too, especially for the very studies where 100Hz+ data actually matters!
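To put those sampling rates in perspective, here is a quick back-of-the-envelope calculation. A typical saccade lasts around 30–50ms, and the 20m viewing distance below is my own assumption, purely for illustration:

```python
import math

def samples_per_event(rate_hz, event_ms):
    """How many gaze samples a tracker running at rate_hz captures
    during an eye-movement event lasting event_ms milliseconds."""
    return int(rate_hz * event_ms / 1000)

def inter_sample_sweep_deg(speed_mph, distance_m, rate_hz):
    """Angular travel (degrees) of a moving target between consecutive
    gaze samples, for a target at speed_mph viewed from distance_m.
    Uses the small-angle approximation; distance is illustrative."""
    speed_ms = speed_mph * 0.44704                  # mph -> m/s
    ang_vel = math.degrees(speed_ms / distance_m)   # deg/s
    return ang_vel / rate_hz
```

At 60Hz a 50ms saccade yields only three samples, barely enough to detect it, let alone characterise it, whereas 250Hz gives a dozen. And an object doing 80mph seen from 20m sweeps roughly 1.7 degrees between 60Hz samples, which is why higher-frequency trackers (and scene cameras to match) matter so much for this kind of work.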

Competition between the manufacturers is fierce, and this is great news for anyone looking to buy a tracker this year, because it is inevitable that prices will continue to drop, especially for the lower-spec trackers. I welcome this wholeheartedly, since cheaper trackers open up all kinds of new possibilities for research (see below), but I’m also a bit wary, because there is a risk that more and more people will bolt eye-tracking on to their research to jump on the “neuromarketing” bandwagon without taking the time to learn how to do it properly. A quick look at the eye-tracking tips I’ve posted on this blog should make it clear that the pitfalls of poorly planned research are many, and naive interpretation of results is rife! So whilst I can safely predict more people will be eye-tracking in 2016 than ever before, I can sadly forecast that this also means more commercial decisions will be based on “insights” generated from heat-maps alone!

Now, I realise this next comment might sound a bit sad, but one of the most exciting moments of working at AI was the day I got my Tobii X2-30, which was the first USB remote eye-tracker I got my hands on, and it meant that I could suddenly eye-track anywhere I could take my laptop! Well, times have moved on, and we have a range of such devices around the office to play with, and the inevitable next great leap will come when I no longer need to attach an eye-tracker at all. Right now this is only really possible with webcam-based eye-tracking, which is very limited in terms of precision and accuracy, but the continued shrinking of cameras and chips means that the direct integration of eye-trackers in laptops, tablets, smartphones and car dashboards is upon us. We already have drowsiness-detection systems in cars, and with the increase in gaze-driven computer interfaces and games, manufacturers like Tobii are already working on the next-generation trackers that will mean I no longer need to remember where I’ve put my tracker, because it will be wherever I’ve put my computer! This ubiquity of eye-tracking opens up all kinds of possibilities, one of which I will now mention…

Big data from little eye-movements

If you’ve ever read a scientific research paper based on eye-tracking, one thing will strike you, and that is the sample size. It’s quite rare for an eye-tracking study to top N=100 and quite common for the sample size to be south of 50, or even 20! Now, before we start down the road of discussing effect sizes and statistical power, both of which I will be returning to in a future post, the point I’m making here is slightly different. Market researchers are very wary of studies with small sample sizes, and with good reason. Small samples are fine when you KNOW you are looking at behaviour which is consistent across your sample, but this is rarely the case in market research, where gender, age, socio-economic, cultural and geographical factors frequently influence outcomes. For this reason, market research usually talks in terms of cells, or consumer profiles, such as working males aged 18-24 living in the south-east of the UK. This is a very specific group of people, and clearly we would need to add in quite a few more cells in order to determine whether age, gender, occupation or location were affecting the results of the study. Now, the number of people in each cell needs to be big enough to cope with the variance within the cell (that’s the maths bit I’ll come back to another time), but the upshot of all these cells is you rapidly end up with a large sample, followed by the inevitable light-headedness that comes from estimating the cost of running such an eye-tracking study! The unfortunate consequence is that many studies that would really benefit from an insight into participant attention never actually get funded.
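To see just how quickly those cells multiply, here is a tiny sketch. The factors, levels and per-cell sample size are all illustrative, not a recommendation:

```python
from itertools import product

def build_cells(factors):
    """Enumerate consumer-profile cells as the Cartesian product of
    factor levels, e.g. every gender x age x region combination."""
    names = list(factors)
    return [dict(zip(names, combo)) for combo in product(*factors.values())]

# Illustrative factors only: three modest demographic splits.
factors = {
    "gender": ["male", "female"],
    "age": ["18-24", "25-34", "35-54", "55+"],
    "region": ["south-east", "north", "other"],
}
cells = build_cells(factors)     # 2 x 4 x 3 = 24 cells
per_cell_n = 30                  # illustrative per-cell sample size
total_n = len(cells) * per_cell_n
```

Three unremarkable factors already demand 720 participants at a modest 30 per cell, which is exactly the point where the cost of a traditional lab-based eye-tracking study induces that light-headedness.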

The introduction of automated coding tools for head-mounted eye-tracking will certainly help, but what about screen-based studies, which still account for the majority of all eye-tracking research today? Cheaper, portable remote units bring with them the potential for testing multiple participants simultaneously, and we’ve already seen this approach being used. What’s more interesting is the development of large-scale home-testing panels for eye-tracking studies, where participants are provided with kit they keep at home and sent packaged studies of tasks and stimuli to complete before uploading the data via the internet for centralised analysis. We started to see this approach being taken at the end of last year, and I fully expect to see more of it, especially when eye-trackers become a standard feature of other hardware. Of course, there are all sorts of risks here. Relying on participants to run their own data collection in a way that is consistent across the sample is notoriously difficult, and many of the analytics/visualisations in standard use today are not really designed to extract the most value out of such large and complex datasets. Clearly the use of such panels will be limited to certain applications, but just as large-scale web usability studies are performed by exposing different participants to different versions of web pages or apps, the ability to do the same with marketing and advertising, whilst collecting valuable information about the attention of the participants, is something we can expect to see much more of in 2016.
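One small building block for panels of that kind is deterministic variant assignment, so that each participant is shown a consistent version of the stimulus without any central coordination between the packaged studies. A sketch under my own assumptions (the participant IDs and variant names are made up):

```python
import hashlib

def assign_variant(participant_id, variants):
    """Deterministically assign a panel participant to one stimulus
    variant by hashing their ID: the same ID always yields the same
    variant, so repeat sessions stay consistent."""
    digest = hashlib.sha256(participant_id.encode("utf-8")).hexdigest()
    return variants[int(digest, 16) % len(variants)]
```

Because the assignment is a pure function of the ID, the analysis end can reconstruct who saw what without shipping any assignment table along with the study package.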

Happy New Year! May all your eye-tracking go smoothly this year, but if it doesn’t – you know who to call! 😉
