Hey, big render!

20 January 2016 / Tim Holmes

In my Eye-Tracking Preview of 2016 I suggested that virtual reality was going to be a major topic of discussion this year, and if the internet chatter around the Consumer Electronics Show (CES) in Las Vegas was anything to go by, I wasn’t wrong. Already we are seeing VR integrations being touted by a variety of manufacturers, from established players like SMI and Tobii to the newer (and much cheaper) kids on the block such as EyeTribe and EyeTech. In the case of SMI and Tobii the eye-tracking technology began life in their head-mounted (glasses) eye-trackers, allowing some of the miniaturisation and portability issues to be dealt with in advance of the VR-specific ones, whereas EyeTribe and EyeTech are going straight to Oculus integration in a single bound. Contained within all the hype about eye-tracking in VR was a discussion about something called Foveated Rendering and, seeing as I didn’t mention this in my preview post, I thought I’d take a few minutes to explain, for the uninitiated, what it is, why it’s important and also why, as researchers, we need to be a little wary of it.

So first of all, what exactly is Foveated Rendering and why is there so much excitement about it? Anyone who remembers buying pre-HD DVDs will know that a whole new technology (Blu-ray) had to be invented to cope with the increased file size that accompanied the encoding of movies in high-definition video. Simply put, the number of pixels in each frame of video increased, and that took more bits to encode on the disc. A similar thing is happening right now with the upgrade from HD to 4K, resulting in something called Ultra HD Blu-ray! HD TV/video is typically 1920×1080 resolution, but that only gives you a flat, rectangular image, which is a long way from the 360° view you need for fully immersive VR. This leaves you with one of two options: use the same number of pixels to cover a much wider field of view, with the inevitable loss of detail, or use a lot more pixels. Of course it is the latter option that has been preferred, and so a little bit of vision science was needed to ease the pain of rendering such vast amounts of video quickly enough that the viewer feels like they are in a real, or at least virtually real, world!
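
To put some rough, back-of-the-envelope numbers on that (the figures below are illustrative assumptions, not the specs of any particular headset), here is the kind of arithmetic involved:

```python
# Back-of-the-envelope comparison: pixels in one HD frame versus the pixels
# needed to cover a full 360-degree sphere at something like foveal acuity.
# The 60 pixels-per-degree figure is a common rule of thumb, assumed here
# purely for illustration.

hd_pixels = 1920 * 1080                                   # ~2.1 megapixels

pixels_per_degree = 60                                    # rough foveal limit
sphere_pixels = (360 * pixels_per_degree) * (180 * pixels_per_degree)

print(f"HD frame:           {hd_pixels / 1e6:6.1f} MP")
print(f"Full 360-deg view:  {sphere_pixels / 1e6:6.1f} MP "
      f"(~{sphere_pixels / hd_pixels:.0f}x an HD frame)")
```

Even with generous rounding, that is roughly two orders of magnitude more pixels per frame, which is exactly why rendering everything at full detail is so painful.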

Time for a quick vision lesson. Your vision is the result of light hitting photo-receptors in the retina at the back of your eye. A quick look under the microscope tells us that these photo-receptors, the rods and cones you remember from biology lessons, are not distributed evenly across the retina. In fact, the cones are highly concentrated (around 160,000 per mm²) in the centre of the retina, dropping quickly to fewer than 10,000 per mm² just 10° out from that peak, which is about the size of your fist held at arm’s length. The greatest concentration lies within a 2° radius of the centre of your vision, which is about the size of your thumbnail at arm’s length, and we call this region the fovea. A similar, but much less pronounced, drop-off occurs in the distribution of rods. Now, if you remember those biology lessons, the cones are responsible for your colour vision, while the rods are tuned more to contrast between light and dark and are your main source of vision at night. Because of this high concentration of photo-receptors in the fovea, this is the part of your visual field where you can see the most detail, something we call visual acuity, and it’s SO important to visual processing that we named our company after it! When we move our eyes, it is precisely because we need to move the fovea around in order to look at things in detail, and of course you’re doing it right now as you read this.
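
If you want to sanity-check those thumbnail-and-fist figures yourself, a couple of lines of trigonometry will do it (the object sizes and arm length below are rough assumptions):

```python
import math

# Quick check on the "thumbnail/fist at arm's length" rule of thumb.
# Sizes and arm length are rough assumptions for illustration only.
arm_length_cm = 57.0          # handy because 1 cm at 57 cm subtends ~1 degree

def visual_angle(size_cm, distance_cm=arm_length_cm):
    """Angle (in degrees) subtended by an object of the given size."""
    return math.degrees(2 * math.atan((size_cm / 2) / distance_cm))

print(f"Thumbnail (~2 cm):  {visual_angle(2):.1f} deg across")   # ~2 deg, the fovea
print(f"Fist (~10 cm):      {visual_angle(10):.1f} deg across")  # ~10 deg
```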

And this is where eye-tracking for VR comes in, because eye-tracking technology allows us to track the precise location of the fovea, or at least to within a degree or so. What this means is that eye-tracking in VR isn’t just reserved for market researchers wanting to know where you’re looking in a virtual supermarket, or for game designers wanting to know where you’re not looking so they can surprise you with an assailant you don’t immediately see. It can also tell the software exactly where you are looking right now, which is the only part of the visual field that requires very high resolution detail. The rest of the visual field can be rendered at a much lower resolution and you won’t even notice!
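
In code, the core idea really is that simple: take the gaze position from the eye-tracker and let rendering detail fall away with angular distance from it. The sketch below is a minimal illustration of that logic; the zone boundaries and scale factors are assumptions of mine, not values from any particular headset or SDK.

```python
# Minimal sketch of foveated rendering: full detail near the tracked gaze
# point, progressively coarser detail further out. Angles are in degrees
# of visual angle; the zones and scale factors are illustrative assumptions.

def render_scale(eccentricity_deg):
    """Resolution scale factor for a region at the given angular
    distance (eccentricity) from the current gaze direction."""
    if eccentricity_deg <= 2.0:     # foveal zone: render at full resolution
        return 1.0
    elif eccentricity_deg <= 10.0:  # parafoveal zone: moderate resolution
        return 0.5
    else:                           # periphery: coarse resolution is enough
        return 0.25

def eccentricity(gaze, point):
    """Crude angular distance between gaze and a point, both (x_deg, y_deg)."""
    return ((point[0] - gaze[0]) ** 2 + (point[1] - gaze[1]) ** 2) ** 0.5

gaze = (5.0, 0.0)                   # viewer looking 5 deg right of centre
for point in [(6.0, 1.0), (12.0, 3.0), (40.0, 10.0)]:
    ecc = eccentricity(gaze, point)
    print(f"point at {ecc:5.1f} deg from gaze -> render at {render_scale(ecc):.2f}x")
```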

In vision research, we’ve been using such “gaze contingent” methods for quite some time now. By combining this foveal representation with other mechanisms such as saccadic suppression, whereby most visual processing is “switched off” during a high-speed eye movement, a lot can be done to mess with the display without the viewer being aware, allowing us to create the sort of freaky and annoying stimuli we love so much! But herein lies a concern, particularly for those of you who are relying on VR to become a proxy for the real world that will facilitate new research methods and quicker, cheaper insights. In these days of System 1 marketing and behavioural economics we know that unconscious processing accounts for most of the brain’s activity, so just because a person isn’t aware of much that happens outside of their foveal vision, it doesn’t necessarily follow that the brain hasn’t been influenced by it. Peripheral vision is hugely important and is a strong driver of involuntary attention; in fact we think this is its main function, one which evolved to enable us to detect changes in the immediate environment that might require closer inspection using, yes, you’ve guessed it, the fovea. These peripheral effects are used extensively in marketing and drive techniques like brand blocking, shelf wobblers and much digital point-of-sale media. By rendering the peripheral virtual world at a lower resolution, on top of the reduced acuity the viewer’s own vision already imposes there, some VR solutions might compromise the very questions you invested in VR to research, such as shopper navigation and wayfinding. So this really is going to be a try-before-you-buy decision, and one which I am sure I’ll be returning to on this blog throughout 2016.
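
For the curious, this is roughly what a gaze-contingent update looks like in practice: watch the stream of gaze samples and only change the display while the eye is in mid-saccade, when saccadic suppression makes the swap very hard to notice. The velocity threshold and the sample format below are assumptions for the sake of illustration, not the API of any particular eye-tracker.

```python
# A toy illustration of a gaze-contingent update, assuming the eye-tracker
# streams timestamped gaze samples as (time_s, x_deg, y_deg). The 30 deg/s
# velocity threshold is a common ballpark for saccade detection, used here
# purely as an assumption.

SACCADE_VELOCITY_DEG_PER_S = 30.0

def is_saccade(prev_sample, sample):
    """Crude velocity-based saccade check between two gaze samples."""
    dt = sample[0] - prev_sample[0]
    if dt <= 0:
        return False
    dx = sample[1] - prev_sample[1]
    dy = sample[2] - prev_sample[2]
    velocity = (dx ** 2 + dy ** 2) ** 0.5 / dt
    return velocity > SACCADE_VELOCITY_DEG_PER_S

# Change the display only while a saccade is in flight, when the viewer is
# unlikely to notice the swap.
prev = (0.000, 0.0, 0.0)
for sample in [(0.004, 0.1, 0.0), (0.008, 2.5, 0.3), (0.012, 5.0, 0.6)]:
    if is_saccade(prev, sample):
        print(f"t={sample[0]:.3f}s: saccade detected -> safe to update stimulus")
    prev = sample
```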

Having said that, I for one can’t wait to receive our commercial grade Oculus when it ships, and also to see what some of our collaborators are going to allow us to do with it!

