Computer Networking Research Laboratory

Dept. of Electrical & Computer Engineering, Colorado State University

Smart Displays based on Eye Motion Tracking

Introduction

The importance of high-resolution video displays will only grow as applications involving virtual reality, video conferencing, video mail, etc. become widely available. Not only is it desirable to increase the resolution of 2-D displays, but also to develop 3-D technologies that will enhance many areas, including telemedicine, entertainment and remote instruction.

As the resolution and the size of a monitor increase, the bandwidth required to feed a picture to video memory increases rapidly; in fact, this is a major bottleneck that needs to be overcome. For a display of fixed resolution (e.g., X pixels per mm), the bandwidth is proportional to the display area; for a display of fixed size, the bandwidth is proportional to the square of the linear resolution. Further, 3-D displays will add yet another level to the bandwidth requirement.
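As a rough illustration of this square-law growth, the raw (uncompressed) bandwidth of a video feed can be sketched as below. The frame sizes, color depth and frame rate are illustrative assumptions, not figures from the text.

```python
def raw_bandwidth_bps(width_px, height_px, bits_per_pixel=24, fps=60):
    """Raw (uncompressed) video bandwidth in bits per second:
    every pixel of every frame must be written to video memory."""
    return width_px * height_px * bits_per_pixel * fps

# Doubling the linear resolution of a fixed-size display quadruples
# the pixel count, and hence the bandwidth:
base = raw_bandwidth_bps(1920, 1080)      # ~3.0 Gb/s at 24 bpp, 60 fps
doubled = raw_bandwidth_bps(3840, 2160)   # same display size, 2x resolution
print(doubled / base)  # -> 4.0
```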

The approaches being pursued at present aim to develop techniques to feed the required information to the video display units. While techniques such as compression reduce the bandwidth required for transmission of video, such images still have to be decompressed before being loaded into video memory. The current research is based on the fact that even if a very high-resolution image is provided to the display unit, an observer never sees the entire image at that resolution at any instant. Therefore, if the machine is smart enough to display at high resolution only the section of the display that the user is looking at, and to update the display as the eyes move, there could be a large saving in bandwidth.

Ex: Consider a driver-education simulator (a flight simulator would be a better example, but to keep the explanation simple, let us consider this one). In the ideal case, the simulator display should show what you would see through the windscreen as you drive down the road. As you drive today, at any given instant, make a note of what you really see at that instant. If you are looking at the car in front, chances are that you do not notice the shape of the leaves on the side of the road; if you look at a tree, you will see the car in significantly less detail. The attempts at present are to create a high-fidelity replica of the scene that you see through the windshield. It is easy to see that making a faithful replica of everything out there requires far higher bandwidth than recreating what a human eye will actually see at that instant. The larger the display and the closer the individual is to the screen, the lower the fraction of the image seen by the individual at the highest resolution; the further away an individual is from the screen, the lower the resolution that he will actually observe. If there were a way for the display to know what the observer is looking at, then the display could be updated at a high resolution in the area on which the eye is focused, and at a lower resolution in the peripheral areas. Note that updating the image in memory is the bottleneck, whereas refreshing the display from memory is not. The potential improvement in bandwidth from such an approach would be especially large when 3-D displays are involved.
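The gaze-contingent update described above can be sketched as follows. This is a minimal illustration assuming a NumPy image array, a square foveal window, and crude block downsampling for the periphery; it is not the laboratory's actual implementation.

```python
import numpy as np

def foveated_frame(high_res, factor, gaze_xy, window):
    """Compose a frame that carries full detail only near the gaze point.
    Periphery is simulated at 1/factor resolution by block replication,
    so only the foveal window must be updated at full bandwidth."""
    h, w = high_res.shape[:2]
    # Downsample, then replicate blocks to simulate a low-res periphery.
    low = high_res[::factor, ::factor]
    periphery = np.repeat(np.repeat(low, factor, axis=0),
                          factor, axis=1)[:h, :w]
    frame = periphery.copy()
    # Paste the full-resolution patch around the tracked gaze point.
    x, y = gaze_xy
    y0, y1 = max(0, y - window), min(h, y + window)
    x0, x1 = max(0, x - window), min(w, x + window)
    frame[y0:y1, x0:x1] = high_res[y0:y1, x0:x1]
    return frame

img = np.arange(64).reshape(8, 8)
out = foveated_frame(img, factor=4, gaze_xy=(4, 4), window=2)
```

The bandwidth argument falls out directly: only the foveal window is written at full resolution, while the rest of the frame needs 1/factor² as many pixels.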

While the above is a very ambitious task, it could also yield products such as an eye-controlled cursor in the short term. There is evidence that the approach could indeed be successful, e.g., existing products such as eye-movement-guided missiles. Long-term results could lead to products that use eye movement to control images on the screen, e.g., zooming in. Although we humans are not endowed with telescopic vision, successful research in this area will lead to virtual display units that, for example, zoom in on a point when the user stares at it, thus overcoming the lack of telescopic and microscopic vision, at least in the virtual world provided by electronic monitors.

Visit the project page for details.

This project is complete.

Team members

Sponsor

NSF through Optoelectronic Computing Center, University of Colorado and Colorado State University.

Publications

  1. N. M. Piratla and A. P. Jayasumana, "A Neural-Network Based Gaze Tracker," Intelligent Engineering Systems Through Artificial Neural Networks (Editors: Dagli, Buczak, Ghosh, Embrechts and Ersoy), Vol. 9 (Smart Engineering System Design), pp. 869-875, ASME Press, 1999. Also presented at ANNIE '99.
  2. N. M. Piratla and A. P. Jayasumana, "A Neural Network Based Real-Time Gaze Tracker," Journal of Network and Computer Applications, vol. 25/3, pp. 179-196, 2002.