
UnityEyes – a tool for rendering eye images

UnityEyes is a tool for eye-tracking researchers that allows them to generate labelled synthetic eye images. These can be used for training appearance-based eye trackers (see our paper), or as ground truth for other eye-tracking systems.

Downloads

UnityEyes is available for Windows and Linux (untested). The contents of the .zip are:

UnityEyes_Windows.zip
├── imgs/            # This is where your images will be saved
├── subdiv_data/     # Subdivision surface data (do not edit)
├── unityeyes_Data/  # Unity data (do not edit)
├── visualize.py     # A sample script showing how to access landmarks and gaze data
└── unityeyes.exe    # Run this to start UnityEyes

Using UnityEyes

When starting UnityEyes, you will be prompted to choose a resolution and quality. We recommend running it at 512x384px windowed, as it can be slow to save larger images. UnityEyes can be used in two modes:

  1. Interactive – where you use the mouse and keyboard to control the scene
  2. Automatic – where it continuously generates eye images until stopped

Interactive mode is designed for testing the system, and making short videos like the one at the top of the page. The application starts in this mode. Use mouse_1 to control the camera, mouse_3 to control the eyeball, R to randomize face and eyeball appearance, L to randomize illumination, H to toggle UI display, and hold down S to save images.

Automatic mode

Automatic mode is for generating large datasets. First, specify the desired camera angle range and eyeball pose range parameters (in degrees) by typing into the textboxes in the bottom-left corner (see the paper for details on these). Note: a text cursor will not be visible, but type anyway. Once these are set, click the START button; the system will enter automatic mode and continuously generate randomized images, saving them in the imgs/ directory. To stop UnityEyes, close the window by clicking the x.

[Two example image grids generated in automatic mode]

The dataset on the left was generated with (0,0,0,0) eyeball parameters. The dataset on the right was generated with (0,0,30,30) eyeball parameters.

Metadata

Each .jpg image file will be saved with an associated .json metadata file that contains the following:

{ "interior_margin_2d": [ # Screen-space interior margin landmarks "(202.7042, 186.4788, 9.5512)", … # x, y, z (can ignore z) ], "caruncle_2d": [ # Screen-space eye-corner (caruncle) landmarks "(191.9471, 175.4047, 9.6683)", … # x, y, z (can ignore z) ], "iris_2d": [ # Screen-space iris boundary landmarks "(213.3930, 195.4109, 9.1951)", … # x, y, z (can ignore z) ], "eye_details": { "look_vec": "(-0.3633, 0.0937, -0.9270, 0.0000)", # Gaze vector in camera-space (x, y, z) "pupil_size": "0.05249219", # Pupil size (arbitrary units) "iris_size": "0.9090334", # Iris size (arbitrary units) "iris_texture": "eyeball_amber" # Iris color }, "lighting_details": … # Illumination details "eye_region_details": … # Shape PCA details "head_pose": "(351.2107, 161.3652, 0.0000)" # Euler angle rotation from camera to world }

Of particular note, eye_details.look_vec encodes the optical axis gaze direction in camera space, and head_pose encodes the rotational differences between camera and head. The 2D screen-space landmarks should be used for post-processing the images, e.g. aligning them.
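As a minimal sketch of how this metadata could be read in Python: the field names below match the .json shown above, but the file name imgs/1.json and the pitch/yaw sign convention are assumptions for illustration, so adapt them to your own setup.

import ast
import json
import math

with open("imgs/1.json") as f:   # hypothetical example file
    meta = json.load(f)

# Landmarks are stored as strings like "(x, y, z)"; ast.literal_eval turns
# them into tuples. Only x and y are needed for 2D post-processing.
iris_2d = [ast.literal_eval(p)[:2] for p in meta["iris_2d"]]

# Optical-axis gaze direction in camera space.
x, y, z, _ = ast.literal_eval(meta["eye_details"]["look_vec"])

# Convert the unit gaze vector to pitch/yaw angles. The signs here are one
# possible convention; check them against your own coordinate setup.
pitch = math.degrees(math.asin(y))
yaw = math.degrees(math.atan2(-x, -z))
print("gaze pitch/yaw (deg):", pitch, yaw)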

The visualize.py script contains some short examples of how to process this data with Python, and was used to generate the image at the top of the page. Place this script in the same directory as the images.
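If you prefer to write your own processing instead of adapting visualize.py, the sketch below (an illustration, not part of the download) walks imgs/, pairs each .jpg with its .json, and crops the eye region using the interior-margin landmarks. It assumes OpenCV is installed and that landmark y coordinates are measured from the bottom of the image, so flip the y axis only if your renders require it.

import ast
import glob
import json
import os

import cv2

for json_path in glob.glob("imgs/*.json"):
    img_path = json_path.replace(".json", ".jpg")
    if not os.path.exists(img_path):
        continue

    img = cv2.imread(img_path)
    with open(json_path) as f:
        meta = json.load(f)

    # Tight bounding box around the eye from the 2D interior-margin landmarks.
    pts = [ast.literal_eval(p) for p in meta["interior_margin_2d"]]
    xs = [p[0] for p in pts]
    ys = [p[1] for p in pts]

    # Assumption: screen-space y has its origin at the bottom of the image,
    # so flip it before indexing into the (row-major) image array.
    h = img.shape[0]
    x0, x1 = int(min(xs)), int(max(xs))
    y0, y1 = int(h - max(ys)), int(h - min(ys))
    eye_crop = img[y0:y1, x0:x1]
    print(os.path.basename(img_path), eye_crop.shape)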