PyEyeTrack - The Python eYe Tracking Library
PyEyeTrack is a Python-based pupil-tracking library. The library tracks eye movements with commodity hardware, such as a laptop webcam, and gives a real-time stream of eye coordinates. It provides eye tracking and blink detection, and encapsulates these in a generic interface that allows clients to use these functionalities in a variety of use cases.
The goal has been to make the library generic. A user can provide any interface (a static image, text, or video) and call upon PyEyeTrack to capture eye-movement data. By default, eye-movement data is captured from a live camera feed; a saved video file can also be used as input. PyEyeTrack can optionally save audio and video as an aid to debugging and development.
Use PyEyeTrack to develop applications that can be controlled by eye movements or blinks. Or use it to track eye movements as an aid to medical diagnostics.
PyEyeTrack has three modules: EyeTracking, AudioVideo Recording, and the main interface (along with the data handling class). The EyeTracking module is the core of the library: it performs eye tracking and blink detection, accessing the webcam to track the eyes and writing the results to a CSV. The AudioVideo Recording module records audio and video. The third module is the primary interface to PyEyeTrack - the PyEyeTrackRunnerClass.
This class provides the user with an interface to execute the functionalities of the library. The user can specify the UI, the eye-tracking functionality, and the destination folder, and combine the features of the library through the following call:
pyEyeTrack_runner(UI = False,
                  UI_file_name = "User_ImageUI_EscExit",
                  pupilTracking = False,
                  blinkDetection = False,
                  video_source = 0,
                  eyeTrackingLog = True,
                  eyeTrackingFileName = 'EyeTrackLog',
                  videoRecorder = False,
                  videoName = 'video',
                  audioRecorder = False,
                  audioName = 'audio',
                  destinationPath = '/Output')
The user can set these flags to run any combination of the functionalities.
UI (bool, optional): This parameter enables the user to run a UI. Default: False.
UI_file_name (str, optional): This parameter takes the file name of the UI. Default: "User_ImageUI_EscExit".
pupilTracking (bool, optional): This parameter enables the user to run pupil tracking. Default: False.
blinkDetection (bool, optional): This parameter enables the user to run blink detection. Default: False.
video_source (int/str, optional): This parameter takes either a device index or a video file path as input. Default: 0.
eyeTrackingLog (bool, optional): This parameter enables the user to generate a CSV of pupil tracking/blink detection. Default: True.
eyeTrackingFileName (str, optional): This parameter takes the file name for the CSV. Default: 'EyeTrackLog'.
videoRecorder (bool, optional): This parameter enables the user to record video. Default: False.
videoName (str, optional): This parameter specifies the filename with which the recorded video is saved. Default: 'video'.
audioRecorder (bool, optional): This parameter enables the user to record audio. Default: False.
audioName (str, optional): This parameter specifies the filename with which the recorded audio is saved. Default: 'audio'.
destinationPath (str, optional): This parameter specifies the location of the output files. Default: '/Output'.
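As a sketch of how these flags combine, consider running blink detection on a saved video file instead of the webcam. The video and log file names below are illustrative values, not files or defaults shipped with the library; the keyword arguments are collected in a dict here only so the combination is easy to inspect.

```python
# Illustrative flag combination: blink detection on a saved video file.
# "session.avi" and "BlinkLog" are example names, not part of the library.
kwargs = dict(
    blinkDetection=True,         # run blink detection
    video_source="session.avi",  # a file path instead of device index 0
    eyeTrackingLog=True,         # write the blink log to a CSV
    eyeTrackingFileName="BlinkLog",
    destinationPath="/Output",
)
# In a real script these would be passed as: pyEyeTrack_runner(**kwargs)
```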
The Data Handling class is responsible for handling the real-time pupil locations and blinks dynamically. This class implements a queue that the user can access to retrieve eye locations or blink details as they arrive. This gives the user the freedom to use eye-tracking data in real time as they wish. The user can use the following functions to handle the data:
add_data(data): This function allows the user to add data to the dynamic queue.
get_data(): This function returns the data elements of the queue.
is_empty(): This function checks if the queue is empty.
search_element(key): This function checks whether a specified key is present in the queue.
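The consumption pattern can be sketched as below. This is a minimal stand-in for the interface described above, built on a plain deque; the real Data Handling class in PyEyeTrack may differ internally, and the sample dict keys are only illustrative.

```python
from collections import deque

class DataHandlingSketch:
    """Minimal sketch of the Data Handling queue interface; the
    actual PyEyeTrack class may be implemented differently."""

    def __init__(self):
        self._queue = deque()

    def add_data(self, data):
        # Append a new sample (e.g. a dict of pupil coordinates).
        self._queue.append(data)

    def is_empty(self):
        return len(self._queue) == 0

    def get_data(self):
        # Remove and return the oldest sample (FIFO order).
        return self._queue.popleft()

    def search_element(self, key):
        # Check whether any queued sample contains the given key.
        return any(key in sample for sample in self._queue)

# A consumer could poll the queue in real time:
dh = DataHandlingSketch()
dh.add_data({"timestamp": 0.03, "left_pupil": (312, 204)})
while not dh.is_empty():
    sample = dh.get_data()  # use the sample as it arrives
```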
To make this concrete, here is a basic example that uses the library to track the eyes of a user reading a text UI:
pyEyeTrack_runner(UI = True,
                  UI_file_name = "Ex_1_SampleTextUI",
                  pupilTracking = True,
                  eyeTrackingLog = True,
                  eyeTrackingFileName = 'User_1')
This example uses the pupil-tracking functionality of the library: it tracks the eyes of the user as they read the text displayed on the screen. The text UI is specified by the user, and the destination path can be set as well. In this case, since no destination path is specified, the pupil-coordinates CSV is stored in the default 'Output' folder.
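Once a run finishes, the log can be post-processed with the standard csv module. The column names below are placeholders (the actual headers depend on the PyEyeTrack version), so check the CSV your run produces before reusing this sketch.

```python
import csv
import io

# Hypothetical contents of the generated eye-tracking CSV; the real
# column names may differ, so treat these headers as placeholders.
sample_log = """Timestamp,Left_Pupil_X,Left_Pupil_Y
0.03,312,204
0.06,314,205
"""

rows = list(csv.DictReader(io.StringIO(sample_log)))
# Extract the horizontal coordinate of the left pupil over time.
xs = [int(row["Left_Pupil_X"]) for row in rows]
```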
Try out the sample demos at our GitHub repository.
pip install PyEyeTrack
The library needs the FFmpeg executable in the working directory for the audio-video syncing to work.
How to merge the audio and video?
1. Download ffmpeg.exe from here
2. Paste the .exe file into the same folder as the audio and video files.
3. Open command prompt in that folder.
4. Command - 'ffmpeg -y -i audio.wav -r 30 -i video.avi -filter:a aresample=async=1 -c:a flac -c:v copy avoutput.mkv'
To open a command prompt in a folder:
1. Go to the required folder.
2. Press Shift + right-click anywhere in the folder window.
3. Select the "Open command window here" option from the context menu.
conda install -c conda-forge dlib=19.4
You may want to use a conda virtual environment to avoid mix-ups with system dependencies.
UI requirements: The library can run any UI developed in Python. The user needs to ensure that the UI module defines a main function. The library takes the name of the UI module as input and runs its main function.
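The "run a main function by module name" pattern can be sketched with the standard importlib module. The "my_text_ui" module below is a throwaway file created here purely for illustration; a real UI file (e.g. User_ImageUI_EscExit.py) would live next to your script and open an actual window in its main function.

```python
import importlib
import pathlib
import sys
import tempfile

# Create a throwaway UI module with the required main() entry point;
# this stands in for a real UI file such as User_ImageUI_EscExit.py.
tmp = tempfile.mkdtemp()
pathlib.Path(tmp, "my_text_ui.py").write_text(
    "def main():\n"
    "    # A real UI would display text or images here.\n"
    "    return 'UI running'\n"
)
sys.path.insert(0, tmp)

# Load the module by name and invoke its main function, as the
# library does with the UI_file_name parameter.
ui_module = importlib.import_module("my_text_ui")
result = ui_module.main()
```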
For any issues regarding PyEyeTrack, contact the PyEyeTrack support at firstname.lastname@example.org