Eye tracker

Manufactured by Tobii
Sourced in Sweden

The Tobii eye tracker uses specialized sensors to measure and record a user's eye movements and gaze patterns. It provides precise data on where the user is looking, how long they fixate on specific areas, and other eye-tracking metrics, capturing detailed information about the user's visual attention and interaction with digital content or physical environments.

33 protocols using eye tracker

1. Pupil Diameter Measurement with Eye Tracker

All stimuli were presented with the experimental control software E-Prime version 2.0. Pupil diameter was recorded at 60 Hz using a Tobii T120 eye tracker integrated into a 17-inch TFT monitor. Participants sat behind the eye tracker in a darkened room, approximately 60 cm from the screen. Data obtained from the Tobii eye tracker were processed and analyzed in Brain Vision Analyzer using custom-made macros. Artifacts and eye blinks detected by the Tobii eye tracker, plus three samples before and after these data points, were marked as missing data and corrected using linear interpolation.
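
The custom Brain Vision Analyzer macros are not published; purely as an illustration, the artifact handling described above (flagging each detected artifact or blink plus three samples on either side as missing, then filling the gaps by linear interpolation) could be sketched in Python/NumPy as follows, assuming a 1-D pupil trace and a boolean artifact mask:

```python
import numpy as np

def clean_pupil_trace(pupil, artifact_mask, pad=3):
    """Mark artifact samples (plus `pad` samples before and after each one)
    as missing and fill the gaps by linear interpolation.

    pupil         -- 1-D array of pupil diameters (sampled here at 60 Hz)
    artifact_mask -- boolean array, True where the tracker flagged a blink/artifact
    """
    pupil = np.asarray(pupil, dtype=float).copy()
    artifact_mask = np.asarray(artifact_mask, dtype=bool)
    missing = artifact_mask.copy()
    # Extend every flagged sample by `pad` samples on both sides.
    for shift in range(1, pad + 1):
        missing[:-shift] |= artifact_mask[shift:]   # samples just before an artifact
        missing[shift:]  |= artifact_mask[:-shift]  # samples just after an artifact
    pupil[missing] = np.nan
    # Linear interpolation across the missing stretches.
    idx = np.arange(len(pupil))
    valid = ~np.isnan(pupil)
    pupil[~valid] = np.interp(idx[~valid], idx[valid], pupil[valid])
    return pupil
```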

2. VR Eye Tracking in Binocular Immersion

Participants wore an HTC Vive head-mounted display (HMD; New Taipei City, Taiwan) equipped with a Tobii eye tracker (Tobii, Stockholm, Sweden) and held an HTC Vive controller in their dominant hand. The two 1080 × 1200 px OLED screens have a refresh rate of 90 Hz and a combined field of view of approximately 100° (horizontal) × 110° (vertical). The integrated eye tracker recorded eye movements binocularly at a refresh rate of 120 Hz with a spatial accuracy below 1.1° within a 20° window centered in the viewports. The experiment was implemented in C# in the Unity 3D game engine (version 2017.3; Unity Technologies, San Francisco, CA, US) using SteamVR (version 1.10.26; Valve Corporation, Bellevue, WA, US) and the Tobii eye-tracking software libraries (version 2.13.3; Tobii, Stockholm, Sweden) on a computer running Windows 10.

3. Eye Tracking Emotional Perception Evaluation

The devices used in this experiment included an Intel computer (NUC11PAHi5), a 1920 × 1080-pixel display (392 mm × 250 mm × 10 mm, 17.3 inches), and a Tobii Eye Tracker 5 with a sampling rate of 60 Hz. We constructed the “eye movement emotional perception evaluation paradigm” in Unity and integrated it into the HCI system (comprising a paradigm evaluation module and a data acquisition module). The original HCI data included objective evaluation data such as participants’ eye-movement coordinates and the time spent on each task and each photo, which were recorded in a log file (JSON format) and uploaded to the database.
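
The protocol does not specify the log schema; a minimal sketch of how one such per-trial record (eye-movement coordinates plus task and photo timing) might be appended to a JSON-lines log file is shown below. All field names are hypothetical:

```python
import json
import time

def log_trial(log_path, participant_id, task_id, photo_id, gaze_samples, task_duration_s):
    """Append one trial's objective evaluation data as a JSON record.

    gaze_samples -- list of (timestamp_s, x_px, y_px) eye-movement coordinates
    Field names are illustrative only; the actual schema used by the HCI
    system is not described in the protocol.
    """
    record = {
        "participant": participant_id,
        "task": task_id,
        "photo": photo_id,
        "task_duration_s": task_duration_s,
        "gaze": [{"t": t, "x": x, "y": y} for t, x, y in gaze_samples],
        "logged_at": time.time(),
    }
    with open(log_path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")
```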

4. Rapid Visual Stimulus Naming with Eye-Tracking

In Task 1, the participants were required to name aloud all the visual stimuli on the grid as rapidly as possible. Their eye movements were recorded using the Tobii eye-tracker.

5. Emotion Recognition and Eye Tracking

In order to detect differences in emotion recognition abilities, all participants underwent a single session of about 2 h during which they were assessed for emotion recognition, attention abilities, frontal functioning, memory functioning, and quality-of-life satisfaction.
During the execution of the emotion recognition test using the Pictures of Facial Affect (PoFA; Ekman and Friesen, 1976) and a modified version of the PoFA (M-PoFA), all eye movements of the subjects were recorded with the Tobii eye tracker in order to detect the exploration modality adopted by each participant.

6. Infant Face Attention and Recognition

At age 7 months, infants completed a Face Pop-Out paradigm in which they were shown stimulus arrays consisting of one face image and four object images (e.g., car, bird, scrambled face, phone) while their eye-movement behaviour was recorded using a Tobii eye-tracker (for full details, see Elsabbagh et al., 2013). For the purposes of the current study, the proportion of time the infants spent looking at the face image compared to the object images was calculated and used in analysis to index attentional engagement with faces in infancy. Full details of data processing are provided in Elsabbagh et al. (2013). This measure of attentional engagement was previously found to be associated with poorer face recognition ability at age 3 years in this sample (de Klerk et al., 2014).
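
The exact processing follows Elsabbagh et al. (2013); purely as an illustration of the headline measure, the proportion of looking time to the face relative to the face and object images could be computed along these lines, assuming rectangular areas of interest (AOIs) and fixation records with durations:

```python
def face_popout_proportion(fixations, face_aoi, object_aois):
    """Proportion of looking time on the face image relative to the face
    plus the four object images.

    fixations   -- list of (x, y, duration_ms) fixation records
    face_aoi    -- (x_min, y_min, x_max, y_max) rectangle around the face image
    object_aois -- list of such rectangles, one per object image
    """
    def in_aoi(x, y, aoi):
        x_min, y_min, x_max, y_max = aoi
        return x_min <= x <= x_max and y_min <= y <= y_max

    face_time = sum(d for x, y, d in fixations if in_aoi(x, y, face_aoi))
    object_time = sum(
        d for x, y, d in fixations
        if any(in_aoi(x, y, aoi) for aoi in object_aois)
    )
    total = face_time + object_time
    return face_time / total if total > 0 else float("nan")
```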

7. Measuring Visual Attention through Eye Fixation

Eye-fixation count, hereafter Fix-C, refers to the number of fixation points (grey boxes) on the radar captured by the eye-tracker in each individual frame (Figure 7), as defined in (1). It is measured with the Tobii X2-30 screen-based eye-tracker and mapped onto the fixation frame via the I-VT algorithm. Fix-C has been found to be a suitable measure for assessing the participant's ability to notice visual cues (Wee et al., 2020).

Fix-C_AN = Eye fixation count / Aircraft number    (1)
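
Tobii's I-VT filter classifies samples as fixations when gaze velocity falls below a threshold (30°/s by default); a simplified sketch of that step and of the normalization in (1) is given below. The velocity input and threshold value are assumptions, not taken from the protocol:

```python
import numpy as np

def ivt_fixation_mask(gaze_velocity_deg_s, threshold_deg_s=30.0):
    """Simplified I-VT: samples whose gaze velocity is below the threshold
    are treated as belonging to fixations."""
    return np.asarray(gaze_velocity_deg_s) < threshold_deg_s

def fix_c_an(fixation_count_in_frame, aircraft_in_frame):
    """Fix-C normalized by the number of aircraft shown in the frame (Eq. 1)."""
    if aircraft_in_frame == 0:
        return float("nan")
    return fixation_count_in_frame / aircraft_in_frame
```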

8. Speech Features Annotation Protocol

Speech features, such as the time participants took to respond to the questions and the time they spent speaking, were manually annotated from the audio extracted from the Tobii eye tracker using the open-source software Audacity. Each question was annotated with four tags: start question (STAQ), stop question (STOQ), start answer (STAA), and stop answer (STOA). The start tags (STAQ, STAA) were placed at the first word uttered and the stop tags (STOQ, STOA) at the last one. All tags were annotated with millisecond precision. From these tags, the time to respond was computed as the temporal difference between STOQ and STAA, and eloquence was defined as the temporal difference between STAA and STOA.
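
Given the four tag timestamps, the two derived measures reduce to simple differences; a sketch (timestamps in milliseconds, dictionary keys matching the tag names above) might be:

```python
def speech_timing(tags_ms):
    """Derive response latency and eloquence from the annotation tags.

    tags_ms -- dict with millisecond timestamps for the four tags:
               STAQ (start question), STOQ (stop question),
               STAA (start answer),   STOA (stop answer)
    """
    time_to_respond_ms = tags_ms["STAA"] - tags_ms["STOQ"]  # end of question to start of answer
    eloquence_ms = tags_ms["STOA"] - tags_ms["STAA"]        # duration of the answer
    return time_to_respond_ms, eloquence_ms
```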

9. Eye-Tracking Experiment Setup and Implementation

The eye-tracker (Tobii, Sweden) was integrated with a 17-inch LCD monitor on which stimuli were displayed. A nine-point calibration was administered at the start of every block. A webcam placed under the eye-tracker was focused on the participant's eyes to monitor gaze interaction. For the test phase, a 17-inch CRT monitor was used. Sound stimuli were presented through two speakers (BOSE Media Mate II).

10. Multimodal ASD Identification Framework

Proposed framework. We developed a multimodal framework capable of automatically identifying ASD in children. In the data acquisition stage, multimodal data were collected with non-invasive sensors, including a Tobii eye tracker, a video camera, and a personal computer.
These sensors provided information on eye fixation, facial expression, and cognitive level, respectively. Next, features were extracted. First, the number of fixation coordinates in each cluster was extracted as an eye-fixation feature using the K-means algorithm. Second, the number of frames containing a smiling expression in each time interval was extracted as a facial-expression feature using an improved facial-expression recognition algorithm boosted by soft labels. Finally, the answers and response times collected with an interactive question-answer platform were extracted as cognitive-level features. Features with the same source and synchronization then underwent feature fusion, and an optimized random forest (RF) algorithm based on weighted decision trees was applied as the classification model, whose outputs became the input for the decision-fusion stage. After this stage, the final classification result was obtained. The experimental scene and the proposed framework are shown in Fig. 1.
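
The implementation is not published; as a sketch of the eye-fixation feature described above (K-means clustering of fixation coordinates followed by per-cluster counts), using scikit-learn and a placeholder cluster count:

```python
import numpy as np
from sklearn.cluster import KMeans

def fixation_cluster_features(fixation_xy, n_clusters=5, random_state=0):
    """Count the fixation coordinates falling in each K-means cluster.

    fixation_xy -- array of shape (n_fixations, 2) with fixation coordinates
    n_clusters  -- number of clusters; the protocol does not state the value
                   used, so 5 is only a placeholder
    """
    labels = KMeans(
        n_clusters=n_clusters, random_state=random_state, n_init=10
    ).fit_predict(fixation_xy)
    return np.bincount(labels, minlength=n_clusters)  # fixations per cluster
```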

