The largest database of trusted experimental protocols

Eye Gaze

Eye Gaze: A powerful tool for research optimization.
Discover the potential of AI-driven eye gaze analysis to revolutionize your protocols.
Easily identify the most effective techniques and products from literature, pre-prints, and patents.
Enhance your research with data-driven insights and take it to new heights.
Experience the future of research with PubCompare.ai's cutting-edge eye gaze analysis capabilities.

Most cited protocols related to «Eye Gaze»

The DDM models two-choice decision making as a noisy process accumulating evidence over time (Fig 2). This process approaches one of two boundaries with a certain speed (drift rate, influenced by the amount of evidence conveyed by the stimuli). When one of the two boundaries is crossed, the associated response is executed. The distance between the two boundaries is called the decision threshold; larger thresholds lead to slower, but more accurate, responding. Estimation of these underlying decision processes was accomplished using DDM analysis of test phase choices. We fit each participant's choices and RT distributions with a DDM assuming that the proportional difference in reward values for the two options sets the drift rate (Cavanagh, Wiecki, et al., 2011 (link); Ratcliff & Frank, 2012 (link)), i.e., the rate of evidence accumulation for one option over the other. We used hierarchical Bayesian estimation of DDM parameters, which optimizes the tradeoff between random- and fixed-effect models of individual differences, such that fits to individual subjects are constrained by the group distribution but can vary from this distribution to the extent that their data are sufficiently diagnostic (Wiecki, Sofer, & Frank, 2013 (link)). This procedure produces more accurate DDM parameter estimates for individuals and groups, particularly given low trial numbers or when assessing coefficients between psychophysiological measures and behavior.
Estimation of the Hierarchical DDM (HDDM) was performed using recently developed software (http://ski.clps.brown.edu/hddm_docs) (Wiecki et al., 2013 (link)). Bayesian estimation allowed quantification of parameter estimates and uncertainty in the form of the posterior distribution. Markov chain Monte-Carlo (MCMC) sampling methods were used to accurately approximate the posterior distributions of the estimated parameters. Each DDM parameter for each subject and condition was modeled to be distributed according to a normal (or truncated normal, depending on the bounds of parameter intervals) distribution centered around the group mean with group variance. Prior distributions for each parameter were informed by a collection of 23 studies reporting best-fitting DDM parameters recovered on a range of decision making tasks (Matzke & Wagenmakers, 2009 (link)), see the supplement of (Wiecki, Sofer, & Frank, 2013 (link)) for visual depictions of these priors. A model using non-informative priors (e.g. uniform distributions that assign equal probability to all parameter values over a large interval) resulted in highly similar results. There were 5000 samples drawn from the posterior; the first 200 were discarded as burn-in following the conventional approach to MCMC sampling whereby initial samples are likely to be unreliable due to the selection of a random starting point.
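As a rough illustration of the estimation procedure described above, the following sketch (not the authors' exact code) shows how a hierarchical DDM with 5000 MCMC samples and a 200-sample burn-in might be fit with the HDDM Python package; the CSV file name and the 'value_diff' condition column are hypothetical.

```python
# Minimal HDDM sketch, assuming trial-level data with the columns HDDM expects
# ('rt', 'response', 'subj_idx') plus a hypothetical 'value_diff' condition column.
import hddm

data = hddm.load_csv('test_phase_trials.csv')

# Hierarchical model: subject parameters are drawn from (truncated) normal group distributions.
model = hddm.HDDM(data, depends_on={'v': 'value_diff'})  # drift rate varies with reward-value difference
model.find_starting_values()        # optimize to sensible starting values for the chains
model.sample(5000, burn=200)        # 5000 MCMC samples; first 200 discarded as burn-in
model.print_stats()                 # posterior summaries for group- and subject-level parameters
```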
We estimated regression coefficients in separate HDDM models to determine the relationship between single trial variations in psychophysiological measures (eye gaze, pupil dilation) and model parameters (drift rate, decision threshold):
parameter = β0 + β1 × (psychophysiology)
In these regressions, the coefficient β1 captures how the parameter (drift rate, threshold) varies with the value of the psychophysiological measure (proportional gaze dwell time, pupil change from baseline) on that specific trial, with β0 as the intercept. We extended this regression approach to formally compare three competing models of the influence of value and gaze dwell time on drift rate. Each of these models contained multiple regression coefficients in order to test independent and interactive influences of value and gaze dwell time on drift rate. In each of these models, the continuous influence of value (10%, 20%, 40%, 50%, 60%) was used instead of the condition-specific differences (win-win, lose-lose, and win-lose), although we do plot the condition-specific effect of gaze in one instance for descriptive purposes. For descriptive clarity, these models are formally explained in the Results section. This regression approach was also used to model the influence of pupil dilation on decision threshold in the corrected win-win and lose-lose conditions.
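A sketch of how such trial-by-trial regressions can be specified with HDDM's regression interface is shown below; the column names 'gaze_dwell' and 'pupil_change' are illustrative placeholders for the psychophysiological measures, not the authors' variable names.

```python
# Hedged sketch of the regressions parameter = beta_0 + beta_1 * psychophysiology,
# assuming trial-level columns 'gaze_dwell' and 'pupil_change' in the data frame.
import hddm

data = hddm.load_csv('test_phase_trials.csv')

# Drift rate (v) as a linear function of proportional gaze dwell time on each trial.
m_drift = hddm.HDDMRegressor(data, 'v ~ gaze_dwell')
m_drift.sample(5000, burn=200)

# Decision threshold (a) as a linear function of pupil change from baseline on each trial.
m_thresh = hddm.HDDMRegressor(data, 'a ~ pupil_change')
m_thresh.sample(5000, burn=200)
```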
Bayesian hypothesis testing was performed by analyzing the probability mass of the parameter region in question (estimated by the proportion of samples drawn from the posterior that fall in this region; for example, the percentage of posterior samples greater than zero). Statistical analysis was performed on the group mean posteriors. The Deviance Information Criterion (DIC) was used for model comparison, where lower DIC values favor models that combine high likelihood with few parameters (Gelman, 2004). While alternative methods exist for assessing model fit, DIC is widely used for model comparison of hierarchical models (Spiegelhalter, Best, Carlin, & van der Linde, 2002), a setting in which Bayes factors are not easily estimated (Wagenmakers, Lodewyckx, Kuriyal, & Grasman, 2010 (link)).
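The two steps described here — reading off posterior probability mass and comparing DIC values — might look roughly as follows, continuing the hypothetical regression models above; the coefficient node name follows HDDM's naming convention and should be checked against the fitted model in practice.

```python
# Posterior-mass hypothesis test on the group-level regression coefficient beta_1.
beta1 = m_drift.nodes_db.node['v_gaze_dwell']      # node name is an assumption; inspect nodes_db
p_positive = (beta1.trace() > 0).mean()            # fraction of posterior samples above zero
print('P(beta_1 > 0) =', p_positive)

# Model comparison: lower DIC favours higher likelihood with fewer effective parameters.
print('DIC, drift model:', m_drift.dic)
print('DIC, threshold model:', m_thresh.dic)
```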
Publication 2014
A-factor (Streptomyces) Diagnosis Dietary Supplements Diffusion Eye Gaze factor A Lymphoid Progenitor Cells Mydriasis Pupil
An Eyelink 1000 eye-tracker (SR Research) monitored the gaze location of participants' right eyes during reading. The eye tracker has a spatial resolution of better than 30′ of arc and a 1000 Hz sample rate. Participants viewed the stimuli binocularly on a monitor 63 cm from their eyes; approximately 3 characters equaled 1° of visual angle.
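For readers who want to verify the reported geometry, the short sketch below (not part of the original protocol) computes the visual angle subtended by a stimulus of a given width at a 63 cm viewing distance; the assumed character width of roughly 0.37 cm is an illustrative value.

```python
# Visual-angle check: theta = 2 * atan(size / (2 * distance)), converted to degrees.
import math

def visual_angle_deg(size_cm: float, distance_cm: float) -> float:
    """Visual angle (degrees) subtended by an object of width size_cm at distance_cm."""
    return math.degrees(2 * math.atan(size_cm / (2 * distance_cm)))

# Assuming ~0.37 cm per character at 63 cm, three characters span roughly 1 degree.
print(visual_angle_deg(3 * 0.37, 63))   # ≈ 1.0
```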
Publication 2011
Character Eye Eye Gaze
During their first and second visits, infants were administered a face preference task very similar to that reported by [28]. Looking behaviour was recorded with a Tobii eye tracker. The Tobii system has an infrared light source and a camera mounted below a 17 in. flat-screen monitor to record corneal reflection data. The system measures the gaze direction of each eye separately and from these measurements evaluates where on the screen the individual is looking. During the eye-tracking tasks the child was seated on his/her caregiver's lap, 50–55 cm from the Tobii screen. The height and distance of the screen were adjusted for each child to obtain good tracking of the eyes. A five-point calibration sequence was run first, with recording started only when at least four points were marked as properly calibrated for each eye. Gaze data were recorded at 50 Hz.
In the present task, 14 different arrays, each with five stimuli, were presented (see Fig. 1 for an example). Each array contained a colour image of one of fourteen different faces with direct gaze, which served as the target. Different exemplars from each of the following categories were also included in the array: mobile phones, birds, and cars. Another stimulus was a visual ‘noise’ image, generated from the same face presented within the array by randomizing the phase spectra of the face whilst keeping the amplitude and colour spectra constant [33] (link). The slides were counterbalanced for gender, ethnicity, and vertical and horizontal location of the face within the array. To verify that faces were similar to the other categories in terms of visual saliency, saliency ranks were calculated for each area of interest on all 14 slides using the Saliency Toolbox 2.2 [63] (link); categories had very similar average saliency ranks. When placed at a distance of 55 cm from the child, the five individual images on the slide had an eccentricity of 9.3° and covered an approximate area of 5.2° × 7.3°.
Before each slide a small animation was presented in the centre of the screen to ensure that the children's gaze was directed to the centre. Each slide presentation lasted 15 s. To assist in maintaining the children's attention, the visual presentation was accompanied by music. If the child stopped looking at the slide, one of the experimenters prompted the infant to look at the screen again, without naming or referring to any of the stimuli. When the infant looked away for more than 5 s, the experimenter terminated presentation of the given slide. Rectangular AOIs were defined around each object image and the centre of the screen using Tobii Studio software. Gaze data were extracted for each AOI: centre, face, noise, car, bird, phone, and total (the entire slide).
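A minimal sketch of how dwell time per rectangular AOI could be extracted from 50 Hz gaze samples is given below; it is not the Tobii Studio pipeline used in the study, and the AOI coordinates and column names are invented for illustration.

```python
# Dwell time per rectangular AOI from gaze samples recorded at 50 Hz.
import pandas as pd

SAMPLE_DURATION_S = 1 / 50.0            # one gaze sample every 20 ms

aois = {                                # (x_min, y_min, x_max, y_max) in screen pixels (illustrative)
    'face':  (100, 150, 320, 420),
    'phone': (700, 150, 920, 420),
}

def dwell_time_s(gaze: pd.DataFrame, box) -> float:
    """Seconds of gaze falling inside one rectangular AOI."""
    x_min, y_min, x_max, y_max = box
    inside = gaze['x'].between(x_min, x_max) & gaze['y'].between(y_min, y_max)
    return float(inside.sum()) * SAMPLE_DURATION_S

gaze = pd.read_csv('gaze_samples.csv')  # expected columns: 'x', 'y' in screen coordinates
print({name: dwell_time_s(gaze, box) for name, box in aois.items()})
```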
Publication 2013
Attention Aves Child CM 55 Corneal Reflexes Ethnicity Eye Gaze Face Infant Infrared Rays
This study was conducted in 36 healthy, orthotropic paid volunteers who were recruited by advertisement and 12 subjects with chronic unilateral SO palsy confirmed by both clinical examination and the presence of significant SO atrophy on MRI,15 (link),16 (link) with no history of strabismus surgery. Each subject gave written informed consent according to a protocol conforming to the Declaration of Helsinki and approved by the Human Subject Protection Committee at the University of California, Los Angeles. Data collection was compliant with the Health Insurance Portability and Accountability Act of 1996.
Healthy subjects underwent a comprehensive eye examination to verify normal visual acuity, normal ocular motility, stereoacuity, and ocular anatomy. Subjects with SO palsy underwent the same comprehensive eye examination as well as a Hess screen test.
Each subject underwent high-resolution T1-weighted or T2-weighted fast spin-echo MRI with surface coils and a 1.5-T Signa scanner (General Electric, Milwaukee, Wisconsin) using protocols described in detail elsewhere.17 (link),18 (link) Images were obtained in a quasi-coronal fashion (Figure 1) in a matrix of 256×256 pixels over an 8-cm field of view (313-μm pixel resolution) with 2-mm slice thickness. Images were obtained by scanning the fixating eye in central gaze in all subjects and in supraduction and infraduction for most subjects, with imaging in secondary gaze positions limited in some subjects by fatigue during image acquisition.
Digital MRIs were quantified using the program ImageJ 1.37v (W. Rasband, National Institutes of Health, Bethesda, Maryland). For the 6 contiguous image planes beginning at the globe–optic nerve junction and extending 12 mm posteriorly, each rectus EOM and the SO were manually outlined (Figure 2) and the cross-sectional area was obtained using the Area function of ImageJ. Volumes were determined by multiplying the cross-sectional areas by the 2-mm slice thickness and summing the volumes for all 6 image planes.
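The volume calculation amounts to multiplying each of the six cross-sectional areas by the 2-mm slice thickness and summing; a small sketch with invented area values is shown below.

```python
# EOM volume from per-slice cross-sectional areas (mm^2) over 6 contiguous 2-mm planes.
SLICE_THICKNESS_MM = 2.0

def eom_volume_mm3(areas_mm2) -> float:
    """Sum of (cross-sectional area x slice thickness) across image planes."""
    return sum(area * SLICE_THICKNESS_MM for area in areas_mm2)

# Example with hypothetical areas for one rectus muscle:
print(eom_volume_mm3([28.4, 30.1, 31.7, 29.8, 27.2, 24.9]))   # 344.2 mm^3
```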
The EOM volumes and maximum cross-sectional areas in central gaze were considered measures of hypertrophy. The change in EOM maximum cross-sectional area from supraduction to infraduction was considered a measure of contractility. Statistical comparisons were made using paired t tests.
Publication 2011
Atrophy ECHO protocol Electricity Eye Eye Gaze Eye Movements Fatigue Fingers Fragility, Capillary Healthy Volunteers Hypertrophy Magnetic Resonance Imaging Muscle Contraction Operative Surgical Procedures Optic Nerve Physical Examination Strabismus Vision Visual Acuity
Data were analyzed using SPSS 20. Effect sizes (partial eta-squared, ηp², for F statistics and Cohen's d for t tests) are reported together with p-values for significant main effects and interactions. Following Cohen (1988), ηp²/d values between 0.01/0.20 and 0.06/0.50 reflect a small effect, values between 0.06/0.50 and 0.14/0.80 a medium effect, and values above 0.14/0.80 a large effect. All p-values are two-tailed.
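For reference, the reported effect sizes can be recovered as sketched below (these helpers are illustrative, not the SPSS procedure used in the study).

```python
# Effect-size helpers: partial eta squared from an F statistic, Cohen's d from group summaries.
import math

def partial_eta_squared(f_value: float, df_effect: int, df_error: int) -> float:
    """Partial eta squared recovered from F and its degrees of freedom."""
    return (f_value * df_effect) / (f_value * df_effect + df_error)

def cohens_d(mean1: float, mean2: float, sd1: float, sd2: float, n1: int, n2: int) -> float:
    """Cohen's d using the pooled standard deviation."""
    pooled_sd = math.sqrt(((n1 - 1) * sd1 ** 2 + (n2 - 1) * sd2 ** 2) / (n1 + n2 - 2))
    return (mean1 - mean2) / pooled_sd

print(partial_eta_squared(8.5, 1, 48))   # ≈ 0.15: a large effect by the thresholds above
```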
Our analyses aim to characterize how patterns of eye gaze to Social vs. Object AoIs vary by diagnostic group (ASD vs. TD) and by task (Static, Dynamic, Interactive). Because all three tasks include empty background information (see Figure 1) that does not fall into either AoI category, fixation time combined between Social and Object AoIs does not equate to total fixation time on screen.
Our analyses focus on “Total Fixation Duration” for each stimulus type, defined as the total duration of all fixations (>30 ms) within a given AoI category (e.g., social stimuli); this measure is often used as an index of preference for one stimulus type over another. In order to account for individual differences in overall looking and differences in AoI size, and to retain all collected data rather than implement an exclusionary gaze-time threshold that may produce selection biases, we calculated the “Proportion of Total Fixation Duration” for each task by dividing the fixation time participants devoted to each AoI group (i.e., Social vs. Object) by their total fixation time on the entire screen. This metric therefore indicates the percentage of on-screen fixation time each participant directed to each AoI group, and retains comparability with previously published research (Parish-Morris et al., 2013). Finally, based on these calculated proportions, we also computed a Social Prioritization summary score by subtracting the proportion of fixation time devoted to Object AoIs from the proportion devoted to Social AoIs. Importantly, because each task consisted of more than just Social and Object AoIs (e.g., empty backgrounds), the proportion of fixation time to the Social AoI is not simply the inverse of the proportion of fixation time to the Object AoI (and vice versa).
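A sketch of these derived metrics is shown below; the column names are assumptions standing in for per-participant fixation totals exported from the eye tracker.

```python
# Proportion of Total Fixation Duration and Social Prioritization score per participant.
import pandas as pd

# Assumed columns: total fixation seconds on Social AoIs, Object AoIs, and the whole screen.
df = pd.read_csv('fixation_totals.csv')   # columns: social_fix_s, object_fix_s, screen_fix_s

df['prop_social'] = df['social_fix_s'] / df['screen_fix_s']
df['prop_object'] = df['object_fix_s'] / df['screen_fix_s']

# Social Prioritization: proportion to Social AoIs minus proportion to Object AoIs.
df['social_prioritization'] = df['prop_social'] - df['prop_object']
```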
Given the observed group differences in age and IQ, we first assessed whether these variables had an impact on Social Prioritization. Age did not correlate with Social Prioritization across the sample as a whole, or within each group independently, on any of the three tasks, suggesting that these tasks were appropriate choices for testing and for producing stable results within the broad age range examined here. Full Scale IQ, however, correlated with Social Prioritization for the ASD group on the Static task (r = -.31, p = .016) and the Dynamic Non-Interactive task (r = -.26, p = .044) but not the Dynamic Interactive task, indicating that their cognitive ability related to task performance on two of the three tasks. In contrast, IQ did not correlate with Social Prioritization for the TD group on any of the three tasks. A standard approach in the case of between-group IQ differences is to covary IQ in an ANCOVA. This common practice increases Type II error, however, especially in cases where the correlation between IQ and the dependent variable is not homogeneous across groups (Dennis et al., 2009 (link); Miller & Chapman, 2001 (link)). Because the field usually provides ANCOVA results for situations like ours, we ran all analyses with and without covarying IQ and found that the pattern of significant and non-significant effects did not differ. We also re-ran all analyses after excluding the eight participants with ASD whose Full Scale IQs were under 70, and the results did not differ. In what follows, we report the results of the ANOVA, which are more reliable in this context.
Publication 2015
Cognition Diagnosis Eye Gaze neuro-oncological ventral antigen 2, human Task Performance

Most recent protocols related to «Eye Gaze»

Ten-minute video recordings of the NPCI were obtained for each child in the families’ homes. Before the video-recorded observation began, a research clinician described the purpose of the observation and the recording procedures. Parents were asked to play with their child the way they typically do. For Study 1 (2016–2018), the families used the child's own toys or other items available in the home; for Study 2 (2018–2020), the dyads were provided with a standardized set of developmentally appropriate toys (a drum, farm animal and tractor set, musical piggy bank, blocks, baby doll with accessories, shape sorter, ball, puzzles, snap lock toy, and picture books). Video recordings of the SCCI were obtained for each child during administration of the CSBS DP. Both the NPCI and SCCI were digitally recorded by a research clinician using an iPad 2 for a wide-angle view and hidden camera glasses worn by the adult (parent or clinician) to capture child eye gaze.
The recorded NPCI and SCCI assessments were sent to the lab where the two streams of digitized videos (i.e., iPad 2 and glasses) were time linked, transcribed for communication (including gesture), and coded for various social and communicative measures using the conventions of the Child Language Data Exchange System (CHILDES; MacWhinney, 2000 ). Transcription was conducted at the level of the utterance and included all verbal, vocal, and gestural behaviors bounded by a pause or change in conversational turn (Pan et al., 2005 (link)). For Spanish-speaking dyads, bilingual (English/Spanish) research assistants transcribed in Spanish and provided English translations on a secondary coding line. Transcription and coding were conducted by trained research assistants who were blind to group assignment. Specifically, research assistants trained on practice videos until they achieved substantial inter-rater agreement measured by obtaining a Cohen's kappa coefficient of .75 or above. Cohen's kappa accounts for agreement that occurs by chance (Yoder et al., 2018 ). Once reliable, research assistants were allowed to code study videos.
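The agreement criterion can be checked as sketched below (a hypothetical example using scikit-learn, not the study's own reliability script); the code labels are invented.

```python
# Inter-rater agreement check: Cohen's kappa between two coders, reliability threshold 0.75.
from sklearn.metrics import cohen_kappa_score

coder_1 = ['gesture', 'vocal', 'verbal', 'gesture', 'vocal', 'verbal']
coder_2 = ['gesture', 'vocal', 'verbal', 'verbal',  'vocal', 'verbal']

kappa = cohen_kappa_score(coder_1, coder_2)
print(f'kappa = {kappa:.2f}:', 'reliable' if kappa >= 0.75 else 'more training needed')
```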
Publication 2023
Adult Child Conferences Congenital Short Bowel Syndrome Eye Gaze Eyeglasses Farm Animals Hispanic or Latino Infant Parent Transcription, Genetic Visually Impaired Persons
Before developing the app’s content, we agreed on 5 thematic domains that are relevant to children’s social, emotional, and cognitive development and 5 neurobiological systems that are involved in social, emotional, and cognitive development. These domains and neurobiological systems (Figure 2) guide the development of the app’s content, are used to categorize the content within the app, and provide the scientific rationale for encouraging parents to engage with the practices promoted by the app.
The thematic domains are based broadly on the Bright Tomorrows project (developed by Minderoo Foundation and Telethon Kids Institute), with the Brain and Mind Centre team mapping new domains. The domains and the broad types of content included in each domain are (1) the Cognitive Brain domain, which includes content about broad cognitive processes (eg, attention, learning, memory, visual and auditory processing, motor skills, and imagination); (2) the Social Brain domain, which includes content about social interaction and the sociocognitive processes involved in recognizing, interpreting, and responding to social cues (eg, eye gaze, joint attention, and facial expressions); (3) the Language and Communication domain, which includes content about processing, understanding, and using verbal and nonverbal language and signals (eg, gestures); (4) the Identity and Culture domain, which includes content about the development of a sense of personal, social, and community identity and the roles that culture and place play in identity development (eg, customs, festivals, and folk stories); and (5) the Physical Health domain, which includes content about physical health, growth, and development and physical protection from harm and abuse (eg, harsh discipline).
The neurobiological systems that we focus on and an outline of their relevance to early child development are shown in Figure 2. These five systems and their main functions include (1) the stress response system, which creates a hormonal response to stress (prolonged activation of the stress response system is associated with negative emotional, behavioral, and physical health outcomes); (2) the oxytocin system, which regulates social, behavioral, and emotional processes (eg, smiling, attention to eye gaze, and breastfeeding), of which many are fundamentally important for early child-caregiver bonds and other social bonds; (3) the learning system, which assigns value to objects and behaviors (in childhood, this is fundamental for motivation creation, social behaviors, and associative learning); (4) the fear-arousal-memory system, which encodes and maintains memories of fearful stimuli and the contexts in which they are experienced; and (5) the circadian system, which orchestrates the daily rhythmic timing of almost all physiological processes and behaviors (eg, sleep and wakefulness, appetite, mood, and cognitive function). Other publications provide more details about these systems and their relevance to early child development [25 (link)-38 (link)].
Publication 2023
Arousal Attention Brain Child Child Development Cognition Drug Abuse Emotions Eye Gaze Fear Imagination Joints Memory Mental Processes Mood Motivation Motor Skills Oxytocin Parent Physical Examination Physiological Processes Sleep Vaginal Diaphragm Wakefulness
To confirm that the participants followed the instructions and actually looked at each other, we analyzed the eye-gaze behavior of each participant using data retrieved from the eye trackers. We estimated how often the participants looked at the partner’s (i) body and (ii) face. The former was performed by first estimating the area occupied by the partner’s body on each image, something we achieved using a pre-trained DeepLabV3 model with a ResNet-101 backbone.111 (link) Next, this information was combined with the gaze information resulting in a binary code (1 if gaze location overlapped with body location, 0 otherwise). The second estimation of eye-gaze on the partner’s face, instead, relied on landmarks estimated using OpenPose. Specifically, we estimated the center of the face and the maximum extent of the face on the image. We then used this information to compute an ellipse around the center of the face. The results of this analysis indicated that individuals, on average, looked at the body of their partners 81.67% of the time, and specifically at the face 30.78% of the time. This confirms that participants were complying with the experimental instructions.
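The two checks described above could be sketched as follows; the segmentation step assumes the torchvision DeepLabV3/ResNet-101 model (person class 15 in its Pascal VOC label set), and the face ellipse uses a centre and radii that would in practice come from OpenPose landmarks.

```python
# Hedged sketch: (i) gaze-on-body via semantic segmentation, (ii) gaze-on-face via an ellipse test.
import torch
import torchvision

seg_model = torchvision.models.segmentation.deeplabv3_resnet101(weights='DEFAULT').eval()

def gaze_on_body(image_tensor: torch.Tensor, gaze_x: int, gaze_y: int) -> int:
    """1 if the gaze pixel is segmented as 'person' (Pascal VOC class 15), else 0."""
    with torch.no_grad():
        logits = seg_model(image_tensor.unsqueeze(0))['out'][0]   # (classes, H, W)
    return int(logits.argmax(0)[gaze_y, gaze_x].item() == 15)

def gaze_on_face(gaze_x: float, gaze_y: float, cx: float, cy: float, rx: float, ry: float) -> int:
    """1 if the gaze point falls inside the ellipse around the face centre (cx, cy)."""
    return int(((gaze_x - cx) / rx) ** 2 + ((gaze_y - cy) / ry) ** 2 <= 1.0)
```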
Publication 2023
Eye Gaze Face Human Body Vertebral Column
The data analysis involved three steps: (a) transcribing dyadic conversation sessions; (b) coding CTs; and (c) coding SRs (i.e., AN, AU, and PR) [57 (link)]. The research assistants were paired as R1 with R2 and R3 with R4. First, all the video-recorded conversation sessions were transcribed using transcription notations adapted from the work of Tsai [56 (link)], as documented in Table S1. All spoken words, nonverbal conversation behaviors (e.g., head nods and eye gaze), and silences within and between utterances were transcribed precisely [62 (link)]. Two of the research assistants (i.e., R1 and R3) transcribed the conversation sessions, and the other two (i.e., R2 and R4) checked the accuracy of the transcriptions before coding. Any discrepancies in the transcriptions were discussed until consensus was reached [76 (link)], and all remaining disagreements were resolved [76 (link),77 (link)].
Second, CTs were independently coded on an utterance-by-utterance basis by R1 and R2 [46 (link)]. A CT-A was coded to a CT contributed by the core HOAs, and CT-B was coded to a CT contributed by the recommended HOAs. Detailed operational definitions of CT coding adapted from the work of Tsai [56 (link)] and extracts illustrating different coding rules of CTs are documented in Table S2.
Third, three SRs (i.e., AN, AU, and PR) were independently coded on an utterance-by-utterance basis using Goffman’s [43 ] framework by R1 and R2. AN was defined as a person producing utterances (i.e., giving “voice” to the words) [43 ], and AU was defined as a person who selects words or infers meanings from incomplete spoken words, utterances (e.g., go), and/or nonverbal conversation behaviors (e.g., head nod and eye gaze) [43 ,44 (link)]. PR was defined as a person whose beliefs, positions, perspectives, personal information, and sentiments are established during the conversation [43 ,78 ]. An AN-A, AU-A, and PR-A were coded to an SR contributed to by the core HOAs, and an AN-B, AU-B, and PR-B were coded to an SR contributed to by the recommended HOAs. Detailed operational definitions of SR coding adapted from the work of Tsai, Scherz, and DiLollo [46 (link)] and extracts illustrating different coding rules of SRs are documented in Table S3.
Publication 2023
Eye Gaze Head Joints Transcription, Genetic

Top products related to «Eye Gaze»

Sourced in United States, United Kingdom, Germany, Canada, Japan, Sweden, Austria, Morocco, Switzerland, Australia, Belgium, Italy, Netherlands, China, France, Denmark, Norway, Hungary, Malaysia, Israel, Finland, Spain
MATLAB is a high-performance programming language and numerical computing environment used for scientific and engineering calculations, data analysis, and visualization. It provides a comprehensive set of tools for solving complex mathematical and computational problems.
Sourced in Canada
The EyeLink 1000 is a high-performance eye tracker that provides precise and accurate eye movement data. It is capable of recording monocular or binocular eye position at sampling rates up to 2000 Hz. The system uses infrared illumination and video-based eye tracking technology to capture eye movements. The EyeLink 1000 is designed for use in a variety of research applications, including cognitive science, psychology, and human-computer interaction studies.
Sourced in Canada
The Eyelink 1000 is a high-performance eye tracker manufactured by SR Research. It is capable of recording eye movements with high spatial and temporal resolution. The device uses infrared video-based technology to track the user's gaze and provide accurate data on eye position and pupil size.
Sourced in Canada
The EyeLink 1000 system is a high-performance eye tracking device designed for research applications. It provides accurate, real-time data on eye movements and gaze position. The system uses advanced optical and digital technologies to capture and analyze eye behavior. It is a versatile tool used in various fields of study, including psychology, neuroscience, and human-computer interaction.
Sourced in Canada, United States
The EyeLink 1000 Plus is a high-speed, video-based eye tracker that provides accurate and reliable eye movement data. It is designed for a wide range of applications, including research, usability testing, and clinical studies. The system uses infrared illumination and a digital video camera to track the user's eye and record its position over time.
Sourced in Canada
The EyeLink 1000 Desktop Mount is an eye tracking system developed by SR Research. It is designed to accurately measure and record eye movements. The system uses infrared cameras to track the position and movement of the user's eyes during various tasks or activities.
Sourced in Sweden
Tobii Pro Lab is a software platform designed for research and analysis of eye tracking data. It provides tools for recording, visualizing, and analyzing eye movements during various tasks and experiments. The core function of Tobii Pro Lab is to enable researchers and professionals to capture and interpret eye tracking data efficiently.
Sourced in Sweden
The Tobii X2-60 is an eye-tracking device designed to capture and analyze eye movement data. It operates at a sampling rate of 60Hz and has a field of view of 32 degrees horizontal and 22 degrees vertical. The X2-60 is a compact and lightweight device that can be integrated into various research and development applications.
Sourced in Canada, China
The EyeLink 1000 Desktop Mount is a high-performance eye-tracking system designed for accurate and reliable measurement of eye movements. It features a desktop-mounted camera that captures the user's eye movements, providing precise data on gaze position, pupil size, and other eye-related metrics.
Sourced in Sweden
The TX300 is an eye tracking device that measures the user's gaze point and eye movements. It has a sampling rate of 300 Hz and can be used in both stationary and mobile settings. The TX300 is designed for research purposes and provides accurate and reliable eye tracking data.

More about "Eye Gaze"

Eye gaze analysis is a powerful tool for research optimization, allowing researchers to gain valuable insights and enhance their protocols.
By utilizing AI-driven eye gaze analysis, researchers can easily identify the most effective techniques and products from literature, pre-prints, and patents, enabling them to take their research to new heights.
The EyeLink 1000 and EyeLink 1000 Plus eye trackers, along with the Tobii Pro Lab software, are commonly used in eye gaze research.
These advanced systems provide precise and reliable data, empowering researchers to understand human behavior and cognition in depth.
The EyeLink 1000 Desktop Mount and X2-60 eye tracker are also popular choices, offering flexibility and portability for various research settings.
Experience the future of research optimization with PubCompare.ai's cutting-edge eye gaze analysis capabilities.
This AI-driven platform allows researchers to compare and analyze eye gaze data from multiple sources, including MATLAB-generated data, to identify the most effective protocols and techniques.
By leveraging the power of eye gaze analysis, researchers can make data-driven decisions, leading to more impactful and efficient research outcomes.