The largest database of trusted experimental protocols

Psychtoolbox 3

Developed by the open-source Psychtoolbox project
Runs in MATLAB (MathWorks) and GNU Octave

Psychtoolbox-3 is a free, open-source software toolkit for research in experimental psychology and neuroscience. It provides functions and libraries that enable researchers to create and control visual, auditory, and other stimuli for their experiments. The toolkit runs in MATLAB and GNU Octave and is widely used in the scientific community.

Automatically generated - may contain errors

35 protocols using Psychtoolbox 3

1. Multi-Echo fMRI of Decision-Making

The task was divided into two runs of 40 trials each. Each run lasted around 18 min, with 30 additional TRs of fixation at the beginning, which were used to compute the combining weights for the four echoes in our multi-echo fMRI sequence. Before the first run, there was a left-handed finger-tapping task and a calibration procedure for eye tracking. Between the runs, the participant could take a break for as long as he or she wanted. At the end of the scanner session, a T1-weighted anatomical scan was made (see "fMRI data acquisition"). All stimuli were presented using Psychtoolbox 3.0.11 (www.psychtoolbox.org) in MATLAB 2013a (MathWorks, Natick, MA, USA) onto a screen at the back of the scanner bore, which the participant viewed via a mirror mounted on the head coil. The participant responded using the leftmost two buttons of a four-button curved response box (Current Designs, Philadelphia, PA, USA) held in the right hand. These buttons moved the slider on the decision screen left and right in increments of 1 token or 10% of the slider range, whichever was greater, to increase the speed of movement on the slider [13]. The slider ranged from 0 to [investment × multiplier]. The starting point of the slider was randomly selected on each trial, ensuring that the number of button presses was orthogonal to the number of tokens selected.
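The slider step rule described above (each press moves the slider by 1 token or 10% of the slider range, whichever is greater) can be sketched as follows. This is an illustrative reconstruction; the function names and the example values are not from the original task code.

```python
import random

def slider_step(investment, multiplier):
    """Step size per button press: 1 token or 10% of the slider range, whichever is greater."""
    slider_max = investment * multiplier  # slider runs from 0 to investment x multiplier
    return max(1.0, 0.10 * slider_max)

def random_start(investment, multiplier):
    """Random starting position, keeping the number of presses orthogonal to tokens chosen."""
    return random.uniform(0, investment * multiplier)

# Example: investing 10 tokens with a 6x multiplier gives a 0-60 slider,
# so each press moves the slider by max(1, 6) = 6 tokens.
step = slider_step(10, 6)
```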
2. Computational Model Analysis of Perceptual Data

Stimuli were presented on a Windows PC running Psychtoolbox 3.0.11 (www.psychtoolbox.org) for MATLAB 2016a (MathWorks, Natick, MA, USA). Questionnaire data, screening, and debriefing were collected using Castor Electronic Data Capture (www.castoredc.com). Computational model analysis was carried out in Python 2.7; models were fit to the data using the optimize.least_squares function in SciPy version 1.0.0 [54].
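A minimal sketch of fitting a model with scipy.optimize.least_squares, as described above. The exponential-decay model and the synthetic data here are placeholders for illustration, not the study's actual computational model.

```python
import numpy as np
from scipy.optimize import least_squares

# Toy dataset: observations from an exponential decay plus Gaussian noise
rng = np.random.default_rng(0)
x = np.linspace(0, 5, 50)
y = 2.0 * np.exp(-1.3 * x) + 0.05 * rng.standard_normal(x.size)

def residuals(params, x, y):
    """Residuals between the model's predictions and the observed data."""
    a, b = params
    return a * np.exp(-b * x) - y

# least_squares minimizes the sum of squared residuals over the parameters
fit = least_squares(residuals, x0=[1.0, 1.0], args=(x, y))
a_hat, b_hat = fit.x  # recovered parameters, close to the generating (2.0, 1.3)
```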
3. Multimodal Sensory Stimulation Protocol

The visual stimulus was a radial expansion motion stimulus: low-contrast (10%) concentric circles with sinusoidal luminance modulation (Kremláček et al., 2004). A temporal frequency of 5 cycles/second was kept constant over the whole stimulus field. A black square in the middle of the screen served as the fixation point. Each stimulus consisted of 200 ms of motion, followed by a 1,000 ms interstimulus interval showing the static image. The stimulus was repeated 60 times in total.
The auditory stimulus was a 1,000 Hz tone burst and was presented binaurally via Hosiden DH-05-S circumaural headphones (Hosiden Electronics, Japan) at a level of 70 dB SPL. A gray screen with a white “+” sign was shown to prevent eye movement during recording. The duration of the test stimuli was 200 ms, followed by a 1,000 ms interstimulus interval. The stimulus was presented 60 times.
All stimuli were presented on a 22-inch computer monitor (AOC International GmbH, Germany) with a 60 Hz refresh rate, viewed from a distance of 70 cm. Stimuli were presented using a computer with an Intel Core 2 Quad Q8300 2.50 GHz CPU, an ATI Radeon HD 3400 Series graphics card, and a Eugene Gavrilov kX 10k1 Audio (3550) sound card. The software used to present the stimuli was Psychtoolbox 3.0.8 (Kleiner et al., 2007) and MATLAB R2008a (MathWorks Inc., United States).
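The concentric-circle stimulus above can be sketched as a radial sinusoidal luminance pattern: mean gray modulated at 10% contrast as a function of distance from the center. The image size and spatial frequency below are arbitrary illustrative values, not taken from the protocol.

```python
import numpy as np

def radial_grating(size=256, cycles=8, contrast=0.10):
    """One frame of concentric circles with sinusoidal luminance modulation.

    Luminance is mean gray (0.5) modulated by `contrast` (10% in the protocol)
    as a sinusoidal function of radial distance from the center.
    """
    coords = np.linspace(-1, 1, size)
    xx, yy = np.meshgrid(coords, coords)
    r = np.sqrt(xx**2 + yy**2)  # radial distance from the image center
    return 0.5 + 0.5 * contrast * np.sin(2 * np.pi * cycles * r)

frame = radial_grating()
# Advancing the phase of the sine across frames at 5 cycles/s would
# produce the radial expansion motion described in the protocol.
```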
4. Emotional Faces and Food Preferences

Stimuli consisted of colored pictures of three male faces with neutral expressions, obtained from the NimStim database (Tottenham et al., 2009), along with 36 colored pictures of meat dishes and 36 colored pictures of vegetable dishes obtained from the Internet. Stimuli were presented using Psychtoolbox 3.0.8 (Brainard, 1997; Pelli, 1997) in MATLAB 7.8 (The MathWorks, Inc., Natick, MA, USA).
The stimuli were presented using an event-related design. In each trial, one of the three faces was presented along with a text cue above the face indicating the person's emotional state ("happy" or "sad"), and pictures of a meat dish and a vegetable dish to the left and right of the face (Figure 1). Each trial was presented for 2 s, and trials were separated by a 4–10 s jittered fixation interval. Each run consisted of six trials per condition (i.e., each face paired with each emotion), giving a total of 36 trials per run and a run duration of 5 min. The program optseq2 was used to generate the optimal sequence and separation of trials for maximal statistical efficiency of rapid-presentation event-related hemodynamic response estimation for each run (Dale, 1999). The position of the meat and vegetable dishes on the left and right of the face was counterbalanced across trials within each condition and each run. Ten runs were presented.
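The structure of one run (3 faces × 2 emotions, 6 trials per condition, dish side counterbalanced within condition, 4–10 s jittered fixation) can be sketched as below. This is a simple randomized construction for illustration only; the protocol used optseq2 to optimize the actual trial sequence and spacing, which a plain shuffle does not replicate.

```python
import itertools
import random

def build_run(seed=0):
    """One run: 36 trials (6 per face x emotion condition) with jittered ITIs."""
    rng = random.Random(seed)
    faces = ["face1", "face2", "face3"]
    emotions = ["happy", "sad"]
    trials = []
    for face, emotion in itertools.product(faces, emotions):
        # 6 trials per condition; meat-dish side counterbalanced within condition
        sides = ["left", "right"] * 3
        rng.shuffle(sides)
        for side in sides:
            trials.append({
                "face": face,
                "emotion": emotion,
                "meat_side": side,
                "iti": rng.uniform(4.0, 10.0),  # jittered fixation interval (s)
            })
    rng.shuffle(trials)  # randomize trial order within the run
    return trials

run = build_run()
```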
5. Multimodal Stimulation and Response Monitoring in fMRI

Audiovisual signals were presented using Psychtoolbox 3.0.9 (www.psychtoolbox.org; Brainard, 1997; Kleiner et al., 2007) running under MATLAB R2010a (MathWorks). Auditory stimuli were presented at ∼75 dB SPL using MR-compatible headphones (MR Confon). Visual stimuli were back-projected onto a Plexiglas screen using an LCoS projector (JVC DLA-SX21). Participants viewed the screen through an extra-wide mirror mounted on the MR head coil, resulting in a horizontal visual field of ∼76° at a viewing distance of 26 cm. Participants indicated their responses using an MR-compatible custom-built button device. Participants' eye movements and fixation were monitored by recording pupil location using an MR-compatible custom-built infrared camera (sampling rate 50 Hz) mounted in front of the participants' right eye and iView software 2.2.4 (SensoMotoric Instruments).
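The reported ~76° horizontal field at a 26 cm viewing distance follows from simple trigonometry. The sketch below computes the visual angle from screen width and distance; the ~40.6 cm image width is inferred from the two reported values, not stated in the protocol.

```python
import math

def visual_angle_deg(width_cm, distance_cm):
    """Horizontal visual angle subtended by an image of given width at a given distance."""
    return 2 * math.degrees(math.atan(width_cm / (2 * distance_cm)))

# An image ~40.6 cm wide viewed at 26 cm subtends roughly 76 degrees
angle = visual_angle_deg(40.6, 26)
```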
6. Audiovisual Localization Task in fMRI

Audiovisual stimuli were presented using Psychtoolbox 3.0.9 (www.psychtoolbox.org) [38] running under MATLAB R2010a (MathWorks). Auditory stimuli were presented at ~75 dB SPL using MR-compatible headphones (MR Confon). Visual stimuli were back-projected onto a Plexiglas screen using an LCoS projector (JVC DLA-SX21). Participants viewed the screen through an extra-wide mirror mounted on the MR head coil, resulting in a horizontal visual field of approximately 76° at a viewing distance of 26 cm. Participants performed the localization task using an MR-compatible custom-built button device. Participants' eye movements and fixation were monitored by recording pupil location using an MR-compatible custom-built infrared camera (sampling rate 50 Hz) mounted in front of the participants' right eye and iView software 2.2.4 (SensoMotoric Instruments).
7. Dichoptic Stimulus Presentation Protocol

Experiments were conducted on a PC running MATLAB (MathWorks, Inc., Natick, MA, USA) with the Psychtoolbox 3.0.9 extensions [4, 5]. The stimuli were presented on a gamma-corrected LG D2342PY 3D LED screen (LG Life Science, Seoul, Korea) with a 1920 × 1080 resolution and a 60 Hz refresh rate. Subjects viewed the display dichoptically through polarized glasses in a dimly lit room at a viewing distance of 136 cm. The background luminance was 46.2 cd/m² on the screen and 18.8 cd/m² through the polarized glasses. A chin-forehead rest was used to minimize head movements during the experiment.
8. Fractal Imagery Reward Protocol

In all three experiments, the cues consisted of three neutral fractal images. The reward outcome was a 3 s video of the experimenter's hand delivering the participant's favourite snack into a small bag. At the end of each session, participants received the bag containing the snacks they had collected during the task, to be consumed. The correspondence between videos watched and snacks received was not one-to-one (1 video = 1 piece of snack) but proportional, ranging from 1:2 to 1:6 depending on the calories per individual piece of the snack selected by the participant. The neutral outcome used in Experiment 4 was a 3 s video of the experimenter's hand approaching the bag in a highly similar fashion to the reward video but without any snack. All stimuli were displayed on a computer screen at a visual angle of 6° using Psychtoolbox 3.0 running in MATLAB (version 8.6; The MathWorks Inc., Natick, MA, USA).
9. Spatial Perception and Attention Tasks

The three tasks were programmed using MATLAB R2011a (MathWorks, Natick, USA) with Psychtoolbox 3.0, and each participant completed all trials. All stimuli were matrices of 4 × 4 squares (Suchan et al., 2006), with each matrix containing 4 black squares and 12 white squares (see Fig. 1). The positions of the 4 black squares within the matrix differed between stimuli. Stimuli were presented in the center of a 17-inch NESO computer monitor with a vertical visual angle of 2.8°. The task order was counterbalanced across participants.
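Generating one such matrix stimulus (exactly 4 black squares placed at random among the 16 positions) can be sketched as below; the helper name is illustrative, not from the original task code.

```python
import random

def make_matrix(seed=None):
    """4x4 matrix with exactly 4 black (1) and 12 white (0) squares, positions randomized."""
    rng = random.Random(seed)
    cells = [1] * 4 + [0] * 12
    rng.shuffle(cells)  # randomize the positions of the black squares
    return [cells[i * 4:(i + 1) * 4] for i in range(4)]

stim = make_matrix(seed=1)
```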
10. Optogenetic Stimulation of Cortex

Optogenetic stimulation was performed with a 473 nm (blue) laser or a 470 nm (blue) LED (Omicron Laserage); a 594 nm (yellow) laser served as a control. Laser light was delivered to the cortex through a 100 μm or 200 μm diameter multimode fiber (Thorlabs), and LED light through a 2 mm diameter polymer optical fiber (Omicron Laserage). Fiber endings were placed just above the cortical surface, immediately next to the recording sites, at a slight angle relative to the electrodes. Laser waveform generation used custom circuits in TDT, and timing control used Psychtoolbox-3, a toolbox for MATLAB (MathWorks) (Brainard, 1997).
For white noise stimulation, the laser was driven by normally distributed white noise, with light intensities updated at a frequency of 1017.1 Hz. For each recording session, the mean of the normal distribution was chosen to fall into the lower half of the dynamic range of the laser-response curve of the recorded MUA. This resulted in mean values in the range of 3–12 mW/mm² (13 MUA recording sites in the 3 cats showing expression of ChR2 in area 17). The standard deviation (SD) of the normal distribution was scaled to be 1/2 the mean. The resulting distributions were truncated at 3.5 SDs. The resulting range of laser intensities always excluded both zero and maximal available laser intensities and thereby avoided clipping.
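The white-noise intensity sequence described above (normally distributed, SD = mean/2, truncated at 3.5 SDs) can be sketched as follows. The mean value and sample count are illustrative; the protocol additionally constrained intensities to the laser's usable (non-zero, sub-maximal) range, which is noted but not enforced here.

```python
import numpy as np

def white_noise_intensities(mean_mw, n, seed=0):
    """Normally distributed intensities with SD = mean/2, truncated at +/- 3.5 SD.

    Out-of-range draws are redrawn until all samples fall within the
    truncation bounds (one possible reading of "truncated").
    """
    rng = np.random.default_rng(seed)
    sd = mean_mw / 2.0
    vals = rng.normal(mean_mw, sd, size=n)
    bad = np.abs(vals - mean_mw) > 3.5 * sd
    while bad.any():  # redraw any samples outside the 3.5-SD bounds
        vals[bad] = rng.normal(mean_mw, sd, size=int(bad.sum()))
        bad = np.abs(vals - mean_mw) > 3.5 * sd
    return vals

# Example: a session with a 6 mW/mm^2 mean (within the reported 3-12 range)
intensities = white_noise_intensities(mean_mw=6.0, n=10000)
```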

About PubCompare

Our mission is to provide scientists with the largest repository of trustworthy protocols and intelligent analytical tools, giving them the information they need to design robust protocols and minimize the risk of failure.

We believe that the most crucial aspect is to grant scientists access to a wide range of reliable sources and new useful tools that surpass human capabilities.

However, we trust in allowing scientists to determine how to construct their own protocols based on this information, as they are the experts in their field.
