The largest database of trusted experimental protocols

HMZ-T1

Manufactured by Sony

The HMZ-T1 is a head-mounted display produced by Sony. It features two OLED panels, one for each eye, simulating a large virtual screen. The device is designed to connect to various video sources, such as a TV, computer, or gaming console, and display the content directly in front of the user's eyes.



4 protocols using the HMZ-T1

1. Virtual Social Environment Stress Study

The virtual environment used in this experiment was a café. Participants navigated the virtual environment using a Logitech F310 Gamepad. The Sony HMZ-T1 head-mounted display used to present the café had a resolution of 1280 × 720 per eye, a 51.6° diagonal field of view, and built-in headphones. A 3DOF tracker (UM7 Orientation Sensor; CH Robotics) was added to the Sony HMZ-T1 to track head rotation. The researcher controlled the VR system and actions in the virtual environment through a graphical user interface.
Detailed information on the conditions has already been published.31 The social stressors used in this virtual social environment (population density, ethnic density, and hostility) were found to elicit feelings of anxiety.31 All participants completed five conditions, each with a different level of social stress. Exposure to each condition lasted 4 min. The order of the five conditions was randomized to prevent sequence effects.
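The per-participant randomization described above can be sketched as follows. This is a minimal illustration, not the authors' actual software; the condition labels are placeholders, since the specific stress levels are defined in the cited publication.

```python
import random

def randomize_conditions(conditions, seed=None):
    """Return a per-participant random ordering of the stress conditions."""
    rng = random.Random(seed)
    return rng.sample(conditions, k=len(conditions))

# Five placeholder labels; the real conditions are described in the cited paper.
conditions = ["C1", "C2", "C3", "C4", "C5"]
order = randomize_conditions(conditions, seed=42)

EXPOSURE_MIN = 4  # each condition is presented for 4 minutes
```

Randomizing the order per participant, as here, is what counters the sequence effect mentioned in the protocol.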
2. Motor Imagery Control of Robotic Hands

Subjects wore a head-mounted display (Sony HMZ-T1) through which they had a first-person view of the robot’s hands. Two balls that lit up in random order were placed in front of the robot’s hands to cue the motor imagery task during the operational sessions. Each participant performed the following two sessions in random order: 1) a MoCap session, in which subjects performed a grasp motion with their own right or left hand to control the robot’s corresponding hand, and 2) a BCI session, in which subjects performed a right or left motor imagery task and controlled the robot’s hands without any actual motion. In both sessions, each trial lasted 7.5 s. At second 2, an acoustic warning (“beep”) indicated the onset of the task, and at second 3 the cue (lighting ball) was presented to the subject. From second 3.5 to second 7.5, classifier results were sent to the robot as motion commands. Throughout the experiment, identical blankets were laid over both the robot’s and the subject’s legs so that the background view of the two bodies was the same.
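The 7.5 s trial schedule above can be sketched as a simple event timeline. The function and event names are illustrative assumptions, not taken from the original experiment software; only the timings come from the protocol.

```python
def trial_events(trial_len=7.5, beep_t=2.0, cue_t=3.0, command_start=3.5):
    """Build the ordered event schedule for one motor-imagery trial (times in seconds)."""
    return [
        (0.0, "trial_start"),
        (beep_t, "beep"),                   # acoustic warning: task onset
        (cue_t, "cue"),                     # lighting ball indicates right/left
        (command_start, "commands_start"),  # classifier output streamed to robot
        (trial_len, "trial_end"),
    ]

schedule = trial_events()
```

A fixed schedule like this makes every trial identical in structure, with only the cue side (right or left ball) varying randomly.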
3. Virtual Body Ownership Experiments

In this study, we used a head-mounted display (HMD, Sony HMZ-T1) and a stereo camera (Sony HDR-TD20V). Skin conductance responses (SCR) were recorded with a Data Acquisition Unit MP35 (Biopac Systems, Inc., USA). For the questionnaires, we used a Likert scale from “strongly disagree” (−3) to “strongly agree” (+3). The questionnaire statements were presented in random order and fall into the following categories: body-part ownership, full-body ownership, touch referral, agency, self-touching illusion, experiential ownership, and double body effect (Table 1). We conducted three experiments, each with four conditions. See Table 2 below for details of the participants. All participants gave written consent prior to the experiments. This study was approved by the Research Ethics Committee of National Taiwan University (NTU-REC: 201310HS026).
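The random presentation of questionnaire statements could be sketched like this. The statement texts and category sample are placeholders (the full items are in the paper's Table 1); only the −3 to +3 Likert range comes from the protocol.

```python
import random

# Placeholder items tagged with their category (real statements are in Table 1).
items = [
    ("body-part ownership", "Statement A"),
    ("full-body ownership", "Statement B"),
    ("touch referral", "Statement C"),
    ("agency", "Statement D"),
]

def present_randomized(items, seed=None):
    """Shuffle statements while keeping each paired with its category."""
    rng = random.Random(seed)
    shuffled = list(items)
    rng.shuffle(shuffled)
    return shuffled

# 7-point Likert scale: -3 (strongly disagree) to +3 (strongly agree).
LIKERT_SCALE = list(range(-3, 4))
```

Shuffling while keeping the category tag attached lets responses be regrouped by category afterwards for analysis.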
4. Exploring Social Stress in Virtual Cafés

Experiments took place in a 3D VR café with a terrace, covering an area of 181 m² (Fig. 1), created by CleVR with Vizard software. The café was presented through a head-mounted display (HMD, Sony HMZ-T1) with a resolution of 1280 × 720 per eye, a 51.6° diagonal field of view, integrated headphones, and a built-in 3DOF head tracker. Participants moved by operating a joystick (Logitech F310 Gamepad). Avatars were standing or sitting at tables in the VR café. When participants approached avatars, some avatars would look their way briefly, while others kept interacting and drinking. Participants heard random café background noises through the headphones.
The social stressors present in the café differed between experiments. This was accomplished by manipulating three variables: crowdedness, facial expression, and ethnicity (see Table 1). At least 80% of the avatars had an ethnic appearance (white Caucasian or North African) that was either similar or different to that of the participant. The facial expression of the avatars was neutral or hostile. In the neutral condition, avatars continuously looked at each other and at the participant with neutral expressions. In the hostile condition, hostile looks (lasting five seconds) were interspersed with neutral looks.
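The three manipulated variables can be combined into a condition grid. This is only an illustration: the crowdedness levels are assumed binary here, and the actual cells used in each experiment come from Table 1.

```python
from itertools import product

# Levels per manipulated variable (crowdedness levels assumed binary for illustration).
crowdedness = ["low", "high"]
expression = ["neutral", "hostile"]
ethnicity = ["similar", "different"]  # >= 80% of avatars match or differ from participant

# Full factorial grid of candidate conditions.
conditions = list(product(crowdedness, expression, ethnicity))
```

A full factorial crossing like this enumerates every candidate condition, from which each experiment selects the subset it needs.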

About PubCompare

Our mission is to provide scientists with the largest repository of trustworthy protocols and intelligent analytical tools, giving them the information they need to design robust protocols and minimize the risk of failure.

We believe the most crucial step is to grant scientists access to a wide range of reliable sources and to new tools that surpass human capabilities.

At the same time, we trust scientists to decide how to construct their own protocols based on this information, as they are the experts in their field.
