
Unity game engine

Manufactured by Unity Technologies
Sourced in the United States

Unity is a cross-platform game engine developed by Unity Technologies. It is designed to create 2D and 3D games, as well as interactive applications, simulations, and visualizations. The engine provides tools for developers to design, build, and deploy their projects across a wide range of platforms, including desktop computers, mobile devices, and game consoles.


24 protocols using the Unity game engine

1. Immersive VR Stimulus Presentation

Stimuli were presented in the immersive virtual-reality head-mounted display and on the computer display using custom-built software for the Unity game engine (version 4.6.2; Unity Technologies, San Francisco, CA). The computer running the stimulus-presentation software communicated with the BIOPAC system through StimTracker (Cedrus Corporation, San Pedro, CA) hardware.
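
Hardware event marking of this kind typically reduces to writing a byte to the trigger device from the presentation script. The following is a minimal, hypothetical sketch of such a sender in a Unity C# script; the port name, baud rate, and event codes are assumptions (the real values depend on the device and driver), and System.IO.Ports requires Unity's .NET 4.x API compatibility level.

```csharp
// Hypothetical sketch: sending an event marker from Unity to a
// StimTracker-style USB/serial trigger device at stimulus onset.
using System.IO.Ports;
using UnityEngine;

public class EventMarkerSender : MonoBehaviour
{
    SerialPort port;

    void Awake()
    {
        // "COM3" and 115200 baud are placeholders; the actual device
        // and driver determine these settings.
        port = new SerialPort("COM3", 115200);
        port.Open();
    }

    // Call at stimulus onset so the physiology recording gets a marker.
    public void SendMarker(byte code)
    {
        port.Write(new byte[] { code }, 0, 1);
    }

    void OnDestroy()
    {
        if (port != null && port.IsOpen) port.Close();
    }
}
```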

2. Plagi-Warfare: Designing Immersive 3D Library Environments

In this section, we describe the design of the game world or environment. The environments in Plagi-Warfare were built using the Unity game engine; gameplay takes place in 3D environments modeled after the libraries of the University of Johannesburg, South Africa: the Auckland Park Bunting Road (APB) campus, the Auckland Park Kingsway (APK) campus, and the Doornfontein (DFC) campus libraries. Plagi-Warfare comprises 27 scenes: 4 scenes help the player sign up or sign in to the game; 5 scenes per game mode handle navigation (10 scenes across the mafia and detective modes), giving the user access to the leaderboard, options, profile, level selection, and level-failed pages; 7 scenes hold quizzes or game challenges; and 6 scenes are library environments, 3 per game mode, representing the university's 3 libraries.
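
In Unity, moving between this many scenes is typically handled through the SceneManager API. The sketch below is a hypothetical illustration of how mode-specific navigation could be wired up; the scene names and the ModeNavigation class are placeholders, not the authors' code.

```csharp
// Minimal sketch of mode-specific scene navigation with Unity's
// SceneManager. All scene names are hypothetical.
using UnityEngine;
using UnityEngine.SceneManagement;

public class ModeNavigation : MonoBehaviour
{
    public enum Mode { Mafia, Detective }
    public Mode currentMode;

    // Load the level-selection page for the active game mode.
    public void OpenLevelSelection()
    {
        string scene = currentMode == Mode.Mafia
            ? "Mafia_LevelSelection"
            : "Detective_LevelSelection";
        SceneManager.LoadScene(scene);
    }

    // Load a library environment, e.g. campus = "APB", "APK", or "DFC".
    public void OpenLibrary(string campus)
    {
        SceneManager.LoadScene($"{currentMode}_{campus}_Library");
    }
}
```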

3. Simulated Landing Scenarios for Pilot Training

Twenty-seven static images depicting a typical landing scenario were constructed using the Unity game engine (63° field of view). Stimuli represented three runways seen from three increasing distances (30 m/98 ft, 90 m/295 ft, 150 m/689 ft) and three height/distance ratios corresponding to three different glide paths (0.2 = low glide path, 0.3 = on the glide path, 0.4 = high glide path) (see Figure 2 for example stimuli). Accordingly, in a typical low-glide-path view (i.e., a 0.2 height/distance ratio), the approach is too low and the runway cannot be reached in a powerless glide. A typical correct landing approach is represented by the scenario with a 0.3 height/distance ratio (i.e., on the glide path). The remaining scenario is a high approach (i.e., a 0.4 height/distance ratio), in which the landing spot is shifted forward. Stimuli and the related reachability of the runway in a powerless glide were independently rated in a previous validation study by a different sample of pilots (see Supplementary Data Sheet 1).
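
One way to generate such views in Unity is to place the rendering camera at a height equal to ratio × distance from the runway aim point, which reproduces the stated height/distance ratios directly. The sketch below illustrates that geometry; the class, object names, and axis convention are assumptions, not the authors' implementation.

```csharp
// Sketch: positioning a camera so that height / ground distance equals
// the desired glide-path ratio (0.2, 0.3, or 0.4).
using UnityEngine;

public class GlidePathCamera : MonoBehaviour
{
    public Transform runwayThreshold;   // aim point on the runway
    public Camera viewCamera;           // configured with a 63° field of view

    // distance: metres along the ground; ratio = height / distance
    public void PlaceCamera(float distance, float ratio)
    {
        float height = ratio * distance;    // e.g. 0.3 * 90 m = 27 m
        viewCamera.transform.position =
            runwayThreshold.position + new Vector3(0f, height, -distance);
        viewCamera.transform.LookAt(runwayThreshold);
    }
}
```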

4. Unity-Based Multimodal Experimental Setup

The Unity game engine (version 2018.1.9f2) was used for experimental control. The participants' and the experimenter's momentary position and orientation at each time point were recorded with a motion-tracking system (see below). The Unity application was programmed to read the motion-tracking data in real time during the experiment to trigger auditory events that depended on the participant's momentary location. For instance, the application automatically detected when a participant arrived at a hidden target location during a 'target search' trial and then triggered the next auditory instruction. It likewise detected when a participant arrived at the cued wall-mounted sign, at which point another auditory instruction directed the participant to find the next target location, and so on.
Moreover, the Unity application was programmed to trigger iEEG data storage automatically and to insert marks at specific time points in the iEEG data. These synchronization marks were sent at the beginning and end of each 3.5–4-min recording interval, allowing the iEEG data to be synchronized with data from the other recording modalities (for example, motion tracking and eye tracking). See ref. 38 for further technical details and specifications of this setup.
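
A location-dependent trigger of this kind usually reduces to a per-frame proximity check against the tracked position. The following is an illustrative sketch, not the published setup; the arrival radius, field names, and the floor-plane comparison are assumptions.

```csharp
// Sketch: play the next auditory instruction once the tracked participant
// comes within an assumed arrival radius of the hidden target.
using UnityEngine;

public class TargetTrigger : MonoBehaviour
{
    public Transform participant;     // driven by the motion-tracking stream
    public Vector3 targetLocation;    // hidden target for this trial
    public AudioSource instruction;   // next auditory instruction
    public float radius = 0.5f;       // arrival threshold in metres (assumed)

    bool triggered;

    void Update()
    {
        if (triggered) return;
        // Ignore height; compare positions on the floor plane only.
        Vector3 p = participant.position; p.y = 0f;
        Vector3 t = targetLocation;      t.y = 0f;
        if (Vector3.Distance(p, t) < radius)
        {
            triggered = true;
            instruction.Play();  // also a natural point to emit a sync mark
        }
    }
}
```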

5. Online Behavioral Experiment Protocol

Participants used their own personal computers to complete the experiments and were restricted to laptop or desktop users via Prolific's screening tool. The experiments were created using the Unity game engine (version 2019.4.15f) and the Unity Experiment Framework [89] and delivered via a WebGL build hosted on a web page, with data uploaded to a remote database. Because participants used their own computers, with varying screen sizes and aspect ratios, the physical size of the task was not consistent across participants, but the task was developed to be visible on a 4:3 aspect-ratio monitor. The height of the scene, 4 arbitrary units, always took up the full height of the participant's monitor, with wider aspect ratios showing more of a task-irrelevant background texture. Full screen was enforced throughout, and during the experiment the desktop cursor was hidden and locked so it could not be used to interact with the web page. Instead, raw mouse or trackpad input was used to perform in-game movements of the cursor (eliminating mouse acceleration from the operating system). The sensitivity of in-game movements was initially calibrated to be similar to that of the participant's desktop cursor and could be adjusted in a calibration stage.
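
In Unity, this cursor behavior maps onto the built-in cursor-lock and raw-axis APIs. The sketch below shows one way the scheme could look; the class, the UI setup, and the sensitivity units are assumptions rather than the authors' code.

```csharp
// Sketch: hide and lock the OS cursor, then drive an in-game cursor from
// raw mouse deltas scaled by a calibrated sensitivity.
using UnityEngine;

public class InGameCursor : MonoBehaviour
{
    public RectTransform cursor;    // in-game cursor graphic
    public float sensitivity = 1f;  // adjusted in the calibration stage

    void Start()
    {
        Cursor.lockState = CursorLockMode.Locked;  // lock to the game view
        Cursor.visible = false;                    // hide the desktop cursor
    }

    void Update()
    {
        // Raw axis deltas bypass the operating system's mouse acceleration.
        float dx = Input.GetAxisRaw("Mouse X") * sensitivity;
        float dy = Input.GetAxisRaw("Mouse Y") * sensitivity;
        cursor.anchoredPosition += new Vector2(dx, dy);
    }
}
```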

6. Haptic Simulation of Wood Drilling

The Novint Falcon, a low-fidelity haptic device by Novint Technologies (Albuquerque, NM, USA), was used to provide haptic force feedback in the audiohaptic trials. The Falcon provides up to 9.0 N of force feedback with three degrees of freedom, at a resolution of 400 dpi (dots per inch). Higher-fidelity haptic devices can typically provide up to 40 N of force, with five to seven degrees of freedom [3]. As described in [11], the haptic sensations provided in this simulation were modeled after the forces and vibrations measured from a real drill drilling through a piece of wood. Haptic forces were simulated using a spring-mass system coded into the Unity game engine (Unity Technologies, San Francisco, CA, USA). A lightweight 3D-printed mock drill was used as the haptic interface to provide realistic tactile input. When a participant pushed the drill forward to contact the virtual block of wood during an audiohaptic trial, the Falcon provided resistance to the user's motion. The haptic stimuli and the wood audio recording occurred simultaneously. Figure 1 displays the stimuli presented in both trial types.
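
A spring-mass (spring-damper) model of this kind computes a restoring force proportional to how far the drill tip has penetrated the wood surface. The sketch below illustrates the idea; the stiffness and damping constants, the drilling axis, and the device-output call are all assumptions, since the Falcon's SDK details are not given in the protocol.

```csharp
// Sketch: spring-damper resistance once the drill tip penetrates the
// virtual wood face. Constants and the output call are placeholders.
using UnityEngine;

public class DrillHaptics : MonoBehaviour
{
    public float stiffness = 800f;   // N/m (placeholder)
    public float damping = 5f;       // N·s/m (placeholder)
    public float woodSurfaceZ = 0f;  // wood face along the drilling axis

    float previousDepth;

    void FixedUpdate()
    {
        float depth = transform.position.z - woodSurfaceZ; // penetration
        if (depth > 0f)
        {
            float velocity = (depth - previousDepth) / Time.fixedDeltaTime;
            float force = -stiffness * depth - damping * velocity; // newtons
            SendForceToDevice(force); // stand-in for the device's SDK call
        }
        previousDepth = Mathf.Max(depth, 0f);
    }

    void SendForceToDevice(float newtons) { /* device-specific output */ }
}
```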

7. Simulating Depth Perception Accuracy

The Unity game engine (Unity Technologies, San Francisco, CA, USA) was used to run the simulation and to collect and record performance data. Performance measures were obtained from all participants by recording the mean drilled depth and calculating absolute, constant, and variable errors for both trial conditions (see Supplementary Tables S1 and S2). Absolute error represents the average error without reference to direction, constant error indicates the direction of error, and variable error refers to the variability of performance across trials [35,36,37]. Absolute error was calculated as the absolute value of the difference between the drilled depth and the target depth (2 cm; Equation (1)), constant error was calculated as the signed difference between the drilled depth and the target depth (Equation (2)), and the standard deviation of the constant error was used as the variable error (Equation (3)):


$\text{Variable Error} = \sqrt{\frac{\sum (x - \bar{x})^2}{n}}$ (Equation (3))
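
As a worked sketch of these three measures, the snippet below computes Equations (1)–(3) from a set of drilled depths against the 2 cm target; the class and method names are illustrative, not part of the published analysis code.

```csharp
// Sketch: absolute, constant, and variable error from drilled depths (cm).
using System;
using System.Linq;

public static class DepthErrors
{
    const float Target = 2f; // target depth in cm, as stated in the protocol

    public static (float absolute, float constant, float variable)
        Compute(float[] drilledDepths)
    {
        // Equation (1): mean unsigned deviation from the target.
        float absolute = drilledDepths.Average(d => Math.Abs(d - Target));
        // Equation (2): mean signed deviation (direction of error).
        float constant = drilledDepths.Average(d => d - Target);
        // Equation (3): SD of the signed errors (the variable error).
        float variable = (float)Math.Sqrt(
            drilledDepths.Average(d => Math.Pow((d - Target) - constant, 2)));
        return (absolute, constant, variable);
    }
}
```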

8. Gaze-contingent VR Experiments

Virtual scenes were displayed inside an HTC Vive (HTC/Valve Corporation) VR head-mounted display worn by participants. The display refreshes 90 times per second and shows, on average, a field of view of 90 by 90 deg binocularly. The headset was retrofitted with a Tobii eye tracker (Tobii Technology), tracking gaze binocularly at 120 Hz with a precision below 1.1 deg within a 20-deg window centered in the viewports.
The experimental protocol was implemented in the Unity game engine (Unity Technologies) with the SteamVR (Valve Corporation) and Tobii eye-tracking (Tobii Technology) software libraries. We minimized the time between gaze sampling and updating the mask position in a viewport by updating the display as late as possible in the rendering process, using the latest sample received from the eye tracker. Latency is critical in moving-mask conditions because central vision must be modified even at the start and end of saccades [49,59]. We estimated the maximum ("worst-case scenario") latency to be below 15 ms.

9. Mobile App Development for Augmented Reality Learning

The maturity of the mobile-app environment offers many development tools and approaches. For the VARIAT program, the developers used the Unity game engine (Unity Technologies) for the game content, with ARKit (Apple Inc) and ARCore (Google LLC) for the augmented-reality components, and deployed both iOS and Android apps, readily available in their respective app stores. Blender (Blender Foundation) was used for 3D modeling and animation, and Photoshop (Adobe) was used to create 2D assets.
The learner downloads the app from the Apple App Store or Google Play, and their progress is maintained on the device, with evaluation, progress, and study data synchronized as the learner completes the various modules. Once synchronized, the data are stored and managed in Google BigQuery, which produces data feeds for training evaluators and researchers.

10. Markerless Motion Capture for Avatar Interaction

We used markerless motion-capture software running at 30 frames per second. The software, Faceshift, uses a Facial Action Coding System solver to categorize facial motion into 51 unique expressions (e.g., open mouth, smile) and tracks the extent to which each expression is activated [38]. A depth camera with infrared sensors and adaptive depth detection (Asus Xtion PRO Live) was used to track facial expressions as well as the translation (x, y, z) and orientation (pitch, yaw, roll) of the participant's physical head and eyes. These tracking data were then used to update the avatar's head, eyes, and facial expressions. The Unity game engine was used to generate and program the avatar-interaction platform.
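
In Unity, per-expression activations of this kind are typically applied to an avatar as blendshape weights on a SkinnedMeshRenderer. The sketch below shows the mapping under assumptions about the data layout; the class and the index ordering are hypothetical.

```csharp
// Sketch: drive an avatar's 51 facial blendshapes and head pose from
// tracked expression activations. Index order is assumed to match the
// tracker's output.
using UnityEngine;

public class AvatarFaceDriver : MonoBehaviour
{
    public SkinnedMeshRenderer face;  // avatar face mesh with 51 blendshapes
    public Transform head;            // avatar head bone

    // weights: 51 activations in [0, 1] from the face-tracking software
    public void Apply(float[] weights, Vector3 headPos, Quaternion headRot)
    {
        for (int i = 0; i < weights.Length; i++)
            face.SetBlendShapeWeight(i, weights[i] * 100f); // Unity: 0-100
        head.SetPositionAndRotation(headPos, headRot);
    }
}
```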