The largest database of trusted experimental protocols

GeForce GTX 1080

Manufactured by NVIDIA
Sourced in United States

The GeForce GTX 1080 is a high-performance graphics processing unit (GPU) designed and manufactured by NVIDIA. It is based on the NVIDIA Pascal architecture and features 8 GB of GDDR5X video memory. The GeForce GTX 1080 is capable of delivering exceptional performance for a wide range of tasks, including gaming, video editing, and scientific computing.

Automatically generated - may contain errors

48 protocols using the GeForce GTX 1080

1

GPU-Accelerated FDTD Simulation for Real-Time

We implemented an FDTD formulation and the corresponding multi-resolution simulation approach through parallel computation on a GPU device, with the aim of providing semi-real-time feedback to operators. The simulation was performed on a quad-core workstation (Intel(R) Core(TM) i7-7700 CPU @ 3.60 GHz, 64 GB memory, Microsoft Windows 10 64-bit) with a GPU device (a single NVIDIA Pascal card, GeForce GTX 1080, 2560 cores, 8 GB GDDR5X memory @ 320 GB/s bandwidth). The computational implementation was written in the CUDA (NVIDIA, Santa Clara, CA) and C++ languages.
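The core of an FDTD solver is a leapfrog update of electric and magnetic fields on a staggered grid; it is this per-cell stencil that maps naturally onto GPU threads. A minimal 1-D CPU sketch of the update loop (grid size, source, and Courant number are illustrative, not the protocol's actual parameters):

```python
import numpy as np

def fdtd_1d(steps=200, n=400, courant=0.5):
    """Minimal 1-D FDTD (Yee) leapfrog loop: E and H alternate in time.

    A CPU sketch of the update stencil that the protocol parallelizes on
    the GPU; all values here are illustrative.
    """
    ez = np.zeros(n)       # electric field on integer grid points
    hy = np.zeros(n - 1)   # magnetic field on half-integer grid points
    for t in range(steps):
        # H update from the spatial derivative of E
        hy += courant * np.diff(ez)
        # E update from the spatial derivative of H (interior points only)
        ez[1:-1] += courant * np.diff(hy)
        # soft source: a Gaussian pulse injected at the grid center
        ez[n // 2] += np.exp(-((t - 30) ** 2) / 100.0)
    return ez

fields = fdtd_1d()
```

On a GPU each grid cell's update becomes one thread, which is why the stencil form above parallelizes so well.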
2

Ray Tracing-Based Guided Ultrasound

By implementing the aforementioned technique, the authors developed the Kranion software in [31], and Fig. 5d exhibits a screen capture of its GUI. Figure 5e illustrates a close-up view of the refracted rays. The software was developed in the Java programming language in the NetBeans integrated development environment (Apache Software Foundation, https://netbeans.apache.org) and is available for free download from GitHub (https://github.com/jws2f/Kranion) under the MIT license. The main ray-tracing computation was constructed as a compute shader executing on a GPU (GeForce GTX 1080 with 12 GB RAM, NVIDIA, CA, US). The software allows a user to manually register the MR and CT images and rotate the scene to any desired observation view. The FUS transducer geometry can be moved to an arbitrary position, and the corresponding ray tracing of 1024 elements is rendered at sub-second speed on the screen of a laptop computer. Additional information on the software is available in [31].
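The refracted rays in Fig. 5e follow the vector form of Snell's law, which is what ray tracers typically evaluate per ray-surface intersection. A small sketch of that computation (the function name and API are illustrative, not Kranion's actual code):

```python
import numpy as np

def refract(d, n, eta):
    """Refract unit direction d at a surface with unit normal n.

    eta = n1 / n2 is the ratio of refractive indices (vector form of
    Snell's law). Returns None on total internal reflection.
    Illustrative sketch only, not Kranion's implementation.
    """
    cos_i = -np.dot(n, d)                  # cosine of the incidence angle
    sin2_t = eta**2 * (1.0 - cos_i**2)     # squared sine of transmission angle
    if sin2_t > 1.0:
        return None                        # total internal reflection
    cos_t = np.sqrt(1.0 - sin2_t)
    return eta * d + (eta * cos_i - cos_t) * n

# A ray hitting the interface head-on passes through undeviated.
straight = refract(np.array([0.0, 0.0, -1.0]), np.array([0.0, 0.0, 1.0]), 1.0 / 1.5)
```

In a compute shader the same arithmetic runs once per transducer element, which is how 1024 rays render at sub-second speed.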
3

Virtual Reality for Pediatric Anesthesia Training

All 3D characters in the virtual environment were modeled after pediatric anesthesiology employees who had undergone motion capture recording with professional equipment (Vicon Motion Systems Ltd). Vicon Vantage cameras were used for body motion capture, and the Vicon Cara system, comprising 4 high-resolution, high-speed cameras and a custom head rig, was used for facial motion capture.
The virtual environment is presented via an HTC Vive headset using room-scale technology, which allows the user to navigate naturally. Real-world awareness is created through a 110° field of view for captivating immersion. The Vive features 32 sensors for 360° motion tracking, a combined resolution of 2160 × 1200 pixels, and a 90 Hz refresh rate. With 2 wireless, motion-tracked handheld Vive controllers, each containing 24 sensors, users can interact with precision and experience immersive environments. The headset is connected to a custom computer with an Intel Core i7-5820K processor and an NVIDIA GeForce GTX 1080 graphics card.
4

Evaluating CapsNet-MHC Model Performance

The method was implemented using the popular Python library PyTorch on an NVIDIA GeForce GTX 1080 with 11 GB of available memory. To evaluate CapsNet-MHC, we applied five-fold cross-validation at the training step. To this end, the training set is split into five nearly equal parts, where four parts are used for training and the remaining part is used for model evaluation. To ensure that all parts of the datasets are included in evaluating the proposed model, the training and evaluation phases are repeated in five iterations. Finally, the model is evaluated on an independent test set to demonstrate its prediction capability on unseen data. The average prediction accuracy over the iterations is taken as the final result. Supplementary Note 7 and Supplementary Table S9 illustrate various parameter settings for CapsNet-MHC.
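The five-fold scheme described above can be sketched in plain Python: partition the indices into five nearly equal folds, then let each fold serve once as the held-out part. This is a dependency-free illustration; a real pipeline would shuffle first and typically use a library utility such as scikit-learn's KFold.

```python
def five_fold_splits(n_samples, k=5):
    """Partition sample indices into k nearly equal folds and yield
    (train, eval) index lists, each fold held out exactly once.

    Plain-Python sketch of the cross-validation scheme above.
    """
    indices = list(range(n_samples))
    fold_size, remainder = divmod(n_samples, k)
    folds, start = [], 0
    for i in range(k):
        # the first `remainder` folds get one extra sample
        stop = start + fold_size + (1 if i < remainder else 0)
        folds.append(indices[start:stop])
        start = stop
    for i in range(k):
        train = [idx for j, fold in enumerate(folds) if j != i for idx in fold]
        yield train, folds[i]

splits = list(five_fold_splits(103))  # 103 samples -> folds of 21/21/21/20/20
```

Averaging the per-fold accuracies then gives the reported cross-validation result.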
5

Molecular Dynamics of Glucocorticoid Receptor

Molecular dynamics simulations were performed on an AMD 3970X CPU @ 3.7 GHz with the support of an NVIDIA GeForce GTX 1080 graphics chip, using GROMACS v2018 and the CHARMM36 force field in a Linux Ubuntu 18 environment40,41,48. Calculations are based on the GRLBD crystal structure with PDB ID 5NFP, solved by Hemmerling et al.39 The L773P mutation was generated with FoldX, while the dexamethasone topology was generated with the CGenFF server38,49,50. The Avogadro program was used to assign the dexamethasone hydrogen atom coordinates51. The cgenff_charmm2gmx.py script from the MacKerell lab was used to format the ligand topology for GROMACS. The unit cell was defined as a dodecahedron and was solvated in TIP3P water. The protein net charge was neutralized by adding the appropriate Na+ and Cl− ions. The energy minimization step and the NVT and NPT equilibration steps were performed based on the tutorials and .mdp files provided by Dr. Justin Lemkul (http://www.mdtutorials.com/) with minor modifications52. Production MD for data collection was performed for 100 ns (n = 2), and trajectories were analyzed with the GROMACS toolset. Histograms were visualized with Origin 8.6, hydrophobic interactions with BIOVIA Discovery Studio, and Kyte-Doolittle hydrophobicity was colored using UCSF Chimera37,53.
6

Gradient-based Phylogenetic Tree Inference

The implementation of the BME criterion and the optimization framework was written in Python using JAX (Bradbury et al. 2018) and Optax (Babuschkin et al. 2020). Optimization was performed on a Xeon 2.30 GHz (CPU; Intel Corporation) or on a single GeForce GTX 1080 (GPU; NVIDIA Corporation). Evaluation of the BioNJ (Gascuel 1997) and FastME (Lefort et al. 2015) methods was performed via the R package ape (Paradis et al. 2004) using rpy2 (Gautier and Krassowski 2021). Tree manipulation and visualization scripts were written using ete3 (Huerta-Cepas et al. 2016) and NetworkX (Hagberg et al. 2008). An implementation is available at: https://github.com/Neclow/GradME.
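The optimization framework follows the standard first-order pattern that JAX/Optax automate: compute a gradient of the objective, take a small step against it, repeat. A dependency-free sketch of that loop on a toy objective, with gradients from central finite differences (illustrative only; GradME differentiates a continuous relaxation of the BME criterion with automatic differentiation, not finite differences):

```python
def gradient_descent(loss, x0, lr=0.1, steps=200, eps=1e-6):
    """Generic first-order optimization loop of the kind the protocol
    runs with JAX/Optax. Gradients here come from central finite
    differences so the sketch needs no dependencies; illustrative only.
    """
    x = list(x0)
    for _ in range(steps):
        grad = []
        for i in range(len(x)):
            up, dn = list(x), list(x)
            up[i] += eps
            dn[i] -= eps
            # central finite-difference estimate of d(loss)/dx_i
            grad.append((loss(up) - loss(dn)) / (2 * eps))
        x = [xi - lr * g for xi, g in zip(x, grad)]
    return x

# Toy quadratic objective with minimum at (1, -2).
opt = gradient_descent(lambda v: (v[0] - 1) ** 2 + (v[1] + 2) ** 2, [0.0, 0.0])
```

Replacing the finite-difference gradient with `jax.grad` and the update rule with an Optax optimizer gives the structure used in the protocol.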
7

Low-Latency VR System Evaluation

The HTC Vive virtual reality headset and a TPCast wireless adapter (latency < 2 ms) are used to display the virtual environment, with the following specs: a resolution of 2,160 × 1,200 pixels (1,080 × 1,200 pixels per eye), a field of view of 110°, and a refresh rate of 90 Hz. We also use two Vive controllers and three Vive Trackers to track body movements, using inverse kinematics algorithms to animate the avatars in real time (Figure 4). The computer running the application comprises an Intel Xeon E5-1607 @ 3.10 GHz processor and an NVIDIA GeForce GTX 1080 graphics card.
We measured the overall latency of the system using the manual frame-counting method described in He et al. (2000). Using a high-speed camera (240 Hz), we recorded five hand-clapping motions, removing the lenses of the HTC Vive so that the real movement and its virtual counterpart displayed on the headset's screen could be compared. Analyzing the motions frame by frame, we measured an average shift of 3 frames (12.5 ms). Thus, our system benefits from these low-latency devices, providing an overall delay under the 20 ms threshold recommended for virtual reality (Raaen and Kjellmo, 2015).
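The frame-counting arithmetic above is simple: each frame of the high-speed footage spans 1000 / fps milliseconds, so the observed frame shift converts directly into a motion-to-photon latency estimate.

```python
def latency_ms(frame_shift, camera_fps):
    """Convert a frame shift observed in high-speed footage into a
    latency estimate: each frame at camera_fps spans 1000/camera_fps ms.
    """
    return frame_shift * 1000.0 / camera_fps

shift = latency_ms(3, 240)  # the 3-frame average shift reported above -> 12.5 ms
```

At 240 Hz one frame is about 4.17 ms, so the resolution of this manual method is well below the 20 ms budget being verified.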
8

Virtual Reality System Specifications

The VR system used was the HTC Vive, with the HMD and hand trackers. The HMD has a resolution of 2,160 × 1,200 pixels (1,080 × 1,200 per eye), a refresh rate of 90 Hz, a field of view of about 110°, and weighs 0.47 kg. The headphones used were Sennheiser HD 206 headphones. The computer used had an Intel i7 4790K processor, 16 GB of RAM, and an NVIDIA GeForce GTX 1080 graphics card.
9

Efficient Long-Read Sequencing Analysis

For our Read Until baseline, we use the ONT-recommended version of Minimap2, v2.17, for ONT long reads, and we turn minimizers off for better classification accuracy. Guppy v4.0.11 in high-accuracy mode is used for basecalling and is invoked using ONT's pyguppyclient server for Read Until. Centrifuge v1.0.4 with a human and microbial nucleotide (NT) index is used as explained under Section 3.5. We further add the capability to calculate cell number abundance to Minimap2. RawMap is evaluated using single-threaded execution on an Intel Xeon E5-2697 x86 processor. Guppy runs on an NVIDIA GeForce GTX 1080.
10

Deep Learning System Configuration

For computational processing, the Python 3.6.4 programming language was used on the Microsoft Windows operating system. The deep learning model was constructed using Keras 2.2.2, which uses Tensorflow-GPU 1.6.0 as a backend. A workstation with an NVIDIA GeForce GTX 1080 (8 GB RAM) and 16 GB of system RAM was used.
