The largest database of trusted experimental protocols

GeForce GTX Titan X

Manufactured by NVIDIA
Sourced in United States

The GeForce GTX Titan X is a high-performance graphics processing unit (GPU) designed for gaming and professional applications. Based on NVIDIA's Maxwell architecture, it features 3,072 CUDA cores, 12 GB of GDDR5 memory, and a 384-bit memory interface, delivering high graphics and compute throughput for both workloads.

Automatically generated - may contain errors

19 protocols using GeForce GTX Titan X

1

Benchmarking MHC Peptide Prediction

Experiments were performed on a machine with twelve Intel Core(TM) i7-5930K CPUs at 3.50 GHz, four NVIDIA GeForce GTX TITAN X GPUs, and 64 GB memory using the MHCtools Python interface to the MHCflurry and NetMHC tools (https://github.com/openvax/mhctools) with parallelization and GPUs disabled. We measured the time to generate various numbers of predictions (10², 10³, 10⁴, 10⁵, and 10⁶) for a single allele using peptides sampled from the MS benchmark. We repeated the experiment three times using different alleles (HLA-A*02:01, HLA-A*02:07, HLA-A*01:01). Rates and speedups reported in the main text are averages for the three alleles at the maximum number of peptides (10⁶).
Training the MHCflurry 1.2.0 full ensembles (320 models for each of 130 alleles, for 41,600 models total) took 1,049 minutes using all GPUs and CPUs on the machine. Model selection took 299 minutes, and computing the histogram of predicted affinities for each allele took 15 minutes.
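The timing procedure described above can be sketched with a small harness. The predictor below is a hypothetical stand-in for an MHCflurry/NetMHC call through mhctools (whose actual API is not reproduced here); only the measure-at-increasing-sizes logic mirrors the protocol, and the sizes are truncated for brevity:

```python
import time

def benchmark(predict, peptides, sizes=(100, 1000, 10000)):
    """Time `predict` on growing subsets of `peptides`; return predictions/second.

    The protocol ran sizes up to 10**6; smaller sizes are used here for brevity.
    """
    rates = {}
    for n in sizes:
        batch = peptides[:n]
        start = time.perf_counter()
        predict(batch)
        elapsed = max(time.perf_counter() - start, 1e-9)
        rates[n] = n / elapsed
    return rates

def dummy_predict(batch):
    # Hypothetical stand-in: a real run would call the MHCflurry/NetMHC predictors.
    return [len(p) for p in batch]

peptides = ["SIINFEKL"] * 10000  # illustrative peptide, repeated
rates = benchmark(dummy_predict, peptides)
```

The protocol averaged the resulting rates across three alleles at the largest peptide count.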
2

Deep Learning-Based Protocol for Image Analysis

The MD-CNN is developed and implemented using TensorFlow 2.3.0 in Python 3.7.9 with CUDA 10.1 [63–65]. Model training is performed on an NVIDIA GeForce GTX Titan X graphics processing unit (GPU).
3

Deep Learning for Genome Analysis

All WDNN and MLP model implementations used the Keras 1.2.0 library in Python 2.7 with a TensorFlow 0.10.0 backend. The random forest and regularized logistic regression classifiers were implemented with Python Scikit-Learn 0.18.1. The isolate diversity analysis was implemented using R 3.4.0, the t-SNE analysis used the Rtsne 0.13 package in R, and the permutation tests were implemented in Python 2.7. All models were trained on an NVIDIA GeForce GTX Titan X graphics processing unit (GPU). Hyperparameters are available in Table S9. All analysis code and input data files are openly available at https://github.com/farhat-lab/wdnn.
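As a minimal, self-contained illustration of one of the classifier families above, here is a sketch of L2-regularised logistic regression trained by batch gradient descent (toy one-dimensional data; the study's actual features, hyperparameters, and Scikit-Learn implementation are not reproduced):

```python
import math

def train_logreg(X, y, l2=0.1, lr=0.1, epochs=200):
    """L2-regularised logistic regression fitted with batch gradient descent."""
    n, d = len(X), len(X[0])
    w, b = [0.0] * d, 0.0
    for _ in range(epochs):
        gw = [l2 * wj for wj in w]  # gradient of the L2 penalty
        gb = 0.0
        for xi, yi in zip(X, y):
            z = sum(wj * xj for wj, xj in zip(w, xi)) + b
            p = 1.0 / (1.0 + math.exp(-z))  # sigmoid probability
            err = (p - yi) / n
            for j in range(d):
                gw[j] += err * xi[j]
            gb += err
        w = [wj - lr * gj for wj, gj in zip(w, gw)]
        b -= lr * gb
    return w, b

def predict(w, b, x):
    """Class label from the sign of the decision function."""
    return 1 if sum(wj * xj for wj, xj in zip(w, x)) + b > 0 else 0

w, b = train_logreg([[1.0], [2.0], [-1.0], [-2.0]], [1, 1, 0, 0])
```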
4

Molecular Dynamics of Rosette-like CSC

Six-fold assemblies of the relaxed oligomers were generated to mimic assembly possibilities for the rosette CSC. Dimer and trimer assemblies were docked with SymmDock as described above to generate a pool of 2000 conformations, followed by minor manual adjustments in helical alignment. Tetramer, pentamer, and hexamer assemblies were generated manually due to system size limitations in SymmDock. Explicit MD simulations on each six-fold assembly were performed using the combination of NAMD 2.10 software [55] and the Amber suite. The protein assembly, neutralizing ions, and water molecules cumulatively represented 1.6 million to over 3 million atoms per system. These six-fold assemblies were parametrized for the Cornell force field from the Amber suite, minimized using conjugate gradient with NAMD, and run in parallel on four GPU cards as described above. Post minimization, heating, and equilibration steps were performed using Amber14/PMEMD as described above on a single GPU card that supports simulations of up to 5 million atoms (GeForce GTX Titan X; NVIDIA Corp., Santa Clara, CA). The potential energies per monomer within each of the six-fold assemblies were determined as described above for the individual oligomers.
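For orientation, a minimal NAMD input for conjugate-gradient minimisation with Amber-format parameters might look like the sketch below; the file names and numeric settings are illustrative assumptions, not the values used in this protocol:

```
# Read Amber topology and coordinates (illustrative file names)
amber           on
parmfile        assembly.prmtop
ambercoor       assembly.inpcrd
# Non-bonded settings conventional for the Amber force field
cutoff          9.0
exclude         scaled1-4
1-4scaling      0.833333
outputname      assembly_min
# Conjugate-gradient minimisation
minimize        10000
```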
5

Implementing a Classification Network

The classification network presented earlier was programmed in Python 3.6 (Python Software Foundation, Fredericksburg, VA, USA) using the open-source PyTorch tools. Training was conducted on a graphics processing unit (GPU)-optimized workstation with a single NVIDIA GeForce GTX Titan X (12 GB, Maxwell architecture; NVIDIA, Santa Clara, CA, USA). The following settings were used: an Adam optimizer with a learning rate of 1e−4, a weight decay of 1e−4, and a mini-batch size of 4. Convolution kernels were initialized with a Kaiming uniform initializer. The learning rate was lowered gradually, decaying by a factor of 0.1 at 0.25, 0.5, and 0.75 of the total epochs.
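The step schedule described above can be written framework-agnostically in a few lines. This is a sketch, not the study's PyTorch code, and the 0.25/0.5/0.75 milestone fractions are an assumption:

```python
def step_lr(base_lr, epoch, total_epochs, milestones=(0.25, 0.5, 0.75), gamma=0.1):
    """Step-decay schedule: multiply `base_lr` by `gamma` once for every
    milestone fraction of `total_epochs` that `epoch` has passed."""
    passed = sum(1 for m in milestones if epoch >= m * total_epochs)
    return base_lr * gamma ** passed

lr_early = step_lr(1e-4, 10, 100)   # before any milestone
lr_mid = step_lr(1e-4, 30, 100)     # past the first milestone
lr_late = step_lr(1e-4, 80, 100)    # past all three milestones
```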
6

GPU-Accelerated Ultrasound Motion Tracking

The GPU hardware used in this study was an Nvidia GeForce GTX TITAN X (Nvidia Corp., Santa Clara, CA, USA). The card provides 80 streaming multiprocessors, a total of 5120 CUDA cores, and 12 GB of memory. It was installed in a desktop workstation running Ubuntu 16.04 with an Intel(R) Core(TM) i7-8700 CPU @ 3.20 GHz and 16 GB of host memory. ANSI C and CUDA 9.0 were used to implement all CPU and GPU algorithms. All testing was done on the MATLAB platform (Version 2016b, MathWorks Inc., Natick, MA, USA), and all implemented algorithms were invoked in the MATLAB environment through the MEX interface.

In total, six implementations were produced in this study: standard NCC, Lewis’ method, and the Luo-Konofagou method, each on both CPU and GPU. Hereafter, these are referred to as NCC-CPU, Luo-Konofagou-CPU, Lewis-CPU, NCC-GPU, Luo-Konofagou-GPU, and Lewis-GPU. Performance is compared in two respects: (1) whether a given implementation yields substantial errors, and (2) the computational efficiency of the three methods given the motion tracking parameters and the computing environment (i.e., CPU or GPU).
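The core of all three methods is normalised cross-correlation. Here is a plain-Python sketch on 1-D signals (the study's implementations operate on ultrasound image data in ANSI C/CUDA, so this only illustrates the arithmetic):

```python
import math

def ncc(template, window):
    """Normalised cross-correlation between two equal-length 1-D signals."""
    n = len(template)
    mt = sum(template) / n
    mw = sum(window) / n
    num = sum((t - mt) * (w - mw) for t, w in zip(template, window))
    dt = math.sqrt(sum((t - mt) ** 2 for t in template))
    dw = math.sqrt(sum((w - mw) ** 2 for w in window))
    return num / (dt * dw) if dt and dw else 0.0

def track(template, signal):
    """Slide `template` over `signal`; return the lag where NCC peaks."""
    n = len(template)
    scores = [ncc(template, signal[i:i + n]) for i in range(len(signal) - n + 1)]
    return max(range(len(scores)), key=scores.__getitem__)

lag = track([0, 1, 3, 1, 0], [0, 0, 0, 0, 1, 3, 1, 0, 0])
```

The GPU variants parallelise exactly this window-by-window search across threads.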
7

GPU-Accelerated Ultrasound Motion Tracking

The GPU hardware used in this study was an Nvidia GeForce GTX TITAN X (Nvidia Corp., Santa Clara, CA, USA). The card provides 80 streaming multiprocessors, a total of 5120 CUDA cores, and 12 GB of memory. It was installed in a desktop workstation running Ubuntu 16.04 with an Intel(R) Core(TM) i7-8700 CPU @ 3.20 GHz and 16 GB of host memory. ANSI C and CUDA 9.0 were used to implement all CPU and GPU algorithms. All testing was done on the MATLAB platform (Version 2016b, MathWorks Inc., Natick, MA, USA), and all implemented algorithms were invoked in the MATLAB environment through the MEX interface.

In total, six implementations were produced in this study: standard NCC, Lewis’ method, and the Luo-Konofagou method, each on both CPU and GPU. Hereafter, these are referred to as NCC-CPU, Luo-Konofagou-CPU, Lewis-CPU, NCC-GPU, Luo-Konofagou-GPU, and Lewis-GPU. Performance is compared in two respects: (1) whether a given implementation yields substantial errors, and (2) the computational efficiency of the three methods given the motion tracking parameters and the computing environment (i.e., CPU or GPU).
8

CNN Training with Adam Optimizer

The CNN was trained with random weights initialized using the heuristic described by He and coworkers [18]. Gradients for backpropagation were estimated using the Adam optimizer, an algorithm for first-order gradient-based optimization of stochastic objective functions based on adaptive estimates of lower-order moments [19]. An initial learning rate of 2⁻⁴ was used and annealed (along with an increase in minibatch size) whenever a plateau in training loss was observed.
Software code was written in Python 3.5 using the open-source TensorFlow r1.9 library (Apache 2.0 license) [20]. Experiments were performed on a GPU-optimized workstation with a single NVIDIA GeForce GTX Titan X (12 GB, Maxwell architecture).
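The anneal-on-plateau rule can be sketched independently of TensorFlow. The factor, patience, and min_delta values below are illustrative assumptions; only the initial rate of 2⁻⁴ comes from the text, and the accompanying minibatch increase is omitted:

```python
class PlateauAnnealer:
    """Reduce the learning rate when the training loss stops improving."""

    def __init__(self, lr=2 ** -4, factor=0.5, patience=2, min_delta=1e-4):
        self.lr = lr
        self.factor = factor        # multiplier applied on a plateau
        self.patience = patience    # epochs without improvement tolerated
        self.min_delta = min_delta  # minimum change that counts as improvement
        self.best = float("inf")
        self.wait = 0

    def step(self, loss):
        """Record one epoch's loss and return the (possibly reduced) rate."""
        if loss < self.best - self.min_delta:
            self.best = loss
            self.wait = 0
        else:
            self.wait += 1
            if self.wait >= self.patience:
                self.lr *= self.factor
                self.wait = 0
        return self.lr

annealer = PlateauAnnealer()
for loss in [1.0, 0.9, 0.9, 0.9]:  # improvement, then a plateau
    lr = annealer.step(loss)
```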
9

3D SDOCT Image Pre-processing for AI

We applied standardisation and normalisation for data pre-processing. Specifically, standardisation was used to transform the data to zero mean and unit variance, and normalisation rescaled the data to the range of 0–1. To alleviate over-fitting, during the training process we used several data augmentation techniques, including random cropping and random flipping along the three axes, to enrich training samples for the 3D SDOCT volumetric data. Consequently, the final input size of the network was 200 × 1000 × 200.
We implemented the DL model using the Keras package in Python on a workstation equipped with a 3.5 GHz Intel® Core™ i7-5930K CPU and Nvidia GeForce GTX Titan X GPUs. We set the learning rate to 0.0001 and optimised the network weights with the Adam stochastic gradient descent algorithm.
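The two pre-processing transforms and the random-crop augmentation can be sketched in plain Python (flat lists stand in for the 3-D SDOCT volumes; the study's Keras pipeline is not reproduced):

```python
import random
import statistics

def standardise(x):
    """Shift/scale to zero mean and unit variance."""
    mu = statistics.fmean(x)
    sd = statistics.pstdev(x)
    return [(v - mu) / sd for v in x] if sd else [0.0] * len(x)

def normalise(x):
    """Rescale values to the range [0, 1]."""
    lo, hi = min(x), max(x)
    return [(v - lo) / (hi - lo) for v in x] if hi > lo else [0.0] * len(x)

def random_crop_origin(volume_shape, crop_shape, rng=random):
    """Pick a random crop origin along each of the three axes of a volume."""
    return tuple(rng.randrange(d - c + 1) for d, c in zip(volume_shape, crop_shape))

z = standardise([1.0, 2.0, 3.0])
u = normalise([1.0, 2.0, 3.0])
origin = random_crop_origin((200, 1000, 200), (100, 500, 100))
```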
10

Molecular Docking and Structure Analysis

A laptop equipped with an AMD® Ryzen 3 2200U, VGA Radeon® Vega 3, 4 GB RAM, and a 1 TB HDD, and a workstation with an Intel® Xeon® E5-2620 v3, NVIDIA GeForce GTX TITAN X, 64 GB RAM, and a 1 TB HDD were used. Software employed included Marvin Sketch (www.chemaxon.com), AutoDock 4.2.6 and AutoDockTools 1.5.6 (www.autodock.scripps.edu), LigandScout 4.4.7 (www.inteligand.com), BIOVIA Discovery Studio 2020 (www.accelrys.com), AMBER MD 2018 (www.ambermd.org), and UCSF Chimera 1.15 (www.cgl.ucsf.edu/chimera/).

About PubCompare

Our mission is to provide scientists with the largest repository of trustworthy protocols and intelligent analytical tools, thereby offering them extensive information to design robust protocols aimed at minimizing the risk of failures.

We believe that the most crucial aspect is to grant scientists access to a wide range of reliable sources and new useful tools that surpass human capabilities.

However, we trust in allowing scientists to determine how to construct their own protocols based on this information, as they are the experts in their field.
