
Titan X GPUs

Manufactured by NVIDIA

The NVIDIA Titan X is a high-performance graphics processing unit (GPU) designed for demanding workloads. It is well suited to a range of tasks, including scientific computing, machine learning, and professional-grade visual computing.


6 protocols using Titan X GPUs

1. Deep Learning for Tissue Blur Classification

We trained an 18-layer deep ResNet model [1], which takes as input 3-channel images of 224×224 pixels. We modified the architecture to accept gray-scale input images: first, this allows a more direct comparison with the engineered-features approach, which relies on gray-scale images; second, it can help the ResNet learn color-independent features, which is important because staining varies widely between tissues. During training, center crops are taken from our 256-pixel patches to meet the network's 224-pixel input requirement. A 6-class classification task was performed, in which each blur level was treated as a class with no ordinal information, using cross-entropy as the cost function. A regression task was also performed, with the blur level as the target and MSE as the cost function. Training from scratch was performed in parallel on four NVIDIA Titan X GPUs for 300 epochs with the following hyper-parameters: batch size = 1024, learning rate = 0.1 (multiplied by 1/10 every 30 epochs), momentum = 0.1. At each epoch, the model was trained on the training set and the validation error was computed on the validation set. After 300 epochs, the model that performed best on the validation set was selected, and its error on the test set was then measured. Convergence plots are shown in Appendix E.
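For illustration, the following is a minimal PyTorch sketch of the setup described above, assuming a standard torchvision ResNet-18; the authors' actual code is not shown here, so the framework choice and variable names are assumptions:

```python
import torch
import torch.nn as nn
from torchvision import models

# ResNet-18 with the first convolution swapped to accept 1-channel
# (gray-scale) input; the rest of the architecture is unchanged.
model = models.resnet18(num_classes=6)    # 6 blur-level classes
model.conv1 = nn.Conv2d(1, 64, kernel_size=7, stride=2, padding=3, bias=False)
# model = nn.DataParallel(model)          # to spread each batch across four GPUs

# Reported hyper-parameters: lr=0.1 (x1/10 every 30 epochs), momentum=0.1.
criterion = nn.CrossEntropyLoss()         # use nn.MSELoss() for the regression task
optimizer = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.1)
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=30, gamma=0.1)

# One illustrative step on random gray-scale 224x224 center crops.
x, y = torch.randn(8, 1, 224, 224), torch.randint(0, 6, (8,))
loss = criterion(model(x), y)
loss.backward()
optimizer.step()
scheduler.step()                          # call once per epoch in real training
```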
2. Benchmarking Phylogenetic Tree Inference Methods

For FFNN-SS and CNN-CBLV, we reported the CPU time of encoding a tree, averaged over 10,000 trees, as reported by NextFlow [56]. The inference time itself was negligible.
For BEAST2, we reported the CPU time averaged over 100 BEAST2 analyses, as reported by NextFlow. For the analyses with the BDEI and BDSS models, we reported the CPU time to process 10 million MCMC steps, and for the analyses with BD, the CPU time to process 5 million MCMC steps. To account for convergence, we recalculated the average CPU time considering only those analyses for which the chain converged and an ESS of 200 was reached for all inferred parameters.
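As a minimal sketch of this convergence-filtered averaging (the run records and numbers below are placeholders, not the study's measurements):

```python
# Each record holds the CPU time reported by NextFlow and the minimum
# effective sample size (ESS) across all inferred parameters.
runs = [
    {"cpu_hours": 12.4, "min_ess": 350.0},
    {"cpu_hours": 15.1, "min_ess": 120.0},   # excluded: ESS below 200
    {"cpu_hours": 11.8, "min_ess": 410.0},
]

converged = [r for r in runs if r["min_ess"] >= 200.0]
mean_cpu = sum(r["cpu_hours"] for r in converged) / len(converged)
print(f"{len(converged)}/{len(runs)} analyses kept; mean CPU time = {mean_cpu:.1f} h")
```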
The calculations were performed on a computational cluster of CentOS machines managed by the Slurm workload manager; each machine had 28 cores at 2.4 GHz and 128 GB of RAM. Each of our jobs (simulating one tree, encoding a tree, running BEAST2, etc.) requested a single CPU core. The neural network training was performed on a GPU cluster with NVIDIA Titan X GPUs.
3. Accelerated MRI Reconstruction Comparison

Online computation time was recorded for each patient scan with both the conventional iterative self-calibration and PICS reconstruction method and the proposed data-driven calibration and reconstruction method, under identical hardware settings with GPU-optimized computations (two Intel Xeon E5-2670 v3 CPUs @ 2.30 GHz with 24 cores each, 256 GB RAM, and two NVIDIA TITAN X GPUs). The ratio of the average computation times of the two approaches was calculated. A t-test was performed against the null hypothesis that there is no difference between the computation times of the conventional and the proposed approach. A two-tailed P value below 0.05 was considered statistically significant.
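The excerpt does not state whether the t-test was paired; a sketch using SciPy with a paired test on placeholder per-patient times (not the study's data) could look like this:

```python
import numpy as np
from scipy import stats

# Placeholder per-patient online reconstruction times, in seconds.
conventional = np.array([185.0, 201.0, 190.0, 178.0, 195.0])  # iterative self-calibration + PICS
proposed = np.array([21.0, 24.0, 22.0, 20.0, 23.0])           # data-driven calibration + reconstruction

ratio = conventional.mean() / proposed.mean()   # ratio of average computation times

# Two-tailed paired t-test of the null hypothesis of no difference.
t_stat, p_value = stats.ttest_rel(conventional, proposed)
print(f"speed-up x{ratio:.1f}, t = {t_stat:.2f}, p = {p_value:.3g}")
print("significant at 0.05" if p_value < 0.05 else "not significant at 0.05")
```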
4. Tomographic Image Reconstruction with Deep Learning

We used the radon and iradon functions in MATLAB 2018a to generate sinograms and to obtain FBP-reconstructed images, respectively. We used Keras (version 2.1.1) with a TensorFlow backend (version 1.3.0) as the framework for developing the deep learning models, and performed experiments on an NVIDIA DevBox (Santa Clara, CA) equipped with four TITAN X GPUs, each with 12 GB of memory.
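For readers working in Python rather than MATLAB, scikit-image offers analogous radon and iradon functions; a minimal sketch (an illustration, not the authors' code):

```python
import numpy as np
from skimage.data import shepp_logan_phantom
from skimage.transform import radon, iradon, resize

image = resize(shepp_logan_phantom(), (256, 256))     # standard test phantom
theta = np.linspace(0.0, 180.0, 180, endpoint=False)  # projection angles (degrees)

sinogram = radon(image, theta=theta)                     # forward projection
fbp = iradon(sinogram, theta=theta, filter_name="ramp")  # filtered back-projection
```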
5. Optimizing Deep Autoencoder for Performance

We optimized the deep AE using Adam, which computes adaptive learning rates during training and has demonstrated superior performance over other optimization methods [50]. An early-stopping strategy was applied to improve the learning of the deep AE weights and prevent overfitting: training was terminated if performance did not improve over five consecutive epochs (maximum number of training epochs: 50). The deep AE was implemented in Torch7 [40], and training was done on two NVIDIA Titan X GPUs and an NVIDIA Tesla K40 GPU. Ten-fold patient-based cross-validation was performed to determine the optimal deep AE architecture, including the number of encoder hidden layers (1–3) and the number of hidden units (factors of 4, 8, 16, and 32).
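The original implementation is in Torch7 (Lua), but the early-stopping rule itself is framework-independent; a Python sketch of it, with hypothetical stand-in functions, is:

```python
import random

def train_one_epoch() -> None:
    """Hypothetical stand-in for one epoch of deep AE training with Adam."""

def validation_loss() -> float:
    """Hypothetical stand-in returning the validation reconstruction loss."""
    return random.random()

# Stop if performance does not improve over five consecutive epochs,
# up to a maximum of 50 training epochs.
best_loss, patience, stale = float("inf"), 5, 0
for epoch in range(50):
    train_one_epoch()
    loss = validation_loss()
    if loss < best_loss:
        best_loss, stale = loss, 0
    else:
        stale += 1
        if stale >= patience:
            break
```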
6. Three-Class UNet Model for Nucleus Segmentation

UnMICST-U models
A three-class UNet model [8] was trained on annotations of nuclei centers, nuclei contours, and background. The network comprises 4 layers and 80 input features.
Training was performed with a batch size of 32 using the Adam optimizer and a learning rate of 0.00005, decayed by a factor of 0.98 every 5,000 steps, until accuracy no longer improved or ~100 epochs had been reached. Batch normalization was used to speed up training. During training, the bottom layer had a dropout rate of 0.35, and L1 regularization and early stopping were used to minimize overfitting [30,31]. Training was performed on workstations equipped with NVIDIA GTX 1080 or NVIDIA Titan X GPUs.
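The excerpt does not name the training framework; expressed with Keras's built-in schedule (an assumption for illustration), the stated learning-rate decay would be:

```python
import tensorflow as tf

# Start at 5e-5 and decay by a factor of 0.98 every 5,000 steps
# (staircase decay), as described above.
schedule = tf.keras.optimizers.schedules.ExponentialDecay(
    initial_learning_rate=5e-5,
    decay_steps=5_000,
    decay_rate=0.98,
    staircase=True,
)
optimizer = tf.keras.optimizers.Adam(learning_rate=schedule)
```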

