
Titan Xp

Manufactured by NVIDIA
Sourced in Switzerland, United States, France

The Titan Xp is a high-performance graphics processing unit (GPU) developed by NVIDIA for professional applications and advanced computing tasks. It pairs 12 GB of GDDR5X video memory with NVIDIA's Pascal architecture, providing exceptional performance for workloads such as 3D rendering, deep learning, and scientific computing.

Automatically generated - may contain errors

47 protocols using Titan Xp

1. High-Speed Microscope Image Acquisition

During acquisition, images are collected by a dedicated custom workstation (Puget Systems) equipped with a high-specification motherboard (Asus WS C422 SAGE/10G), a processor (Intel Xeon W-2145, 3.7 GHz, 8 cores, 11 MB cache, 140 W), and 256 GB of RAM. The motherboard houses several PCIe cards, including two CameraLink frame grabbers (mEIV AD4/VD4, Silicon Software) for streaming images from the camera, a DAQ card (PCIe-6738, National Instruments) for generating analog output voltages, a 10G SFP+ network card (StarTech), and a GPU (Titan Xp, NVIDIA). Datasets are streamed to a local 8 TB U.2 drive (Micron) capable of outpacing the data rates of the microscope system. Data are then transferred to a mapped network drive located on an in-lab server (X11-DPG-QT, SuperMicro) running 64-bit Windows Server, equipped with 768 GB of RAM and Titan Xp (NVIDIA) and Quadro P6000 (NVIDIA) GPUs. The mapped network drive is a direct-attached RAID6 storage array with 15 × 8.0 TB HDDs; the array is hardware-based and controlled by an external 8-port controller (LSI MegaRaid 9380-8e, 1 GB cache). Both the server and the acquisition workstation are configured with jumbo Ethernet frames and with parallel send/receive processes matched to the number of physical cores on the workstation (8) and server (16), which reliably enables ~1.0 GB s−1 network-transfer speeds.
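The parallel-transfer tuning described above can be mimicked in software. The following is a minimal sketch, assuming hypothetical local and network paths, of a chunked file copy that splits one dataset across a worker pool sized to the workstation's 8 physical cores; it is illustrative only, not the lab's actual transfer tool.

```python
# Minimal sketch: parallel chunked copy of an acquisition file to a mapped
# network drive, with worker count matched to the workstation's 8 physical
# cores. Both paths are hypothetical placeholders.
import os
from concurrent.futures import ThreadPoolExecutor

SRC = r"D:\acquisition\dataset.raw"   # local U.2 scratch drive (hypothetical)
DST = r"Z:\raid6\dataset.raw"         # mapped network drive (hypothetical)
WORKERS = 8                           # match physical core count
CHUNK = 64 * 1024 * 1024              # 64 MB per read/write

def copy_range(offset, length):
    """Copy one byte range; each worker holds its own file handles."""
    with open(SRC, "rb") as src, open(DST, "r+b") as dst:
        src.seek(offset)
        dst.seek(offset)
        remaining = length
        while remaining > 0:
            buf = src.read(min(CHUNK, remaining))
            if not buf:
                break
            dst.write(buf)
            remaining -= len(buf)

size = os.path.getsize(SRC)
span = -(-size // WORKERS)            # ceiling division: bytes per worker

# Pre-allocate the destination so workers can seek into it safely.
with open(DST, "wb") as dst:
    dst.truncate(size)

with ThreadPoolExecutor(max_workers=WORKERS) as pool:
    for i in range(WORKERS):
        offset = i * span
        pool.submit(copy_range, offset, min(span, size - offset))
```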
2. Agent-Based Modeling of COVID-19 Dynamics

We implemented our ABM in the Python 3 programming language [47]. The experiments were conducted on the following machines: (i) a desktop computer with an Intel Core i7-7700 CPU (3.6 GHz, 8 MB cache), 16 GB of RAM, and an NVIDIA TITAN XP GPU (12 GB, 1582 MHz); (ii) a virtual private server with a 16-core CPU, 64 GB of RAM, and 200 GB of storage; and (iii) Galileo, a cloud computing platform from Hypernet (https://galileoapp.io/). All code and data can be found at the following link: https://github.com/s-shamil/agent-based-modeling-covid-19.
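For readers unfamiliar with the technique, here is a minimal, self-contained sketch of an agent-based SIR-style epidemic model in Python. Everything in it (population size, contact counts, transition probabilities, the compartment scheme) is an illustrative assumption; the authors' actual model is in the GitHub repository linked above.

```python
# Minimal agent-based SIR-style model: each infected agent meets a few
# random contacts per step and may infect them or recover. All parameter
# values here are illustrative assumptions.
import random

random.seed(42)

N, STEPS = 1000, 100
P_INFECT, P_RECOVER, CONTACTS = 0.05, 0.1, 10

# 'S' susceptible, 'I' infected, 'R' recovered; seed with 5 infected agents
agents = ["I"] * 5 + ["S"] * (N - 5)

for t in range(STEPS):
    infected = [i for i, s in enumerate(agents) if s == "I"]
    for i in infected:
        # Each infected agent contacts a random subset of the population
        for j in random.sample(range(N), CONTACTS):
            if agents[j] == "S" and random.random() < P_INFECT:
                agents[j] = "I"
        if random.random() < P_RECOVER:
            agents[i] = "R"
    if t % 20 == 0:
        counts = {s: agents.count(s) for s in "SIR"}
        print(f"step {t}: {counts}")
```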
3. Deep Learning for MRI Cavity Segmentation

The proposed DL approach requires a training phase that uses the images of the four MR sequences and the manually created reference segmentations. Although the cavity is defined in the T1w and T2w images, we use all four sequences to leverage additional information that can benefit training of the DL model. During training, for each MRI sequence we feed batches of 16 slices of random orientation (i.e., axial, coronal, or sagittal). We optimized the cross-entropy loss with the Adam optimizer [32] and used a learning rate of 10−4. DL training takes approximately 24 h on an NVIDIA Titan Xp graphics processing unit (GPU) with 12 GB of memory. The code was implemented in Python 3.6.8 with PyTorch 1.0.1 (pytorch.org).
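To make the optimizer settings concrete, here is a minimal PyTorch sketch of one training step under the configuration described (Adam, learning rate 10−4, cross-entropy, batches of 16 slices). The tiny placeholder network, slice size, and random tensors are assumptions; the authors' architecture is not reproduced here.

```python
# Minimal sketch of one training step with the stated configuration.
# The network and data are placeholders, not the authors' model.
import torch
import torch.nn as nn

model = nn.Sequential(                  # placeholder segmentation network
    nn.Conv2d(4, 32, 3, padding=1),     # 4 input channels: the 4 MR sequences
    nn.ReLU(),
    nn.Conv2d(32, 2, 1),                # 2 classes: cavity vs. background
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

# Dummy batch: 16 slices, 4 sequences, 128x128 pixels (size is assumed)
images = torch.randn(16, 4, 128, 128)
labels = torch.randint(0, 2, (16, 128, 128))

optimizer.zero_grad()
loss = criterion(model(images), labels)   # cross-entropy loss
loss.backward()
optimizer.step()
print(f"loss: {loss.item():.4f}")
```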
4. Image Classification Model Training

Models were trained with an Adam optimizer [44] for 200 epochs, with a batch size of 54 (learning rate = 1 × 10−4, β1 = 0.9, β2 = 0.999). Models were implemented in Python (Keras [45] 2.2.4 with TensorFlow [46] 1.8.0 as the backend) and run on NVIDIA TITAN Xp and NVIDIA GeForce GTX 1070 GPUs.
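A minimal sketch of this training configuration follows, written against the current tensorflow.keras API (the paper pinned Keras 2.2.4/TensorFlow 1.8.0, where the optimizer argument was lr rather than learning_rate). The placeholder model, input shape, and class count are assumptions.

```python
# Minimal sketch of the stated training setup: Adam with lr=1e-4,
# beta_1=0.9, beta_2=0.999, batch size 54, 200 epochs.
import numpy as np
from tensorflow import keras

model = keras.Sequential([
    keras.layers.Conv2D(32, 3, activation="relu", input_shape=(64, 64, 3)),
    keras.layers.GlobalAveragePooling2D(),
    keras.layers.Dense(10, activation="softmax"),  # placeholder class count
])
model.compile(
    optimizer=keras.optimizers.Adam(learning_rate=1e-4,
                                    beta_1=0.9, beta_2=0.999),
    loss="sparse_categorical_crossentropy",
    metrics=["accuracy"],
)

x = np.random.rand(540, 64, 64, 3).astype("float32")  # dummy images
y = np.random.randint(0, 10, size=(540,))             # dummy labels
model.fit(x, y, batch_size=54, epochs=200, verbose=0)
```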
5. Benchmarking CNN Models for Skin Cancer

In this paper, we used four mainstream CNN architectures to build the binary classifier of BCC and SK: InceptionV3 [15], InceptionResNetV2 [16], DenseNet121 [17], and ResNet50 [18]. A detailed introduction to the four CNN structures and their differences can be found in the Appendix of Supplementary Materials 1. To apply these CNN structures to our BCC and SK classification task, we set the output dimension of the last fully connected layer to 2. On the training set, we compared two CNN training approaches, fine-tuning from pretrained weights and training from scratch; that is, eight models were compared in this experiment in total. We kept the experimental parameters identical: an image batch size of 30, a maximum of 100 epochs, the root mean square prop (RMSProp) optimizer, and a cross-entropy loss function. All experiments were conducted on three Nvidia Titan XP graphics processing units. After training, the models were tested on the same test set and evaluated using the performance indices of ACC, ACCAvg, Sen, Spe, and AUC.
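The two training modes differ only in how the backbone weights are initialized. Here is a minimal tensorflow.keras sketch contrasting them, using ResNet50 as one example backbone; the input size and the use of categorical cross-entropy on a softmax head are illustrative assumptions.

```python
# Minimal sketch: fine-tuning from ImageNet weights vs. training from
# scratch, with a 2-way output layer (BCC vs. SK) and RMSProp.
from tensorflow import keras

def build_classifier(pretrained: bool) -> keras.Model:
    base = keras.applications.ResNet50(
        weights="imagenet" if pretrained else None,  # fine-tune vs. scratch
        include_top=False,
        input_shape=(224, 224, 3),                   # assumed input size
        pooling="avg",
    )
    # Last fully connected layer with output dimension set to 2
    outputs = keras.layers.Dense(2, activation="softmax")(base.output)
    model = keras.Model(base.input, outputs)
    model.compile(
        optimizer=keras.optimizers.RMSprop(),
        loss="categorical_crossentropy",
        metrics=["accuracy"],
    )
    return model

finetuned = build_classifier(pretrained=True)
from_scratch = build_classifier(pretrained=False)
# Each model would then be trained with batch size 30 for up to 100 epochs,
# e.g. model.fit(train_images, train_labels, batch_size=30, epochs=100)
```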
6. 3D Reconstruction of Sparse Axons

Injected embryos were imaged with a Leica TCS SP8 multi-photon confocal microscope (Leica Microsystems, Buffalo Grove, IL) with a white laser source and a Leica HyD-SMD hybrid detector. Images were recorded with a DFC365FX camera at a resolution of 2048 × 2048 pixels using a 20× oil immersion objective (NA 0.75), 1.2× software zoom, and a z-step size of 1.2 μm. Based on the excitation wavelength used, the optical resolution was 291 nm in the XY-plane and 1.6 μm in the Z-plane. Images were captured using the photon-counting feature to detect both bright and dim signals, and stitched to overcome the limited field of view of the high-NA objective. Images were saved in the 'Lif' format and quantified on a Dell Intel Xeon E5-2637 v2 3.50 GHz PC workstation with a 12 GB NVIDIA TITAN Xp graphics card and 128 GB of RAM using Imaris 9.2 (Bitplane, Zurich, Switzerland). To avoid image overlap, only sparsely labeled axons were selected for analysis. Axon tracing, 3D reconstruction, and scoring were performed blind to genotype. 'Lif' files were flattened and converted to the 'Tif' format for illustration. Regions of interest were enlarged digitally for visual inspection and for 'higher magnification' panels in the figures.
7. Generative Adversarial Network for Deformable Image Registration

Of the 15 cases used for training, we used data from 13 patients for training and from 2 for validation. After the hyperparameters were set, we trained the model on all 15 cases and tested it on the 10 held-out cases. The model was trained on an NVIDIA Titan XP with 12 GB of memory. We used the following cost function for deepPERFECT training:
$$\mathcal{L} = \lambda_1 \mathcal{L}_{GAN}(G,D) + \lambda_2 \mathcal{L}_{L1}(G) + \lambda_3 \mathcal{L}_{L1}(I) + \lambda_4 R_{\text{smooth}}(G)$$

Each $\lambda$ is a hyperparameter that weights the contribution of its loss term to the final loss value. In this equation, $G$ denotes the generator output, which represents the deformation vector fields (DVFs), and $D$ denotes the discriminator. $I$ denotes the final deformed image and $R_{\text{smooth}}$ represents the regularization term. $\mathcal{L}_{GAN}(G,D)$ is the adversarial loss, defined as:

$$\mathcal{L}_{GAN}(G,D) = \mathbb{E}_{x,y}\big[\log D(x,y)\big] + \mathbb{E}_{x}\big[\log\big(1 - D(x, G(x))\big)\big]$$

in which $x$ represents the input and $y$ is the target deformation vector field.

$\mathcal{L}_{L1}(G)$ is the L1 norm of the difference between the target DVFs and the DVFs produced by the generator, defined as:

$$\mathcal{L}_{L1}(G) = \mathbb{E}_{x,y}\big[\lVert y - G(x) \rVert_1\big]$$

$\mathcal{L}_{L1}(I)$ is the L1 norm between the target image $I$ and the deformed image $G_x(x)$, created by applying the deformation $G(x)$ to the input image; it is defined as:

$$\mathcal{L}_{L1}(I) = \mathbb{E}_{x,y}\big[\lVert I - G_x(x) \rVert_1\big]$$

Finally, to enforce smooth deformation fields, we used the second-order curvature regularization term that is widely used in the registration literature, given as:

$$R_{\text{smooth}}(G) = \sum_{j=1}^{3} \left\lVert \frac{\partial^2 G}{\partial x_j^2} \right\rVert^2$$
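As a concrete illustration, the sketch below assembles this composite loss in PyTorch. It is a minimal sketch, not the authors' implementation: the function name, the λ weights, and the assumption that the discriminator outputs sigmoid probabilities and that the warped image is computed upstream are all illustrative.

```python
# Minimal sketch of the composite deepPERFECT-style loss. Inputs:
# d_real/d_fake are discriminator probabilities on (input, target-DVF) and
# (input, generated-DVF) pairs; dvf_pred/dvf_true are the generated and
# target deformation vector fields; img_warped is the input image deformed
# by the generated DVF; img_true is the target image. Lambda weights are
# illustrative assumptions.
import torch
import torch.nn.functional as F

def composite_registration_loss(d_real, d_fake, dvf_pred, dvf_true,
                                img_warped, img_true,
                                lambdas=(1.0, 10.0, 10.0, 1.0)):
    l1, l2, l3, l4 = lambdas
    # Adversarial term (negated for minimization):
    # -E[log D(x,y)] - E[log(1 - D(x, G(x)))]
    loss_gan = (F.binary_cross_entropy(d_real, torch.ones_like(d_real)) +
                F.binary_cross_entropy(d_fake, torch.zeros_like(d_fake)))
    # L1 between target and generated deformation vector fields
    loss_dvf = F.l1_loss(dvf_pred, dvf_true)
    # L1 between target image and the image deformed by G(x)
    loss_img = F.l1_loss(img_warped, img_true)
    # Second-order (curvature) smoothness via double finite differences
    # along each spatial axis of an (N, C, ...) field
    smooth = sum((dvf_pred.diff(dim=d).diff(dim=d) ** 2).mean()
                 for d in range(2, dvf_pred.ndim))
    return l1 * loss_gan + l2 * loss_dvf + l3 * loss_img + l4 * smooth
```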
8. Comparative Evaluation of Deep Learning Algorithms

To evaluate the performance of each algorithm, we utilized mean absolute error (MAE), root mean squared error (RMSE), and the coefficient of determination (R2) as performance metrics.
In this study, a 10-fold cross-validation scheme was applied to compare the performances of different methods: each algorithm was trained on nine randomly selected subsets, and then validated on the final subset, referred to as the validation set. The optimal algorithm was identified by evaluating the average performance metrics in 10-fold cross-validation.
On a system comprising two NVIDIA Titan Xp GPUs with 12 GB of memory, training the CNN-MLP algorithm took approximately 6.94 h, whereas the CNN-only algorithm required 5.28 h.
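The evaluation scheme described above is standard, so it can be sketched directly. Below is a minimal scikit-learn example of 10-fold cross-validation reporting MAE, RMSE, and R², with a placeholder regressor and random data standing in for the CNN-MLP and CNN models.

```python
# Minimal sketch of 10-fold cross-validation with MAE, RMSE, and R^2.
# The regressor and data are placeholders, not the study's models.
import numpy as np
from sklearn.model_selection import KFold
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score

X = np.random.rand(200, 8)   # dummy features
y = np.random.rand(200)      # dummy targets

maes, rmses, r2s = [], [], []
kf = KFold(n_splits=10, shuffle=True, random_state=0)
for train_idx, val_idx in kf.split(X):
    # Train on nine subsets, validate on the held-out subset
    model = LinearRegression().fit(X[train_idx], y[train_idx])
    pred = model.predict(X[val_idx])
    maes.append(mean_absolute_error(y[val_idx], pred))
    rmses.append(np.sqrt(mean_squared_error(y[val_idx], pred)))
    r2s.append(r2_score(y[val_idx], pred))

# Average metrics across the 10 folds identify the better algorithm
print(f"MAE {np.mean(maes):.3f}  RMSE {np.mean(rmses):.3f}  "
      f"R2 {np.mean(r2s):.3f}")
```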
9. Deep Learning for DENSE Artifact Removal

A U-Net consisting of encoding and decoding paths, each with multiple convolutional layers, was used. Each layer consists of 3×3 convolutions followed by the sigmoid function. A 3×3 max pooling operator with stride 2 was used between convolutional layers of the encoding path. Prior to each convolutional layer in the decoding path, a 2×2 upsampling convolution was used and its outputs were concatenated with the output of the corresponding layer in the encoding path. The layers within the encoding path downsampled the input and increased the number of feature channels, and these operations were reversed in the decoding path. The convolutional and max-pooling operators had the same kernel size as those in the generic U-Net architecture. However, different numbers of convolutional layers and feature channels were chosen to avoid overfitting/underfitting.
For training, non-phase-cycled DENSE images were provided as the input, and the corresponding artifact-free DENSE images obtained after subtraction of phase-cycled data were used as the ground truth. Training was posed as minimization of the absolute difference between the ground truth and the output of the U-Net using the Adam optimizer. The training was implemented using the TensorFlow [32] library on an NVIDIA TITAN Xp graphics processing unit.
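The two paragraphs above determine the block structure, so a compact sketch can be written down. The depth, channel counts, and input shape below are assumptions (the paper tuned these to avoid over-/underfitting), and current tensorflow.keras is used rather than the original TensorFlow version; the sketch keeps the text's stated choices of sigmoid activations after 3×3 convolutions, 3×3 max pooling with stride 2, and 2×2 up-convolutions, trained with an L1 loss and Adam.

```python
# Minimal two-level U-Net sketch matching the description above.
# Depth, channels, and input shape are illustrative assumptions.
from tensorflow import keras
from tensorflow.keras import layers

def conv_block(x, filters):
    # 3x3 convolutions followed by the sigmoid function, per the text
    x = layers.Conv2D(filters, 3, padding="same", activation="sigmoid")(x)
    return layers.Conv2D(filters, 3, padding="same", activation="sigmoid")(x)

inputs = keras.Input(shape=(128, 128, 1))            # one DENSE image slice
e1 = conv_block(inputs, 32)                          # encoding path
p1 = layers.MaxPooling2D(pool_size=3, strides=2, padding="same")(e1)
e2 = conv_block(p1, 64)
p2 = layers.MaxPooling2D(pool_size=3, strides=2, padding="same")(e2)
b = conv_block(p2, 128)                              # bottleneck

# Decoding path: 2x2 upsampling convolution, then concatenation with the
# output of the corresponding encoding layer
u2 = layers.Conv2DTranspose(64, 2, strides=2, padding="same")(b)
d2 = conv_block(layers.Concatenate()([u2, e2]), 64)
u1 = layers.Conv2DTranspose(32, 2, strides=2, padding="same")(d2)
d1 = conv_block(layers.Concatenate()([u1, e1]), 32)
outputs = layers.Conv2D(1, 1)(d1)                    # artifact-suppressed image

unet = keras.Model(inputs, outputs)
# Training minimizes the absolute difference to the ground truth (L1 loss)
unet.compile(optimizer="adam", loss="mean_absolute_error")
```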
10. CNN Training on Nvidia Titan Xp

A notebook was used to create the dataset. Training of the CNN was performed on an Intel Xeon server equipped with two Nvidia Titan Xp graphics processing units (GPUs) and 32 GB of RAM.

About PubCompare

Our mission is to provide scientists with the largest repository of trustworthy protocols and intelligent analytical tools, thereby offering them extensive information to design robust protocols aimed at minimizing the risk of failures.

We believe that the most crucial aspect is to grant scientists access to a wide range of reliable sources and new useful tools that surpass human capabilities.

However, we trust scientists to determine how to construct their own protocols based on this information, as they are the experts in their field.
