The largest database of trusted experimental protocols

Titan X

Manufactured by NVIDIA
Sourced in the United States

The Titan X is a high-performance graphics processing unit (GPU) designed and manufactured by NVIDIA. It is a powerful piece of lab equipment capable of handling complex computational tasks and data processing. The Titan X features a large number of CUDA cores, high-speed memory, and advanced rendering capabilities, making it a versatile tool for a variety of scientific and research applications.

Automatically generated - may contain errors

33 protocols using Titan X

1

Faster R-CNN for Vertebral Body Localization

The faster region-based CNN (R-CNN) [19] was developed from the R-CNN [22] and the fast R-CNN [23]; it unifies the target-detection process (candidate region generation, feature extraction, classification, and position refinement) into a single deep-network framework and greatly improves operational speed. In step 1, the faster R-CNN was used to locate the vertebral bodies in sagittal MR images.
First, the six vertebral bodies (L1-S1) in 200 midsagittal images were manually located under the guidance of a radiologist. Second, the faster R-CNN was trained to detect and locate each vertebral body; we detected vertebral bodies instead of disks because they were easier to locate manually. Finally, the mid-point coordinate of each vertebral body was calculated from its bounding-box coordinates, because the precise locations of the vertebral bodies were used to locate the vertebrae in axial MR images, as shown in Figure 1 (step 1).
The faster R-CNN was implemented with Caffe [24] (the Berkeley Vision and Learning Center deep learning framework) and trained in parallel on four NVIDIA Titan X graphics processing units. Accuracy, sensitivity, and specificity [25,26] were analyzed to comprehensively evaluate the performance of this system.
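A minimal Python sketch of the final step above, computing the mid-point coordinate of each vertebral body from its bounding box. The detection format (label plus corner coordinates) and the example values are hypothetical.

def vertebral_body_centers(detections):
    # detections: list of (label, x_min, y_min, x_max, y_max) tuples from the detector.
    centers = {}
    for label, x_min, y_min, x_max, y_max in detections:
        centers[label] = ((x_min + x_max) / 2.0, (y_min + y_max) / 2.0)
    return centers

# Example with made-up bounding boxes for two vertebral bodies.
print(vertebral_body_centers([("L1", 120, 40, 180, 90), ("L2", 118, 95, 178, 145)]))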
2

GPU-accelerated Bloch Simulations for bSSFP

The signal evolution of the IR-bSSFP sequence has been well described in previous studies [9, 10]. We chose IR-bSSFP because of its simplicity of implementation and the enhanced contrast it provides by mixing T1 and T2 relaxation into the signal evolution. We used GPU-based parallel computing to simulate the bSSFP signal evolution in real time for training the neural network, as shown in Figure 1. In the simulation algorithm, we used simplified models of the excitation and precession within one TR, given as:
Excitation: R_ex(α, ϕ) = R_x(ϕ) R_z(α) R_x(−ϕ), and M_{k,+} = R_ex(α, ϕ) M_{k,−},
where R_x and R_z are the standard SO(3) rotation matrices, α denotes the flip angle, ϕ is the RF phase, and M_{k,−} and M_{k,+} are the magnetizations immediately before and after the k-th excitation, respectively; and
Precession: M_{k+1,−} = diag(E_2, E_2, E_1) R_z(φ) M_{k,+} + m_0 [0, 0, 1 − E_1]^T,
where E_{1,2} = exp(−TR/T_{1,2}), m_0 is the proton density, φ is the phase accumulated due to the frequency offset during the precession period, and M_{k,+} and M_{k+1,−} are the magnetizations at the beginning and end of the k-th TR, respectively. The TR-by-TR evolution was computed sequentially, while different spins were simulated in parallel on the GPU with a batch size of 100. On typical GPU hardware, e.g., an NVIDIA TITAN X or GTX 1070/1060, simulating hundreds of signal evolutions with 600 to 1000 TRs normally takes only a few seconds.
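A minimal PyTorch sketch of this batch-parallel, TR-by-TR simulation (spins vectorized along the batch dimension, as in the GPU implementation described above). The sequence parameters, the inversion preparation, and the read-out of the transverse magnitude are illustrative placeholders rather than the authors' settings.

import math
import torch

def rot_x(a):
    # Standard SO(3) rotation about x for a batch of angles a (shape [N]).
    c, s = torch.cos(a), torch.sin(a)
    o, z = torch.ones_like(a), torch.zeros_like(a)
    return torch.stack([o, z, z,  z, c, -s,  z, s, c], dim=-1).reshape(-1, 3, 3)

def rot_z(a):
    # Standard SO(3) rotation about z for a batch of angles a (shape [N]).
    c, s = torch.cos(a), torch.sin(a)
    o, z = torch.ones_like(a), torch.zeros_like(a)
    return torch.stack([c, -s, z,  s, c, z,  z, z, o], dim=-1).reshape(-1, 3, 3)

def simulate_ir_bssfp(T1, T2, dphi, TR=0.005, alpha=0.6, phi=0.0, n_tr=600, device="cpu"):
    # T1, T2, dphi are per-spin tensors of shape [N]; all spins are propagated
    # TR-by-TR in parallel (set device="cuda" on a TITAN X or similar GPU).
    T1, T2, dphi = T1.to(device), T2.to(device), dphi.to(device)
    N = T1.shape[0]
    E1, E2 = torch.exp(-TR / T1), torch.exp(-TR / T2)
    relax = torch.zeros(N, 3, 3, device=device)
    relax[:, 0, 0], relax[:, 1, 1], relax[:, 2, 2] = E2, E2, E1
    recover = torch.stack([torch.zeros_like(E1), torch.zeros_like(E1), 1 - E1], dim=-1)
    ones = torch.ones(N, device=device)
    R_ex = rot_x(phi * ones) @ rot_z(alpha * ones) @ rot_x(-phi * ones)  # excitation rotation
    R_free = relax @ rot_z(dphi)                   # relaxation + off-resonance precession
    M = torch.zeros(N, 3, device=device)
    M[:, 2] = -1.0                                 # inversion-recovery preparation
    signal = torch.zeros(N, n_tr, device=device)
    for k in range(n_tr):                          # sequential TR-by-TR evolution
        M = (R_ex @ M.unsqueeze(-1)).squeeze(-1)                  # M_{k,+}
        signal[:, k] = torch.sqrt(M[:, 0] ** 2 + M[:, 1] ** 2)    # |Mxy| read-out
        M = (R_free @ M.unsqueeze(-1)).squeeze(-1) + recover      # M_{k+1,-}
    return signal

# Example: a batch of 100 spins with random relaxation times and off-resonance phases.
sig = simulate_ir_bssfp(T1=torch.rand(100) * 2 + 0.3,
                        T2=torch.rand(100) * 0.2 + 0.02,
                        dphi=(torch.rand(100) - 0.5) * 2 * math.pi)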
3

Normalization and Deep Learning for Satellite Imaging

To reduce the effect of varying illumination and atmospheric conditions across images acquired at different times over different areas, Equation (1) is employed to normalize the original image DN values at the scene level; this is expected to provide comparable training and test data sets. The normalized images are then clipped into sample images of 512 × 512 pixels. In total, 1460 sample images are selected, of which 80% are randomly chosen for training and the remaining 20% are used for validation:
DN_T = DN_ori / ( (1/(MN)) Σ_{i=1}^{MN} DN_i )
where DN_ori and DN_T are the DN values of the original and transformed images, respectively, and M and N are the image height and width. The DCNN model is implemented with TensorFlow, the open-source deep learning framework developed by Google. A high-performance workstation with NVIDIA Titan X GPUs is employed for DCNN training and inference, with a batch size of 8 during training. A dropout rate of 0.5 is used at the training stage to prevent the DCNN from over-fitting.
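A short NumPy sketch of this preprocessing step, under the assumption that Equation (1) divides each DN value by the scene-mean DN; the scene size, random seed, and DN range below are placeholders for illustration.

import numpy as np

def normalize_and_tile(scene, tile=512):
    # Scene-level normalization (Equation (1)): divide every DN value by the
    # scene mean, then clip the result into non-overlapping tile x tile samples.
    dn_mean = scene.mean()                 # (1 / (M * N)) * sum over all DN values
    normalized = scene / dn_mean           # DN_T = DN_ori / scene-mean DN
    h, w = normalized.shape[:2]
    samples = [normalized[r:r + tile, c:c + tile]
               for r in range(0, h - tile + 1, tile)
               for c in range(0, w - tile + 1, tile)]
    return normalized, samples

# Hypothetical usage: normalize one scene, then split the samples 80/20 into
# training and validation sets at random.
rng = np.random.default_rng(0)
scene = rng.integers(0, 1024, size=(2048, 2048)).astype(np.float64)
_, samples = normalize_and_tile(scene)
order = rng.permutation(len(samples))
n_train = int(0.8 * len(samples))
train = [samples[i] for i in order[:n_train]]
val = [samples[i] for i in order[n_train:]]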
4

Deep Learning for Physiological Signal Translation

In our proposed model, there are two LSTM layers, with 128 hidden nodes for each phase. We apply a dropout layer at the end of the decoding phase with a rate of 0.2 to prevent overfitting. The learning rate is set to 0.0025, and the Adam optimizer is used for training. The maximum number of training epochs is set to 50 for both the source and target fields. We built our model in Python using Keras with TensorFlow 2.2 as the backend. Using four GPU machines (NVIDIA Titan X, Taipei, Taiwan), it takes up to 9 h to train both the PPG-to-PPG and the PPG-to-ABP translation models.
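A minimal Keras sketch of the described configuration: two LSTM layers with 128 hidden nodes each, a dropout layer (rate 0.2) at the end of the decoding phase, and the Adam optimizer with a learning rate of 0.0025. The sequence length, channel counts, and mean-squared-error loss are assumptions, as they are not stated here.

from tensorflow.keras import layers, models, optimizers

SEQ_LEN, IN_CH, OUT_CH = 256, 1, 1                          # placeholder signal dimensions

inputs = layers.Input(shape=(SEQ_LEN, IN_CH))
x = layers.LSTM(128, return_sequences=True)(inputs)         # encoding phase
x = layers.LSTM(128, return_sequences=True)(x)              # decoding phase
x = layers.Dropout(0.2)(x)                                  # dropout at end of decoding
outputs = layers.TimeDistributed(layers.Dense(OUT_CH))(x)   # translated signal

model = models.Model(inputs, outputs)
model.compile(optimizer=optimizers.Adam(learning_rate=0.0025), loss="mse")
# model.fit(ppg_segments, target_segments, epochs=50, ...)  # up to 50 epochs per field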
5

GPU-Accelerated Machine Learning Workstation

All training runs were executed on a GPU workstation with 16 GB of RAM, an Intel(R) Xeon(R) CPU E3-1231 v3 @ 3.40 GHz, and an NVIDIA TITAN X (Pascal) GPU with 12 GB of VRAM.
6

GPU-Accelerated Model Training

Model parameters consumed up to 1 GB of memory, depending on the number of included components. Training was performed on the Entropy computation cluster (Faculty of Mathematics, Informatics and Mechanics, University of Warsaw, Warsaw, Poland) with the following GPU hardware: RTX 2080 Ti (MSI, Taipei, Taiwan), TITAN V, and TITAN X (NVIDIA, Santa Clara, CA, USA). The performance of these GPUs was sufficient to run the model.
The categorical cross-entropy cost function was minimised with the Adam optimiser [54].
7

Deep Learning Performance Evaluation

The proposed models were implemented using PyTorch version 1.8.0 [29] with NVIDIA CUDA version 10.0 (Santa Clara, CA, USA), a single NVIDIA TITAN X GPU, and 64 GB of RAM. Approximately 6 GB of GPU memory is sufficient for our algorithm, depending on the batch size. The models were coded in Python 3.9 and were trained and evaluated on a Linux (Ubuntu) operating system (they are also compatible with Windows). Performance measurement and statistical analysis were performed using publicly available libraries, including scikit-learn version 0.20.3.
8

Deeply Optimized Binary Classifier Model

Our model is a fully connected deep neural network (DNN) [39] with an input layer (of 13,654 dimensions), 18 hidden layers (of 512 dimensions each), and a scalar output layer. We employ the logistic function and log loss at the output layer for binary classification (with 0/1 labels), and use the Scaled Exponential Linear Unit (SELU) activation function [40] at each layer. The model is optimized using the Adam optimizer [41] with a mini-batch size of 128 examples and the default learning rate (0.001).
Intermediate snapshots of the model weights were taken every 250 mini-batch iterations, and the snapshot that performed best on the validation set was retroactively selected as the final model. Explicit regularization was not found to be necessary. The network configuration was reached by an extensive hyperparameter search over various network depths (ranging from 2 to 32) and activation functions (tanh, ReLU, and SELU).
The software was implemented in the Python programming language (version 2.7) using the PyTorch framework [42] and the scikit-learn library (version 0.17.1) [43]. Training was performed on an NVIDIA Titan X (12 GB RAM) with CUDA version 8.0.
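A minimal PyTorch sketch of the described network and optimization setup; the random mini-batch at the end is a placeholder, and BCEWithLogitsLoss is used as the numerically stable equivalent of a logistic output with log loss.

import torch
import torch.nn as nn

class FullyConnectedDNN(nn.Module):
    # 13,654-dimensional input, 18 hidden layers of 512 units with SELU,
    # and a scalar output layer (logit of the positive class).
    def __init__(self, in_dim=13654, hidden_dim=512, n_hidden=18):
        super().__init__()
        blocks = [nn.Linear(in_dim, hidden_dim), nn.SELU()]
        for _ in range(n_hidden - 1):
            blocks += [nn.Linear(hidden_dim, hidden_dim), nn.SELU()]
        blocks.append(nn.Linear(hidden_dim, 1))
        self.net = nn.Sequential(*blocks)

    def forward(self, x):
        return self.net(x).squeeze(-1)

model = FullyConnectedDNN()
criterion = nn.BCEWithLogitsLoss()                          # logistic output + log loss
optimizer = torch.optim.Adam(model.parameters(), lr=0.001)  # default learning rate

# One training step on a placeholder mini-batch of 128 examples with 0/1 labels.
x = torch.randn(128, 13654)
y = torch.randint(0, 2, (128,)).float()
optimizer.zero_grad()
loss = criterion(model(x), y)
loss.backward()
optimizer.step()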
9

Optimized CNN for Plaque and CAA Detection

All neural network models were trained with the open-source package PyTorch [71] on four NVIDIA GTX 1080 or Titan X graphics processing units. Our optimized model used a simple convolutional architecture for image classification, consisting of alternating (3 × 3) kernels of stride 1 and padding 1 followed by max pooling (Fig. 3a), followed by two fully connected hidden layers (512 and 100 neurons) with rectified linear units as the nonlinear activation function. All neural network models were trained using backpropagation. The optimized training procedure used the Adam [72] optimizer with a multi-label soft margin loss function, weight decay (L2 penalty, 0.008), and dropout (probability 0.5 for the first two fully connected layers and probability 0.2 for all convolutional layers). Training proceeded with mini-batches of 64 images and real-time data augmentation, including random flips, rotations, zoom, shear, and color jitter. When calculating classification accuracy, thresholds of 0.91, 0.1, and 0.85 were used for cored-plaque, diffuse-plaque, and CAA prediction, respectively; predictions with confidence above the threshold were considered positives.
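A minimal PyTorch sketch of the described architecture and training setup; the number of convolutional blocks, the channel widths, and the 256 × 256 RGB input size are assumptions for illustration, as they are not stated here.

import torch
import torch.nn as nn

class PlaqueCNN(nn.Module):
    # Alternating 3x3 convolutions (stride 1, padding 1) with max pooling,
    # followed by two fully connected hidden layers (512 and 100 neurons),
    # ReLU activations, and a 3-label output (cored plaque, diffuse plaque, CAA).
    def __init__(self, n_labels=3):
        super().__init__()
        chans = [3, 16, 32, 64, 128]                     # assumed channel widths
        blocks = []
        for c_in, c_out in zip(chans[:-1], chans[1:]):
            blocks += [nn.Conv2d(c_in, c_out, 3, stride=1, padding=1),
                       nn.ReLU(),
                       nn.Dropout(0.2),                  # dropout 0.2 on conv layers
                       nn.MaxPool2d(2)]
        self.features = nn.Sequential(*blocks)
        self.classifier = nn.Sequential(
            nn.Dropout(0.5), nn.Linear(128 * 16 * 16, 512), nn.ReLU(),   # dropout 0.5 on the
            nn.Dropout(0.5), nn.Linear(512, 100), nn.ReLU(),             # first two FC layers
            nn.Linear(100, n_labels))

    def forward(self, x):                                # x: [batch, 3, 256, 256]
        return self.classifier(torch.flatten(self.features(x), 1))

model = PlaqueCNN()
criterion = nn.MultiLabelSoftMarginLoss()                # multi-label soft margin loss
optimizer = torch.optim.Adam(model.parameters(), weight_decay=0.008)  # L2 penalty 0.008
# At inference, per-label sigmoid outputs would be thresholded at 0.91, 0.1, and 0.85.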
10

Robust Deep Learning for Image Tasks

We perform all experiments with TensorFlow 2.4 on an NVIDIA Titan Xp, except for batch sizes of more than 64 total images, where we use an additional NVIDIA Titan X. The CNN architecture uses a batch size of 72 images. We train the LSTM with a batch size of 8 sequences of 9 images each, which likewise corresponds to 72 images per iteration. Increasing the batch size in either of these architectures yields negligible improvement, whereas larger batch sizes do improve the BLSTM noticeably; the BLSTM is therefore trained with a batch size of 16 sequences of 9 images each, for a total of 144 images per batch. Training the model takes approximately 25 min, corresponding to 4500 iterations, i.e., 4500 × batch_size sequences in total, or about 16 epochs for the BLSTM and 8 for the CNN and LSTM architectures. Stochastic gradient descent without momentum is used to train the network. For data augmentation, we use colour jittering (adjustments to brightness, contrast, saturation, and hue), random grayscale conversion, rotations, random left-right and up-down flips, and finally a circular mask with a radius of 128 px to remove possible artefacts near the image borders.
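A small NumPy sketch of the 128 px circular border mask and a representative subset of the listed augmentations (brightness jitter, random grayscale conversion, and random flips); it is illustrative only and is not the authors' TensorFlow pipeline.

import numpy as np

def mask_border_artefacts(image, radius=128):
    # Keep pixels within `radius` of the image centre and zero out the rest,
    # removing possible artefacts near the borders. Expects an H x W (x C) array.
    h, w = image.shape[:2]
    yy, xx = np.ogrid[:h, :w]
    keep = (yy - h / 2) ** 2 + (xx - w / 2) ** 2 <= radius ** 2
    return image * (keep[..., None] if image.ndim == 3 else keep)

def augment(image, rng):
    # Illustrative augmentation of an H x W x 3 image.
    img = image.astype(np.float32) * rng.uniform(0.8, 1.2)            # brightness jitter
    if rng.random() < 0.2:                                             # random grayscale conversion
        img = np.repeat(img.mean(axis=-1, keepdims=True), 3, axis=-1)
    if rng.random() < 0.5:
        img = img[:, ::-1]                                             # random left-right flip
    if rng.random() < 0.5:
        img = img[::-1, :]                                             # random up-down flip
    return mask_border_artefacts(img)

rng = np.random.default_rng(0)
augmented = augment(np.ones((256, 256, 3)), rng)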

About PubCompare

Our mission is to provide scientists with the largest repository of trustworthy protocols and intelligent analytical tools, thereby offering them extensive information to design robust protocols aimed at minimizing the risk of failures.

We believe that the most crucial aspect is to grant scientists access to a wide range of reliable sources and new useful tools that surpass human capabilities.

However, we trust scientists, as the experts in their field, to decide how to construct their own protocols based on this information.


Revolutionizing how scientists search and build protocols!