Titan RTX GPU

Manufactured by NVIDIA
Sourced in United States

The Titan RTX GPU is a high-performance graphics processing unit (GPU) designed for professional applications. It features 24 GB of GDDR6 memory, 4,608 CUDA cores, and a boost clock speed of up to 1.77 GHz. The Titan RTX is capable of delivering exceptional performance for tasks such as rendering, machine learning, and scientific computing.

57 protocols using Titan RTX GPU

1

Accelerated Deep Learning on GPU Cluster

All DNNs were implemented in PyTorch and trained on an Ubuntu 16.04 LTS system of x86_64 architecture. The hardware for this system consisted of an Intel Xeon 2.30 GHz CPU with 502 GB RAM and four NVIDIA TITAN RTX GPUs.
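
As a rough illustration of the setup, the four GPUs could be used for data-parallel training in PyTorch along the following lines. The protocol does not publish its code, so the model and batch here are stand-ins:

```python
import torch
import torch.nn as nn

# Hypothetical stand-in network; the actual DNNs are not specified here.
model = nn.Sequential(nn.Linear(512, 256), nn.ReLU(), nn.Linear(256, 10))

if torch.cuda.is_available():
    model = model.cuda()
    if torch.cuda.device_count() > 1:
        # Replicate the model across all visible GPUs (four TITAN RTX cards here).
        model = nn.DataParallel(model)

x = torch.randn(32, 512)  # dummy batch
if torch.cuda.is_available():
    x = x.cuda()
logits = model(x)         # forward pass split across the GPUs
```
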
2

DETR-IQA: Object-Centric Image Quality Assessment

In this study, we implemented the DETR-IQA model in PyTorch, and experiments were run on 2 NVIDIA TITAN RTX GPUs, each with 24 GB of VRAM. We trained for 300 epochs with a batch size of 2. We leveraged the DETR model with a ResNet-50 backbone pre-trained on COCO 2017 val5k and fine-tuned it on our constructed dataset. We did not change the structure of the baseline DETR model and used its default parameters. DETR-IQA is composed of ResNet-50, a 6-layer transformer encoder, and a 6-layer transformer decoder. We used the AdamW optimizer with a weight decay of 1 × 10−4 for at most 200 epochs. We set the initial transformer learning rate to 1 × 10−4 and the backbone learning rate to 1 × 10−5. The weights of DETR-IQA were initialized from a COCO-pretrained DETR model. The hyperparameters λ were empirically set to 0.99, 0.9, 0.8, 0.7, and 0.6. The constructed images containing objects of interest were randomly divided into training and testing sets at a ratio of 8:2.
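
Since the protocol assigns different learning rates to the transformer and the backbone, a minimal sketch of how such an AdamW optimizer might be built in PyTorch follows. The name-based parameter split mirrors the convention of the public DETR code and is an assumption, not the authors' actual script:

```python
import torch
from torch.optim import AdamW

def build_optimizer(model: torch.nn.Module) -> AdamW:
    # Assumes backbone parameter names contain "backbone", as in the DETR repo.
    backbone = [p for n, p in model.named_parameters()
                if "backbone" in n and p.requires_grad]
    others = [p for n, p in model.named_parameters()
              if "backbone" not in n and p.requires_grad]
    return AdamW(
        [
            {"params": others, "lr": 1e-4},    # transformer LR from the protocol
            {"params": backbone, "lr": 1e-5},  # backbone LR from the protocol
        ],
        weight_decay=1e-4,                     # weight decay from the protocol
    )
```
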
3

Transfer Learning for Thick-Thin Slices

For the implementation, we first trained the model on the thick-slice datasets with the SGD optimizer for 200 epochs. The initial learning rate was set to 1e-3 with a linear decay schedule. We then used the pretrained weights to initialize the model and trained on both thick and thin slices with the proposed objective function, using an initial learning rate of 1e-4 for 100 epochs. The weight decay factor was 1e-5 throughout training. In terms of training time, the pretraining stage took about 5 h on a machine with 4 NVIDIA TITAN RTX GPUs, and the main training took 10 h on the same machine.
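
The two-stage schedule described above could be expressed in PyTorch roughly as follows. The model and training loops are placeholders, and a decay to zero over 200 epochs is one plausible reading of "linear decay schedule":

```python
import torch
from torch.optim import SGD
from torch.optim.lr_scheduler import LambdaLR

model = torch.nn.Linear(16, 2)  # placeholder for the actual network

# Stage 1: pretrain on thick-slice data; lr 1e-3 with linear decay over 200 epochs.
opt = SGD(model.parameters(), lr=1e-3, weight_decay=1e-5)
sched = LambdaLR(opt, lr_lambda=lambda epoch: 1.0 - epoch / 200.0)
for epoch in range(200):
    # ... one pass over the thick-slice training set ...
    sched.step()

# Stage 2: initialize from the pretrained weights and train on thick + thin
# slices with the proposed objective; lr 1e-4 for 100 epochs.
opt = SGD(model.parameters(), lr=1e-4, weight_decay=1e-5)
for epoch in range(100):
    # ... one pass over the combined objective ...
    pass
```
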
4

Deep Learning-based Ankle X-Ray Classification

The DCNN was built on an Ubuntu 18.04.3 LTS workstation with an Intel(R) Core(TM) i9-10900X CPU @ 3.70 GHz, 96 GB RAM, and 2 NVIDIA TITAN RTX GPUs, using TensorFlow and the Keras open-source library with Python 3.6.9 (Python Software Foundation). Input images were resized to 880 × 880 pixels in 8-bit grayscale to reduce complexity and computation. We used ImageNet for pre-training, and the pre-trained weights of the DCNN were preserved for AXR training. Image augmentation was randomly applied during training with zoom, rotation, width-shift, height-shift, shear-transformation, and horizontal-flip operations. Class weights were adjusted according to the class distribution. The Adam optimizer and categorical cross-entropy loss were used to train the model for 60 epochs with a batch size of 4 and a starting learning rate of 1e−5. The evaluation metric was accuracy. Two ankle models were trained independently.
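
The listed augmentations map naturally onto Keras's ImageDataGenerator. The sketch below is illustrative only: the protocol names the operations but not their magnitudes, so the numeric ranges are placeholders:

```python
from tensorflow.keras.preprocessing.image import ImageDataGenerator

# Placeholder ranges; the protocol does not report the exact values used.
train_gen = ImageDataGenerator(
    zoom_range=0.1,          # zoom
    rotation_range=10,       # rotation (degrees)
    width_shift_range=0.1,   # width shifting
    height_shift_range=0.1,  # height shifting
    shear_range=0.1,         # shear transformation
    horizontal_flip=True,    # horizontal flipping
)

# Class imbalance would be handled via per-class weights at fit time, e.g.:
# model.fit(train_flow, epochs=60, class_weight={0: 1.0, 1: 2.5, 2: 1.8})
```
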
5

3D CNN-based Jaw Localization and Canal Segmentation

For this study, all the CNN architectures were implemented using the Keras framework [35] with TensorFlow [36] as the back-end. We performed our experiments on two powerful NVIDIA Titan RTX GPUs, each with 4,608 CUDA cores and 24 GB GDDR6 SDRAM. The batch size was set to 2 for both models, and the proposed architecture was optimized with the Adam optimizer. The learning rate was set to 1 × 10−5. To reduce training time and use the GPUs efficiently, we used 10 to 20 percent of the 3D CBCT scans to train the models for jaw localization and canal segmentation. The input size for training the jaw-localization model was kept at 128 × 128 × 128. After localizing the jaw, the 3D images were cropped and resized to three fixed sizes, as mentioned in Section 3.4.2. We used the Dice loss function, Equation (1), to calculate the loss. Labels are the segmentation annotations of the images, with 0 as background and 1 as foreground. We trained the localization model for 50 epochs and the 3D segmentation models for 80 epochs, keeping the learning rate low in order to train a generalized model. The batch size, epochs, and learning rate were adjusted as needed.
DiceLoss = 1 − DiceCoefficient
where the Dice coefficient is given by Equation (6).
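
A common Keras/TensorFlow implementation of this loss looks like the sketch below. This is a generic formulation with an assumed smoothing term, not the authors' code:

```python
import tensorflow as tf

def dice_coefficient(y_true, y_pred, smooth=1e-6):
    # Flatten the binary masks (0 = background, 1 = foreground) and
    # compute 2*|A∩B| / (|A| + |B|), with smoothing for empty masks.
    y_true_f = tf.reshape(y_true, [-1])
    y_pred_f = tf.reshape(y_pred, [-1])
    intersection = tf.reduce_sum(y_true_f * y_pred_f)
    return (2.0 * intersection + smooth) / (
        tf.reduce_sum(y_true_f) + tf.reduce_sum(y_pred_f) + smooth
    )

def dice_loss(y_true, y_pred):
    # DiceLoss = 1 - DiceCoefficient, as in Equation (1).
    return 1.0 - dice_coefficient(y_true, y_pred)
```
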
6

Efficient Tumor Segmentation with PyTorch

All experiments were conducted on the PyTorch platform with two NVIDIA TITAN RTX GPUs (24 GB each). We used the Adam optimizer to optimize all networks. The initial learning rates of the whole-breast and tumor segmentation models were 0.002 and 0.001, respectively, and the learning rate decayed by half every 50 epochs. A total of 300 epochs was set for each task. We computed the training loss over 10-epoch windows to determine convergence. When testing with the well-trained segmentation models, we used sliding windows to crop overlapping patches, with a stride of half the patch size, and then averaged the overlapping patches to obtain the final results.
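
The sliding-window inference with overlap averaging could be sketched as follows (shown in 2D for brevity; `predict` stands in for the trained segmentation network, and the patch size is a placeholder):

```python
import numpy as np

def sliding_window_inference(image, predict, patch=128):
    # Stride is half the patch size, per the protocol.
    stride = patch // 2
    h, w = image.shape
    out = np.zeros((h, w), dtype=np.float32)
    count = np.zeros((h, w), dtype=np.float32)
    for y in range(0, max(h - patch, 0) + 1, stride):
        for x in range(0, max(w - patch, 0) + 1, stride):
            tile = image[y:y + patch, x:x + patch]
            out[y:y + patch, x:x + patch] += predict(tile)
            count[y:y + patch, x:x + patch] += 1
    # Average the predictions wherever patches overlapped.
    return out / np.maximum(count, 1)
```
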
7

Large Language Model Training Protocol

All of the experiments were conducted on a machine with a 48-core Intel Xeon Platinum 8369 (Cooper Lake) processor and 192 GB of memory. The model was trained with the PyTorch framework on 4 NVIDIA Titan RTX GPUs.
8

Fast.ai Deep Learning Protocol

Final models were generated using the fast.ai v1.0.55 library (https://github.com/fastai/fastai) with PyTorch on two NVIDIA TITAN RTX GPUs. Initial experiments were conducted using NVIDIA Tesla V100s, NVIDIA Quadro P6000s, NVIDIA Quadro M5000s, NVIDIA Titan Vs, NVIDIA GeForce GTX 1080s, or NVIDIA GeForce RTX 2080Ti GPUs.
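
For reference, a typical fast.ai v1.0.x workflow looks like the sketch below. The data path, architecture, transforms, and metric are placeholders rather than the authors' configuration:

```python
from fastai.vision import *  # fast.ai v1.x idiomatic star import

# Placeholder dataset layout: data/train and data/valid with per-class folders.
data = ImageDataBunch.from_folder("data/", train="train", valid="valid",
                                  ds_tfms=get_transforms(), size=224)
learn = cnn_learner(data, models.resnet34, metrics=accuracy)
learn.fit_one_cycle(5)  # one-cycle policy for 5 epochs
```
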
9

Training Deep Learning Models for Medical Imaging

For training and testing, we used the PyTorch [51] deep-learning framework on 8 NVIDIA TITAN RTX GPUs. The Adam optimizer [41] with a weight decay of 0.0001 was used to train the CT-Nets. The initial learning rate was set at 0.0005, and the learning rate decayed by a factor of 10 after the 35th, 40th, and 43rd epochs. All models were trained for 45 epochs. Owing to the restricted GPU memory, the batch sizes on each GPU were set to 16 for the abnormality model and 8 for the disease model.
To train the CXR-Nets, an Adam optimizer with a weight decay of 0.0001 was used. The initial learning rate was set at 0.0005, and the learning rate decayed by a factor of 10 after the 25th and 35th epochs. All models were trained for 45 epochs. Owing to the restricted GPU memory, the batch sizes on each GPU were set to 128 for the abnormality model and 64 for the disease model.
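
The stepwise decay described for the CT-Nets matches PyTorch's MultiStepLR scheduler. A minimal sketch, with `model` as a placeholder:

```python
import torch
from torch.optim import Adam
from torch.optim.lr_scheduler import MultiStepLR

model = torch.nn.Linear(8, 2)  # stand-in for a CT-Net
optimizer = Adam(model.parameters(), lr=0.0005, weight_decay=0.0001)
# Decay the LR by 10x after the 35th, 40th, and 43rd epochs.
scheduler = MultiStepLR(optimizer, milestones=[35, 40, 43], gamma=0.1)

for epoch in range(45):
    # ... train one epoch ...
    scheduler.step()
```

For the CXR-Nets, the same pattern would apply with milestones=[25, 35].
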
