
GTX 1080 Ti GPU

Manufactured by NVIDIA

The NVIDIA GTX 1080 Ti is a high-performance graphics processing unit (GPU) designed for professional and enthusiast-level computing. It features 3,584 CUDA cores, a base clock speed of 1,480 MHz, and 11 GB of GDDR5X video memory. The GTX 1080 Ti is capable of powering demanding workloads such as video editing, 3D rendering, professional visualization, and, as in the protocols below, deep learning model training.


21 protocols using the GTX 1080 Ti GPU

1

Cardiac MRI Class Imbalance Mitigation

To solve the class-imbalance problem in multi-slice cardiac MR images, a patch of size 128 × 128 was extracted around the LV center from each full-sized cardiac MR image, and slice-wise normalization of voxel intensities was performed. The dataset was divided into 70% training, 15% validation, and 15% testing data, with five non-overlapping folds for cross-validation. Networks implemented in PyTorch were initialized with the He normal initializer [11] and trained for 100,000 epochs with a batch size of 16. We used the Adam optimizer with a learning rate of 0.001 and a decay rate of 0.1 applied after every 25,000 steps. All experiments were performed on a workstation equipped with two NVIDIA GTX 1080 Ti GPUs (11 GB of memory each).
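A minimal PyTorch sketch of the stated optimization setup (He normal initialization, Adam with a learning rate of 0.001, and a 0.1 decay every 25,000 steps); the placeholder network and the single-channel input are illustrative assumptions, not the authors' architecture.

```python
import torch
from torch import nn, optim

def init_he_normal(module):
    # He (Kaiming) normal initialization for convolutional and linear layers.
    if isinstance(module, (nn.Conv2d, nn.Linear)):
        nn.init.kaiming_normal_(module.weight)
        if module.bias is not None:
            nn.init.zeros_(module.bias)

# Placeholder network; the real architecture is not specified in the excerpt.
model = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.Conv2d(16, 4, 1))
model.apply(init_he_normal)

optimizer = optim.Adam(model.parameters(), lr=1e-3)
# Multiply the learning rate by 0.1 every 25,000 optimization steps.
scheduler = optim.lr_scheduler.StepLR(optimizer, step_size=25000, gamma=0.1)
```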
2

Efficient Particle Resampling Techniques

Particles are resampled by multinomial resampling using the algorithm introduced in [48], which allows a sorted list of random numbers to be drawn in a single step. Alternative resampling schemes can also be implemented. For instance, residual and stratified resampling dominate multinomial resampling in terms of conditional variance [49]. We found that, for artificial data, the stratified method leads to faster convergence of the estimates of N, q, and σ, whereas the p and τ_D estimates show no significant difference in convergence rate (S8B Fig); however, it does not significantly improve the information gain (S8A Fig).
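For illustration, a NumPy sketch of the two resampling schemes mentioned above (multinomial and stratified); the particle weights are assumed to be normalized, and this is not the authors' implementation.

```python
import numpy as np

def multinomial_resample(weights, rng=None):
    # Draw M indices independently with probability proportional to the weights.
    if rng is None:
        rng = np.random.default_rng()
    M = len(weights)
    return rng.choice(M, size=M, p=weights)

def stratified_resample(weights, rng=None):
    # One uniform draw per equal-width stratum of [0, 1); this has lower
    # conditional variance than multinomial resampling.
    if rng is None:
        rng = np.random.default_rng()
    M = len(weights)
    positions = (rng.random(M) + np.arange(M)) / M
    return np.searchsorted(np.cumsum(weights), positions)
```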
Algorithm 2 is slightly modified in Fig 3 for the "ESB-BAL (exact)" simulations, in which Eq 13 is computed using MC samples instead of the point-based simplifications explained in Eqs 14 and 15. Samples used to compute the expectation over θ are drawn from the current posterior distribution p(θ | h_t), i.e. by random sampling from the pool of particles {θ_t^i}, i ∈ {1, …, M_out}. For each of these samples, and for each candidate next input x_{t+1} in S_{t+1}:
Unless otherwise specified, all the simulation results were obtained with M_out = 1024 and M_in = 256 particles, and were run using a commercially available NVIDIA GTX 1080 Ti GPU.
3

Comparative Evaluation of Classification Models

We chose Adam as the optimizer [59], with the weight decay set to 0.0005. The proposed model is trained for 200 epochs with a learning rate of 0.001. The input dimension is determined by the dataset we used, as is the dimension of the encoded feature. We report the results of 10-fold cross-validation for the classification process: the dataset is randomly divided into 10 parts, 9 of which are used for training and 1 for testing, and the process is repeated 10 times, each time using different test data. The final performance is the mean and variance of the results of the 10 experiments. To prevent overfitting, we designed a data augmentation strategy that shuffles the order of the different detection vectors during training. For the three traditional methods, we used the widely used machine learning package scikit-learn [60]. For GradientBoosting, we used learning_rate=0.05, n_estimators=50000, and subsample=1.0. For Random Forest, we set n_estimators=30, max_depth=10, min_samples_split=2, and min_samples_leaf=1. For the Multi-Layer Perceptron, we set solver='adam', activation='logistic', alpha=1e-3, and hidden_layer_sizes=(40, 4). For IE-Net, we use a threshold of 0.5 to determine the predicted results during evaluation. Our code is implemented on the PyTorch platform [61], and all experiments were run on an NVIDIA GTX 1080 Ti GPU.
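A short scikit-learn sketch of the three traditional baselines with the hyperparameters listed above; dataset loading, the 10-fold split, and the IE-Net model itself are omitted.

```python
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.neural_network import MLPClassifier

# Baselines configured as stated in the protocol excerpt.
gb = GradientBoostingClassifier(learning_rate=0.05, n_estimators=50000, subsample=1.0)
rf = RandomForestClassifier(n_estimators=30, max_depth=10,
                            min_samples_split=2, min_samples_leaf=1)
mlp = MLPClassifier(solver="adam", activation="logistic", alpha=1e-3,
                    hidden_layer_sizes=(40, 4))
```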
4

Generative Adversarial Network for Maize Tassel Synthesis

In this section, we discuss in detail the proposed maize tassel dataset generation method, TasselGAN, which consists of generating maize tassel data and sky data and then merging the two to form a field-like dataset. Natural scenes are complex, so to simplify the synthesis problem the foreground and background are generated separately. This approach is also used in [24–26].
We begin with a brief overview of GANs. Recent deep-learning techniques for generating data include the variational autoencoder (VAE) [27] and DC-GAN [23]. However, GANs have been shown to produce visually more appealing results than VAEs [7]. Hence, in our method, we used modified DC-GAN architectures to separately generate maize tassel and sky background data. We trained our networks using an NVIDIA GTX 1080 Ti GPU.
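As a rough illustration of a DC-GAN-style generator (not the exact TasselGAN architecture), a PyTorch sketch using transposed convolutions with batch normalization, ReLU activations, and a tanh output; the latent size of 100 and the 32 × 32 three-channel output are assumptions.

```python
import torch
from torch import nn

class Generator(nn.Module):
    # Project a latent vector to a 4x4 feature map, then upsample with
    # strided transposed convolutions to a 32x32 image.
    def __init__(self, z_dim=100, channels=3, base=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose2d(z_dim, base * 4, 4, 1, 0, bias=False),
            nn.BatchNorm2d(base * 4), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(base * 4, base * 2, 4, 2, 1, bias=False),
            nn.BatchNorm2d(base * 2), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(base * 2, base, 4, 2, 1, bias=False),
            nn.BatchNorm2d(base), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(base, channels, 4, 2, 1, bias=False),
            nn.Tanh(),
        )

    def forward(self, z):
        # z: (batch, z_dim) latent vectors.
        return self.net(z.view(z.size(0), -1, 1, 1))
```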
5

Efficient Deep Learning Model Training

We implemented our methodology using PyTorch. During the pre-processing step, we cropped the unwanted black border from each video frame. Images were normalized by subtracting their mean and dividing by their standard deviation (i.e., converted to z-scores). Batch normalization was used before each weighted layer, as it re-parameterizes the underlying gradient optimization problem and helps training converge faster [22]. For training, we used the Adam optimizer with a learning rate of 0.00001. We did not use dropout, as it degraded validation performance in our case. All models were trained for 100 epochs. The training set was shuffled before each epoch, and the batch size was 4. All experiments were run on a machine equipped with an NVIDIA GTX 1080 Ti GPU (11 GB of memory).
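A minimal sketch of the described pre-processing, assuming a fixed border width (the actual crop size is not given) and frames stored as NumPy arrays; the helper name is hypothetical.

```python
import numpy as np

def preprocess(frame, border=16):
    # Crop the black border (width assumed) and z-score normalize the frame.
    cropped = frame[border:-border, border:-border]
    return (cropped - cropped.mean()) / (cropped.std() + 1e-8)
```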
6

DNGAN Model Training and Optimization

For the DNGAN training, we randomly initialized all the kernel weights. We set the mini-batch size to 512 and λ_D to 10, as suggested in [31], and λ_adv to 10^−2. The denoiser and the discriminator were trained alternately, with three discriminator updates for every denoiser update. Both the discriminator and the denoiser used the Adam optimizer [56] and shared the same learning rate. In the first training stage, the learning rate started at 10^−3 and dropped by a factor of 0.8 every 10 epochs; in the fine-tuning stage, it started at 10^−4 and dropped by a factor of 0.8 every 50 epochs. We selected 300 epochs for stage-one training and 1,000 epochs for fine-tuning. The selection of λ_adv is shown in Section III.A below. The other parameters (batch size, learning rate, and number of epochs) were chosen experimentally based on training convergence and efficiency, as shown in Section S-VI of the Supplementary Materials. The DCNN model was implemented in Python 2.7 and TensorFlow 1.4.1. Training was run on one NVIDIA GTX 1080 Ti GPU.
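The two learning-rate schedules described above can be written compactly as follows; this is a sketch with a hypothetical helper name, and the framework-specific TensorFlow 1.4.1 implementation is not reproduced here.

```python
def dngan_learning_rate(epoch, fine_tuning=False):
    # Stage one: start at 1e-3 and multiply by 0.8 every 10 epochs.
    # Fine-tuning: start at 1e-4 and multiply by 0.8 every 50 epochs.
    if fine_tuning:
        return 1e-4 * 0.8 ** (epoch // 50)
    return 1e-3 * 0.8 ** (epoch // 10)
```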
7

Addressing Class Imbalance in Cardiac MRI

To solve the class-imbalance problem in multi-slice cardiac MR images, a patch of size 128 × 128 was extracted around the LV center from each full-sized cardiac MR image, and slice-wise normalization of voxel intensities was performed. The dataset was divided into 70% training, 15% validation, and 15% testing data, with five non-overlapping folds for cross-validation. Networks implemented in PyTorch were initialized with the He normal initializer [16] and trained for 100,000 epochs with a batch size of 16. We used the Adam optimizer with a learning rate of 0.001 and a decay rate of 0.1 applied after every 25,000 steps. All experiments were run on a workstation equipped with two NVIDIA GTX 1080 Ti GPUs (11 GB of memory each).
8

Neural Network Noise Simulation Training

Using our simulation, we created a dataset consisting of 40 batches, each containing 4 × 1000 crops. Three of these sub-batches are used for training and one for evaluation. While the noise simulations are completely random for each crop, sigma differs between batches and lies in the range σ ∈ [175, 185]. The network is trained for 150 iterations followed by one evaluation cycle, in which we compute the JI, the RMSE, and a validation loss. We used an Adam optimizer with a learning rate of 10^−4. Neural networks were implemented in TensorFlow 2 and trained on an NVIDIA GTX 1080 Ti GPU.
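For reference, a NumPy sketch of the two evaluation metrics, assuming JI denotes the Jaccard index computed on binarized masks; this is not the authors' code.

```python
import numpy as np

def jaccard_index(pred, target):
    # Intersection over union of two binary masks.
    pred, target = pred.astype(bool), target.astype(bool)
    union = np.logical_or(pred, target).sum()
    return np.logical_and(pred, target).sum() / union if union else 1.0

def rmse(pred, target):
    # Root-mean-square error between prediction and target.
    return np.sqrt(np.mean((np.asarray(pred) - np.asarray(target)) ** 2))
```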
9

Deep Learning Language Models for NLP

Our language model is a sequential model implemented using Keras in Python 3. The embedding layer has an encoding dimension of 64. It is followed by a convolutional layer with a kernel size of 6 and a pooling layer with a pool size of 5. The LSTM layer has an output dimension of 30, and the dropout rate is 0.2. We used ReLU as the activation function for the convolutional and LSTM layers, and sigmoid for the output layer. We chose binary cross-entropy as the loss function and Adam as the optimizer.
The sDAE-like model is an autoencoder, also implemented using Keras in Python 3. The encoder has an embedding layer and an LSTM layer. The decoder also has an embedding layer and an LSTM layer, plus a softmax layer as output. We chose a 200-dimensional embedding layer. We used categorical cross-entropy as the loss function and RMSProp as the optimizer. We trained our models using a single NVIDIA GTX 1080 Ti GPU. Training takes less than 3 minutes for the language model and 10 to 20 minutes for the sDAE-like model.
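A minimal Keras sketch of the language model described above; the vocabulary size and the number of convolutional filters are not given in the text and are placeholder assumptions.

```python
from tensorflow import keras
from tensorflow.keras import layers

VOCAB_SIZE = 10000   # placeholder: not specified in the text
NUM_FILTERS = 64     # placeholder: not specified in the text

model = keras.Sequential([
    layers.Embedding(VOCAB_SIZE, 64),                          # encoding dimension 64
    layers.Conv1D(NUM_FILTERS, kernel_size=6, activation="relu"),
    layers.MaxPooling1D(pool_size=5),
    layers.LSTM(30, activation="relu"),                        # output dimension 30
    layers.Dropout(0.2),
    layers.Dense(1, activation="sigmoid"),
])
model.compile(loss="binary_crossentropy", optimizer="adam")
```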
10

Skin Lesion Detection with YOLOv3

YOLOv3 [70] was trained on the ISBI 2017 dataset as the lesion detection part of our system. The dataset was split into training and validation sets, and the final detection performance of the system was evaluated on two different datasets (ISBI 2017 and PH2). Studies have demonstrated the effectiveness of transfer learning in deep networks [71,72], so in the training phase we used weights pretrained on the ImageNet dataset [73]. YOLOv3 was then fine-tuned and re-trained on the skin lesion images. The training parameters of YOLOv3 were set as follows: batch size = 64, subdivisions = 16, momentum = 0.9, decay = 0.0005, learning rate = 0.001. YOLOv3 was trained for 50,000 epochs, and the network weights were saved every 10,000 epochs. Test results showed that the weights saved at the 10,000th epoch were the most successful at detecting the location of the lesion in the image. All implementations and computations were performed on a PC with two Intel Xeon processors, 64 GB RAM, an NVIDIA GTX 1080 Ti GPU, and the Ubuntu 14.04 operating system. The Python and C programming languages and the OpenCV image processing framework were used to develop the system.
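The original training used the Darknet YOLOv3 framework; purely as an illustration, the stated hyperparameters map onto a PyTorch SGD configuration as follows. The stand-in model and the checkpoint naming are hypothetical, not the actual detector.

```python
import torch
from torch import nn, optim

model = nn.Conv2d(3, 16, kernel_size=3)  # stand-in module, not YOLOv3 itself
optimizer = optim.SGD(model.parameters(), lr=0.001, momentum=0.9, weight_decay=0.0005)

# Batch size 64 with 16 subdivisions corresponds to 4 images per forward pass,
# with gradients accumulated before each weight update.
for step in range(1, 50001):
    # ... forward pass on a mini-batch, loss.backward(), optimizer.step() ...
    if step % 10000 == 0:
        torch.save(model.state_dict(), f"yolov3_step_{step}.pt")
```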

