The largest database of trusted experimental protocols

GTX 1080 GPU

Manufactured by NVIDIA

The GTX 1080 is a high-performance graphics processing unit (GPU) designed and manufactured by NVIDIA. It is based on the Pascal architecture and features 8 GB of GDDR5X video memory, making it suitable both for graphics rendering and for general-purpose GPU computing such as deep-learning training.

Automatically generated - may contain errors

15 protocols using GTX 1080 GPU

1. Training Neural Network Language Models

All three models were trained on a machine with 32 GB of RAM, an 8-core 3.0 GHz Intel Xeon processor, and an NVIDIA GTX 1080 GPU. To train the recurrent neural networks, we used the open-source machine-learning framework TensorFlow (Abadi et al., 2016), and to train Skip-gram, we used Gensim (Rehurek and Sojka, 2010), a free Python library which provides APIs for a wide variety of semantic models. The code, training corpus, and test materials are available at https://github.com/phueb/rnnlab. Using a mini-batch size of 64, training one LSTM and one SRN on the GPU takes approximately 3 and 2.5 h, respectively. Using 4 CPU cores in parallel, Skip-gram completed training in less than 5 min.
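The Skip-gram objective that Gensim optimizes can be illustrated by how training pairs are extracted from a sentence: every word within a fixed window of the center word becomes one (center, context) example. A minimal pure-Python sketch of this pair extraction (not Gensim's implementation, which also handles negative sampling and subsampling):

```python
def skipgram_pairs(tokens, window=2):
    """Generate (center, context) training pairs as in Skip-gram.

    For each position, every word within `window` tokens of the
    center word yields one positive training example.
    """
    pairs = []
    for i, center in enumerate(tokens):
        lo = max(0, i - window)
        hi = min(len(tokens), i + window + 1)
        for j in range(lo, hi):
            if j != i:
                pairs.append((center, tokens[j]))
    return pairs

# A three-word sentence with window=1:
pairs = skipgram_pairs(["the", "cat", "sat"], window=1)
# → [('the', 'cat'), ('cat', 'the'), ('cat', 'sat'), ('sat', 'cat')]
```

Gensim's Word2Vec class performs this extraction internally; selecting the Skip-gram variant (rather than CBOW) is a constructor flag.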
2. Transfer Learning for Structural Restoration

Directly applying a model trained on one specific structure to other structures may produce significant artifacts (Supplementary Fig. 8), which means that each target needs a unique model. In theory, we need to prepare ~1000 training samples and train the network for 2–3 days (~2000 epochs) on a consumer-level graphics card (NVIDIA GTX 1080 GPU) to get a working model for each structure we tested. We adopted transfer learning [20] to reduce the effort of imaging new structures. Briefly, we took the parameters obtained from a pre-trained network to initialize a new network and retrained it on a different structure with a smaller training set (200 cropped patches). We validated the effectiveness of transfer learning in restoring different structures. Even with reduced training effort (200 epochs), the new model produced results comparable to the model trained with a much larger dataset and greater training effort (Supplementary Fig. 9).
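The essence of the transfer-learning step above is that the new network's parameters start from the pre-trained values rather than from random initialization, so far fewer epochs and samples are needed. A minimal pure-Python sketch of this initialization (parameter names and values are hypothetical, not the authors' network):

```python
import random

def init_from_pretrained(pretrained, noise=0.0):
    """Initialize a new parameter dict from pre-trained weights
    instead of random values (the core of transfer learning).
    An optional small uniform perturbation can be added.
    """
    return {name: [w + random.uniform(-noise, noise) for w in weights]
            for name, weights in pretrained.items()}

# Hypothetical parameters learned on the first structure:
pretrained = {"conv1": [0.12, -0.34], "conv2": [0.56, 0.78]}

# The new network starts from these weights, then is retrained
# for only ~200 epochs on ~200 cropped patches of the new structure.
new_params = init_from_pretrained(pretrained)
```

In a framework like Keras or PyTorch this corresponds to loading a saved checkpoint into a freshly built model before fine-tuning.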
3. Deep Learning for Biological Image Recognition

We trained the designed deep convolutional network with about 5,000 manually labeled samples (half of them positive). The network training was implemented in Keras (2018) with a TensorFlow (2018) backend. Backpropagation with mini-batch stochastic gradient descent was used during training. A mini-batch size of 60, a learning rate of 10−2 with a decay of 10−6, and a momentum of 0.9 were adopted. The network reached the desired accuracy after approximately 50 epochs of training on one NVIDIA GTX 1080 GPU, taking about 1 day.
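The optimizer configuration above (SGD with momentum 0.9, learning rate 10−2, time-based decay 10−6) can be sketched as a single parameter update. This is a generic illustration of the Keras-style update rule, not the authors' code; the decayed rate follows lr/(1 + decay·t):

```python
def sgd_momentum_step(w, v, grad, iteration,
                      lr=1e-2, decay=1e-6, momentum=0.9):
    """One SGD-with-momentum parameter update using the
    Keras-style time-based decay lr_t = lr / (1 + decay * t)."""
    lr_t = lr / (1.0 + decay * iteration)
    v_new = momentum * v - lr_t * grad   # velocity accumulates history
    return w + v_new, v_new

# First update of a scalar weight with gradient 0.5:
w, v = sgd_momentum_step(w=1.0, v=0.0, grad=0.5, iteration=0)
# w → 1.0 - 0.01 * 0.5 = 0.995
```

With decay 10−6, the effective learning rate shrinks only slowly over the tens of thousands of mini-batches in 50 epochs.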
Sample augmentation is a common technique in the deep-learning domains of computer vision and biological image recognition. Its purpose is to add variability to the samples, thus improving the robustness, such as rotation invariance and noise immunity, of the learned networks. We introduced sample augmentation into our training data as follows. Rotation: rotate a sample by 90, 180, or 270 degrees. Noise: add Gaussian noise, salt-and-pepper noise, or Poisson noise to a sample. Shifting: shift a sample in the x-y dimension by [1, 1], [1, −1], [−1, 1], or [−1, −1]. Scaling: scale a sample by a factor of 1.2 or 0.82. Transforming gray levels: multiply the image gray intensity by a random coefficient within limits.
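Three of the augmentations listed (rotation, shifting, gray-level transformation) can be sketched in pure Python on a small image stored as a list of rows. These are generic illustrations, not the authors' implementation:

```python
def rotate90(img):
    """Rotate a 2-D image (list of rows) 90 degrees clockwise."""
    return [list(row) for row in zip(*img[::-1])]

def shift(img, dy, dx, fill=0):
    """Shift an image by (dy, dx) pixels, padding with `fill`."""
    h, w = len(img), len(img[0])
    out = [[fill] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w:
                out[ny][nx] = img[y][x]
    return out

def scale_gray(img, coeff):
    """Multiply all gray intensities by a coefficient."""
    return [[p * coeff for p in row] for row in img]

img = [[1, 2],
       [3, 4]]
rotated = rotate90(img)        # [[3, 1], [4, 2]]
shifted = shift(img, 1, 1)     # [[0, 0], [0, 1]]
```

Applying each transform to every labeled sample multiplies the effective size of the training set without additional manual labeling.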
4. AI-Aided Model Training Protocol

We utilize an NVIDIA GTX 1080 GPU to train the proposed AI-aided model and apply the Adam optimizer [30] to update network weights during backpropagation. The learning rate is set to 0.001. During training, focal loss is selected as the loss function with parameter γ = 2. The model weights are initialized with Kaiming uniform initialization. The batch size is set to 128. The whole training process runs for 50 epochs, and the final model is selected based on the best validation results obtained during optimization.
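Focal loss with γ = 2 down-weights well-classified examples so training concentrates on hard ones. A minimal sketch of the binary form, FL(p_t) = −(1 − p_t)^γ · log(p_t), in pure Python (illustrative only, not the authors' implementation):

```python
import math

def focal_loss(p, y, gamma=2.0):
    """Binary focal loss for one predicted probability p in (0, 1)
    with label y in {0, 1}. The (1 - p_t)**gamma factor shrinks the
    contribution of confidently correct predictions."""
    p_t = p if y == 1 else 1.0 - p
    return -((1.0 - p_t) ** gamma) * math.log(p_t)

# A confident correct prediction contributes almost nothing...
easy = focal_loss(0.9, 1)
# ...while a badly misclassified one keeps nearly its full weight.
hard = focal_loss(0.1, 1)
```

With γ = 0 the expression reduces to plain cross-entropy; γ = 2 is the value used here.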
5. Evaluating Auto-Segmentation Models

To evaluate the effect of the slice classification model, we developed two auto-segmentation models: a segmentation model with a slice classification model (two-step segmentation model) and a segmentation model without one (segmentation-only model). The deep learning networks were constructed and implemented with Keras [17] using a TensorFlow [18] backend. All computations were performed on a computer with an Intel Core i7-7700 CPU, a 4 TB hard disk, 64 GB of RAM, and an Nvidia GTX 1080 GPU.
6. Optimized CNN for Image Classification

The experiment runs on an Nvidia GTX 1080 GPU and is implemented with Keras 2.2.5. The categorical cross-entropy loss function, which measures the difference between the true and predicted labels, is adopted to train the CNN model. For the network optimizer, Adam was chosen to adaptively adjust the learning rate from an initial value of 0.0003. In addition, we use the ReduceLROnPlateau callback to lower the learning rate when validation accuracy plateaus, with the lower bound of the learning rate set to 0.0001 and the patience set to 10 epochs. The data were split with 30% held out for validation, and the training set was trained with a batch size of 8 per epoch. Using softmax as the classifier, the checkpoint with the best accuracy was selected as the final model.
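The ReduceLROnPlateau logic above can be sketched as a plain function: if the monitored validation accuracy has not improved for `patience` epochs, multiply the learning rate by a reduction factor, never going below `min_lr`. This mirrors the behavior of the Keras callback; the factor of 0.5 is an assumed value (the protocol does not state it, and Keras defaults to 0.1):

```python
def reduce_lr_on_plateau(lr, val_history, factor=0.5,
                         patience=10, min_lr=1e-4):
    """Return the new learning rate given a history of per-epoch
    validation accuracies. Reduce when the last `patience` epochs
    brought no improvement over the earlier best."""
    if (len(val_history) > patience
            and max(val_history[-patience:]) <= max(val_history[:-patience])):
        return max(lr * factor, min_lr)
    return lr

# Flat validation accuracy for 10 epochs triggers a reduction:
new_lr = reduce_lr_on_plateau(3e-4, [0.8] * 11)   # → 1.5e-4
```

Note the min_lr floor: once the rate reaches 0.0001 it is not reduced further, matching the lower bound stated in the protocol.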
7. Optimal Neural Network Initialization and Training for CryoET

Before training, all the kernels in the neural network are initialized using a uniform distribution of near-zero values, and the offsets are initialized to zero. The log squared residual, log((y − y′)²), between the neural network output and the manual annotation is used as the loss function. Since there is a pooling layer in the network, the manual annotation is shrunk by 2 to match the network output. An L1 weight decay of 10−5 is used for regularization of the training process. No significant overfitting is observed, likely because the high noise level in the CryoET images also serves as a strong regularization factor. To optimize the kernels, we use stochastic gradient descent with a batch size of 20. By default, the neural network is trained for 20 iterations. The learning rate is set to 0.01 in the first iteration and decreased by 10% after each iteration. The training process can be performed on either a GPU or in parallel on multiple CPUs (~10x slower on our testing machine). Training each feature typically takes under 10 minutes on a current-generation GPU, and the resulting network can be used for any tomogram of the same cell type collected under similar conditions. A workstation with 96 GB of RAM, 2× Intel X5675 processors for a total of 12 compute cores, and an Nvidia GTX 1080 GPU was used for all testing.
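The loss function and learning-rate schedule described above are simple to state explicitly. A sketch in pure Python (the small epsilon guarding log(0) is my addition, not stated in the protocol):

```python
import math

def log_squared_residual(y, y_pred, eps=1e-12):
    """Loss used here: log of the squared residual between the
    network output and the manual annotation.
    eps (an assumed safeguard) avoids log(0) on exact matches."""
    return math.log((y - y_pred) ** 2 + eps)

def learning_rate(iteration, lr0=0.01, decay=0.9):
    """Rate starts at 0.01 and is cut by 10% after each iteration."""
    return lr0 * decay ** iteration

# After 20 iterations the rate has fallen to about 12% of lr0:
final_lr = learning_rate(19)
```

Because the loss is logarithmic in the squared error, large residuals are penalized only gently, which makes training more tolerant of the heavy noise in CryoET annotations.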
8. Deep Learning-based Mouse Behavior Analysis

We used a Dell XPS 8930 workstation (Intel Core i7-8700K, 16 GB DDR4 RAM, 512 GB SSD, 2 TB HDD, Nvidia GTX 1080 GPU) to implement the DLC-based approach and to train the machine learning classifiers. We investigated the labeling, training, and analysis times of networks that use different numbers of labeled points. It takes an experienced experimenter ~5 min to label 20 frames with 18 points of interest (13 labels on the mouse and 4 or more labels on the arena, depending on its complexity). Using the same computer described above, the network then trains overnight (ca. 11 h), and a 10-min video (928 × 576 pixels, 25 fps) is analyzed in ca. 9 min (see Supplementary Table S1). However, analysis/processing speed depends heavily on the hardware used, with GPU type and pixel number/frame size being of great importance [36].
9. Real-Time GPU-Accelerated Multi-Thresholding

The system uses an NVIDIA GTX 1080 GPU with a parallel implementation of the software developed in the NVIDIA Compute Unified Device Architecture (CUDA), Toolkit 9.0. A parallel prefix sum algorithm is used to reduce the run time of the multi-thresholding algorithm by several orders of magnitude. Three images were processed 1000 times, and the average speed was taken for both the CPU and GPU implementations to establish the time advantage of the parallel approach. The results show the algorithm is up to 10,000 times faster with the GPU method, producing the same threshold values. A trial run of 50 images gave a total average loop time of 100 ms and a maximum of 175 ms per frame with one detection module in use.
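The reason a prefix sum accelerates multi-thresholding: once cumulative counts and cumulative intensity sums of the histogram are available, the pixel count and mean of any candidate intensity range become O(1) lookups, so all threshold combinations can be scored cheaply. A sequential pure-Python sketch of that idea (the GPU version computes the same prefix sums in parallel with CUDA):

```python
from itertools import accumulate

def prefix_sums(hist):
    """Cumulative count and cumulative intensity sum of a histogram.
    These two arrays let any range statistic be read in O(1)."""
    counts = list(accumulate(hist))
    sums = list(accumulate(i * h for i, h in enumerate(hist)))
    return counts, sums

def range_mean(counts, sums, a, b):
    """Mean intensity of pixels whose value lies in [a, b]."""
    n = counts[b] - (counts[a - 1] if a > 0 else 0)
    s = sums[b] - (sums[a - 1] if a > 0 else 0)
    return s / n if n else 0.0

hist = [4, 0, 2, 2]                 # toy 4-bin intensity histogram
counts, sums = prefix_sums(hist)
upper_mean = range_mean(counts, sums, 2, 3)   # → 2.5
```

Scoring every candidate threshold against these arrays is what becomes embarrassingly parallel on the GPU, hence the large measured speedup.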
An in-field calibration method allows the user to optimise the system for different conditions. A background image is taken with no illumination, as are two further images in different positions to allow the user to tune parameters in Equation (1) and the height estimation algorithm. Figure 8 shows the calibration process with the “ideal” processed images output from the tuned parameters.

About PubCompare

Our mission is to provide scientists with the largest repository of trustworthy protocols and intelligent analytical tools, thereby offering them extensive information to design robust protocols aimed at minimizing the risk of failures.

We believe that the most crucial aspect is to grant scientists access to a wide range of reliable sources and new useful tools that surpass human capabilities.

However, we trust in allowing scientists to determine how to construct their own protocols based on this information, as they are the experts in their field.
