
GeForce GTX 1080 Ti GPU

Manufactured by NVIDIA
Sourced in United States

The GeForce GTX 1080 Ti is a high-performance GPU designed for advanced graphics processing and general-purpose (CUDA) compute. It features 3,584 CUDA cores, a boost clock of 1,582 MHz, and 11 GB of GDDR5X video memory with a bandwidth of 484 GB/s, making it well suited to the demanding rendering and deep-learning workloads described in the protocols below.

Automatically generated - may contain errors

40 protocols using the GeForce GTX 1080 Ti GPU

1. DeepLabCut Installation on Windows 10

DeepLabCut (version 2.0.1) was installed on a computer (Intel Core i7-7800X 3.5 GHz CPU, NVIDIA GeForce GTX 1080 Ti GPU, quad-core 64 GB RAM, Windows 10, manufactured by PC Specialist Ltd.) with an Anaconda virtual environment and was coupled to TensorFlow-GPU (v.1.8.0, with CUDA v.9.01 and cuDNN v.5.4).
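A minimal sanity check of such an environment might look like the sketch below; this is a rough illustration using the TensorFlow 1.x API versions quoted above, not part of the published protocol.

```python
# Rough environment check for a TensorFlow-GPU 1.x install such as the one
# described above (TF 1.8.0 with CUDA/cuDNN); illustrative only.
import tensorflow as tf
from tensorflow.python.client import device_lib

print("TensorFlow version:", tf.__version__)            # expected: 1.8.0
print("Built with CUDA:", tf.test.is_built_with_cuda())
print("GPU available:", tf.test.is_gpu_available())      # TF 1.x API

# A GTX 1080 Ti should appear among the local devices as a GPU.
for dev in device_lib.list_local_devices():
    print(dev.name, dev.device_type)
```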

2. DeepLabCut Installation on Windows

DeepLabCut installation. DeepLabCut (version 2.0.1) was installed on a computer (Intel® Core™ i7-7800X 3.5 GHz CPU, NVIDIA GeForce GTX 1080 Ti GPU, quad-core 64 GB RAM, Windows 10, manufactured by PC Specialist Ltd.) with an Anaconda virtual environment and was coupled to TensorFlow-GPU (v.1.8.0, with CUDA v.9.01 and cuDNN v.5.4).

3. End-to-End Deep Learning for Autonomous Cytopathology

The proposed end-to-end DL-based system consisted of learning and application sections (Fig. 1c). The learning section included two main parts: cell detection by DetectionNet and cell classification by ClassificationNet. These two models were trained individually and independently. In the application section, the DetectionNet and ClassificationNet models were combined to perform cell detection and classification sequentially, thus achieving autonomous cytopathology interpretation. The deep learning architectures and experiments were implemented in MATLAB 2019b using an NVIDIA GeForce GTX 1080 Ti GPU with 11 GB memory.
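As a rough sketch of the two-stage design (the protocol itself was implemented in MATLAB 2019b; the model objects and their predict_boxes/predict_label methods below are hypothetical placeholders, not the authors' code):

```python
# Illustrative two-stage pipeline: detect cells first, then classify each
# detected crop. `detection_net` / `classification_net` stand in for the
# trained DetectionNet and ClassificationNet; their predict_boxes and
# predict_label methods are hypothetical placeholders.
from typing import List, Tuple
import numpy as np

Box = Tuple[int, int, int, int]  # (x, y, width, height)

def detect_cells(detection_net, image: np.ndarray) -> List[Box]:
    """Return bounding boxes of candidate cells."""
    return detection_net.predict_boxes(image)                   # placeholder API

def classify_cells(classification_net, image: np.ndarray, boxes: List[Box]) -> List[str]:
    """Crop each detected cell and assign a class label."""
    labels = []
    for (x, y, w, h) in boxes:
        crop = image[y:y + h, x:x + w]
        labels.append(classification_net.predict_label(crop))   # placeholder API
    return labels

def interpret_slide(detection_net, classification_net, image: np.ndarray):
    """Sequential detection then classification, as in the application section."""
    boxes = detect_cells(detection_net, image)
    return list(zip(boxes, classify_cells(classification_net, image, boxes)))
```

Because the two models are trained independently and only composed at inference time, either network can be retrained or replaced without touching the other.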

4. Evaporation Estimation using ML Frameworks

The present study used three ML frameworks for estimating evaporation: Extreme Gradient Boosting (XGB)43, ElasticNet Linear Regression (ElasticNet LR)50, and Long Short-Term Memory (LSTM)45. Training and testing of the machine learning models were carried out using the TensorFlow framework on an NVIDIA GeForce GTX 1080 Ti GPU.
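A minimal sketch of setting up the three model families named above is shown below; all hyperparameters, data shapes, and the use of xgboost/scikit-learn for the non-neural models are assumptions for illustration, not values or libraries reported in the protocol.

```python
# Illustrative setup of the three model families named above.
import numpy as np
from xgboost import XGBRegressor
from sklearn.linear_model import ElasticNet
from tensorflow import keras

X = np.random.rand(1000, 6).astype("float32")   # placeholder meteorological inputs
y = np.random.rand(1000).astype("float32")      # placeholder evaporation target

xgb_model = XGBRegressor(n_estimators=200).fit(X, y)
enet_model = ElasticNet(alpha=0.1, l1_ratio=0.5).fit(X, y)

# The LSTM expects (samples, timesteps, features); a single-timestep window here.
X_seq = X.reshape(-1, 1, X.shape[1])
lstm_model = keras.Sequential([
    keras.layers.LSTM(32, input_shape=(1, X.shape[1])),
    keras.layers.Dense(1),
])
lstm_model.compile(optimizer="adam", loss="mse")
lstm_model.fit(X_seq, y, epochs=10, batch_size=32, verbose=0)
```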

5. Pancreas Segmentation from Abdominal CT Scans

The National Institutes of Health (NIH) pancreas segmentation dataset17, 18 contains 82 contrast‐enhanced abdominal CT volumes and is the most recognized public dataset for pancreas segmentation. Each CT scan has a resolution of 512×512×L, where L varies from patient to patient within the range 181–466. In this work, the CT scans are resized to [208, 208] based on the approximate extent of the pancreas label in the scans, to ensure that each slice contains the complete pancreas region. The CT volumes are randomly split into four folds, where three folds are used for training and the remaining one for testing, that is, 4‐fold cross‐validation. The Dice similarity coefficient (DSC) and the Jaccard index are used to evaluate the similarity between the obtained prediction maps and their corresponding ground truths. In addition, the average symmetric surface distance (ASD) and root‐mean‐squared error (RMSE) are used to determine whether the pancreas edge is well segmented compared with the edge of the ground truths. Our algorithm is implemented in the PyTorch environment,19 and the ADAU‐Net processing is conducted on one NVIDIA GeForce GTX 1080 Ti GPU with 11 GB memory. In the experiment, the Adam optimizer is used with a learning rate of 0.0001 and momentum parameters of 0.9 and 0.99. The networks are optimized from scratch with a batch size of 1.
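For reference, the overlap metrics mentioned above can be computed from binary masks as in the following sketch (generic NumPy code, not the authors' evaluation script):

```python
# Generic computation of the overlap metrics (DSC and Jaccard) from binary
# prediction and ground-truth masks; not the authors' evaluation code.
import numpy as np

def dice_and_jaccard(pred: np.ndarray, gt: np.ndarray, eps: float = 1e-7):
    """pred, gt: binary masks of the same shape (e.g. 208 x 208 slices)."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    intersection = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    dsc = (2.0 * intersection + eps) / (pred.sum() + gt.sum() + eps)
    jaccard = (intersection + eps) / (union + eps)
    return dsc, jaccard

# Toy example on two overlapping rectangles:
pred = np.zeros((208, 208), dtype=np.uint8); pred[50:120, 60:130] = 1
gt = np.zeros((208, 208), dtype=np.uint8); gt[55:125, 60:130] = 1
print(dice_and_jaccard(pred, gt))
```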

6. Efficient Deep Learning Model Evaluation

The computer hardware included an Intel Core i5-8300H CPU @ 2.30 GHz and an NVIDIA GeForce GTX 1080 Ti GPU. The model was trained on a single GPU using Python 3.8 and CUDA 10.2, with an initial learning rate of 0.01. The maximum number of training iterations was set to 50, with a momentum of 0.937. The batch size was set to 48. The performance of the model was evaluated using precision (P), recall (R), average precision (AP), and mean average precision (mAP). The definitions of these metrics are as follows:

P = TP / (TP + FP) × 100%

R = TP / (TP + FN) × 100%

AP = ∫₀¹ P(R) dR × 100%

mAP = (1/2) Σₙ₌₁² APₙ × 100%
where TP, FP, and FN represent true positives, false positives, and false negatives, respectively, with n representing the nth class. In addition, to evaluate the computational capability and inference speed of the model, the number of parameters (Params), floating-point operations (FLOPs), and frames per second (FPS) were used as evaluation indicators.
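A small sketch of how these metrics can be computed is given below; the counts and precision-recall points are toy numbers, and the trapezoidal approximation of the AP integral is an assumption for illustration:

```python
# Toy illustration of the metric definitions above.
import numpy as np

def precision_recall(tp: int, fp: int, fn: int):
    p = tp / (tp + fp) if (tp + fp) else 0.0
    r = tp / (tp + fn) if (tp + fn) else 0.0
    return p, r

def average_precision(precisions, recalls) -> float:
    """Approximate AP = integral of P(R) dR over the sorted recall values."""
    p, r = np.asarray(precisions, float), np.asarray(recalls, float)
    order = np.argsort(r)
    p, r = p[order], r[order]
    return float(np.sum(0.5 * (p[1:] + p[:-1]) * np.diff(r)))

print(precision_recall(tp=80, fp=10, fn=20))       # (P, R) for one class

# Two classes (n = 1, 2), as in the mAP definition above.
ap_per_class = [
    average_precision([1.0, 0.9, 0.8], [0.2, 0.5, 0.9]),
    average_precision([1.0, 0.7, 0.6], [0.3, 0.6, 0.8]),
]
print("mAP = {:.1f}%".format(100 * sum(ap_per_class) / len(ap_per_class)))
```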

7. EDRnet: Mortality Prediction Model

We implemented and trained EDRnet using TensorFlow, version 1.13.1 for graphics processing unit (GPU), and Keras, version 2.2.4 for GPU. NumPy, version 1.16.4; Pandas, version 0.25.3; Matplotlib, version 3.1.2; and scikit-learn, version 0.22.1, were used to build the model and analyze the results. We trained the models with the Adam optimizer and the binary cross-entropy cost function, with a learning rate of 0.0001 and a batch size of 64, on the NVIDIA GeForce GTX 1080 Ti GPU:

L = −(1/N) Σᵢ₌₁ᴺ [yᵢ log p(yᵢ) + (1 − yᵢ) log(1 − p(yᵢ))]

where yᵢ is the label (i.e., 1 for deceased and 0 for survived) and p(yᵢ) is the predicted probability of each patient being deceased, for a batch of N patients.
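A minimal Keras 2.2.x-style training setup matching the optimizer, loss, learning rate, and batch size quoted above might look like this sketch; the network architecture and data are placeholders, not the published EDRnet.

```python
# Keras 2.2.x-style sketch using the settings quoted above; architecture and
# data are placeholders, not the published EDRnet.
import numpy as np
from keras.models import Sequential
from keras.layers import Dense
from keras.optimizers import Adam

X = np.random.rand(512, 20)                  # placeholder input features
y = np.random.randint(0, 2, size=(512, 1))   # 1 = deceased, 0 = survived

model = Sequential([
    Dense(64, activation="relu", input_shape=(20,)),
    Dense(1, activation="sigmoid"),           # p(y_i): predicted probability of death
])
model.compile(optimizer=Adam(lr=1e-4),        # learning rate 0.0001
              loss="binary_crossentropy",     # the cost function above
              metrics=["accuracy"])
model.fit(X, y, batch_size=64, epochs=5, verbose=0)
```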

8. Multi-wavelength Imaging for Oxygenation Analysis

In this work, we performed acquisitions at a sampling frequency Fe = 100 Hz, processing N = 10 images, and modulated two wavelengths to determine oxygenation: λ1 = 665 nm with k1 = 3 (f1 = 30 Hz) and λ2 = 860 nm with k2 = 2 (f2 = 20 Hz).21 The number of images (N = 10) was chosen so that all parameters satisfy the conditions for proper DFT processing (Sec. 2.1) while maintaining an adequate signal-to-noise ratio (SNR) in the acquired images. We implemented a custom rolling-window version of the DFT method in MATLAB with the Parallel Computing Toolbox (using a graphics processing unit, GPU), adding the (N+1)th term of the DFT sum and subtracting its first term in real time. More details regarding the GPU implementation of the processing method are available elsewhere.20
All acquisitions were performed on the imaging system described in the next section. To handle the large flux of data, as well as to control the hardware and perform GPU processing of the acquired images, a personal computer with the following characteristics was used: an Intel i7-7800X 3.5 GHz central processing unit, 16 GB of RAM, four 1 TB solid-state drives for data acquisition, one 500 GB solid-state drive for system operation, and an NVIDIA GeForce GTX 1080 Ti GPU.
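The rolling update described above (add the newest term of the DFT sum, subtract the oldest) can be illustrated with a generic single-bin sliding DFT; this sketch is not the authors' GPU implementation, and the signal is a placeholder chosen to match the sampling parameters quoted above.

```python
# Generic single-bin sliding DFT illustrating the rolling-window update.
import numpy as np

def sliding_dft_bin(x: np.ndarray, N: int, k: int) -> np.ndarray:
    """k-th DFT bin of every length-N window of x, updated recursively."""
    twiddle = np.exp(1j * 2 * np.pi * k / N)
    m = np.arange(N)
    S = np.sum(x[:N] * np.exp(-1j * 2 * np.pi * k * m / N))  # first window, direct DFT
    out = [S]
    for n in range(N, len(x)):
        # Rolling update: add the incoming sample, remove the one leaving the window.
        S = (S + x[n] - x[n - N]) * twiddle
        out.append(S)
    return np.array(out)

# Example with the parameters quoted above: Fe = 100 Hz, N = 10, so a 30 Hz
# modulation falls in bin k = 3.
Fe, N, k = 100.0, 10, 3
t = np.arange(100) / Fe
x = np.cos(2 * np.pi * 30.0 * t)
print(np.abs(sliding_dft_bin(x, N, k))[:3])   # 30 Hz bin magnitude per window
```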

9. Deep Learning Model for Prediction

In our experiments, the Adam optimizer with default parameters was used to train the entire network. The learning rate was initially set to 0.0004 and was decreased in steps of 0.0001 whenever the validation accuracy did not increase for more than 10 epochs; the learning-rate threshold was set to 0.0001. A cross-entropy loss function was used to train the model. Dropout (p = 0.5) was applied to the output filters before advancing to the next BGRU layer to avoid overfitting. Training was stopped once the validation accuracy no longer increased. After about 130 epochs the model converged and its predictive performance stabilized. Our model was implemented in Keras, a publicly available deep-learning framework. Weights in the CRRNN were initialized with default values, and the entire network was trained on a single NVIDIA GeForce GTX 1080 Ti GPU with 12GB memory.
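One way to reproduce the training schedule described above in tf.keras 2.x is sketched below; the network is a small placeholder rather than the published CRRNN, and the callback is an illustrative approximation of the stated learning-rate policy.

```python
# tf.keras 2.x-style sketch: start at lr = 0.0004, drop it by 0.0001 (down to
# a floor of 0.0001) when validation accuracy has not improved for 10 epochs,
# and stop training when it no longer improves.
import numpy as np
from tensorflow import keras
from tensorflow.keras import backend as K

class StepwiseLRDrop(keras.callbacks.Callback):
    def __init__(self, step=1e-4, min_lr=1e-4, patience=10):
        super().__init__()
        self.step, self.min_lr, self.patience = step, min_lr, patience
        self.best, self.wait = -np.inf, 0

    def on_epoch_end(self, epoch, logs=None):
        acc = (logs or {}).get("val_accuracy", 0.0)
        if acc > self.best:
            self.best, self.wait = acc, 0
        else:
            self.wait += 1
            if self.wait >= self.patience:
                lr = float(K.get_value(self.model.optimizer.learning_rate))
                K.set_value(self.model.optimizer.learning_rate, max(self.min_lr, lr - self.step))
                self.wait = 0

model = keras.Sequential([
    keras.layers.Bidirectional(keras.layers.GRU(32), input_shape=(20, 8)),  # BGRU-style layer
    keras.layers.Dropout(0.5),                                              # dropout p = 0.5
    keras.layers.Dense(2, activation="softmax"),
])
model.compile(optimizer=keras.optimizers.Adam(learning_rate=4e-4),
              loss="categorical_crossentropy", metrics=["accuracy"])

callbacks = [StepwiseLRDrop(),
             keras.callbacks.EarlyStopping(monitor="val_accuracy", patience=10)]
# model.fit(X_train, y_train, validation_data=(X_val, y_val), callbacks=callbacks)
```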

10. Deep Learning Model Evaluation via Cross-Validation

We implemented our proposed model using the TensorFlow package (version 1.14.0), which provides a Python (version 3.6.8; Python Software Foundation) application programming interface for tensor manipulation. We also used Keras (version 2.2.4) as the official front end of TensorFlow. We trained the models with the Adam optimizer with a learning rate of 0.0001, a batch size of 16, and the loss functions of binary cross-entropy and Dice loss [17] on the GeForce GTX 1080 Ti GPU (NVIDIA Corporation).
For performance evaluation, 5-fold cross-validation was performed to confirm the model's generalization ability. The augmented training data set (n=48,874) was randomly shuffled and divided into five equal groups in a stratified manner. Four groups were then used to train the model, and the remaining group was used for validation. This process was repeated five times by shifting the internal validation group. We then averaged the mean validation costs of the five internal validation groups at each epoch and selected the epoch with the lowest average validation cost. The testing data set was evaluated only after the model had been fully trained using the training and validation data sets.
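A sketch of combining binary cross-entropy with a Dice term as a single Keras loss is shown below; the equal weighting and the smoothing constant are assumptions for illustration, not values from the protocol.

```python
# Sketch of a combined binary cross-entropy + Dice loss in tf.keras.
import tensorflow as tf

def dice_loss(y_true, y_pred, smooth=1.0):
    y_true = tf.reshape(tf.cast(y_true, tf.float32), [-1])
    y_pred = tf.reshape(y_pred, [-1])
    intersection = tf.reduce_sum(y_true * y_pred)
    dice = (2.0 * intersection + smooth) / (
        tf.reduce_sum(y_true) + tf.reduce_sum(y_pred) + smooth)
    return 1.0 - dice

def bce_dice_loss(y_true, y_pred):
    bce = tf.keras.losses.binary_crossentropy(y_true, y_pred)
    return tf.reduce_mean(bce) + dice_loss(y_true, y_pred)

# Quick eager-mode check on toy values:
y_true = tf.constant([[1.0, 0.0, 1.0]])
y_pred = tf.constant([[0.9, 0.1, 0.8]])
print(float(bce_dice_loss(y_true, y_pred)))

# model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-4),
#               loss=bce_dice_loss)
# model.fit(x_train, y_train, batch_size=16, ...)
```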

About PubCompare

Our mission is to provide scientists with the largest repository of trustworthy protocols and intelligent analytical tools, giving them the information they need to design robust protocols and minimize the risk of failure.

We believe the most important step is to grant scientists access to a wide range of reliable sources and to new tools that extend what they can do by hand.

At the same time, we trust scientists to decide how to construct their own protocols from this information, as they are the experts in their field.
