
GTX 1080

Manufactured by NVIDIA
Sourced in the United States

The NVIDIA GTX 1080 is a high-performance graphics processing unit (GPU) designed for use in computer systems. Built on the Pascal architecture, it delivers fast and efficient graphics processing. The GTX 1080 is equipped with 8 GB of GDDR5X video memory and supports a range of display technologies. It is designed for a wide range of applications, including gaming, content creation, and scientific computing.
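Several of the protocols below report run times constrained by the card's 8 GB of memory. A minimal sketch, assuming PyTorch with CUDA support is installed (the helper name is ours), for confirming which GPU a job sees and how much memory it exposes:

```python
import torch

def report_gpus():
    """Print the name and total memory of each visible CUDA device."""
    if not torch.cuda.is_available():
        print("No CUDA-capable GPU detected")
        return
    for i in range(torch.cuda.device_count()):
        props = torch.cuda.get_device_properties(i)
        # total_memory is reported in bytes; a GTX 1080 should show ~8 GB
        print(f"cuda:{i}  {props.name}  {props.total_memory / 1024**3:.1f} GB")

report_gpus()
```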


33 protocols using the GTX 1080

1. Spatialized Detection of Pleroma Blooms

To avoid border effects, each 10240 × 10240 pixel Sentinel-2 image was cropped on a regular grid of 128 × 128 pixels (1280 × 1280 m), and 4 neighboring pixels were added on each side to create an overlap between the patches. The gdal_retile function [61] was used for this operation. A prediction was then made for each subset image: for each image, the detection model returned 1 if a blooming Pleroma was found and 0 otherwise. The results were then spatialized again using the grid, but this time each cell of the grid received a single value, the prediction, resulting in a raster of 80 columns and 80 rows with a spatial resolution of 1280 m and the same extent as the Sentinel-2 image. The value of each pixel (1 or 0) indicated the presence or absence of blooming Pleroma trees in that 1280 m square. GPU prediction for a single tile of 10240 × 10240 pixels took approximately 1 minute on an Nvidia GTX 1080 with 8 GB of memory and 45 s on an Nvidia RTX 2080 with 8 GB of memory. The prediction for the complete Sentinel-2 time series presented in this work took approximately 22 days using an Nvidia GTX 1080 GPU, from 30 October 2020 to 20 November 2020.
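A sketch of the tiling and spatialization steps, assuming the GDAL command-line utilities and NumPy are available; the detector (`predict`) and patch reader (`load_patch`) are placeholders for the trained model and I/O code, not the authors' pipeline:

```python
import subprocess
import numpy as np

PATCH, OVERLAP, N_CELLS = 128, 4, 80      # 10240 / 128 = 80 grid cells per side

# 1. Cut the 10240 x 10240 Sentinel-2 image into 136 x 136 patches
#    (128 core pixels plus 4 overlap pixels on each side).
subprocess.run([
    "gdal_retile.py",
    "-ps", str(PATCH + 2 * OVERLAP), str(PATCH + 2 * OVERLAP),
    "-overlap", str(2 * OVERLAP),
    "-targetDir", "patches/",
    "sentinel2_tile.tif",
], check=True)

# 2. Spatialize the 0/1 per-patch predictions back onto an 80 x 80 grid
#    (one cell per 1280 m square, same extent as the input image).
def spatialize(predict, load_patch, n=N_CELLS):
    presence = np.zeros((n, n), dtype=np.uint8)
    for row in range(n):
        for col in range(n):
            presence[row, col] = int(predict(load_patch(row, col)))
    return presence
```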

2. Optimized Radiation Therapy Planning

We performed numerical experiments on 2D phantom and 2D patient data consisting of an HR_CTV positioned on both sides of the tandem applicator and three OARs: the bladder, rectum, and sigmoid. The details of each configuration are shown in Figure 6. With the patient data, whose dimensions were 332×502×118 (resolution of 0.29 cm×0.29 cm×0.8 cm), implementing our method in CUDA C++ for parallel computation and optimizing the treatment plan took only around 1 minute. We used an Intel i7-6700K 4.00 GHz CPU, an Nvidia GTX 1080 GPU, and 32 GB of DDR4 3200 MHz memory. To compare the proposed method with the conventional one, we used the same model and solver but fixed the transmission rates as a constant (= 1.0). A radioactive Ir-192 source was utilized, and data were collected for the twelve monitored dwell positions of the Ir-192 source. The transmission rates and dwell times were calculated for each of the twelve dwell positions and compared between the conventional and proposed methods. Similarly, the dose distribution and coverage statistics were calculated for both methods.
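The conventional plan (transmission rates fixed at 1.0, only dwell times optimized) can be illustrated with a toy least-squares sketch; the dose kernel, prescription, and NNLS solver below are simplified stand-ins, not the CUDA C++ optimizer described above:

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(0)
n_voxels, n_dwell = 500, 12              # twelve dwell positions, as above

D = rng.random((n_voxels, n_dwell))      # toy dose-rate kernel
d_presc = np.full(n_voxels, 6.0)         # toy prescription dose per voxel

# Conventional plan: transmission rates fixed at 1.0, so the delivered dose is
# D @ t and only the non-negative dwell times t are optimized.
tau_fixed = np.ones(n_dwell)
t_conv, residual = nnls(D * tau_fixed, d_presc)

# In the proposed method the transmission rates (0..1) become additional free
# variables optimized jointly with the dwell times; that joint, GPU-parallel
# solve is what the CUDA C++ implementation handles.
print("conventional dwell times (s):", np.round(t_conv, 3))
print("residual dose error:", round(residual, 3))
```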

3. Super-Resolution Imaging Comparison

We reconstructed the super-resolution image with a single-emitter fitting method (ThunderSTORM), a multi-emitter fitting method (3D-DAOSTORM), and a compressed sensing-based approach (FALCON), all compared with WindSTORM. The single-emitter maximum likelihood Gaussian function fitting algorithm and wavelet filtering were used in ThunderSTORM, and the multi-emitter maximum likelihood Gaussian function fitting algorithm was used in 3D-DAOSTORM. The kernel width of the PSF used in WindSTORM was set to 1.5 pixels for the simulated dataset and our experimental PALM imaging dataset, 1.9 pixels for our experimental STORM imaging dataset, and 1.4 pixels for the open-access experimental dataset (http://bigwww.epfl.ch/smlm/datasets/index.html?p=real-hd). For the iterative approaches (FALCON and ThunderSTORM), the above PSF kernel widths were used as the initial kernel. Drift correction was performed using the cross-correlation method provided in ThunderSTORM (30). In this study, the GPU versions of WindSTORM and FALCON were executed on a GPU (GTX 1080, Nvidia), and the CPU versions of WindSTORM, 3D-DAOSTORM, FALCON, and ThunderSTORM were executed on a quad-core CPU (Core i7-4790, Intel).
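As a rough illustration of the cross-correlation idea behind the drift correction (ThunderSTORM's own implementation differs; the synthetic image and integer-pixel estimate below are a simplified sketch):

```python
import numpy as np

def estimate_drift(reference, moving):
    """Integer-pixel drift of `moving` relative to `reference`, taken from the
    peak of their FFT-based cross-correlation."""
    corr = np.fft.ifft2(np.fft.fft2(moving) * np.conj(np.fft.fft2(reference)))
    peak = np.array(np.unravel_index(np.argmax(np.abs(corr)), corr.shape))
    shape = np.array(corr.shape)
    return np.where(peak > shape // 2, peak - shape, peak)  # wrap to signed shifts

# Toy check: a circularly shifted copy of a random "reconstruction" should
# return the known drift.
rng = np.random.default_rng(7)
reference = rng.poisson(4.0, size=(256, 256)).astype(float)
moving = np.roll(reference, shift=(3, -2), axis=(0, 1))
print(estimate_drift(reference, moving))   # -> [ 3 -2]
```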

4. Oculus Rift CV1 VR Experience

We used an Oculus Rift CV1 with two Oculus sensors and an Asus laptop with an Intel i7 processor, 16 GB of RAM, and an Nvidia GTX 1080 graphics card. The researcher started the experience by pressing a key on the keyboard and had the option to stop it at any time.

5. Automated Nuclei Segmentation Using UNet

A three-class UNet model14 was trained based on annotation of nuclei centers, nuclei contours, and background. The neural network comprises 4 layers and 80 input features. Training was performed using a batch size of 32 with the Adam optimizer and a learning rate of 0.00005 with a decay rate of 0.98 every 5000 steps, until there was no improvement in accuracy or ~100 epochs had been reached. Batch normalization was used to improve training speed. During training, the bottom layer had a dropout rate of 0.35, and L1 regularization44,45 and early stopping were implemented to minimize overfitting. Training was performed on workstations equipped with NVidia GTX 1080 or NVidia Titan X GPUs.
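A hedged PyTorch sketch of the stated training configuration (Adam at 5e-5, 0.98 decay every 5000 steps, batch size 32, dropout on the bottom layer, L1 regularization, early stopping); the stand-in model, synthetic data, validation metric, L1 weight, and patience are assumptions, not the published UNet:

```python
import torch
from torch import nn, optim
from torch.utils.data import DataLoader, TensorDataset
from torch.optim.lr_scheduler import StepLR

# Stand-in data and model; the real three-class UNet (nuclei centres, contours,
# background), images, and accuracy metric are not reproduced here.
images = torch.randn(64, 1, 64, 64)
targets = torch.randint(0, 3, (64, 64, 64))
train_loader = DataLoader(TensorDataset(images, targets), batch_size=32)

model = nn.Sequential(
    nn.Conv2d(1, 80, 3, padding=1), nn.BatchNorm2d(80), nn.ReLU(),
    nn.Dropout2d(p=0.35),                       # dropout on the bottom layer
    nn.Conv2d(80, 3, 1),                        # three output classes
)

def evaluate(model):
    """Placeholder validation accuracy; the protocol's metric is not specified."""
    return torch.rand(1).item()

optimizer = optim.Adam(model.parameters(), lr=5e-5)
scheduler = StepLR(optimizer, step_size=5000, gamma=0.98)   # 0.98 decay / 5000 steps
l1_lambda = 1e-6                                # assumed L1 weight (not stated)
best_acc, patience, bad_epochs = 0.0, 10, 0     # assumed early-stopping patience

for epoch in range(100):                        # ~100 epochs maximum
    for x, y in train_loader:
        optimizer.zero_grad()
        loss = nn.functional.cross_entropy(model(x), y)
        loss = loss + l1_lambda * sum(p.abs().sum() for p in model.parameters())
        loss.backward()
        optimizer.step()
        scheduler.step()                        # decay applied per step
    acc = evaluate(model)
    if acc > best_acc:
        best_acc, bad_epochs = acc, 0
    elif (bad_epochs := bad_epochs + 1) >= patience:
        break                                   # stop when accuracy stops improving
```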

6. High-Resolution DTI Image Reconstruction Using HADTI-Net

The 100 preprocessed HCP subjects were split into training and testing sets with a commonly used 80–20% ratio. The concatenated input had a total of 8 channels, which included one channel for T1, one channel for b0, and 6 channels for minimally evenly distributed diffusion gradient directions. The patch size of the input was set to 64×64×64 with a 32×32×32 overlap. We discarded any patches containing only the background region to reduce the computational cost during the training phase. HADTI-Net was trained for 100 epochs using Adam with an initial learning rate of 0.0001 and a batch size of 8 on a single NVIDIA GTX 1080. For inference, the averages from different overlapping regions between adjacent patches were calculated to reconstruct the final enhanced LAR-DTI volume. The code was implemented in PyTorch (version 1.10.1) and NumPy (version 1.21.2).
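A sketch of the overlapping-patch bookkeeping described above (64³ patches with 32³ overlap, background-only patches discarded, and overlap-averaged reconstruction at inference); the array shapes and round-trip test are illustrative, not the HADTI-Net code:

```python
import numpy as np

PATCH, STRIDE = 64, 32     # 64^3 patches with 32^3 overlap, as above

def extract_patches(volume):
    """Yield (start_index, patch) pairs over a 3D volume with 50% overlap,
    skipping patches that contain only background (all zeros)."""
    starts = [range(0, s - PATCH + 1, STRIDE) for s in volume.shape]
    for i in starts[0]:
        for j in starts[1]:
            for k in starts[2]:
                patch = volume[i:i+PATCH, j:j+PATCH, k:k+PATCH]
                if patch.any():                  # discard background-only patches
                    yield (i, j, k), patch

def reconstruct(patches, shape):
    """Average overlapping (predicted) patches back into a full volume."""
    out = np.zeros(shape, dtype=np.float64)
    weight = np.zeros(shape, dtype=np.float64)
    for (i, j, k), patch in patches:
        out[i:i+PATCH, j:j+PATCH, k:k+PATCH] += patch
        weight[i:i+PATCH, j:j+PATCH, k:k+PATCH] += 1.0
    return out / np.maximum(weight, 1.0)

# Toy round trip: averaging the extracted patches recovers the volume
# wherever at least one patch covered it.
vol = np.random.default_rng(0).random((128, 128, 128))
print(np.allclose(reconstruct(extract_patches(vol), vol.shape), vol))   # True
```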

7. Optimized CNN for Plaque and CAA Detection

All neural network models were trained with the open-source package PyTorch71 on four NVIDIA GTX 1080 or Titan X graphics processing units. Our optimized model used a simple convolutional architecture for image classification, consisting of alternating (3 × 3) kernels of stride 1 and padding 1 followed by max pooling (Fig. 3a), then two fully connected hidden layers (512 and 100 neurons) with rectified linear units as the nonlinear activation function. All neural network models were trained using backpropagation. The optimized training procedure used the Adam72 optimizer with a multi-label soft margin loss function, weight decay (L2 penalty, 0.008), and dropout (probability 0.5 for the first two fully connected layers and probability 0.2 for all convolutional layers). Training proceeded with mini-batches of 64 images with real-time data augmentation, including random flips, rotations, zoom, shear, and color jitter. When calculating classification accuracy, thresholds of 0.91, 0.1, and 0.85 were used for cored plaque, diffuse plaque, and CAA prediction, respectively. Predictions with confidence above the threshold were considered positives.
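A hedged PyTorch sketch of the described architecture and training settings; the number of convolutional blocks, channel widths, and input size are assumptions not given in the protocol:

```python
import torch
from torch import nn

class PlaqueCNN(nn.Module):
    """Sketch of the architecture described above; block count, channel widths,
    and the 3x256x256 input size are assumptions."""
    def __init__(self, n_classes=3, channels=(16, 32, 64, 128)):
        super().__init__()
        blocks, in_ch = [], 3
        for out_ch in channels:
            blocks += [
                nn.Conv2d(in_ch, out_ch, kernel_size=3, stride=1, padding=1),
                nn.ReLU(),
                nn.Dropout2d(p=0.2),          # dropout 0.2 on conv layers
                nn.MaxPool2d(2),
            ]
            in_ch = out_ch
        self.features = nn.Sequential(*blocks)
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(channels[-1] * 16 * 16, 512), nn.ReLU(), nn.Dropout(0.5),
            nn.Linear(512, 100), nn.ReLU(), nn.Dropout(0.5),
            nn.Linear(100, n_classes),        # cored / diffuse / CAA logits
        )

    def forward(self, x):
        return self.classifier(self.features(x))

model = PlaqueCNN()
criterion = nn.MultiLabelSoftMarginLoss()
optimizer = torch.optim.Adam(model.parameters(), weight_decay=0.008)

logits = model(torch.randn(2, 3, 256, 256))
loss = criterion(logits, torch.randint(0, 2, (2, 3)).float())   # toy multi-labels

# Per-class decision thresholds from the protocol.
thresholds = torch.tensor([0.91, 0.10, 0.85])
positives = torch.sigmoid(logits) > thresholds
```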

8. DL-B0GluCEST: Evaluating a Novel Deep Learning Approach

DL-B0GluCEST was compared to a popular DL model, U-Net [32], without using wide activation blocks. DL-B0GluCEST was trained and validated for 3, 5, and 7 input image pairs separately to show the stability and consistency of the algorithm. For DL-B0GluCEST with 1 or 3 input pairs, the CEST images acquired at ±3 ppm or at ±2, 3, and 4 ppm were used, respectively. For DL-B0GluCEST with 5 input pairs, the images acquired at ±2.4, 2.8, 3, 3.4, 3.8 ppm or at ±2.2, 2.6, 3, 3.2, 3.6 ppm were used as the input pairs. For the 7-input-channel run, the images acquired from ±1.8 to ±4.2 ppm with a step of 0.4 ppm were used. Structural similarity index (SSIM), peak signal-to-noise ratio (PSNR), and contrast-to-noise ratio (CNR) were calculated as performance indices. CNR was measured as the difference between the mean values of a gray matter (GM) region of interest (ROI) and a white matter (WM) ROI, divided by the standard deviation of the WM ROI. Both ROIs were extracted from the segmentation described in Section 2.1. All DL experiments were performed using the Keras and TensorFlow frameworks running on an Ubuntu 16.04 system with an NVIDIA Tesla P100 and a GTX 1080.
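The CNR definition above, together with SSIM and PSNR as available in scikit-image, can be expressed compactly; the synthetic images and masks below are placeholders for the real GM/WM segmentation, and the use of scikit-image is our assumption:

```python
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def cnr(image, gm_mask, wm_mask):
    """Contrast-to-noise ratio as defined above: (mean GM - mean WM) / std WM."""
    gm, wm = image[gm_mask], image[wm_mask]
    return (gm.mean() - wm.mean()) / wm.std()

# Toy example with synthetic data; real GM/WM masks come from the
# segmentation described in Section 2.1.
rng = np.random.default_rng(0)
reference = rng.random((128, 128))
predicted = reference + 0.01 * rng.standard_normal((128, 128))
gm_mask = np.zeros((128, 128), bool); gm_mask[30:60, 30:60] = True
wm_mask = np.zeros((128, 128), bool); wm_mask[70:100, 70:100] = True

print("PSNR:", peak_signal_noise_ratio(reference, predicted, data_range=1.0))
print("SSIM:", structural_similarity(reference, predicted, data_range=1.0))
print("CNR :", cnr(predicted, gm_mask, wm_mask))
```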

9. Automated Cell Segmentation in CSLM Images

The ImageJ plugin takes as input a CSLM image (Figure 2Aa), the ROIs of the selected cells (Figure 2Ab), and the channels to analyze. It then performs the conversion to RGB, clears the signal outside the selected ROIs (Figure 2Ac), and calls the U-Net plugin, which normalizes the image (Figure 2Ad) and performs segmentation with the specified weight file. The segmented image (Figure 2Ae) is then recalled by the plugin, and for each class (Figure 2Af) the objects above the minimum size (Figure 2Ag) are quantified (Figure 2A, h and i) using the Analyze Particles class. Deep learning computations were performed on a single graphics processing unit (GPU, nVidia GTX 1080 with 8 GB of VRAM). The Caffe framework was patched with U-Net version 99bd99_20190109 and compiled on a Linux CentOS remote server with CUDA 8.1 and cuDNN 7.1.
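The size-filtered, per-class object counting performed by Analyze Particles can be approximated in Python with scikit-image; this is an illustrative equivalent, not the ImageJ plugin itself:

```python
import numpy as np
from skimage.measure import label, regionprops

def count_objects_per_class(segmentation, min_size, classes=(1, 2)):
    """Count connected objects of each class label whose area is at least
    `min_size` pixels (class 0 is treated as background)."""
    counts = {}
    for cls in classes:
        labelled = label(segmentation == cls)
        counts[cls] = sum(1 for r in regionprops(labelled) if r.area >= min_size)
    return counts

# Toy segmentation: two class-1 objects (one too small) and one class-2 object.
seg = np.zeros((64, 64), dtype=np.uint8)
seg[5:15, 5:15] = 1        # 100 px, counted
seg[30:32, 30:32] = 1      # 4 px, filtered out
seg[40:60, 40:60] = 2      # 400 px, counted
print(count_objects_per_class(seg, min_size=20))   # {1: 1, 2: 1}
```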

10. Benchmarking CPU and GPU Image Reconstruction

All tests were performed using MATLAB on an Intel Xeon E5-2650 v4 2.20 GHz workstation with 100 GB of RAM. GPU computations were performed on an NVidia GTX 1080 with 8 GB of RAM. The Michigan Image Reconstruction Toolbox was used for the CPU tests (21), and only a single computational thread was permitted. The gpuNUFFT library was used for the GPU tests (22).
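The same single-thread-CPU versus GPU timing pattern can be sketched outside MATLAB (where the thread count is typically limited with maxNumCompThreads); the Python harness below uses threadpoolctl and placeholder reconstruction callables and is not the benchmark code used above:

```python
import time
from threadpoolctl import threadpool_limits

def benchmark(recon_fn, data, repeats=5, single_thread=False):
    """Time a reconstruction callable, optionally pinning BLAS/FFT libraries
    to one thread so the CPU baseline matches the single-thread condition."""
    times = []
    for _ in range(repeats):
        start = time.perf_counter()
        if single_thread:
            with threadpool_limits(limits=1):
                recon_fn(data)
        else:
            recon_fn(data)
        times.append(time.perf_counter() - start)
    return min(times)

# Usage sketch (cpu_nufft_recon / gpu_nufft_recon are placeholders for the
# CPU toolbox and GPU library calls):
# t_cpu = benchmark(cpu_nufft_recon, kspace_data, single_thread=True)
# t_gpu = benchmark(gpu_nufft_recon, kspace_data)
```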
