
Tesla K40 GPU

Manufactured by NVIDIA
Sourced in the United States

The Tesla K40 is a high-performance graphics processing unit (GPU) designed for scientific and technical computing. It features 2,880 CUDA cores, 12 GB of GDDR5 memory, and a maximum power consumption of 235 W, and it delivers up to 4.29 teraflops of single-precision and 1.43 teraflops of double-precision performance.

Automatically generated - may contain errors
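These headline figures can be cross-checked with simple arithmetic: peak single-precision throughput is CUDA cores × clock × 2 FLOPs per core per cycle (one fused multiply-add), and the GK110 chip in the K40 runs double precision at one third of the single-precision rate. A minimal sketch, assuming the K40's 745 MHz base clock:

```python
# Back-of-the-envelope peak throughput for the Tesla K40.
# Assumes the 745 MHz base clock and GK110's 1:3 FP64:FP32 ratio.
cuda_cores = 2880
base_clock_ghz = 0.745            # GHz (base clock; boost clocks are higher)
flops_per_core_per_cycle = 2      # one fused multiply-add = 2 FLOPs

fp32_tflops = cuda_cores * base_clock_ghz * flops_per_core_per_cycle / 1e3
fp64_tflops = fp32_tflops / 3     # GK110 runs FP64 at one third of the FP32 rate

print(f"FP32 peak: {fp32_tflops:.2f} TFLOPS")  # ~4.29
print(f"FP64 peak: {fp64_tflops:.2f} TFLOPS")  # ~1.43
```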

13 protocols using the Tesla K40 GPU

1

Optimizing Performance in HPC Environments

All runtimes discussed were generated under Ubuntu 14.04.4 LTS (Linux kernel 3.19.0-68, OpenJDK IcedTea 2.6.7, and FFTW 3.3 [26]) on a system with 2× Intel Xeon E5-2650 v3 CPUs @ 2.30 GHz, 128 GB of DDR4 memory, and 2× NVIDIA Tesla K40 GPUs (CUDA 7.5 [27]).
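This excerpt is essentially a record of the benchmark environment. As a hedged sketch (not part of the original protocol) of how such an environment might be captured programmatically alongside the runtimes, assuming only that the nvidia-smi utility shipped with the driver is on the PATH:

```python
# Minimal sketch: record the software/hardware environment alongside benchmark runtimes.
# Assumes nvidia-smi is installed; any reasonably recent driver supports these query flags.
import platform
import subprocess

def environment_report() -> dict:
    report = {
        "os": platform.platform(),          # e.g. Linux-3.19.0-68-generic-...
        "python": platform.python_version(),
        "cpu": platform.processor(),
    }
    try:
        gpus = subprocess.check_output(
            ["nvidia-smi", "--query-gpu=name,memory.total,driver_version",
             "--format=csv,noheader"],
            text=True,
        ).strip().splitlines()
        report["gpus"] = gpus               # one line per GPU, e.g. "Tesla K40c, 11441 MiB, ..."
    except (OSError, subprocess.CalledProcessError):
        report["gpus"] = []                 # no NVIDIA driver / GPU present
    return report

if __name__ == "__main__":
    for key, value in environment_report().items():
        print(f"{key}: {value}")
```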
2

Accelerating Compressed FLIM Reconstruction

Because of its iterative construction, compressed FLIM is computationally intensive. For example, reconstructing a 500 × 400 × 617 (x, y, t) event datacube and computing a single lifetime image takes tens of minutes on a single PC, which makes the construction of a dynamic lifetime movie prohibitively slow. To accelerate this process, we (1) implemented the reconstruction algorithm in a parallel programming framework on two NVIDIA Tesla K40 GPUs and (2) performed all reconstructions simultaneously on a computer cluster (Illinois Campus Cluster). This combined effort significantly improved the reconstruction speed and reduced the movie reconstruction time to seconds. Table 1 illustrates the improvement in reconstruction time when the computation is performed on a single PC versus the GPU-assisted computer cluster.
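The per-frame reconstructions are mutually independent, which is what makes the two-GPU and cluster parallelization effective. The following is a minimal sketch of that work-splitting idea rather than the authors' solver: reconstruct_frame is a hypothetical placeholder for the iterative reconstruction, and GPU assignment relies on the standard CUDA_VISIBLE_DEVICES mechanism.

```python
# Sketch: distribute independent frame reconstructions of an (x, y, t) event datacube
# across two GPUs. reconstruct_frame() is a hypothetical placeholder for the iterative
# compressed-FLIM solver; here it only reports which GPU it would use, so it runs anywhere.
import os
from concurrent.futures import ProcessPoolExecutor

N_GPUS = 2

def reconstruct_frame(frame_index: int) -> str:
    # Pin this worker to one GPU before any CUDA library would be imported.
    gpu_id = frame_index % N_GPUS
    os.environ["CUDA_VISIBLE_DEVICES"] = str(gpu_id)
    # ... run the iterative reconstruction for this frame on the selected GPU ...
    return f"frame {frame_index} reconstructed on GPU {gpu_id}"

if __name__ == "__main__":
    with ProcessPoolExecutor(max_workers=N_GPUS) as pool:
        for result in pool.map(reconstruct_frame, range(8)):
            print(result)
```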
3

Deep Learning Model Training Protocol

All networks were implemented in Keras with TensorFlow [39] as the backend. A stochastic gradient descent algorithm [40] with a learning rate of 0.001 was used to optimize the networks. To avoid overfitting, we used online data augmentation, including random rotation, random flipping, and the addition of random noise, for all training data sets. All training and inference procedures were performed on servers equipped with NVIDIA Tesla K40 GPUs.
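As a minimal sketch of the training configuration described above (SGD at a learning rate of 0.001 with online rotation, flipping, and noise augmentation), written with tf.keras; the toy network, data, and augmentation ranges are placeholders rather than the authors' models:

```python
# Sketch of the training setup: SGD with lr = 0.001 and online augmentation
# (random rotation, random flips, additive random noise). The architecture and
# data below are placeholders, not the authors' networks.
import numpy as np
import tensorflow as tf

def add_random_noise(image):
    return image + np.random.normal(0.0, 0.01, size=image.shape)

augmenter = tf.keras.preprocessing.image.ImageDataGenerator(
    rotation_range=20,                 # random rotation (range is an assumption)
    horizontal_flip=True,
    vertical_flip=True,
    preprocessing_function=add_random_noise,
)

model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(16, 3, activation="relu", input_shape=(64, 64, 1)),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer=tf.keras.optimizers.SGD(learning_rate=0.001),
              loss="binary_crossentropy", metrics=["accuracy"])

# Toy data stand-in; augmentation is applied on the fly, batch by batch.
x = np.random.rand(32, 64, 64, 1).astype("float32")
y = np.random.randint(0, 2, size=(32,))
model.fit(augmenter.flow(x, y, batch_size=8), epochs=1)
```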
4

Optimizing Hyperparameters for NLP Modeling

The word embeddings of our model are initialized with pre-trained word embeddings, and the other parameters are initialized at random from a uniform distribution. All parameters are then optimized using stochastic gradient descent (SGD) [27] to maximize the log-probability of the correct tag sequence. In addition, several hyper-parameters need to be determined in our model. We tuned the hyper-parameters on the development set by random search [28]. The main hyper-parameters of our model are shown in Table 2. The number of epochs is chosen by an early-stopping strategy [29] on the development set. Our model is implemented using the open-source deep learning library Theano (http://deeplearning.net/software/theano/) and trained on an NVIDIA Tesla K40 GPU.

The main hyper-parameters of our model

Hyper-parameter | Value | Values tested
Word embedding dimension | 100 | 50, 100, 200
Character embedding dimension | 25 | 25, 50
Character-level BiLSTM state size | 25 | 25, 50
Capitalization embedding dimension | 5 | 5, 10
POS embedding dimension | 25 | 25, 50
Chunking embedding dimension | 10 | 10, 20
NER embedding dimension | 5 | 5, 10
Word-level BiLSTM state size | 100 | 50, 100, 200
SGD learning rate | 0.001 | 0.01, 0.005, 0.001
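A minimal sketch of random search over the grids listed in Table 2; the score function is a hypothetical placeholder for training the tagger and evaluating it on the development set:

```python
# Sketch of hyper-parameter tuning by random search over the grids in Table 2.
# score() stands in for training the tagger and measuring development-set performance.
import random

SEARCH_SPACE = {
    "word_emb_dim":      [50, 100, 200],
    "char_emb_dim":      [25, 50],
    "char_lstm_size":    [25, 50],
    "cap_emb_dim":       [5, 10],
    "pos_emb_dim":       [25, 50],
    "chunk_emb_dim":     [10, 20],
    "ner_emb_dim":       [5, 10],
    "word_lstm_size":    [50, 100, 200],
    "sgd_learning_rate": [0.01, 0.005, 0.001],
}

def score(config: dict) -> float:
    # Placeholder: train with `config`, return the F1 score on the development set.
    return random.random()

def random_search(n_trials: int = 20, seed: int = 0) -> dict:
    random.seed(seed)
    best_config, best_score = None, float("-inf")
    for _ in range(n_trials):
        config = {name: random.choice(values) for name, values in SEARCH_SPACE.items()}
        s = score(config)
        if s > best_score:
            best_config, best_score = config, s
    return best_config

print(random_search())
```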
5

ZOLA-3D: Automated 3D Fluorescence Microscopy

ZOLA-3D is provided as an ImageJ plugin available via the Fiji update system [28,37]. Instructions on how to obtain, install and use ZOLA-3D are available at https://github.com/imodpasteur/ZOLA-3D, along with sample data. We analyzed images using ZOLA on a Windows computer equipped with an NVIDIA GTX 480 or an NVIDIA Quadro K4200 GPU card, or on an Ubuntu machine with an NVIDIA Tesla K40 GPU.
6

FDTD Modeling of Patterned Silicon

Full-scale FDTD modeling is performed using the commercial package Speag SEMCAD X v14.8 with the Acceleware CUDA GPGPU acceleration library, running on a high-performance workstation equipped with a pair of 10-core Intel Xeon processors and an NVIDIA Tesla K40 GPU. Periodic boundary conditions are applied to the side faces of the reconstructed unit cell (Fig. 2(e)), while the top and bottom faces of the simulation box are set to light-absorbing perfectly matched layers. The simulated plane wave is incident upon the patterned silicon surface from the vacuum. An FDTD grid step of 4 nm is used, as it was verified that further decreasing the step does not affect the optical observables. Arrays of field monitors are placed above and below the metasurface to resolve the contributions from the incident, transmitted and diffracted waves. The absorption is evaluated as the deficit of light energy between the incident wave and all the outgoing waves.
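The final sentence amounts to an energy balance over the field-monitor planes. A minimal sketch of that bookkeeping with placeholder power values (in the actual protocol these come from the SEMCAD X monitors):

```python
# Sketch of the energy-balance step: absorption as the deficit between the incident power
# and the power leaving through the monitors above (reflected/diffracted) and below
# (transmitted/diffracted) the metasurface. All power values here are placeholders.
import numpy as np

wavelengths_nm  = np.array([400.0, 550.0, 700.0])
p_incident      = np.array([1.00, 1.00, 1.00])   # normalized incident power
p_reflected     = np.array([0.22, 0.15, 0.30])   # summed over monitors above the surface
p_transmitted   = np.array([0.35, 0.40, 0.55])   # summed over monitors below the surface

absorption = (p_incident - p_reflected - p_transmitted) / p_incident
for wl, a in zip(wavelengths_nm, absorption):
    print(f"{wl:.0f} nm: A = {a:.2f}")
```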
7

Programmable Ultrasound System for 3D Imaging

A fully programmable ultrasound system with 256 channels in emission and reception (Vantage, Verasonics, Kirkland, USA) was used to control a 2.5 MHz ultrasonic 2D matrix-array probe of 256 square elements (16 × 16 elements) with an inter-element spacing of 0.95 mm and a bandwidth of 50% (Sonic Concepts, Bothell, USA). The volumetric delay-and-sum beamforming and the axial strain distribution calculations were performed on a Tesla K40 GPU (NVIDIA, Santa Clara, USA). 3D rendering was computed with the Amira software (Visualization Sciences Group, Burlington, USA).
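For orientation, a minimal NumPy sketch of delay-and-sum beamforming for a single voxel with a 16 × 16 matrix array at 0.95 mm pitch; it assumes a plane wave transmitted along z and synthetic channel data, and illustrates the principle only, not the GPU implementation used here:

```python
# Minimal sketch of delay-and-sum (DAS) beamforming for a 2D matrix array.
# Assumes a plane wave transmitted along z; RF data are synthetic placeholders.
import numpy as np

c = 1540.0            # speed of sound (m/s)
fs = 10e6             # sampling frequency (Hz), an assumption
pitch = 0.95e-3       # inter-element spacing (m), as for the 16x16 probe above
n_el = 16

# Element (x, y) positions of the 16x16 matrix array, centred on the origin.
coords = (np.arange(n_el) - (n_el - 1) / 2) * pitch
ex, ey = np.meshgrid(coords, coords, indexing="ij")
elem_xy = np.column_stack([ex.ravel(), ey.ravel()])          # (256, 2)

n_samples = 2048
rf = np.random.randn(elem_xy.shape[0], n_samples)            # synthetic channel data

def das_voxel(point_xyz):
    """Beamform one voxel: sum channel samples taken at their round-trip delays."""
    x, y, z = point_xyz
    tx_delay = z / c                                          # plane wave along z
    rx_dist = np.sqrt((elem_xy[:, 0] - x) ** 2 +
                      (elem_xy[:, 1] - y) ** 2 + z ** 2)
    idx = np.round((tx_delay + rx_dist / c) * fs).astype(int)
    idx = np.clip(idx, 0, n_samples - 1)
    return rf[np.arange(rf.shape[0]), idx].sum()

print(das_voxel((0.0, 0.0, 30e-3)))                           # voxel 30 mm deep on axis
```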
8

Deep Learning for Medical Image Segmentation

Code for the study was developed in Python 3 (https://www.python.org/download/releases/3.0/) using the open-source Keras 2.2.2 library and the SimpleITK Python library [46]. The experiments were executed on a workstation with one NVIDIA Tesla K40 GPU. The software 3D Slicer (version 4.8.1) was used for image processing and labelling of the CT images to create the ground truth [57,58]. Of the three baseline methods used for comparison, we used the publicly available code to reproduce PItcHPERFeCT and re-implemented the other two methods from the published algorithms.
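A minimal sketch of the SimpleITK side of such a pipeline, reading a CT volume and its label into NumPy arrays for training; the file names and the intensity normalization are assumptions, not taken from the study:

```python
# Sketch: load a CT volume and its ground-truth label with SimpleITK and convert them
# to NumPy arrays for training. File names are hypothetical placeholders.
import SimpleITK as sitk
import numpy as np

def load_case(image_path: str, label_path: str):
    image = sitk.ReadImage(image_path)                 # e.g. a NIfTI or NRRD CT volume
    label = sitk.ReadImage(label_path)                 # segmentation exported from 3D Slicer
    volume = sitk.GetArrayFromImage(image).astype(np.float32)   # (z, y, x)
    mask = sitk.GetArrayFromImage(label).astype(np.uint8)
    # Simple intensity clipping/scaling before feeding the network (an assumption).
    volume = np.clip(volume, -100, 300)
    volume = (volume - volume.min()) / (volume.max() - volume.min() + 1e-8)
    return volume, mask

# volume, mask = load_case("case001_ct.nii.gz", "case001_label.nii.gz")
```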
9

Motion-Resolved MRI Reconstruction via CS

Reconstruction was performed offline in MATLAB R2015b (The MathWorks, Natick, MA, USA) on a Linux workstation with two six-core CPUs (Intel Xeon E5; Intel, Santa Clara, CA, USA), 512 GB of RAM, and an NVIDIA Tesla K40 GPU (NVIDIA, Santa Clara, CA, USA). Physiological motion signal extraction was performed as previously described [5]. Principal component analysis was subsequently performed on these SI projections to extract respiratory and cardiac signals. These signals were then used to sort the readouts into non-overlapping cardiac bins with a temporal width of 50 ms, and into four non-overlapping respiratory bins containing equal numbers of readouts [5].
The binned k-space data were then reconstructed into motion-resolved images using CS by solving the optimization equation (Table 1) with the Alternating Direction Method of Multipliers (ADMM) algorithm (iterations = 10).
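A minimal sketch of the binning step, written in Python for illustration (the original reconstruction ran in MATLAB); the cardiac phase and respiratory signal below are synthetic placeholders:

```python
# Sketch of the data-sorting step: readouts are assigned to cardiac bins of 50 ms width
# relative to the preceding trigger and to four respiratory bins containing equal
# numbers of readouts. Signals here are synthetic stand-ins for the extracted ones.
import numpy as np

rng = np.random.default_rng(0)
n_readouts = 10_000
cardiac_phase_ms = rng.uniform(0, 900, n_readouts)      # time since the last cardiac trigger
respiratory_signal = rng.normal(size=n_readouts)         # e.g. first principal component

# Cardiac binning: fixed 50 ms temporal width.
cardiac_bin = (cardiac_phase_ms // 50).astype(int)

# Respiratory binning: four bins with equal numbers of readouts (amplitude quartiles).
quartiles = np.quantile(respiratory_signal, [0.25, 0.5, 0.75])
respiratory_bin = np.searchsorted(quartiles, respiratory_signal)

print("readouts per respiratory bin:", np.bincount(respiratory_bin))
print("number of cardiac bins:", cardiac_bin.max() + 1)
```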
10

Optimizing Deep Autoencoder for Performance

We optimized the deep AE using Adam, which computes adaptive learning rates during training and has demonstrated superior performance over other optimization methods [50]. An early-stopping strategy was applied to improve the learning of the deep AE weights and prevent overfitting: training was terminated if the performance did not improve over five consecutive epochs (maximum number of training epochs: 50). The deep AE was implemented in Torch7 [40], and training was performed on two NVIDIA Titan X GPUs and an NVIDIA Tesla K40 GPU. Ten-fold patient-based cross-validation was performed to determine the optimal deep AE architectures, including the number of encoder hidden layers (from 1 to 3) and the number of hidden units (factor of 4, 8, 16, or 32).
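A minimal sketch of the optimization settings (Adam, at most 50 epochs, early stopping with a patience of 5 epochs), written with tf.keras for illustration since the original used Torch7; the one-layer autoencoder and the data are placeholders, not the tuned architectures:

```python
# Sketch of the optimization settings described above, using tf.keras for illustration:
# Adam optimizer, at most 50 epochs, early stopping after 5 epochs without improvement.
# The single-hidden-layer autoencoder and data below are placeholders.
import numpy as np
import tensorflow as tf

n_features = 128
inputs = tf.keras.Input(shape=(n_features,))
encoded = tf.keras.layers.Dense(n_features // 8, activation="relu")(inputs)  # size: an assumption
decoded = tf.keras.layers.Dense(n_features, activation="linear")(encoded)
autoencoder = tf.keras.Model(inputs, decoded)
autoencoder.compile(optimizer="adam", loss="mse")

x_train = np.random.rand(256, n_features).astype("float32")   # placeholder data
x_val = np.random.rand(64, n_features).astype("float32")

early_stop = tf.keras.callbacks.EarlyStopping(monitor="val_loss", patience=5,
                                              restore_best_weights=True)
autoencoder.fit(x_train, x_train, validation_data=(x_val, x_val),
                epochs=50, batch_size=32, callbacks=[early_stop], verbose=0)
```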

About PubCompare

Our mission is to provide scientists with the largest repository of trustworthy protocols and intelligent analytical tools, giving them the information they need to design robust protocols and minimize the risk of failure.

We believe that the most crucial aspect is to grant scientists access to a wide range of reliable sources and new useful tools that surpass human capabilities.

However, we trust scientists to decide how to construct their own protocols from this information, as they are the experts in their field.
