
Tesla P100

Manufactured by NVIDIA
Sourced in the United States

The Tesla P100 is a high-performance GPU accelerator designed for data centers and scientific computing. Built on the NVIDIA Pascal architecture, it provides massively parallel processing for a wide range of applications, including machine learning, scientific simulation, and high-performance computing. The card combines high-bandwidth HBM2 memory (16 GB on the units used in the protocols below), efficient power consumption, and support for standard compute APIs such as CUDA.


34 protocols using the Tesla P100

1. Monte Carlo Radiation Dose Simulation

FDEIR [6], which was used to estimate the exposure dose, simulates radiation doses in the diagnostic energy range using the Monte Carlo method. Its accuracy has been validated in earlier comparative studies against dosimetry and other Monte Carlo codes [6–8].
The simulation was conducted on a single graphics processing unit (GPU; Tesla P100; NVIDIA Corp.) on a supercomputing system (SGI Rackable C2112-4GP3/C1102-GP8, Reedbush-L; Silicon Graphics International Corp.) at the Information Technology Center of the University of Tokyo. This study simulated a trillion incident photons with a photon cut-off energy of 5 keV. Electron transport was suppressed to accelerate the calculations.
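
FDEIR itself is not described further here, but the basic structure of a photon-only Monte Carlo dose calculation with an energy cut-off can be sketched in a few lines. The sketch below is illustrative only: the slab geometry, attenuation coefficients, interaction probabilities, and photon count are hypothetical placeholders, not the values used by FDEIR or in this study.

```python
# Minimal, illustrative Monte Carlo photon dose sketch (NOT FDEIR).
# The geometry (a 10 cm slab), attenuation coefficient, and the simplified
# Compton model below are hypothetical placeholders.
import numpy as np

rng = np.random.default_rng(0)

CUTOFF_KEV = 5.0          # photons below this energy are absorbed locally
SLAB_CM = 10.0            # toy slab thickness
MU_TOTAL = 0.2            # toy total attenuation coefficient [1/cm]
P_PHOTOELECTRIC = 0.3     # toy probability of photoelectric absorption

def run_photon(energy_kev):
    """Track one photon through the slab, returning deposited energy (keV)."""
    x, deposited = 0.0, 0.0
    while energy_kev > CUTOFF_KEV:
        x += rng.exponential(1.0 / MU_TOTAL)   # sample a free path length
        if x > SLAB_CM:                        # photon escapes the slab
            return deposited
        if rng.random() < P_PHOTOELECTRIC:     # photoelectric: absorb fully
            return deposited + energy_kev
        # Simplified Compton scatter: deposit a random fraction locally.
        fraction = rng.uniform(0.1, 0.5)
        deposited += fraction * energy_kev
        energy_kev *= (1.0 - fraction)
    # Below the cut-off, the remaining energy is deposited on the spot
    # (electron transport is not modelled, mirroring the suppression above).
    return deposited + energy_kev

# Far fewer histories than the 10^12 photons simulated in the study.
dose = sum(run_photon(80.0) for _ in range(10_000))
print(f"toy energy deposited: {dose:.1f} keV")
```
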
2. scDCC: Single-Cell Differential Clustering

scDCC is implemented in Python 3 (version 3.7.6) using PyTorch [52] (version 1.5). The hidden layer sizes of the ZINB model-based autoencoder are set to (256, 64, 32, 64, 256), with a bottleneck layer of size 32. The standard deviation of the Gaussian random noise is 2.5. Adam with the AMSGrad variant [53,54] and Adadelta [55] are applied to the pretraining and clustering stages, respectively. The Adam optimizer is set with an initial learning rate lr = 0.001, β1 = 0.9, and β2 = 0.999, and the Adadelta optimizer with lr = 1.0 and rho = 0.95. Following scDeepCluster, the weight of the constraint loss γ is set to 1 for all experiments. The batch size for pretraining and clustering is 256, and the autoencoder is pretrained for 300 epochs. The convergence threshold for the clustering stage is 0.1% of clustering labels changed per epoch. All experiments are conducted on an NVIDIA Tesla P100 (16 GB) GPU.
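
These hyperparameters translate almost directly into PyTorch. The sketch below mirrors only the stated settings (hidden sizes, noise level, optimizers, batch size); the ZINB output heads, the clustering layer, and the constraint loss that define scDCC are omitted, and n_genes is a placeholder.

```python
# Sketch of the stated scDCC hyperparameters in PyTorch; the ZINB output
# heads and the constraint loss are omitted, and n_genes is a placeholder.
import torch
import torch.nn as nn

n_genes = 2000                      # placeholder input dimensionality
noise_sd = 2.5                      # std of Gaussian noise added during pretraining

def mlp(dims):
    """Stack of Linear+ReLU layers for the given sizes."""
    layers = []
    for d_in, d_out in zip(dims[:-1], dims[1:]):
        layers += [nn.Linear(d_in, d_out), nn.ReLU()]
    return nn.Sequential(*layers)

# Hidden sizes (256, 64, 32, 64, 256) with a 32-dimensional bottleneck.
encoder = nn.Sequential(mlp([n_genes, 256, 64]), nn.Linear(64, 32))
decoder = nn.Sequential(mlp([32, 64, 256]), nn.Linear(256, n_genes))
params = list(encoder.parameters()) + list(decoder.parameters())

# Pretraining: Adam with the AMSGrad variant, lr=0.001, betas=(0.9, 0.999).
pretrain_opt = torch.optim.Adam(params, lr=1e-3, betas=(0.9, 0.999), amsgrad=True)
# Clustering stage: Adadelta with lr=1.0, rho=0.95.
cluster_opt = torch.optim.Adadelta(params, lr=1.0, rho=0.95)

x = torch.randn(256, n_genes)                 # one batch (batch size 256)
x_noisy = x + noise_sd * torch.randn_like(x)  # denoising-style corruption
pretrain_opt.zero_grad()
recon = decoder(encoder(x_noisy))
loss = nn.functional.mse_loss(recon, x)       # placeholder for the ZINB loss
loss.backward()
pretrain_opt.step()
```
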
3. Deep Learning Models Trained on Multimodal Brain Features

Each of our 12 model types was trained on 15 different feature sets, for a total of 180 model type × feature set combinations. The feature sets contain measures of anatomical volume and functional connectivity from the IMPAC dataset. These feature sets included: (1–7) functional connectivity measured between regions defined by one of the 7 atlases described in “MRI feature extraction” (using the processing steps in Fig. 5A,B,D), (8) an anatomical feature set consisting of 207 measures of regional volume and thickness (Fig. 5C,F), and (9–15) the union of the anatomical feature set with one of the functional feature sets (Fig. 5A–C,E). All feature sets also included sex and imaging site as additional covariates. The deep learning models were trained on an NVIDIA Tesla P100 GPU. Further description of the training of the deep learning models can be found in Supplementary Sect. 1.2.
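
The 15 feature sets follow a simple combinatorial pattern: 7 functional-connectivity sets, 1 anatomical set, and the 7 anatomical-plus-functional unions, each with sex and imaging site appended. A sketch of that bookkeeping is shown below; the DataFrames, dimensions, and atlas names are hypothetical stand-ins for the IMPAC-derived features, not the real data.

```python
# Sketch of assembling the 15 feature sets; the random DataFrames and the
# atlas names are hypothetical placeholders for the IMPAC-derived features.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
n_subj = 10
atlases = [f"atlas_{i}" for i in range(1, 8)]   # placeholder names for the 7 atlases

functional = {a: pd.DataFrame(rng.normal(size=(n_subj, 50)),
                              columns=[f"{a}_conn_{i}" for i in range(50)])
              for a in atlases}
anatomical = pd.DataFrame(rng.normal(size=(n_subj, 207)),
                          columns=[f"anat_{i}" for i in range(207)])
covariates = pd.DataFrame({"sex": rng.integers(0, 2, n_subj),
                           "site": rng.integers(0, 5, n_subj)})

feature_sets = {}
for a in atlases:                                    # (1-7) functional only
    feature_sets[f"func_{a}"] = pd.concat([functional[a], covariates], axis=1)
feature_sets["anat"] = pd.concat([anatomical, covariates], axis=1)   # (8)
for a in atlases:                                    # (9-15) anatomical + functional
    feature_sets[f"anat+func_{a}"] = pd.concat(
        [anatomical, functional[a], covariates], axis=1)

assert len(feature_sets) == 15
```
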
4. GPU-accelerated Structural Simulation Evaluation

Evaluation was performed on a headless GPU server with an Intel Xeon E5-2680, 256 GB of RAM, and one NVIDIA Tesla P100 with 16 GB of memory. The P100 includes 56 streaming multiprocessors (SMs), each with 24 kB of L1 cache.
As a baseline, each dataset was also simulated using the iterative solver in the commercially available software Abaqus. Datasets were not re-meshed for this purpose. The simulation was performed in parallel on a workstation with 16 CPU cores and 128 GB of RAM. The linear solver was configured to use the iterative method with a convergence criterion of 5.0 × 10⁻³ for the average flux norm and 1.0 × 10⁻² for displacement corrections.
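
When reproducing a benchmark of this kind, it can be useful to confirm the GPU configuration programmatically. The snippet below is an optional sanity check using PyTorch's device query; it is not part of the original evaluation, which did not necessarily use PyTorch at all.

```python
# Optional sanity check of the GPU used for the evaluation (not part of the
# original protocol); requires a CUDA-enabled PyTorch build.
import torch

if torch.cuda.is_available():
    props = torch.cuda.get_device_properties(0)
    print(f"GPU:  {props.name}")
    print(f"SMs:  {props.multi_processor_count}")         # 56 on a Tesla P100
    print(f"VRAM: {props.total_memory / 1024**3:.1f} GB")  # ~16 GB on this P100
else:
    print("No CUDA device visible.")
```
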
5. CNN-based Land Cover Classification

Transfer learning (TL) was used to carry out the land use/land cover (LULC) classification. In past experiments, several architectures have been proposed and tested for scene classification [22,23,24]. After experimenting with and comparing different pre-trained architectures [25,26,27,28], we decided to employ VGG16 and Wide ResNet-50 for this particular use case. The models were fine-tuned on the RGB version of the EuroSAT dataset and trained in Python using the PyTorch framework. NVIDIA Tesla P100 GPUs available on Kaggle were used for model training and testing.
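
A minimal version of this fine-tuning setup with torchvision backbones might look as follows. It assumes the 10-class RGB EuroSAT labels and uses illustrative hyperparameters; the data loading, augmentation, and training schedule actually used in the study are omitted, and torchvision's wide_resnet50_2 stands in for "Wide ResNet-50".

```python
# Minimal transfer-learning sketch for EuroSAT RGB (assumed 10 classes) with
# torchvision backbones; hyperparameters and batch below are illustrative only.
import torch
import torch.nn as nn
from torchvision import models

num_classes = 10  # EuroSAT RGB classes (assumption)

# VGG16: replace the final classifier layer with a 10-class head.
vgg = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1)
vgg.classifier[6] = nn.Linear(vgg.classifier[6].in_features, num_classes)

# Wide ResNet-50 (torchvision's wide_resnet50_2): replace the fc head.
wrn = models.wide_resnet50_2(weights=models.Wide_ResNet50_2_Weights.IMAGENET1K_V1)
wrn.fc = nn.Linear(wrn.fc.in_features, num_classes)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(wrn.parameters(), lr=1e-4)

# One illustrative step on a dummy batch of 224x224 RGB tiles.
images = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, num_classes, (8,))
optimizer.zero_grad()
loss = criterion(wrn(images), labels)
loss.backward()
optimizer.step()
```
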
6. Structural Modeling of Homologous Proteins

Three-way homologous proteins in A. muricata, M. foliosa, and P. verrucosa were selected for structure modeling. Their representative sequences were sorted by length, numbered as CPXXXXXXXX in that order, and submitted to the ColabFold (1.3.0) platform. A total of 1,053 proteins no longer than 200 amino acids (aa) were computed on the NVIDIA (Santa Clara, CA, USA) Tesla P100, while the others were computed on the NVIDIA Tesla V100 cluster at the Big Data Computing Center of Southeast University. The ColabFold parameters were set to --amber, --templates, --num-recycle 3, and --use-gpu-relax. For each protein, the structure with the highest predicted local distance difference test (pLDDT) score (*_relaxed_rank_1_model_x.pdb) was preserved and labeled as CPXXXXXXXX.pdb.
Four hundred structures predicted in this work were selected and aligned to public AlphaFold structures of their similar proteins (BLAST E value <2.8e-309) by PyMOL (RRID:SCR_000305), and then root mean square deviations (RMSDs) were calculated.
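
The alignment and RMSD calculation can be scripted through PyMOL's Python API. A minimal sketch for one structure pair is shown below, with hypothetical file names standing in for the predicted model and its public AlphaFold counterpart.

```python
# Sketch of the PyMOL alignment/RMSD step; the file names are placeholders.
# Requires a PyMOL installation with its Python module on the path.
from pymol import cmd

cmd.load("CP00000001.pdb", "predicted")              # ColabFold rank-1 relaxed model
cmd.load("AF_reference_model.pdb", "reference")      # public AlphaFold structure

# cmd.align returns a tuple whose first element is the RMSD (in angstroms)
# after outlier-rejection refinement cycles.
result = cmd.align("predicted", "reference")
print(f"RMSD: {result[0]:.2f} Å over {result[1]} atoms")

cmd.delete("all")  # reset the session before the next structure pair
```
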
7. Machine Learning Model Benchmarking Across Platforms

The experiments were run on 2 different computing devices: a computer with an Intel Core i7, 12 GB of RAM, and Ubuntu 18.04, and a server with an Intel Xeon Gold and an NVIDIA Tesla P100. Python 3.7 was used for all the experiments. Numpy (1.19.5) [58], pandas (1.2.2) [59], and scipy (1.6.1) [60] were used for data engineering; scikit-learn (0.24.1) [61] and tensorflow (2.4.1) [62] for building machine learning models and feature selection algorithms; and matplotlib (3.3.4) [63] for plotting. Other required libraries were lime (0.2.0.1) [49] and shap (0.39.0) [50]. Some processing pipelines were run in Jupyter Notebooks in order to visualize the charts.
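
Since the results depend on these specific library versions, a quick programmatic check of the installed environment is a convenient reproducibility aid. The snippet below is such a check, not part of the original protocol; it uses importlib.metadata, which requires Python 3.8+ (on Python 3.7 the importlib_metadata backport provides the same function).

```python
# Optional check that the installed packages match the versions reported
# above; not part of the original protocol.
from importlib.metadata import version, PackageNotFoundError

expected = {
    "numpy": "1.19.5", "pandas": "1.2.2", "scipy": "1.6.1",
    "scikit-learn": "0.24.1", "tensorflow": "2.4.1",
    "matplotlib": "3.3.4", "lime": "0.2.0.1", "shap": "0.39.0",
}

for pkg, want in expected.items():
    try:
        have = version(pkg)
    except PackageNotFoundError:
        have = "not installed"
    flag = "" if have == want else "  <-- differs"
    print(f"{pkg:<14} expected {want:<9} found {have}{flag}")
```
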
8. Cell Cycle Phase and Age Prediction

The convolutional neural network (CNN) models predicting cell cycle phase and age were trained using, as input, image stacks for individual cells extracted from 4i experiments (48 fluorescent channels and 4 mask channels representing the entire cell, the cell nucleus, the cytoplasm, and the cytoplasmic ring around the nucleus, respectively; 52 frames in total; 100 px × 100 px) together with ground truth annotations obtained from the time-lapse imaging. The Fastai (https://github.com/fastai/fastai) Python deep learning library was used for training, and the initial pre-trained ResNet-50 convolutional networks were obtained directly from it. Both models were first trained using low-resolution stacks (50 px × 50 px) and then fine-tuned using full-resolution stacks. The model predicting cell cycle phase was based on 2930 cells divided into a training set (2491 cells; 85%) and a validation set (439 cells; 15%); this ResNet-50 CNN was trained using a cross-entropy loss function. The model predicting cell cycle age was based on 2767 cells divided into a training set (2352 cells; 85%) and a validation set (415 cells; 15%); this ResNet-50 CNN was trained using a mean squared error loss function. Models were trained on a Google Cloud VM (8 vCPUs, 52 GB memory, 1 × NVIDIA Tesla P100).
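
The original models were built with fastai on top of pre-trained ResNet-50 backbones. The sketch below illustrates, at the plain PyTorch level, how a 52-channel input stack and the two output heads (cross-entropy for phase, mean squared error for age) can be wired onto such a backbone; the number of phase classes and all other settings are hypothetical.

```python
# Illustrative PyTorch sketch of adapting ResNet-50 to 52-channel cell stacks;
# the number of cell cycle phases and the dummy batch are placeholders.
import torch
import torch.nn as nn
from torchvision import models

n_channels, n_phases = 52, 3   # 48 fluorescence + 4 mask channels; phase count assumed

def make_backbone(out_dim):
    m = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
    # Replace the 3-channel stem with a 52-channel convolution.
    m.conv1 = nn.Conv2d(n_channels, 64, kernel_size=7, stride=2, padding=3, bias=False)
    m.fc = nn.Linear(m.fc.in_features, out_dim)
    return m

phase_model = make_backbone(n_phases)   # classification head (cell cycle phase)
age_model = make_backbone(1)            # regression head (cell cycle age)

phase_loss = nn.CrossEntropyLoss()      # cross-entropy for phase
age_loss = nn.MSELoss()                 # mean squared error for age

x = torch.randn(4, n_channels, 100, 100)    # a batch of 100x100 px stacks
phase_out = phase_model(x)                  # (4, n_phases) logits
age_out = age_model(x).squeeze(1)           # (4,) predicted ages
print(phase_loss(phase_out, torch.tensor([0, 1, 2, 0])),
      age_loss(age_out, torch.rand(4)))
```
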
9. Semantic Segmentation Performance Comparison

Model 3 was also trained using DeepLabV3+, like model 1. From centers C and D, 80% of cases were randomly selected and used as the training cohort, and 20% were used as the test cohort (the same split mentioned previously). The Dice ratio, Jaccard ratio, 95% Hausdorff distance (HD), and true positive rate (TPR) were used as metrics for the performance evaluation. All CNN models were programmed in Python (PyTorch 1.3.1; Meta AI), and the hardware platform was a workstation equipped with an NVIDIA Tesla P100 data center accelerator.
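
The Dice and Jaccard ratios reduce to simple overlap counts on binary masks. A minimal NumPy sketch of these two metrics is shown below; the 95% HD and TPR computations, and anything specific to this study's masks, are omitted.

```python
# Minimal overlap metrics on binary segmentation masks (illustrative only).
import numpy as np

def dice(pred, target, eps=1e-8):
    """Dice ratio = 2|A∩B| / (|A| + |B|)."""
    pred, target = pred.astype(bool), target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    return (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)

def jaccard(pred, target, eps=1e-8):
    """Jaccard ratio = |A∩B| / |A∪B|."""
    pred, target = pred.astype(bool), target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    return (inter + eps) / (union + eps)

# Toy example with random 2D masks.
rng = np.random.default_rng(0)
a, b = rng.integers(0, 2, (64, 64)), rng.integers(0, 2, (64, 64))
print(f"Dice = {dice(a, b):.3f}, Jaccard = {jaccard(a, b):.3f}")
```
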
10. Deep Learning for Image Enhancement

The network model, loss function, metrics, and training routine were built using the Keras [45] and TensorFlow [46] frameworks in Python. The training was carried out in a CUDA-enabled Google Colaboratory environment equipped with a 4-core CPU, 25 GB of RAM, and an NVIDIA Tesla P100 GPU with 16 GB of RAM. The training routine was set to save the best weight values whenever the validation set SSIM score was maximized.
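
Checkpointing on the validation SSIM can be expressed with a custom metric plus a ModelCheckpoint callback. The sketch below shows one way to do it; the toy model, input shape, and optimizer are placeholders, not the enhancement network used in the study.

```python
# Sketch of checkpointing on validation SSIM in Keras; the model and data
# below are placeholders, not the enhancement network used in the study.
import tensorflow as tf
from tensorflow import keras

def ssim_metric(y_true, y_pred):
    # Assumes images scaled to [0, 1]; max_val would change otherwise.
    return tf.reduce_mean(tf.image.ssim(y_true, y_pred, max_val=1.0))

model = keras.Sequential([
    keras.layers.Input(shape=(128, 128, 1)),
    keras.layers.Conv2D(16, 3, padding="same", activation="relu"),
    keras.layers.Conv2D(1, 3, padding="same"),
])
model.compile(optimizer="adam", loss="mae", metrics=[ssim_metric])

checkpoint = keras.callbacks.ModelCheckpoint(
    "best_weights.weights.h5",
    monitor="val_ssim_metric",   # Keras prefixes validation metrics with "val_"
    mode="max",                  # keep the weights with the highest SSIM
    save_best_only=True,
    save_weights_only=True,
)

# model.fit(x_train, y_train, validation_data=(x_val, y_val),
#           epochs=100, callbacks=[checkpoint])
```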

About PubCompare

Our mission is to provide scientists with the largest repository of trustworthy protocols and intelligent analytical tools, giving them the information they need to design robust protocols and minimize the risk of failure.

We believe the most crucial aspect is to give scientists access to a wide range of reliable sources and to useful new tools that surpass human capabilities.

However, we trust scientists to decide how to construct their own protocols based on this information, as they are the experts in their field.
