The largest database of trusted experimental protocols

CUDA Toolkit 10.1

Manufactured by NVIDIA
Sourced in United States

The NVIDIA CUDA Toolkit 10.1 is a software development kit that provides a programming model and tooling for developing, deploying, and optimizing parallel computing applications on NVIDIA GPUs. It includes the nvcc compiler, GPU-accelerated libraries, and debugging and profiling tools for building GPU-accelerated applications.
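Incidentally, a quick way to confirm which toolkit version is installed is to query the nvcc compiler that ships with it. The sketch below is illustrative only (not from this page) and assumes nvcc is on the system PATH:

```python
import shutil
import subprocess

# Illustrative check: locate the CUDA toolkit's nvcc compiler and
# print its version banner (e.g. "release 10.1").
nvcc = shutil.which("nvcc")
if nvcc is None:
    print("nvcc not found; is the CUDA Toolkit installed and on PATH?")
else:
    result = subprocess.run([nvcc, "--version"], capture_output=True, text=True)
    print(result.stdout)
```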


3 protocols using CUDA Toolkit 10.1

1. High-Performance LSTree Analysis Pipeline

The LSTree analysis tasks were trained and run on a workstation with the following specifications: a 16-core Intel Xeon W-2145, 64 GB of 2666 MHz DDR4 RAM, and an NVIDIA Quadro RTX 6000 GPU with 24 GB of VRAM, running Ubuntu 18.04.6 LTS. All code runs with NVIDIA CUDA Toolkit 10.1 and cuDNN 7.
At minimum, one would need 16 GB of RAM and a TensorFlow-compatible GPU with at least 8 GB of VRAM. Since many steps of the pipeline run in parallel, a higher CPU core count is also desirable.
A step-by-step guide to installation and to running the provided example data can be found in the repository (www.github.com/fmi-basel/LSTree).
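As an illustrative sketch (not part of the published protocol), one could confirm that TensorFlow sees a compatible GPU before launching the pipeline; this assumes a TensorFlow 2.x build matched to CUDA 10.1 and cuDNN 7:

```python
import tensorflow as tf

# Sketch only: verify a TensorFlow-compatible GPU is visible and enable
# memory growth so parallel pipeline steps do not claim all VRAM at once.
gpus = tf.config.list_physical_devices("GPU")
if not gpus:
    raise RuntimeError("No TensorFlow-compatible GPU found (>= 8 GB VRAM recommended).")
for gpu in gpus:
    tf.config.experimental.set_memory_growth(gpu, True)
    print("Found GPU:", gpu.name)
```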
2. Deep Learning Workflow for Survival Analysis

Training and operation of the deep feature model were both conducted in a Linux Ubuntu 16.04 (Canonical Ltd., London, UK) system environment. The deep learning framework was the PyTorch library (Facebook’s AI Research lab (FAIR), NY), running on Python 3.8 (Python Software Foundation, Wilmington, DE). The GPU was an NVIDIA TITAN RTX, with 64 GB of VRAM and 32 GB of RAM, together with CUDA Toolkit 10.1 (NVIDIA Corporation, Santa Clara, CA, USA). The Cox proportional hazards regression model was fitted using R version 4.1.0 (The R Foundation, Vienna, Austria) on a 64-bit Windows system.
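A hypothetical pre-flight check for such a setup (not code from the protocol) would confirm that the PyTorch build reaches the GPU through CUDA 10.1:

```python
import torch

# Sketch only: confirm PyTorch can reach the GPU via CUDA before training.
assert torch.cuda.is_available(), "CUDA is not visible to PyTorch"
print("CUDA runtime version:", torch.version.cuda)   # expected: 10.1
print("Device:", torch.cuda.get_device_name(0))      # expected: TITAN RTX
props = torch.cuda.get_device_properties(0)
print(f"Total VRAM: {props.total_memory / 1024**3:.1f} GB")
```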
3. Optimized GPU-Accelerated Deep Learning

The computations were performed on a Lenovo computer with a Windows 10 (64-bit) operating system, an Intel Core i7-8700 CPU @ 3.20 GHz, an NVIDIA GeForce RTX 2060 graphics card, and 16.0 GB of RAM. For GPU acceleration, the NVIDIA CUDA Toolkit 10.1 computing platform and the NVIDIA cuDNN v7.6.5 deep neural network acceleration library were used. All models were implemented in the TensorFlow 2.1.0 framework with the Keras 2.3.1 deep learning library, using Python 3.7.3 in the Spyder IDE (v3.3.3).
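As an illustration (not taken from the protocol), a short smoke test could confirm that the TensorFlow build was compiled against CUDA/cuDNN and that a minimal Keras model trains:

```python
import tensorflow as tf
from tensorflow import keras

# Sketch only: report CUDA support and run one training step through
# Keras so that the GPU-accelerated kernels are actually exercised.
print("Built with CUDA:", tf.test.is_built_with_cuda())
print("GPUs visible:", tf.config.list_physical_devices("GPU"))

model = keras.Sequential([
    keras.layers.Dense(8, activation="relu", input_shape=(4,)),
    keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.fit(tf.random.normal((32, 4)), tf.random.normal((32, 1)), epochs=1, verbose=0)
```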

About PubCompare

Our mission is to provide scientists with the largest repository of trustworthy protocols and intelligent analytical tools, giving them the information they need to design robust protocols that minimize the risk of failure.

We believe the most crucial step is granting scientists access to a wide range of reliable sources and to new tools that surpass human capabilities.

At the same time, we trust scientists to decide how to construct their own protocols from this information, as they are the experts in their field.
