
GTX 1060

Manufactured by NVIDIA

The NVIDIA GeForce GTX 1060 is a high-performance graphics processing unit (GPU) designed for desktop computers. It is based on NVIDIA's Pascal architecture and features 1,280 CUDA cores, a boost clock speed of up to 1.7 GHz, and 6 GB of GDDR5 video memory. The GTX 1060 delivers smooth, efficient graphics performance across a variety of applications, including gaming, video editing, and scientific computing.

Automatically generated - may contain errors

10 protocols using GTX 1060

1. AI Model Training and Evaluation


Model training. The experimental environment was as follows: Windows 10, Intel(R) Core(TM) i7-8700 CPU @ 3.20 GHz, 16 GB RAM, NVIDIA GTX 1060 graphics card, Python 3.7. Model parameters were set as follows: input image size 416 × 416, batch size 4, and label smoothing 0.05. A breakpoint-continuation training method was adopted: a breakpoint was set every 350 epochs, giving four breakpoints in total. Each 350-epoch segment generated 350 weight files, and the best weight file was manually selected as the initial weights for the next segment. The total number of training epochs was 1,400. During testing, the confidence threshold was set to 0.4 and the IoU threshold to 0.4.
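The breakpoint-continuation scheme can be summarized in code. The following is a minimal sketch under the reading that "times" means training epochs; train_one_epoch and validate are hypothetical placeholders, not the authors' actual training code.

```python
# Minimal sketch of the breakpoint-continuation scheme described above.
# train_one_epoch() and validate() are hypothetical placeholders,
# not the authors' code.

def train_one_epoch(weights):
    """Placeholder: run one epoch of training and return updated weights."""
    return weights  # stub

def validate(weights):
    """Placeholder: return a validation score for a set of weights."""
    return 0.0  # stub

EPOCHS_PER_BREAKPOINT = 350
NUM_BREAKPOINTS = 4          # 4 x 350 = 1,400 total training epochs

weights = "initial_weights"  # e.g., pretrained backbone weights
for segment in range(NUM_BREAKPOINTS):
    checkpoints = []
    for epoch in range(EPOCHS_PER_BREAKPOINT):
        weights = train_one_epoch(weights)
        checkpoints.append(weights)  # one weight file saved per epoch
    # The protocol selects the best weight file manually; here that
    # choice is approximated with a validation score.
    weights = max(checkpoints, key=validate)
```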

Evaluation metrics. In this paper, we evaluate the effectiveness of model training in terms of detection accuracy and efficiency. The metrics used are the mean Average Precision (mAP), i.e., the detection accuracy (AP) averaged over all categories, and the number of image frames processed per second (FPS) by the algorithm.
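For concreteness, mAP is simply the per-category AP values averaged; the short sketch below illustrates this with made-up AP numbers (not results from the cited protocol).

```python
# Minimal illustration: mAP is the mean of per-category average precisions.
# The AP values below are made up for demonstration only.
per_class_ap = {"class_a": 0.91, "class_b": 0.84, "class_c": 0.77}
mAP = sum(per_class_ap.values()) / len(per_class_ap)
print(f"mAP = {mAP:.3f}")  # mAP = 0.840
```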

2. Single-Molecule Localization Microscopy

Each frame from an image stack was first box-filtered with a box size of four times the FWHM of a 2D Gaussian PSF. We note that each pixel was weighted by the inverse of its variance during this box-filtering. The low-pass-filtered image was then subtracted from the raw image, followed by recognition of local maxima. The local maxima from all frames of the image stack were then submitted for 2D Gaussian single-PSF fitting.
The 2D Gaussian single-PSF fitting was performed on a GPU (NVIDIA GTX 1060, CUDA 8.0) using the Maximum Likelihood Estimation (MLE) algorithm. In brief, the likelihood function at each pixel was built by convolving the Poisson distribution of the shot noise, governed by the photons emitted from nearby fluorophores, with the Gaussian distribution of the readout noise, characterized by the expectation, variance, and analog-to-digital conversion factor pre-calibrated as mentioned above. The fitting accuracy was estimated by the Cramér-Rao lower bound (CRLB).
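A minimal NumPy/SciPy sketch of the detection step (inverse-variance-weighted box filtering, background subtraction, and local-maximum detection) is given below. It is a CPU illustration of the described approach, not the authors' GPU implementation; the variance map, PSF FWHM, and intensity threshold are assumed inputs.

```python
import numpy as np
from scipy.ndimage import uniform_filter, maximum_filter

def detect_candidates(frame, var_map, fwhm_px, threshold):
    """Sketch of the detection step described above (not the authors' code).

    frame:     raw camera frame (2D array)
    var_map:   pre-calibrated per-pixel readout-noise variance
    fwhm_px:   FWHM of the 2D Gaussian PSF, in pixels
    threshold: minimum high-pass intensity for a candidate
    """
    box = int(round(4 * fwhm_px))            # box size: 4x the PSF FWHM
    w = 1.0 / var_map                        # inverse-variance weights
    low_pass = uniform_filter(frame * w, box) / uniform_filter(w, box)
    high_pass = frame - low_pass             # subtract the low-pass image
    # Local maxima of the high-pass image above the threshold
    is_max = maximum_filter(high_pass, size=3) == high_pass
    return np.argwhere(is_max & (high_pass > threshold))
```

The candidate coordinates returned here would then be cropped into small regions of interest and passed to the GPU-based MLE fitter.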
3. Performance Edge VR for Oculus Rift

Performance Edge VR was developed for the Oculus Rift VR platform, which requires a direct cable connection to a high-performance gaming laptop (minimum specifications: Intel i5-4590 or greater, NVIDIA GTX 1060 video card or greater, 8 GB+ RAM, Windows 10, and a compatible video output; used in the current study: Alienware Dell 15R3, Intel Core i7-7700HQ CPU @ 2.8 GHz, NVIDIA GTX 1080, 16 GB RAM, Windows 10 Pro) and external VR positioning sensors. Respiratory rate measurements and in-application biofeedback were provided by an EquiVital biosensor respiratory harness (Hidalgo, UK) with an integrated transmitter (SEM); the digital signal was transmitted wirelessly via Bluetooth to the laptop. Intellectual property relating to Performance Edge VR is owned by The University of Newcastle. If interested in accessing the Performance Edge VR application for research purposes, please contact the corresponding author.
4. W-Net GAN for Seismic Inversion

The proposed seismic inversion algorithm was implemented in Python (version 3.9). We used PyTorch (version 1.13) with CUDA (version 11.7) for the W-Net GAN model and training code. Training was conducted on a computer running the Windows 11 operating system with an Intel® Core™ i7-8750H CPU and an NVIDIA® GeForce™ GTX 1060. The average training time of the W-Net GAN was approximately 45 s per epoch for all the synthetic application examples and 55 s per epoch for the real application example.
5. Deep Learning-Powered Image Classification

The work was implemented in Python™ with Keras and TensorFlow in a Windows 10 environment and was run on the Nvidia® GeForce® GTX 1060 graphics processing unit (GPU).
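A typical way to confirm that Keras/TensorFlow computations are actually placed on the GPU is shown below. This check is a generic illustration assuming a TensorFlow 2-style API, not a step from the cited protocol.

```python
import tensorflow as tf

# Generic check (not part of the cited protocol): list the GPUs visible
# to TensorFlow so that Keras ops run on the GTX 1060 rather than the CPU.
gpus = tf.config.list_physical_devices("GPU")
print("GPUs visible to TensorFlow:", gpus)
```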
6. Deep Learning for Hippocampus Segmentation

The training of the CNN model was performed using an NVIDIA GTX 1060 GPU with the CUDA 9 toolkit, cuDNN 7.5, and TensorFlow 1.12.1. All pre-processing steps were performed using the Visualization Toolkit (VTK) and Python 2.7. All registrations were performed using the NiftyReg package [30]. Brain extraction and bias field correction were done using the Brain Extraction Tool (BET), part of the FSL package. All statistical analyses were performed using the open-source R and RStudio software packages. The trained model, pre-processing scripts, and instructions for using the tool are available at https://github.com/snobakht/DeepHarP_Hippocampus/tree/main/DeepHarP.
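For orientation, typical invocations of the registration and brain-extraction tools named above look like the following sketch. The file names are hypothetical and the flags are the tools' standard ones, not parameters taken from this protocol.

```python
import subprocess

# Illustrative calls to the tools named above (hypothetical file names;
# standard flags, not parameters from this protocol).

# NiftyReg affine registration: align a floating image to a reference.
subprocess.run(
    ["reg_aladin", "-ref", "template.nii.gz", "-flo", "subject.nii.gz",
     "-res", "subject_aligned.nii.gz", "-aff", "affine.txt"],
    check=True,
)

# FSL BET brain extraction (-f sets the fractional intensity threshold).
subprocess.run(
    ["bet", "subject_aligned.nii.gz", "subject_brain.nii.gz", "-f", "0.5"],
    check=True,
)
```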
7. Transfer Learning on Desktop Hardware

All implementations of the transfer learning framework proposed in this article were performed on personal desktops with an Intel Core i7-8700 CPU (3.20 GHz), 16 GB RAM, an NVIDIA GTX 1060 (6 GB), and a 64-bit operating system. The operating system was Windows 10 Professional, and the software environment was Python 3.6 with TensorFlow and Keras. Since the number of samples was not very large, all algorithms were run in a CPU environment.
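A minimal Keras transfer-learning skeleton of the kind this setup supports is sketched below. The base network, input size, and class count are assumptions for illustration, not details from the cited article.

```python
# Minimal transfer-learning skeleton in Keras (a sketch; the base network,
# input size, and class count are assumptions, not details from the article).
from tensorflow.keras.applications import VGG16
from tensorflow.keras import layers, models

base = VGG16(weights="imagenet", include_top=False, input_shape=(224, 224, 3))
base.trainable = False                       # freeze the pretrained features

model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dense(128, activation="relu"),    # small trainable head
    layers.Dense(10, activation="softmax"),  # assumed 10 target classes
])
model.compile(optimizer="adam", loss="categorical_crossentropy",
              metrics=["accuracy"])
```

Freezing the pretrained base and training only the small head is what makes CPU-only training feasible when the sample count is modest, as the article notes.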
8. VR System for Immersive Neuroscience Research

The VR system was set up as previously described (Huang et al., 2022). Specifically, the VR system used in this study (Shanghai VR-Sens Intelligent Technology Co. Ltd., Shanghai, China) consisted of a VR interface, a VR headset, two controllers, and two cameras (Fig. 2). The VR system was linked to a conventional gaming computer (CPU: Intel i7-7700; GPU: NVIDIA GTX 1060). The VR interface offered scenarios in virtual reality. Two tracking cameras were used to precisely determine the location of the VR headset, and the two controllers were employed to interact with VR objects. Custom software (VR-SENS VR Implant Tutorial Software) was utilized as a data visualization tool. The VR-compatible files were previously generated computationally.
9. CellexalVR: Immersive Research Simulations

Users require a VR-ready workstation or laptop with a suitable graphics card (Steam recommends an NVIDIA GTX 1060 or higher) running Windows 10. CellexalVR will work with any SteamVR-compatible headset; the HTC Vive and Valve Index were used during development, and compatibility for other VR systems will be added. These VR systems are readily available and are priced for the home consumer.
CellexalVR was developed on a gaming-class workstation comprising an Intel i7 processor, 16 GB RAM, and an NVIDIA GTX 1080 graphics card.
10. Image Preprocessing for U-Net Model

Image intensity values can vary not only across different tissue types but also due to noise and scanner artifacts. It has been suggested [39] that intensity normalization plays a significant role as a preprocessing stage. The purpose of intensity normalization is to standardize the mean and variance of image intensities; we used the normalization process to rescale pixel intensity values to the range 0 to 1. More details on normalization are given in [40]. We applied simple noise reduction and image smoothing to all images to improve the quality of the U-Net input images. The denoising, normalization, and resizing processes were run using custom code written with the OpenCV-Python library. All images were resized to 256 × 256 to reduce U-Net training time, in accordance with our GPU's computational capability and memory (NVIDIA GeForce GTX 1060 with 6 GB memory).
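A minimal OpenCV-Python sketch of the described preprocessing (denoising/smoothing, rescaling to [0, 1], and resizing to 256 × 256) follows. The specific filter and its parameters are assumptions, since the protocol does not name them.

```python
import cv2
import numpy as np

def preprocess(path):
    """Sketch of the preprocessing described above (the filter choice and
    its parameters are assumptions; the protocol does not specify them)."""
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE).astype(np.float32)
    img = cv2.GaussianBlur(img, (3, 3), 0)             # simple denoising/smoothing
    img = (img - img.min()) / (img.max() - img.min())  # rescale to [0, 1]
    return cv2.resize(img, (256, 256))                 # match U-Net input size
```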

