The largest database of trusted experimental protocols

GeForce GTX 1060

Manufactured by NVIDIA
Sourced in United States

The GeForce GTX 1060 is a graphics processing unit (GPU) designed and manufactured by NVIDIA. Built on the Pascal architecture, it features 1,280 CUDA cores, a base clock of 1,506 MHz, and support for DirectX 12 and OpenGL 4.5, delivering high-performance graphics processing for a variety of applications.

Automatically generated - may contain errors

30 protocols using the GeForce GTX 1060

1

Optimizing Neural Network Architecture for Wrench Prediction

First, a deep feed-forward neural network was trained with the idealized simulated dataset (dataset_train,sim). To find an optimal network architecture for the problem of predicting end-effector wrenches, an automated hyperparameter optimization was performed using the Optuna toolbox (Akiba et al., 2019) and the LAMB optimizer (You et al., 2019). Table 3 lists the fixed and the varied neural network parameters along with their ranges. The loss function used to train a particular network is itself an optimization parameter and can vary between mean squared error and mean absolute error (see Table 3). To compare the tested architectures with a common metric and identify the network with the highest prediction accuracy, a separate evaluation error was calculated using the root mean squared error. Training was done on four GPUs of a DGX-2. In total, 700 different architectures were tried, requiring approximately 19 days of computing time. The resulting optimal model with the highest prediction accuracy is named NN_sim. The trained model was run on a regular desktop PC with an Intel Core i7 CPU and an NVIDIA GeForce GTX 1060; one evaluation step takes 21.5 ms.
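The search described above can be sketched as follows. This is a simplified stand-in for the Optuna study: a pure-Python random search over an assumed hyperparameter space (the parameter names, ranges, and the toy RMSE surrogate are all illustrative, not the authors' exact setup — a real run would train a network per trial and evaluate its validation RMSE).

```python
import math
import random

def rmse_surrogate(n_layers, n_units, lr):
    # Toy stand-in for "train network, compute validation RMSE" (illustrative).
    return (abs(n_layers - 4) * 0.1
            + abs(math.log10(lr) + 4) * 0.05
            + abs(n_units - 256) / 2560)

def random_search(n_trials=700, seed=0):
    """Try n_trials hyperparameter configurations, keep the lowest RMSE."""
    rng = random.Random(seed)
    best = None
    for _ in range(n_trials):
        params = {
            "n_layers": rng.randint(2, 8),                    # assumed range
            "n_units": rng.choice([64, 128, 256, 512, 1024]), # assumed range
            "loss": rng.choice(["mse", "mae"]),  # the training loss is itself tuned
            "lr": 10 ** rng.uniform(-5, -2),
        }
        score = rmse_surrogate(params["n_layers"], params["n_units"], params["lr"])
        if best is None or score < best[0]:
            best = (score, params)
    return best

best_rmse, best_params = random_search()
```

Note that, as in the protocol, all trials are ranked by a single common metric (RMSE) even though the training loss varies between trials.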
2

Semantic Segmentation of Brain MRI using CNN

A convolutional neural network (CNN) for semantic image segmentation, composed of an encoder and a corresponding decoder subnetwork, was set up [2]. The network was pre-initialized with layers and weights from a pre-trained VGG 16 model [32].
The network used a pixel classification layer to predict the categorical label for every pixel in the input images. Class frequency of CSF (8.6%), brain (22.1%), tissue (14.3%), and background (55.0%) was obtained. Since the class “CSF” was underrepresented in the training data, a class weighting was carried out to balance classes.
A stochastic gradient descent with momentum (0.9) optimizer was used, and a regularization term on the weights was added to the loss function with a weight decay of 0.0005. Cross-entropy was used as the loss function for optimizing the classification model. The initial learning rate was set to 0.001 and was multiplied by a factor of 0.3 every 10 epochs. The network was tested against the validation data set every epoch, and training was stopped when the validation accuracy converged, preventing the network from overfitting the training data set. Training was conducted on a single GPU (NVIDIA GeForce GTX 1060), and the validation accuracy converged after 6000 iterations.
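Two details above can be made concrete: the class weighting that compensates for the underrepresented "CSF" class, and the step learning-rate schedule (initial 0.001, multiplied by 0.3 every 10 epochs). The protocol does not specify the exact weighting scheme; median-frequency balancing is one common choice and is assumed here for illustration.

```python
import statistics

# Class frequencies reported in the protocol.
freq = {"CSF": 0.086, "brain": 0.221, "tissue": 0.143, "background": 0.550}

# Median-frequency balancing (assumed scheme): weight = median(freq) / freq(class),
# so rare classes like CSF get a weight > 1 and frequent ones < 1.
median = statistics.median(freq.values())
class_weights = {c: median / f for c, f in freq.items()}

def learning_rate(epoch, lr0=1e-3, drop=0.3, step=10):
    """Step schedule: multiply the initial rate by `drop` every `step` epochs."""
    return lr0 * drop ** (epoch // step)
```

With these numbers, CSF receives the largest weight and background the smallest, balancing their contribution to the cross-entropy loss.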
3

Immersive VR Experience with Oculus Rift

The experiment was performed on a PC equipped with an NVIDIA GeForce GTX 1060 graphics card. The setup included the Oculus Rift (oculus.com/rift) head-mounted display (HMD) with 2160 × 1200 resolution (1080 × 1200 per eye), a 110° field of view, and a 90 Hz refresh rate, providing 3D immersive viewing, head rotation and position tracking, and spatialized audio. The application was created with the Unity game engine (version 2018.2.1), and the environment was built using Autodesk Maya and Adobe Photoshop. The virtual characters were designed and rigged using Autodesk Character Generator, and the SALSA plug-in for Unity was used for lip synchronization.
4

Neural Network Training Protocols

All neural networks were implemented in PyTorch and trained on an NVIDIA GeForce GTX 1060 graphics card with 6 GB of memory. They were trained for 100 or 200 epochs using the Adam optimizer with momentum parameters β1 = 0.5 and β2 = 0.999. The initial learning rate was 1 × 10−4 and was decayed by a factor of 0.3 every 50 epochs. The mini-batch size was 16 for U-Net and Pix2Pix GAN and 8 for Cycle GAN to fit in the GPU memory.
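A minimal sketch of this training configuration in PyTorch (the document's stated framework) might look as follows; the model and loss here are placeholders, not the protocol's U-Net, Pix2Pix GAN, or Cycle GAN.

```python
import torch

model = torch.nn.Linear(8, 1)  # placeholder network, not the actual architecture

# Adam with the reported momentum parameters beta1 = 0.5, beta2 = 0.999
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4, betas=(0.5, 0.999))

# Learning rate multiplied by 0.3 every 50 epochs
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=50, gamma=0.3)

for epoch in range(100):  # 100 (or 200) epochs per the protocol
    optimizer.zero_grad()
    loss = model(torch.randn(16, 8)).pow(2).mean()  # dummy loss, mini-batch of 16
    loss.backward()
    optimizer.step()
    scheduler.step()
```

After 100 epochs the learning rate has been decayed twice, to 1 × 10−4 × 0.3² = 9 × 10−6.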
5

Scaling and Augmenting Images for Deep Learning

The cropped cell images were originally 150 × 150 pixels and were scaled up to 224 × 224 (as required by VGG16) via bilinear interpolation. During training, images were processed in mini-batches of 32 with minor augmentation: randomized rotation of up to 10°, vertical and horizontal flipping, and vertical and horizontal shifting of up to 5%, to reduce overfitting and artificially inflate the number of training images. We trained the model using RMSprop optimization, finding similar performance with Adam optimization [56], with a learning rate of 2.9 × 10−5 on an NVIDIA GeForce GTX 1060.
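The augmentation policy above can be sketched as a per-image parameter sampler; the actual resampling (including the bilinear 150 × 150 → 224 × 224 upscale) would be done by the image library, and this sampler is purely illustrative of the stated ranges.

```python
import random

def sample_augmentation(rng):
    """Draw one set of augmentation parameters within the protocol's ranges."""
    return {
        "rotation_deg": rng.uniform(-10, 10),        # rotation up to 10 degrees
        "flip_horizontal": rng.random() < 0.5,       # random horizontal flip
        "flip_vertical": rng.random() < 0.5,         # random vertical flip
        "shift_x_frac": rng.uniform(-0.05, 0.05),    # shift up to 5% of width
        "shift_y_frac": rng.uniform(-0.05, 0.05),    # shift up to 5% of height
    }

rng = random.Random(42)
batch_params = [sample_augmentation(rng) for _ in range(32)]  # one per image in the mini-batch
```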
6

Pear Counting with YOLOv4 and Deep SORT

The best performing YOLOv4 model that satisfied the criteria in the model comparison was converted to the Tensorflow™ format. Deep SORT, in combination with YOLOv4, was implemented locally to track the pears in an unseen test mobile phone video of resolution 1080 × 1920, 32 s long, with a frame rate of 30 FPS. The hardware specification was as follows: Quad-core Intel® Core™ i7-7700HQ @2.80GHz, 16.0 GB RAM and NVIDIA GeForce GTX 1060.
Two counting methods were compared in this study: (1) region-of-interest (ROI) method and (2) unique object ID method. The ROI method was based on the number of unique object centroids tracked by Deep SORT that would cross the ROI, which is a horizontal line. Different ROIs were tested, and 50% of the height of the video was deemed to be the optimal ROI. For the second method, the counts were based on the number of unique object IDs generated by Deep SORT’s tracking mechanism. Figure 12 illustrates the pear counting system.
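The ROI method above can be sketched as a line-crossing counter: a tracked centroid is counted once when its trajectory crosses the horizontal line at 50% of the frame height. The track IDs and centroid histories below are illustrative, and the frame height of 1920 assumes the longer dimension of the 1080 × 1920 portrait video is vertical.

```python
def count_roi_crossings(tracks, frame_height=1920, roi_frac=0.5):
    """Count unique tracks whose centroid crosses the horizontal ROI line.

    tracks: {track_id: [(x, y), ...]} centroid history per Deep SORT ID.
    """
    roi_y = frame_height * roi_frac
    counted = set()
    for tid, centroids in tracks.items():
        for (x0, y0), (x1, y1) in zip(centroids, centroids[1:]):
            # Consecutive centroids on opposite sides of the line => crossing.
            if (y0 - roi_y) * (y1 - roi_y) <= 0 and y0 != y1:
                counted.add(tid)
                break  # count each track at most once
    return len(counted)

# Illustrative example: track 1 crosses the line at y = 960, track 2 does not.
tracks = {1: [(100, 900), (100, 1000)], 2: [(200, 100), (200, 200)]}
roi_count = count_roi_crossings(tracks)  # -> 1
```

The second method's count is simply the number of unique track IDs, i.e. `len(tracks)`.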
7

Diabetic Foot Thermograms Network (DFTNet)

After being unable to obtain satisfactory results with SVM, MLP, AlexNet, and GoogLeNet, especially on the five-level classification task, we propose a new deep learning architecture, the Diabetic Foot Thermograms Network (DFTNet). Compared with the 22 layers of GoogLeNet, this design considerably reduces the number of layers, which also decreases training time.
The parameters used for training DFTNet are a maximum of 100 epochs, a mini-batch size of 64, and the Adam solver with a learning rate of 0.001. The computer configuration was: CPU Intel Core i7-7700HQ @ 2.8 GHz, GPU NVIDIA GeForce GTX 1060, 16 GB RAM, MATLAB software. The structure of DFTNet is shown in Table 1.
8

Comparative Evaluation of VR Headsets

The experiment was conducted in two VR devices—the HTC Vive and the Oculus Rift—and ran on a PC with an NVIDIA GeForce GTX 1060 graphics card. The IPD was set to the average of 63 mm (Dodgson, 2004 ). The associated Rift Touch controllers and Vive Motion controllers were used by participants, along with a standard tennis ball with a 3.5-cm radius. Display specifications of the two headsets are summarized in Table 1.

Display properties of the two VR headsets

Device                       HTC Vive             Oculus Rift
Display resolution per eye   1200 × 1800 pixels   960 × 1080 pixels
Field of view (H × V)        110° × 113°          94° × 93°
Pixel size                   6.2 arc min          5.2 arc min
Lens                         Fresnel              Hybrid Fresnel
Refresh rate                 90 Hz                90 Hz
9

Virtual Reality Hand Tracking Experiment

The position and movement of participants' hands were tracked using a motion tracker (Leap Motion Controller by Ultraleap Ltd; hand tracking running at 150 fps). Participants saw virtual hands from a first-person perspective through a head-mounted display (HMD: Oculus Rift CV1, which displayed a stereoscopic image at a resolution of 2160 × 1200); no other body parts were presented. The virtual world was developed in Unity3D and run on a Windows PC (Level Infinity by iiyama: Intel Core i7-7700HQ at 2.8 GHz, 16 GB RAM, and NVIDIA GeForce GTX 1060). The visual stimulus, displayed at 90 fps, was an outdoor scene based on a Japanese city model containing many familiar objects (e.g., buildings, cars, and traffic signals).
10

Deep Learning Language Model Training

The LA models were trained using filter size s = 9, d_out = 1024, and the Adam optimizer (Kingma and Ba, 2015) with a learning rate of 5 × 10−5, a batch size of 150, and early stopping after no improvement in validation loss for 80 epochs. We selected the hyperparameters via random search (Supplementary Appendix: Hyperparameters). Models were trained either on an NVIDIA Quadro RTX 8000 with 48 GB of vRAM or an NVIDIA GeForce GTX 1060 with 6 GB of vRAM.
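The early-stopping rule above (stop once validation loss has not improved for 80 consecutive epochs) can be sketched as follows; the loss sequences are illustrative.

```python
def train_with_early_stopping(val_losses, patience=80):
    """Return the epoch (1-indexed) at which training stops, or the total
    number of epochs if patience is never exhausted."""
    best = float("inf")
    since_improvement = 0
    for epoch, loss in enumerate(val_losses, start=1):
        if loss < best:
            best = loss
            since_improvement = 0
        else:
            since_improvement += 1
            if since_improvement >= patience:
                return epoch  # patience exhausted: stop here
    return len(val_losses)
```

For example, with a validation loss that improves only at epoch 1, training stops at epoch 81 (1 improving epoch + 80 epochs of patience).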

About PubCompare

Our mission is to provide scientists with the largest repository of trustworthy protocols and intelligent analytical tools, thereby offering them extensive information to design robust protocols aimed at minimizing the risk of failures.

We believe that the most crucial aspect is to grant scientists access to a wide range of reliable sources and new useful tools that surpass human capabilities.

However, we trust in allowing scientists to determine how to construct their own protocols based on this information, as they are the experts in their field.
