The largest database of trusted experimental protocols

GeForce GTX 1070 Ti GPU

Manufactured by NVIDIA
Sourced in United States

The GeForce GTX 1070 Ti is a high-performance graphics processing unit (GPU) designed by NVIDIA. Built on the Pascal architecture, it features 2,432 CUDA cores, a base clock of 1,607 MHz (boost up to 1,683 MHz), and 8 GB of GDDR5 video memory.

Automatically generated - may contain errors

Lab products found in correlation

4 protocols using the GeForce GTX 1070 Ti GPU

1

High-Performance Computing for Data Analysis

For data analysis, two workstation computers were used. Both systems boot into Windows 10 (Microsoft) from a 1 TB M.2 drive (Samsung 970 EVO Plus). The first system consists of an i9-9900X processor (Intel), a GeForce GTX 1070 Ti GPU (Nvidia), and 128 GB of DDR4 RAM (Corsair). The second system has an i9-9900K processor (Intel), a GeForce RTX 2070 GPU (Nvidia), and 64 GB of DDR4 RAM (G.Skill Ripjaws). Data were stored on a 4 TB RAID0 array consisting of two 2 TB drives (Samsung) and a 2 TB RAID0 array consisting of two 1 TB drives (Samsung), respectively. System integration, support, and maintenance were performed by Nobska Imaging, Inc.
2

Fruit Detection Model Performance Evaluation

The experiments of this research were conducted using an Intel i5 64-bit quad-core CPU operating at a frequency of 3.30 GHz (Intel, Santa Clara, CA, USA). The system had 16 GB of RAM and an NVIDIA GeForce GTX 1070 Ti GPU with 8 GB of memory. The chosen model framework was PyTorch, with CUDA 11.1 and Python 3.8.10 for implementation. Table 1 lists some of the hyperparameters used in the experiments.
The criteria used for assessing the performance of fruit detection encompassed precision, recall, mean average precision (mAP), and F1 score (Padilla et al., 2020). The metrics are defined in Equations (9)–(12):
where R and P are the recall and precision, respectively. Using mAP is a valuable approach to assess the model performance across different confidence levels.
with AP expressed in Equation (11.a):
where p(r̃) represents the calculated precision at a given recall value r̃, while N_cls is the total number of classes.
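As a hedged sketch (not the authors' code), the count-based definitions of these metrics can be written directly in Python; `mean_ap` simply averages per-class AP values over the N_cls classes, and all function names here are illustrative:

```python
def precision(tp, fp):
    # Precision = TP / (TP + FP): fraction of detections that are correct.
    return tp / (tp + fp) if (tp + fp) else 0.0

def recall(tp, fn):
    # Recall = TP / (TP + FN): fraction of ground-truth objects found.
    return tp / (tp + fn) if (tp + fn) else 0.0

def f1_score(p, r):
    # F1 = 2 * P * R / (P + R): harmonic mean of precision and recall.
    return 2 * p * r / (p + r) if (p + r) else 0.0

def mean_ap(ap_per_class):
    # mAP: mean of the per-class AP values over the N_cls classes.
    return sum(ap_per_class) / len(ap_per_class)
```

For example, a class with 8 true positives, 2 false positives, and 2 false negatives yields precision 0.8 and recall 0.8.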
3

Object Detection Model Evaluation Metrics

In this study, the experiments were conducted on a computer with an Intel i5 64-bit 3.30 GHz quad-core CPU (Santa Clara, CA, USA) and an NVIDIA GeForce GTX 1070 Ti GPU.
The model receives images of 416×416 pixels as inputs. Due to GPU memory constraints, the batch size was set to 8. The model was trained for 160 epochs with an initial learning rate of 10⁻³, which was then divided by 10 after 60 and 90 epochs. The momentum and weight decay were set to 0.9 and 0.0005, respectively.
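A minimal plain-Python sketch of this step schedule (equivalent in effect to PyTorch's `MultiStepLR` with milestones [60, 90] and gamma 0.1; the function name is illustrative, and whether the drop applies at or strictly after the milestone epoch is an assumption):

```python
def lr_at_epoch(epoch, base_lr=1e-3, milestones=(60, 90), gamma=0.1):
    # Step schedule from the protocol: initial LR 1e-3,
    # divided by 10 after epochs 60 and 90 (160 epochs total).
    lr = base_lr
    for m in milestones:
        if epoch >= m:
            lr *= gamma
    return lr
```

So epochs 0–59 train at 1e-3, epochs 60–89 at 1e-4, and the remaining epochs at 1e-5.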
A series of experiments were conducted to evaluate the performance of the proposed method. The indexes for evaluation of the trained model are defined as follows:

Recall = TP / (TP + FN)

Precision = TP / (TP + FP)

where TP, FN, and FP are abbreviations for true positives (correct detections), false negatives (misses), and false positives (false detections).
To better show the comprehensive performance of the model, the F1 score was adopted as a trade-off between recall and precision, defined in Equation (15):

F1 = 2 × Recall × Precision / (Recall + Precision)
Another evaluation metric for object detection, Average Precision (AP) [34,36], was also used in this study. It shows the overall performance of a model under different confidence thresholds, and is defined as follows:

AP = Σ_n (r_{n+1} − r_n) · p_interp(r_{n+1})

with

p_interp(r_{n+1}) = max_{r̃ ≥ r_{n+1}} p(r̃)

where p(r̃) is the measured precision at recall r̃.
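A hedged plain-Python sketch of this interpolated AP computation, assuming a finite list of (recall, precision) points from the precision–recall curve (function name is illustrative):

```python
def interpolated_ap(recalls, precisions):
    # AP = sum_n (r_{n+1} - r_n) * p_interp(r_{n+1}), where
    # p_interp(r) is the maximum precision at any recall >= r.
    # A recall of 0 is prepended so the first segment is counted.
    pts = sorted(zip(recalls, precisions))
    rs = [0.0] + [r for r, _ in pts]
    ps = [p for _, p in pts]
    ap = 0.0
    for n in range(len(ps)):
        p_interp = max(ps[n:])  # max precision over recall >= rs[n + 1]
        ap += (rs[n + 1] - rs[n]) * p_interp
    return ap
```

For instance, a curve with points (recall 0.5, precision 1.0) and (recall 1.0, precision 0.5) gives AP = 0.5·1.0 + 0.5·0.5 = 0.75. Taking the running maximum makes the interpolated precision monotonically non-increasing in recall, which is why AP is robust to local wiggles in the raw curve.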
4

Computer Vision Model Training Protocol

In this study, the computer used had an Intel i5 64-bit 3.30 GHz quad-core CPU (Santa Clara, CA, USA), 16 GB of RAM, and an NVIDIA GeForce GTX 1070 Ti GPU. The model framework was PyTorch, with CUDA 11.1 and Python 3.8.10. The batch size was set to 8 and the input image size to 416×416. Some of the hyperparameters used in this study were set as follows: number of epochs: 400, learning rate: 0.001, optimizer weight decay: 94.75, SGD momentum: 96.3, warm-up initial momentum: 0.8, box loss gain: 0.05, classification loss gain: 0.5, cls BCE loss positive weight: 1.0, object loss gain: 1.0, and anchor multiple threshold: 4.0.
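For readability, the settings listed above can be collected into a single configuration dictionary. This is a sketch, not the authors' code: the key names are illustrative, and the values are copied verbatim from the protocol text.

```python
# Training configuration as listed in the protocol above
# (key names are illustrative; values copied from the text).
CONFIG = {
    "epochs": 400,
    "learning_rate": 0.001,
    "batch_size": 8,
    "image_size": (416, 416),
    "weight_decay": 94.75,
    "momentum": 96.3,
    "warmup_initial_momentum": 0.8,
    "box_loss_gain": 0.05,
    "cls_loss_gain": 0.5,
    "cls_bce_pos_weight": 1.0,
    "obj_loss_gain": 1.0,
    "anchor_multiple_threshold": 4.0,
}
```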

About PubCompare

Our mission is to provide scientists with the largest repository of trustworthy protocols and intelligent analytical tools, giving them the extensive information they need to design robust protocols and minimize the risk of failure.

We believe that the most crucial aspect is to grant scientists access to a wide range of reliable sources and new useful tools that surpass human capabilities.

However, we trust in allowing scientists to determine how to construct their own protocols based on this information, as they are the experts in their field.


Revolutionizing how scientists
search and build protocols!