The largest database of trusted experimental protocols

GeForce GTX 1660 Ti

Manufactured by NVIDIA

The GeForce GTX 1660 Ti is a mid-range graphics processing unit (GPU) developed by NVIDIA. It features the Turing architecture and is designed to deliver high-performance gaming experiences. The GTX 1660 Ti has 1,536 CUDA cores, a base clock speed of 1,500 MHz, and 6GB of GDDR6 video memory. This GPU is capable of rendering complex visual scenes and supporting advanced graphics features, making it a viable option for a range of computing applications.
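To relate such specifications to achievable throughput, the card's theoretical peak memory bandwidth follows from its memory bus width and per-pin data rate. A minimal sketch, assuming the commonly published 192-bit bus and 12 Gbps effective GDDR6 data rate for this card (figures not stated in the description above):

```python
# Theoretical peak memory bandwidth from bus width and effective data rate.
# The 192-bit bus and 12 Gbps/pin rate are assumed, commonly cited figures
# for the GTX 1660 Ti, not values taken from this page.
bus_width_bits = 192
data_rate_gbps = 12  # effective transfers per pin, in Gbit/s

bandwidth_gbs = bus_width_bits * data_rate_gbps / 8  # convert bits to bytes
print(f"Theoretical peak memory bandwidth: {bandwidth_gbs:.0f} GB/s")
```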

Automatically generated - may contain errors

12 protocols using GeForce GTX 1660 Ti

1

Numerical Simulation of SNES and GA

The numerical simulation was conducted on a GPU (GeForce GTX 1660 Ti, NVIDIA). The SNES used parameters ημ = 1 and ησ = 0.039, while the GA used a crossover rate of 85%, a mutation rate of 10%, and an elite rate of 20%, values taken from the literature for optimal performance [37]. The population size was 30 for both SNES and GA. The simulations were repeated 10 times to reduce the effect of randomness.
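A minimal sketch of a GA loop with the stated hyperparameters (crossover 85%, mutation 10%, elitism 20%, population 30) might look like the following; the fitness function, gene encoding, selection scheme, and mutation scale are placeholders, since the protocol does not specify them:

```python
import random

POP_SIZE, N_GENES = 30, 8                              # population size from the protocol
CROSSOVER_RATE, MUTATION_RATE, ELITE_RATE = 0.85, 0.10, 0.20

def fitness(ind):
    # Placeholder objective: maximize the sum of genes.
    return sum(ind)

def evolve(pop, n_generations=50):
    n_elite = int(ELITE_RATE * POP_SIZE)
    for _ in range(n_generations):
        pop.sort(key=fitness, reverse=True)
        next_pop = pop[:n_elite]                        # elitism: keep the best as-is
        while len(next_pop) < POP_SIZE:
            a, b = random.sample(pop[:POP_SIZE // 2], 2)  # parents from the better half
            child = list(a)
            if random.random() < CROSSOVER_RATE:        # single-point crossover
                cut = random.randrange(1, N_GENES)
                child = a[:cut] + b[cut:]
            child = [g + random.gauss(0, 0.5) if random.random() < MUTATION_RATE
                     else g for g in child]             # per-gene Gaussian mutation
            next_pop.append(child)
        pop = next_pop
    return max(pop, key=fitness)

random.seed(0)
pop = [[random.uniform(-1, 1) for _ in range(N_GENES)] for _ in range(POP_SIZE)]
best = evolve(pop)
```

Because the elite individuals are carried over unchanged each generation, the best fitness in the population never decreases.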
2

Improving CNN Performance via Deep Learning

First, we gathered all images at a size of 1280 × 720 pixels. We then applied a deep learning method to improve the CNN and obtain the best results. Training and analysis were done using the PyTorch 1.9.1 framework on Ubuntu 18.04.6 LTS, with an NVIDIA GeForce GTX 1660 Ti GPU. For deep learning we used packages such as opencv-python 3.4.11.43, NumPy 1.21.2, SciPy 1.21.2, and matplotlib 3.4.3.
3

Transfer Learning for Image Classification

The study was conducted in accordance with the Declaration of Helsinki (as revised in 2013). Approval was given to the study by the Institutional Review Board of the Third Xiangya Hospital and Xiangya Hospital, Central South University (No. 2019-S455). Informed consent was received after the procedure was fully explained to all participants and their legal guardians.
Transfer learning strategies have been widely used to exploit information learned from multiple domains. This cross-domain approach can be contrasted with training a model from scratch with randomly initialized weights. Ultimately, transfer learning is the technique by which knowledge gained from an already trained model is used to learn another data set. Figure 3 shows the transfer learning framework.
The batch size was set to 16, the learning rate was set to 0.0001, and all models were trained for 50 epochs. For fine-tuning, the network weights were initialized from weights trained on ImageNet. The training and testing processes of the proposed architectures were implemented in Python using the TensorFlow and PyTorch packages and run on an NVIDIA GeForce GTX 1660 Ti graphics card.
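Conceptually, fine-tuning from pretrained weights means reusing a feature extractor and training a new classification head on the target data. A toy, framework-free sketch of that split, using the batch size, learning rate, and epoch count from the protocol (the feature dimension, dataset, and backbone stand-in are invented for illustration):

```python
import math
import random

random.seed(0)
BATCH, LR, EPOCHS = 16, 1e-4, 50   # hyperparameters stated in the protocol
DIM = 32                           # feature dimension (invented for the toy example)

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Stand-in for a frozen, pretrained backbone: features are passed through
# unchanged, i.e. we assume they were already extracted by the pretrained net.
def backbone(x):
    return x

# Toy binary dataset: the label depends only on the first feature.
data = []
for _ in range(256):
    x = [random.gauss(0, 1) for _ in range(DIM)]
    data.append((x, 1.0 if x[0] > 0 else 0.0))

# Trainable classification head: a single linear layer with sigmoid output.
w = [0.0] * DIM
b = 0.0

for _ in range(EPOCHS):
    random.shuffle(data)
    for i in range(0, len(data), BATCH):
        batch = data[i:i + BATCH]
        gw, gb = [0.0] * DIM, 0.0
        for x, y in batch:
            f = backbone(x)
            p = sigmoid(sum(wj * fj for wj, fj in zip(w, f)) + b)
            err = p - y                      # gradient of cross-entropy w.r.t. logit
            for j in range(DIM):
                gw[j] += err * f[j]
            gb += err
        w = [wj - LR * gj / len(batch) for wj, gj in zip(w, gw)]
        b -= LR * gb / len(batch)

accuracy = sum(
    (sigmoid(sum(wj * fj for wj, fj in zip(w, backbone(x))) + b) > 0.5) == (y == 1.0)
    for x, y in data
) / len(data)
```

Only the head's weights are updated; in a real fine-tuning run the backbone layers would be a pretrained CNN, optionally unfrozen with a small learning rate.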
4

APTICE: VR-Enabled Cycle Ergometer for Depression

This proof-of-concept study is based on the augmented physical training for isolated and confined environments (APTICE) system. The aim of the system is to use physical exercise in a VR environment to improve the well-being of patients with depression. It is composed of a VR‐enabled cycle ergometer (VirZOOM Bike Controller) and a VR-based head-mounted display (Oculus Rift CV1, Oculus VR), which provides visual and auditory inputs. The VR application was developed by GAMIT (Petit-Quevilly) and ran on an Asus A15 TUF566IU-HN326T laptop with an AMD Ryzen 5 4600H processor, 16 GB of RAM, a 512 GB solid-state drive, and an NVIDIA GeForce GTX 1660 Ti 6 GB graphics card. The VR environment consisted of natural areas of forests and mountain plains (Figure 1). See Multimedia Appendix 1 for further images of the APTICE device.
5

PyTorch-Powered Deep Learning Development

The PyTorch library was used in this study. PyTorch is an open-source library that aims to lower the barrier to deep learning for both researchers and practitioners. The open-source Python programming language (v.3.6.1; Python Software Foundation, Wilmington, DE, USA) and the PyTorch library were used for model development. In our study, model training was performed on a computer equipped with 16 GB of RAM and an NVIDIA GeForce GTX 1660 Ti graphics card.
6

Deep Learning-Powered Soybean Seed Analysis

The processing unit was a Lenovo Y7000P laptop with an Intel Core i7-9750H CPU @ 2.60 GHz, 16 GB of RAM, and a single GPU (GeForce GTX 1660 Ti, NVIDIA). The deep-learning environment consisted of an integrated development environment (IDE) with Python 3.6, Keras (ver. 2.1.5), TensorFlow-GPU (ver. 1.13.1), and OpenCV 3 (ver. 3.4.2), running on 64-bit Windows 10. The synthetic-image procedure was run in the same environment (the GPU was not involved in the computation). Manual annotation of the real-world soybean seed images was performed in the same environment using LabelMe (ver. 3.16.5).
7

Deep Learning GPU-Powered Windows Environment

The technical environment ran Windows 10 Professional (64-bit), with software platforms including Anaconda 3.5.0, CUDA 10.2, and cuDNN 8.2. The computer was equipped with 16 GB of RAM, an Intel(R) Core(TM) i7-10750H processor, and an NVIDIA GeForce GTX 1660 Ti graphics card for GPU-intensive tasks. Deep learning target-detection models were built in the PyTorch framework. Python served as the primary development language, and the integrated development environment was PyCharm Community Edition 2022.1.3.
8

Optic Flow and Numerical Attention

Optic flow displays (Figure 1A; 80° H × 80° V) simulated observers translating at 4 m/s over a dot-ground (depth range: 0.20–5 m; eye height: 0.17 m) consisting of 100 dots (diameter: 0.28°). The simulated heading direction of each display was randomly selected from ±28°, ±21°, ±14°, or ±7° in Experiments 1 and 2, and from 0° and ±21° in Experiment 3. Positive (negative) values indicated headings to the right (left) of the display center (i.e., 0°).
In some displays, three integers (RGB: [0, 0, 200]; 1.76° V × 1.76° H) were presented vertically at the display center (Figures 1B and 1C). The gap between adjacent numbers was 0.44°. In the perceptual and attention conditions of Experiment 1a and the low-load condition of Experiment 1b, the first two numbers were randomly selected from the range [1, 10] and the third from the range [1, 20]. In the high-load condition of Experiment 1b and the load condition of Experiment 3, the first two integers were randomly selected from the range [11, 40] and the third from the range [40, 92].
Stimuli were programmed in MATLAB using Psychophysics Toolbox 3 and presented on a 27-inch Dell monitor (resolution: 2560 H × 1440 V pixels; refresh rate: 60 Hz) driven by an NVIDIA GeForce GTX 1660 Ti graphics card.
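The heading and number sampling described above can be sketched as follows; the experiment and load-condition names follow the text, while the trial structure and function signature are assumptions made for illustration:

```python
import random

# Heading sets from the protocol: ±7°, ±14°, ±21°, ±28° in Experiments 1 & 2;
# 0° and ±21° in Experiment 3. Positive = right of display center.
HEADINGS_EXP12 = [h for m in (7, 14, 21, 28) for h in (m, -m)]
HEADINGS_EXP3 = [0, 21, -21]

def sample_trial(experiment, high_load=False):
    """Draw one trial's heading and the three displayed integers."""
    headings = HEADINGS_EXP3 if experiment == 3 else HEADINGS_EXP12
    heading = random.choice(headings)
    if high_load:
        # High-load condition (Exp. 1b) / load condition (Exp. 3).
        numbers = [random.randint(11, 40), random.randint(11, 40),
                   random.randint(40, 92)]
    else:
        # Perceptual/attention conditions (Exp. 1a) and low-load (Exp. 1b).
        numbers = [random.randint(1, 10), random.randint(1, 10),
                   random.randint(1, 20)]
    return heading, numbers

random.seed(1)
heading, numbers = sample_trial(experiment=1)
```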
9

Machine Learning on Laptop Hardware

All experiments in this study were performed on a laptop with an Intel® Core™ i7-9750H CPU, 16 GB of DDR4 RAM, and an NVIDIA GeForce GTX 1660 Ti GPU with 6 GB of GDDR6 memory. Application code was written in Python using the deep learning library Keras [27] and the machine learning library Scikit-learn [28].
10

High-Performance Computing for Research

We used a desktop computer running Windows 10 Enterprise N, 64-bit, with two Intel(R) Xeon(R) Gold 5122 processors (3.60 GHz) with 8 cores each and 384 GB RAM. The graphics card used for GPU processing was an NVIDIA GeForce GTX 1660 Ti with 6 GB dedicated memory.

About PubCompare

Our mission is to provide scientists with the largest repository of trustworthy protocols and intelligent analytical tools, thereby offering them extensive information to design robust protocols aimed at minimizing the risk of failures.

We believe that the most crucial aspect is to grant scientists access to a wide range of reliable sources and new useful tools that surpass human capabilities.

However, we trust in allowing scientists to determine how to construct their own protocols based on this information, as they are the experts in their field.

