The largest database of trusted experimental protocols

Core i7-12700

Manufactured by Intel

The Intel Core i7-12700 is a high-performance desktop processor featuring 12 cores and 20 threads. It has a base clock speed of 2.1 GHz and a maximum turbo frequency of 4.9 GHz. The processor is built on the Intel 12th Gen 'Alder Lake' architecture and supports DDR4 and DDR5 memory technologies.

Automatically generated - may contain errors

5 protocols using the Core i7-12700

1. Improved YOLOv5s Object Detection

Training and testing for this work were carried out on a computer running the Ubuntu 22.04 LTS operating system with a 64-bit Intel Core i7-12700 CPU @ 4.90 GHz, 32 GB RAM, and an NVIDIA GeForce RTX 3060 GPU, using Python 3.9.12 and torch-1.11.0+cu113. The improved YOLOv5s and the other compared models in this paper were trained with an input image size of 640 × 640 pixels, a batch size of 16, momentum of 0.937, weight decay of 0.0005, IoU threshold of 0.2, hue of 0.015, saturation of 0.7, lightness of 0.4, mosaic of 1.0, scale of 0.9, translate of 0.2, mix-up of 0.15, and 300 training epochs. Weights were randomly initialized, and all models were trained from scratch.
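As a rough illustration only, the augmentation and optimizer settings quoted above could be expressed as a YOLOv5-style hyperparameter file. The key names follow YOLOv5's hyp.scratch.yaml convention, and the file and command names below are assumptions, not the authors' actual configuration.

```python
# Hypothetical sketch: the settings quoted in the protocol written as a
# YOLOv5-style hyperparameter file (key names assumed from hyp.scratch.yaml).
import yaml  # requires PyYAML

hyp = {
    "momentum": 0.937,       # SGD momentum
    "weight_decay": 0.0005,  # optimizer weight decay
    "iou_t": 0.2,            # IoU training threshold
    "hsv_h": 0.015,          # hue augmentation
    "hsv_s": 0.7,            # saturation augmentation
    "hsv_v": 0.4,            # value (lightness) augmentation
    "mosaic": 1.0,           # mosaic augmentation probability
    "scale": 0.9,            # image scale augmentation
    "translate": 0.2,        # image translation augmentation
    "mixup": 0.15,           # mix-up augmentation probability
}

with open("hyp.improved_yolov5s.yaml", "w") as f:
    yaml.safe_dump(hyp, f)

# The remaining settings from the protocol would then go on the training command, e.g.:
#   python train.py --img 640 --batch 16 --epochs 300 \
#       --hyp hyp.improved_yolov5s.yaml --cfg yolov5s.yaml --weights ''
# (--weights '' requests random initialization, i.e. training from scratch)
```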
2. GPU-Powered Neural Network Training

For the network training we used an Intel Core i7-12700 computer with 128 GB of DRAM and an NVIDIA RTX A5000 GPU with 24 GB of GDDR memory.
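As a small, hypothetical sanity check (not part of the protocol), the GPU and its memory could be verified from PyTorch before launching training:

```python
# Illustrative check only: confirm PyTorch sees the training GPU and report its memory.
import torch

assert torch.cuda.is_available(), "CUDA GPU not visible to PyTorch"
props = torch.cuda.get_device_properties(0)
print(f"GPU: {props.name}")                                   # expected: NVIDIA RTX A5000
print(f"GPU memory: {props.total_memory / 1024**3:.1f} GB")   # expected: ~24 GB
```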
3. Object Detection with Optimized Training

The network model is based on the PyTorch deep learning framework. Training is performed with the Adam optimizer for a total of 1000 epochs. The batch size is set to 4, and each sample is learned four times to strengthen the training effect. The learning rate starts at 0.001 and is dynamically scheduled by cosine annealing. The voxel size is set to 0.1, the semantic score threshold τ used in grouping is 0.2, and the nearest-neighbour search radius is 0.3. In the training phase, the HAIS model is used as the pre-trained basis for the backbone network, which is trained further; the backbone parameters are then frozen and the remaining parts of the network are trained. The experiments were run on a computer with an Intel Core i7-12700 CPU, 16 GB RAM, and an NVIDIA RTX 3070 Ti GPU, running the Ubuntu 20.04 operating system.
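A minimal sketch of this optimization schedule, with placeholder modules standing in for the actual HAIS-based network (the module names, dummy data, and loss are illustrative only), might look as follows.

```python
# Sketch of the schedule described above: Adam at lr 1e-3, cosine-annealing decay
# over 1000 epochs, and a frozen pre-trained backbone while the rest is trained.
import torch
from torch import nn, optim

class Net(nn.Module):
    def __init__(self):
        super().__init__()
        self.backbone = nn.Linear(32, 64)  # stands in for the HAIS-initialized backbone
        self.head = nn.Linear(64, 8)       # stands in for the remaining network parts

    def forward(self, x):
        return self.head(self.backbone(x))

model = Net()

# Stage 1 (not shown): further train the backbone starting from HAIS weights.
# Stage 2: freeze the backbone, then train only the remaining parameters.
for p in model.backbone.parameters():
    p.requires_grad = False

optimizer = optim.Adam((p for p in model.parameters() if p.requires_grad), lr=1e-3)
scheduler = optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=1000)

for epoch in range(1000):
    x = torch.randn(4, 32)         # batch size 4, dummy inputs
    loss = model(x).pow(2).mean()  # placeholder loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    scheduler.step()               # cosine-annealed learning rate
```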
4. Lightweight YOLOv5-LiNet for Real-Time Object Detection

This experiment uses Python 3.9.13 and the torch-1.11.0+cu113 deep learning framework for model training and testing on a computer with a 64-bit Intel Core i7-12700 CPU @ 4.90 GHz, 32 GB RAM, an NVIDIA GeForce RTX 3060 GPU with 12 GB (12,045 MiB) of graphics memory, and the Ubuntu 22.04 LTS operating system. Table 1 provides the details of all the trained models. Following the general procedure for network training on the YOLOv5 platform, the proposed lightweight YOLOv5-LiNet and the other YOLO-related models take an input of 512 × 512 pixels, a batch size of 16, momentum of 0.937, weight decay of 0.0005, IoU threshold of 0.2, hue of 0.015, saturation of 0.7, lightness of 0.4, mosaic of 1.0, and 300 training epochs, while Mask R-CNN received a 512 × 512 pixel input with default parameters on the MMDetection platform. Weights were randomly initialized to train all models from scratch.
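For illustration, hypothetical launch commands for the two frameworks could look like the sketch below. Only the input size, batch size, and epoch count come from the protocol; the config paths, dataset file name, and model config are assumptions.

```python
# Hypothetical launch commands for the YOLOv5 and MMDetection runs described above.
yolov5_cmd = (
    "python train.py --img 512 --batch 16 --epochs 300 "
    "--cfg models/yolov5-linet.yaml --data data/dataset.yaml --weights ''"
)  # --weights '' requests random initialization, i.e. training from scratch

maskrcnn_cmd = (
    "python tools/train.py configs/mask_rcnn/mask_rcnn_r50_fpn_1x_coco.py"
)  # a stock MMDetection Mask R-CNN config, run with default parameters

print(yolov5_cmd)
print(maskrcnn_cmd)
```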
5. Evaluating Image Classification Techniques

The Pearson correlation coefficient was used to determine the relationship between two gradient results and across all gradient results. The SciPy package in Python was used to calculate the Pearson correlation coefficient, where the parameters p and r reflect the significance level and the correlation, respectively. Differences were considered significant when p < 0.05, and the sign of r indicated whether the correlation was positive or negative. Difference analysis was used to examine the differences between gradient results and to assess the effect of the data augmentation methods.
All experiments were run on an Intel Core i7-12700 (2.10 GHz) with 64 GB RAM and an NVIDIA RTX 3060 GPU with 12 GB of memory. Python 3.7.1 was used to implement all program code. Functions from the scikit-learn package were used to implement the KNN and SVM algorithms. The MobileViT-xs model was implemented with the deep learning framework PyTorch 1.10, and CUDA Toolkit 11.3 was used to accelerate processing. More code details were posted at https://github.com/mepleleo/DA_peanut (accessed on 1 April 2022).
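A brief sketch of how these tools fit together, using SciPy's pearsonr and scikit-learn's KNN and SVM classifiers on toy data (the arrays and labels are illustrative only, not the study's data):

```python
# Illustrative use of the statistical and classification tools named in the protocol.
import numpy as np
from scipy.stats import pearsonr
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

# Pearson correlation between two gradient result vectors (dummy values)
a = np.array([0.91, 0.88, 0.85, 0.80, 0.76])
b = np.array([0.90, 0.87, 0.83, 0.81, 0.75])
r, p = pearsonr(a, b)
print(f"r = {r:.3f}, significant at p < 0.05: {p < 0.05}")

# KNN and SVM baselines on toy features/labels (dummy data)
X = np.random.rand(40, 8)
y = np.random.randint(0, 2, size=40)
for clf in (KNeighborsClassifier(), SVC()):
    clf.fit(X, y)
    print(type(clf).__name__, "training accuracy:", clf.score(X, y))
```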

About PubCompare

Our mission is to provide scientists with the largest repository of trustworthy protocols and intelligent analytical tools, thereby offering them extensive information to design robust protocols aimed at minimizing the risk of failures.

We believe that the most crucial aspect is to grant scientists access to a wide range of reliable sources and new useful tools that surpass human capabilities.

However, we trust in allowing scientists to determine how to construct their own protocols based on this information, as they are the experts in their field.
