The largest database of trusted experimental protocols

Xeon E5-2630

Manufactured by Intel

The Xeon E5-2630 is a server-grade processor developed by Intel. It features six cores, twelve threads, and a base clock speed of 2.3 GHz, and is designed for high-performance computing applications and data centers.


3 protocols using the Xeon E5-2630

1. Genetic Epistasis Detection in Breast Cancer

In this study, a breast cancer-related GWAS dataset comprising 528,173 SNPs was used to detect genetic epistasis. SNPs with a missing genotype rate < 0.1, a minor allele frequency (MAF) > 0.05, a Hardy-Weinberg equilibrium (HWE) p > 0.001, and a pairwise r² < 0.8 were retained, leaving 498,847 SNPs for epistasis detection (available upon request). Eight of the most widely cited genetic epistasis detection software packages, namely pMDR [20], GBOOST [21], PLINK [22], FastEpistasis [23], SNPRuler [24], AntEpiSeeker [25], Ranger [26], and BEAM3 [27], were used to detect SNP-SNP interactions. All eight packages were run with their default configurations on a machine with an Intel Xeon E5-2630 CPU and an NVIDIA Quadro K2000 GPU.
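To make the quality-control thresholds above concrete, the sketch below drives PLINK (one of the eight packages used) from Python to apply the same filters and run its pairwise epistasis scan. The input file name, output prefixes, and the LD-pruning window and step sizes are assumptions; only the thresholds themselves come from the protocol, and the actual invocations the authors used are not given.

```python
import subprocess

# Hypothetical PLINK binary fileset; the protocol does not name its input files.
BFILE = "breast_cancer_gwas"

# Step 1: apply the QC thresholds quoted in the protocol and compute an
# LD-pruned SNP list (missing rate < 0.1, MAF > 0.05, HWE p > 0.001, r² < 0.8).
# The 50-SNP window and 5-SNP step for --indep-pairwise are assumed defaults.
subprocess.run([
    "plink", "--bfile", BFILE,
    "--geno", "0.1",                       # drop SNPs with >10% missing genotypes
    "--maf", "0.05",                       # keep SNPs with MAF > 0.05
    "--hwe", "0.001",                      # drop SNPs failing HWE at p < 0.001
    "--indep-pairwise", "50", "5", "0.8",  # prune SNP pairs with r² >= 0.8
    "--out", "qc",
], check=True)

# Step 2: extract the SNPs that survived filtering and pruning.
subprocess.run([
    "plink", "--bfile", BFILE,
    "--extract", "qc.prune.in",
    "--make-bed", "--out", "qc_pruned",
], check=True)

# Step 3: pairwise SNP-SNP interaction scan (PLINK's epistasis test; the
# other seven packages would be run separately on the same filtered data).
subprocess.run([
    "plink", "--bfile", "qc_pruned",
    "--epistasis",
    "--out", "epistasis_results",
], check=True)
```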
2. Convolutional Neural Network Training Parameters

Training was performed on a computer with two Intel Xeon E5-2630 CPUs, four NVIDIA GTX 1080 Ti graphics processing units (GPUs), and 128 GB of DDR4 RAM. The project was implemented in the Python programming language using the Google TensorFlow library. We used the rectified linear unit (ReLU) as the activation function, the Adam optimizer, and cross-entropy as the loss function. A grid search was used to optimize the learning rate, the dropout rate, and the kernel sizes of the convolution and pooling layers: learning rates were varied over (0.1, 0.001, 0.0001), dropout over (0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6) after each convolution layer and fully connected layer, convolution kernel sizes over (1 × 1 × 1, 2 × 2 × 2, 3 × 3 × 3, 5 × 5 × 5), and pooling kernel sizes over (1 × 1 × 1, 2 × 2 × 2). We ultimately set the learning rate to 0.001 and the dropout after the fully connected layer to 0.2, and employed a 3 × 3 × 3 convolution kernel at each layer with a 2 × 2 × 2 pooling kernel after each convolutional layer. The Adam parameters were β1 = 0.001, β2 = 0.999, and ε = 10⁻⁸.
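As a minimal sketch of the selected configuration, the Keras-style TensorFlow model below uses 3 × 3 × 3 convolution kernels, 2 × 2 × 2 pooling after each convolutional layer, a dropout of 0.2 after the fully connected layer, and the reported Adam settings. The input shape, filter counts, layer depth, and class count are hypothetical, since the protocol does not describe the network topology.

```python
import tensorflow as tf

# Hypothetical input volume (depth, height, width, channels) and class count;
# the protocol does not specify the network topology.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(32, 32, 32, 1)),
    # 3 x 3 x 3 convolution kernels with ReLU, each followed by 2 x 2 x 2
    # pooling, as selected by the grid search.
    tf.keras.layers.Conv3D(16, kernel_size=(3, 3, 3), activation="relu",
                           padding="same"),
    tf.keras.layers.MaxPooling3D(pool_size=(2, 2, 2)),
    tf.keras.layers.Conv3D(32, kernel_size=(3, 3, 3), activation="relu",
                           padding="same"),
    tf.keras.layers.MaxPooling3D(pool_size=(2, 2, 2)),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dropout(0.2),   # dropout of 0.2 after the fully connected layer
    tf.keras.layers.Dense(2, activation="softmax"),
])

# Adam with the hyperparameters reported in the protocol
# (note: beta_1 = 0.001 as stated in the text; Adam's usual default is 0.9).
optimizer = tf.keras.optimizers.Adam(
    learning_rate=0.001, beta_1=0.001, beta_2=0.999, epsilon=1e-8)

model.compile(optimizer=optimizer,
              loss="categorical_crossentropy",   # cross-entropy loss
              metrics=["accuracy"])
```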
3. Multimodal Neural Network Architecture

Each of the three neural networks was trained using gradient descent and backpropagation. FrameNet was trained with a cross-entropy loss function, selected for its excellent performance in classification tasks [42], and a stochastic gradient descent (SGD) optimizer, chosen for its good generalization [43]. FlowNet also used a cross-entropy loss function, but with an Adam [44] optimizer for faster convergence. For VaporNet, a mean square error (MSE) loss function was implemented and likewise optimized using Adam. The detailed hyperparameters used for training are listed in Table 1.
FrameNet was trained first, separately from the other two networks, as it is used to generate the training and offline testing data for them. The recurrent and classification layers of FlowNet and VaporNet were then trained on feature data generated by FrameNet.
Training of the models was carried out on a workstation with a 2.3 GHz Intel® Xeon® E5-2630 CPU, 128 GB of RAM, and an NVIDIA® Tesla® K40m GPU (Kepler™ architecture) with 12 GB of accelerator memory. All networks were implemented using the PyTorch framework [45]. A FrameNet model takes approximately 4 h to train, while VaporNet and FlowNet models train in under one hour.
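The loss and optimizer pairings above can be sketched in PyTorch as follows. The modules below are stand-ins for the actual FrameNet, FlowNet, and VaporNet architectures, and the learning rates are placeholders for the values in Table 1.

```python
import torch
import torch.nn as nn

# Stand-in modules; the real architectures are described in the paper.
frame_net = nn.Linear(64, 10)   # stand-in for FrameNet (classification head)
flow_net = nn.Linear(64, 10)    # stand-in for FlowNet (classification head)
vapor_net = nn.Linear(64, 1)    # stand-in for VaporNet (regression head)

# FrameNet: cross-entropy loss [42] with an SGD optimizer [43].
frame_loss = nn.CrossEntropyLoss()
frame_opt = torch.optim.SGD(frame_net.parameters(), lr=0.01)

# FlowNet: cross-entropy loss with Adam [44] for faster convergence.
flow_loss = nn.CrossEntropyLoss()
flow_opt = torch.optim.Adam(flow_net.parameters(), lr=0.001)

# VaporNet: mean square error (MSE) loss, also optimized with Adam.
vapor_loss = nn.MSELoss()
vapor_opt = torch.optim.Adam(vapor_net.parameters(), lr=0.001)

# One generic gradient-descent/backpropagation step, shown for FrameNet.
def train_step(model, loss_fn, optimizer, x, y):
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)   # forward pass and loss
    loss.backward()               # backpropagation
    optimizer.step()              # gradient-descent update
    return loss.item()

x = torch.randn(8, 64)            # dummy batch of 8 feature vectors
y = torch.randint(0, 10, (8,))    # dummy class labels
train_step(frame_net, frame_loss, frame_opt, x, y)
```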

About PubCompare

Our mission is to provide scientists with the largest repository of trustworthy protocols and intelligent analytical tools, giving them the information they need to design robust protocols and minimize the risk of failure.

We believe the most important thing is to give scientists access to a wide range of reliable sources, along with new tools that surpass human capabilities.

At the same time, we trust scientists to decide how to construct their own protocols from this information, as they are the experts in their field.
