We grouped causes of death with a frequency of <20 as “other.” AIDS was defined according to the 1993 Centers for Disease Control and Prevention classification [16] and included CoDe 01 (unspecified AIDS), 01.1 (AIDS infection), and 01.2 (AIDS malignancy). We grouped deaths as AIDS, non-AIDS infection, liver-related (hepatitis and liver failure, CoDe 03 and 14), non-AIDS nonhepatitis malignancy (CoDe 04), cardiovascular disease (myocardial infarction/ischemic heart disease, CoDe 08, and stroke, CoDe 09), heart/vascular disease (heart failure/unspecified, and other heart disease, CoDe 24), respiratory disease (chronic obstructive lung disease, CoDe 13 and 25), renal failure (CoDe 15), disease of the central nervous system (CoDe 23), unnatural deaths (accident/violent, suicide, and overdose, CoDe 16, 17, and 19), and other (CoDe 90).
CODE protocol
Most cited protocols related to «CODE protocol»
To test local and meridional anisotropy, a finely meshed grid in the visual field was projected through the models. Squares of the grid were oriented so that one side was orthogonal to eccentricity while the other side was orthogonal to polar direction. In principle, anisotropies can be derived analytically [15]; however, the computational approach implemented for this manuscript allows flexible and comparable testing of model variations. Since we provide the code, readers can easily implement alternative model functions and test them using the methods provided.
Local anisotropy at a given position in the projection was calculated as the length of the square's side parallel to isoeccentricity lines (Me) divided by the length of the side parallel to isopolar lines (Mp). Meridional anisotropy is calculated from the surface areas of a set of squares with the same eccentricity but varying polar position: meridional anisotropy at a given position in the projection was calculated as the surface of a square at that position (Ma(P,E′)) divided by the surface of a square at the horizontal meridian in V1 (Ma(0,E′)). Predicted areal magnification M (
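The numerical procedure above can be sketched as follows, using a simple monopole model w = k·log(z + a) as a stand-in for the paper's actual model functions (k, a, and all function names are illustrative assumptions, not fitted values). Because this stand-in model is conformal, local anisotropy comes out near 1, while meridional anisotropy still varies with polar angle:

```python
import cmath

def cortical_map(z, k=15.0, a=0.7):
    """Illustrative monopole model of the V1 map, w = k*log(z + a).
    A stand-in for the paper's model functions; k and a are assumed values."""
    return k * cmath.log(z + a)

def local_anisotropy(E, P, h=1e-4):
    """Me/Mp: cortical length of a small step along an isoeccentricity line
    divided by an equally long step along an isopolar line (visual length h)."""
    z = E * cmath.exp(1j * P)
    Me = abs(cortical_map(E * cmath.exp(1j * (P + h / E))) - cortical_map(z)) / h
    Mp = abs(cortical_map((E + h) * cmath.exp(1j * P)) - cortical_map(z)) / h
    return Me / Mp

def areal_magnification(E, P, h=1e-4):
    """Cortical area of the image of a small grid square over its visual area."""
    z = E * cmath.exp(1j * P)
    u = cortical_map((E + h) * cmath.exp(1j * P)) - cortical_map(z)       # isopolar edge
    v = cortical_map(E * cmath.exp(1j * (P + h / E))) - cortical_map(z)   # isoeccentric edge
    return abs(u.real * v.imag - u.imag * v.real) / (h * h)

def meridional_anisotropy(E, P):
    """Ma(P, E) relative to the horizontal meridian, Ma(0, E)."""
    return areal_magnification(E, P) / areal_magnification(E, 0.0)
```

Any alternative model function can be dropped in for `cortical_map` and probed with the same two functions, which is the flexibility the computational approach is meant to provide.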
Data within studies were often reported as odds ratios (ORs) quantifying the likelihood of mortality across distinct levels of social relationships. Because OR values cannot be meaningfully aggregated directly, all effect sizes reported within studies were transformed to the natural log OR (lnOR) for analyses and then transformed back to OR for interpretation. When effect size data were reported in any metric other than OR or lnOR, we transformed those values using statistical software programs and macros (e.g., Comprehensive Meta-Analysis [24]). In some cases when direct statistical transformation proved impossible, we calculated the corresponding effect sizes from frequency data in matrices of mortality status by social relationship status. When frequency data were not reported, we recovered the cell probabilities from the reported ratio and marginal probabilities. When survival analyses (i.e., hazard ratios) were reported, we calculated the effect size from the associated level of statistical significance, often derived from 95% confidence intervals (CIs). Across all studies we assigned OR values less than 1.00 to data indicative of increased mortality and OR values greater than 1.00 to data indicative of decreased mortality for individuals with relatively higher levels of social relationships.
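The transformations described above can be sketched as follows; recovering a cell count from a reported OR and its margins reduces to solving a quadratic. Function names and the table layout are illustrative assumptions, not the authors' code:

```python
import math

def lnor_and_se(a, b, c, d):
    """Natural log odds ratio and its standard error from a 2x2 table
    (a = deaths, high support; b = survivors, high; c = deaths, low; d = survivors, low)."""
    return math.log((a * d) / (b * c)), math.sqrt(1/a + 1/b + 1/c + 1/d)

def cell_from_or_and_margins(psi, n_high, n_deaths, n_total):
    """Recover the high-support deaths cell a from a reported OR (psi) and the
    marginal totals, by solving psi = a*d / (b*c) with b = n_high - a,
    c = n_deaths - a, d = n_total - n_high - n_deaths + a."""
    if abs(psi - 1.0) < 1e-12:
        return n_high * n_deaths / n_total
    # (psi - 1) a^2 - [psi (n_high + n_deaths) + (N - n_high - n_deaths)] a + psi n_high n_deaths = 0
    A = psi - 1.0
    B = -(psi * (n_high + n_deaths) + (n_total - n_high - n_deaths))
    C = psi * n_high * n_deaths
    disc = math.sqrt(B * B - 4 * A * C)
    for a in ((-B - disc) / (2 * A), (-B + disc) / (2 * A)):
        if 0.0 < a < min(n_high, n_deaths):   # all four cells must stay positive
            return a
    raise ValueError("no admissible cell count for these margins")
```

Only one of the two quadratic roots yields four non-negative cells, which is why the loop filters on the admissible range.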
When multiple effect sizes were reported within a study at the same point in time (e.g., across different measures of social relationships), we averaged the several values (weighted by standard error) to avoid violating the assumption of independent samples. In such cases, the aggregate standard error value for the lnOR was estimated on the basis of the total frequency data, without adjustment for possible correlation among the averaged values. Although this method was imprecise, the manuscripts included in the meta-analysis did not report the information necessary to make the statistical adjustments, and we decided not to impute values given the wide range possible. In analyzing the data we used the shifting units of analysis approach [25], which minimizes the threat of nonindependence in the data while allowing more detailed follow-up analyses to be conducted (i.e., examination of effect size heterogeneity).
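"Weighted by standard error" is commonly implemented as inverse-variance weighting; here is a minimal sketch under that assumption, with the aggregate SE computed as if the averaged estimates were independent (the same simplification the text acknowledges):

```python
import math

def pool_within_study(lnors, ses):
    """Inverse-variance weighted mean of within-study lnORs.
    The returned SE assumes the estimates are independent; it ignores
    their correlation, as the original analysis did by necessity."""
    weights = [1.0 / se ** 2 for se in ses]
    total = sum(weights)
    mean = sum(w * x for w, x in zip(weights, lnors)) / total
    return mean, math.sqrt(1.0 / total)
```

A more precise estimate would pull toward the smaller-SE value, which is the point of weighting rather than taking a plain average.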
When multiple reports contained data from the same participants (publications of the same database), we selected the report containing the whole sample and eliminated reports of subsamples. When multiple reports contained the same whole sample, we selected the one with the longest follow-up duration. When multiple reports with the same whole sample were of the same duration, we selected the one reporting the greatest number of measures of social relationships.
In cases where multiple effect sizes were reported across different levels of social relationships (e.g., high versus medium, medium versus low), we extracted the value with the greatest contrast (i.e., high versus low). When a study contained multiple effect sizes across time, we extracted the data from the longest follow-up period. If a study used statistical controls in calculating an effect size, we extracted the data from the model utilizing the fewest statistical controls, so as to remain as consistent as possible across studies (and we recorded the type and number of covariates used within each study to run post hoc comparative analyses). We coded the research design used rather than estimating the risk of individual study bias. The coding protocol is available from the authors.
The majority of information obtained from the studies was extracted verbatim from the reports. As a result, inter-rater agreement was quite high for categorical variables (mean Cohen's kappa = 0.73, SD = 0.13) and for continuous variables (mean intraclass correlation [26] = 0.80, SD = 0.14). Discrepancies across coding pairs were resolved through further scrutiny of the manuscript until consensus was obtained.
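Cohen's kappa, the agreement statistic reported above, is observed agreement corrected for the agreement expected by chance; a minimal sketch (the helper name is hypothetical):

```python
from collections import Counter

def cohens_kappa(rater1, rater2):
    """Cohen's kappa for two raters coding the same items.
    kappa = (p_observed - p_expected) / (1 - p_expected)."""
    n = len(rater1)
    observed = sum(x == y for x, y in zip(rater1, rater2)) / n
    counts1, counts2 = Counter(rater1), Counter(rater2)
    # chance agreement from the raters' marginal category frequencies
    expected = sum(counts1[k] * counts2[k] for k in counts1) / (n * n)
    return (observed - expected) / (1.0 - expected)
```

Perfect agreement yields 1.0; agreement no better than chance yields 0.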
Aggregate effect sizes were calculated using random effects models following confirmation of heterogeneity. A random effects approach produces results that generalize beyond the sample of studies actually reviewed [27]. The assumptions made in this meta-analysis clearly warrant this method: the belief that certain variables serve as moderators of the observed association between social relationships and mortality implies that the studies reviewed will estimate different population effect sizes. Random effects models take such between-studies variation into account, whereas fixed effects models do not [28]. In each analysis conducted, we examined the remaining variance to confirm that random effects models were appropriate.
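The text does not name a specific between-study variance estimator; the DerSimonian-Laird method is a standard choice for random effects pooling and illustrates the logic of absorbing between-studies variation into the weights:

```python
import math

def dersimonian_laird(effects, variances):
    """Random-effects pooled estimate via the DerSimonian-Laird estimator
    (a standard choice, assumed here rather than taken from the source).
    Returns (pooled mean, SE, between-study variance tau^2)."""
    w = [1.0 / v for v in variances]
    sw = sum(w)
    fixed = sum(wi * y for wi, y in zip(w, effects)) / sw
    # Cochran's Q measures the heterogeneity to be confirmed before pooling
    Q = sum(wi * (y - fixed) ** 2 for wi, y in zip(w, effects))
    df = len(effects) - 1
    C = sw - sum(wi * wi for wi in w) / sw
    tau2 = max(0.0, (Q - df) / C)            # between-study variance
    w_star = [1.0 / (v + tau2) for v in variances]
    mu = sum(wi * y for wi, y in zip(w_star, effects)) / sum(w_star)
    return mu, math.sqrt(1.0 / sum(w_star)), tau2
```

When Q does not exceed its degrees of freedom, tau² truncates to zero and the result collapses to the fixed effects estimate, which is the "remaining variance" check described above.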
MB, TC and four other health science students undertook the coding. A half-day training workshop was held with all coders, who were then given access to the dataset for several days to become familiar with it. Once coders felt comfortable, reliability testing was conducted: each coder had to achieve 90% concurrence with model answers on a test dataset of 115 images before coding commenced. Coders were supervised by MS, MB and TC to ensure consistency. Uncertain codes were noted as such and checked by MB or TC.
All foods were classified as either recommended (core) or not recommended (non-core) to be marketed to children based on the WHO Regional Office for Europe Nutrient Profiling Model [9], with some modifications (e.g. a ‘fast food’ category was added, which included all commercially prepared food products sold at quick service restaurants). All fast food was classified as not recommended to be marketed to children as it is typically high in saturated fat and sodium and low in fiber [25]. Marketing in convenience stores and supermarkets was too extensive to code individually and was therefore excluded from this analysis. Codes were only assigned to an image where 50% or more of a brand name or logo could be clearly seen by the coder. Individual images could be coded for multiple marketing media and product categories.
Further processing of the coded data included determining the number of marketing exposures for each unique exposure code (defined as the combination of setting, medium and product type for that code). A marketing exposure was defined as starting at the first instance of an image with a particular setting/medium/product code; subsequent images with the same code were counted as part of the same exposure. An exposure was considered to have ended when 30 s had elapsed since the last recorded image with that setting/medium/product code (determined using the image timestamps). Any code for that same combination occurring after this 30 s limit was counted as the start of a new exposure.
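The 30-second exposure rule can be sketched as a single pass over timestamps grouped by setting/medium/product code; treating a gap of exactly 30 s as the same exposure is an assumption here, since the text leaves the boundary case open:

```python
from collections import defaultdict

def count_exposures(timestamps, gap_s=30.0):
    """Count exposures among image timestamps (seconds) that share one
    setting/medium/product code: a gap of more than gap_s since the previous
    coded image starts a new exposure (boundary case assumed inclusive)."""
    exposures, last = 0, None
    for t in sorted(timestamps):
        if last is None or t - last > gap_s:
            exposures += 1
        last = t
    return exposures

def exposures_by_code(coded_images, gap_s=30.0):
    """coded_images: iterable of (timestamp, setting, medium, product) tuples.
    Returns exposure counts keyed by the unique exposure code."""
    grouped = defaultdict(list)
    for t, setting, medium, product in coded_images:
        grouped[(setting, medium, product)].append(t)
    return {code: count_exposures(ts, gap_s) for code, ts in grouped.items()}
```

Summing the resulting counts per child, split by core/non-core product and by setting or medium, gives the aggregate totals described below.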
The number of exposures was summed for each unique exposure code by child; aggregate counts were determined for each child to estimate total exposures to core and non-core foods, and exposure by setting, medium, and product type. Cleaning and aggregation of coded data was completed in R version 3.2.3 (R Foundation for Statistical Computing, Vienna).
Most recent protocols related to «CODE protocol»
More specifically, the risk‐of‐bias items address the following methodological issues: random assignment to both the interrogation technique and guilt conditions, treatment of violations to the randomization process (e.g., participants assigned to the guilty condition who refused to cheat), level of deception employed in the study, treatment of participants suspicious of the true purpose of the study, and whether mock interrogators were blind to the guilt status of mock suspects. To address the confession outcome, coders will document any missingness in confession outcomes, including selective reporting. See Supporting Information: Appendix.
We will provide a table of these items for each coded study in the final report. Furthermore, we will investigate the potential for bias by examining the relationship between each bias item and effect sizes in a moderator analysis. Potential sources of bias and the associated moderator analyses will inform our interpretation of the findings.
Coders will be trained by the lead reviewers in three steps: (1) coders will verbally walk through the code sheet for discussion and clarification, (2) the lead reviewers will demonstrate how to code an article in its entirety, and (3) the coders will practice on a small subset of articles for review and feedback from the lead reviewers. This iterative process will continue throughout the coding process, with regular meetings to discuss coding issues.
Coders are likely to be volunteers from Dr. Redlich's research team, some of whom have prior experience with meta‐analyses. Generally, coding will include four hierarchical data levels: a study level, an experimental condition level, an outcome level, and an effect size level. Using LibreOffice, we will create a database that allows for the one‐to‐many hierarchical nature of our coding protocol (e.g., one study could include several experimental conditions, measure more than one outcome, and have several effect sizes).
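The one-to-many hierarchy described above could be mirrored in code as nested records; the field names below are illustrative assumptions, not the protocol's actual code sheet:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class EffectSize:
    lor: float            # logged odds ratio
    variance: float

@dataclass
class Outcome:
    construct: str        # e.g. "confession"
    grouping: str         # "all", "innocent", "guilty", or other
    effect_sizes: List[EffectSize] = field(default_factory=list)

@dataclass
class Condition:
    technique: str        # e.g. "accusatorial"
    sample_size: int
    outcomes: List[Outcome] = field(default_factory=list)

@dataclass
class Study:
    study_id: str
    pub_year: int
    conditions: List[Condition] = field(default_factory=list)

def all_effect_sizes(study):
    """Flatten the one-to-many hierarchy down to its effect-size records."""
    return [es for cond in study.conditions
            for out in cond.outcomes
            for es in out.effect_sizes]
```

One study object can hold several conditions, each with several outcomes and effect sizes, which is exactly the one-to-many structure the database must allow.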
Study level variables will include static information (e.g., publication type, publication year, geographic location). As such, there will be one record per study at this level of coding.
Experimental condition level coding will be conducted for each relevant group of the research design. Thus, there will be one record for each eligible experimental condition within a study. For example, if a study included a factor with three levels of interrogation techniques (i.e., accusatorial vs. information‐gathering vs. direct questioning), three condition coding sheets will be completed to capture each group. Information specific to each condition will be coded at this level, such as sample size and interrogation method.
The outcome level will code information specific to each eligible outcome measure. Thus, there will be one record per outcome. In addition to indicating the outcome construct, coded items will capture whether the variable includes all participants, innocent participants, guilty participants, or some other grouping.
The effect size level will code all necessary statistical information to calculate a logged odds ratio (LOR) and its variance for each outcome. As such, there will be one record per coded effect size. Coders will be instructed to identify the most detailed numerical data available when coding for effect size information (see Supporting Information: Appendix).
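The LOR and its variance for a 2x2 confession table can be sketched as follows; the 0.5 continuity correction for zero cells is an assumed convention, not something the protocol specifies:

```python
import math

def logged_odds_ratio(conf_a, noconf_a, conf_b, noconf_b):
    """LOR and variance comparing confession counts between two conditions
    (a vs. b). Adds 0.5 to every cell when any cell is zero
    (Haldane-Anscombe correction -- an assumption, not from the protocol)."""
    cells = [conf_a, noconf_a, conf_b, noconf_b]
    if min(cells) == 0:
        cells = [c + 0.5 for c in cells]
    a, b, c, d = cells
    lor = math.log((a * d) / (b * c))
    variance = 1/a + 1/b + 1/c + 1/d
    return lor, variance
```

The variance feeds directly into the inverse-variance weights used at the pooling stage.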