
Tooth Attrition

Tooth Attrition: A gradual, irreversible loss of tooth substance caused by mechanical forces such as chewing or grinding.
It can lead to enamel wear and exposure of the underlying dentin, compromising tooth structure and function.
Researchers can optimize their attrition studies using PubCompare.ai, which helps identify the most effective protocols from literature, preprints, and patents.
This AI-driven platform enhances reproducibility and accuracy, enabling researchers to discover the most effective methods and products for their tooth attrition investigations.

Most cited protocols related to «Tooth Attrition»

The Cochrane RoB Tool was the starting point for developing an RoB tool for experimental animal studies. The Cochrane RoB Tool assesses the risk of bias of RCTs and addresses the following types of bias: selection bias, performance bias, attrition bias, detection bias and reporting bias [9 (link)]. The items in the Cochrane RoB Tool that were directly applicable to animal experiments were adopted (Table 2: items 1, 3, 8, 9 and 10).
To investigate which items in the tool might require adaptation, the differences between randomized clinical trials and animal intervention studies were set out (Table 1). Then we checked whether aspects of animal studies that differed from RCTs could cause bias in ways that had not yet been taken into account in the Cochrane RoB tool. Finally, the quality assessments of recent systematic reviews of experimental animal studies were examined to confirm that all aspects of internal validity had been taken into consideration in SYRCLE’s RoB tool.
To enhance transparency and applicability, we formulated signaling questions (as used in the QUADAS tool, a tool to assess the quality of diagnostic accuracy studies [15 (link),16 (link)]) to facilitate judgment. To obtain a preliminary idea of inter-observer agreement for each item in the RoB tool, kappa statistics were determined on the basis of one systematic review that included 32 papers.
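Cohen's kappa, the agreement statistic mentioned above, can be computed directly from two reviewers' item-level judgments. The sketch below is a generic illustration only: the ratings are invented and the use of scikit-learn is an assumption, not part of SYRCLE's published procedure.

```python
# Hypothetical illustration: Cohen's kappa for two reviewers' risk-of-bias
# judgments ("low", "high", "unclear") on the same set of papers.
# The ratings below are invented; they are not data from SYRCLE's review.
from sklearn.metrics import cohen_kappa_score

reviewer_1 = ["low", "high", "unclear", "low", "low",     "high", "unclear", "low"]
reviewer_2 = ["low", "high", "low",     "low", "unclear", "high", "unclear", "low"]

kappa = cohen_kappa_score(reviewer_1, reviewer_2)
print(f"Cohen's kappa: {kappa:.2f}")  # 1.0 = perfect agreement, 0 = chance-level agreement
```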
Publication 2014
Acclimatization Animals Diagnosis Tooth Attrition
Missing data are the rule rather than the exception in quantitative research. Enders (2003 (link)) stated that a missing rate of 15% to 20% was common in educational and psychological studies. Peng et al. (2006) surveyed quantitative studies published from 1998 to 2004 in 11 education and psychology journals. They found that 36% of studies had no missing data, 48% had missing data, and for about 16% this could not be determined. Among the studies that showed evidence of missing data, 97% used the listwise deletion (LD) or the pairwise deletion (PD) method to deal with missing data. These two methods are ad hoc and notorious for producing biased and/or inefficient estimates in most situations (Rubin 1987; Schafer 1997). The APA Task Force on Statistical Inference explicitly warned against their use (Wilkinson and the Task Force on Statistical Inference 1999 (link), p. 598). Newer, principled methods, such as the multiple imputation (MI) method, the full information maximum likelihood (FIML) method, and the expectation-maximization (EM) method, take into consideration the conditions under which missing data occurred and provide better parameter estimates than either LD or PD. Principled missing data methods do not replace a missing value directly; they combine the available information in the observed data with statistical assumptions in order to estimate the population parameters and/or the missing data mechanism.
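To make the contrast concrete, the sketch below applies listwise and pairwise deletion to a toy pandas DataFrame. The data and variable names are invented for illustration and are not drawn from the studies surveyed above; note that pandas' DataFrame.corr() uses pairwise-complete observations by default.

```python
# Illustrative only: toy data with missing values (NaN), not from any cited study.
import numpy as np
import pandas as pd

df = pd.DataFrame({
    "math":    [52, 61, np.nan, 75, 68, np.nan, 80, 57],
    "reading": [48, np.nan, 70, 72, 65, 59, 78, np.nan],
    "anxiety": [30, 25, 28, np.nan, 22, 35, 18, 27],
})

# Listwise deletion (LD): analyze only the complete cases.
complete_cases = df.dropna()
print("Complete cases retained:", len(complete_cases), "of", len(df))
print(complete_cases.corr())

# Pairwise deletion (PD): each correlation uses every row observed for that pair
# of variables, which is pandas' default behaviour for .corr().
print(df.corr())
```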
A review of the quantitative studies published in the Journal of Educational Psychology (JEP) between 2009 and 2010 revealed that, of 68 articles that met our criteria for quantitative research, 46 (67.6%) explicitly acknowledged missing data or were suspected to have some because of discrepancies between sample sizes and degrees of freedom. Eleven (16.2%) had no missing data, and the remaining 11 did not provide sufficient information to determine whether missing data occurred. Of the 46 articles with missing data, 17 (37%) did not apply any method to deal with the missing data, 13 (28.3%) used LD or PD, 12 (26.1%) used FIML, four (8.7%) used EM, three (6.5%) used MI, and one (2.2%) used both the EM and the LD methods. Of the 29 articles that dealt with missing data, only two explained their rationale for using FIML and LD, respectively. One article misinterpreted FIML as an imputation method. Another was suspected to have used either LD or an imputation method to deal with attrition in a PISA data set (OECD 2009; Williams and Williams 2010 (link)).
Compared with the missing data treatments in articles published in JEP between 1998 and 2004 (Table 3.1 in Peng et al. 2006), there has been improvement: decreased use of LD (from 80.7% down to 21.7%) and PD (from 17.3% down to 6.5%), and increased use of FIML (from 0% up to 26.1%), EM (from 1.0% up to 8.7%), and MI (from 0% up to 6.5%). Yet several research practices from a decade ago still prevailed, namely, not explicitly acknowledging the presence of missing data, not describing the particular approach used in dealing with missing data, and not testing assumptions associated with missing data methods. These findings suggest that researchers in educational psychology have not fully embraced principled missing data methods in research.
Although treating missing data is usually not the focus of a substantive study, failing to do so properly causes serious problems. First, missing data can introduce bias into parameter estimates and weaken the generalizability of the results (Rubin 1987; Schafer 1997). Second, ignoring cases with missing data leads to a loss of information, which in turn decreases statistical power and increases standard errors (Peng et al. 2006). Finally, most statistical procedures are designed for complete data (Schafer and Graham 2002 (link)). Before a data set with missing values can be analyzed by these procedures, it needs to be edited in some way into a “complete” data set. Failing to edit the data properly can make the data unsuitable for a statistical procedure and leave the statistical analyses vulnerable to violations of assumptions.
Because of the prevalence of the missing data problem and the threats it poses to statistical inferences, this paper promotes three principled methods, namely MI, FIML, and EM, by illustrating them with an empirical data set and discussing issues surrounding their application. Each method is demonstrated using SAS 9.3. Results are contrasted with those obtained from the complete data set and from the LD method. The relative merits of each method are noted, along with the features they share. The paper concludes with an emphasis on the assumptions associated with these principled methods and recommendations for researchers. The remainder of this paper is divided into the following sections: (1) Terminology, (2) Multiple Imputation (MI), (3) Full Information Maximum Likelihood (FIML), (4) Expectation-Maximization (EM) Algorithm, (5) Demonstration, (6) Results, and (7) Discussion.
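The paper's own demonstrations use SAS 9.3. As a rough, language-neutral sketch of the same MI workflow (impute several times, analyze each completed data set, then pool with Rubin's rules), the Python example below uses scikit-learn's IterativeImputer and a statsmodels regression on simulated data; the data, model, and number of imputations are assumptions for illustration only, not the paper's empirical data set.

```python
# Hedged sketch of multiple imputation (MI) with Rubin's-rules pooling.
# Data and variable names are simulated placeholders; the paper itself uses SAS 9.3.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer

rng = np.random.default_rng(1)
n = 200
x = rng.normal(size=n)
y = 2.0 + 0.5 * x + rng.normal(scale=1.0, size=n)
df = pd.DataFrame({"x": x, "y": y})
df.loc[rng.random(n) < 0.25, "y"] = np.nan   # impose roughly 25% missingness on y

m = 20                                       # number of imputations
estimates, variances = [], []
for i in range(m):
    imputer = IterativeImputer(sample_posterior=True, random_state=i)
    completed = pd.DataFrame(imputer.fit_transform(df), columns=df.columns)
    fit = sm.OLS(completed["y"], sm.add_constant(completed["x"])).fit()
    estimates.append(fit.params["x"])
    variances.append(fit.bse["x"] ** 2)

# Rubin's rules: pool the m point estimates and their variances.
q_bar = np.mean(estimates)                   # pooled estimate
u_bar = np.mean(variances)                   # within-imputation variance
b = np.var(estimates, ddof=1)                # between-imputation variance
total_var = u_bar + (1 + 1 / m) * b
print(f"Pooled slope = {q_bar:.3f}, pooled SE = {np.sqrt(total_var):.3f}")
```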
Publication 2013
Conclude Resin Deletion Mutation Tooth Attrition
The Treatment of Adolescent Suicide Attempters study was a National Institute of Mental Health multisite feasibility study designed to develop and evaluate treatments to prevent suicide reattempts in adolescents. Participants were 124 male and female patients 12–18 years of age with a suicide attempt or interrupted attempt during the 90 days before enrollment (34 (link)–36 (link)). Participants were evaluated at baseline and at treatment weeks 6, 12, 18, and 24, as well as during intervening unscheduled visits. Evaluations included the C-SSRS, the Columbia Suicide History Form, the Scale for Suicide Ideation, and Beck’s Lethality Scale. All instruments were administered by independent evaluators, who were Ph.D.-, R.N.-, or master’s-level clinicians. Assessment of participants also included the self-report Beck Depression Inventory (BDI) at the same visits, as well as ratings by the treating psychopharmacologist (who was not the independent evaluator) on the Montgomery-Åsberg Depression Rating Scale (MADRS). Any potential suicidal events in the study were rated by the suicide evaluation board, which was an independent panel of suicidology experts uninvolved in the day-to-day management of the trial. The board, which was blind to original event classifications, treatment status, and other potentially biasing information, rated narratives according to predetermined criteria and definitions of potential suicidal events. Unanimous consensus was reached in cases where there was any initial disagreement.
Most participants (N=96, 77.4%) were assessed at week 12; 87 (70.2%) were evaluated at week 18, and 83 (66.9%) at week 24. Attrition between the study visits was due to participants refusing to continue study treatment or assessments. Participants who refused treatment but continued with assessments were included in the analyses. There was one death by suicide in the study during the follow-up period. As previously reported (36 (link)), participants who remained in the study for longer than the median duration were similar to those who were followed for less than the median duration on all baseline predictors of suicidal events except income.
Publication 2011
Adolescent Males Mental Health Patients Suicide Attempt Tooth Attrition Visually Impaired Persons Woman
In the life sciences, animals are used to elucidate normal biology, to improve understanding of disease pathogenesis, and to develop therapeutic interventions. Animal models are valuable, provided that experiments employing them are carefully designed, interpreted and reported. Several recent articles, commentaries and editorials highlight that inadequate experimental reporting can render such studies uninterpretable and difficult to reproduce1 (link)–8 (link). For instance, replication of spinal cord injury studies through an NINDS-funded program determined that many studies could not be replicated because of incomplete or inaccurate description of the experimental design, especially how randomization of animals to the various test groups, group formulation and delineation of animal attrition and exclusion were addressed7 (link). A review of 100 articles published in Cancer Research in 2010 revealed that only 28% of papers reported that animals were randomly allocated to treatment groups, just 2% of papers reported that observers were blinded to treatment, and none stated the methods used to determine the number of animals per group, a determination required to avoid false outcomes2 (link). In addition, analysis of several hundred studies conducted in animal models of stroke, Parkinson’s disease and multiple sclerosis also revealed deficiencies in reporting key methodological parameters that can introduce bias6 (link). Similarly, a review of 76 high-impact (cited more than 500 times) animal studies showed that the publications lacked descriptions of crucial methodological information that would allow informed judgment about the findings9. These deficiencies in the reporting of animal study design, which are clearly widespread, raise the concern that the reviewers of these studies could not adequately identify potential limitations in the experimental design and/or data analysis, limiting the benefit of the findings.
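As a generic, hypothetical illustration of one practice that the paragraph above reports as under-documented, the snippet below block-randomizes animal IDs to treatment groups with a fixed seed so that the allocation is balanced and reproducible. The group labels and animal counts are invented and do not come from any of the cited studies.

```python
# Hypothetical example of reproducible block randomization of animals to groups;
# group labels and animal counts are invented, not from any cited study.
import random

def block_randomize(animal_ids, groups, seed=42):
    """Assign animals to groups in shuffled blocks so group sizes stay balanced."""
    rng = random.Random(seed)
    assignment = {}
    block = []
    for animal in animal_ids:
        if not block:                 # refill and reshuffle one block per len(groups) animals
            block = list(groups)
            rng.shuffle(block)
        assignment[animal] = block.pop()
    return assignment

animals = [f"mouse_{i:02d}" for i in range(1, 13)]
allocation = block_randomize(animals, groups=["vehicle", "low_dose", "high_dose"])
for animal, group in allocation.items():
    print(animal, "->", group)
```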
Some poorly reported studies may in fact be well-designed and well-conducted, but analysis suggests that inadequate reporting correlates with overstated findings10 (link)–14 . Problems related to inadequate study design surfaced early in the stroke research community, as investigators tried to understand why multiple clinical trials based on positive results in animal studies ultimately failed. Part of the problem is, of course, that no animal model can fully reproduce all the features of human stroke. It also became clear, however, that many of the difficulties stemmed from a lack of methodological rigor in the preclinical studies that were not adequately reported15 (link). For instance, a systematic review and meta-analysis of studies testing the efficacy of the free-radical scavenger NXY-059 in models of ischaemic stroke revealed that publications that included information on randomization, concealment of group allocation, or blinded assessment of outcomes reported significantly smaller effect sizes of NXY-059 in comparison to studies lacking this information10 (link). In certain cases, a series of poorly designed studies, obscured by deficient reporting, may, in aggregate, serve erroneously as the scientific rationale for large, expensive and ultimately unsuccessful clinical trials. Such trials may unnecessarily expose patients to potentially harmful agents, prevent these patients from participating in other trials of possibly effective agents, and drain valuable resources and energy that might otherwise be more productively spent.
Publication 2012
Animal Model Animals Cerebrovascular Accident DNA Replication Free Radical Scavengers Impact 76 Malignant Neoplasms Multiple Sclerosis Muscle Rigidity NXY 059 pathogenesis Patients Spinal Cord Injuries Stroke, Ischemic Tooth Attrition
Two main tasks were undertaken for this study. The first task was to extract the quality assessments of Burke et al. from systematic reviews. The criteria for an eligible review were that it: (1) covered the comparison of antibiotic therapy with placebo or symptomatic therapy for AOM in children; (2) was based on primary trials identified by a form of systematic search; (3) had as its main aim summarizing the therapeutic evidence (with or without a meta-analysis); (4) included the paper by Burke et al.; and (5) was published in or before December 2006. Reviews that merely summed up the findings of other reviews were excluded. For updated reviews, the most recent version was used. I identified the potential reviews from a hand search of the English literature. Nine systematic reviews published from 1993 to 2006 fulfilled these criteria and were selected [37 (link)-43 (link),46,53 (link)].
From each review, I recorded the quality evaluation method, the quality evaluation for Burke et al., and how its data were used. I focused on issues pertaining to internal validity (selection, performance, detection, attrition, analysis and reporting biases) as broadly described by [16 (link)]. The information extracted is detailed in Additional file 1: Burke et al. (1991) in nine systematic reviews.
The second main task was to perform an in-depth, comprehensive evaluation of Burke et al. This was done by repeatedly and carefully reading the paper, performing a section-by-section and, at times, sentence-by-sentence dissection, and recording the relevant information. I checked all the data for completeness, consistency and accuracy, and assessed the reasoning, methods and conclusions in various parts of the paper. Where possible, I reanalyzed the data. At times, all potential datasets consistent with the other information provided were generated and analyzed. During this process, I kept in mind the same quality components relating to internal validity as those noted above for the systematic reviews. Beyond that, I did not follow any formal method; the approach in such an exercise will necessarily vary from trial to trial and from medical field to medical field.
At the end, I formulated plausible explanations and an overall perspective for the distinct problems I found. Where feasible, flaws of reporting were distinguished from the flaws in the design, conduct and analysis [23 (link)]. It took about three months of focused work to complete my in-depth review. The information extracted and the complete picture I formed of this trial are in Additional file 2: A detailed critique of Burke et al. (1991).
To complement these main tasks, two other activities were undertaken. First, I performed a checklist-based quality assessment of Burke et al. using Table 1 of Balk et al. [19 (link)] as the template. This was completed before the in-depth evaluation. The aim was, in part, to compare its conclusions with those I found in the systematic reviews and, in part, to provide an overall, standardized description of the trial for this paper.
The second activity involved a class of medical researchers and postgraduate students attending a course on evidence-based medicine that I conducted at the University of Oslo in June 2006. They had had a day of lectures on the history and basic principles of clinical trials, in which several examples of poor and high-quality trials were given. They were then required to read Burke et al. and another paper (the first reported randomized trial comparing antibiotics with placebo for AOM [44 (link)]). At the start of the next class session, they provided an overall quality evaluation of the two papers on a five-point scale (1 = very bad, 2 = poor, 3 = acceptable, 4 = good, 5 = excellent). At this stage, the use of specific scales or instruments to assess trial quality had not been discussed; hence, this was not a checklist-based assessment. The aim was to see whether the students would spot problems with Burke et al. that the systematic reviews had overlooked.
Publication 2009
Antibiotics Antibiotics, Antitubercular Child Dissection Placebos Student Therapeutics Tooth Attrition

Most recent protocols related to «Tooth Attrition»

Example 3

The photocatalytic oxidation of toluene to CO2 and H2O was performed using the same setup as Example, except that the quartz tube used had been worn by attrition from 300 mg of catalyst over 6 weeks at a flow of 1000 sccm. The toluene conversion of the worn tube was compared with that of a new quartz tube. The transmission of the worn tube, measured normal to an LED source with the reactor tube in between, was 2× lower than that of the new tube. The toluene concentration was 3 ppm for Example 3. Using the same light source and reactor geometries, the toluene concentration decreased to ~100 ppb for both reactor tubes despite the difference in light transmission, indicating that attrition did not adversely affect performance.
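For orientation, the toluene conversion implied by the reported inlet and outlet concentrations works out to roughly 97%; the short calculation below simply restates that arithmetic.

```python
# Toluene conversion from the inlet/outlet concentrations quoted above.
c_in_ppb = 3_000   # 3 ppm inlet concentration
c_out_ppb = 100    # ~100 ppb outlet concentration (approximate, as reported)

conversion = (c_in_ppb - c_out_ppb) / c_in_ppb
print(f"Approximate toluene conversion: {conversion:.1%}")  # roughly 96.7% for both tubes
```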

Patent 2024
Light Quartz Toluene Tooth Attrition Transmission, Communicable Disease
Recruitment for the semistructured interviews was conducted using purposive and snowball sampling at a large Veterans Affairs hospital in the Southeast United States. Veterans and their care partners were identified from the following sources: (1) on-site in-services conducted with primary care teams, (2) web-based in-services, (3) direct staff referrals, and (4) internal tracking of the report of current registrants. Potential participants were approached by phone. A subsample of the interview participants were asked to participate in the subsequent user testing phase; three agreed to do so. One health care staff member became aware of the project and volunteered to take part in the user testing to provide a staff perspective.
Inclusion criteria included veterans who were aged ≥18 years, who were registered My HealtheVet users, who had no cognitive impairment that prevented the use of a PC or the ability to engage in project activities, and who reported having a caregiver who assisted them with health care management. Inclusion criteria for care partners included those aged ≥18 years, who had no cognitive impairment that prevented the use of a PC or the ability to engage in project activities, and who reported providing caregiving assistance.
On the basis of qualitative sampling methods, saturation was anticipated to occur between 12 and 15 interviews for each VDT user type (ie, veteran and care partner) [22 ]. An overrecruitment strategy was used to allow for attrition. Up to 25 individuals representing each user type were recruited to ensure saturation across domains.
Publication 2023
Disorders, Cognitive Health Services Administration Primary Health Care Tooth Attrition Veterans
A PhD-level researcher systematically searched Google Scholar, PsycINFO, and PubMed databases for articles with the word “binge” in their title and the terms “ecological momentary” or “experience sampling” to find risk state descriptors with relevance to binge eating in the literature. The search resulted in 509 articles that were deduplicated and scanned for relevance. Only empirical articles reporting the results of EMA studies on binge eating were retained. A total of 262 articles were subsequently analyzed (see Multimedia Appendix 1, Figure S1 for an attrition diagram).
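The article does not describe the deduplication tooling; as a hypothetical sketch, records pulled from the three databases could be merged and de-duplicated on a normalized title, for example:

```python
# Hypothetical deduplication of search records by normalized title;
# field names and records are invented for illustration.
import re

def normalize_title(title: str) -> str:
    """Lowercase, strip punctuation, and collapse whitespace for matching."""
    return re.sub(r"\s+", " ", re.sub(r"[^a-z0-9 ]", "", title.lower())).strip()

records = [
    {"source": "PubMed",         "title": "Ecological momentary assessment of binge eating"},
    {"source": "PsycINFO",       "title": "Ecological Momentary Assessment of Binge Eating."},
    {"source": "Google Scholar", "title": "Experience sampling of binge-eating episodes"},
]

seen, deduplicated = set(), []
for record in records:
    key = normalize_title(record["title"])
    if key not in seen:
        seen.add(key)
        deduplicated.append(record)

print(len(deduplicated), "unique records")  # 2 unique records in this toy example
```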
Publication 2023
Tooth Attrition
Patients at risk for developing diabetes were randomly assigned (2:1) by the project manager to either an RCT or a Choice study arm using a blocked (blocks of 4) randomization table, stratified by sex, created by the study statistician. Patients in the RCT arm were further randomized (1:1:1) to one of three groups: (1) a 2-h small-group class designed to help patients develop a personal action plan to prevent diabetes (SC) (25 (link)); (2) a 2-h small-group class plus automated telephone calls using an interactive voice response system (IVR) to help participants initiate weight loss via the promotion of a healthful diet and regular physical activity, and maintain their behavior changes over a period of 12 months (Class/IVR); or (3) a DVD with the same content as the class plus the same IVR calls over a period of 12 months (DVD/IVR).
This paper presents weight-related outcomes associated with the randomized controlled trial arm of the DiaBEAT-it study (24 (link)). We powered the study to detect statistically significant body weight changes at 6 and 12 months in Class/IVR and DVD/IVR when compared with the SC group within the RCT design. Sample size was determined using the average weight loss and standard deviations found in our previous studies (25 (link)–27 (link)) for the 6-month effect and averages from the literature (28 (link), 29 (link)) for the 12-month effect. Assuming a correlation of 0.5 between repeated measures, we estimated that a sample size of 78 participants per group would give us 90% power to detect a minimum detectable difference in weight change of 2.3 lbs at 6 months and 2.7 lbs at 12 months. The goal for enrollment was 120 participants per group to achieve a sample size of 78 after an estimated 35% attrition at 18 months. The trial design and methods have been described in detail elsewhere (24 (link)). Supplementary Figure 1 provides the CONSORT information for the RCT study arm. This study and protocol were approved by the Carilion Clinic Institutional Review Board, and the trial was registered at clinicaltrials.gov (NCT02162901).
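The enrollment target follows from inflating the analytic sample size for the expected attrition. The sketch below reproduces that arithmetic and adds an illustrative two-sample power check with statsmodels; the standard deviation of weight change used there is an assumed placeholder, since the protocol text above does not report one.

```python
# Reproduces the attrition-inflation arithmetic from the text; the SD used in the
# power calculation is an assumption for illustration, not a value from the study.
import math
from statsmodels.stats.power import TTestIndPower

analyzable_per_group = 78
expected_attrition = 0.35
enrollment_target = math.ceil(analyzable_per_group / (1 - expected_attrition))
print("Enroll per group:", enrollment_target)    # 120, matching the stated goal

# Illustrative power check: detect a 2.3 lb difference in weight change at 6 months.
assumed_sd_change = 4.5                           # hypothetical SD of weight change (lbs)
effect_size = 2.3 / assumed_sd_change             # Cohen's d under that assumption
power = TTestIndPower().power(effect_size=effect_size,
                              nobs1=analyzable_per_group,
                              alpha=0.05, ratio=1.0)
print(f"Power under the assumed SD: {power:.2f}")
```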
Publication 2023
Diabetes Mellitus Ethics Committees, Research Health Promotion Patients Reducing Diet Tooth Attrition
Participant attrition is a risk we will work to mitigate. First, by collaborating with the primary care physician for referrals, we will build participant trust in the intervention. Second, to reduce participant burden, we will offer flexible appointment times in participants’ homes, and research staff will make reminder calls prior to appointments. Lastly, we are providing remuneration for each testing event. In the unforeseen event that the unbiased study evaluator is lost, the PI will use funds to purchase an occupational therapist colleague’s time for follow-up testing.
Publication 2023
Occupational Therapist Primary Care Physicians Tooth Attrition

Top products related to «Tooth Attrition»

Sourced in United States, Austria, Japan, Cameroon, Germany, United Kingdom, Canada, Belgium, Israel, Denmark, Australia, New Caledonia, France, Argentina, Sweden, Ireland, India
SAS version 9.4 is a statistical software package. It provides tools for data management, analysis, and reporting. The software is designed to help users extract insights from data and make informed decisions.
Sourced in United States, Austria, Japan, Belgium, United Kingdom, Cameroon, China, Denmark, Canada, Israel, New Caledonia, Germany, Poland, India, France, Ireland, Australia
SAS 9.4 is an integrated software suite for advanced analytics, data management, and business intelligence. It provides a comprehensive platform for data analysis, modeling, and reporting. SAS 9.4 offers a wide range of capabilities, including data manipulation, statistical analysis, predictive modeling, and visual data exploration.
Sourced in United States, Denmark, Austria, United Kingdom, Japan, Canada
Stata version 14 is a software package for data analysis, statistical modeling, and graphics. It provides a comprehensive set of tools for data management, analysis, and reporting. Stata version 14 includes a wide range of statistical techniques, including linear regression, logistic regression, time series analysis, and more. The software is designed to be user-friendly and offers a variety of data visualization options.
Sourced in United States, United Kingdom, Denmark, Austria, Belgium, Spain, Australia, Israel
Stata is a general-purpose statistical software package that provides a comprehensive set of tools for data analysis, management, and visualization. It offers a wide range of statistical methods, including regression analysis, time series analysis, and multilevel modeling, among others. Stata is designed to facilitate the analysis of complex data sets and support the entire research process, from data import to report generation.
Sourced in United States, Denmark, Austria, United Kingdom, Japan
Stata version 15 is a data analysis and statistical software package. It provides a range of tools for data management, statistical analysis, and visualization. Stata version 15 supports a variety of data types and offers a comprehensive set of statistical procedures.
Sourced in United States, Denmark, United Kingdom, Belgium, Japan, Austria, China
Stata 14 is a comprehensive statistical software package that provides a wide range of data analysis and management tools. It is designed to help users organize, analyze, and visualize data effectively. Stata 14 offers a user-friendly interface, advanced statistical methods, and powerful programming capabilities.
Sourced in United States, Austria, United Kingdom, Cameroon, Belgium, Israel, Japan, Australia, France, Germany
SAS v9.4 is a software product developed by SAS Institute. It is a comprehensive data analysis and statistical software suite. The core function of SAS v9.4 is to provide users with tools for data management, analysis, and reporting.
Sourced in United States, Japan, United Kingdom, Germany, Belgium, Austria, Spain, France, Denmark, Switzerland, Ireland
SPSS version 20 is a statistical software package developed by IBM. It provides a range of data analysis and management tools. The core function of SPSS version 20 is to assist users in conducting statistical analysis on data.
Sourced in United States, United Kingdom, Austria, Denmark
Stata 15 is a comprehensive, integrated statistical software package that provides a wide range of tools for data analysis, management, and visualization. It is designed to facilitate efficient and effective statistical analysis, catering to the needs of researchers, analysts, and professionals across various fields.

More about "Tooth Attrition"

Dental Attrition, Tooth Wear, Occlusal Wear, Abrasion, Erosion, Bruxism, Mastication, Grinding, Chewing, Dentin Exposure, Enamel Wear, Tooth Function, Dental Investigations, Dental Research, Tooth Degradation, Dental Abrasion, Tooth Deterioration, Tooth Destruction, Tooth Loss, Dental Erosion, Dental Bruxism, Dental Occlusion, Dental Mechanics, Dental Biomechanics, Dental Tribology, Dental Epidemiology, Dental Biostatistics, SAS 9.4, Stata 14, Stata 15, SPSS 20.
Researchers can optimize their tooth attrition studies using PubCompare.ai, which helps identify the most effective protocols from literature, preprints, and patents.
This AI-driven platform enhances reproducibility and accuracy, enabling researchers to discover the most effective methods and products for their tooth attrition investigations.