Our prospective cohort study was conducted alongside two ongoing clinical trials coordinated by our institution, an academic Level-1 trauma center in The Netherlands. The medical ethical review committee approved this parallel study before initiation and waived the requirement for informed consent from participants.
Patients for our cohort study were recruited from the two ongoing clinical trials between January 2011 and July 2014, during their first visit to the outpatient clinic. To increase the size of our cohort, we also recruited patients with distal radius fractures seen at the outpatient clinic who were not enrolled in either trial; these patients were enrolled in our study between January 2014 and July 2014.
Our study population consisted of 102 patients with distal radius fractures. Patients were excluded if they: (1) did not want to complete the questionnaire at the outpatient clinic; (2) did not complete the anchor questions; (3) were unable to understand the study information; or (4) had sustained their distal radius fracture more than 1 year before their visit to the outpatient clinic.
The first of the two concurrent trials [3] was a study of two- and three-dimensional imaging; it provided 42 adult patients with intraarticular distal radius fractures treated with open reduction and internal fixation with a volar locking plate.
The second trial [25] randomized patients with displaced extraarticular distal radius fractures (AO types A2 and A3 [17]) to either open reduction and internal fixation with a volar locking plate or plaster immobilization; it provided 39 patients.
Additionally, during the first 6 months of 2014, we identified 55 patients who were not enrolled in either clinical trial but were eligible for our study; all adult patients with a distal radius fracture were eligible, regardless of the treatment they received. After applying the exclusion criteria, 21 of these patients were included in our study cohort.
There are two methods to define the MCID: (1) a distribution-based and (2) an anchor-based approach [5]. The distribution-based approach examines the distribution of observed scores in a group of patients and interprets the magnitude of an effect relative to the variation of the instrument [9]. In other words, it asks whether an observed effect reflects true change or merely the measurement variability of the questionnaire.
The anchor-based approach uses an external criterion (the anchor) to determine the MCID. Possible anchors include objective measurements, such as prehensile grip strength and ROM, or patient-reported anchor questions. The purpose of a patient-reported anchor question is to “anchor” the changes observed in the PRWE score to patients’ perspectives of what is clinically important [13].
Anchor-based methods to determine the MCID are preferred because an external criterion defines what is clinically important [7]; however, the anchor-based method does not account for the measurement error of the instrument, so it is valuable to use both the anchor-based and distribution-based approaches [7]. To avoid confusion, the distribution-based method generally is referred to as the minimum detectable change (MDC) and the anchor-based method as the MCID [7]; we use these terms accordingly.
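To make the distinction concrete, the distribution-based MDC is conventionally derived from the standard error of measurement (SEM). The formulas below show that conventional derivation for illustration only; the exact computation used in this study is not specified in this section.

```latex
% Conventional distribution-based quantities (illustrative; not necessarily this study's exact method)
\mathrm{SEM} = \mathrm{SD}\,\sqrt{1 - r}
% SD = standard deviation of PRWE scores, r = test-retest reliability (e.g., ICC)
\mathrm{MDC}_{95} = 1.96 \times \sqrt{2} \times \mathrm{SEM}
% smallest change detectable beyond measurement error with 95% confidence
```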
Data were collected prospectively. Patients completed the Dutch version of the PRWE questionnaire during two visits, at approximately 6 to 12 weeks and approximately 12 to 52 weeks after the distal radius fracture.
At the second visit, patients were asked to indicate, for each domain (pain and function), the degree of clinical change they had noticed since the previous visit. Patients noted their answers on a global rating of change (GRC) scale ranging from −5 (much worse) to +5 (much better) (Fig. 1) [11]. The purpose of this question was to “anchor” the changes observed in the PRWE score to patients’ perspectives regarding what is clinically important [13].

Fig. 1 The global rating of change (GRC) scale used with the Patient-rated Wrist Evaluation (PRWE) questionnaire is shown. The anchor questions allowed patients to assess their current health status regarding wrist function and wrist pain and to compare it with their status at the previous visit.
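As an illustration of how the anchor responses can be used analytically, the sketch below groups patients by their GRC answer and takes the mean PRWE change among minimally improved patients as the anchor-based MCID. The GRC cutoffs (0 for no change; +1 to +3 for minimal improvement), the column names, and the example values are hypothetical assumptions for illustration, not the definitions or data used in this study.

```python
import pandas as pd

def anchor_based_mcid(df: pd.DataFrame) -> float:
    """Mean PRWE change among minimally improved patients (illustrative anchor-based MCID).

    Assumes columns 'prwe_change' (change in PRWE score between the two visits) and
    'grc' (global rating of change, -5 to +5). The GRC range of +1 to +3 for "minimal
    improvement" is a hypothetical convention, not the cutoff reported by the authors.
    """
    minimally_improved = df[df["grc"].between(1, 3)]
    return minimally_improved["prwe_change"].mean()

# Hypothetical toy values, not study data:
example = pd.DataFrame({
    "prwe_change": [2, 15, 8, 30, 12, 0],
    "grc":         [0,  2, 1,  5,  3, 0],
})
print(anchor_based_mcid(example))  # mean PRWE change in patients who rated themselves GRC +1 to +3
```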

There is no consensus regarding the required sample size to determine the MCID [19]. We estimated the sample size based on a conservatively estimated MCID of 12 points and an SD of 14 points [12, 20, 22]. To achieve an α of 0.05 and a power of 80%, we required 18 data points representing no change and 18 data points representing minimal improvement.
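For reference, a calculation of this general form (comparing mean PRWE change between a “no change” group and a “minimally improved” group) can be reproduced with standard power-analysis tools. The sketch below uses statsmodels; whether the original estimate assumed a one- or two-sided test, or a t versus a normal approximation, is not stated here, so this is a plausible reconstruction under those assumptions rather than the authors’ exact method.

```python
from statsmodels.stats.power import TTestIndPower

# Inputs taken from the text: MCID = 12 points, SD = 14 points, alpha = 0.05, power = 0.80.
effect_size = 12 / 14  # standardized difference between the two anchor groups

# One-sided independent-samples t-test; the sidedness is an assumption, not stated in the text.
n_per_group = TTestIndPower().solve_power(
    effect_size=effect_size, alpha=0.05, power=0.80, alternative="larger"
)
print(n_per_group)  # per-group sample size; round up to whole patients
```

With these inputs the calculation yields a little under 18 patients per group, which after rounding up is consistent with the 18 data points per group reported above.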