Feasibility and reliability were evaluated using data from the Monitoring study. Feasibility included the TiC-P response rate, respondents' time to complete the TiC-P, and data completeness on the items of the questionnaire. Completeness is reported as proportions of missing values. Additionally, completeness of reported medication was evaluated, including the medication name, the dose per intake, the number of doses per day, and the number of days the medication was taken during the previous 4 weeks. For this evaluation a random sample (n=283) was drawn from the questionnaires submitted by patients who actually reported use of medication. Completeness of the medication name was checked manually; the name was defined as missing if no name was reported or if the medication was described only in general terms (e.g. sleeping pill, antibiotic). Respondents' time to complete the TiC-P was assessed as follows. Patients in the Monitoring study filled out a number of different questionnaires online in a single session. Consequently, the web-based dataset contained submission times for successively submitted questionnaires, and the average completion time of the TiC-P was calculated by subtracting the submission times of adjacent questionnaires. In the same way, we estimated the mean completion time of the TiC-P for the first measurement and for subsequent measurements. Differences in completion time between baseline and follow-up measurements were evaluated using a paired-sample t-test.
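The baseline versus follow-up comparison of completion times can be sketched as follows. This is a minimal Python illustration of a paired-sample t-test using SciPy; the study itself used SPSS, and the completion times below are hypothetical values, not data from the Monitoring study.

```python
# Illustrative sketch: paired-sample t-test comparing TiC-P completion
# times (in minutes) at baseline and follow-up for the same respondents.
# The values below are hypothetical, not data from the Monitoring study.
from scipy import stats

baseline_minutes = [10.2, 12.5, 11.0, 13.4, 14.1, 9.8]   # first measurement
followup_minutes = [8.1, 9.6, 10.2, 11.0, 12.3, 8.0]     # later measurement

t_stat, p_value = stats.ttest_rel(baseline_minutes, followup_minutes)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```

A positive t-statistic here would indicate shorter completion times at follow-up, consistent with respondents becoming familiar with the questionnaire.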
Reliability was assessed using a test-retest design. Test-retest reliability analyses were performed to evaluate the consistency of the reported data. For these analyses a subsample was invited to fill out the TiC-P again (retest) two weeks after submission of the original measurement. A cover letter explained the purpose of the retest, and a gift voucher of €10 was offered if the retest questionnaire was returned. Consistency of categorical (yes/no) variables was assessed as the percentage of absolute agreement, indicating the proportion of cases with the same value on the test and retest questionnaires. To adjust for the fact that a number of these agreements may arise by chance alone, chance-corrected agreement was assessed using Cohen's kappa coefficient (κ). The following interpretation was attached to the coefficients: modest (0.21-0.40); moderate (0.41-0.60); satisfactory (0.61-0.80); and almost perfect (0.81-1.00)
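The chance-corrected agreement described above can be made concrete with a short Python sketch of Cohen's kappa for binary (yes/no) items. This is an illustrative implementation, not the SPSS procedure used in the study, and the test/retest answers shown are hypothetical.

```python
import numpy as np

def cohens_kappa(test, retest):
    """Chance-corrected agreement between two binary (1 = yes, 0 = no) ratings."""
    test, retest = np.asarray(test), np.asarray(retest)
    p_observed = np.mean(test == retest)  # percentage of absolute agreement
    # expected agreement if test and retest answers were independent
    p_chance = (np.mean(test) * np.mean(retest)
                + (1 - np.mean(test)) * (1 - np.mean(retest)))
    return (p_observed - p_chance) / (1 - p_chance)

# hypothetical yes/no answers on the test and the two-week retest
test_answers   = [1, 1, 0, 0, 1, 0, 1, 0]
retest_answers = [1, 1, 0, 1, 1, 0, 1, 0]
print(f"kappa = {cohens_kappa(test_answers, retest_answers):.2f}")  # 0.75
```

With 7 of 8 answers identical, the observed agreement is 0.875 but κ is 0.75 ("satisfactory" in the scheme above), illustrating how kappa discounts agreement expected by chance.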
[20]. Consistency of data at the interval level was evaluated by computing intraclass correlation coefficients (ICC) (two-way mixed models; absolute agreement).
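The ICC computation can likewise be sketched in Python. The function below is an illustrative single-measures implementation of the two-way, absolute-agreement ICC (McGraw and Wong's ICC(A,1), equivalent to Shrout and Fleiss's ICC(2,1)), not the SPSS routine used in the study; the scores are hypothetical.

```python
import numpy as np

def icc_absolute_agreement(ratings):
    """Single-measures ICC, two-way model with absolute agreement.
    `ratings` is an (n subjects x k measurement occasions) array."""
    x = np.asarray(ratings, dtype=float)
    n, k = x.shape
    grand = x.mean()
    # mean squares from the two-way ANOVA decomposition
    msr = k * np.sum((x.mean(axis=1) - grand) ** 2) / (n - 1)  # between subjects
    msc = n * np.sum((x.mean(axis=0) - grand) ** 2) / (k - 1)  # between occasions
    sse = np.sum((x - grand) ** 2) - (n - 1) * msr - (k - 1) * msc
    mse = sse / ((n - 1) * (k - 1))                            # residual
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

# hypothetical interval-level scores at test (column 0) and retest (column 1)
scores = [[12, 11], [5, 6], [20, 19], [0, 1], [8, 8]]
print(f"ICC = {icc_absolute_agreement(scores):.2f}")
```

Unlike a plain correlation, this ICC penalizes systematic differences between test and retest (via the between-occasions term), which is why the absolute-agreement definition suits test-retest consistency.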
The construct validity of the TiC-P was evaluated by assessing the agreement between reported and registered data for the items 'contacts with a psychotherapist' and 'long-term absence from work', including the percentage of absolute agreement, the absolute differences between reported and registered data, and Spearman's rank correlation coefficient (rho). Reported data on contacts with therapists were compared with the therapists' registration data. Additionally, reported data on long-term absence from work were compared with registration data from the occupational health service. As registration data on absence from work were not accessible for the Monitoring study, the latter were derived from the Collaborative Care Study
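The validity comparison of reported against registered data can be sketched as follows, combining the percentage of absolute agreement with Spearman's rho from SciPy. The contact counts below are hypothetical, not data from either study.

```python
# Illustrative sketch: agreement between patient-reported and
# therapist-registered contact counts. Hypothetical values only.
import numpy as np
from scipy import stats

reported   = np.array([2, 0, 4, 1, 3, 0, 2, 5])  # patient-reported contacts
registered = np.array([2, 0, 3, 1, 3, 1, 2, 5])  # therapist registration

pct_agreement = np.mean(reported == registered) * 100
rho, p_value = stats.spearmanr(reported, registered)
print(f"absolute agreement: {pct_agreement:.1f}%, rho = {rho:.2f}")
```

Reporting both measures is informative because a high rho shows the rank ordering of patients is preserved even when exact counts (and hence absolute agreement) differ slightly.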
[19]. All statistical analyses were performed in SPSS (version 17.0; Chicago, IL). Statistical significance was set at a p-value of 0.050.