Reliability was assessed using a test-retest design to evaluate the consistency of the data reported. For these analyses, a subsample was invited to complete the TiC-P again (retest) two weeks after submission of the original measurement. A cover letter explained the purpose of the retest, and a gift voucher of €10 was offered for returning the retest questionnaire. Consistency of categorical (yes/no) variables was assessed with the percentage of absolute agreement, indicating the proportion of cases with the same value on the test and retest questionnaires. To adjust for the fact that some of these agreements may arise by chance alone, chance-corrected agreement was assessed using Cohen's kappa coefficients (κ values). The following interpretations were attached to the coefficients: modest (0.21-0.40), moderate (0.41-0.60), satisfactory (0.61-0.80), and almost perfect (0.81-1.00) [20].
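As a minimal illustration of these two agreement statistics (not the study's actual analysis code; the arrays below are hypothetical), the percentage of absolute agreement and Cohen's kappa for a single yes/no item could be computed as follows:

```python
# Sketch: absolute agreement and Cohen's kappa for one yes/no item.
# The test/retest arrays are hypothetical, not data from the study.
import numpy as np
from sklearn.metrics import cohen_kappa_score

test   = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 1])  # 1 = yes, 0 = no
retest = np.array([1, 0, 1, 0, 0, 1, 0, 1, 1, 1])

# Proportion of cases with the same value on test and retest
pct_agreement = np.mean(test == retest) * 100

# Chance-corrected agreement (Cohen's kappa)
kappa = cohen_kappa_score(test, retest)

print(f"absolute agreement: {pct_agreement:.1f}%, kappa: {kappa:.2f}")
```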
Consistency of data at the interval level was evaluated by computing intraclass correlation coefficients (ICCs; two-way mixed models, absolute agreement).
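A comparable sketch for the ICC, assuming hypothetical test-retest scores in long format and using the pingouin package rather than the SPSS procedure the study used:

```python
# Sketch: ICC for a two-way model with absolute agreement, on hypothetical
# long-format data (one score per subject per occasion).
import pandas as pd
import pingouin as pg

df = pd.DataFrame({
    "subject":  [1, 1, 2, 2, 3, 3, 4, 4, 5, 5],
    "occasion": ["test", "retest"] * 5,
    "score":    [3.0, 3.5, 1.0, 1.0, 4.0, 3.5, 2.0, 2.5, 5.0, 5.0],
})

icc = pg.intraclass_corr(data=df, targets="subject", raters="occasion",
                         ratings="score")
# The ICC2 row gives the single-measures ICC for absolute agreement; its
# formula is the same for the two-way random and two-way mixed variants.
print(icc.loc[icc["Type"] == "ICC2", ["Type", "ICC", "CI95%"]])
```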
The construct validity of the TiC-P was evaluated by assessing the agreement between reported and registered data for the items 'contacts with a psychotherapist' and 'long-term absence from work', using the percentage of absolute agreement, the absolute differences between reported and registered data, and Spearman rank correlation coefficients (rho). Reported data on contacts with therapists were compared with the therapists' registration data. Additionally, reported data on long-term absence from work were compared with registration data from the occupational health service. As registration data on absence from work were not accessible for the Monitoring study, the latter were derived from the Collaborative Care study [19].
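These validity statistics can likewise be sketched for a single item, again with hypothetical reported and registered counts standing in for the study's data:

```python
# Sketch: agreement between self-reported and registered contact counts,
# using hypothetical values for an item such as 'contacts with a
# psychotherapist'.
import numpy as np
from scipy.stats import spearmanr

reported   = np.array([2, 0, 5, 3, 1, 0, 4, 2])  # self-reported contacts
registered = np.array([2, 0, 4, 3, 1, 1, 4, 2])  # therapist registration

pct_agreement = np.mean(reported == registered) * 100  # % absolute agreement
abs_diff = np.abs(reported - registered)               # absolute differences
rho, pval = spearmanr(reported, registered)            # Spearman's rho

print(f"absolute agreement: {pct_agreement:.1f}%")
print(f"mean absolute difference: {abs_diff.mean():.2f}")
print(f"Spearman rho: {rho:.2f} (p = {pval:.3f})")
```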
All statistical analyses were performed in SPSS (version 17.0; SPSS Inc., Chicago, IL). Statistical significance was set at a p-value of 0.05.