Quantitative data were analyzed using IBM SPSS (version 27). Descriptive statistics (i.e., means, standard deviations [SD], frequencies, percentages) were computed to describe the sample at baseline. Following this, data were checked for approximately normal distribution, univariate outliers (z-score greater than 3 or less than −3), multivariate outliers (p-value < 0.01 on the Mahalanobis distance test), and sphericity. In cases where outliers were identified, sensitivity testing was performed (with and without outliers) to confirm consistent trends in the data, and outliers were then removed on a variable-by-variable basis to enhance homogeneity and maximize statistical power. Repeated measures analyses of variance [51] were conducted to examine changes across time points (baseline [week 0], post-intervention [week 8], follow-up [week 16]) in physical and psychological outcomes. Of note, data were not nested by wave or instructor, no adjustments were made, and a higher type I error probability was accepted (i.e., an uncorrected significance level of 0.05) to decrease the risk of missing a potentially beneficial effect of yoga. The effect size of these changes was computed with partial eta squared (ηp²; small effect = 0.01, medium effect = 0.06, large effect = 0.14).
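The outlier-screening rules and the effect-size computation described above can be sketched as follows. This is a minimal illustration rather than the authors' SPSS procedure: the array `data` and all variable names are hypothetical placeholders, and partial eta squared is recovered from the F statistic via ηp² = (F × df_effect) / (F × df_effect + df_error), which is algebraically equivalent to SS_effect / (SS_effect + SS_error).

```python
import numpy as np
from scipy import stats

# Placeholder data: 60 participants x 3 outcome variables (hypothetical).
rng = np.random.default_rng(0)
data = rng.normal(size=(60, 3))

# Univariate outliers: standardized scores beyond +/-3 on any variable.
z = stats.zscore(data, axis=0)
univariate_mask = np.abs(z) > 3

# Multivariate outliers: squared Mahalanobis distance compared against a
# chi-square distribution (df = number of variables), flagged at p < .01.
diff = data - data.mean(axis=0)
cov_inv = np.linalg.inv(np.cov(data, rowvar=False))
d2 = np.einsum('ij,jk,ik->i', diff, cov_inv, diff)  # squared Mahalanobis distance
p_values = stats.chi2.sf(d2, df=data.shape[1])
multivariate_outliers = p_values < 0.01

def partial_eta_squared(f_value, df_effect, df_error):
    """Partial eta squared from an F statistic: F*df1 / (F*df1 + df2)."""
    return (f_value * df_effect) / (f_value * df_effect + df_error)
```

With three time points and, say, 30 participants retained, an omnibus F(2, 58) = 6.0 would correspond to `partial_eta_squared(6.0, 2, 58)` ≈ 0.17, a large effect by the benchmarks cited above.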
To analyze the qualitative data, interviews were transcribed verbatim and uploaded into NVivo (version 12), where they were subsequently analyzed by one author (EM) using conventional content analysis [52]. First, EM read each transcript several times to immerse herself in the data. Next, EM coded transcripts, created labels reflecting key ideas, and sorted the codes into higher-order categories. At this point, the author sent the coding scheme to another author (AW), who had reviewed the transcripts several times and challenged EM’s thoughts and interpretations. Following this, EM generated definitions for each category and selected exemplar quotes from the data to illustrate findings from the interviews. The penultimate coding scheme was then sent to all authors, each of whom was involved in the study design, intervention delivery, and/or data collection, to review and approve. Following this, EM revisited all raw data to ensure participants’ voices were accurately represented, and the coding scheme was finalized. To promote rigor and trustworthiness, several steps recommended in the literature were followed [53]. The two authors who conducted the interviews (EM, KE) and the author who conducted the content analysis (EM) kept reflexivity journals and continuously (re-)examined their own perspectives and how these might influence interpretations. A critical friend (AW) challenged interpretations and sought to ensure the results represented participants’ voices, and all authors critically reviewed the findings. Finally, category descriptions and exemplar quotes are presented herein to provide transparency.