Because much of the evidence base guiding this process emerged from studies in oncology, we focused specifically on the cancer context. In addition to the project team and this project's SAB, we purposively invited representatives from key stakeholder groups: cancer patients/caregivers, oncology clinicians, PRO researchers, and stakeholders specific to particular applications (e.g., electronic health record vendors for individual patient data, decision aid experts for research data presented to patients, journal editors for research data presented to clinicians).
Prior to each in-person meeting, we held a webinar that oriented participants to the purpose of the project and the specific data display issues to be addressed for the relevant applications.
After the pre-meeting webinar, we surveyed participants' initial perspectives using Qualtrics, a web-based survey platform with protections for sensitive data [18]. Specifically, for each issue, we first asked participants to rate whether there ought to be a standard on that topic; response options were "Important to Present Consistently," "Consistency Desirable," "Variation Acceptable," and "Important to Tailor to Personal Preferences." Regardless of their response to this question, we then asked participants what the standard should be, with the alternative approaches for addressing that issue as the response options. For example, for data presented to patients, the options for presenting proportions included pie charts, bar charts, and icon arrays, based on the available evidence [16]. After each question, participants were asked to explain the rationale for their responses in free-text boxes. A summary of the pre-meeting survey results and comments was circulated prior to the meeting.
At each in-person meeting, we addressed each of the data display issues in turn, briefly summarizing the evidence base and the pre-meeting survey feedback before opening the topic for discussion. At Meeting #1, participants aimed for consistency across the two applications wherever possible. For Meeting #1 topics that were also addressed at Meeting #2, after an initial discussion the consensus statements from Meeting #1 were shared for the Meeting #2 group's consideration; the group could accept each statement unchanged, modify it, discard it, or develop a new statement.
Following the discussion, participants voted using an audience response system (to ensure anonymity) on whether there should be a standard and, where a standard was supported, what that standard should be. Issues considered inappropriate for a standard, as well as topics for further research, were also noted. After the meeting, the consensus statements were circulated to participants via Qualtrics; each participant was asked whether each statement was "acceptable" or "not acceptable," and, if the latter, to explain why in a text box.

The funders had no role in the project design; data collection, analysis, or interpretation; writing; or decision to submit this manuscript for publication.