We conducted a stakeholder-driven, evidence-based, modified-Delphi process to develop recommendations for displaying PRO data in three different applications: individual patient data used for monitoring/management, research results presented to patients in educational materials/decision aids, and research results presented to clinicians in peer-reviewed publications. We used a standard modified-Delphi approach consisting of a pre-meeting survey relevant to the application of interest, a face-to-face meeting, and a post-meeting survey. The first two applications were addressed during an in-person meeting in February 2017, and the third during an in-person meeting in October 2017; for simplicity, we refer to these as Meetings #1 and #2. Because the meetings addressed different applications, issues relevant across applications were handled separately in the context of each application.
Because much of the evidence base guiding this process emerged from studies in oncology, we focused specifically on the cancer context. In addition to the project team and this project’s SAB, we purposively invited representatives from key stakeholder groups: cancer patients/caregivers, oncology clinicians, PRO researchers, and stakeholders specific to particular applications (e.g., electronic health record vendors for individual patient data, decision-aid experts for research data presented to patients, journal editors for research data presented to clinicians).
Prior to each in-person meeting, we held a webinar orienting participants to the purpose of the project, the specific data display issues being addressed for the relevant applications (Table 1, column 1), and the evidence base regarding the options for those issues. Three parameters framed the deliberations: (1) recommendations should work on paper (static presentation); (2) presentation in color is possible (but displays should remain interpretable in grayscale); and (3) additional functionality in electronic presentation is possible (but not part of the standards). Notably, during the meeting discussions, participants established additional guiding principles: (1) displays should be as simple and intuitively interpretable as possible; (2) it is reasonable to expect that clinicians will need to explain the data to patients; and (3) the availability of education and training support should be encouraged.
After the pre-meeting webinar, we surveyed participants’ initial perspectives using Qualtrics survey software, which includes protections for sensitive data [18]. Specifically, for each issue, we first asked participants to rate whether there ought to be a standard on that topic; response options were Important to Present Consistently, Consistency Desirable, Variation Acceptable, and Important to Tailor to Personal Preferences. Regardless of their response to this question, we then asked participants to indicate what the standard should be, with alternative approaches for addressing that particular issue as the response options. For example, for data presented to patients, the options for presenting proportions included pie charts, bar charts, and icon arrays, based on the available evidence [16]. Following each question, participants were asked to explain the rationale behind their responses in text boxes. A summary of the pre-meeting survey results and comments was circulated prior to the meeting.
At each in-person meeting, we addressed each data display issue in turn, briefly summarizing the evidence base and the feedback from the pre-meeting survey before opening the topic for discussion. At Meeting #1, participants aimed for consistency across the two applications where possible. For Meeting #1 topics that were also addressed at Meeting #2, after an initial discussion, the consensus statements from Meeting #1 were shared for the Meeting #2 group’s consideration, with the options of accepting each statement unchanged, modifying it, discarding it, or developing a new statement.
Following the discussion, participants voted anonymously using an audience response system on whether there should be a standard and, where a standard was supported, what that standard should be. Issues considered inappropriate for a standard, and topics for further research, were also noted. After each meeting, the consensus statements were circulated to participants via Qualtrics. Each participant was asked whether each consensus statement was “acceptable” or “not acceptable” and, if the latter, to indicate why in a text box. The funders had no role in the project design; data collection, analysis, or interpretation; writing; or the decision to submit this manuscript for publication.