This article describes the experience of two Swiss centers for primary care forming a pool of end-users to provide repeated feedback on patient DAs. During iterative improvement cycles, the citizen-advisors provided valuable criticism that made materials and research projects evolve. Participants at in-person meetings became familiar with the concepts of SDM and provided higher-level observations over time. Due to local circumstances, there were important differences in the methodology used in Lausanne and Bern; these contrasting experiences can provide helpful insights for other DA developers.
Our citizen-advisors commented on multiple versions of our DAs, allowing both study groups to fulfill requirements in international DA guidelines for end-user feedback [6] without multiple cycles of recruitment to focus groups. Going forward, the Lausanne group also intends to pay citizen-advisors and use formal contracts, as was done in Bern, to ensure fair compensation. Repeated meetings with the same participants were helpful for both groups, though differences in methodology allowed us to attain different goals. In Lausanne, citizen-advisors were able to provide feedback more efficiently on subsequent DAs, such that fewer in-person meetings were required. However, our methodology did not allow us to measure informed choice among participants, as originally intended. In Bern, accumulated knowledge about the trial within which the group was nested [12] allowed citizen-advisors to provide critical feedback on our overall approach to CRC screening. Other DA developers should value experience gained by participants, which in our opinion outweighs the value of participants seeing a DA for the first time. In future studies we hope to combine feedback from citizen-advisors with one-to-one, ‘think-aloud’ interviews with persons from low-literacy groups. This approach will test comprehension of DAs developed with our current approach.
Two innovative points in our methodology should be highlighted. First, both study sites combined elements of community-based participatory research (CBPR) and quality improvement to develop a pragmatic means of involving end-users in the creation of DAs. Projects using CBPR often employ community advisory boards (CABs) as a means of integrating stakeholders in key decisions and accessing local, context-specific knowledge [17]. We did not involve participants in strategic protocol decisions (aside from our dissemination plan), in the selection of topics to discuss, or explicitly in co-design [18]. Similar to CABs, we recruited from the population concerned by cancer screening, asked participants to serve as ‘representatives’ of all end-users, and maintained the same group of participants for in-person meetings to allow them to gain confidence and develop expertise. We also used elements of plan-do-study-act (PDSA) cycles, a methodology from quality improvement that provides a structure for iterative testing of changes [11]. PDSA cycles aim to adapt complex interventions for local implementation. Though we based our initial DAs on examples in the international literature, we needed PDSA cycles to make the adaptations required for successful dissemination in Switzerland.
Second, in both the Lausanne pilot phase and the Bern study phase, we recruited among standardized patients from our medical schools. In our experience, they have a heightened interest in potential improvements to the local medical system. Standardized patients can provide an easily identifiable pool of participants, especially for topics like cancer screening that lack identifiable patient organizations. Standardized patients have been used previously in research projects as unannounced patients to measure care quality [19], but not to our knowledge for CBPR. In Lausanne, we also recruited from community organizations, which likewise identified people able to readily spot and discuss improvements to communication materials. Broadening to other recruitment sources did seem to diversify the group, adding more members who worked full time and represented a wider range of professions. One limitation of all of our recruitment sources is that despite varied levels of educational attainment, nearly all of our members seemed to have high levels of health literacy, though their health literacy was not formally measured using a validated scale.
Our two study sites allocated resources differently: in Lausanne we developed questionnaires that were mailed to citizens not participating in in-person meetings, while in Bern all meetings were audio-recorded, transcribed, and analyzed by one of the study authors (BM). Patterns in responses to knowledge questions in the mailed questionnaires and written feedback provided valuable information beyond what was collected during in-person meetings. Questionnaires also provided individual-level information not always captured in group discussions. The systematic analysis of meeting transcripts by the Bern group helped them avoid a possible perception bias on the part of the researchers, strengthening the quality of the data.
Strengths of this study include its parallel implementation at two study sites, the accumulation of 3 years of experience, and patient involvement in the preparation of the manuscript. Weaknesses include our small sample sizes and lack of a comparator group, limiting the precision of our quantitative results and our ability to conclusively demonstrate that we reached our objective of developing a more efficient method of creating DAs. Meetings in Lausanne were not audio-recorded, potentially introducing bias in our interpretation of meeting information. Our citizen partners participated fully in DA development and the reporting of results, but not in the choice of topics or the initial study protocol. As we gain confidence and experience in this area, and now have existing contacts with citizen partners, we hope to involve them progressively earlier in the research process. Repeated meetings with the same participants might bias their responses and make our DAs less understandable for wider audiences. Also, we were able to develop multiple DAs with the same members because cancer screening is widely applicable to the general population aged 50 to 75; DAs addressing specific diseases will require recruiting patients with specific experience. The participants in Bern consisted exclusively of standardized patients, who brought with them an affinity for medical issues. With this prior medical knowledge, the group likely differed from average end-users in the population. We have not yet performed validation studies for our DAs and communication materials to assess their impact on routine care. Finally, because both groups were formed to discuss cancer screening in two academic centers for primary care, it is difficult to know the generalizability of our results.