Comparing three approaches for involving patients in research prioritization: a qualitative study of participant experiences

Abstract

Background

By participating in priority-setting activities in research, patients and members of the public help ensure that important questions are incorporated into future research agendas. Surveys, focus groups, and online crowdsourcing are increasingly used to obtain input, yet little is known about how they compare for prioritizing research topics. To address this gap, the Study of Methods for Assessing Research Topic Elicitation and pRioritization (SMARTER) evaluated participant satisfaction with the engagement experience across three prioritization activities.

Methods

Respondents from Back pain Outcomes using Longitudinal Data (BOLD), a registry of patients 65 years and older with low back pain (LBP), were randomly assigned to one of three interactive prioritization activities: online crowd-voting, in-person focus groups using nominal group technique, and two rounds of a mailed survey (Delphi). To assess quality of experience, participants completed a brief survey; a subset were subsequently interviewed. We used descriptive statistics to characterize participants, and we analyzed responses to the evaluation using a mixed-methods approach, tabulating responses to Likert-scale questions and using thematic analysis of interviews to explore participant understanding of the activity and perceptions of experience.

Results

The crowd-voting activity had 38 participants, focus groups 39, and the Delphi survey 74. Women outnumbered men in the focus groups and Delphi survey; otherwise, demographics among groups were similar, with participants being predominantly white, non-Hispanic, and college educated. Activities generated similar lists of research priorities, including causes of LBP, improving physician-patient communication, and self-care strategies. The evaluation survey was completed by 123 participants. Of these, 31 across all activities were interviewed about motivations to participate, understanding of activity goals, logistics, clarity of instructions, and the role of patients in research. Focus group participants rated their experience highest, in both the evaluation and interviews.

Conclusion

Common methods for research prioritization yielded similar priorities but differing perceptions of experience. Such comparative studies are rare but important in understanding methods to involve patients and the public in research. Preferences for different methods may vary across stakeholder groups; this warrants future study.

Trial registration

NICHSR, HSRP20152274. Registered 19 February 2015.

Plain English summary

Asking patients and the public to help design and conduct research is an important next step beyond simply participating in a study. Patient and public involvement in research decisions brings fresh views about using scarce resources to address their specific needs. Too often, only researchers and funding agencies have a say. Yet the best ways to involve patients in shaping research remain unclear. In this project, patients drawn from a registry of people 65 years or older with low back pain were asked to participate in one of three activities to prioritize research: 1) crowd-voting, where users connect online but not always in real time; 2) focus groups, where a moderator leads in-person discussions; and 3) a paper-based survey, where interactions are written. Participants evaluated their experiences through a survey and interviews. As predicted, participants liked the in-person focus group activity best. However, the activity’s purpose was not always clear: focus groups were thought to be about understanding low back pain, when the goal was to prioritize research topics. Participants in the other two activities (crowd-voting and the survey) were less positive about their experience but understood the purpose better. These and other results show that one approach will not fit all. Patients differ in many ways, including age, type of illness, access to health care, and comfort with technology. Understanding how methods compare from the participant’s perspective informs decisions when designing research prioritization activities, with the goal of expanding patient and public involvement.

Background

Involving patients and the public, especially when research topics are formulated, is a cornerstone of patient-centered outcomes research. Firsthand knowledge of a health condition and its impact on day-to-day life often yields research priorities not recognized or reflected by researchers and policymakers alone [1, 2]. For example, in 2000, Tallon and colleagues demonstrated a mismatch between areas of research and stakeholder needs. An evaluation of osteoarthritis studies found that the bulk of research focused on drugs and surgical interventions. In contrast, focus group discussions with patients identified a need for evidence across treatment options (e.g., physical therapy, injections, complementary therapy) and for effective education strategies [1]. This discrepancy highlights the need for contributions from patients and the public as research priorities are determined.

Funding agencies, governments, and advocacy organizations seek to better align research agendas with the needs of stakeholders by employing diverse methods to involve them in priority-setting activities [3,4,5,6]. Methods include surveys, focus groups, consensus-building activities, and, more recently, crowdsourcing [7]. Such efforts provide opportunities for understanding the interests, needs, and perspectives of stakeholders in order to improve the value and quality of evidence generated [8]. Little is known, though, about how different methods compare in the priorities generated and in participant experience. Evidence that directly compares patient and public involvement techniques will help the research community understand the strengths and limitations of these approaches for application in their own work. In particular, understanding the participant experience will help improve the future selection and use of such methods.

One vital area for understanding patient and public research priorities is low back pain (LBP), one of the most important causes of functional limitations and disability worldwide [9, 10]. Nearly a third of the US population over age 65 years experiences severe LBP, making it a leading reason for physician visits [11, 12]. A number of treatments are proposed as means to relieve LBP, ranging from physical therapy to surgical intervention, yet evidence on their effectiveness is inconsistent. While this relates in part to the relatively poor understanding of underlying mechanisms, the lack of high-quality research comparing options that mirror patient preferences for treatment is another cause [13, 14]. This disconnect between the information that people need to support decision-making and the available evidence is a recognized barrier and one that patient-centered outcomes research aims to address [3]. To expand evidence on effective treatments for LBP, the Back pain Outcomes using Longitudinal Data (BOLD) registry enrolled a group of older adults with LBP newly presenting to primary care to gather information about treatment pathways, healthcare utilization, and patient-reported outcomes [15]. In addition to generating knowledge about the clinical course of LBP, BOLD provides an opportunity to obtain input on research priorities from people with experience living with the condition and navigating treatment decisions. Further, it demonstrates how existing research infrastructures, such as patient registries, can provide a mechanism for patient involvement in prioritizing future research.

Developing infrastructure for and evidence on methods for involving stakeholders in the research process is important for guiding patient-centered outcomes research. In 2013, the Patient-Centered Outcomes Research Institute (PCORI) Methodology Committee published a recommendation to “support empirical research to assess and improve research prioritization methods for use by PCORI.” [16] Through a PCORI-funded study (Study of Methods for Assessing Research Topic Elicitation and pRioritization [SMARTER]), we sought to address this recommendation by comparing different methods for obtaining input on future research topics for LBP. Specifically, we evaluated how three prioritization methods (crowd-voting, focus groups using nominal group technique, and surveys using a modified Delphi method) compare in the ways participants rank research priorities and perceive their participation experience.

We hypothesized that the different techniques would produce similar rankings for the top five research priorities but would differ in participant-rated experience, with greater participant interaction receiving better ratings.

Methods

Patient involvement in study design and conduct

Patient and public involvement occurred throughout the project through both collaborative and consultative activities (Fig. 1) [17]. This included collaboration with a patient partner (MRS), who served as a member of the research team and participated in the development of the study from conception through study conduct; consultation with five patient advisors from the two BOLD clinical sites involved in recruiting participants for the registry (Henry Ford Health System [HFHS] in Michigan and Kaiser Permanente Northern California [KPNC]), which allowed for iterative consultation during study materials development; and finally, consultation with the CERTAIN Back Pain Research Patient Advisory Group, a committee of 10 patient advisors established in 2014 by researchers at the University of Washington (UW) to support patient involvement across a number of ongoing research initiatives [18]. Consultation with the CERTAIN Back Pain Research Patient Advisory Group throughout the study provided a forum for presenting and discussing project updates, obtaining input on study design decisions including evaluation, and discussing findings. Details about patient involvement in the design and conduct of the study have been published previously [19].

Fig. 1 Patient and stakeholder engagement process in SMARTER

Study design

This multi-phase project compared different quantitative and qualitative engagement methods for research [19]. In the study phase presented here, members of the BOLD registry were randomized to one of three prioritization activities (described below): crowd-voting [20], focus groups using nominal group technique, and a modified Delphi method using surveys [21, 22].

Online crowd-voting

Crowd-voting leverages online platforms to engage participants in discussion. This activity used IdeaScale, a secure, Internet-based community platform that allows participants to submit new ideas, vote on existing ideas, and interact with others through online, asynchronous discussion [20].

Focus groups with nominal group technique

Nominal group technique, a structured method for engaged problem-solving, is suitable for small group discussion on a defined topic, providing opportunity to generate new ideas. The method combines individual work and thought for new idea generation with moderated, interactive group discussion followed by prioritization of topics [21].

Modified Delphi method

In this study, the modified Delphi method used two rounds of mailed surveys to obtain input from respondents selected for experience in a given area. Participants reviewed a list of topics and provided individual ratings and commentary for each topic, as desired. During the second round, participants viewed their own responses in context of the group responses and additional commentary. Participants could adjust their ratings in the second round based on this new information [21].
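
As a concrete illustration of the feedback step, the short Python sketch below shows how round-one ratings might be summarized for a second mailing. This is a hypothetical sketch, not the study's instrument: the 1–9 rating scale, topic names, and values are all assumed for illustration.

    from statistics import median

    # Hypothetical round-1 ratings per topic (1-9 scale assumed; not study data).
    round1 = {
        "causes of LBP":        [8, 9, 7, 9, 6],
        "self-care strategies": [7, 5, 8, 6, 7],
    }

    # The round-2 mailing pairs each participant's own rating with a group
    # summary, so ratings can be revised in light of the group's view.
    def round2_feedback(own_ratings):
        return {
            topic: {"your rating": own_ratings[topic],
                    "group median": median(ratings),
                    "n": len(ratings)}
            for topic, ratings in round1.items()
        }

    print(round2_feedback({"causes of LBP": 6, "self-care strategies": 8}))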

Focus groups and the modified Delphi method are well-established consensus-building techniques, but they vary in level of participant interaction [21,22,23]. Crowd-voting provides efficiencies for obtaining input, with all participant interaction occurring online.

Activities used a prioritization list generated by BOLD participants in an earlier project phase [19, 24, 25]. Survey content was adapted from a list of 25 topics identified by primary care clinicians and researchers in 2009 and covered prevention, treatment options, diagnosis, communication, and outcomes of treatment [26]. Within-group work for each activity allowed for rating new topics generated by BOLD participants in the first phase of the study, as well as generation and prioritization of new topics during the activities [19, 25].

Institutional review boards at collaborating institutions (UW, HFHS, KPNC) approved the protocol for this study. All participants completed a consent form and a brief demographic form before the prioritization activity.

Data management

UW served as the data coordination center for BOLD and subsequently this study, handling recruitment and follow-up of participants across the sites and providing a common infrastructure for management of study data. Demographic, evaluation, and Delphi survey data were entered into the secure and encrypted Research Electronic Data Capture (REDCap) software platform [27]. Audio recordings and transcriptions of focus group discussions and evaluation interviews were stored on a secure and encrypted system maintained at UW. Finally, crowd-voting data, captured on IdeaScale [20], a secure Internet-based community platform, were exported directly at the conclusion of the activity for analysis.

Prioritization activity assignment and recruitment

BOLD participants were first asked to rank the activities in order of preference. This was done to maintain a patient-centered approach to recruitment, as even a slight preference for one activity might make a person more likely to participate when asked. Each activity involved a different level of effort. For example, focus groups require in-person attendance, whereas crowd-voting allows participation from home. Thus, when possible, we wanted to take preference into account.

To ensure equal representation from each study site, we stratified the sample by site. We then cross-tabulated the number of participants who endorsed each activity to determine randomization probabilities. Participants from HFHS offered limited endorsement of the crowd-voting and focus group activities. Across both sites, the mailed Delphi survey was the most highly endorsed, followed by focus groups and then crowd-voting. We therefore randomized participants first to crowd-voting, then to focus groups, and finally to the Delphi survey. Not all eligible participants were assigned to an activity. When possible, assignment probability was weighted by preference, subject to the constraints of the marginal total goals. For example, at HFHS we were unable to take preference for crowd-voting or focus groups into account, given both the low number of participants at the site and the small number endorsing these activities. Participants who ranked the Delphi survey first were randomized to it with probability at most 0.8; those who ranked it second, with probability 0.5; and those who ranked it third, with probability 0.6. Because of the larger numbers at KPNC, we were able to include preference in the randomization for all activities at that site.
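
To make the preference-weighted scheme concrete, here is a minimal Python sketch of how one such assignment could work. It is a simplification of the procedure described above, not the study's code: it handles only the Delphi probabilities quoted in the text and ignores site stratification and activity capacity limits.

    import random

    # P(assigned to Delphi) by the rank a participant gave the Delphi survey;
    # values are the probabilities quoted in the text (0.8 is an upper bound).
    DELPHI_PROB = {1: 0.8, 2: 0.5, 3: 0.6}

    def assign(delphi_rank, rng):
        """Assign to the Delphi survey with a rank-dependent probability,
        otherwise to one of the other two activities at random (simplified)."""
        if rng.random() < DELPHI_PROB[delphi_rank]:
            return "Delphi survey"
        return rng.choice(["crowd-voting", "focus group"])

    rng = random.Random(2016)  # fixed seed so the sketch is reproducible
    print(assign(delphi_rank=2, rng=rng))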

Participants were recruited approximately 6 weeks prior to the planned start of each activity. They were contacted by phone and provided a description of the activity, including the purpose, goals, expected time commitment, participant role, and incentives for participation. Recruitment continued until capacity was reached for each activity. We excluded participants from the crowd-voting activity if they did not have ready access to a computer or an active email address. Consented participants were contacted at 2 weeks and 2 days prior to an activity as a reminder of the event and to answer any questions.

Priority-setting activities

Apart from the descriptions of and instructions for each specific activity, all preparatory materials provided consistent messaging across methods to reduce variation in external factors that could influence the experience or outcomes.

Evaluation

To evaluate the quality of experiences and the perceived effectiveness of each activity, all participants received evaluation surveys (Additional file 1); a subset from each activity, selected to reflect a range of responses to the evaluation surveys, participated in semi-structured interviews (Table 1). The evaluation survey assessed how effective each method was in meeting the overarching goals of PCOR: being trustworthy, fair, balanced, respectful, and accountable [16, 28]. Questions allowed participants to evaluate the process and experience on a 10-point Likert scale, where lower numbers indicated lower agreement with each item statement. We included open-ended questions to elicit input on the aspects participants liked most and least about each engagement activity. Crowd-voting and Delphi survey participants received the survey via the Internet or mail, respectively, at the conclusion of the prioritization activity. Focus group participants received the evaluation survey in person immediately following the activity.

Table 1 Evaluation interview guide questions

Analytical and statistical approaches

To evaluate our hypothesis that participants would prefer methods with greater interaction, defined as interaction both with other participants and with the research team (focus groups greater than crowd-voting greater than Delphi survey), we analyzed the evaluation questions using descriptive statistics. We created categorical variables for experience quality based on the Likert-scale responses (0–4 = low, 5–7 = neutral, and 8–10 = high) for each item. We used the non-parametric Kruskal-Wallis test to compare the groups; a p-value < 0.05 indicated that at least one group differed from the others.
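
For readers who want to reproduce this style of comparison, a minimal Python sketch follows. The ratings are invented for illustration (the study data are not publicly available; see Availability of data and materials), and applying the test to the raw ratings rather than the banded categories is our assumption here.

    from scipy.stats import kruskal

    # Invented 0-10 Likert ratings of experience quality, one list per activity.
    crowd_voting = [6, 7, 5, 8, 4, 6, 7]
    focus_groups = [9, 8, 10, 9, 7, 8, 9]
    delphi       = [7, 6, 8, 7, 5, 7, 6]

    stat, p = kruskal(crowd_voting, focus_groups, delphi)
    print(f"H = {stat:.2f}, p = {p:.4f}")  # p < 0.05: at least one group differs

    # The paper's low/neutral/high bands applied to the same ratings:
    def band(score):
        return "low" if score <= 4 else "neutral" if score <= 7 else "high"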

We employed thematic analysis to document and examine patterns within and across the evaluation interview transcripts [29]. Audio recordings of the evaluation interviews were transcribed; the transcripts were checked for accuracy, de-identified, and then uploaded into the web-based Dedoose qualitative analysis program [30]. Two research team members (DL, SOL) read through the transcripts and independently generated initial lists of codes related to descriptions of patient knowledge about the activity and perceptions of experience. The team members compared their code lists, explained code definitions, and reconciled their codes into one list. The final code list was presented and explained to the rest of the research team before being applied to the interview transcripts for engagement-related content. The coding team communicated regularly about the codes, code applications, and emerging themes.

We assessed differences in participant characteristics across the activities. Categorical data were compared using a chi-square test, except where noted, and continuous measures were compared using a t-test. Patient characteristics included geographic location, age group (65–74 years, 75–80 years, greater than 80 years), sex, education, marital status, disability as measured by the Roland Morris Disability Questionnaire (RMDQ), and duration of pain.
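
A comparable sketch for the participant-characteristic comparisons, again with invented numbers rather than study data: a chi-square test on an activity-by-sex contingency table and a two-sample t-test on simulated RMDQ scores for two of the activities.

    import numpy as np
    from scipy.stats import chi2_contingency, ttest_ind

    # Hypothetical counts: rows = crowd-voting, focus groups, Delphi;
    # columns = women, men (illustrative only).
    table = np.array([[19, 19],
                      [25, 14],
                      [45, 29]])
    chi2, p, dof, expected = chi2_contingency(table)
    print(f"chi-square = {chi2:.2f}, df = {dof}, p = {p:.3f}")

    # Continuous measure (e.g., RMDQ disability score), compared pairwise:
    rng = np.random.default_rng(0)
    rmdq_focus  = rng.normal(12, 4, size=39)  # simulated scores
    rmdq_delphi = rng.normal(11, 4, size=74)
    t, p = ttest_ind(rmdq_focus, rmdq_delphi)
    print(f"t = {t:.2f}, p = {p:.3f}")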

Results

Activities occurred from April 2016 through March 2017. The crowd-voting activity took place over a 6-week period. Six in-person focus groups were held, lasting approximately 3 h each. Participants in the Delphi survey completed surveys at their convenience within 2 weeks. There were 38 participants in the crowd-voting activity (54.3% participation), 39 in the focus groups (88.6%), and 74 in the Delphi survey (87.1%) (Fig. 2). College-educated participants were the majority in each activity, and there were more women than men in the focus groups and Delphi survey; otherwise, demographics across the activities were similar (Table 2). Within the top five priorities, all groups prioritized “diagnosis-causes” and efforts related to treatment strategies (Table 3). The focus group and Delphi activities generated four priorities in common, crowd-voting and focus groups three, and crowd-voting and Delphi just two.

Fig. 2 Phase 2 flow of participants through activities

Table 2 SMARTER participant characteristics in each activity
Table 3 Prioritization activities and results

Evaluation: survey responses

The evaluation survey was completed by 123 participants. Response rates to the evaluation survey were high for the Delphi (85.1%) and focus groups (87.2%) and moderate for crowd-voting (65.9%). Characteristics of evaluation survey participants are provided in Table 4.

Table 4 Characteristics of evaluation survey respondents

Focus groups received the most favorable ratings across dimensions of experience and process (Table 5). Crowd-voting received the lowest scores on process and experience, whereas the Delphi survey earned the lowest scores for eliciting questions and allowing feedback from activity coordinators. Participants across all activities reported high agreement with the importance of patient involvement in the identification and prioritization of research topics, and the majority of participants (92% crowd-voting, 91% focus groups, 92.1% Delphi survey) were willing to participate in future such studies.

Table 5 SMARTER evaluation survey responses

The great majority of evaluation survey respondents answered the open-ended question about what they liked most about the activity (96.2% crowd-voting, 96.9% focus groups, 98.4% Delphi survey). The most frequently mentioned aspects were the opportunity to share ideas and hear or read about others’ experiences, the opportunity to contribute to research, and the focus on involving patients in the process. Focus group comments reflected enjoyment of in-person interaction; comments from crowd-voting and Delphi participants indicated a similar appreciation for learning from others, but their phrasing reflected the lower interaction among participants.

Overall, fewer participants commented on the aspects of the activity they liked least (73.1% crowd-voting, 42.9% focus groups, 50.8% Delphi survey). The use of IdeaScale was unique to crowd-voting, and comments reflected challenges with the technology, both the logistics of logging in and the user interface. Participants also noted that too many reminder and instructional emails with redundant content were sent over the course of the activity. Two respondents found the interaction isolating and impersonal. Comments from focus group participants centered primarily on logistical issues such as room temperature, distracting noise, and distance traveled to participate. Delphi participants noted the challenge of tracking their own participation, a circumstance that resulted from the delay between survey rounds. Three comments unique to Delphi respondents reflected a concern that their input was not relevant because their LBP was minimal.

Most participants commented favorably on their overall experience, including enjoyment of participation, the focus of the research, and the opportunity it provided for patients. Recommendations for improvement often mirrored the least-liked aspects of the activities (e.g., improving the technology interface in crowd-voting). Cross-cutting recommendations focused on simplifying instructions and language and on placing the study in a broader context. Crowd-voting and Delphi survey participants recommended providing more information about the participants themselves, such as their diagnosis and duration of LBP, in context with their responses.

Evaluation: interview responses

Of those who responded to the evaluation survey, 31 participants completed interviews (11 crowd-voting, 10 focus group, 10 Delphi). Interviews lasted approximately 30 min and highlighted motivations for participating, understanding of the activity goals, perceptions of the activity process and experience, and views on the role patients should play in research. Table 6 presents representative open-ended comments. More detailed interview data, organized by theme and stratified by activity, are available in Additional file 2.

Table 6 Representative quotes from interview participants

Motivation for participation

Interviewees voiced common motivations for participating, including interest in learning how to manage LBP, a desire to advance research that helps others, and the appeal of engaging in an activity focused on the patient perspective. The most frequent reason cited for participation was to learn more about LBP treatment, including both learning how other individuals manage their condition and learning if new approaches were available. Similarly, participants were interested in sharing their individual experiences to help others, and many interviewees noted that simply being asked motivated their participation. Finally, a few participants credited the emphasis on hearing from patients as motivation to participate.

Understanding of activity goals

Among the participants interviewed, the majority identified the purpose of the activity as understanding the patient experience of LBP, including treatments used, those found effective, and information needs about the condition. This perception was most pronounced among focus group participants. A minority of interviewees understood our goal to be the identification and prioritization of research questions in LBP. Seven participants said the formal goal of the activity was unclear. Of note, crowd-voting participants recognized the prioritization goal more often than those in the Delphi survey or focus groups.

Process measures

All interviewees commented on process outcomes related to participation. The themes identified related to logistics, clarity of instructions and materials, representation of participants, and transparency and accountability.

Logistics

Comments regarding logistics reflected the distinct approach of each activity. Among crowd-voting participants, comments largely focused on technology. The concept of online participation appealed to a number of interviewees; however, technical issues (e.g., account set-up and log-in) and the user interface for voting were often cited as hindering participation. Focus group participants commented favorably on the logistics and organization of the activity, though some remarked that additional time for discussion would have been ideal.

Participants in the crowd-voting activity noted that more hands-on moderation to guide discussion among participants and provide structure to the activity would have been beneficial. Focus group participants remarked favorably on the group discussion and interaction; however, a couple of interviewees offered negative remarks about the final summarization of topics, suggesting the wrap-up devalued the discussion.

Clarity of instructions and materials

The clarity of the instructions and materials prompted mixed reviews. While some participants found the activity instructions straightforward, others found they lacked clarity and recommended simplifying the language and steps. Participants with experience in the medical field or survey research acknowledged the difficulty of conveying complicated topics and crafting survey questions that adequately capture the breadth of information desired. Interviewees also commented on the research topics themselves: a number noted that a lack of specificity among the topics made it challenging to tease out the importance of any one question over another.

Representation

A few individuals interviewed noted that their LBP was either minimal or resolved, and they were concerned that this diminished the value of their contributions in comparison to others who described greater disability or limitations resulting from significant LBP.

Transparency and accountability

A number of interviewees expressed interest in seeing activity results and understanding how results would be used by the medical research community.

Role of patients in research

Interviewees noted the importance of patient involvement in research, commenting on the ability of patients to provide researchers with insight into what is relevant and important for individuals living with LBP. By involving patients, it was felt, researchers can better understand how the condition affects a person’s life, making it more likely that the resulting evidence will be used. One participant noted the ethics of patient involvement, tying the importance of involving the recipients of healthcare to decisions about what to study.

Only a few interviewees articulated specific roles for patients, such as participating in focus group discussions and paper-based surveys. Beyond providing insight on experience, additional roles for patients in research included identifying research topics, refining research questions, and framing study design to ensure that the evidence generated would be relevant and actionable for patients. One person noted involvement in determining funding decisions for research.

Discussion

Our study offers important insights into how interactive methods for involving patients and the public in patient-centered outcomes research compare. While the different methods yielded similar priorities for LBP, they differed both in how topics were ranked and in participant perceptions of experience.

Results in context

Our findings suggest participants differ in their perceptions of experience and their understanding of the activity in which they took part. For focus group participants, interactions with other participants and with the research team appear to have had the greatest influence on perceptions, in line with our hypothesis. However, the misunderstanding among focus group participants about the purpose of the activity suggests that the input provided did not fully support the goal of research prioritization. We believe the differences in understanding could reflect the processes of the different activities: the attention placed on group interaction and discussion in the focus groups, compared with the focus on ranking topics in the Delphi survey and crowd-voting activities, could influence participant understanding. Researchers should maximize interaction among participants while also ensuring they fully understand the goals and anticipated outcomes throughout the activity.

All activities allowed participants to provide commentary and generate new topics. Interaction among participants provided important insight and context regarding priorities. In focus group discussions, participants connected the topics to their personal experiences, revealing preferences for and disinterest in particular topics at a level of detail that prioritization lists alone would not capture. Further, discussions highlighted how external factors such as family or peer experience, recommendations from healthcare providers, or prior healthcare experiences inform views and perceptions. The value of this information, while difficult to quantify, is important for funders and researchers to consider when selecting methods for engagement.

Results uptake

This project is timely, as national and international efforts to involve patients and the public in shaping research portfolios grow. Understanding the effectiveness of different engagement activities will allow research teams to be more intentional in selecting methods to elicit input on research prioritization. Further, understanding participant experiences highlights areas for improving the use of these methods in the future. In our experience, working with people enrolled in a research registry provided an opportunity to hear from a community of individuals with direct experience in a defined area of health, and working with two registry sites let us reach a diverse group of individuals. Building on the existing BOLD infrastructure allowed us to draw on an already-established relationship between the site teams and registry participants for outreach. Staff had contacted participants regularly over 2 years, establishing a relationship and a bond of trust, both recognized principles of meaningful patient engagement. This proved important for outreach: the majority of participants required more than one contact attempt, and follow-up calls often resulted in participation. Further efforts to expand engagement within patient registries should focus on techniques that support involvement, while considering the resources such work requires.

Another interesting finding pertains to participant views of representation. Patient-centered outcomes research encourages representation from a range of perspectives and experiences with a condition. In our study, participants noted concern about the value of their contributions when they experienced minimal back pain compared with others who experienced greater pain and disability as a result of the condition. Further work should focus on patient and public perception of appropriate representation in research.

More work is needed to build understanding around the important role that patients play in defining, guiding, and supporting research. Participants in this study expressed their belief in the value of patient involvement as a means to ensure the relevance and importance of the research conducted. While this supports the goal of patient-centered outcomes research, the roles articulated for patients were frequently limited to consented participation in research studies. This suggests that the concept of patient and public involvement in research is still new and not widely normalized outside academic and research communities. Limited awareness of the opportunity for patients and members of the public to inform and collaborate on research teams may also explain the misunderstanding of the primary goal across prioritization activities. Our team took this into account when planning dissemination of study results and, in conjunction with the CERTAIN Back Pain Research Patient Advisory Group, developed a video to better explain and illustrate the role patients can play in research (Fig. 1). However, more work is needed to communicate the role that patients play as partners on research teams in order to continue building knowledge about and capacity for patient involvement in future research.

Study limitations

Our study assessed prioritization methods among older adults with LBP and thus may not generalize to other health conditions or populations. For example, rare conditions may require methods that allow remote participation if geographic dispersion [31, 32] or the condition itself makes it infeasible to convene people in a central location; face-to-face contact between people living with cystic fibrosis, for instance, is not advised due to the risk of cross-infection [33]. In such circumstances, online engagement is warranted to support involvement. Characteristics of the population may also inform preferences for engagement methods. The BOLD population is limited to people with LBP aged 65 years and older, so findings may not generalize to younger populations for whom technology is more ingrained in day-to-day communication and activities. Our experience with the BOLD registry indicated from the outset that online interaction would not be a preferred mode of communication, which informed our decision to use mailed surveys rather than online completion of the prioritization questionnaire.

We experienced technological challenges with the crowd-voting platform (IdeaScale) that resulted in lower participation rates.

Participants in our study were predominantly white, reported higher levels of formal education, and represented people enrolled in a research registry. Thus, it is important to recognize that the views and perspectives of participants are not generalizable to all people with LBP. While registries, by nature of bringing together people with a common health experience, offer a unique opportunity for involvement in identifying and prioritizing research, it is important to understand the underlying characteristics of participants and what views are underrepresented or missing.

Our team was concerned that conventional randomization would preclude participation (and limit successful study conduct) because of patient factors such as ability to travel and comfort with technology. To address this, we asked participants for their activity preferences. To control for the bias introduced by this approach, we intended to use randomization by minimization to allocate participants to activities while ensuring balance across groups on important factors that could heavily influence the ranking of priorities. Because few participants preferred crowd-voting (n = 21), we had to modify this plan. Preference for a given approach might reflect underlying participant characteristics associated with how topics are prioritized; this is a noted limitation of the study.

It is important to recognize that providing input on future research topics is a different task from consenting to participate in a research study, and that obtaining such input via survey outreach through a patient registry is better suited to supplementing collaborative research endeavors that offer dialogue and co-production reflective of partnership. Involvement in research activities outside of BOLD was not part of the initial consent or discussions regarding participation. This could have resulted in a lack of understanding about how this new activity fit within the purpose of BOLD, thus limiting participation.

Our study highlights several areas for future research. Comparing research prioritization methods across different health conditions (e.g., acute, rare, or co-existing conditions) and among diverse and underrepresented populations will help inform how best to tailor approaches for patient and public involvement in research. Further comparison across different stakeholder groups, including multi-stakeholder groups, will reveal how different methods perform among groups representing different perspectives, experiences, and backgrounds.

Conclusions

Patient and public involvement in research prioritization supports the goal of patient-centered outcomes research to pursue timely and important healthcare questions. Our study demonstrates that the diverse methods used in research prioritization yield similar priorities but differ both in the ways that priorities are ranked and in participant experience. Further, activities with higher levels of interaction among participants and with the research team yield greater satisfaction but may require extra attention to successfully convey the core purpose of the study.

Availability of data and materials

The datasets generated and/or analyzed during the current study are not publicly available, as study participants were not informed that study data would be made available to the public. Data are available from the corresponding author on reasonable request.

Abbreviations

BOLD: Back pain Outcomes using Longitudinal Data registry
HFHS: Henry Ford Health System
KPNC: Kaiser Permanente Northern California
LBP: Low back pain
NGT: Nominal group technique
PCOR: Patient-centered outcomes research
PCORI: Patient-Centered Outcomes Research Institute
REDCap: Research Electronic Data Capture
RMDQ: Roland Morris Disability Questionnaire
SMARTER: Study of Methods for Assessing Research Topic Elicitation and pRioritization
UW: University of Washington

References

1. Tallon D. Relation between agendas of the research community and the research consumer. Lancet. 2000;355:2037–40.

2. Crowe S, Fenton M, Hall M, Cowan K, Chalmers I. Patients', clinicians' and the research communities' priorities for treatment research: there is an important mismatch. Res Involv Engagem. 2015;1:2.

3. Fleurence R, Selby JV, Odom-Walker K, et al. How the Patient-Centered Outcomes Research Institute is engaging patients and others in shaping its research agenda. Health Aff (Millwood). 2013;32(2):393–400.

4. Guise J-M, O'Haire C, McPheeters M, et al. A practice-based tool for engaging stakeholders in future research: a synthesis of current practices. J Clin Epidemiol. 2011;66:666–74.

5. O'Leary TJ, Slutsky JR, Bernard MA. Comparative effectiveness research priorities at federal agencies: the view from the Department of Veterans Affairs, National Institute on Aging, and Agency for Healthcare Research and Quality. J Am Geriatr Soc. 2010;58(6):1187–92.

6. IOM (Institute of Medicine). Initial national priorities for comparative effectiveness research. Washington, DC: The National Academies Press; 2009. https://www.nap.edu/read/12648/chapter/1. Accessed Nov 6, 2019.

7. Nass P, Levine S, Yancy C. Methods for involving patients in topic generation for patient-centered comparative effectiveness research: an international perspective. Research Priorities White Paper (PCORI-SOL-RPWG-001) for the Patient-Centered Outcomes Research Institute (PCORI). Washington, DC. http://www.pcori.org/assets/Methods-for-Involving-Patients-in-Topic-Generation-for-Patient-Centered-Comparative-Effectiveness-Research-%E2%80%93-An-International-Perspective.pdf. Accessed Nov 6, 2019.

8. Chalmers I. How to increase value and reduce waste when research priorities are set. Lancet. 2014;383(9912):156–65.

9. Freburger JK, Holmes GM, Agans RP, et al. The rising prevalence of chronic low back pain. Arch Intern Med. 2009;169(3):251–8.

10. Hoy D, Bain C, Williams G, et al. A systematic review of the global prevalence of low back pain. Arthritis Rheum. 2012;64(6):2028–37.

11. Deyo RA, Mirza SK, Martin BI. Back pain prevalence and visit rates: estimates from U.S. national surveys, 2002. Spine. 2006;31(23):2724–7.

12. Strine TW, Hootman JM. US national prevalence and correlates of low back and neck pain among adults. Arthritis Rheum. 2007;57(4):656–65.

13. Chou R, Deyo R, Friedly J, et al. Nonpharmacologic therapies for low back pain: a systematic review for an American College of Physicians clinical practice guideline. Ann Intern Med. 2017;166(7):493–505.

14. AHRQ. Noninvasive treatments for low back pain: systematic review. February 29, 2016. https://www.effectivehealthcare.ahrq.gov/topics/back-pain-treatment/research. Accessed Nov 6, 2019.

15. Jarvik JG, Comstock BA, Heagerty PJ, et al. Back pain in seniors: the Back pain Outcomes using Longitudinal Data (BOLD) cohort baseline data. BMC Musculoskelet Disord. 2014;15:134.

16. PCORI (Patient-Centered Outcomes Research Institute) Methodology Committee. The PCORI methodology report. 2017. http://www.pcori.org/assets/2013/11/PCORI-Methodology-Report.pdf.

17. PCORI (Patient-Centered Outcomes Research Institute). Financial compensation of patients, caregivers, and patient/caregiver organizations engaged in PCORI-funded research as engaged research partners. 2015. https://www.pcori.org/sites/default/files/PCORI-Compensation-Framework-for-Engaged-Research-Partners.pdf. Accessed Nov 6, 2019.

18. CERTAIN website. Back Pain Research Patient Advisory Group. Available at: https://www.becertain.org/. Accessed Nov 6, 2019.

19. Lavallee DC, Comstock B, Scott MR, et al. Study of Methods for Assessing Research Topic Elicitation and pRioritization (SMARTER): study protocol to compare qualitative research methods and advance patient engagement in research. JMIR Res Protoc. 2017;6(9):e168.

20. IdeaScale website. Available at: https://ideascale.com/. Accessed Nov 6, 2019.

21. Cantrill JA, Sibbald B, Buetow S. The Delphi and nominal group techniques in health services research. Int J Pharm Pract. 1996;4(2):67–74.

22. Van de Ven AH, Delbecq AL. The nominal group as a research instrument for exploratory health studies. Am J Public Health. 1972;62(3):337–42.

23. Van de Ven AH, Delbecq AL. Nominal versus interacting group processes for committee decision-making effectiveness. Acad Manag J. 1971;14(2):203–12.

24. Bartek MA, Truitt AR, Widmer-Rodriguez S, et al. The promise and pitfalls of using crowdsourcing in research prioritization for back pain: cross-sectional surveys. J Med Internet Res. 2017;19(10):e341.

25. Truitt AR, Monsell SE, Avins AL, Nerenz DR, Lawrence SO, Bauer Z, et al. Prioritizing research topics: a comparison of crowdsourcing and patient registry. Qual Life Res. 2018;27(1):41–50.

26. Costa LCM, Koes BW, Pransky G, Borkan J, Maher CG, Smeets RJEM. Primary care research priorities in low back pain: an update. Spine. 2013;38(2):148–56.

27. Harris PA, Taylor R, Thielke R, Payne J, Gonzalez N, Conde JG. Research electronic data capture (REDCap)—a metadata-driven methodology and workflow process for providing translational research informatics support. J Biomed Inform. 2009;42(2):377–81.

28. Deverka PA, Lavallee DC, Desai PJ, et al. Stakeholder participation in comparative effectiveness research: defining a framework for effective engagement. J Comp Eff Res. 2012;1(2):181–94.

29. Guest G, MacQueen KM, Namey EE. Applied thematic analysis. Thousand Oaks, CA: Sage Publishing; 2012.

30. Dedoose website. Available at: http://dedoose.com/. Accessed Nov 6, 2019.

31. Lavallee DC, Wicks P, Alfonso Cristancho R, Mullins CD. Stakeholder engagement in patient-centered outcomes research: high-touch or high-tech? Expert Rev Pharmacoecon Outcomes Res. 2014;14(3):335–44.

32. Forsythe LP, Szydlowski V, Murad MH, Ip S, Wang Z, Elraiyah TA, Fleurence R, Hickam DH. A systematic review of approaches for engaging patients for research on rare diseases. J Gen Intern Med. 2014;29(Suppl 3):S788–800.

33. Rowbotham NJ, Smith S, Leighton PA, et al. The top 10 research priorities in cystic fibrosis developed by a partnership between people with CF and healthcare providers. Thorax. 2018;73(4):388–90.


Acknowledgements

In memory of Mary Roberts Scott, our patient partner, for her partnership and contributions to the project’s design, conduct, and analysis. We would also like to acknowledge Sofya Malashanka, Jennifer Talbott, Sabrina Billon, and Caroline Shevrin for their assistance, as well as our participants who made this work possible.

Funding

This project was supported by contract number ME-1310-07328 from the Patient-Centered Outcomes Research Institute (PCORI). REDCap, hosted at the Institute of Translational Health Sciences at the University of Washington, is supported by the National Center for Advancing Translational Sciences of the National Institutes of Health under Award Number UL1 TR002319.

Author information

Authors and Affiliations

Authors

Contributions

All authors were involved in the study development and design. ART, TCE, and SOL were involved in the conduct of study activities. ART, SOL, SEM, and DCL were involved in the data analysis. All were involved in interpretation and finalization of study results. The author(s) read and approved the final manuscript.

Authors’ information

JGJ served as principal investigator for the BOLD study; DRN and ALA served as site principal investigators. DCL directs patient and stakeholder engagement for the Comparative Effectiveness Research and Translation Network (CERTAIN).

Corresponding author

Correspondence to Danielle C. Lavallee.

Ethics declarations

Ethics approval and consent to participate

Institutional review boards at collaborating institutions (UW, HFHS, and KPNC) approved the protocol for this study. All participants provided informed consent.

Consent for publication

Not applicable.

Competing interests

ART, SEM, ALA, DRN, SOL, ZB, TCE, DLP, and DCL have no conflicts to disclose. JGJ receives royalties from Springer Publishing as a book co-editor; travel reimbursement for the Faculty Board of Review from the GE-Association of University Radiologists Radiology Research Academic Fellowship (GERRAF); and royalties from Wolters Kluwer/UpToDate as a chapter author.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary information

Additional file 1.

Appendix 1: Phase 2 evaluation surveys

Additional file 2.

Appendix 2: Selected evaluation interview quotes GRIPP2

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated in a credit line to the data.


Cite this article

Lavallee, D.C., Lawrence, S.O., Avins, A.L. et al. Comparing three approaches for involving patients in research prioritization: a qualitative study of participant experiences. Res Involv Engagem 6, 18 (2020). https://doi.org/10.1186/s40900-020-00196-4
