Study design
To address our specific goals, we adapted the priority setting methods described by the James Lind Alliance (JLA) [18]. Typically, JLA priority setting partnerships bring together patients and clinicians to identify unanswered research questions in a given clinical area; the finalized list of priorities is then released publicly with the goal of influencing funding agencies and researchers [4, 19, 20]. JLA priority setting methods include a widely distributed survey with open-ended questions on a specific clinical area to generate ideas for research; response management, which includes collating responses and removing duplicates; literature searches to determine whether sufficient research has already been conducted on the responses; and an in-person workshop with approximately 20–25 respondents to identify the final priorities.
Context: translating research in elder care program
Translating Research in Elder Care (TREC), established in 2007, is a longitudinal pan-Canadian research program with a mission to improve the quality of care provided to LTC home residents and the quality of work-life for their paid caregivers [8, 21, 22]. A core value of TREC’s work is partnership with research end-users. The team includes approximately 40 researchers from across Canada, the United States, the United Kingdom, and Sweden, and 20 provincial and health region stakeholders. In 2016, TREC’s engagement strategy was expanded to include a citizen advisory committee composed of LTC residents, potential future residents (specifically, individuals living with dementia), and family/friend caregivers of people living in LTC. The committee, which adopted the name Voices Of Individuals, family and friend Caregivers Educating uS (VOICES), includes members from across Canada and was established in recognition of the need for the voices of persons with lived experience if TREC was to truly meet its commitment to integrated knowledge translation. TREC and VOICES jointly decided to refer to VOICES members as citizens rather than patients. In dementia research, the term citizenship is used to reflect the personhood, visibility, voice, and inclusion of persons with lived experience [23,24,25]. Citizen in this context reflects the proactive engagement of persons with lived experience in efforts to enact social change [23]. The decision to adopt the term citizen also reflects the team’s desire to move away from the medical language of ‘patient’, which often refers to persons receiving care in acute care settings. VOICES was initially intended to play an advisory role [26, 27], but members expressed their desire to move to a more fully partnered position and were particularly interested in opportunities to provide advice on the development of new research projects. We now work with them as team members, and their influence on our work, while sometimes difficult to quantify, can be felt at any gathering at which they are present. They, along with other TREC team members, were concerned that the rich database TREC had worked to collect was not being used to its full potential for secondary analyses. VOICES’ interest in seeing fuller use of the TREC data and their desire to become more involved in project generation led TREC to undertake an internal priority setting process to engage VOICES and other stakeholders in jointly identifying 10 priority research questions that could be addressed using TREC’s existing longitudinal data repository.
TREC data
The priority setting focused on questions that could be answered with the available TREC data. TREC data come primarily from 97 LTC homes in 5 health regions across 3 Western Canadian provinces (British Columbia, Alberta, Manitoba). LTC homes were randomly selected to be representative of those in urban areas and are proportionally stratified by bed size and ownership type (public not-for-profit, private for-profit, voluntary not-for-profit) [21, 28]. The TREC team administers a suite of survey instruments (known as the TREC Survey) to staff (regulated, unregulated, social workers, dieticians, pharmacists, rehabilitation therapists, recreation therapists/aides, managers) within all participating LTC homes. The survey consists of validated measures of physical and mental health, burnout, empowerment, work environment, organizational citizenship behaviours, job satisfaction, and individual staff demographic characteristics. Within the survey is the Alberta Context Tool, a survey-based instrument developed and validated by TREC, which is used to assess staff’s work environment [29, 30]. After 5 waves of data collection, we have information from 339 care units, 927 nurses, 4158 care aides, and 842 care staff (social workers, dieticians, pharmacists, rehabilitation therapists, recreation therapists/aides, managers). TREC also captures resident data from the participating LTC homes using the Resident Assessment Instrument – Minimum Data Set (RAI-MDS 2.0). The RAI-MDS 2.0 is a comprehensive clinical assessment instrument that has been mandated for completion on all LTC residents in nearly every Canadian province. The instrument includes over 400 items on measures such as cognition, physical function, behaviour, mood, and clinical signs and symptoms [31, 32]. As of April 2020, the TREC data held RAI-MDS 2.0 assessments from 60,738 residents. All data are linkable and can be used to create comprehensive, longitudinal datasets that include information on residents, staff, care units, and LTC homes. The breadth of the TREC data, the ability to identify care units and validly assess their context or work environment, and the data’s longitudinal nature make the repository a unique resource for ongoing clinical trials in quality improvement and for secondary analyses.
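To illustrate how linkable records of this kind can be combined, the sketch below (Python with pandas) merges resident-, unit-, and facility-level extracts into one longitudinal analysis file. All file and column names here are illustrative assumptions, not the repository’s actual schema.

```python
import pandas as pd

# Hypothetical extracts; the actual TREC repository uses its own schema and identifiers.
residents = pd.read_csv("rai_mds_assessments.csv")  # resident_id, unit_id, wave, cognition, ...
units = pd.read_csv("trec_survey_units.csv")        # unit_id, facility_id, wave, context_score, ...
facilities = pd.read_csv("facility_profiles.csv")   # facility_id, ownership, bed_size, region, ...

# Link each resident assessment to the care unit's work-environment data for
# the same data collection wave, then attach facility characteristics.
linked = (
    residents
    .merge(units, on=["unit_id", "wave"], how="left")
    .merge(facilities, on="facility_id", how="left")
    .sort_values(["resident_id", "wave"])  # one longitudinal record stream per resident
)
```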
We used the JLA priority setting approach to identify research questions answerable within the existing TREC data, modified in three ways: 1) participants were TREC stakeholders and partners, specifically decision makers (representatives from provincial ministries of health and regional health authorities), VOICES members, LTC home owner-operators, and other agencies engaged with TREC, rather than a broader public or clinical community; 2) our focus was specifically on identifying priority research questions that could be addressed with existing TREC data, not an open discussion of general unanswered research questions regarding LTC homes; and 3) the final priorities are intended to be used by TREC investigators and trainees with TREC data.
Participants: online survey
We administered an online survey to all VOICES members, TREC decision makers (regional health authority leaders, provincial health leaders), LTC owner-operators, and other relevant agencies associated with TREC. We did not include TREC researchers because the goal of the priority setting was to identify citizen and stakeholder research priorities, not researcher priorities. We told recipients that they could forward the survey to others in their network who were interested and involved specifically in the TREC program. Survey recipients received a link to the survey and a reminder of our focus on research questions that could be addressed using TREC’s existing data (not LTC research in general). The survey (see Supplementary Material) was divided into 5 sections, each focused on a different key aspect of the available TREC data (resident, staff, work environment, care unit, and facility). Within each section, we provided a short table outlining the main data elements. Questions took the form: “What questions do you have about LTC residents?” Responses were open-ended free text, and respondents could list as many questions as they wanted (or leave the box blank). VOICES members were involved in the development and final review of the survey before it was distributed.
Data collection: online survey
Our online survey was administered from October to December 2018. Data collection consisted of three email messages (one welcome and two reminders). The survey was hosted by SimpleSurvey™, a Canadian survey vendor (https://simplesurvey.com/). Once the survey closed, a steering committee led by two investigators (AG, SC) and one research assistant categorized the survey responses. This researcher-led steering committee was critical because its members needed substantial knowledge of the data to determine which questions could be answered within the existing TREC database.
Participants: final workshop
We held a face-to-face workshop in March 2019. The purpose of the workshop was for attendees to reach consensus on an unranked list of the top 10 priority research questions to be addressed with the TREC data. Ahead of the workshop, attendees were sent a list of 34 research questions derived from the steering committee’s analysis of the survey responses. They were asked to rank the research questions in order of priority, to email us their ranked list, and to bring it with them on the day of the workshop. Twenty-one individuals were invited to the final workshop; two cancelled at the last minute.
Data collection: final workshop
The workshop was facilitated by an experienced facilitator who was familiar with TREC but not involved in the research, supported by two additional trained facilitators external to TREC. No TREC researchers or trainees, including the authors of this paper, attended, ensuring that researchers did not influence or bias the priority setting process. The workshop agenda was highly structured, and consensus was reached using the Nominal Group Technique, a format described by the JLA [33]. In the Nominal Group Technique, each group member states their opinion without justification or explanation; once all members have had a turn, a moderated discussion follows. The goal is to give each group member an opportunity to express their own views while minimizing opportunities for individual members to dominate the discussion. This is then followed by voting or ranking with structured group discussions [33].
In the morning, attendees were divided into three small groups. During this small group time, each person shared their top three and bottom three research questions, followed by discussion of their rationale. This ensured that each person was given dedicated time to talk and was intended to minimize power imbalances. The groups were then asked to rank-order all 34 research questions. Over lunch, the rankings from the three small groups were aggregated to create a new ranked list. In the afternoon, attendees were assigned to new small groups, presented with the new ranked list, and asked to re-rank it as needed, focusing on the top 15 research questions. Each group’s re-ranked list was then aggregated to create the penultimate ranked list of 34 research questions. This list was presented to attendees, who then had the opportunity for a final large-group discussion. Any suggested changes were voted on, with a majority decision.
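The aggregation of the small groups’ rankings can be thought of as a simple rank-combination step. The sketch below is a minimal illustration that averages each question’s rank position across groups; the specific aggregation rule used at the workshop is not detailed here, so mean rank is our assumption.

```python
from statistics import mean

def aggregate_rankings(group_rankings):
    """Combine several groups' rankings of the same questions by mean rank.

    group_rankings: list of lists, each an ordering of question IDs from
    highest (position 0) to lowest priority. Returns question IDs sorted
    by their average position across groups.
    """
    ids = group_rankings[0]
    avg_rank = {q: mean(r.index(q) for r in group_rankings) for q in ids}
    return sorted(ids, key=avg_rank.get)

# Example: three small groups each rank the same five questions.
groups = [
    ["Q3", "Q1", "Q5", "Q2", "Q4"],
    ["Q1", "Q3", "Q2", "Q5", "Q4"],
    ["Q3", "Q2", "Q1", "Q4", "Q5"],
]
print(aggregate_rankings(groups))  # ['Q3', 'Q1', 'Q2', 'Q5', 'Q4']
```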
At the end of the workshop, we distributed a workshop evaluation to all participants. The evaluation consisted of closed- and open-ended questions about their experience of the in-person workshop. Participants were asked to indicate their level of agreement with ten items (1 = Strongly disagree, 2 = Disagree, 3 = Neither agree nor disagree, 4 = Agree, 5 = Strongly agree).
Analysis
The steering committee followed this process to analyze the online survey responses and prepare the final list of questions for the in-person workshop. First, because respondents could write multiple suggestions within a single text box, unique questions were extracted. Second, steering committee members removed responses that could not be assessed using the existing TREC data (considered “out-of-scope”) or that coincided with existing TREC research. We retained all out-of-scope suggestions for future projects. Third, the remaining suggestions were grouped into broad themes. This thematic assessment allowed us to identify duplicate or similar suggestions that could be merged, and from these themes we developed an initial set of questions. Finally, two team members (AG, SC) refined the research questions in an iterative review process that assessed potential overlap among the questions and whether each question could be clearly assessed using the available data. In this final step, we also removed questions that duplicated ongoing TREC research and/or were deemed out-of-scope on closer inspection. The JLA recommends a literature review to ensure that suggested research questions are unanswered. We did not conduct this review because the aim of the priority setting was specifically to address citizen and stakeholder questions using the existing TREC data, not to determine whether enough research existed on a topic in the peer-reviewed literature. Instead, we focused on ensuring that research questions had not already been addressed using TREC data, rather than in the broader LTC home research literature. At the end of the process, we had a list of research questions derived from the suggestions provided by TREC’s partners and stakeholders in the online survey.
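Only the first, mechanical part of this process (splitting multi-question responses and collapsing exact duplicates) lends itself to automation; the scope and theming judgments rested with the steering committee. A minimal sketch of that mechanical step, assuming responses arrive as a list of free-text strings:

```python
import re

def extract_questions(free_text_responses):
    """Split each free-text survey response into candidate questions.

    A respondent could type several questions into one text box, so we
    split on question marks and line breaks. Scope and theming decisions
    are not automated here.
    """
    questions = []
    for response in free_text_responses:
        for part in re.split(r"(?<=\?)\s+|\n", response):
            part = part.strip()
            if part:
                questions.append(part)
    return questions

def collapse_exact_duplicates(questions):
    """Return unique questions, treating case/whitespace variants as duplicates."""
    seen, unique = set(), []
    for q in questions:
        key = re.sub(r"\s+", " ", q.lower()).strip(" ?.")
        if key not in seen:
            seen.add(key)
            unique.append(q)
    return unique
```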
We analyzed the in-person workshop evaluations using descriptive summary statistics for the quantitative items. The open-ended responses are presented in their entirety in the Results section.
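A minimal sketch of this descriptive summary, assuming the ten closed-ended items are stored one column per item (named item_1 … item_10) with responses coded 1–5; the file and column names are illustrative assumptions.

```python
import pandas as pd

# Hypothetical layout: one row per participant, one column per evaluation item,
# responses coded 1 (Strongly disagree) through 5 (Strongly agree).
evals = pd.read_csv("workshop_evaluations.csv")
item_cols = [c for c in evals.columns if c.startswith("item_")]

# Descriptive summary statistics for each closed-ended item.
summary = evals[item_cols].agg(["count", "mean", "median", "std", "min", "max"]).T
print(summary.round(2))
```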