
Construct validation of the Research Engagement Survey Tool (REST)

Abstract

Background

The Research Engagement Survey Tool (REST) was developed to examine the level of partner (e.g., patients, caregivers, advocates, clinicians, community members) engagement in research studies. The REST is aligned with eight engagement principles based on the literature and consensus reached through a five-round Delphi process. Each engagement principle has three to five corresponding items that are assessed on two Likert-type scales: quantity (how often: never, rarely, sometimes, often, always, not applicable) and quality (how well: poor, fair, good, very good, excellent, not applicable). We conducted a comprehensive validation of the REST. Despite the importance of partner engagement in research, no gold standard measure currently exists.

Methods

Multiple strategies were employed to validate the REST. Here, we examine the internal consistency of items for each of the eight engagement principles. In addition, we examine the convergent validity of the comprehensive (32-item) REST with other measures (e.g., medical mistrust, Community Engagement in Research Index, Partnership Self-Assessment Tool, Wilder collaboration inventory, Partnership Assessment In community-based Research). We propose two scoring approaches for the REST; one aligned with the engagement principles and the other aligned with levels of community engagement: (1) outreach and education, (2) consultation, (3) cooperation, (4) collaboration, and (5) partnership.

Results

The REST has strong internal consistency (Cronbach’s alpha > 0.75) for each of the eight engagement principles measured on both scales (quality and quantity). The REST had negligible (e.g., medical mistrust, Community Engagement in Research Index), low (e.g., Partnership Assessment In community-based Research, Partnership Self-Assessment Tool benefits scale), and moderate (e.g., Wilder collaboration inventory, Partnership Self-Assessment Tool synergy scale) statistically significant correlations with other measures based on the Spearman rank correlation coefficient. These results suggest the REST measures a construct (perceived research engagement) that is related to, but distinct from, those captured by existing measures.

Conclusions

The REST is a valid and reliable tool to assess research engagement of community health stakeholders in the research process. Valid tools to assess research engagement are necessary to examine the impact of engagement on the scientific process and scientific discovery and move the field of stakeholder engagement from best practices and lessons learned to evidence-based approaches based on empirical data.

Plain English summary

Researchers often conduct studies with partners (e.g., patients, caregivers, advocates, clinicians, community members) who also have an interest in the research topic. Depending on the research study, the level of partner engagement in the research process may be high or low. Partners may be involved from the beginning, including determining what topic to study and what questions the study should examine. They may suggest who should be included in the study, the geographic area of focus, and the outcome measures to be examined. In addition, they may help recruit study participants, interpret study results, and plan how to share the results with those who need to know. No standard way exists to find out how involved a partner has been in a study from the partner’s perspective. Here we develop and validate survey questions to measure the level of partner engagement in research studies. We compared our questions with existing survey questions used to measure similar topics, and we checked that a person who takes the survey gets consistent scores. We tested the survey with community health stakeholders (e.g., patients, caregivers, advocates, clinicians, community members) who are research partners for studies at universities across the United States. Over 2 years, the partners took different versions of the survey online four times. We used the data we collected from each survey to revise the questions and make sure the survey measures partner involvement accurately and reliably. The Research Engagement Survey Tool (REST) has 32 questions to examine eight engagement principles on two scales: quantity (how much) and quality (how well). The REST is a valid and reliable tool to examine partner engagement in research.


Background

Stakeholder engagement in research is the process of ensuring that key community health constituents are identified and involved throughout the research process as partners (investigators, not participants). Ideally, this involvement starts before project inception so that partners are able to inform study design, implementation, and interpretation of results, and make use of the results when the study is completed [1]. There has been a call for better reporting and evaluation of engagement approaches, initiatives, and activities to advance the science of stakeholder engagement [2]. The engagement of stakeholders (e.g., patients and their families, clinicians, health systems, policy makers, community organizations, advocacy groups) in research projects has created lessons learned and best practices. However, few methods exist for measuring the extent to which stakeholders are engaged in a research project (e.g., quality of engagement efforts), limiting the ability to determine evidence-based approaches for stakeholder engagement [2]. This poses two major problems for advancing stakeholder-engaged research. The first is that it is difficult to compare the effectiveness of the various strategies employed by different research teams to incorporate stakeholder views and input. The second lies in determining the effect of stakeholder-engaged research practices on rates of program adoption and success of implementation.

Currently, researchers must work from a set of case studies and ‘best practices’ recommendations (e.g., actively seeking collaboration with diverse populations, offering many opportunities to give input in a variety of formats and venues, going to where people are, being transparent and trustworthy). For instance, Holzer et al. use three case studies to demonstrate some key elements (e.g., building trust, encouraging participation, promoting uptake of findings) of successful approaches to community engagement in research [3]. However, the breadth of disciplines that undertake stakeholder engaged research impedes any kind of generalization of best practices. Furthermore, stakeholder engagement can occur at any stage of research, yet may look very different in the early stages of a research project (such as hypothesis development) as compared to the dissemination phase of a translational research project. It is impossible to gauge from the existing literature what level of engagement is necessary for a study and what types of engagement practices would be best given a particular population and research question.

Reviews of the literature tend to suggest that community engagement practices have some positive impact on health improvement interventions for a range of health outcomes across various conditions. However, there is insufficient evidence to determine whether one particular model of community engagement is more effective than any other [4, 5]. These reviews also note substantial variation in the effectiveness of different practices at improving interventions, without being able to determine whether any one approach consistently outperforms the rest [6, 7]. A systematic review found no evidence of impact from community engagement on population health or the quality of services, but engagement initiatives did have positive impacts on housing, crime, social capital, and community empowerment. Methodological developments are needed to enable studies of complex social interventions to provide robust evidence of population impact in relation to community engagement. With no consistent approach to measuring engagement, conducting analyses across multiple studies is ineffectual.

Current approaches to measuring stakeholder engagement focus largely on qualitative methods [8,9,10,11,12]. Despite their efficacy at assessing engagement, these methods are difficult to scale up for large-scale projects and produce results that are difficult to compare across studies and do not generalize well into standard practices [13]. For these reasons Bowen et al. called for the development of a quantitative scale, grounded in theory, that is comprehensive in evaluating all elements of engagement, is easy to use, and provides psychometric data [14]. Such a scale, the Research Engagement Survey Tool (REST), has been proposed [15, 16], and has been comprehensively evaluated. This paper examines the internal consistency (reliability) and convergent validity of the REST.

The original version of REST was developed by the evaluation team of the Program for the Elimination of Cancer Disparities (PECaD) at Siteman Cancer Center [17, 18] and pilot tested in one of its programs [13]. The original version of the REST was designed to align with 11 engagement principles selected by the PECaD’s community advisory board (Disparities Elimination Advisory Committee) based on the community based participatory research and community engagement literature [11, 19,20,21,22,23,24,25,26,27,28,29]. Subsequently, revisions to the measure have been made through a five round Delphi process [15, 16, 30] and cognitive response testing [31]. The final version examines eight engagement principles (EPs) [16], applicable along the full continuum of engagement activities [15]. The EPs are:

  1. Focus on community perspectives and determinants of health
  2. Partner input is vital
  3. Partnership sustainability to meet goals and objectives
  4. Foster co-learning, capacity building, and co-benefit for all partners
  5. Build on strengths and resources within the community or patient population
  6. Facilitate collaborative, equitable partnerships
  7. Involve all partners in the dissemination process
  8. Build and maintain trust in the partnership

Each EP is assessed using three to five items measured on two scales, each with five-point Likert response options: quality (how well: poor, fair, good, very good, excellent, not applicable) and quantity (how often: never, rarely, sometimes, often, always, not applicable). The stem for the quantity scale is “Please rate how often the partners leading the research do each of the following” and the stem for the quality scale is “Please rate how well the partners leading the research do each of the following”. Measures exist for key constituent stakeholder groups (e.g., patients [32], community [33], community advisory boards [34], coalitions [35]); however, the REST is unique in that it is applicable to all non-academic research partners and is based on their perspective.

Ideally, such a proposed psychometric scale has both high reliability (it consistently records the same values for research project stakeholder engagement regardless of who is responding) and high validity (it draws accurate conclusions about the presence and degree of stakeholder engagement) [36]. Cronbach’s alpha is a well-developed measure of reliability, characterizing how strongly the items within each EP resemble each other [37], and is robust to the sample size of surveys conducted [38]. Nunnally and Bernstein propose a value of alpha = 0.80 as a satisfactory level of reliability, beyond which decreasing measurement error has little effect on the value of alpha [39]. Ideally, the validity of a scale would be evaluated by demonstrating that the tool produces results that agree with a ‘gold standard’ test. Because no gold standard exists for measuring stakeholder engagement, convergent validity (the degree to which two measures of constructs that theoretically should be related are in fact related) was used to assess construct validity by comparing results from the REST to a number of other scales that similarly measure engagement (e.g., Partnership Assessment In community-based Research [40]). Also included in this comparison are tools for measuring constructs that are theoretically associated with strong stakeholder engagement (e.g., trust in medical researchers and health literacy). This paper presents the analysis of reliability and construct validity of the REST measure.

Methods

Study overview

The study was composed of four longitudinal web-based surveys conducted between July 2017 and September 2019 (see Fig. 1). The modified versions of REST presented on sequential surveys correspond to the versions revised through a Delphi process described in detail elsewhere [16, 30]. Surveys one through three contained measures assessing dimensions of collaboration, partnership, trust in medical researchers, and engagement, used in determining convergent validity. Finally, the fourth survey, released in January 2019, contained the final version of the REST [16, 30, 31] and asked participants to review the categories of community engagement in research and their corresponding definitions [15] and to classify their project into one of these categories: (1) outreach and education, (2) consultation, (3) cooperation, (4) collaboration, and (5) partnership.

Fig. 1

Participants complete a short screening instrument. Those screened eligible are sent a link to the survey. Participants who complete the informed consent screen by agreeing to participate are considered enrolled. Surveys were open from July 2017 (depending on release date) to September 2019

Participants

We recruited participants (community partners in research studies) through several different methods throughout the study period (July 2017 to August 2019). Our first recruitment approach consisted of email recruitment to principal investigators (PIs) in the research team’s network involved with stakeholder-engaged research and contacts in health departments, Clinical and Translational Science Awards (CTSA) Programs, Prevention Research Centers, Transdisciplinary Research in Energetics and Cancer Centers, National Institute on Minority Health and Health Disparities Centers of Excellence, National Cancer Institute Community Networks Programs, and U.S. Department of Health & Human Services Regional Health Equity Councils. We also developed a database of community-engaged researchers nationally and reached out to them via email. We asked PIs of community-engaged research studies to share information about this study with their community partners. To complement email recruitment, we conducted in-person recruitment by attending local (St. Louis, MO) health fairs and local community partner meetings, posting recruitment flyers locally, and attending national conferences related to community engagement. We also used recruitment resources at Washington University in St. Louis, including the Washington University Recruitment Enhancement Core and ResearchMatch, a national health volunteer registry created by several academic institutions and supported by the National Institutes of Health as part of the CTSA program (https://www.researchmatch.org/).

Participants were included in the study if, based on an electronic screening instrument (Additional file 1: Table S1), they were currently or previously involved in stakeholder-engaged research, were over age 18, and were willing to participate in an 18-month longitudinal study. We removed participants who screened eligible but completed the survey multiple times, provided invalid telephone numbers or zip codes, or had odd patterns in their responses (N = 95) [41, 42]. We screened 675 people, of whom 527 (78%) were eligible. Of those eligible, 487 (92%) enrolled in the study (completed informed consent). Of those enrolled, 393 (81%) completed at least one of the four surveys, while 324 (67%) completed all four surveys. See Fig. 1 for participant recruitment and survey timeline.

Procedures

Potential participants completed an eligibility screener (Additional file 1: Table S1), answering questions pertaining to the inclusion and exclusion criteria described above either online or in person. In-person screeners were completed either on paper or via a tablet with a member of the research team present. The vast majority of eligible participants provided an email address and were emailed a personalized link to the first survey within two business days. Participants recruited in person who did not have an email address were provided the first survey in person, completed it, and returned it to a member of the research team (n = 8). After completion of the first survey, participants were provided with a $10 gift card.

For participants completing the surveys online, surveys two through four were emailed to participants either on the survey release date, or if the participant enrolled in the study after the initial survey release date, the survey was emailed to participants within five business days of completing the previous survey. When new surveys were released, they were sent to all enrolled participants, regardless of completion status for previous surveys. Participants who enrolled after initial survey release dates were sent a link to the subsequent survey within approximately four weeks of being sent the previous survey if they had not yet completed the previous survey. All online surveys, including the eligibility screener, were administered through the survey platform Qualtrics (Provo, UT).

For participants completing the surveys on paper, a member of the research team brought the subsequent surveys to meetings that both attended, and the participants completed the surveys and returned them to the research team. Participants received $10 for completing survey two, and an extra $5 if they completed both surveys one and two. Participants received $15 each for completing surveys three and four, and an extra $10 if they completed both. The Institutional Review Boards at both Washington University in St. Louis and New York University approved all portions of this project.

Measures

Research Engagement Survey Tool (REST)

The original version of REST was developed and pilot tested by the evaluation team for the Program for the Elimination of Cancer Disparities at Siteman Cancer Center [13, 17, 18]. The original version (survey one) contained 48 items corresponding to 11 engagement principles (EPs). Each EP contained three to five items that were measured on two scales: quality (how well) and quantity (how often). For the quality scale, response options were Poor, Fair, Good, Very Good, Excellent. For the quantity scale, response options were Never, Rarely, Sometimes, Often, Always.

Three additional revised versions of REST were presented sequentially on surveys two through four. Revisions were made based on a modified Delphi Panel process and cognitive interviews that have been described in detail elsewhere [16, 30, 31]. On survey four, an additional response option of ‘Not Applicable’ was added based on feedback from a Delphi panel process and cognitive interviews described elsewhere [16, 31].

Scoring REST

The REST has two scoring approaches, the first is aligned with EPs. We treat not applicable responses as missing in the analysis. REST scoring is done at the EP level and overall. EP specific scores were calculated as an average of non-missing items and the eight means were averaged to calculate the overall REST score. This scoring approach is used to examine the internal consistency and convergent validity of the REST. The second scoring approach aligns the REST with the categories of community engagement in research and provides a percentage in each of five categories: (1) outreach and education, (2) consultation, (3) cooperation, (4) collaboration, and (5) partnership [15]. This scoring approach does not provide one overall score, rather it is five percentages (one for each engagement level) based on the number of REST items (out of 32 total) that are scored in each category (using the scoring scheme provided in Additional file 1: Table S2) based on the survey responses.
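The first, EP-aligned scoring approach can be sketched in a few lines. This is an illustrative Python sketch, not the authors' SAS code; the item values and EP groupings shown are hypothetical (the real comprehensive REST has eight EPs and 32 items), and `None` stands in for a "not applicable" response, which is treated as missing.

```python
# Illustrative sketch of the EP-aligned REST scoring (not the authors' code).
# None represents a "not applicable" response, treated as missing.

def ep_score(responses):
    """Mean of the non-missing item responses for one EP."""
    answered = [r for r in responses if r is not None]
    return sum(answered) / len(answered) if answered else None

def overall_rest_score(ep_item_responses):
    """Average the EP means to obtain the overall REST score."""
    ep_means = [ep_score(items) for items in ep_item_responses.values()]
    ep_means = [m for m in ep_means if m is not None]
    return sum(ep_means) / len(ep_means)

# Two hypothetical EPs shown for brevity; the full REST has eight.
example = {
    "EP1": [4, 5, None, 3],  # None = not applicable -> mean of 4, 5, 3
    "EP2": [2, 3, 4],
}
print(overall_rest_score(example))  # (4.0 + 3.0) / 2 = 3.5
```

Averaging within each EP first, then across EPs, keeps EPs with fewer items from being underweighted in the overall score.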

To develop the second scoring approach, we reviewed each item against the definitions of the categories of engagement and identified the category for each response (Additional file 1: Table S2). For example, for item 1.4, “The focus is on cultural factors that influence health behaviors,” we classified it as follows:

  • For quality: poor = outreach & education, fair = outreach & education, good = outreach & education, very good = consultation, excellent = cooperation. This means that if a participant responds poor, fair, or good, one point is added to the outreach & education category; if the participant instead responds very good, one point is added to the consultation category; and if the participant responds excellent, one point is added to the cooperation category.

  • For quantity: never = outreach & education, rarely = outreach & education, sometimes = outreach & education, often = consultation, always = consultation. This means that if a participant responds never, rarely, or sometimes to item 1.4, one point is added to the outreach & education category; if the participant instead responds often or always, one point is added to the consultation category.

A similar process was done for each item (Additional file 1: Table S2). To calculate the overall score for each survey respondent, we gave the participant one point in the category of engagement corresponding to their response to each item (Additional file 1: Table S2). For example, if a participant responded good for item 1.4 on the quality scale, they would get one point in the outreach & education category. Then, for each survey respondent, we summed the points for each category of engagement and calculated a percentage with 32 as the denominator, since the version of REST that participants completed (the comprehensive version) had 32 items. We examine the average percentages by category of engagement.
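The category-based scoring described above can be sketched as follows. Only the quality-scale mapping for item 1.4 is shown (the full item-by-item mappings live in Additional file 1: Table S2), and the single-item example at the end is invented for illustration.

```python
# Illustrative sketch of the category-of-engagement scoring. Only the
# quality-scale mapping for item 1.4 is shown; real scoring uses the
# item-specific mappings in Additional file 1: Table S2.

CATEGORIES = ["outreach and education", "consultation", "cooperation",
              "collaboration", "partnership"]

ITEM_1_4_QUALITY = {
    "poor": "outreach and education",
    "fair": "outreach and education",
    "good": "outreach and education",
    "very good": "consultation",
    "excellent": "cooperation",
}

def score_responses(mappings, responses, n_items=32):
    """Tally one point per item in its mapped category, then express each
    category's tally as a percentage of the 32 comprehensive-REST items."""
    points = {c: 0 for c in CATEGORIES}
    for item, response in responses.items():
        points[mappings[item][response]] += 1
    return {c: 100 * points[c] / n_items for c in CATEGORIES}

# One-item example: a "good" response adds a point to outreach & education.
pct = score_responses({"1.4": ITEM_1_4_QUALITY}, {"1.4": "good"})
print(pct["outreach and education"])  # 1 of 32 items -> 3.125
```

Note that the output is five percentages, one per category of engagement, rather than a single overall score.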

Other measures

On survey one, we included measures of health literacy [43, 44], subjective numeracy [45], medical mistrust [46], trust in medical researchers [47], a survey of community engagement [34], and the Partnership Assessment In community-based Research (PAIR) [40]. The measure of medical mistrust [46] was calculated as an unweighted sum score with 12 subtracted from the total. The trust in medical researchers score [47] was calculated as a percentage of the total range. For both the medical mistrust and trust in medical researchers scores, higher values indicate more trust in medical researchers. The Kagan et al. [34] summary score was calculated similarly to the REST, as a weighted average over the three sub-sections of community involvement, relevance of research, and collaboration & communication. The Kagan et al. survey measures the extent to which community advisory boards (CABs) are involved in research activities. The PAIR [40] measure was also calculated similarly, with a mean score for each dimension (communication, collaboration, evaluation/continuous improvement, benefits, and partnership) and then the dimension means averaged to create an overall score. The PAIR measure is designed to evaluate partnerships between community members and researchers. For both the Kagan et al. and PAIR measures, higher scores indicate higher engagement or a more developed partnership.
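The two trust-score calculations described above can be sketched as below. This is a hedged illustration: the 12-item mistrust count comes from the text, but the minimum and maximum totals used for the trust-in-medical-researchers example are placeholder assumptions, not the published scale's actual range.

```python
# Hedged sketch of the two trust-score calculations described in the text.
# The min/max totals in the example are placeholders, not the published
# scale's actual range.

def medical_mistrust_score(item_responses):
    """Unweighted sum of the 12 item responses, minus 12."""
    return sum(item_responses) - 12

def trust_pct_of_range(total, min_total, max_total):
    """Express a summed trust score as a percentage of its total range."""
    return 100 * (total - min_total) / (max_total - min_total)

print(medical_mistrust_score([3] * 12))   # 36 - 12 = 24
print(trust_pct_of_range(36, 12, 60))     # midpoint of a 12-60 range -> 50.0
```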

On survey two, we included the community engagement research index (CERI) [11], the trust subscale of the coalition self-assessment survey (CSAS) [35], and the community campus partnerships for health (CCPH) principles [48]. The CERI measures the level of community participation in research, while the trust subscale of the CSAS examines trust among coalition members and the CCPH principles measure collaborative partnerships between the community and academic institutions. The CERI was calculated according to Khodyakov et al. [11] by creating a summed index score over the 12 items with higher scores indicating more engagement in research. The trust portion of the CSAS was calculated as an average score [35], with higher values indicating higher trust.

On survey three, we included the partnership self-assessment tool (PSAT) [49, 50] and the Wilder collaboration inventory [51, 52]. The PSAT includes measures of 11 dimensions, including: (1) synergy, (2) leadership, (3) efficiency, (4) administration & management, (5) non-financial resources, (6) financial resources, (7) decision making, (8) benefits, (9) drawbacks, (10) comparing benefits and drawbacks, and (11) satisfaction. Each dimension has several items that are averaged together to create the overall dimension score, with higher scores indicating higher levels of the dimension, except for the benefits and drawbacks scales which are created as percentage scores and the comparison of benefits and drawbacks which consists of only one item [49, 50]. The Wilder collaboration inventory contains 40 total items pertaining to 20 factors of collaboration (one to three items per factor), within six overall categories of collaboration (environment, member characteristics, process/structure, communication, purpose, and resources; two to six factors per category) that are averaged to create an overall score [51, 52].

Demographic questions (age, gender, race, ethnicity, education level, region) and project description questions were presented on survey one; however, if a participant had not completed survey one before being sent a subsequent survey, the demographic and project description questions were asked on whichever survey they completed first. Age was measured continuously in years and gender was coded as male, female, or other. Race and ethnicity were asked as two separate questions but were combined into categories of Non-Hispanic/Latino(a) Black, Non-Hispanic/Latino(a) White, Hispanic, Asian, and Other/Multiracial/Unknown. Education level was coded as less than high school, high school degree or GED, some college or associates degree, college degree, or graduate degree. Region was coded as northeast, west, south, midwest, and non-state area (includes Virgin Islands and Puerto Rico). Project description questions included the following: open-ended description of the project and project purpose, the participant’s project role, how long the participant had worked on the project, and how long the participant had collaborated with the academic/university partner.

Statistical analysis

Descriptive statistics including mean, median, and standard deviation were calculated by item, by EP, and for the overall measure. Frequencies and percentages of ‘not applicable’ responses by item were also calculated. We calculated Cronbach’s alpha for each EP of REST to assess internal consistency. To measure convergent validity of REST with other similar constructs, we calculated Spearman’s correlation coefficients between REST and other measures (i.e., Trust in Medical Researchers, Medical Mistrust, PAIR, CERI, Wilder Collaboration inventory, CSAS, PSAT, Kagan survey of community engagement). The addition of the ‘not applicable’ response option led to a larger number of missing responses on the version of REST presented on survey four. We therefore conducted sensitivity analyses throughout, using only those respondents with no missing items, and examined differences between the results. We conducted all aforementioned analyses for both the quality and quantity response scales of REST. All statistical analyses were conducted in SAS® version 9.4.
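For readers who want to reproduce the two key statistics, a minimal pure-Python sketch is below. The paper's analyses were run in SAS 9.4; this is only an illustration with fake data, and the simplified Spearman implementation assumes no tied values.

```python
# Illustrative implementations of Cronbach's alpha and Spearman's rho
# (the paper's analyses used SAS 9.4). Data below are fake.
from statistics import mean, pvariance

def cronbach_alpha(items):
    """items: one list of responses per item (same respondents, same order).
    alpha = k/(k-1) * (1 - sum(item variances) / variance(total scores))."""
    k = len(items)
    totals = [sum(resp) for resp in zip(*items)]
    return k / (k - 1) * (1 - sum(pvariance(it) for it in items) / pvariance(totals))

def spearman_rho(x, y):
    """Spearman rank correlation: Pearson correlation computed on ranks.
    Simplified: assumes no tied values."""
    def ranks(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0.0] * len(v)
        for rank, i in enumerate(order, start=1):
            r[i] = rank
        return r
    rx, ry = ranks(x), ranks(y)
    mx, my = mean(rx), mean(ry)
    num = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    den = (sum((a - mx) ** 2 for a in rx) * sum((b - my) ** 2 for b in ry)) ** 0.5
    return num / den

# Fake data: three items, five respondents; a made-up comparison score.
items = [[3, 4, 2, 5, 4], [3, 5, 2, 4, 4], [2, 4, 3, 5, 4]]
other = [10, 19, 8, 20, 15]
rest_scores = [mean(resp) for resp in zip(*items)]
print(round(cronbach_alpha(items), 2))
print(spearman_rho(rest_scores, other))  # 1.0 (identical rank orderings)
```

In practice a statistics library (e.g., `scipy.stats.spearmanr`, which handles ties) would be preferable; the point here is only to make the two formulas concrete.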

Results

The majority of participants were female (80%), from the Midwest region of the United States (53%) and had a college degree or higher level of education (75%). The participants were mostly either non-Hispanic/Latino(a) Black (41%) or non-Hispanic/Latino(a) White (42%) and had a mean age of 42 years (Table 1).

Table 1 Demographic characteristics of participants who enrolled (n = 487) and completed survey 4 (n = 336)

REST summary and scores

EP means for the quality scale range from 3.6 to 3.8 (where 1 = poor and 5 = excellent), while means for the quantity scale range from 3.7 to 4.0 (where 1 = never and 5 = always). The mean score on the overall quality version of REST was 3.6 (95% CI 3.5, 3.7) and the overall quantity mean score was 3.9 (95% CI 3.8, 4.0) (Table 2).

Table 2 Mean (95% confidence interval) and Cronbach’s alpha for engagement principles—final version of REST

When looking at how REST aligns with the categories of stakeholder engagement in research (see Additional file 1: Table S2 for classification information), the percentages by category varied widely, as participants in our sample were engaged in many different types of projects across the stakeholder engagement continuum. For the quality scale of REST, percentages by category of stakeholder engagement ranged from 0 to 100% for outreach and education with a median of 6%; 0–75% for consultation with a median of 9%; 0–81% for cooperation with a median of 25%; 0–75% for collaboration with a median of 41%; and 0–22% for partnership with a median of 3%. For the quantity scale of REST, percentages ranged from 3 to 84% for outreach and education with a median of 3%; 0–75% for consultation with a median of 9%; 0–78% for cooperation with a median of 22%; 0–75% for collaboration with a median of 47%; and 0–25% for partnership with a median of 6%.

Item-specific summary statistics are presented in Table 3. Overall, item means were typically higher for the quantity scale than the quality scale. The median for all items was 4.0, except for item 7.3 (“All partners have the opportunity to be coauthors when the work is published.”) on the quality scale, where the median was 3.0. The item with the highest mean score was the same on both the quantity and quality scales: item 8.4 (“All partners respect the population being served.”). The item with the lowest mean score was also the same on both scales: item 7.3 (“All partners have the opportunity to be coauthors when the work is published.”).

Table 3 Item information summary

Six items (19%) had a large proportion of not applicable responses (> 5%). Items that met this criterion were consistent across the quality and quantity scales and included:

  • 1.3—“The effort incorporates factors (for example housing, transportation, food access, education, employment) that influence health status.”

  • 3.5—“All partners continue community-engaged activities beyond an initial project, activity, or study.”

  • 6.1—“Fair processes have been established to manage conflict or disagreements.”

  • 6.4—“All partners agree on ownership of data for publications and presentations.”

  • 7.3—“All partners have the opportunity to be coauthors when the work is published.”

  • 8.2—“All partners are confident that they will receive credit for their contributions to the partnership.”

Internal consistency

Results for the final comprehensive version of REST (from survey four) showed strong internal consistency among the EPs for both the quality (Cronbach’s alpha range: 0.83–0.92) and quantity (Cronbach’s alpha range: 0.79–0.91) versions of the measure (Table 2). For EP 7 (Involve all partners in the dissemination process), results showed a slight increase in alpha if item 7.3 (All partners have the opportunity to be coauthors when the work is published) was removed: alpha increased from 0.83 to 0.84 for the quality version, and from 0.79 to 0.81 for the quantity version. Given that the improvement from deleting this item was only slight, it was retained in the comprehensive REST.

Convergent validity

The REST was significantly correlated with several of the comparison measures (Table 4). It showed statistically significant but negligible positive correlations with the Mainous trust in medical researchers scale (quantity only: r = 0.12, p = 0.03) [46], the Hall trust in medical researchers scale (quality: r = 0.18, p < 0.001; quantity: r = 0.21, p < 0.001) [47], the CERI (quality: r = 0.19, p = 0.001; quantity: r = 0.25, p < 0.001) [11], and the PSAT drawbacks dimension (quality: r = − 0.21, p < 0.001; quantity: r = − 0.26, p < 0.001) [49, 50]. There was a negligible, non-significant correlation between the REST and each of the single-item literacy screeners, and a negligible but significant correlation between the REST and the subjective numeracy ability (quality: r = 0.11, p = 0.04; quantity: r = 0.11, p = 0.05) and preferences (quality: r = 0.12, p = 0.03; quantity: r = 0.12, p = 0.03) subscales [43,44,45].

Table 4 Comprehensive version of REST convergent validity with other measures

The REST showed a low positive correlation with the PAIR (quality: r = 0.34, p < 0.001; quantity: r = 0.44, p < 0.001) [40], the PSAT non-financial resources dimension (quality only: r = 0.47, p < 0.001), the PSAT benefits dimension (quality: r = 0.33, p < 0.001; quantity: r = 0.41, p < 0.001), and the PSAT comparing benefits and drawbacks dimension (quality: r = 0.39, p < 0.001; quantity: r = 0.42, p < 0.001) [49, 50]. REST EP8 (Build and maintain trust in the partnership) showed a low positive correlation with the trust measure of the CSAS (quality: r = 0.40, p < 0.001; quantity: r = 0.42, p < 0.001) [35].

The REST showed a moderate correlation with the Kagan et al. measure (quality: r = 0.50, p < 0.001; quantity: r = 0.56, p < 0.001) [34] and the Wilder collaboration inventory (quality: r = 0.54, p < 0.001; quantity: r = 0.54, p < 0.001) [51, 52]. It also showed a moderate correlation with seven dimensions of the PSAT: the synergy dimension (quality: r = 0.61, p < 0.001; quantity: r = 0.62, p < 0.001), satisfaction dimension (quality: r = 0.61, p < 0.001; quantity: r = 0.65, p < 0.001), non-financial resources dimension (quantity only: r = 0.52, p < 0.001), leadership dimension (quality: r = 0.69, p < 0.001; quantity: r = 0.69, p < 0.001), efficiency dimension (quality: r = 0.62, p < 0.001; quantity: r = 0.59, p < 0.001), administration/management dimension (quality: r = 0.63, p < 0.001; quantity: r = 0.64, p < 0.001), and decision-making dimension (quality: r = 0.51, p < 0.001; quantity: r = 0.51, p < 0.001) [49, 50, 53].

While the statistically significant correlations confirm that the measures are related (as theoretically hypothesized), the magnitudes of the correlations were only negligible, low, or moderate.
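
The convergent-validity analysis can be sketched as follows. The rank-averaging and rho computation follow the standard definition of Spearman’s rank correlation; the descriptive cut-offs in `label` (0.30/0.50/0.70) are an assumption consistent with the negligible/low/moderate bands reported above, not necessarily the authors’ exact criteria.

```python
def ranks(xs):
    """Average ranks, with ties sharing their mean rank (as in Spearman's rho)."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0.0] * len(xs)
    i = 0
    while i < len(xs):
        j = i
        while j + 1 < len(xs) and xs[order[j + 1]] == xs[order[i]]:
            j += 1  # extend the tie group
        avg = (i + j) / 2 + 1  # mean of positions i+1 .. j+1
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman_rho(x, y):
    """Spearman's rho = Pearson correlation of the two rank vectors."""
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

def label(r, cuts=(0.30, 0.50, 0.70)):
    """Hypothetical descriptive bands for |rho|."""
    a = abs(r)
    if a < cuts[0]:
        return "negligible"
    if a < cuts[1]:
        return "low"
    if a < cuts[2]:
        return "moderate"
    return "high"
```

For example, `label(0.54)` returns `"moderate"`, consistent with how the Wilder collaboration inventory correlations are described above.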

Discussion

We examined the internal consistency and construct validity of the REST. Given the lack of a gold standard measure of stakeholder engagement in research, we calculated the correlation (convergent validity) with other theoretically related constructs (e.g., partnership, collaboration, community engagement, trust, and mistrust). We found statistically significant correlations (negligible, low, moderate) with other measures theoretically associated with stakeholder engagement. However, the lack of high correlation with any of the existing measures suggests the REST is measuring a different construct (perceived stakeholder engagement in research) than these existing measures. Together the results suggest the REST is a valid (research engagement construct) and reliable (internally consistent) tool to assess research engagement of non-academic stakeholders in research. Valid and reliable tools to assess research engagement are necessary to examine the impact of stakeholder engagement on the scientific process and scientific discovery and move the field of stakeholder engagement from best practices and lessons learned to evidence-based approaches based on empirical data and rigorous scientific study designs.

Strengths, limitations, and future directions

Our study findings should be considered in the context of several limitations. First, recruitment delays caused the study design to change: we recruited throughout the entire study period rather than recruiting all participants before survey one and then releasing surveys two, three, and four to all participants consecutively. As a result, some participants completed the surveys closer together in time than others. However, only 31 participants (6%) completed the surveys out of order, while 456 (94%) completed them in order; demographic characteristics did not differ between these groups. Second, the timing of the surveys could have affected those involved in ongoing projects. On survey four, we asked participants to classify the status of their project (just started, ongoing, completed). Of the 336 participants who completed survey four, 20 (6%) indicated that the project had just started, 174 (52%) that it was ongoing, and 142 (42%) that it had been completed (Table 1). Participants whose projects were ongoing or had just started may have experienced changes in their level of engagement across the four surveys, whereas those with completed projects likely did not.

Third, a large proportion of participants were lost to follow-up, leading to a smaller sample size for survey four relative to the number of participants who completed consent. The attrition rate for this study was 31% (151 lost to follow-up by survey four), most of which reflected participants who did not complete any of the surveys (n = 94; 19%). Among participants who completed at least one survey (n = 393), 85% completed the final longitudinal survey. While 80% follow-up has been stated as a cut-off rate for acceptable loss to follow-up [54], a systematic review of longitudinal studies found an average retention rate of 74% (standard deviation = 20%), a rate that remained consistent regardless of study duration or type [55]. Studies on the impact of attrition in longitudinal research generally suggest that 25–30% loss to follow-up is acceptable [56], with the impact of further attrition depending on whether the data are missing at random (acceptable results with up to 25–60% loss to follow-up) or missing not at random (bias present at 30% loss to follow-up) [57, 58].

We also had a higher percentage of missing responses on the final version of the REST (survey four), due primarily to the addition of a “not applicable” response option; the number of missing responses other than “not applicable” was low across all items (Table 3). Due to attrition and missing data, some of the analyses are based on samples of size 224 (67% of the analytic sample). However, we conducted a sensitivity analysis comparing complete-case data with data including missingness and found the results to be similar. Finally, the REST is currently available only in English, and we were unable to estimate its completion time directly. We recorded the time to complete the entire survey, calculated as the finish time minus the start time; however, the survey included additional questions, and participants could stop and return later to complete it. Excluding those who took more than 30 min, the mean completion time for the full survey was 14 min (median = 13 min); based on this, we estimate the REST takes less than 10 min to complete.

Despite these limitations, the REST and our study have several strengths. The REST was developed through a stakeholder-engaged process from a community-academic partnership (Disparities Elimination Advisory Committee) and was validated using input from stakeholders (e.g., patients and their families, clinicians, health systems, policy makers, community organizations, advocacy groups). The REST is a flexible, general tool that can be used across a variety of project types, stages, and stakeholder groups (e.g., community advisory boards, patients, community members, health departments, health care organizations) [59]. It is easy to administer via an online web survey and also shows potential for paper-based administration. The REST is agnostic to disease, demographic group (e.g., gender, race, age), and stakeholder group, allowing its use across a broad range of community-engaged research activities [59]. The REST was designed to fill a gap created by the dearth of existing measures of stakeholder engagement in research. Measures that assess the perceived level of engagement of non-academic research partners across a broad array of engagement activities, research projects, diseases, health outcomes, and populations are necessary to build an evidence base for stakeholder engagement by determining the quantity and quality of engagement necessary for successful outcomes.

Conclusions

The REST is a tool that examines how stakeholders (e.g., patients and their families, clinicians, health systems, policy makers, community organizations, advocacy groups) understand and experience their engagement in research projects. In the future, the REST should be developed and validated in languages other than English (e.g., Spanish and Mandarin); we do not believe direct translation is appropriate, but we have developed an approach that can be adapted to other languages. In an implementation study, we demonstrate the ability of the REST to measure engagement across a broad array of projects with different levels of engagement [59]. However, the REST should also be examined longitudinally in ongoing projects to establish test–retest reliability and to assess its sensitivity to changes in engagement over time. This would allow for the examination of the quantity and quality of engagement necessary to move partnerships along the engagement continuum. Tools that assess stakeholder engagement are necessary for the empirical examination of the influence of engagement on the types of research, the questions addressed, service improvement, the scientific process, and scientific discovery.

Availability of data and materials

The datasets used and/or analyzed during the current study are available from the corresponding author upon reasonable request and with regulatory (IRB) approval.

Abbreviations

REST:

Research Engagement Survey Tool

PI(s):

Principal investigator(s)

EP(s):

Engagement principle(s)

CTSA:

Clinical and Translational Science Awards

PAIR:

Partnership Assessment In community-based Research

CERI:

Community engagement research index

CSAS:

Coalition self-assessment survey

CCPH:

Community campus partnerships for health

PSAT:

Partnership self-assessment tool

CI:

Confidence interval

GED:

General Educational Development test (high school equivalency)

References

  1. Huzzard T. Achieving impact: exploring the challenge of stakeholder engagement. Eur J Work Organ Psychol [Internet]. 2021;30:379–89. https://doi.org/10.1080/1359432X.2020.1761875.

  2. Goodman MS, Sanders Thompson VL. The science of stakeholder engagement in research: classification, implementation, and evaluation. Transl Behav Med [Internet]. 2017. https://doi.org/10.1007/s13142-017-0495-z.

  3. Holzer JK, Ellis L, Merritt MW. Why we need community engagement in medical research. J Investig Med [Internet]. 2014;62:851–5.

  4. Cyril S, Smith BJ, Possamai-Inesedy A, Renzaho AMN. Exploring the role of community engagement in improving the health of disadvantaged populations: a systematic review. Glob Health Action [Internet]. 2015;8:29842. https://doi.org/10.3402/gha.v8.29842.

  5. Boote J, Telford R, Cooper C. Consumer involvement in health research: a review and research agenda. Health Policy (New York) [Internet]. 2002;61:213–36.

  6. O’Mara-Eves A, Brunton G, Oliver S, Kavanagh J, Jamal F, Thomas J. The effectiveness of community engagement in public health interventions for disadvantaged groups: a meta-analysis. BMC Public Health [Internet]. 2015;15:1–23. https://doi.org/10.1186/s12889-015-1352-y.

  7. Milton B, Attree P, French B, Povall S, Whitehead M, Popay J. The impact of community engagement on health and social outcomes: a systematic review. Community Dev J. 2012;47(3):316–34.

  8. Schulz AJ, Israel BA, Lantz P. Instrument for evaluating dimensions of group dynamics within community-based participatory research partnerships. Eval Program Plann. 2003;26:249–62.

  9. Lantz PM, Viruell-Fuentes E, Israel BA, Softley D, Guzman R. Can communities and academia work together on public health research? Evaluation results from a community-based participatory research partnership in Detroit. J Urban Heal [Internet]. 2001;78:495–507. https://doi.org/10.1093/jurban/78.3.495.

  10. Israel BA, Schulz AJ, Parker EA, Becker AB. Review of community-based research: assessing partnership approaches to improve public health [Internet]. Annu Rev Public Health. 1998;19:173–202.

  11. Khodyakov D, Stockdale S, Jones A, Mango J, Jones F, Lizaola E. On measuring community participation in research. Health Educ Behav [Internet]. 2013;40:346–54.

  12. Francisco VT, Paine AL, Fawcett SB. A methodology for monitoring and evaluating community health coalitions. Health Educ Res [Internet]. 1993;8:403–16.

  13. Goodman MS, Sanders Thompson VL, Arroyo Johnson C, Gennarelli R, Drake BF, Witherspoon M, et al. Evaluating community engagement in research: quantitative measure development. J Community Psychol [Internet]. 2017;45:17–32. https://doi.org/10.1002/jcop.21828.

  14. Bowen DJ, Hyams T, Goodman M, West KM, Harris-Wai J, Yu JH. Systematic review of quantitative measures of stakeholder engagement. Clin Transl Sci. 2017;10(5):314–36.

  15. Sanders Thompson VL, Ackermann N, Bauer KL, Bowen DJ, Goodman MS. Strategies of community engagement in research: definitions and classifications. Transl Behav Med [Internet]. 2020. https://doi.org/10.1093/tbm/ibaa042/5838783.

  16. Goodman MS, Ackermann N, Bowen DJ, Thompson V. Content validation of a quantitative stakeholder engagement measure. J Community Psychol [Internet]. 2019;47:1937–51. https://doi.org/10.1002/jcop.22239.

  17. Arroyo-Johnson C, Allen ML, Colditz GA, Hurtado GA, Davey CS, Thompson VLS, et al. A tale of two community networks program centers: operationalizing and assessing CBPR principles and evaluating partnership outcomes. Prog Community Heal Partners Res Educ Action [Internet]. 2015;9:61–9.

  18. Thompson VLS, Drake B, James AS, Norfolk M, Goodman M, Ashford L, et al. A community coalition to address cancer disparities: transitions, successes and challenges. J Cancer Educ [Internet]. 2014;30:616–22.

  19. Israel BA, Schulz AJ, Parker EA, Becker A. Review of community-based research: assessing partnership approaches to improve public health. Annu Rev Public Health [Internet]. 1998;19:173–202.

  20. McCloskey DJ, McDonald MA, Cook J, Heurtin-Roberts S, Updegrove S, Sampson D, et al. Community engagement : definitions and organizing concepts from the literature. Prince Community Engagem. 2nd ed. Atlanta: The Centers for Disease Control and Prevention; 2012. p. 41.

  21. Khodyakov D, Stockdale S, Jones F, Ohito E, Jones A, Lizaola E, et al. An exploration of the effect of community engagement in research on perceived outcomes of partnered mental health services projects. Soc Ment Health [Internet]. 2011;1:185–99.

  22. Wallerstein NB, Duran B. Using community-based participatory research to address health disparities. Health Promot Pract. 2006;7:312.

  23. De las Nueces D, Hacker K, DiGirolamo A, Hicks LS. A systematic review of community-based participatory research to enhance clinical trials in racial and ethnic minority groups. Health Serv Res [Internet]. 2012;47:1363–86.

  24. Burke JG, Hess S, Hoffmann K, Guizzetti L, Loy E, Gielen A, et al. Translating community-based participatory research principles into practice. Prog Community Heal Partnerships Res Educ Action [Internet]. 2013;7:109–109.

  25. Israel BA, Schulz AJ, Parker EA, Becker AB, Allen AJ, Guzman JR. Critical issues in developing and following CBPR principles. Community Based Particip Res Health Process Outcomes; 2008;47–66.

  26. Butterfoss FD, Francisco VT. Evaluating community partnerships and coalitions with practitioners in mind. Heal Promot Pract [Internet]. 2004;5:108–14.

  27. Clinical and Translational Science Awards Consortium Community Engagement Key Function Committee Task Force on the Principles of Community Engagement. Principles of Community Engagement [Internet]. NIH Publication No. 11-7782; 2011. http://www.atsdr.cdc.gov/communityengagement/.

  28. Ahmed SM, Palermo A-GS. Community engagement in research: frameworks for education and peer review. Am J Public Health [Internet]. 2010;100:1380–7.

  29. Butterfoss FD, Goodman RM, Wandersman A. Community coalitions for prevention and health promotion: factors predicting satisfaction, participation, and planning. Health Educ Q. 1996;23:65–79.

  30. Goodman MS, Ackermann N, Bowen DJ, Panel D, Thompson VS. Reaching consensus on principles of stakeholder engagement in research. Prog Community Health Partnersh. 2020;14:117–27.

  31. Thompson VLS, Leahy N, Ackermann N, Bowen DJ, Goodman MS. Community partners’ responses to items assessing stakeholder engagement: cognitive response testing in measure development. PLoS ONE. 2020;15:e0241839.

  32. Hamilton CB, Hoens AM, McQuitty S, McKinnon AM, English K, Backman CL, et al. Development and pre-testing of the Patient Engagement In Research Scale (PEIRS) to assess the quality of engagement from a patient perspective. PLoS ONE. 2018;13:e0206588.

  33. Khodyakov D, Stockdale S, Jones A, Mango J, Jones F, Lizaola E. On measuring community participation in research. Heal Educ Behav [Internet]. 2013;40:346–54. https://doi.org/10.1177/1090198112459050.

  34. Kagan JM, Rosas SR, Siskind RL, Campbell RD, Gondwe D, Munroe D, et al. Community-researcher partnerships at NIAID HIV/AIDS clinical trials sites: insights for evaluation & enhancement. Prog Community Health Partnersh. 2012;6:311–20.

  35. Peterson JW, Lachance LL, Butterfoss FD, Houle CR, Nicholas EA, Gilmore LA, et al. Engaging the community in coalition efforts to address childhood asthma. Health Promot Pract [Internet]. 2006;7:56S-65S. https://doi.org/10.1177/1524839906287067.

  36. Streiner DL, Norman GR, Cairney J. Health measurement scales: a practical guide to their development and use. Oxford: Oxford University Press; 2015.

  37. Bartko JJ. The intraclass correlation coefficient as a measure of reliability. Psychol Rep. 1966;19:3–11.

  38. Iacobucci D, Duhachek A. Advancing alpha: measuring reliability with confidence. J Consum Psychol [Internet]. 2003;13:478–87.

  39. Nunnally J, Bernstein I. Psychometric theory. 1994 [cited 2021 May 8]. http://vlib.kmu.ac.ir/kmu/handle/kmu/84743.

  40. Arora PG, Krumholz LS, Guerra T, Leff SS. Measuring community-based participatory research partnerships: the initial development of an assessment instrument. Prog Community Heal Partnerships Res Educ Action [Internet]. 2015;9:549–60.

  41. Barge S, Gehlbach H. Using the theory of satisficing to evaluate the quality of survey data. Res High Educ. 2012;53:182–200. https://doi.org/10.1007/s11162-011-9251-2.

  42. Leiner DJ. Too fast, too straight, too weird: non-reactive indicators for meaningless data in internet surveys. Surv Res Methods [Internet]. 2019;13:229–48.

  43. Chew LD, Griffin JM, Partin MR, Noorbaloochi S, Grill JP, Snyder A, et al. Validation of screening questions for limited health literacy in a large VA outpatient population. J Gen Intern Med. 2008;23:561–6.

  44. Chew LD, Bradley KA, Boyko EJ. Brief questions to identify patients with inadequate health literacy. Fam Med. 2004;36:588–94.

  45. Fagerlin A, Zikmund-Fisher BJ, Ubel PA, Jankovic A, Derry HA, Smith DM. Measuring numeracy without a math test: development of the Subjective Numeracy Scale (SNS). Med Decis Mak. 2007;27:672–80.

  46. Mainous AG, Smith DW, Geesey ME, Tilley BC. Development of a measure to assess patient trust in medical researchers. Ann Fam Med [Internet]. 2006;4:247–52.

  47. Hall M, Camacho F, Lawlor JS, Depuy V, Sugarman J, Weinfurt K. Measuring trust in medical researchers. Med Care. 2006;44:1048–53.

  48. Bell-Eikins JB. A case study of a successful community-campus partnership: changing the environment through collaboration. Boston: University of Massachusetts; 2002.

  49. Center for the Advancement of Collaborative Strategies in Health. Partnership self-assessment tool: questionnaire [Internet]. 2002. https://atrium.lib.uoguelph.ca/xmlui/bitstream/handle/10214/3129/Partnership_Self-Assessment_Tool-Questionnaire_complete.pdf?sequence=1&isAllowed=y.

  50. National Collaborating Center for Methods and Tools. Partnership evaluation: the partnership self-assessment tool [Internet]. 2008 [cited 2019 Jan 8]. https://www.nccmt.ca/knowledge-repositories/search/10.

  51. Mattessich PW, Murray-Close M, Monsey BR, Wilder Research Center. Collaboration: what makes it work, a review of research literature on factors influencing successful collaboration. 2nd ed. St. Paul: Amherst H. Wilder Foundation; 2001.

  52. Derose K, Beatty A, Jackson C. Evaluation of community voices Miami: affecting health policy for the uninsured [Internet]. Santa Monica: RAND Corporation; 2004.

  53. Luke DA, Calhoun A, Robichaux CB, Elliott MB, Moreland-Russell S, et al. The program sustainability assessment tool: a new instrument for public health programs. Prev Chronic Dis [Internet]. 2014;11:130184.

  54. Von Elm E, Altman DG, Egger M, Pocock SJ, Gøtzsche C, Vandenbroucke JP. Policy and practice the strengthening the reporting of observational studies in epidemiology (STROBE) statement: guidelines for reporting observational studies*. Bull World Health Organ. 2007;045120:867–72.

  55. Teague S, Youssef GJ, Macdonald JA, Sciberras E, Shatte A, Fuller-Tyszkiewicz M, et al. Retention strategies in longitudinal cohort studies: a systematic review and meta-analysis. BMC Med Res Methodol [Internet]. 2018;18:151. https://doi.org/10.1186/s12874-018-0586-7.

  56. der Wiel AB, van Exel E, de Craen AJ, Gussekloo J, Lagaay A, Knook D, et al. A high response is not essential to prevent selection bias. J Clin Epidemiol [Internet]. 2002;55:1119–25.

  57. Kristman V, Manno M, Côté P. Loss to follow-up in cohort studies: how much is too much? Eur J Epidemiol [Internet]. 2003;19:751–60. https://doi.org/10.1023/B:EJEP.0000036568.02655.f8.

  58. Gustavson K, von Soest T, Karevold E, Røysamb E. Attrition and generalizability in longitudinal studies: findings from a 15-year population-based study and a Monte Carlo simulation study. BMC Public Health [Internet]. 2012;12:918. https://doi.org/10.1186/1471-2458-12-918.

  59. Bowen DJ, Ackermann N, Thompson VS, Nederveld A, Goodman M. A study examining the usefulness of a new measure of research engagement. J Gen Intern Med [Internet]. 2022;37:50–6.

Acknowledgements

This research was funded by the Patient-Centered Outcome Research Institute (PCORI), ME-1511-33027. All statements in this report, including its findings and conclusions, are solely the authors’ and do not necessarily represent the views of PCORI, its Board of Governors, or its Methodology Committee. The authors would like to thank Drs. Jonathan M. Kagan, Prerna G. Arora, and Mark Hall for sharing additional materials with us regarding their measures. In addition, the authors would like to thank the Disparities Elimination Advisory Committee of the Program to Eliminate Cancer Disparities at Siteman Cancer Center, the St. Louis Patient Research Advisory Board, and the members of the Delphi Panel for their thoughtful contributions to this work. The Washington University Recruitment Enhancement Core services are supported by the Washington University Institute of Clinical and Translational Sciences grant UL1TR002345 from the National Center for Advancing Translational Sciences (NCATS) of the National Institutes of Health (NIH). The content is solely the responsibility of the authors and does not necessarily represent the official view of the NIH. The data for this paper were collected using Qualtrics software. Qualtrics and all other Qualtrics product or service names are registered trademarks or trademarks of Qualtrics, Provo, UT, USA. https://www.qualtrics.com.

Funding

This research was funded by the Patient-Centered Outcome Research Institute (PCORI), ME-1511-33027. PCORI was not involved in the design of the study and collection, analysis, and interpretation of data or in writing the manuscript.

Author information

Authors and Affiliations

Authors

Contributions

The authors were each involved in several aspects of the research study including design (MG, VT, DB), survey development (NA, MG, VT), implementation (NA, MG, SJ), data analysis (NA), interpretation of the results (NA, MG, VT), drafting the manuscript (MG, NA, ZHC). All authors read and approved the final manuscript.

Corresponding author

Correspondence to Melody S. Goodman.

Ethics declarations

Ethics approval and consent to participate

The Institutional Review Boards at both Washington University in St. Louis and New York University approved all portions of this project.

Consent for publication

Not applicable.

Competing interests

The authors declare that they have no competing interests.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary Information

Additional file 1: Table S1.

Screening Questions on Involvement in Community Engagement in Research - A three item screening survey used to screen potential participants for the longitudinal surveys. Table S2. Classification of Comprehensive (32-item) REST Items by Categories of Engagement - Describes the REST scoring approach that is aligned with the five levels of engagement: Outreach and Education, Consultation, Cooperation, Collaboration, and Partnership.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated in a credit line to the data.

About this article

Cite this article

Goodman, M.S., Ackermann, N., Haskell-Craig, Z. et al. Construct validation of the Research Engagement Survey Tool (REST). Res Involv Engagem 8, 26 (2022). https://doi.org/10.1186/s40900-022-00360-y


Keywords

  • Research engagement
  • Stakeholder engagement
  • Validation
  • Survey measure
  • Construct validation
  • Convergent validity
  • Internal consistency