
Table 2 Shifra HCD evaluation score

From: Evaluating human-centred design for public health: a case study on developing a healthcare app with refugee communities

  Comments Points
Empathise and define
Moderately attempted: (2 points)
End users were engaged, felt respected, and were compensated for their participation in co-design sessions. (Compensation may be provided financially via cash or gift cards, as reimbursement of transportation costs to attend sessions, as provision of childcare during sessions, or by other means.)
Empathy exercises were undertaken to understand lived experience of end users.
• The organisational documents provided evidence of compensation, end user interviews and ethnographic work, CBPR, and empathy exercises, but there was no documentation regarding whether co-designers felt respected or recognised for their contribution; this information came instead from the surveys.
• All co-designers selected ‘agree’ or ‘strongly agree’ when asked whether they felt respected and whether they would encourage others to participate in a co-design session with the Shifra team.
• The representativeness of the end user co-designers was also not apparent in any of the organisational documents but emerged as a theme during the interviews.
• Co-designers’ views on how representative the end users were varied by the type of co-designer group the participant came from.
• SMEs and UX/programmers generally agreed that there was enough end user representation at the co-design sessions. As one computer programmer said: “We focused bringing on more and more people from the refugee and migrant community which is really good … I don’t think we had a shortage of that diversity … in terms of cultural background it was quite well represented.”
• The end users themselves felt that there were groups within the Arabic-speaking population that were unrepresented. Suggested groups included individuals who did not attend university, Arabic speakers with no or low English proficiency, middle-aged and late-middle-aged individuals, people with differing levels of proficiency with mobile technology, and refugees who had just arrived in Australia compared to refugees who had been living in Australia for some time.
• A more representative end user population would have accrued a higher score.
Ideate and design
Minimally attempted: (1 point)
Learnings from empathy exercises were compiled.
Pre-determined solution was minimally modified in response to what was learned from end users.
• An extensive document review verified that the Shifra team did not have a pre-conceived idea of what the prototype would be, and that the app’s features and structures arose in response to insights gained from the end user co-designers.
• The ideation stage took place within the design student teams and was tested at intervals with end users.
• Surveys and interviews with the co-designers revealed that one quarter of all survey participants marked ‘neither agree nor disagree’ to whether Shifra confirmed group consensus for either the problem statement or the solutions to be prototyped. The remaining three quarters indicated they ‘agree’ or ‘strongly agree’ that Shifra did, in fact, achieve this.
• Whilst the issue of consensus did not emerge as a theme across all interviews, one computer programmer articulated the problem this way: “I feel like sometimes there was a bit of disconnect between what [Shifra staff] wanted and maybe necessarily what the refugees wanted … they wanted a resource where they could find health information, locations specifically of hospitals, GPs, pharmacies … the actual health information, they would rather go directly to the source, rather than … reading it online.”
• Given the conflicting information received from the surveys and the interviews, and the lack of documentation, the evaluator reported difficulty scoring Shifra in this area.
• A lower score of 1 out of 3 was given to draw attention to this issue in the future.
• More thorough record keeping during this design stage will shed light on this process and the methods used in future HCD projects.
Prototype and test
Satisfied: (3 points)
Multiple iterations of prototypes and/or MVPs created and tested with end user population.
End user feedback incorporated into subsequent iterations of solution.
Solution validated with subject matter experts and/or existing literature.
• End users from different backgrounds tested and contributed to iterations of the app on multiple occasions and continue to do so to this day.
• SME co-designers and their feedback led to tangible changes in the app’s appearance, language accessibility, and functionality.
• Organisational documents alone were sufficient to verify these requirements.
Launch and share
Satisfied: (3 points)
Product launched.
User testing completed to understand users’ experience and satisfaction.
User feedback incorporated into plans for future iterations.
Process or product results shared with program staff, co-designers, and wider community.
• Shifra held a product release in August 2017 which included co-designers, partners, and funders, and presented preliminary findings from the CBPR projects.
Total score (max score of 12): 9