
Standardised data on initiatives—STARDIT: Beta version



Background and objective

There is currently no standardised way to share information about initiatives across disciplines and fields such as health, environment, basic science, manufacturing, media and international development. All problems, including complex global problems such as air pollution and pandemics, require reliable data sharing between disciplines in order to respond effectively. Current reporting methods also lack information about the ways in which different people and organisations are involved in initiatives, making it difficult to collate and appraise data about the most effective ways to involve different people. The objective of STARDIT (Standardised Data on Initiatives) is to address these limitations and inconsistencies in sharing data about initiatives. The STARDIT system features standardised data reporting about initiatives, including who has been involved, what tasks they did, and any impacts observed. STARDIT was created to help everyone in the world find and understand information about collective human actions, which are referred to as ‘initiatives’. STARDIT enables multiple categories of data to be reported in a standardised way across disciplines, facilitating appraisal of initiatives and aiding synthesis of evidence for the most effective ways for people to be involved in initiatives. This article outlines progress to date on STARDIT; current usage; information about submitting reports; planned next steps and how anyone can become involved.


Methods

STARDIT development is guided by participatory action research paradigms and has been co-created with people from multiple disciplines and countries. Co-authors include cancer patients, people affected by rare diseases, health researchers, environmental researchers, economists, librarians and academic publishers. The co-authors also worked with Indigenous peoples from multiple countries and in partnership with an organisation working with Indigenous Australians.

Results and discussion

Over 100 people from multiple disciplines and countries have been involved in co-designing STARDIT since 2019. STARDIT is the first open access web-based data-sharing system which standardises the way that information about initiatives is reported across diverse fields and disciplines, including information about which tasks were done by which stakeholders. STARDIT is designed to work with existing data standards. STARDIT data will be released into the public domain (CC0) and integrated into Wikidata; it works across multiple languages and is both human and machine readable. Reports can be updated throughout the lifetime of an initiative, from planning to evaluation, allowing anyone to be involved in reporting impacts and outcomes. STARDIT is the first system that enables sharing of standardised data about initiatives across disciplines. A working Beta version was publicly released in February 2021 (ScienceforAll.World/STARDIT). Subsequently, STARDIT reports have been created for peer-reviewed research in multiple journals and multiple research projects, demonstrating its usability. In addition, organisations including Cochrane and Australian Genomics have created prospective reports outlining planned initiatives.


Conclusions

STARDIT can help create high-quality standardised information on initiatives trying to solve complex multidisciplinary global problems.

Plain English Summary

All major problems, including complex global problems such as air pollution and pandemics, require reliable data sharing between disciplines in order to respond effectively. Such problems require evidence-informed collaborative methods, multidisciplinary research and interventions in which the people who are affected are involved in every stage. However, there is currently no standardised way to share information about initiatives and problem-solving across and between fields such as health, environment, basic science, manufacturing, education, media and international development. A multi-disciplinary international team of over 100 citizens, experts and data-users has been involved in co-creating STARDIT to help everyone in the world share, find and understand information about collective human actions, which are referred to as ‘initiatives’. STARDIT is an open access data-sharing system to standardise the way that information about initiatives is reported, including information about which tasks were done by different people. Reports can be updated at all stages, from planning to evaluation, and can report impacts in many languages, using Wikidata. STARDIT is free to use, and data can be submitted by anyone. Report authors can be verified to improve trust and transparency, and data checked for quality. STARDIT can help create high-quality standardised information on initiatives trying to solve complex multidisciplinary global problems. Among its main benefits, STARDIT offers those carrying out research and interventions access to standardised information which enables well-founded comparisons of the effectiveness of different methods. This article outlines progress to date; current usage; information about submitting reports; planned next steps and how anyone can become involved.




Background

Many problems facing life on earth transcend the capacity of any single discipline to address. For example, problems such as pandemics, air pollution and biodiversity destruction cannot be characterised solely as ‘public health’, ‘environment’ or ‘education’ problems [1, 2]. Solving such problems calls for holistic approaches [3] and will require governments, industry, research organisations and people around the world to work in partnership.

People need access to valid and reliable information to make informed decisions [4], which typically requires evidence. Depending on the context, this evidence-informed approach is called ‘research’, ‘evaluation’ [5], ‘international development’, ‘education’ or an ‘initiative’. Hereafter all of the above will be referred to as ‘initiatives’. For example, when deciding a response to a pandemic, standardised data can improve retrieval of relevant information which can be used to inform which affected individuals or organisations could be involved in the design of the response and which outcomes are most important [6]. This can include deciding which stakeholders should be involved in which tasks, such as prioritising outcomes.

In this article we explain how Standardised Data on Initiatives (STARDIT) builds on work to date by standardising a wide variety of data in a format applicable across multiple sectors, disciplines and languages. It is hoped that the creation of this evidence base will add to understanding and evaluating what works, for whom, why, and in what circumstances [7,8,9,10]. Hereafter, data generated by an initiative (including raw data), information about the data (meta-data) and information about the initiative will all be referred to as ‘data’ unless otherwise specified.

In 2020, the United Nations Secretary-General stated that ‘purposes that involve data and analytics permeate virtually all aspects of our work in development, peace and security, humanitarian, and human rights’, encouraging ‘everyone, everywhere’ to ‘nurture data as a strategic asset for insight, impact and integrity—to better deliver on our mandates for people and planet’ [11]. Similarly, the United Nations’ Paris Agreement highlighted the critical role of ‘sharing information, good practices, experiences and lessons’ in response to preventing irreversible climate change [12]. While organisations such as Cochrane (health) and The Campbell Collaboration (social sciences) are working to create high-quality systematic reviews of medical, social and economic initiatives, there remain limitations to the data available for such reviews. Following a recommendation from the Organisation for Economic Co-operation and Development (OECD), successful data sharing initiatives in biodiversity exist, such as the Global Biodiversity Information Facility (GBIF) [13]; however, there remain limitations and accessibility issues in sharing and standardising biodiversity data [14, 15].

It is often essential to include those affected by initiatives in the design and delivery of those initiatives [16]. For example, with an initiative to respond to a pandemic, those creating and delivering the initiative and those affected by its outcome may be the same people. Forms of participatory action research where anyone can be involved in any aspect of research [17] (including amorphous terms such as ‘citizen science’ [18]) are increasingly recognised as crucial paradigms for solving such global problems, as they can help ensure that initiatives are aligned with the priorities of those affected [19,20,21]. However, while the importance of involving people is clear [7], evidence-informed methods of doing so are limited [9, 22,23,24,25,26].

A recent statement defined a role for the public in ‘data intensive’ health research [27]. While in the health research disciplines there are over 60 different tools or frameworks for reporting or supporting public involvement, most published tools or frameworks are not used beyond the groups that developed them, and none work across multiple disciplines or languages [28]. Current reporting methods also lack information about the ways in which different people are involved in initiatives, making it difficult to collate and appraise data about the most effective ways to involve different people. In addition, ‘citizen science’ and ‘participatory action research’ are blurring the lines between concepts such as ‘researcher’, ‘public’, ‘patient’ and ‘citizen’ [9, 29,30,31,32,33].

The STARDIT tool features standardised data reporting about initiatives, including who has been involved, what tasks they did, and any impacts observed. STARDIT was created to help everyone in the world find and understand information about collective human actions, which are referred to as ‘initiatives’. In addition to providing new standardised data categories for describing who was involved in which tasks of an initiative, STARDIT can also incorporate the many existing data standards (see Additional file 1 ‘Using Standardised Data on Initiatives (STARDIT): Beta Version Manual’), thus creating a unifying system for data hosting, linking and analysis. STARDIT can also report any different ‘interests’ of stakeholders and the ways power is shared between different stakeholders. The word ‘stakeholders’ here includes the public, those who have important knowledge, expertise or views that should be taken into account and others with a ‘stake’ in an initiative [34, 35].

Stakeholders can also include people who have financial, professional, social or personal ‘interests’. An ‘interest’ can include a kind of commitment, goal, obligation, duty or sense of connection which relates to a particular social role, practice, profession, experience, medical diagnosis or genomic variation [36]. These can include financial or other interests which may compete or conflict with ‘public interest’ [37]. For example, a systematic review found that industry-funded research is more likely to have outcomes favouring those with financial interests who are sponsoring the research [37, 38]. Similarly, people from certain sub-populations (including those from populations more likely to be exploited [39]), Indigenous peoples, and people affected by rare diseases may have a personal interest in initiatives relevant to those specific populations, separate from the ‘general public’ [9, 40,41,42]. For example, a person with a rare disease may have a personal ‘interest’ in research into a treatment for that disease [42]. STARDIT allows standardised reporting of stakeholders and any interests.

Sharing data in a consistent way may help ensure that benefits of initiatives are shared more equitably (for example, by improving accountability) [9]. In addition, sharing information about who ‘owns’ or controls access to data and how such data access decisions are made can help people make informed decisions about participating in research [42]. By reporting involvement in initiatives, STARDIT also allows acknowledgement of people otherwise excluded from the public record—such as patients, people donating personal data, medical writers, laboratory assistants, citizen scientists collecting or analysing data, custodians of traditional or Indigenous knowledge, translators, interviewers, coders and code reviewers.


The objective of STARDIT is to address current limitations and inconsistencies in sharing data about initiatives. The STARDIT tool features standardised data reporting about initiatives, including who has been involved, what tasks they did, and any impacts observed. STARDIT is designed to support a culture of partnership across disciplines and beyond, and is, wherever possible, aligned and interoperable with existing reporting models and frameworks such as those used in health, environment, manufacturing, publishing, government policy, education, arts and international development (see Table 1). In addition, the STARDIT Preference Mapping (STARDIT-PM) tool provides a standardised way to report information about different stakeholders’ preferences, including preferences for power-sharing and methods of involving people during an initiative (see section ‘Mapping preferences for involvement’).

Table 1 Example applications of STARDIT

In alignment with the UNESCO Recommendation on Open Science [43], the co-created values of the STARDIT project state that designs and code should always be open access and relevant licences should always be those which allow others to build on and improve the project, while maintaining central control over quality (such as the Creative Commons Attribution-ShareAlike 4.0 International license (CC BY-SA 4.0) and the GNU General Public License (GPL) 3.0 for code). STARDIT data will be released into the public domain (CC0) and integrated into Wikidata, which is a free and open knowledge base for collaboratively editing structured data [44]. The working Beta Version of STARDIT uses Wikidata to enable definitions to be co-created by contributors anywhere in the world, and therefore works across human languages, with interoperability with other platforms planned for future versions.
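As a sketch of what ‘human and machine readable’ could mean in practice, the following pairs plain-language labels with structured identifiers and a CC0 licence tag in a report that serialises cleanly to JSON. All field names are hypothetical illustrations, and the identifier is a placeholder; this is not the actual STARDIT Beta schema.

```python
import json

# Illustrative sketch only: the field names below are assumptions for
# illustration and do not represent the actual STARDIT Beta schema.
report = {
    "initiative": {
        "label": "Example cohort study",       # human-readable label
        "identifier": "QX-PLACEHOLDER",        # placeholder, not a real Wikidata QID
    },
    "license": "CC0-1.0",  # STARDIT data is released into the public domain
    "stakeholders": [
        {
            "role": "research participant",
            "tasks": ["prioritising outcomes"],
            "interests": ["personal: affected by the condition studied"],
        }
    ],
    "impacts": [],  # appended over the lifetime of the initiative
}

# Serialising to JSON keeps the same report readable by both
# humans and machines; round-tripping preserves the structure.
serialised = json.dumps(report, indent=2, sort_keys=True)
restored = json.loads(serialised)
```

Because the report is plain structured data, it could be updated throughout an initiative (for example, by appending to `impacts`) without changing its machine-readable shape.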

Potential applications

STARDIT’s potential applications are summarised in Table 1. Among the principal applications, STARDIT offers public access to standardised information which enables the comparison of methods with the most impacts, such as ways of involving stakeholders in initiatives. The United Nations defines assessing impact as ‘establishing cause and effect chains to show if an intervention has worked and, if so, how’ [45]. With more data being shared, STARDIT could support decision making when planning stakeholder involvement in initiatives, and enable more people to assess the rigour of impact assessments [45]. This will be achieved by structuring the data in a way to allow such comparisons between different outcomes and methods of involving people, including using machine learning algorithms (including artificial intelligence).

In addition, STARDIT could be used to share information which makes research more reproducible [46, 47], improving access to the information required to critically appraise research and evidence, building trust in processes such as the scientific method [48, 49], and facilitating appraisal of different knowledge systems, including Indigenous knowledge systems [50]. Such data sharing could also improve the translation of trusted, quality research and data by empowering people to both access and appraise relevant data. For example, improved access to standardised information about data and outcomes, in multiple languages, could facilitate more informed collaborations between researchers and those monitoring and protecting critically endangered species, particularly where there is no common language [51,52,53].

In addition, many industries use self-regulatory processes to govern industry practices, with examples including the Forest Stewardship Council (FSC), Marine Stewardship Council (MSC) [54], Certified B Corporations [55], and multiple Good Manufacturing Practice (GMP) guidelines. STARDIT could be used to improve public awareness of, and access to, the data already reported by such self-regulatory standards. Increased transparency could, for example, support people to make informed decisions when investing or buying products; automate analysis of data to facilitate such decisions, and improve accountability overall.

Defining ‘initiative’ and ‘involvement’

As STARDIT is designed to report data across disciplines, distinctions between concepts such as ‘intervention’, ‘research’, ‘project’, ‘policy’, ‘initiative’ (and similar terms) are of secondary importance compared with communicating ‘the aims or purposes of specified actions’; ‘who did which tasks or actions’; ‘are there competing or conflicting interests’, and the ‘outcomes from a specific action’. In this way, STARDIT can be used to report on any kind of collective action, which can include interventions, projects or initiatives—including a clinical study, education interventions or any kind of evaluation [5, 56, 57]. In this article, we use the word ‘initiative’ to describe any intervention, research or planned project which is a kind of collective human action. We define ‘involving’ people as the process of carrying out research, initiatives or interventions with people, rather than on them [58]. Involvement occurs when power is shared by researchers, research participants, and other relevant stakeholders (such as the public, industry representatives and experts). While meanings of these terms are often imprecise and can be used interchangeably, ‘involvement’ here is distinct from ‘engagement’. We consciously use ‘involvement’ rather than ‘engagement’ to emphasise active participation that goes beyond simply receiving information about initiatives. We use ‘engagement’ here to mean situations where information and knowledge about initiatives are shared, for example, with study participants who remain passive recipients of interventions [59,60,61].

Using and developing data standards

The current Beta Version of STARDIT maps terms and concepts using the Wikidata initiative (part of the Wikimedia Foundation) [36], which includes definitions (taxonomy), a way of describing relationships between concepts (ontology) [37], and a system to translate definitions and ontology between many languages. Examples of existing taxonomies include the National Library of Medicine’s Medical Subject Headings (MeSH), which are used extensively in multiple kinds of literature reviews [38].

How to involve people in combining or merging overlapping taxonomies for different subsets of data has been identified as an important question in taxonomy development [62, 63]. By using Wikidata, STARDIT can be used by anyone to store both publicly accessible data and metadata (data about data), and link to hosted structured linked data. While STARDIT is a novel element set, where possible it will also incorporate and map element sets from established data standards (see Table 6 in the Additional file 1 for examples of data standards which could be incorporated). This includes standard elements, value sets and controlled vocabularies [64]. The terms used in this paper are working terms, which will be progressively standardised over the lifetime of the project.

Structured Wikidata can help define terms and concepts clearly and unambiguously, in a transparent and open way. For example, colours in the spectrum are described by a standard numerical code in Wikidata, whereas the names of colours change according to different languages. In addition, people with different DNA variations will experience some colours differently. Similarly, the Wikidata entry for ‘patient’ has the human-readable definition of ‘person who takes a medical treatment or is subject of a case study’ (translated into 54 other languages) and a machine-readable definition consisting of dozens of semantic links to and from other Wikidata entries [39]. The terms ‘participant’ and ‘research participant’ are similarly coded, defined and translated. For terms that do not currently exist in Wikidata (for example, ‘biobank participant’), a definition can be contributed by anyone in any language, refined by other users, then coded and translated into multiple languages by Wikidata. Developing taxonomies and ontologies will be an ongoing process facilitated by the current Wikidata infrastructure, and may require creating additional tools to create more inclusive ways of involving people in developing taxonomies [40].
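The multilingual behaviour described above can be sketched as a small lookup structure: one stable identifier carries labels in several languages, with a fallback when a translation does not yet exist. The identifier `QX1` is a placeholder and the entry is illustrative, not the actual Wikidata record for ‘patient’.

```python
# Sketch of a Wikidata-style entry: one machine-readable identifier
# carrying human-readable labels in many languages. The identifier and
# entry below are illustrative, not real Wikidata records.
LEXICON = {
    "QX1": {  # placeholder identifier for the concept 'patient'
        "labels": {"en": "patient", "fr": "patient", "de": "Patient"},
        "description": (
            "person who takes a medical treatment "
            "or is subject of a case study"
        ),
    },
}

def label_for(concept_id, language, fallback="en"):
    """Return the label for a concept in the requested language,
    falling back to English when no translation exists yet."""
    labels = LEXICON[concept_id]["labels"]
    return labels.get(language, labels[fallback])
```

Because reports reference the identifier rather than any one language's label, the same report can be rendered in whichever languages contributors have translated.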

Methods and paradigms

Participatory action research

STARDIT development is guided by participatory action research (PAR) paradigms, which guide initiatives by aiming to involve all stakeholders in every aspect of the development and evaluation of an initiative [65, 66]. Participatory research is a form of collective, self-reflective enquiry undertaken by people in order to understand their situation from different perspectives [67]. Development has also been influenced by existing work in health research, including the multidisciplinary area of public health, which incorporates social, environmental and economic research. In a health context, participatory research attempts to reduce health inequalities by supporting people to be involved in addressing health issues that are important to them, in data collection and reflection, and ultimately in action to improve their own health [68]. At the core of participatory research is ‘critical reflexivity’. The process asks people involved to reflect on the causes of problems and possible solutions, take any actions that might improve the current situation, and evaluate those actions [66].

Rights-based paradigm

The United Nations (UN) Universal Declaration of Human Rights states that everyone should be able to ‘receive and impart information and ideas’ [69]. The UN also states that ‘democracy, development and respect for all human rights and fundamental freedoms are interdependent and mutually reinforcing’ [70]. To uphold human rights and ‘environmental rights’ [71], and for ‘the maintenance of peace’, people require ‘media freedom’ in order to ‘seek, receive and impart information’ [70], free of unaccountable censorship. STARDIT has been created in order to help anyone uphold these universal rights, by providing a way to share open access information in a structured way with a transparent process for quality checking.

Cultural neutrality

Values, assumptions, and ways of thinking and knowing are not shared universally. The participatory process used for developing STARDIT has required, and will continue to require, attempts to map cultural variations, in order to avoid unconsciously reinforcing particular (often ‘dominant’) [72] values. Transparent acknowledgement of differing values and perspectives is critically important, in particular when mapping whether different stakeholders’ values are complementary or opposing. A participatory process requires mapping all of these perspectives and, where possible, involving people in labelling different perspectives and values. For example, STARDIT has already been used to map the varying perspectives of multiple stakeholders when planning a multi-generational cohort study [73].

Many problems facing humans are shared by non-human life forms and ecosystems, including rapid climate change, air pollution and sea-level rise. If initiatives are to operate in inclusive, culturally neutral ways, reconsideration of the language used to describe relationships between humans, non-human life and the environment is essential [74]. Environmental and social sciences are challenging and redefining colonial-era concepts of what can be ‘owned’ as property and who ‘owns’ [74, 75]. As a result, ecosystems such as rivers, and non-human animals, are being assigned ‘personhood’ [76,77,78]. For example, a public consultation by a ‘dominant’ group might ask, ‘who owns the rights to the water in a river system?’ [72]. This question imposes the dominant group’s values on people who may not share the same concept of ‘ownership’. In this way, Western European legal and economic traditions are frequently incompatible with those of some Indigenous peoples [74, 79, 80].

The participatory process used for developing STARDIT has attempted to be transparent about how different stakeholders have been involved in shaping it in order to improve how the system can be used to map values and provide more culturally neutral guidance for planning and evaluating involvement in initiatives. However, it is acknowledged that it will be a challenging process to ‘de-colonialise’ and ‘de-anthropocise’ language and action [81, 82], as this may be perceived as a challenge to some people’s cultural attitudes which may not align with the United Nation’s universally enshrined principles of democracy, human rights and environmental rights. In addition, ongoing co-design will be required to ensure STARDIT is as accessible and inclusive as possible.

Development phases and methods

Both the STARDIT Alpha version (0.1) and the Beta version (0.2) have involved people from diverse disciplines and backgrounds in their development, as this is integral to STARDIT’s effectiveness (Fig. 2). STARDIT has been co-created using methodologies informed by PAR and by health research reporting guidelines [83]. PAR describes related approaches which involve experts (such as researchers), the public and other stakeholders “working together, sharing power and responsibility from the start to the end of the project” [84, 85].

The Alpha version of STARDIT (version 0.1) followed the recommendations of a 2019 scoping review led by Nunn et al., which mapped public involvement in global genomics research [9]. This review stated that ‘without a standardized framework to report and transparently evaluate ways people are involved, it will be difficult to create an evidence base to inform best-practice’ [9]. This review was followed by an additional review (conducted in 2020, led by Nunn et al., and to be submitted for publication in 2022), which mapped international guidance for planning, reporting and evaluating initiatives across multiple disciplines, and found 158 different reporting standards and reporting guidelines across disciplines (see the preliminary results in Table 7 of Additional file 1) [86]. This included 7 different biodiversity reporting standards and 15 different reporting standards for health research. STARDIT was also informed by a number of PAR projects [41, 87, 88], and a report for the Wikimedia Foundation by the charity Science for All [89].

The charity Science for All has hosted the co-creation process since 2019. Science for All is a charity based in Australia which supports everyone to get involved in shaping the future of human knowledge, with co-created values guiding their work [90]. Development was informed by a number of literature reviews and guidelines, with methods of involving people in the development of STARDIT guided by the Enhancing the Quality and Transparency of Health Research (EQUATOR) network’s approach to developing reporting guidelines [83, 91]. Methods of involving people included public events, online discussions and a consultation process. Owing to there being no formal budget for this project, the ability to actively involve people who cannot afford to volunteer their time for free was restricted. Details about how inclusive ways of involving people were used are included in the public consultation report [92]. This includes information about working with people from lower, middle and high-income countries, Indigenous peoples from Australia and Indonesia, people affected by cancer and rare diseases from Europe and the Americas, and people with expert knowledge of protecting endangered animals and eco-systems. The STARDIT project is actively seeking funding from organisations which align with our values, in order to ensure the project is as inclusive as possible.

The co-creation process is currently being supported pro-bono by Science for All, and has also received in-kind support from individuals and organisations worldwide. A modified Delphi technique was used at some stages, with this method to be reviewed when co-creating future versions [93, 94]. Many people were invited to provide feedback on all aspects of STARDIT, including its feasibility, design and implementation. They could comment anonymously using online forms and shared documents, in online discussion forums, via email or during face-to-face or video meetings.

After the feedback from the Alpha version was collated, work began on the Beta version. Between January 2020 and August 2021, multiple meetings and presentations took place to inform the Beta version, with some planned face-to-face involvement cancelled owing to the COVID-19 pandemic. Online activities where feedback on STARDIT was invited and given included interactive presentations by Jack Nunn to the WikiCite 2020 Virtual conference [95], Poche Centre for Indigenous Health [96], Ludwig Boltzmann Gesellschaft [97], La Trobe University [98], Australian Citizen Science Association [99] and Rare Voices Australia. In addition, between February 2021 and May 2021, a total of 27 people provided feedback on the Beta version via the online form and collaborative document. Over 7000 words of feedback and comments were provided via the online form, with 144 separate points, comments or corrections [92]. More detailed information about the consultation process for the Alpha and Beta versions up to May 2021 can be found in the 2020 and 2021 public consultation reports [92, 100] and in the Additional files 2, 3 and 4. Further information about who was involved in the Beta Version development and proposed future development phases can be found in the Additional file 1.

Science for All also hosts an online working group which continues to guide the development of STARDIT according to the terms of reference [101]. Anyone is welcome to join the working group, contribute to discussions, vote on decisions and help ensure alignment with other initiatives. STARDIT and all associated work and co-designed logos (see Fig. 1) are currently published under the Creative Commons Attribution-ShareAlike 4.0 International license (CC BY-SA 4.0) [102], with the quality of any future iterations being the responsibility of not-for-profit host organisations and future licensing decisions to be made transparent, with anyone invited to be involved. The co-design process so far is summarised in Fig. 2, with further information about the process available in Additional file 1.

Fig. 1

Fig. 2 STARDIT development

Version one implementation

Now that STARDIT Beta (version 0.2) has been published, a Beta version implementation article will be initiated, demonstrating the use of machine learning to generate STARDIT reports using mapped data from a number of international partner organisations. Work will then begin on the next version (version 1.0). Those involved with STARDIT development will disseminate information, gather feedback and recruit more people and organisations to participate as project partners and potentially funders. This stage is estimated to take between 2 and 3 years, at which point a working group will formally invite other appropriate partner organisations (such as the UN and WHO) to adopt the STARDIT framework. A Steering Group will be established to oversee and continually improve the STARDIT system. STARDIT will require continued work with publishers, research funders and governments to encourage adoption of the reporting tool. More detail on the proposed next stages can be found in the Additional file 1 in the section ‘Development phases’.


This section summarises the results from the process of co-designing STARDIT. Since the start of the project in 2019, over 100 people from multiple disciplines and countries have been involved in co-designing STARDIT. A working Beta version was publicly released in February 2021 (ScienceforAll.World/STARDIT). Subsequently, STARDIT reports have been created for peer-reviewed research in multiple journals and multiple research projects [41, 42, 87, 88, 103,104,105]. In addition, organisations including Cochrane [106, 107] and Australian Genomics [108] have created prospective STARDIT reports outlining planned initiatives. The Cochrane Council voted to use STARDIT to report planned work on creating a values statement [106, 107], while the Australian Genomics working group ‘Involve Australia’ voted to use STARDIT to report their planned work [108].

Beta version interface

A link to the working Beta version can be found at: ScienceforAll.World/STARDIT/Beta [109]. The data fields in the STARDIT system co-created during the process described in this article are summarised in Table 4. Table 5 presents the full version of the data fields. The ‘Minimum Contribution Reporting Form’ (MICRO) specifies the minimum information required to make a STARDIT report and these fields are highlighted in the table and marked with an asterisk (*).


Acknowledging those involved in reporting ensures accountability for accuracy and increases trust in report content. STARDIT reports must be completed by named people who are accountable for the data being reported. Ideally, a public persistent digital identifier (for example, an ORCID number) [110] or an institutional email address will be linked to authors’ names using Wikidata.

Reports cannot be completed anonymously, but STARDIT editors can redact author details from publicly accessible reports for ethical reasons (such as privacy or risks to safety).

Report authorship can be led by any stakeholder, including people associated with, or affected by, the initiative such as employees, researchers, participants, or members of the public. The affiliations of people formally associated with the initiative can be shared in a report.

Submission and editorial process

Reports can currently be submitted to STARDIT via a simple online form or emailed as a document file. At present, only data which is already publicly accessible can be included in a STARDIT report: STARDIT is a way of collaboratively structuring data, not a primary repository for data. Once a report is submitted, Editors can review content for quality control (for example, checking that publicly accessible URLs and URIs align with the data in the report), but will not critically appraise the initiatives or methods. The Editorial process currently runs in parallel with the WikiJournal process, involving selected Editors from these journals. While Editors will not approve the ethics of the initiative, a transparent process for considering ethical issues will be followed before a report is published. The Editors may consider questions such as, ‘Does data need to be redacted in order to prevent harm and protect or preserve life?’ or, ‘Is personal information being shared without consent?’ For more information about the Editorial process for reviewing data quality and ethical considerations, see the section ‘Editorial and peer review of STARDIT reports’ of the Additional file 1 ‘STARDIT Manual Beta Version’.

Once approved by the Editors, the STARDIT data will be entered into the database in a machine-readable format using structured data, based on the widely used Resource Description Framework (RDF) developed by the World Wide Web Consortium (W3C), which is used by Wikidata [111]. Each STARDIT report is assigned a unique Wikidata item number and all previous versions are navigable in a transparent history.
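As a rough illustration of this structured-data approach, a STARDIT report can be thought of as a set of subject-predicate-object triples, the basic unit of RDF. The sketch below uses plain Python tuples to mimic that shape; all item and property identifiers (such as `Q_EXAMPLE_REPORT` and `P_author`) are hypothetical placeholders for illustration, not real Wikidata entities.

```python
# Illustrative sketch only: a STARDIT report expressed as RDF-style
# subject-predicate-object triples. The item and property identifiers
# below are hypothetical placeholders, not real Wikidata identifiers.

report = "stardit:Q_EXAMPLE_REPORT"

triples = [
    (report, "rdf:type", "stardit:Report"),
    (report, "stardit:P_initiative", "stardit:Q_EXAMPLE_INITIATIVE"),
    (report, "stardit:P_author", "orcid:0000-0000-0000-0000"),
    (report, "stardit:P_task", "stardit:Q_DATA_COLLECTION"),
]

def objects_for(triples, subject, predicate):
    """Return every object stored against a subject-predicate pair."""
    return [o for s, p, o in triples if s == subject and p == predicate]
```

Because each statement is an independent triple, new facts (another author, another task) can be appended without restructuring the report, which is what makes the format well suited to transparent version histories.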

In future versions, it is proposed that stakeholders will be able to submit reports directly via an application programming interface (API), which will facilitate machine automation of STARDIT report creation. In addition, machine learning algorithms could be programmed to generate STARDIT reports from existing databases. As humans and machines submit reports, categories or meta-tags will be suggested (such as ‘patient’, ‘member of the public’), with the option of adding, or co-defining, new categories using the Wikidata system for structured data [112].
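The proposed automated workflow above could look something like the following sketch, which assembles a submission payload and naively suggests stakeholder meta-tags from free text. The payload shape and tag vocabulary are assumptions for illustration only; the API described here does not yet exist.

```python
# Hypothetical sketch of programmatic report submission via a proposed
# future API. The payload fields and tag vocabulary are illustrative
# assumptions, not part of any existing STARDIT interface.
import json

KNOWN_TAGS = {"patient", "member of the public", "researcher", "funder"}

def suggest_tags(description: str) -> set[str]:
    """Naively suggest stakeholder meta-tags mentioned in free text."""
    text = description.lower()
    return {tag for tag in KNOWN_TAGS if tag in text}

payload = {
    "initiative": "Example initiative",
    "description": "Co-designed with patients and a funder.",
}
payload["suggested_tags"] = sorted(suggest_tags(payload["description"]))
body = json.dumps(payload)  # what a client might POST to the proposed API
```

In practice the tag suggestion step would be done by trained models against the Wikidata vocabulary rather than substring matching, but the shape of the exchange (structured payload in, suggested categories out, human confirmation before publication) would be the same.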

The database will generate a unique version number for the report with a Digital Object Identifier (DOI). To create an immutable version, the report will also be archived using the Internet Archive (a charity which allows archives of the World Wide Web to be created, searched and accessed for free) [113]. Finally, the report will be assigned a status, with the level of data quality checking described as:

  • manually added, no human review (low quality checking—no DOI assigned)

  • machine added, no human review (low quality checking—no DOI assigned)

  • human review (medium quality checking—DOI assigned pending Editorial decision)

  • peer or expert reviewed, with publicly accessible sources for indicators and references checked (higher quality checking—DOI assigned pending peer or expert review).
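A minimal sketch of how the four statuses above might be assigned is shown below; the status wording and decision logic are a simplified reading of the proposed process, not a definitive implementation.

```python
# Simplified sketch of the proposed report-status levels described above.
# Wording and logic are illustrative, not a definitive implementation.

def report_status(added_by: str, review: str) -> tuple[str, bool]:
    """Return (status description, DOI eligible) for a report.

    added_by: "manually" or "machine"
    review: "none", "human", or "peer"
    """
    if review == "none":
        # Unreviewed reports, whether human- or machine-submitted,
        # receive no DOI.
        return (f"{added_by} added, no human review (low quality checking)", False)
    if review == "human":
        return ("human review (medium quality checking)", True)
    if review == "peer":
        return ("peer or expert reviewed (higher quality checking)", True)
    raise ValueError(f"unknown review level: {review}")
```

The key design point is that DOI eligibility tracks the review level, so a persistent identifier is only ever attached once a human has checked the data.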

Processes for data checking and assigning report status need to be further developed and agreed by the STARDIT working group. For example, a transparent process is needed for reports created about an initiative with no involvement from anyone associated with the project, or with involvement from only one subset of stakeholders. In such cases, the Editorial team might give a short period of time for any other stakeholders to be involved in checking and editing any information.

Updating reports

STARDIT will enable reports to be updated as initiatives progress over time. Updates will be reviewed by the STARDIT Editors. Once an update is approved, the system generates a new version number, while also preserving the original report. Updates might include, for example, information about involvement in the initiative, or about dissemination, translation, co-creation of new metrics to assess impacts, or longer-term outcomes [114].

A minimum dataset is required for a STARDIT report. This is called the Minimum Contribution Report (MICRO), and the required categories are highlighted in green and marked with an asterisk (*). Relevant Wikidata items and qualifiers for these fields are provided in the Additional file 1 in the section ‘Developing taxonomies and ontologies’ and on the Science for All STARDIT Beta webpage [109].
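Checking a draft report against such a minimum dataset is straightforward to automate, as in the sketch below. The field names used here are hypothetical examples for illustration; the actual MICRO fields are defined in Table 5 and on the STARDIT Beta webpage.

```python
# Illustrative sketch: validating that a draft report contains a minimum
# required dataset. The field names below are hypothetical examples, not
# the actual MICRO fields defined in Table 5.

MICRO_REQUIRED = {"initiative_name", "report_author", "who_was_involved", "tasks"}

def missing_micro_fields(report: dict) -> set[str]:
    """Return the required fields that are absent or empty in a draft report."""
    return {field for field in MICRO_REQUIRED if not report.get(field)}

draft = {"initiative_name": "Example initiative", "report_author": "A. Author"}
```

A submission interface could run such a check before a report reaches the Editors, so that incomplete reports are returned to authors automatically.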

Scope and applications

STARDIT is the first and only data-sharing system that enables standardised sharing of data about how people are involved in any type of initiative, across any discipline, including involvement in the planning, evaluation and reporting of initiatives. In addition, it allows comparison of both evaluation methods and any impacts or outcomes in relation to standardised terminology. The next section summarises the current usage of STARDIT, while Table 1 summarises the proposed scope and potential further applications.

Current usage

STARDIT provides a way to report data about who did which tasks in an initiative. STARDIT reports have already been used to describe a number of research projects, including data about who did which tasks, ethics approval, funding, methods and outcomes [41, 87, 88].

In health and medicine, STARDIT is already being used by an Australian Genomics working group to describe planned work to improve guidance on involving the public in genomic research [108]. The Cochrane Council voted to use STARDIT to outline a proposed process for co-creating a Cochrane values statement [107, 115]. Other projects which have used STARDIT reports include participatory action research projects involving a large cohort study of > 15,000 healthy, elderly research participants [103], a protocol for precision medicine for Aboriginal Australians [104], a group of patients and families affected by a rare immunological disorder [42], and a project involving the extended family of donor-siblings who share the same sperm-donor father [41, 105].

The WikiJournals, open access peer-reviewed journals whose articles are integrated into Wikipedia, are also using STARDIT [116]. For example, a STARDIT report has been created to share information about a WikiJournal of Medicine article about systematic reviews (with an associated integrated Wikipedia page) [116], including information about authors, editors and peer-reviewers [117]. This allows readers to critically appraise the source before deciding whether to use or share it.

An environmental research project, which works with citizen scientists to locate critically endangered species using eDNA, has also used STARDIT to report the initiative [118, 119]. Currently, the standardised data which makes up STARDIT reports is structured in Wikidata and presented in the STARDIT report format using WikiSpore, an experimental and supplementary space for developing potential Wikimedia projects, hosted on Wikimedia Cloud Services [120]. Figure 3 summarises how this standardised data is organised.

Fig. 3

STARDIT technical information summary

Further examples of how STARDIT can be used are provided in the Additional file 1, including: using STARDIT in genomic research to map phenotypes and report who was involved in helping define and describe them; providing data to critically appraise information sources (including public videos); reporting data about case studies consistently; creating ‘living systematic reviews’; and training machine learning from STARDIT data.


Across all disciplines, ‘plan’, ‘do’ and ‘evaluate’ are recognised as distinct stages of initiatives [121]. While there are many ways to involve different people in these stages, standardised reporting and thus evidence-informed methods of doing so are lacking [7, 9, 122]. Figure 4 describes how STARDIT can be used to map how people might be involved in designing, doing, reporting and evaluating initiatives, starting with ‘idea sharing’ (Fig. 4).

Fig. 4

Planning and evaluating initiatives using STARDIT

Reporting initiative design in STARDIT

Questions such as, ‘Who decides how people are involved?’, ‘Who is involving whom?’ and ‘What are people’s preferences for ways of working?’ can be difficult to answer, and this is an active area of research [42, 123]. For example, planning a healthcare initiative requires input from experts as well as from the people the initiative is intended to help [122]. Figure 5 summarises a way of using STARDIT to report the design process of initiatives, with Table 2 providing details about how involvement from different stakeholders can be reported at different stages. Table 2 also makes reference to the STARDIT Preference Mapping tool (STARDIT-PM). The section ‘Detailed reporting of design using STARDIT’ in the Additional file 1 ‘STARDIT Manual Beta Version’ provides more comprehensive information.

Fig. 5

Reporting initiative design in STARDIT

Table 2 Summary of reporting initiative design in STARDIT
Mapping preferences for involvement

Involving multiple stakeholders in designing how people should be involved in initiatives is considered best practice, as it may facilitate power sharing and improve the process overall [9, 136]. Current explanations of participatory research methods, and the language used to describe them, vary considerably. There is no agreed, consistent way to describe how people have been involved in an initiative, or to report the impacts of their involvement.

The STARDIT Preference Mapping (STARDIT-PM) tool provides a standardised way to report the preferences of multiple stakeholders. Anyone can be involved in creating a STARDIT report, which means that data on the impacts and outcomes of participation can be contributed by diverse stakeholders. Such reports will help researchers make informed decisions when planning participation in research.

For example, a recent study showed how a charity for people affected by a rare disease involved a small number of affected people in discussing preferences for how best to involve the wider community in future research prioritisation and planning [42]. Those involved had a good understanding of any specific needs or preferences for involvement, and shared preferences for the tasks (such as overseeing data access), method (facilitated discussions) and mode of involvement (online text-based discussion). The STARDIT-PM data about this process showed a preference for being involved using online discussions, and the STARDIT report stated that involving people influenced the way the charity planned to involve people in prioritising research in the future [87].

Examples of completed STARDIT-PM can be found in the additional files of a number of research projects [41, 87, 103]. Table 3 summarises questions which can be asked to map stakeholder preferences with respect to involvement in initiatives.

Table 3 Questions for mapping preferences for involvement

The first stage of preference mapping requires individuals to self-identify as belonging to a specific grouping of people. People from that grouping then share views on how people from other groupings could be involved (or which groupings should not be involved). For example, labels for such groupings could include:

  • only people with a professional role in the initiative

  • everyone (any member of the public who is interested)

  • anyone who might be indirectly affected by the initiative

  • only people who are directly affected by the initiative

  • only people who are participating in the initiative

  • only people with a financial interest in the initiative.

As a consistent mapping tool for use across all initiatives, STARDIT would allow both comparison of diverse stakeholder views and exploration of similarities and variations in relation to preferences for involvement. Used alongside other planning tools, this information could help align initiatives with stakeholders’ preferences. In this way, how stakeholders are involved throughout an initiative could be co-designed from the outset. Analysis of the data about preferences should involve stakeholders from multiple groupings to ensure that a diversity of perspectives are involved in assigning meaning to any data.


The STARDIT co-design process included co-defining shared values. It was agreed that the STARDIT project must be implemented in a way which encourages those involved to acknowledge cultural values and assumptions in a transparent way. For example, some people can be labelled as having human-centred (anthropocentric) values, which value natural resources in relation to the benefits they can provide for humans. In contrast, people who think the value of nature should be measured using non-human outcomes can be labelled ecocentric [137]. A participatory process requires mapping all of these perspectives and, where possible, labelling them.

The values for STARDIT were adapted from an existing values statement co-created by the charity Science for All [138], with values specific to the STARDIT project summarised in Table 4. Further information about the values is provided in the Additional file 1 ‘STARDIT Manual Beta Version’.

Table 4 Values of the STARDIT project

Discussion and future versions

Since the inception of this project in 2019, subsequent world events have included: the worst bushfires in Australian history [139], in parallel with misinformation campaigns funded by industries whose actions increase the severity and frequency of such fires [140, 141]; the COVID-19 pandemic and associated "infodemic" of misinformation [142]; continued violence inspired by misinformation [143,144,145]; and "infowars" of information control which continue to take place alongside wars fought with physical weapons [146]. The need for tools which provide a way for all global citizens (and their machines) to share, assess, verify, edit and link data has never been greater or more urgent. STARDIT is one such tool which, by using Wikidata, will make use of existing and trusted infrastructure and allow people to co-define types of data in multiple languages [147,148,149].

STARDIT is the first tool that enables sharing of standardised data about initiatives across disciplines. It enables reporting of who was involved, any impacts of stakeholders’ involvement, and outcomes of initiatives over time. This functionality addresses a serious limitation of the current peer-reviewed publication process in which articles are not easily updated. However, there is no single process for making decisions that would improve and refine the processes, language and taxonomies associated with reporting initiatives, including who was involved in which tasks [150]. Similarly, based on feedback from Indigenous community leaders, patient representatives and others, it is essential to ensure access to learning and development opportunities is available to support people to both access and create STARDIT reports. The STARDIT project therefore needs to continually appraise the inclusiveness and effectiveness of its multidisciplinary, multilingual system, including accessibility of interfaces. To achieve this, the project will continue to work with its partner organisations, including the Wikimedia Foundation, a global leader in this field (Table 5).

Table 5 Summary of STARDIT Beta Version data fields

The co-design process for STARDIT (hosted by the charity Science for All) ensured people from multiple organisations and countries were involved in both creating and refining STARDIT, ensuring it is usable and relevant in multiple disciplines. Consultation with experts, and source materials from around the world, informed the design of STARDIT. Co-authors come from disciplines including health research and services, environmental research and management, economics and publishing, with over 20 different institutions represented. Future versions should be informed by regular, systematic search, review and appraisal processes, using the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) data set [151], used for reporting in systematic reviews and meta-analyses.

While there are multiple methods for mapping values [152, 153], there is currently no agreed, standardised way to map the values (beliefs and personal ethics) of those involved in initiatives and those creating reports in STARDIT. Further research is needed to facilitate mapping of values and detect whether certain perspectives are being consciously or unconsciously excluded.

STARDIT seeks to be an easy-to-use way for people from multiple disciplines to share data about initiatives. However, amassing sufficient reports to create a useful database is estimated to take at least 5 years, and will likely require machine learning. For example, machine learning may be used in parallel with humans (for verifying data) to generate STARDIT reports from existing publicly accessible data at a scale and speed otherwise impossible for humans alone to achieve. In addition, both the potential and limits of machine learning should be transparently reviewed in relation to the field of adversarial machine learning [154]. Similarly, the process of creating ‘living systematic reviews’ from STARDIT reports is currently theoretical and would require significant development and rigorous testing to realise.

It is important to note that access to Wikidata is actively blocked by governments or internet service providers in some countries. While such censorship limits people’s ability to contribute or critically appraise data, STARDIT has been designed to be both interoperable with existing standards, and ‘future proofed’ by being system and language agnostic, to allow interoperability with existing and emerging data systems beyond Wikidata.

Science for All will continue to host the co-creation process and to monitor and evaluate the project. However, an open, transparent governance process that enables anyone to be involved in decision making and ongoing co-design of STARDIT will need to be established, and is proposed in more detail in the Additional file 1.

Ensuring that the STARDIT development process is inclusive and ethical, and that the database is quality assured, is paramount to ensuring that STARDIT is credible, useful and trustworthy. STARDIT currently relies on volunteers and pro-bono services from not-for-profit organisations. In the future, people should be paid for certain tasks, especially if the project is to avoid excluding the involvement of those from lower socio-economic backgrounds who may not be able to afford to volunteer their time. For the success and longevity of this project, a sustainable, transparently decided funding model needs to be established which ensures the independence of the data, the hosting process and the governance.


This article summarises work to date on developing Standardised Data on Initiatives (STARDIT), an open access web-based data-sharing system for standardising the way that information about initiatives is reported across diverse fields and disciplines. It provides a way to collate and appraise data about how different people have been involved in different tasks of multiple types of initiatives. The current usage by multiple initiatives demonstrates the usability of STARDIT, and will inform the next stages of development. In accordance with the principles of transparent participatory action research, the authors invite the involvement of any interested persons in developing and improving the next version of STARDIT, Version 1.0. Detailed and up-to-date information about STARDIT is available on the Science for All website (ScienceforAll.World/STARDIT) [155].

Availability of data and materials

Supplementary data can be found in the Additional files, with further data available in the associated STARDIT report for this article:

Change history

  • 22 July 2022

    The associated STARDIT report link for this article was not included in the Availability of data and materials section in the original publication; this article has been updated.

References


  1. WHO. One Health Definition. WHO. 2017. Accessed 12 Jan 2020.

  2. World Economic Forum. Annual report 2018–2019; 2019. Accessed 12 Jan 2020.

  3. von Bertalanffy L. General system theory: foundations, development, applications. New York: George Braziller; 1969.

  4. Rosling H, Rosling O, Rönnlund AR. Factfulness; 2018.

  5. Moore GF, Audrey S, Barker M, et al. Process evaluation of complex interventions: Medical Research Council guidance. BMJ. 2015.

  6. Global Commission on Evidence to Address Societal Challenges. The Evidence Commission report. 2022. Accessed 14 Apr 2022.

  7. Brett J, Staniszewska S, Mockford C, et al. Mapping the impact of patient and public involvement on health and social care research: a systematic review. Health Expect. 2014;17(5):637–50.

  8. Brett J, Staniszewska S, Mockford C, et al. A systematic review of the impact of patient and public involvement on service users, researchers and communities. Patient. 2014;7(4):387–95.

  9. Nunn JS, Tiller J, Fransquet PD, Lacaze P. Public involvement in global genomics research: a scoping review. Front Public Health. 2019;7:79.

  10. Marchal B, van Belle S, van Olmen J, Hoerée T, Kegels G. Is realist evaluation keeping its promise? A review of published empirical studies in the field of health systems research. Evaluation. 2012;18(2):192–212.

  11. United Nations. Data Strategy of the Secretary-General for Action by Everyone, Everywhere; 2020. Accessed 12 Nov 2020.

  12. United Nations. Paris Agreement; 2015. Accessed 15 Apr 2021.

  13. Global Biodiversity Information Facility (GBIF). What is GBIF? Accessed 11 May 2020.

  14. UN Environment Programme World Conservation Monitoring Centre. A review of barriers to the sharing of biodiversity data and information, with recommendations for eliminating them; 2012. Accessed 15 Apr 2021.

  15. Ganzevoort W, van den Born RJG, Halffman W, Turnhout S. Sharing biodiversity data: citizen scientists’ concerns and motivations. Biodivers Conserv. 2017;26(12):2821–37.

  16. Crowe S, Fenton M, Hall M, Cowan K, Chalmers I. Patients’, clinicians’ and the research communities’ priorities for treatment research: there is an important mismatch. Res Involv Engagem. 2015;1(1):2.

  17. Wikidata. Participatory action research. Wikidata. Accessed 20 Sept 2020.

  18. Wikidata. Citizen Science. Accessed 22 Apr 2022.

  19. Silvertown J. A new dawn for citizen science. Trends Ecol Evol. 2009;24(9):467–71.

  20. Auerbach J, Barthelmess EL, Cavalier D, et al. The problem with delineating narrow criteria for citizen science. Proc Natl Acad Sci U S A. 2019;116(31):15336–7.

  21. UNESCO. UNESCO Recommendation on Open Science; 2021.

  22. Staley K, Barron D. Learning as an outcome of involvement in research: what are the implications for practice, reporting and evaluation? Res Involv Engagem. 2019;5(1):14.

  23. Dawson S, Ruddock A, Parmar V, et al. Patient and public involvement in doctoral research: reflections and experiences of the PPI contributors and researcher. Res Involv Engagem. 2020;6(1):23.

  24. Dukhanin V, Topazian R, DeCamp M. Metrics and evaluation tools for patient engagement in healthcare organization- and system-level decisionmaking: a systematic review. Int J Health Policy Manag. 2018.

  25. Crocker JC, Ricci-Cabello I, Parker A, et al. Impact of patient and public involvement on enrolment and retention in clinical trials: systematic review and meta-analysis. BMJ. 2018;363:k4738.

  26. Regan de Bere S, Nunn S. Towards a pedagogy for patient and public involvement in medical education. Med Educ. 2016;50(1):79–92.

  27. Aitken M, Tully MP, Porteous C, et al. Consensus statement on public involvement and engagement with data-intensive health research. Under Rev. 2018.

  28. Greenhalgh T, Hinton L, Finlay T, et al. Frameworks for supporting patient and public involvement in research: systematic review and co-design pilot. Health Expect. 2019;22(4):785–801.

  29. Bergold J, Thomas S. Participatory research methods: a methodological approach in motion. Forum Qual Soc Res. 2012;13(1). Accessed 19 June 2017.

  30. Chambers R. The origins and practice of participatory rural appraisal. World Dev. 1994;22(7):953–69.

  31. Kullenberg C, Kasperowski D. What is citizen science?—A scientometric meta-analysis. Dorta-González P, ed. PLoS ONE. 2016;11(1):e0147152.

  32. Eitzel MV, Cappadonna JL, Santos-Lang C, et al. Citizen science terminology matters: exploring key terms. Citiz Sci Theory Pract. 2017;2(1):1.

  33. Vetter J. Introduction: lay participation in the history of scientific observation. Sci Context. 2011;24(2):127–41.

  34. Deverka PA, Lavallee DC, Desai PJ, et al. Stakeholder participation in comparative effectiveness research: defining a framework for effective engagement. J Comp Eff Res. 2012;1(2):181–94.

  35. Burton H, Adams M, Bunton R, et al. Developing stakeholder involvement for introducing public health genomics into public policy. Public Health Genom. 2009;12(1):11–9.

  36. Komesaroff PA, Kerridge I, Lipworth W. Conflicts of interest: new thinking, new processes. Intern Med J. 2019;49(5):574–7.

  37. Gross M, McGoey L. Routledge international handbook of ignorance studies (Routledge international handbooks); 2015.

  38. Lexchin J, Bero LA, Djulbegovic B, Clark O. Pharmaceutical industry sponsorship and research outcome and quality: systematic review. BMJ. 2003;326(7400):1167–70.

  39. Katz AS, Hardy BJ, Firestone M, Lofters A, Morton-Ninomiya ME. Vagueness, power and public health: use of ‘vulnerable’ in public health literature. Crit Public Health. 2020;30(5):601–11.

  40. Nunn JS, Scott CL, Stubbs JW, Cherry SF, Bismark MM. Involving the public in rare cancer care and research. Textb Uncommon Cancer. 2017.

  41. Nunn J, Crawshaw M, Lacaze P. Co-designing genomics research with a large group of donor-conceived siblings. Res Involv Engagem. 2021.

  42. Nunn JS, Gwynne K, Gray S, Lacaze P. Involving people affected by a rare condition in shaping future genomic research. Res Involv Engagem. 2021;7(1):14.

  43. Thorogood A, Zawati MH. International guidelines for privacy in genomic biobanking (or the unexpected virtue of pluralism). J Law Med Ethics. 2015;43(4):690–702.

  44. CC0 - Creative Commons. Accessed 16 Sept 2021.

  45. United Nations Evaluation Group. Detail of impact evaluation guidance document. 2013. Accessed 6 May 2020.

  46. Fanelli D. How many scientists fabricate and falsify research? A systematic review and meta-analysis of survey data. Tregenza T, ed. PLoS ONE. 2009;4(5):e5738.

  47. Clark TD. Science, lies and video-taped experiments. Nature. 2017;542(7640):139.

  48. Huber B, Barnidge M, Gil de Zúñiga H, Liu J. Fostering public trust in science: the role of social media. Public Underst Sci. 2019;28(7):759–77.

  49. Braun R. The public’s growing distrust of science? Nat Biotechnol. 1999;17(S5):14–14.

  50. UNESCO. Local knowledge, global goals. 2017. Accessed 21 Apr 2021.

  51. Linklater WL. Science and management in a conservation crisis: a case study with rhinoceros. Conserv Biol. 2003;17(4):968–75.

  52. Plotz RD, Grecian WJ, Kerley GIH, Linklater WL. Standardising home range studies for improved management of the critically endangered black rhinoceros. Roca AL, ed. PLoS ONE. 2016;11(3):e0150571.

  53. Ripple WJ, Wolf C, Newsome TM, et al. World scientists’ warning to humanity: a second notice. Bioscience. 2017;67(12):1026–8.

  54. Marine Stewardship Council. Marine Stewardship Council. 2020.

  55. B Lab. Certified B Corporations. 2020.

  56. Nordheim LV, Gundersen MW, Espehaug B, Guttersrud Ø, Flottorp S. Effects of school-based educational interventions for enhancing adolescents abilities in critical appraisal of health claims: a systematic review. Zhou X, ed. PLoS ONE. 2016;11(8):e0161485.

  57. Nsangi A, Semakula D, Oxman AD, et al. Effects of the informed health choices primary school intervention on the ability of children in Uganda to assess the reliability of claims about treatment effects: a cluster-randomised controlled trial. Lancet. 2017;390(10092):374–88.

  58. National Institute for Health Research. Briefing note eight: Ways that people can be involved in the research cycle. National Institute for Health Research. 2017. Accessed 5 June 2017.

  59. Health Research Authority. Public Involvement (Health Research Authority). 2019. Accessed 18 Jul 2019.

  60. Woolley JP, McGowan ML, Teare HJA, et al. Citizen science or scientific citizenship? Disentangling the uses of public engagement rhetoric in national research initiatives. BMC Med Ethics. 2016;17(1):1–17.

  61. Sousa S. Social participation in the assessment of health technologies for health systems: findings from a synthesis of qualitative evidence. 2017. Accessed 5 May 2021.

  62. Nickerson RC, Varshney U, Muntermann J. A method for taxonomy development and its application in information systems. Eur J Inf Syst. 2013;22(3):336–59.

  63. Gruber T. Ontology of folksonomy: a mash-up of apples and oranges. Int J Semant Web Inf Syst. 2007;3(1):1–11.

  64. Zhang Y, Ogletree A, Greenberg J, Rowell C. Controlled vocabularies for scientific data: users and desired functionalities. Proc Assoc Inf Sci Technol. 2015;52(1):1–8.

  65. Cook T, Abma T, Gibbs L, et al. Position paper no. 3: impact in participatory health research. 2020. Accessed 24 May 2020.

  66. International Collaboration for Participatory Health Research (ICPHR). Position paper 1: what is participatory health research? Version: May 2013. 2013. Accessed 13 June 2017.

  67. Kemmis S, Nixon R, McTaggart R. The action research planner: doing critical participatory action research. Singapore: Springer; 2014.

  68. Baum F, Macdougall C, Smith D. Participatory action research. J Epidemiol Community Health. 2006;60(60):854–7.

  69. United Nations. Universal declaration of human rights. 1948. Accessed 5 Feb 2018.

  70. United Nations General Assembly. Strengthening the role of the United Nations in enhancing the effectiveness of the principle of periodic and genuine elections and the promotion of democratization: report of the Secretary-General. 2013. Accessed 17 Apr 2019.

  71. UN Environment Programme. Why does environmental rights and governance matter? 2021. Accessed 5 Feb 2021.

  72. United Nations For Indigenous Peoples. Indigenous peoples at the UN. 2018. Accessed 16 Apr 2019.

  73. Nunn JS, Sulovski M, Tiller J, Holloway B, Ayton D, Lacaze P. Involving elderly research participants in the co-design of a future multi-generational cohort study. Res Involv Engagem. 2021;7(1):23.

  74. Bromley DW. The commons, common property, and environmental policy. Environ Resour Econ. 1992;2(1):1–17.

  75. Butler JRA, Tawake A, Skewes T, Tawake L, McGrath V. Integrating traditional ecological knowledge and fisheries management in the Torres Strait, Australia: the catalytic role of turtles and dugong as cultural keystone species. Ecol Soc. 2012.

  76. Feltman R. Orangutan granted rights of personhood in Argentina. The Washington Post. 2014. Accessed 17 Apr 2019.

  77. Hutchison A. The Whanganui river as a legal person. Altern Law J. 2014;39(3):179–82.

  78. O’Donnell E. Legal rights for rivers: competition, collaboration and water governance. 2018. Accessed 26 Apr 2021.

  79. Genome British Columbia. Genomics positively affects life, every day. 2019. Accessed 18 Jul 2019.

  80. Indigenous Corporate Training. Who owns traditional ecological knowledge? 2013. Accessed 18 Jul 2019.

  81. Rubis JM. The orang utan is not an indigenous name: knowing and naming the maias as a decolonizing epistemology. Cult Stud. 2020;34(5):811–30.

  82. Rubis JM, Theriault N. Concealing protocols: conservation, indigenous survivance, and the dilemmas of visibility. Soc Cult Geogr. 2020;21(7):962–84.

  83. Moher D, Schulz KF, Simera I, Altman DG. Guidance for developers of health research reporting guidelines. PLoS Med. 2010;7(2):e1000217.

  84. INVOLVE. Guidance on Co-Producing a Research Project. 2018. Accessed 14 Mar 2018.

  85. Macaulay AC. Participatory research: What is the history? Has the purpose changed? Fam Pract. 2016;351(3):117.

  86. Nunn J, Chang S. Guidance for planning, reporting and evaluating initiatives: a multidisciplinary scoping review [pre-print]. Accessed 24 Aug 2021.

  87. Nunn JS, Gwynne K, Gray S, Lacaze P. Involving people affected by a rare condition in shaping future genomic research. 2020.

  88. Nunn JS, Sulovski M, Tiller J, Holloway B, Ayton D, Lacaze P. Involving elderly research participants in the co-design of a future multi-generational cohort study.

  89. Nunn JS. WikiJournal Youth Salon Evaluation Report: Future Knowledge. Melbourne, Australia; 2019. Accessed 3 Sept 2020.

  90. Science for All. About - Science For All. Accessed 26 Apr 2021.

  91. The EQUATOR Network. Developing your reporting guideline. 2018. Accessed 7 May 2020.

  92. Nunn JS. Standardised Data on Initiatives (STARDIT) public consultation report—September 2019 to May 2021. 2021.

  93. Wang X, Chen Y, Yang N, et al. Methodology and reporting quality of reporting guidelines: systematic review. BMC Med Res Methodol. 2015;15(1):74.

  94. Banno M, Tsujimoto Y, Kataoka Y. The majority of reporting guidelines are not developed with the Delphi method: a systematic review of reporting guidelines. J Clin Epidemiol. 2020;124:50–7.

  95. Nunn JS. “Standardised Data on Initiatives” WikiCite/2020 Virtual conference – Meta. 2020. Accessed 6 May 2021.

  96. Nunn JS. Standardised Data on Initiatives (STARDIT), 9th annual research showcase program. 2020. Accessed 6 May 2021.

  97. Nunn JS. Involving People In DNA Research - presentation for Ludwig Boltzmann Gesellschaft. 2020.

  98. Nunn J. Genomics research and involving people—PhD presentation by Jack Nunn 13.10.20. 2020.

  99. Australian Citizen Science Association. EMCR seminar update. 2021. Accessed 6 May 2021.

  100. Nunn J. STARDIT public consultation report—September to December 2019. 2020. Accessed 30 Apr 2020.

  101. Science For All. STARDIT. Accessed 17 Jan 2021.

  102. Creative Commons — Attribution-ShareAlike 4.0 International — CC BY-SA 4.0. Accessed 27 Aug 2021.

  103. Nunn JS, Sulovski M, Tiller J, Holloway B, Ayton D, Lacaze P. Involving elderly research participants in the co-design of a future multi-generational cohort study. 2020.

  104. Cheng Y-Y, Nunn J, Skinner J, et al. A pathway to precision medicine for aboriginal Australians: a study protocol. Methods Protoc. 2021;4(2):42.

  105. Nunn JS, Crawshaw M, Lacaze P, et al. Co-designing genomics research with donor-conceived siblings (STARDIT Beta Version Report). Accessed 7 Sept 2021.

  106. Nunn JS, Smith M. Cochrane Values Statement (STARDIT Report). 2021. Accessed 30 Nov 2021.

  107. Nunn J. Cochrane values statement: a proposal for a co-creation protocol. 2021. Accessed 16 Mar 2022.

  108. Sherburn I, Boughtwood T, Nunn J. Involve Australia prospective STARDIT Report 2021–2023. 2021. Accessed 29 Sept 2021.

  109. Science For All. Beta - Science For All. Accessed 6 Aug 2021.

  110. ORCID. ORCID. 2020. Accessed 17 May 2020.

  111. The World Wide Web Consortium (W3C). RDF—Semantic Web Standards. Accessed 18 Jan 2021.

  112. Wikimedia Foundation. Wikidata. 2020. Accessed 17 May 2020.

  113. Internet Archive. Internet Archive: About IA. 2018. Accessed 2 Feb 2018.

  114. Staniszewska S, Denegri S, Matthews R, Minogue V. Reviewing progress in public involvement in NIHR research: developing and implementing a new vision for the future. BMJ Open. 2018;8(7):e017124.

  115. Nunn J, Smith M. STARDIT Report: Cochrane Values Statement. 2021.

  116. Nunn J, Chang S. What are systematic reviews? WikiJournal Med. 2020.

  117. STARDIT Report: what is a systematic review?—Wikidata. Accessed 21 Apr 2021.

  118. Nunn JS. Science for All—publicly funded research report (June 2018–December 2019). 2019. Accessed 12 May 2020.

  119. Nunn JS. STARDIT: Wild DNA. 2021. Accessed 6 Aug 2021.

  120. Wikispore. Wikispore FAQ.

  121. Taylor MJ, McNicholas C, Nicolay C, Darzi A, Bell D, Reed JE. Systematic review of the application of the plan-do-study-act method to improve quality in healthcare. BMJ Qual Saf. 2014;23(4):290–8.

  122. World Health Organisation. Declaration of Alma-Ata. 1978. Accessed 25 Jun 2018.

  123. Oliver JL, Brereton M, Watson DM, Roe P. Listening to save wildlife: lessons learnt from use of acoustic technology by a species recovery team. In: DIS 2019—proceedings of the 2019 ACM designing interactive systems conference. New York, NY, USA: Association for Computing Machinery, Inc; 2019, pp. 1335–48.

  124. Patient Focused Medicines Development. Patient engagement quality guidance tool. 2018. Accessed 13 Mar 2019.

  125. Collins M. PiiAF the public involvement impact assessment framework guidance. 2014. Accessed 4 Oct 2017.

  126. WHO. Supporting the use of research evidence (SURE). WHO. 2013. Accessed 5 Apr 2019.

  127. Stickley T, Basset T. Learning about mental health practice. New York: Wiley; 2008.

  128. Faulkner A. National involvement standards: involvement for influence. 2015. Accessed 30 Aug 2017.

  129. Hancock T, Bezold C. Possible futures, preferable futures. Healthc Forum J. 1994;(March/April):23–29.

  130. INVOLVE. National standards for public involvement. 2018. Accessed 29 Jan 2019.

  131. Pecl G, Gillies C, Sbrocchi C, Roetman P. Building Australia through citizen science. 2015. Accessed 24 Aug 2017.

  132. Brett J, Staniszewska S, Mockford C, Seers K, Herron-Marx S, Bayliss H. The PIRICOM study: a systematic review of the conceptualisation, measurement, impact and outcomes of patients and public involvement in health and social care research. 2010:1–292. Accessed 27 Mar 2017.

  133. BehaviourWorks Australia. The BehaviourWorks Method. 2019. Accessed 30 Apr 2020.

  134. European Patients’ Forum. The Value + Toolkit. 2013. Accessed 22 Aug 2016.

  135. Australian Research Council. Engagement and impact assessment pilot report. 2018. Accessed 5 Feb 2019.

  136. INVOLVE. Co-production in action: number one. Southampton; 2019. Accessed 16 Jul 2019.

  137. Gagnon Thompson SC, Barton MA. Ecocentric and anthropocentric attitudes toward the environment. J Environ Psychol. 1994;14(2):149–57.

  138. Science for All (charity). Science for All - Values. 2019. Accessed 7 Mar 2022.

  139. Australian Geographic. The worst bushfires in Australia’s history. Accessed 14 Apr 2022.

  140. Farrell J, McConnell K, Brulle R. Evidence-based strategies to combat scientific misinformation. Nat Clim Change. 2019;9(3):191–5.

  141. van Oldenborgh GJ, Krikken F, Lewis S, et al. Attribution of the Australian bushfire risk to anthropogenic climate change. Nat Hazards Earth Syst Sci. 2021;21(3):941–60.

  142. World Health Organisation. Infodemic. 2022. Accessed 14 Apr 2022.

  143. Carson A. Fighting fake news. 2021. Accessed 14 Apr 2022.

  144. Rampersad G, Althiyabi T. Fake news: acceptance by demographics and culture on social media. J Inf Technol Polit. 2019;17(1):1–11.

  145. Banaji S. WhatsApp vigilantes: an exploration of citizen reception and circulation of WhatsApp misinformation linked to mob violence in India. LSE Research Online. Accessed 14 Apr 2022.

  146. NATO. What is information warfare?

  147. Turki H, Shafee T, Hadj Taieb MA, et al. Wikidata: a large-scale collaborative ontological medical database. J Biomed Inform. 2019;99:103292.

  148. Turki H, Taieb MAH, Shafee T, et al. Representing COVID-19 information in collaborative knowledge graphs: a study of Wikidata. Semant Web. 2020.

  149. Waagmeester A, Stupp G, Burgstaller-Muehlbacher S, et al. Wikidata as a knowledge graph for the life sciences. Elife. 2020.

  150. Fazey I, Bunse L, Msika J, et al. Evaluating knowledge exchange in interdisciplinary and multi-stakeholder research. Glob Environ Change. 2014;25:204–20.

  151. Moher D, Liberati A, Tetzlaff J, Altman DG. Preferred reporting items for systematic reviews and meta-analyses: the PRISMA statement. PLoS Med. 2009;6(7):e1000097.

  152. Gradinger F, Britten N, Wyatt K, et al. Values associated with public involvement in health and social care research: a narrative review. Health Expect. 2015;18(5):661–75.

  153. INVOLVE. Public involvement in research: values and principles framework. 2015;(October). Accessed 14 Mar 2018.

  154. Biggio B, Roli F. Wild patterns: Ten years after the rise of adversarial machine learning. Pattern Recognit. 2018;84:317–31.

  155. Science For All. STARDIT - Science For All. 2020. Accessed 17 May 2020.

  156. Kharas H. Trends and issues in development aid. 2007. Accessed 15 Apr 2021.

  157. Sainsbury Institute. What is a Japanese Living National Treasure? 2020. Accessed 8 Sept 2020.

  158. Chambers LE, Plotz RD, Dossis T, et al. A database for traditional knowledge of weather and climate in the Pacific. Meteorol Appl. 2017.

  159. Malsale P, Sanau N, Tofaeono TI, et al. Protocols and partnerships for engaging Pacific Island communities in the collection and use of traditional climate knowledge. Bull Am Meteorol Soc. 2018;99(12):2471–89.

  160. Lane R, McNaught R. Building gendered approaches to adaptation in the Pacific. Gend Dev. 2009;17(1):67–80.

  161. Balakrishnan R. Rural women and food security in Asia and the Pacific: prospects and paradoxes. 2005.

  162. Open Data Institute. Data ethics maturity model: benchmarking your approach to data ethics. Accessed 22 Apr 2022.

  163. UNESCO. Convention for the protection of cultural property in the event of armed conflict with regulations for the execution of the convention. 1999. Accessed 3 May 2021.

  164. Organizing Engagement. Participatory action research and evaluation. 2022. Accessed 22 Apr 2022.

  165. WHO. Health technology assessment. 2022. Accessed 22 Apr 2022.

  166. Research Activity Identifier (RAiD). Accessed 16 Sept 2021.

  167. Frequently Asked Questions - Accessed 16 Sept 2021.

  168. ISRCTN Registry. Accessed 16 Sept 2021.

  169. Introduction to MeSH. Accessed 16 Sept 2021.

  170. Wilkinson MD, Dumontier M, Aalbersberg IjJ, et al. The FAIR guiding principles for scientific data management and stewardship. Sci Data. 2016;3:160018.

  171. Carroll SR, Garba I, Figueroa-Rodríguez OL, et al. The CARE principles for indigenous data governance. Data Sci J. 2020;19(1):1–12.

  172. CARE Principles for Indigenous Data Governance - Wikidata. Accessed 16 Sept 2021.

Acknowledgements


The authors wish to acknowledge the support given to the STARDIT project so far by the following people, who have agreed to be named as project supporters (in chronological order of project support):

(1) Sir Iain Chalmers, Cochrane (co-founder)

(2) Simon Denegri, The Academy of Medical Sciences (Executive Director)

(3) Tom Calma, University of Canberra (Chancellor)

(4) Mick Mullane, National Institute for Health Research, UK

(5) Chloe Mayeur, Sciensano

(6) Wannes Van Hoof, Sciensano

(7) Nicholas Gruen, King’s College London (Visiting Professor)

(8) Magdalena Wailzer (Open Innovation in Science Center, Ludwig Boltzmann Gesellschaft)

(9) Marc Tischkowitz, University of Cambridge (Professor of Medical Genetics)

(10) Sally Crowe, Crowe Associates (Director)

(11) Anubhav Nigam, Lake Zurich High School (Student)

(12) Jennifer Morris

(13) Monica Westin, Google (Google Scholar partnerships lead).

The authors would also like to thank:

Pauline Sanders for her valuable assistance in editing versions of the Beta manuscript

Pauline Stobbs for her assistance in editing versions of the abstract and plain English summary

Saloni Swaminathan and Aidan Levy for their valuable assistance in supporting the associated scoping review and collating data [86]

Paul Lacaze for his valuable input throughout the STARDIT project so far, including recommending the use of the Minimum Contribution Reporting (MICRO)

Dave McCall for his pro-bono support with designing logos and brand guides for STARDIT

Alicia Merrick (Secretary, Science for All) for her support throughout this project

The charity Science for All and all its volunteers, for hosting and facilitating online discussions and providing practical support to the project.


Funding

No funding has been received directly for this project. Many people have volunteered their time and expertise. Organisations which have provided in-kind support include the charity Science for All (hosting and facilitating the co-design process) and the EPPI-Centre, which hosted an in-person meeting in 2019.

Author information

Contributions



JN conceived of STARDIT, conceived of and led the co-design process, facilitated face-to-face events, facilitated online discussions and decision-making processes, analysed data, designed and carried out the evaluation of the process and the reporting (including STARDIT and GRIPP2 reports), wrote the manuscript, and collated and integrated feedback from public consultations. TS provided critical input and feedback on early versions of STARDIT, built the working STARDIT Beta version in Wikidata and Wikispore, and provided feedback on multiple versions of STARDIT. SC provided insight into early versions of STARDIT and provided feedback on multiple versions of STARDIT. RS, JE, SO, CT and JA provided input at the first STARDIT event and provided feedback on multiple versions of STARDIT. P-YH provided expert advice on terminology to describe legal and licensing information, including the use of open access licences. All other authors provided detailed feedback on the STARDIT manuscript.

Corresponding author

Correspondence to Jack S. Nunn.

Ethics declarations

Ethics approval and consent to participate

Not applicable.

Consent for publication

All authors, people named in acknowledgements and people identified in Additional files have given consent for this to be published.

Competing interests

No authors have declared any competing or conflicting interests.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary Information

Additional file 1.

This document contains additional information relevant to the article ‘Standardised Data on Initiatives (STARDIT) Beta Version’.

Additional file 2.

This document contains a STARDIT Beta version report about the co-creation process of the STARDIT Beta version.

Additional file 3.

This document contains a GRIPP report about the co-creation process of the STARDIT Beta version.

Additional file 4.

This document describes how the public were invited to be involved in giving feedback on ‘Standardised Data on Initiatives (STARDIT)’ between 2019 and 2021.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. The Creative Commons Public Domain Dedication waiver applies to the data made available in this article, unless otherwise stated in a credit line to the data.

About this article

Cite this article

Nunn, J.S., Shafee, T., Chang, S. et al. Standardised data on initiatives—STARDIT: Beta version. Res Involv Engagem 8, 31 (2022).
