Eliminating health and health care inequities is a longstanding goal of multiple United States health agencies, but overwhelming scientific evidence suggests that health and health care inequities persist in the United States, despite decades of research and initiatives to alleviate them. Because of its comprehensiveness, studying health inequities in the context of primary care allows for the use of multiple paradigms and methodologic approaches to understanding almost any state of health, disease, social challenge, or societal circumstance a patient or group of patients might face. We argue in this special communication that the many features and advantages of primary care research can make valuable contributions to reducing health inequity, and that scientists, journals, and funders should incorporate more primary care approaches and findings into their portfolios to better understand and end health inequity.
Health inequities are differences in health status or the distribution of health resources between population groups, arising from the social conditions in which people are born, grow, live, work, and age. 1 Eliminating health and health care inequities is a longstanding stated goal of multiple United States health agencies, but overwhelming evidence suggests that these inequities persist in the United States, despite decades of research and initiatives to alleviate them. This stasis has led several US federal organizations to call for advances in the methods and content of health inequities research. In 2012, the National Institutes of Health (NIH) convened a summit calling for a broadening of approaches to address health inequities, 2 and the National Institute on Minority Health and Health Disparities (NIMHD) has led visioning exercises to identify health inequity research priority areas. 3 , 4 While these renewed calls are needed, gaps remain in how health inequity is studied. US health inequities research has frequently been described as a subdiscipline of public health research, 5 and major federal health inequities initiatives have relied on surveys initially developed around the mid-20th century. 6 While a survey-based, public health approach benefits the understanding of regional and society-wide trends and intervention efforts to reduce inequities, definitive progress on fully understanding and eliminating health inequities remains unfulfilled. An essential avenue for understanding and addressing health care inequities may be to more directly observe how vulnerable populations interact with the US health care system. Primary care providers are the front door to this system, even in a nation without universal primary care access, and a wide swath of the United States population, including vulnerable populations, passes through that door at multiple points throughout their lives. 
7 , 8 The addition of primary care research perspectives, approaches, and data into health inequities research may be a crucial step toward understanding, improving, and ultimately helping end health inequity in the United States.
Primary care is first-contact health care that is comprehensive, continuous, and coordinated. 9 Primary care research is research done in the primary care environment 10 and therefore involves primary care patients, practitioners, perspectives, and priorities. Because of its comprehensiveness, studying health inequities in the context of primary care allows for the use of multiple paradigms and methodologic approaches to understanding almost any state of health, disease, social challenge, or societal circumstance patients might face. Further, while most research methods can be used in primary care, some methods, such as pragmatic trials, 11 , 12 dissemination and implementation research, 13 and patient-investigator partnerships, 14 are especially appropriate for primary care settings. Primary care delivery will not solve inequity alone, but observational and interventional research in the primary care setting is an essential and overlooked piece of the science needed to understand and reduce health inequity. Research in the primary care setting is a window onto disease and health care and onto a wide representation of the issues relevant to inequity: the experience of violence, poverty, addiction, racism, cultural factors, and disadvantage, among others, throughout a lifetime. 7 , 8 , 15 , 16 The beneficial relationships forged in primary care 17 , 18 may, in part, start to mitigate the effects of violence perpetrated by researchers in the past. 19 There have been calls to examine inequities over the life course, 20 and primary care disciplines, especially family medicine, are well positioned to do this given their comprehensive scope.
For the researcher interested in health inequities research, a context-specific discipline might elicit sampling concerns: does the US primary care environment contain enough patients experiencing inequities to produce meaningful understanding of these issues? Is studying those in the US primary care environment not just the study of care quality for a subpopulation with unlimited access to resources and all the health care they need? Are vulnerable people, with poor access to services and resources, represented in a context that requires access a priori? Historically, in the United States, these questions may have encouraged caution in evaluating health inequities in primary care settings, but this is rapidly changing. Even in a society without universal health care coverage, a large proportion of the population has contact with primary care providers; in national surveys, more than 85% of US individuals, across demographic groups, have at least some usual source of care (a doctor's office or clinic/health center, not the emergency department). 21 Specifically, vulnerable and marginalized populations do see primary care providers, especially in the nation's network of community health centers (CHCs). CHCs (clinics receiving federal funding to provide comprehensive primary care) serve approximately 30 million patients in the United States, roughly 10% of the country, regardless of citizenship, income, insurance status, language spoken, or other socioeconomic criteria, and especially serve low-income patients and racial/ethnic minorities. 8 Whether or not a patient accesses a CHC, numerous primary care networks, many of them now interconnected, widely represent those who might experience health inequities. 
For instance, primary care practices nationwide are increasingly part of data-connected networks: research networks, networks with shared administrative resources, and networks that share electronic health records and their functionalities for innovation and data aggregation. 22 , 23 These networks join the existing core resource of practice-based research networks (PBRNs) in primary care. 24 Though large connected primary care networks (data networks and PBRNs) may not have the representativeness of national surveys, they contain large patient samples with richer information on objectively measured health outcomes, care utilization, and, increasingly, robust social determinants of health data. 25 All of these data are routinely collected in primary care clinics, whereas in public health surveys they are challenging to collect or subject to recall bias. Amid calls for the integration of social care and the evaluation of social determinants of health into health care, 26 , 27 and calls for multi-level and “complex system analysis reflective of real-world settings” 4 to better understand inequity, the reports issuing these calls have missed an opportunity to explicitly recommend primary care research as a viable and necessary response. The primary care setting sits at the nexus of complex system factors, is already in the “real world” and therefore may have enhanced external validity, is where most social needs are witnessed in health care, and is where research into these aims is likely to be most effective. In addition, primary care data are already multi-level and routinely collected: multiple visit observations for a patient over time, patients nested within providers, providers nested within clinics, and clinics nested within neighborhoods, cities, and states. 22 , 25
Researchers interested in US health inequities should consider primary care settings a crucial avenue for understanding the full picture of health inequity and for developing real-world interventions to end it. The published opportunities of the NIMHD Health Disparities Science Visioning Initiative 3 all rely on studying the primary care environment; still, primary care is not explicitly mentioned in that list. We would continue the call for an enhanced partnership between primary care and public health that leverages the research strengths of both fields to optimally take advantage of these opportunities. First, this would mean a concerted and longitudinal integration of national US survey data with primary care-related datasets to even more fully capture the exposures, experiences, and care of those most at risk for poor health outcomes. Second, it would mean sustained collaboration in developing and testing scalable health-related interventions that span boundaries: boundaries between regions, between care settings, and between “community” and “health care” settings. In the long term, funding agencies and health systems could invest even more in primary care-centered networks to continue building data sources with the potential to aggregate significant data on the longitudinal experience and outcomes of vulnerable populations over the entire life course. While Congress has designated the Agency for Healthcare Research and Quality (AHRQ) as the “principal source of funding for primary care research,” the AHRQ's 2021 budget was 0.5% of the NIH's budget, 28 , 29 and a very small proportion of the NIH budget is awarded to disciplines in primary care research. 30 In response to all these issues, we make the following recommendations:
Funding agencies in the United States should increasingly fund research projects that utilize broad primary care settings to study health inequity.
Journal editorial boards should recognize the importance, scientific merit, and enhanced external validity of utilizing primary care settings in health inequity research, and should prioritize the inclusion of primary care researchers, especially those with experience in health equity research, on board rosters.
Researchers should consider multi-level, etiologic, and complex system analyses 4 and understand that primary care sits at a nexus of multi-level investigations into health inequity (primary care is the bridge between biology, behavior, health care, and community); researchers should utilize the existing multi-level data in primary care settings and networks for observational and intervention studies.
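The nested structure that the recommendations above refer to (visits within patients, patients within clinics, clinics within states) can be sketched in a few lines. The example below is purely illustrative; every record, field name, and value is invented for the sketch and does not come from any real primary care network, but it shows how the same visit-level outcome can be aggregated at each level of nesting:

```python
from collections import defaultdict

# Hypothetical visit-level records: multiple visits per patient,
# patients nested within clinics, clinics nested within states.
visits = [
    {"patient": "p1", "clinic": "c1", "state": "OR", "bp_controlled": True},
    {"patient": "p1", "clinic": "c1", "state": "OR", "bp_controlled": False},
    {"patient": "p2", "clinic": "c1", "state": "OR", "bp_controlled": True},
    {"patient": "p3", "clinic": "c2", "state": "WA", "bp_controlled": False},
    {"patient": "p3", "clinic": "c2", "state": "WA", "bp_controlled": False},
]

def rate_by(level):
    """Aggregate the visit-level outcome at one level of the nesting."""
    totals = defaultdict(lambda: [0, 0])  # key -> [controlled visits, all visits]
    for v in visits:
        totals[v[level]][0] += v["bp_controlled"]
        totals[v[level]][1] += 1
    return {k: controlled / n for k, (controlled, n) in totals.items()}

patient_rates = rate_by("patient")  # repeated observations within a patient
clinic_rates = rate_by("clinic")    # patients nested within clinics
state_rates = rate_by("state")      # clinics nested within states
```

In a real analysis these levels would feed a mixed-effects (multilevel) model rather than simple averages, but the nesting of the data is the same.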
Primary care providers treat, and health inequities affect, every organ, every system, and every malady, in every family and every community. Primary care researchers, along with public health researchers, may together bring about the understanding and interventions needed to end health inequity in the United States.
The authors acknowledge our home institutions and the patients and staff of the OCHIN Practice-Based Research Network, who support our work.
This article was externally peer reviewed.
Funding: National Institute on Aging and National Institute on Minority Health and Health Disparities.
Conflict of interest: None.
To see this article online, please go to: http://jabfm.org/content/33/5/849.full.
BMC Research Notes, volume 16, Article number: 255 (2023)
There is increasing focus on reviewing the societal impact of research through assessment and research excellence frameworks. These often link to financial and reputational incentives within the academic community. However, the timeframes for demonstrating impact using these approaches are often long, and they are not designed to show benefit to service collaborators who require evidence of improvement and change to their services more immediately. Impacts measured this way may also miss unintended, positive impacts that occur as by-products of research, or through the ‘ripple effect’ that research may have on practice. Importantly, demonstrating how research makes a difference can improve the research culture in services and can motivate service partners to become, and stay, involved in research. This article describes, and provides access to, a tool called VICTOR (making Visible the ImpaCT Of Research) that was developed by a community of practice involving 12 NHS organisations by blending evidence from the literature, practice, and service users. We describe the types of impact that have been collected with VICTOR, explore how collecting impact in this way might help research-practice partnerships and inform research methodologies, and suggest that it may be useful for showing impacts alongside, and shortly after, the research process.
There is increasing focus within the academic establishment on reviewing the societal impact of research through various assessment and research excellence frameworks. These often link to financial and reputational incentives in academia, for example the Research Excellence Framework in the UK ( https://www.ref.ac.uk/about/ ) and Excellence in Research for Australia ( https://www.arc.gov.au/excellence-research-australia/era-2023 ). Many governments invest in both applied and basic health research for impact and benefit. The Canadian Institutes of Health Research (CIHR), for example, aims to develop scientific knowledge into improved health, more effective health services and products, and an effective care system ( http://www.cihr-irsc.gc.ca/e/37792.html ). The UK-based National Institute for Health Research (NIHR) aims to provide health research that focuses on the needs of patients and the public [ 1 ] [ 2 ]. However, the timeframes for demonstrating impact from research findings are often very long [ 3 ], and many services want to show impact sooner than this, resulting in tensions in academic-practice partnerships [ 4 ] [ 5 ]. There is emerging evidence of benefits for healthcare organisations that take part in research delivery collaborations. For example, hospitals that are research active (defined in terms of linked citations in peer-reviewed journals) are associated with improved mortality rates [ 6 ], and quality of care and health outcomes positively correlate with the conduct of clinical trials in NHS organisations [ 7 ]. There is also an association between the research engagement of practitioners and improvements in performance and the process of care [ 8 ]. Boaz et al. [ 9 ] described these as the ‘by-products’ of research itself, but perhaps it is more than this: such benefits may help to support the motivation and engagement of services and increase collaboration with less engaged groups. 
There is also growing debate that research could be more immediately beneficial to healthcare providers if conducted in a co-productive manner [ 10 ] [ 11 ] [ 12 ]. Coproduction can stimulate ‘win-win’, mutually beneficial outcomes in the short term [ 13 ], especially for services and service users, and it aids the longevity of research collaborations and better reach into the healthcare system [ 14 ]. Indeed, a realist review focusing on research capacity development in health and care systems has highlighted how showing that research makes a difference can act as an important symbolic mechanism that increases research capacity and research culture in healthcare organisations [ 15 ]. Ideally, these impacts should be captured contemporaneously within the coproduction process.
With this context in mind, a community of practice (CoP) that included research and development (R&D) leaders in 12 NHS organisations in England completed a service development project to develop a tool enabling the collection of case studies that uncover the immediate impact of conducting research in their organisations. This is more than a ‘by-product’ for them: it contributes to quality assessment by the Care Quality Commission and establishes direct benefit to the organisation. The CoP was called ACORN (Addressing Capacity in Organisations to do Research Network), and it worked with two NIHR partnerships: the Collaboration and Leadership in Applied Health and Care for Yorkshire and Humber (CLAHRC YH) and the NIHR Clinical Research Network Yorkshire and Humber (CRN YH).
VICTOR aimed to identify impact where it matters in the NHS, its services, and the people within them, and to create a resource to support NHS Trusts in capturing and showing how applied research projects can have an impact within the organisation. Two senior NHS managers (JH and NJ) were seconded into the NIHR partnership to develop the VICTOR approach. Areas of impact were developed by collecting and organising information from a range of sources, including a workshop with ACORN members to identify areas they thought were important and that made a difference to services when conducting research. The particular focus was on how undertaking research can make a difference in healthcare organisations and the wider health system.
A scoping literature review was conducted with the aim of understanding the current landscape of research impact tools and mapping out the published tools available for capturing research impact [ 16 ]. Keywords were used to systematically search the published literature to identify research, policy, and research impact tools relevant to the project. Online databases such as CINAHL and Medline were iteratively searched, as well as grey literature. Reports, tools, and studies detailing research impact tools were exported to a reference manager so that they could be analysed. NJ and JH then screened the papers to ensure they were relevant to the project. A spreadsheet was created to list the research impact tools and extract data on the key domains of impact. NJ and JH were interested in where the research impact tools were similar, any gaps, and the relevance of the tools to the NHS context.
The tools were discussed with JC, and the merits of each were analysed. Findings from this review revealed gaps in the patient perspective on research impact and showed that many of the tools were designed for academic purposes or for contexts other than the NHS. Key tools of interest identified were:
Becker Medical Library Model [ 17 ].
Payback Framework [ 18 ].
Canadian Health Services Policy Research Alliance (CHSPRA) making an impact framework [ 19 ].
Research Excellence Framework [ 20 ].
Beyond citation analysis: a model for assessment of research impact (Sarli, Dubinsky, and Holmes) [ 21 ].
Stakeholder engagement in this project included working with ACORN, which comprised 12 NHS organisations: three teaching hospitals, five mental health trusts, and four acute trusts. Many of these trusts also include outreach into community and public health practice. Each trust has at least two representatives in ACORN, one being a senior R&D manager and the other a research-active or research-interested practitioner. Stakeholder engagement is a powerful tool for involving in research those who have lived insights and ideas about ways to improve healthcare [ 22 ].
Stakeholders in this project were involved in several ways:
12 ACORN NHS trusts met several times during the project to advise on progress and prototype tools.
Experts in the field were consulted about research impact domains via telephone calls.
Patient and carer representatives were consulted about prototype tools one to one and via patient research engagement groups. Feedback was also sought from a mental health charity and an older people’s charity.
Prototyping involved creating versions of the research impact tool and testing them with stakeholders. Prototyping is a helpful way to test a new tool in the early stages of development and design [ 23 ].
Feedback on the prototype tools was collated by NJ and JH and used to inform the next version of the tool.
Several patient representatives tested the tool by completing the questions, drawing on their experiences of participation in a recent study. This gave the authors an understanding of whether the questions were collecting sufficient and focused information. Feedback from patient and informal carer representatives shaped the prototype tool: the number of questions was reduced to make completing the questionnaire less onerous, and the language of the tool was revised to avoid professional jargon.
In the first prototype, the domains of the tool were created using the data extracted from the scoping review. NJ and JH extracted the key domains from other research impact tools. Information and insights from stakeholder consultation about what needed to be included in the tool were mapped onto the emerging domains. A master domain list was developed and tested with JC and the ACORN group. Each domain had a list of criteria defining its focus; for example, the ‘health benefits’ domain considers health benefits, safety, and quality improvements for research participants and carers. That is, as a result of taking part in the research, participants (patients, carers, or family) have improved health, a better experience of care, improved quality of life, and/or more equitable access to healthcare. This domain includes the following subgroups:
Health benefits: quality-of-life impacts; access to different treatments; care delivered differently; quality of information provided; health literacy; providing the same quality of care at a reduced cost.
Experience: during the study, were there any changes made to patient care that improved the experience of care for participants, carers, or family as part of, or as a result of, being in the study (for example, information giving, carer support, carer interventions, health literacy)?
Patient safety: are there any examples of improved governance and/or safety for patients taking part in the study? This includes improvements to the quality of research in terms of scientific quality, standards of ethics, and related management aspects (set-up, conduct, reporting, and progression towards healthcare improvements).
Social capital: are participants and carers better connected or part of any new networks as a result of taking part in the research? This includes self-help groups and increased social networks or activities.
By socialising the draft domains, we were able to gauge whether there were any gaps, duplications, or areas of impact that might have been missed. Feedback shaped version 2 of the list of domains, criteria, and prompts, which was then used to create questions relevant to the domain criteria. Open questions were developed to elicit information from research team members or patients [ 24 ].
The resulting areas of impact are given in Table 1 . There were six general domains of impact, with subgroups within each domain.
This framework was then used to develop a questionnaire that was modified and adapted over two rounds of piloting within the ACORN organisations. The final VICTOR questionnaire includes 26 questions organised in six sections reflecting the impact domains and domain subgroups described in Table 1 . A four-question tool was also developed for patients and members of the public, based on consultation with service user groups. The VICTOR tool can be accessed at https://www.e-repository.clahrc-yh.nihr.ac.uk/visible-impact-of-research/
As a service evaluation, the project did not require ethical approval through the HRA; however, it was conducted with the rigour and safeguards of research to protect participants’ data. The service evaluation was registered with the author’s organisation (STH) clinical effectiveness unit, and efforts were made to ensure that the project adhered to best practice guidance for service evaluation [ 25 ]. Consent to participate in the stakeholder consultations was given explicitly, verbally or in writing. Those agreeing to view the prototype tool and provide feedback were aware that their feedback would be used in project reports and dissemination and that all data would be anonymised.
Trusts who piloted the VICTOR tool shared their summary documents with the ACORN CoP. Many trusts reported that VICTOR had been helpful in identifying unanticipated and ‘hidden’ impacts of research, and documented changes that would otherwise have been overlooked, or not linked to research activity.
The impacts most frequently cited in the pilot sites included service and workforce changes, research capacity building, and health and experiential impacts for patients and carers. Intervention studies often, but not exclusively, produced changes in workforce and services. For example, practitioners who received training as part of developing skills for new interventions frequently highlighted how these skills were used in their practice more generally after the research project. These can be diverse skills, such as paramedics developing better airway management techniques or community nurses using cognitive behavioural therapy with patients who have long-term conditions. Sometimes elements of the research method were then incorporated into clinical pathways, for example using screening questionnaires in radiography services, or using autophotography in mental healthcare, where patients use photographs to express their world view or how they feel. The advantages of these techniques were demonstrated during research delivery and continued into everyday practice.
Many examples of impact on working practice in the healthcare system arose from working together on a research project, for example between pharmacy and a clinical area, or between primary and secondary care; these continued to benefit the services after the research had been completed. Such stories were insightful and meaningful to practitioners and managers and were used to promote research in the organisation and wider community, for example in newsletters and press releases. Importantly, some patients described impacts that were not mentioned by the research teams delivering the projects: patients felt closely monitored, felt that they were making a difference, and had a contact person, usually the research nurse, who provided support and information about care and services. The process of collecting the information through VICTOR sometimes helped internal cohesion. Informal feedback collected from the individuals or research teams testing the prototype tools (collated by NJ and JH) suggests that completing the VICTOR tool as a team facilitated reflexivity and team thinking about the benefits of the research project and enabled teams to reflect on its successes together. One participant remarked: “Teams don’t usually get together after a research project ends, everyone is getting on with the next project, so it was nice to take some time together and reflect on the project.”
Another participant commented on the value of the team coming together to complete the tool: “We collaborated across a pathway of care, medical, therapy and nursing staff, we would not normally get together to discuss the research, this was helpful as we could discuss changes and improvements in our systems and processes, applying the learning from the study.”
This strengthened relationships between research and clinical teams by recognising and documenting shared achievements, and it strengthened partnerships with researchers. The process also increased awareness of each other’s roles and enabled team members to share their views of impact.
During prototyping, notes of informal feedback suggested that it was more difficult than anticipated for the PI or research coordinator to track down members of the research team to ask them to complete a VICTOR questionnaire. This suggests that collecting feedback directly after the project concludes could make it easier to obtain; however, doing so could miss impacts that occur 3–6 months after the project has been completed.
The VICTOR tool can help to describe the impact of conducting research in healthcare organisations, and it offers fertile ground for further work and debate on its wider influence. The logic for VICTOR’s development was that by uncovering the impact of undertaking research ‘close to practice’, it could show immediate usefulness to clinicians, managers, and patients and stimulate a research culture, triggering a mechanism for change [ 26 ]. A report on enabling staff to do research in NHS organisations [ 27 ] highlights that feedback on research impact is an enabler that promotes a research culture and encourages positive attitudes and values towards research. This may be especially beneficial in supporting research collaborations within the wider ‘research ecosystem’, particularly in social and community care, where research capacity is needed and where immediate, practical benefits are important [ 28 ].
There is a growing body of support and funding for long-term research and practice collaborations, such as the CLAHRCs in England and the Hunter New England Population Health research-practice partnerships [ 29 ]. These partnerships provide an opportunity to produce co-benefits for researchers, but there is currently no systematic evidence on how to identify immediate benefits to service partners [ 30 ], including methods to capture intended and unintended outcomes that are context-dependent [ 31 ]. VICTOR could provide a basis for this. It has been argued that impact should be recognised in the eyes of the end-user and tailored to the context where impact should occur [ 32 ] [ 33 ], and we have certainly found that hidden benefits are uncovered through using the tool. VICTOR is completed contemporaneously with, or shortly after, the research, and so shows immediate benefits that complement the longer-term impacts of research collected in academic research assessment frameworks.
VICTOR also has the potential to determine which research methods and methodologies are valuable to different care provider partners and to help assess impact across different models of conducting research [ 30 ] [ 29 ]. Context, for example where coproduction is used in research, can influence both process and outcomes [ 5 ]. Using VICTOR, we have found that both the process and the outcomes of research can have a positive ‘ripple effect’ on service provider organisations further down the pathways to impact, as others have also found [ 34 ]. A body of knowledge accumulated through VICTOR use might help to inform coproduction partnerships, providing win-win scenarios linked to process as well as outcomes in research.
We acknowledge that this tool was coproduced with managers, practitioners, and service users in the NHS, which is both a strength and a limitation. It was certainly reported to be useful by the ACORN group, and it has been downloaded by hundreds of healthcare organisations. However, it would be beneficial to see whether it is useful across the health and care system, or in other countries, as there may well be cultural differences in what counts as benefit. This calls for more international work and comparison, and for incorporating tools like VICTOR into the research process itself. The optimum timeframe for completing VICTOR was not explored during this evaluation. We hope that by sharing our experience of, and access to, VICTOR we can establish transferability, open dialogue with other partners, and provide opportunities to explore the mechanisms of impact of research in healthcare organisations.
Post-development note.
The VICTOR tool and process was made available at https://hseresearch.ie/wp-content/uploads/2021/04/VICTOR-pack.pdf in February 2019, and to date 200 organisations have requested a pack. A web-based version has been developed and is available at https://sites.google.com/nihr.ac.uk/victor/home and https://victorimpacttool.net/. For further information on accessing the online tool please contact [email protected]
Data sharing not applicable to this article as no datasets were generated or analysed during the current study.
National Institute for Health Research
Department of Health
Department of Health and Social Care
Making Visible the ImpaCT Of Research
Community of Practice
Addressing Capacity in Organisations to do Research Network
Applied Health and Care for Yorkshire and Humber
Clinical Research Network Yorkshire and Humber
Department of Health. Best research for best health: a new national health research strategy. London: Department of Health; 2006. Available at https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/568772/dh_4127152_v2.pdf.
Department of Health and Social Care. Best Research for Best Health: the next chapter. London: Department of Health and Social Care; 2020. Available at https://www.nihr.ac.uk/documents/about-us/best-research-for-best-health-the-next-chapter.pdf.
Morris ZS, Wooding S, Grant J. The answer is 17 years, what is the question: understanding time lags in translational research. J R Soc Med. 2011;104(12):510–20.
Nutley S, Boaz A, Davies H, Fraser A. New development: what works now? Continuity and change in the use of evidence to improve public policy and service delivery. Public Money & Management. 2019;39(4):310–6.
van der Graaf P, Cheetham M, Redgate S, Humble C, Adamson A. Co-production in local government: process, codification and capacity building of new knowledge in collective reflection spaces. Workshops findings from a UK mixed methods study. Health Res Policy Syst. 2021;19(1):1–13.
Bennett WO, Bird JH, Burrows SA, Counter PR, Reddy VM. Does academic output correlate with better mortality rates in NHS trusts in England? Public Health. 2012. https://doi.org/10.1016/j.puhe.2012.05.021.
Jonker L, Fisher SJ. The correlation between National Health Service trusts' clinical trial activity and both mortality rates and care quality commission ratings: a retrospective cross-sectional study. Public Health. 2018;157:1–6. Available from: https://linkinghub.elsevier.com/retrieve/pii/S0033350618300015.
Wenke RJ, Ward EC, Hickman I, Hulcombe J, Phillips R, Mickan S. Allied health research positions: a qualitative evaluation of their impact. Health Res Policy Syst. 2017;15(1):1–11. https://doi.org/10.1186/s12961-016-0166-4.
Boaz A, Hanney S, Jones T, Soper B. Does the engagement of clinicians and organisations in research improve healthcare performance: a three-stage review. BMJ Open. 2015;5(12):e009415. https://doi.org/10.1136/bmjopen-2015-009415.
Rycroft-Malone J, Burton CR, Bucknall T, Graham ID, Hutchinson AM. Collaboration and co-production of knowledge in healthcare: opportunities and challenges. Int J Health Policy Manag. 2016;5(4):221–3. Available from: http://ijhpm.com/article_3152_629.html.
Greenhalgh T, Jackson C, Shaw S, Janamian T. Achieving research impact through co-creation in community-based health services: literature review and case study. Milbank Q. 2016;94(2):392–429. https://doi.org/10.1111/1468-0009.12197.
Castle-Clarke S, Edwards N, Buckingham H. Falling short: why the NHS is still struggling to make the most of new innovations. Nuffield Trust; 2017. Available from: https://www.nuffieldtrust.org.uk/research/falling-short-why-the-nhs-is-still-struggling-to-make-the-most-of-new-innovations.
Cooke J, Ariss S, Smith C, Read J. On-going collaborative priority-setting for research activity: a method of capacity building to reduce the research-practice translational gap. Health Res Policy Syst. 2015;13(1). https://doi.org/10.1186/s12961-015-0014-y.
Steens R, Van Regenmortel T, Hermans K. Beyond the research–practice gap: the development of an academic collaborative centre for child and family social work. Br J Soc Work. 2018;48(6):1611–26.
Cooke J, Gardois P, Booth A. Uncovering the mechanisms of research capacity development in health and social care: a realist synthesis. Health Res Policy Syst. 2018;16(1):1–22. https://doi.org/10.1186/s12961-018-0363-4.
Mak S, Thomas A. Steps for conducting a scoping review. J Grad Med Educ. 2022;14(5):565–7. https://doi.org/10.4300/JGME-D-22-00621.1 . PMID: 36274762; PMCID: PMC9580325.
Becker Medical Library Model for Assessment of Research Impact. Available at https://becker.wustl.edu/impact-assessment/model.
Donovan C, Hanney S. The 'Payback Framework' explained. Res Eval. 2011;20:181–3. https://doi.org/10.3152/095820211X13118583635756.
Canadian Academy of Health Sciences. Making an Impact: A Preferred Framework and Indicators to Measure Returns on Investment in Health Research. Report of the Panel on the Return on Investments in Health Research. Ottawa, Ontario; 2009.
Research Excellence Framework (REF). https://www.ref.ac.uk/.
Sarli CC, Dubinsky EK, Holmes KL. Beyond citation analysis: a model for assessment of research impact. J Med Libr Assoc. 2010;98(1):17–23. Available from: http://digitalcommons.wustl.edu/open_access_pubs.
Goodman MS, Ackermann N, Bowen DJ, Panel D, Thompson VS. Reaching Consensus on Principles of Stakeholder Engagement in Research. Progress in Community Health Partnerships: Research Education and Action. 2020;14(1):117–27. https://doi.org/10.1353/cpr.2020.0014 .
Lambeth G, Szebeko B. Prototyping public services. November 2011. http://www.guardianpublic.co.uk/prototyping-public-services.
O'Cathain A, Thomas KJ. 'Any other comments?' Open questions on questionnaires – a bane or a bonus to research? BMC Med Res Methodol. 2004;4(1):1–7.
https://arc-w.nihr.ac.uk/Wordpress/wp-content/uploads/2020/02/Full-guidelines-for-Best-Practice-in-the-Ethics-and-Governance-of-Service-Evaluation-Final02.pdf.
Cooke J, Gardois P, Booth A. Uncovering the mechanisms of research capacity development in health and social care: a realist synthesis. Health Res Policy Syst. 2018;16(1):1–22.
Dimova S, Prideaux R, Ball S, Harshfield A, Carpenter A, Marjanovic S. Enabling NHS staff to contribute to research: reflecting on current practice and informing future opportunities. Santa Monica, CA: RAND Corporation; 2018.
Lorenc T, Tyner EF, Petticrew M, Duffy S, Martineau FP, Phillips G, Lock K. Cultures of evidence across policy sectors: systematic review of qualitative evidence. Eur J Pub Health. 2014;24(6):1041–7.
Wolfenden L, Yoong SL, Williams CM, Grimshaw J, Durrheim DN, Gillham K, Wiggers J. Embedding researchers in health service organizations improves research translation and health service performance: the Australian Hunter New England Population Health example. J Clin Epidemiol. 2017;85:3–11.
Oliver K, Kothari A, Mays N. The dark side of coproduction: do the costs outweigh the benefits for health research? Health Res Policy Syst. 2019;17:33. https://doi.org/10.1186/s12961-019-0432-3.
Kislov R, Wilson PM, Knowles S, Boaden R. Learning from the emergence of NIHR Collaborations for Leadership in Applied Health Research and Care (CLAHRCs): a systematic review of evaluations. Implement Sci. 2018;13(1):111. https://doi.org/10.1186/s13012-018-0805-y.
Reed MS. The Research Impact Handbook. 2nd ed. Fast Track Impact; 2018.
Alla K, Hall WD, Whiteford HA, Head BW, Meurk CS. How do we define the policy impact of public health research? A systematic review. Health Res Policy Syst. 2017;15(1):84.
Jagosh J, Bush PL, Salsberg J, Macaulay AC, Greenhalgh T, Wong G, Cargo M, Green LW, Herbert CP, Pluye P. A realist evaluation of community-based participatory research: partnership synergy, trust building and related ripple effects. BMC Public Health. 2015;15(1):1–11. https://doi.org/10.1186/s12889-015-1949-1.
Thanks to the ACORN group for piloting and using the VICTOR tool.
Funding for the development of the tool was provided by NIHR YH CRN and NIHR CLAHRC both hosted by Sheffield Teaching Hospitals Trust.
Authors and affiliations.
Research Department, Mid Yorkshire Teaching NHS Trust, Pinderfields Hospital, Aberford Road, Wakefield, WF1 4AL, UK
Judith Holliday
Primary Care Sheffield, Fifth Floor, 722 Prince of Wales Road, Sheffield, S9 4EU, UK
Natalie Jones
School of Health Science, University of Sheffield, 30 Regent Street, Regent Court, Sheffield, S1 4DA, UK
All authors contributed to the development of the tool and the writing of the manuscript.
Correspondence to Judith Holliday .
Competing interests.
The authors declare no competing interests.
Ethical approval was not required as this is a service improvement project, registered at Sheffield Teaching Hospitals Trust.
This project was registered with Sheffield Teaching Hospitals NHS Foundation Trust as a service evaluation project with the clinical effectiveness unit (CEU), project number 8952, on the electronic database AIMS, and all methods were carried out in accordance with relevant guidelines and regulations. The project is titled 'VICTOR - Making Visible the Impact of Research in the NHS: developing a research impact tool for clinicians.' Natalie Jones was listed as the project lead. The sample period was 17/09/2017–07/01/2019 and the data collection period was 08/01/2018–31/12/2018. All participants in the evaluation gave informed consent to participate. They were provided with information about the project in writing and/or verbally, and had an opportunity to consider whether they would like to take part before consent was taken. Findings from participants were anonymised to protect confidentiality. All relevant procedures for service evaluations in Sheffield Teaching Hospitals were adhered to, and the project was supported by a co-ordinator from the clinical effectiveness unit to ensure relevant procedures were followed.
Not applicable.
Publisher’s note.
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Below is the link to the electronic supplementary material.
Supplementary material 2.
Rights and permissions.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ . The Creative Commons Public Domain Dedication waiver ( http://creativecommons.org/publicdomain/zero/1.0/ ) applies to the data made available in this article, unless otherwise stated in a credit line to the data.
Cite this article.
Holliday, J., Jones, N. & Cooke, J. Organisational benefits of undertaking research in healthcare: an approach to uncover impact. BMC Res Notes 16, 255 (2023). https://doi.org/10.1186/s13104-023-06526-5
Received: 10 February 2023
Accepted: 21 September 2023
Published: 05 October 2023
DOI: https://doi.org/10.1186/s13104-023-06526-5
ISSN: 1756-0500
Background: Evidence-based practice and decision-making have been consistently linked to improved quality of care, patient safety, and many positive clinical outcomes in isolated reports throughout the literature. However, a comprehensive summary and review of the extent and type of evidence-based practices (EBPs) and their associated outcomes across clinical settings are lacking.
Aims: The purpose of this scoping review was to provide a thorough summary of published literature on the implementation of EBPs on patient outcomes in healthcare settings.
Methods: A comprehensive librarian-assisted search was done with three databases, and two reviewers independently performed title/abstract and full-text reviews within a systematic review software system. Extraction was performed by the eight review team members.
Results: Of the 8537 articles identified, 636 (7.5%) met the inclusion criteria. Most articles (63.3%) were published in the United States, and 90% took place in the acute care setting. There was substantial heterogeneity in project definitions, designs, and outcomes. Various EBPs were implemented, with just over a third including some aspect of infection prevention, and most (91.2%) linked to reimbursement. Only 19% measured return on investment (ROI); of these, 94% showed a positive ROI and none showed a negative ROI. The two most frequently reported outcomes were length of stay (15%) and mortality (12%).
Linking evidence to action: Findings indicate that EBPs improve patient outcomes and ROI for healthcare systems. Coordinated and consistent use of established nomenclature and methods to evaluate EBP and patient outcomes are needed to effectively increase the growth and impact of EBP across care settings. Leaders, clinicians, publishers, and educators all have a professional responsibility related to improving the current state of EBP. Several key actions are needed to mitigate confusion around EBP and to help clinicians understand the differences between quality improvement, implementation science, EBP, and research.
Keywords: evidence-based decision making; evidence-based practice; healthcare; patient outcomes; patient safety; return on investment.
© 2023 The Authors. Worldviews on Evidence-based Nursing published by Wiley Periodicals LLC on behalf of Sigma Theta Tau International.
Measurement, Design, and Analysis Methods for Health Outcomes Research
by Lisa D. Ellis
With mounting pressure on health care organizations to provide high-quality care while containing costs, there’s been an increasing reliance on using health outcomes research to identify the most effective interventions and incorporate them into clinical practice. As such, health outcomes research can provide a valuable resource to help clinicians make strategic treatment choices that will ultimately benefit patients and systems on many levels.
“Health outcomes research studies the end results of health care services, providing evidence for the value of specific medical treatments or interventions that can be used to make better decisions and improve health care,” explains Marcia A. Testa, MPH, MPhil, PhD, who serves as Senior Lecturer on Biostatistics for the Harvard T.H. Chan School of Public Health. Testa and Donald C. Simonson, MD, MBA, MPH, ScD, of the Division of Endocrinology, Diabetes and Hypertension at Brigham and Women’s Hospital and Harvard Medical School, co-direct a program called Measurement, Design, and Analysis Methods for Health Outcomes Research offered by the Harvard Chan School’s Executive and Continuing Education (ECPE) division.
Testa points out that some of the most effective examples of health outcomes research consider patient-centered outcomes; incorporating the patient’s lifestyle, preferences, and voice in applied research is critical, since these aspects impact treatment compliance and, therefore, outcomes. “‘Patient-reported’ outcomes are a big part of ‘patient-centeredness,’” she stresses.
Health outcomes research can also play an important role in identifying disparities among different populations and guiding clinicians on taking action to help even the playing field for patients of all socioeconomic groups and backgrounds.
Further, patient-centered outcomes, including those reported by patients themselves, enable “people and their caregivers [to] communicate and make informed health care decisions, allowing their voices to be heard in assessing the value of health care options,” according to the Patient-Centered Outcomes Research Institute’s (PCORI) website.
With so much to gain, it’s essential that prospective investigators understand the scope of health outcomes research and recognize how to utilize it in their efforts in the most appropriate way.
Specifically, when researchers understand how to properly design and implement health outcomes research studies and to synthesize the data, the findings can be used in a multitude of situations. For instance, the information can help practitioners to select the most appropriate treatment options for individual patients that take into account their specific needs and situations. The findings can also help to identify gaps in treatment choices for different patient populations. In addition, the data can be used to determine any interventions that are over- or under-used by population groups in order to help providers develop evidence-based treatment strategies that bring value-added care, Simonson says.
But of course, even the most well-designed and executed health outcomes research projects will only be effective if clinicians and other medical providers know how to apply the data on patients in a clinical setting. In fact, the process of translating and using research findings is itself a relatively new field of research, called implementation science. This field studies methods to promote the adoption and integration of evidence-based practices, interventions, and policies into routine health care and public health settings. It is the natural “next step” in health outcomes research, and PCORI even offers special funding opportunities to previous PCORI research recipients for “Dissemination and Implementation” of their research results.
To help bridge some of the challenges that exist in applying health outcomes findings, Simonson, Testa, and their colleagues are currently working on a grant project funded by PCORI titled “Benchmarking the Comparative Effectiveness of Diabetes Treatments Using Patient-Reported Outcomes and Socio-Demographic Factors,” which was recently featured at the American Diabetes Association’s (ADA) 2017 annual meeting.
Diabetes currently affects 11.3 percent of Americans aged 21 and older, causing high blood glucose levels that can lead to a variety of serious health issues. Many patients must follow specific dietary restrictions and exercise regularly, alongside glucose monitoring and treatments including insulin. But while clinical trials measure the effectiveness of different treatment approaches in a general way, most physicians typically don't tailor their treatment approach to these individual variables.
“There are many factors that influence how well patients are able to comply with these requirements including age, education, income, and cultural and lifestyle issues,” Simonson says. Further, any side effects of treatment, or any other co-existing conditions, may also impact a person’s likelihood to follow a specific treatment regimen. Yet these aspects are largely unexplored when developing treatment strategies.
“Since diabetes patients act and respond differently to treatment due to many reasons, health care providers often cannot advise patients as to how they might respond given their personal characteristics simply because they do not have the required information,” Simonson points out.
The reason this crucial information is lacking is that “typically, no one clinical study can separate out the results by all the patient characteristics that might affect treatment,” Testa offers. “In addition, clinical drug trials do not typically measure how patients feel or how satisfied they are with their assigned treatment,” she says, adding, “In most clinical trials, the ‘true voice’ of the patient is usually silent.” To better capture this important component, Testa, Simonson, and their colleagues are pooling existing databases of diabetes patients with information gathered online and through social media.
“We will incorporate the results of our findings and analyses into a web-based application that will allow clinicians to predict how likely a patient will be to respond given their individual characteristics, and will allow patients and physicians to benchmark their progress against others with similar characteristics to improve the quality of care,” Testa says.
The hope is that, moving forward, more investigators and practitioners will recognize the importance of considering patients as individuals when developing treatment strategies to create a more personalized approach that will likely achieve better results.
The researchers are also using their findings to develop an online “toolkit” designed to educate physicians on how to interpret health outcomes data and put the findings into practice in the patient-care setting.
“A diverse and growing number of groups, including employers, health care delivery organizations, insurers, pharmaceutical companies, and government agencies, currently use actionable data provided by health outcomes research to guide their decisions about different treatment options and interventions,” Testa says. This widespread use should prompt prospective investigators to make a concerted effort to consider this element in future research projects. “By using health outcomes data to guide the decision-making process across organizations, this can increase the value of every dollar spent on health care,” she adds.
Harvard T.H. Chan School of Public Health offers Measurement, Design, and Analysis Methods for Health Outcomes Research, a program focused on designing, implementing, and analyzing health outcomes studies.
The National Institutes of Health (NIH), a part of the U.S. Department of Health and Human Services, is the nation’s medical research agency — making important discoveries that improve health and save lives.
Monica M. Bertagnolli, M.D., is the 17 th director of the National Institutes of Health, officially taking office on November 9, 2023.
NIH leadership plays an active role in shaping the agency's research planning, activities, and outlook.
The NIH is made up of 27 different components called Institutes and Centers.
The NIH Enterprise Directory (NED) is an electronic directory of people who work at the NIH.
NIH’s project to capture an oral history of the research experience.
For over a century, NIH scientists have paved the way for important discoveries that improve health and save lives.
September is Blood Cancer Awareness Month, and today is Legacy Giving Day. Giving hope for 60 years: Leukaemia & Lymphoma NI's legacy in the fight against blood cancer. As we celebrate 60 years of Leukaemia & Lymphoma NI's tireless work in the fight against blood cancer, we are honoured to share the powerful stories of those who have been directly touched by this disease. Through a series of compelling case studies, patients, family members and researchers open up about their personal journeys, offering a poignant look at the challenges they’ve faced, the resilience they’ve shown and the hope that continues to drive them forward.
A local father of two has been sharing the story of his wife’s fight against blood cancer to help highlight the importance of supporting local medical research funded by Leukaemia and Lymphoma NI.
Alison and Barry Williamson and their two young children Rhys and Mya lived in Tandragee, Co Armagh. Alison was a much-loved member of her community. She worked as a classroom assistant at Tandragee Primary School and was well-known for her enthusiasm, energy and mischievous sense of humour. She had a zest for life and was fiercely positive, no matter what challenges life threw at her.
“Alison was pretty much just the perfect person,” explained Barry. “She was truly an inspiration to everyone that knew her. Her family was her life but she was always helping others.”
Towards the end of 2014, Alison began to feel unwell. She saw her weight drop rapidly.
“Alison really started behaving out of character. Often she would come home from school and need to lie down. She was constantly feeling drained and was sleeping more and more. That’s when we went to the doctor.”
Alongside the fatigue, Alison started experiencing pain in her abdomen. Eventually she underwent an operation to remove her spleen. “At that point, we thought she had started to turn a corner” said Barry, “we were hopeful that might be it and we had found the root of the problem.”
Unfortunately, in August 2015 after several more tests, the family received news of Alison’s diagnosis of Hepatosplenic T-cell Lymphoma, a rare and aggressive form of blood cancer.
“When we received the news, Alison was steadfast and determined. Whilst I sat in the corner of the room distraught, she simply said ‘Ok, I’m going to fight this.’ That’s what kind of person she was. Sharing the news with our two children was obviously a very difficult thing to have to do. But Alison was unwaveringly positive.”
After her diagnosis, Alison endured 50 days of chemotherapy and eventually was earmarked for a stem cell transplant.
“It became a race against time to find a matching donor. Everyone in our family got tested. Both Alison and I were very lucky to have such close supportive families and friends and everyone rallied round us. As always, Alison took on the news of the transplant head on.”
Alison was treated at Craigavon Hospital, then moved to Dublin, and finally brought to Belfast City Hospital. At one stage she was taking 53 tablets a day.
“She never complained” said Barry, “it wasn’t in her nature. She was prepared to do whatever it took – undergo any treatments available – to give her the chance to spend more time with her family.”
However, on 14 May 2016, in the Intensive Care Unit of Belfast City Hospital and nine months after her initial diagnosis, Alison died. “She never gave up the fight and up until her last day I was still holding hope that she would pull through, but her body just simply couldn’t take any more.
"Alison is, without doubt, sorely missed by us and those close to her. When grief comes into your life it stays day and night. I found strength from my two children. They didn’t want to see me so sad all the time. They were grieving too and I had to make life good for them and teach them, as best I could, how to live life and find joy in things. That is why the charity and the work I try to do will make a difference to others. I take inspiration from my late wife and the fight she put up to live, and I hope she is proud of me for that.”
Since her passing, Barry, his friends and his family decided that raising money for Leukaemia & Lymphoma NI was the best way to honour her memory.
“It was Alison who started the charity work before she died, so it seemed only right to continue with it after she passed.
“Since then we’ve climbed the four highest peaks in the UK in the space of 48 hours and scaled part of the Alps, moving through three countries across three consecutive days. We’ve done glass walks, fire walks, held gala charity balls and staged mega balloon releases.
“Friends held coffee mornings and my then Rector, Dean Forster, and Ballymore Church all supported me, the children and the charity in so many ways. Alison’s Mum and friend did a sky dive, the school where Alison worked staged various fundraising events and we have done so many other things with the help of family and friends, too many to mention.
“Throughout it all, we’ve felt closer to Alison – it’s kept her with us. The fundraising is something we plan to continue and we hope others will consider donating or running their own fundraising activities. It’s such a worthy cause and the charity is funding crucial research.”
The money raised by Barry and his friends and family funded the Alison Williamson PhD Studentship in 2017, leaving a lasting legacy in blood cancer research. Dr Harmony Black completed her PhD in Repair Mechanisms in 2020 and is now working as a clinical scientist in the haematology department at Belfast City Hospital where she plays a vital role in screening and analysing patient samples.
Leukaemia and Lymphoma NI recently announced a special programme of activities to mark the 60th year of the charity and to raise funds for the fight against blood cancer.
Alongside this, the charity is calling for those who have experienced Leukaemia, Lymphoma, Myeloma, or other blood cancers, and their loved ones, to share their experiences online via the 'Share your story' page on the LLNI website.
The photos and extended captions gathered will form part of a special canvas presented online where members of the public can read the stories of people affected by blood cancer across the region.
Throughout September LLNI is holding a series of fundraising activities, culminating in the charity hosting a Black Tie & Diamonds Gala Ball at Titanic Belfast, where the winner of an ongoing raffle for a diamond pendant necklace will be selected at random.
Members of the public can share their story, make a donation or buy tickets for the diamond necklace raffle on the Leukaemia & Lymphoma NI website – www.llni.co.uk
Choosing to leave a gift to LLNI is a wonderful gesture to ensure that your legacy lives on through research long after you are gone. Visit www.llni.co.uk.
BMC Medical Research Methodology, volume 24, Article number: 205 (2024)
There has been a growing push to involve patients in clinical research, shifting from conducting research on, about, or for them to conducting it with them. Two arguments advocate for this approach, known as Patient and Public Involvement (PPI): to improve research quality, appropriateness, relevance, and credibility by including patients’ diverse perspectives, and to use PPI to empower patients and democratize research for more equity in research and healthcare. However, while empowerment is a core objective, it is often unclear what empowerment means in the context of PPI in clinical research. This gap can create insecurity for both patients and researchers and a disconnect between the rhetoric of empowerment in PPI and the reality of its practice in clinical trials. Clarifying the understanding of empowerment within PPI in clinical research is therefore essential to ensure that involvement does not become tokenistic and deplete patients’ capacity to advocate for their rights and needs.
We explored the historical roots of empowerment, which primarily emerged from mid-20th-century social movements such as feminism and civil rights, and reflected on the conceptual roots of empowerment in diverse fields to better understand the potential role of empowerment in PPI in clinical research, including its possibilities and limitations.
Common themes of empowerment in PPI and other fields are participation, challenging power structures, valuing diverse perspectives, and promoting collaboration. On the other hand, themes such as contextual differences in the empowerment objectives, the relationship between empowerment and scientific demands, research expertise, and power asymmetries mark a clear distinction from empowerment in other fields.
PPI offers potential for patient empowerment in clinical trials, even when its primary goal may be research quality. Elements like participation, sharing opinions, and active engagement can contribute to patient empowerment. Nonetheless, some expectations tied to empowerment might not be met within the constraints of clinical research. To empower patients, stakeholders must be explicit about what empowerment means in their research, engage in transparent communication about its realistic scope, and continuously reflect on how empowerment can be fostered and sustained within the research process.
Peer Review reports
Introduction
There has been a growing demand from patients, researchers, research sponsors, and scientific journals to shift clinical studies from being exclusively conducted on, about, or for patients to involving patients themselves or members of the public [ 1 , 2 ]. Two primary lines of reasoning underlie active patient and public involvement (PPI):
By integrating patients’ diverse perspectives into research, the aim is to enhance the quality, appropriateness, relevance, and credibility of the research [ 3 , 4 ].
Additionally, there are normative arguments supporting PPI that revolve around moral, ethical, and rights-based considerations, primarily linked to empowering patients or the public [ 5 ]. In essence, the idea is that patients should have a say in research that directly concerns them [ 3 , 6 ]. This notion aligns with the principle of “nothing about us, without us,” which has guided movements in various contexts, including the disability rights movement [ 7 ] and Indigenous contexts [ 8 ].
By empowering patients and upholding their right to participate in research, PPI seeks to diminish social inequalities. In doing so, it aims to democratize the research process, making it more accountable and transparent to the broader population [ 2 , 3 , 4 , 5 , 9 ]. This democratization is particularly significant for marginalized groups whose perspectives are often overlooked [ 1 , 5 ].
While patient empowerment is a core objective of PPI [ 4 ], it is seldom explicitly defined within the context of PPI. Etymologically, the root of the term implies that ‘empowerment’ concerns matters of ‘power’. The Oxford English Dictionary offers three distinct meanings of the verb “empower” [ 10 ]. One involves granting someone legal or formal authority, another focuses on bestowing power over something, and the third pertains to strengthening an individual by providing greater control, specific attributes, or enhanced abilities. Empowerment can denote either a process or a state of being, that is, an outcome.
A narrative review by Gradinger et al. revealed that in the context of public involvement, normative values are frequently referenced without clear definitions, resulting in significant variations in the understanding of empowerment [ 4 ]. While there is a general need to clarify the conceptualization of PPI to align with its intended goals [ 11 ], the emancipatory aspect of PPI remains underexplored compared to other approaches [ 12 ]. Without a precise meaning and operationalization of the term ‘empowerment’, the normative claim of PPI becomes difficult to realize and its implementation virtually impossible to assess. The lack of a shared understanding of empowerment within PPI not only fosters misinterpretation and arbitrariness in PPI practices but may also inadvertently undermine patient empowerment. From a patient’s perspective, ambiguous roles, a sense of inability to contribute, insufficient recognition of one’s contributions, or inadequate information about the benefits of involvement could be disempowering rather than empowering [ 13 ]. There is a risk that involvement becomes tokenistic, and patients’ voices might be silenced when they are merely involved for show, as a formality, without genuine influence on the research. Additionally, such involvement may deplete patients’ resources and capacity to advocate for their rights and needs in potentially more effective ways [ 14 , 15 ].
In a previous study, we discovered that within the same project, patients and researchers assign varying degrees of importance to patient empowerment. While patients engaged in a patient board for a clinical trial endorsed the idea of empowerment through research participation, only one out of five researchers explicitly addressed patient empowerment as a rationale for conducting PPI [ 16 ]. Furthermore, the experiences of patients and researchers with the patient board indicated that patient empowerment is often overlooked in the implementation of PPI. Other forms of collaboration, such as open dialogues on an equal footing and providing training to enhance patients’ confidence and skills, might have proven more effective in empowering patients [ 17 ]. These findings align with those of Ives et al. [ 3 ], who also noted a potential mismatch between the stated goals of PPI and its practical execution. Ives et al. argue that the nature and conduct of PPI can vary significantly depending on who initiates it and for what purpose. For instance, if researchers involve patients primarily to enhance the quality of their research projects, the focus might be on outcome-oriented, pragmatic consultation, potentially sidelining the goal of patient empowerment. Patients may be relegated to an informational role rather than active partners in the research process. Based on these insights, we assume that empowerment does not naturally evolve from PPI and is not an automatic byproduct of it.
Considering the above, it seems necessary to clarify the term empowerment within PPI in clinical research. Despite the absence of a precise understanding of empowerment in the context of PPI, the term “empowerment” has been in use across various domains for over half a century, including social work, education, corporate settings, psychology, and healthcare [ 18 ]. Therefore, this article aims to contribute to the understanding of empowerment in PPI by reflecting on the history and tradition of the term and concept of empowerment in other fields. Building on this, we aim to reflect on what lies behind the term empowerment in the context of PPI in clinical research and try to explain the disconnect between the rhetoric of empowerment in PPI and the reality of its practice in clinical trials. We have been guided by the following questions and have structured the article accordingly:
How has the concept of empowerment evolved historically?
How has empowerment been conceptualized in other fields?
To what extent does the concept of empowerment of patients through or for PPI in clinical research align with conceptual approaches to empowerment in other fields?
The article provides researchers who organize PPI with orientation on the relationship between empowerment and PPI. It offers perspectives on the possibilities and limits of empowerment in this context and invites further reflection on the topic from both researchers and patients involved in PPI.
For consistency, the term ‘patient’ is exclusively used in this article to refer to individuals who have had specific health-related experiences. However, we acknowledge that other terms, such as ‘service users’, may be more suitable and better reflect the active role that PPI strives for. This article is centered around PPI in clinical research and does not encompass reflections on PPI in other contexts, such as healthcare.
The term “to empower” has been documented since the mid-17th century, with older forms such as ‘impover,’ ‘empour,’ and ‘empowre’ [ 10 ]. In the mid-17th century, William Penn, founder of the Quaker colony of Pennsylvania, utilized the term in a religious and early democratic context. Penn’s theology of individual empowerment was based on the belief in the intrinsic dignity of all individuals, the presence of a part of God within each person (referred to as the “inward light” or “inner spirit”), and the assertion of the right to freedom of conscience. Penn’s ideas influenced the formulation of a groundbreaking constitution for Pennsylvania, serving as a model for subsequent democratic constitutions [ 19 ].
The term “empowerment,” intertwined with democracy since its inception, has evolved over time, primarily shaped by mid-20th-century social movements.
The civil rights movement in the 1950s and 1960s among the Black minority in the U.S. significantly influenced the idea and implementation of empowerment. Acts of civil disobedience exposed racial inequalities [ 20 ], and multiplier programs aimed to provide education and raise consciousness among the Black community [ 18 , 20 ]. Grounded in the belief in individuals’ abilities to control their lives, the movement sought to integrate the Black minority as equals with equal social rights into the democratic society. Freeing the Black minority community from oppression through collective self-organization resulted in a “new sense of somebodiness” (Martin Luther King as cited in Simon [ 19 ]).
Another driver of the empowerment discourse was the second wave of the feminist movement in the 1960s and 1970s, which addressed women’s opportunities and rights for societal equality [ 21 ]. Through expanded education, improved labor conditions, economic independence, far-reaching changes in the possibilities for self-determined birth control and a developed awareness of personal (bodily) autonomy, women’s life plans became more individualized [ 22 ]. Within the movement, women found a protective framework to navigate their evolving opportunities and resulting responsibilities. It provided a social reference structure, creating spaces for self-clarification, collective articulation of devaluation, and deconstruction of internalized beliefs. This support allowed women to envision, develop, and test new life possibilities and identities, thereby fostering self-confidence [ 18 ].
A third root of the modern empowerment concept is the self-help movement, which gained importance in the 1970s in the USA and other developed countries, especially within health-related contexts [ 7 , 18 ]. As self-organized networks, self-help aimed to establish social support, explore coping strategies, and reclaim autonomy and empowerment resources. Self-help served as a counter-program to perceived disempowering state care [ 7 ], emphasizing the perspective of individuals as ‘experts on their own account’, introducing self-organized services, creating (a sense of) community and thus producing emotional ‘services’, empowering critical consumers, and representing peoples’ interests to influence socio-political decisions [ 18 ]. Key features of self-help networks included the involvement of members with a common problem, minimal professional helper involvement, emphasis on immaterial support, and goals of self- and social change achieved through equal cooperation and mutual help. Self-help groups provided critical support in niches not covered by professional care services [ 18 ].
In the U.S., community-based programs aimed to empower individuals and build networks to address social segregation [ 23 ]. These programs furnished resources and support to enable individuals and communities to take charge of their lives and implement positive changes in their community. Political initiatives, like the Economic Opportunity Act of 1964, sought to reduce inequalities and poverty, promoting “maximum feasible participation” [ 18 ]. Empowerment was considered a means of encouraging self-sufficiency and reducing dependence on government support.
In the 1970s, community action programs became linked to community psychology, which viewed individuals as part of communities and collaborated with them to identify strengths, resources, and needs. The strategies formulated aimed to empower communities and promote social justice while reducing social inequalities.
The tradition of empowerment in social movements encompasses both individual self-determination and collective action against structural constraints. The primary concerns were not only about self-empowerment but also about advocating for structural changes through mass mobilization and collective efforts. In these contexts, empowerment was often pursued through independently organized groups that fostered community solidarity and collective identity. Unlike prevalent deficit-based approaches, which tend to focus on individuals’ lacks and weaknesses, empowerment in social movements nurtures and strengthens individuals’ skills and capabilities while also addressing and dismantling oppressive structures.
Since its emergence in mid-20th-century social movements and subsequent development in community psychology, the concept of empowerment has found application across diverse domains [ 18 , 24 ]:
Social work, encompassing individual support and collective actions.
Educational programs, such as literacy campaigns and increased pupil participation opportunities.
Development aid, representing a shift from external, top-down approaches to fostering local community capacity for participatory development and poverty reduction in developing countries.
Corporate contexts, where empowerment principles are integrated into management strategies.
Healthcare, where applications include shared decision-making and broader patient involvement.
Contemporary movements, such as racial empowerment in the “Black Lives Matter” movement and Indigenization efforts.
In this section, we explore the foundational concepts and theoretical underpinnings of empowerment.
Social scientist Barbara Bryant Solomon pioneered the conceptual foundation of empowerment in her 1976 book, “Black Empowerment: Social Work in Oppressed Communities.” Originating as a resource for students and social workers assisting Black minority clients, Solomon’s empowerment concept is based on research into the mechanisms of power and powerlessness. According to her, “empowerment refers to the reduction of an overriding sense of powerlessness to direct one’s life in the direction of meaningful personal satisfaction” [ 25 ]. At the core of this concept is the experience of powerlessness, arising from membership in a minority group subject to negative assumptions and discrimination from the majority society and its institutions [ 26 , 27 ].
While previous authors had emphasized the need to consider stigma as a factor that permeates the social situation of Black people, Solomon added that the unequal distribution of power and the experience of (structural) discrimination could affect the psyche and the negative attributions could find their way into self-perception. Thus, powerlessness of an individual means “the inability to manage emotions, knowledge, skills or material resources in a way that makes possible effective performance of valued social roles so as to receive personal gratification” [ 26 ].
At the community level, powerlessness is described as the inability to utilize resources for collective goals [ 26 ]. In short, stigma affects powerlessness, hindering access to the resources necessary for overcoming negative self-perceptions and social challenges [ 27 ]. Introducing empowerment as a method, Solomon suggested that professionals could employ it to address the powerlessness experienced by stigmatized individuals or groups. Empowerment, in her view, enables individuals to recognize their competence, perceive available opportunities for control, and ultimately enhance their self-worth and dignity [ 25 ]. In summary, Solomon’s empowerment approach is based on the belief that individuals and families have strengths and abilities and that they can be supported to use their resources more effectively for their own benefit. Solomon saw empowerment as both a process and a goal for social work in Black communities, and stated that the success of empowerment is “directly related to the degree to which the service delivery system itself is an obstacle course or an opportunity system” [ 26 ].
In 1981, community psychologist Julian Rappaport advocated for empowerment as a superior approach to paternalistic public health policies and rights-based advocacy in social work [ 28 ]. Acknowledging the diverse nature of social problems, Rappaport urged professionals to reconsider their roles in relation to clients, aligning with Solomon’s view that empowerment enhances individuals’ control over their lives.
Rappaport emphasized viewing individuals not solely as children in need or rights-bearing citizens but as complete human beings with both rights and needs. He argued that even those seemingly incompetent and in need require “[…] more rather than less control over their own lives, and fostering more control does not necessarily mean ignoring them” [ 28 ]. Increased control is believed to positively influence psychological well-being.
Empowerment, according to Rappaport, relies on the belief that people possess or can acquire competencies, with inadequate functioning attributed to social structures or the lack of resources that prevent people from using these competencies. He advocated for competency development in real-life settings and positioned those providing help as collaborative teammates who take into account social structures and living conditions, and not as authoritative experts [ 28 ].
Furthermore, Rappaport stressed the need for diverse solutions to divergent problems, rejecting a one-size-fits-all approach in social policy. He championed a bottom-up, participatory social policy that recognizes the context-specific and varied nature of empowerment in each situation [ 28 ].
Brazilian educator and social reformer Paulo Freire expanded the concept of empowerment through his work with marginalized communities in Brazil [ 29 ]. Central to his ideas is the development of ‘critical consciousness’ through dialogic education [ 30 ]. Freire contended that oppressed individuals often lack awareness of the social and political factors sustaining their subjugation. Critical consciousness involves recognizing oppressive systems and understanding the socio-economic and political contexts fostering inequality, along with realizing one’s potential for transformation. Freire regarded the experience of critical consciousness as the key to gaining strength, with education playing a fundamental role in conscientization. Freire’s dialogic teaching method, emphasizing two-way learning between teachers and students, fosters critical thinking, self-reflection, and active participation, and empowers students to question and reshape their reality. Working in partnership assigns the teacher the role of a facilitator and underscores the central importance of the consumer or marginalized individuals in the process of change [ 19 , 30 ]. Complementing this, Freire’s pedagogy of questioning encourages students to critically assess the influences shaping their lives. The emphasis is not on remembering details, but on cultivating analytical skills and the capability to challenge prevailing beliefs.
Beyond individual liberation, Freire argued that true empowerment encompasses collective action and social transformation. He underscored the importance of solidarity and creating dialogic spaces for individuals to collaboratively address common experiences of oppression and work towards societal progress [ 30 ].
In summary, Freire sought to empower individuals and communities by promoting critical consciousness, dialogue, and collective action to challenge oppressive systems and foster a more inclusive and equitable society. While he placed responsibility on the oppressed for seeking their own empowerment, caution was advised to prevent reinforcing a sense of helplessness [ 29 ].
While there is no universally agreed-upon definition or concept of empowerment, some common principles emerge from what we have reviewed: empowerment comes from a variety of sources; refers to both processes and outcomes; involves personal and collective dimensions; is based on participation; assumes that each individual has strengths and capacities upon which to build; challenges power structures, with a focus on marginalized groups and the systemic inequalities they face; and must be attained by individuals themselves, though it can be supported by third parties, e.g. professionals, who facilitate the process of empowerment in collaboration with individuals or communities [ 19 , 26 , 28 , 29 , 30 , 31 ].
As the most basic definition of empowerment, Herringer offers: “Developmental processes over time in which individuals acquire the skills necessary to live a life that meets their own standards of ‘better’” [ 32 , translated by IS]. These processes of gaining more power or autonomy can be individual and collective [ 32 ].
At the same time, some controversies exist around empowerment. Herringer continues his definition with the thought: “[…] what exactly constitutes a ‘more livable’ existence is open to conflicting interpretations and ideological frameworks” [ 32 ]. Other controversies surrounding the concept of empowerment are:
Instrumentalization, tokenism, and depoliticization: the concern that empowerment programs or initiatives may be implemented for instrumental, tokenistic purposes or to create the illusion of progress [ 27 ]. In such cases, empowerment becomes an empty concept without substantial impact. The adoption of empowerment concepts by the powerful (e.g. institutions or entities that hold significant structural and decision-making authority) can lead to a depoliticization of empowerment programs, as the transformative potential of such initiatives may be diminished or neutralized when circumscribed by institutional capture. This co-option of empowerment by those in power can result in a form of engagement that maintains existing power dynamics rather than challenging them.
Lack of clarity and measurement: empowerment is so diverse and open-ended that it is difficult to define in a way that its outcomes can be measured [ 24 ]. Clarity is needed regarding which aspects of empowerment are targeted. Without evaluating empowerment attempts, it is challenging to learn from experience.
The concept of empowerment has deep roots in various social movements that sought to challenge systemic inequalities and give voice to marginalized groups. To analyze how these conceptual approaches to empowerment from social movements relate to the empowerment of patients in PPI within clinical research, we will first provide an overview of the historical development of PPI in research, followed by a recapitulation of the relevance of empowerment in the context of PPI. We will then analyze and critically address (a) the similarities between approaches to empowering patients or the public in PPI and those in other fields, thereby gaining an impression of how PPI in clinical research can empower patients, and (b) the distinctions and limitations of empowerment in this context, both practically and conceptually.
Patient advocacy movements, gaining momentum in the mid-20th century, played a pivotal role in pushing for increased patient involvement in research [ 33 ]. These movements, which often emerged from broader social and civil rights movements, laid the foundation for what we now recognize as PPI.
For instance, the HIV-AIDS activism of the 1980s, heavily influenced by the gay civil rights movement, led to significant changes in health research by challenging the prevalent research expertise and bringing in “a ‘patient perspective’ to bear on institutions of health research” [ 34 ].
In the 1970s, Rose Kushner, a breast cancer patient and writer, exemplified this movement by assessing research proposals for the US National Cancer Institute, marking a notable instance of patient influence [ 33 ]. Her efforts reflected a broader movement towards giving patients a voice in research, a theme echoed in many PPI initiatives. The 1980s collaboration between patient organizations and the Association for Maternity Services, endorsing a randomized controlled trial on chorionic villus sampling, is another example where patient involvement began to influence research decisions directly. The 1997 international breast cancer advocacy conference organized by the US National Breast Cancer Coalition (NBCC) and supported by patient organizations from several countries marked a pivotal shift towards PPI, fostering dialogue on patient experiences and challenges. The conference demonstrated the NBCC’s belief that breast cancer patients should be consulted when making policies and decisions regarding research funding, and was instrumental in establishing an international advocacy movement [ 35 ].
The connection between PPI and social movements became more explicit with the establishment of organizations like INVOLVE in 1996, funded by the British government as part of their aim to create a patient-oriented healthcare system, the Canadian Institutes for Health Research in 2000, and the Patient-Centered Outcomes Research Institute (PCORI) in the United States in 2010. These organizations, drawing inspiration from social movements, emphasize the importance of involving patients and the public throughout the research process, thereby continuing the advocacy for marginalized voices in health research [ 36 , 37 , 38 ].
Globally, there is a trend toward formalized PPI approaches. Research funders, regulatory bodies, and institutions recognize the importance of involving patients and the public throughout the research process, from prioritization to dissemination [ 1 , 2 ]. The field nevertheless remains in active development.
As discussed, there are two arguments advocating for the use of PPI in research, which Ives et al. summarize [ 3 ]: (1) to improve research quality, appropriateness, relevance, and credibility (PPI as a means to an end) and (2) to use PPI to empower patients and democratize research, along with its consequential impact on health(care) (PPI as an end in itself). However, empowerment through PPI should not be seen as an isolated goal, and Ives et al.’s phrasing “an end in itself” may be misleading; it might be better expressed as “an end beyond narrowly instrumental goals”. PPI is a strategy that allows patients to actively shape research, thereby ensuring that the research directly addresses the practical problems they face – an argument rooted in the social movements.
PPI is essential in transforming the relationship between patients and institutions, challenging traditional power dynamics [ 34 ]. Its role is dual-faceted: it improves the quality and relevance of research while simultaneously fostering a more participatory and inclusive approach to healthcare. This dual function makes PPI a powerful tool for achieving both immediate research goals and broader societal change.
However, depending on the reasons and initiators of PPI, PPI practices can vary greatly. According to Ives et al. [ 3 ], different aims of PPI can result in distinct forms of involvement, as illustrated in Table 1 . While Ives et al. [ 3 ] seem to indicate two opposite ends of the spectrum, these “ideals” do not always play out and there are numerous intermediate forms of involvement that can exist. However, this example illustrates that the potential for empowerment in PPI, as well as its manifestations, can vary greatly depending on the approach taken.
Today PPI spans a broad range, from sporadic consultations to ongoing collaboration between patients and researchers, and even, in still rare cases, research led by patients with support from researchers [ 39 ].
In the following sections we analyze and critically address the similarities and limits of empowerment in PPI in clinical research as compared with earlier concepts. The similarities lie in a shared focus on participation, challenging power structures, valuing diverse knowledge and perspectives, and supporting collaboration.
Active participation in decision-making processes that influence the lives of individuals and communities is a fundamental aspect of empowerment concepts across various fields [ 24 ]. In research-based PPI, facilitating the ability of patients and members of the public to have a voice, participate in decision-making processes, and contribute to research aligns with the core principles of empowerment.
Empowerment theories from different disciplines aim to reduce powerlessness and increase the power of marginalized individuals [ 25 , 28 , 29 ]. The objective of challenging power structures aligns with the concept of empowerment in PPI in research. Involving patients in planning, conducting, and communicating clinical research on a regular basis constitutes a significant shift in the power dynamics of the research landscape. Individual patients may be engaged on a one-time basis, but the collective voice of patients and the public becomes significant and co-determines research. Long-term patient involvement may be achieved through the integration of patient advisory boards in research institutions [ 40 ]. The inclusion of patient perspectives has become an expected practice, influencing power dynamics within the clinical research domain.
Empowerment in various fields recognizes the worth of diverse knowledge and perspectives [ 26 , 28 , 32 ]. By incorporating them, empowerment aims to challenge the conventional power structures that have systemically marginalized some voices and sustained inequality. Moreover, involving individuals with varied experiences offers exceptional insights and understandings that enhance dialogues and contribute to more thorough resolutions [ 28 ]. Similarly, patient experiential knowledge and unique insights are recognized as crucial in PPI for shaping research and complementing the specialist knowledge of clinical researchers [ 4 , 13 ]. According to the Montreal Model, patients’ experiences with illnesses, which in the case of chronic conditions they must manage for the rest of their lives, offer a rich source of knowledge essential for decision-making [ 41 ]. This experiential knowledge includes patients’ insights into their health issues, the trajectory of their care, and the impacts on their personal lives and those of their loved ones [ 41 ]. The involvement of patients strengthens the focus of clinical research on patients’ needs, ultimately enhancing its quality, adequacy, relevance, and credibility [ 3 , 4 ].
Empowerment approaches typically foster collaborative relationships among various stakeholders [ 19 , 28 ]. In social work, these relationships arise between practitioner and client and are characterized, analogous to the idea of an alliance, by a “shared sense of urgency” (regarding the client’s problems), a “conjoint commitment to problem solving in as democratic a manner as possible”, and a “shared emphasis [.] on [the] common humanity” in the relationship [ 19 ]. Depending on the PPI approach, the concept of collaborative relationships among various stakeholders can also apply to empowering patients in research. Three involvement approaches are distinguished in PPI [ 6 ]: (1) In the consultation approach, which achieves the lowest level of engagement and collaboration, patients advise researchers but are not involved in decision-making. (2) In the collaboration approach, patients are partners in the research process, involved in decision-making and sharing responsibility for the research. (3) In user-led research, patients take full responsibility for individual aspects or the whole of the research, with support from researchers [ 6 ]. User-led research can be implemented only to a limited extent in clinical studies, as it is subject to ethical and legal constraints.
To strengthen the principles of social movements in PPI, a collective approach to research, as proposed by MacDonald’s theory of civic patienthood, could provide valuable insights [ 34 ]. This theory views patients as civic actors who seek collective solutions to collective problems, shifting the understanding of patients from mere clinical subjects to engaged participants in shaping research and healthcare outcomes. This approach requires robust institutions, resources, and socialization processes to support patients’ involvement; such support is particularly critical in ensuring that PPI remains genuinely democratic and is not co-opted by more powerful interests [ 34 ].
While we found the heritage of social movements to inform the ethos of PPI in the principles of participation, giving people a say in decisions that affect their lives, confronting power structures (albeit on a smaller scale), and collaborative relationships, we also found distinctions between empowerment in PPI in clinical research and earlier concepts. These lie in the areas of context and focus, scientific demands and ethics, expertise in research methods, and power dynamics.
While the goals of empowerment in other fields and in PPI share similarities, the context and focus differ. In social movements, empowerment refers to the process through which marginalized individuals and communities obtain power, active participation, and the ability to challenge oppressive systems [ 18 , 29 ]. These movements often aim to effect systemic changes and combat inequalities, drawing upon collective action, awareness-raising, and advocacy to achieve their goals [ 29 , 30 ]. In contrast, the context of empowerment in PPI is more specific to the research process. Here, empowerment is about providing patients and the public with a voice in decision-making within that process [ 4 ]. While the influence of social movements is undeniable, the primary objective is not necessarily to address systemic inequalities on a broad scale but to enhance the quality and relevance of research by incorporating diverse perspectives. In PPI, people are empowered or given a voice “to influence research outcomes that will (or may) have a direct impact on their health status” [ 6 ]. Though not the main objective, this involvement of diverse perspectives in research may nonetheless contribute to a reduction in inequalities [ 42 , 43 ].
However, the practical implementation of PPI often faces challenges that may undermine its empowering potential. Researchers, under pressure to demonstrate measurable impact, tend to focus involvement on substantive values such as effectiveness, quality, and validity – outcomes that are more easily quantified and aligned with traditional research goals [ 4 , 14 ]. This focus may marginalize crucial but less easily measured normative values, such as empowerment, rights, and accountability, and process values, such as partnership and respect. The demand for measurable outcomes, together with recommendations that lead to rather structured and controlled PPI mechanisms, shapes PPI practices in ways that may suppress rather than amplify the voices of patients [ 14 ]. A more reflexive and dialogic approach to evaluating PPI might better capture its ethical and formative dimensions, ensuring that public involvement in research remains a tool for true empowerment rather than an instrument of containment [ 14 ].
Empowerment in clinical research must balance patient empowerment with scientific demands and the integrity of research findings. Empowerment approaches in other fields may concentrate on personal growth and social change; in clinical research, however, ways must be found that respect both the methodological and ethical requirements of research and the interests of PPI. This context-specific requirement distinguishes empowerment in clinical research from empowerment in other fields, may restrict its potential, and places specific demands on the conduct of research [ 17 , 44 ]. As a result, the level of patient co-determination may be limited. For example, randomization might be preferable for methodological reasons, even if the patients involved, for understandable reasons, perceive alternative methods as more appropriate. Additionally, patients may lack a full understanding of these restrictions, leading them to suggest ideas that do not comply with the logic of scientific protocols. Encountering such limitations in interactions with scientists can diminish their sense of empowerment.
In addition to methodological hurdles, PPI must address ethical considerations in the pursuit of empowerment. Although it is generally assumed that patient involvement does not require formal ethics approval, it is nonetheless crucial to discuss matters such as safeguarding privacy and potential conflicts of interest with those who may be involved, and to furnish them with comprehensive information about the involvement’s goals and methodology [ 45 ]. Framing the involvement, and therefore the empowerment, in this manner distinguishes it from empowerment in other fields.
Empowering patients in research requires providing objective support and resources to enhance their comprehension of research methods and ethics [ 17 ]. Patients usually need assistance in navigating the complexities of research processes and methodologies [ 17 , 46 ], which distinguishes empowerment in PPI from other fields. However, learning is a common element of any kind of empowerment. For instance, Freire’s theory of critical consciousness highlights education’s role in empowering marginalized individuals [ 30 ]. His approach centers on learners directing their own education by posing questions, and emphasizes skill development over knowledge acquisition, with a focus on increasing critical awareness of their circumstances.
The disparity in PPI may stem from the fact that the individuals who desire and deserve empowerment are not the ones who decide what to learn; this choice is often made for them, and its content is largely factual. In preparation for PPI, learning is mostly unidirectional, with researchers instructing patients on research fundamentals [ 47 ]. However, researchers and PPI contributors (i.e., patients) perceive training needs differently, both with respect to training for PPI contributors and training for researchers. Dudley et al. [ 47 ] found that this discrepancy leads to gaps in the support and training provided. That said, the characterization of learning as unidirectional does not apply universally: some PPI initiatives have employed more interactive and participatory training methods, allowing patients to engage more actively in shaping their learning experience [ 48 ].
Providing PPI support and training enables patients to acquire the knowledge and skills necessary to work alongside researchers on an equal basis, and gives them the confidence to challenge researchers’ opinions when needed [ 49 ]. Importantly, expertise in clinical research methods is not only a means of achieving empowerment but also a crucial component of enhancing the quality and relevance of research. By developing expertise, patients can contribute more meaningfully to the research process, ensuring that their perspectives and experiences are integrated in ways that improve research outcomes.
To strengthen empowerment in PPI and reduce vulnerability to co-optation by more powerful forces with different problem-solving interests, it is critical that participants have a clear understanding of the power they seek to build [ 34 ]. MacDonald’s theory of civic patienthood illustrates that socialization is central to helping patients understand their agency, role, and limitations as civic actors in PPI [ 34 ]. The design of this process can significantly impact how power and empowerment are navigated within PPI.
Self-determination of the client is an essential aspect of empowerment practice in social work, and it is commonly held that empowerment cannot be imposed upon anyone [ 29 ]. Professionals are responsible for providing support and facilitation, and it is crucial to minimize power differentials between all parties involved in order to foster relationships based on equality and partnership [ 29 ].
In research-based PPI, addressing power asymmetries between researchers and patients is critical. Researchers typically operate within institutions with established structures and norms, and they face institutional constraints and pressures that can influence the extent of shared decision-making and the balance of power. Often, researchers have the final say in decisions [ 6 ]. These dynamics of institutional power can make equal partnerships difficult to achieve.
To navigate these constraints effectively, it is crucial to understand the extent to which patients are involved in the research process, how their roles are negotiated with researchers, and the level of their involvement in decision-making. Researchers must balance their own institutional limitations and the robustness of the research against the need to foster patient empowerment. This process can be challenging and at times frustrating. Promoting patient empowerment in clinical research affects organizational processes, cultures, and public relationships, requiring frameworks that recognize, address, and integrate patient perspectives into research activities [ 49 ].
The goal of this article is to contribute to the understanding of empowerment in PPI in clinical research by analyzing the history and development of the concept of empowerment in earlier fields. We presented an overview of the history of empowerment in the social movements of the 20th century and outlined key concepts of empowerment from Solomon, Rappaport, and Freire. On this basis, we suggested common principles of empowerment concepts. We then presented an overview of the historical development of PPI in research, which is strongly connected to the heritage of these social movements, and reflected on the relevance of empowerment in PPI. Finally, we assessed the extent to which empowerment in PPI mirrors the common principles developed earlier, and analyzed similarities and distinctions.
We found the heritage of social movements to inform the ethos of PPI, as principles such as promoting participation, providing people with a say in decisions that may affect their lives, appreciating diverse knowledge, fostering respectful collaborations, and confronting power structures (even at a smaller, less existential scale) are deeply embedded in PPI practices. However, we also observed considerable distinctions in contexts and objectives: Social movement-based empowerment aimed to effect systemic changes and combat inequalities. Empowerment movements typically arose from significant inequalities and were often initiated by the oppressed. While these movements laid the groundwork for later involvement in research, the empowerment objectives in PPI are more specific to the research context. Today, the involvement process is predominantly initiated by researchers seeking to incorporate patients to increase the quality and relevance of their trials.
In the practical implementation of PPI in clinical research, empowerment may often play only a minor role, irrespective of claims made to the contrary. PPI may offer ample opportunities for fostering patient empowerment, even if the primary goal is to involve patients for the enhancement of research quality or for meeting certain requirements. Nevertheless, even in trials explicitly designed to promote patient empowerment, the level of empowerment may not satisfy each individual involved. We found that these constraints are often related to researchers’ need to adhere to institutional requirements, the duration of PPI involvement, and power imbalances in relation to researchers.
Still, we feel that tentative recommendations are warranted for facilitating empowerment in clinical trials:
Throughout the planning, execution, and dissemination of the study, close collaboration between patients and researchers is crucial. The relationship between patients and researchers should be marked by respect and mutual appreciation [ 4 ]. Both parties should value all perspectives and prioritize inclusivity in decision-making processes. MacDonald’s model of civic patienthood offers valuable insights for strengthening patients’ voice and rebalancing the power dynamics in PPI [ 34 ].
As defined by Solomon, the success of empowerment depends on “the extent to which the service delivery system functions as either an obstacle course or an opportunity system” [ 26 ]. In the case of PPI, the study and patient involvement should be designed in such a way that patients fully understand the process and its realistic limitations. It is essential to make the research accessible and transparent, with clear communication about what it can and cannot promise. Acknowledging the limitations of clinical research as a vehicle for empowerment respects patients’ capacity to understand these limitations and helps manage their expectations, fostering a more honest and trustful relationship between researchers and patients [ 16 , 17 ].
Prior to and throughout their collaboration, patients and researchers should discuss their shared objectives, expectations, and experiences [ 16 ]. These discussions should include notions of empowerment, and empowerment should be an aspect that guides the involvement.
To promote collaborative equality, patients may participate in training sessions prior to or at the beginning of their involvement. These sessions should offer a comprehensive understanding of clinical research and enhance their perspective as patients, empowering them to challenge researchers when necessary [ 3 ]. In the spirit of peer support and collective action [ 30 ], patients themselves may offer these training sessions for the benefit of their fellow patients, thereby reducing power imbalances in the learning environment.
Researchers ought to engage in training sessions for PPI [ 50 ], including instructions on how to foster empowerment.
Patients collaborating with researchers should be accompanied and supported as needed by a designated person who takes responsibility for them and plays a role similar to that of a social worker in other contexts [ 29 ]. Despite time constraints in PPI, there should be opportunities for patients to share and analyze experiences, provide mutual support, and collaborate over the course of the clinical trial [ 18 , 30 ].
At the end of the participation, there should be a closing session where, among other things, the participation is reflected upon and its added value is highlighted [ 17 ]. This includes not only aspects that have changed the quality of the study, but also, for example, changes and developments at the personal level of patients and researchers. Patients who wish to continue their involvement should have opportunities to do so.
This list presents several ways of promoting empowerment within the context of PPI. It is not exhaustive but rather intended to be extended and elaborated upon in further examinations of the subject. However, defining empowerment is a complex undertaking, and selecting different criteria or aspects may lead to alternative approaches to promoting it.
The primary objective of clinical research is not to empower patients but to generate scientific knowledge that can improve healthcare outcomes. However, with the increasing call for involving patients in research, the concept of empowerment has become an associated goal. Our investigation sought to unpack what empowerment might mean within the context of PPI in clinical research.
Given the absence of a consensus on what empowerment in this context entails, we turned to the history and foundational concepts of empowerment from various social movements to illuminate its potential meanings and implications. We found both similarities and differences between empowerment in PPI and earlier empowerment concepts. While PPI reflects principles such as participation, challenging power structures, and valuing diverse perspectives, the empowerment it offers is often constrained by the specific context of clinical research.
Some limitations to empowerment in PPI are intrinsic to the research context itself, such as the need to adhere to rigorous scientific standards. However, other limitations are less evident and may, in fact, undermine the empowerment of patients. These include institutional power dynamics, limited opportunities for genuine decision-making, and inadequate support for patients to navigate the complexities of research processes.
To address these challenges, it is crucial for those involved in PPI to be explicit about what they mean by empowerment and to consider whether and how it is valued in their research endeavors. Transparency regarding both external and internal limitations is essential. This includes an explicit exchange between researchers and patients about the realistic scope and potential of patients’ involvement, as well as ongoing reflection and dialogue about how empowerment can be fostered and sustained within the research process. By doing so, PPI can move closer to fulfilling its promise of genuinely empowering patients, rather than merely using the term as a rhetorical tool.
No datasets were generated or analysed during the current study.
PPI: Patient and Public Involvement
Greenhalgh T, Hinton L, Finlay T, Macfarlane A, Fahy N, Clyde B, et al. Frameworks for supporting patient and public involvement in research: systematic review and co-design pilot. Health Expect. 2019;22(4):785–801.
Domecq J, Prutsky G, Elraiyah T, Wang Z, Nabhan M, Shippee N, et al. Patient engagement in research: a systematic review. BMC Health Serv Res. 2014;14(1):89.
Ives J, Damery S, Redwod S. PPI, paradoxes and Plato: who’s sailing the ship? J Med Ethics. 2012;39(3):181–5.
Gradinger F, Britten N, Wyatt K, Froggatt K, Gibson A, Jacoby A, et al. Values associated with public involvement in health and social care research: a narrative review. Health Expectations: Int J Public Participation Health care Health Policy. 2015;18(5):661–75.
Boote J, Baird W, Beecroft C. Public involvement at the design stage of primary health research: a narrative review of case examples. Health Policy. 2010;95(1):10–23.
Boote J, Telford R, Cooper C. Consumer involvement in health research: a review and research agenda. Health Policy. 2002;61(2):213–36.
Charlton J. Nothing about us without us: disability oppression and empowerment. Oakland: University of California Press; 1998.
Came H, Gifford H, Wilson D. Indigenous public health: nothing about us without us! Public Health. 2019;176:2–3.
Baxter L, Thorne L, Mitchell A. Small voices, big noises. Lay involvement in health research: lessons from other fields. Exeter: Washington Singer; 2001.
Oxford English Dictionary. empower, v. Oxford: Oxford University Press; 2021.
Esmail L, Moore E, Rein A. Evaluating patient and stakeholder engagement in research: moving from theory to practice. J Comp Eff Res. 2015;4(2):133–45.
Rowland P, McMillan S, McGillicuddy P, Richards J. What is the patient perspective in patient engagement programs? Implicit logics and parallels to feminist theories. Health. 2016;21(1):76–92.
Staley K. Exploring impact: public involvement in NHS, public health and social care research. Eastleigh: National Institute for Health Research (NIHR), INVOLVE; 2009.
Russell J, Fudge N, Greenhalgh T. The impact of public involvement in health research: what are we measuring? Why are we measuring it? Should we stop measuring it? Res Involv Engagem. 2020;6(1):63.
Komporozos-Athanasiou A, Fudge N, Adams M, McKevitt C. Citizen Participation as Political Ritual: towards a sociological theorizing of ‘Health Citizenship’. Sociology. 2016;52(4):744–61.
Schilling I, Behrens H, Hugenschmidt C, Liedtke J, Schmiemann G, Gerhardus A. Patient involvement in clinical trials: motivation and expectations differ between patients and researchers involved in a trial on urinary tract infections. Res Involv Engagem. 2019;5(15).
Schilling I, Behrens H, Bleidorn J, Gagyor I, Hugenschmidt C, Jilani H, et al. Patients’ and researchers’ experiences with a patient board for a clinical trial on urinary tract infections. Res Involv Engagem. 2019;5:38.
Herriger N. Spurensuche. Eine kurze Geschichte des Empowerment-Konzeptes. In: Herriger N, editor. Empowerment in der Sozialen Arbeit. Stuttgart: Kohlhammer; 2020. pp. 22–56.
Simon B. The empowerment tradition in American Social Work: a history. New York: Columbia University; 1994.
Weisbrot R. Freedom bound: a history of America’s civil rights movement. New York: Plume; 1991.
West G, Blumberg RL, editors. Women and social protest. Oxford: Oxford University Press; 1990.
Beck-Gernsheim E. Vom „Dasein für andere“ zum Anspruch auf ein Stück „eigenes Leben“: Individualisierungsprozesse im weiblichen Lebenszusammenhang. In: Wilz SM, editor. Geschlechterdifferenzen - Geschlechterdifferenzierungen. Volume 1. Wiesbaden: VS Verlag für Sozialwissenschaften; 2008. pp. 19–62.
Schutz A, Miller M, editors. People Power. The Community Organizing tradition of Saul Alinsky. Nashville: Vanderbilt University Press; 2015.
Pankofer S. Empowerment - eine Einführung. In: Miller T, Pankofer S, editors. Empowerment konkret. Berlin, Boston: De Gruyter Oldenbourg; 2016. pp. 7–22.
Solomon B. Empowerment: social work in oppressed communities. J Social Work Pract. 1987;2(4):79–91.
Solomon B. Black empowerment: social work in oppressed communities. New York: Columbia University; 1976.
Blank B. Empowerment - Ein Leitkonzept Der Sozialen Arbeit in Der Migrationsgesellschaft? In: Blank B, Gögercin S, Sauer KE, Schramkowski B, editors. Soziale Arbeit in Der Migrationsgesellschaft: Grundlagen – Konzepte – Handlungsfelder. Wiesbaden: Springer Fachmedien Wiesbaden; 2018. pp. 327–40.
Rappaport J. In praise of paradox: a social policy of empowerment over prevention. Am J Community Psychol. 1981;9(1):1–25.
Boehm A, Staples L. Empowerment: the point of view of consumers. Families in Society: J Contemp Social Serv. 2004;85:270–80.
Freire P. Education for critical consciousness. 2013 ed. London/New York: Bloomsbury Academic; 1974.
Cattaneo LB, Chapman AR. The process of empowerment: a model for use in research and practice. Am Psychol. 2010;65(7):646–59.
Herriger N. Begriffliche Annäherungen: Vier Zugänge zu einer Definition von Empowerment. In: Herriger N, editor. Empowerment in der Sozialen Arbeit. Stuttgart: Kohlhammer; 2020. pp. 13–21.
Thornton H. Patient and public involvement in clinical trials. BMJ: Br Med J. 2008;336(7650):903–4.
Macdonald GG. Civic patienthood: a critical grounded theory of how patients transform from clinical subjects to civic actors. University of British Columbia; 2023.
Liberati A. Consumer participation in research and health care. BMJ. 1997;315(7107):499.
INVOLVE. Resources. 2019. https://www.invo.org.uk/resource-centre/
Frank L, Basch E, Selby JV. The PCORI perspective on patient-centered outcomes research. JAMA. 2014;312(15):1513–4.
Canadian Institutes of Health Research (CIHR). Strategy for Patient-Oriented Research. 2023. https://www.cihr-irsc.gc.ca/e/41204.html
Schilling I, Herbon C, Jilani H, Rathjen KI, Gerhardus A. Aktive Beteiligung von Patient*innen an klinischer Forschung – Eine Einführung. Zeitschrift für Evidenz, Fortbildung und Qualität im Gesundheitswesen. 2020:56–63.
Engler J, Kuschick D, Tillmann J, Kretzschmann C, Wallacher S, Kersting C, et al. [Patient and Public Involvement in Family Medicine Research]. ZFA (Stuttgart). 2022;98(5):178–83.
Pomey M-P, Flora L, Karazivan P, Dumez V, Lebel P, Vanier M-C, et al. The Montreal model: the challenges of a partnership relationship between patients and healthcare professionals. Santé Publique (Vandoeuvre-lès-Nancy, France). 2015;27:S41–50.
NHS England. Working in partnership with people and communities: Statutory guidance. 2023. https://www.england.nhs.uk/long-read/working-in-partnership-with-people-and-communities-statutory-guidance/
INVOLVE. Diversity and inclusion: what’s it about and why is it important for public involvement in research? Eastleigh: INVOLVE; 2012.
Staniszewska S, Jones N, Newburn M, Marshall S. User involvement in the development of a research bid: barriers, enablers and impacts1. Health Expect. 2007;10(2):173–83.
Jilani H, Rathjen KI, Schilling I, Herbon CM, Scharpenberg M, Brannath W, et al. Handreichung Zur Patient*innenbeteiligung an klinischer Forschung. Bremen: Universität Bremen; 2020.
Forsythe LP, Ellis LE, Edmundson L, Sabharwal R, Rein A, Konopka K, et al. Patient and Stakeholder Engagement in the PCORI Pilot projects: description and lessons learned. J Gen Intern Med. 2015;31(1):13–21.
Dudley L, Gamble C, Allam A, Bell P, Buck D, Goodare H, et al. A little more conversation please? Qualitative study of researchers’ and patients’ interview accounts of training for patient and public involvement in clinical trials. Trials. 2015;16(1):190.
Clausen J. Partizipative Forschung in der Deutschen Rheuma-Liga – inhaltliche und praktische Umsetzung der partizipativen Forschung in einer Patientenorganisation. Zeitschrift für Evidenz, Fortbildung und Qualität im Gesundheitswesen. 2020;155:64–70.
Hickey G, Brearley S, Coldham T, Denegri S, Green G, Staniszewska S, et al. Guidance on co-producing a research project. Southampton: INVOLVE; 2018.
Hickey G. On Medicine [Internet]. BMC; 2018. https://blogs.biomedcentral.com/on-medicine/2018/12/07/global-patient-public-involvement-network-vision-mission/
Imke Schilling was awarded a post-doctoral scholarship by the University of Bremen.
Open Access funding enabled and organized by Projekt DEAL.
Authors and affiliations.
Department for Health Services Research, Institute of Public Health and Nursing Research, University of Bremen, Grazer Straße 4, 28359, Bremen, Germany
Imke Schilling & Ansgar Gerhardus
Health Sciences Bremen, University of Bremen, 28359, Bremen, Germany
Imke Schilling: Conceptualization, Methodology, Investigation, Resources, Data Curation, Writing – Original Draft, Writing – Review and Editing, Project administration, Funding acquisition. Ansgar Gerhardus: Conceptualization, Methodology, Writing – Review and Editing, Supervision, Funding acquisition.
Correspondence to Imke Schilling.
Ethics approval and consent to participate, consent for publication, competing interests.
The authors declare no competing interests.
During the preparation of this work the authors, as non-native speakers, used Deepl Write and ChatGPT in order to eliminate grammatical or spelling errors and to conform to correct scientific English within the article. After using these tools, the authors reviewed and edited the content as needed and take full responsibility for the content of the publication.
Publisher’s note.
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ .
Cite this article.
Schilling, I., Gerhardus, A. Is this really Empowerment? Enhancing our understanding of empowerment in patient and public involvement within clinical research. BMC Med Res Methodol 24 , 205 (2024). https://doi.org/10.1186/s12874-024-02323-1
Received : 11 December 2023
Accepted : 27 August 2024
Published : 13 September 2024
ISSN: 1471-2288
Health Research Policy and Systems, volume 15, Article number: 31 (2017)
With massive investment in health-related research, above and beyond investments in the management and delivery of healthcare and public health services, there has been increasing focus on the impact of health research to explore and explain the consequences of these investments and inform strategic planning. Relevance is reflected by increased attention to the usability and impact of health research, with research funders increasingly engaging in relevance assessment as an input to decision processes. Yet, it is unclear whether relevance is a synonym for or predictor of impact, a necessary condition or stage in achieving it, or a distinct aim of the research enterprise. The main aim of this paper is to improve our understanding of research relevance, with specific objectives to (1) unpack research relevance from both theoretical and practical perspectives, and (2) outline key considerations for its assessment.
Our approach involved the scholarly strategy of review and reflection. We prepared a draft paper based on an exploratory review of literature from various fields, and benefited from detailed and insightful analysis and critique at a roundtable discussion with a group of key health research stakeholders. We also solicited review and feedback from a small sample of expert reviewers.
Research relevance seems increasingly important in justifying research investments and guiding strategic research planning. However, consideration of relevance has been largely tacit in the health research community, often depending on unexplained interpretations of value, fit and potential for impact. While research relevance seems a necessary condition for impact – a process or component of efforts to make rigorous research usable – ultimately, relevance stands apart from research impact. Careful and explicit consideration of research relevance is vital to gauge the overall value and impact of a wide range of individual and collective research efforts and investments. To improve understanding, this paper outlines four key considerations, including how research relevance assessments (1) orientate to, capture and compare research versus non-research sources, (2) consider both instrumental versus non-instrumental uses of research, (3) accommodate dynamic temporal-shifting perspectives on research, and (4) align with an intersubjective understanding of relevance.
Various levels of government in Canada collectively invest multiple billions of dollars in health-related research per annum, above and beyond investments in the management and delivery of healthcare and public health services. In recognition of this sizeable collective commitment, much work has focused on the impact of health research to explore and explain the consequences of these investments and inform strategic planning. Relevance is tacit in the increased attention to the usability and impact of health research. Additionally, research funders increasingly engage in relevance assessment as an input to decision processes; yet, it is unclear whether relevance is a synonym for or predictor of impact, a necessary condition or stage in achieving it, or a distinct aim of the research enterprise. Therefore, the main aim of this paper is to improve our understanding of research relevance as it relates to research quality and research impact, with specific objectives to (1) unpack research relevance from both theoretical and practical perspectives, and (2) outline key considerations for the assessment of research relevance.
Globally, there has been increasing critical assessment of the value of health research investments [ 1 – 3 ], with growing interest in research impact assessment (RIA) in the health sector [ 4 – 6 ]. RIA focuses on understanding how research activity can directly and indirectly advance knowledge, influence decision-making, and effect health and socio-economic outcomes, with a small but growing body of work seeking to develop better measures to evaluate (and ideally attribute) the returns on health research investments [ 6 ]. The Canadian Academy of Health Sciences (CAHS) released a comprehensive report on the subject in 2009 that presented a call for action, with a number of recommendations including establishing collaborative efforts among Canadian research funders to advance frameworks and sets of indicators and metrics for health research impact [ 4 ]. The CAHS impact framework [ 4 ], which drew on the Buxton and Hanney [ 7 ] ‘payback model’, among others, has provided a thoughtful starting point for considering the impact of health research in Canada. Subsequent work by Alberta Innovates – Health Solutions (AIHS) on a Research to Impact Framework (described in Graham et al. [ 8 ]) provides further insights on operationalising RIA frameworks for health research in Canada.
These initiatives are part of a broadly discussed shift in approaches to knowledge production, from an emphasis on investigator-initiated, curiosity driven work judged and guided by scientists, to expanded approaches to knowledge production, drawing on a wider set of actors and approaches, and emphasising relevance and usability. This shift from science produced by and for scientists to knowledge production that is “ socially distributed, application-oriented, trans-disciplinary, and subject to multiple accountabilities ” [ 9 ] has been characterised as a shift from ‘mode 1’ to ‘mode 2’ knowledge regimes. In the language of mode 2, interest in research ‘impact’ expresses a concern for application or consequence, and – in the economic language of return on investment – a concern that the yield is at least equal to the investment in the research itself. Extending this reasoning, interest in research ‘relevance’ may reflect a concern for accountability – linking research to the actor(s) for whom the research is performed and who will, ideally, put it to use.
In Canada, interest in research impact and relevance appears to have been felt most forcefully in the context of health services and policy research, which has long been encouraged to orient to the needs of policymakers, health system planners and related decision makers. More recently, there has been increased attention to ensuring that all forms of health research are ‘patient oriented’ – that is, that the research is prioritised, conducted and applied in ways that are accountable to this important end user. This call has been picked up on several fronts, including by the Canadian Institutes of Health Research (CIHR), which released its Strategy for Patient-Oriented Research (SPOR) in 2011. The SPOR vision “…is to demonstrably improve health outcomes and enhance patients’ health care experience through integration of evidence at all levels in the health care system ” [ 10 ]. In some respects, it represents a fundamental re-orientation for the primary funder of health research in Canada.
Though relevance is tacit in attention to research impact and the wider concern with mode 2 knowledge production, explicit attention to the meaning or measurement of research relevance is limited. The CAHS and AIHS frameworks, for example, acknowledge ‘relevance’ of health research but do not clearly define the term nor describe approaches for assessing it [ 4 , 8 ]. Rather, these frameworks emphasise the role of broad stakeholder engagement approaches and feedback mechanisms as methods for addressing relevance. For example, the AIHS framework notes the challenge of, and need to, move “ …beyond the collection of traditional scientific indicators […] to include measures of greater interest to the broader stakeholder community… ” [ 8 ] without stating explicitly how “ greater interest ” or related concepts such as relevance should be judged. As currently constructed, these RIA frameworks provide important advances in how we think about the impact of health research, but they were not intended to provide guidance specifically to the assessment of the relevance of health research.
Despite this lack of specific guidance on research relevance from a scholarly or measurement perspective, attention to it as a practical component of health research funding and organisation is evolving. There is, for example, growing use of ‘relevance assessment’ by research funders. The Canadian Health Services Research Foundation, in particular, was an innovator in incorporating relevance review into its applied research funding programmes, including promoting partnerships and knowledge translation (KT) with health system stakeholders [ 11 ]. Current applications for funding from the Institute of Gender and Health at CIHR go through ‘relevance review’ ( http://www.cihr-irsc.gc.ca/e/45212.html ). Similarly, applications for Ontario’s Health System Research Fund are judged based on ‘internal review of relevance and impact’ ( http://www.health.gov.on.ca/en/pro/ministry/research/cihr.aspx ). However, given the lack of conceptual clarity on research relevance, and in particular, how relevance assessment aligns with and differs from impact assessment, there is a critical gap in our understanding that has implications for both its contemporary and ongoing application and our ability to make sound research investment decisions.
This work was commissioned by the Ontario SPOR SUPPORT (Support for People and Patient-Oriented Research and Trials) Unit (OSSU) – one of several units established at provincial and regional levels across Canada to work with CIHR in pursuing the SPOR. Like other research organisations, OSSU saw the need to consider the relevance of the research it supported, and it established both scientific and relevance advisory committees as part of its original governance structure [ 12 ], tasking the latter to “ …develop a measure, or small set of strategic measures, that serves to inspire the Ontario research, implementation, provider and patient communities to come together to make a difference for patients ” [ 12 ]. In the spirit of research and scholarship, OSSU then asked what exactly this commitment to research ‘relevance’ entailed.
Our approach to answering this question involved the scholarly strategy of review and reflection. As with the early investigations into research impact assessment, we were surprised to find so little reflexive attention to the topic within the health research community [ 13 ]. We prepared a draft paper based on an exploratory review of literature from various fields, and gained from detailed and insightful analysis and critique at a roundtable discussion with a small group of key health research stakeholders. We also solicited review and feedback from a small sample of expert reviewers.
The structure of our paper is as follows. First, to ‘unpack’ the concept of relevance, we review theoretical literature and then consider practical work both from within and outside the health sector, to ask what has been argued and concluded about the nature of relevance and its appropriate assessment. Next, we outline a series of forward-looking considerations for assessing research relevance and conclude with reflections on how research relevance assessment fits with evolving interest in RIA.
Theoretical perspectives.
Before considering the relevance of health research, we need to step back and consider what we mean by the term ‘relevance’. A range of descriptors is often used to define relevance, including ‘pertinent to…’, ‘bearing upon…’, ‘connected with…’, or ‘appropriate to…’, ‘…the matter at hand’, as well as ‘germane’, ‘apropos’, ‘material’, ‘applicable’ and ‘satisfactory’. A large body of dedicated theoretical work on relevance, drawn from many fields and perspectives, such as computer science, information science, statistics/probability theory, artificial intelligence, cognitive science, epistemology, linguistics and jurisprudence [ 14 ], reflects its importance but also the challenge of establishing a common understanding of the term [ 14 , 15 ]. For example, Gärdenfors [ 16 ], in his discussion on the logic of relevance, noted that “ …relevance ought to be a central concept in the philosophy of science… ” given the position that “ …it is only relevant information that is of any importance… ” (p. 351). However, from a ‘research’ relevance perspective, the theoretical work on relevance has been linked to ‘information’, ‘evidence’, ‘reasoning’, ‘argument’ and ‘decision’ [ 15 – 18 ], each presenting variable framing that impedes practical definition or consistent comprehension of the term. Floridi [ 14 ] recently suggested that existing theories are “ …utterly useless when it comes to establish the actual relevance of some specific piece of information ” (p. 69), and went on to advance a ‘subjectivist’ interpretation, with relevance judged by the questioner. While a subjectivist approach to relevance is intuitively appealing, its contribution to the assessment of research relevance presents particular challenges that we will discuss later in the paper.
Another approach to unpacking relevance is to consider the theoretical model behind the broad-based research strategies that have governed research investments and policies in high-income countries since the end of the Second World War. For the better part of the 20th century, a linear model was the dominant conceptual framework, whereby basic research was viewed as a necessary input for applied research, which then led to development and production [ 19 , 20 ]. In the late 1990s, an alternate thesis was introduced when Stokes proposed a new model for broad-based research strategy – known as Pasteur’s Quadrant – that highlighted the conceptual relationship between the ‘quest to understand’ and ‘practical needs’ [ 21 ]. While some research is clearly focused on advances in basic research (e.g. Niels Bohr’s foundational research on atomic structure and quantum theory), and some research is clearly focused on applied problems (e.g. Thomas Edison’s practical inventions), Stokes emphasised the potential for use-inspired basic research (e.g. Louis Pasteur’s foundational research on microbiology that addressed contemporaneous population health challenges). Pasteur’s Quadrant invokes consideration of ‘relevance’, with some commentators framing the two-by-two relationship as the relevance for advancement of basic knowledge and the relevance for immediate application [ 22 ]. Stokes’ model adds conceptual insight on the role of relevance when considering the value of research to society; however, it was not intended to specifically conceptualise the term and does not distinguish it from other related concepts such as research impact or value. Therefore, to provide further insights, we next consider relevance in practical settings.
In the health sector, the idea that research should be ‘relevant’ is commonplace. Commitments to ‘knowledge translation’ and the ‘knowledge to action cycle’ [ 23 ] emphasise issues of relevance and provide considerable insight into approaches to ensuring research usability and use. At the same time, the health research community has given disproportionate attention to issues of research quality, with an emphasis on internal validity that may downplay external validity and suggest some tension between rigour and relevance. Thus, though the concept of relevance is of central importance to the health research enterprise, the failure to unpack it or explore it both theoretically and practically leaves room for misunderstanding and misapplication.
In the health sector, research relevance often arises as a practical question of the ‘fit’ between a body of knowledge or research approach and a specific field or issue (e.g. public health, primary healthcare, healthcare access, genomics, alternative healthcare, healthcare reform in rural areas). The results of two recent International Society for Pharmacoeconomics and Outcomes Research task forces take this approach. The task forces developed questionnaires to assess the relevance and credibility of research other than randomised controlled trials (e.g. observational research, meta-network analysis) to inform healthcare decision-making [ 24 , 25 ]. Both make similar observations about relevance, reinforcing the subjectivist approach noted earlier, and can be summarised by the following statement by Berger et al. [ 24 ]:
“ Relevance addresses whether the results of the study/apply [sic] to the setting of interest to the decision maker. It addresses issues of external validity similar to the population, interventions, comparators, outcomes, and setting framework from evidence based medicine. There is no correct answer for relevance. Relevance is determined by each decision maker, and the relevance assessment determined by one decision maker will not necessarily apply to other decision makers. Individual studies may be designed with the perspective of particular decision makers in mind (e.g. payer or provider) ” (p.148, emphasis added).
Research relevance in health is also noted in discussion and debate regarding the value of qualitative research relative to the more established forms of quantitative health research. For example, Mays and Pope [ 26 ] suggest that qualitative research can be assessed “… by two broad criteria: validity and relevance ”. Their further discussion provides some insight into the several ways that research might be relevant, suggesting that:
“[r] esearch can be relevant when it either adds to knowledge or increases the confidence with which existing knowledge is regarded. Another important dimension of relevance is the extent to which findings can be generalised beyond the setting in which they were generated ” [ 27 ].
The work of Mays and Pope positions research relevance amidst the longstanding tension between internal and external validity. This tension reflects opposing foci on internal validity as the quality/rigour of research methodology and external validity as the applicability/transferability of research to other settings or contexts. While external validity is not the only measure of relevance – as research may remain relevant to some contexts even when not generalisable to others – it is an important component, and one that has not always attracted sufficient attention. For example, the Canadian health research community has focused considerable practical attention on internal validity as a critical component of evidence for clinical and health policy decisions. Evidence-based medicine, the Cochrane Collaboration, the Canadian and United States task forces on preventive healthcare/services and a long list of aligned groups have developed and established many tools to assess the quality of research evidence (e.g. GRADE [ 28 ]), with a predominant focus on issues of internal validity, and an emphasis on evidence hierarchies that is sometimes seen to be incompatible with ‘real world’ relevance. The relative lack of similar approaches or tools that focus on external validity in health research is notable, though movements to marshal evidence in support of sound public policy, such as the Campbell Collaboration, have attended to issues of external validity in other areas of health and social policy [ 29 ]. Further, there are emerging approaches and tools for documenting the external validity of health research and facilitating its use [ 30 ]. 
For example, WHO has supported the development of workbooks to contextualise health systems guidance for different contexts [ 31 ] and the field of local applicability and transferability of research has emerged to facilitate the adaptation of interventions from one setting to another, including the development of some well-documented tools like RE-AIM [ 32 ].
Alongside these emerging approaches and tools sits the established field of KT. KT has a strong history in Canada with a distinctive feature being a reliance on stakeholder engagement to support a commitment to improve research relevance. For example, the AIHS framework relies heavily on KT and stakeholder engagement approaches as part of its RIA, describing the mobilisation of knowledge through “ …a process of interactions, feedback, and engagement using a variety of mechanisms (e.g. collaborations, partnerships, networks, knowledge brokering) with relevant target audiences (i.e. actors and performers) across the health sector ” ([ 8 ] p. 362). Experience in stakeholder engagement, particularly with clinical, management and policy decision-makers, has become fairly extensive and there is now increased attention on engaging patients as core stakeholders in health research. If relevance is truly subjective, then KT efforts (including engagement, dissemination, promotion, communication) would appear to represent reasonable approaches for articulating, conveying and improving research relevance. However, if there are underlying elements of relevance that are more universal, then there is a risk that KT efforts – and subjectivist approaches to ensuring relevance – are akin to commercial marketing or communication strategies where the aim is to ‘sell’ more product and/or generate more influence that may not align with a more objective lens.
In sum, the health research community in Canada has a longstanding history of critically appraising research quality based on study design and research methodology, with greater emphasis on internal rather than external validity. At the same time, there is established expertise in KT, emphasising engagement with research users and adaptation to settings or contexts of use – approaches that may imply a subjectivist interpretation of relevance. Thus, while relevance is an important concept for the health research enterprise, its use is largely tacit and taken for granted.
To unpack relevance further we consider some non-health sector perspectives that give attention to the term, often with formal definitions or taxonomies established. Examples include the legal, financial accounting, education and web search (information retrieval) sectors, each of which is briefly described below.
From a legal perspective, relevance has a specific meaning that relates to the admissibility of evidence in terms of its probative value (i.e. the extent to which evidence contributes to proving an important matter of fact) [ 33 ]. For example, a common objection to legal testimony or evidence is that it is ‘irrelevant’ [ 34 ]. Legal processes for considering the admissibility or legal-relevance of evidence are firmly established, requiring explicit declaration of evidentiary sources and direct consideration of that evidence as it relates both to a specific case and related historical precedents, something that is undeveloped in the health sector [ 35 ]. It is the formality, explicitness and retrospective nature of this process, which is directly associated with a specific case (or decision), that is characteristic of the consideration of relevance in the legal context.
Financial accounting provides another perspective on relevance. In this field, relevance is viewed as a fundamental component of generally accepted accounting principles. Relevance and materiality are emphasised such that accountants and auditors focus on financial information that meets the decision-making needs of users and is expected to affect their decisions. In financial accounting, ‘value relevance’ provides a more focused perspective on relevance, defined as “ …the ability of information disclosed by financial statements to capture and summarise firm value. Value relevance can be measured through the statistical relations between information presented by financial statements and stock market values or returns ” [ 36 ]. Similar to the legal perspective, the financial accounting perspective on relevance is set with a formal context, where the focal point (i.e. financial performance) is clear and principles (i.e. generally accepted accounting principles) and processes (i.e. financial reporting and auditing) are clearly established and monitored.
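The statistical notion of value relevance quoted above can be made concrete with a toy regression. The sketch below is purely illustrative: the per-firm figures are invented, and the model (regressing stock price on earnings per share and book value per share) is just one common specification; the explanatory power (R²) of such a fit is one conventional gauge of value relevance.

```python
import numpy as np

# Hypothetical per-firm data (all figures invented for illustration):
# earnings per share (EPS), book value per share (BVPS), and stock price.
eps = np.array([2.1, 3.4, 1.8, 4.0, 2.9, 3.1])
bvps = np.array([10.5, 14.2, 9.8, 16.1, 12.3, 13.0])
price = np.array([25.0, 38.5, 21.7, 44.2, 31.9, 34.4])

# A simple price-level regression: price ~ b0 + b1*EPS + b2*BVPS.
# 'Value relevance' is often gauged by how well accounting figures
# explain market values, i.e. the R^2 of this fit.
X = np.column_stack([np.ones_like(eps), eps, bvps])
coeffs, _, _, _ = np.linalg.lstsq(X, price, rcond=None)

fitted = X @ coeffs
ss_res = np.sum((price - fitted) ** 2)
ss_tot = np.sum((price - price.mean()) ** 2)
r_squared = 1 - ss_res / ss_tot
print(f"coefficients: {coeffs}")
print(f"R^2: {r_squared:.3f}")
```

A higher R² would be read as the financial statements being more value relevant to market participants; the point here is only the shape of the assessment, not any particular result.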
Education provides a slightly more expansive approach to operationalising relevance, given the more general aim of the enterprise. In the United States, the Glossary of Education Reform [ 37 ] notes that “ …the term relevance typically refers to learning experiences that are either directly applicable to the personal aspirations, interests, or cultural experiences of students (personal relevance) or that are connected in some way to real-world issues, problems, and contexts (life relevance) ”. They further state that “ personal relevance occurs when learning is connected to an individual student’s interests, aspirations, and life experiences ”, while “ life relevance occurs when learning is connected in some way to real-world issues, problems, and contexts outside of school ”. A similar framing of relevance in this context suggests that it “…extends the learning beyond the classroom by teaching students to apply what they are learning to real world situations ” [ 38 ]. While the education sector also makes numerous references to a ‘rigour and relevance’ dyad [ 39 ] in contrast to the dominance of the internal validity focus in healthcare, it is the prominent dual focus on ‘personal’ relevance (with its subjectivist orientation) and ‘life’ or ‘real world’ relevance (with its more universal orientation) that seems to most clearly define the education sector’s perspective on relevance.
One of the most intensive and competitive sectors focusing on relevance is the web search (or information retrieval) field. This includes dominant search engines such as Google and Bing, as well as a wide range of commercial and social media sites such as Amazon, eBay, Facebook and LinkedIn, that compete either directly or indirectly on their ability to identify relevant information in response to user queries. Therefore, the ability of these organisations to advance the theory and practice related to relevance is fundamental to their success. For example, Google was built upon the effectiveness of its search algorithm, which is in a constant state of evolution. Both explicit and implicit approaches to assess relevance are used to contribute to search algorithm refinements [ 40 ]. The explicit approach focuses on ‘relevance ratings’, whereby evaluators (e.g. human raters) are contracted to assess the degree of ‘helpfulness’ of search results paired to specific search queries [ 41 ]. The implicit approach to assess relevance monitors and aggregates search behaviour of millions of users who are likely unaware that their behaviour is being assessed. Google has more recently advanced ‘personalised relevance’, which uses past individual search behaviour to personalise/tailor future search results for the same individual. Pariser has critiqued this concept as “ the filter bubble ” [ 42 ], warning that Google’s intent to optimise search algorithms for personal relevance creates a “ …personal ecosystem of information… ” that limits the diversity of search results and promotes insularity. This personal relevance is situated within the pervasiveness of social media, which facilitates the advancement of ‘social relevance’. 
Personal and social relevance highlight two important orientations towards relevance – one built on increasingly detailed understanding of individual preferences and the other reflecting the growing power and increasing accessibility of crowd-sourced perspectives. Overall, web search has made important contributions to how we understand and operationalise relevance, including the use of increasingly sophisticated explicit and implicit feedback mechanisms and the ability to draw upon and analyse big data sets. Web search has also exposed the contrasting orientations of personal and social relevance that underscore the challenges of combining or integrating different relevance assessments.
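The explicit (rater) and implicit (behavioural) signals described above can be combined in many ways; the following minimal sketch, with invented ratings, invented click-through figures, and an assumed weighting, shows one simple blend of the two into a single relevance score used to rank results. It is not a description of any real search engine's algorithm.

```python
# Toy data (invented): explicit human-rater 'helpfulness' scores (0-1)
# and an implicit signal, the share of users who clicked each result.
explicit_ratings = {
    "result_a": 0.9,
    "result_b": 0.4,
    "result_c": 0.7,
}
click_through = {
    "result_a": 0.30,
    "result_b": 0.55,
    "result_c": 0.20,
}

def blended_relevance(result, w_explicit=0.6):
    """Weighted blend of explicit and implicit relevance signals
    (the 0.6/0.4 weighting is an arbitrary assumption)."""
    return (w_explicit * explicit_ratings[result]
            + (1 - w_explicit) * click_through[result])

ranking = sorted(explicit_ratings, key=blended_relevance, reverse=True)
print(ranking)  # results ordered from most to least relevant
```

‘Personalised relevance’ would, in this framing, amount to replacing the aggregate click-through signal with one conditioned on the individual user's past behaviour, which is precisely what Pariser's “filter bubble” critique targets.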
These non-health sector perspectives on relevance highlight several considerations. First, they reinforce general findings that point to perspective, decision context, timeliness and precision of focus or ‘fit’ as key elements of relevance. Additionally, they highlight a few distinctive considerations. The formalistic contexts of financial accounting and law emphasise issues such as precedent and legitimacy, implying that relevance in a research sense might require the demonstration of some legitimate or credible association between research and its use or user, among other considerations. Further, the complex consumerist world of social media highlights some of the challenges of a purely subjectivist definition of relevance. Whereas the International Society for Pharmacoeconomics and Outcomes Research guidance takes a subjectivist stance in suggesting that, “[t] here is no correct answer for relevance ” [ 24 ], the “ filter bubble ” criticised by Pariser [ 42 ] suggests otherwise. Relevance solely to the personally-perceived interests of a research user is unlikely to adequately serve the collective commitments to health and health equity that are especially germane to the health research enterprise.
To this point, we have endeavoured to unpack relevance from theoretical and practical perspectives. In light of these insights and in the context of persistent interest in research impact assessment and evolving interest in research relevance, we now turn to some specific forward-looking considerations for research relevance assessment (RRA).
The first consideration for RRA is the acknowledgement that research is only one of many sources of insight to inform the needs or actions of research users. A research user is influenced by a wide range of political, legal, media, economic and other contextual information, interactions and experiences, as well as prevailing organisational governance, leadership, culture and values that all serve to complement (and often dominate) any insights that might be derived from research [ 43 ]. This reality implies that ‘relevance’ has a different meaning for researchers and research users. Researchers are typically interested in the relevance of a specific research product or activity for identifiable actions of (potentially) multiple research users; relevance is here judged relative to both the perceived needs of research users, and the extent and content of other related research. In contrast, research users are typically focused on identifying multiple relevant inputs to guide a specific action, only some of which may be research; relevance is here judged relative to both the research user’s needs and the form and content of the other inputs.
Given these distinct orientations to research relevance, RRA needs to be explicit about its comparative lens. Clear distinctions should be made between relevance based on the merits of the research product or activity (researcher lens) and relevance based on the relative value of research compared to other research and non-research sources (research user lens). RRA provides an opportunity to build more robust ways to characterise and assess the contribution of research to research users, including a more systematic and transparent articulation of anticipated research uses (akin to the Research Councils UK’s ‘Pathways to Impact’ [ 44 ] or descriptions of planned study design and methodological approach published in study protocols/registrations for randomised controlled trials or systematic reviews).
The considerations noted above rely heavily on instrumental uses of research. Theoretically derived definitions of relevance, such as Floridi’s [ 14 ], tend to focus on the response to a specified question. This suggests a direct and tangible connection between research and its ‘use’. However, as Weiss [ 43 ] and others have observed, most types of research use are not instrumental, where use is documented and explicitly addresses a specific query or challenge for a research user. Rather, research use tends to be more conceptual, where use is indirect and evolves over time, or symbolic, where use may be politically or tactically motivated [ 43 ]. Research may also create externalities or unintended effects. For example, general research activity might support an engaged learning environment, interactive research relationships, and additional research-related discourse that provides benefits that are not attributable to any specific research product or activity. This has important ramifications for how research is funded and the role that relevance can play in that assessment. Ultimately, RRA needs to go beyond a singular understanding of research use as instrumental use, to develop better methods for capturing and assessing the relevance of the many non-instrumental uses of research.
Another closely related consideration for RRA is the temporal context. Almost all research is conducted in a temporally defined period. Yet, while the quality of research is typically characterised by its methodology, which is a static feature typically not subject to temporal variation (e.g. the assessed quality of a randomised controlled trial should be consistent over time), relevance of research can be considered at any time (e.g. prior to the initiation of a research study or at different points in time post-completion) and is therefore subject to dynamic perceptions as they pertain to evolving action or decision contexts. Cohen [ 15 ] suggests that “ …relevance, like reasoning, has a prospective dimension as well as a retrospective one. It helps prediction as well as explanation ” (p. 182). The important insight is that, in contrast to research quality, the relevance of a specific research product can change over time, making assessment of research relevance more challenging.
This requires RRA to acknowledge the temporal factor and its associated implications for research relevance. At minimum, RRA should specify the temporal context as either pre-research (e.g. proposal/funding stage) or post-research (e.g. after research results have been produced). RRA at the pre-research stage focuses on proposed inputs and hypothetical outputs and outcomes, and may be more likely to overestimate instrumental research use and underestimate non-instrumental use. RRA at the post-research stage focuses mainly on the importance and value of actual outputs and tangible results, and may capture more non-instrumental research use. The pre-research stage is clearly aligned with research funding/investment processes, while the post-research stage can contribute to retrospective return-on-investment calculations and more general research impact assessment. However, employing this simple temporal categorisation should not lead us to lose sight of the dynamic, iterative nature of research relevance and the opportunity to assess it at interim and ongoing stages that capture re-interpretations or re-applications of research findings over time.
An underlying theme in our review of relevance is subjectivity. Consider the broad scientific paradigms of positivism and interpretivism, which are typically aligned with research quality and research relevance, respectively. Research quality can be viewed as relating to characteristics or features that are assessed objectively, while research relevance may be seen as subjectively adjudicated. The subjective focus emphasises the variability of perspectives and contexts, and the recognition that different people may take different views of the relevance of a specific research product or activity. For RRA, this reinforces a user-centred orientation to relevance assessment that privileges the judgment of the interrogator and raises a key question: who is positioned as the main arbiter of research relevance?
However, while relevance may never be characterised as universal, it could be argued that it is not purely subjective either. Rather, relevance may be more consistent with an intersubjective understanding, one that emphasises the extent of agreement or shared understanding among individual subjective perspectives and thereby offers a way to bridge the personal and the universal. The intersubjective view, while not presenting an objective approach to measuring relevance, does provide a route towards a meaningful and structured assessment of research relevance. It also emphasises the importance of representation in forging the intersubjective judgments that guide the research enterprise.
This paper has unpacked research relevance from different perspectives and outlined key considerations for its assessment. Alongside research impact assessment, research relevance is increasingly important in justifying research investments and guiding strategic research planning. Indeed, judgments of ‘relevance’ are becoming a key component of the health research enterprise. However, consideration of relevance has been largely tacit in the health research community, often depending on unexplained interpretations of value, fit and potential for impact. Across its various uses in health research, the concept is sometimes treated as a synonym for research impact or positioned as a reliable predictor of later consequence. In many ways, research relevance appears to be a precursor to impact, a process or component of efforts to make rigorous research usable. However, relevance is neither a necessary nor a sufficient condition for impact. We expect that research that is relevant, and thus accountable to specific and legitimate users, will be impactful, but this may not be the case where other factors intervene. Likewise, we may expect that research that is impactful will be appropriately accountable, but again, this is not necessarily so. Ultimately, relevance stands apart from research impact. Like rigour, relevance is a complementary but distinctive dimension of what ensures ‘the good’ in health research.
While ‘relevance’ is ever-present, understanding of the concept in health research is emergent and not well codified. To improve this understanding, this paper outlines four key considerations for how research relevance assessments (1) orientate to, capture and compare research versus non-research sources, (2) consider both instrumental and non-instrumental uses of research, (3) accommodate dynamic, temporally shifting perspectives on research, and (4) align with an intersubjective understanding of relevance. We believe careful and explicit consideration of research relevance, guided by transparent principles and processes, is vital to gauge the overall value and impact of a wide range of individual and collective research efforts and investments. We hope this paper generates further discussion and debate to facilitate progress.
Abbreviations: AIHS: Alberta Innovates – Health Solutions; CAHS: Canadian Academy of Health Sciences; CHSRF: Canadian Health Services Research Foundation; CIHR: Canadian Institutes of Health Research; KT: knowledge translation; OSSU: Ontario SPOR SUPPORT Unit; RIA: research impact assessment; RRA: research relevance assessment; SPOR: Strategy for Patient-Oriented Research
Kleinert S, Horton R. How should medical science change? Lancet. 2014;383(9913):197–8.
Chalmers I, Bracken MB, Djulbegovic B, Garattini S, Grant J, Gülmezoglu AM, Howells DW, Ioannidis JPA, Oliver S. How to increase value and reduce waste when research priorities are set. Lancet. 2014;383(9912):156–65.
Chalmers I, Glasziou P. Avoidable waste in the production and reporting of research evidence. Lancet. 2009;374(9683):86–9.
Panel on the Return on Investments in Health Research. Making an impact: a preferred framework and indicators to measure returns on investment in health research. Ottawa: Canadian Academy of Health Sciences; 2009.
Banzi R, Moja L, Pistotti V, Facchini A, Liberati A. Conceptual frameworks and empirical approaches used to assess the impact of health research: an overview of reviews. Health Res Policy Syst. 2011;9:26.
Milat AJ, Bauman AE, Redman S. A narrative review of research impact assessment models and methods. Health Res Policy Syst. 2015;13:18.
Buxton M, Hanney S. How can payback from health services research be assessed? J Health Serv Res Policy. 1996;1:35–43.
Graham KER, Chorzempa HL, Valentine PA, et al. Evaluating health research impact: development and implementation of the Alberta Innovates – Health Solutions impact framework. Res Eval. 2012;21:354–67.
Nowotny H, Scott P, Gibbons M. Introduction: ‘Mode 2’ Revisited: The New Production of Knowledge. Minerva. 2003;41(3):179–94.
Canadian Institutes of Health Research. Strategy for Patient-Oriented Research. 2013. http://www.cihr-irsc.gc.ca/e/41204.html . Accessed 27 Jul 2014.
Lomas J. Preface: The first ones over the barricade. In: Potvin L, Armstrong P, editors. Shaping Academia for the Public Good: Critical Reflections on the CHSRF/CIHR Chair Program. Toronto: University of Toronto Press; 2013.
Ontario Ministry of Health and Long-Term Care – Community and Health Promotion Branch. Ontario Support for People and Patient-Oriented Research and Trials (SUPPORT) Unit: Business Plan. Toronto, ON: MOHLTC; 2013.
Buxton M. The payback of ‘Payback’: challenges in assessing research impact. Res Eval. 2011;20(3):259–60.
Floridi L. Understanding epistemic relevance. Erkenntnis. 2008;69(1):69–92.
Cohen LJ. Some steps towards a general theory of relevance. Synthese. 1994;101(2):171–85.
Gärdenfors P. On the logic of relevance. Synthese. 1978;37(3):351–67.
Schlesinger GN. Relevance. Theoria. 1986;57(1):57–67.
Keynes JM. A treatise on probability. London: MacMillan and Co. Limited; 1921.
Bush V. Science - the endless frontier. A report to the President on a program of postwar scientific research. Washington: National Science Foundation; 1945.
Dudley JM. Defending basic research. Nat Photonics. 2013;7:338–9.
Stokes DE. Pasteur's quadrant - basic science and technological innovation. Washington: Brookings Institution Press; 1997.
Tushman M, O'Reilly C. Research and relevance: implications of Pasteur's Quadrant for doctoral programs and faculty development. Acad Manag J. 2007;50(4):769–74.
Graham ID, Logan J, Harrison MB, Straus SE, Tetroe J, Caswell W, Robinson N. Lost in knowledge translation: time for a map? J Contin Educ Health Prof. 2006;26(1):13–24.
Berger ML, Martin BC, Husereau D, Worley K, Allen JD, Yang W, Quon NC, Mullins CD, Kahler KH, Crown W. A questionnaire to assess the relevance and credibility of observational studies to inform health care decision making: an ISPOR-AMCP-NPC Good Practice Task Force report. Value Health. 2014;17(2):143–56.
Jansen JP, Trikalinos T, Cappelleri JC, Daw J, Andes S, Eldessouki R, Salanti G. Indirect treatment comparison/network meta-analysis study questionnaire to assess relevance and credibility to inform health care decision making: an ISPOR-AMCP-NPC Good Practice Task Force report. Value Health. 2014;17(2):157–73.
Mays N, Pope C. Qualitative research in health care: assessing quality in qualitative research. BMJ. 2000;320(7226):50–2.
Pope C, Ziebland S, Mays N. Qualitative research in health care: analysing qualitative data. BMJ. 2000;320:114–6.
Guyatt GH, Oxman AD, Vist GE, Kunz R, Falck-Ytter Y, Alonso-Coello P, Schunemann HJ, for the GRADE Working Group. GRADE: an emerging consensus on rating quality of evidence and strength of recommendations. BMJ. 2008;336(7650):924–6.
Campbell Collaboration. Campbell Collaboration Systematic Reviews: Policies and Guidelines (Version 1.2). 2016. https://www.campbellcollaboration.org/library/campbell-collaboration-systematic-reviews-policies-and-guidelines.html . Accessed 27 Mar 2017.
Burchett H, Umoquit M, Dobrow M. How do we know when research from one setting can be useful in another? A review of external validity, applicability and transferability. J Health Serv Res Policy. 2011;16(4):238–44.
World Health Organization. WHO recommendations: Optimizing health worker roles to improve access to key maternal and newborn health interventions through task shifting. Annex 8 Contextualizing the guidelines – workbook. 2012. http://www.who.int/reproductivehealth/publications/maternal_perinatal_health/Annex_8_Contextualizing_Workbook.pdf?ua=1 . Accessed 27 Sep 2016.
Re-aim.org. What is RE-AIM. 2017. http://re-aim.org/about/what-is-re-aim . Accessed 27 Mar 2017.
Center for the Study of Language and Information, Stanford University. Stanford Encyclopedia of Philosophy: The legal concept of evidence. 2015. http://plato.stanford.edu/entries/evidence-legal/ . Accessed 27 Sep 2016.
Cornell University Law School. Rule 402. General admissibility of relevant evidence. 2016. https://www.law.cornell.edu/rules/fre/rule_402 . Accessed 27 Sep 2016.
Giacomini M. One of these things is not like the others: the idea of precedence in health technology assessment and coverage decisions. Milbank Q. 2005;83(2):193–223.
Karğın S. The Impact of IFRS on the Value Relevance of Accounting Information: Evidence from Turkish Firms. Int J Econ Finance. 2013;5(4):71–80.
Abbott S. Relevance. In: Abbott S, editor. The glossary of education reform. Portland: Great Schools Partnership; 2013.
Pearson. Rigor and relevance: an overview. 2014. http://www.californiareading.com/media/pdf/rigor_and_relevance.pdf . Accessed 12 Oct 2014.
International Center for Leadership in Education. The Rigor Relevance Framework. 2016. http://www.leadered.com/our-philosophy/rigor-relevance-framework.php . Accessed 27 Sep 2016.
Google. Google Inside Search – Algorithms. 2016. http://www.google.ca/insidesearch/howsearchworks/algorithms.html . Accessed 27 Sep 2016.
Google. Search Quality Evaluator Guidelines. 2016. Accessed 28 Mar 2016.
Pariser E. The Filter Bubble: What the Internet is Hiding from You. New York: Penguin Group Inc.; 2011.
Weiss C. The many meanings of research utilization. Public Adm Rev. 1979;39(5):426–31.
Research Councils UK. Pathways to Impact. 2014. http://www.rcuk.ac.uk/innovation/impacts . Accessed 22 Feb 2017.
We acknowledge and appreciate the contributions of participants of a roundtable discussion to gather feedback on an earlier version of this paper. Participants included Simon Denegri, National Director for Public Participation and Engagement in Research, National Institute for Health Research (NIHR) UK, and Chair of INVOLVE, UK; Lee Fairclough, Vice-President, Quality Improvement, Health Quality Ontario; Michael Hillmer, Director, Planning, Research and Analysis Branch, Ontario Ministry of Health and Long-Term Care; John McLaughlin, Chief Science Officer and Senior Scientist, Public Health Ontario; Allison Paprica, Director, Strategic Partnerships, ICES; Michael Schull, President and CEO, ICES; and Vasanthi Srinivasan, Executive Director, Ontario Strategy for Patient-Oriented Research (SPOR) SUPPORT Unit (OSSU). We also want to thank John Lavis of the McMaster Health Forum for his very helpful comments on an earlier draft. Though we owe these individuals and organisations many thanks for their insights and support, we alone are responsible for the final product.
This work was commissioned by the Ontario SPOR Support Unit (OSSU). The executive director of the OSSU was one of the participants in a roundtable discussion to gather feedback on an earlier version of this paper, but beyond that, the OSSU did not have any role in the design of the study, collection, analysis or interpretation of the data, or writing of the manuscript.
Not applicable. No datasets were generated or analysed during the development of the article.
ADB acquired funding for the study. MJD, FAM and ADB conceptualised the study. MJD, FAM, CF and ADB participated in the review and writing of the manuscript. MJD, FAM and ADB participated in the roundtable discussion. MJD, FAM and ADB reviewed and approved the final version of the manuscript (CF passed away prior to submission of the manuscript).
The authors declare that they have no competing interests.
Not applicable.
Publisher’s Note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Authors and affiliations.
Institute of Health Policy, Management and Evaluation, Dalla Lana School of Public Health, University of Toronto, 155 College Street, 4th Floor, Toronto, ON, M5T 3M6, Canada
Mark J. Dobrow, Fiona A. Miller & Adalsteinn D. Brown
Alberta Innovates - Health Solutions, Edmonton, Alberta, Canada
Correspondence to Mark J. Dobrow .
This article is dedicated to the memory of Dr Cy Frank, our co-author and esteemed colleague, whose untimely death occurred midway through development of this work. Among his many interests, Dr Frank was a champion for improving understanding of research impact assessment and provided many insights on the concept of research relevance, some of which we expand upon in this article. His many contributions to the health sector will live on, but he will be greatly missed.
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License ( http://creativecommons.org/licenses/by/4.0/ ), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver ( http://creativecommons.org/publicdomain/zero/1.0/ ) applies to the data made available in this article, unless otherwise stated.
Cite this article.
Dobrow, M.J., Miller, F.A., Frank, C. et al. Understanding relevance of health research: considerations in the context of research impact assessment. Health Res Policy Sys 15 , 31 (2017). https://doi.org/10.1186/s12961-017-0188-6
Received : 27 September 2016
Accepted : 07 March 2017
Published : 17 April 2017
DOI : https://doi.org/10.1186/s12961-017-0188-6
ISSN: 1478-4505
Objective To perform critical methodological assessments on designs, outcomes, quality and implementation limitations of studies evaluating the impact of malaria rapid diagnostic tests (mRDTs) on patient-important outcomes in sub-Saharan Africa.
Design A systematic review of study methods.
Data sources MEDLINE, EMBASE, Cochrane Library, African Index Medicus and clinical trial registries were searched up to May 2022.
Eligibility criteria Primary quantitative studies that compared mRDTs to alternative diagnostic tests for malaria on patient-important outcomes within sub-Sahara Africa.
Data extraction and synthesis Studies were sought by an information specialist; two reviewers independently screened records for eligibility and extracted data using a predesigned form in Covidence. Methodological quality was assessed using the National Institutes of Health tools. Descriptive statistics and thematic analysis guided by the Supporting the Use of Research Evidence framework were used for analysis. Findings were presented narratively, graphically and by quality ratings.
Results Our search yielded 4717 records, of which we included 24 quantitative studies: 15 (62.5%) experimental, 5 (20.8%) quasi-experimental and 4 (16.7%) observational. Most studies (17, 70.8%) were conducted within government-owned facilities, and 21 (87.5%) of the 24 measured the therapeutic impact of mRDTs. Prescription patterns were the most commonly reported outcome (20, 83.3%). Only 13 (54.2%) studies reported statistically significant findings, of which 11 (45.8%) demonstrated mRDTs’ potential to reduce over-prescription of antimalarials. Most studies (17, 70.8%) were of good methodological quality; however, reporting of sample size justification needs improvement. Reported implementation limitations mostly concerned health system constraints, unacceptability of the test to patients and low trust among health providers.
Conclusion Impact evaluations of mRDTs in sub-Saharan Africa are mostly randomised trials measuring mRDTs’ effect on therapeutic outcomes in real-life settings. Though their methodological quality remains good, process evaluations can be incorporated to assess how contextual concerns influence their interpretation and implementation.
PROSPERO registration number CRD42018083816.
Data are available upon reasonable request. Our review’s data on the data extraction template forms, including data extracted from the included studies, will be made available by the corresponding author, JAO, upon reasonable request.
This is an open access article distributed in accordance with the Creative Commons Attribution 4.0 Unported (CC BY 4.0) license, which permits others to copy, redistribute, remix, transform and build upon this work for any purpose, provided the original work is properly cited, a link to the licence is given, and indication of whether changes were made. See: https://creativecommons.org/licenses/by/4.0/ .
https://doi.org/10.1136/bmjopen-2023-077361
We conducted a robust literature search to obtain a recent, representative sample of articles for methodological assessment.
In addition to the methodology of studies, we evaluated the implementation challenges that limit the effect of the tests.
We included only studies published in English, which might limit the generalisability of the findings, but we believe this is a representative sample for investigating the methods used to assess the impact of malaria rapid diagnostic tests.
The malaria burden remains high in sub-Saharan Africa despite the several interventions deployed to control it. 1 Interventions include, but are not limited to, parasitological confirmation of malaria infection using malaria rapid diagnostic tests (mRDTs) and effective treatment using artemisinin-based combination therapies. 2 3 In 2021, there were 247 million cases of malaria reported globally, an increase of 2 million from the 245 million cases reported in 2020. 4 This increase was mainly reported in sub-Saharan Africa. 4 Of all global malaria cases in 2021, 48.1% were reported in sub-Saharan Africa—Nigeria (26.6%), the Democratic Republic of the Congo (DRC) (12.3%), Uganda (5.1%) and Mozambique (4.1%). 4–6 Similarly, 51.9% of worldwide malaria deaths were reported in sub-Saharan Africa—Nigeria (31.3%), the DRC (12.6%), the United Republic of Tanzania (4.1%) and Niger (3.9%). 4–6
Following the WHO’s 2010 policy recommending parasitological diagnosis of malaria before treatment, the availability of and access to mRDTs have increased significantly. 7 For instance, manufacturers sold 3.5 billion mRDTs globally between 2010 and 2021, with almost 82% of these sales in sub-Saharan African countries. 4 In the same period, National Malaria Control Programmes distributed 2.4 billion mRDTs globally, 88% of them in sub-Saharan Africa. 4 This demonstrates impressive strides in access to diagnostic services in the public sector but does not reveal the extent of test access in the private and retail sectors. Published literature indicates that over-the-counter (OTC) malaria medications or treatment in private retail drug stores are often the first point of care for fever or acute illness in African adults and children. 7–9 Use of mRDTs in private drug outlets remains low, leading to over-prescription of antimalarials. Increased access to mRDTs may minimise the overuse of OTC medicines to treat malaria.
Universal access to malaria diagnosis using quality-assured diagnostic tests is a crucial pillar of the WHO’s Global Technical Strategy (GTS) for malaria control and elimination. 4 10 11 Assessing the role of mRDTs in achieving the GTS goals and their impact on patient-important outcomes is essential in effectively guiding their future evaluation and programmatic scale-up. 12 Rapidly and accurately identifying those with the disease in a population is crucial to administering timely and appropriate treatment. It plays a key role in effective disease management, control and surveillance.
Impact evaluations determine if and how well a programme or intervention works. Well-conducted impact evaluations can inform the scale-up of interventions such as mRDTs, including the costs associated with implementation. Recent secondary research (systematic reviews of the impact of mRDTs on patient-important outcomes) 13 has focused on assessing mRDTs’ effects without considering how well the individual studies were conducted. Odaga et al conducted a Cochrane review comparing mRDTs to clinical diagnosis; in the seven included trials, mRDTs substantially reduced antimalarial prescription and improved patient health outcomes. However, they did not assess the contextual factors that influence the effective implementation of the studies. There is thus a need to assess the methodological implementation of studies evaluating the impact of mRDTs. To our knowledge, no study has investigated the implementation methods of such studies.
We aimed to perform critical methodological assessments of the designs, outcomes, quality and implementation limitations of studies evaluating the impact of mRDTs, compared with other malaria diagnostic tests, on patient-important outcomes among persons suspected of malaria in sub-Saharan Africa. We defined patient-important outcomes as: characteristics valued by patients that directly reflect how they feel, function or survive (direct downstream health outcomes such as morbidity, mortality and quality of life), and those that lie on the causal pathway through which a test can affect a patient’s health and thus predict patient health outcomes (indirect upstream outcomes such as time to diagnosis, prescription patterns of antimalarials and antimicrobials, and patient adherence). 14
We prepared this manuscript according to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA 2020) reporting guideline 15 ( online supplemental files 1; 2 ). The protocol is registered with the International Prospective Register of Systematic Reviews and was last updated in June 2022. The protocol is also available as a preprint in the Open Science Network repositories. 12
Patient and public involvement.
Criteria for including studies in this review.
Study designs.
We included primary quantitative studies published in English. We included observational and experimental studies in either controlled or uncontrolled settings. We did not limit trials by unit of randomisation (individual or cluster). We extracted qualitative data on implementation limitations from the quantitative studies. We excluded studies that only provided test accuracy statistics without evaluating the tests’ impact on patient-important outcomes, as well as modelling studies. We also excluded editorials, opinion pieces, non-research reports, theoretical studies, secondary quantitative studies, reports, case studies, case series and abstracts with insufficient information or no full text available, as the methodology of these could not be fully appraised.
We defined our population as people suspected of having malaria infection caused by any of the four human malaria parasites (Plasmodium falciparum, P. malariae, P. ovale and P. vivax) who reside in any sub-Saharan African country, regardless of age, sex or disease severity.
We restricted studies for inclusion to those assessing mRDTs, regardless of the test type or the manufacturer.
We included studies comparing mRDTs to microscopy, molecular diagnosis (PCR) or clinical/presumptive/routine diagnosis.
We included studies reporting on at least one or more patient-important outcomes. We adopted the conceptual framework for the classification of these outcomes as described by Schumacher et al. 16 Further details regarding the classification are available in our protocol. 12
Measures of the diagnostic impact that indirectly assess the effect of mRDTs on the diagnostic process, such as time to diagnosis/turn-around time and prediagnostic loss to follow-up.
Measures of the therapeutic impact that indirectly assess the effect of mRDTs on treatment decisions, such as time to treatment, pretreatment loss to follow-up, antimalarial/antibiotics prescription patterns and patient adherence to the test results.
Measures of the health impact that directly assess the effect of mRDTs on the patient’s health, such as mortality, morbidity, symptom resolution, quality of life and patient health costs.
Electronic searches.
Given the review’s purpose of assessing the methodology of existing studies, we searched the following electronic databases for a representative sample up to May 2022: MEDLINE, EMBASE, the Cochrane Library and African Index Medicus. We also searched clinical trial registries, including ClinicalTrials.gov, the meta-register of controlled trials, the WHO trials register and the Pan African Clinical Trials Registry. We applied a broad search strategy that included the following key terms: “Malaria”, “Diagnosis”, “Rapid diagnostic test”, “Impact”, “Outcome” and their associated synonyms. The full search strategy is provided in online supplemental file 2 .
We searched reference lists and citations of relevant systematic reviews that assessed the impact of mRDTs on patient-important outcomes. We also checked our search output for conference proceedings.
Two reviewers independently screened the titles and abstracts of the search output and identified potentially eligible full texts using Covidence—an online platform for systematic reviews. 17 We resolved any differences or conflicts through discussion among the reviewers or by consulting a senior reviewer.
Two reviewers independently extracted data from included studies using a predesigned, standard data extraction form in Covidence. 17 We piloted the form on two potentially eligible studies before use and resolved any differences or conflicts through discussion among the reviewers or by consulting a senior reviewer. The extracted study information included the following:
General study details, including first author, year, title, geographical location(s), population, target condition and disease seasonality.
Study design details such as the type of study, intervention, comparator, prediagnostic, pretreatment and post-treatment loss to follow-up, outcome measures and results for outcome measures (effect size and precision). Study design issues were also considered, including sample size, study setting, inclusion criteria and study recruitment.
The quality assessment of the included studies was performed using the National Institutes of Health (NIH) quality assessment tools 18 ( online supplemental file 3 ).
The implementation challenges, as reported by study authors in the methods and the discussion sections, were extracted according to the four main domains of the Supporting the Use of Research Evidence (SURE) framework for identifying barriers and enablers to health systems: recipient of care, providers of care, health system constraints and sociopolitical constraints 19 ( online supplemental file 4 ).
We assessed the methodological quality of included studies in Covidence. 17 We adopted two NIH quality assessment tools 18 for experimental and observational designs. Two reviewers independently assessed the methodological quality of studies, stratified by study design. We resolved any differences or conflicts through discussion among the reviewers or by consulting a senior reviewer. Our quality evaluation was based on the number of quality criteria a study met regarding its internal validity, and the overall score was used to gauge the study’s methodological quality. We did not exclude studies based on this evaluation; instead, we used our assessment to explain the methodological issues affecting impact studies of mRDTs.
As this was a methodological review, we did not pool results from individual studies; instead, we used descriptive statistics and synthesised our results narratively and graphically. All included studies were considered in the narrative synthesis.
We started our analysis by listing and classifying identified study designs and patient-important outcomes according to similarities. Stratified by study design, we used descriptive statistics for summarising key study characteristics. Descriptive analysis was done using STATA V.17 (Stata Corp, College Station, TX).
We used the thematic framework analysis approach to analyse and synthesise the qualitative data to enhance our understanding of why the health stakeholders thought, felt and behaved as they did. 20 We applied the following steps: familiarisation with data, selection of a thematic framework (SURE), 19 coding themes, charting, mapping and interpreting identified themes.
A summary of our study selection has been provided in figure 1 . Our search yielded 4717 records as of June 2022. After removing 17 duplicates, we screened 4700 studies based on their titles and abstracts and excluded 4566 records. After that, we retrieved 134 full texts and screened them against the eligibility criteria. We excluded 110 studies. The characteristics of excluded studies are shown in online supplemental file 5 . Therefore, we included 24 studies in this systematic review.
Preferred Reporting Items for Systematic Reviews and Meta-Analyses 2020 flow diagram showing the study selection process.
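The study-selection counts above are internally consistent; a minimal arithmetic sketch of the flow (variable names are ours, for illustration only, not from the review):

```python
# PRISMA-style flow counts reported in the review, checked step by step.
records_identified = 4717      # database and registry search output
duplicates_removed = 17
screened = records_identified - duplicates_removed   # titles/abstracts screened
excluded_at_screening = 4566
full_texts_assessed = screened - excluded_at_screening
full_texts_excluded = 110
included = full_texts_assessed - full_texts_excluded

assert screened == 4700
assert full_texts_assessed == 134
assert included == 24
print(screened, full_texts_assessed, included)  # 4700 134 24
```

Each stage subtracts the exclusions reported in the text, arriving at the 24 included studies.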
Study characteristics are summarised in online supplemental file 6 . Studies included in this review were conducted in Ghana (7, 29.2%), Uganda (7, 29.2%), Tanzania (6, 25%), Burkina Faso (3, 12.5%), Nigeria (2, 8.3%) and Zambia (1, 4.2%). Most studies (16, 66.7%) were conducted on mixed populations of children and adults, while the remaining 8 (33.3%) involved children alone. All 24 studies (100%) tested mRDTs as the intervention. Most studies (18, 75%) compared mRDTs to presumptive treatment/clinical diagnosis/clinical judgement, while 7 (29.2%) used microscopy and 1 (4.2%) used routine care as the comparator. No study used PCR as a comparator.
Of all included studies, 17 (70.8%) were carried out in rural areas within government-owned facilities, 7 (29.2%) in urban areas and 2 (8.3%) in peri-urban areas. Few studies (6, 25%) were conducted in privately owned proprietary facilities. Most studies (15, 62.5%) were conducted in health facilities and only 9 (37.5%) within communities; 9 (37.5%) were conducted in health centres and 7 (29.2%) in hospitals. Most studies (15, 62.5%) were conducted during the high malaria transmission season, 9 (37.5%) during the low season and 4 (16.7%) during the moderate season. P. falciparum was the most common malaria parasite species (21, 87.5%).
We included multiple-armed studies with an intervention and a comparator (online supplemental file 6). Of the 24 studies, (15, 62.5%) were experimental designs, of which (10, 41.7%) were cluster randomised controlled trials, (4, 16.7%) were individually randomised controlled trials and (1, 4.2%) was a randomised crossover trial. A further (5, 20.8%) were quasi-experimental designs (non-randomised studies of interventions), of which (4, 16.7%) were pre-post/before-and-after studies and (1, 4.2%) was a non-randomised crossover trial. The remaining (4, 16.7%) were observational, of which (3, 12.5%) were cross-sectional designs and (1, 4.2%) was a cohort study.
Patient-important outcome measures and individual study findings are summarised in online supplemental file 7. Of the 24 included studies, (21, 87.5%) measured the therapeutic impact of mRDTs, (13, 54.2%) evaluated their health impact and only (1, 4.2%) assessed their diagnostic impact. Only (13, 54.2%) of all studies reported statistically significant findings.
Of the included studies, (20, 83.3%) reported on antimalarial or antibiotic prescription patterns. Patients' adherence to test results was reported by (3, 12.5%) studies, the time taken to initiate treatment by (2, 8.3%) and pretreatment loss to follow-up by (1, 4.2%). Studies reporting statistically significant findings on prescription patterns were (12, 50%): (11, 45.8%) demonstrated mRDTs' potential to reduce over-prescription of antimalarials, while (1, 4.2%) reported increased antimalarial prescription in the mRDT arm. Two further studies reported statistically significant findings: one (4.2%) found that patients' adherence to test results was poor in the mRDT arm, while the other (4.2%) found that mRDTs reduced the time to offer treatment.
Of the included studies, (6, 25%) reported on mortality and (5, 20.8%) on symptom resolution. Patient health costs were reported by (4, 16.7%) studies, while patient referral and clinical re-attendance rates were reported by (2, 8.3%) each. Few (3, 12.5%) studies reported statistically significant findings on health impact, showing that mRDTs improved patients' health outcomes by reducing morbidity.
The time taken to diagnose patients with malaria was reported by (1, 4.2%) study, in which mRDTs reduced the time to diagnosis, although the finding was not statistically significant.
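The (n, %) pairs quoted throughout these results are simple proportions of the 24 included studies, rounded to one decimal place; a one-line helper reproduces them:

```python
def pct(n: int, total: int = 24) -> float:
    """Percentage of the 24 included studies, rounded to one decimal place."""
    return round(100 * n / total, 1)

# Spot checks against values quoted in the results text.
print(pct(7))   # 29.2
print(pct(17))  # 70.8
print(pct(1))   # 4.2
print(pct(21))  # 87.5
```

Recomputing reported percentages this way is a cheap consistency check when extracting counts from reviews.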
The themes identified among included studies according to the SURE framework 19 are presented in table 1. Most themes (n=7, 50%) emerged from the health system constraints domain, while only one theme fell under the social and political constraints domain. Two themes, human resources and patients' attitudes, were dominant: a lack of qualified staff at some study sites and patients' preference for diagnostic tests other than mRDTs hindered the effective implementation of five studies.
Implementation challenges reported by the included studies
The methodological quality of the included studies is summarised in figures 2 and 3. All studies assessed their outcomes validly and reliably and implemented them consistently across all participants. Some studies did not provide adequate information about loss to follow-up, and blinding was not feasible. Overall, (17, 70.8%) were of good methodological quality, of which (11, 45.8%) were experimental, (3, 12.5%) quasi-experimental and (3, 12.5%) observational studies. Concerns regarding patient non-adherence to treatment were reported in some studies. Sample size justification, which is crucial for detecting differences in the measured primary outcomes, was poorly reported in most studies. A detailed summary of each study's performance is available in online supplemental files 8 and 9.
Quality assessment of controlled intervention study designs. NIH, National Institutes of Health.
Quality assessment of observational study designs. NIH, National Institutes of Health.
In this methodological systematic review, we assessed the designs, patient-important outcomes, implementation challenges and methodological quality of studies evaluating the impact of mRDTs on patient-important outcomes within sub-Saharan Africa. We found that evidence of mRDTs' impact on patient-important outcomes came from just a few (six) Western, Eastern and Southern African countries. Few studies were done on children alone; most enrolled mixed populations in rural settings within government-owned hospitals, and few were conducted within community health posts. Included studies assessed mRDTs' impact compared with either microscopy or clinical diagnosis, with the majority carried out during high malaria transmission seasons in areas where P. falciparum predominates. Included studies were primary comparative designs: experimental designs were the majority, followed by quasi-experimental and observational designs.
While most studies evaluated the therapeutic impact of mRDTs by measuring antimalarial/antibiotic prescription patterns, few assessed the test's health and diagnostic impact. Few studies reported statistically significant findings, mainly reduced antimalarial prescription attributable to mRDTs. Most studies were of good quality, but quality concerns included inadequate information about loss to follow-up, the inability to blind participants/providers/investigators, patients' poor adherence to the treatment options provided as guided by the predefined study protocols and a lack of proper sample size justification. Key implementation limitations included inadequate human resources, a lack of facilities, patients' non-acceptance of mRDTs, little consumer knowledge of the test and providers' low confidence in negative mRDT results.
Schumacher et al 16 conducted a similar study focusing on the impact of tuberculosis molecular tests, although, unlike ours, they did not focus on implementation challenges. Similar to our results, they identified that evidence of the impact of diagnostic tests comes from just a small number of countries within a particular setting. Likewise, most studies evaluating the impact of diagnostic tests are done in health facilities such as hospitals rather than in the community. Our finding that the choice of study design in diagnostic research involves trade-offs is in line with Schumacher's review, as is the observation that experimental designs are mostly preferred in assessing diagnostic test impact, followed by quasi-experimental studies, mainly pre-post studies conducted before and after the introduction of the intervention. 16 Our findings also agree that observational designs are the least adopted in evaluating diagnostic impact. 16 Similarly, our findings concur with Schumacher et al that it may be worthwhile to explore other designs that combine qualitative and quantitative methods, that is, mixed-methods designs, as these can create a better understanding of a test's impact in a pragmatic way. 16
Our finding that studies assess the impact of diagnostic tests on patients indirectly, by measuring therapeutic impact rather than direct health impact, agrees with Schumacher et al. 16 However, in this systematic review 'prescription patterns' were most commonly reported, in contrast to Schumacher et al, where 'time to treatment' was by far the most common. 16 Similar to our finding, Schumacher et al determined that there is a trade-off between the choice of design and the fulfilment of criteria set forth to protect a study's internal validity. 16 While Schumacher et al investigated risk of bias, our review focused on methodological quality. 16
Diagnostic impact studies are complex to implement despite being crucial to any health system seeking to roll out universal health coverage programmes. 21 Unlike therapeutic interventions that directly affect outcomes, access to and effective implementation of diagnostic testing are influenced by several factors. 22 While it is easier to measure indirect upstream outcomes to quantify mRDTs' impact on diagnosis and treatment options, downstream measures such as morbidity (symptom resolution, clinical re-attendance and referrals), mortality and patient health costs 22 are key to improving value-based care. Contextual factors such as providers' lack of trust in the test's credibility can negate its positive attributes, such as good performance. This is a problem facing health systems rolling out mRDTs, as providers often distrust negative mRDT results, perceiving them to be falsely negative. 16 22 Similarly, a lack of essential facilities and human resources can hinder a true estimation of the value mRDTs contribute to patients' health in resource-limited areas.
We conducted a robust literature search to obtain a recent, representative sample of articles for methodological assessment. In addition to the methodology of the studies, we evaluated the implementation challenges that limit the effect of the tests. Although we included only studies published in English, which could affect the generalisability of these findings, we believe this is a representative sample. Included studies came from just a few countries within sub-Saharan Africa, which could limit generalisability to other countries within the region, so the findings presented herein should be interpreted with caution. The limited diversity of study populations, interventions and outcome measures, owing to the few countries represented, should also be considered when interpreting our findings.
Health system concerns in anglophone and francophone countries in sub-Saharan Africa are similar. 23 Studies did not report on blinding, but this did not affect their methodological quality, since the nature of the intervention means that providers necessarily know which test is being used, making blinding infeasible. Our quality assessment, particularly of quasi-experimental studies, was limited by poor reporting of items such as randomisation and blinding of participants, providers and outcome assessors; authors are therefore encouraged to report study findings according to the relevant reporting guidelines. 24 Most studies did not justify their sample sizes, which could have compromised the validity of findings by influencing the precision and reliability of estimates: where the sample size is inadequate, reliability and generalisability become limited because estimates are imprecise, with broad CIs. Some studies also reported poor adherence to protocols, which could have reduced the effective sample size and overall statistical power, further limiting validity.
Controlling the malaria epidemic in high-burden settings in sub-Saharan Africa will require the effective implementation of tests that do more than provide incremental benefit over current testing strategies. Contextual factors affecting test performance need to be considered a priori, with measures introduced to mitigate their effect on implementing mRDTs. Process evaluations 25 can be incorporated into, or run alongside, quantitative and experimental studies to determine whether the tests have been implemented as intended and have produced the expected outputs, and to assess contextual challenges that could influence the design. They can also help decision-makers ascertain whether mRDTs would have a similar impact if adopted in a different context; process evaluations should therefore be performed, and performed in a variety of contexts. It is prudent that patient-important outcomes be measured alongside process evaluations to better understand how to implement mRDTs. It may also be worthwhile to focus on methodological research that guides impact evaluation reporting, particularly research that considers contextual factors. Future studies on the impact of mRDTs could be improved by using mixed-methods designs, which might provide richer data interpretation and insights into implementation challenges, and by providing clear justification of the sample size to ensure there is enough power to detect a significant difference.
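As an illustration of the kind of a priori sample size justification recommended above, the standard normal-approximation formula for comparing two proportions can be applied. The effect size used here (antimalarial prescription falling from 80% under clinical diagnosis to 60% under mRDTs) is purely hypothetical, chosen only to show the calculation:

```python
import math
from statistics import NormalDist

def n_per_arm(p1: float, p2: float, alpha: float = 0.05, power: float = 0.8) -> int:
    """Per-arm sample size for a two-sided test comparing two proportions
    (normal approximation), rounded up to the next whole participant."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)   # critical value for alpha
    z_b = NormalDist().inv_cdf(power)           # critical value for power
    var = p1 * (1 - p1) + p2 * (1 - p2)
    return math.ceil((z_a + z_b) ** 2 * var / (p1 - p2) ** 2)

# Hypothetical scenario: 80% prescription in the clinical-diagnosis arm vs
# 60% in the mRDT arm, 5% two-sided significance, 80% power.
print(n_per_arm(0.80, 0.60))  # 79 participants per arm
```

Note that cluster randomised trials, the most common design among the included studies, would additionally need this figure inflated by a design effect to account for within-cluster correlation.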
Most studies evaluating mRDTs' impact on patient-important outcomes in sub-Saharan Africa are randomised trials of good methodological quality conducted in real-life settings. The therapeutic effect of mRDTs is by far the most common measure of their impact. Quality issues include poor reporting of sample size justification and of statistically significant findings. Studies measuring patient-important outcomes need to account for contextual factors, such as inadequate resources, patients' non-acceptance of mRDTs and providers' low confidence in negative mRDT results, which hinder the effective implementation of impact-evaluating studies. Process evaluations can be incorporated into experimental studies to assess contextual challenges that could influence the design.
Patient consent for publication Not applicable.
Acknowledgments We acknowledge the information search specialist Vittoria Lutje for designing the search strategy and conducting the literature searches.
X @AkothJenifer, @sagamcaleb1
Contributors Concept of the study: EO. Drafting of the initial manuscript: JAO. Intellectual input on versions of the manuscript: JAO, LMW, CKS, SK, EO. Study supervision: SK, EO. Approving final draft of the manuscript: JAO, LMW, CKS, SK, EO. Guarantor: JAO.
Funding EO is funded under the UK MRC African Research Leaders award (MR/T008768/1). This award is jointly funded by the UK Medical Research Council (MRC) and the UK Foreign, Commonwealth & Development Office (FCDO) under the MRC/FCDO Concordat agreement. It is also part of the EDCTP2 programme supported by the European Union. This publication is associated with the Research, Evidence and Development Initiative (READ-It). READ-It (project number 300342-104) is funded by UK aid from the UK government; however, the views expressed do not necessarily reflect the UK government’s official policies. The funding organisations had no role in the development of this review.
Competing interests None declared.
Patient and public involvement Patients and/or the public were not involved in the design, or conduct, or reporting, or dissemination plans of this research.
Provenance and peer review Not commissioned; externally peer reviewed.
Supplemental material This content has been supplied by the author(s). It has not been vetted by BMJ Publishing Group Limited (BMJ) and may not have been peer-reviewed. Any opinions or recommendations discussed are solely those of the author(s) and are not endorsed by BMJ. BMJ disclaims all liability and responsibility arising from any reliance placed on the content. Where the content includes any translated material, BMJ does not warrant the accuracy and reliability of the translations (including but not limited to local regulations, clinical guidelines, terminology, drug names and drug dosages), and is not responsible for any error and/or omissions arising from translation and adaptation or otherwise.
2. Issues and the 2024 election
As concerns around the state of the economy and inflation continue, about eight-in-ten registered voters (81%) say the economy will be very important to their vote in the 2024 presidential election.
While the economy is the top issue among voters, a large majority (69%) cite at least five of the 10 issues asked about in the survey as very important to their vote.
There are wide differences between voters who support Harris and Trump when it comes to the issues.
Among Trump supporters, the economy (93%), immigration (82%) and violent crime (76%) are the leading issues. Just 18% of Trump supporters say racial and ethnic inequality is very important. And even fewer say climate change is very important (11%).
For Harris supporters, issues such as health care (76%) and Supreme Court appointments (73%) are of top importance. Large majorities also cite the economy (68%) and abortion (67%) as very important to their vote in the election.
Most voters cite several issues as very important to their vote this November. Very few – just 5% – say only one issue or no issues are highly important.
Majorities of both Harris supporters (71%) and Trump supporters (69%) say at least five of 10 issues included in the survey are very important to their vote.
Harris supporters are more likely than Trump supporters to say most of the issues included are very important. About a third of Harris supporters (32%) say at least eight of 10 issues are very important, compared with 17% of Trump supporters.
While the economy has long been a top issue for voters – and continues to be one today – other issues have become increasingly important for voters over the past four years.
About six-in-ten voters (61%) today say immigration is very important to their vote – a 9 percentage point increase from the 2020 presidential election and 13 points higher than during the 2022 congressional elections.
Immigration is now a much more important issue for Republican voters in particular: 82% of Trump supporters say it is very important to their vote in the 2024 election, up 21 points from 2020.
About four-in-ten Harris supporters (39%) say immigration is very important to their vote. This is 8 points higher than the share of Democratic congressional supporters who said this in 2022, but lower than the 46% of Biden supporters who cited immigration as very important four years ago.
In August 2020, fewer than half of voters (40%) said abortion was a very important issue to their vote. At the time, Trump voters (46%) were more likely than Biden voters (35%) to say it mattered a great deal.
Following the Supreme Court’s decision to overturn Roe v. Wade , opinions about abortion’s importance as a voting issue shifted. Today, 67% of Harris supporters call the issue very important – nearly double the share of Biden voters who said this four years ago, though somewhat lower than the share of midterm Democratic voters who said this in 2022 (74%). And about a third of Trump supporters (35%) now say abortion is very important to their vote – 11 points lower than in 2020.
Voters have more confidence in Trump than in Harris on economic, immigration and foreign policies. Half or more of voters say they are at least somewhat confident in Trump to make good decisions in these areas, while smaller shares (45% each) say this about Harris.
In contrast, voters have more confidence in Harris than Trump to make good decisions about abortion policy and to effectively address issues around race. Just over half of voters have confidence in Harris on these issues, while 44% have confidence in Trump on these issues.
Trump holds a slight edge over Harris for handling law enforcement and criminal justice issues (51% Trump, 47% Harris). Voters are equally confident in Harris and Trump to select good nominees for the Supreme Court (50% each).
Fewer than half of voters say they are very or somewhat confident in either candidate to bring the country closer together (41% are confident in Harris, 36% in Trump). And voters express relatively little confidence in Trump (37%) or Harris (32%) to reduce the influence of money in politics.
Since Biden dropped out of the presidential race in July , there has been movement on how confident voters are in the candidates to address issues facing the country.
In July, 48% of voters were confident in Biden to make good decisions about abortion policy. Today, 55% of voters are confident in Harris to do the same.
Harris currently has an 11-point advantage over Trump on voters’ confidence to handle abortion policy decisions.
Voters also express more confidence in Harris to make wise decisions about immigration policy than they did for Biden before he withdrew from the race. Today, 45% are confident in Harris on this issue; in July, 35% said this about Biden.
While Trump’s advantage over Harris on immigration policy is less pronounced than it was over Biden, he continues to hold a 7-point edge. Voters are as confident in his ability to make wise decisions about immigration policy as they were in July (52%).
Harris has also improved over Biden in voters’ confidence to make good decisions about foreign and economic policies. Currently, 45% of voters are confident in Harris on each of these issues.
In July, 39% had confidence in Biden to make good foreign policy decisions, while a similar share (40%) had confidence in him on economic policy.
Trump holds an edge over Harris on both of these issues, though both are somewhat narrower than the advantage he had over Biden on these issues in July.
901 E St. NW, Suite 300 Washington, DC 20004 USA (+1) 202-419-4300 | Main (+1) 202-857-8562 | Fax (+1) 202-419-4372 | Media Inquiries
ABOUT PEW RESEARCH CENTER Pew Research Center is a nonpartisan, nonadvocacy fact tank that informs the public about the issues, attitudes and trends shaping the world. It does not take policy positions. The Center conducts public opinion polling, demographic research, computational social science research and other data-driven research. Pew Research Center is a subsidiary of The Pew Charitable Trusts , its primary funder.
© 2024 Pew Research Center
Vishnu Renjith
School of Nursing and Midwifery, Royal College of Surgeons Ireland - Bahrain (RCSI Bahrain), Al Sayh Muharraq Governorate, Bahrain
1 Department of Mental Health Nursing, Manipal College of Nursing Manipal, Manipal Academy of Higher Education, Manipal, Karnataka, India
2 Department of OBG Nursing, Manipal College of Nursing Manipal, Manipal Academy of Higher Education, Manipal, Karnataka, India
3 School of Nursing, MGH Institute of Health Professions, Boston, USA
4 Department of Child Health Nursing, Manipal College of Nursing Manipal, Manipal Academy of Higher Education, Manipal, Karnataka, India
Healthcare research is a systematic inquiry intended to generate robust evidence about important issues in the fields of medicine and healthcare. Qualitative research has ample possibilities within the arena of healthcare research. This article aims to inform healthcare professionals about qualitative research, its significance, and its applicability in the field of healthcare. A wide variety of phenomena that cannot be explained using the quantitative approach can be explored and conveyed using a qualitative method. The major types of qualitative research designs are narrative research, phenomenological research, grounded theory research, ethnographic research, historical research, and case study research. The greatest strength of the qualitative research approach lies in the richness and depth of the healthcare exploration and description it makes. In health research, these methods are considered the most humanistic and person-centered way of discovering and uncovering the thoughts and actions of human beings.
Healthcare research is a systematic inquiry intended to generate trustworthy evidence about issues in the field of medicine and healthcare. The three principal approaches to health research are the quantitative, the qualitative, and the mixed methods approach. The quantitative research method uses data, which are measures of values and counts and are often described using statistical methods which in turn aids the researcher to draw inferences. Qualitative research incorporates the recording, interpreting, and analyzing of non-numeric data with an attempt to uncover the deeper meanings of human experiences and behaviors. Mixed methods research, the third methodological approach, involves collection and analysis of both qualitative and quantitative information with an objective to solve different but related questions, or at times the same questions.[ 1 , 2 ]
In healthcare, qualitative research is widely used to understand patterns of health behaviors, describe lived experiences, develop behavioral theories, explore healthcare needs, and design interventions.[ 1 , 2 , 3 ] Because of its ample applications in healthcare, there has been a tremendous increase in the number of health research studies undertaken using qualitative methodology.[ 4 , 5 ] This article discusses qualitative research methods, their significance, and applicability in the arena of healthcare.
Diverse academic and non-academic disciplines utilize qualitative research as a method of inquiry to understand human behavior and experiences.[ 6 , 7 ] According to Munhall, “Qualitative research involves broadly stated questions about human experiences and realities, studied through sustained contact with the individual in their natural environments and producing rich, descriptive data that will help us to understand those individual's experiences.”[ 8 ]
The qualitative method of inquiry examines the 'how' and 'why' of decision making, rather than the 'when,' 'what,' and 'where.'[ 7 ] Unlike quantitative methods, the objective of qualitative inquiry is to explore, narrate, and explain the phenomena and make sense of the complex reality. Health interventions, explanatory health models, and medical-social theories could be developed as an outcome of qualitative research.[ 9 ] Understanding the richness and complexity of human behavior is the crux of qualitative research.
The quantitative and qualitative forms of inquiry vary based on their underlying objectives. They are in no way opposed to each other; instead, these two methods are like two sides of a coin. The critical differences between quantitative and qualitative research are summarized in Table 1 .[ 1 , 10 , 11 ]
Differences between quantitative and qualitative research
| Areas | Quantitative Research | Qualitative Research |
| --- | --- | --- |
| Nature of reality | Assumes there is a single reality. | Assumes the existence of dynamic and multiple realities. |
| Goal | Tests and confirms hypotheses. | Explores and understands phenomena. |
| Data collection methods | Highly structured methods such as questionnaires, inventories, and scales. | Semi-structured methods such as in-depth interviews, observations, and focus group discussions. |
| Design | Predetermined and rigid design. | Flexible and emergent design. |
| Reasoning | Deductive process to test the hypothesis. | Primarily inductive, to develop the theory or hypothesis. |
| Focus | Concerned with outcomes and prediction of causal relationships. | Concerned primarily with process rather than outcomes or products. |
| Sampling | Relies largely on random sampling methods. | Based on purposive sampling methods. |
| Sample size determination | Involves a priori sample size calculation. | Data are collected until saturation is achieved. |
| Sample size | Relatively large. | Small, but studied in depth. |
| Data analysis | Variable-based, using statistical or mathematical methods. | Case-based, using non-statistical descriptive or interpretive methods. |
Qualitative questions are exploratory and open-ended. A well-formulated study question forms the basis for developing a protocol and guides the selection of the study design and data collection methods. Qualitative research questions generally involve two parts: a central question and related subquestions. The central question is directed towards the primary phenomenon under study, whereas the subquestions explore the subareas of focus. It is advised not to have more than five to seven subquestions. A commonly used framework for designing a qualitative research question is the 'PCO framework', wherein P stands for the population under study, C stands for the context of exploration, and O stands for the outcome/s of interest.[ 12 ] The PCO framework guides researchers in crafting a focused study question.
Example: In the question, “What are the experiences of mothers on parenting children with Thalassemia?”, the population is “mothers of children with Thalassemia,” the context is “parenting children with Thalassemia,” and the outcome of interest is “experiences.”
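The decomposition in this example can be made concrete with a small sketch. The record type and rendering template below are our own illustration, not part of the cited PCO framework:

```python
from typing import NamedTuple

class PCOQuestion(NamedTuple):
    """Illustrative record for the PCO decomposition of a research question."""
    population: str  # P: who is being studied
    context: str     # C: the setting or situation being explored
    outcome: str     # O: the phenomenon of interest

# The worked example from the text, broken into its PCO parts.
q = PCOQuestion(
    population="mothers of children with Thalassemia",
    context="parenting children with Thalassemia",
    outcome="experiences",
)

def render(question: PCOQuestion) -> str:
    """Recombine the PCO parts into a central research question."""
    return f"What are the {question.outcome} of {question.population} on {question.context}?"

print(render(q))
```

Writing the question this way forces each element to be stated explicitly, which is exactly what the framework is meant to encourage.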
The purpose statement specifies the broad focus of the study, identifies the approach, and provides direction for the overall goal of the study. The major components of a purpose statement include the central phenomenon under investigation, the study design and the population of interest. Qualitative research does not require a-priori hypothesis.[ 13 , 14 , 15 ]
Example: Borimnejad et al . undertook a qualitative research on the lived experiences of women suffering from vitiligo. The purpose of this study was, “to explore lived experiences of women suffering from vitiligo using a hermeneutic phenomenological approach.” [ 16 ]
In quantitative research, the researchers do an extensive review of scientific literature prior to the commencement of the study. However, in qualitative research, only a minimal literature search is conducted at the beginning of the study. This is to ensure that the researcher is not influenced by the existing understanding of the phenomenon under the study. The minimal literature review will help the researchers to avoid the conceptual pollution of the phenomenon being studied. Nonetheless, an extensive review of the literature is conducted after data collection and analysis.[ 15 ]
Reflexivity refers to critical self-appraisal about one's own biases, values, preferences, and preconceptions about the phenomenon under investigation. Maintaining a reflexive diary/journal is a widely recognized way to foster reflexivity. According to Creswell, “Reflexivity increases the credibility of the study by enhancing more neutral interpretations.”[ 7 ]
The qualitative research approach encompasses a wide array of research designs. The words such as types, traditions, designs, strategies of inquiry, varieties, and methods are used interchangeably. The major types of qualitative research designs are narrative research, phenomenological research, grounded theory research, ethnographic research, historical research, and case study research.[ 1 , 7 , 10 ]
Narrative research focuses on exploring the life of an individual and is ideally suited to tell the stories of individual experiences.[ 17 ] The purpose of narrative research is to utilize 'story telling' as a method in communicating an individual's experience to a larger audience.[ 18 ] The roots of narrative inquiry extend to humanities including anthropology, literature, psychology, education, history, and sociology. Narrative research encompasses the study of individual experiences and learning the significance of those experiences. The data collection procedures include mainly interviews, field notes, letters, photographs, diaries, and documents collected from one or more individuals. Data analysis involves the analysis of the stories or experiences through “re-storying of stories” and developing themes usually in chronological order of events. Rolls and Payne argued that narrative research is a valuable approach in health care research, to gain deeper insight into patient's experiences.[ 19 ]
Example: Karlsson et al . undertook a narrative inquiry to "explore how people with Alzheimer's disease present their life story." Data were collected from nine participants, who were asked to describe their life experiences from childhood to adulthood, then their current life and their views about future life. [ 20 ]
Phenomenology is a philosophical tradition developed by the German philosopher Edmund Husserl and further developed by his student Martin Heidegger. It seeks the 'essence' of individuals' experiences of a certain phenomenon.[ 1 ] The methodology has its origins in philosophy, psychology, and education. The purpose of phenomenological research is to understand people's everyday life experiences and reduce them to a central meaning, the 'essence of the experience'.[ 21 , 22 ] The unit of analysis in phenomenology is the individuals who have had similar experiences of the phenomenon. Interviews are the main means of data collection, though documents and observations are also useful. Data analysis includes identification of significant meaning elements, textural description (what was experienced), structural description (how it was experienced), and description of the 'essence' of the experience.[ 1 , 7 , 21 ] The phenomenological approach is further divided into descriptive and interpretive phenomenology. Descriptive phenomenology focuses on understanding the essence of experiences and is best suited to situations that require describing the lived phenomenon. Hermeneutic or interpretive phenomenology moves beyond description to uncover meanings that are not explicitly evident: the researcher interprets the phenomenon, based on their judgment, rather than just describing it.[ 7 , 21 , 22 , 23 , 24 ]
Example: A phenomenological study conducted by Cornelio et al . aimed at describing the lived experiences of mothers in parenting children with leukemia. Data from ten mothers were collected using in-depth semi-structured interviews and were analyzed using Husserl's method of phenomenology. Themes such as “pivotal moment in life”, “the experience of being with a seriously ill child”, “having to keep distance with the relatives”, “overcoming the financial and social commitments”, “responding to challenges”, “experience of faith as being key to survival”, “health concerns of the present and future”, and “optimism” were derived. The researchers reported the essence of the study as “chronic illness such as leukemia in children results in a negative impact on the child and on the mother.” [ 25 ]
Grounded theory has its base in sociology and was propagated by two sociologists, Barney Glaser and Anselm Strauss.[ 26 ] The primary purpose of grounded theory is to discover or generate theory in the context of the social process being studied; the major difference between grounded theory and other approaches lies in this emphasis on theory generation and development. The name comes from the approach's ability to induce a theory grounded in the reality of study participants.[ 7 , 27 ] Data collection in grounded theory research involves recording interviews from many individuals until data saturation is reached. Constant comparative analysis, theoretical sampling, theoretical coding, and theoretical saturation are unique features of grounded theory research.[ 26 , 27 , 28 ] Data analysis proceeds through 'open coding,' 'axial coding,' and 'selective coding.'[ 1 , 7 ] Open coding is the first level of abstraction and refers to the creation of a broad initial range of categories; axial coding is the procedure of understanding connections between the open codes; and selective coding connects the axial codes to formulate a theory.[ 1 , 7 ] Results of a grounded theory analysis are supplemented with a visual representation of the major constructs, usually in the form of flow charts or framework diagrams. Quotations from the participants are used in a supportive capacity to substantiate the findings. Strauss and Corbin highlight that "the value of the grounded theory lies not only in its ability to generate a theory but also to ground that theory in the data."[ 27 ]
Example: Williams et al . conducted grounded theory research to explore the nature of the relationship between the sense of self and eating disorders. Data were collected from 11 women with a lifetime history of anorexia nervosa and were analyzed using grounded theory methodology. The analysis led to the development of a theoretical framework on the nature of the relationship between the self and anorexia nervosa. [ 29 ]
Ethnography has its base in anthropology, where anthropologists used it to understand culture-specific knowledge and behaviors. In health sciences research, ethnography focuses on narrating and interpreting the health behaviors of a culture-sharing group. A 'culture-sharing group' in ethnography is any group of people who share common meanings, customs, or experiences; in health research, it could be a group of physicians working in rural care, a group of medical students, or a group of patients receiving home-based rehabilitation. To understand cultural patterns, researchers primarily observe the individual or group of individuals over a prolonged period of time.[ 1 , 7 , 30 ] The scope of ethnography can be broad or narrow depending on the aim: the study of more general cultural groups is termed macro-ethnography, whereas micro-ethnography focuses on more narrowly defined cultures. Ethnography is usually conducted in a single setting. Ethnographers collect data using a variety of methods, such as observation, interviews, audio-video records, and document reviews. The written report includes a detailed description of the culture-sharing group from both emic and etic perspectives: emic perspectives report the views of the participants, whereas etic perspectives report the researcher's own views about the culture.[ 7 ]
Example: The ethnographic study by LeBaron et al . aimed to explore the barriers to opioid availability and cancer pain management in India. The researchers collected data from fifty-nine participants using in-depth semi-structured interviews, participant observation, and document review, and identified significant barriers through open coding and thematic analysis of the formal interviews. [ 31 ]
Historical research is the "systematic collection, critical evaluation, and interpretation of historical evidence".[ 1 ] Its purpose is to gain insights from the past by interpreting past events in the light of the present. Data for historical research are usually collected from primary and secondary sources. Primary sources mainly include diaries, firsthand accounts, and original writings; secondary sources include textbooks, newspapers, second- or third-hand accounts of historical events, and medical/legal documents. The data gathered from these various sources are synthesized and reported as biographical narratives or developmental perspectives in chronological order, and the ideas are interpreted in terms of their historical context and significance. The written report describes 'what happened', 'how it happened', 'why it happened', and its significance and implications for current clinical practice.[ 1 , 10 ]
Example: Lubold (2019) analyzed breastfeeding trends in three countries (Sweden, Ireland, and the United States) using a historical qualitative method. Through analysis of historical data, the researcher found that strong family policies, adherence to international recommendations, and adoption of the Baby-Friendly Hospital Initiative could greatly enhance breastfeeding rates. [ 32 ]
Case study research focuses on the description and in-depth analysis of a case or cases, or of issues illustrated by the case(s). The design has its origins in psychology, law, and medicine. Case studies are best suited to understanding cases, reducing the unit of analysis to an event, a program, an activity, or an illness. Observations, one-to-one interviews, artifacts, and documents are used for data collection, and analysis proceeds through description of the case, from which themes and cross-case themes are derived. A written case study report includes a detailed description of one or more cases.[ 7 , 10 ]
Example: Perceptions of poststroke sexuality in a woman of childbearing age were explored using a qualitative case study approach by Beal and Millenbrunch. A semi-structured interview was conducted with a 36-year-old mother of two children with a history of acute ischemic stroke. The data were analyzed using an inductive approach. The authors concluded that "stroke during childbearing years may affect a woman's perception of herself as a sexual being and her ability to carry out gender roles". [ 33 ]
Qualitative researchers widely use non-probability sampling techniques such as purposive sampling, convenience sampling, quota sampling, snowball sampling, homogeneous sampling, maximum variation sampling, extreme (deviant) case sampling, typical case sampling, and intensity sampling. The selection of a sampling technique depends on the nature and needs of the study.[ 34 , 35 , 36 , 37 , 38 , 39 , 40 ] The four widely used sampling techniques are convenience sampling, purposive sampling, snowball sampling, and intensity sampling.
Convenience sampling, otherwise called accidental sampling, involves collecting data from subjects selected on the basis of accessibility, geographical proximity, ease, speed, and/or low cost.[ 34 ] It offers the significant benefit of convenience but often raises concerns about sample representativeness.
Purposive or purposeful sampling is a widely used sampling technique.[ 35 ] It involves identifying a population based on already established sampling criteria and then selecting subjects who fulfill those criteria, to increase credibility. Choosing information-rich cases, however, is the key that determines the power and logic of purposive sampling in a qualitative study.[ 1 ]
This method is also known as 'chain referral sampling' or 'network sampling.' Sampling starts with a few initial participants, and the researcher relies on these early participants to identify additional study participants. It is best adopted when the researcher wishes to study a stigmatized group, or in cases where finding participants is likely to be difficult by ordinary means. Respondent-driven sampling is an improved version of snowball sampling used to recruit participants from a hard-to-find or hard-to-study population.[ 37 , 38 ]
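As a rough illustration, the chain-referral logic of snowball sampling can be sketched programmatically. This is a minimal sketch, not part of the methods literature cited above; the participant labels and referral network are hypothetical.

```python
from collections import deque

def snowball_sample(seeds, referrals, max_n):
    """Chain-referral recruitment: start from seed participants and
    follow their referrals until max_n participants are recruited."""
    recruited, queue, seen = [], deque(seeds), set(seeds)
    while queue and len(recruited) < max_n:
        person = queue.popleft()
        recruited.append(person)
        for ref in referrals.get(person, []):  # people this participant names
            if ref not in seen:
                seen.add(ref)
                queue.append(ref)
    return recruited

# Hypothetical referral network: A refers B and C, B refers D, etc.
network = {"A": ["B", "C"], "B": ["D"], "C": ["E", "F"]}
print(snowball_sample(["A"], network, 4))  # → ['A', 'B', 'C', 'D']
```

The sketch shows why snowball samples are shaped by the seeds chosen: anyone unreachable through the referral chains never enters the sample.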
The process of identifying information-rich cases that manifest the phenomenon of interest intensely is referred to as intensity sampling. It requires prior information and considerable judgment about the phenomenon of interest, and the researcher should do some preliminary investigation to determine the nature of the variation. Intensity sampling is done once the researcher has identified the variation across cases (extreme, average, and intense) and picks the intense cases from among them.[ 40 ]
A priori sample size calculation is not undertaken in qualitative research. Researchers collect data from as many participants as necessary until they reach the point of data saturation. Data saturation, or the point of redundancy, is the stage at which the researcher no longer sees or hears any new information. It indicates that the researcher has captured all possible information about the phenomenon of interest, so data collection can be stopped once redundancy is achieved. The objective here is to obtain an overall picture of the phenomenon under study rather than generalization.[ 1 , 7 , 41 ]
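The saturation stopping rule can be made concrete with a small sketch. This is an illustration only, assuming a simplified operationalization (saturation = a run of consecutive interviews contributing no new codes); the interview codes are hypothetical.

```python
def reached_saturation(interviews_codes, window=2):
    """Return the number of interviews after which saturation is reached,
    defined here as `window` consecutive interviews adding no new codes.
    Returns None if saturation was never reached."""
    seen, no_new = set(), 0
    for i, codes in enumerate(interviews_codes):
        new = set(codes) - seen       # codes not heard in earlier interviews
        seen |= set(codes)
        no_new = 0 if new else no_new + 1
        if no_new >= window:
            return i + 1              # interviews conducted so far
    return None

# Hypothetical codes extracted from five successive interviews
codes = [{"stigma", "cost"}, {"cost", "access"}, {"stigma"},
         {"access", "cost"}, {"stigma", "access"}]
print(reached_saturation(codes))  # → 4 (interviews 3 and 4 added nothing new)
```

In practice, of course, saturation is a judgment call made during analysis rather than a mechanical count, but the sketch captures the underlying logic of stopping when redundancy sets in.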
The various strategies used for data collection in qualitative research include in-depth interviews (individual or group), focus group discussions (FGDs), participant observation, narrative life history, document analysis, audio materials, videos or video footage, text analysis, and simple observation. Among these, the three most popular methods are FGDs, one-to-one in-depth interviews, and participant observation.
FGDs are useful for eliciting data from a group of individuals. They are normally built around a specific topic and are considered the best approach for gathering data on the entire range of responses to a topic.[ 42 ] Group size in an FGD ranges from 6 to 12, and depending on the nature of the participants, FGDs can be homogeneous or heterogeneous.[ 1 , 14 ] One-to-one in-depth interviews are best suited to obtaining individuals' life histories, lived experiences, perceptions, and views, particularly when exploring topics of a sensitive nature. In-depth interviews can be structured, unstructured, or semi-structured; semi-structured interviews are the most widely used in qualitative research. Participant observation is suitable for gathering data on naturally occurring behaviors.[ 1 ]
Various strategies are employed to analyze data in qualitative research, and analytic strategies differ according to the type of inquiry; a general content analysis approach is described here. Data analysis begins with transcription of the interview data. The researcher carefully reads the data to get a sense of the whole. Once familiarized with the data, the researcher strives to identify small meaning units called 'codes.' The codes are then grouped based on their shared concepts to form primary categories, which, based on the relationships between them, are clustered into secondary categories. The next step involves identification of themes and interpretation to make meaning of the data. In the results section of the manuscript, the researcher describes the key findings or themes that emerged, supported where appropriate by participants' quotes. The analytical framework used should be explained in sufficient detail and well referenced. Study findings are often represented in schematic form for better conceptualization.[ 1 , 7 ] Even though the overall analytical process remains the same across qualitative designs, each design, such as phenomenology, ethnography, and grounded theory, has design-specific analytical procedures, the details of which are beyond the scope of this article.
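The abstraction ladder described above (codes grouped into categories, categories clustered into themes) can be sketched as a simple data structure. This is purely illustrative: the codes, categories, and theme below are hypothetical, not drawn from any study cited in this article.

```python
# Hypothetical mapping from low-level codes up to a theme, mirroring
# the content analysis steps: codes -> primary categories -> themes.
code_to_category = {
    "fear of relapse":  "emotional burden",
    "sleepless nights": "emotional burden",
    "cost of drugs":    "financial strain",
    "lost workdays":    "financial strain",
}
category_to_theme = {
    "emotional burden": "living with uncertainty",
    "financial strain": "living with uncertainty",
}

def theme_for(code):
    """Trace a low-level code up to the theme it supports."""
    return category_to_theme[code_to_category[code]]

print(theme_for("cost of drugs"))  # → living with uncertainty
```

Real analysis is iterative and interpretive rather than a fixed lookup, but the structure makes clear why reporting should let readers trace each theme back through its categories to the underlying coded excerpts.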
Until recently, qualitative analysis was done either manually or with the help of a spreadsheet application. Currently, various software programs are available to help researchers manage qualitative data. Computer-assisted qualitative data analysis software (CAQDAS) is essentially a data management tool; it cannot itself analyze qualitative data, as it lacks the ability to think, reflect, and conceptualize. Nonetheless, CAQDAS helps researchers manage, shape, and make sense of unstructured information. OpenCode, MAXQDA, NVivo, Atlas.ti, and HyperRESEARCH are some of the widely used qualitative data analysis software packages.[ 14 , 43 ]
Consolidated Criteria for Reporting Qualitative Research (COREQ) is the widely used reporting guideline for qualitative research. This 32-item checklist assists researchers in reporting all the major aspects related to the study. The three major domains of COREQ are the 'research team and reflexivity', 'study design', and 'analysis and findings'.[ 44 , 45 ]
Various scales are available for the critical appraisal of qualitative research. The most widely used is the Critical Appraisal Skills Programme (CASP) Qualitative Checklist, developed by the CASP network, UK. This 10-item checklist evaluates the quality of a study in areas such as aims, methodology, research design, ethical considerations, data collection, data analysis, and findings.[ 46 ]
A qualitative study must be grounded in the principles of bioethics: beneficence, non-maleficence, autonomy, and justice. Protecting the participants is of utmost importance, and the greatest care has to be taken when collecting data from a vulnerable research population. The researcher must respect individuals, families, and communities and must make sure that participants are not identifiable from the quotations included when the data are published. Consent for audio/video recordings must be obtained, and participants' approval must be obtained for inclusion in FGDs. Researchers must ensure the confidentiality and anonymity of the transcripts, audio-video records, photographs, and other data collected as part of the study, and must act as advocates, proceeding in the best interest of all participants.[ 42 , 47 , 48 ]
The demonstration of rigor, or quality, in the conduct of a study is essential for every research method. However, the criteria used to evaluate the rigor of quantitative studies are not appropriate for qualitative methods. Lincoln and Guba (1985) first outlined criteria for evaluating qualitative research, often referred to as the "standards of trustworthiness of qualitative research".[ 49 ] The four components of these criteria are credibility, transferability, dependability, and confirmability.
Credibility refers to confidence in the 'truth value' of the data and its interpretation; it is used to establish that the findings are true, credible, and believable, and is analogous to internal validity in quantitative research.[ 1 , 50 , 51 ] The second criterion, transferability, refers to the degree to which qualitative results are applicable to other settings, populations, or contexts, and is analogous to external validity in quantitative research.[ 1 , 50 , 51 ] Lincoln and Guba recommend that authors provide enough detail for readers to evaluate the applicability of the data in other contexts.[ 49 ] The criterion of dependability refers to the repeatability or replicability of the study findings and is similar to reliability in quantitative research. The dependability question is: 'Would the study findings be repeated if the study were replicated with the same (or similar) cohort of participants, data coders, and context?'[ 1 , 50 , 51 ] Confirmability, the fourth criterion, is analogous to objectivity and refers to the degree to which the study findings could be confirmed or corroborated by others; to ensure confirmability, the data should directly reflect the participants' experiences and not the biases, motivations, or imaginations of the inquirer.[ 1 , 50 , 51 ] Qualitative researchers should ensure that the study is conducted with sufficient rigor and should report the measures undertaken to enhance the trustworthiness of the study.
Qualitative research studies are widely acknowledged and recognized in health care practice. This overview illustrates the various qualitative methods and shows how they can be used to generate evidence that informs clinical practice. Qualitative research helps to understand patterns of health behavior, describe illness experiences, design health interventions, and develop health care theories. The ultimate strength of the qualitative research approach lies in the richness of the data and the depth of exploration and description it provides. Hence, qualitative methods are considered the most humanistic and person-centered way of discovering and uncovering the thoughts and actions of human beings.
Conflicts of interest.
There are no conflicts of interest.