
Research Gap – Types, Examples and How to Identify

Research Gap

Definition:

A research gap is an area or topic within a field of study that has not yet been extensively researched or remains unexplored. It is a question, problem, or issue that previous research has not addressed or resolved.

How to Identify Research Gap

Identifying a research gap is an essential step in conducting research that adds value and contributes to the existing body of knowledge. Doing so requires critical thinking, creativity, and a thorough understanding of the existing literature. It is an iterative process that may require revisiting and refining your research questions and ideas multiple times.

Here are some steps that can help you identify a research gap:

  • Review existing literature: Conduct a thorough review of the existing literature in your research area. This will help you identify what has already been studied and what gaps still exist (see the illustrative sketch after this list).
  • Identify a research problem: Identify a specific research problem or question that you want to address.
  • Analyze existing research: Analyze the existing research related to your research problem. This will help you identify areas that have not been studied, inconsistencies in the findings, or limitations of the previous research.
  • Brainstorm potential research ideas: Based on your analysis, brainstorm potential research ideas that address the identified gaps.
  • Consult with experts: Consult with experts in your research area to get their opinions on potential research ideas and to identify any additional gaps that you may have missed.
  • Refine research questions: Refine your research questions and hypotheses based on the identified gaps and potential research ideas.
  • Develop a research proposal: Develop a research proposal that outlines your research questions, objectives, and methods to address the identified research gap.
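To make the review and analysis steps concrete, here is a minimal, illustrative Python sketch of one way to surface potentially under-studied topics: counting how often candidate topics appear in a set of abstracts. Both the abstracts and the topic keywords below are hypothetical placeholders; in practice you would draw abstracts from a bibliographic database and choose topics from your own reading.

```python
# Illustrative sketch only: count how often candidate topics appear in a
# small corpus of abstracts. Topics that rarely appear may point to gaps
# worth investigating further. The abstracts and topic keywords are
# hypothetical stand-ins, not real data.
from collections import Counter

abstracts = [
    "We examine social media use and anxiety among adolescents.",
    "A survey of social media habits in college students.",
    "Long-term effects of climate change on coastal wetland biodiversity.",
]

topics = ["social media", "mental health", "climate change", "biodiversity"]

coverage = Counter()
for text in abstracts:
    lowered = text.lower()
    for topic in topics:
        if topic in lowered:
            coverage[topic] += 1

# Low counts flag candidate gaps; a real analysis would also weigh study
# quality, recency, and context, not just keyword frequency.
for topic in topics:
    print(f"{topic}: mentioned in {coverage[topic]} of {len(abstracts)} abstracts")
```

In this toy run, "mental health" never appears verbatim even though anxiety is studied, which is exactly the kind of mismatch a keyword scan flags for closer manual review; actual gap identification still requires reading the studies themselves.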

Types of Research Gap

There are different types of research gaps that can be identified, and each type is associated with a specific situation or problem. Here are the main types of research gaps and their explanations:

Theoretical Gap

This type of research gap refers to a lack of theoretical understanding or knowledge in a particular area. It can occur when there is a discrepancy between existing theories and empirical evidence or when there is no theory that can explain a particular phenomenon. Identifying theoretical gaps can lead to the development of new theories or the refinement of existing ones.

Empirical Gap

An empirical gap occurs when there is a lack of empirical evidence or data in a particular area. It can happen when there is a lack of research on a specific topic or when existing research is inadequate or inconclusive. Identifying empirical gaps can lead to the development of new research studies to collect data or the refinement of existing research methods to improve the quality of data collected.

Methodological Gap

This type of research gap refers to a lack of appropriate research methods or techniques to answer a research question. It can occur when existing methods are inadequate, outdated, or inappropriate for the research question. Identifying methodological gaps can lead to the development of new research methods or the modification of existing ones to better address the research question.

Practical Gap

A practical gap occurs when there is a lack of practical applications or implementation of research findings. It can occur when research findings are not implemented due to financial, political, or social constraints. Identifying practical gaps can lead to the development of strategies for the effective implementation of research findings in practice.

Knowledge Gap

This type of research gap occurs when there is a lack of knowledge or information on a particular topic. It can happen when a new area of research is emerging, or when research is conducted in a different context or population. Identifying knowledge gaps can lead to the development of new research studies or the extension of existing research to fill the gap.

Examples of Research Gap

Here are some examples of research gaps that researchers might identify:

  • Theoretical Gap Example: In the field of psychology, there might be a theoretical gap related to the lack of understanding of the relationship between social media use and mental health. Although there is existing research on the topic, there might be a lack of consensus on the mechanisms that link social media use to mental health outcomes.
  • Empirical Gap Example: In the field of environmental science, there might be an empirical gap related to the lack of data on the long-term effects of climate change on biodiversity in specific regions. Although there might be some studies on the topic, there might be a lack of data on the long-term effects of climate change on specific species or ecosystems.
  • Methodological Gap Example: In the field of education, there might be a methodological gap related to the lack of appropriate research methods to assess the impact of online learning on student outcomes. Although there might be some studies on the topic, existing research methods might not be appropriate to assess the complex relationships between online learning and student outcomes.
  • Practical Gap Example: In the field of healthcare, there might be a practical gap related to the lack of effective strategies to implement evidence-based practices in clinical settings. Although there might be existing research on the effectiveness of certain practices, they might not be implemented in practice due to various barriers, such as financial constraints or lack of resources.
  • Knowledge Gap Example: In the field of anthropology, there might be a knowledge gap related to the lack of understanding of the cultural practices of indigenous communities in certain regions. Although there might be some research on the topic, there might be a lack of knowledge about specific cultural practices or beliefs that are unique to those communities.

Examples of research gaps in a literature review, thesis, or research paper might be:

  • Literature review: A literature review on the topic of machine learning and healthcare might identify a research gap in the lack of studies that investigate the use of machine learning for early detection of rare diseases.
  • Thesis: A thesis on the topic of cybersecurity might identify a research gap in the lack of studies that investigate the effectiveness of artificial intelligence in detecting and preventing cyber attacks.
  • Research paper: A research paper on the topic of natural language processing might identify a research gap in the lack of studies that investigate the use of natural language processing techniques for sentiment analysis in non-English languages.

How to Write Research Gap

By following the steps below, you can effectively write about research gaps in your paper and clearly articulate the contribution that your study will make to the existing body of knowledge.

Here are some steps to follow when writing about research gaps in your paper:

  • Identify the research question: Before writing about research gaps, you need to identify your research question or problem. This will help you to understand the scope of your research and identify areas where additional research is needed.
  • Review the literature: Conduct a thorough review of the literature related to your research question. This will help you to identify the current state of knowledge in the field and the gaps that exist.
  • Identify the research gap: Based on your review of the literature, identify the specific research gap that your study will address. This could be a theoretical, empirical, methodological, practical, or knowledge gap.
  • Provide evidence: Provide evidence to support your claim that the research gap exists. This could include a summary of the existing literature, a discussion of the limitations of previous studies, or an analysis of the current state of knowledge in the field.
  • Explain the importance: Explain why it is important to fill the research gap. This could include a discussion of the potential implications of filling the gap, the significance of the research for the field, or the potential benefits to society.
  • State your research objectives: State your research objectives, which should be aligned with the research gap you have identified. This will help you to clearly articulate the purpose of your study and how it will address the research gap.

Importance of Research Gap

The importance of research gaps can be summarized as follows:

  • Advancing knowledge: Identifying research gaps is crucial for advancing knowledge in a particular field. By identifying areas where additional research is needed, researchers can fill gaps in the existing body of knowledge and contribute to the development of new theories and practices.
  • Guiding research: Research gaps can guide researchers in designing studies that fill those gaps. By identifying research gaps, researchers can develop research questions and objectives that are aligned with the needs of the field and contribute to the development of new knowledge.
  • Enhancing research quality: By identifying research gaps, researchers can avoid duplicating previous research and instead focus on developing innovative research that fills gaps in the existing body of knowledge. This can lead to more impactful research and higher-quality research outputs.
  • Informing policy and practice: Research gaps can inform policy and practice by highlighting areas where additional research is needed to inform decision-making. By filling research gaps, researchers can provide evidence-based recommendations that have the potential to improve policy and practice in a particular field.

Applications of Research Gap

Here are some potential applications of identifying research gaps:

  • Informing research priorities: Research gaps can help guide research funding agencies and researchers to prioritize research areas that require more attention and resources.
  • Identifying practical implications: Identifying gaps in knowledge can help identify practical applications of research that are still unexplored or underdeveloped.
  • Stimulating innovation: Research gaps can encourage innovation and the development of new approaches or methodologies to address unexplored areas.
  • Improving policy-making: Research gaps can inform policy-making decisions by highlighting areas where more research is needed to make informed policy decisions.
  • Enhancing academic discourse: Research gaps can lead to new and constructive debates and discussions within academic communities, leading to more robust and comprehensive research.

Advantages of Research Gap

Here are some of the advantages of identifying research gaps:

  • Identifies new research opportunities: Identifying research gaps can help researchers identify areas that require further exploration, which can lead to new research opportunities.
  • Improves the quality of research: By identifying gaps in current research, researchers can focus their efforts on addressing unanswered questions, which can improve the overall quality of research.
  • Enhances the relevance of research: Research that addresses existing gaps can have significant implications for the development of theories, policies, and practices, and can therefore increase the relevance and impact of research.
  • Helps avoid duplication of effort: Identifying existing research can help researchers avoid duplicating efforts, saving time and resources.
  • Helps to refine research questions: Research gaps can help researchers refine their research questions, making them more focused and relevant to the needs of the field.
  • Promotes collaboration: By identifying areas of research that require further investigation, researchers can collaborate with others to conduct research that addresses these gaps, which can lead to more comprehensive and impactful research outcomes.

Disadvantages of Research Gap

While research gaps can be advantageous, there are also some potential disadvantages that should be considered:

  • Difficulty in identifying gaps: Identifying gaps in existing research can be challenging, particularly in fields where there is a large volume of research or where research findings are scattered across different disciplines.
  • Lack of funding: Addressing research gaps may require significant resources, and researchers may struggle to secure funding for their work if it is perceived as too risky or uncertain.
  • Time-consuming: Conducting research to address gaps can be time-consuming, particularly if the research involves collecting new data or developing new methods.
  • Risk of oversimplification: Addressing research gaps may require researchers to simplify complex problems, which can lead to oversimplification and a failure to capture the complexity of the issues.
  • Bias: Identifying research gaps can be influenced by researchers’ personal biases or perspectives, which can lead to a skewed understanding of the field.
  • Potential for disagreement: Identifying research gaps can be subjective, and different researchers may have different views on what constitutes a gap in the field, leading to disagreements and debate.


Journal of Clinical and Translational Science

Bridging the gap between research, policy, and practice: Lessons learned from academic–public partnerships in the CTSA network

Amytis Towfighi, Allison Zumberge Orechwa, Tomás J. Aragón, Marc Atkins, Arleen F. Brown, Olveen Carrasquillo, Savanna Carson, Paula Fleisher, Erika Gustafson, Deborah K. Herman, Moira Inkelas, Daniella Meeker, Doriane C. Miller, Rachelle Paul-Brutus, Michael B. Potter, Sarah S. Rittner, Brendaly Rodriguez, Anne Skinner, Hal F. Yee Jr.


Abstract

A primary barrier to translation of clinical research discoveries into care delivery and population health is the lack of sustainable infrastructure bringing researchers, policymakers, practitioners, and communities together to reduce silos in knowledge and action. As the National Institutes of Health's (NIH) mechanism to advance translational research, Clinical and Translational Science Award (CTSA) awardees are uniquely positioned to bridge this gap. Delivering on this promise requires sustained collaboration and alignment between research institutions and public health and healthcare programs and services. We describe the collaboration of seven CTSA hubs with city, county, and state healthcare and public health organizations striving to realize this vision together. Partnership representatives convened monthly to identify key components, common and unique themes, and barriers in academic–public collaborations. All partnerships aligned the activities of the CTSA programs with the needs of the city/county/state partners by sharing resources, responding to real-time policy questions and training needs, promoting best practices, and advancing community-engaged research and dissemination and implementation science to narrow the knowledge-to-practice gap. Barriers included competing priorities, differing timelines, bureaucratic hurdles, and unstable funding. Academic–public health/health system partnerships represent a unique and underutilized model with potential to enhance community and population health.

Keywords: Translational research, policy-relevant research, implementation science, community engagement, public health

Introduction

The translation of research discoveries from “bench to bedside” and into improved health is slow and inefficient [ 1 ]. The attempt to bridge science, policy, and practice has been described as a “valley of death,” reflecting few successful enduring outcomes [ 2 ]. Federal investment in basic science and efficacy research dwarfs the investment in health quality, dissemination, and outcomes research [ 3 ]. Although social determinants of health account for approximately 60% of health outcomes [ 4 ], the United States spends a significantly lower percentage of its gross domestic product (GDP) on social services than similar countries with better health outcomes [ 5 ], and only 5% of U.S. national health expenditures are allocated to population-wide approaches to health promotion [ 6 ]. Widespread adoption of evidence into policy and practice is hampered when academic institutions undertake science in controlled settings and conditions. Additionally, evidence-based practices resulting from academic studies often see limited dissemination even within academic circles. Yet, public agencies, including safety-net healthcare systems and departments of public health, must respond to and implement evidence-based policies and health promotion services for populations facing higher burdens of health and healthcare disparities.

More researchers are turning to dissemination and implementation science (D&I) methods to more effectively bridge the research-to-practice gap [ 7 ]. Yet, despite a century of empirical research to advance the translation of research to practice, considerable barriers remain, especially for advancing public health policy and practice [ 8 ].

A primary challenge to addressing the research-policy-practice gap is the lack of sustainable infrastructure bringing researchers, policymakers, practitioners, and communities together to: (1) align the research enterprise with public and population health priorities; (2) bridge healthcare, public health, mental health, and related sectors; (3) engage health systems in research; and (4) develop innovative solutions for health systems. Without a formal mechanism to effectively engage the community, academicians, and public health and healthcare agencies, research fails to address the need among most public health and healthcare agencies to increase the quality of services with existing resources.

Institutions with Clinical and Translational Science Awards (CTSAs) are uniquely positioned to bridge this gap and contribute to care delivery, translation of research into interventions that improve the health of communities, and public health innovation. In 2006, the National Institutes of Health (NIH) launched the CTSA Program to support a national network of medical research institutions, or “hubs,” that provide infrastructure at their local universities and other affiliated academic partners to advance clinical and translational research and population health. Hubs support research across disciplines and promote team-based science closely integrated with patients and communities. Their education and training programs aim to create the next generation of translational scientists who are “boundary crossers” and “systems thinkers” [ 9 ]. Through their collaboration with communities, hubs are uniquely situated to identify local health priorities, as well as the resources and expertise to catalyze research in those areas. The CTSA network holds great promise for bridging the research-policy-practice gap.

A number of CTSA hubs have a major emphasis on partnering with city, county, and state health organizations to drive innovations in clinical care and translate research into practical interventions that improve community and population health. We will describe examples from seven CTSA hubs in four cities – Los Angeles, Chicago, Miami, and San Francisco – that have activated their resources toward research, effective service delivery, policy development, implementation, and program evaluation.

Synergy Paper Collaboration

With support from the CTSA Coordinating Center through a “Synergy paper” mechanism, representatives from the seven CTSA hubs and public health/health system partners participated in monthly teleconferences to collaborate on developing a manuscript on this shared topic. The earlier teleconferences included a brief overview by participants of their existing academic–public health/health system partnerships and discussions on shared experiences, lessons learned, and future directions. This led to more in-depth conversations, addressing common themes and both mutual and unique barriers to achieving goals. As the linkage to respective health systems was crucial to this evaluation, authors from each CTSA hub collaborated closely with key public health and health system representatives and received written comments and feedback to integrate into the manuscript. After the elicitation and information sharing processes, members categorized critical factors, challenges, and opportunities for improvement and strategized on recommendations. As a group, members summarized activities and assessed similarities.

CTSA-Public Health System and Health Department Partnerships

The areas of focus spanned the translational spectrum from creation of evidence-based guidelines (T2) to translation to communities (T4) (Table 1). Focus areas included direct research support, program evaluation, implementation research, infrastructure and expertise in data sharing, analytics, and health information technology, community needs assessments, educating or conducting interventions with community health workers (CHWs), community professional development, dissemination science, and policy setting.

Table 1. Partnership activities by city and translational stage.

Chicago

Chicago is the third largest city in the United States, with a population of 2.7 million. Approximately 50% of the population is non-white, with one in five people born outside of the United States and 36% speaking a language other than English at home. Twenty percent of the people in Chicago live in poverty, including one in three children. The three CTSA programs in Chicago, at Northwestern University, the University of Chicago, and the University of Illinois at Chicago, formed a formal collaboration over a decade ago to advance community-engaged research across the Chicagoland region. This collaboration, the Chicago Consortium for Community Engagement (C3), is composed of representatives from the community engagement teams of each CTSA, the Chicago Department of Public Health (CDPH), and AllianceChicago, a nonprofit that provides research support to over 60 Federally Qualified Health Centers throughout Chicago and nationally.

In the city of Chicago, there is up to a 17-year gap in life expectancy between community areas that is closely correlated with economic status and race. The CDPH joined C3 in 2016 concurrent with the release of Healthy Chicago 2.0, a citywide, 4-year strategic plan to promote health equity for Chicago's over 2.5 million diverse residents (CDPH HC 2.0) [ 10 ]. The report is a blueprint for establishing and implementing policies and services that prioritize residents and communities with the greatest need. CDPH and the C3 recognized that the success of Healthy Chicago 2.0 would depend, in part, on strengthening the relationship between communities and academic institutions to advance a public health research agenda.

Activities of the C3 include (1) facilitating and supporting university-based research and evaluation of CDPH-sponsored and community-based programs (four to date); (2) jointly developing mechanisms to facilitate dissemination of research opportunities and findings to community audiences; (3) aligning Clinical and Translational Science Institute (CTSI) seed funding opportunities with Healthy Chicago 2.0 priority areas; (4) facilitating collaborations with community-based organizations and community health centers; (5) collaboratively developing and delivering capacity-building workshops on community-engaged research and dissemination strategies; and (6) improving community partner and member understanding of and interest in research. Most notably, the partnership resulted in a new CDPH Office of Research and Evaluation, whose lead staff position is jointly funded by the three Chicago CTSA hubs; this lead currently serves on 11 CTSI research projects and center advisory boards.

The C3 meetings allow for discussion of data analytics related to the Chicago Health Atlas (ChicagoHealthAtlas), which provides public health data for the city of Chicago and aggregated community area data based on Healthy Chicago 2.0 indicators. This provides a unique opportunity to consider social determinants of health by, for example, promoting research examining medical center electronic medical record data in relation to Chicago community-level data at each CTSA program. Moreover, ongoing involvement of CDPH leadership in these discussions provides an opportunity to promote research that will inform Healthy Chicago 2025, the blueprint for Chicago healthcare policy and practices (HealthyChicago2025). Examples include recent discussions with CDPH epidemiologists to add questions regarding attitudes about research participation to the Chicago Health Survey; plans for the Chicago-based rollout of the NIH All of Us research initiative; and local efforts by the three Chicago CTSA programs to drive broad participation in a new local multi-institutional research portal. Lastly, the collaboration includes discussions with representatives from the Alliance for Health Equity (AllianceforHealthEquity), a collaborative of over 30 nonprofit hospitals, health departments, and community organizations, which completed a collaborative Community Health Needs Assessment for Chicago and Suburban Cook County to allow partners to collectively identify strategic priorities.

Los Angeles

Los Angeles is the most populous county in the nation, with 10 million residents; more people live in Los Angeles County (LAC) than in 42 states. Three quarters of the county's residents are non-white, more than 30% of residents were born outside the United States, nearly one in five is below the federal poverty line, approximately one in five lacks health insurance, and many speak a language other than English at home. The Los Angeles County Department of Health Services (LAC-DHS), the second-largest municipal health system in the United States, provides care to 700,000 patients annually through four hospitals, 19 comprehensive ambulatory care centers, and a network of community clinics. Many physicians serving the DHS facilities are also faculty members at the University of Southern California (USC) and the University of California, Los Angeles (UCLA), and DHS hospitals are training sites for physicians at USC and UCLA. The leadership of both the UCLA and USC CTSA hubs works in tandem with the DHS Chief Medical Officer to identify areas of intersection between academic research and the health system.

The parties invest resources in pilot funding for these areas of mutual interest and in two DHS-wide service cores – implementation science and clinical research informatics. Working closely with the DHS Research Oversight Board on policy and procedure development, the DHS Informatics and Analytics Core established new research informatics infrastructure, anchored by a county-wide clinical data warehouse and supporting 23 research pilot projects to date. The Innovation and Implementation Core facilitates multidisciplinary team science, deploys research methods that are feasible and acceptable in a safety-net health system, supports bidirectional mentoring and training, and develops new academic and public health leaders who can leverage the strengths of both systems. To date, the 18 projects supported by the Innovation and Implementation Core have affected the care provided by over 270 clinicians and the outcomes of over 80,000 patients. An exemplary project supported by both cores is a teleretinal screening program that increased diabetic retinopathy screening rates from 41% to 60% and decreased ophthalmology visit wait times from 158 to 17 days [ 11 ]. To incubate and advance such multidisciplinary projects, the USC/UCLA/DHS partnership has created an intramural pilot funding program for projects that test interventions to enhance the quality, efficiency, and patient-centeredness of care provided by LAC-DHS. Proposals are evaluated on these criteria, as well as on promise for addressing translational gaps in healthcare delivery and health disparities, alignment with delivery system goals, and system-wide scalability. Six pilot grants have been awarded since 2016, addressing topics such as substance use disorders in the county jail, antimicrobial prophylaxis after surgery, and occupational therapy interventions for diabetes.

The Healthy Aging Initiative is an example of a collaborative effort between the UCLA and USC CTSAs, LAC-DHS, the LAC Department of Public Health (LAC-DPH), the City of Los Angeles Department on Aging, California State University, and diverse community stakeholders. The initiative aims to support sustainable change in communities that allows middle-aged and older adults to stay healthy and live independently and safely, with timely, appropriate access to quality health care, social support, and services.

In addition, the Community Engagement cores at both Los Angeles (LA) hubs partner with DHS, DPH, and other LA County health departments in broad-ranging community-facing activities, including community health worker training and outreach, research education workshops based on community priorities, and peer navigation interventions.

Miami

Home to over 6 million people, the South Florida region is the largest metropolitan area in the State of Florida. Miami-Dade County is unique in that 69% of the county is Hispanic, 20% of persons lack health insurance, and 53% were born outside the US [ 12 ]. Since 2012, the Miami CTSI – comprising the University of Miami, Jackson Memorial Health System, and Miami VA Healthcare System – has partnered with the Florida Department of Health (FLDOH) to educate and mobilize at-risk communities by building the capacity of culturally and linguistically diverse CHWs. Recognizing that CHWs serve a vital role in bridging at-risk communities and formal healthcare, in 2010 the FLDOH established a Community Health Workers Taskforce (now called the Florida Community Health Worker Coalition (FLCHWC) and incorporated as a nonprofit in 2015). By 2015, the Coalition had developed a formal credentialing pathway for CHWs in the state. As a key member of the task force, the Miami CTSI provided considerable and essential input into that process. Since then, the Miami CTSI has helped develop CHW educational programs that meet training requirements on core competencies and electives for CHW certification or renewal. These programs, developed in partnership with training centers, clinics, local health planning agencies, and the FLDOH, aim to expand the local CHW workforce's capacity to address health disparities, with training topics including social determinants of health, communication skills, motivational interviewing, and oral and mental health awareness, among others.

The Miami CTSI has been partnering with the FLDOH to develop condition-specific or disease-specific training in response to emergent public health concerns of local county and state health departments. In 2016, when the Zika epidemic in Latin America arrived in Florida, the Miami CTSI developed a Zika/vector-borne disease prevention training module for CHWs that was delivered in both English and Spanish across Miami-Dade County in a short timeframe. That partnership also facilitated a Zika Research Grant Initiative that awarded 12 Florida Department of Health (DOH) grants to University of Miami investigators. Totaling over $13M, the grants focused on vaccine development, new diagnostic testing or therapeutics, and dynamic change team science. In 2018, the Miami CTSI also worked with the FLCHWC and the FLDOH to develop opioid epidemic awareness modules for CHWs. The Miami CTSI has likewise worked with the FLDOH on HIV workforce development: the training modules the Miami CTSI helped develop are now offered by the FLDOH, and in turn, CHWs on various University of Miami CTSI-sponsored research projects now undergo that FLDOH HIV training.

The Miami CTSI also partners with the FLDOH and the Health Council of South Florida to perform community health needs assessments and shares data with the One Florida Clinical Research Consortium (spearheaded by the University of Florida CTSA). The FLDOH is a critical stakeholder in this consortium.

San Francisco

San Francisco is a county and city under unitary governance, with an ethnically diverse population of about 850,000 residents. It has many health sector assets, including a local public health department, a health sciences university (University of California, San Francisco [UCSF]), hospitals and health systems, and robust community-based organizations. Nonetheless, San Francisco has prominent health disparities. For example, relative to whites, hospitalization rates for diabetes are seven times higher among African Americans and twice as high among Latinos [ 13 ]. The vision of the San Francisco CTSI Community Engagement and Health Policy Program is to use an innovative Systems Based Participatory Research model, which integrates community-based, practice-based, and policy research methods, to advance health equity in the San Francisco Bay Area. This program strengthens the ability of academicians, the community, and the Department of Public Health to conduct stakeholder-engaged research through several strategies.

First, the San Francisco Health Improvement Partnership (SFHIP) is a collaboration between academic, public, and community health organizations, formed in 2010 “to promote health equity using a novel collective impact model blending community engagement with policy change” [ 13 ]. Three backbone organizations – the San Francisco Department of Public Health, the UCSF CTSI, and the San Francisco Hospital Council – engage ethnic-based community health coalitions, schools, faith communities, and other sectors on public health initiatives. Using small seed grants from the UCSF CTSI, working groups with diverse membership develop feasible, scalable, sustainable evidence-based interventions, especially policy and structural interventions that promote longer-term health outcomes. The partnership also includes community health needs assessments and a comprehensive, online data repository of local population health indicators.

Results of past initiatives have been powerful. For example, the development of policy and educational interventions to reduce consumption of sugar-sweetened beverages led to new policies and legislation, including warning labels on advertisements, a new “soda tax,” new filtered tap water stations at parks and other venues in low-income neighborhoods, and movement toward healthy beverage policies at UCSF, Kaiser Permanente, and other large hospitals. SFHIP partners also developed environmental solutions for reducing disparities in alcohol-related health and safety problems, an initiative spearheaded by community members in neighborhoods affected by high rates of alcohol-related violence, health problems, and public nuisance activity, in collaboration with the San Francisco Police Department and other stakeholders. Outcomes included an alcohol outlet mapping tool that powers health research, routine blood alcohol testing in a trauma center, and a new state ban on the sale of powdered alcohol. Using the SFHIP model, the UCSF CTSI supported the development of the San Francisco Cancer Initiative, which provided science that has been used to support major community-based policy initiatives, such as the banning of menthol cigarettes in San Francisco, and more targeted clinical initiatives, such as an effort to increase colorectal cancer screening and follow-up activities in local community health centers [ 14 ].
The UCSF CTSI has also supported the San Francisco Department of Public Health in the development of its Healthy Cities Initiative, funded by Bloomberg Philanthropies, which seeks to link geocoded electronic health record data across multiple health systems with other neighborhood data to identify community-based strategies to address population health challenges across the city.
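As a rough, hypothetical illustration of the kind of data linkage described above, the sketch below joins de-identified, geocoded patient records to neighborhood-level indicators through a shared census-tract code. The table and column names are invented for this example; the initiative's actual data model is not described here.

```python
# Hypothetical sketch: relate clinical outcomes in geocoded EHR extracts to
# neighborhood-level indicators via a shared census-tract code. Column
# names and values are invented for illustration.
import pandas as pd

# De-identified patient records, each geocoded to a census tract.
records = pd.DataFrame({
    "patient_id": [101, 102, 103, 104],
    "hba1c": [8.1, 6.9, 7.4, 9.2],
    "tract": ["060750101", "060750102", "060750101", "060750102"],
})

# Neighborhood indicators keyed by the same tract code.
neighborhoods = pd.DataFrame({
    "tract": ["060750101", "060750102"],
    "poverty_rate": [0.24, 0.11],
    "food_access_score": [2.1, 4.3],
})

# A left join attaches neighborhood context to each clinical record, so
# analysts can relate outcomes to community conditions.
linked = records.merge(neighborhoods, on="tract", how="left")
print(linked.groupby("tract")["hba1c"].mean())
```

The design point is that a stable geographic key, such as a census tract, lets clinical outcomes be analyzed alongside community conditions without sharing patient identifiers across systems.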

Critical Factors and Facilitators

The participating hubs share some foundational similarities and facilitators, although their specific goals and activities are diverse. Across multiple cities, numerous factors were commonly recognized as critical to the success of the partnerships (Table 2). First and foremost, in all locales, the needs of the departments of public health and health services shaped the activities of the CTSA hubs. All partnerships were driven by the priorities of the front-line care providers, patients, and/or the public at large, reflecting the specific goals of each health department. Projects originated with problems as identified by healthcare system leaders and clinicians, public health officials, and/or community members. For example, the USC and UCLA CTSAs in Los Angeles collaborated with the LAC-DHS to use implementation science methods to develop, implement, and evaluate sustainable solutions to health system priorities. In San Francisco, the UCSF CTSI initiated the SFHIP program, but leadership and funding responsibilities were turned over to the San Francisco Department of Public Health to ensure that community stakeholders drove the agenda. The CTSAs provided value to the public health/health systems by serving as conveners; offering expertise in informatics, community health needs assessments, implementation, evaluation, and dissemination; providing education and technical support; collaborating on policy development (whether organizational or governmental policy); and leveraging relationships with community organizations.

Table 2. Key critical factors, facilitators, and barriers (CTSA, Clinical and Translational Science Award).

Since the partnerships developed in response to the public health/health systems’ needs, their goals and activities varied. While the Miami partnership focused on developing workforce capacity, the San Francisco partnership collaborated on policy changes, and the Chicago and Los Angeles partnerships concentrated on building research infrastructure and fostering collaborative research opportunities aligned with public health and health system priorities. By using the academic tools of community-engaged research, healthcare delivery science, implementation, and dissemination research in real-world settings, the partnerships are primed for disruptive innovations in healthcare.

Second, each health department had at least one designated “champion” who helped prioritize partnership activities and advocated for the partnerships to promote tangible and immediate real-life impact. For example, the LAC-DHS Chief Medical Officer has been an enthusiastic champion for the Los Angeles partnership. He co-wrote the pilot funding opportunity request for application (RFA) and offered detailed feedback to each applicant. He was instrumental in establishing and facilitating operations and policy development for the two service cores. His perspective and influence have been critical for initiating the program, refining the program each year, and promoting the research resources available to DHS clinicians. In addition, the UCLA hub created a population health program that is co-led by the Director of Chronic Disease and Injury Prevention within the LAC Department of Public Health.

In Chicago, the ongoing involvement of health department leadership with the three CTSA programs through their C3 collaboration promoted a substantial shift in C3 priorities and activities to align more closely with health department programs and practices. This ultimately led to an agreement for the CTSA programs to jointly fund a new position at the health agency to serve as a liaison between the health department and the CTSA programs, despite a city-wide hiring freeze due to statewide budget constraints.

In Florida, a Centers for Disease Control and Prevention Policy, Systems and Environment Change grant to the DOH Comprehensive Cancer Control Program created a staff position that was critical to establishing consistent community engagement, developing the capacity of the Florida CHW Coalition to create a credentialing program, sustaining ongoing statewide involvement in promoting CHWs, and elevating the entire South Florida region's effort to incorporate CHWs in prevention practice and access to care. The rest of the state learned from Miami's efforts, and Miami was strengthened by the support of the statewide coalition. The staff member was able to devote three-quarters of her time to Coalition development, which unfortunately did not continue once the grant ended.

On the academic side, CTSA principal investigators and senior administrators also dedicated significant time and effort to the initiatives beyond monetary resources. CTSA leadership collaborated with the public health/health system champions to set the vision for the initiative, viewed the partnership as a priority for their hub, and exerted the influence needed to drive initiatives forward.

Third, the CTSAs needed the capacity to respond rapidly to key stakeholders and requests. The partnerships have been particularly effective when they have been nimble and responsive to the evolving needs of the local health departments, health systems, and communities. For example, in Miami, the CTSA core trained CHWs and was primed to respond with additional disease-specific training in the setting of the Zika outbreak. The UCLA CTSA offered scientific expertise to the Department of Public Health regarding vaping and e-cigarettes.

Fourth, partnerships can ensure that the community's voice is heard. By leveraging CTSAs' Community Engagement Cores and the longstanding partnerships between public health/healthcare systems and community organizations, the community's priorities and concerns can be brought to light. For example, the UCSF CTSA leveraged long-term trusting relationships with community groups to engage them in reducing disparities in alcohol-related harms. Similarly, in Chicago, the Department of Public Health provided the CTSA representatives with an early view of a new citywide health initiative, Healthy Chicago 2025, to initiate ongoing CTSA involvement in planning and implementation. By being responsive to such initiatives and priorities, CTSA goals can be harmonized with partners' operational objectives.

Fifth, as the healthcare landscape in the United States evolves, these partnerships offer opportunities to enhance translation of evidence to practice, study the effects of various payment models, and inform policy.

Other critical factors and facilitators included a common commitment among all parties to address local health disparities; funding in the form of pilot grants tailored to the needs of the public partners, which several CTSA hubs offered; and the maturity of the partnership. In Los Angeles, responsiveness to the pilot funding opportunity improved with each iteration of the funding cycle. In all cities, longer relationships increased trust among the partners.

Lessons Learned, Barriers, Gaps, and Challenges

Numerous barriers have become evident in the infancy of these academic/public health/health system partnerships (Table 2). When evaluating the programs’ experiences, several themes emerged around challenges and the solutions employed to overcome them.

First, there are often competing priorities between the public health/health system and academic partners. All partnerships addressed this by finding areas where the public partners’ priorities aligned with academic expertise. In Miami, they developed disease-specific training in response to emergent public health concerns of local county and state health departments. In Los Angeles, the CTSA pilot funding criteria and prioritization topics were co-developed with DHS. In Chicago, seed funding projects required alignment with C3/CDPH priorities.

Second, partners’ timelines often differ substantially. The public health/healthcare system cannot adjust the pace to accommodate traditional academic endeavors. Individuals making operational decisions typically do not have the luxury of time to collect pilot data and study intervention implementation and outcomes using conventional research timelines. They are given directives to implement changes broadly and swiftly. Nevertheless, integration with academic endeavors can be achieved. One example is emphasizing underutilized research methods in implementation and improvement designed to generate both locally applicable and generalizable knowledge. Another example is embedding academicians in the public health or healthcare system, to ensure that they are involved in the design, planning, implementation, evaluation, and dissemination of initiatives. Academicians may be frustrated by hasty implementation and limitations in evaluation of outcomes, yet public health and health systems do want to base their decisions on good science. Funding cycles and grant review criteria are not consistent with business timelines and priority setting and often do not value the emerging scientific methods that are designed for learning in systems (e.g., implementation science, improvement science, design science). It is possible to undertake rigorous science that balances the competing operational needs and culture between health departments and universities when these partners focus on appropriate methods and problem-solving. In addition, researchers may have difficulty maintaining their academic credentials, gauged by grant portfolios and publication records. This is an important issue for CTSA program leadership locally and nationally, to advance changes in university tenure policies to encourage and promote health services, community-based, and community-engaged research [ 15 ]. To that end, sustained and systematic collaboration with local health departments can alleviate logistical barriers to community-engaged research to fulfill the CTSA mandate to promote research that informs policy and practice.

Third, it is critical to skillfully navigate bureaucratic hurdles when working with government entities. Several CTSAs have found it particularly effective to appoint a liaison to the public health/healthcare system. Liaisons acted as bridges between partners, drawing on expertise in multiple areas and access to resources across the partnershipʼs sites. As employees of health departments, often with dual appointments at the partnering university, liaisons understand the needs of health departments on an intimate level. With their connections and operational experience, they can act as navigators and advisors to academicians. For example, in Chicago, the new lead of Research and Evaluation at CDPH and co-chair of C3 helps researchers identify funding opportunities, disseminate research findings, and broker relationships. In addition, she serves on the CTSI community governance bodies for all three Chicago CTSIs. In Los Angeles, each of the CTSAs (UCLA and USC) appointed as their liaison an academician who practices in the DHS system. Moreover, the DHS Chief Medical Officer not only served as a supporter and champion internally but was also on the advisory committees for both USC and UCLA CTSA hubs, supporting a bidirectional strategic relationship. This is reflected in infrastructure for data services and provider workgroups promoting institutionally tailored evidence-based practices and tools [ 16 , 17 ]. In Miami, a trusted staff member served as the primary and long-term point of contact for communication channels and helped train a larger workforce of CHWs as an extension of the liaison model. UCSF explored creating a joint position and subsequently developed “Navigator” roles.

Agreements that make programs sustainable often have to be approved by politicians and health department leaders, and the process for obtaining approval may be complex and time consuming. A strategy for addressing the bureaucratic hurdles is to leverage the tools developed in other partnerships. We have compiled resources, including a Request for Proposals and a position description, that may be helpful to others developing similar collaborations (see Supplementary Materials). In cases where longstanding educational partnerships and agreements are in place, agreements and policies devoted to supporting translational research may build upon relationships and roles that establish faculty in leadership positions that advance research.

Fourth, unstable funding threatens the success of these partnerships. Funding is a critical factor in developing informatics and research infrastructure, workforce development, and research and evaluation. Key positions such as the liaison between the CTSI and the public health/health system should be prioritized to ensure the success of these partnerships. Strategies to address this barrier include leveraging existing resources, applying for funding from diverse sources, and being creative with resource utilization. On the other hand, mechanisms and policies for accepting funding from grants into operating budgets can also prove challenging. Three of the four LAC-DHS hospitals have an established research foundation to administer grant funding for clinician-researchers; however, these entities do not have contact with the healthcare budgeting organizations that would support resources for information technology, space, or support staff. The unpredictability of research funding is reflected in the absence of investment or awareness of procedures for accepting relatively small funds for investigator-initiated awards.

Fifth, for CTSIs collaborating with public entities, navigating a political landscape represents unique challenges. Examples include policy initiatives that could threaten corporations and well-funded industries; projects that span various public entities’ purviews (e.g., Public Health vs. Health Services vs. Mental Health); responding to politicians’ priorities; and shifting gears when administrations change. Partnerships that rely heavily on a single influential champion without associated agreements, policies, and procedures are vulnerable to leadership changes. Strong stakeholder engagement and a well-developed infrastructure are critical to ensuring the success of navigating the political sphere and sustainability.

Finally, academicians’ tools may not be well-suited to the public health systems’ needs. For example, in our Los Angeles partnership, although the UCLA and USC CTSIs had knowledge and expertise in implementation science, LAC-DHS was more interested in health delivery science, execution, operationalization, and evaluation. Rather than detailed evaluation of facilitators and barriers of implementation, they desired broad and swift implementation of interventions that reduced resource utilization while improving quality of care. Academicians have typically used an incremental approach, which often requires additional resources, whereas LAC-DHS was more interested in disruptive approaches. We found that the best way to address the lack of alignment between the needs and the academic tools was to connect researchers with leaders in the public healthcare system early in the process of proposal development and to connect researchers with methodologists who focus on applied science in public delivery systems. Other potential solutions include expanding educational offerings for academicians, providing mentored hands-on experience, embedding researchers in public health/healthcare settings, training health department leaders in research, training community members in results dissemination, and offering incentives for cost-saving.

Discussion

Unique CTSA hub collaborations with city, county, and state health organizations are driving innovations in health service delivery and population health in four cities. A common element among all partnerships was the CTSA hubs’ alignment of activities with the needs of the city/county partners. Other critical factors included having designated “champions” in health departments, the CTSAs’ ability to respond quickly to evolving needs, and a common commitment to addressing local health disparities. Most programs encountered similar barriers, including competing priorities, differing timelines, bureaucratic hurdles, and unstable funding. The academic–public partnerships have explored numerous strategies to address these barriers. These partnerships offer a model for innovatively disrupting healthcare and enhancing population health.

Finding areas of common ground is key. While universities and public health/healthcare systems differ in their priorities, timelines, and modus operandi, successful partnerships are poised to answer some of the critical questions in health policy, including how to deliver critical services to populations in a cost-effective manner and how to address the needs of the public. Many of these challenges are not unique to partnerships between academic centers and public systems. Some of the experiences apply equally to academic medical centers that are increasingly acquiring large private healthcare organizations without an established culture of education and research. If CTSA programs are to have a substantive impact on population health, significant expansion beyond academic medical centers is needed to address the full range of social determinants of health (e.g., housing instability, concentrated poverty, chronic unemployment). Public health departments are ideal partners to consider the bidirectional relation of social determinants and health disparities [ 18 ].

Limitations

First, this manuscript focused on partnerships between CTSAs and public entities such as Departments of Public Health or Departments of Health Services. Yet, CTSAs also have broad-ranging activities engaging communities. Second, public health and health systems have extensive collaborations with researchers and local, national, and international foundations, beyond the CTSAs. PCORnet, for example, has funded nine Clinical Research Networks; several include collaborations between universities and public health systems. Although these partnerships have been impactful, they are beyond the scope of this paper. Third, we have detailed the experiences of seven CTSAs in four large metropolitan areas. These findings and experiences may not be generalizable to other settings, particularly nonurban areas. Fourth, while we provided the experience of seven CTSAs, other CTSAs may have partnerships with their local city/county/state health departments. Rather than providing a comprehensive review of all CTSA/public health/health system partnerships, our hope was to stimulate more discussion around these partnerships.

Future Directions

There are several ways in which collaborations among CTSA programs and public sector health departments can be optimized. First, CTSA programs can prioritize opportunities for workforce development on policy-relevant research through sponsored internships and practica for graduate students and faculty. For students, these training opportunities could be aligned with core program goals across CTSA-affiliate programs in health-related fields (e.g., medicine, public health, psychology, dentistry) to provide a community perspective and promote an awareness of public sector needs early in training. For faculty, innovative funding opportunities could be modeled on sabbatical leaves of absence, perhaps aligned with pilot seed funding for promising research proposals.

In addition, formal lines of communication between health departments and CTSA program leadership could be encouraged by the National Center for Advancing Translational Sciences (NCATS) in RFA announcements and program reviews. Encouraging each CTSA program to have at least one public sector representative on external advisory boards could also expedite cross-channel communication. Prioritizing rapid and consistent communication could help to bridge the gap between biomedical researchers and public health/health system leadership. This is especially important for early-stage research to encourage an appreciation for community resources and needs and to anticipate common barriers to implementation research [ 19 ]. Finally, ongoing feedback across CTSA and health department leadership could provide new opportunities for bi-directional exchanges that can lead to new research opportunities as well as adaptations in ongoing research to improve community-level outcomes.

A related challenge to sustaining changes is the paucity of focus on execution and operationalization. Historically, a missing link has been the failure to acknowledge and address the challenges lying between an idea or proven intervention and its implementation. Randomized trials in controlled academic settings can, at best, be considered proofs-of-concept in other settings. In addition to implementation science, a key focus should be on improvements in effective operational management and culture change. The DHS-USC-UCLA partnership has worked to close this gap in two ways. First, DHS has hired, coached, and empowered multiple academically trained physicians from the UCLA and USC CTSI hubs; these health services researchers have become key DHS leaders and operational managers within the clinical care delivery system. Second, the partnership has used behavioral economics as an efficient and effective culture change tool in healthcare delivery. The sustainability and retention of these types of programs and partnerships may be less financial and more cultural: a “tipping point” may require organizational dissemination and incentive alignment from the top down to cultivate operational mechanisms and durable pathways to success.

Overall, the goal is to promote research that informs health policy and to encourage health policy that is informed by research. These collaborations show that this goal is best accomplished by a strategic alliance of CTSA programs and health departments. As is evident from the examples of these four cities, new opportunities for shared data and resources emerge from ongoing discussions of shared priorities. The health departments benefit from the allocation of CTSA program trainees and funding, and the CTSA programs gain valuable insight into, and access to, community health and health system needs and resources. Ultimately, the alliances promote the overall goal of translational science to inform and improve population health.

Acknowledgments

The authors wish to thank the public officials, researchers, administrators, champions, liaisons, and community members who contributed to the success of each partnership. We also wish to thank the staff at the Center for Leading Innovation and Collaboration (CLIC) for their support in developing this manuscript.

This work was funded in part by the University of Rochester CLIC, under Grant U24TR002260. CLIC is the coordinating center for the CTSA Program, funded by the NCATS at the National Institutes of Health (NIH). This work was also supported by grants UL1TR001855, UL1TR001881, UL1TR002736, UL1TR001872, UL1TR001422, UL1TR000050, UL1TR002389 from NCATS. This work is solely the responsibility of the authors and does not necessarily represent the official views of the NIH.

Supplementary material

For supplementary material accompanying this paper visit http://dx.doi.org/10.1017/cts.2020.23.


Disclosures

The authors have no conflicts of interest to declare.

References

1. Woolf SH. The meaning of translational research and why it matters. Journal of the American Medical Association 2008; 299: 211–213.
2. Meslin EM, Blasimme A, Cambon-Thomsen A. Mapping the translational science policy ‘valley of death’. Clinical and Translational Medicine 2013; 2: 14.
3. Glasgow RE, et al. National Institutes of Health approaches to dissemination and implementation science: current and future directions. American Journal of Public Health 2012; 102: 1274–1281.
4. McGovern L, Miller G, Hughes-Cromwick P. The Relative Contribution of Multiple Determinants to Health Outcomes. Health Affairs Health Policy Briefs. Princeton, NJ: Robert Wood Johnson Foundation, 2014.
5. Bradley EH, Taylor LA. The American Health Care Paradox: Why Spending More is Getting Us Less. New York, NY: PublicAffairs, 2013.
6. U.S. Department of Health & Human Services. FY2016 Budget in Brief – Centers for Medicare & Medicaid Services Overview [Internet], 2015 [cited Oct 28, 2019]. (https://www.hhs.gov/about/budget/fy2016/budget-in-brief/index.html)
7. Brownson R, Fielding J, Green L. Building capacity for evidence-based public health: reconciling the pulls of practice and the push of research. Annual Review of Public Health 2018; 39: 27–53.
8. Estabrooks PA, Brownson RC, Pronk NP. Dissemination and implementation science for public health professionals: an overview and call to action. Preventing Chronic Disease 2018; 15: E162.
9. Gilliland CT, et al. The fundamental characteristics of a translational scientist. ACS Pharmacology & Translational Science 2019; 2: 213–216.
10. Dircksen JC, et al. Healthy Chicago 2.0: Partnering to Improve Health Equity. City of Chicago, March 2016.
11. Daskivich LP, et al. Implementation and evaluation of a large-scale teleretinal diabetic retinopathy screening program in the Los Angeles County Department of Health Services. JAMA Internal Medicine 2017; 177: 642–649.
12. United States Census Bureau. QuickFacts Miami-Dade County, Florida [Internet], 2018 [cited Oct 28, 2019]. (https://www.census.gov/quickfacts/fact/table/miamidadecountyflorida/PST045218)
13. Grumbach K, et al. Achieving health equity through community engagement in translating evidence to policy: the San Francisco Health Improvement Partnership, 2010–2016. Preventing Chronic Disease 2017; 14: E27.
14. Hiatt RA, et al. The San Francisco Cancer Initiative: a community effort to reduce the population burden of cancer. Health Affairs (Millwood) 2018; 37: 54–61.
15. Marrero DG, et al. Promotion and tenure for community-engaged research: an examination of promotion and tenure support for community-engaged research at three universities collaborating through a Clinical and Translational Science Award. Clinical and Translational Science 2013; 6: 204–208.
16. Soni SM, Giboney P, Yee HF. Development and implementation of expected practices to reduce inappropriate variations in clinical practice. JAMA 2016; 315: 2163–2164.
17. Barnett ML, et al. A health plan’s formulary led to reduced use of extended-release opioids but did not lower overall opioid use. Health Affairs 2018; 37: 1509–1516.
18. Marmot MG, Bell R. Action on health disparities in the United States: Commission on Social Determinants of Health. JAMA 2009; 301: 1169–1171.
19. Dodson EA, Baker EA, Brownson RC. Use of evidence-based interventions in state health departments: a qualitative assessment of barriers and solutions. Journal of Public Health Management & Practice 2010; 16: E9–E15.



The Research Gap (Literature Gap)


If you’re just starting out in research, chances are you’ve heard about the elusive research gap (also called a literature gap). In this post, we’ll explore the tricky topic of research gaps. We’ll explain what a research gap is, look at the four most common types of research gaps, and unpack how you can go about finding a suitable research gap for your dissertation, thesis or research project.

Overview: Research Gap 101

  • What is a research gap
  • Four common types of research gaps
  • Practical examples
  • How to find research gaps
  • Recap & key takeaways

What (exactly) is a research gap?

Well, at the simplest level, a research gap is essentially an unanswered question or unresolved problem in a field, which reflects a lack of existing research in that space. Alternatively, a research gap can also exist when there’s already a fair amount of existing research, but where the findings of the studies pull in different directions, making it difficult to draw firm conclusions.

For example, let’s say your research aims to identify the cause (or causes) of a particular disease. Upon reviewing the literature, you may find that there’s a body of research that points toward cigarette smoking as a key factor – but at the same time, a large body of research that finds no link between smoking and the disease. In that case, you may have something of a research gap that warrants further investigation.

Now that we’ve defined what a research gap is – an unanswered question or unresolved problem – let’s look at a few different types of research gaps.


Types of research gaps

While there are many different types of research gaps, the four most common ones we encounter when helping students at Grad Coach are as follows:

  • The classic literature gap
  • The disagreement gap
  • The contextual gap, and
  • The methodological gap


1. The Classic Literature Gap

First up is the classic literature gap. This type of research gap emerges when there’s a new concept or phenomenon that hasn’t been studied much, or at all. For example, when a social media platform is launched, there’s an opportunity to explore its impacts on users, how it could be leveraged for marketing, its impact on society, and so on. The same applies to new technologies, new modes of communication, transportation, etc.

Classic literature gaps can present exciting research opportunities, but a drawback you need to be aware of is that with this type of research gap, you’ll be exploring completely new territory. This means you’ll have to draw on adjacent literature (that is, research in adjacent fields) to build your literature review, as there naturally won’t be very many existing studies that directly relate to the topic. While this is manageable, it can be challenging for first-time researchers, so be careful not to bite off more than you can chew.


2. The Disagreement Gap

As the name suggests, the disagreement gap emerges when there are contrasting or contradictory findings in the existing research regarding a specific research question (or set of questions). The hypothetical example we looked at earlier regarding the causes of a disease reflects a disagreement gap.

Importantly, for this type of research gap, there needs to be a relatively balanced set of opposing findings. In other words, a situation where 95% of studies find one result and 5% find the opposite result wouldn’t quite constitute a disagreement in the literature. Of course, it’s hard to quantify exactly how much weight to give to each study, but you’ll need to at least show that the opposing findings aren’t simply a corner-case anomaly.


3. The Contextual Gap

The third type of research gap is the contextual gap. Simply put, a contextual gap exists when there’s already a decent body of existing research on a particular topic, but an absence of research in specific contexts.

For example, there could be a lack of research on:

  • A specific population – perhaps a certain age group, gender or ethnicity
  • A geographic area – for example, a city, country or region
  • A certain time period – perhaps the bulk of the studies took place many years or even decades ago and the landscape has changed.

The contextual gap is a popular option for dissertations and theses, especially for first-time researchers, as it allows you to develop your research on a solid foundation of existing literature and potentially even use existing survey measures.

Importantly, if you’re gonna go this route, you need to ensure that there’s a plausible reason why you’d expect potential differences in the specific context you choose. If there’s no reason to expect different results between existing and new contexts, the research gap wouldn’t be well justified. So, make sure that you can clearly articulate why your chosen context is “different” from existing studies and why that might reasonably result in different findings.


4. The Methodological Gap

Last but not least, we have the methodological gap. As the name suggests, this type of research gap emerges as a result of the research methodology or design of existing studies. With this approach, you’d argue that the methodology of existing studies is lacking in some way, or that they’re missing a certain perspective.

For example, you might argue that the bulk of the existing research has taken a quantitative approach, and therefore there is a lack of rich insight and texture that a qualitative study could provide. Similarly, you might argue that existing studies have primarily taken a cross-sectional approach, and as a result, have only provided a snapshot view of the situation – whereas a longitudinal approach could help uncover how constructs or variables have evolved over time.


Practical Examples

Let’s take a look at some practical examples so that you can see how research gaps are typically expressed in written form. Keep in mind that these are just examples – not actual current gaps (we’ll show you how to find these a little later!).

Context: Healthcare

Despite extensive research on diabetes management, there’s a research gap in terms of understanding the effectiveness of digital health interventions in rural populations (compared to urban ones) within Eastern Europe.

Context: Environmental Science

While a wealth of research exists regarding plastic pollution in oceans, there is significantly less understanding of microplastic accumulation in freshwater ecosystems like rivers and lakes, particularly within Southern Africa.

Context: Education

While empirical research surrounding online learning has grown over the past five years, there remains a lack of comprehensive studies regarding the effectiveness of online learning for students with special educational needs.

As you can see in each of these examples, the author begins by clearly acknowledging the existing research and then pinpoints where the current gap in knowledge (i.e., the research gap) lies.


How To Find A Research Gap

Now that you’ve got a clearer picture of the different types of research gaps, the next question is, of course, “how do you find these research gaps?”

Well, we cover the process of how to find original, high-value research gaps in a separate post. But, for now, I’ll share a basic two-step strategy here to help you find potential research gaps.

As a starting point, you should find as many literature reviews, systematic reviews and meta-analyses as you can, covering your area of interest. Additionally, you should dig into the most recent journal articles to wrap your head around the current state of knowledge. It’s also a good idea to look at recent dissertations and theses (especially doctoral-level ones). Dissertation databases such as ProQuest, EBSCO and Open Access are a goldmine for this sort of thing. Importantly, make sure that you’re looking at recent resources (ideally those published in the last year or two), or the gaps you find might have already been plugged by other researchers.

Once you’ve gathered a meaty collection of resources, the section that you really want to focus on is the one titled “further research opportunities” or “further research is needed”. In this section, the researchers will explicitly state where more studies are required – in other words, where potential research gaps may exist. You can also look at the “limitations” section of the studies, as this will often spur ideas for methodology-based research gaps.
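If you’re comfortable with a little scripting, you can even semi-automate this scanning step. Below is a minimal, purely illustrative Python sketch (not a Grad Coach tool) that uses NCBI’s public PubMed E-utilities API to pull recent review abstracts and flag those containing FRIN-style phrases. The topic query and the phrase list are hypothetical placeholders – swap in your own field and wording.

```python
# Illustrative sketch: flag recent PubMed review abstracts that contain
# "further research is needed" (FRIN) style phrases. The topic query and
# phrase list are hypothetical placeholders -- adapt them to your field.
import requests

EUTILS = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils"
TOPIC = '"online learning"[Title/Abstract] AND review[Publication Type]'
FRIN_PHRASES = ["further research", "future studies", "remains unclear", "little is known"]

# Step 1: find reviews published in the last two years (reldate is in days).
search = requests.get(f"{EUTILS}/esearch.fcgi", params={
    "db": "pubmed", "term": TOPIC, "retmax": 20,
    "datetype": "pdat", "reldate": 730, "retmode": "json",
}).json()
ids = search["esearchresult"]["idlist"]

# Step 2: fetch plain-text abstracts and flag FRIN-style wording.
if ids:
    text = requests.get(f"{EUTILS}/efetch.fcgi", params={
        "db": "pubmed", "id": ",".join(ids), "rettype": "abstract", "retmode": "text",
    }).text
    for record in text.split("\n\n\n"):  # records are roughly blank-line separated
        hits = [p for p in FRIN_PHRASES if p in record.lower()]
        if hits:
            print(f"Gap signals {hits}:\n{record.strip()[:200]}...\n")
```

Treat anything a script like this flags as a starting point for proper reading, not as a confirmed gap – you still need to check whether the gap has since been plugged.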

By following this process, you’ll orient yourself with the current state of research, which will lay the foundation for you to identify potential research gaps. You can then start drawing up a shortlist of ideas and evaluating them as candidate topics. But remember, make sure you’re looking at recent articles – there’s no use going down a rabbit hole only to find that someone’s already filled the gap 🙂

Let’s Recap

We’ve covered a lot of ground in this post. Here are the key takeaways:

  • A research gap is an unanswered question or unresolved problem in a field, which reflects a lack of existing research in that space.
  • The four most common types of research gaps are the classic literature gap, the disagreement gap, the contextual gap and the methodological gap.
  • To find potential research gaps, start by reviewing recent journal articles in your area of interest, paying particular attention to the FRIN (“further research is needed”) section.

If you’re keen to learn more about research gaps and research topic ideation in general, be sure to check out the rest of the Grad Coach Blog. Alternatively, if you’re looking for 1-on-1 support with your dissertation, thesis or research project, be sure to check out our private coaching service.





APA Division 15

Bridging the Research to Practice Gap

Innovative research methods and dissemination practices are leading the way.

Posted August 15, 2016


Post by Dr. Joseph M. Lucyshyn, University of British Columbia

For the past 15 years, the fields of education and psychology have been active in the evidence-based practice (EBP) movement that began in the medical field in the early 1990s. However, an ongoing problem in this movement is a research-to-practice gap. In recognition of this issue, a number of recent APA Division 15 blogs have addressed the importance and challenge to educators of adopting EBPs in school settings (Cook, 2015; Schutz, 2016). Despite the development of many EBPs, few have been implemented and sustained by practitioners in school and mental health settings. In addition, when practitioners adopt an EBP, the level of implementation fidelity is often low and implementation is thus unsuccessful. The gap between research and practice is attributed to many proximal factors, including inadequate practitioner training, a poor fit between treatment requirements and existing organizational structures, insufficient administrative support, and practitioner resistance to change (Gotham, 2006).

This proximal analysis has been recently supplemented by a more systemic analysis of the problem. Research scientists have recognized that the way in which they pursue the development of EBPs often interferes with their adoption by practitioners. In the world of research in education and psychology, there are essentially three types of studies: efficacy studies, effectiveness studies, and dissemination studies. Efficacy studies involve the investigation of a practice under ideal conditions. Effectiveness studies involve the investigation of a practice under real-world conditions. Dissemination studies involve investigating whether an effective practice can be implemented at a large scale by practitioners in real-world conditions. The systemic problem is that, by far, the majority of research to date has been efficacy studies, with far fewer effectiveness studies, and very few dissemination studies.

Bruce Chorpita and Eric Daleiden, clinical psychologists, have examined the research to practice gap and offered a more in-depth analysis that also suggests a promising solution (Chorpita & Daleiden, 2014). Borrowing from information science, Chorpita and Daleiden argue that there is a fundamental imbalance between design-time and run-time when a practitioner attempts to implement an EBP in a real-world setting. Design-time refers to the time in which the researcher designs and tests the practice under ideal conditions. Run-time refers to the time when a practitioner attempts to implement (i.e., run) the practice under real-world conditions. In researchers’ efforts to control sources of variability during design-time to maximize effects, circumstances in the natural service setting that require practitioners to adapt the practice are not taken into account. They argue for the adoption of a new model of research to practice called collaborative design. In this model, researchers and practitioners work together in collaborative partnership to ensure that an EBP is adapted to real-world conditions in a manner that preserves design-time features essential to effectiveness while allowing for practitioner feedback and adaptation to run-time conditions.

The research to practice gap also has contributed to the development of a new discipline, implementation science. Implementation science involves the study of conditions that promote or hinder the implementation of an EBP. Dean Fixsen and colleagues, leaders in implementation science, argue that researchers have to abandon “let it happen” and “help it happen” approaches to the dissemination of EBPs, and instead adopt a “make it happen” approach informed by implementation science (Fixsen et al., 2010). “Making it happen” involves five key features: (1) a purveyor organization capable of empowering practitioners to implement an EBP; (2) EBP components that are clearly defined; (3) training methods that effectively teach practitioners to implement the EBP with fidelity; (4) organizational support for implementation; and (5) leadership throughout the organization, from adaptive leadership that champions the change to technical leadership that ensures long-term sustainability.

A contemporary example of the development of an EBP consistent with these innovations in addressing the research to practice gap is School-wide Positive Behavior Interventions and Supports (PBIS). As described by its founders, Robert Horner and George Sugai, PBIS is:

“... a systems approach for establishing the social culture and individualized behavior supports needed for a school to be a safe and effective learning environment for students. ... [I]t is an approach designed to improve the adoption, accurate implementation, and sustained use of evidence-based practices related to behavior and classroom management and school discipline systems” (Sugai & Horner, 2009, p. 309).

From its beginnings in schools in Oregon in the late 1990s, PBIS is now being implemented in over 21,000 schools throughout the United States, and is being adopted in schools in Canada, Europe, and Australia. In practice, PBIS involves a multi-tiered system of positive behavior support that includes universal supports for all students, targeted supports for some students, and intensive supports for relatively fewer students who are unresponsive to the first two tiers of support. For example, at the universal tier, school-wide expectations are defined and explicitly taught. At the targeted tier, a small group of students may participate in a social skills training intervention. At the intensive tier, a student may receive function-based, multicomponent positive behavior support within a wraparound, interagency service delivery model.

The remarkable growth in the dissemination of PBIS may be attributed to the founders’ and their colleagues’ application of design-time/run-time thinking in the design and refinement of PBIS, and to their use of implementation science when scaling up research and dissemination to the school district and state levels. For example, PBIS research from its inception has included collaborative dialogue between researchers and school educators and administrators. This dialogue has allowed design-time and run-time considerations to reciprocally shape the approach. To empower school personnel to implement PBIS with fidelity and to scale up research and dissemination, researchers have developed regional purveyor groups that support implementation, articulated a blueprint that defines components of the approach, utilized a train-the-trainer coaching model to build local capacity, and worked with administrators to build organizational support for implementation. Each of these activities represents the use of implementation science to bring PBIS into the lives of thousands of educators and millions of students in the US, and now educators and students in Canada, Europe, and Australia.

The innovative research methods described above suggest the value of educational psychologists conducting research in collaboration with education professionals so that EBPs are more likely not only to be effective but also acceptable, feasible, and adaptable in educational settings. The innovative dissemination principles and practices illuminated by implementation science offer educational psychologists a clear pathway to the adoption of EBPs by practitioners in real-world settings. When educational psychologists integrate these innovations into their own lines of research, they are likely to build a broad and sturdy bridge between research and practice.

This post is part of a special series curated by APA Division 15 President Nancy Perry. The series, centered around her presidential theme of “Bridging Theory and Practice Through Productive Partnerships,” stems from her belief that educational psychology research has never been more relevant to practitioners’ goals. Perry hopes the blog series will provoke critical and creative thinking about what needs to happen so that researcher and practitioner groups can work together collaboratively and productively. Those interested can learn more—and find links to the full series—here.


References

Chorpita, B. F., & Daleiden, E. L. (2014). Structuring the collaboration of science and service in pursuit of a shared vision. Journal of Clinical Child & Adolescent Psychology, 43(2), 323–338.

Cook, B. C. (2015, June 2). The importance of evidence-based practice: Identifying evidence-based practices can be tricky, but well-worth the effort. [Web log post]. Retrieved from https://www.psychologytoday.com/blog/psyched/201506/the-importance-evid…

Fixsen, D. L., Blase, K. A., Duda, M. A., Naoom, S. F., & Van Dyke, M. (2010). Implementation of evidence-based treatments for children and adolescents: Research findings and their implications for the future. In J. R. Weisz & A. E. Kazdin (Eds.), Evidence-based psychotherapies for children and adolescents (2nd ed., pp. 435–450). New York: Guilford.

Gotham, H. J. (2006). Advancing implementation of evidence-based practices into clinical practice: How do we get there from here? Professional Psychology: Research and Practice, 37(6), 606–613.

Schutz, P. (2016, May 23). Spreading the word: Science isn’t just for scientists anymore. [Web log post]. Retrieved from https://www.psychologytoday.com/blog/psyched/201605/spreading-the-word

Sugai, G., & Horner, R. H. (2009). Defining and describing schoolwide positive behavior support. In W. Sailor, G. Dunlap, G. Sugai, & R. Horner (Eds.), Handbook of positive behavior support. New York: Springer.

The American Psychological Association’s Division 15 is a global association of educational psychologists.

Open access | Published: 09 October 2020

Bridging the research–practice gap in healthcare: a rapid review of research translation centres in England and Australia

Tracy Robinson, Cate Bailey, Heather Morris, Prue Burns, Angela Melder, Charlotte Croft, Dmitrios Spyridonidis, Halyo Bismantara, Helen Skouteris & Helena Teede

Health Research Policy and Systems, volume 18, Article number: 117 (2020)


Large-scale partnerships between universities and health services are widely seen as vehicles for bridging the evidence–practice gap and for accelerating the adoption of new evidence in healthcare. Recently, different versions of these partnerships – often called academic health science centres – have been established across the globe. Although they differ in structure and processes, all aim to improve the integration of research and education with health services. Collectively, these entities are often referred to as Research Translation Centres (RTCs) and both England and Australia have developed relatively new and funded examples of these collaborative centres.

This paper presents findings from a rapid review of RTCs in Australia and England that aimed to identify their structures, leadership, workforce development and strategies for involving communities and service users. The review included published academic and grey literature with a customised search of the Google search engine and RTC websites.

RTCs are complex system-level interventions that will need to disrupt the current paradigms and silos inherent in healthcare, education and research in order to meet their aims. This will require vision, leadership, collaborations and shared learnings, alongside structures, processes and strategies to deliver impact in the face of complexity. The impact of RTCs in overcoming the deeply entrenched silos across organisations, disciplines and sectors needs to be captured at the systems, organisation and individual levels. This includes workforce capacity and public and patient involvement that are vital to understanding the evolution of RTCs. In addition, new models of leadership are needed to support the brokering and mobilisation of knowledge in complex organisations.

Conclusions

The development and funding of RTCs represents one of the most significant shifts in the health research landscape and it is imperative that we continue to explore how we can progress the integration of research and healthcare and ensure research meets stakeholder needs and is translated via the collaborations supported by these organisations. Because RTCs are a recent addition to the healthcare landscape in Australia, it is instructive to review the processes and infrastructure needed to support their implementation and applied health research in England.


Introduction

“If you think competition is hard, you should try collaboration” (Kings Fund Report, 2019)

Over the past decade, there has been wide international concern that new health research and evidence is not translated into practice in a timely fashion [ 1 , 2 ]. The 17-year time lag between evidence and clinical practice change has been widely touted [ 3 ]. Systemic barriers such as lack of integration between health and research, dissonant metrics, organisational and professional silos, pervasive competition, lack of collaboration, and a failure to engage relevant stakeholders have all been identified as contributors to translation ‘gaps’ [ 4 , 5 , 6 ]. An international response to accelerate the translation and mobilisation of new knowledge has been the development of large-scale partnerships between universities, research institutes and health services that aim to integrate healthcare, research and education [ 7 ]. In world-leading United Kingdom and Australian health systems [ 8 ], these partnerships include a focus on evidence translation and health impact.

In England, these ‘partnerships’ include Collaborations for Leadership in Applied Health Research and Care (CLAHRCs), Academic Health Science Centres (AHSCs) and Academic Health Science Networks (AHSNs). Collectively, these entities are often referred to as Research Translation Centres (RTCs) and they have been established internationally in the United States, Canada, England and Australia. In 2008, the National Institute for Health Research (NIHR) established nine CLAHRCs to increase the uptake of promising clinical research into practice and improve outcomes by engaging service users and the public in applied health research [ 9 ]. CLAHRCs competed with each other for NIHR funding in 5-year cycles. Subsequently, AHSCs were established in 2009. They are not formally part of the NIHR and, unlike CLAHRCs, did not receive NIHR funding [ 9 ]. These centres originally developed through interactions between rival institutions and occurred in a policy context that supported and accredited a limited number of prestigious AHSCs that continue to operate in strong institutional competition [ 10 ].

In 2013, a second round of competitive CLAHRC funding saw the recognition of 13 centres across England. Simultaneously, AHSNs were established with clear structures of accountability and budget and a focus on promoting and adopting innovation in healthcare. Although the networks were commissioned by the National Health Service (NHS), concerns have been expressed that their future may be constrained by budgetary pressures [ 11 ], even though improving the uptake of innovation is valued in improving the quality and sustainability of healthcare in England. CLAHRCs were tasked with strengthening collaborations with the AHSNs [ 9 ]. A third round of CLAHRC funding, announced in 2019, saw the centres renamed as Applied Research Centres (ARCs), with an increased focus on social care and public health. Strengthening the links between the ARCs and the AHSNs remains a priority, with AHSNs expected to take up and implement evidence generated by the ARCs.

In Australia, the McKeon review (2013) identified that the best performing health systems are those that embed research in healthcare and recommended the establishment of integrated RTCs that combine hospital networks, universities and medical research institutes [ 12 ]. The review also recommended a doubling of investment in medical research to grow applied health research that drives efficiency and impacts on communities. Since 2015, the National Health and Medical Research Council (NHMRC) has accredited seven Advanced Health Research and Translation Centres (AHRTCs) and three Centres for Innovation in Regional Health (CIRHs) to encourage leadership in health research and implementation [ 13 ]. The accreditation process is competitive to a benchmark but RTCs do not compete against each other. The AHRTCs and CIRHs are, to some extent, modelled on RTCs elsewhere, including those in England, but are uniquely ‘health service-led’ collaborations. The CIRHs have a specific focus on the healthcare needs of regional and remote Australian populations.

Another unique feature of the RTCs in Australia is that they have developed a national alliance – the Australian Health Research Alliance (AHRA). The Australian Federal and State Governments have since invested in these RTCs across the AHRA. Funds are shared equally across all RTCs accredited by the NHMRC and, hence, the system enables collaboration for greater benefit from existing funding rather than promoting competition. The AHRA has increasingly prioritised research on RTC operations and activities, including how best to mobilise strategic prioritised health research in practice and how to measure and capture impact. This is because, despite significant government investment, the optimal collaboration models and activities are yet to be fully understood, especially in Australia where the RTCs and AHRA are relatively recent constructs. In England, several evaluations of the CLAHRCs and AHSCs have been undertaken [ 9 , 14 , 15 , 16 ] but these have mostly been internal evaluations and limited in scope. Given that both England and Australia have world-leading universal health systems [ 8 ] and that the recently established Australian centres are modelled on the English centres, a rapid review of RTCs (confined to England and Australia) was conducted to inform the ongoing development of these partnerships.

This rapid review is timely, with the CLAHRCs and AHSNs in England focusing on greater collaboration and the Australian centres recently being funded $300 million over 10 years, with a clear need for more research to guide evolution. Knowledge ‘gaps’ identified by Australian RTCs include workforce development, strategies for consumer and community involvement (CCI), optimal collaborations, governance arrangements and structures to drive collaboration. CCI and workforce development needs are diverse, yet here we focus on strategies aligned with the RTCs’ aim to integrate research and healthcare and to build collaborations and drive evidence-based healthcare improvement.

Rapid reviews have emerged as an efficient way of supporting health policy-making and systems development by providing evidence in a timely and cost-effective fashion [ 17 ]. They employ a wide variety of methods [ 18 ] and, although we acknowledge that rapid and limited evidence searches can lead to missed information, these methods were chosen as pragmatic and timely and because they capture both academic and grey literature. Traditional systematic review processes were not amenable to the time-frame required by our health partners (the AHRA) and would not capture the diverse reports and evaluations found largely in the grey literature, although it is acknowledged that the grey literature is not rigorously peer reviewed and that combining published and grey literature may lead to bias [ 19 ]. However, rapid reviews do meet the needs of end-users in addressing emerging issues within limited time-frames.

The scope of this review included the vision, governance and structure of RTCs, their CCI (public and patient involvement (PPI) in England), and their workforce development strategies. This review included published academic and grey literature with a customised search of the Google search engine and RTC websites. Since abstracts were unavailable for reports in the grey literature, executive summaries, recommendations and tables of contents were reviewed. We searched for academic publications in the EMBASE and SCOPUS databases using the following search terms: “Collaboration for Leadership in Applied Health Research and Care” OR “Academic Health Science Centre*” OR “Academic Health Science Network*” OR “Advanced Health Research and Translation Centre*” (acronyms were excluded, as they failed to yield results). In terms of the grey literature, the above terms linked to “AND evaluation” were searched on Google, then sorted by relevance. We also searched the websites of RTCs in England and Australia.

The search period was limited to the previous 10 years (2008 to August 2019) to ensure currency of our findings in a landscape where RTCs continue to evolve. Inclusion criteria for the published and grey literature included reports or evaluations that addressed structure, governance, community and consumer engagement, and/or workforce development. Although the heterogeneity of grey literature means it is less amenable to traditional forms of analysis, it did extend the scope of findings by incorporating information on the applied topic areas and by filling gaps that were apparent in the academic literature. Permission to conduct this study was received from the Monash University Human Research Ethics Committee.
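To make the screening workflow concrete, the following minimal sketch (our illustration, not the review’s actual pipeline) shows how merged database exports might be deduplicated before title and abstract screening. It assumes each export is a CSV file with “title” and “doi” columns; the file names are hypothetical.

```python
# Minimal deduplication sketch for merged literature-database exports.
# Assumes CSV exports with "title" and "doi" columns; file names are hypothetical.
import csv
import re

def norm_title(title: str) -> str:
    """Lowercase and strip punctuation so near-identical titles compare equal."""
    return re.sub(r"[^a-z0-9]+", " ", title.lower()).strip()

seen, unique = set(), []
for path in ["embase_export.csv", "scopus_export.csv"]:
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            # Prefer the DOI as the dedup key; fall back to a normalised title.
            key = (row.get("doi") or "").lower().strip() or norm_title(row.get("title") or "")
            if key and key not in seen:
                seen.add(key)
                unique.append(row)

print(f"{len(unique)} unique records retained for title/abstract screening")
```

In practice, reference managers such as EndNote or Covidence perform this step, but the logic is the same: a stable key (DOI, else a normalised title) decides whether two exported records refer to the same paper.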

A search of EMBASE and SCOPUS identified a total of 272 relevant papers (after duplicates removed) over 10 years (2008 to August 2019). A review of titles and abstracts identified 41 scientific papers for consideration, all of which addressed the evaluation domains of interest and were retained after full-text review, as shown in the PRISMA diagram in Fig. 1 . This included one systematic review of CLAHRC evaluations [ 20 ] but no evaluations of RTCs in Australia. The evaluations of CLAHRCs were diverse, and often descriptive and exploratory in nature with a paucity of evidence about the overall impact of centres, particularly in relation to knowledge mobilisation processes [ 20 ]. Of the evaluations reviewed, most focused on partnerships, structures and processes. Likewise, a scoping review of AHSCs found most of the literature to be descriptive case studies or commentaries [ 7 ]. This highlights the challenges involved in evaluating complex systems that may require different methods (such as social network analysis) to better capture their dynamics [ 20 ]. The grey literature and review of all RTC websites provided additional information specific to each centre.

Fig. 1. PRISMA diagram.

RTCs’ vision, governance and structure

Although RTCs share a common aim to integrate research and training with health services, there was considerable variation in their vision, governance and structure in both countries. In England, the CLAHRCs have a declared mission to support high-quality applied research that meets the needs of local health and care systems [ 21 ], yet there was considerable variation across individual centres. Table 1 demonstrates that collaboration for patient benefit, translation and the harvesting of evidence were commonly identified in the vision statements for CLAHRCs, while the AHSNs had a focus on innovation as a key part of their mission. The AHSNs were created to connect the NHS and academic organisations, local authorities and industry with a clear focus on improving patient outcomes [ 22 ]; they aim to foster opportunities for industry to work effectively with the NHS by leading regional networks and generating economic growth in their regions. The AHSCs in England share a similar aim to improve health education and patient care and are commonly ‘nested’ within an AHSN but their focus is more on research excellence and the translation of new innovation from the laboratory to the bedside. Governance structures in England appear well developed, albeit highly variable. Most RTCs had all partners represented on their governing boards, with specific steering, advisory and PPI committees. The AHSNs reported over-arching executive boards with discrete advisory committees that help define and advise on regional issues and the inclusion of clinical commissioning groups in their governance. The governance and structure of AHSCs was variable – some reported having academic leaders who determined themes, while others reported having equal representation from all partners.

The stated vision of RTCs in Australia emphasised the integration of research with healthcare and partnerships. The translation of evidence was a strong and consistent focus, largely funded by the Medical Research Future Fund (MRFF) that provides grants for rapid applied research translation [ 23 ]. Early funding priorities have been identified by the MRFF and include reducing unwarranted variation, improving clinical pathways, improving the health of vulnerable groups, increasing primary care research and reducing risk factors for chronic diseases [ 23 ]. In terms of their structure and governance, RTCs in Australia appeared to have more consistency, with all partners represented on boards or councils and various advisory, translation or management committees. Healthcare leadership (rather than academic) was a key feature of Australian RTCs as a means of enhancing the accountability, relevance and impact of research. This governance structure is challenged by the fact that universities are federally funded, whereas healthcare is funded by state governments [ 24 ]. However, this is being addressed by the fact that both the RTCs and the AHRA are federally funded. One RTC in Australia has a unique ‘bottom up’ structure, where governance is strongly led by Aboriginal community controlled organisations and Aboriginal ‘voice’ is embedded across all levels of the organisation (the Central Australian Academic Health Science Network). Few RTCs in either country report on their websites how their vision or governance was developed or whether a strategic plan was in place.

In terms of structure, or the ‘architecture’, some RTCs were built around clinical themes (largely disease focused with flagship programmes), with some being structured around platforms or fields of work such as public health and health services. In England, leading figures with particular research experience acted as Directors and many centres reported having a three-tier structure with a Board, management committee and working groups that align with the clinical themes/projects. While RTCs in both countries identified diverse clinical themes, few reported information on how they developed priorities for themes or whether they involved collaborations with services users and healthcare providers to inform structures and processes.

Workforce development

The review identified that workforce capacity is being developed across the system, organisation and individual levels to build capacity in translational research and healthcare improvement. This requires leaders with broader skills and support to operate across organisational boundaries and address system-level barriers to change. In England, national efforts to develop leadership include the NHS Leadership Academy and NHS Horizons, which collaborate to identify future leadership development directions [ 25 ]. While the Horizons team supports leaders of change, the Leadership Academy provides a range of tools, models and programmes to support individuals and organisations to develop leaders [ 26 ]. In Australia, there is no coordinated national effort but some initiatives are emerging. In this context, Table 1 demonstrates that RTCs in both countries are all undertaking workforce capacity-building. At the individual level, diverse training needs were identified, including research and data skills, CCI and translation literacy.

The literature confirms the focus on and importance of skills in implementation research, knowledge mobilisation, evaluation skills and collaborative priority-setting with potential end-users of research [ 3 , 27 ]. Time and space are needed to build effective collaborations and, while the ARC model did facilitate collaborative priority-setting, Cooke et al. [ 27 ] reported that scant knowledge exists about processes or guidance on how best to achieve meaningful collaboration. Platforms for negotiation and decision-making (such as special interest groups and advisory groups) were possible enabling factors, as were formal consensus methods for priority-setting [ 27 ]. In England, the James Lind Alliance brings patients, carers and clinicians together to identify research priorities [ 28 ]. In Australia, Delphi and Nominal Group Techniques have been adapted and used for eliciting priorities across stakeholders [ 29 , 30 ].

In England, an important organisational workforce enabler for meaningful engagement, embedding research into healthcare and the translation of new evidence, was leadership. Leadership was identified as a key factor in the overall success of RTCs, including in their workforce capacity for knowledge mobilisation [ 20 , 31 , 32 , 33 ]. Currie et al. [ 33 ] stressed the importance of understanding the social position of senior members of CLAHRCs. Although well-known clinical academics are likely to lead the centres, this study found that privileging pre-existing relationships may constrain much-needed change and meaningful engagement with service users and frontline clinicians [ 33 ]. Leadership in CLAHRCs has been enacted in three ways: ‘push’ models of top-down leadership that focus on technical infrastructure; ‘pull’ models that aim to increase leadership capacity among project leads; and more collective approaches that disperse leadership to drive new relations between academia and clinical practice [ 32 ]. Aligned with this, a recent Kings Fund report highlights the importance of system leadership (being comfortable with chaos) in driving meaningful change [ 6 ].

Although dispersed leadership approaches were crucial for the exchange of new knowledge, push and pull models continued to influence how knowledge was ‘moved’ within CLAHRCs, especially in relation to the development of technical infrastructures and translating knowledge at the project level [ 32 ]. While more distributed models of leadership were associated with increased potential for engagement with the CLAHRCs [ 20 ], a longitudinal realist evaluation of three centres found that a blend and alignment of designated leadership with distributed leadership was a necessary condition for collective action and implementation [ 34 ]. The presence of both these leadership styles appeared to be important for ensuring alignment and integration across streams [ 34 ]. As such, workforce development in leadership appears important in the context of RTCs.

The need to move knowledge across professional ‘silos’ resulted in several RTCs creating new system approaches such as knowledge-brokering roles (although they varied considerably across centres) [ 20 , 35 , 36 , 37 , 38 ]. For example, some deployed ‘diffusion fellows’, who were senior health staff seconded to actively bridge the research–practice gap [ 35 ]. Despite showing much promise, knowledge brokering and other hybrid roles were often unrecognised and lacked support within their organisations [ 39 , 40 ]. Although management theory identifies that knowledge mobilisation relies on relationships and is an inherently social undertaking [ 9 , 41 ], the deployment of hybrid roles as a means of overcoming system barriers requires particular capabilities and was found to be challenging [ 20 ]. Nevertheless, workforce capabilities, such as stakeholder engagement, co-design, collaboration and team-work, and the co-production of knowledge, rely on understanding complexity and working across multiple levels (individual and organisational) to enact new knowledge [ 42 ]. The importance of developing skills for mobilising knowledge across disciplines and different users was confirmed in the literature [ 27 , 33 , 43 , 44 , 45 , 46 , 47 ]. Mobilising knowledge that is multidisciplinary requires different communities to interact [ 15 ] and RTCs are well placed to enable this kind of cross-silo collaboration, including with health, business, IT, social sciences, engineering and other disciplines.

Individual workforce capabilities for supporting RTC endeavours are not all technical and may include observational skills, appreciative inquiry, systems thinking, improved understanding of data, distributive or collective leadership, and quality improvement – all of which are increasingly found in English workforce programmes but are not yet incorporated into workforce programmes in Australia. At the level of the system and organisation, key workforce development approaches identified in this review include leadership and mentoring [ 48 ], processes for stakeholder engagement [ 27 ], and the creation of new hybrid roles to move knowledge across discipline and organisational boundaries. Despite a focus on leadership, the evaluation of three CLAHRCs by Rycroft-Malone et al. [ 49 ] identified that, on balance, they tended to conduct research rather than focus on ‘how’ to use and apply new research evidence. This means that closing the knowledge–practice gap and methods for translating evidence into improved patient outcomes are yet to be clearly established [ 49 ]. However, AHSNs are now more aligned with the CLAHRCs to increase the translation of generated evidence.

CCI (Australia) and PPI (England)

One significant difference between Australian and English centres was the latter’s strong focus on PPI. England has a national PPI strategy, with PPI a policy and funding requirement and a key strategy for situating patients at the centre of research and healthcare improvement [ 27 ]. The importance of PPI in healthcare has been acknowledged for some time in England; however, there is still limited research on the optimal methods for driving and enabling PPI [ 34 , 50 ]. The literature highlights a significant gap in understanding how PPI can inform implementation research, which often focuses on the behaviour of health professionals and on health systems and policies (as opposed to clinical research) [ 34 ]. Despite significant advancement in England, cultural barriers persist, including the narrowness of PPI models that fail to address empowerment, equality or diversity strategies [ 51 ]. Often, PPI operates more as consultation than as active co-production and empowerment.

Other processes for authentic PPI enshrined in all CLAHRCs include providing payment for PPI representatives to attend meetings and training to enable more informed and active participation. The provision of training and remuneration for PPI representatives is a significant difference between England and Australia; however, real progress in PPI in England cannot be realised without an effective mechanism for coordinating efforts across the complex network of organisations that comprise the NIHR [ 51 ]. The systematic review conducted by Kislov et al. [ 20 ] reported that none of the NIHR-funded evaluations had a particular focus on PPI, although one included interviews with PPI representatives [ 9 ] and three investigated how PPI was enacted [ 52 , 53 , 54 ]. These evaluations all acknowledged the difficulties of quantifying PPI elements and Marston and Renedo [ 52 ] recommend the inclusion of patient voices and tracking dynamic social processes and networks to better understand the key elements and impact of PPI. It is important to identify the dynamic processes and networks through which PPI can contribute to healthcare improvement efforts [ 20 ] as well as the key time-points and strategies for PPI to have the most impact in the translational research cycle [ 51 ].

In Australia, only three RTCs included dedicated information on CCI on their websites. However, across all RTCs, the AHRA has prioritised CCI as a national system-level initiative, developed a CCI strategy with key stakeholders, and completed both an environmental scan of the literature and a national survey of the extent and nature of CCI. In 2018, a national workshop was convened to prioritise next steps, and RTCs committed funding and staff to progress this work collaboratively. To date, findings from Australia confirm that CCI is complex (consistent with the English experience) and that the locus of control for involvement in Australia remains largely with researchers [ 55 ]. The AHRA report also identified a need for more resourcing and for policy better aligned with England’s. It recommended a range of strategies to promote and explore the value and impact of CCI, including the development of minimum standards for good practice in CCI in RTCs and guidance on how to incorporate it across the research life cycle [ 55 ], alongside training and capacity-building. The report’s recommendations are currently being implemented collaboratively and coordinated nationally through the AHRA.

This review explored the visions, structures and governance processes of RTCs, their workforce development activities and CCI/PPI as key factors for integrating research with health service and community needs. Centres in both England and Australia share a common architecture in that they generally have boards that represent all partners and are organised along research themes that reflect their research strengths, with cross-cutting platforms to enable collaboration with health services. In terms of their vision, RTCs in England appear to have a greater research focus on innovation (AHSNs), collaborative and applied research (CLAHRCs), and a traditional push model of discovery and clinical research into practice (AHSCs). In Australia, RTC visions are aligned with translation, partnerships, and impact and have a strong and consistent focus on research translation.

In terms of workforce development (aligned with RTC visions to integrate research into healthcare, build collaboration and drive evidence-based healthcare improvement), leadership was a key enabling factor. Given that RTCs are an amalgam of stakeholders with potentially competing demands, it is perhaps not surprising that leadership is a prominent theme. Leadership appears to require both designated and distributed, or top-down and bottom-up, approaches to facilitate working collectively with multiple stakeholders [ 32 , 36 ]. Collective and distributed leadership approaches have also been shown to enable healthcare improvement and transformational change [ 32 , 56 , 57 ]. Evaluation reports and published literature identified knowledge mobilisation as another key workforce skill for evidence translation. Historically, the evidence translation gap was perceived as a practice/service responsibility and challenge, rather than a problem of implementation or knowledge creation [ 34 ]. This highlights the need for systems approaches with a more nuanced understanding of how knowledge moves and can be brokered within complex organisations to enable improvement.

In England, structural solutions, such as the creation of new hybrid roles, have proved challenging, particularly in relation to working across all levels of complex organisations and diverse contexts [ 58 , 59 , 60 ]. However, skills and capabilities for moving knowledge in healthcare organisations were identified, including process and systems thinking, stakeholder involvement, change management, facilitation, negotiation and advocacy [ 34 , 61 ]. These are yet to find their way into traditional healthcare innovation and knowledge mobilisation roles, where the focus is often organisational and inward-looking rather than collaborative with stakeholders and engaged with external evidence [ 40 ]. At this stage, workforce capacity development is more advanced in English centres than in Australia. However, Australian RTCs are now working together, with nationally coordinated efforts to improve and scale workforce development activity.

Likewise, in England, PPI is well established and embedded in policy and funding requirements, although there is recognition that optimal processes for PPI and their impact need to be better understood [ 14 , 15 , 16 ]. When utilised effectively, PPI appears to have the potential to transform services and address the research–practice divide [ 62 , 63 ], but further work is needed on how patient input can best be integrated at all levels within and between RTCs. In England, funding, dedicated staff and training in co-design and co-production with stakeholders are available for both PPI members and frontline staff; this is not yet mirrored in Australia, where training programmes for the public and service users are emerging but remain under-developed. However, the AHRA has strongly prioritised CCI, developed a national framework and is focusing on a coordinated approach; funding bodies encourage but do not require CCI. One RTC in Australia, with community-controlled Aboriginal health service members, appears to be leading in terms of processes for community engagement and participation in clinical and corporate governance. Further research and evaluation are needed on the optimal methods and impact of CCI in research and healthcare improvement.

Overall, the findings from this review are important for the evolving RTCs in Australia, which are relatively young organisations due for re-accreditation by the NHMRC in 2022. Although this review focused on the structures, leadership, workforce development and community engagement of RTCs, it is important to acknowledge that these highly complex interventions, with their relational interactions and processes for collaboration, are often poorly captured and articulated in the literature. To understand these nuances, qualitative research is warranted as a means of capturing the range of activities and outcomes generated by these collaborative platforms. Australia has yet to evaluate its RTCs, but it is notable that the Australian government has recently committed to a 10-year funding strategy, which validates the perceived potential and importance of these entities and provides for long-term strategic planning. It also mandates more evidence-based approaches and the need for evaluation. The Australian MRFF was announced as part of the 2014–2015 federal budget and will build to a $20 billion perpetual fund over the next decade [ 64 ]. The MRFF scheme will complement and enhance current research funding schemes but will focus on delivering a health system fully informed by research, with community and patient impact [ 65 ]. This approach supports RTC visions and aligns directly with strategically prioritised research rather than conventional investigator-led research [ 66 ]. This is important because the systematic review of CLAHRC evaluations identified that 5-year funding cycles in England were insufficient to foster and embed collaborations between academics and service providers [ 20 ].

In Australia, the AHRA has prioritised streamlining and the consistency of structures and processes, whilst respecting regional differences. This Australian collaboration is possible in the context of avoiding direct competition for accreditation or funding. This has enabled a more collaborative approach to challenges and coordinated activities nationally within and between centres. This is consistent with recommendations from England that more research is needed that focuses on how collaboration occurs between RTCs [ 16 ] and with the recent Kings Fund report [ 6 ] on the vital need for more collaboration and less competition in healthcare improvement.

RTCs are complex system-level interventions that will need to disrupt the current paradigms and silos inherent in healthcare, education and research in order to meet their aims. This is likely to require vision, leadership, collaboration and shared learning, alongside structures, processes and strategies to deliver impact in the face of complexity. The impact of RTCs in overcoming the deeply entrenched silos across organisations, disciplines and sectors needs to be captured at the system, organisation and individual levels. Collectively, the creation of structures and streamlined processes to accelerate stakeholder engagement and collaboration, evidence synthesis, knowledge transfer, data systems and the effective integration of implementation and improvement into healthcare is the holy grail of RTCs. However, many centres appear to still focus on clinical themes and siloed projects. As these RTCs mature, identifying effective ways to promote system change will rely on capturing higher-level learnings from the plethora of RTC projects.

This includes a better understanding of how to strategically prioritise research and how to build the capacity of the workforce to translate new knowledge into action. Recently, RTCs have developed novel ways of demonstrating these processes, including ‘casebooks’ that detail the impact of research on NHS practice [ 67 ]. Consistency of purpose and activity is needed, alongside a focus on regional needs. Associated policy intentions and funding objectives that support shared learning and collaboration are also important. Regardless of how RTCs are structured or where they are situated, these collaborative entities share common potential and challenges, mostly around how to collaborate in a siloed and competitive system and how to ensure that research and service delivery are integrated and that evidence is generated and translated for the benefit of the communities they serve.

Limitations

This rapid review synthesises diverse literature about broad and complex collaborative RTCs that have become key entities in policy and healthcare service improvement. Combining diverse information sources is challenging and, in the current review, may have limited the depth of findings. Although rapid reviews allow for the inclusion of grey literature, optimal methods for conducting them are still evolving and yet to be determined. Such reviews may lack rigour even while proving more viable in terms of cost, timeliness and the breadth of information accessed. However, there is growing recognition that understanding systems perspectives and their inherent complexity requires reviews of diverse sources and is not always well served by traditional approaches such as systematic reviews [ 68 ]. The review focuses only on England and Australia, as world-leading universal health systems with strong policy and funding commitments to the integration of research and healthcare, evidence-based improvement and RTCs.

A challenge for all RTCs is how to integrate research and healthcare and overcome competition to build collaboration and deliver impact. The English experience highlights that this requires a better understanding of the structure and vision of centres, their workforce capacity needs, and the nature of their collaborations with service users and communities. Although workforce capacity-building and the involvement of consumers and the community are more developed in England, the development of an alliance between centres in Australia is providing a platform for national coordination, shared learning and rapid collaborations. This alliance has facilitated and shared a national agenda in a range of areas. Given that the development and funding of RTCs represents one of the most significant shifts in the health research landscape, it is imperative that we continue to explore how we can progress the integration of research and healthcare and ensure that research meets stakeholder needs and is translated via the collaborations supported by these organisations.

Availability of data and materials

The datasets used and/or analysed during the current study are available from the corresponding author on reasonable request.

Abbreviations

AHSC: Academic Health Science Centres
AHSN: Academic Health Science Networks
ARC: Applied Research Centre (NIHR)
NIHR: National Institute for Health Research
AHRA: Australian Health Research Alliance
CCI: consumer and community involvement
CLAHRC: Collaborations for Leadership in Applied Health Research and Care
MRFF: Medical Research Future Fund
NHMRC: National Health and Medical Research Council
NHS: National Health Service
PPI: public and patient involvement
RTC: Research Translation Centres

References

Fineout-Overholt E, Melnyk BM, Schultz A. Transforming health care from the inside out: advancing evidence-based practice in the 21st century. J Prof Nurs. 2005;21(6):335–44.

Westfall JM, Mold J, Fagnan L. Practice-based research—“blue highways” on the NIH roadmap. JAMA. 2007;297(4):403–6.

Morris ZS, Wooding S, Grant J. The answer is 17 years, what is the question: understanding time lags in translational research. J R Soc Med. 2011;104(12):510–20.

Currie G, Suhomlinova O. The impact of institutional forces upon knowledge sharing in the UK NHS: the triumph of professional power and the inconsistency of policy. Public Adm. 2006;84(1):1–30.

Martin GP, Currie G, Finn R. Reconfiguring or reproducing intra-professional boundaries? Specialist expertise, generalist knowledge and the ‘modernization’ of the medical workforce. Soc Sci Med. 2009;68(7):1191–8.

Timmins N. Leading for integrated care ‘If you think competition is hard, you should try collaboration’. London: The Kings Fund; 2019. https://www.kingsfund.org.uk/sites/default/files/2019-11/leading-for-integrated-care.pdf . Accessed 5 Nov 2019.

French CE, Ferlie E, Fulop NJ. The international spread of Academic Health Science Centres: A scoping review and the case of policy transfer to England. Health Policy. 2014;117(3):382–91.

Schneider EC, Sarnak DO, Squires D, Shah A. Mirror, Mirror 2017: International Comparison Reflects Flaws and Opportunities for Better US Health Care; The Commonwealth Fund. 2017. http://www.hcfat.org/Mirror_Mirror_2017_International_Comparison.pdf . Accessed 5 Nov 2019.

Soper B, Hinrichs S, Drabble S, Yaqub O, Marjanovic S, Hanney S, et al. Delivering the aims of the Collaborations for Leadership in Applied Health Research and Care: understanding their strategies and contributions. Southampton: NIHR Journals Library; 2015.

Fischer MD, Ferlie E, French C, Fulop N, Wolfe C. The Creation and Survival of an Academic Health Science Organization: Counter-Colonization Through A New Organizational Form? University of Oxford - Said Business School Working Paper No. 2013–26. 2013. https://ssrn.com/abstract=2331463 . Accessed 6 Jan 2020.

Bienkowska-Gibbs T, Exley J, Saunders CL, Marjanovic S, Chataway J, MacLure C, et al. Evaluating the role and contribution of innovation to health and wealth in the UK: a review of innovation, health and wealth: phase 1 final report. Rand Health Q. 2016;6(1):7.

McKeon S, Alexander E, Brodaty H, Ferris B, Frazer I, Little M. Strategic Review of Health and Medical Research, Summary Report. Department of Health and Ageing: Canberra; 2013.

NHMRC. Recognition of Advanced Health Research and Translation Centres and Centres for Innovation in Regional Health a report to NHMRC from the International Review Panel. Canberra: Australian Government; 2017. https://www.nhmrc.gov.au/file/5791/download?token=SZ1tzqiB . Accessed 23 Sept 2020.

Kislov R, Boaden R. Evaluation of the NIHR CLAHRCs and publication of results: a brief reflection. Manchester: Manchester Business School; 2015. https://pdfs.semanticscholar.org/5e50/0118b60c329721d14569fb26c390ebdf7e92.pdf . Accessed 5 Nov 2019.

Lockett A, El Enany N, Currie G, Oborn E, Barrett M, Racko G, et al. A formative evaluation of Collaboration for Leadership in Applied Health Research and Care (CLAHRC): institutional entrepreneurship for service innovation. Health Services and Delivery Research. Southampton: NIHR Journals Library; 2014.

Rycroft-Malone J, Wilkinson JE, Burton CR, Andrews G, Ariss S, Baker R, et al. Implementing health research through academic and clinical partnerships: a realistic evaluation of the Collaborations for Leadership in Applied Health Research and Care (CLAHRC). Implement Sci. 2011;6:74.

Langlois EV, Straus SE, Antony J, King VJ, Tricco AC. Using rapid reviews to strengthen health policy and systems and progress towards universal health coverage. BMJ Glob Health. 2019;4:e001178.

Ganann R, Ciliska D, Thomas H. Expediting systematic reviews: methods and implications of rapid reviews. Implement Sci. 2010;5:56.

Hopia H, Latvala E, Liimatainen L. Reviewing the methodology of an integrative review. Scand J Caring Sci. 2016;30(4):662–9.

Kislov R, Wilson PM, Knowles S, Boaden R. Learning from the emergence of NIHR Collaborations for Leadership in Applied Health Research and Care (CLAHRCs): a systematic review of evaluations. Implement Sci. 2018;13:111.

NIHR. NIHR Collaborations for Leadership in Applied Health Research and Care (CLAHRCs). 2018. https://www.nihr.ac.uk/explore-nihr/support/collaborating-in-applied-health-research.htm . Accessed 23 Sept 2020.

The AHSN Network. About Academic Health Science Networks. United Kingdom: The AHSN Network; 2019. https://www.ahsnnetwork.com/about-academic-health-science-networks . Accessed 5 Nov 2019.

Department of Health. Rapid Applied Research Translation Initiative Canberra: Australian Government; 2019. https://www.health.gov.au/initiatives-and-programs/rapid-applied-research-translation-initiative#how-will-these-goals-be-met . Accessed 23 Sept 2020.

Fisk NM, Wesselingh SL, Beilby JJ, Glasgow NJ, Puddey IB, Robinson BG, et al. Academic health science centres in Australia: let’s get competitive. Med J Aust. 2011;194(2):59–60.

Horizons. Programmes of Work United Kingdom: Horizons; 2019. http://horizonsnhs.com/programmes-of-work/ . Accessed 23 Sept 2020.

National Health Service. About us United Kingdom: NHS; 2019. https://www.leadershipacademy.nhs.uk/about/ . Accessed 26 Aug 2019.

Cooke J, Ariss S, Smith C, Read J. On-going collaborative priority-setting for research activity: a method of capacity building to reduce the research-practice translational gap. Health Res Policy Syst. 2015;13:25.

James Lind Alliance, National Institute for Health Research. The James Lind Alliance Guidebook. 2018. https://www.jla.nihr.ac.uk/jla-guidebook/ . Accessed 23 Sept 2020.

Rankin NM, McGregor D, Butow PN, White K, Phillips JL, Young JM, et al. Adapting the nominal group technique for priority setting of evidence-practice gaps in implementation science. BMC Med Res Methodol. 2016;16:110.

Teede HJ, Norman RJ, Garad RM. A new evidence-based guideline for assessment and management of polycystic ovary syndrome. Med J Aust. 2019;210(6):285.e1.

Fischer MD, Dopson S, Fitzgerald L, Bennett C, Ferlie E, Ledger J, et al. Knowledge leadership: mobilizing management research by becoming the knowledge object. Hum Relat. 2016;69(7):1563–85.

Spyridonidis D, Hendy J, Barlow J. Leadership for knowledge translation: the case of CLAHRCs. Qual Health Res. 2015;25(11):1492–505.

Currie G, Lockett A. Distributing leadership in health and social care: concertive, conjoint or collective? Int J Manag Rev. 2011;13(3):286–300.

Rycroft-Malone J, Burton C, Wilkinson JE, Harvey G, McCormack B, Baker R, et al. Collective action for knowledge mobilisation: a realist evaluation of the Collaborations for Leadership in Applied Health Research and Care. Health Services and Delivery Research. Southampton: NIHR Journals Library; 2015;3(44).

Rowley E, Morriss R, Currie G, Schneider J. Research into practice: collaboration for leadership in applied health research and care (CLAHRC) for Nottinghamshire, Derbyshire, Lincolnshire (NDL). Implement Sci. 2012;7:40.

Kislov R, Walshe K, Harvey G. Managing boundaries in primary care service improvement: a developmental approach to communities of practice. Implement Sci. 2012;7:97.

Evans S, Scarbrough H. Supporting knowledge translation through collaborative translational research initiatives: ‘Bridging’ versus ‘blurring’ boundary-spanning approaches in the UK CLAHRC initiative. Soc Sci Med. 2014;106:119–27.

Sinfield P, Donoghue K, Horobin A, Anderson ES. Placing interprofessional learning at the heart of improving practice: the activities and achievements of CLAHRC in Leicestershire, Northamptonshire and Rutland. Qual Prim Care. 2012;20(3):191–8.

Gerrish K. Tapping the potential of the National Institute for Health Research Collaborations for Leadership in Applied Health Research and Care (CLAHRC) to develop research capacity and capability in nursing. J Res Nurs. 2010;15(3):215–25.

McLoughlin I, Burns P, Looi E, Sohal A, Teede H. Brokering knowledge into the public sector: understanding improvement facilitators’ priorities in the redesign of hospital care. Public Manag Rev. 2020;22:836–56.

Ovseiko PV, Heitmueller A, Allen P, Davies SM, Wells G, Ford GA, et al. Improving accountability through alignment: the role of academic health science centres and networks in England. BMC Health Serv Res. 2014;14:24.

Ferlie E, Crilly T, Jashapara A, Peckham A. Knowledge mobilisation in healthcare: a critical review of health sector and generic management literature. Soc Sci Med. 2012;74(8):1297–304.

D’Andreta D, Scarbrough H, Evans S. The enactment of knowledge translation: a study of the Collaborations for Leadership in Applied Health Research and Care initiative within the English National Health Service. J Health Serv Res Policy. 2013;18(Suppl. 3):40–52.

Fitzgerald L, Harvey G. Translational networks in healthcare? Evidence on the design and initiation of organizational networks for knowledge mobilization. Soc Sci Med. 2015;138:192–200.

Heaton J, Day J, Britten N. Inside the “black box” of a knowledge translation program in applied health research. Qual Health Res. 2015;25(11):1477–91.

Rycroft-Malone J, Wilkinson J, Burton CR, Harvey G, McCormack B, Graham I, et al. Collaborative action around implementation in Collaborations for Leadership in Applied Health Research and Care: towards a programme theory. J Health Serv Res Policy. 2013;18(Suppl. 3):13–26.

Hewison A, Rowan L. Bridging the research-practice gap. Br J Healthc Manag. 2016;22(4):208–10.

Athanasiou T, Patel V, Garas G, Ashrafian H, Shetty K, Sevdalis N, et al. Mentoring perception and academic performance: an Academic Health Science Centre survey. Postgrad Med J. 2016;92(1092):597–602.

Rycroft-Malone J, Burton CR, Bucknall T, Graham ID, Hutchinson AM, Stacey D. Collaboration and co-production of knowledge in healthcare: opportunities and challenges. Int J Health Policy Manag. 2016;5(4):221–3.

Staniszewska S, Herron-Marx S, Mockford C. Measuring the impact of patient and public involvement: the need for an evidence base. Oxford: Oxford University Press; 2008.

Staniszewska S, Denegri S, Matthews R, Minogue V. Reviewing progress in public involvement in NIHR research: developing and implementing a new vision for the future. BMJ Open. 2018;8(7):e017124.

Marston C, Renedo A. Understanding and measuring the effects of patient and public involvement: an ethnographic study. Lancet. 2013;382:S69.

Renedo A, Marston CA, Spyridonidis D, Barlow J. Patient and Public Involvement in Healthcare Quality Improvement: how organizations can help patients and professionals to collaborate. Public Manag Rev. 2015;17(1):17–34.

Ariss S, Cooke J, Smith C, Reed J, Nancarrow S. NIHR CLAHRC for South Yorkshire internal evaluation report November 2011: executive summary. Sheffield: National Institute of Health Research (NIHR); 2012.

Australian Health Research Alliance. Consumer and Community Involvement in Health and Medical Research: An Australia-wide Audit. Western Australia: Australian Health Research Alliance; 2018. https://www.wahtn.org/wp-content/uploads/2019/03/AHRA-CCI_Final-Report_Full_Dec2018.pdf .

El Enany N, Currie G, Lockett A. A paradox in healthcare service development: professionalization of service users. Soc Sci Med. 2013;80:24–30.

Ocloo J, Matthews R. From tokenism to empowerment: progressing patient and public involvement in healthcare improvement. BMJ Qual Saf. 2016;25(8):626–32.

Braithwaite J. Growing Inequality: Bridging Complex Systems, Population Health and Health Disparities. Oxford: Oxford University Press; 2018.

Greenhalgh T, Papoutsi C. Studying complexity in health services research: desperately seeking an overdue paradigm shift. BMC Med. 2018;16:95.

Kitson A, Brook A, Harvey G, Jordan Z, Marshall R, O’Shea R, et al. Using complexity and network concepts to inform healthcare knowledge translation. Int J Health Policy Manage. 2018;7(3):231–43.

Kislov R. Boundary discontinuity in a constellation of interconnected practices. Public Admin. 2014;92(2):307–23.

Barnes M. Users as citizens: collective action and the local governance of welfare. Soc Policy Admin. 1999;33(1):73–90.

NHS England. Everyone Counts: Planning for Patients 2013/14. London: HMSO; 2013.

Cunningham AL, Anderson T, Bennett CC, Crabb BS, Goodier G, Hilton D, et al. Why Australia needs a Medical Research Future Fund. Med J Aust. 2015;202(3):123–4.

Department of Health. About the MRFF: Australian Government; 2019. https://www.health.gov.au/initiatives-and-programs/medical-research-future-fund/about-the-mrff . Accessed 23 Sept 2020.

Department of Health. Australian Medical Research and Innovation Strategy 2016–2021. Canberra: Australian Government Department of Health; 2018. https://beta.health.gov.au/resources/publications/australian-medical-research-and-innovation-strategy-2016-2021 . Accessed 23 Sept 2020.

Martin GP, Ward V, Hendy J, Rowley E, Nancarrow S, Heaton J, et al. The challenges of evaluating large-scale, multi-partner programmes: the case of NIHR CLAHRCs. Evid Policy. 2011;7(4):489–509.

Petticrew M, Knai C, Thomas J, Rehfuess EA, Noyes J, Gerhardus A, Grimshaw J, Rutter H, McGill E. Implications of a complexity perspective for systematic reviews and guideline development in health decision making. BMJ Global Health. 2019;4(Suppl. 1):e000899.

Adherence to national regulations

Not applicable.

Declarations

This rapid review did not require ethics approval.

This work was supported by a small internal grant from the Monash Warwick Alliance (Monash University and Warwick University).

Author information

Helen Skouteris and Helena Teede contributed equally to this study as lead authors.

Authors and Affiliations

Monash Centre for Health Research & Implementation, School of Public Health & Preventive Medicine, Monash University, Level 1, 43-51 Kanooka Grove, Clayton, Victoria, 3168, Australia

Tracy Robinson, Cate Bailey, Heather Morris, Angela Melder, Halyo Bismantara, Helen Skouteris & Helena Teede

School of Nursing, Midwifery & Indigenous Health, Charles Sturt University, Bathurst, NSW, 2795, Australia

Tracy Robinson & Dmitrios Spyridonidis

Monash Partners Academic Health Science Centre, Clayton, Victoria, Australia

Tracy Robinson, Angela Melder, Halyo Bismantara, Helen Skouteris & Helena Teede

School of Management, College of Business, RMIT University, Melbourne, Australia

Monash Health, Clayton, Victoria, Australia

Angela Melder & Helena Teede

Warwick Business School, Gibbet Hill Road, Coventry, CV4 7AL, United Kingdom

Charlotte Croft & Dmitrios Spyridonidis

Contributions

TR was the lead author and a study investigator, drafted the manuscript and prepared the responses to reviewers. CB conducted the review of the websites and the scientific literature. HM assisted with the rapid review and references. PB was a study investigator and contributed to and revised the manuscript. AM, CC and DS were study investigators and revised the manuscript. HB assisted with the website review and revised the manuscript. HS was a lead investigator on the study and revised the manuscript. HT was a lead investigator and made significant contributions to the study design and manuscript. The authors read and approved the final manuscript.

Corresponding authors

Correspondence to Tracy Robinson or Helena Teede.

Ethics declarations

Consent for publication and competing interests

All authors have reviewed the final draft, consented to publication and declare no conflicts of interest.

Additional information

Publisher’s note.

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ . The Creative Commons Public Domain Dedication waiver ( http://creativecommons.org/publicdomain/zero/1.0/ ) applies to the data made available in this article, unless otherwise stated in a credit line to the data.

About this article

Cite this article.

Robinson, T., Bailey, C., Morris, H. et al. Bridging the research–practice gap in healthcare: a rapid review of research translation centres in England and Australia. Health Res Policy Sys 18 , 117 (2020). https://doi.org/10.1186/s12961-020-00621-w

Received: 11 February 2020

Accepted: 27 August 2020

Published: 09 October 2020

DOI: https://doi.org/10.1186/s12961-020-00621-w

ORIGINAL RESEARCH article

Bridging the gap between science and practice: research collaboration and the perception of research findings.

Hadjar Mohajerzad*

  • 1 German Institute for Adult Education (LG), Bonn, Germany
  • 2 Department of Educational Sciences, University of Potsdam, Potsdam, Germany

Research collaboration promises a useful approach to bridging the gap between research and practice and thus to promoting evidence-informed education. This study examines whether information on research collaboration can influence the reception of research knowledge. We assume that the composition of experts from the field and scientists in a research team sends signals that influence trust in a finding as well as its perceived relevance and applicability. In a survey experiment with practitioners from the field of adult education, the influence of different research team compositions around an identical finding is tested. The results show overall high trust, relevance and applicability ratings with regard to the finding, regardless of the composition of the research team. We discuss the potential importance of additional information about research collaborations for effective knowledge translation and point out the need for more empirical research.

Introduction

The question of whether and how scientific evidence can make educational systems, and professional action within them, more efficient has been a focus of educational science discourse and research programs for decades (Hargreaves, 1999; Slavin, 2004; Nelson and Campbell, 2017; Pellegrini and Vivanet, 2021). In the United States, for example, the No Child Left Behind Act (2002) required programs and teaching methods to be based on scientifically derived research results. The United States' adult education legislation, the Workforce Innovation and Opportunity Act (2014), likewise includes a commitment to evidence-based education. Practitioners in adult education should thus be supported with scientific knowledge in order to professionalize their skills. Professionalism requires individuals to act on the best available (research) knowledge and to reflect on their actions to improve their professional practice (Thomm et al., 2021). In Great Britain, Hargreaves (1998) prominently called for more strongly evidence-based education at a Teacher Training Agency meeting. Evidence-based approaches in education policy are linked to the expectation that findings from empirical educational research can serve as a knowledge basis for rational decisions and thereby improve the performance of education systems (e.g., Peurach and Glazer, 2012; Tseng et al., 2017). Programs of evidence-based educational reform repeatedly challenge empirical educational research (Biesta, 2007; Schrader et al., 2020) to communicate research findings to policymakers and practitioners and thereby contribute to improving actions and decisions (Penuel et al., 2020). However, there is broad consensus that the communication of research findings and their application across the broad field of education have not been satisfactorily achieved to date (Broekkamp and Hout-Wolters, 2007; Kinyaduka, 2017; Tseng et al., 2017). This much-cited and multifaceted gap between research and practice has led to a controversial debate about evidence-based approaches in education. The communication of research findings, for example, is criticized for resembling a one-way approach that neglects the need to address practice issues (Tseng et al., 2017). Hargreaves (1999, p. 246) also points to the contextuality of policy and practice decision-making, in which scientific evidence is one factor within a complex structure. He therefore suggests using the term evidence-informed rather than evidence-based policy or practice, to account for the quality of evidence and other (constraining) contextual factors in these fields of action.

The debate has been going on since the origins of educational science in the eighteenth century, and the relationship between educational science and educational practice (and the gap in between) has become a constitutive element, if not a defining academic feature, of the field (Biesta, 2007). Roughly summarized, the current discussion of the causes of this gap and of possible solutions develops along two strands with some cross-connections (Broekkamp and Hout-Wolters, 2007). The first strand argues that the results of educational research do not meet the needs of policy and practice with regard to relevance, quality of content and applicability. The second strand focuses on the requirements and conditions for the reception and use of scientific research in practice.

With regard to the first strand, one needs to consider the quality and quantity of evidence in a given field of research. In the field of adult education, which this paper considers in particular, empirical research activities have only intensified in the last 50 years (Born, 2011; Rubenson and Elfert, 2015), and there is an ongoing debate about whether methodological approaches are fit to meet the demand for evidence in the competitive field of empirical educational research (Boeren, 2018; Daley et al., 2018). A large body of non-empirical research, and a state of research that still needs expansion, can therefore be valid factors contributing to the gap between adult education research and practice. The gap between science and practice has been described with various formulas: a farewell to the ivory tower of science (Hayes and Wilson, 2003), a critique of basic research with little practical relevance (Siebert, 1979; Hargreaves, 1998) and a distinction between two cultures (Fox, 2000; Ginsburg and Gorostiaga, 2001).

Regarding the second strand, research knowledge does not seamlessly find its way into practice or policy (Tseng et al., 2017). Target groups must be able to make sense of information by reading, interpreting and applying research knowledge to their situation in order to make necessary decisions (Brown et al., 2017). There is a consensus in empirical educational research that evidence-informed education requires the integration of knowledge gained from experience and professionalization with systematically acquired research knowledge, whereby the different types of knowledge do not replace one another but exert a reciprocal influence (Ratcliffe et al., 2005) that may depend on the specific professional culture and practice (Booher et al., 2020). Concepts and designs for transferring research knowledge into practice account for reciprocity, specific contexts of use and professional development, as well as dedicated resources (Cordingley, 2009; Wollscheid et al., 2019). The literature offers a range of solutions for building bridges between science or theory and practice in adult education (Siebert, 2011). These include activities and the provision of resources for “linking theory and practice” in adult education, extending the scope to the audience of educational practice as well (Merriam and Bierema, 2014). Other solutions for reducing the science–practice gap include skills development in universities (Schön, 1995; Hargreaves, 1998; Fox, 2000; Jütte and Walber, 2015) and scientific education (Kennedy, 1997; Jütte and Walber, 2015). Furthermore, forms of communication and institutionalization are addressed, e.g., knowledge translation by the Texas Adult Literacy Clearinghouse (St. Clair, 2004), the use of meaning-making language (Roessger, 2017), trialogues between science, practice and politics (Robak and Käpplinger, 2015) or “Jour fixe” discussion and lecture events (Dausien et al., 2016). Other approaches to bridging the science–practice gap combine both strands by incorporating relations between science and practice into the research process itself: (research) workshops, e.g., problem-definition, consulting and interpretation workshops, mentoring exchange relationships and practitioner-based research (Dirkx, 2006; Jütte and Walber, 2015), or research collaboration (Penuel et al., 2020; for the adult education sector, cf. Siebert, 1979).

Our study focuses on the role of such collaborations in bridging the gap between science and practice. Research collaborations or research practice partnerships are discussed as a promising approach to enhancing the use of evidence in practical decision-making in education, and there is a demand for studies on their conditions of success and outcomes (Coburn and Penuel, 2016; Wentworth et al., 2017). We approach research collaboration as a potential solution for bridging the science–practice gap from a large-scale dissemination perspective. We are interested in whether research collaboration between adult education practice and science has an effect on practitioners' perception of research findings. In this sense, we are not concerned with the use and application of knowledge within specific research collaborations but investigate at the more general level of the dissemination of research output and its contribution to knowledge transformation (Cordingley, 2009). More specifically, we are interested in whether information on one quality aspect of research collaboration (the composition of scientists and practitioners in the research team) affects the reception of research findings. The central questions are: How does collaboration between science and practice within research processes in the field of adult education influence practitioners' (1) trust in research findings, (2) attributed relevance of research findings and (3) perceived applicability of these findings? The analysis is guided by systems theory and signal theory. The derived hypotheses are tested with data from a survey experiment that varies researcher–practitioner constellations, applying Bayes factor methods. The paper is structured as follows: we first derive the hypotheses from theoretical considerations and research results related to the concept of practice research collaboration (section “Bridging the Gap Between Science and Practice Through Collaboration—Theoretical Perspectives”); we then present the methodology, design, data basis and method of analysis (section “A Survey Experiment on the Perception of Research Knowledge in a Collaborative Setting”); the presentation of the results (section “Results”) is followed by a discussion (section “Discussion”).

Bridging the Gap Between Science and Practice Through Collaboration—Theoretical Perspectives

In the scientific literature, the relationship between (adult education) science and practice is often conceptualized theoretically, using hermeneutic procedures to explain the tension between the two; there is little data on what the relationship actually is (St. Clair, 2004). The tension is a structurally determined distance between the two systems (e.g., Feuer, 2006), and there are numerous reasons for this distance and just as many challenges arising from it (e.g., Fox, 2000; Broekkamp and Hout-Wolters, 2007). From the perspective of practitioners, research questions are of little practical relevance. They see themselves cast as “experimental objects”, barely involved in research or unable to apply research results (e.g., Siebert, 1979; Fox, 2000). Practice often perceives science as alien theory generated in an ivory tower, one that seeks answers to questions of no relevance to practice and leaves adult education staff and institutions alone with their daily questions (Faulstich, 2015). Practitioners complain that research knowledge does not meet their needs and does not offer practice-relevant knowledge on specific topics (Dean, 1999). The reception of research knowledge requires clear presentation and linguistic comprehensibility on the part of the scientific community (St. Clair, 2004; Christ et al., 2019) and an awareness of the practical relevance of scientific questions (St. Clair, 2004).

In general, there is very little research on the extent and quality of practitioners' use of research knowledge in their practice, and the sparse findings are rather sobering. K-12 teachers, for example, show low engagement with scientific evidence to inform their teaching practices (Booher et al., 2020). Surveys of adult education providers show that they perceive intensive exchange with science as useful; at the same time, they feel that scientific research is not sufficiently interested in practice-relevant issues (Christ et al., 2019). However, if practitioners consider research knowledge relevant, they are more likely to apply it (Weiss and Weiss, 1981, as quoted in Huberman, 1994) and even to change their practice accordingly (St. Clair, 2004). Thus, despite major reservations about its relevance and applicability, scientific knowledge is nevertheless recognized by practice as a suitable basis for problem-solving (St. Clair, 2004; Christ et al., 2019). Practitioners even demand resources from research for dealing with everyday problems (Fox, 2000).

The aim of informing and improving practice has an impact on the type of research and the methodology applied (St. Clair, 2004). In order to reduce the structural distance between science and practice, empirical research calls for collaboration in the sense of open and collaborative interaction throughout the research process (see the design-based research approach, e.g., Anderson and Shattuck, 2012, or use-inspired basic research based on Stokes, 1997; e.g., Feuer, 2006; Goeze and Schrader, 2011). In adult education research, too, one of the central issues in the relationship of research to practice is the low level of exchange with practice on research topics (e.g., Siebert, 1979; Faulstich, 2015). Huberman (1994), for example, argues that researchers and practitioners need to work together on knowledge production. Various forms of research collaboration can therefore contribute to reducing the gap. We understand research collaboration as collaboration between scientists and practitioners. Whether research knowledge is actually used, however, is ultimately decided on the practical side. Practitioners actively determine “what is useful and how it is useful” (St. Clair, 2004, p. 238), and ultimately whether they will engage with research-based information at all. After all, the main barriers are the perceived applicability and relevance of research knowledge, as well as a lack of trust in science (van Schaik et al., 2018).

Trust in Science by and in Practice

In the public sphere and in science itself, collaboration between research and practice is sometimes problematized with regard to the openness and independence of research (Besley et al., 2017). If, however, practice is supposed to rely on research knowledge in the sense of evidence-informed education, then practitioners must be able to trust the communication of scientists (Kennedy, 1997). Indeed, trust is an essential element supporting evidence-based decision-making within a researcher–practitioner relationship (Wentworth et al., 2017). Trust in science and scientists has both rational and emotional components. General assessments, for example, show different levels of trust depending on education and political or religious attitudes (Nadelson et al., 2014). Trust attitudes can also vary with context and situation, depending on the interests and logics of action in the practical domain (Resnik, 2011), on the practice context and culture (e.g., subjective beliefs and values) and on the science context (e.g., controversial research findings or researchers' interests); this leads to different expectations and different levels of trust in science. However complex and multifaceted, trust is a necessary condition for the transfer of scientific knowledge (Mohajerzad and Specht, 2021). Bormann (2012) argues that the acquisition of scientific knowledge through educational reporting increases complexity, which can in turn be reduced by trust.

Trust reduces complexity by absorbing uncertainty. In other words, when it is uncertain how an event will unfold (contingency), trust refers to another actor's future action, thereby opening up possibilities for action that would be unlikely without it (Luhmann, 1988, 2014). In this sense, trust does not result from information about the trustworthiness of an actor but replaces this missing information and thus enables action. Following Luhmann (2014), we assume that trust is a function that arises from contingency and allows us to act despite uncertainty. Scientists select research questions, theoretical approaches, hypotheses, research designs and methods from a wide variety of possibilities, without any guarantee that the research activity will lead to an applicable, robust result. The research process is hardly comprehensible to outsiders and entails a fundamental uncertainty; therefore, the application of research results requires trust. However, as uncertainty applies to all social interactions, information about practitioners' participation in the research team will not reduce this uncertainty. We therefore assume that research practice collaboration has no influence on trust in research results, leading to our first hypothesis:

Hypothesis 1: Trust in research knowledge does not depend on research practice collaboration during the research process.

Signals for Practical Relevance and Applicability of Research Knowledge

Signal theory offers yet another way to conceptualize uncertainty in social interactions. Like Luhmann's approach, signal theory assumes missing or insufficient information and an information asymmetry between actors. In such a situation, signals can create certainty for decisions and actions. The starting point is a situation typical of game theory: actor 1 and actor 2 can both gain a benefit if actor 2 does something for or with actor 1. The prerequisite for actor 2's benefit, however, is that actor 1 possesses a good (k). Actor 1 benefits whenever actor 2 acts, regardless of whether she possesses (k) or not. For actor 2, it is therefore important to know whether actor 1 actually has (k), but he cannot be sure of this. A signal of whether actor 1 actually has the good (k) can solve this fundamental dilemma. For the signal to be a reliable indication of (k), the cost of falsely signaling possession of (k) must be higher than the benefit actor 1 receives when actor 2 acts on the false information (Gambetta, 2009). The next two sections show how such signals can illustrate the effect of collaboration between science and practice on the willingness of practice to apply research results.
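In the simplest cost–benefit formalization of this condition (our notation, not the authors'), let $b$ be the benefit actor 1 gains when actor 2 acts, and let $c$ be the cost actor 1 incurs by emitting the signal without actually possessing $(k)$. The signal is reliable for actor 2 only if deceptive signaling does not pay:

\[
b - c < 0 \quad \Longleftrightarrow \quad c > b .
\]

Under this condition, only actors who actually possess $(k)$, and who therefore bear little or no signaling cost, find it worthwhile to signal, so observing the signal allows actor 2 to infer that $(k)$ is present.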

Practical Relevance of Research Knowledge

If science commits to the idea of evidence-informed practice, it is important to researchers that their results are applied in practice. However, representatives of practice decide whether scientific knowledge is suitable for practice or not. Practitioners can only guess at the weight that practical relevance and applicability carried in the research process; meanwhile, they only benefit if the findings help them solve practical problems or yield other practically relevant advantages. In this situation, the signal emanating from collaboration between practice and science can break the information asymmetry between science and practice. In contrast to scientists who conduct application-relevant research, scientists who conduct pure basic research without practical relevance will have difficulty finding and keeping collaborative practice partners. Collaborative research can therefore be a valid signal of practice-relevant research, setting it apart from detached research out of the ivory tower (e.g., Faulstich, 2015).

While the participation of practitioners assures practical relevance, the participation of scientists ensures that scientific standards are met (Feuer, 2006). This results in two basic assumptions about how the signal of collaboration can affect the assessment of practical relevance. A continuous effect should occur if the validity of the signal for practical relevance increases with the number of practitioners in the research project. In contrast, a discrete effect would be expected if collaboration between scientists and practitioners signals practical relevance in any case, regardless of how many practitioners are involved.

Hypothesis 2

2a: The higher the proportion of practitioners in the research process, the higher the practical relevance of research knowledge.

2b: In the case of research collaboration between scientists and practitioners, the practical relevance of research knowledge is higher than without research collaboration.
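One way to make this contrast testable (a sketch in our own notation, not a specification taken from the study) is to model the attributed practical relevance $R$ as a function of the practitioner share $p \in [0, 1]$ in the research team:

\[
\text{H2a (continuous effect):}\quad R = \beta_0 + \beta_1\, p, \qquad \beta_1 > 0,
\]
\[
\text{H2b (discrete effect):}\quad R = \beta_0 + \beta_1\, \mathbf{1}[\text{mixed team}], \qquad \beta_1 > 0.
\]

Under H2a, every additional practitioner strengthens the signal; under H2b, the mere presence of collaboration produces a level shift and the exact composition adds nothing further.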

Applicability of Research Knowledge

Signal theory also applies to the applicability of research results. While practical relevance is an important precondition for transfer to practice, it does not necessarily mean that the knowledge generated will actually be implemented. The knowledge must therefore also be applicable. Research is applicable if it is tailored to practice, i.e., if it deals with the problems and experiences of practice. Producing scientific knowledge through collaboration can be a beneficial condition for dissemination ( van Schaik et al., 2018 ). Another position holds that research is applicable if research knowledge can be generalized, so that a specific solution can be transferred to a broader or different context ( Ercikan and Roth, 2014 ). However, neither of these goals can be achieved without compromises. While the generalizability of research results through the use of scientific methods and standards is highly desirable from a scientific perspective, findings from applied research that are produced by practitioners themselves, as in action research, may be more applicable ( Kuhn and Quigley, 1997 ). Structures in research that signal an engagement of practitioners, whether through collaboration or through action research, could thus influence practitioners’ use of research knowledge ( Henson, 2001 ; Levin and Rock, 2003 ). This leads to a set of three possible theses on the assessment of applicability. In the first thesis, we assume that the most important factor for applicability is the focus on practical needs rather than the emphasis on generalizability and the advancement of theories; a higher proportion of practitioners involved in the research process should then signal higher applicability of the produced knowledge. In the second thesis, we assume that practitioners perceive research results as more applicable if both generalizability and practicability are signaled; in this case, collaboration signals applicability regardless of the proportions of scientists and practitioners involved in the research project. In the third thesis, we assume that generalized knowledge produced through rigorous scientific standards, such as valid data or experimental designs, is generated by scientists in particular. Since research is particularly applicable when research knowledge is generalizable ( Ercikan and Roth, 2014 ), scientists involved in the research process signal generalizable research knowledge and thus high applicability: the higher the proportion of scientists, the higher the assessment of applicability.

Hypothesis 3

H3a: The higher the proportion of practitioners in the research process, the higher the perception of the applicability of research knowledge.

H3b: In case of a research collaboration between scientists and practitioners, the perception of the applicability of research knowledge is higher than in the case of no research collaboration.

H3c : The higher the proportion of scientists in the research process, the higher the perception of the applicability of research knowledge.

A Survey Experiment on the Perception of Research Knowledge in a Collaborative Setting

Data and Design

In order to test the impact of information about research collaboration on the perception of and trust in research knowledge, we use data from the German wbmonitor 2019. Wbmonitor is an annual online survey of organizations providing adult education and training in Germany, conducted in cooperation between the Federal Institute for Vocational Education and Training (BIBB) and the German Institute for Adult Education – Leibniz Centre for Lifelong Learning e.V. (DIE). The survey collects annual information on the economic situation, staff and services of the organizations, as well as on a changing thematic focus each year. 3 In 2019, 18,050 organizations were invited to participate in the survey between May and June. Our analysis is based on the sample of 1,551 organizations with valid survey participation ( Christ et al., 2020 ), i.e., 1,551 respondents within these organizations. The sample covers different types of organizations: private commercial providers (22%), private non-commercial providers (15%), institutions affiliated with churches, parties, trade unions, non-profit associations or foundations (19%), adult education centers (16%), business-related institutions (10%), vocational schools (8%), educational institutions of companies (3%), universities and universities of applied sciences (3%) and other types of public institutions (2%). The questionnaire is usually answered by persons with leadership and planning roles within the organizations surveyed. 4

For our analysis we use data from a survey experiment that was conducted in addition to the regular survey program in 2019. Survey experiments combine the advantages of randomized experiments with the possibilities of large representative surveys. This design enables causal relationships to be identified while guaranteeing high internal and external validity ( Auspurg and Hinz, 2015 ). Within the experiment, which was placed at the end of the regular survey program, each respondent was presented with a vignette containing a research result produced by a specific research team (see Figure A1 for the research design). While the presented research result was identical for all respondents, the composition of practitioners and scientists in the research team was varied between six groups: four scientists; three scientists and one expert (working in an organization providing adult education); two scientists and two experts, and vice versa; three experts and one scientist; and four experts. The vignettes were allocated to the wbmonitor population by a random split into six groups before the organizations were invited to participate in the survey. The sizes of the six splits in the analyzed sample differ only slightly, ranging between 15 and 17% of the total sample (see Table 1 ).
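The allocation logic can be illustrated with a short sketch in R (the group labels follow the abbreviations in the notes; the code is an illustration, not the survey’s actual assignment routine):

# Illustrative random split: each invited organization is pre-assigned to one
# of six vignette groups before the invitation is sent.
set.seed(2019)
n_invited <- 18050
groups <- c("4S", "3S1E", "2S2E", "2E2S", "3E1S", "4E")
assignment <- sample(rep(groups, length.out = n_invited))
table(assignment)  # approximately equal group sizes before nonresponse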


Table 1. Sample split.

Three items ask the respondents (1) whether they trust the presented research result, (2) whether the result is relevant to the field of activities in their organization and (3) whether the result can be applied in their organization (see Figure A1 for the questions in the survey). The three items were surveyed using a five-point Likert scale with the points 1 = “++,” 2 = “+,” 3 = “0,” 4 = “-” and 5 = “--,” with verbalized endpoints (“++” = “agree completely,” “--” = “do not agree at all”).
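For illustration, the item coding might be handled as follows (hypothetical responses; the label-to-number mapping follows the description above):

# Sketch of the item coding: five scale points running from
# 1 = "++" (agree completely) to 5 = "--" (do not agree at all).
scale_map <- c("++" = 1, "+" = 2, "0" = 3, "-" = 4, "--" = 5)
responses <- c(trust = "+", relevance = "++", applicability = "0")
as.integer(scale_map[responses])  # -> 2 1 3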

Analytical Strategy

We model the effect of different team compositions within research-practice collaboration on the perception of research results using Bayesian analyses of variance. While frequentist statistics with the p -value can only indicate the conditional probability P(D | H0) that the data D occur given the null hypothesis, the quantity of interest is P(H | D), i.e., how probable a hypothesis is given the data obtained. This can be calculated with Bayesian statistics, because Bayes’ theorem relates the two probabilities to each other ( Hoijtink et al., 2019 ):
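In its standard form, Bayes’ theorem links the two conditional probabilities via the prior probabilities:

\[
  P(H \mid D) \;=\; \frac{P(D \mid H)\, P(H)}{P(D)} .
\]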

The so-called Bayes factor (BF) expresses quantitatively to what extent the data obtained speak in favor of a null model/hypothesis or an alternative model/hypothesis. Although frequentist statistics can provide a p -value for deciding whether the null hypothesis should be rejected, interpreting the p -value as the strength of the evidence against the null hypothesis is subject to restrictions. Analyses by Lin and Yin (2015) show that even if the null hypothesis is rejected, there is still a probability of about 20% that the null hypothesis is true. The Bayesian posterior probability of the null hypothesis is therefore appropriate for examining how strongly the data support the null hypothesis ( Lin and Yin, 2015 ). Our first hypothesis is accordingly tested using the Bayesian posterior probability of the null hypothesis. Another advantage of Bayes’ theorem is that the BF can be used to test a set of hypotheses ( Hoijtink et al., 2019 ); the BF then selects the best-fitting hypothesis from the set. Since we have two sets of hypotheses (2a and 2b, and 3a, 3b and 3c, respectively), we can use Bayesian analysis of variance to test our hypotheses against each other. We perform the data analysis with the Bayes factor package bain 5 in the statistics program R ( Gu et al., 2018 ), which follows the tradition set by O’Hagan (1995) . We present our results following the reporting recommendations of Hoijtink et al. (2019 , p. 553–554).
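To make the workflow concrete, here is a minimal sketch of such an analysis with bain, using simulated data (the variable names and group labels are illustrative and do not correspond to the actual survey data or model specification):

# Minimal sketch of the Bayesian ANOVA workflow with bain (simulated data).
library(bain)

set.seed(1)
# Simulated stand-in: six experimental splits (s4 = four scientists, ...,
# e4 = four experts) and a five-point outcome such as trust.
d <- data.frame(
  split = factor(rep(c("s4", "s3e1", "s2e2", "e2s2", "e3s1", "e4"), each = 250)),
  trust = sample(1:5, 1500, replace = TRUE)
)

# ANOVA parameterization without intercept: each coefficient is a group mean.
fit <- lm(trust ~ split - 1, data = d)

# The informative hypothesis of equal group means (no composition effect) is
# tested against the unconstrained hypothesis Hu; bain reports Bayes factors
# and posterior model probabilities.
res <- bain(fit, "splits4=splits3e1=splits2e2=splite2s2=splite3s1=splite4")
print(res)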

Table 2 provides an overview of all dependent variables and their main summary statistics, calculated on the total analytical sample. Across all groups, it is evident that research knowledge is trusted ( M -values: 2.16–2.25) and perceived as practically relevant ( M -values: 2.27–2.53) and applicable ( M -values: 2.48–2.57).


Table 2. Descriptive statistics of dependent variables.

To examine our three hypotheses (or sets of hypotheses), Bayesian analyses of variance were performed. The posterior distribution of the Bayesian analysis summarizes the information in the data and the prior distribution in relation to the population mean of each of the groups in the ANOVA. The Bayes factors vs. Hu 6 and the Bayesian probabilities are displayed in the tables below. The Bayesian probabilities (also called posterior probabilities) quantify the support for a hypothesis (e.g., H0) and for Hu after the data are observed ( Hoijtink et al., 2019 ). Hence, P(H0 | data) can be regarded as the Bayesian error probability if Hu is chosen as the preferred hypothesis, and P(Hu | data) is the Bayesian error probability if H0 is chosen as the preferred hypothesis. The ratio of these probabilities (the posterior odds) can be calculated from the BF and the prior odds: P(H0 | data) / P(Hu | data) = BF0u × P(H0) / P(Hu) ( Hoijtink et al., 2019 , p. 544).
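As a worked illustration of this relation (the numbers are invented, equal prior odds are assumed, and only H0 and Hu are considered):

# Invented numbers: converting a Bayes factor into posterior probabilities,
# assuming equal prior odds P(H0)/P(Hu) = 1 and only H0 and Hu in the set.
bf_0u      <- 20
prior_odds <- 1
post_odds  <- bf_0u * prior_odds     # P(H0 | data) / P(Hu | data)
p_h0 <- post_odds / (1 + post_odds)  # about 0.952: posterior support for H0
p_hu <- 1 - p_h0                     # about 0.048: error probability if H0 is preferred
c(P_H0 = p_h0, P_Hu = p_hu)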

In our first hypothesis we assumed that trust in research knowledge does not depend on research-practice collaboration during the research process. Table 3 shows the results of testing our hypotheses on trust in research knowledge. Two hypotheses corresponding to the Bayesian analysis of variance are displayed: 7 H1, under which the group means are equal across the six team compositions, and Hu, under which the means are unrestricted.


Table 3. Bayesian informative hypothesis testing (ANOVA) of Trust.

As can be seen, BF1u = 602,430.05; that is, the support for H1 is 602,430.05 times larger than for Hu. The Bayesian error probability associated with preferring H1 equals zero. H1 is the preferred hypothesis. We can therefore assume that trust in research knowledge does not depend on the composition of scientists and practitioners in the research process.

Second, four hypotheses are evaluated for the variable of practical relevance: first, that the higher the proportion of practitioners in the research process, the higher the practical relevance of the research knowledge (2a), and second, that with research collaboration between scientists and practitioners, the practical relevance of the research knowledge is higher than without research collaboration (2b). Following the Bayesian approach, these hypotheses were again tested against H0, that there is no difference between the constellations, and against Hu, under which no constraints are placed on the group means.

The Bayes factors vs. Hu and the posterior probabilities are displayed in Table 4 . As can be seen, H0 is supported more strongly than H2a, H2b and Hu. H0 has the highest posterior model probability (0.96) and is thus the best-supported hypothesis in the set. We can therefore not accept either of our hypotheses on the influence of team composition on the perception of practical relevance: the null hypothesis is supported, i.e., the perceived practical relevance of research knowledge does not depend on the composition of scientists and practitioners in the research process.


Table 4. Bayesian informative hypothesis testing (ANOVA) of practical relevance.

To examine the third set of hypotheses, concerning the applicability of research knowledge, three hypotheses were contrasted: the higher the proportion of practitioners in the research process, the higher the perceived applicability of research knowledge (H3a); with research collaboration between scientists and practitioners, the perceived applicability of research knowledge is higher than without research collaboration (H3b); and the higher the proportion of scientists in the research process, the higher the perceived applicability of research knowledge (H3c). Together with H0 and Hu, five hypotheses are thus evaluated for the variable of applicability in the Bayesian analysis of variance.

The results in Table 5 show the Bayes factors resulting from the ANOVA. As can be seen, BF0u = 665,957.83; that is, the support for H0 is 665,957.83 times larger than for Hu. The support for H3b is also 26,088.13 times larger than that for Hu. The posterior probabilities are obtained by including Hu in the set of hypotheses examined. They show that H0, with a posterior probability of 0.96, is the hypothesis with the greatest support, and that the preference for H0 is associated with an error probability of 0.04. Again, as for hypothesis two, the null hypothesis is supported: the perceived applicability of research knowledge does not depend on the composition of scientists and practitioners in the research process.


Table 5. Bayesian informative hypothesis testing (ANOVA) of applicability.

Summary of Results

Overall, across the varying research team compositions presented, research knowledge is associated with a rather high level of trust, relevance and applicability. Our analyses confirm the assumption that trust in research knowledge does not depend on research collaboration (Hypothesis 1). Concerning the question of whether signals about the presence or absence of research collaboration per se point to practice-relevant and applicable research knowledge, the findings show that practitioners perceive signals of neither practical relevance nor applicability, whether from collaborative research settings or from homogeneous research teams (Hypotheses 2 and 3).

Against the background of the science-practice gap, our study examines whether information about research collaborations between science and practice influences the reception of research findings in the field of adult education. The dimensions examined (trust in the findings as well as assessments of their relevance and applicability) are central prerequisites for the actual use of research findings in the field. Our survey experiment varied the composition of scientists and practitioners in a research team around an identical finding. Drawing on systems theory and signal theory, our hypotheses suggested that research-practice collaboration and the ratio of participating scientists to practitioners in the research team make a difference in the reception of the dimensions studied. According to our results, they do not. On average, the descriptive results on trust in the finding, as well as the assessments of its relevance and applicability, show a positive tendency, regardless of the composition of the research team. On the one hand, the results of the Bayesian models support our assumption that trust does not depend on research collaboration. On the other hand, they do not support our assumptions about the influence of team composition within research collaborations on reception via signals. Neither the practical relevance nor the applicability of research findings is influenced by signals about the composition of researchers and practitioners.

Implications

The results indicate that practitioners have a high level of trust in scientific findings regardless of the composition of the research team. Trust reduces complexity and uncertainty and is therefore a basic prerequisite for scientific knowledge to be translated into decisions and action in the field. An unconditionally high level of trust is generally a good premise for knowledge translation. Are the ivory tower metaphor and sweeping accusations about a lack of practical relevance therefore water under the bridge of the science-practice gap? What are the implications for the discussion on evidence-based practice in education?

Trust in science has different degrees of complexity and its conditions are difficult to define ( Resnik, 2011 ). Although our findings indicate that information about research collaboration neither strengthens trust in research knowledge nor improves the perception of its relevance and applicability, we cannot conclude that research collaboration does not matter, because we measured attributions rather than actual use. Research collaboration is a multifaceted phenomenon and should be considered in further research. After all, research knowledge produced through collaborative research can gain in relevance and applicability as it is integrated with experiential knowledge and situational awareness of complexity in educational practice ( van Schaik et al., 2018 ). To this point, (mostly case) studies (e.g., Coburn and Penuel, 2016 ; Wentworth et al., 2017 ) show potential benefits of research collaboration in the specific context of the collaboration (that is, for those organizations, practitioners and students involved in, or in close proximity to, the research-practice partnership).

Our results suggest that cooperation between academics and practitioners in research teams may be beneficial for the realization of research but does not per se lead to widespread use of research knowledge by practitioners. It would be worthwhile to explore further whether and how evidence-based action in small-scale settings can spill over into large-scale knowledge-sharing settings. Against this background, the focus could shift from trust in individual research findings and their origins to trust in media and institutions that act as knowledge brokers and communicate scientific findings to professionals in various educational settings, e.g., clearinghouses, practitioner journals and blogs. Potential spill-over effects of collaborative research results could also be investigated in a more differentiated way with regard to the various transfer products of individual projects. Transfer products, such as trainings, textbooks or digital media tailored for practice, are per se more application-oriented ( Goeze and Schrader, 2011 ) and could be especially effective if they were developed in mutual collaboration. The expertise of practitioners on the conditions for and barriers to the use of research results in professional practice, as well as further studies on user behavior in different professional fields ( Henson, 2001 ; Levin and Rock, 2003 ), can provide important insights and design information.

Limitations

Although survey experiments promise internal and external validity by combining experimental designs with representative samples, the application of the method brings limitations and trade-offs for any study based on wbmonitor data. The experiment was implemented as an additional module to the regular survey, which correspondingly limited the number of questions available. In the form of a classic split-ballot experimental design, the study is limited to the analysis of a single vignette dimension, namely the composition of actors in a research collaboration. The representation of more or fewer practitioners or scientists in a research team has only limited informative value for aspects of cooperative research and its potentials. Thus, we were not in a position to test the influence of further characteristics relevant to the description of cooperative research processes (e.g., detailed descriptions of the persons involved or of the scope and quality of their involvement) on the reception of scientific knowledge. Likewise, we were unable to vary factors outside the research collaboration, such as characteristics of the research knowledge itself (e.g., research methods, scientific language or mode of presentation).

Furthermore, there is a certain proximity between the two terms “expert” and “scientist” in German. Even though we added the words “from an adult education institution” to the term “experts,” we cannot rule out the possibility that respondents also associated the term with scientists. Moreover, future research should take up the question of how recipients’ characteristics (e.g., general attitudes toward science and/or contextual information on the working environment) moderate the impact of knowledge about collaborations in research processes on reception. Finally, we cannot rule out that the effects differ in other scientific disciplines. In medical research, for example, cooperation with the commercially oriented pharmaceutical industry critically affects trust in research. In particular, trust in (educational) science as a complex construct deserves closer examination, which should consider further detailed information on the contexts in which research results are produced and the contexts in which they are to be used ( Mohajerzad and Specht, 2021 ).

On a more general note, this study counters the tendency to give too little importance to the reception of research results ( St. Clair, 2004 ). We think that an expansion of research on the reception of scientific knowledge and its effectiveness in the context of knowledge translation is required. In view of constantly growing empirical research on education, the question of how the generated knowledge can be used to inform evidence-based practice in education receives relatively little attention. This question should not only be addressed conceptually but also investigated empirically, in line with the evidence-informed approach.

Data Availability Statement

The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation.

Ethics Statement

Ethical review and approval was not required for the study on human participants in accordance with the local legislation and institutional requirements. The patients/participants provided their written informed consent to participate in this study.

Author Contributions

AM, JC, and SW contributed to the study conception and design, and performed the material preparation and data collection in cooperation with the Federal Institute for Vocational Education and Training (BIBB). AM and HM performed the data analysis. HM wrote the first draft of the manuscript. AM, JC, SW, and HM wrote sections of the manuscript. All authors commented on the earlier versions of the manuscript, read and approved the final manuscript.

Conflict of Interest

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Publisher’s Note

All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.

  • ^ Replaced in 2015 by the Every Student Succeeds Act (2015) .
  • ^ Tooley and Darby (1998), as cited in Feuer et al. (2002), and Schrader (2014) also mention such defects.
  • ^ For further information on the survey design: Koscheck and Ohly (2010) .
  • ^ Results based on wbmonitor 2018 show the following shares of tasks performed by respondents (multiple responses per respondent): planning/management = 87%, teaching/consulting = 29%, administration = 17% (source: own calculations using wbmonitor 2018).
  • ^ https://informative-hypotheses.sites.uu.nl/software/bain/
  • ^ The subscript u denotes that the means are unrestricted. The Bayes factors are calculated by integrating so-called posterior and prior distributions with respect to (parts of) Hu ( Hoijtink et al., 2019 ).
  • ^ 4S = four scientists, 3S1E = three scientists and one expert, 2S2E = two scientists and two experts, 2E2S = two experts and two scientists, 3E1S = three experts and one scientist, and 4E = four experts.

Anderson, T., and Shattuck, J. (2012). Design-based research: a decade of progress in education research? Educ. Res. 41, 16–25. doi: 10.3102/0013189X11428813


Auspurg, K., and Hinz, T. (2015). Factorial Survey Experiments. Thousand Oaks, CA: Sage.


Besley, J. C., McCright, A. M., Zahry, N. R., Elliott, K. C., Kaminski, N. E., and Martin, J. D. (2017). Perceived conflict of interest in health science partnerships. PLoS One 12:e0175643. doi: 10.1371/journal.pone.0175643


Biesta, G. (2007). Bridging the gap between educational research and educational practice: the need for critical distance. Educ. Res. Eval. 13, 295–301. doi: 10.1080/13803610701640227

Boeren, E. (2018). The methodological underdog: a review of quantitative research in the key adult education journals. Adult Educ. Q. 68, 63–79. doi: 10.1177/0741713617739347

Booher, L., Nadelson, L. S., and Nadelson, S. G. (2020). What about research and evidence? Teachers’ perceptions and uses of education research to inform STEM teaching. J. Educ. Res. 113, 213–225. doi: 10.1080/00220671.2020.1782811

Bormann, I. (2012). Vertrauen in institutionen der bildung oder: vertrauen ist gut – ist evidenz besser? Zeitschrift für Pädagogik 58, 812–823. doi: 10.25656/01:10477

Born, A. (2011). “Geschichte der erwachsenenbildungsforschung,” in Handbuch Erwachsenenbildung/Weiterbildung , ed. R. Tippelt (Wiesbaden: VS Verlag für Sozialwissenschaften), 231–241. doi: 10.1007/978-3-531-94165-3_14

Broekkamp, H., and Hout-Wolters, B. (2007). The gap between educational research and practice: a literature review, symposium, and questionnaire. Educ. Res. Eval. 13, 203–220. doi: 10.1080/13803610701626127

Brown, C., Schildkamp, K., and Hubers, M. D. (2017). Combining the best of two worlds: a conceptual proposal for evidence-informed school improvement. Educ. Res. 59, 154–172. doi: 10.1080/00131881.2017.1304327

Christ, J., Koscheck, S., Martin, A., Ohly, H., and Widany, S. (2020). Digitalisierung. Ergebnisse der wbmonitor Umfrage 2019 . Leverkusen: Budrich.

Christ, J., Koscheck, S., Martin, A., and Widany, S. (2019). Wissenstransfer – Wie kommt die Wissenschaft in die Praxis? Ergebnisse der wbmonitor Umfrage 2018 . Leverkusen: Budrich.

Coburn, C., and Penuel, W. (2016). Research-practice partnerships in education: outcomes, dynamics, and open questions. Educ. Res. 45, 48–54. doi: 10.3102/0013189X16631750

Cordingley, P. (2009). Research and evidence-informed practice: focusing on practice and practitioners. Camb. J. Educ. 38, 37–52. doi: 10.1080/03057640801889964

Daley, B. J., Martin, L. G., and Roessger, K. M. (2018). A call for methodological plurality: reconsidering research approaches in adult education. Adult Educ. Q. 68, 157–169. doi: 10.1177/0741713618760560

Dausien, B., Kellner, W., and Rothe, D. (2016). Der Jour Fixe Bildungstheorie | Bildungspraxis. Eine Kooperation Zwischen Erwachsenenbildung und Universität. Magazin erwachsenenbildung.at Das Fachmedium für Forschung, Praxis und Diskurs. Available online at: http://www.erwachsenenbildung.at/magazin/16-27/meb16-27.pdf (accessed September 2, 2021).

Dean, G. J. (1999). Reality and research in adult education: do opposites really attract? PAACE J. Lifelong Learn. 8, 21–30.

Dirkx, J. M. (2006). Studying the complicated matter of what works: evidence-based research and the problem of practice. Adult Educ. Q. 56, 273–290. doi: 10.1177/0741713606289358

Ercikan, K., and Roth, M. (2014). Limits of generalizing in education research: why criteria for research generalization should include population heterogeneity and users of knowledge claims. Teach. Coll. Rec. 116:050304.

Every Student Succeeds Act (2015). Pub. L. No. 114–95. 20 U.S.C. §6301 et seq.

Faulstich, P. (2015). Reflexive handlungsfähigkeit vermitteln. Aufgaben der wissenschaft in der erwachsenenbildung. Hessische Blätter für Volksbildung 1, 8–16. doi: 10.3278/HBV1501W008

Feuer, M. J. (2006). Response to bettie St. Pierre’s ‘scientifically based research in education: epistemology and ethics’. Adult Educ. Q. 56, 267–272. doi: 10.1177/0741713606289007

Feuer, M. J., Towne, L., and Shavelson, R. J. (2002). Scientific culture and educational research. Educ. Res. 31, 4–14.

Fox, R. D. (2000). Using theory and research to shape the practice of continuing professional development. J. Contin. Educ. Health Prof. 20, 238–239. doi: 10.1002/chp.1340200407

Gambetta, D. (2009). “Signaling,” in The Oxford Handbook of Analytical Sociology , eds P. Hedström and P. Bearman (Oxford: Oxford University Press), 168–194.

Ginsburg, M. B., and Gorostiaga, J. M. (2001). Relationships between theorists/researchers and policy makers/practitioners: rethinking the two-cultures thesis and the possibility of dialogue. Comp. Educ. Rev. 45, 173–196. doi: 10.1086/447660

Goeze, A., and Schrader, J. (2011). Wie forschung nützlich werden kann. Rep. Zeitschrift für Weiterbildungsforschung 34, 67–76. doi: 10.3278/REP1102W067

Gu, X., Mulder, J., and Hoijtink, H. (2018). Approximated adjusted fractional bayes factors: a general method for testing informative hypotheses. Br J Math. Stat. Psychol. 71, 229–261. doi: 10.1111/bmsp.12110

Hagiwara, A. (2015). Effect of visual support on the processing of multiclausal sentences. Lang. Teach. Res. 19, 455–472. doi: 10.1177/1362168814541715

Hargreaves, D. H. (1998). Creative Professionalism: The Role of Teachers in the Knowledge Society. London: Demos.

Hargreaves, D. H. (1999). Revitalising educational research: lessons from the past and proposals for the future. Camb. J. Educ. 29, 239–249. doi: 10.1080/0305764990290207

Hayes, E. R., and Wilson, A. L. (2003). From the editors: making meaning of meaning making. Adult Educ. Q. 53, 77–80. doi: 10.1177/0741713602238904

Henson, R. K. (2001). The effects of participation in teacher research on teacher efficacy. Teach. Teach. Educ. 17, 819–836. doi: 10.1016/S0742-051X(01)00033-6

Hoijtink, H., Mulder, J., van Lissa, C., and Gu, X. (2019). A tutorial on testing hypotheses using the Bayes factor. Psychol. Methods 24, 539–556. doi: 10.1037/met0000201

Huberman, M. (1994). Research utilization: the state of the art. Knowl. Policy 7, 13–34. doi: 10.1007/BF02696290

Jütte, W., and Walber, M. (2015). Wie finden wissenschaft und praxis der weiterbildung zusammen? kooperative professionalisierungsprozesse aus relationaler perspektive. Hessische Blätter für Volksbildung 1, 67–75. doi: 10.3278/HBV1501W067

Kennedy, M. M. (1997). How teachers connect research and practice. Midwest. Educ. Res. 10, 25–29.

Kinyaduka, B. D. (2017). Why are we unable bridging theory-practice gap in context of plethora of literature on its causes, effects and solutions? J. Educ. Pract. 8, 102–105.

Koscheck, S., and Ohly, H. (2010). wbmonitor 2007–2009. Bonn: BIBB-FDZ Daten- und Methodenberichte.

Kuhn, G., and Quigley, A. (1997). “Understanding and using action research in practice settings,” in Creating Practical Knowledge through Action Research , eds A. Quigley and G. Kuhn (San Francisco, CA: Jossey-Bass), 23–40. doi: 10.1002/ace.7302

Levin, B. B., and Rock, T. C. (2003). The effects of collaborative action research on preservice and experienced teacher partners in professional development schools. J. Teach. Educ. 54, 135–149. doi: 10.1177/0022487102250287

Lin, R., and Yin, G. (2015). Bayes factor and posterior probability: complementary statistical evidence to p-value. Contemp. Clin. Trials 44, 33–35. doi: 10.1016/j.cct.2015.07.001

Luhmann, N. (1988). “Familiarity, confidence, trust: problems and alternatives,” in Trust: Making and Breaking of Cooperative Relations , ed. D. Gambetta (Oxford: Blackwell), 94–107.

Luhmann, N. (2014). Vertrauen: Ein Mechanismus der Reduktion sozialer Komplexität , 5th Edn. Konstanz: UKV Verlagsgesellschaft.

Merriam, S. B., and Bierema, L. L. (2014). Adult Learning: Linking Theory to Practice. San Francisco, CA: Jossey-Bass.

Mohajerzad, H., and Specht, I. (2021). “Vertrauen in Wissenschaft als komplexes Konzept,” in Wissenstransfer – Komplexitätsreduktion – Design , eds G. Moll and J. Schütz (Bielefeld: wbv Media), 31–49. doi: 10.3278/6004796w

Nadelson, L., Jorcyk, C., Yang, D., Jarratt Smith, M., Matson, S., Cornell, K., et al. (2014). I just don’t trust them: the development and validation of an assessment instrument to measure trust in science and scientists. Sch. Sci. Math. 114, 76–86. doi: 10.1111/ssm.12051

Nelson, J., and Campbell, C. (2017). Evidence-informed practice in education: meanings and applications. Educ. Res. 59, 127–135. doi: 10.1080/00131881.2017.1314115

No Child Left Behind Act (2002). Pub. L. No. 107–110, §115 Stat. 1425.

O’Hagan, A. (1995). Fractional Bayes factors for model comparison (with discussion). J. R. Stat. Soc. Series B 57, 99–138. doi: 10.1111/j.2517-6161.1995.tb02017.x

Pellegrini, M., and Vivanet, G. (2021). Evidence-based policies in education: initiatives and challenges in Europe. ECNU Rev. Educ. 4, 25–45. doi: 10.1177/2096531120924670

Penuel, W. R., Riedy, R., Barber, M. S., Peurach, D. J., Le Bouef, W. A., and Clark, T. (2020). Principles of collaborative education research with stakeholders: toward requirements for a new research and development infrastructure. Rev. Educ. Res. 90, 627–674. doi: 10.3102/0034654320938126

Peurach, D. J., and Glazer, J. L. (2012). Reconsidering replication: new perspectives on large-scale school improvement. J. Educ. Change 13, 155–190. doi: 10.1007/s10833-011-9177-7

Ratcliffe, M., Bartholomew, H., Hames, V., Hind, A., Leach, J., Millar, R., et al. (2005). Evidence-based practice in science education: the researcher–user interface. Res. Pap. Educ. 20, 169–186. doi: 10.1080/02671520500078036

Resnik, B. (2011). Scientific research and the public trust. Sci. Eng. Ethics 17, 399–409. doi: 10.1007/s11948-010-9210-x

Robak, S., and Käpplinger, B. (2015). Zum trialog von wissenschaft, praxis und politik. eine essayistische annäherung 60 jahre nach der hildesheim-studie. Hessische Blätter für Volksbildung 1, 46–55. doi: 10.3278/HBV1501W046

Roessger, K. M. (2017). From theory to practice: a quantitative content analysis of adult education’s language on meaning making. Adult Educ. Q. 67, 209–227. doi: 10.1177/0741713617700986

Rubenson, K., and Elfert, M. (2015). Adult education research. exploring an increasingly fragmented map. Eur. J. Res. Educ. Learn. Adults 6, 125–138. doi: 10.3384/rela.2000-7426.rela9066

Schön, D. A. (1995). Knowing-in-action: the new scholarship requires a new epistemology. Change 27, 28–34. doi: 10.1080/00091383.1995.10544673

Schrader, J. (2014). Analyse und förderung effektiver lehr-lernprozesse unter dem anspruch evidenzbasierter bildungsreform. Zeitschrift für Erziehungswissenschaft 17, 193–223. doi: 10.1007/s11618-014-0540-3

Schrader, J., Hasselhorn, M., Hetfleisch, P., and Goeze, A. (2020). Stichwortbeitrag Implementationsforschung: wie wissenschaft zu verbesserungen im bildungssystem beitragen kann. Zeitschrift für Erziehungswissenschaft 23, 9–59. doi: 10.1007/s11618-020-00927-z

Siebert, H. (1979). Taschenbuch der Weiterbildungsforschung. Baltmannsweiler: Burgbücherei Schneider.

Siebert, H. (2011). Theorien für die Praxis. Bielefeld: wbv.

Slavin, R. E. (2004). Education research can and must address ‘what works’ questions. Educ. Res. 33, 27–28. doi: 10.3102/0013189X033001027

St. Clair, R. (2004). A beautiful friendship? the relationship of research to practice in adult education. Adult Educ. Q. 54, 224–241. doi: 10.1177/0741713604263053

Stokes, D. (1997). Pasteur’s Quadrant–Basic Science and Technological Innovation. Washington, DC: Brookings Institution Press.

Thomm, E., Gold, B., Betsch, T., and Bauer, J. (2021). When preservice teachers’ prior beliefs contradict evidence from educational research. Br. J. Educ. Psychol. 91:e12407. doi: 10.1111/bjep.12407

Tooley, J., and Darby, D. (1998). Educational Research: A Critique. London: OFSTED.

Tseng, V., Easton, J. Q., and Supplee, L. H. (2017). Research-practice partnerships: building two-way streets of engagement. Soc. Policy Rep. 30, 3–16. doi: 10.1002/j.2379-3988.2017.tb00089.x

van Schaik, P., Volman, M., Admiraal, W., and Schenke, W. (2018). Barriers and conditions for teachers’ utilization of academic knowledge. Int. J. Educ. Res. 90, 50–63. doi: 10.1016/j.ijer.2018.05.003

Weiss, J., and Weiss, C. (1981). Social scientists and decision-makers look at the usefulness of mental health research. Am. Psychol. 36, 837–847. doi: 10.1037//0003-066x.36.8.837

Wentworth, L., Mazzeo, C., and Connolly, F. (2017). Research practice partnerships: a strategy for promoting evidence-based decision-making in education. Educ. Res. 59, 241–255. doi: 10.1080/07391102.2017.1314108

Wollscheid, S., Stensaker, B., and Bugge, M. M. (2019). Evidence-informed policy and practice in the field of education: the dilemmas related to organizational design. Eur. Educ. 51, 270–290. doi: 10.1080/10564934.2019.1619465

Workforce Innovation and Opportunity Act (2014). Pub. L. No. 113–128. 29 U.S.C §3101 .


Figure A1. Treatment and Outcomes.

Keywords : science and practice relationship, research collaboration, research findings, survey experiment, Bayes factor method

Citation: Mohajerzad H, Martin A, Christ J and Widany S (2021) Bridging the Gap Between Science and Practice: Research Collaboration and the Perception of Research Findings. Front. Psychol. 12:790451. doi: 10.3389/fpsyg.2021.790451

Received: 06 October 2021; Accepted: 25 November 2021; Published: 16 December 2021.


Copyright © 2021 Mohajerzad, Martin, Christ and Widany. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY) . The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

*Correspondence: Hadjar Mohajerzad, [email protected]



Robinson KA, Akinyede O, Dutta T, et al. Framework for Determining Research Gaps During Systematic Review: Evaluation [Internet]. Rockville (MD): Agency for Healthcare Research and Quality (US); 2013 Feb.


Framework for Determining Research Gaps During Systematic Review: Evaluation [Internet].

Introduction.

The identification of gaps from systematic reviews is essential to the practice of “evidence-based research.” Health care research should begin and end with a systematic review. 1–3 A comprehensive and explicit consideration of the existing evidence is necessary for the identification and development of an unanswered and answerable question, for the design of a study most likely to answer that question, and for the interpretation of the results of the study. 4

In a systematic review, the consideration of existing evidence often highlights important areas where deficiencies in information limit our ability to make decisions. We define a research gap as a topic or area for which missing or inadequate information limits the ability of reviewers to reach a conclusion for a given question. A research gap may be further developed, such as through stakeholder engagement in prioritization, into research needs. Research needs are those areas where the gaps in the evidence limit decision making by patients, clinicians, and policy makers. A research gap may not be a research need if filling the gap would not be of use to stakeholders that make decisions in health care. The clear and explicit identification of research gaps is a necessary step in developing a research agenda. Evidence reports produced by Evidence-based Practice Centers (EPCs) have always included a future research section. However, in contrast to the explicit and transparent steps taken in the completion of a systematic review, there has not been a systematic process for the identification of research gaps.

In a prior methods project, our EPC set out to identify and pilot test a framework for the identification of research gaps. 5 , 6 We searched the literature, conducted an audit of EPC evidence reports, and sought information from other organizations which conduct evidence synthesis. Despite these efforts, we identified little detail or consistency in the frameworks used to determine research gaps within systematic reviews. In general, we found no widespread use or endorsement of a specific formal process or framework for identifying research gaps using systematic reviews.

We developed a framework to systematically identify research gaps from systematic reviews. This framework facilitates the classification of where the current evidence falls short and why it falls short. The framework included two elements: (1) the characterization of the gaps and (2) the identification and classification of the reason(s) for the research gap.

The PICOS structure (Population, Intervention, Comparison, Outcome and Setting) was used in this framework to describe questions or parts of questions inadequately addressed by the evidence synthesized in the systematic review. The issue of timing, sometimes included as PICOTS, was considered separately for Intervention, Comparison, and Outcome. The PICOS elements were the only sort of framework we had identified in an audit of existing methods for the identification of gaps used by EPCs and other related organizations (i.e., health technology assessment organizations). We chose to use this structure as it is one familiar to EPCs, and others, in developing questions.

It is not only important to identify research gaps but also to determine how the evidence falls short, in order to maximally inform researchers, policy makers, and funders about the types of questions that need to be addressed and the types of studies needed to address them. Thus, the second element of the framework was the classification of the reasons for the existence of a research gap. For each research gap, the reason(s) that most preclude conclusions from being made in the systematic review are chosen by the review team completing the framework. To leverage work already being completed by review teams, we mapped the reasons for research gaps to concepts from commonly used evidence grading systems. Briefly, these categories of reasons, explained in detail in the prior JHU EPC report, 5 are listed here (a toy example of a completed gap entry follows the list):

  • Insufficient or imprecise information
  • Biased information
  • Inconsistent or unknown consistency results
  • Not the right information
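As a toy illustration (all field values are invented, and the layout is illustrative rather than the actual worksheet in Appendix A), a single gap entry might combine the PICOS characterization with one reason category:

# Toy example of one research-gap entry: PICOS characterization plus the
# reason category that most precludes a conclusion (all values invented).
gap <- data.frame(
  population   = "adults with chronic low back pain",
  intervention = "supervised exercise therapy",
  comparison   = "usual care",
  outcome      = "pain at 12 months",
  setting      = "primary care",
  reason       = "insufficient or imprecise information"
)
print(gap)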

The framework facilitates a systematic approach to identifying research gaps and the reasons for those gaps. The identification of where the evidence falls short and how the evidence falls short is essential to the development of important research questions and in providing guidance in how to address these questions.

As part of the previous methods product, we developed a worksheet and instructions to facilitate the use of the framework when completing a systematic review (See Appendix A ). Preliminary evaluation of the framework and worksheet was completed by applying the framework to two completed EPC evidence reports. The framework was further refined through peer review. In this current project, we extend our work on this research gaps framework.

Our objective in this project was to complete two types of further evaluation: (1) application of the framework across a larger sample of existing systematic reviews in different topic areas, and (2) implementation of the framework by EPCs. These two objectives were used to evaluate the framework and instructions for usability and to evaluate the application of the framework by others, outside of our EPC, including as part of the process of completing an EPC report. Our overall goal was to produce a revised framework with guidance that could be used by EPCs to explicitly identify research gaps from systematic reviews.



Closing the gap between research and practice: an overview of systematic reviews of interventions to promote the implementation of research findings

  • Lisa A Bero, associate professor a,
  • Roberto Grilli, head b,
  • Jeremy M Grimshaw, programme director (j.m.grimshaw@abdn.ac.uk) c,
  • Emma Harvey, research fellow d,
  • Andrew D Oxman, director e,
  • Mary Ann Thomson, senior research fellow c
  • Institute for Health Policy Studies, University of California at San Francisco, 1388 Sutter Street, 11th floor, San Francisco, CA 94109, USA
  • Unit of Clinical Policy Analysis, Laboratory of Clinical Epidemiology, Istituto di Ricerche Farmacologiche Mario Negri, Via Eritrea 62, 20157 Milan, Italy
  • Health Services Research Unit, Department of Public Health, Aberdeen AB25 2ZD
  • Department of Health Sciences and Clinical Evaluation, University of York, York YO1 5DD
  • Health Services Research Unit, National Institute of Public Health, PO Box 4404 Torshov, N-0462 Oslo, Norway
  • c Correspondence to: Dr Grimshaw

This is the seventh in a series of eight articles analysing the gap between research and practice

Series editors: Andrew Haines and Anna Donald

Despite the considerable amount of money spent on clinical research, relatively little attention has been paid to ensuring that the findings of research are implemented in routine clinical practice. 1 There are many different types of intervention that can be used to promote behavioural change among healthcare professionals and the implementation of research findings. Disentangling the effects of an intervention from the influence of contextual factors is difficult when interpreting the results of individual trials of behavioural change. 2 Nevertheless, systematic reviews of rigorous studies provide the best evidence of the effectiveness of different strategies for promoting behavioural change. 3 4 In this paper we examine systematic reviews of different strategies for the dissemination and implementation of research findings to identify evidence of the effectiveness of different strategies and to assess the quality of the systematic reviews.

Summary points

Systematic reviews of rigorous studies provide the best evidence on the effectiveness of different strategies to promote the implementation of research findings

Passive dissemination of information is generally ineffective

It seems necessary to use specific strategies to encourage implementation of research based recommendations and to ensure changes in practice

Further research on the relative effectiveness and efficiency of different strategies is required

Identification and inclusion of systematic reviews

We searched Medline records dating from 1966 to June 1995 using a strategy developed in collaboration with the NHS Centre for Reviews and Dissemination. The search identified 1139 references. No reviews from the Cochrane Effective Practice and Organisation of Care Review Group 4 had been published during this time. In addition, we searched the Database of Abstracts of Reviews of Effectiveness (DARE) ( http://www.york.ac.uk/inst/crd ) but did not identify any other review meeting the inclusion criteria.

We searched for any review of interventions to improve professional performance that reported explicit selection criteria and in which the main outcomes considered were changes in performance or outcome. Reviews that did not report explicit selection criteria, systematic reviews focusing on the methodological quality of published studies, published bibliographies, bibliographic databases, and registers of projects on dissemination activities were excluded from our review. If systematic reviews had been updated we considered only the most recently published review. For example, the Effective Health Care bulletin on implementing clinical guidelines superseded the earlier review by Grimshaw and Russell. 5 6

Two reviewers independently assessed the quality of the reviews and extracted data on the focus, inclusion criteria, main results, and conclusions of each review. A previously validated checklist (comprising nine criteria scored as done, partially done, or not done) was used to assess quality. 7 8 Reviewers also gave each review a summary score (out of seven) based on its scientific quality. Major disagreements between reviewers were resolved by discussion and consensus.

Results and assessment of systematic reviews

We identified 18 reviews that met the inclusion criteria. They were categorised as focusing on broad strategies (such as the dissemination and implementation of guidelines 5 6 9–11 ), continuing medical education, 12 13 particular strategies (such as audit and feedback, 14 15 computerised decision support systems, 16 17 or multifaceted interventions 18 ), particular target groups (for example, nurses 19 or primary healthcare professionals 20 ), and particular problem areas or types of behaviour (for example, diagnostic testing, 15 prescribing, 21 or aspects of preventive care 15 16 22–25 ). Most primary studies were included in more than one review, and some reviewers published more than one review. No systematic reviews published before 1988 were identified. None of the reviews explicitly addressed the cost effectiveness of different strategies for effecting changes in behaviour.

The reviews lacked a common approach to categorising interventions and potentially confounding factors. The inclusion criteria and methods used in the reviews varied considerably, and interventions were frequently classed differently in the different systematic reviews.

Common methodological problems included the failure to adequately report criteria for selecting studies included in the review, the failure to avoid bias in the selection of studies, the failure to adequately report criteria used to assess validity, and the failure to apply criteria to assess the validity of the selected studies. Overall, 42% (68/162) of criteria were reported as having been done, 49% (80/162) as having been partially done, and 9% (14/162) as not having been done. The mean summary score was 4.13 (range 2 to 6, median 3.75, mode 3).

Encouragingly, reviews published more recently seemed to be of better quality. For studies published between 1988 and 1991 (n=6) only 20% (11/54) of criteria were scored as having been done (mean summary score 3.0); for reviews published after 1991 (n=12) 52% (56/108) of criteria were scored as having been done (mean summary score 4.7).

Five reviews attempted formal meta-analyses of the results of the studies identified. 12 17 19 23 25 The appropriateness of meta-analysis in three of these reviews is uncertain, 12 17 19 and the reviews should be considered exploratory at best, given the broad focus and heterogeneity of the studies included in the reviews with respect to the types of interventions, targeted behaviours, contextual factors, and other research factors. 2

A number of consistent themes were identified by the systematic reviews (box). (Further details about the systematic reviews are available on the BMJ's website.) Most of the reviews identified modest improvements in performance after interventions. However, the passive dissemination of information was generally ineffective in altering practices, no matter how important the issue or how valid the assessment methods. 5 9 11 13 21 26 The use of computerised decision support systems has led to improvements in the performance of doctors in terms of decisions on drug dosage, the provision of preventive care, and the general clinical management of patients, but not in diagnosis. 16 Educational outreach visits have resulted in improvements in prescribing decisions in North America. 5 13 Patient mediated interventions also seem to improve the provision of preventive care in North America (where baseline performance is often very low). 13 Multifaceted interventions (that is, a combination of methods that includes two or more interventions such as participation in audit and a local consensus process) seem to be more effective than single interventions. 13 18 There is insufficient evidence to assess the effectiveness of some interventions, for example the identification and recruitment of local opinion leaders (practitioners nominated by their colleagues as influential). 5

Interventions to promote behavioural change among health professionals

Consistently effective interventions

Reminders (manual or computerised)

Multifaceted interventions (a combination that includes two or more of the following: audit and feedback, reminders, local consensus processes, or marketing)

Interactive educational meetings (participation of healthcare providers in workshops that include discussion or practice)

Interventions of variable effectiveness

Audit and feedback (or any summary of clinical performance)

The use of local opinion leaders (practitioners identified by their colleagues as influential)

Local consensus processes (inclusion of participating practitioners in discussions to ensure that they agree that the chosen clinical problem is important and the approach to managing the problem is appropriate)

Patient mediated interventions (any intervention aimed at changing the performance of healthcare providers for which specific information was sought from or given to patients)

Interventions that have little or no effect

Educational materials (distribution of recommendations for clinical care, including clinical practice guidelines, audiovisual materials, and electronic publications)

Didactic educational meetings (such as lectures)

Few reviews attempted explicitly to link their findings to theories of behavioural change. The difficulties associated with linking findings and theories are illustrated in the review by Davis et al, who found that the results of their overview supported several different theories of behavioural change. 13

Availability and quality of primary studies

This overview also provides an opportunity to estimate the availability and quality of primary research in the areas of dissemination and implementation. Identification of published studies on behavioural change is difficult because they are poorly indexed and scattered across generalist and specialist journals. Nevertheless, two reviews provided an indication of the extent of research in this area. Oxman et al identified 102 randomised or quasirandomised controlled trials involving 160 comparisons of interventions to improve professional practice. 11 The Effective Health Care bulletin on implementing clinical guidelines identified 91 rigorous studies (including 63 randomised or quasirandomised controlled trials and 28 controlled before and after studies or time series analyses). 5 Even though the studies included in these two reviews fulfilled the minimum inclusion criteria, some are methodologically flawed and have potentially major threats to their validity. Many studies randomised health professionals or groups of professionals (cluster randomisation) but analysed the results by patient, thus resulting in a possible overestimation of the significance of the observed effects (unit of analysis error). 27 Given the small to moderate size of the observed effects this could lead to false conclusions about the effectiveness of interventions in both meta-analyses and qualitative analyses. Few studies attempted any form of economic analysis.
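The unit of analysis error can be made concrete with a small simulation. The following sketch uses assumed, hypothetical parameters (20 practices per arm, 25 patients per practice, modest within-practice correlation) and no true intervention effect; analysing outcomes by patient, as many of the flawed studies did, rejects the null hypothesis far more often than the nominal 5%.

```python
import random
import statistics
from math import erf, sqrt

# Minimal sketch of the unit of analysis error under the null hypothesis:
# practices are randomised and outcomes are correlated within practices, but
# the test below wrongly treats every patient as independent. All parameters
# are hypothetical.

def simulate_trial(n_clusters=20, cluster_size=25, practice_sd=0.3, patient_sd=1.0):
    """One null trial (no true effect): returns patient outcomes for two arms."""
    def arm():
        outcomes = []
        for _ in range(n_clusters):
            practice_effect = random.gauss(0, practice_sd)   # shared within a practice
            outcomes += [practice_effect + random.gauss(0, patient_sd)
                         for _ in range(cluster_size)]
        return outcomes
    return arm(), arm()

def naive_p_value(a, b):
    """Patient-level z-test that ignores clustering (the error in question)."""
    se = sqrt(statistics.variance(a) / len(a) + statistics.variance(b) / len(b))
    z = (statistics.mean(a) - statistics.mean(b)) / se
    return 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))       # two-sided normal p

random.seed(1)
rejections = sum(naive_p_value(*simulate_trial()) < 0.05 for _ in range(500))
print(f"False positive rate: {rejections / 500:.0%}")        # far above the nominal 5%
```

Analysing at the level of the practice (for example, comparing practice means) restores the correct error rate, at the cost of statistical power, which is exactly the kind of methodological trade-off at issue here.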

Given the importance of implementing the results of sound research and the problems of generalisability across different healthcare settings, there are relatively few studies of individual interventions to effect behavioural change. The review by Oxman et al identified studies involving 12 comparisons of educational materials, 17 of conferences, four of outreach visits, six of local opinion leaders, 10 of patient mediated interventions, 33 of audit and feedback, 53 of reminders, two of marketing, eight of local consensus processes, and 15 of multifaceted interventions. 11 Few studies compared the relative effectiveness of different strategies; only 22 out of 91 studies reviewed in the Effective Health Care bulletin allowed comparisons of different strategies. 5 A further limitation of the evidence about different types of interventions is that the research is often conducted by a small number of researchers in specific settings. The generalisability of these findings to other settings is uncertain, especially because of the marked differences in undergraduate and postgraduate education, the organisation of healthcare systems, potential systemic incentives and barriers to change, and societal values and cultures. Most of the studies reviewed were conducted in North America; only 14 of the 91 studies reviewed in the Effective Health Care bulletin had been conducted in Europe. 5

The way forward

This overview suggests that there is an increasing amount of primary and secondary research in the areas of dissemination and implementation. It is striking how little is known about the effectiveness and cost effectiveness of interventions that aim to change the practice or delivery of health care. The reviews that we examined suggest that the passive dissemination of information (for example, publication of consensus conferences in professional journals or the mailing of educational materials) is generally ineffective and, at best, results only in small changes in practice. However, these passive approaches probably represent the most common approaches adopted by researchers, professional bodies, and healthcare organisations. The use of specific strategies to implement research based recommendations seems to be necessary to ensure that practices change, and studies suggest that more intensive efforts to alter practice are generally more successful.

At a local level greater attention needs to be given to actively coordinating dissemination and implementation to ensure that research findings are implemented. The choice of intervention should be guided by the evidence on the effectiveness of dissemination and implementation strategies, the characteristics of the message, 10 the recognition of external barriers to change, 13 and the preparedness of the clinicians to change. 28 Local policymakers with responsibility for professional education or quality assurance need to be aware of the results of implementation research, develop expertise in the principles of the management of change, and accept the need for local experimentation.

Given the paucity of evidence it is vital that dissemination and implementation activities are rigorously evaluated whenever possible. Studies evaluating a single intervention provide little new information about the relative effectiveness and cost effectiveness of different interventions in different settings. Greater emphasis should be given to conducting studies that evaluate two or more interventions in a specific setting or help clarify the circumstances that are likely to modify the effectiveness of an intervention. Economic evaluations should be considered an integral component of research. Researchers should have greater awareness of the issues related to cluster randomisation, and should ensure that studies have adequate power and that they are analysed using appropriate methods. 29
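One concrete way to act on the point about power under cluster randomisation is the standard design effect, DEFF = 1 + (m - 1) * ICC, where m is the average cluster size and ICC the intracluster correlation coefficient. The sketch below, using assumed illustrative numbers, shows how sharply a patient-level sample size must be inflated once clustering is taken into account.

```python
from math import ceil

# Minimal sketch: inflating a per-arm sample size for cluster randomisation
# using the standard design effect DEFF = 1 + (m - 1) * ICC. The numbers are
# hypothetical, chosen only to illustrate the size of the correction.

def cluster_adjusted_n(n_individual, cluster_size, icc):
    """Patients per arm needed once within-cluster correlation is allowed for."""
    deff = 1 + (cluster_size - 1) * icc
    return ceil(n_individual * deff)

# 200 patients per arm would suffice if patients were independent; with 20
# patients per practice and an ICC of 0.05 the trial needs nearly twice that.
print(cluster_adjusted_n(200, 20, 0.05))   # -> 390
```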

The NHS research and development programme on evaluating methods to promote the implementation of research and development is an important initiative that will contribute to our knowledge of the dissemination of information and the implementation of research findings. 30 However, these research issues cut across national and cultural differences in the practice and financing of health care. Moreover, the scope of these issues is such that no one country's health services research programme can examine them in a comprehensive way. This suggests that there are potential benefits of international collaboration and cooperation in research, as long as appropriate attention is paid to cultural factors that might influence the implementation process such as the beliefs and perceptions of the public, patients, healthcare professionals, and policymakers.

The results of primary research should be systematically reviewed to identify promising implementation techniques and areas where more research is required. 3 Undertaking reviews in this area is difficult because of the complexity inherent in the interventions, the variability in the methods used, and the difficulty of generalising study findings across healthcare settings. The Cochrane Effective Practice and Organisation of Care Review Group is helping to meet the need for systematic reviews of current best evidence on the effects of continuing medical education, quality assurance, and other interventions that affect professional practice. A growing number of these reviews are being published and updated in the Cochrane Database of Systematic Reviews. 4 31

The articles in this series are adapted from Getting Research Findings into Practice, edited by Andrew Haines and Anna Donald and published by BMJ Books.

Acknowledgments

This paper is based on a briefing paper prepared by the authors for the Advisory Group on the NHS research and development programme on evaluating methods to promote the implementation of research and development. We thank Nick Freemantle for his contribution to this paper.

Funding: This work was partly funded by the European Community funded Eur-Assess project. The Cochrane Effective Practice and Organisation of Care Review Group is funded by the Chief Scientist Office of the Scottish Office Home and Health Department; the NHS Welsh Office of Research and Development; the Northern Ireland Department of Health and Social Services; the research and development offices of the Anglia and Oxford, North Thames, North West, South and West, South Thames, Trent, and West Midlands regions; and by the Norwegian Research Council and Ministry of Health and Social Affairs in Norway. The Health Services Research Unit is funded by the Chief Scientist Office of the Scottish Office Home and Health Department. The views expressed are those of the authors and not necessarily the funding bodies.

Conflict of interest: None.

References

• Implementing clinical guidelines: can guidelines be used to improve clinical practice? Effective Health Care 1994; No 8.
