
Reporting feedback on healthcare outcomes to improve quality in care: a scoping review

Abstract

Background

Providing feedback to healthcare providers (HCPs) on their practice patterns and achieved outcomes is a mildly to moderately effective strategy for improving healthcare quality. Best practices for providing feedback have been proposed. However, it is unknown how these strategies are implemented in practice and what their real-world effectiveness is. This scoping review addresses this gap by examining the use and reported impact of feedback reporting practices in various clinical fields.

Methods

A systematic literature search was conducted in electronic databases for publications in English between January 2010 and June 2024. We included studies that utilized and evaluated feedback reporting to change HCP behaviours and enhance outcomes, using either qualitative or quantitative designs. Two researchers reviewed and extracted data from full texts of eligible studies, including information on study objectives, types of quality indicators, sources of data, types of feedback reporting practices, and co-interventions implemented.

Results

Across the 279 included studies, most implemented best practices in reporting feedback, including peer comparisons (66%), active delivery of feedback (65%), timely feedback (56%), feedback specific to HCPs’ practice (37%), and reporting feedback in group settings (27%). The majority (68%) combined feedback with co-interventions, such as education, post-feedback consultations, reminders, action toolboxes, social influence, and incentives. Overall, 81% of studies showed improvement in quality indicators associated with feedback interventions. Interventions targeting outcome measures were reported as less successful than those targeting process measures or both. Feedback interventions appeared to be more successful when supplemented with post-feedback consultations, reminders, education, and action toolboxes.

Conclusion

This review provides a comprehensive overview of strategies used to implement feedback interventions in a wide range of practice settings. Targeting process measures or combining them with outcome measures results in more positive outcomes. Additionally, feedback interventions may be slightly more effective when combined with other interventions designed to facilitate behaviour change. These findings can provide valuable insights for others wishing to implement similar interventions.

Registration

Open Science Framework, https://doi.org/10.17605/OSF.IO/GAJVS.


Background

Healthcare systems face challenges in ensuring high-quality care while managing rising healthcare costs [1]. Variability in care delivery and outcomes across clinical settings further raises concerns about the consistency and quality of care provided [2]. In response to these challenges, Porter and Teisberg introduced the concept of value-based healthcare (VBHC), which defines value as the outcomes achieved relative to the costs of achieving them [3]. VBHC emphasizes improving value for patients by systematically monitoring and evaluating patient-centred quality indicators, such as patient-reported outcome measures (PROMs), patient-reported experience measures (PREMs), and clinician-reported measures [4,5,6,7,8]. However, a critical gap in VBHC lies in the uncertainty surrounding how to utilize collected data to enhance care processes effectively [9].

A common strategy to improve care is providing feedback to healthcare providers (HCPs) based on data about their performance, often known as audit and feedback. In such strategies, HCPs’ data on outcomes and processes of care are compared to predefined standards or to peers’ performance data [10]. The primary goal of feedback is to facilitate evidence-based practice by integrating clinical guidelines, standardizing care processes, and identifying areas for improvement [11]. Control theory suggests that by making performance data visible, feedback helps HCPs recognize gaps in care, prioritize quality improvement efforts, and enhance patient outcomes [12]. Feedback can be reported at the individual or team level, and can differ in content, recipients (e.g., individual clinicians, teams, or organizations), delivery modes (e.g., written or verbal), and contexts. The primary objective of such interventions is to initiate a learning cycle that includes data monitoring, reflection, adjustments to care, and documentation of changes [13, 14]. In this study, we refer to these practices as feedback interventions [13].

Despite widespread use, feedback interventions demonstrate modest and variable effectiveness [15], with progress plateauing in recent decades [16, 17]. Progress in the efficacy of feedback interventions may be hindered by the limited use of theory to understand the mechanisms driving behaviour change. Theories of behaviour change, such as the Clinical Performance Feedback Intervention Theory (CP-FIT), offer valuable frameworks for understanding these mechanisms [14]. CP-FIT conceptualizes feedback as part of a sequential process encompassing goal setting, interaction with data, perception, acceptance, intention formation, and behavioural adaptation. It identifies key factors (recipient characteristics, feedback attributes, and contextual variables) that influence each step. Importantly, CP-FIT underscores the role of feedback presentation and delivery as critical determinants of success.

To support effective feedback design, CP-FIT offers 42 recommendations for designing feedback interventions, 12 of which relate to the display and delivery of feedback, including providing peer comparisons, delivering feedback through knowledgeable sources, ensuring timely reporting, and using user-friendly designs [14]. While these practices are supported by extensive qualitative data and theory, a comprehensive overview of studies with diverse research designs and clinical settings is lacking, making it unclear how feedback reporting strategies are effectively applied in practice [9, 14].

Additionally, even with well-designed feedback interventions, HCPs often encounter barriers that hinder behaviour change [18, 19]. While comprehensive reviews addressing barriers to implementing feedback interventions are lacking, Geerligs et al. identified key challenges arising from organizational factors, such as misalignment with organizational culture and incompatibility of the intervention with existing workflows [20]. Staff-related barriers include resistance to change, insufficient commitment, unclear roles, and inadequate skills to effectively engage with feedback. Furthermore, structural issues such as underdeveloped data infrastructure [21, 22], competing organizational goals [23], and the absence of a systematic approach to implementing insights gained from feedback exacerbate these challenges [21].

To address these barriers, feedback interventions are often paired with complementary implementation strategies (co-interventions) to enhance engagement and promote the adoption of evidence-based practices [24]. When feedback serves as the primary intervention, co-interventions are used either to help HCPs engage with feedback to change practice (e.g., through education or consultations) or to facilitate behaviour change directly (e.g., incorporating an action toolbox within a feedback dashboard) [14]. Grol et al. categorized these implementation strategies into facilitative (e.g., training, reminders) and coercive approaches (e.g., financial incentives, leveraging social influence) [19]. The literature suggests that implementation strategies tailored to overcome contextual barriers and to support both the interaction with feedback and post-feedback actions improve the effectiveness of feedback interventions [14]. Previous reviews have investigated the effectiveness of different implementation strategies in clinical practice [25,26,27,28]; however, a formal evidence synthesis of these strategies within the context of feedback interventions remains absent, highlighting an important gap in the literature.

In summary, various methods for reporting feedback and influencing the behaviour of HCPs through co-interventions have been suggested in the literature. However, the absence of thorough formal assessments of these methods leads to a lack of clarity regarding their effectiveness. This review examines feedback interventions across diverse healthcare settings to understand how they are implemented in practice and identifies mechanisms that consistently drive behaviour change, regardless of contextual differences.

Objective and research questions

This scoping review aimed to assess current evidence regarding the utilization and effectiveness of feedback approaches and co-interventions to involve HCPs in continuous learning and quality improvement. This led to the following three research questions (RQs):

  1. What methods are applied in clinical practice for reporting feedback to HCPs?

  2. What co-interventions are applied in clinical practice to facilitate HCPs’ behaviour change when reporting feedback?

  3. What are the reported impacts of feedback (co-)interventions on care processes and patient outcomes?

Methods

A scoping review was performed using the Joanna Briggs Institute methodology [29]. Scoping reviews efficiently map and summarize diverse literature on a broad topic, offering an overview irrespective of evidence quality, in contrast to other types of systematic review [30, 31]. Our study protocol was registered on the Open Science Framework (https://doi.org/10.17605/OSF.IO/GAJVS) [32]. Results are presented following the PRISMA flow diagram [33] and the PRISMA-ScR (Preferred Reporting Items for Systematic reviews and Meta-Analyses extension for Scoping Reviews) checklist [34] (Additional file 1). The authors acknowledge the assistance of OpenAI's ChatGPT in refining sentence structure and improving clarity during manuscript preparation. ChatGPT was used solely for language support, with all research and analysis conducted independently by the authors.

Inclusion criteria

The inclusion criteria sought studies that met the following conditions:

  1. Empirical studies in which feedback interventions are evaluated, using either qualitative or quantitative study designs (including pilot studies).

  2. Studies in the English language.

  3. Studies published between January 2010 and June 2024, to ensure the incorporation of the most up-to-date research on feedback practices in healthcare. This timeframe was selected to capture key developments, such as value-based healthcare (VBHC) [35], EHR-integrated feedback, and real-time performance monitoring [36]. Additionally, research prior to 2010 was found to make limited use of theory in intervention design [37], further justifying the focus on more recent studies that integrate these advancements.

  4. Studies in which all of the following are true:

     a. End-users are HCPs involved in patient care, including physicians, nurses, surgeons, and other clinical providers. This broad definition was chosen because feedback interventions aimed at improving care quality apply across various roles within healthcare settings. Feedback interventions directed at trainee HCPs were excluded, as interventions targeting trainees typically prioritize proficiency in clinical duties rather than learning from practice data.

     b. Feedback is provided to HCPs with the goal of improving the quality of care, defined as improvements in process measures (e.g., adherence to clinical guidelines, timely care delivery) and outcome measures (e.g., patient health outcomes, satisfaction) [38].

     c. Methods of feedback reporting (quality indicators used, mode of feedback reporting) are clearly described.

  5. For studies involving multifaceted interventions to improve quality of care, feedback must serve as a core component:

     a. Either as a standalone approach or in combination with co-interventions.

     b. These feedback interventions should assess quality of care using process measures and/or outcome measures.

Search strategy

Comprehensive searches in MEDLINE (Ovid), CINAHL (EBSCO), Embase (Ovid), PsycINFO (Ovid), Cochrane Library, Scopus, and Google Scholar were conducted. Expert librarians assisted in utilizing controlled vocabulary (e.g., MeSH, Emtree) to capture relevant terms related to HCPs' performance and feedback reporting. Multiple pilot searches were performed, combining Medical Subject Headings with text words and synonyms for terms such as 'feedback,' 'audit and feedback,' 'performance feedback,' 'dashboard,' 'data communication,' 'quality improvement,' and 'healthcare professionals.'

The search strategy incorporated wildcard characters and Boolean logic for variations in spelling and plural forms. Index terms were adapted for each database. The search was initially conducted in December 2022 and updated in June 2024. Articles from the initial search and exemplar articles identified by our team were used to develop and refine search terms and indexing. The full search strategy for all databases is provided in Additional file 2.
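For illustration only, a fragment of such a strategy in Ovid-style syntax might combine subject headings with truncated text words as shown below. This is a hypothetical example for readers unfamiliar with the notation, not the authors' actual search, which is provided in Additional file 2.

    1. exp Feedback/ or "audit and feedback".ti,ab. or "performance feedback".ti,ab.
    2. dashboard*.ti,ab. or "data communication".ti,ab.
    3. exp Quality Improvement/ or "quality improvement".ti,ab.
    4. (1 or 2) and 3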

Selection of studies

After compiling and organizing the identified studies, they were uploaded into Rayyan, a web-based tool designed for systematic reviews [39]. Duplicates were removed, and 6,282 studies were eligible for title and abstract screening. One reviewer (EV) manually screened titles and abstracts, while the second reviewer (MA) utilized ASReview, an open-source, machine-learning-aided screening tool [40]. This combined approach balanced nuanced interpretation with rapid processing, consistency, and error reduction [41, 42]. ASReview offers various classifier models to determine the relevance of included articles; for our study, we used the default settings [41, 43]. Before screening, ASReview was trained on 5 relevant and 5 irrelevant articles that had been manually screened and uploaded into the software, allowing it to identify relevant articles more accurately. Subsequently, MA assessed articles against the eligibility criteria until encountering 100 consecutive irrelevant articles, marking the completion of this phase. After title and abstract screening, the full texts of potentially eligible articles were retrieved and independently screened by MA and EV against the inclusion criteria. Disagreements during title and abstract screening, as well as full-text screening, were resolved through consensus or consultation with the senior author (WD). All excluded articles were documented, along with the reasons for their exclusion.
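The screening workflow described above can be sketched in a few lines: seed a relevance classifier with 5 relevant and 5 irrelevant records, repeatedly show the reviewer the record the model considers most likely relevant, and stop after 100 consecutive irrelevant suggestions. The sketch below is a minimal, hypothetical illustration of such an active-learning loop using scikit-learn; it is not ASReview's actual implementation, and the function and parameter names are assumptions.

    # Minimal sketch of active-learning screening (not ASReview's internals).
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression

    def screen(records, labeler, seed, n_stop=100):
        """records: title+abstract strings; labeler(i) -> 1/0 is the human
        reviewer; seed: dict {index: label} with examples of both classes."""
        X = TfidfVectorizer(max_features=20000).fit_transform(records)
        labeled = dict(seed)  # e.g., 5 relevant + 5 irrelevant seed records
        consecutive_irrelevant = 0
        while consecutive_irrelevant < n_stop and len(labeled) < len(records):
            idx = list(labeled)
            clf = LogisticRegression(max_iter=1000)
            clf.fit(X[idx], [labeled[i] for i in idx])
            unlabeled = [i for i in range(len(records)) if i not in labeled]
            probs = clf.predict_proba(X[unlabeled])[:, 1]  # P(relevant)
            best = unlabeled[int(probs.argmax())]          # most promising next
            labeled[best] = labeler(best)                  # reviewer screens it
            consecutive_irrelevant = 0 if labeled[best] else consecutive_irrelevant + 1
        return sorted(i for i, y in labeled.items() if y)  # candidate inclusions

Records never shown to the reviewer remain unlabeled, which is what makes the 100-consecutive-irrelevant stopping rule a heuristic rather than a guarantee of completeness.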

Data charting and analysis

A Microsoft Excel-based data charting form was developed and tested by MA and EV to identify variables relevant to the RQs. The form included sections such as author(s), publication year, study objectives, study design, research methods, intervention details, and results.

To synthesize evidence on methods used in clinical practice for reporting feedback to HCPs (RQ1), data were collected on feedback modes, quality indicators, and the application of best practices outlined in the CP-FIT framework for feedback display and delivery [14]. Each study was evaluated to determine whether it adhered to the CP-FIT best practices. We calculated the average number of best practices used per study, identified the most and least frequently applied practices, and provided examples of their practical implementation (see Additional file 3).

To determine which co-interventions are applied in practice to facilitate behaviour change (RQ2), we collected data defining co-interventions as strategies combined with feedback to either facilitate engagement with feedback or target specific behaviours that support the feedback process and enhance the effectiveness of the intervention. We applied the Expert Recommendations for Implementing Change (ERIC) taxonomy to list the different co-interventions [11]. Additionally, the framework for implementing new interventions in clinical practice by Grol et al. (1992) was employed to categorize the co-interventions into facilitative and coercive approaches [19]. Facilitative approaches included competence-based (e.g., education, post-feedback consultations) and performance-based (e.g., reminders) interventions, while coercive approaches involved structural changes in the environment and financial incentives to facilitate behaviour change.

To collect evidence on the reported impacts of feedback (co-)interventions on care processes and patient outcomes (RQ3), descriptive analyses using tables and graphical figures were performed to compare the number of studies that reported positive outcomes across research designs, types of co-interventions, and quality indicators. A positive outcome was defined as a statistically significant improvement (p < 0.05) in the desired direction after the intervention in at least one primary quality indicator targeted for change. The included studies were categorized according to the Joanna Briggs Institute (JBI) Levels of Evidence for Effectiveness, which reflect the strength and reliability of the evidence: level 1 (experimental), level 2 (quasi-experimental), level 3 (analytical), level 4 (descriptive), or level 5 (expert opinion) [44].

Finally, qualitative data presented in the studies underwent a combined inductive and deductive thematic analysis to identify and summarize factors contributing to intervention effectiveness. The analysis began with an inductive approach, allowing themes to emerge directly from the qualitative data in the included studies. Data were categorized into factors related to the feedback intervention and other factors affecting HCP behaviour. Sub-themes (e.g., valid data, meaningful data) were then mapped to the CP-FIT framework, using a deductive approach to conceptualize how the identified factors influenced behaviour change. The primary analysis was conducted by MA, while WD reviewed the codes, themes, and final interpretations through detailed discussions. These discussions enhanced the trustworthiness of the findings by ensuring consensus and refining interpretations.
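As a concrete illustration of the RQ3 synthesis step, the sketch below tabulates the share of studies with a positive outcome by study design and by co-intervention type. The data frame, its column names, and its values are hypothetical stand-ins for the charting form, not the authors' actual data.

    # Hypothetical charting data: one row per study; "positive" flags a
    # statistically significant (p < 0.05) improvement in >= 1 primary indicator.
    import pandas as pd

    charting = pd.DataFrame({
        "study_id":        [1, 2, 3, 4, 5, 6],
        "design":          ["RCT", "pre-post", "pre-post", "quasi-exp", "RCT", "pre-post"],
        "co_intervention": ["education", "none", "reminders", "education", "none", "education"],
        "positive":        [True, True, False, True, False, True],
    })

    # Percentage of studies reporting improvement, stratified by study design...
    by_design = charting.groupby("design")["positive"].mean().mul(100).round()

    # ...and by type of co-intervention, with counts for context.
    by_coint = charting.groupby("co_intervention")["positive"].agg(improved="sum", total="count")
    by_coint["pct_improved"] = (100 * by_coint["improved"] / by_coint["total"]).round()

    print(by_design)
    print(by_coint)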

Results

Study selection process

The database searches initially identified 13,026 articles, with an additional 9 articles retrieved through snowballing of references. Following the removal of 6,753 duplicates, 6,282 unique articles underwent screening. Of those, 523 articles underwent full-text review, resulting in a total of 279 articles for detailed analysis. Figure 1 summarizes the review process using a PRISMA flowchart [33]. Additional file 3 includes an overview of the studies and extracted data.

Fig. 1 Flowchart of the literature search

Additional file 4 provides a list of studies excluded at the full-text level, along with the reasons for their exclusion.

Study characteristics

Included studies were published between 2010 and June 2024, with the majority published after 2017 (69%) (Fig. 2). Most of the studies originated from the United States of America (39%) and Canada (17%). Figure 3 shows the distribution across clinical areas: feedback was used in primary care (adult and paediatric) in 27% of the included studies, in in-patient care (pre/post-surgical care, tertiary hospitals) in 46%, in nursing and long-term care in 10%, in specialized care settings (e.g., cancer care, urology, respiratory care, dermal care) in 10%, in out-patient care in 4%, and in other care settings (e.g., palliative care, mental health care) in 3%. A majority of the studies compared feedback interventions with usual care or no intervention (80%); only a minority compared the feedback intervention with another intervention (20%). Among the included studies, 50% were pre-post studies without a control group (JBI level 2), 22% were randomized controlled trials (RCTs) (JBI level 1), 16% were qualitative studies (JBI level 5), 10% were quasi-experimental studies (JBI level 2), and 2% were stepped-wedge trials (JBI level 1) (Table 1).

Fig. 2 Number of publications per year. Note: studies were searched up to June 2024

Fig. 3 Number of publications per clinical area

Table 1 Summary of study designs

Methods applied in clinical practice for reporting feedback

Table 2 summarizes the types of feedback reporting practices used. Most studies delivered feedback in a written format, primarily using written reports or emails (37%); 15% of studies reported feedback verbally, 20% used a combination of methods, and 18% applied digital formats such as online dashboards or web-based systems. In terms of the quality indicators used in the interventions, 60% of studies used only process measures, 8% used only outcome measures, and 25% used a combination of both; in 7% of studies, the types of measures were not clearly reported. Process measures included clinical assessments (e.g., catheter removal, compliance with pain assessment tools), adherence to guidelines, prescribing behaviours (e.g., VTE prophylaxis, radiation use, antibiotics, statins), resource utilization (e.g., colonoscopy process indicators for cancer, ICU stay, hospital readmission, ER visits), and documentation of procedures (e.g., discharge summary documentation). Outcome measures comprised clinician-reported outcomes (e.g., bloodstream infection rates, prevalence of anaemia, clinical remission, blood pressure control, glycated haemoglobin (HbA1c)), adverse outcomes (e.g., rate of severe hyperglycaemia days, prevalence of acquired pressure ulcers, mortality rates, and cardiovascular events), and patient-reported outcome measures (PROMs) (e.g., self-reported pain). Additionally, two studies fell under the 'other' category, examining healthcare professionals’ ratings of their professionalism.

Table 2 Methods to deliver feedback

Best practices of reporting feedback recommended by CP-FIT

On average, the included studies implemented 3 of the 12 recommended best practices for feedback reporting (Table 3; Additional file 3) [14]. The most frequently employed practices were providing feedback with peer comparisons (66%), actively delivering feedback (65%), delivering timely feedback (56%), offering feedback specific to HCPs’ practice (37%), and presenting feedback in group settings (27%). In contrast, some best practices were infrequently used, such as designing user-friendly feedback (12%), using a knowledgeable source for reporting feedback (11%), and showing trends between past and current performance (8%).

Table 3 Summary of ‘best practices’ that studies reported to use when reporting feedback

Co-interventions applied in practice to facilitate behaviour change

In total, 190 studies (68%) employed a multifaceted intervention approach, combining feedback with various co-interventions, while 89 (32%) reported feedback without a co-intervention (Fig. 4).

Fig. 4 Summary of types of (co-)interventions

Co-interventions using facilitative approaches

Of the 190 studies, 60% applied education as a co-intervention, either alone (31%) or combined with other strategies (29%). Education primarily addressed care performance gaps informed by feedback, while reminders and post-feedback consultations directly targeted feedback. Reminders (22%) served as prompts to encourage adherence to feedback recommendations, either alone or combined with other strategies like education or action toolboxes. Post-feedback consultations (19%) allowed HCPs to clarify feedback, reflect on practice, and plan actions with expert support.

Co-interventions with coercive approaches

Structural modifications to practice settings were reported as co-interventions in 34% of studies, applied either alone (13%) or in combination with other co-interventions (22%). These included action toolboxes, pocket-sized guidelines, and clinical pathways, primarily aimed at improving targeted practices and thereby supporting feedback interventions. Financial or non-monetary incentives, such as gift cards, vouchers, or Continuing Medical Education (CME) credits, were implemented in 8% of studies, either independently or alongside other co-interventions. Social influence strategies, reported in 5% of studies, often involved increasing awareness of clinical norms or publicly sharing HCP performance results.

Reported impact of various approaches for facilitating behaviour change and clinical improvement

Interventions showing improvements in quality of care by study design

Overall, 235 studies quantitatively assessed the effectiveness of feedback (co-)interventions, with 191 (81%) reporting improvements in at least one primary quality indicator. Among pre-post studies without a control group, 91% showed improvements. In quasi-experimental controlled studies and stepped-wedge trials, 79% reported improvements, while among RCTs, 61% showed improvements.

Interventions with improvements in quality of care by types of quality indicators

Of these studies, 230 used either process measures (62%), outcome measures (8%), or a combination of both (30%). Improvement in quality of care was reported in 85% of studies utilizing process measures, 68% of those using outcome measures, and 79% of studies incorporating both. These differences were more pronounced when stratified by study design: in controlled study designs, improvements were observed in 69% of studies using process measures, 50% using outcome measures, and 72% incorporating both (Fig. 5).

Fig. 5 Summary of studies with improvements versus non-significant or no improvements, by choice of outcomes

The most frequently reported improvements in process measures included clinical assessments (89%), such as evaluations for cancer pain, venous thromboembolism (VTE) risk, or procedures involving core biopsies and radiological imaging (e.g., CT, MRI). These assessments were often part of diagnostic protocols or pre/post-surgical procedures, such as catheter placement or adherence to aseptic techniques. Improvements were also commonly reported in prescribing practices, including the use of antibiotics, medications for chronic conditions (e.g., cardiovascular drugs), immunizations, and high-risk medications such as opioids for cancer pain, with 82% of studies highlighting positive changes. Conversely, process measures related to resource utilization—such as hospital admission or re-admission rates, length of stay (LOS), or reducing unnecessary diagnostic procedures like CT scans—showed less frequent improvement, reported in 45% of studies.

For outcome measures, 67% of studies reported improvements in clinician-reported outcomes, such as pain scores in acute care, clinical remission or adequate nutrition and growth in paediatric chronic disease management, or laboratory measures like blood pressure, temperature control, or infection rates. Improvements in PROMs were less frequently observed, with 64% of studies documenting positive changes. Adverse outcomes, including mortality rates, infection rates, and cardiovascular events, showed the least improvement, with only 27% of studies reporting favourable results.

Studies showing improvements in quality of care by types of co-interventions

Studies that included co-interventions reported improvements in 141 out of 169 studies (83%). In comparison, those without co-interventions showed improvements in 51 out of 66 studies (77%).

Co-interventions with facilitative approaches

Of the studies that applied education as a co-intervention, 76% reported improvements in at least one primary quality indicator. Improvements were more frequently reported (84%) when HCPs were educated on strategies to enhance care delivery. In-person educational meetings were the most commonly reported method, with 89% of studies indicating positive outcomes; hybrid educational approaches reported improvements in 96% of studies; and outreach visits, in which experts provide personalized education in HCPs’ work settings, resulted in improvements in 29% of studies. Post-feedback consultations showed improvements in 83% of studies. Similar results were observed for interventions including reminders (83%) (Table 4).

Table 4 Number of studies with improvements by types of co-interventions and study designs

Co-interventions using coercive approaches

Studies that applied co-interventions using a coercive approach reported mixed results. Among co-interventions involving structural changes, decision support tools and action toolboxes pre-filled with suggested actions and supporting materials were associated with improvements in approximately 79% of studies. Incentives such as financial rewards or continuing medical education (CME) credits were reported as effective in about 69% of studies. Similar trends were observed for studies applying social influence as the main co-intervention, with around 63% reporting positive outcomes.

Effectiveness of (co-) interventions based on content analysis of qualitative studies

We included 44 studies that utilized a qualitative research design. Thematic analysis of the results and discussion sections of these studies revealed two main themes regarding the effectiveness of feedback interventions: 'feedback-related factors' and 'other sources influencing behaviour'. Additional file 5 provides examples of how feedback-related factors and external factors were perceived to be important for the effectiveness of feedback interventions in different studies.

Feedback-related factors

The perceived usefulness of interventions was influenced by several feedback-related factors, in particular whether the data on which the feedback was based were credible, valid, and meaningful. Feedback that was timely and non-punitive, applied user-friendly designs, and compared data to those of other HCPs was easier for HCPs to interpret and facilitated behaviour change.

Other factors influencing HCP behaviour

Organizational factors play an important role: adequate resources, such as sufficient time and manpower to carry out tasks related to the quality improvement intervention, emerged as a common prerequisite for effective feedback implementation. This is followed by strong leadership, a culture that supports learning, and the presence of co-interventions, all of which create an environment conducive to utilizing feedback. HCP factors, including self-efficacy, knowledge and skills to use data, and expectation management, also determine how feedback is perceived and utilized. For example, when HCPs felt accountable towards their patients, they were more engaged with feedback [45]. Furthermore, external factors, such as interactions with other HCPs within one’s own organization or network, may further enhance motivation to change behaviour and apply feedback in practice.

Additionally, patient behaviours, such as patients demanding more antibiotics than recommended, can undermine HCPs’ control over desired practices, thereby impacting their engagement with feedback [46]. Similarly, the involvement of health insurers adds to the complexity of how feedback is perceived and acted on. Table 5 provides a joint meta-synthesis display of findings from qualitative and quantitative studies.

Table 5 Meta-synthesis matrix: integrating feedback reporting approaches and co-interventions to enhance effectiveness

Discussion

In the current scoping review, we collected available evidence on feedback reporting practices directed at HCPs, focusing on outcome and process measures from various clinical fields. Our objectives were to explore current feedback practices in the literature, examine co-interventions applied to facilitate behaviour change, and identify the most successful interventions in changing behaviour and achieving clinical improvement. We observed that most studies focused on process measures as targets of clinical improvement, primarily using written reports or emails. On average, studies reported using 3 out of 12 best practices described in CP-FIT theory [14]. Additionally, most studies (68%) combined feedback with co-interventions. The results of comparative studies and other observational studies pointed to facilitative co-interventions, such as education, post-feedback consultations, and reminders, as the most successful approaches. Action toolboxes and clinician decision support tools also appeared to be effective co-interventions. Studies that targeted either process measures or a combination of process and outcome measures more frequently observed improvements in the assessed quality indicators than those targeting only outcome measures.

Comparison with existing literature

First, our results showed that process measures were predominantly targeted to assess practice patterns such as prescribing behaviours or ordering certain tests, which is consistent with previous findings [15, 16]. While outcome measures are the outcomes that matter most to patients [3], they can be more challenging for HCPs to act upon and are often influenced by external factors such as disease complexity, severity, and patient population types, despite the statistical methods available to control for these sources of bias [47, 48]. On the other hand, process measures are often considered more actionable, particularly in multisite quality improvement interventions where there may be significant contextual differences between sites [49]. In these settings, process measures can offer clearer insights into the desired practice changes, which are less prone to bias. Our findings suggest that a combined approach, targeting both process and outcome measures, can yield greater improvements and enhance the effectiveness of feedback interventions, ultimately driving better patient outcomes while providing actionable insights to HCPs.

Best practices were variably incorporated in the designs of feedback interventions. Included studies most frequently incorporated peer comparisons within the feedback, timely feedback delivery, and active feedback methods. Social influence, through delivering feedback in a group setting, was common as well, suggesting that this motivates HCPs to adopt the behaviours of their peers [50, 51]. Psychological theories also support the notion that humans have a natural tendency to conform to social norms and to learn by observing others [52, 53]. Conversely, practices such as reporting feedback with a user-friendly design, using knowledgeable sources for reporting feedback, and showing trends in data were less frequently applied. The lower number of studies reporting user-friendly designs may be due to the fact that 15% of studies delivered feedback verbally, where the importance of user-friendly design is diminished. Another explanation for the infrequent reporting of these practices could be that, although studies followed 'best practices' in reporting feedback to HCPs, some practices were more frequently cited, while others were likely used but not adequately reported, possibly due to insufficient or unclear documentation of study design and methods.

The issue of poorly detailed reporting of feedback practices has also been highlighted in previous studies [54, 55], which complicates the detection of differences between studies and their effects on clinical improvement. In recent years, several reporting guidelines have been proposed [56, 57], and taxonomies of behavioural interventions have been published to facilitate transparent reporting [11, 58]. However, only a few studies have demonstrated compliance with these guidelines or the use of such taxonomies [55].

Co-interventions were commonly used to assist HCPs with behaviour change, primarily through education and decision-support tools, or to support feedback engagement, such as post-feedback consultations and reminders. While most studies employed co-interventions, studies that applied multiple co-interventions often used similar mechanisms. According to Grol et al., facilitative approaches target HCPs' intrinsic motivation, while coercive approaches, such as financial incentives, target extrinsic motivation [19]. Notably, among the 19% of studies implementing multiple co-interventions, nearly half relied exclusively on facilitative strategies. For the long-term sustainability of quality improvement initiatives, addressing both facilitative and coercive approaches may be beneficial [19].

Feedback is a widely used strategy for integrating evidence-based practices into real-world clinical settings. Its effectiveness depends on multiple factors, including the motivation and capability of HCPs, the availability of organizational resources, and the identification of barriers that may hinder implementation [24, 57,58,59]. Successful feedback interventions require careful planning, adaptation to the clinical context, and ongoing evaluation to ensure meaningful improvements in care. Factors such as organizational culture, resource availability, and external pressures can significantly influence the uptake and impact of feedback interventions.

Regarding the effectiveness of co-interventions, we compared studies that used co-interventions with those that did not; studies with co-interventions showed slightly more improvement. Co-interventions such as education, post-feedback consultations, reminders, and action tools appeared to be the most successful. Earlier studies also reflected the success of education combined with feedback [60, 61]. Additionally, we found that in-person education and distance-learning methods, such as printed materials, showed greater improvements, consistent with prior reviews [60, 62]. In contrast, methods such as outreach visits by experts to provide education were less successful. Furthermore, education targeting practice improvement strategies yielded greater improvements than education aimed at general awareness of the quality improvement intervention. This aligns with previous findings emphasizing HCPs' preference for practice-based educational interventions [63, 64]. Co-interventions employing a coercive approach yielded mixed results, with action toolboxes emerging as more successful than interventions involving incentives and social influence. While one literature review demonstrated the efficacy of clinician decision support tools in enhancing clinical processes and outcomes [65], there is currently no clear synthesis of evidence regarding their effectiveness when combined with feedback.

Limitations

Our scoping review collected evidence from diverse clinical settings, providing a comprehensive understanding of the various applications of feedback in practice. However, this review encountered challenges and limitations. Our search was limited to studies published from 2010 onward, which may have excluded earlier research on feedback interventions. However, this timeframe aligns with significant advancements in feedback delivery, such as EHR integration, real-time performance monitoring [36], and theory-driven approaches [37]. We included only English-language studies to ensure consistent interpretation. These decisions may have caused some relevant non-English studies or papers published prior to 2010 to be excluded. Additionally, our database search identified 279 eligible studies, most of which were observational, making it challenging to assess effectiveness. We observed that studies without a controlled design showed more improvements in outcomes compared to those with a controlled design. This might be because pre-post intervention studies, while feasible and practical in such contexts, cannot control for confounders that may influence the results. Furthermore, the diverse approaches and quality indicators used across various clinical settings made synthesizing evidence on the reported impacts of (co-) interventions difficult, as comparing results across studies was challenging. Finally, our search strategy may have inadvertently overlooked studies using different terminology, such as alternative terms for studies involving broader quality improvement initiatives with feedback. To mitigate this risk, we iteratively refined our search strategy in collaboration with experienced librarians.

Implications and future research

There is a crucial need to explore how to design feedback interventions effectively, as prior studies show large variations in their effectiveness [16, 66], indicating limited knowledge of the best approaches. A prior Cochrane review [16] compared different feedback interventions, but its focus on including only high-quality studies may have excluded many lower-quality studies, potentially limiting the diversity of clinical settings represented. Including only high-quality studies ensures more reliable insights and a robust assessment of the interventions’ impact, but such studies do not encompass all clinical contexts. Our scoping review highlights key practices and types of co-interventions, such as education, post-feedback consultations, and action toolboxes, that appear to be more effective. Future research should confirm these findings through well-designed and transparently reported studies to broaden the applicability of feedback interventions across a wider range of clinical settings.

Conclusion

This review provides a comprehensive overview of strategies used to implement feedback interventions across various practice settings. Our findings suggest that feedback interventions incorporating both outcome and process measures, as well as those combined with co-interventions such as education, action toolboxes, and reminders, tend to be more effective. These insights have practical implications for designing future feedback interventions and implementation strategies intended to support the uptake of evidence-based practices, emphasizing the importance of selecting strategies tailored to the intervention setting, the barriers present, and the types of motivation being targeted. For future research, it is crucial to explore the individual and additive effects of these co-interventions, along with their long-term sustainability, to better understand how to maintain improvements over time in diverse clinical settings.

Data availability

Not applicable.

Abbreviations

HCPs: Healthcare providers

VBHC: Value-based healthcare

PREMs: Patient-reported experience measures

PROMs: Patient-reported outcome measures

RCT: Randomized controlled trial

References

  1. Corallo AN, Croxford R, Goodman DC, Bryan EL, Srivastava D, Stukel TA. A systematic review of medical practice variation in OECD countries. Health Policy. 2014;114(1):5–14. https://doi.org/10.1016/j.healthpol.2013.08.002.

  2. Putera I. Redefining health: implication for value-based healthcare reform. Cureus. 2017. https://doi.org/10.7759/cureus.1067.

  3. Porter ME, Teisberg EO. Redefining health care: creating value-based competition on results. Boston: Harvard Business Press; 2006.

  4. Kim AH, Roberts C, Feagan BG, et al. Developing a standard set of patient-centred outcomes for inflammatory bowel disease—an international, cross-disciplinary consensus. J Crohns Colitis. 2018;12(4):408–18.

  5. Cella D, Riley W, Stone A, et al. The Patient-Reported Outcomes Measurement Information System (PROMIS) developed and tested its first wave of adult self-reported health outcome item banks: 2005–2008. J Clin Epidemiol. 2010;63(11):1179–94.

  6. Askew RL, Cook KF, Revicki DA, Cella D, Amtmann D. Evidence from diverse clinical populations supported clinical validity of PROMIS pain interference and pain behavior. J Clin Epidemiol. 2016;73:103–11.

  7. Almario CV, Chey WD, Khanna D, Mosadeghi S, Ahmed S, Afghani E, et al. Impact of National Institutes of Health gastrointestinal PROMIS measures in clinical practice: results of a multicenter controlled trial. Am J Gastroenterol. 2016;111(11):1546–56.

  8. Amini M, van Leeuwen N, Eijkenaar F, Mulder MJ, Schonewille W, Lycklama à Nijeholt G, et al. Improving quality of stroke care through benchmarking center performance: why focusing on outcomes is not enough. BMC Health Serv Res. 2020;20(1):998.

  9. Tuti T, Nzinga J, Njoroge M, Brown B, Peek N, English M, et al. A systematic review of electronic audit and feedback: intervention effectiveness and use of behaviour change theory. Implement Sci. 2017;12(1):61.

  10. Menear M, Blanchette MA, Demers-Payette O, Roy D. A framework for value-creating learning health systems. Health Res Policy Syst. 2019;17(1):1–3.

  11. Powell BJ, Waltz TJ, Chinman MJ, Damschroder LJ, Smith JL, Matthieu MM, Proctor EK, Kirchner JE. A refined compilation of implementation strategies: results from the Expert Recommendations for Implementing Change (ERIC) project. Implement Sci. 2015;10:1–4.

  12. Carver CS, Scheier MF. Control theory: a useful conceptual framework for personality–social, clinical, and health psychology. Psychol Bull. 1982;92(1):111.

  13. Devlin NJ, Appleby J, Buxton M. Getting the most out of PROMs: putting health outcomes at the heart of NHS decision-making. London: King’s Fund; 2010.

  14. Brown B, Gude WT, Blakeman T, van der Veer SN, Ivers N, Francis JJ, et al. Clinical Performance Feedback Intervention Theory (CP-FIT): a new theory for designing, implementing, and evaluating feedback in health care based on a systematic review and meta-synthesis of qualitative research. Implement Sci. 2019;14(1):1–25.

  15. Brehaut JC, Colquhoun HL, Eva KW, Carroll K, Sales A, Michie S, Ivers N, Grimshaw JM. Practice feedback interventions: 15 suggestions for optimizing effectiveness. Ann Intern Med. 2016;164(6):435–41.

  16. Ivers N, Jamtvedt G, Flottorp S, et al. Audit and feedback: effects on professional practice and healthcare outcomes. Cochrane Database Syst Rev. 2012;(6):CD000259. https://doi.org/10.1002/14651858.CD000259.pub3.

  17. Ivers NM, Grimshaw JM, Jamtvedt G, Flottorp S, O’Brien MA, French SD, Young J, Odgaard-Jensen J. Growing literature, stagnant science? Systematic review, meta-regression and cumulative analysis of audit and feedback interventions in health care. J Gen Intern Med. 2014;29(11):1534–41.

  18. Colquhoun HL, Carroll K, Eva KW, Grimshaw JM, Ivers N, Michie S, Sales A, Brehaut JC. Advancing the literature on designing audit and feedback interventions: identifying theory-informed hypotheses. Implement Sci. 2017;12(1):1.

  19. Grol RP. Implementing guidelines in general practice care. Qual Health Care. 1992;1(3):184.

  20. Geerligs L, Rankin NM, Shepherd HL, Butow P. Hospital-based interventions: a systematic review of staff-reported barriers and facilitators to implementation processes. Implement Sci. 2018;13:1–7.

  21. Van Veghel D, Daeter EJ, Bax M, Amoroso G, Blaauw Y, Camaro C, Cummins P, Halfwerk FR, Wijdh-den Hamer IJ, De Jong JS, Stooker W. Organization of outcome-based quality improvement in Dutch heart centres. Eur Heart J Qual Care Clin Outcomes. 2020;6(1):49–54.

  22. Kaplan HC, Brady PW, Dritz MC, Hooper DK, Linam WM, Froehle CM, Margolis P. The influence of context on quality improvement success in health care: a systematic review of the literature. Milbank Q. 2010;88(4):500–59.

  23. van Gemert-Pijnen JE, Nijland N, van Limburg M, Ossebaard HC, Kelders SM, Eysenbach G, Seydel ER. A holistic framework to improve the uptake and impact of eHealth technologies. J Med Internet Res. 2011;13(4):e1672.

  24. Michie S, van Stralen MM, West R. The behaviour change wheel: a new method for characterising and designing behaviour change interventions. Implement Sci. 2011;6(1):1–2.

  25. Sant’Anna A, Vilhelmsson A, Wolf A. Nudging healthcare professionals in clinical settings: a scoping review of the literature. BMC Health Serv Res. 2021;21(1):1–4.

  26. Strack F, Deutsch R. The duality of everyday life: dual-process and dual-system models in social psychology. In: Mikulincer M, Shaver PR, Borgida E, Bargh JA, editors. APA handbook of personality and social psychology, Vol. 1: Attitudes and social cognition. Washington, DC: American Psychological Association; 2015. p. 891–927. https://doi.org/10.1037/14341-028.

  27. Lamprell K, Tran Y, Arnolda G, Braithwaite J. Nudging clinicians: a systematic scoping review of the literature. J Eval Clin Pract. 2021;27(1):175–92.

  28. Yoong SL, Hall A, Stacey F, Grady A, Sutherland R, Wyse R, Anderson A, Nathan N, Wolfenden L. Nudge strategies to improve healthcare providers’ implementation of evidence-based guidelines, policies and practices: a systematic review of trials included within Cochrane systematic reviews. Implement Sci. 2020;15:1–30.

  29. Peters M, Godfrey C, McInerney P, Soares CB, Khalil H, Parker D. Methodology for JBI scoping reviews. In: The Joanna Briggs Institute reviewers’ manual 2015. Adelaide: Joanna Briggs Institute; 2015. p. 3–24.

  30. Arksey H, O’Malley L. Scoping studies: towards a methodological framework. Int J Soc Res Methodol. 2005;8(1):19–32.

  31. Peters MD, Godfrey CM, Khalil H, McInerney P, Parker D, Soares CB. Guidance for conducting systematic scoping reviews. JBI Evid Implement. 2015;13(3):141–6.

  32. Ali MP, Visser E, West R, van Noord D, van der Woude CJ, van Deen WK. An interdisciplinary approach to designing and implementing audit & feedback interventions within the networks of health care providers: a scoping review protocol. https://doi.org/10.17605/OSF.IO/GAJVS.

  33. Page MJ, McKenzie JE, Bossuyt PM, Boutron I, Hoffmann TC, Mulrow CD, Shamseer L, Tetzlaff JM, Akl EA, Brennan SE, Chou R. The PRISMA 2020 statement: an updated guideline for reporting systematic reviews. Syst Rev. 2021;10(1):1–1.

  34. Rethlefsen ML, Kirtley S, Waffenschmidt S, Ayala AP, Moher D, Page MJ, Koffel JB. PRISMA-S: an extension to the PRISMA statement for reporting literature searches in systematic reviews. Syst Rev. 2021;10:1–9.

  35. van Staalduinen DJ, van den Bekerom P, Groeneveld S, Kidanemariam M, Stiggelbout AM, van den Akker-van Marle ME. The implementation of value-based healthcare: a scoping review. BMC Health Serv Res. 2022;22(1):270.

  36. World Health Organization. Global diffusion of eHealth: making universal health coverage achievable. Geneva: World Health Organization; 2016. Available from: https://africahealthforum.afro.who.int/first-edition/IMG/pdf/global_diffusion_of_ehealth_-_making_universal_health_coverage_achievable.pdf. Cited 2025 Feb 15.

  37. Colquhoun HL, Brehaut JC, Sales A, Ivers N, Grimshaw J, Michie S, Carroll K, Chalifoux M, Eva KW. A systematic review of the use of theory in randomized controlled trials of audit and feedback. Implement Sci. 2013;8:1–8.

  38. Donabedian A. Evaluating the quality of medical care. Milbank Mem Fund Q. 1966;44(3):166–206.

  39. Rayyan: AI-powered tool for systematic literature reviews. 2023. Available from: https://www.rayyan.ai/. Cited 2023 Nov 20.

  40. ASReview: active learning for systematic reviews. 2023. Available from: https://asreview.nl/. Cited 2023 Nov 20.

  41. Ferdinands G, Schram R, de Bruin J, et al. Active learning for screening prioritization in systematic reviews: a simulation study. Open Sci Framework [Preprint]. 2020.

  42. Wang Z, Nayfeh T, Tetzlaff J, O’Blenis P, Murad MH. Error rates of human reviewers during abstract screening in systematic reviews. PLoS One. 2020;15(1):e0227742.

  43. Van de Schoot R, de Bruin J, Schram R, Zahedi P, de Boer J, Weijdema F, Kramer B, Huijts M, Hoogerwerf M, Ferdinands G, Harkema A. An open-source machine learning framework for efficient and transparent systematic reviews. Nat Mach Intell. 2021;3(2):125–33.

  44. Ovid. JBI database of systematic reviews and implementation reports: guide. New York: Ovid; 2024. Available from: https://ospguides.ovid.com/OSPguides/jbidb.htm. Cited 2024 Aug 18.

  45. Christina V, Baldwin K, Biron A, Emed J, Lepage K. Factors influencing the effectiveness of audit and feedback: nurses’ perceptions. J Nurs Manag. 2016;24(8):1080–7.

  46. Roche KF, Morrissey EC, Cunningham J, Molloy GJ. The use of postal audit and feedback among Irish General Practitioners for the self-management of antimicrobial prescribing: a qualitative study. BMC Prim Care. 2022;23(1):1–10.

  47. Wolfe HA, Taylor A, Subramanyam R. Statistics in quality improvement: measurement and statistical process control. Pediatr Anesth. 2021;31(5):539–47.

  48. Marang-van de Mheen PJ, Woodcock T. Grand rounds in methodology: four critical decision points in statistical process control evaluations of quality improvement initiatives. BMJ Qual Saf. 2023;32(1):47–54.

  49. de Vos ML, van der Veer SN, Wouterse B, Graafmans WC, Peek N, de Keizer NF, Jager KJ, Westert GP, van der Voort PH. A multifaceted feedback strategy alone does not improve the adherence to organizational guideline-based standards: a cluster randomized trial in intensive care. Implement Sci. 2015;10:1–9.

  50. John P, Sanders M, Wang J. A panacea for improving citizen behaviours? Introduction to the symposium on the use of social norms in public administration. J Behav Public Adm. 2019;2(2):[no page numbers available].

  51. Sheeran P, Maki A, Montanaro E, Avishai-Yitshak A, Bryan A, Klein WM, et al. The impact of changing attitudes, norms, and self-efficacy on health-related intentions and behavior: a meta-analysis. Health Psychol. 2016;35(11):1178–88.

  52. Ajzen I. The theory of planned behavior. Organ Behav Hum Decis Process. 1991;50(2):179–211.

  53. Bandura A. Human agency in social cognitive theory. Am Psychol. 1989;44(9):1175–84.

  54. Ivers NM, Sales A, Colquhoun H, Michie S, Foy R, Francis JJ, Grimshaw JM. No more ‘business as usual’ with audit and feedback interventions: towards an agenda for a reinvigorated intervention. Implement Sci. 2014;9:1–8.

  55. Colquhoun H, Michie S, Sales A, Ivers N, Grimshaw JM, Carroll K, Chalifoux M, Eva K, Brehaut J. Reporting and design elements of audit and feedback interventions: a secondary review. BMJ Qual Saf. 2017;26(1):54–60.

  56. Hoffmann T, English T, Glasziou P. Reporting of interventions in randomised trials: an audit of journal instructions to authors. Trials. 2014;15(1):1–6.

  57. Ogrinc G, Mooney SE, Estrada C, Foster T, Goldmann D, Hall LW, et al. The SQUIRE (Standards for QUality Improvement Reporting Excellence) guidelines for quality improvement reporting: explanation and elaboration. Qual Saf Health Care. 2008;17(Suppl 1):i13–32.

  58. Michie S, Richardson M, Johnston M, Abraham C, Francis J, Hardeman W, Eccles MP, Cane J, Wood CE. The behavior change technique taxonomy (v1) of 93 hierarchically clustered techniques: building an international consensus for the reporting of behavior change interventions. Ann Behav Med. 2013;46(1):81–95.

  59. Cane J, O’Connor D, Michie S. Validation of the theoretical domains framework for use in behaviour change and implementation research. Implement Sci. 2012;7:1–7.

  60. Farmer AP, Légaré F, Turcot L, et al. Printed educational materials: effects on professional practice and health care outcomes. Cochrane Database Syst Rev. 2008;(3):CD004398. https://doi.org/10.1002/14651858.CD004398.pub2.

  61. Kobewka DM, Ronksley PE, McKay JA, Forster AJ, van Walraven C. Influence of educational, audit and feedback, system-based, and incentive and penalty interventions to reduce laboratory test utilization: a systematic review. Clin Chem Lab Med. 2015;53(2):157–83.

  62. Grimshaw J, Eccles M, Tetroe J. Implementing clinical guidelines: current evidence and future implications. J Contin Educ Health Prof. 2004;24(S1).

  63. Soumerai SB, Avorn J. Principles of educational outreach (‘academic detailing’) to improve clinical decision making. JAMA. 1990;263(4):549–56.

  64. Cranney M, Warren E, Barton S, Gardner K, Walley T. Why do GPs not implement evidence-based guidelines? A descriptive study. Fam Pract. 2001;18(4):359–63.

  65. Bright TJ, Wong A, Dhurjati R, Bristow E, Bastian L, Coeytaux RR, Samsa G, Hasselblad V, Williams JW, Musty MD, Wing L. Effect of clinical decision-support systems: a systematic review. Ann Intern Med. 2012;157(1):29–43.

  66. Jamtvedt G, Young JM, Kristoffersen DT, O’Brien MA, Oxman AD. Does telling people what they have been doing change what they do? A systematic review of the effects of audit and feedback. BMJ Qual Saf. 2006;15(6):433–6.


Acknowledgements

The authors wish to thank Wichor Bramer from the Erasmus MC Medical Library for developing and updating the search strategy.

Funding

This project has received funding from the Erasmus Initiative “Smarter Choices for Better Health”. Additionally, Janssen pharmaceuticals funded part of the research for this project. The funders had no role in the design of the study; the collection, analysis, and interpretation of the data; or the decision to approve the publication of the finished manuscript.

Author information

Contributions

MA and WD created the study concept and design. MA constructed and refined the search strategy. MA and EV extracted the data. Analysis of the data was completed by MA. RW, DN, CJW and WD provided critical input on data interpretation. Drafting of the manuscript was done by MA. All authors revised the manuscript critically for important intellectual content and approved the final manuscript.

Corresponding author

Correspondence to Welmoed K. van Deen.

Ethics declarations

Ethics approval and consent to participate

Not applicable.

Consent for publication

Not applicable.

Competing interests

None declared.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.


Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License, which permits any non-commercial use, sharing, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if you modified the licensed material. You do not have permission under this licence to share adapted material derived from this article or parts of it. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by-nc-nd/4.0/.


About this article


Cite this article

Ali, M.P., Visser, E.H., West, R.L. et al. Reporting feedback on healthcare outcomes to improve quality in care: a scoping review. Implementation Sci 20, 14 (2025). https://doi.org/10.1186/s13012-025-01424-9


  • DOI: https://doi.org/10.1186/s13012-025-01424-9
