

  • See here for work in progress toward providing this toolset

Problem to be solved

Currently, there is a lack of data aggregation across reporting platforms; each CDO has its own internal reporting, without cross-talk or national benchmarking.

Without standardized definitions and terminologies, organizations are neither describing a measure in the same language nor transmitting it over the same protocol.

Finally, we aren't reporting measures transparently to patients.

  • HCOs and clinicians bear the burden of 'sending data out' for a variety of purposes - research, case reporting, QI, etc. This burden needs to be addressed; these problems inhibit effective LHS function.
    • There is documentation burden for providers to meet reporting requirements; these documentation requirements should be addressable more seamlessly, as a natural byproduct of care workflows (at least for process measures).
  • How do you combine all the data that's created to produce the broader picture? The pandemic highlights this - different kinds of data from different sources can't be combined to paint a picture of what's going on healthcare-wise for the needed purposes (QI, research).
    • It is hard for CDOs to share CDS interventions based on their success and value because, as above, the data can't be integrated/aggregated.
    • For standardized quality measures to work accurately requires standardized data models that support consensus-based reporting of digital quality measures. The different candidates - OMOP, FHIR, etc. - need to harmonize; without harmonization, measures computed at different CDOs are not comparable, which defeats benchmarking and aggregation (a minimal mapping sketch follows this list). [Zahid to expand this bullet]
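
To make the "so what" concrete, below is a minimal sketch (Python) of harmonizing across models: normalizing a FHIR Condition resource into an OMOP condition_occurrence row. The hard-coded concept lookup is an assumption standing in for a real terminology service or the OMOP vocabulary tables.

```python
# Minimal sketch: normalize a FHIR R4 Condition (parsed JSON) into an
# OMOP condition_occurrence row, so one source record can feed both a
# FHIR-based measure pipeline and an OMOP-based research query.
from datetime import date

# Hypothetical lookup; in practice this comes from the OMOP vocabulary
# tables (CONCEPT, CONCEPT_RELATIONSHIP) or a terminology server.
SNOMED_TO_OMOP = {
    "38341003": 320128,  # SNOMED "Hypertensive disorder" -> OMOP concept
}

def fhir_condition_to_omop(condition: dict, person_id: int) -> dict:
    """Map a FHIR Condition to an OMOP condition_occurrence row."""
    coding = condition["code"]["coding"][0]  # assumes a SNOMED coding is first
    concept_id = SNOMED_TO_OMOP.get(coding["code"])
    if concept_id is None:
        raise ValueError(f"No OMOP standard concept for {coding['code']}")
    return {
        "person_id": person_id,
        "condition_concept_id": concept_id,
        "condition_start_date": date.fromisoformat(condition["onsetDateTime"][:10]),
        "condition_source_value": coding["code"],
    }

example = {
    "resourceType": "Condition",
    "code": {"coding": [{"system": "http://snomed.info/sct",
                         "code": "38341003",
                         "display": "Hypertensive disorder"}]},
    "onsetDateTime": "2021-03-15",
}
print(fhir_condition_to_omop(example, person_id=42))
```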

HEDIS measures are reported at the health-plan level; providers don't have the plan's continuous-enrollment data. The core concepts are the same (e.g., "hypertension"), though the measures themselves differ.


-------------

Examples of the siloed (non-automated) approach, which lags clinical care by 2-3 months:

The American College of Surgeons (ACS) has developed multiple national benchmarking programs. For example, TQIP (Trauma Quality Improvement Program) requires 900+ ACS-verified trauma centers to hire manual clinical abstractors to extract 300 data elements (defined by NTDS standards) per trauma patient and enter them into registry vendor software, from which they are sent centrally to the ACS TQIP program. This data is then used to create benchmarking reports for each institution.


Similarly, NSQIP, another ACS initiative focused on elective surgery, has its own data dictionary of 250 data elements (with roughly 25% overlap with TQIP), its own manual registry generation, and its own external submission for benchmarking.

What does solving the problem look like - i.e., what new capabilities are needed? (future vision)

NCQA's vision for the future quality measurement ecosystem - and recommendations to achieve this vision - are outlined in the recommendations the organization submitted to the Biden-Harris administration in 2021.

The future vision closes the LHS loop by gathering standardized process and outcome data (e.g., from patients, providers, and payers, as reflected in case reports, measures, healthcare utilization surveys, patient goals [e.g., as reflected in care plans], and SDOH data) to facilitate:

  • Evaluation of guideline effectiveness 
  • Performance monitoring of CDOs

By using standardized definitions and terminologies, everyone describes a measure in the same language and transmits it over the same protocol and pipeline (see the sketch below).

Finally, measure reporting is transparent to patients.

  • When patient-specific data is reported for patient-safety/regulatory purposes and potentially leveraged for LHS purposes, data confidentiality issues will be addressed.
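
As one illustration of "same protocol and pipeline," here is a minimal sketch of a measure result expressed as a FHIR R4 MeasureReport; the measure URL and counts are illustrative, not drawn from any actual program.

```python
# Minimal sketch: a standardized measure result as a FHIR R4 MeasureReport.
# Because every CDO emits the same resource shape, a national aggregator can
# benchmark results without site-specific parsing, and the same report can
# be rendered transparently for patients.
measure_report = {
    "resourceType": "MeasureReport",
    "status": "complete",
    "type": "summary",
    "measure": "http://example.org/Measure/controlling-bp",  # hypothetical
    "period": {"start": "2021-01-01", "end": "2021-12-31"},
    "group": [{
        "population": [
            {"code": {"coding": [{"code": "denominator"}]}, "count": 1200},
            {"code": {"coding": [{"code": "numerator"}]}, "count": 840},
        ],
        "measureScore": {"value": 0.70},  # 840 / 1200
    }],
}
```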


  • (See here for work in progress toward providing this toolset)

---------------

Using the above example of NSQIP/TQIP:

What's needed is an automated, executable program that can extract all of the above data elements from the Epic/Cerner data warehouse in real time, defined using a single agreed-upon standard (e.g., OMOP); submit the 300 elements for TQIP externally; submit the 250 elements for NSQIP externally; submit any other required external data; and use the data to drive QI/CDS processes and local reports.
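
A minimal sketch of that "extract once, submit many" pattern, assuming hypothetical element lists, a hypothetical registry_feed view materialized over the OMOP CDM, and SQLite standing in for the Epic/Cerner warehouse:

```python
# Minimal sketch: one extraction from an OMOP-modeled warehouse feeds
# multiple registries, replacing parallel manual abstraction pipelines.
import sqlite3

# Hypothetical subsets; the real lists come from the NTDS (300 elements)
# and NSQIP (250 elements, ~25% overlap) data dictionaries.
TQIP_ELEMENTS = ["person_id", "injury_date", "iss_score", "gcs_total"]
NSQIP_ELEMENTS = ["person_id", "procedure_date", "asa_class", "ssi_flag"]
SUPERSET = sorted(set(TQIP_ELEMENTS) | set(NSQIP_ELEMENTS))

def extract_superset(conn: sqlite3.Connection) -> list[dict]:
    """One query returns every element any downstream registry needs
    (registry_feed is a hypothetical view over the OMOP CDM tables)."""
    cur = conn.execute(f"SELECT {', '.join(SUPERSET)} FROM registry_feed")
    cols = [d[0] for d in cur.description]
    return [dict(zip(cols, row)) for row in cur.fetchall()]

def project(rows: list[dict], elements: list[str]) -> list[dict]:
    """Project a registry-specific subset from the shared extraction."""
    return [{k: r[k] for k in elements} for r in rows]

def run(conn: sqlite3.Connection) -> None:
    rows = extract_superset(conn)
    tqip_batch = project(rows, TQIP_ELEMENTS)    # -> submit to ACS TQIP
    nsqip_batch = project(rows, NSQIP_ELEMENTS)  # -> submit to ACS NSQIP
    # The same rows can also drive local QI dashboards and CDS tuning.
```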


Tool Users/Use Cases


Users and their use cases (it's hard to list the actual use cases, as they are specific to each institution; I've tried to put some broad language around them):

  1. LHS/QI researchers - disease processes with significant burden on patients, health disparities, or high funding priority
  2. Operations - disease processes where their system is a laggard, disease processes with high cost of care/reimbursement, disease processes tied to VBP
  3. System/service-line leadership - similar to Operations
  4. Providers and clinicians - diseases in their specialty that are serious causes of morbidity/mortality. For example, in trauma and critical care, common use cases would be traumatic brain injury, hip/femur fracture, solid organ injury, rib fractures, spinal cord injury, COVID-19, ARDS, VAP, sepsis, etc.
  5. Payors and benefit administrators - chronic disease, readmissions, etc.
  6. Departments of Health (especially for COVID-19 use cases) - public health diseases, chronic disease, diseases with high morbidity/mortality
  7. Federal agencies
  8. Patients - common diseases (cancer, cardiac, etc.), COVID-19, chronic disease
  9. Medical societies
  10. Research universities/institutions - diseases with high funding priority


Infrastructure needed to produce tools/solve problem

1.) The biggest gap healthcare systems face is a lack of funding. I've heard many times from hospital administrators: "Where is the financial benefit to a health system in being an early adopter of healthcare interoperability standards (FHIR), interoperable data models (OMOP), etc.?" They can just continue implementing local CDS mapped to local variables and use in-house local quality dashboards. Developing interoperable computable guidelines and living evidence is seen as work for researchers that should be covered by research budgets. The problem, though, is that these cost a lot of money to implement, and without research dollars there is no support for hiring staff to implement OMOP and terminology servers, pre-process EHR data into a "usable" database for research/QI/reporting, or develop and implement FHIR resources.

 - Suggestion: AHRQ should give ACTS $3 million per use case for a pilot set of X hospitals to implement the entire LHS for a single use case

 - This should scale to additional use cases and additional health systems; otherwise, hospitals aren't going to put operations dollars toward developing/implementing this.

 - While AHRQ grants exist, they routinely go to the few health systems that are early adopters (and thus have preliminary publications and research in the area); this limits the ability to actually build a transformative approach nationally and get more and more systems engaged.

  1. Trusted, independent evaluation of data, including data in transport?
  2. Alignment of quality measurement and value-based payment (for example, adherence to the CMS core measure for sepsis, SEP-1, has rarely been shown to be associated with improved sepsis outcomes)
  3. API marketplace?
  4. Available clinical data warehouse (e.g., N3C) specifically developed around clinical use cases of high value
  5. Agreement on what value is (standards, etc.)?

Other enablers needed to solve problem

  • Standardized process for automated data collection (about care processes and results) using consensus standardized data models, where the models depend on the specific use case - e.g., regulatory reporting, research to generate new evidence, or local quality improvement efforts (e.g., refining parameters of local CDS interventions). These data models are developed in a stakeholder-driven fashion as part of efforts to develop a reference architecture underlying the knowledge ecosystem, taking into account the needs and constraints of various stakeholders (clinicians, patients, researchers, CDOs, EHR vendors, etc.). Include in these models information about cost, patient satisfaction, equity, PROMs, and other data needed to drive the LHS/quintuple aim (use graphic from Stan/Health Catalyst).
  • Data quality is extremely important in data collection and aggregation; a Data Aggregator Validation program evaluates clinical data streams to help ensure that health plans, providers, government organizations, and others can trust the accuracy of aggregated clinical data for use in healthcare programs (see NCQA's program and graphic as an example).
  • In addition to the models, we need to define processes and platforms for how data will be shared, aggregated, reported, etc.
  • When a population is defined for quality/reporting/regulatory purposes, the value sets that define this population are specified in a consistent way across measures and use cases around the LHS cycle (see the sketch below).
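
A minimal sketch of "define the value set once, reference it everywhere"; the OID and codes are illustrative rather than authoritative VSAC content:

```python
# Minimal sketch: one value set, defined and versioned centrally, reused
# to specify the population for measures, case reporting, and CDS alike.
from dataclasses import dataclass, field

@dataclass(frozen=True)
class ValueSet:
    oid: str    # canonical identifier (e.g., a VSAC OID)
    name: str
    codes: frozenset = field(default_factory=frozenset)  # (system, code) pairs

    def contains(self, system: str, code: str) -> bool:
        return (system, code) in self.codes

ESSENTIAL_HTN = ValueSet(
    oid="2.16.840.1.113883.3.464.1003.104.12.1011",  # illustrative OID
    name="Essential Hypertension",
    codes=frozenset({("SNOMED", "59621000"), ("ICD10CM", "I10")}),
)

def in_population(patient_codes: list, vs: ValueSet) -> bool:
    """A quality measure, a case-report trigger, and a CDS rule can all
    call this same membership test against the same value set."""
    return any(vs.contains(system, code) for system, code in patient_codes)

print(in_population([("ICD10CM", "I10")], ESSENTIAL_HTN))  # True
```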
  1. AHRQ U grant for a collaborative CDS healthcare system group
    1. Possibly development of a CDS collaborative similar to clinical trials networks (SIREN, PETAL, etc.)?

      - Similar to Infrastructure #1 above: if I were AHRQ, I would set up a U grant or funding stream to support ACTS transitioning into a CDS collaborative similar to trials networks. ACTS would have responsibility for implementing 1-2 use cases across a collaborative of 5-10 heterogeneous healthcare systems, using interoperable CDS, data, living evidence, and CDS Connect. This would likely cost on the order of $10 million.

      - Following this, ACTS milestones would include supporting the translation of this CDS to other health systems (30+, with budgets of $250,000-300,000 per system).


2.) The current AHRQ R18 grant to scale interoperable CDS is excellent; I'd expand funding support of this grant to make interoperable CDS/learning health systems a higher funding priority with an increased funding line. The only criticism is that this creates a bit of a siloed approach to CDS funding, whereas doing more of #1 would provide a centrally organized body (ACTS) overseeing and directing interoperable CDS implementation, with all participants using the same standards agreed upon by that central body.


3.) The AHRQ K12 training grant is excellent; I'd expand this program to engage junior faculty at more sites.


4.) AHRQ funding decisions take too long: after we submit a grant, it takes nearly 7-8 months to hear whether it is accepted or not. There should be a faster mechanism for people to get funding, on the order of 1-2 months, with rapid resubmission and integration of AHRQ feedback so that funded proposals better align with AHRQ funding priorities and LHS needs.


Steps to address needed infrastructure / enablers  - Who does what?


Funds could come from: 

  1. HHS
  2. AHRQ
  3. NIH
  4. CDC
  5. PCORI
  6. Payors
  7. Private Industry?

Standards could come from:

  1. Bridges between Payors/Provider networks?
  2. Medical Societies
  3. AHRQ/ACTS
  4. Payors (Medicare, Medicaid, etc…)
  5. National Quality Forum

See the special publication from the National Academy of Medicine, "Health Data Sharing to Support Better Outcomes: Building a Foundation of Stakeholder Trust." This Special Publication outlines a number of potentially valuable policy changes and actions to help drive toward effective, efficient, and ethical data sharing, including more compelling and widespread communication efforts to improve awareness, understanding, and participation in data sharing. It identifies 'creating and prioritizing use cases' as a priority for action; this concept demo and the related ACTS Roadmap provide an approach for determining these priorities and integrating them with related efforts to produce learning health systems.

How tool(s) fit in Patient Journey

  • Patient-reported outcomes - mobile applications for PROM reporting (see the sketch after this list)
  • Process/outcome measure monitoring
  • CDS utilization, alert overrides, etc.
  • Interoperable platform for knowledge engine - transitioning from native Epic to CPG-on-FHIR
    • eCQM and eCaseReport features
  • NCQA report cards and measures
  • Evidence generation and dissemination - publications, preprints, presentations?
  • Patient-level information
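
As a concrete illustration of the PROM touchpoint above, here is a minimal sketch of a mobile app posting a patient-reported pain score as a FHIR R4 QuestionnaireResponse; the server URL, questionnaire reference, and linkId are hypothetical:

```python
# Minimal sketch: capture a PROM on a mobile device and submit it to a
# FHIR server as a QuestionnaireResponse.
import json
import urllib.request

FHIR_BASE = "https://fhir.example.org/r4"  # hypothetical endpoint

def build_prom_response(patient_id: str, pain_score: int) -> dict:
    """Package a single pain-score item as a FHIR QuestionnaireResponse."""
    return {
        "resourceType": "QuestionnaireResponse",
        "status": "completed",
        "questionnaire": "Questionnaire/pain-prom",  # hypothetical
        "subject": {"reference": f"Patient/{patient_id}"},
        "item": [{
            "linkId": "pain-0-10",
            "text": "Pain level in the last 24 hours (0-10)",
            "answer": [{"valueInteger": pain_score}],
        }],
    }

def submit(resource: dict) -> int:
    """POST the resource; returns the HTTP status (201 Created on success)."""
    req = urllib.request.Request(
        f"{FHIR_BASE}/QuestionnaireResponse",
        data=json.dumps(resource).encode(),
        headers={"Content-Type": "application/fhir+json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return resp.status

# Example (requires a reachable FHIR server):
# submit(build_prom_response("123", pain_score=4))
```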

Ecosystem Cycle Step(s) where tool is applicable

  • Gather, Analyze and Apply Data About Care Processes and Results


