
Overview

  • This page is a resource to help stakeholders working along the COVID-19 evidence - to guidance - to action - to data - to evidence cycle improve their work processes and results (see diagram at the top of this page). The ultimate goal is to improve COVID-19 care delivery and outcomes as soon as possible.
  • The tables below are designed to aggregate Collaborative participant recommendations for addressing steps in this COVID-19 knowledge ecosystem. The recommended tools and approaches are being gathered and will evolve over time as stakeholder input is received and community consensus around best practices - and the pandemic itself - evolve.

Recommendation Table Listing

  • Identify Studies
  • Synthesize Evidence
  • Produce Guidance
  • Make Guidance Computable
  • Implement Guidance
  • Analyze Care Results
  • Leverage Results Analysis (e.g., for Quality Improvement, Reporting, Evidence Generation)

Process for Populating Recommendation Tables

  • Identify 'leads' for each table who are currently doing extensive, collaborative work around the ecosystem step.
  • Leads provide pointers to resources that their collaborative communities consider to be high value for table cells
  • Other Collaborative participants likewise add comments and suggestions about this emerging information
  • Formal processes/criteria will be developed by the Collaborative for adding/vetting information in the tables to optimize their value and use (including defining explicit criteria for what belongs in each row), e.g., 
    • Input Sources: 
    • Search Strategies: 
    • Output Repositories: 
    • Standards: 
    • Initiatives: 
    • Tools/Platforms: 
    • Other Best Practices: 

Recommendation Tables for Knowledge Ecosystem Steps

Identify Studies



General Recommendations | Anticoagulation | Testing/Triage | Other (Long COVID, Vaccine, Steroids)
Key Definitions and Frameworks



What to know/do Overview

('nouns and verbs' - the rows below in each table hold the 'lists' associated with the bullets in this row)

Include: which tools to use under what circumstances, plus enough information about each item so people can match it to their need.


Input Sources

COVID specific systematic and rapid reviews

COVID specific mixed (reviews and trials)

COVID specific Trials



  • For the steroids systematic meta-review, 8 sources have been identified: MEDLINE, CORD-19, L-OVE/Epistemonikos, NIH iSearch COVID-19, EuropePMC, the WHO COVID-19 Database, EMBASE, and PROSPERO 
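When the same question is searched across this many databases, the retrieved records overlap heavily and need deduplication before screening. Below is a minimal Python sketch of one common approach (deduplicating by DOI, with a normalized-title fallback); the record fields and sample data are illustrative, not output from any of the databases above:

```python
# Hypothetical sketch: merge study records retrieved from several source
# databases and drop duplicates. Field names ("source", "doi", "title")
# are illustrative assumptions.

def normalize_title(title: str) -> str:
    """Lowercase and strip non-alphanumerics so minor formatting
    differences don't prevent a match."""
    return "".join(ch for ch in title.lower() if ch.isalnum())

def deduplicate(records):
    """Keep the first record seen for each DOI (or, when the DOI is
    missing, for each normalized title)."""
    seen, unique = set(), []
    for rec in records:
        key = rec.get("doi") or normalize_title(rec["title"])
        if key not in seen:
            seen.add(key)
            unique.append(rec)
    return unique

records = [
    {"source": "MEDLINE", "doi": "10.1000/x1", "title": "Steroids in COVID-19"},
    {"source": "EuropePMC", "doi": "10.1000/x1", "title": "Steroids in COVID-19"},
    {"source": "CORD-19", "doi": None, "title": "Anticoagulation in COVID-19"},
]
print(len(deduplicate(records)))  # → 2 (the two DOI duplicates collapse)
```

Matching on DOI first is the cheap, high-precision path; title normalization only catches records that lack a DOI, so borderline cases still need human screening.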
Search Strategies



Output Repositories



Standards



Initiatives
  • Librarian Reserve Corps
    • COVID-Evidence Project - building a database to gather all the evidence around a drug (University of Basel - currently focused on hydroxychloroquine, but the model could be adaptable to other targets)
    • Identification of sources to identify primary studies - validation study of specialized COVID-19 databases (systematic reviewers are going to different sources - this effort is to identify best practices)
    • Advocate for librarian representation in searches and reviews - leverage skillsets/best practices in this work
      [SLMC]: close gaps between needs that clinicians are seeing on the front line and topics covered in reviews and guidelines
  • COKA Evidence/ Tools WG [Project Google Drive](work in process)



Tools/ Platforms



Other Best Practices



Synthesize Evidence


General Recommendations | Anticoagulation | Testing/Triage | Other (Long COVID, Vaccine, Steroids)
Key Definitions and Frameworks
  • includes evaluation of quality of evidence 
  • also needs to address processing of real world evidence
  • [weave in ICER - search ISER opioids]



What to know/do Overview
  • For people creating evidence syntheses
    • See if work on the planned topic has already been done - search L-OVE/Epistemonikos (2K reviews covering many questions; only a fraction are registered in PROSPERO, and most reviews are out of date - only 3% include all the trials they should) and search PROSPERO for what's in the pipeline. Make sure that any existing reviews look at outcomes that are pertinent to your needs [leverage COMET outcome measures]. It is very hard for a user to identify a review that's up to date; the COVID-END repository of Best Evidence Syntheses can be helpful here.
    • Need to have access to the raw data that fed into the review so it's more re-usable.
    • Important to understand end-user needs so that the review can be aimed at addressing them.
    • AHRQ and Cochrane have methods guides. Are there COVID-specific tools for this? E.g., 'Resources and tools for researchers considering and conducting COVID-19 evidence syntheses' (from the COVID-END Synthesizing WG).
  • For organizations consuming evidence syntheses
    • For selecting/using synthesized evidence: Is there a GRADE evidence profile? Do they make quality assessments? How comprehensive are they? What is the search date?
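The selection questions above can be turned into a simple screening aid. Here is a hypothetical Python sketch; the field names and the 180-day freshness threshold are illustrative assumptions, not COVID-END or GRADE requirements:

```python
from datetime import date

def screen_review(review: dict, max_age_days: int = 180) -> list:
    """Return a list of concerns when selecting a synthesized review.
    Criteria mirror the checklist above (GRADE profile, quality
    assessment, search date); thresholds are illustrative only."""
    concerns = []
    if not review.get("grade_profile"):
        concerns.append("no GRADE evidence profile")
    if not review.get("quality_assessed"):
        concerns.append("no quality assessment of included studies")
    age_days = (date.today() - review["search_date"]).days
    if age_days > max_age_days:
        concerns.append("search date is %d days old" % age_days)
    return concerns

# Example: a review with a GRADE profile but no quality assessment.
candidate = {
    "grade_profile": True,
    "quality_assessed": False,
    "search_date": date.today(),
}
print(screen_review(candidate))  # → ['no quality assessment of included studies']
```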



Input Sources
Output Repositories


Standards


Initiatives


Tools/ Platforms


Other Best Practices
  • COVID-NMA has initiated communication with all trialists to try to ensure consistent approaches e.g. selection of outcomes, reduction of risk of bias, and to invite them to contribute missing data. 
  • Framework slides: Applying Standards to the Evidence Domain (from the COKA Evidence Ecosystem Liaison WG)




Produce Guidance


General Recommendations | Anticoagulation | Testing/Triage | Other (Long COVID, Vaccine, Steroids)
Key Definitions and Frameworks



What to know/do Overview

For people using or consuming guidance:

For people producing guidance:




Input Sources
  • See Output Repositories from Synthesize Evidence table

Output Repositories


Standards

IOM Standards: Clinical Practice Guidelines We Can Trust




Initiatives



Tools/ Platforms


Other Best Practices


Make Guidance Computable


General Recommendations | Anticoagulation | Testing/Triage | Other (Long COVID, Vaccine, Steroids)
Key Definitions and Frameworks
  • [CPG on FHIR and BPM+ IGs do this separately - need to be combined and simplified (e.g., with helpful graphics); e.g., leverage L1-L4]
  • [Approach section from CPG on FHIR IG might be helpful in framing scope]



What to know/do Overview

There are threads of work in the HL7 and BPM+ communities. Results of this work should be available in the coming weeks/months.

By the end of November, HL7 should publish details of how the DGWG got the ED use case guidance to L3.

  • Knowledge Elicitation: This goes through types of input that are needed, what needs to be made explicit and computable. Interactions with people developing the guideline. Getting information from guidance developer tools/artifacts into a DGWG tool (based on McMaster work) for making guidance computable. The DGWG template will be made available as part of this tool.
  • Terminology Management: Reach out to terminology vendors for mappings showing how terms are expressed in working systems (e.g., EHRs), and connect this to how guidance developers describe terms. Narrative guidelines have clinicians as their audience (e.g., clinicians understand what terms like 'patients with diabetes' mean); for guidelines to be computable, these terms must be expressed as computable code sets. Fleshing out these terms/definitions is the lion's share of the work in making guidelines computable.
  • Execution Model: describes target representation that models the execution semantics that are necessary for any specific implementation (Sivaram/Matt to provide example). Has 'pragmatics' that consistently conveys guidance intent. Translates what SMEs, knowledge engineers, etc. know into system behavior. There are tools to facilitate this work (e.g., OMG BPM+ tools, OWL). 'Case' describes patient details, 'Plan' describes what is (or should be) done for specific patients.
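As a concrete illustration of the terminology point above - turning a narrative phrase like 'patients with diabetes' into a computable code set - the sketch below uses a few well-known SNOMED CT diabetes concepts. This is a simplified stand-in: a production FHIR ValueSet would be curated with terminologists and would handle code-system versions and hierarchy expansion.

```python
# Illustrative code set for 'patients with diabetes'. The SNOMED CT
# codes shown are well-known diabetes concepts, but the flat-set
# structure here is a simplification, not the FHIR ValueSet format.
DIABETES_VALUE_SET = {
    "system": "http://snomed.info/sct",
    "codes": {
        "73211009",  # Diabetes mellitus (disorder)
        "44054006",  # Type 2 diabetes mellitus
        "46635009",  # Type 1 diabetes mellitus
    },
}

def matches(condition_code: str, value_set: dict) -> bool:
    """True if a patient's coded condition falls in the value set."""
    return condition_code in value_set["codes"]

print(matches("73211009", DIABETES_VALUE_SET))  # → True
```

The point of the exercise: once 'patients with diabetes' is a code set rather than a phrase, the same definition can be evaluated consistently against EHR data by any system that shares the terminology.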

The approach above is intended as a paradigm shift in the approach to developing guidance. Historically it has started with developing a text representation for the guidance that is directly applied by clinicians to enhance decisions and actions. The shift here is developing a computable representation of the guidance that serves as the 'source of truth' for subsequent implementation and modifications. When changes to the guidance model are made, these changes can then flow more seamlessly to CDS interventions, quality measures, eCase Reports, etc. This is more efficient than having a non-computable, text-based representation of the guidance as the 'source of truth,' since the former requires extensive adaptation (which introduces error, ambiguity, time delays, etc.) for implementation and modification.
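A toy Python sketch of the 'single source of truth' idea described above: one computable guidance item (a hypothetical structure, not the CPG on FHIR schema) from which both a CDS reminder and a quality-measure definition are derived, so a change to the item propagates to both artifacts at once:

```python
# Hypothetical computable guidance item - the single source of truth.
# All field names and values are illustrative.
GUIDANCE = {
    "id": "anticoag-prophylaxis",
    "population": "hospitalized COVID-19 patients",
    "action": "assess for VTE prophylaxis",
}

def as_cds_reminder(g: dict) -> str:
    """Derive a CDS reminder string from the guidance item."""
    return "Reminder: %s for %s." % (g["action"], g["population"])

def as_quality_measure(g: dict) -> dict:
    """Derive a quality-measure skeleton from the same item."""
    return {
        "denominator": g["population"],
        "numerator": "%s who received: %s" % (g["population"], g["action"]),
    }

print(as_cds_reminder(GUIDANCE))
```

Editing `GUIDANCE["action"]` changes both derived artifacts with no re-authoring, which is the efficiency the paragraph above contrasts with text-first guidance development.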

(Sivaram/Matt to flesh this out) Use shared ontologies to ensure consistent guidelines, reduce rework. These get pulled into authoring tools...

(VA has a library of 100 knowledge artifacts - it used the HL7 KNART spec to make an XML rendering and is using clinical content from these to represent FHIR questionnaires. The other HL7 work outlined above resonates and could potentially be leveraged to support VA efforts. There are only a handful of guidelines that VA pushes out from national - other CDS interventions are developed more locally by VA facilities using tools available in the HIT infrastructure.) Question: How does the above process reconcile competing recommendations on a particular topic? Answer: The HL7 CPG on FHIR IG addresses localization - this can relate to workflows, but can also reflect how organizations combine different external recommendations into essentially a local guideline that differs from the external ones.


[Evidence synthesis teams would like to have something that summarizes for a COVID resource database, where are they pulling information from, what are their inclusion/exclusion criteria, why you might use one source vs. another]

[CPG on FHIR team would like to incorporate insight we generate here - including BPM+ synergies, back into that resource. The 'Integrated Process' about how to develop narrative and computable guidelines in parallel - will be published in about a month.]

Robert Lario co-chairs OMG BPM+ activities. BPM+ spans three languages: process modeling (BPMN), decision modeling (DMN), and case/event modeling (CMMN). VA is using these to express clinical practice guidelines - sometimes just instructive, other times executable. All have execution models, and BPM+ has its own ecosystem. There are gaps, and some things are hard to do with BPM+, so work has started on three other modeling languages: situational data (how do you represent the structure of data?), provenance and pedigree (who owns/controls data and access; what produces what?), and knowledge packages (many languages/constructs are used in a guideline, e.g., sequencing - how do you bundle these into a CPG, surface models, and discuss dependencies?). The focus is on how to express knowledge in a clear and unambiguous way and how to create artifacts. CPG on FHIR speaks more to methodology - it complements BPM+, which doesn't go into deep detail there and is also not looking at curation and management of models.

Blackford: DGWG ran through the effort to implement guidance based on CPG on FHIR. Would like to use a resource like this table to know which tools to use to make guidance computable in different circumstances. How do you implement this at scale?

Address dissemination and marketplaces. (HL7 Marketplaces spec)




Input Sources
  • See output repositories under Produce Guidance



Output Repositories


Standards


Initiatives



Tools/ Platforms


Other Best Practices

From C19HCC Digital Guideline WG:

  • Using CPG-on-FHIR standard for representing/ expressing the full intent of the Guidance in computer-interpretable artifacts (part of HL7 CPG-IG)
  • Using the Agile Approach to CPG Development (inclusive of Integrated Process) to concurrently Produce Guidance and Make Guidance Computable (part of HL7 CPG-IG)
  • Use Agile Knowledge Engineering methods, principles, and tools
    • Cross-functional Integrated team (Agile CPG Team)
    • Leverage composite nature of CPGs (e.g. can develop logic for inferences on patient information- CPG_CaseFeatures) to build incrementally and iteratively with rapid feedback
    • Pull knowledge engineers into Content design/reviews; pull domain SMEs into knowledge representation design/reviews
  • Leverage CPG-on-FHIR as a faithful expression of Guidance and its ability to create computationally derived CDS and Cognitive Support, patient-specific, practice-level digital Quality Measures/Metrics, eCaseReports, etc. to create computable artifacts used downstream in the Learning Health System and to provide closed-loop feedback/feedforward.
  • Leverage established tools and capabilities (e.g. BPM+ process and tooling, Clinical Ontology) to author computable Guidance, and Open Source tooling to translate into HL7 CPG-on-FHIR to leverage derivative and native compute
  • tips:
    • Use established standards and work with standards community (to understand and evolve as needed)
    • Engage consumers/ users early and often
    • Engage downstream vendors (e.g., terminology vendors used in the EHRs) early
    • Just because everyone is using the same terminology systems doesn't mean they agree on how to use the actual terms - this needs to be considered and addressed to make the ecosystem/supply chain work properly (feedforward from Evidence, but also feedback of data semantics back into evidence)
    • Learn from related communities of practice (e.g. Agile Software Engineering)



Implement Guidance (e.g., as CDS, eCQMs)


General Recommendations | Anticoagulation | Testing/Triage | Other (Long COVID, Vaccine, Steroids)
Key Definitions and Frameworks
  • Implement Guidance includes integrating the computable guidance into organizational information system infrastructure, deploying the intervention to users, and maintaining the interventions over time. It includes looking at 'leading' indicators (e.g., process changes) regarding intervention use and results (as opposed to the "Analyze Care Results" table below, which addresses 'lagging' indicators, e.g., clinical outcomes).



What to know/do Overview



Input Sources
  • See output repositories for Make Guidance Computable



Standards



Initiatives
  • C19 Digital Guidelines WG developing an implementation guide for COVID-19 interventions



Tools/ Platforms



Other Best Practices


Analyze Care Results (Within and Across Care Delivery Organizations)


General Recommendations | Anticoagulation | Testing/Triage | Other (Long COVID, Vaccine, Steroids)
Key Definitions and Frameworks
  • Includes gathering the data that will be analyzed
  • 'Results' include 'lagging indicators' such as clinical outcomes. Analyzing/addressing 'leading indicators' (e.g., process changes) is addressed under "Implement Guidance"



What to know/do Overview
  • Curate data into a structured, centralized resource - applying specific instructions about how to obtain and document the data (historically a manual process to locate information but becoming more automated)
    • Need semantic interoperability so that results can be aggregated/compared across CDOs; should be based on widely used, open data and standards
  • Put information into registry (institution-specific or cross-institution)
  • Generate report - to get feedback about effectiveness, safety, etc. of various interventions
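The curate → registry → report flow above can be sketched as follows, assuming records have already been mapped to shared codes so they aggregate across CDOs (field names and data are illustrative):

```python
# Sketch of aggregating registry rows into a cross-organization report.
# Row fields ("cdo", "intervention", "outcome") are hypothetical; real
# registries would use standardized terminologies for each field.
from collections import defaultdict

def build_report(registry_rows):
    """Count patients and recoveries per intervention across all
    contributing care delivery organizations (CDOs)."""
    report = defaultdict(lambda: {"n": 0, "recovered": 0})
    for row in registry_rows:
        entry = report[row["intervention"]]
        entry["n"] += 1
        entry["recovered"] += int(row["outcome"] == "recovered")
    return dict(report)

rows = [
    {"cdo": "Hospital A", "intervention": "dexamethasone", "outcome": "recovered"},
    {"cdo": "Hospital B", "intervention": "dexamethasone", "outcome": "died"},
]
print(build_report(rows)["dexamethasone"])  # → {'n': 2, 'recovered': 1}
```

The aggregation only works because both hospitals recorded the intervention and outcome the same way - which is exactly the semantic-interoperability requirement noted in the first bullet.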



Input Sources


Search Strategies



Output Repositories
  • Registries, e.g., from specialty societies. (discuss with Frank Opelka, ACS)



Standards
  • [under development] MCBK Standards WG Metadata work
  • standards that do (or could) underpin registries



Initiatives


Tools/ Platforms


Other Best Practices



Leverage Results Analysis (e.g., for Quality Improvement, Reporting, and Evidence Generation)


General Recommendations | Anticoagulation | Testing/Triage | Other (Long COVID, Vaccine, Steroids)
Key Definitions and Frameworks



What to know/do Overview
  • Use reports (see analyze care results)
  • Review reports (see analyze care results) with key stakeholders



Input Sources



Output Repositories



Standards


Initiatives


Tools/ Platforms



Other Best Practices


