
This page outlines a potential proof of concept demo being explored by ACTS Collaborative Learning Community participants to illustrate how making evidence and guidance more computable and standards-based could enhance the development and updating of 'living' CDS interventions and eCQMs for COVID-19 and beyond.

  • The figure in section A. is an overview of potential enhanced information flow around the Evidence/Quality Ecosystem Cycle - i.e., driven by more computable and standards-based evidence and guidance
  • The table in section B. outlines ecosystem enhancement needs and opportunities, as well as the notes on a potential concept demo toolkit (in the 4th table column) for addressing those needs 
  • The diagram in section C. illustrates where and how a concept demo toolkit that makes evidence and guidance more computable and standards-based could enhance the Evidence/Quality Ecosystem (and Learning Health System) Cycle
  • The email excerpt in section D. provides considerations related to using computable/standards-based evidence descriptions to make developing and updating computable clinical recommendations more efficient and effective. 

A. Evidence Ecosystem Enhancement - Overview Diagram (Enhanced Ecosystem Concept Demo Opportunity 8.26.20.pptx)

[DOC search hyperlink]



B. Ecosystem Needs, Enhancement Opportunities and Potential Concept Demo Outline


Table columns: Ecosystem Step | High Priority Enhancement Needs/Opportunities1 | Potential SRDR+/COKA-enabled Enhancements | Potential Stakeholder-driven Proof of Concept Demo (for Key Targets)2 | Other Notes/Comments

Process evidence

High Priority Enhancement Needs/Opportunities:
  • Quickly identify/select evidence pertinent to a topic (e.g., PICO-based inclusion criteria for a study)
  • Data extraction (e.g., results: numerators/denominators, aggregate measures) from studies is labor intensive and error prone
  • Identify research gaps that require additional attention

Potential SRDR+/COKA-enabled Enhancements:
  • Computable expressions for PICO criteria (work is now underway on the outcome definition component); if evidence has standardized PICO tags, it will be faster to identify/select evidence
  • Computable expressions for results (statistics); if evidence has standardized, structured results reported, it will be faster and more accurate to extract/upload data into a review authoring tool
  • If evidence is in a computable form, the nature of a research gap can be better understood and described (so it can be filled)

Potential Stakeholder-driven Proof of Concept Demo:
  • A team uses a pilot COKA-enabled tool to identify and apply COKA tags to all studies (previous and emerging) related to COVID-19 anticoagulation and triage [e.g., leverage Doc Search and other tools on the Evidence/Guidance CoP page to identify the pertinent evidence; explore use of AI to automate this tagging (Lisa Lang/NLM and Brian Alper/COKA have begun discussing this)]
  • EPCs (e.g., UMN for anticoagulation, ? others for other targets) use a concept demo COKA-enhanced version of SRDR+ to illustrate production of living systematic reviews
  • Systematic reviewers are proactively notified when there are new studies so that updates to the systematic reviews can be considered

Other Notes/Comments:
  • The Cochrane registry has PICO tags (as do other systems), but since these aren't standardized, information can be missed (searching Cochrane on 'diaper rash' may not find evidence tagged as 'nappy rash'; standard disease codes would address this)
  • SRDR has a FHIR-based expression of outcome; COKA has an outcome definition viewer coming soon. With SRDR-defined outcome tags and Cochrane-defined outcome tags mapped to the same standard, a search in one system can find evidence in the other system
  • Identify communities that might run a test to refine AI algorithms for this kind of tagging [Lisa Lang for more details]
  • Could start with simple, higher-level structures to get things rolling, then over time make the standards finer grained regarding PICO details
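The 'diaper rash'/'nappy rash' point above can be sketched in a few lines of Python. This is a hypothetical illustration only - the placeholder code value, study records, and search function are invented, not an SRDR+/COKA or Cochrane API:

```python
# Hypothetical sketch: each source system tags evidence with its own local
# term; mapping local terms to one shared standard code makes searches
# interoperable across systems. "COND-001" is a placeholder, not a real code.
TERM_TO_STANDARD_CODE = {
    "diaper rash": "COND-001",  # local term in one registry
    "nappy rash": "COND-001",   # same concept, different local term
}

STUDIES = [
    {"id": "study-1", "condition_term": "diaper rash"},
    {"id": "study-2", "condition_term": "nappy rash"},
]

def search_by_term(term: str) -> list[str]:
    """Return ids of studies whose condition maps to the same standard code."""
    code = TERM_TO_STANDARD_CODE.get(term.lower())
    if code is None:
        return []
    return [s["id"] for s in STUDIES
            if TERM_TO_STANDARD_CODE.get(s["condition_term"]) == code]
```

A literal text search on either term would find only one study; the code-mediated search finds both.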
Produce Living Guidance

High Priority Enhancement Needs/Opportunities:
  • Need to quickly/easily determine (e.g., within/across systematic reviews) judgements about quality of evidence and certainty of findings. This is problematic because different systematic reviews express these in different ways, making this critical information difficult to assess within and across reviews

Potential SRDR+/COKA-enabled Enhancements:
  • Computable expression for evidence certainty (certainty assessments and the reasons for these assessments)

Potential Stakeholder-driven Proof of Concept Demo:
  • Guideline developers (e.g., SCCM/ASH for anticoagulation, ACEP for ED triage, ? CDC for ambulatory triage) use a concept demo COKA-enabled tool to produce living, computable guidance (e.g., building on the type of functionality the AU Living Guidelines effort has implemented with MAGICapp - see the anticoagulation example)
  • Guideline developers are proactively notified when there's an update to systematic reviews so that updates to the guidance can be considered
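The "computable expression for evidence certainty" idea could be modeled minimally as a structured rating rather than free text. The sketch below assumes a GRADE-style model (a certainty level plus the standard GRADE reasons for rating down); the class and field names are illustrative, not an official SRDR+/COKA or FHIR schema:

```python
# Minimal sketch of a computable certainty-of-evidence rating, assuming a
# GRADE-style model. Names are illustrative, not a real schema.
from dataclasses import dataclass, field

GRADE_LEVELS = ("high", "moderate", "low", "very low")

# The GRADE domains that can lower certainty in a body of evidence
GRADE_DOWNGRADE_REASONS = {
    "risk of bias", "inconsistency", "indirectness",
    "imprecision", "publication bias",
}

@dataclass
class CertaintyRating:
    level: str                               # one of GRADE_LEVELS
    downgraded_for: list = field(default_factory=list)

    def __post_init__(self):
        if self.level not in GRADE_LEVELS:
            raise ValueError(f"Unrecognized certainty level: {self.level}")
        unknown = set(self.downgraded_for) - GRADE_DOWNGRADE_REASONS
        if unknown:
            raise ValueError(f"Unrecognized downgrade reasons: {unknown}")

rating = CertaintyRating(level="moderate", downgraded_for=["imprecision"])
```

Because the rating is structured and validated, certainty judgements could be compared mechanically within and across reviews, which is exactly the need identified above.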

Develop Computable Guidance (e.g., CDS/eCQMs, other computable process enablements and assessments)

Potential Stakeholder-driven Proof of Concept Demo:
  • C19HCC Agile Knowledge Engineering Teams use the pilot COKA-enabled tool that produces living, computable guidance to drive developing/updating of Computable Guidelines - and CDS/eCQMs derived from them - via connection to their CPG Template (soon to be migrated to the publicly accessible CPG on FHIR IG)
  • Teams are proactively notified when there's an update to the guidance so that updates to the CDS/eCQMs can be considered
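The proactive-notification pattern that recurs across these steps (new studies notify reviewers, updated reviews notify guideline developers, updated guidance notifies CDS/eCQM teams) amounts to comparing the upstream version an artifact was last reviewed against with the current upstream version. A minimal sketch, with made-up identifiers and a hypothetical version scheme:

```python
# Hypothetical sketch of the "proactive notification" idea: flag derived
# artifacts (CDS rules, eCQMs) whose source guidance has changed since the
# artifact was last reviewed. All identifiers/versions are illustrative.

def artifacts_needing_review(derived_artifacts, guidance_versions):
    """Return ids of artifacts whose source guidance changed since review."""
    stale = []
    for artifact in derived_artifacts:
        current = guidance_versions.get(artifact["source_guidance"])
        if current is not None and current != artifact["reviewed_version"]:
            stale.append(artifact["id"])
    return stale

# Current versions of upstream guidance (illustrative)
guidance_versions = {"anticoag-guideline": "2020-09"}

derived = [
    {"id": "cds-anticoag-1", "source_guidance": "anticoag-guideline",
     "reviewed_version": "2020-08"},   # reviewed against an older version
    {"id": "ecqm-anticoag-2", "source_guidance": "anticoag-guideline",
     "reviewed_version": "2020-09"},   # up to date
]
```

In practice this check could run on a schedule or be driven by publish events, but either way the core comparison is the same.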

Implement CDS/eCQMs

High Priority Enhancement Needs/Opportunities:
  • Care delivery participants need mechanisms to convey their priority needs for guidance/support to those who are producing that information
  • Implementers are challenged by technical and change management issues that often impede success in achieving QI goals

Potential Enhancements:
  • Leverage/enhance tools that help CDS/eCQM implementers address the technical and change management challenges they face in deploying these tools in ways that improve care team workflows/information flows/satisfaction and enhance care delivery and outcomes

Analyze/Use Care Results (report, produce evidence)

Potential Stakeholder-driven Proof of Concept Demo:
  • Those who produce evidence (e.g., study authors) capture data using standard PICO tags so that after-publication coding isn't required. Research funders (e.g., NIH, PCORI) could require this. Have ACEP pilot this with triage-related articles in JACEP?

Other Notes/Comments:
  • [Cautionary note: getting structure into journal articles (e.g., structured abstracts) has been challenging - and this level of standardization may be even more challenging]

Cross-cutting Issues



1 By those doing the work - e.g., EPCs, VA/UMN/Health Centers, Agile KE teams, NACHC/ACEP/EvidenceCare, many others

2 More information about 'building blocks' for creating components of this 'fantasy' (e.g., standards, sources for inputs/outputs, tools/methods platforms) is on the Community of Practice webpages (see navigation bar on the left side of this page), and in this emerging catalog from the COVID-END project

C. Overview Diagram for Proof of Concept Demo Toolkit (computable evidence slide.pptx)


D.  Excerpts from 9/4/20 email exchange about using computable/standards-based evidence descriptions to make developing and updating computable, evidence-based clinical recommendations more efficient/effective.


Adapted version of note from Jerry Osheroff:

Below is a small excerpt from the HL7 CPG on FHIR Draft Computable Guideline L2 Template (Recommendation tab) being used by the C19HCC Digital Guidelines WG. The question the Learning Community is exploring is whether/how the trajectory of SRDR/COKA efforts to provide computable, standards-based input and output from SRDR could at some point and in some way lead to auto-populating/updating this type of information in some future version of this template:


Template prompts:

  • Evidence supporting recommendation
  • Quality of Evidence
  • Relationship between Quality of Evidence and Strength of Recommendation
  • Build Evidence Table or reference Evidence Summary
  • Use GRADE or USPSTF


Evidence table columns: Condition | Study Design | Author, Year | N | Statistically Significant? | Quality of Study | Magnitude of Benefit | Absolute Risk Reduction | Number Needed to Treat | Comments
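If study results were captured in structured, computable form (numerators/denominators, as section B envisions), derived columns such as Absolute Risk Reduction and Number Needed to Treat could be computed rather than re-keyed. A minimal sketch using the standard formulas (ARR = control event rate - treatment event rate; NNT = 1/ARR), with made-up data:

```python
# ARR and NNT from structured result counts. Data values are illustrative.
from math import ceil, inf

def absolute_risk_reduction(ctrl_events, ctrl_n, trt_events, trt_n):
    """ARR = control event rate minus treatment event rate."""
    return ctrl_events / ctrl_n - trt_events / trt_n

def number_needed_to_treat(arr):
    """NNT = 1/ARR, conventionally rounded up; undefined (infinite) at ARR=0.
    A negative ARR (treatment harms) would need separate handling (NNH)."""
    return inf if arr == 0 else ceil(1 / arr)

# Illustrative structured result: 20/100 events in control, 10/100 in treatment
arr = absolute_risk_reduction(20, 100, 10, 100)   # ~0.10
nnt = number_needed_to_treat(arr)
```

With standardized, structured results in the evidence source, these two template columns (and N) could in principle be auto-populated, leaving human judgement for columns like Quality of Study and Comments.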
Response from Brian Alper (lead of EBM on FHIR and COKA):

The format for human expression can look very different from the format for computable expression. But if we can agree on a standard for computable expression, we can support a near-infinite set of patterns of human expression.

Some thoughts below to inform a computable expression of a "Recommendation"; I end with a link to a first draft for it.

As a recap of some of the concepts, to clarify recommendation vs. CDS artifact:

One of the challenges in defining L2/L3 and recommendation/CDS may be addressed by recognizing 2 different factors (Recommendation vs. Decision Rule, Digital vs. Computable):

  • A Recommendation can be an expression of what should be done – Flu vaccine is recommended for people who have not received a flu vaccine this season.
  • A Decision Rule can be an expression of the logic to be applied – If a person does not have a record of receiving a flu vaccine this season, then offer/provide a flu vaccine.
  • The Recommendation and Decision Rule can be applied in clinical practice completely using Print expressions.
  • The Recommendation and Decision Rule can be applied in clinical practice completely using Digital expressions. This sentence is shared with you now in a Digital expression in this email but is not a Computable expression of the concepts.
  • The Recommendation can be converted to a Computable expression – an L3 artifact that provides "Flu vaccine" as a codeable concept, "people who have not received a flu vaccine this season" as a codeable concept, and "is recommended for" as a codeable concept. This L3 artifact can be considered a CDS artifact, but it is not yet sufficient for immediate functional use in a specific CDS system.
  • The Decision Rule can be converted to a Computable expression – in addition to the codeable concepts in the Recommendation, additional codeable concepts to express include "have a record of", "offer/provide", and the "if…then" logic. This L3 artifact would also be considered a CDS artifact.
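The distinction above can be made concrete as data. The sketch below uses plain Python dicts, not actual FHIR resources, and all codes, field names, and identifiers are invented for illustration: the Recommendation is three codeable concepts, while the Decision Rule adds the record-check condition and the offer/provide action:

```python
# Illustrative only: not real FHIR structures or real terminology codes.
recommendation = {
    "action": {"text": "Flu vaccine", "code": "RX-FLU"},            # what
    "relationship": {"text": "is recommended for", "code": "REC"},  # stance
    "population": {                                                 # for whom
        "text": "people who have not received a flu vaccine this season",
        "code": "POP-NOFLU",
    },
}

# The Decision Rule adds the if…then logic and the concrete behavior.
decision_rule = {
    "if": {"text": "no record of flu vaccine this season",
           "code": "HAS-RECORD", "negated": True},
    "then": {"text": "offer/provide flu vaccine", "code": "OFFER-RX-FLU"},
    "derived_from": recommendation,
}

def applies(rule, patient):
    """Evaluate the toy condition against a toy patient record."""
    has_record = "flu-vaccine-2020" in patient.get("immunizations", [])
    return has_record == (not rule["if"]["negated"])
```

The Recommendation alone carries no executable logic; only the Decision Rule can drive behavior in a CDS system, which is why the extra concepts ("have a record of", "offer/provide", the if…then) are needed.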

The original goal of the EBMonFHIR project was to provide for computable expressions of evidence and recommendations. With the CDC ACQ Informatics Value Stream effort focused on converting guidelines to CDS artifacts, a companion CPGonFHIR project developed. It appears that CPGonFHIR expresses Decision Rules in computable form and that the PlanDefinition Resource expresses the action(s) in computable form. We have been discussing shared use of the Group and EvidenceVariable Resources, which can express parts of the "when recommended" concepts in computable form. However, there is not yet a specific resource for the Recommendation in computable form that can be used prior to creating the Decision Rule derived from that Recommendation.

Working off of what we have learned from the evidence-related Resources and the PlanDefinition Resource, I have created a first draft of a Recommendation Resource to bridge this gap.

Response from Davide Sottara:

Brian, I generally agree with your distinctions on Decision vs Recommendation, and Digital vs Computable.

Yet, I would like to understand how far you envision an "L3 Recommendation" going.

A "computable, structured" Recommendation would enable a CDS system to reason over the Recommendation - e.g., to distinguish the different actions that are recommended, evaluate applicability conditions, and afford more contextual delivery.
Yet, I believe the current PlanDefinition may be able to express this notion. It's already so polymorphic in nature that adding 'mood' extensions and profiles for confidence, certainty, and strength may be enough (@Bryn?).

Yet - from a formal knowledge representation & reasoning perspective - there are more aspects to explore. As 'Recommendations' convey an agent's proposal to close a gap between a current state and a perceived goal state, they can be considered plan fragments, with "mood(al)", deontic, and speech act aspects, and they can have elements of belief, confidence, and evidence (explanation).

These aspects, which allow reasoning with formal Recommendations, require capabilities beyond 'inferences' and 'ECA rules'.

That is, the more we increase the expressivity with additional resources, the more we need to provide tools and guidance on how to correctly use them - and not only to exchange information.

So: (1) do we need/want a new Resource or a new Profile? (2) What are the computational implications of the new resource? (3) Can we standardize the pragmatics of reasoning with the resource, or should we leave that to the implementers?

Response from Bryn Rhodes:

I don't quite agree with the opening statement of the document. I would say there _is_ a resource that represents a computable recommendation, it's PlanDefinition, and there is even a profile in CPG-on-FHIR called cpg-recommendationdefinition, that's exactly what we're trying to capture there. If there are gaps between what's there and what you are looking for, I'd like to understand what those are.

In short, I don't think we need a new resource here, or at least I don't see what the gaps are that would require it.

Response from Brian Alper:

I don’t necessarily want to create a new Recommendation Resource if a Recommendation Profile (of PlanDefinition Resource) or other form meets the need.

I have been getting asked by more people to extend the EBMonFHIR efforts to provide computable expressions of the “Recommendation” concept on the “L2 to L3 path” for Guideline-to-CDS translation and was not sure what is being requested vs. what is already covered in CPGonFHIR efforts.

My quick take was to distinguish Recommendation from Decision Rule and see what is missing for the computable expression (precise, unambiguous, machine-interpretable expression) of the Recommendation component. On a quick view of the PlanDefinition Resource, the "action(s)" appear really well specified, so it appears the "whenToAct" specification is what is not easily translated from the guideline recommendation statement to the later expression. If there is an easy path for doing that, let's use it. If there are adjustments that would make the path easy, let's make them. If not, then perhaps a Recommendation Profile is a way to make this easier.



