This page outlines a potential proof of concept demo being explored by ACTS Collaborative Learning Community participants to illustrate how making evidence and guidance more computable and standards-based could enhance the development and updating of 'living' CDS interventions and eCQMs for COVID-19 and beyond.

A. Evidence Ecosystem Enhancement - Overview Diagram (Enhanced Ecosystem Concept Demo Opportunity 8.26.20.pptx)

[DOC search hyperlink]



B. Ecosystem Needs, Enhancement Opportunities and Potential Concept Demo Outline


Table columns: Ecosystem Step | High Priority Enhancement Needs/Opportunities1 | Potential SRDR+/COKA-enabled Enhancements | Potential Stakeholder-driven Proof of Concept Demo (for Key Targets)2 | Other Notes/Comments

Process evidence

High Priority Enhancement Needs/Opportunities:
  • Quickly identify/select evidence pertinent to a topic (e.g., PICO-based inclusion criteria for a study)
  • Data extraction from studies (e.g., results: numerators/denominators, aggregate measures) is labor intensive and error prone
  • Identify research gaps that require additional attention

Potential SRDR+/COKA-enabled Enhancements:
  • Computable expressions for PICO criteria (work is currently focused on the outcome definition component); if evidence carries standardized PICO tags, it will be faster to identify/select.
  • Computable expressions for results (statistics); if evidence reports standardized, structured results, it will be faster and more accurate to extract/upload data into a review authoring tool.
  • If evidence is in a computable form, the nature of a research gap can be better understood and described (so it can be filled).

Potential Stakeholder-driven Proof of Concept Demo:
  • A team (e.g., at NLM?) uses a pilot COKA-enabled tool to identify and apply COKA tags to all studies (previous and emerging) related to COVID-19 and anticoagulation/triage [e.g., leverage Doc Search and other tools on the Evidence/Guidance CoP page to identify the pertinent evidence]
  • EPCs (e.g., UMN for anticoagulation; others for other targets?) use a pilot COKA-enhanced version of SRDR+ to produce living systematic reviews.
  • Systematic reviewers are proactively notified when there are new studies so that updates to the systematic reviews can be considered.

Other Notes/Comments:
  • The Cochrane registry has PICO tags (as do other systems), but since these aren't standardized, information can be missed (searching Cochrane on 'diaper rash' may not find evidence tagged as 'nappy rash'; standard disease codes would address this).
  • SRDR has a FHIR-based expression of outcomes, and COKA has an outcome definition viewer coming soon. With SRDR-defined outcome tags and Cochrane-defined outcome tags mapped to the same standard, a search in one system can find evidence in the other.
  • Identify communities that might run a test to refine AI algorithms for applying these kinds of tags [contact Lisa Lang for more details].
  • Could start with simple, higher-level structures to get things rolling, then over time make the standards finer grained regarding PICO details.
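The 'diaper rash'/'nappy rash' point above can be sketched in a few lines: if each system maps its local condition label to the same standard code, a code-based search finds evidence regardless of the label. The code value below is a made-up placeholder, not a real terminology code, and the record shapes are illustrative.

```python
# Hedged sketch: two registries tag the same condition with different local
# labels ('diaper rash' vs. 'nappy rash') but share one standard code.
# "COND-0001" is a placeholder, not a real disease code.
STANDARD_CODE = "COND-0001"

srdr_records = [
    {"id": "study-1", "condition_text": "diaper rash", "condition_code": STANDARD_CODE},
]
cochrane_records = [
    {"id": "rev-9", "condition_text": "nappy rash", "condition_code": STANDARD_CODE},
]

def search_by_code(code, *registries):
    """Return every record, from any registry, tagged with the given standard code."""
    return [r for reg in registries for r in reg if r["condition_code"] == code]

hits = search_by_code(STANDARD_CODE, srdr_records, cochrane_records)
# A text search for 'diaper rash' would miss rev-9; the code search finds both.
```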
Produce Living Guidance

High Priority Enhancement Needs/Opportunities:
  • Need to quickly/easily determine (within and across systematic reviews) judgements about quality of evidence and certainty of findings. This is problematic because different systematic reviews express these in different ways, making this critical information difficult to assess within and across reviews.

Potential SRDR+/COKA-enabled Enhancements:
  • Computable expression for evidence certainty (certainty assessments and the reasons for those assessments).

Potential Stakeholder-driven Proof of Concept Demo:
  • Guideline developers (e.g., SCCM/ASH for anticoagulation, ACEP for ED triage, CDC for ambulatory triage?) use a pilot COKA-enabled tool to produce living, computable guidance (e.g., building on the type of functionality the AU Living Guidelines effort has implemented with MAGICapp - see the anticoagulation example).
  • Guideline developers are proactively notified when there's an update to systematic reviews so that updates to the guidance can be considered.
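A computable certainty expression could be as simple as a coded rating plus the reasons behind it. The sketch below uses the four GRADE levels; the field names are illustrative assumptions for discussion, not a finalized standard.

```python
# Minimal sketch of a machine-readable certainty assessment using
# GRADE's four levels. Field names are illustrative, not a standard.
GRADE_LEVELS = {"high", "moderate", "low", "very-low"}

def make_certainty(rating, reasons):
    """Build a certainty record carrying both the rating and the reasons for it."""
    if rating not in GRADE_LEVELS:
        raise ValueError(f"unknown GRADE rating: {rating!r}")
    return {"rating": rating, "reasons": list(reasons)}

# With this shape, a guideline tool could compare certainty across
# systematic reviews without parsing free-text prose:
certainty = make_certainty("moderate", ["risk of bias", "imprecision"])
```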

Develop Computable Guidance (e.g., CDS/eCQMs, other computable process enablements and assessments)

Potential Stakeholder-driven Proof of Concept Demo:
  • C19HCC Agile Knowledge Engineering Teams use the pilot COKA-enabled tool that produces living, computable guidance to drive developing/updating of Computable Guidelines - and CDS/eCQMs derived from them - via connection to their CPG Template.
  • Teams are proactively notified when there's an update to the guidance so that updates to the CDS/eCQMs can be considered.
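The "proactively notified" pattern that recurs across these rows can be sketched as a minimal subscription mechanism: downstream teams register interest in an artifact and are called back when it changes. All names here are illustrative, not an existing API.

```python
from collections import defaultdict

# Minimal publish/subscribe sketch for update notifications (illustrative).
subscribers = defaultdict(list)

def subscribe(artifact_id, callback):
    """Register a callback to run whenever the artifact is updated."""
    subscribers[artifact_id].append(callback)

def publish_update(artifact_id, change_note):
    """Notify every subscriber that the artifact changed."""
    for cb in subscribers[artifact_id]:
        cb(artifact_id, change_note)

# A knowledge-engineering team subscribes to a guideline; when the
# guideline updates, the team learns it should reconsider its CDS/eCQMs.
received = []
subscribe("guideline/anticoag", lambda aid, note: received.append((aid, note)))
publish_update("guideline/anticoag", "systematic review updated")
```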

Implement CDS/eCQMs

High Priority Enhancement Needs/Opportunities:
  • Care delivery participants need mechanisms to convey their priority needs - the areas where they need guidance/support - to those who are producing that information.



Analyze/Use Care Results (report, produce evidence)

Potential Stakeholder-driven Proof of Concept Demo:
  • Those who provide evidence (e.g., study authors) capture data using standard PICO tags so that after-publication coding isn't required. Research funders (e.g., NIH, PCORI) could require this. Have ACEP pilot this with triage-related articles in JACEP?

Other Notes/Comments:
  • [Cautionary note: getting structure into journal articles (e.g., structured abstracts) has been challenging - perhaps even more so for this level of standardization.]

Cross-cutting Issues



1By those doing the work - e.g., EPCs, VA/UMN/Health Centers, Agile KE teams, NACHC/ACEP/EvidenceCare, many others

2More information about 'building blocks' for creating components of this 'fantasy' (e.g., standards, sources for inputs/outputs, tools/methods platforms) is on the Community of Practice webpages (see navigation bar left side of this page), and in this emerging catalog from the COVID-END project

C. Overview Diagram for Proof of Concept Demo Toolkit (computable evidence slide.pptx)


D.  Excerpts from 9/4/20 email exchange about using computable/standards-based evidence descriptions to make developing and updating computable, evidence-based clinical recommendations more efficient/effective.


Adapted version of note from Jerry Osheroff:

Below is a small excerpt from the HL7 CPG on FHIR Draft Computable Guideline L2 Template (Recommendation tab) being used by the C19HCC Digital Guidelines WG. The question the Learning Community is exploring is whether/how the trajectory of SRDR/COKA efforts to provide computable, standards-based input and output from SRDR could at some point, and in some way, lead to auto-populating/updating this type of information in a future version of this template:


Evidence supporting recommendation:

Quality of Evidence:

Relationship between Quality of Evidence - Strength of Recommendation:

Build Evidence Table or reference Evidence Summary

Use GRADE or USPSTF


Evidence table columns: Condition | Study Design | Author, Year | N | Statistically Significant? | Quality of Study | Magnitude of Benefit | Absolute Risk Reduction | Number Needed to Treat | Comments

(Rows are left blank in the template.)

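Two of the columns above fall out directly from structured results: if studies report numerators and denominators in computable form (as discussed in section B), absolute risk reduction is the control event rate minus the treatment event rate, and number needed to treat is its reciprocal. A minimal sketch:

```python
import math

def arr_and_nnt(ctrl_events, ctrl_n, trt_events, trt_n):
    """Absolute risk reduction and number needed to treat from raw counts,
    as a structured-results extraction tool might compute them."""
    arr = ctrl_events / ctrl_n - trt_events / trt_n
    nnt = math.inf if arr == 0 else 1 / arr
    return arr, nnt

# Hypothetical counts: 20/100 events in control, 10/100 under treatment.
arr, nnt = arr_and_nnt(ctrl_events=20, ctrl_n=100, trt_events=10, trt_n=100)
# ARR = 0.20 - 0.10 = 0.10, so NNT = 1 / 0.10 = 10
```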
Response from Brian Alper (lead of EBM on FHIR and COKA):

The format for human expression can look very different from the format for computable expression. But if we can agree on a standard for computable expression, we can support a near-infinite set of patterns of human expression.

Some thoughts below to inform a computable expression of a “Recommendation” and I end with a link to a first draft for it.

As a recap of some of the concepts that clarify recommendation vs. CDS artifact:

One of the challenges in defining L2/L3 and recommendation/CDS can be clarified by recognizing two distinct dimensions (Recommendation vs. Decision Rule, and Digital vs. Computable):

The original goal of the EBMonFHIR project was to provide for computable expressions of evidence and recommendations. With the CDC ACG Informatics Value Stream effort focused on converting guidelines to CDS artifacts, a companion CPGonFHIR project developed. It appears that CPGonFHIR expresses Decision Rules in computable form, and the PlanDefinition Resource expresses the action(s) in computable form. We have been discussing shared use of the Group and EvidenceVariable Resources, which can express parts of the "when recommended" concepts in computable form. However, there is not yet a specific resource for the Recommendation in computable form that can be used prior to creating the Decision Rule derived from that Recommendation.

Working off of what we have learned from the evidence-related Resources and the PlanDefinition Resource, I have created a first draft of a Recommendation Resource to bridge this gap.
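For discussion purposes only, a Recommendation resource bridging the evidence Resources and a derived Decision Rule might carry fields like the following. This is not the actual draft referenced above; every field name and reference here is a hypothetical sketch.

```python
# Hypothetical, illustrative shape only - NOT the actual draft Resource.
# It ties a recommendation statement to its supporting evidence, its
# GRADE-style certainty, and the Decision Rule later derived from it.
recommendation = {
    "resourceType": "Recommendation",   # hypothetical resource name
    "statement": "Consider prophylactic anticoagulation for hospitalized patients",
    "strength": "conditional",          # e.g., strong | conditional
    "direction": "for",                 # for | against
    "certaintyOfEvidence": "moderate",  # GRADE-style rating
    "population": {"reference": "Group/example-population"},   # who it applies to
    "basedOn": [{"reference": "Evidence/example-evidence"}],   # supporting evidence
    # A Decision Rule (PlanDefinition) would be derived from this later:
    "derivedDecisionRule": {"reference": "PlanDefinition/example-rule"},
}
```

The point of the sketch is the ordering: the Recommendation exists, with its evidence linkage and strength, before any Decision Rule is authored from it.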

Response from Davide Sottara:

Brian, I generally agree with your distinctions on Decision vs Recommendation, and Digital vs Computable.

Yet, I would like to understand how far you envision an "L3 Recommendation" going.

A "computable, structured" Recommendation would enable a CDS system to reason over the Recommendation,
and e.g. allow to distinguish the different actions that are recommended, evaluate applicability conditions, and afford for a more
contextual delivery. 
Yet, I believe that the current PlanDefinition may be able to express this notion. It's already so polymorphic in nature that
adding 'mood' extensions and profiles for confidence, certainty, and strength may be enough(@Bryn?) 

Yet - from a formal knowledge representation & reasoning perspective - there are more aspects to explore. As 'Recommendations' convey an agent's proposal to close a gap between a current state and a perceived goal state, they can be considered plan fragments, with "mood(al)", deontic, and speech act aspects, and they can have elements of belief, confidence, and evidence (explanation).

These aspects, which allow reasoning with formal Recommendations, require capabilities beyond 'inferences' and 'ECA rules'.

That is, the more we increase expressivity with additional resources, the more we need to provide tools and guidance on how to use them correctly - not only to exchange information.

So:
  (1) do we need/want a new Resource, or a new Profile?
  (2) what are the computational implications of the new resource?
  (3) can we standardize the pragmatics of reasoning with the resource, or leave it to the implementers?