This page outlines a potential proof of concept demo being explored by ACTS Collaborative Learning Community participants to illustrate how making evidence and guidance more computable and standards-based could enhance the development and updating of 'living' CDS interventions and eCQMs for COVID-19 and beyond.

A. Evidence Ecosystem Enhancement - Overview Diagram (Enhanced Ecosystem Concept Demo Opportunity 8.26.20.pptx)

[DOC search hyperlink]



B. Ecosystem Needs, Enhancement Opportunities and Potential Concept Demo Outline


Table columns: Ecosystem Step | High Priority Enhancement Needs/Opportunities¹ | Potential SRDR+/COKA-enabled Enhancements | Potential Stakeholder-driven Proof of Concept Demo (for Key Targets)² | Other Notes/Comments

Process evidence

Needs/Opportunities:
  • Quickly identify/select evidence pertinent to a topic (e.g., PICO-based inclusion criteria for a study)
  • Data extraction from studies (e.g., results: numerators/denominators, aggregate measures) is labor intensive and error prone
  • Identify research gaps that require additional attention

SRDR+/COKA-enabled Enhancements:
  • Computable expressions for PICO criteria (work is now underway on the outcome definition component); if evidence carries standardized PICO tags, it will be faster to identify/select.
  • Computable expressions for results (statistics); if evidence reports standardized, structured results, it will be faster and more accurate to extract/upload data into a review authoring tool.
  • If evidence is in computable form, the nature of a research gap can be better understood and described (so it can be filled).

Proof of Concept Demo:
  • A team uses a pilot COKA-enabled tool to identify and apply COKA tags to all studies (previous and emerging) related to COVID-19 anticoagulation and triage [e.g., leverage DOC Search and other tools on the Evidence/Guidance CoP page to identify pertinent evidence; explore use of AI to automate this tagging (Lisa Lang/NLM and Brian Alper/COKA have begun discussing this)].
  • EPCs (e.g., UMN for anticoagulation; possibly others for other targets) use a concept demo COKA-enhanced version of SRDR+ to illustrate production of living systematic reviews.
  • Systematic reviewers are proactively notified when there are new studies so that updates to the systematic reviews can be considered.

Notes/Comments:
  • The Cochrane registry has PICO tags (as do other systems), but because these aren't standardized, information can be missed (searching Cochrane on 'diaper rash' may not find evidence tagged as 'nappy rash'; standard disease codes would address this).
  • SRDR has a FHIR-based expression of outcome, and COKA has an outcome definition viewer coming soon. With SRDR-defined outcome tags and Cochrane-defined outcome tags mapped to the same standard, a search in one system can find evidence in the other.
  • Identify communities that might run a test to refine AI algorithms for this kind of tagging [Lisa Lang for more details].
  • Could start with simple, higher-level structures to get things rolling, then over time make the standards finer grained regarding PICO details.
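The 'diaper rash'/'nappy rash' point above can be sketched in a few lines: if each system's local PICO tags are mapped to a shared standard disease code, a search in one system can match evidence tagged in another. The mapping table, codes, and function names below are hypothetical illustrations, not part of any SRDR+/COKA tool.

```python
from typing import Optional

# Hypothetical mappings from system-specific PICO tags to a standard disease
# code. The code values here are illustrative placeholders, not verified codes.
LOCAL_TAG_TO_STANDARD = {
    ("cochrane", "nappy rash"): "C0012582",
    ("srdr", "diaper rash"): "C0012582",
    ("srdr", "atrial fibrillation"): "C0004238",
}

def standard_code(system: str, local_tag: str) -> Optional[str]:
    """Resolve a system-specific PICO tag to its standard code, if mapped."""
    return LOCAL_TAG_TO_STANDARD.get((system, local_tag.lower()))

def cross_system_match(sys_a: str, tag_a: str, sys_b: str, tag_b: str) -> bool:
    """True when two differently worded tags denote the same standard concept."""
    code_a, code_b = standard_code(sys_a, tag_a), standard_code(sys_b, tag_b)
    return code_a is not None and code_a == code_b

# A Cochrane 'nappy rash' tag matches an SRDR 'diaper rash' tag:
print(cross_system_match("cochrane", "nappy rash", "srdr", "diaper rash"))  # True
```

The same lookup is what an AI tagger would need to populate: it only has to emit the standard code once per study, after which every downstream search benefits.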
Produce Living Guidance

Needs/Opportunities:
  • Need to quickly/easily determine (e.g., within and across systematic reviews) judgements about the quality of evidence and certainty of findings. Different systematic reviews express these in different ways, making this critical information difficult to assess within and across reviews.

SRDR+/COKA-enabled Enhancements:
  • Computable expression for evidence certainty (certainty assessments and the reasons for them).

Proof of Concept Demo:
  • Guideline developers (e.g., SCCM/ASH for anticoagulation, ACEP for ED triage, possibly CDC for ambulatory triage) use a concept demo COKA-enabled tool to produce living, computable guidance (e.g., building on the type of functionality the AU Living Guidelines have implemented with MAGICapp; see the anticoagulation example; consider synergies with the WHO living guideline on drugs for COVID-19).
  • Guideline developers are proactively notified when a systematic review is updated so that updates to the guidance can be considered.
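The idea of a computable certainty expression can be illustrated with a minimal sketch: if GRADE-style ratings are recorded as coded, ordered values (with reasons) rather than free text, certainty becomes comparable within and across reviews, and changes can be detected automatically. The data shapes and function names below are assumptions for illustration, not the actual SRDR+/COKA schema.

```python
# Ordered GRADE certainty levels, so ratings can be compared, not just read.
GRADE_ORDER = {"very-low": 0, "low": 1, "moderate": 2, "high": 3}

def certainty_changed(old: dict, new: dict) -> bool:
    """Flag a change in the certainty rating for the same outcome."""
    return old["outcome"] == new["outcome"] and old["rating"] != new["rating"]

def certainty_increased(old: dict, new: dict) -> bool:
    """True when the update raised the certainty rating."""
    return GRADE_ORDER[new["rating"]] > GRADE_ORDER[old["rating"]]

# The same (hypothetical) outcome before and after a review update:
before = {"outcome": "mortality", "rating": "low",
          "reasons": ["risk of bias", "imprecision"]}
after = {"outcome": "mortality", "rating": "moderate",
         "reasons": ["imprecision"]}

print(certainty_changed(before, after))    # True
print(certainty_increased(before, after))  # True
```

Because the reasons are structured too, a guideline developer can see not only that certainty rose but that the "risk of bias" concern was resolved.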

Develop Computable Guidance (e.g., CDS/eCQMs, other computable process enablements and assessments)

Proof of Concept Demo:
  • C19HCC Agile Knowledge Engineering Teams use the pilot COKA-enabled tool that produces living, computable guidance to drive development/updating of Computable Guidelines, and the CDS/eCQMs derived from them, via connection to their CPG Template (soon to be migrated to the publicly accessible CPG on FHIR IG).
  • Teams are proactively notified when the guidance is updated so that updates to the CDS/eCQMs can be considered.

Implement CDS/eCQMs

Needs/Opportunities:
  • Care delivery participants need mechanisms to convey their priority needs for guidance/support to those producing that information.
  • Implementers are challenged by technical and change management issues that often impede success in achieving QI goals.

Proof of Concept Demo:
  • Leverage/enhance tools that help CDS/eCQM implementers address the technical and change management challenges they face in deploying these tools, in ways that improve care team workflows, information flows, and satisfaction, and enhance care delivery and outcomes.

Analyze/Use Care Results (report, produce evidence)

Proof of Concept Demo:
  • Those who provide evidence (e.g., study authors) capture data using standard PICO tags so that after-publication coding isn't required. Research funders (NIH, PCORI) could require this. Have ACEP pilot this with triage-related articles in JACEP?

Notes/Comments:
  • [Cautionary note: getting structure into journal articles (e.g., structured abstracts) has been challenging; this level of standardization may prove even more challenging.]

Cross-cutting Issues



¹ By those doing the work (e.g., EPCs, VA/UMN/Health Centers, Agile KE teams, NACHC/ACEP/EvidenceCare, and many others).

² More information about 'building blocks' for creating components of this 'fantasy' (e.g., standards, sources for inputs/outputs, tools/methods platforms) is on the Community of Practice webpages (see the navigation bar on the left side of this page) and in this emerging catalog from the COVID-END project.

C. Overview Diagram for Proof of Concept Demo Toolkit (computable evidence slide.pptx)

D. More Details on Knowledge Supply Chain Enhancements 

D.1: UNVETTED DRAFT Notes on a Near-term Approach for Propagating Down the Knowledge Supply Chain Notifications about Impactful Updates  

Goal:

Illustrate how, for a sample target (anticoagulation and/or COVID-19 testing/triage), we can signal to care teams (through 'living' CDS interventions) when a change in the evidence-review-guidance supply chain content for that target indicates either (a) a change in recommended care or (b) a change in the strength of the evidence/guidance supporting a recommendation. (The latter is important so the new information can be factored into patient-clinician shared decision making.)
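A minimal sketch of the two signal types described in the goal (assumed data shapes and names; not an actual implementation): a 'living' CDS service comparing the old and new supply-chain content for a target could distinguish a change in recommended care from a change in the strength of the supporting evidence/guidance.

```python
from typing import Optional

def classify_update(old: dict, new: dict) -> Optional[str]:
    """Return 'care-change', 'strength-change', or None for a target's update."""
    if old["recommended_action"] != new["recommended_action"]:
        return "care-change"        # recommended care itself changed
    if old["strength"] != new["strength"]:
        return "strength-change"    # only the supporting strength changed
    return None                     # no signal needed

# Hypothetical before/after snapshots for the anticoagulation target:
old = {"target": "anticoagulation", "recommended_action": "prophylactic dose",
       "strength": "conditional"}
new = {"target": "anticoagulation", "recommended_action": "prophylactic dose",
       "strength": "strong"}

print(classify_update(old, new))  # strength-change
```

A 'care-change' would update the CDS intervention itself; a 'strength-change' would update the supporting material shown for shared decision making.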

Approach:


Sampling of External Sources to Check for Updated Evidence/Guidance on Targets


D.2: Notes on a More Comprehensive Proof of Concept Software Toolset 

The 4 proof of concept tools and related repository outlined below can be placed on an open source development website for public dissemination. Content development for these tools will be driven by Collaborative participants' current efforts, e.g., those focused on COVID-19 testing/triage in ambulatory and ED settings and on anticoagulation in inpatient settings.



Tool 1: Create/Store/Access Computable Study Results Representation   


Tool 2: Create/Store/Access Computable Systematic Review Representation


Tool 3: Create/Store/Access Computable Rationale for Guidance

Excerpt from Knowledge DRAFT Elicitation Tool:

[intervening portions omitted]


Tool 4: Identify/Store/Access Terminology for Computable Recommendation Definition

Excerpt from Knowledge DRAFT Elicitation Tool: 

E.  Excerpts from 9/4/20 email exchange about using computable/standards-based evidence descriptions to make developing and updating computable, evidence-based clinical recommendations more efficient/effective.


Adapted version of note from Jerry Osheroff:

Below is a small excerpt from the HL7 CPG on FHIR Draft Computable Guideline L2 Template (Recommendation tab) being used by the C19HCC Digital Guidelines WG. The question the Learning Community is exploring is whether/how the trajectory of SRDR/COKA efforts to provide computable, standards-based input to and output from SRDR could, at some point and in some way, lead to auto-populating/updating this type of information in a future version of this template:


  • Evidence supporting recommendation:
  • Quality of Evidence:
  • Relationship between Quality of Evidence and Strength of Recommendation:
  • Build Evidence Table or reference Evidence Summary
  • Use GRADE or USPSTF


Evidence table columns: Condition | Study Design | Author, Year | N | Statistically Significant? | Quality of Study | Magnitude of Benefit | Absolute Risk Reduction | Number Needed to Treat | Comments

Response from Brian Alper (lead of EBM on FHIR and COKA) - [the rest of the back and forth below deals with an important issue that's downstream from the issue of computable evidence]:

The format for human expression can look very different from the format for computable expression. But if we can agree on a standard for computable expression, we can support a near-infinite set of patterns of human expression.

Some thoughts below to inform a computable expression of a “Recommendation” and I end with a link to a first draft for it.

As a recap to some of the concepts to clarify recommendation vs. CDS artifact:

One of the challenges in defining L2/L3 and recommendation/CDS may be recognized along two different axes (Recommendation vs. Decision Rule, Digital vs. Computable):

The original goal of the EBMonFHIR project was to provide for computable expressions of evidence and recommendations. With the CDC ACQ Informatics Value Stream effort focused on converting guidelines to CDS artifacts, a companion CPGonFHIR project was developed. It appears that CPGonFHIR expresses Decision Rules in computable form and that the PlanDefinition Resource expresses the action(s) in computable form. We have been discussing shared use of the Group and EvidenceVariable Resources, which can express parts of the "when recommended" concepts in computable form. However, there is not yet a specific resource for the Recommendation in computable form that can be used prior to creating the Decision Rule derived from that Recommendation.

Working off of what we have learned from the evidence-related Resources and the PlanDefinition Resource, I have created a first draft of a Recommendation Resource to bridge this gap.

Response from Davide Sottara:

Brian, I generally agree with your distinctions on Decision vs Recommendation, and Digital vs Computable.

Yet, I would like to understand how far you envision an "L3 Recommendation" going.

A "computable, structured" Recommendation would enable a CDS system to reason over the Recommendation and, e.g., allow it to distinguish the different actions that are recommended, evaluate applicability conditions, and afford a more contextual delivery. Yet, I believe that the current PlanDefinition may be able to express this notion. It's already so polymorphic in nature that adding 'mood' extensions and profiles for confidence, certainty, and strength may be enough (@Bryn?).

Yet, from a formal knowledge representation & reasoning perspective, there are more aspects to explore. As 'Recommendations' convey an agent's proposal to close a gap between a current state and a perceived goal state, they can be considered plan fragments, with "mood(al)", deontic, and speech act aspects, and they can have elements of belief, confidence, and evidence (explanation).

These aspects, which allow reasoning with formal Recommendations, require capabilities beyond 'inferences' and 'ECA rules'.

That is, the more we increase the expressivity with additional resources, the more we need to provide tools and guidance on how to correctly use them, and not only to exchange information.

So: (1) do we need/want a new Resource or a new Profile; (2) what are the computational implications of the new resource; (3) can we standardize the pragmatics of reasoning with the resource, or leave it to the implementers?

Response from Bryn Rhodes:

I don't quite agree with the opening statement of the document. I would say there _is_ a resource that represents a computable recommendation, it's PlanDefinition, and there is even a profile in CPG-on-FHIR called cpg-recommendationdefinition, that's exactly what we're trying to capture there. If there are gaps between what's there and what you are looking for, I'd like to understand what those are.

In short, I don't think we need a new resource here, or at least I don't see what the gaps are that would require it.

Response from Brian Alper:

I don’t necessarily want to create a new Recommendation Resource if a Recommendation Profile (of PlanDefinition Resource) or other form meets the need.

I have been getting asked by more people to extend the EBMonFHIR efforts to provide computable expressions of the “Recommendation” concept on the “L2 to L3 path” for Guideline-to-CDS translation and was not sure what is being requested vs. what is already covered in CPGonFHIR efforts.

My quick take was to distinguish Recommendation from Decision Rule and see what is missing for the computable expression (precise, unambiguous, machine-interpretable expression) of the Recommendation component. On a quick view of the PlanDefinition Resource, the "action(s)" appear really well specified, so it appears the "whenToAct" specification is what is not easily translated from the guideline recommendation statement to the later expression. If there is an easy path for doing that, let's use it. If there are adjustments that would make the path easy, let's make them. If not, then perhaps a Recommendation Profile is a way to make this easier.


Response [Excerpt] from Matt Burton (Lead for C19HCC Digital Guideline WG):

The HL7 CPG IG is not first and foremost CDS; it's very much intended to be a computer-interpretable expression of the guideline itself, then with:

1) the means to derive highly related artifacts, such as ECA-Rule-based CDS; patient-specific, practice-level metrics that *may* be rolled up into Quality Measures; and eCaseReports that provide all the detail of guideline 'execution' at the patient level (when a patient met criteria for a recommendation, when CDS notified the clinician of the proposed action, when/if clinicians took said action [order/request], when said proposal or request was fulfilled [whether there is evidence of a request or not], any desired metrics that were captured and how they may have evolved over time [e.g. "on/off path", "on with a history of off"], and any provider Impressions that may qualify any of the aforementioned); and

2) lots of human-consumable narrative on how to implement an HL7 CPG across its entire lifecycle (from working with Guideline Developers & the Evidence Ecosystem to working with local practices and their informatics and EHR teams, and numerous other ecosystem touchpoints in between, not the least of which is getting the data semantics as applied in point-of-care clinical information systems nailed down, not just a bunch of terms hurled at EHRs).

Not sure if it is the name, the specification, how we have presented it to date, or some other factor, but the point oft missed is that the HL7 Clinical Practice Guideline Implementation Guide describes the process and patterns that afford the means to computationally express the intent of the guideline.
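For readers following the exchange above, a rough illustration of the kind of artifact under discussion: a minimal PlanDefinition instance expressing a recommendation, loosely in the spirit of the CPG-on-FHIR cpg-recommendationdefinition profile Bryn mentions. All concrete values (id, expression name, profile URL, text) are hypothetical; consult the CPG-on-FHIR IG for the authoritative profile.

```python
import json

# Rough illustration only, not taken from the IG: a minimal PlanDefinition
# carrying one recommended action plus its 'when recommended' condition,
# which is the part Brian notes is hardest to translate from the guideline.
recommendation = {
    "resourceType": "PlanDefinition",
    "id": "covid19-anticoag-reco-1",  # hypothetical id
    "meta": {"profile": [
        "http://hl7.org/fhir/uv/cpg/StructureDefinition/cpg-recommendationdefinition"
    ]},
    "status": "draft",
    "action": [{
        "title": "Consider prophylactic anticoagulation",  # the recommended action
        "condition": [{                                    # the 'whenToAct' part
            "kind": "applicability",
            "expression": {
                "language": "text/cql",
                "expression": "HospitalizedWithCOVID19",   # hypothetical CQL name
            },
        }],
    }],
}

print(json.dumps(recommendation, indent=2))
```

The point of contention in the thread maps onto this sketch directly: Bryn holds that this structure already suffices, while Brian asks whether the condition/"whenToAct" part needs a dedicated Recommendation form upstream of the Decision Rule.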


F.  Notes from 9/18/20 Weekly Call

Pawan Goyal: ACEP has a blog where more than 4,000 emergency physicians across the globe participate to share their experiences. ACEP meets with CDC, CMS, NIH, and FDA on a regular basis. Our Clinical Policies Committee and Clinical Practices Committee collect evidence/knowledge and review/approve it on a periodic basis.

Matt Burton: Citation analysis can give some linkage... EBM-on-FHIR and CPG-on-FHIR retain provenance (incl. to citations)... linkage analysis can ASSIST, but a human is still needed in the loop; at some point there are some "semantic linkages" that can be leveraged, but just used to associate... There is a sensitivity/specificity issue for "relevance". Motive, Stanson, and other CDS vendors live and breathe these types of approaches. Potentially with citation analysis, Preston.

Sivaram Arbandi: Information can be open-world, but when it comes to knowledge it will need to be closed-world.

David Tovey: A resource that has not been mentioned on this call is the L*VE Epistemonikos platform, which tracks all new studies and reviews and includes a very strong search facility that could be used to track new studies on, for example, clinical prediction rules or anticoagulation interventions. https://app.iloveevidence.com/loves/5e6fdb9669c00e4ac072701d?utm=ile

Maria Michaels: Link to all the checklists: http://build.fhir.org/ig/HL7/cqf-recommendations/checklists.html. Link to the L4 checklist: http://build.fhir.org/ig/HL7/cqf-recommendations/L4Checklist.html.

Brian Alper: If the data to transfer for updating is put into a standard structure, then a computer can manage a "subscription service" to notify people when the data changes per some parameters.  A FHIR subscription service could be used if the data is put in FHIR on a FHIR server.  If you select a specific type of data for piloting we can explore FHIR tooling for this purpose.
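Brian's point can be made concrete with a sketch of a standard FHIR R4 Subscription: if updated evidence is posted to a FHIR server, a Subscription whose criteria match that data pushes a notification to the subscriber's endpoint. The criteria search string and the endpoint URL below are hypothetical examples, not a real service.

```python
# Sketch of the notification idea as a FHIR R4 Subscription resource.
# R4 expresses criteria as a FHIR search string; the server POSTs to the
# rest-hook endpoint whenever a matching resource is created or updated.
subscription = {
    "resourceType": "Subscription",
    "status": "requested",
    "reason": "Notify guideline developers when tracked evidence is updated",
    "criteria": "Evidence?title:contains=anticoagulation",  # hypothetical search
    "channel": {
        "type": "rest-hook",  # server pushes notifications to the endpoint
        "endpoint": "https://example.org/guideline-team/notify",
        "payload": "application/fhir+json",
    },
}

print(subscription["channel"]["type"])
```

POSTing this resource to a FHIR server that supports subscriptions is all a pilot would need to register interest in a specific slice of the evidence data.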

Sandra Zelman Lewis: DOC Search (https://covid-search.doctorevidence.com/) is also a way to search and monitor PubMed (31.6 million), ClinicalTrials.gov (351,984), COVID-19 Open Research Dataset (197,184), DailyMed (128,431), EPAR (1,502), WHO-ICTRP (648,473), and RSS feeds (870,546 from 491 feeds). You can set up alerts for matches to your search terms.

David Tovey: Indeed Sandy, and also DOC Analytics for automating meta-analysis.  Emails:  daviditovey@gmail.com and Gabriel Rada: radagabriel@gmail.com