Friday, February 18, 2011

Cool Technology of the Week

Last week, I wrote about the Direct Project and Patient Engagement.

This week, BIDMC implemented it.

While I was in Washington, co-chairing the HIT Standards Committee meeting, BIDMC engineers installed the open source Direct Gateway inside the BIDMC firewall. They worked with HealthVault engineers to exchange certificates so that the digital signing and encryption aspects of Direct's S/MIME implementation would guarantee data security and integrity.

BIDMC engineers then sent my Continuity of Care Record and Continuity of Care Document via the Direct gateway to my secure health email address (jhalamka@direct.healthvault.com).
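For the technically curious, here's a minimal sketch of what the sign-then-encrypt step might look like using the M2Crypto library's S/MIME support. The file names, certificate paths, and cipher choice are my illustrative assumptions, not the actual gateway implementation:

from M2Crypto import BIO, SMIME, X509

s = SMIME.SMIME()

# Sign the CCD with the sender's private key and certificate
# (paths are hypothetical placeholders).
s.load_key('bidmc_key.pem', 'bidmc_cert.pem')
data = BIO.MemoryBuffer(open('ccd.xml', 'rb').read())
p7 = s.sign(data)

# Serialize the signed message into a buffer.
signed = BIO.MemoryBuffer()
s.write(signed, p7)

# Encrypt the signed payload to the recipient's certificate
# (here, the receiving gateway's public cert).
recipient = X509.load_cert('healthvault_cert.pem')
stack = X509.X509_Stack()
stack.push(recipient)
s.set_x509_stack(stack)
s.set_cipher(SMIME.Cipher('aes_256_cbc'))
p7_encrypted = s.encrypt(signed)

# Write out the final S/MIME message, ready to hand to SMTP.
out = BIO.MemoryBuffer()
s.write(out, p7_encrypted)
open('ccd_signed_encrypted.p7m', 'wb').write(out.read())

In production, Direct layers this onto an ordinary SMTP message, which is what makes the "secure email" model so approachable.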

A few seconds later, I received a notification on my personal Gmail account:

"New health information is available

While you can view this document at any time, you can't work with the information in it until you add each item into the right place in John Halamka's HealthVault record. This optional step makes the information more easily available to you and any health tools you use.

See the information you can add to keep John Halamka's record up-to-date."

I clicked on the URL embedded in the email, logged into HealthVault, viewed the incoming CCR and CCD, then incorporated the records as structured data into my HealthVault account.

The resulting data is displayed in "data atomic form" as suggested by the PCAST report. The screenshot above illustrates my data sent from BIDMC to HealthVault via Direct.

A one-day implementation of an open source gateway that securely sends patient data (mine) to a PHR using industry-proven S/MIME standards, creating a personally controlled health record that I can export and reuse per my privacy preferences.

That's cool!

Thursday, February 17, 2011

The PCAST Hearing

On February 15 and 16, the HIT Policy Committee and the HIT Standards Committee convened to hear testimony about the President's Council of Advisors on Science and Technology HIT Report.

The hearings consisted of 5 panels.   Here are the major themes.

Panel 1 focused on Health Information Exchange and Healthcare Stakeholders

Carol Diamond, MD, Markle Foundation
J. Marc Overhage, MD, Indiana Health Information Exchange
Art Glasgow, Vice President & Chief Technology Officer, Ingenix

* Trust is more complex than consent and cannot be achieved by technology alone
* Data source systems are frequently not able to meet reasonable service levels for response time and data persistence
* Data source organizations need assistance (strategy, policy, when to attach standardized vocabularies to data) in normalizing data
* Need to balance mobilizing data versus losing context

Panel 2 focused on Patients, Consumers, and Privacy Advocates

Donna Cryer, JD, CryerHealth Patient-Centric Solutions
Deborah Peel, MD, Patient Privacy Rights
Joyce Dubow, AARP
Lee Tien, Senior Staff Attorney, Electronic Frontier Foundation

* Consent is essential but not sufficient.  PCAST's heavy reliance on consent to achieve adequate privacy is a concern
* First goal of data use should be for treatment of patients and not for secondary uses
* Privacy preferences must be dynamic based on segmentation of data
* Must conduct proof-of-concept pilots of the DEAS and privacy controls
* PHRs can play a role in patients' ability to express granular privacy preferences
* Concern about adequacy of de-identification
* Many privacy issues were not discussed during the panel

Panel 3 focused on Population Health
Richard Platt, MD, Harvard Medical School, Distributed Health Data Network
Joyce C. Niland, Ph.D., Associate Director & Chair of Information Sciences, City of Hope

* Population and clinical research require persistent record sets, curated for the anticipated use
- Observational data is best suited for hypothesis generation
- Correct interpretation requires the participation of the data's originator; semantic standards will decrease but not eliminate this dependence
- Research data models reflect study design, not data characteristics
- PCAST does not preclude and can support distributed data management
* Population studies require identification of the population (denominator) and the intervention sub-population (numerator)
- Granular consent and opt out by data suppliers could be problematic
- Policies must support continued use of data for public health
* De-identification is problematic

Panel 4 focused on Providers and Hospitals

Sarah Chouinard, MD, Medical Director, Primary Care Systems, Inc. and Community Health Network of West Virginia
John E. Mattison, MD, Kaiser Permanente
Scott Whyte, Catholic Healthcare West, provider using middleware
Kevin Larsen, MD, CMIO, Hennepin County Hospital
Theresa Cullen, MD, Indian Health Service, HHS

* PCAST timeline too aggressive to execute
* Privacy tags may hinder normal institutional use of data
* Propagation/redaction of inaccurate data is a concern
* Middleware may bridge legacy systems to the PCAST vision but has limitations
* Patient matching is problematic
* Novel PHR use may further spur HIT adoption

Panel 5 focused on Technology implications of the report
Michael Stearns, MD, CEO, e-MDs, Small EHR Vendor
Hans J. Buitendijk, M.Sc., HS Standards & Regulations Manager, Siemens Healthcare
John Melski, Marshfield Clinic, homegrown EHR
Edmund Billings, Medsphere

* It is important to maintain the context of a clinical encounter and to preserve the meaning when the data is reused for purposes other than as originally intended.
* Capture structured data with appropriate granularity and controlled terminology. A "data atom" should be the amount of data that makes sense for the particular use intended.
* Separate the syntax (the container used to send data) from semantics (the ontologies and vocabularies). Admittedly, in healthcare summary standards, syntax has been driven by semantics, so this separation would require careful thought.
- Syntax is the study of the principles and rules for constructing sentences in natural languages.
- Semantics is the study of meaning. It typically focuses on the relation between signifiers, such as words, phrases, signs, and symbols, and what they stand for.
- Information models or relationship types provide frameworks to maintain context. Explicit representation of context must be integrated into an evolving Universal Exchange Language and may require specification of an information model.
* Evaluate the burden, timeframe, and priority in the context of existing Meaningful Use and ICD-10/5010 projects.
* Simply exchanging data does not necessarily lead to useful and accurate data. We need to know how the data was captured, for what purpose, and by whom.
* Open source software should be considered to drive low-cost data exchange solutions.
* Use existing profiles/technologies and middleware to meet PCAST data exchange goals. We should not rip and replace existing applications and standards.

We discussed one strawman idea for incorporating PCAST ideas into Stage 2 and Stage 3 of Meaningful Use.

Given that the Direct Project provides a means to push data securely using secure email standards, require that EHRs push immunization data in an electronic envelope containing metadata (we'll call that envelope the universal exchange language) to state and local immunization repositories as part of Meaningful Use Stage 2. This will implement the Universal Exchange Language portion of the PCAST report.
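To make the envelope idea concrete, here's a toy sketch of what wrapping an immunization record in metadata might look like. Every element name below is invented for illustration; the actual universal exchange language metadata has not been specified:

import xml.etree.ElementTree as ET

# Build a hypothetical metadata envelope around a clinical payload.
# All element names here are invented for illustration only.
envelope = ET.Element('exchangeEnvelope')

metadata = ET.SubElement(envelope, 'metadata')
ET.SubElement(metadata, 'patientName').text = 'John Halamka'
ET.SubElement(metadata, 'documentType').text = 'immunization'
ET.SubElement(metadata, 'source').text = 'BIDMC'
ET.SubElement(metadata, 'privacyFlags').text = 'none'

# The payload itself might be an HL7 2.x immunization message or a
# CDA fragment; here it's just a placeholder string.
payload = ET.SubElement(envelope, 'payload')
payload.text = 'VXU^V04 message or CDA fragment goes here'

print(ET.tostring(envelope, encoding='unicode'))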

ONC should begin work on connecting state-level immunization registries with a person identity index and privacy controls. This will implement the Data Element Access Services (DEAS) portion of the PCAST report.

The DEAS will require significant additional policy and technology work that will not be ready by Stage 2. Thus, by Stage 3, require that EHRs be able to query a DEAS to retrieve immunization data at the point of care so that clinicians can deliver the right immunizations at the right time to the right patients based on a nationwide federated network of immunization exchange. It's good for patients, good for clinicians, good for public health, and does not raise too many privacy concerns. Of course, we should pilot granular privacy controls enabling individuals to control the flow of immunization information per their privacy preferences.
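Since the DEAS does not yet exist, any interface is speculative, but a point-of-care query might look something like the sketch below. The endpoint URL, parameters, and response format are entirely hypothetical:

import json
import urllib.parse
import urllib.request

def query_immunizations(patient_id: str) -> list:
    """Query a hypothetical DEAS endpoint for a patient's
    immunization records. The URL, parameters, and JSON shape are
    invented for illustration; a real DEAS would also enforce
    authentication, authorization, and patient privacy preferences."""
    params = urllib.parse.urlencode({
        'patient': patient_id,
        'dataElement': 'immunization',
    })
    url = 'https://deas.example.org/query?' + params
    with urllib.request.urlopen(url) as response:
        return json.load(response)['immunizations']

# An EHR could call this at the point of care and compare the result
# against the recommended immunization schedule to surface gaps.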

We'll have several additional meetings before the final workgroup report is issued.    I believe we're close to achieving consensus on the major concerns and next steps as we offer ONC options for incorporating the spirit of PCAST into their work.

Wednesday, February 16, 2011

Securing your iPad and iPhone4

I'm often asked how IT departments should advise users to secure their iPads and iPhone 4s.

Here's the process suggested by my security team (for fleet deployments, a configuration profile sketch follows the list):

1. Make sure you're running the latest iOS version (4.2.1 currently)
2. Download "Find My iPhone" (free app) from the Apple App Store. Log in or set up a new Mobile Me account and add the iPad to be tracked. Also try it out from a desktop to make sure you can (as a test) send a message to the device.
3. Make sure the iPad autolocks, requires a long passcode, and erases data after 10 failed passcode attempts.
    In Settings->General, configure:
    a. Auto-Lock: set to something short, like 2-5 minutes (NOT "Never")
    b. Passcode Lock:
        1. Turn Passcode On
        2. Require Passcode: Immediately
        3. Simple Passcode: Off (then set a long passcode)
        4. Picture Frame: Off
        5. Erase Data: On
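Rather than setting these by hand on every device, the same policy can be pushed as an iOS configuration profile. Here's a rough sketch that generates a passcode policy profile with Python's plistlib; the payload keys follow Apple's passcode policy payload as I understand it, and the identifiers are placeholders:

import plistlib
import uuid

# Hypothetical passcode-policy payload mirroring the manual settings.
policy = {
    "PayloadType": "com.apple.mobiledevice.passwordpolicy",
    "PayloadIdentifier": "org.example.passcode",
    "PayloadUUID": str(uuid.uuid4()),
    "PayloadVersion": 1,
    "forcePIN": True,          # Turn Passcode On
    "allowSimple": False,      # Simple Passcode: Off
    "minLength": 8,            # a long passcode
    "maxInactivity": 5,        # Auto-Lock after 5 minutes
    "maxGracePeriod": 0,       # Require Passcode: Immediately
    "maxFailedAttempts": 10,   # Erase Data after 10 failed attempts
}

profile = {
    "PayloadType": "Configuration",
    "PayloadIdentifier": "org.example.profile",
    "PayloadUUID": str(uuid.uuid4()),
    "PayloadVersion": 1,
    "PayloadDisplayName": "Passcode Policy",
    "PayloadContent": [policy],
}

with open("passcode.mobileconfig", "wb") as f:
    plistlib.dump(profile, f)

The resulting .mobileconfig file can be distributed by email or a web link and installed on each device.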

If the iPad is stolen, was locked at the time, and the thief does not have unencrypted access to any other device that had previously synced with the iPad (a Mac/PC), the data can be considered "safe". The user should use "Find My iPhone" to issue a remote wipe as soon as possible. This will of course work better over 3G, but should still be done if it's a Wi-Fi-only model.

They should also change any application or institutional passwords that may have been cached on their mobile device.

This will protect against likely attacks in the near-term. That is, if someone finds your iPad and taps around looking for emails, pictures, and so on, they can't get in. If they hook it up to a desktop, they won't be able to read anything on the filesystem.

This method should meet safe harbor standards, as it includes encryption, follows "best practice" guidelines, and could be considered reasonable.

A few things to be aware of:

The certificates necessary to bypass the passcode screen are saved on your computer when you sync the iPad.

The hardware encryption used to protect the filesystem (and the passcode) is based on an encryption key known to Apple. Apple routinely unlocks devices for law enforcement (with a court order).

Current acquisition guidelines for forensic examiners state that the SIM card should be immediately removed and the device placed in a Faraday bag to prevent remote wiping (iOS Forensic Analysis for iPhone, iPad and iPod touch, Sean Morrissey, Apress 2010, 978-1-4302-3342-8). Expect attackers to do the same.

Cellebrite claims to be adding support for extracting encrypted, passcode locked images from iOS devices with their UFED Physical capture device. Details are a bit hazy on how they're actually accomplishing this, but expect others to follow suit once it's released. Expect hackers to take full advantage of this.

There are many background network-based operations constantly running in iOS when the screen is off (and passcode locked). Assuming the device has not been remotely wiped, it would be possible to observe these network connections and extract usernames/passwords. This shouldn't be a problem for most institutional credentials, which require network encryption for authentication, but an observer may be able to harvest passwords to other email or social networking services.

The cab driver who found your phone/iPad probably doesn't have the hardware, technical forensic knowledge, or any ability to monetize extracted data. But the guy running the data mining operation buying from him in bulk probably does.

A better protection scheme would be something that applies encryption to stored data in user space. This is the realm of the Good Technology product, MobileIron, and others.

Tuesday, February 15, 2011

PQRI XML Testing

Many folks have asked me about the process for testing PQRI XML files, since they are part of Certification and are required to be submitted to CMS in 2012 as part of Meaningful Use Stage 1. The inspection for Certification is manual (the testing and certification bodies visually examine the files). To my knowledge, there are no online tools available for self-validation of these files (although it would be a great service to the country to create them).
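Building such a self-validation tool would not be hard once a machine-readable schema is available. As a sketch, here's how a submission file could be checked against an XSD with Python's lxml library; the file names are placeholders, not actual CMS artifacts:

from lxml import etree

# Load the (hypothetical) PQRI registry schema and the file to test.
schema = etree.XMLSchema(etree.parse("pqri-registry.xsd"))
doc = etree.parse("submission.xml")

if schema.validate(doc):
    print("submission.xml is valid")
else:
    # Print each validation error with its line number.
    for error in schema.error_log:
        print(f"line {error.line}: {error.message}")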

The PQRI resources I do know about are part of the 4-step qualification process for those who want to serve as qualified PQRI registries on behalf of their participants.

Here is the testing process used by the Massachusetts eHealth Collaborative, which acts as the qualified 2011 Physician Quality Reporting System registry for BIDMC's ambulatory submissions.

The Iowa Foundation for Medical Care is the CMS contractor for PQRI testing.

The Massachusetts eHealth Collaborative (MAeHC) worked with the Iowa Foundation for Medical Care (IFMC) as follows:

1. Self-nomination followed by a phone interview
2. Submission of measure calculation process/logic
3. Submission of data validation strategy/plan
4. Submission of sample XML

The fourth step required MAeHC to generate sample XML files based on the CMS specification, encrypt the files using CMS approved encryption software, and send them on a DVD-ROM via certified mail to IFMC.

Once this was done, MAeHC sought Individuals Authorized Access to CMS Computer Services (IACS) accounts to log in to the CMS quality data submission site.

In order to get IACS accounts, users must be identity-proofed by IFMC/CMS. MAeHC had to submit a list of names with complete contact information for all users who would be authorized to submit registry data to CMS. Each of them then had to apply for an account via the CMS Application portal and wait a few days for it to be approved.

Once accounts were approved, each user had to log in to the QualityNet portal to ensure the credentials had the proper level of access. They were then required to submit another set of test files for validation using the online utility to ensure that the files complied with any changes that had been made to the specifications.

Here's a complete overview of all the CMS requirements for qualification as a PQRI registry.

That's the process.  I hope it helps those organizations seeking to serve as registries submitting PQRI data on behalf of participants to CMS.

Monday, February 14, 2011

Detailed Clinical Models

As the PCAST Workgroup ponders the meaning of a Universal Exchange Language and Data Element Access Services (DEAS), it is exploring what it means to exchange data at the "atomic", "molecular", and document level. See Wes Rishel's excellent blog defining these terms. For a sample of my medical record using one definition of an atomic form, see this Microsoft HealthVault screenshot. It's clear to me that if we want to exchange structured data at a level of granularity less than an inpatient/outpatient/ED encounter, we need to think about detailed clinical models to specify the atoms.

As I've discussed previously, it would be great if EHRs and PHRs with different internal data models could use middleware to exchange a reasonably consistent representation of a concept like "allergy" over the wire. I think of an allergy as a substance, a precise reaction description, an onset date, a level of certainty, and an observer (a clinician saw you have a severe reaction versus your mother thought you were itchy). PHRs often have two fields - substance and a severe/minor indicator. Any EHR/PHR data exchange without an agreed-upon detailed clinical model will lose information.
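To illustrate the information loss, here's a toy Python sketch of the allergy model I described, mapped down to a two-field PHR representation. The field names and the severity heuristic are my own inventions, not any standard model:

from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class AllergyObservation:
    """A hypothetical detailed clinical model for an allergy,
    mirroring the fields described above."""
    substance: str        # ideally a coded concept, e.g. SNOMED/RxNorm
    reaction: str         # precise reaction description
    onset: Optional[date] # when the reaction was first observed
    certainty: str        # e.g. "confirmed" vs "suspected"
    observer: str         # e.g. "clinician", "patient", "family member"

def to_simple_phr(a: AllergyObservation) -> dict:
    """Map to a two-field PHR model. Reaction detail, onset date,
    certainty, and observer are all discarded; the severity flag is
    derived by a crude, purely illustrative heuristic."""
    severe = "severe" if "anaphyla" in a.reaction.lower() else "minor"
    return {"substance": a.substance, "severity": severe}

Everything except the substance and a crude severity flag is thrown away in the mapping: exactly the loss a shared detailed clinical model would prevent.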

HL7's Reference Information Model (RIM) provides one approach to modeling the data captured in healthcare workflows.  However, many clinicians do not find the RIM easily understandable, since it is an abstraction (Act, ActRelationship, Participation, Roles, Entities) rather than a reflection of the way a clinician thinks about clinical documentation.   Alternatively, a detailed clinical model provides an archetype or template to define the aspects of the data we should exchange.

Stan Huff at Intermountain Healthcare has led much of the work on detailed clinical models in the US.   Here's a recent presentation describing his work.

To illustrate the way a detailed clinical model can enhance data exchange, here's a description of an allergy template based on collaborative work between Intermountain, Mayo and GE.

The Australian work on openEHR is another interesting approach to detailed clinical models. It creates a clear expression of a clinical concept in a manner that can be understood both by clinical subject matter experts and technologists. For a given concept, openEHR specifies a SNOMED code and then builds the appropriate information structure around it.

The screenshot above illustrates the openEHR archetype for adverse reactions/allergies.

Other important efforts include:

William Goossen's ISO 13972 work. He's writing a review of 6 different approaches (see the comment on Keith Boone's blog), including HL7's CDA templates, openEHR archetypes, and Stan Huff's detailed clinical models. Hopefully it will be published soon.

The UK National Health Service Connecting for Health Project's Logical Record Architecture.

The Tolven Open Source Project's Clinical Data Definitions.

It's clear to me that wide adoption of a Universal Exchange Language would be accelerated by detailed clinical models.     This is a topic to watch closely.

Friday, February 11, 2011

Cool Technology of the Week

In New England, we all have snow on our minds.

While shoveling the 7 feet of snow we've had in Wellesley, I've thought there must be an easier way - why not a small portable jet engine to just blast your driveway clear?

Although not yet practical for domestic use, I found a cool technology used by our railroads in the northeast - the jet engine snow blower.

The Snow Jet is a snow blower built from a jet engine mounted on a railroad car. The Metro-North Railroad uses 6 of these (pictured above) to clear the snow on commuter rail lines around New York.

Starting the engine requires finesse to prevent excess fuel in the nozzle from erupting into a twenty-foot flame. The cab controls include movement up/down and right/left. I'm told that the engines make so much noise that snow clearing near residential areas is banned from late evening until the following morning.

There is a certain amount of danger in operating jet-engine-powered snow blowers. Along with melting and blowing snow, they also blow loose debris out of the tracks, and there's nothing like flying steel and concrete to ruin your day.

A jet engine for clearing snow - that's cool!

Thursday, February 10, 2011

Choosing a Great Cognac

Recently, while watching the Granada Television Sherlock Holmes series with my family, we concluded that the English of the Victorian era used brandy/cognac as a cure-all for a multitude of emotional and physical ailments.   This led me to ask - how do you choose a great brandy/cognac?

Today, domestic brandy is not a trendy drink in the US. Chances are that your local liquor store will have a wall of single malt scotch, interesting whiskeys, a few bottles of cognac, and a single type of inexpensive brandy. Cognacs have become increasingly popular because of their association with rap music and rap artists.

What is Cognac?

It's a beverage made by distilling wine produced in the Cognac region of France, 250 miles southwest of Paris. Cognac starts as a dry, thin, acidic wine made from Ugni Blanc (also called Trebbiano) grapes. It's double distilled in copper pot stills, then aged in oak, where the alcohol and water evaporate over time, concentrating the flavor and reducing the alcohol content from 70% to 40%.

Although you may hear the term Champagne Cognac, it has nothing to do with Champagne, a region 100 miles east of Paris. "Champagne" derives from the Latin "campania," meaning "plain." The plains of Champagne and Cognac do share the same chalky soil.

There are 4 major producers of cognac - Courvoisier, Hennessy, Martell, and Rémy Martin, although several smaller boutique brands are available.

At bottling, cognacs are blended to incorporate the best flavors from various batches and ages; cognac is graded based on the youngest component in the blend.

Very Special (VS) - at least 2 years in the cask
Very Superior Old Pale (VSOP) - at least 4 years in the cask
Extra Old (XO) - at least 6 years in the cask, but as of 2016 this will change to 10 years

The grape-growing region of Cognac is divided into 6 areas, the best of which are the Champagne appellations:
Grande Champagne
Petite Champagne
Borderies
Fins Bois
Bon Bois
Bois Ordinaires

I do not choose any beverage based on reputation or price; I seek value - quality divided by price.

My recommendation is to focus on VS or VSOP, because XO is generally very expensive. From my limited experience, I believe the best major-brand VS is Courvoisier and the best major-brand VSOP is Rémy Martin.

On a cold winter's night, curl up with a good Sherlock Holmes video or book and try a snifter yourself!