Wednesday, January 12, 2011

The January Meeting of the HIT Standards Committee

The January meeting of the HIT Standards Committee was a discussion of the refinements to content, vocabulary, and transport/security standards needed to reduce barriers and accelerate adoption of interoperability.

Dr. David Blumenthal began the meeting with an overview of the national HIT accomplishments to date.  A year ago, we did not have a Regional Extension Center Program, a Health Information Exchange Program, a Workforce Development Program, a Beacon Community Program, or a SHARP Research Program.    He highlighted that Stage 2 of Meaningful Use will include a greater focus on interoperability.  There is a great sense of urgency among policymakers, the White House, and many stakeholders to align incentives for interoperability and provide the tools to accelerate data exchange.

Dr. Farzad Mostashari offered an overview of the 2011 priorities for the HIT Standards Committee.   ONC programs aim to increase electronic transaction volumes for lab result reporting, e-prescribing, transitions of care, consumer engagement, and public health.   At a high level, standards, governance, architecture, and a trust framework are needed.  At a more specific level, provider directories, certificate management, and identity assurance processes are needed.    As we think about the future of health information exchange, we cannot be limited to just point-to-point push transactions (which is our 2011 focus).   To support the enhanced outcomes, reduced cost, and improved quality we want, we'll also need a nationwide system that supports query/pull transactions.  Tomorrow, ONC will announce good news on EHR adoption rates, but there is much more work to do.  EHRs need to be more usable and we must be wary of creating a digital divide - the technology haves and have-nots.   We need more consumer engagement by enhancing provider/patient data exchange, including integrated educational materials.  We need more decision support.   We need to be more directive about the way transport, certificates, and directories are implemented by HIEs.   The HIT Standards Committee members agreed that ONC's priorities for interoperability seemed appropriate and that we need a workplan for the Committee to address these items quickly.

Farzad also provided a brief overview of the work of the PCAST workgroup, which kicked off last week.   Questions to be explored include:

1. What standards, implementation specifications, certification criteria, and certification processes for electronic health record (EHR) technology and other HIT would be required to implement the following specific recommendations from the PCAST report:
That ONC establish minimal standards for the metadata associated with tagged data elements;
That ONC facilitate the rapid mapping of existing semantic taxonomies into tagged data elements;
That certification of EHR technology and other HIT should focus on interoperability with reference implementations developed by ONC.

2. What processes and approaches would facilitate the rapid development and use of these standards, implementation specifications, certification criteria and certification processes?

3. Given currently implemented information technology (IT) architectures and enterprises, what challenges will the industry face with respect to transitioning to the approach discussed in the PCAST report?
Given currently implemented provider workflows, what are some challenges to populating the metadata that may be necessary to implement the approach discussed in the PCAST report?
Alternatively, what are proposed solutions, or best practices from other industries, that could be leveraged to expedite these transitions?

4. What technological developments and policy actions would be required to assure the privacy and security of health data in a national infrastructure for HIT that embodies the PCAST vision and recommendations?

5. How might a system of Data Element Access Services (DEAS), as described in the report, be established, and what role should the Federal government assume in the oversight and/or governance of such a system?

6. How might ONC best integrate the changes envisioned by the PCAST report into its work in preparation for Stage 2 of Meaningful Use?

7. What are the implications of the PCAST report on HIT programs and activities, specifically, health information exchange and Federal agency activities, and how could ONC address those implications?

8. Are there lessons learned regarding metadata tagging in other industries that ONC should be aware of?

9. Are there lessons learned from initiatives to establish information sharing languages ("universal languages") in other sectors?

Arien Malec and Doug Fridsma presented an update on the Direct Project and the Standards and Interoperability Framework.   The Direct Project is completing three documents:
a.  A core specification for senders and receivers using S/MIME and SMTP (a minimal sending sketch follows this list)
b.  A supplemental specification for senders and receivers using an XDR/XDM Gateway to connect to the Direct backbone (which is S/MIME and SMTP)
c.  An overview of what it means for senders and receivers to be "Direct compliant" for receipt of structured and unstructured data
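To make the backbone concrete, here is a minimal sketch in Python of the kind of push a Direct sender performs: a clinical document attached to a MIME message and handed to an SMTP relay. The addresses, relay, and file name are hypothetical, and the S/MIME signing and encryption that the Direct specifications require is deliberately omitted.

# Sketch only: a CCD attached to a MIME message and sent over SMTP.
# Hypothetical addresses, relay, and file name; a real Direct implementation
# would also S/MIME-sign and encrypt the message body before sending.
import smtplib
from email.mime.multipart import MIMEMultipart
from email.mime.text import MIMEText
from email.mime.application import MIMEApplication

msg = MIMEMultipart()
msg["From"] = "drsmith@direct.examplehospital.org"    # hypothetical Direct address
msg["To"] = "drjones@direct.examplepractice.org"      # hypothetical Direct address
msg["Subject"] = "Transition of care summary"
msg.attach(MIMEText("Discharge summary attached.", "plain"))

with open("ccd.xml", "rb") as f:                      # hypothetical CCD file
    attachment = MIMEApplication(f.read(), _subtype="xml")
attachment.add_header("Content-Disposition", "attachment", filename="ccd.xml")
msg.attach(attachment)

with smtplib.SMTP("localhost") as smtp:               # hypothetical relay (the sender's HISP)
    smtp.send_message(msg)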

The initial projects of the Standards and Interoperability Framework include:
a.  Clinical Document Architecture Consolidation - a cleanup and harmonization of work done by HITSP, IHE, HL7, and others, which will hopefully result in a single, easy to use, template-based implementation guide
b.  HL7 2.x Lab simplification - harmonization of content and vocabulary standards to significantly simplify the implementation of a single use case, Lab to EHR result reporting (a brief parsing sketch follows this list)
c.  Transition of Care - creation of easy to use implementation guidance and tools to support the needs of care transitions
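To make the lab use case concrete, here is a sketch of parsing an HL7 2.x result message with the open source python-hl7 library. The message below is a contrived example I wrote for illustration, not an S&I Framework artifact.

# Contrived HL7 2.x ORU^R01 result message, parsed with python-hl7 (pip install hl7).
# Illustrative only; not an S&I Framework artifact.
import hl7

message = "\r".join([
    "MSH|^~\\&|LAB|ACME|EHR|CLINIC|201101120800||ORU^R01|MSG00001|P|2.5.1",
    "PID|1||123456||DOE^JANE",
    "OBR|1|||2345-7^GLUCOSE^LN",
    "OBX|1|NM|2345-7^GLUCOSE^LN||95|mg/dL|70-110|N|||F",
])

parsed = hl7.parse(message)
obx = parsed.segment("OBX")
print(obx[3], obx[5], obx[6])   # test code, value, and units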

A committee member noted that aligning incentives to share data will motivate the marketplace and stakeholders to create better tools and implementation guidance.   Better standards are necessary, but not sufficient on their own, to accelerate adoption.

Micky Tripathi and Walter Suarez presented the Provider Directory guidance from the Information Exchange Workgroup.  Their work includes an important short-term focus on organization-to-organization exchange and the entity level provider directories (ELPD) needed to support that workflow.    Future work includes individual level provider directories (ILPD).   The workplan is to build the "yellow pages" first and the "white pages" second.   The "yellow pages" are easier to build and will be sufficient to accelerate data exchange, just as email today supports organization-to-organization transport even though there is no national email "white pages".

Judy Murphy and Liz Johnson reviewed the themes of the January 10-11 Implementation Workgroup hearing.   High level themes included:

Regional Extension Centers - There is significant variation in cost, business models (at least 4), and services provided by RECs, resulting in mixed customer satisfaction.

Certification - Certification bodies are working very well.  There is some confusion as to how best to acquire complete EHRs with modular EHR add-on products.   If a customer wants to replace a portion of a complete EHR with a certified EHR module, do they still need to buy all the components of the complete EHR?

Health Information Exchange - There is great demand for interoperability specifications that are clear and concise.  The sustainability/value proposition of most HIEs is still unclear.   Some private HIEs are working better than public efforts.

Timing issues - Certification specifications were finalized in the summer, which gave vendors very little time to complete their products and install them in time for the January start of the Meaningful Use reporting period.

Quality Measures are difficult - The specifications are challenging to interpret and the computations are difficult to produce.

Overall, there is great energy and all stakeholders are highly motivated to achieve Meaningful Use.   A multi-year roadmap would make planning for the future easier.

Jamie Ferguson provided a brief update on the Clinical Operations Workgroup.  The group will begin work on device standards (especially vocabularies) and offer advice on the S&I Framework priorities.

Dixie Baker provided a brief update on the Privacy & Security Workgroup.  The group will begin work on digital certificate standards per the Policy Committee's request.

Based on all we discussed, it's clear that 2011 promises to be a busy year - likely one that will mark a tipping point in interoperability for the country.

Cyber Insurance – Is it Worth It?

Several of my blog readers have asked about Cyber Insurance.   I asked a trusted expert on this topic, Michael R. Overly,  Esq., CISA, CISSP, CIPP, ISSMP from Foley & Lardner LLP to write a guest post:

Insurance exists to cover a wide range of potential business risks. Cyber insurance is worth considering as companies increase their online presence, business practices, and data storage. In fact, Cyber insurance is not just for companies conducting transactions online (e.g., online retailers).  It is valuable to any company that has critical systems or sensitive data, which is almost every business. While it is possible to have insurance that covers damage to your servers and other computer equipment, it is almost certain the insurance only covers the physical damage to the hardware itself, and not the valuable data housed within. In fact, insurance policies regularly state that the policy is limited to the replacement costs of the hardware and not the data.  This means that in the event a hacker gains access to your systems and disrupts operations, standard insurance coverage will probably offer little or no protection unless hardware is actually damaged.

The costs associated with restoring lost or damaged data, sending breach notifications to consumers, and other potential liability under each state's breach notification statutes can be astronomical. Cyber insurance can help cover some of the costs of a data breach, including the expense of sending notification to affected individuals, public relations, fines, penalties, responding to regulators, and any subsequent litigation by affected individuals. The potential for attacks and breaches is growing exponentially as more and more businesses move operations to the cloud. Moreover, attacks do not necessarily come from an outsider. Data breaches have resulted from careless, frustrated, and vengeful employees who often attempt to profit from someone else's information. Depending on the policy, Cyber insurance can offer protection from hackers, viruses, data breaches, denial of service attacks, and copyright, trademark, and website content infringement.

Although Cyber insurance provides beneficial protections for the policy holder, it is not without drawbacks and limitations. Most, if not all, Cyber insurance policies are capped at relatively low levels compared to the actual, potentially catastrophic, liability that can result from a breach. It is equally important to review the policy carefully for exclusions. If the policy excludes indirect costs like reputational damage, the costs of recovery, on top of the premium, could be burdensome to the policy holder.

Because Cyber insurance is a relatively new product with limited public acceptance, and there is ongoing change in the laws and regulations affecting breach notification on a state-by-state basis, the product, policies, and premiums tend to differ greatly between providers. Additionally, many policy holders have found that premiums have increased on renewal due to improper risk analyses, either of the insured or of the pool of insureds. Still, with the recent proliferation of data breach notification laws, interest in Cyber insurance has risen, providing some stability in pricing. Even before a policy is quoted, most, if not all, insurers require applicants to fill out extensive questionnaires detailing their information technology and security practices. Many policies specify on-site assessments and audits of the insured's systems and policies, which for larger companies with multiple locations could run into the hundreds of thousands of dollars. In addition, as part of the assessment, the insured company would have to disclose its security procedures and vulnerabilities to a third party for a risk assessment review.

Tuesday, January 11, 2011

A Primer on XML, RDF, JSON, and Metadata

A new workgroup, formed under the auspices of the HIT Policy Committee and the HIT Standards Committee is beginning its work to help ONC analyze public comments on the President’s Council of Advisors on Science and Technology (PCAST) report, discuss the implications of the report on current ONC strategies, assess the feasibility and impact of the PCAST report on ONC programs, and elaborate on how these recommendations could be integrated into the ONC strategic framework.

Membership includes:
Paul Egerman, Entrepreneur, Chair
William Stead, Vanderbilt University, Vice-Chair
Dixie Baker, SAIC
Hunt Blair, Vermont HIE
Tim Elwell, Misys Open Source
Carl A. Gunter, University of Illinois
John Halamka, Beth Israel Deaconess Medical Center, HMS
Leslie Harris, Center for Democracy & Technology
Stan Huff, Intermountain Healthcare
Robert Kahn, Corporation for National Research Initiatives
Gary Marchionini, University of North Carolina
Stephen Ondra, Office of Science & Technology Policy
Jonathan Perlin, Hospital Corporation of America
Richard Platt, Harvard Medical School
Wes Rishel, Gartner
Mark Rothstein, University of Louisville
Steve Stack, American Medical Association
Eileen Twiggs, Planned Parenthood

To advise ONC about the report's recommendations, workgroup members need to understand terms such as XML, RDF, JSON, and metadata, as well as learn about the standards efforts to date to create human readable and computable data elements for healthcare.

XML is an abbreviation for Extensible Markup Language, a set of rules for encoding documents in machine-readable form.   Here's an example of data about me in XML, which is both human readable and computable:
<name><fullname>John David Halamka, M.D.</fullname><firstname>John</firstname><lastname>Halamka</lastname></name>
<address>
<address1>Beth Israel Deaconess Med Ctr</address1><address2>Information Systems, 6th Fl</address2><address3>1135 Tremont St  </address3><address4>Roxbury Crossing, MA 02120</address4><telephone>617/754-8002</telephone><fax>617/754-8015</fax><latitude>42.33555200000000</latitude><longitude>-71.08822700000000</longitude></address>

It's a machine-friendly form of my Harvard Catalyst Profiles web page, with discrete data elements that any computer language can interpret and search.   The complete XML document about me is available here.
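Because the elements are discrete, a few lines of code can pull out any field. Here is a minimal sketch using Python's standard library; the fragment above is wrapped in a single root element so it is well-formed, and only a couple of elements are reproduced.

# Minimal sketch: parsing discrete elements from the XML above with Python's
# standard library. The fragment is wrapped in a root element for well-formedness.
import xml.etree.ElementTree as ET

xml_doc = """<person>
<name><fullname>John David Halamka, M.D.</fullname><firstname>John</firstname><lastname>Halamka</lastname></name>
<address><address1>Beth Israel Deaconess Med Ctr</address1><telephone>617/754-8002</telephone></address>
</person>"""

root = ET.fromstring(xml_doc)
print(root.findtext("name/lastname"))      # Halamka
print(root.findtext("address/telephone"))  # 617/754-8002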

XML has been used to describe healthcare data by HL7, using the Clinical Document Architecture (CDA), and by ASTM, using the Continuity of Care Record (CCR).

Here's an example of CDA that illustrates immunizations:
<informationsource><author><authortime value="20000407130000+0500"><authorname><prefix>Dr.</prefix><given>Robert</given><family>Dolin</family></authorname></authortime></author></informationsource>
<immunizations><immunization><administereddate value="199911"><medicationinformation><codedproductname code="88" codesystem="2.16.840.1.113883.6.59" displayname="Influenza virus vaccine"><freetextproductname>Influenza virus vaccine</freetextproductname></codedproductname></medicationinformation></administereddate></immunization></immunizations>

Metadata is "data about data" - the details behind this data such as who gathered it, when, and for what purpose.

The metadata in the CDA example includes an Object Identifier (OID) of 2.16.840.1.113883.6.59, which is a code for the Centers for Disease Control's CVX immunization vocabulary.   Code 88 is the CVX code for Influenza virus vaccine.   The vaccine was administered in November of 1999.   The information source is Bob Dolin.  The full CDA summary is available here.

Here's an example of CCR that illustrates immunizations:
<actor><actorobjectid>AA0001</actorobjectid><person><name><currentname><given>John</given><middle>David</middle><family>Halamka</family></currentname></name><dateofbirth><exactdatetime>1962-05-23T04:00:00Z</exactdatetime></dateofbirth><gender><text>M</text></gender></person></actor>
<address>
<type><text>Home</text></type><line1>11 Alden Road</line1><city>Wellesley</city><state>MA</state><postalcode>02481</postalcode></address>
<telephone><value>781-239-9771</value><type><text>Home</text></type></telephone><actor><actorid>AA0001</actorid></actor>

<immunization><ccrdataobjectid>BB0024</ccrdataobjectid><datetime><type><text>Date Updated</text></type><exactdatetime>2011-01-08T19:49:19Z</exactdatetime></datetime><datetime><type><text>Start date</text></type><exactdatetime>2010-10-11T04:00:00Z</exactdatetime></datetime><type><text>Immunization</text></type><actor><actorid>AA0001</actorid></actor><product><productname><text>Tetanus</text><code><value>35</value><codingsystem>HL7 CVX</codingsystem><version>2.5</version></code><code><value>396412003</value><codingsystem>SNOMEDCT</codingsystem><version>2005</version></code><code><value>C0039619</value><codingsystem>UMLS Concept ID</codingsystem><version>2005</version></code></productname></product></immunization>
<form>
<text>Toxoid</text></form>
<directions><direction><route><text>IM</text></route><site><text>Right Arm</text></site></direction></directions>

The metadata in the CCR example includes that the patient is John Halamka, born 5/23/1962, male, living in Wellesley.  Additional metadata identifies that a tetanus shot exists in the record.   The concept "tetanus shot" is described using the Centers for Disease Control's CVX immunization vocabulary, the SNOMED-CT vocabulary, and the National Library of Medicine's UMLS Metathesaurus.  Metadata about the reliability of the information includes who reported the tetanus shot and when it was reported.   The metadata in my record describes me as the source of the reported information, updated January 8, 2011.  The full CCR summary is available here.

XML is a very general construct.   Anyone can create any tags for data and metadata.   HL7 has chosen to create a Reference Information Model (RIM) to describe the meaning of its tags and metadata.  ASTM has created a well described, fixed set of data elements.   The challenge that different XML tagging creates is that you have to figure out where to look for the information you want.  For the XML example above about my name and address, everyone creating a person directory could structure the XML differently.  In one directory, a person's "lastName" could be a root element, in another it could be a child of an element called "name", and in another it could be an attribute of a "person" element.  The XML below is just as valid a way to describe my address as the example above:
<address city="Boston" postalcode="02120" state="MA" streetaddress="1135 Tremont">
  <phonenumbers></phonenumbers>
    <phonenumber number="617 754-8002" type="home"></phonenumber>
    <phonenumber  number="617 754-8015" type="fax"></phonenumber>
</address>

The Resource Description Framework (RDF) is a metadata model that provides a standardized approach to describing web resources.   The general idea is a subject-predicate-object model in which the predicate includes a definition of what is being described.  RDF was created to solve the problem of organizations implementing XML tags heterogeneously.

Here's an RDF description of me:
<rdf:description rdf:about="http://connects.catalyst.harvard.edu/profiles/profile/person/46034/viewas/rdf" xmlns:bibo="http://purl.org/ontology/bibo/" xmlns:core="http://vivoweb.org/ontology/core#" xmlns:fn="http://www.w3.org/2005/xpath-functions" xmlns:foaf="http://xmlns.com/foaf/0.1/" xmlns:owl="http://www.w3.org/2002/07/owl#" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:rdfs="http://www.w3.org/2000/01/rdf-schema#" xmlns:vitro="http://vitro.mannlib.cornell.edu/ns/vitro/public#" xmlns:xsd="http://www.w3.org/2001/XMLSchema#">
<rdf:type rdf:resource="http://www.w3.org/2002/07/owl#Thing"></rdf:type>
<rdf:type rdf:resource="http://xmlns.com/foaf/0.1/Person"></rdf:type>
<rdf:type rdf:resource="http://purl.org/ontology/bibo/core#Faculty"></rdf:type>
<rdf:type rdf:resource="http://xmlns.com/foaf/0.1/Agent"></rdf:type>
<rdfs:label xml:lang="en-US">John David Halamka, M.D.</rdfs:label>
<rdf:type rdf:resource="http://vivoweb.org/ontology/core#FacultyMember"></rdf:type>
<foaf:lastname>Halamka</foaf:lastname>
<foaf:firstname>John</foaf:firstname>
<core:preferredtitle>Associate Professor of Medicine</core:preferredtitle>
<core:workfax>617/754-8015</core:workfax>
</rdf:description>

The subject is my Harvard Catalyst Profiles Page.

The predicates include "the subject has a lastname, a firstname, and a preferred title".

The objects are Halamka, John, and Associate Professor of Medicine.

The definitions of lastname, firstname, and preferred title are found in two places - the Friend of a Friend (FOAF) definition site and the VIVO ontology site.    The complete RDF document about me is available here.

Thus, RDF provides a means of displaying metadata while also enabling easy access to the definitions of data elements used.

With RDF, data is always represented as subjects, predicates, and objects, so reading, parsing, and storing it is consistent across all applications. It also enables querying of different systems via a common approach. For example, if I exist as a faculty member in Profiles and as a provider in a clinical system that uses RDF, it should be possible to query for topics where I have both faculty and clinical expertise, without having to transform one data source into the other's schema. Similarly, if the government makes all grants, publications, trials, etc. available in RDF, then these things should automatically be available to tools like Profiles, without having to write any additional code.

There is a standard query language called SPARQL that can be used to search RDF resources.
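As a rough illustration, here is a sketch of running a SPARQL query with the open source rdflib Python library. It assumes the RDF shown above has been saved locally as profile.rdf, and it uses the lowercase property names exactly as they appear in the example.

# Sketch of querying RDF with SPARQL using rdflib (pip install rdflib).
# Assumes the RDF/XML above was saved locally as profile.rdf.
from rdflib import Graph

g = Graph()
g.parse("profile.rdf", format="xml")

query = """
PREFIX foaf: <http://xmlns.com/foaf/0.1/>
SELECT ?first ?last
WHERE {
  ?person foaf:firstname ?first .
  ?person foaf:lastname ?last .
}
"""

for first, last in g.query(query):
    print(first, last)   # John Halamka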

Finally, there is an emerging alternative to XML called JavaScript Object Notation (JSON) that is more compact than XML and easier for many programming languages to manipulate.  Here's an example of my address information in JSON:
{
     "firstName": "John",
     "lastName": "Halamka",
     "age": 48,
     "address":
     {
         "streetAddress": "1135 Tremont",
         "city": "Boston",
         "state": "MA",
         "postalCode": "02120"
     },
     "phoneNumber":
     [
         {
           "type": "office",
           "number": "617-754-8002"
         },
         {
           "type": "fax",
           "number": "617-754-8015"
         }
     ]
 }

JSON has replaced XML as a data interchange format in many social networking applications.   It does have the same issue as XML: authors can create arbitrary structures, so there could be a person object containing firstname and lastname, or lastname could itself be an object - you have to know how the author organized the data before you can use it.
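For example, the JSON above parses into ordinary dictionaries and lists; here is a short sketch in Python using an abbreviated copy of the record above.

# Sketch: parsing an abbreviated copy of the JSON example with Python's standard library.
import json

record = json.loads("""
{
  "firstName": "John",
  "lastName": "Halamka",
  "address": {"city": "Boston", "state": "MA"},
  "phoneNumber": [{"type": "office", "number": "617-754-8002"}]
}
""")

print(record["lastName"])                  # Halamka
print(record["phoneNumber"][0]["number"])  # 617-754-8002
# A different author might have nested the name inside a "person" object instead,
# so consuming code still needs to know the chosen structure in advance.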

In summary, CDA and CCR already provide XML data for healthcare that is "data atomic", metadata rich, and searchable using standard tools.    RDF is a standardized way of describing metadata.  JSON is an efficient way of representing, transmitting, and interpreting data that is similar to, but more compact than, XML.

Our report is due in April.  I welcome the discussion with the PCAST workgroup over the next 3 months!

Monday, January 10, 2011

Early Experiences with Hospital Certification

As one of the pilot sites for CCHIT's EHR Alternative Certification for Hospitals (EACH), I promised the industry an overview of my experience.

It's going very well.   Here's what has happened thus far.

1.  Recognizing that security and interoperability are some of the more challenging aspects of certification, we started with the CCHIT ONC-ATCB Certified Security Self Attestation Form to document all the details of the hashing and encryption we use to protect data in transit via the New England Healthcare Exchange Network.
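As an aside, the general idea behind the hashing portion is simple: compute a digest of the payload so the receiver can detect any alteration in transit. The sketch below is illustrative only and is not the NEHEN implementation.

# Illustration only (not the NEHEN implementation): hash a payload so the
# receiver can verify it was not altered in transit.
import hashlib

payload = b"<ClinicalDocument>...</ClinicalDocument>"   # stand-in for a real document
digest = hashlib.sha256(payload).hexdigest()
print(digest)   # transmitted with the signed message; the receiver recomputes and compares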

Next, I had my staff prepare samples of all the interoperability messages we send to patients, providers, public health, and CMS.   Specifically, we created:

CCD v.2.5 used to fulfill the Discharge summary criterion
HL7 2.5.1 Reportable lab
HL7 2.5.1 Syndromic surveillance
HL7 2.5.1 Immunizations
PQRI XML 2009 for hospital quality measures

We validated them with the HL7 NIST test site and the HITSP C32 version 2.5 NIST test site.

CCHIT validated the PQRI XML as conforming.

2.  Next, I documented an inventory of all the applications we are using during our Meaningful Use measurement period for Hospital Inpatient and Emergency Department care (Medicare Place of Service codes 21 and 23):

webOMR - our online medical record
CPOE - our inpatient ordering system
ED Dashboard - our emergency department workflow applications
Massachusetts eHealth Collaborative Quality Data Center - our PQRI reporting system
Performance Manager - web-based analytics from our hospital data marts

I assigned each of these applications to the 24 Hospital Meaningful Use criteria:

Drug-drug, drug-allergy interaction checks
Drug-formulary checks
Maintain up-to-date problem list
Maintain active medication list
Maintain active medication allergy list
Record and chart vital signs
Smoking status
Incorporate laboratory test results
Generate patient lists
Medication reconciliation
Submission to immunization registries
Public health surveillance
Patient-specific education resources
Automated measure calculation
Computerized provider order entry
Record demographics
Clinical decision support
Electronic copy of health information
Electronic copy of discharge instructions
Exchange clinical information and patient summary record
Reportable lab results
Advance directives
Calculate and submit clinical quality measures

Once I watched the CCHIT Certification Readiness video, I was advanced to "Readiness Learning Complete" status and we could begin preparing for inspection.

3.  I assigned each of the CCHIT Test scripts (easier to use than NIST Test scripts) to my staff to ensure our applications met the certification functional requirements.   They executed each of the scripts twice and timed the effort so that we could report our actual test execution experience to CCHIT.

4.  We scheduled a time for inspection testing - a web-based desktop sharing application session with a CCHIT observer to evaluate our conformance.

5.  In preparation for that testing my staff created test patients with test medications, test problems, test allergies, and test labs.   Also, they practiced their demonstrations to ensure smooth and efficient execution of the test scripts.

Since we're certifying our applications in parallel with measuring our hospital meaningful use performance, we sent training materials to our clinicians reminding them of their responsibilities to use the applications completely and wisely.

Here are my lessons learned thus far:

1.  Take certification very seriously - it's not easy.   I have a staff of very experienced IT professionals and we had to do a great deal of preparation.   This is not a function of the Authorized Testing and Certification Body you choose; it's a function of the certification requirements and the NIST test scripts.  The staff and educational materials of the ATCB make a huge difference.    In my case, I relied on CCHIT staff to guide me through the process and CCHIT inventory tools/test scripts to make the process as easy as possible.

2.  Interoperability testing is rigorous.    The more tightly constrained the content standards, the more likely they will be interoperable between sender and receiver.

3.  Quality measurement is hard.   There are 15 detailed numerators and denominators with exclusionary criteria to prepare.  CMS requires these to be electronically submitted in PQRI XML, so you must generate a conforming electronic format.
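Conceptually, each measure boils down to a numerator, a denominator, and exclusions; the hard part is interpreting the specifications that define who belongs in each bucket. Here is a toy sketch of the arithmetic (illustrative only, not a CMS specification).

# Toy sketch of a quality measure rate: numerator / (denominator - exclusions).
# Illustrative only; the real CMS specifications are far more detailed.
def measure_rate(numerator, denominator, exclusions):
    eligible = denominator - exclusions
    if eligible <= 0:
        return None   # no eligible patients this reporting period
    return numerator / eligible

print(measure_rate(numerator=45, denominator=60, exclusions=10))   # 0.9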

4.  Some of the NIST test scripts require functionality that may not be clinically obvious.   Note that this is purely my own personal opinion as a doctor.   You must demonstrate that super users can change drug/drug and drug/allergy alerting logic.   As a clinician, I cannot think of a reason to change drug/allergy alerting - you are either allergic to a medication or you're not.   There is no alert fatigue from reporting a drug/allergy interaction, no matter how minor.

5.  You must certify all the technology you plan on using for Meaningful Use attestation.   You can only report Meaningful Use data from "Certified EHR technology", hence the reason we are certifying our ED applications, inpatient applications, data warehouses, and analytic tools.

Thus far, the process has great integrity, appropriate rigor, and sufficient specificity.    We're doing it with our existing teams within existing budgets.    Yes, it is creating temporary stress.   However, if we pass certification in the next several weeks, we'll all be very proud.

Also - the ONC Permanent Certification Program was published in the Federal Register last week (Thanks to Robin Raiford for this bookmarked copy).   I'll write about industry reaction to it as soon as I hear more.

Friday, January 7, 2011

Cool Technology of the Week

The creative folks at Google Labs have introduced a 3D body browser, which is essentially a Google Earth for the human body.   It supports 3D rotation, zoom, and exploring the nervous system, circulatory system, musculoskeletal system, and organs.

The application requires a browser that supports WebGL, a cross-platform, royalty-free API used to create 3D graphics in a Web browser.

Google Chrome, Firefox 4 Beta, and Safari with recent WebKit builds support WebGL.

At Harvard Medical School, we've created and purchased tools that enable the medical students to perform 3D navigation of the human body as part of their anatomy coursework.   Now, similar resources are available for the public to access free of charge.

Google Earth for the Human Body - that's cool!

Thursday, January 6, 2011

Weather Station Lessons Learned

As part of my Christmas gift research, I wrote about selecting a home weather station.

I configured the Davis Vantage Vue weather station at my home so I could more easily install it at my parents' house.

The integrated sensor suite is now sending so much data to so many people that I'm keeping this one and purchasing a new unit for my parents' home.

Here's what I learned:

Weather Underground, the default Google source for weather data, includes thousands of personal weather stations throughout the world.    To become a weather contributor, all you need to do is register your station.

You'll be given a call sign - I'm KMAWELLE10 - and my data is now available to any Weather Underground user.

Since Google's weather gadget draws from Weather Underground, any Google user seeking weather for Wellesley Hills, MA is now receiving data directly from my home - I've become Google's default weather source for Wellesley.   Here's a great Patch article about it.

The Citizen Weather Observer Program also enables easy registration of personal weather stations.   Once the data is quality controlled (here's my quality control report), the data is added to the National Oceanic and Atmospheric Administration's (NOAA) Meteorological Assimilation Data Ingest System (MADIS) and becomes part of the dataset used for research, disaster response, and forecasting.   My MADIS ID is D6574.

NOAA has given me "2 thumbs up" for accuracy - less than 1 degree variation in temperature and 1 millibar of barometric pressure from the official NOAA data sources.

Finally, I've added real-time weather from my home to my blog - just scroll down past my crosslinked blogs and you'll find real-time "Geekdoctor weather" from home, which is located at

Lat: N 42° 18' 2" (42.301°)
Lon: W 71° 16' 19" (-71.272°)
Elevation (ft): 221

The experience with this weather station taught me a great deal about interoperability.   A single XML standard for content, complemented by a domain-specific vocabulary and transported using a simple web protocol, enabled me to connect a complex data stream from my house to thousands of users throughout the world in minutes.    Of course, the privacy/security/data integrity of temperature reporting is a much different problem than electronic health records, but let us hope that by the end of 2011, connecting patients, providers, and payers will be as easy as sharing my home weather telemetry.
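For the curious, consuming a feed like this takes only a few lines of code. The sketch below fetches an XML document of current conditions over HTTP and reads two fields; the URL and element names are placeholders of my own, not a documented API.

# Sketch of consuming a personal weather station feed: fetch an XML document of
# current conditions over HTTP and read a couple of fields. The URL and element
# names below are placeholders, not a documented API.
import urllib.request
import xml.etree.ElementTree as ET

url = "http://example.com/pws/KMAWELLE10/current.xml"   # placeholder feed URL
with urllib.request.urlopen(url) as response:
    doc = ET.parse(response)

print(doc.findtext("temp_f"), "F")
print(doc.findtext("pressure_in"), "inHg")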

Wednesday, January 5, 2011

A Healthcare Information Services Provider Business Model

I've written previously about Healthcare Information Exchange Sustainability and the need for Healthcare Information Services Providers (HISPs) to serve as gateways connecting individual EHRs.

How should HISPs be funded and how can we encourage HISP vendors to connect every little guy in the country?

We've started to think about this in Massachusetts.

There are numerous vendors promising HISP services - Medicity (Aetna), Axolotl (Ingenix), Surescripts, Verizon, and Covisint.

An HIE needs to include at least one common approach to data transport, a routing directory, and a certificate management process that creates a trust fabric.   Existing HISP vendors have heterogeneous approaches to each of these functions.    In the future,  the Direct Project may provide a single approach, but for now HISP vendors will need to be motivated to adhere to State HIE requirements.

An idea that has been embraced by some State HIEs, such as New Hampshire, is to pay HISP vendors a modest fee (under $100K) to support State requirements.   This "connectivity" incentive results in interoperable HISPs, creating a statewide network of networks.

Once a standardized HISP approach is supported by multiple vendors, then individual practices need to be connected.   Some practices will be aggregated into hubs by EHR software vendors, as has been done in cities such as North Adams (Massachusetts), projects such as the New York City PCIP project, and physician organizations such as the Beth Israel Deaconess Physicians Organization.   However, it's not likely to be cost effective for a vendor to connect every isolated practice to a HISP for the $50/month the practice is willing to pay.

The Regional Extension Center program offers $5,000 per provider to accelerate EHR adoption.    If State HIE programs were to offer a one-time EHR integration payment to HISP vendors, such as $500 per practice connected, then it is likely vendors would accelerate their efforts to connect "the last mile".

Thus, if State HIE funds covered vendor costs to implement statewide standards and offered a per-EHR initial connectivity fee, barriers to startup would be eliminated.   The small amounts clinicians are willing to pay per month would then cover operating expenses so that HISPs could be self-sustaining.

As a fallback, in the case that vendors do not find these payments appealing enough, Massachusetts has considered the idea of a public interest HISP - a subsidized service to cover practices without resources, in remote locations, or with special connectivity challenges.

Ideally, market forces would be enough for vendors to connect every payer, provider, and patient in the country.  However, HIEs will only be sustainable when there are sufficient customer connections to create business value.   A catalyst of State HIE funding to accelerate HISP standardization and EHR connectivity is necessary to provide the "activation energy" which will align the cost of providing the service with a price customers are willing to pay.