Just like Innovation, Usability is the weapon du jour against the large and/or established EHR vendors. After all, it is common knowledge that these “legacy” products all look like old Windows applications and lack usability to the point of endangering patients’ lives. On the other hand, the new and innovative EHRs, anticipated to make their debut any day now, will have so much usability that users will intuitively know how to use them before even laying eyes on the actual product. With this new generation of EHR technology, users will have their medical practice up and running in 5 minutes, and everybody in the office will be able to complete their tasks in a fraction of the time it took with the clunky, legacy EMRs built in the ’90s. And all this because the new EHRs have Usability, not functionality, a.k.a. bloat, not analytical business intelligence and definitely not massive integration, a.k.a. monolithic. No, this is the minimalist age of EHR haiku. Less is better, as long as it has Usability.
Usability, according to the Usability Professionals Association, is “the extent to which a product can be used by specified users to achieve specified goals with effectiveness, efficiency and satisfaction in a specified context of use [ISO 9241-11]”. Based on this definition, it stands to reason that any prospective EHR buyer should want a product with lots of Usability. Everybody wants to be effective, efficient and satisfied. So how does one go about finding such an EHR?
Well, as always, CCHIT picked up the glove, and as always, CCHIT will be criticized for doing so. The 2011 Ambulatory EHR Certification includes Usability Ratings from 1 to 5 stars, based on a Usability Testing Guide. Jurors are instructed to assess the Usability of the product during and after the certification testing against three criteria: Effectiveness, Efficiency and the more subjective Satisfaction, as required by the ISO standard. The tools for this assessment consist of three types of questionnaires:
- After Scenario Questionnaire (ASQ) – jurors rate perceived efficiency (time and effort), learnability, and confidence after viewing the scenarios
- Perceived Usability Questionnaire (PERUSE) – jurors rate screen-level design attributes based on reasonably observable characteristics
- System Usability Survey (SUS) – jurors rate overall usability of, and satisfaction with, the application
The questions range from general subjective assessments in the ASQ to very specific inquiries in PERUSE, such as whether table headers clearly indicate the content of the table columns. Following the certification testing, the results from all jurors are combined and weighted, with more weight given to the specific answers and less to the subjective overall impressions. The final result is the star rating, ranging from 1 to 5 Usability stars.
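For readers who like to see the arithmetic, here is a rough sketch of how such a weighting scheme could work in principle. The questionnaire names come from the Testing Guide, but the weights, the 0-to-1 item scoring and the mapping to stars are purely my own assumptions for illustration; CCHIT’s actual formula may well differ.

```python
# Hypothetical sketch of a weighted questionnaire-to-star-rating scheme.
# The questionnaire names (ASQ, PERUSE, SUS) come from the Testing Guide,
# but every weight and threshold below is an assumption for illustration,
# not CCHIT's published formula.

from statistics import mean

# Assumed weights: specific, observable PERUSE items count more than
# the subjective ASQ and SUS impressions.
WEIGHTS = {"ASQ": 0.25, "PERUSE": 0.50, "SUS": 0.25}

def juror_score(responses):
    """Combine one juror's questionnaire responses (each item scored 0-1)
    into a single weighted score between 0 and 1."""
    return sum(WEIGHTS[q] * mean(items) for q, items in responses.items())

def star_rating(jurors):
    """Average the weighted scores across all jurors and map the result
    onto a 1-to-5 star scale (linear mapping, assumed)."""
    overall = mean(juror_score(r) for r in jurors)   # 0.0 .. 1.0
    return max(1, min(5, round(1 + overall * 4)))

# Example: three jurors scoring one EHR during certification testing.
jurors = [
    {"ASQ": [0.8, 0.7],  "PERUSE": [0.9, 0.95, 0.85], "SUS": [0.8]},
    {"ASQ": [0.9, 0.8],  "PERUSE": [0.9, 0.9, 0.9],   "SUS": [0.85]},
    {"ASQ": [0.7, 0.75], "PERUSE": [0.85, 0.9, 0.8],  "SUS": [0.75]},
]
print(star_rating(jurors))  # -> 4 with these sample numbers
```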
As of this writing, 19 Ambulatory EHRs have obtained CCHIT 2011 certification, and all of them have been rated for Usability, presumably according to the model described above. Of those, 12 achieved 5 stars, 6 achieved 4 stars and 1 achieved 3 stars. Among the 5-star winners, one can find such “legacy” products as Epic, Allscripts and NextGen. The 4- and 3-star awardees are rather obscure. So what can we learn from these results?
The futuristic EHR movement will probably dismiss these rankings as the usual CCHIT bias towards large vendors. Having gone through a full CCHIT certification process a couple of years ago, I can attest that the only large-vendor bias I observed was in the functionality criteria, which seemed tailored to large products. Big problem. However, the testing and the jurors seemed very fair and competent. Looking at the CCHIT Usability Testing Guide, I cannot detect any bias towards any particular type of software. I would encourage folks to read the guide and form their own unbiased opinions. Are we then to assume that the 5-star EHRs have high Usability and will therefore provide satisfaction?
I don’t have a clear answer to this question. Obviously, these EHRs have all their buttons, labels and text conforming to Usability industry standards, and obviously a handful of jurors watching a vendor representative go through a set of preset tasks on a WebEx screen felt comfortable that they understood and could use the system themselves without too much trouble. Many physicians feel the same way during vendor sales demos. However, efficiency and effectiveness can only be measured through repetitive use of the software in real-life settings, over long periods of time and by a variety of users. Measuring satisfaction, the third pillar of Usability, is a different story altogether. There isn’t much satisfaction about anything in the physician community nowadays, and when one is overwhelmed with patients, contemplating pay cuts every 30 days or so and bracing for the unwelcome intrusion of regulators into one’s business, it is hard to find joy in a piece of software, no matter how well aligned the checkboxes are.
The bottom line for doctors looking for EHRs remains unchanged: caveat emptor. The footnote is that the bigger EHRs are as usable as the Usability standards dictate, just as they are as meaningful as the Meaningful Use standards dictate, and when all is said and done, it is still up to the individual physician to pick the best EHR for his or her own Satisfaction.