[Your comments and corrections are very welcome.] After Day 1, I was feeling all portal-ed out. We’d heard so much talk about the amazing things we can do when our data is accessible in some sort of aggregated or federated way, but there had been no proper discussion of the technologies we might use to achieve that federated data source. Slide after slide showed a black box where data mince goes in and scrummy, patient-centric sausages come out… So I was hoping Day 2 would get down into the nitty-gritty of how we build that black box.

Some general thoughts about the data-scrunging process, concentrating on matching patients (there are other things we’d also have to match, of course). I realise there’s probably no option but to build some sort of fuzzy matching system, similar to the one I’ve struggled with for years in SCI Store. It was kind of amusing that so many vendors of English-based solutions were punting us these matching systems as something new, when every health board in Scotland has been running SCI Store for years. But my experience with SCI Store has made me very wary. If we’re going to use fuzzy matching (and by fuzzy I mean anything from matching on CHI to varying degrees of name/DOB/address matching rules) then we’re going to have to acknowledge that matching errors will happen, and I want NHS Scotland to tell me how we’re authorised to deal with them. Sysadmins should not be held responsible for the reliability of matching in such systems, as has happened with SCI Store. The matching rules should be set at a national level and be common to all systems – something like the rule cascade sketched below. Sufficient (human) resources need to be put in place to manually resolve the stuff the matching engine can’t handle, a particularly serious issue for small health boards which can’t afford full-time staff to monitor data quality. Third-party systems – including SCI Store – need to be required to present unique IDs, which would then be used in whatever global lookup system we end up with. (Not CHI: the CHI programme manager herself made clear, during one of the post-presentation discussions, that CHI should not be the sole data item on which matching is based.) The key thing is to acknowledge that (a) this is difficult, (b) it won’t be perfect and (c) it’s not down to local sysadmins to make it work correctly.
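To make that concrete, here’s a minimal sketch of the kind of rule cascade I mean. The fields, thresholds and weightings are all invented for illustration – the real rules in SCI Store (or any national system) are far more elaborate, and setting them is precisely the job that should be done nationally:

```python
from dataclasses import dataclass
from difflib import SequenceMatcher
from typing import Optional

@dataclass
class PatientRecord:
    chi: Optional[str]   # CHI number; may be missing or simply wrong
    surname: str
    dob: str             # ISO date, e.g. "1970-01-31"
    postcode: str

def name_similarity(a: str, b: str) -> float:
    # Crude string similarity; real engines use phonetic codes, nicknames etc.
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def match(a: PatientRecord, b: PatientRecord) -> str:
    """Rule cascade, strongest evidence first: 'match', 'review' or 'no-match'."""
    chi_agrees = a.chi is not None and a.chi == b.chi
    demographics = (
        name_similarity(a.surname, b.surname)
        + (1.0 if a.dob == b.dob else 0.0)
        + (1.0 if a.postcode == b.postcode else 0.0)
    ) / 3.0
    # CHI agreement alone is never enough -- the CHI programme manager's point.
    if chi_agrees and demographics >= 0.7:
        return "match"
    if chi_agrees or demographics >= 0.9:
        return "review"   # queue for a human: this is where the staffing cost lives
    return "no-match"
```

Everything that lands in the “review” pile is a person’s job, not a computer’s – which is exactly why the staffing question matters for small boards.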

I’m sorry to say that none of this stuff came up during Day 2. Basically we were presented with various sales pitches for portal technology (each with its magic black box for making sausages) from SG, Sun, Logica, Emis, Vision, PAERS and Lorenzo. The day was also very heavily biased towards the English experience. Here are the things that struck me as highlights:

Nick Booth talked about data standards and his experiences with the English system. This was a useful talk in that it introduced me to the vocabulary people are using to discuss this stuff: functional, syntactic and semantic interoperability. Functional interoperability seems to be basic messaging services (examples: N3, staff registration, x-rays). Syntactic interoperability is shared information structure and terminologies (examples: Choose and Book, electronic prescribing, the Personal Demographics Service, Summary Care Records). Semantic interoperability is a single knowledge model which enables decision support and “computer-level comprehension” (example: GP2GP). I have to admit I’m still unclear about the difference between syntactic and semantic interoperability – I need some better examples. Nick then went on to talk about the various data standards available. His conclusion was that there is no single coherent set of standards, and that the overlap between standards creates ambiguity. The existing professional bodies need to address this urgently and more health professionals need to get involved. He said standards are “like treating psoriasis in my early days as a GP – there are loads of preparations because none of them work”. I liked the fact that Nick gave a nod to the ISD XML schemas which have made things like SCI Gateway possible.
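For what it’s worth, here’s my best guess at the distinction, with invented message shapes (the SNOMED-style code is quoted from memory, so don’t rely on it):

```python
from typing import Optional

# Two systems exchanging a systolic blood pressure reading.

# Syntactic interoperability: both sides can parse the same structure,
# but "BP-SYS" is a local code the receiver must still map for itself.
msg_syntactic = {"code": "BP-SYS", "value": 142, "units": "mmHg"}

# Semantic interoperability: the code comes from a shared terminology
# (a SNOMED-CT-style concept ID for systolic BP -- quoted from memory,
# so verify it), so a generic rule can act on it without local mapping.
msg_semantic = {"code": "271649006", "system": "SNOMED-CT",
                "value": 142, "units": "mmHg"}

def hypertension_flag(msg: dict) -> Optional[bool]:
    """Decision support works only when meaning, not just structure, is shared."""
    if msg.get("system") == "SNOMED-CT" and msg["code"] == "271649006":
        return msg["value"] >= 140
    return None  # structure parsed fine, but the meaning is opaque to us
```

If that’s roughly right, syntactic interoperability gets the message through intact, while semantic interoperability is what lets a computer do something clinical with it.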

Ian McNicol told us about OpenEHR. [Update: I didn’t get this quite right – see Ian’s comment below for clarification.] I liked his line about the constantly-changing clinical requirements and multitude of clinical viewpoints: “we need to stop inflicting this on our technical colleagues”. I couldn’t agree more! Even simple concepts like blood pressure are implemented differently in different computer systems, he explained. The ISD XML schemas are good but “not the way of the future”. OpenEHR is open source, with provision for commercial licensing as well. It was established by UCL and the IP is owned by Ocean Informatics. There are implementations for both Java and .NET. A Java program called the Clinical Knowledge Manager allows clinicians to collaboratively build the archetypes which make up an OpenEHR record, keeping the techie and clinical sides separate. The goal is to create a “maximum dataset”, allowing system designers to choose which parts of the dataset they will implement. For the moment only attributes are included, but there are plans to develop archetypes for processes too. “The big problem,” said Ian, “is getting consensus across professional groups.”
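As I understand it (and bearing in mind the update above), the trick is two-level modelling: a small, stable reference model the techies maintain, plus clinician-authored archetypes that constrain it. Here’s a rough sketch of the idea – not of any actual OpenEHR API or ADL syntax:

```python
# A tiny stand-in for two-level modelling: generic observations, plus a
# clinician-authored "archetype" carrying the clinical constraints.
# Nothing here is the real openEHR reference model -- just the shape of it.

BLOOD_PRESSURE_ARCHETYPE = {
    "name": "blood_pressure",
    "items": {
        "systolic":  {"units": "mmHg", "min": 0, "max": 350},
        "diastolic": {"units": "mmHg", "min": 0, "max": 250},
    },
}

def validate(observation: dict, archetype: dict) -> list:
    """Check a generic observation against an archetype's constraints."""
    errors = []
    for item, value in observation["items"].items():
        rule = archetype["items"].get(item)
        if rule is None:
            errors.append(f"unknown item: {item}")
        elif not rule["min"] <= value <= rule["max"]:
            errors.append(f"{item}={value} outside {rule['min']}-{rule['max']}")
    return errors

# The data side stays generic; the archetype carries the clinical knowledge,
# so clinicians can argue about ranges without touching the software.
reading = {"name": "blood_pressure", "items": {"systolic": 142, "diastolic": 88}}
assert validate(reading, BLOOD_PRESSURE_ARCHETYPE) == []
```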

As a general point, I was amazed that all the portal-punters were creating their own portal frameworks. Surely the whole point is to build interoperable widgets for a common framework? One has to hope that SG puts its foot down and requires third-party developers to create widgets which will work in any portal framework – something like the contract sketched below. What would be the point of a portal if we just end up having to switch *between* portals to access all the systems we need? There’s also a degree of complacency in not developing widgets for existing platforms, e.g. iGoogle, Yahoo, etc. We’ve got loads of portal frameworks; it’s the underlying data and the presentation widgets we’re lacking.
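To show what I mean, here’s what a common widget contract might look like. The interface and every name in it are entirely invented for illustration – nothing any vendor actually proposed:

```python
from abc import ABC, abstractmethod

class PortalWidget(ABC):
    """The contract a vendor would implement once, for every portal."""

    @abstractmethod
    def render(self, patient_id: str) -> str:
        """Return an HTML fragment for the given patient."""

class LabResultsWidget(PortalWidget):
    def render(self, patient_id: str) -> str:
        return f"<div>lab results for patient {patient_id}</div>"

def portal_page(widgets: list, patient_id: str) -> str:
    """Any compliant portal framework just composes the widgets it's given."""
    return "\n".join(w.render(patient_id) for w in widgets)

print(portal_page([LabResultsWidget()], "1234567890"))
```

Talking of which…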

Emis and Vision gave a shared presentation. They plan to build a “medical interoperability gateway” (MiG) which will use their own “Open HR” data format to pass records between Vision, Emis and any other software vendor who wants to join the fun. Note that they declined to use established standards such as OpenEHR because it was cheaper and easier to get a product to market using their own data format. Hmm. The MiG is a broker which routes messages between the different systems, essentially as proposed by the eHealth strategy published last year (this is what Cathy Kelly’s keynote called the Record Locator Service, NHS Scotland’s name for the “magic glue”). To quote the presenter, the MiG has “lots of ways of working out where to route requests”. Hmmm. Other data formats will be available through transformation services (presumably XSLT) – not an encouraging thought, given the difficulty both outfits have had in serialising their data to the ISD referral schema required by SCI Gateway.
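Nobody showed us inside the black box, so here’s my guess at the overall shape of a record-locator-style broker. Every name, format and lookup is invented for illustration; none of it is the actual MiG:

```python
# A toy record-locator-style broker: find which system holds the record,
# fetch it, and transform the payload into the requester's format.
# Entirely guesswork about what sits inside the MiG's black box.

RECORD_LOCATOR = {            # patient id -> system holding the record
    "1234567890": "emis",
    "0987654321": "vision",
}

SOURCE_SYSTEMS = {            # stand-ins for calls out to each supplier
    "emis":   lambda pid: {"format": "openhr", "pid": pid, "meds": ["aspirin"]},
    "vision": lambda pid: {"format": "openhr", "pid": pid, "meds": ["statin"]},
}

TRANSFORMS = {                # (from, to) -> converter; stand-in for XSLT
    ("openhr", "isd-xml"): lambda rec: f"<record pid='{rec['pid']}'/>",
}

def route_request(patient_id: str, want_format: str = "openhr"):
    system = RECORD_LOCATOR.get(patient_id)
    if system is None:
        raise LookupError(f"no known record location for {patient_id}")
    record = SOURCE_SYSTEMS[system](patient_id)
    if record["format"] != want_format:
        record = TRANSFORMS[(record["format"], want_format)](record)
    return record

print(route_request("1234567890", want_format="isd-xml"))
```

The interesting (and unanswered) questions all live in that RECORD_LOCATOR table: who populates it, how it copes with the matching problems above, and what happens when it’s wrong.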

Will Jones’ presentation on document formats in Lorenzo was a bit beyond me, I’m afraid; I just didn’t know enough about the issues. However, he did raise some issues which were mentioned nowhere else in the conference but should have been: version control of documents and data, rich-text formatting inside data, and synchronisation.

Paul Goss, a “senior consultant in ehealth”, seemed determined to alienate his audience right from the start. First he asked if we’d read today’s Telegraph – “as with most healthcare workers you’re probably Guardian readers”. In case the techies felt left out, he said of data federation that it was something “techies will tell you is easy, but isn’t”. Grrrr. He then told us all about Logica’s system in use in England. Incidentally, Logica call their magic glue the “Logica eCarelogic Infrastructure”.

The final talk of the day was from Claudia Pagliari. She discussed trends in health and the convergence between health, consumer electronics and infotainment. I don’t think Claudia is a regular social web user or gamer, and to my ears the talk was therefore a little naive, drawing more profound conclusions from tech trends than is justified IMO. She also seemed unaware of some of the serious technical issues with using unencrypted transmission services such as email and SMS for consulting, not to mention the drift away from such services towards IM and social websites. However, a lot of it was good, including recognition of the burgeoning telehealth field and the urgent need for the NHS to start engaging its patients through improved information sources and social networks.

My thoughts: we need to learn from our mistakes with Web 1.0, when the NHS created online content “too little, too late”, didn’t take the maintenance of online information sufficiently seriously, failed to provide decent co-ordination or decent search facilities, and didn’t understand how to integrate with commonly-used stuff like Google Search. If we’re not careful, the Web 2.0 shift to collaborative information sources and communities will similarly leave us behind… A couple more points Claudia made that I really liked: (1) there’s a trend from “illness” to “wellness”, something I’ve heard from a GP as well; (2) we should be concentrating on management of long-term conditions, since people with long-term conditions are the main service-users and will benefit most from better access to their medical data and health information.

So that was it. Thanks to all for making it a good conference, especially Paul Woolman et al for setting it up. Thanks to Ken M, Nigel F, Jim R, Pauline S, Jon H and Mark H who serially kept me company.
