In Brad Pitt’s recent Oscar-nominated film ‘Moneyball’, Jonah Hill plays Peter Brand, a young Yale economics graduate who brings a radical approach to scouting baseball players based on ‘Sabermetrics’ (derived from the acronym ‘SABR’, which stands for the Society for American Baseball Research). Brand’s approach is to rely almost exclusively on players’ batting statistics to determine performance and selection, rather than on the team’s scouts and their intuition. It worked: in this true story the Oakland A’s went on to win an unprecedented 20 consecutive games in 2002, setting the American League record.
Of course, no health organisation would rate its clinical performance on data alone, would it? Actually, some might if a) the data existed and b) they told you something about outcome. Avedis Donabedian, the original guru of healthcare quality, argued in his classic 1966 paper Evaluating the Quality of Medical Care1 that “outcomes, by and large, remain the ultimate validation of the effectiveness and quality of medical care”. The trouble has been the lack of available UK data on local healthcare outcomes, and hence performance, both within organisations (ie for clinicians) and outside them (for the public). Internal audit data have traditionally been used by clinicians to monitor standards, but few data have been available systematically to allow comparative analysis of clinical performance by clinicians or hospitals. The game-changer came in 2001 with the publication by the independent Dr Foster Intelligence website2 of mortality rates for general hospitals. Now, comparing a range of routine outcome data has become commonplace for most disease groups, exemplified par excellence in Muir Gray’s NHS Atlas of Variation3, where differences in practice and outcome, such as amputation rates by area of the country, can be readily compared.
But what about mental health? Curiously, most available mental health patient data are focussed on measured activity and processes rather than outcomes. There is no equivalent Dr Foster appraisal for mental health trusts. This might reflect the difficulty in determining what a ‘good’ outcome is (other than suicide being clearly the worst outcome); for example, is a good outcome symptom reduction, good quality of life, or a combination of the two? Nevertheless, the lack of outcome data in mental health, for those commissioning services if not for clinicians and the public, is surprising given the societal burden of mental disorder, never mind the financial cost, especially, say, in forensic settings (medium secure psychiatric care alone swallows 1% of the NHS budget).
So what should mental health organisations do? As the business meme goes, ‘better information leads to better decisions’, and providing feedback on outcome, and hence performance, to clinicians has been on the agenda for over a decade, featuring for example among the US Institute of Medicine’s “top six global health challenges”4. One healthcare provider, Kaiser Permanente in the US, grabbed the opportunity to feed back clinical data to its staff through an elaborate informatics approach, although it apparently almost bankrupted itself in the development process. Kaiser clinicians having access to, and control of, data to monitor their performance is now central to the quality of their patient care and the organisation’s admired efficiency.
All this might improve in England too with the publication of the newest NHS Outcomes Framework5, where a range of agreed indicators will be used to help measure outcomes. However, some of this is predicated on the timely availability of accurate data, while in mental health settings the most important outcomes might be the least easy to measure. But that is where patients come in, with a greater emphasis on so-called Patient-Reported Outcome Measures (PROMs). Interestingly, patients may have known ‘what good looks like’ all along. Last month Imperial College published analyses of patients’ ratings of general hospitals on the NHS Choices website since 20086, and sure enough the better-rated hospitals tend to have lower death and re-admission rates, while hospitals rated the cleanest have lower MRSA rates.
So, what is a busy clinician to make of it all? Comparisons of services are potentially scary, creating “an atmosphere of pressure” according to the government’s transparency tsar Tim Kelsey (a former Dr Foster founder). Most clinicians think they are doing a good job but often lack the resources, or the clinical data, to prove it. Will they welcome the opportunity (‘pressure’) to have their stats pored over as they come out to bat?
Dr John Milton
Consultant Forensic Psychiatrist & Forensic Research Lead
Nottinghamshire Healthcare NHS Trust
1 Donabedian A (1966) Evaluating the Quality of Medical Care. Available via the Milbank Quarterly (2005): http://onlinelibrary.wiley.com/doi/10.1111/j.1468-0009.2005.00397.x/abstract
2 Dr Foster Intelligence http://drfosterintelligence.co.uk/
3 NHS Atlas of Variation http://www.rightcare.nhs.uk/index.php/atlas/atlas-of-variation-2011/
4 Institute of Medicine of the National Academies: Crossing the Quality Chasm: A New Health System for the 21st Century http://www.iom.edu/Reports/2001/Crossing-the-Quality-Chasm-A-New-Health-System-for-the-21st-Century.aspx
5 NHS Outcomes Framework 2012-13
6 Greaves F et al (2012) Associations Between Web-Based Patient Ratings and Objective Measures of Hospital Quality. Arch Intern Med 172: 435-436. doi:10.1001/archinternmed.2011.1675