Friday, 11 November 2011
Consequences... or lack thereof
So Nadine Dorries thinks that Tom Watson "let himself down" (58mins 12secs) and "jumped the shark" (58mins 22secs) by calling James Murdoch a "mafia boss".
I wonder what she thinks about herself saying that "you let 2.2 million in" (26mins 20secs) to Rachel Reeves, shadow Chief Secretary to the Treasury, last night (Thursday 10 November, 2011) on Question Time.
This is an outrageous, wildly inaccurate claim. Googling "2.2 million immigration UK" returns a whole range of sources scotching it. 2.2 million was actually the net migration over the 12 years of the previous Labour government. It has absolutely nothing to do with a breach in security, or people entering the country without the authority's knowledge, or without having the right to do so (the context in which her comment was made).
Some relevant snippets from the Google search below:
"But under Labour, net migration to Britain was close to 200,000 per year, for most years since 2000. As a result, over Labour’s time in office net migration totalled more than 2.2 million people – more than double the population of Birmingham"
"The prime minister will open his speech, in Hampshire, by saying that immigration is a hugely emotive subject that must be handled with sensitivity. But he will then say that Labour presided over the "largest influx" of immigration in British history, which saw 2.2 million more people settling in Britain between 1997 and 2009 than leaving to live abroad".
"Between 1997 and 2009, 2.2 million more people came to live in Britain than those who left to live abroad, Mr Cameron will say"
My mum has a number of sayings that she's fond of - one of which is "No horse play on the stairs", but that's not really relevant here. Another is "There'll be consequences" - she's particularly keen on this one when dealing with naughty grandchildren.
I guess what I'm increasingly wondering is what are the consequences for MPs making false claims?
No, not false expense claims - we all know those can now carry consequences. I mean false claims like the one last night. Or these about the NHS. And also, on the same sort of topic, what are the consequences of refusing to answer straightforward questions, as highlighted by this sorry episode?
Are there any consequences? And if not, why on earth not?!?
Chris
P.S. I guess if I'm being strictly fair, Dorries could claim she didn't say that Labour explicitly let in 2.2 million illegal immigrants. But look at the context in which she said it, and decide for yourself. Also, the fact it went unchallenged at the time is massively damaging, as chances are, no-one cares enough to correct / clarify it, so now it's just out there... festering.
P.P.S. Oh, and don't get me started about the "statistics" (24mins 40secs; 25mins 30secs) that Dorries gave... I await full publication and open scrutiny of them! And Michael Moore MP seemed to say that the pilot was still being "evaluated" (29mins 38secs)...
Wednesday, 9 November 2011
Don't have a cow, man
Above is a particularly rubbish attempt to link in to a post about Barts and The London NHS Trust. And here's a totally unfair, knee-jerk generalisation for you... what is it with journalists? I'm not on a crusade and I haven't got an axe to grind, but I keep finding myself in conflict with them.
And my crime? Well, reading what they write... and, um, querying what looks odd or surprising to me... and, well, that's about it really. Naively, I would think that's exactly what they'd want. When I'm a sports journalist (and surely it can only be a matter of time before a major newspaper comes knocking at my door, begging me to watch every Everton, Bath and Essex game on their time and money), I would want the self-affirmation provided by knowing that someone reads what I write, and engages with the topics that I write about. But hey, as I say, what do I know?
I've got two sagas rumbling on at the moment. The first is looking like being interminable (the Independent have now amended the offending article, but they've made it worse and more inaccurate than the original, which is pretty special of them). And the second is the topic of this blog posting. Here goes...
On the 26th of August 2011, this article was published by The Guardian - "One in 13 A&E patients return within a week - despite being seen". Now, various things about this article irritated me - for example, it muddies the picture between different data sources without any acknowledgement or proper explanation, and it presents one of those data sources, which is clearly published as provisional and experimental, as if it is robust and established. Beyond all that, there was an absolutely huge clanger in it.
It's not entirely dissimilar to the BBC clanger about A&E waiting times, with The Guardian article reporting that:
"Barts and London NHS Trust saw 95% of people waiting more than eight hours in A&E"
Now, because I know that the vast majority of people attending A&E nationally wait less than four hours, this claim leapt out at me. So in the interest of accuracy, I queried it:
Barts & London - 95% of people waiting longer than 8 hours in A&E?!? Er, >95% spent <4 hours.
And obviously Twitter isn't always the best medium to get your point across, so I simplified my point as well (I also employed the patented Ben Goldacre tactic of being polite and humble... partly to be nice, and partly because it may well be that I've made the stuff up!):
I could be wrong, but Barts and London para looks way off beam, and I make it 165,252 waiting >4 hours, not 165,279 gu.com/p/3xfyk/tw
Oh, and because I'm an annoying pedant, I couldn't resist one more:
Final point, promise. No mention in the article of the A&E data being experimental and provisional.
Now, I was going to give you a blow by blow, tweet by tweet account of what happened over the next few days and weeks, but ultimately it gets very dull and repetitive. And at least a little bit odd. What I will do is give you a brief summary.
It took one day for the major mistake in the article to be pointed out, but it took over a month for it to be corrected... I say corrected, it would be more accurate to say made a lot less wrong.
During those five weeks, I was repeatedly told that I was wrong and the article was right, that I should take it up with the Department of Health, and that I should declare my interests! I in turn repeatedly explained the massive difference between '95%' and the '95th percentile', and why it's important to recognise data quality caveats.
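The difference I kept explaining can be sketched with a few lines of Python and some made-up numbers (these are invented for illustration - they are not the real Barts records, which aren't public at patient level):

```python
import math

def percentile(sorted_waits, p):
    """Nearest-rank percentile: the wait of the person at position
    ceil(p/100 * n) when everyone is lined up from shortest to longest wait."""
    rank = max(1, math.ceil(p / 100 * len(sorted_waits)))
    return sorted_waits[rank - 1]

# 1,000 imaginary patients: 949 wait a couple of hours, 51 get stuck for ages
# (the sort of tail a midnight default departure time would create).
waits = sorted([129] * 949 + [521] * 51)

median = percentile(waits, 50)                            # 129 minutes
p95 = percentile(waits, 95)                               # 521 minutes
share_over_8h = sum(w > 480 for w in waits) / len(waits)  # 0.051

# A 95th percentile wait of roughly nine hours happily coexists with ~95% of
# patients waiting about two hours - only ~5% waited more than eight hours.
```

Same numbers, two wildly different sentences: "the 95th percentile wait was nine hours" versus "95% waited more than eight hours". Only the first is true here.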
When numerous tweets failed to get the message across, I also spent the time to write two long emails explaining absolutely everything. The email exchange is below (skim read or skip past it, see if I care! It's fascinating, honest... ahem):
Good afternoon

This should hopefully be easier, freed from the 140 character tyranny.

As you know, DH published new provisional A&E indicators on 26 August, 2011. They've been open and honest about the provisional and experimental nature of the data:

"These A&E HES data are published as experimental statistics to note the shortfalls in the quality and coverage of records submitted via the A&E commissioning data set. The data used in these reports are sourced from Provisional A&E HES data, and as such these data may differ to information extracted directly from Secondary Uses Service (SUS) data, or data extracted directly from local patient administration systems.

Provisional HES data may be revised throughout the year (for example, activity data for April 2011 may differ depending on whether they are extracted in August 2011, or later in the year). Indicator data published for earlier months have not been revised using updated HES data extracted in subsequent months."

The excel workbook is here, with the IC repeating the caveats:

http://www.ic.nhs.uk/statistics-and-data-collections/hospital-care/accident-and-emergency-hospital-episode-statistics-hes/provisional-accident-and-emergency-quality-indicators-for-england-experimental-statistics-by-provider-for-april-2011

"These A&E HES data are published as experimental statistics to note the shortfalls in the quality and coverage of records submitted via the A&E commissioning data set. The data used in these reports are sourced from Provisional A&E HES data, and as such these data may differ to information extracted directly from Secondary Uses Service (SUS) data, or data extracted directly from local patient administration systems."

One of the key facts listed is:

"Several organizations reported data that did not meet the data quality checks required by the A&E indicators. The 95th percentile and longest single wait information are particularly sensitive to poor data quality, outliers and data definitional issues, which contributes to why some unusually high values may be observed for these measures."

On to the data itself, and Barts in particular. Barts - row 22. Time to departure - columns AT to BA.

Because the data is provisional and experimental, DH and the IC have gone heavy on data quality measures, which is a good thing. They want to drive up the quality before deeming the new data set to be properly robust. They've also included helpful footnotes, one of which states:

"The 95th percentile is particularly sensitive to poor data quality and definitional issues, which is why some unusually high values may be observed"

Sorry to bang on about the data quality caveats, but it is important. Your article makes it sound like it is an established data source, which it really isn't - and one of your twitter responses mentions data revisions, which are standard. But the caveats published along with the new A&E data aren't standard.

Of all the trusts listed, Barts has by far the highest proportion of departure times recorded at exactly midnight - 5.1%, compared to a national average of just 0.2%. That means they've almost certainly got a data quality problem.

Their median waiting time is 129 minutes (roughly 2 hours), shorter than the national average of 131 minutes (roughly 2 hours). Their 95th percentile wait is 521 minutes (roughly 9 hours), much longer than the national average of 258 minutes (roughly 4 hours).

So we know that there's a data quality problem with 5% of Barts' data, and it most likely follows that there will be a data quality problem with Barts' 95th percentile figure.

Also, if Barts had seen 100 patients in A&E (just to keep it simple - they actually saw 11,541), and we listed them all out in order of shortest to longest wait, then the indicators are saying that:

The middle (median) person waited 129 minutes (roughly 2 hours) - in reality, Barts' middle person was actually person 5,771

The 95th (95th percentile) person waited 521 minutes (roughly 9 hours) - in reality, Barts' 95th percentile person was actually person 10,964

That's very, very different to:

"Barts and London NHS Trust saw 95% of people waiting more than eight hours in A&E."

Even if we ignore the data quality concerns (which we shouldn't), all that could actually be said is "Barts and The London NHS Trust saw 5% of people waiting more than eight hours in A&E". We really shouldn't even say that, though, as we know that 5% of Barts' data looks dodgy (5.1% of records have a departure time of exactly midnight, 00:00 - I'm willing to wager a decent sum that that's not a departure time, it's a default setting).

Given that Barts' median (middle) waiting time is better than the national average, they really shouldn't be singled out. And it would be good if the provisional and experimental nature of the data was mentioned in your article.

Hope the above is a better, more comprehensive explanation than my poor twitter based attempts.

Cheers

Chris
Thanks for this Chris
Sorry, have been busy with another set of things here.

I completely see your point. The 95th percentile is actually the data measure used by DoH, so apologies for not taking the time to comb through the data. DoH pointed out that Barts was the first trust on that list not to have the 24-hour error that systematically wrecks the data. That being said, it must contain a fair few of these, as the average time to departure is so high. So really the story is:

"Dodgy data set used by DoH shows Barts' top 5% longest waits were 9 hours."

I will correct when next back in the office (Tuesday, I think). Why do you think they did not use the median figure?
Thanks
Surprised that DH highlighted Barts - they should really understand the data.
Mean, median, mode, quartiles, percentiles... horses for courses, to be honest.
I personally think that you get the most value and accuracy when you report multiple measures in conjunction.
A&E is high volume, so I'd go mean (the average gives you your headline, instantly understandable figure), and then I'd keep an eye on the median, 25th and 75th percentiles (to give you an understanding of the distribution). And I'd throw in the 95th percentile if (!) I was confident that there weren't data quality issues that undermined the measure. The 95th percentile gives you a good idea about the 'tail' of your distribution.
To be honest, as the data set is provisional, experimental and in its infancy, I'd go heavy on the data quality measures and working with the trusts to understand any oddities.
Barts being a case in point!
Thanks again for taking the time to go through it all. Good to get a correction.
Cheers
Chris
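The mix of measures recommended in the email above (mean for the headline, median and quartiles for the distribution, the 95th percentile only when data quality allows, plus a check for the midnight-default problem) could be sketched like this. The function names and records are mine, invented for illustration - this isn't anything DH or the IC actually publish:

```python
import statistics

def summarise(waits_minutes):
    """Report several summary measures together, rather than leaning on one."""
    q1, q2, q3 = statistics.quantiles(waits_minutes, n=4)
    p95 = statistics.quantiles(waits_minutes, n=20)[-1]
    return {
        "mean": statistics.mean(waits_minutes),  # instantly understandable headline
        "median": q2,                            # the middle of the distribution
        "25th/75th": (q1, q3),                   # its spread
        "95th": p95,                             # its tail - quote only if quality allows
    }

def midnight_default_rate(departure_times_minutes):
    """Share of records with a departure time of exactly 00:00 - a plausible
    sign of a system default rather than a genuine departure."""
    return sum(1 for t in departure_times_minutes if t == 0) / len(departure_times_minutes)
```

A trust with a high `midnight_default_rate` (Barts' was 5.1% against a national 0.2%) is exactly the sort whose `"95th"` figure you'd withhold.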
So what, I don't hear you ask, was the end result of my patient, comprehensive explanation of why Barts had been unfairly and erroneously singled out? On the 30th of September 2011, one line of the article was amended to read:
"Barts and London NHS Trust saw 5% of people waiting more than eight hours in A&E"
What a load of wasted effort on my part. And how arrogant to deny, deny, fob off, fob off, deflect, deflect, and ultimately only partially correct the original major mistake after a ridiculous length of time. And not take on board any of the other points.
Am I asking too much? Do I expect too much? Am I being incredibly petty? Maybe... maybe not.
Goodnight journalism, wherever you are.
Chris
P.S. You may have guessed by the name of the post and its content that I've struggled to give it a coherent structure. I've also wrestled with whether to post it at all (hence the time elapsed), as there's obviously a clear argument that a journalist taking the time to reply is better than one that just ignores you completely. I absolutely concede that, but ultimately I guess my point is that in my admittedly very limited experience there seems to be a real resistance from journalists to being questioned. Read my stuff - great. Agree with me - fantastic. Question me - what, what?!? Who are you? Don't you know who I am? Go away. I'm very busy etc. That's a real shame. Helping correct articles is surely a good thing, and the number one concern should always be the wronged party - in this case the hard working staff at Barts and The London's A&E department. Particularly as they are so open and transparent about how they are performing... if you don't believe me, look here!
P.P.S. It would also be wrong of me not to mention the good - George Monbiot, in my humble opinion, is an absolute star. Engaging, provocative, and above all - well researched, and approachable. Look at the glorious selection of heavily referenced articles on his website:
http://www.monbiot.com/
Nothing special, you might think. But compared to others, it is a revelation. And he goes further... a lot further. A comprehensive biography, as well as helpful career advice, AND a full registry of interests:
http://www.monbiot.com/about/
http://www.monbiot.com/career-advice/
http://www.monbiot.com/registry-of-interests/
Thursday, 18 August 2011
Once, twice, three times a headline
Hello, is it an accurate headline you're looking for?
On the afternoon of Friday 12 August, the BBC tweeted the below:
I could pretend that it made me think, wow - that's shocking news! But because it is just way, way, way too far fetched to be remotely believable, it immediately made me think, oh dear, somebody's made a major gaffe.
The previous day (Thursday 11 August), the Department of Health had released the latest A&E performance data, covering Quarter 1 of the 2011/12 financial year (1 April 2011 to 30 June 2011).
The previous Government's target was for a four-hour maximum wait in A&E from arrival to admission, transfer or discharge. In practice, this used to be measured by using a 98% threshold - in other words, an NHS trust was deemed to have achieved the target if 98% or more of people attending A&E waited less than four hours.
With the change in Government, the 'target' remained. Ish. 'Target' is now a dirty word and a big no-no - it smacks of the previous regime. So the 'target' became a 'standard', and the threshold for achievement was relaxed from 98% to 95% in June 2010.
Now that we're in 2011, and a whole new financial year - the first full financial year since the change in Government - we have the Department of Health further distancing themselves from the culture of targets.... by.... well...... by using much more creative language.
We don't have targets any more.
We don't even have standards.
What we have is IPMfNOs.
Ok, so I made up the acronym, but I can only dream of coming up with the satirical masterpiece that is.... ready for it.
Integrated performance measures for national oversight.
So targets then? No, no, no - integrated performance measures for national oversight.
Standards? Nope, you're not listening. Pay attention at the back - integrated performance measures for national oversight.
Riiiiight. You say integrated performance measures for national oversight. I say target. And I'll be in Scotland afore ye.
The reason why the BBC's tweet is too far fetched to be remotely believable is that 3.6 million people attended major A&E units between April and June 2011. For the time that they had to wait to almost double would be a mind-blowingly catastrophic deterioration in service.
What the latest figures from the Department of Health actually show is that 95.5% of the 3.6 million people attending major A&E units between 1 April 2011 and 30 June 2011 waited less than four hours.
During the same period last year (1 April 2010 to 30 June 2010), 97.7% of the 3.6 million people attending major A&E units waited less than four hours.
A deterioration in performance? Yes.
A doubling in waiting times? NO.
In Quarter 1 last year, 84,439 people waited longer than four hours. In Quarter 1 this year, 161,422 people waited longer than four hours.
Ahhhh! So that's the almost-doubling the shockingly inaccurate headline was driving at! We've got almost twice as many people waiting more than four hours this year as last year.
Interestingly, the current official figures (much more detailed data is now available, but it's very much still in its experimental infancy) don't capture how long individual people actually wait - they only look at the four hour barrier, and report those waiting less than four hours and those waiting more than four hours. So actually the 161,422 people waiting more than four hours this year might (incredibly unlikely, but might) all have only waited four and a half hours, while the 84,439 people waiting more than four hours last year might (again, incredibly unlikely, but might) have waited seven hours.
Unrealistic example, granted. But the point is that we just don't know. All we know is the volume under four hours and the volume over four hours.
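The arithmetic behind the misleading headline, using the figures quoted above (the 3.6 million is a rough attendance figure for both quarters), fits in a few lines:

```python
# Major A&E attendances, April-June, roughly the same in both years.
attendances = 3_600_000

over_4h_2010 = 84_439   # Q1 2010/11: people waiting more than four hours
over_4h_2011 = 161_422  # Q1 2011/12

# The COUNT of long waiters nearly doubled...
ratio = over_4h_2011 / over_4h_2010
print(round(ratio, 2))  # 1.91

# ...but as a share of attendances it went from ~2.3% to ~4.5% - i.e. roughly
# 97.7% and then 95.5% still waited under four hours. Nothing here measures
# how long any individual actually waited.
print(f"{over_4h_2010 / attendances:.1%}")  # 2.3%
print(f"{over_4h_2011 / attendances:.1%}")  # 4.5%
```

The only thing that "nearly doubled" is the number of people on the wrong side of the four-hour line - not anyone's waiting time.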
I replied to the BBC after their tweet:
Seriously BBC that is a shockingly inaccurate headline @bengoldacre @undunc RT @BBCNews A&E waiting times nearly double bbc.in/qaBGhr
They didn't acknowledge me, but the power of the Goldacre is great, particularly when he re-tweets you:
yup -> RT @Do0g1e: shockingly inaccurate headline RT @BBCNews A&E waiting times nearly double http://bbc.in/qaBGhr
A dozen or so re-tweets later, the BBC seemed to get the message (although still no acknowledgement... and they still haven't deleted their original tweet).
Their story headline went from the diabolical:
To the rubbish and cryptic:
To the still misleading and far too generic:
And that's what's currently up there.
Three swings, three strikes.
And I used to take note of the 'Last updated' tag, believing it to be a trustworthy way of knowing whether and when stories had been amended. The fact that all three versions of the story purport to be 'Last updated at 15:11' is clearly misleading nonsense.
All in all, a pretty poor show, and a totally unfounded and easily avoidable slur on the performance of the NHS.
Always dangerous to be sanctimonious, but you'd have to think that a 10 second reality check would have prevented the original error. You don't need to be a health expert. You just need to think for a second and say, "So does this mean that people are waiting twice as long as they used to?".
If the answer is a resounding no, then tailor your headline appropriately. And if you revise it (repeatedly), try to remember to mention A&E, and the crucial four hour barrier.
It, you know, helps.
Chris