Friday, 11 November 2011

Consequences... or lack thereof

So Nadine Dorries thinks that Tom Watson "let himself down" (58mins 12secs) and "jumped the shark" (58mins 22secs) by calling James Murdoch a "mafia boss".

I wonder what she thinks about herself saying that "you let 2.2 million in" (26mins 20secs) to Rachel Reeves, shadow Chief Secretary to the Treasury, last night (Thursday 10 November, 2011) on Question Time.

This is an outrageous, wildly inaccurate claim. Googling "2.2 million immigration UK" returns a whole range of sources scotching it. 2.2 million was actually the net migration over the 12 years of the previous Labour government. It has absolutely nothing to do with a breach in security, or people entering the country without the authorities' knowledge, or without having the right to do so (the context in which her comment was made).

Some relevant snippets from the Google search below:

"But under Labour, net migration to Britain was close to 200,000 per year, for most years since 2000. As a result, over Labour’s time in office net migration totalled more than 2.2 million people – more than double the population of Birmingham"

"The prime minister will open his speech, in Hampshire, by saying that immigration is a hugely emotive subject that must be handled with sensitivity. But he will then say that Labour presided over the "largest influx" of immigration in British history, which saw 2.2 million more people settling in Britain between 1997 and 2009 than leaving to live abroad".

"Between 1997 and 2009, 2.2 million more people came to live in Britain than those who left to live abroad, Mr Cameron will say"

My mum has a number of sayings that she's fond of - one of which is "No horse play on the stairs", but that's not really relevant here. Another is "There'll be consequences" - she's particularly keen on this one when dealing with naughty grandchildren.

I guess what I'm increasingly wondering is what are the consequences for MPs making false claims?

No, not false expense claims - we all know that can now carry consequences. I mean false claims like the one last night. Or these about the NHS. And also, on the same sort of topic, what are the consequences of refusing to answer straightforward questions, as highlighted by this sorry episode?

Are there any consequences? And if not, why on earth not?!?


P.S. I guess if I'm being strictly fair, Dorries could claim she didn't say that Labour explicitly let in 2.2 million illegal immigrants. But look at the context in which she said it, and decide for yourself. Also, the fact it went unchallenged at the time is massively damaging, as chances are, no-one cares enough to correct / clarify it, so now it's just out there... festering.

P.P.S. Oh, and don't get me started about the "statistics" (24mins 40secs; 25mins 30secs) that Dorries gave... I await full publication and open scrutiny of them! And Michael Moore MP seemed to say that the pilot was still being "evaluated" (29mins 38secs)...

Wednesday, 9 November 2011

Don't have a cow, man

Above is a particularly rubbish attempt to link in to a post about Barts and The London NHS Trust. And here's a totally unfair, knee-jerk generalisation for you... what is it with journalists? I'm not on a crusade and I haven't got an axe to grind, but I keep finding myself in conflict with them.

And my crime? Well, reading what they write... and, um, querying what looks odd or surprising to me... and, well, that's about it really. Naively, I would think that's exactly what they'd want. When I'm a sports journalist (and surely it can only be a matter of time before a major newspaper comes knocking at my door, begging me to watch every Everton, Bath and Essex game on their time and money), I would want the self-affirmation provided by knowing that someone reads what I write, and engages with the topics that I write about. But hey, as I say, what do I know?

I've got two sagas rumbling on at the moment. The first is looking like being interminable (the Independent have now amended the offending article, but they've made it worse and more inaccurate than the original, which is pretty special of them). And the second is the topic of this blog posting. Here goes...

On the 26th of August 2011, this article was published by The Guardian - "One in 13 A&E patients return within a week - despite being seen". Now, various things about this article irritated me - for example, it conflates different data sources without any acknowledgement or proper explanation, and it presents one of those data sources, which is clearly published as provisional and experimental, as if it is robust and established. Beyond all that, there was an absolutely huge clanger in it.

It's not entirely dissimilar to the BBC clanger about A&E waiting times, with The Guardian article reporting that:

"Barts and London NHS Trust saw 95% of people waiting more than eight hours in A&E"

Now, because I know that the vast majority of people attending A&E nationally wait less than four hours, this claim leapt out at me. So in the interest of accuracy, I queried it:

 Chris Mason 

Barts & London - 95% of people waiting longer than 8 hours in A&E?!? Er, >95% spent <4 hours.

And obviously Twitter isn't always the best medium to get your point across, so I simplified my point as well (I also employed the patented Ben Goldacre tactic of being polite and humble... partly to be nice, and partly because it may well be that I've made the stuff up!):

 Chris Mason 

I could be wrong, but Barts and London para looks way off beam, and I make it 165,252 waiting >4 hours, not 165,279 

Oh, and because I'm an annoying pedant, I couldn't resist one more:

 Chris Mason 

Final point, promise. No mention in the article of the A&E data being experimental and provisional.

Now, I was going to give you a blow by blow, tweet by tweet account of what happened over the next few days and weeks, but ultimately it gets very dull and repetitive. And at least a little bit odd. What I will do is give you a brief summary.

It took one day for the major mistake in the article to be pointed out, but it took over a month for it to be corrected... I say corrected, it would be more accurate to say made a lot less wrong.

During those five weeks, I was repeatedly told that I was wrong and the article was right, that I should take it up with the Department of Health, and that I should declare my interests! I in turn repeatedly explained the massive difference between '95%' and the '95th percentile', and why it's important to recognise data quality caveats.

When numerous tweets failed to get the message across, I also spent the time to write two long emails explaining absolutely everything. The email exchange is below (skim read or skip past it, see if I care! It's fascinating, honest... ahem):

Good afternoon

This should hopefully be easier, freed from the 140-character tyranny.

As you know, DH published new provisional A&E indicators on 26 August, 2011:

They've been open and honest about the provisional and experimental nature of the data:

These A&E HES data are published as experimental statistics to note the shortfalls in the quality and coverage of records submitted via the A&E commissioning data set. The data used in these reports are sourced from Provisional A&E HES data, and as such these data may differ to information extracted directly from Secondary Uses Service (SUS) data, or data extracted directly from local patient administration systems.

Provisional HES data may be revised throughout the year (for example, activity data for April 2011 may differ depending on whether they are extracted in August 2011, or later in the year). Indicator data published for earlier months have not been revised using updated HES data extracted in subsequent months.

The Excel workbook is here, with the IC repeating the caveats:

These A&E HES data are published as experimental statistics to note the shortfalls in the quality and coverage of records submitted via the A&E commissioning data set. The data used in these reports are sourced from Provisional A&E HES data, and as such these data may differ to information extracted directly from Secondary Uses Service (SUS) data, or data extracted directly from local patient administration systems

One of the key facts listed is:

Several organizations reported data that did not meet the data quality checks required by the A&E indicators. The 95th percentile and longest single wait information are particularly sensitive to poor data quality, outliers and data definitional issues, which contributes to why some unusually high values may be observed for these measures

On to the data itself, and Barts in particular.

Barts - row 22.
Time to departure - columns AT to BA.

Because the data is provisional and experimental, DH and the IC have gone heavy on data quality measures, which is a good thing. They want to drive up the quality before deeming the new data set to be properly robust. They've also included helpful footnotes, one of which states:

"The 95th percentile is particularly sensitive to poor data quality and definitional issues, which is why some unusually high values may be observed"

Sorry to bang on about the data quality caveats, but it is important. Your article makes it sound like it is an established data source, which it really isn't - and one of your twitter responses mentions data revisions, which are standard. But the caveats published along with the new A&E data aren't standard.

Of all the trusts listed, Barts has by far the highest proportion of departure times recorded at exactly midnight - 5.1%, compared to a national average of just 0.2%. That means they've almost certainly got a data quality problem.
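That kind of sanity check is simple to run yourself. Here's a minimal sketch in Python, using made-up departure times (not the real Barts HES extract) to show how the midnight-default share is calculated:

```python
# Hypothetical departure times as "HH:MM" strings -- illustrative only,
# not the real Barts HES extract.
departures = ["13:45"] * 95 + ["00:00"] * 5

# Share of records with a departure time of exactly midnight.
midnight_share = departures.count("00:00") / len(departures)

print(f"{midnight_share:.1%} of departure times are exactly midnight")
# 5.0% of departure times are exactly midnight
```

A share of 5% against a national average of 0.2% is the sort of outlier that screams "system default", not "genuine departure time".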

Their median waiting time is 129 minutes (roughly 2 hours), shorter than the national average of 131 minutes (roughly 2 hours).

Their 95th percentile wait is 521 minutes (roughly 9 hours), much longer than the national average of 258 minutes (roughly 4 hours).

So we know that there's a data quality problem with 5% of Barts' data, and therefore it most likely follows that there will be a data quality problem with looking at Barts' 95th percentile performance.

Also, if Barts had seen 100 patients in A&E (just to keep it simple - they actually saw 11,541), and we listed them all out in order of shortest to longest wait, then the indicators are saying that:

The middle (median) person waited 129 minutes (roughly 2 hours) - in reality, Barts' middle person was actually person 5,771
The 95th (95th percentile) person waited 521 minutes (roughly 9 hours) - in reality, Barts' 95th percentile person was actually person 10,964

That's very, very different to:

Barts and London NHS Trust saw 95% of people waiting more than eight hours in A&E.

Even if we ignore the data quality concerns (which we shouldn't), all that could actually be said is "Barts and The London NHS Trust saw 5% of people waiting more than eight hours in A&E". We really shouldn't say even that though, as we know that 5% of Barts' data looks dodgy (5.1% of records set to a departure time of exactly midnight, 00:00 - I'm willing to wager a decent sum that that's not a departure time, it's a default setting).
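The 95% vs 95th percentile mix-up is easy to demonstrate with a toy dataset. A minimal sketch in Python, with entirely hypothetical waits (not the real Barts figures):

```python
import statistics

# Hypothetical waits in minutes for 100 A&E attendances, sorted shortest
# to longest -- illustrative numbers only, not the real Barts data.
waits = sorted([120] * 50 + [150] * 45 + [540] * 5)

median = statistics.median(waits)     # the "middle" person's wait
p95 = waits[int(0.95 * len(waits))]   # the 95th-percentile person's wait

share_over_8h = sum(w > 480 for w in waits) / len(waits)

print(median)         # 135.0 -> a typical wait of roughly two hours
print(p95)            # 540   -> the 95th percentile is nine hours...
print(share_over_8h)  # 0.05  -> ...yet only 5% waited longer than eight hours
```

Same data, and a world of difference between "95% waited more than eight hours" and "the 95th percentile wait was nine hours".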

Given that Barts' median (middle) waiting time is better than the national average, they really shouldn't be singled out. And it would be good if the provisional and experimental nature of the data was mentioned in your article.

Hope the above is a better, more comprehensive explanation than my poor twitter based attempts.



Thanks for this Chris

Sorry have been busy with another set of things here.

I completely see your point. The 95 percentile is actually the data measure used by DoH so apologies for not taking the time to comb through the data. DoH pointed out that Barts was the first trust on that list not to have the 24-hour error that systematically wrecks the data. That being said it must contain a fair few of these as the average time to departure is so high. So really the story is 

Dodgy data set used by DoH shows Barts top 5% longest wait was 9hours. 

I will correct when next back in the office. (Tuesday i think). Why do you thnk they did not use median figure?


Surprised that DH highlighted Barts - they should really understand the data.

Mean, median, mode, quartiles, percentiles... horses for courses, to be honest.

I personally think that you get the most value and accuracy when you report multiple measures in conjunction.

A&E is high volume, so I'd go mean (the average gives you your headline, instantly understandable figure), and then I'd keep an eye on the median, 25th and 75th percentiles (to give you an understanding of the distribution). And I'd throw in the 95th percentile if (!) I was confident that there weren't data quality issues that undermined the measure. The 95th percentile gives you a good idea about the 'tail' of your distribution.

To be honest, as the data set is provisional, experimental and in its infancy, I'd go heavy on the data quality measures and working with the trusts to understand any oddities.

Barts being a case in point!

Thanks again for taking the time to go through it all. Good to get a correction.
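The multi-measure approach described in that reply (mean for the headline, median and quartiles for the distribution, 95th percentile for the tail) can be sketched with Python's standard library. The waits below are made up for illustration:

```python
import statistics

# Hypothetical A&E waits in minutes -- illustrative only.
waits = [30, 45, 60, 90, 110, 120, 125, 130, 140, 150,
         160, 175, 180, 200, 210, 220, 230, 235, 250, 400]

mean = statistics.mean(waits)      # headline, instantly understandable figure
median = statistics.median(waits)  # robust to the long tail

p25, _, p75 = statistics.quantiles(waits, n=4)  # shape of the distribution
p95 = statistics.quantiles(waits, n=20)[-1]     # the tail -- only worth
                                                # reporting if data quality
                                                # supports it

print(mean, median)  # 163.0 155.0
```

Note how the single outlier (400 minutes) drags the mean above the median - exactly why reporting multiple measures in conjunction beats any single number.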



So what, I don't hear you ask, was the end result of my patient, comprehensive explanation of why Barts had been unfairly and erroneously singled out? On the 30th of September 2011, one line of the article was amended to read:

"Barts and London NHS Trust saw 5% of people waiting more than eight hours in A&E"

What a load of wasted effort on my part. And how arrogant to deny, deny, fob off, fob off, deflect, deflect, and ultimately only partially correct the original major mistake after a ridiculous length of time. And not take on board any of the other points.

Am I asking too much? Do I expect too much? Am I being incredibly petty? Maybe... maybe not.

Goodnight journalism, wherever you are. 


P.S. You may have guessed from the name of the post and its content that I've struggled to give it a coherent structure. I've also wrestled with whether to post it at all (hence the time elapsed), as there's obviously a clear argument that a journalist taking the time to reply is better than one who just ignores you completely. I absolutely concede that, but ultimately I guess my point is that in my admittedly very limited experience there seems to be a real resistance from journalists to being questioned. Read my stuff - great. Agree with me - fantastic. Question me - what, what?!? Who are you? Don't you know who I am? Go away. I'm very busy etc. That's a real shame. Helping correct articles is surely a good thing, and the number one concern should always be the wronged party - in this case the hard-working staff at Barts and The London's A&E department. Particularly as they are so open and transparent about how they are performing... if you don't believe me, look here!

P.P.S. It would also be wrong of me not to mention the good - George Monbiot, in my humble opinion, is an absolute star. Engaging, provocative, and above all - well researched, and approachable. Look at the glorious selection of heavily referenced articles on his website:

Nothing special, you might think. But compared to others, it is a revelation. And he goes further... a lot further. A comprehensive biography, as well as helpful career advice, AND a full register of interests: