Updated numbers on violent deaths in Iraq

I’ve cited the Lancet numbers myself (in qualified fashion), but maybe the estimate of a million Iraqi deaths is far too high:

A new survey estimates that 151,000 Iraqis died from violence in the three years following the U.S.-led invasion of the country…For the new study, however, surveyors visited 23 times as many places and interviewed five times as many households. Surveyors also got more outside supervision in the recent study; that wasn’t possible in the spring of 2006 when the Johns Hopkins survey was conducted…

"Overall, this is a very good study," said Paul Spiegel, a medical epidemiologist at the United Nations High Commission on Refugees in Geneva. "What they have done that other studies have not is try to compensate for the inaccuracies and difficulties of these surveys, triangulating to get information from other sources."

Here is more, here is the study itself.  This new estimate is probably not the final word, but you will recall that anyone who questioned the older Lancet estimate was pilloried at length; there is a lesson here  – Thy Shall Not Use Thy Blog to Squelch Heretics — and I am curious to see who will offer mea culpa and who will not.  "The two estimates aren’t as different as they look" is one way of spinning it; "I was wrong" is another.

Update: Here, and here.

Comments

I wonder if these new figures will get much press outside of blogs. I also find it strange that there are no comments. Anybody out there?

My prior was that the Lancet studies were shocking and hard to believe but the best estimate available. This new information should lead to revised priors. In searching for the source of the discrepancy between the Lancet studies and this one, the questions will include why the pre-war mortality rate is so different, and whether this implies under- or over-reporting in the NEJM and/or Lancet studies.

But:

Where does the estimate of one million deaths come from? (Neither of the Johns Hopkins / Bloomberg studies reports this.)

Also, I would be interested to have the lesson you refer to spelled out. My impression is that the article you link to is a largely valid criticism of sundry obvious statistical fallacies committed by people refusing to review their priors even when intellectually unable to provide a reason for their stance. It could only have raised the standard of debate. And it does not refer to Megan McArdle (who would not be silly enough to make those mistakes), nor to the arguments she made.

Meanwhile, innumerate criticism of the Johns Hopkins / Bloomberg study is alive and well; here's the latest National Journal version: http://news.nationaljournal.com/articles/databomb/index.htm. No wonder Deltoid and others have been kept busy.

There are still two points that aren't much changed by the numbers:

(1) Nobody should be comforted by "only" 150k deaths.
(2) There is responsibility, and then there is blame; the difference is more than just semantics. Responsibility falls to the US for security, but blame falls to the murderers who actually did the killing. If anything, the higher casualty figures were an argument for more troops, not less.

dearime: yes, of course; if one is happy to dismiss as fraud any study whose findings do not conform with one's priors, one can learn nothing new. That, too, may be called a dangerous game.

So only 150,000 Iraqis died. And here I was thinking the war was unjustified. My bad. Obviously it was an even-handed response to the deaths of 3,000 New Yorkers.

One obvious potential bias is that victimized groups are far more likely to have fled the country, and therefore were not surveyed. Since millions reputedly fled their homes, this is not a trivial problem. I wonder how they addressed it?

Dearest "Not all,"

Credibility is an absolute when presenting such a study. Those two issues I touched on destroy any possible credibility. I didn't even have to point out that the funding for the Lancet report came from George Soros.

Good grief...you can't possibly be defending the original numbers. Right?
Cheers,
JR

Tyler, I don't see how this study means defenders of the Lancet study owe anyone an apology. Either the Lancet critics had sound objections about the methodology of the Lancet study, in which case they were always owed an apology, or they were making unsound objections and are still not owed an apology. I don't believe anyone's argument turned on what a future study would show.

Do they not understand, that the majority of these victims were killed by the same people who attacked the WTC?

This would seem to be a new revelation.

Obviously, this hasn't gotten much press play- this is the first I have read of it.

Thou shalt not use thy...

Thou- you
Thy- your.

Curious why, in this crowd, the following two assertions apparently get made:

1. The study that counts *all* fatalities (including disease, poor medical care, etc.) and finds a high number must be wrong.

2. The study that counts only *violence-caused* fatalities and finds a lower number must be right.

Why are the doubts applied to the first not also applied to the second? Since the studies measure two different quantities, it's possible that both are correct.

I don't know what the correct number of *total* fatalities is, but when I draw the conclusion that the civilian toll in civil war(s) is usually high, I don't derive it from political priors; I derive it from history. The Lancet study is certainly consistent with history. The new study may indicate the actual toll was mercifully less. But the idea that the war is a most excellent adventure because it *only* directly kills 50,000 people per year (for the economists: at a cost of hundreds of billions of dollars per year) is, ah, wrong.

By the way, this estimate of 151,000 violent deaths was only for the first 3.0 years of this 4.8-year-long (and counting) war. The first year of the war was pretty peaceful, so the total violent deaths under this methodology should be approaching a quarter of a million by now.
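A rough back-of-envelope version of that extrapolation, using the commenter's 3.0- and 4.8-year figures and assuming, conservatively, a constant rate:

$$
\frac{151{,}000}{3.0\ \text{yr}} \approx 50{,}000\ \text{per year},
\qquad
50{,}000 \times 4.8 \approx 242{,}000 .
$$

Since the first year was comparatively quiet, the later-year rate exceeds that average, which is why the running total would now be approaching a quarter of a million.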

And then there is the big increase in nonviolent deaths reported in this latest study.

But it's all worth it because we've made Iraq safe for Iranian influence!

Just to get the numbers straight, the Lancet study estimated that, from the March 2003 invasion through June 2006, there were 601,000 violent Iraqi deaths (95% confidence interval: 426,000-794,000). This new study estimates 151,000 violent Iraqi deaths over the same time period (95% confidence interval: 104,000-223,000). Correct?

Well, I am someone who threw the Lancet study at some folks when it came out. However, I always granted that there was lots of uncertainty about the numbers. I also note that the only other number out there at the time was something on the order of 30,000, which President Bush was quoting. That was the number I used the Lancet study to trash, and that number still looks ridiculous.

The number of extra nonviolent deaths is supposedly the same as the number of violent ones. They did an arbitrary split. So, the total is just over 300,000, with lots of those due to the lost power, water, sewage, and so forth, thanks to the war.

Indeed, to JR, and some others, the differences between the studies are not really all that great. The main source of the difference is the murky estimate of the prewar death rate. In terms of the death rate since, they are not all that different.

Oops. It looks a little more complicated than I calculated above. In this new study, "intentional" + "armed conflict" equals only 14% of all deaths, i.e., about 1 million deaths in total. But then you run into the problems with excess deaths based on pre-invasion rates.
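Spelling out the arithmetic implied here (on the assumption that the 14% share is taken against the 151,000 violent-death estimate):

$$
\frac{151{,}000}{0.14} \approx 1.08\ \text{million deaths from all causes over the survey period.}
$$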

If only I could squelch people with my blog. But Megan McArdle is still writing rubbish about the Lancet study and Tyler Cowen is still linking to her.

"It is interesting to contrast the reasonable tone of Megan's original post with the vehement response then and some of the agitated posters here who still want to cling to the Lancet study. "

Posted by: Rich Berger

When somebody does a study, and a propagandist with no background at all claims that it couldn't be true, because her (MBA) 'gut' says so, a vehement response is called for.

Sven:

"Is there really people, who are stupid enough to believe stuff like this? Do these people believe that the US Marines went from house to house to kill civilians? Do they not understand, that the majority of these victims were killed by the same people who attacked the WTC? Why are so many people so fricking retarded?"

We're not retarded, we simply remember the recent history - the US blew away the government, killed anybody who tried to (rightly, wrongly or whatever) impose order, and quite deliberately did not police Iraq itself. The stunningly obvious result, predicted beforehand, was that Iraq fell into sh*tland.

Claims by conservatives that Iraq War opponents were too quick to trumpet the Lancet results leave conservatives open to the criticism that they are trumpeting the IFHS results before understanding them fully. The fact is that the mainstream response to the Lancet study was cautiously pessimistic ("even if it's off by a factor of two it still means the Bush Administration has been lowballing us") and the conservative reaction was shrill and defensive. Now conservatives are claiming that the Johns Hopkins researchers are biased against America, and that the IFHS survey conducted by the Iraqi Ministry of Health is less biased. Their responses don't strike me as intellectually honest (although they would be quick to point out that that doesn't mean they are wrong).

But the whole controversy between the two sets of numbers is a red herring: both the Johns Hopkins and the IFHS estimates of civilian deaths are much higher than the official estimates that were coming from the Iraqi, British, or American governments at the time. If the Johns Hopkins study had said 150,000 Iraqis were killed, that would have been big news, too.

Did someone "squelch" Megan McArdle? Last time I heard, she was being published in a Big Media magazine with over a million readers, mostly parroting the US government party line in the cause of defending the world's most powerful state and its military. I don't really see her as a persecuted dissident, but I am not surprised to learn that the "libertarians" who populate this comments section see things differently...

I'll start:

1) Lack of internal controls and reality checks. I really like how the NEJM study also asked females about mortality among siblings. It's a cheap way to get two relatively independent estimates of the same quantity. Another approach might be to assign clusters to 'A' and 'B' groups before the study and compare the A and B estimates; a rough sketch of that idea follows after point 3.

2) Lack of external controls and reality checks. Differing from Iraq Body Count and government figures by a factor of 10 is embarrassing, especially since most of the dead in the original study had been issued death certificates. If I were reviewing the original paper, I would ask the authors to account for the disparity between their estimated number of death certificates issued and the official government count.

Here again I like the NEJM approach of comparing the distribution of violence among provinces to Iraq Body Count.

3) If the statistical methods used for the Lancet study are really valid, I have to wonder if the surveyors implemented the protocol carefully enough. Call it the lesson of the exit polls: slight departures from planned statistical methods can skew results by large amounts.
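A minimal sketch of the A/B cross-check idea from point (1). Every number in it (cluster counts, person-years, the underlying mortality rate) is invented purely for illustration; it is not drawn from either survey:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical cluster-survey data: person-years observed and deaths recorded
# per cluster.  All figures are made up to illustrate the cross-check idea.
n_clusters = 50
person_years = rng.integers(800, 1200, n_clusters)
true_rate = 0.010                                 # assumed deaths per person-year
deaths = rng.poisson(true_rate * person_years)

# Pre-assign clusters at random to groups A and B, then estimate the
# mortality rate separately in each half.
assignment = rng.permutation(n_clusters) % 2
for label, group in (("A", assignment == 0), ("B", assignment == 1)):
    rate = deaths[group].sum() / person_years[group].sum()
    print(f"group {label}: estimated rate = {rate:.4f} deaths per person-year")

# If the two halves disagree far beyond sampling error, something is wrong
# with the survey design or its execution; that is the internal reality check.
```

The appeal of the design is cheap redundancy: two randomly chosen halves of the same survey should agree to within sampling error, and a large gap flags implementation problems before any comparison to outside sources is even attempted.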

Hate to say it, but all this recrimination back and forth about which study is correct is bullshit. Anyone out there think we would be quibbling about how many tens of thousands (set aside hundreds of thousands) of American civilians had been killed in a morally and politically disastrous (set aside illegal) invasion?

Sure the integrity of science as a tool for ascertaining something like truth is important. But we are talking about tens of thousands of civilians dead because Bush and company lied repeatedly and we let them get away with it. Let's worry about truth there.

As several of us keep pointing out in comments sections, and as daniel has just said at Crooked Timber, the first Lancet paper actually agrees very closely with this NEJM paper if you exclude the Fallujah outlier.

If we just stick to the violent deaths: the Lancet1 paper wasn't explicit about this, but Tim Lambert and others pointed out that its midrange violent death toll (excluding the Fallujah outlier) was about 60,000. Over 17 months that's about 120 deaths per day. Now look at the NEJM paper. How many violent deaths per day does it give? Around 120. The close agreement is coincidental, given the error bars, but remember that critics of the Lancet1 paper weren't just saying that its estimate was outlandish if you included Fallujah (they might have been right about that). They were saying it was outlandish without Fallujah. Iraq Body Count's death count at that time (they corrected it upwards by a few thousand later on) was 15,000, about a quarter of the Lancet figure. None of the critics said that it was reasonable to think the true violent death toll might be 3-4 times higher; until the Lancet paper was published the prowar people thought Iraq Body Count's numbers were inflated.
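For reference, the per-day arithmetic behind that comparison, taking 17 months as roughly 517 days:

$$
\frac{60{,}000\ \text{violent deaths}}{17 \times 30.4\ \text{days}} \approx \frac{60{,}000}{517} \approx 116\ \text{per day},
$$

in line with the "around 120" per day the commenter reads off the NEJM paper for the same period.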

The number that got into the press was 100,000--of that 100,000, roughly 60,000 were from violence. The 100,000 excess is the number without the Fallujah outlier. And it appears to be consistent with this latest paper.

I know, Rich. I've been following the arguments over this since 2004. I was just pointing out that the earlier study is in agreement with this NEJM paper for the period up until Sept 2004. And that earlier study was trashed by various people, but it appears to be correct that Iraq Body Count is only capturing a fraction of the total deaths. The main question now is the size of that fraction. Whether the later study (the 2006 paper) is correct or whether this NEJM paper is right I have no idea. Maybe we'll know someday if Iraq is ever at peace.

What seems to be disputed about the older study? If we assume that the violent death excess totted up by the newer one is correct, and if we assume that table 3 in the report is correct, then I get:

[(6.01-3.17)/(1.09-0.10)]*150,000~400,000

Since this is in agreement with the lower bound on the confidence interval in the Lancet study (~400,000), how can anyone say these findings differ in any significant way? Is there something wrong with my math?
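Evaluating that expression exactly, taking the quoted rates at face value:

$$
\frac{6.01 - 3.17}{1.09 - 0.10} \times 150{,}000 = \frac{2.84}{0.99} \times 150{,}000 \approx 430{,}000,
$$

which sits just above the Lancet study's lower confidence bound of roughly 426,000 quoted earlier in the thread, so the arithmetic itself checks out.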

There are more than two studies involved in this discussion. There are four statistical studies (first and second Lancet, UN study, NEJM study), and at least two actual enumerations of some subclass of fatalities -- Iraq Body Count and the official government estimate. Actual enumeration is by far the gold standard if the data are complete. If the original study were well designed, it should have been comparable to the actual enumerations, which are exact, or nearly so, for the subclass of fatalities measured by the more precise technique. The original Lancet study could not possibly be correct, because 90-odd percent of its fatalities had associated death certificates, and nowhere near 500,000 death certificates were issued, by actual count. That's a reality check, and it's one that the Lancet study failed miserably.
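Making that reality check explicit, using the 601,000 violent-death figure quoted earlier in the thread and the commenter's 90-odd percent:

$$
0.9 \times 601{,}000 \approx 540{,}000\ \text{implied death certificates,}
$$

a number the commenter argues is nowhere near what was actually issued by official count.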

It seems like the right conclusion at this point is that the number of excess deaths is somewhere between 300K and 600K, modulo error bars.

Completely wrong. (Dare I say, innumerate?) The original study was terrible. It doesn't belong in the discussion at all.

A useful lesson for reading statistical studies: statistical error bars only have meaning in the context of a function with a known theoretical distribution. Failed statistical studies are close to worthless. You're far better off taking the very best study and its associated error bars than trying to do a mental meta-study with ill-defined protocols in a poorly conceived attempt to include all the data.

With a little googling, I was able to find an article referencing an official Iraqi accounting of death certificates, complete with explanations of where and how they are incomplete. This put the figure, before corrections for these errors, near 50,000 at the beginning of 2006.

http://www.commondreams.org/headlines06/0625-03.htm

My remarks on mental meta studies were a little patronising, and I apologize for that, but it is a good technique to use in the face of unreliable studies or particularly difficult measurements. Studies can err due to statistical error -- the designed-in error that comes from not using an infinite study size -- and also from implementation error -- the error that comes from using the wrong model distribution, not implementing your protocols perfectly, systematically missing something, etc. These are more study-specific. My problem with doing mental meta studies is that if you don't understand the different studies well enough, you end up carrying the sins of the father unto the nth generation -- a single rotten study keeps throwing you off. In a regime where the statistical error dominates, you can lump different studies together. In a regime where implementation error dominates, you should find the very best study or studies and treat them as the gold standard.

Violets, my wording was unclear. What I meant to say was what I said in my reply to Dsquared. That is, quoted statistical error bars aren't the same thing as real-life error bars. That got mixed up with a second point, that statistical techniques work far better when the object of study has a known theoretical distribution, and the study can verify that they're seeing the correct distribution -- ie, that they know their system well enough that they can correct for implementation errors. If you talk to experimenters, you'll find that it can be quite difficult to get things working so well that statistical error in a single run is actually the dominant error term.

It's important to avoid what you might call the statistics 101 fallacy -- the belief that things are working perfectly, even when the underlying measurement is difficult and might fail in poorly understood ways. That's why my suggested changes to the Lancet study revolve around introducing internal and external cross-checks.

Statistical error bars measure the difference between a particular calculated average and the true average. Real life error bars measure the difference between a particular calculated average and the platonic ideal quantity that you were trying to measure in the first place.

Consider a common experimental scenario: on a particular day when everything is working, you can rerun the experiment a million times and measure a particular quantity as having value x with a tiny statistical error bar. But when you come in the next day, random jitter, thermal expansions and contractions, slight changes in where the optics focus, etc., mean that you can repeat the experiment another million times and measure the same quantity as having value y. The error bar is just as small, but x and y may differ by many times the statistical error. That happens because the error due to not controlling everything perfectly dominates the statistical error seen in one run. The run-to-run error bars are what I'm calling the real-life error bars; they measure the error associated with repeating the measurement under putatively the same conditions.
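A minimal simulation of that scenario. Every number in it (the true value, the within-run noise, the day-to-day drift, the run counts) is invented purely to illustrate the distinction between the two kinds of error bar:

```python
import numpy as np

rng = np.random.default_rng(0)

true_value = 10.0        # the "platonic" quantity we want to measure (assumed)
within_run_sd = 0.05     # statistical noise on a single sample (assumed)
between_run_sd = 0.5     # uncontrolled day-to-day drift: alignment, temperature, etc. (assumed)
n_runs, n_samples = 20, 100_000

run_means, run_sems = [], []
for _ in range(n_runs):
    drift = rng.normal(0.0, between_run_sd)                 # today's uncontrolled offset
    samples = rng.normal(true_value + drift, within_run_sd, n_samples)
    run_means.append(samples.mean())
    run_sems.append(samples.std(ddof=1) / np.sqrt(n_samples))  # quoted "statistical" error bar

print(f"typical single-run error bar   : {np.mean(run_sems):.5f}")
print(f"run-to-run spread of the means : {np.std(run_means, ddof=1):.3f}")
# The run-to-run spread (the "real-life" error) is orders of magnitude larger
# than the statistical error bar quoted from any single run.
```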

Zach, I can't see any basis for deciding that the Lancet study was the "dreadful" one and the NEJM one is the "good" one on methodological grounds alone. It is not as if the NEJM study is flawless after all - not visiting 11% of the clusters and stitching the data together from ad hoc assumptions based on the IBC dataset is at least as much of a problem as any of the abstruse issues people raised with respect to Lancet 2 and much more than Lancet 1. I don't think we should reject any of the studies, but if we're going to chuck one out, the only basis for doing so appears to be an a priori decision that the Lancet 2 number was "too high", and that's not really statistics.



The war is so terrible. Let’s pray for peace together.
