At Long Last, Can We Please Start Counting the Dead?

Under the strange Bizarro rules that right-wing pundits use to interpret politics in the United States, election season is the time when no one is supposed to discuss any of the things that might actually have a serious impact on their voting decision. The Mark Foley scandal was dismissed as an election-season "October surprise" cooked up by Democrats (even though the people who exposed it were Republicans, not Democrats). And James Baker announced that his secret plan to help Bush turn things around in Iraq would not be released publicly until "after the election in order to try and take our report out of domestic politics."

Let's ignore for the moment the fact that this curious delicacy about political bombshells in an election season comes from the same people who chose September 2002 — the start of the congressional midterm election campaign — as the moment to launch their public push for war with Iraq. Let's humor the pundits and pretend that there really is some reason why people should hold off on discussing matters of pressing political interest during elections. If that's the case, then now is the moment when those discussions ought to begin. Let's start by talking about the dead in Iraq.

Last month there was very little discussion of the study published in the Lancet, a highly respected British medical journal, which estimated that 650,000 Iraqis have died since 2003 as a result of the war. The Lancet study too was dismissed as an "October surprise," and it disappeared from the news within days of its publication. But now that the election is over, can we finally discuss it?

I was shocked myself when I saw the figure of 650,000. It seemed huge, much larger than I had imagined possible. It is approximately four times the Iraqi Health Ministry's recent estimate, and twice the figure of 300,000 that is often given as an estimate of the number of people killed by Saddam Hussein during his 23 years of brutal rule.

The Lancet study, with Gilbert Burnham as its lead author, was conducted by some of the same researchers from Johns Hopkins University and Al Mustansiriya University in Baghdad who conducted a previous study in 2004 which estimated that 98,000 people had died. The earlier study was attacked at the time by supporters of the war and was largely ignored by the mainstream news media in the United States, as John Stauber and I noted in our recent book, The Best War Ever: Lies, Damned Lies and the Mess in Iraq (for an excerpt, see the Third Quarter 2006 issue of PR Watch). The new study suggests that some half a million additional lives have been lost in the subsequent two years.

As the Lancet paper explains, this number is an estimate based on statistical sampling of Iraq's population, and due to limitations in the number of people surveyed, it has a fairly wide margin of error. The researchers followed standard scientific procedure and reported their findings using a "95% confidence interval" — minimum and maximum values derived from a statistical analysis under which there is a 95 percent probability that the interval encloses the true number. The minimum value in their confidence interval was 392,979, and the maximum was 942,636, which means that although 650,000 is their most likely estimate, the true number could be substantially lower or higher. Even so, the low end of this range is nearly 400,000, while the high end is nearly a million.
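The logic of a 95% confidence interval can be illustrated with a small simulation. This is my own sketch using invented numbers, not the study's data: if you repeatedly survey a population whose true death rate is known and build a confidence interval from each survey, about 95 percent of those intervals should contain the true rate.

```python
import math
import random

# Hypothetical illustration of what a 95% confidence interval means.
# The rate and sample size below are invented for the demonstration;
# they are not the Lancet study's figures.
random.seed(42)
TRUE_RATE = 0.013   # assumed true death rate in the simulated population
N = 3000            # people interviewed per simulated survey
TRIALS = 500        # number of simulated surveys

covered = 0
for _ in range(TRIALS):
    deaths = sum(random.random() < TRUE_RATE for _ in range(N))
    p = deaths / N                          # estimated rate from this survey
    se = math.sqrt(p * (1 - p) / N)         # standard error of the estimate
    lo, hi = p - 1.96 * se, p + 1.96 * se   # 95% confidence interval
    if lo <= TRUE_RATE <= hi:
        covered += 1

print(f"{covered / TRIALS:.1%} of the intervals contained the true rate")
```

Any single survey's interval may be wide, as the Lancet's was, but the procedure that generates it is calibrated to capture the truth roughly 95 percent of the time.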

Are these numbers credible? I looked at reactions to the Lancet study from several groups: American political pundits, scientists with expertise in health and mortality research, and Iraqis (as reflected in the views of Iraqis with English-language weblogs). Many of the political pundits (even those with anti-war views) either rejected the study or questioned its conclusions and methodology. The scientists, however, gave it high marks, and most of the Iraqis thought the number sounded like it was in the right ballpark.

What the Study Says

The full Lancet study is available online. Although it is a scientific paper, I found it easy to read and jargon-free. However, a couple of terms might need explanation.

The study uses a "cluster sampling" methodology that is commonly used in health and mortality research, especially in places hit by war or other humanitarian disasters such as floods or earthquakes. The methodology is somewhat less precise — but more cost-effective and practical — than simple random sampling, in which individual members of the population being studied are selected and interviewed at random. Rather than individuals, researchers interview randomly-selected clusters of individuals and use standard statistical techniques to reach conclusions about the entire population. As Daniel Engber explains in Slate magazine, "It's the same basic method used for political polls in America, which estimate the attitudes of millions of people by surveying 1,000 adults."
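A toy version of cluster sampling can make the idea concrete. The following is my own illustrative sketch, not the study's actual procedure: instead of interviewing individuals selected at random from the whole population, researchers randomly select whole neighbourhoods ("clusters") and interview every household within them.

```python
import random

# Invented numbers for illustration only: 500 hypothetical clusters of
# 40 households each, where each cluster has its own underlying death rate.
random.seed(1)
cluster_rates = [random.uniform(0.0, 0.03) for _ in range(500)]
clusters = [[random.random() < r for _ in range(40)] for r in cluster_rates]

# True population-wide rate (known here only because we built the data).
true_rate = sum(sum(c) for c in clusters) / (500 * 40)

# Cluster sample: visit 47 randomly chosen clusters (the Lancet team
# surveyed 47 clusters) and pool all of their households.
sampled = random.sample(clusters, 47)
households = [h for cluster in sampled for h in cluster]
estimate = sum(households) / len(households)

print(f"true rate {true_rate:.4f}, cluster-sample estimate {estimate:.4f}")
```

Because households within a cluster resemble each other, a cluster sample is somewhat noisier than a simple random sample of the same size — the "design effect" survey statisticians correct for — but it requires far fewer trips, which matters greatly in a war zone.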

A survey of this type, in which researchers go out and methodically sample the population being studied, is called "active surveillance," as opposed to "passive surveillance," which relies on information collected by external sources such as government or news reports. Passive surveillance generally tends to produce unrealistically low estimates, because it misses cases in which someone has died but the death has simply gone unreported. Consider, for example, the difference between the results you would get if you attempted to estimate the health impact of tobacco smoking using passive rather than active surveillance. Epidemiologists have repeatedly and conclusively demonstrated that tobacco smoking causes several hundred thousand deaths per year in the United States, but individual cases of smoking-related death are rarely reported as such in newspapers, so you would get a much lower number if you attempted to compile statistics from newspaper reports alone.

Currently the most comprehensive attempt to compile statistics on Iraqi deaths using passive surveillance is being done by the Iraq Body Count website, which as of this writing (November 2, 2006) has tallied 45,061 to 50,022 deaths — less than a tenth of the Lancet result. As the Lancet paper itself notes, "Our estimate of excess deaths is far higher than those reported in Iraq through passive surveillance measures. This discrepancy is not unexpected. Data from passive surveillance are rarely complete, even in stable circumstances, and are even less complete during conflict, when access is restricted and fatal events could be intentionally hidden. Aside from Bosnia, we can find no conflict situation where passive surveillance recorded more than 20% of the deaths measured by population-based methods."

Lancet editor Richard Horton made the same point in a commentary published in the Guardian:

Only when you go out and knock on the doors of families, actively looking for deaths, do you begin to get close to the right number. This method is now tried and tested. It has been the basis for mortality estimates in war zones such as Darfur and the Congo. Interestingly, when we report figures from these countries politicians do not challenge them. They frown, nod their heads and agree that the situation is grave and intolerable. The international community must act, they say. When it comes to Iraq the story is different. Expect the current government to mobilise all its efforts to undermine the work done by this American and Iraqi team. Expect the government to criticise the Lancet for being too political. Expect the government to do all it can to dismiss this story and wash its hands of its responsibility to take these latest findings seriously.

Assessments from Scientists

Here are some of the reactions from scientists who work in the field of mortality research:

  • Ronald Waldman, an epidemiologist at Columbia University who worked at the Centers for Disease Control and Prevention for many years, told the Washington Post that the Lancet's survey method was "tried and true" and said its findings were "the best estimate of mortality we have."
  • According to Professor Frank E. Harrell Jr., chairman of the biostatistics department in the School of Medicine at Vanderbilt University, "The investigators used a solid study design and rigorous, well-justified analysis of the data. They used several analytic techniques having different levels of assumptions to ensure the robustness of mortality estimates and the estimated margin of error. The researchers are also world-class."
  • Francisco Checchi, an epidemiologist at the London School of Hygiene and Tropical Medicine who has worked on mortality surveys in Angola, Darfur, Thailand and Uganda, said that he found the survey's estimates "shockingly high," but added that dismissing it "simply on gut feeling grounds seems more than irrational." He noted that its "choice of method is anything but controversial" and found its results "scientifically solid" and "compelling."
  • In Australia, 27 of the country's leading scientists in epidemiology and public health signed a letter supporting the study, noting that it "was undertaken by respected researchers assisted by one of the world's foremost biostatisticians. Its methodology is sound and its conclusions should be taken seriously. ... The study by Burnham and his colleagues provides the best estimate of mortality to date in Iraq that we have, or indeed are ever likely to have."

Asked about the study at a news conference, President Bush dismissed it out of hand, calling it "not credible" and saying its methodology was "pretty well discredited."

"That's exactly wrong," responded Richard Garfield, a public health professor at Columbia University who works closely with a number of the authors of the report. "There is no discrediting of this methodology. I don't think there's anyone who's been involved in mortality research who thinks there's a better way to do it in unsecured areas. I have never heard of any argument in this field that says there's a better way to do it."

Politicians and Pundits

Most of the methodological criticisms of the Lancet study actually come from people like Bush who have no expertise in epidemiology, and of course the boldest attacks have come from supporters of the war.

Writing in the conservative National Review, for example, Richard Nadler called the Lancet paper a "cooked up study." His only methodological critique, however, consisted of an odd claim that the researchers were guilty of "baseline bungling" because they "chose their 'base-line' for pre-invasion Iraq carefully: January 2002 through March 2003."

The baseline to which he referred is the study's pre-war estimate of the annual death rate in Iraq. The Lancet researchers arrived at that estimate the same way they arrived at their estimate of post-war deaths — by asking the people they interviewed whether any members of their household had died during that period. By comparing the pre-war baseline against the post-war death rate, they arrived at their estimate of 650,000 "excess" deaths in the post-war period.
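The arithmetic behind the headline number can be sketched roughly as follows. This is my own back-of-envelope reconstruction with rounded figures: 13.3 per 1,000 is the post-invasion crude mortality rate reported in the paper, but the population figure and the handling of person-time here are my simplifications, so the result only approximates the published estimate.

```python
# Back-of-envelope reconstruction of the excess-deaths calculation.
# The paper works from person-months of exposure; this rounded version
# will not reproduce its figure exactly.
base_rate = 5.5 / 1000     # pre-invasion deaths per person per year
post_rate = 13.3 / 1000    # post-invasion rate reported in the paper
population = 26_000_000    # assumed approximate population of Iraq
years = 40 / 12            # roughly March 2003 to July 2006

excess = (post_rate - base_rate) * population * years
print(f"roughly {excess:,.0f} excess deaths")
```

Even with these crude simplifications, the calculation lands in the same range as the study's 650,000 estimate.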

Nadler's argument is that the period from January 2002 through March 2003 was less violent than earlier periods of Saddam Hussein's rule. If, therefore, the researchers had measured the post-war death rate against an earlier, more violent period, the comparison wouldn't look so bad. Of course, January 2002 through March 2003 wasn't chosen arbitrarily, since it happens to be the period that immediately preceded the actual invasion of Iraq. And if the period immediately before the invasion was relatively peaceful, why was the Bush administration so insistent on the urgent need for war?

Criticism of the baseline mortality rate was also a central element in another criticism of the Lancet, published by Fred Kaplan in Slate magazine. The Lancet estimated that 5.5 Iraqis per 1,000 were dying each year before the war. According to Kaplan, this estimate showed that the study was flawed because it differed from an estimate of 10 per 1,000 published by the United Nations. Moreover, he says, a 5.5 per thousand prewar mortality rate would have been "lower than that of almost every country in the Middle East" (a claim made also by columnist William M. Arkin in the Washington Post).

Australian computer scientist Tim Lambert demolishes this criticism in more detail than I'm prepared to, pointing out that 5.5 deaths per thousand was actually higher than the mortality rate in "all but one" of the other countries in the Middle East. The CIA Factbook also estimates Iraq's mortality rate at 5.37 per thousand, a figure that is very close to the Lancet estimate. Moreover, the United Nations estimate cited by Kaplan was itself just a guess, since prior to the Lancet studies, "No surveys or census based estimates of crude mortality have been undertaken in Iraq in more than a decade, and the last estimate of under-five mortality was from a UNICEF sponsored demographic survey from 1999."

Kaplan's other critique invoked an entirely new coinage and concept — the term "main street bias." He cited a letter published in Science magazine by a British physics professor and an economist who argue that the Lancet team's sampling technique was insufficiently random because it overselected people who live near main streets in Iraqi cities. This would skew the results, they claim, because people who live near main streets would have had higher death rates than the country's overall population.

There are two problems with this criticism. First, the Lancet researchers deny that they oversampled from main streets. The methodology section of the published study states that they surveyed homes selected randomly from "a list of residential streets crossing" a randomly-selected main street (emphasis added). Second, even if the study did overselect homes located near main streets, there is no evidence beyond speculation to support the conclusion that "main street bias" would lead to an overcount.

Another attempt at methodological criticism came from Republican pollster Steven E. Moore, who conducted surveys in Iraq and served as an advisor to Paul Bremer. Moore blasted the Lancet paper, calling it a "bogus study." His criticism focused on the study's allegedly too-small sample size and imprecision. "Survey results frequently have a margin of error of plus or minus 3% or 5% — not 1200%," he wrote. This is generally true — with regard to the sort of opinion surveys that Moore performs (although his research in Iraq left Bremer forced to admit belatedly that "we really didn't see the insurgency coming"). The Lancet study, however, was measuring mortality, and its sample size was dictated in part by the limited funds available to finance it and in part by concern for the safety of the Iraqi researchers who conducted the survey. It is true that its results are less precise than those needed to predict election outcomes in a political opinion poll, but that was not its purpose. As the Lancet paper itself explains, "A sample size of 12,000 was calculated to be adequate to identify a doubling of an estimated pre-invasion crude mortality rate of 5·0 per 1000 people per year with 95% confidence and a power of 80%, and was chosen to balance the need for robust data with the level of risk acceptable to field teams."
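The sample-size sentence quoted above can be unpacked with a standard power calculation for comparing two proportions. This is my own back-of-envelope version; it ignores the cluster design effect and the multi-year recall period, both of which push the requirement upward, so it yields a smaller number than the 12,000 people the team actually surveyed.

```python
import math

# How many person-years per group are needed to detect a doubling of a
# 5-per-1,000 annual death rate at 95% confidence with 80% power?
p1, p2 = 0.005, 0.010            # baseline rate and the doubled rate
z_alpha, z_beta = 1.96, 0.8416   # two-sided 95% confidence; 80% power
p_bar = (p1 + p2) / 2

n = ((z_alpha * math.sqrt(2 * p_bar * (1 - p_bar))
      + z_beta * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
     / (p1 - p2) ** 2)

print(f"about {math.ceil(n)} person-years per group")
```

This comes out to roughly 4,700 person-years per group; inflating that for the clustering design effect and the field-safety constraints the paper mentions brings the requirement into the vicinity of the sample actually surveyed.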

Moore also claimed that the Lancet researchers collected no demographic data about survey respondents — a claim that was untrue but that is nevertheless repeated in a separate Wall Street Journal editorial which called the study a "fraud."

Similar vitriol came from Christopher Hitchens, the former Trotskyist turned pro-war polemicist, who dashed off a column that didn't so much critique the Lancet paper as urinate on it. After accusing the epidemiologists of "moral idiocy," Hitchens mocked the name "Lancet," called its editor an "Islamist-Leftist," and went on to claim that its mortality estimate is (1) "almost certainly inflated" and (2) actually justifies the war. Why? The study found that 31 percent of deaths were attributed to coalition forces, while 24 percent were attributed to "other" causes and 45 percent were "unknown" (because either the responsible party was not known, or the surveyed households were hesitant to specifically identify them). From this evidence, Hitchens concluded that insurgents are the true killers in Iraq and that the Lancet study is therefore "a reminder of the nature of the enemy we face."

Iraq Body Count

Other criticism of the study came from a source that may seem surprising: Iraq Body Count (IBC), the anti-war, London-based organization that has been tracking Iraqi deaths since the beginning of the war. IBC issued a news release questioning the wide gap that separates its own numbers and official Iraqi government statistics from the Lancet's much larger estimate. The discrepancy, they argued, is so large as to be implausible. For example, IBC doubts that the number of deaths estimated by the Lancet could have occurred "with less than a tenth of them being noticed by any public surveillance mechanisms." A gap that large, they argue, can only mean that either there has been "incompetence and/or fraud on a truly massive scale by Iraqi officials in hospitals and ministries," or else the Lancet authors "have drawn conclusions from unrepresentative data."

Les Roberts, one of the authors of the Lancet study, has responded to these criticisms in an interview with the British Broadcasting Corporation. Citing examples from other wars, he points out that "It is really difficult to collect death information in a war zone! ... I do not think that very low reporting implies fraud."

It should be noted that IBC's own methodology follows rules that should be expected to lead to a lower count than the Lancet survey:

  • Whereas the Lancet study attempts to estimate all deaths — including the deaths of insurgents, police and Iraqi military — IBC only counts civilian deaths and excludes combatants.
  • IBC only counts deaths that are reported in English-language news media, and Iraq is not an English-speaking nation. Many more deaths are reported in the Iraqi press in Arabic than in the Western-language wire services.

As for the gap between the Lancet figure and deaths reported by the Iraqi Health Ministry, a number of Iraqi commentators (some of whom I quote below) have noted that conditions in many parts of the country are so unstable as to prevent reliable government accounting. Moreover, the question of how many people have died in Iraq has been politically charged since the start of the war, and the United States has not only avoided issuing statistics of its own but on a number of occasions has also pressured Iraqi officials against doing so. Shortly after the invasion in 2003, Baghdad's medical officials were forbidden to release morgue counts. In December of that year, Iraq's Health Ministry ordered a halt to counting civilian deaths and told its statistics department not to release figures, according to the Associated Press. More recently, following a wave of Shiite-on-Sunni violence in February of this year, Iraqi officials originally estimated more than 1,000 deaths but lowered the estimate to 350 following what an international official described as political pressure. In October of this year, Iraqi Prime Minister Nouri al-Maliki's office even instructed the country's health ministry to stop providing mortality figures to the United Nations.

Complicating things still further, the Washington Post reported in August that Iraqi hospitals themselves have become killing fields where Sunni Muslims are kidnapped and killed because Iraq's Shiite-run Health Ministry has been taken over, along with several other government ministries, by the Mahdi Army, a militia controlled by radical cleric Moqtada al-Sadr. The resulting "reluctance of Sunnis to enter hospitals is making it increasingly difficult to assess the number of casualties caused by sectarian violence," the Post noted. As another indicator of the inadequacy of official Iraqi statistics, the most recent United Nations human rights report on Iraq states, "The Ministry of Health reported zero number of killed in Al-Anbar for July, which may indicate an under-estimation due to difficulties experienced in collecting information in that particular Governorate." Al-Anbar is one of the most violent provinces in the country, where the Lancet study found that more than 10 people per 1,000 have died annually in violence.

Iraqis Weigh In

Among Iraqi bloggers, the strongest challenge to the Lancet study came from Omar Fadhil, one of two brothers who contribute to a pro-occupation website called "Iraq the Model" (ITM). Fadhil emotionally blasted the study, accusing the Lancet researchers of

exploiting the suffering of people to make gains that are not the least related to easing the suffering of those people... They shamelessly made an auction of our blood, and it didn't make a difference if the blood was shed by a bomb or a bullet or a heart attack because the bigger the count the more useful it becomes to attack this or that policy in a political race and the more useful it becomes in cheerleading for murderous tyrannical regimes.

When the statistics announced by hospitals and military here, or even by the UN, did not satisfy their lust for more deaths, they resorted to mathematics to get a fake number that satisfies their sadistic urges...

These comments prompted an equally emotional outpouring from dozens of other Iraqi bloggers, who called ITM "a holocaust denier," "sucking up to the Americans," "a traitor," "like the Baathist apologist that they so despise," and "shameful." An Iraqi housewife declared that she was full of "Guilt and anger because the Iraq I always dreamt of has become one big nightmare. ... Guilt and anger because outside these walls are trashbins filled with decapitated bodies of women, children and men. ... Guilt and anger because after all the years of tyranny, people are now wishing for Saddam the criminal to come back. ... The so called freedom that everyone, every single person was hoping and dreaming of has gone."

I spent some time sampling discussions of the Lancet study from among the more than 200 blogs listed at the Iraq Blog Count website. Many of the bloggers there noted that they themselves have seen widespread death due to the war, including the loss of personal friends and family: "I don't know of anyone who hasn't lost at least some members of their extended family," wrote Iraqi blogger Raed Jarrar.

Riverbend, an anti-occupation blogger, wrote that she found the figure of 650,000 dead entirely plausible:

For American politicians and military personnel, playing dumb and talking about numbers of bodies in morgues and official statistics, etc, seems to be the latest tactic. But as any Iraqi knows, not every death is being reported. As for getting reliable numbers from the Ministry of Health or any other official Iraqi institution, that's about as probable as getting a coherent, grammatically correct sentence from George Bush — especially after the ministry was banned from giving out correct mortality numbers. ... The chaos and lack of proper facilities is resulting in people being buried without a trip to the morgue or the hospital. During American military attacks on cities like Samarra and Fallujah, victims were buried in their gardens or in mass graves in football fields. Or has that been forgotten already?

We literally do not know a single Iraqi family that has not seen the violent death of a first or second-degree relative these last three years. Abductions, militias, sectarian violence, revenge killings, assassinations, car-bombs, suicide bombers, American military strikes, Iraqi military raids, death squads, extremists, armed robberies, executions, detentions, secret prisons, torture, mysterious weapons – with so many different ways to die, is the number so far fetched?

Similar comments came from Zeyad at Healing Iraq. Zeyad's reaction is interesting in part because he initially supported the war as a means of getting rid of Saddam Hussein and bringing democracy to his country. After reading the Lancet study, he questioned whether its methodology was appropriate "in Iraq's case, where the level of violence is not consistent throughout the country," and he thought its estimate of 650,000 deaths was too high. "My personal guesstimate would be half that number," he wrote, adding, "but then I have a limited grasp on statistics and I stress that I may be wrong. ... The people who conducted the survey should be commended for attempting to find out, with the limited methods they had available. On the other hand, the people who are attacking them come across as indifferent to the suffering of Iraqis, especially when they have made no obvious effort to provide a more accurate body count." Moreover,

There also seems to be a common misconception here that large parts of the country are stable. In fact, not a day goes by without political and sectarian assassinations all over the south of Iraq, particularly in Basrah and Amara, but they always go unnoticed, except in some local media outlets. The ongoing conflict between political parties and militias to control resources in holy cities and in the oil-rich region of Basrah rarely gets a nod from the media every now and then, simply because there are very few coalition casualties over there. The same with Mosul and Kirkuk, both highly volatile areas. I am yet to see some good coverage on the deadly sectarian warfare in Baquba, northeast of Baghdad, which has the highest rate of unknown corpses dumped on the streets after the capital, and which was about to be announced an Islamic Emirate by the end of Ramadan. There are absolutely no numbers of civilian casualties from Anbar. There is no one to report them and the Iraqi government controls no territory there, while American troops are confined to their bases. And much, much less data from other governorates which give the impression of being "stable."

I have personally witnessed dozens of people killed in my neighbourhood over the last few months (15 people in the nearby vicinity of our house alone, over 4 months), and virtually none of them were mentioned in any media report while I was there. And that was in Baghdad where there is the highest density of journalists and media agencies. Don't you think this is a common situation all over the country?

A few days later, Zeyad noted the recent killing of another close friend before adding, "I now officially regret supporting this war back in 2003. The guilt is too much for me to handle."


Of course, no single survey should be regarded as the final word on a topic as important and complex as the death toll from this war. It's possible (although unproven) that Zeyad is correct in suspecting that the researchers may have somehow based their findings on an unrepresentative sample of the Iraqi population. One American who works on health projects in Iraq had a similar reaction, stating that "there appears to be an unintentional sampling bias toward the most violent governorates" and that "there could also be a trend to sample the more violent locations within each governorate. ... I offer these critiques as grains of salt. The report may in fact be accurate. I do not dispute the honesty of the researchers. I know from experience that one never has much control over operations in Iraq, and without a great deal of control, information errors can creep in. ... My own guess is that the death rate in the war is twice as much or more than Iraq Body Count, but probably half as much as reported in this study."

Even so, the results of the Lancet study, combined with what we know about the limitations of other attempts to count the dead, suggest that the war in Iraq has already claimed hundreds of thousands rather than tens of thousands of lives.

It is rather striking, moreover, that critics of this research have mostly avoided calling for additional, independent studies that could provide a scientific basis for either confirming or refuting its alarming findings.

The Lancet researchers themselves have called for such research. "At the conclusion of our 2004 study," they state, "we urged that an independent body assess the excess mortality that we saw in Iraq. This has not happened. We continue to believe that an independent international body to monitor compliance with the Geneva conventions and other humanitarian standards in conflict is urgently needed. With reliable data, those voices that speak out for civilians trapped in conflict might be able to lessen the tragic human cost of future wars."


Thank you for a wonderful article! The media has done an excellent job of distorting this issue and spreading unnecessary mass confusion. While it requires a relatively thorough and careful analysis, Mr. Rampton shows that an understanding of statistical estimation is not beyond the reach of a lay reader. The conflation of "counts" with "estimates" has been beyond negligent. IBC makes no secret on its website of the fact that its count is likely a vast undercount, due to the under-reporting of casualties. The fact that the mainstream media neglected to add this qualification is bad enough; I see no excuse, however, for the omission of what it is that is actually being counted! "Civilian casualties" vs. "excess deaths" alone should be expected to yield a much larger number, let alone the HUGE difference between a "count" and an "estimate." Also, given the mainstream media's LOVE of "experts," I found it interesting that most sources did not seek out expert opinions, perhaps due to the near-unanimous support the study enjoyed. The dismissal of this excellent Johns Hopkins study (not exactly a Mickey Mouse institution) was infuriating to me. I subjected almost everyone I encountered in the following days to a lesson in elementary statistics, venting this frustration! Should the issue come up in the future, I will definitely point them to this article. Thanks again!

The method used is good for other questions, but in this case, it led to a very low count. There were cases where entire families were killed and no one was left to report their deaths. Thomas Love

When I first heard Bush (and Australian PM, John Howard) dismiss the report out of hand as "not credible", simply on the grounds that it <em>couldn't</em> be true, surely, (because it didn't <em>feel</em> right?), I remembered the textbook on how to discredit a report and it was in the BBC-TV comedy, <em>Yes Minister</em>'s "The Greasy Pole" episode. <a href="" target="blank">The strategy to suppress a document</a> was explained by Sir Humphrey: <blockquote>- Stage One: You list reasons in terms of the public interest: •Security considerations; •Results could be misinterpreted; •Better to wait for a wider and more detailed study over a longer timescale. - Stage Two: Discredit the evidence you are not publishing (using press leaks): •The evidence leaves some important questions unanswered (presumably the ones that were not asked); •Much of the evidence in inconclusive; •The figures are open to other interpretations; •Certain findings are contradictory; •Some of the main conclusions have been questioned (if not, then question them and then they have). - Stage Three: You undermine the recommendations: •Not really a basis for long-term decisions; •Not enough information on which to base a valid assessment; •Not really a need for a rethink of existing policies; •Broadly speaking, it endorses current practice. - Stage Four: Discredit the writer of the report (of course off the record): •The writer is harbouring a grudge against the government; •The writer is a publicity seeker; •The writer used to be (or wants to be) a consultant for a multinational company; •The writer is trying to get a knighthood, chairmanship, vice-chancellorship, etc. </blockquote> It's not only that it couldn't be true "surely", but that it <em>mustn't</em> be true because what Prime Minister or President could live with themselves if it were proved they were responsible for the deaths of so many innocent civilians. 
The "unavoidable" casualties must be moderate for the action to be defensible as "collateral" damage, and the numbers in the Lancet report were very immoderate indeed. I <a href="" target="blank">refer</a> to casualty numbers from time to time, and even I understate the probable numbers because, frankly, I fear that readers will discount everything else I say if they think I am an exaggerator. So I think Sheldon is right: there ought to be another survey done as soon as possible.

Hi Sheldon, I wondered if you could comment on a response to your article (by myself) at Alternet: Gilbert Burnham (Lancet study co-author) has stated that: <i>"As far as selection of the start houses, in areas where there were residential streets that <b>did not cross</b> the main avenues in the area selected, these were included in the random street selection process, in an effort to reduce the selection bias that <b>more busy streets</b> would have."</i> (My bold) In other words, Burnham seems to be acknowledging that there is a bias from sampling close to (or on) main streets. (Apparently contrary to what Les Roberts and yourself imply: that there is no evidence for such a bias). Burnham's statement also contradicts the description of the methodology published in the Lancet: <i>"The third stage consisted of random selection of a main street within the administrative unit from a list of all main streets. A residential street was then randomly selected from a list of residential streets <b>crossing</b> the main street."</i> (My bold) The latter (from the published methodology) is precisely what the main street bias criticism addresses. (It holds that the study is unrepresentative of the population of Iraq since it surveys only houses that are "located on <b>cross streets</b> next to main roads or on the main road itself" [my bold]) Given your intention to <i>clarify</i> criticisms of the Lancet study, do your comments on main street bias (in light of the above) not strike you as unsatisfactory (if not misleading)? I'd be grateful if you could investigate further and perhaps supply an update on main street bias. I notice also that you imply scientific approval of the Lancet study has been unanimous (<i>"The scientists, however, gave it high marks"</i>). 
However, Debarati Guha-Sapir (director of the Centre for Research on the Epidemiology of Disasters in Brussels) says that Burnham's team have published "inflated" numbers that "discredit" the process of estimating death counts. And Jon Pedersen, director of the ILCS study (which you omit to mention in your article, even though it uses a methodology similar to the Lancet study's) says the Lancet figure is "high, and probably way too high".

I didn't go into more detail on the topic of "main street bias" because my article is already quite long. However, "main street bias" is unlikely to have significantly skewed the Lancet results, for the following reasons: <ul><li>In the previous 2004 Lancet study, the researchers randomly selected locations in Iraq by using a GPS device to begin at a randomly selected longitude and latitude. For the 2006 study, they felt that this completely random system was impractical, because going around the country with an electronic device in hand was likely to be interpreted as military activity, so they used the "randomly selected main street" technique as a means to <i>approximate</i> random selection of a location. It's not perfectly random, but it's a reasonable attempt to achieve randomness given the practical limitations posed by the level of violence and suspicion in Iraq. Can you suggest something better?</li> <li>The researchers didn't limit their visits to homes which were on the "residential street ... crossing the main street." Rather, they began at a randomly selected address <i>on</i> that residential street and then visited the 40 homes closest to it, which would have taken them to homes on other streets. Since the home on the residential street that served as the starting point was randomly selected, there is no reason to expect that it would have been even close to the main street that the residential street intersected. It could be many blocks away. Calling this "main street bias" is in itself a misnomer. 
The correct term would be "streets that cross main streets bias."</li> <li>No one has presented any evidence, other than pure speculation, to suggest that people who live on main streets (or on residential streets that cross main streets) have been more likely to die than people who live on streets that do not intersect with main streets.</li> <li>The details of the Lancet results suggest that most people have been killed away from home anyway, which would vitiate any main street bias. For example, the male-to-female ratio shows that many more men than women have been killed: 3.4 male deaths for each female death, and 9.8 violent male deaths for each violent female death. If people were being killed at home, you would expect a higher percentage of female deaths, particularly since women tend to stay at home more than men.</li> <li>Even if we set aside all of the above considerations and assume that "main street bias" exists, I find it hard to imagine that it would have been significant enough to substantially alter the study's outcome. Can you give me any meaningful estimate of how much more likely you think it is for someone to be killed who lives on a street that intersects with a main street in Iraq, as opposed to someone who lives on a street that <i>doesn't</i> happen to intersect with a main street? I find it hard to imagine that the ratio could be so much as 2:1, and even if there were a 2:1 sampling bias, you'd still be left with an estimate of 300,000+ Iraqi deaths.</li> </ul> As for Jon Pedersen, his methodology was <i>not</i> similar to the Lancet study's, as he himself notes in the URL you provided. For starters, his survey asked about a wide range of health and living conditions, whereas the Lancet study focused exclusively on mortality. That difference alone is, in my opinion, more likely to produce a discrepancy in outcomes between the two studies than any skewing due to "main street bias." 
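The back-of-the-envelope correction in the last bullet point can be made explicit. The sketch below is a hypothetical illustration (the bias factors are assumptions for argument's sake, not measured values): if sampled households were some factor <i>k</i> more likely to record a death than the average household, a naive extrapolation would overstate the total by that same factor, so dividing by <i>k</i> gives the corrected figure.

```python
# Hypothetical illustration of how a uniform sampling bias would scale a
# survey estimate. The bias factors below are assumptions, not data.

LANCET_ESTIMATE = 650_000  # excess deaths estimated by the 2006 Lancet study


def corrected_estimate(estimate: float, bias_factor: float) -> float:
    """If sampled households were `bias_factor` times more likely to record
    a death than the average household, the naive extrapolation overstates
    the true total by that same factor."""
    return estimate / bias_factor


# Even the 2:1 worst case discussed above still leaves 300,000+ deaths.
for k in (1.0, 1.5, 2.0):
    print(f"bias {k}:1 -> corrected estimate ~{corrected_estimate(LANCET_ESTIMATE, k):,.0f}")
```

Note the simplification: this treats the bias as uniform across all sampled clusters, which is the most pessimistic reading of the criticism.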
The bottom line in all of this, though, is that we should be having <i>more</i> research like the Lancet study, from other research teams in addition to the Johns Hopkins group. If you question the methodology of the Lancet study, fine; then let's have additional studies that address your methodological concerns. In science, no single study should be presumed definitive, and the question of how many people are dying in Iraq (and what they're dying from) is important enough to deserve multiple studies.

Hi Sheldon, Thanks for your response on main street bias. I'll deal with your points in reverse order... Firstly, I agree we need more research into conflict mortality - including multidisciplinary studies reviewing the application of epidemiological methods to conflict mortality (since this application is not well-validated). The work on main street bias is one such study, and its authors call for more studies. The ILCS study is probably the closest to the Lancet study in terms of methodology (cluster sampling, etc), and its sample of the Iraqi population was far greater than the Lancet's. So if we're interested in different studies on Iraqi mortality, we should be looking at ILCS. As you mention, ILCS's Jon Pedersen states that the focus on mortality (in the Lancet) compared to the more general focus of ILCS is a difference - although Pedersen doesn't accept that this explains the large discrepancy in mortality estimates between Lancet and ILCS for the overlapping period. My point about the similarity (in methodology) between ILCS and Lancet is that if one's support of the Lancet figure rests to a large extent on the "established", "standard" nature of the cluster sampling methodology used, then one can't ignore or dismiss the ILCS study without applying double standards. If you discount the ILCS findings because of the focus of its interview questions (etc), then you should equally be able to discount the Lancet estimate on similar bases (ie subjective, unquantifiable bases). Clearly supporters of the Lancet study should not accept this, if they wish to remain consistent. So, the ILCS should be discounted no more lightly than the Lancet study. And yet the ILCS does seem to be ignored or discounted (or at least it's not discussed in many articles on Iraq mortality - including your own lengthy piece). Now, onto main street bias. 
You say that <i>"even if there was a 2:1 sampling bias, you'd still be left with an estimate of 300,000+ Iraqi deaths."</i> In that event, I assume you'd be in favour of people being <i>informed</i> that the corrected estimate is 300,000? Sampling bias <b>does matter</b> (even if it affects the estimate by less than your above example). It's also important to remember that small biases introduced in the sampling procedure can lead to very different estimates. You state: <i>"The details of the Lancet results suggest that most people have been killed away from home anyway"</i>. To my knowledge, the Lancet study didn't record details of <b>where</b> people were killed. So this is at most an indirect inference. Furthermore, there is no distinction in the Lancet study between civilians and combatants - the ratio between male and female violent deaths may reflect this (ie predominantly male combatants). Either way, no inference from these ratios (including that females are more likely to stay at home) removes the possibility of bias from sampling close to busy streets. Gilbert Burnham in fact <b>accepts</b> that efforts should be made to avoid such a bias. I'm puzzled by the claim that main street bias is refuted by the fact that a start house could be anywhere on a cross street (and that the next 39 houses would take the interviewers around the block to a few other streets). Main street bias (as I understand it) is about <i>network distance</i> (not physical distance per se) from main streets. Cross streets are one link away from a main street, the side streets connected to the cross streets are two links, back alleys connected to these are three links, etc. Given the methodology (as published), whole sections of neighbourhoods would be excluded in the sampling process. 
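The "network distance" idea above can be made concrete with a toy street graph (the layout and street names below are invented for illustration, not taken from either study). Counting links from the main street is a breadth-first search; under the methodology as published, only streets one link away could be chosen as start streets:

```python
from collections import deque

# Toy street network, invented for illustration. Keys are streets; values
# are the streets they connect to. "Main St" is the main avenue.
streets = {
    "Main St":    ["Cross St A", "Cross St B"],
    "Cross St A": ["Main St", "Side St 1"],
    "Cross St B": ["Main St", "Side St 2"],
    "Side St 1":  ["Cross St A", "Back Alley"],
    "Side St 2":  ["Cross St B"],
    "Back Alley": ["Side St 1"],
}


def network_distance(graph, source):
    """Breadth-first search: number of links from `source` to each street."""
    dist = {source: 0}
    queue = deque([source])
    while queue:
        street = queue.popleft()
        for neighbour in graph[street]:
            if neighbour not in dist:
                dist[neighbour] = dist[street] + 1
                queue.append(neighbour)
    return dist


dist = network_distance(streets, "Main St")
# Streets more than one link from the main street could never be chosen
# as a start street under the published scheme (though, as noted above,
# the 40-house walk from the start house may still reach some of them).
excluded = sorted(s for s, d in dist.items() if d > 1)
print(excluded)  # ['Back Alley', 'Side St 1', 'Side St 2']
```

Whether the households on those deeper streets differ in mortality risk is exactly the empirical question in dispute; the sketch only illustrates which streets the published procedure could select.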
For example, see: On the other hand, Gilbert Burnham has elsewhere stated that: <i>"As far as selection of the start houses, in areas where there were residential streets that <b>did not cross</b> the main avenues in the area selected, these were included in the random street selection process, in an effort to reduce the selection bias that more busy streets would have."</i> (My bold) This is a departure from the published methodology. The question remains: how often did they depart from the methodology, and exactly how did they depart from it? If one is interested in accuracy, surely these questions deserve to be answered. Finally, you ask me for "something better" than the "randomly selected main street" technique. How about a completely random selection process such as that conducted by Jon Pedersen in the ILCS survey of Iraq?