Observational data for Tamiflu effects just doesn’t add up

The reason no one sticks to the results of randomized trials is that they believe the dramatic effects seen in observational studies.

It is important to note that the World Health Organization added oseltamivir to the essential medicines list on the basis of observational data.

What the WHO did was fund a review of observational studies (you can get it here), the results of which were presented to an expert committee in Ghana in March 2011.

The review reported a large effect on mortality. Yet, get this, the mortality result was based on only three studies. Three studies, and you are on the essential medicines list. That can't be right, can it?

More importantly, does anyone actually read these studies?

These three studies were done in hospitalized patients, but none actually described the reasons for administering oseltamivir to patients.

One of these studies, undertaken in Thailand, was a retrospective review of medical records. Even the authors called for caution in interpreting their results: 'our small, retrospective, observational study has limitations with respect to establishing causality.'

The reported effect size is truly large: 310 of 423 non-fatal cases (73%) received oseltamivir, whereas only 5 of 22 fatal cases (23%) did; the crude odds ratio was 0.11. Note they reviewed only 22 fatal cases (you could have picked any 22 cases), and only 29% of the medical records for the non-fatal cases were actually reviewed.
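For what it's worth, the crude odds ratio can be checked from the counts reported above. Here is a quick sketch in Python (the 2x2 table is built from the numbers in the text; the Woolf confidence interval is my addition, simply to show how fragile an estimate based on 22 deaths is):

```python
from math import exp, log, sqrt

# 2x2 table from the Thai study as reported above
fatal_treated, fatal_untreated = 5, 22 - 5              # 5 of 22 fatal cases treated
survived_treated, survived_untreated = 310, 423 - 310   # 310 of 423 non-fatal cases treated

# Crude odds ratio of treatment, fatal vs non-fatal cases
odds_ratio = (fatal_treated * survived_untreated) / (fatal_untreated * survived_treated)
print(round(odds_ratio, 2))  # 0.11, matching the figure reported in the review

# Woolf (log) 95% confidence interval: note how wide it is with only 22 deaths
se = sqrt(1 / fatal_treated + 1 / fatal_untreated
          + 1 / survived_treated + 1 / survived_untreated)
lo, hi = (exp(log(odds_ratio) + z * se) for z in (-1.96, 1.96))
print(round(lo, 2), round(hi, 2))  # roughly 0.04 to 0.30
```

The interval spans almost an order of magnitude, which is exactly what you would expect when one arm of the comparison contains only 22 cases.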

The second study was also a retrospective review of clinical data. Medical data were obtained for 67 (72%) of 93 cases diagnosed with human influenza A (H5N1) in Vietnam. Oseltamivir was administered in 55 (82%) of the 67 cases. But after adjustment for age, the effect of Tamiflu was no longer statistically significant.

The third study, funded by the manufacturer and undertaken in adults requiring hospitalization in Toronto, noted a number of limitations: only 63% of eligible patients were tested for influenza, and data collection was by chart review, which limited the number of risk factors that could be considered.

If you see a large effect in observational studies, it is more likely to be due to poor-quality evidence and systematic biases, which the review itself points out clearly in its limitations.
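The classic bias here is confounding by indication: who gets the drug depends on how sick the patient is. A small simulation makes the point. All the numbers below (severity prevalence, treatment and death rates) are invented for illustration and come from none of the three studies; the drug in the simulation has literally no effect, yet the crude odds ratio makes it look strongly protective:

```python
import random

random.seed(42)

# Counts for a 2x2 table keyed by (treated, died)
table = {(t, d): 0 for t in (0, 1) for d in (0, 1)}

for _ in range(100_000):
    severe = random.random() < 0.3                         # 30% severely ill (assumed)
    treated = random.random() < (0.2 if severe else 0.8)   # sicker patients treated less often
    died = random.random() < (0.4 if severe else 0.02)     # ONLY severity drives mortality
    table[(int(treated), int(died))] += 1

# Crude odds ratio of death, treated vs untreated
odds_ratio = (table[(1, 1)] * table[(0, 0)]) / (table[(1, 0)] * table[(0, 1)])
print(round(odds_ratio, 2))  # well below 1: a useless drug looks dramatically protective
```

Flip the treatment pattern (sicker patients treated more often) and the same useless drug looks harmful instead. Without knowing why each patient received oseltamivir, a crude odds ratio from a chart review tells you almost nothing.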

The very fact that the observational studies were retrospective, done after the event, renders their results invalid. Evidence-Based Medicine is not difficult, it is just time-consuming, as it requires reading the research. So exactly why did this treatment get on the essential medicines list?