estimating HIV prevalence
With today's release of the 2009 AIDS Epidemic Update, UNAIDS announced that an eight-year trend shows new HIV infections in sub-Saharan Africa are down 17%, and everyone is all atwitter about it. I was no exception, but my tweet mentioned that I was skeptical.
“Why?” asked Mark Daku. I just don’t know what to make of changes in estimates of HIV prevalence over time — especially since the methods of estimation and data collection have also changed over time. This particularly bothered me when I sat and listened to an “epidemiologist” discuss trends in Malawi at the 2008 National AIDS Conference. I asked: how can you compare these estimates if you are now using different data, collected from more sites? Needless to say, the answer was not convincing (actually, he didn’t even answer the question).
UNAIDS is clear on its current estimation methods in the new report:
The epidemiological estimates summarized in this report are the result of a systematic process used by UNAIDS and WHO. Estimates for 2008 build on recent improvements in HIV surveillance and estimation methods.
HIV surveillance has historically focused on anonymous epidemiological monitoring in designated sites (‘sentinel surveillance’). The number of sentinel surveillance sites has significantly increased in recent years. In a growing number of countries, sentinel surveillance has been complemented by national population-based surveys that include HIV testing. Since 2001, national HIV surveys have been conducted in 31 countries in sub-Saharan Africa, two countries in Asia, one province in each of two other countries in Asia and two Caribbean countries. In eight African countries and one Caribbean nation, more than one population-based HIV survey has been conducted since 2001, permitting an assessment of trends over time. The notable growth in the magnitude and quality of HIV epidemiological data has significantly strengthened the reliability of HIV estimates.
Of course, in the best situation, we’d have had great data from the start, and this large shift in methods wouldn’t leave us wondering whether we’re comparing apples to apples or apples to oranges. If only the intertemporal difference in data collection and estimation were the only concern…
A colleague wrote an article, published not that long ago, showing that even the holy grail of HIV estimation used by UNAIDS — population-based testing — suffers from a refusal bias: respondents who learn they are HIV-positive are four times more likely to refuse an HIV test in a later round. In fact, while writing a dissertation chapter that tried to separate out the policy preferences of HIV-positive respondents, I had to use earlier-wave data from our longitudinal study to figure out whether any of our 2008 respondents who refused HIV testing had learned in a previous round that they were positive (and then impute their status).
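The mechanics of that bias are easy to see with a back-of-the-envelope calculation. The sketch below is illustrative only: the true prevalence and refusal rates are numbers I've made up, with the positive-respondent refusal rate set at four times the negative-respondent rate to echo the fourfold figure above. Prevalence measured among those who accept the test comes out below the true value.

```python
# Illustrative sketch of refusal bias in population-based HIV testing.
# All numbers are assumptions for the example, not figures from the
# UNAIDS report or the article discussed above.

def observed_prevalence(true_prev, refusal_neg, refusal_pos):
    """Expected prevalence among respondents who accept an HIV test,
    when positives and negatives refuse testing at different rates."""
    tested_pos = true_prev * (1 - refusal_pos)          # positives who test
    tested_neg = (1 - true_prev) * (1 - refusal_neg)    # negatives who test
    return tested_pos / (tested_pos + tested_neg)

# Assumed: 12% true prevalence; 5% refusal among negatives;
# 20% refusal among positives (4x the negative rate).
true_prev = 0.12
est = observed_prevalence(true_prev, refusal_neg=0.05, refusal_pos=0.20)
print(f"true prevalence: {true_prev:.3f}, observed among testers: {est:.3f}")
# → true prevalence: 0.120, observed among testers: 0.103
```

Even with these modest assumed refusal rates, the survey-based estimate understates the truth — and the gap grows as more positives learn their status over successive waves, which is exactly why trend comparisons across rounds worry me.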
All of this is to say that I’m no longer convinced that population-based testing will give us HIV prevalence estimates we can be confident in.