"
And going to the university and the University of Florence in particular, it came out that Professor Ruggiero – that’s myself – was in absolute terms the Best Professor in the Entire University... and not only in biology and medicine but overall, concerning all the professors of the entire university" - Dr Marco Ruggiero, Professor of Molecular Biology at the University of Florence.

"Derrida's method consisted in demonstrating the forms and varieties of this originary complexity, and their multiple consequences in many fields. He achieved this by conducting thorough, careful, sensitive, and yet transformational readings of philosophical and literary texts, to determine what aspects of those texts run counter to their apparent systematicity (structural unity) or intended sense (authorial genesis)."
- Wikipedia: Jacques Derrida (and also copy-pasted to 2,520 other websites)

"I have long ago given up looking at anything from Snout... He has no credentials at all to discuss the things he talks about, yet feels free to denigrate a long-established, peer-reviewed Italian journal, and highly competent, even distinguished scientists and scholars. If anyone prefers to take his opinion rather than mine, I think that shows rather poor judgement in view of the curriculum vitae posted on my website and the anonymity and missing C.V. of Snout…"

- Henry H. Bauer, Professor Emeritus of Chemistry and Science Studies, and Dean Emeritus of Arts and Sciences, Virginia Polytechnic Institute and State University.

Tuesday, February 24, 2009

Henry Bauer’s harebrained disproofs III: “The age distribution of positive HIV tests superposes on the age distribution of deaths”

HERE HENRY IS CLAIMING that HIV diagnoses and HIV/AIDS deaths occur with the same age distribution, that therefore there is no latent period between infection and death, and that therefore “HIV/AIDS theory” is wrong.

“Another shibboleth [sic] of HIV/AIDS theory is that infection by HIV is followed by a latent period averaging [sic] 10 years before symptoms of illness present themselves; and this pre-symptomatic period is supposed to have been lengthened by contemporary antiretroviral treatment. It follows that the ages at which people die from “HIV disease” should be much greater than the ages at which they become “infected”. Yet the ages at which people most often test “HIV-positive” are the same as the ages at which people are most likely to die of “HIV disease”, in the range of 40 ± 5 years. There is no indication of a latent period, nor that antiretroviral drugs have extended it.”
- "How 'AIDS deaths" and 'HIV infections' vary with age and why"

This claim of no HIV “latent period” is so self-evidently absurd that when I first saw it I thought it had to be a hoax, and that Henry’s real agenda was to demonstrate how easy it was to fool people into accepting a patently ridiculous claim if you baffle them with enough statistics and hand waving. But sadly, he doesn’t seem to be joking.

The claim is self-evidently absurd because when people die with HIV disease it is always after an HIV diagnosis (with the exception of rare HIV diagnoses first made at autopsy), and individuals who die usually do so years and sometimes decades after that diagnosis. Henry’s “explanation” of this apparent paradox is to wave his hands about and claim that the “hypothesized” period between infection and death does not exist, and therefore HIV/AIDS theory must be wrong.

This “explanation” makes no sense. Even if HIV diagnoses had nothing to do with the presence of an infection, and even if there were no causal relationship between HIV and subsequent deaths, that would make no difference to the existence of the intervening period. Furthermore, the existence of a period between HIV seroconversion and death is not “a hypothesis” – it is a fact established through numerous longitudinal studies following subjects with a known time of seroconversion, and is a fact that exists independently of any HIV/AIDS theory. It is also a fact which is the everyday experience of millions of people currently living with an HIV diagnosis years and sometimes decades after their first positive test.

The alternative resolution of this “paradox” – that HIV diagnoses and deaths supposedly occur with the same age distribution – is to recognize that the claim is simply wrong, and that Henry has either misunderstood his data, or has comprehensively botched his analysis, or is fudging and dissembling in his exposition. In keeping with his status as a crank, Henry seems not to have considered any of these possibilities, but as we shall see, he has done all three.


TO ILLUSTRATE THE SUPPOSED “superposition” of HIV diagnoses by age over deaths by age, Henry has drawn us this graph, which he has reproduced in assorted variations throughout his blog; see, for example, "HIV/AIDS and age - HIV theory is wrong".



Now there is no y-axis scale provided here, so it’s difficult to know exactly what figures these curves are supposed to refer to, but even a quick glance leads us to the startling conclusion that between 1999 and 2004 people in their early sixties were diagnosed with HIV at the same rates as people in their early thirties (whatever those rates were)!



Another version of the graph can be found on slide 9 of a presentation he gave to the "Society for Scientific Exploration", from which we discover that young babies test HIV positive at 3-4%, a rate significantly higher than that of people in their 30s and 40s, who seem to test positive at around 2.8%.


This startlingly high level of positive HIV tests among babies is also asserted in his seminar notes from the talk he gave at the Virginia School of Osteopathic Medicine:


Babies are infected at about the highest level found among adults who appear to be in good health. Infection rates drop sharply in the first year after birth, and begin to rise again in or after the teens. Males are always infected more than females, except in the low teens when females are more infected than males.

- “Truth is stranger than fiction: HIV is not the cause of AIDS” p.6

That is nuts. Between 2003 and 2006 the CDC received between 100 and 200 notifications annually of diagnoses of perinatally acquired HIV from the 33 reporting states, out of around 4 million births per year for the whole country. Allowing for notifications not received from the non-reporting states, this works out to a rate of about 0.005%, not 3-4%. Compare this with the more than 5,000 HIV diagnoses reported annually in each of the 5-year age groups between 35 and 44.
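As a rough check of that arithmetic, here is a minimal back-of-the-envelope sketch in Python, using the approximate round figures quoted above (150 notifications a year, 4 million births, and the ~62% reporting coverage discussed in point 1 further down); the exact CDC counts will differ slightly:

```python
# Back-of-the-envelope check of the perinatal diagnosis rate, using the
# approximate figures quoted in the text (not exact CDC counts).

perinatal_diagnoses = 150       # roughly 100-200 notifications per year, 33 reporting states
births_per_year = 4_000_000     # approximate annual US births, all states
reporting_coverage = 0.62       # the 33 reporting states cover ~62% of the epidemic

raw_rate = 100 * perinatal_diagnoses / births_per_year
adjusted_rate = 100 * (perinatal_diagnoses / reporting_coverage) / births_per_year

print(f"raw rate:      ~{raw_rate:.4f}%")        # ~0.0038%
print(f"adjusted rate: ~{adjusted_rate:.4f}%")   # ~0.006%, i.e. roughly 0.005%, not 3-4%
```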

Furthermore, from the scale provided on the version of the graph presented to the SSE, Henry is claiming HIV-positive rates of around 1-3% for most age ranges, which is odd since the total prevalence of HIV in the US is currently only around 0.3%. Worse, he seems to be presenting these rates as incident diagnoses (new diagnoses made in each given year), when the CDC estimates an annual infection rate of only about 0.017% (around 50,000 new infections annually in a population of 300 million Americans).
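For comparison, a quick sketch of the two national percentages mentioned above; the round numbers are the approximate ones used in this post (including the "about a million" people living with HIV cited in point 3 below), not precise CDC estimates:

```python
# Putting the national figures quoted above side by side with Bauer's 1-3% claim.
# Round numbers from the text, not precise CDC estimates.

us_population = 300_000_000
people_living_with_hiv = 1_000_000     # gives the ~0.3% prevalence cited above
new_infections_per_year = 50_000       # CDC's estimated annual incidence

prevalence = 100 * people_living_with_hiv / us_population
incidence = 100 * new_infections_per_year / us_population

print(f"prevalence: ~{prevalence:.2f}% of all Americans")   # ~0.33%
print(f"incidence:  ~{incidence:.3f}% per year")            # ~0.017%
# Both are one to two orders of magnitude below the 1-3% rates Bauer reads off his graph.
```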

And are 62 year olds really diagnosed with HIV at the same rate as 32 year olds?

Well, no.

Since 2003 the CDC has published annual data for incident HIV/AIDS diagnoses broken down by 5-year age groups for 33 states, and between 1999 and 2002 it provided similar data for 30 states by 10-year age groups. AIDS diagnoses and deaths have been recorded for the whole country since the epidemic was first observed. The age distribution of HIV diagnoses in 2006, mapped against AIDS deaths in 2006, looks like this:




Source: Cases of HIV Infection and AIDS in the United States and Dependent Areas, 2006, Table 1 and Table 7



A few points to note:

1. The 33 states that reported new HIV/AIDS diagnoses in 2006 accounted for only 62% of all people living with AIDS in the entire 50 states plus D.C. Therefore the amplitude of the HIV diagnoses curve is likely to reflect only about two thirds of all diagnoses of HIV in the 50 states plus D.C. However, this is unlikely to significantly affect the shape of the age distribution.

2. HIV diagnoses are not the same as HIV infections and seroconversions: diagnosis can occur at any stage from seroconversion until presentation with an AIDS-defining illness (and occasionally later). The CDC estimated that 38% of AIDS diagnoses occurred within 12 months of the first HIV diagnosis. This tendency towards late diagnosis of HIV infection was more marked in older age groups than younger ones: more than half of HIV diagnoses in the over-55s were followed by an AIDS diagnosis within 12 months, compared with less than 20% among 15-24 year olds.

3. The 2006 HIV diagnosis and 2006 AIDS deaths curves refer to different populations. AIDS deaths in 2006 occurred in people diagnosed with HIV at any time during the previous two decades. Current annual mortality for people living with HIV in the US is only one or two per cent (15,000 out of about a million), which means that deaths among people diagnosed in 2006 will be distributed over many years into the future. The median age of death with HIV/AIDS has increased by around 0.67 years per year since the availability of HAART: on those trends, the median age of death for people diagnosed with HIV in 2006 will be significantly greater than the median age of those who died in that year (a rough sketch of this arithmetic follows below).
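Here is that rough sketch of the arithmetic in point 3, using the approximate figures given there; the 2006 median age at death used for the projection is a hypothetical round number for illustration only (it sits in the 45-50 range discussed further down):

```python
# Point 3 in rough numbers: annual mortality among people living with HIV,
# plus a crude linear projection of the median age at death.
# Figures are the approximate ones from the text; the 2006 median age is hypothetical.

deaths_per_year = 15_000
people_living_with_hiv = 1_000_000
annual_mortality = 100 * deaths_per_year / people_living_with_hiv
print(f"annual mortality: ~{annual_mortality:.1f}%")   # ~1.5%, i.e. "one or two per cent"

median_age_at_death_2006 = 45   # hypothetical illustrative value
drift_per_year = 0.67           # observed upward drift in median age at death per calendar year

for years_later in (5, 10, 15):
    projected = median_age_at_death_2006 + drift_per_year * years_later
    print(f"{2006 + years_later}: projected median age at death ~{projected:.1f}")
# If deaths among the 2006 diagnosis cohort are spread over the coming decades,
# their median age at death will sit well above the 2006 figure.
```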

(A side note: commenting on a previous post, Chris Noble remarked that the age distribution of incident syphilis has become bimodal since 2006, with peaks in the 20s and 40s. It appears that a similar bimodal pattern is starting to emerge with new HIV diagnoses, clearly visible on the above graph – yet another refutation of Bauer’s claim that the demographics of HIV are unlike those of any other STI.)


EVEN WITHOUT CONSIDERING the three points above, it is obvious that there is a marked difference in the age distributions of HIV infections and HIV/AIDS deaths, corresponding to the “latent” period. So how did Henry manage to make such a hash of his data and end up with the ludicrous graph he keeps hawking round the internet and elsewhere?

Here’s how:


To compare the actual years of that peak on “HIV” tests with the peak years of "HIV” deaths, I wanted “HIV”-test data for the population as a whole, since the death-data in Table A are also for the population as a whole. The most appropriate data-sets are those, totaling nearly 10,000,000 tests, published in 1995-8 by CDC for all public testing-sites (clinics for TB, HIV, STD, drugs, family planning, prenatal care, and more, as well as prisons and colleges and some reports from private medical practices). Pooling the actual numbers for each of those four years and making the appropriate calculations delivers the following results...


In other words, Henry is assuming that if, for example, the CDC funded 4,511 HIV tests for 0-4 year olds in 1997, of which 149 (3.3%) were positive, then that percentage can be extrapolated to the 15 million or so 0-4 year olds in the population as a whole. This ignores the fact that these kids were selected for testing from the few thousand in the country actually at risk of infection because they had been born to HIV-positive mothers.

Similarly, none of the other age groups in the public test site data is a representative sample of that age group in the population as a whole: each group consisted of people with identified risks for HIV infection who chose to undergo testing using the CDC-funded services. The percentage of positive tests in each age group is therefore not just a function of the overall incidence in that age range, but also of the percentage in each group who (a) fit the CDC criteria for funding on the basis of HIV risk, (b) chose to use public sites for testing rather than private ones, and (c) chose to test at all in that year. Some groups of people test regularly, others rarely, and the frequency of testing does not necessarily reflect the probability of having acquired HIV since the last test. Actual risk, perceived risk, and the testing options available all change with age. Because of this, comparing the percentage of positive tests between different age ranges does not give you the relative incidence in the population as a whole.
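To make the selection-bias point concrete, here is a toy sketch. Every number in it is invented purely for illustration (it is not CDC data), but it shows why a high percent-positive at risk-targeted test sites says almost nothing about the rate in the whole age group:

```python
# Toy illustration of selection bias: the percent positive among people who are
# actually tested at public sites is not the rate in the whole age group.
# Every number below is invented for illustration only; this is not CDC data.

groups = {
    # age group: (population, truly infected, tested at public sites, positives among tested)
    "0-4":   (15_000_000,     750,   4_500,    150),  # tested babies mostly born to HIV+ mothers
    "25-34": (40_000_000, 150_000, 400_000, 11_000),
    "55-64": (30_000_000,  40_000,  60_000,  1_800),
}

for age, (pop, infected, tested, positives) in groups.items():
    true_rate = 100 * infected / pop       # rate in the whole age group
    site_rate = 100 * positives / tested   # rate among the selected people who got tested
    print(f"{age:>6}: whole-group rate {true_rate:6.3f}%   "
          f"percent positive at test sites {site_rate:5.2f}%")

# The 0-4 group comes out at ~3% positive at the test sites even though the
# whole-group rate is ~0.005%, because testing was targeted at infants already
# known to be at risk. Extrapolating test-site percentages to the whole
# population bakes this selection into every age group.
```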

Unfortunately, this elementary error of extrapolating data from highly and differently selected groups to the population as a whole is a recurring theme throughout Henry’s thesis.


ONE FINAL COUPLE OF POINTS for those who have bothered to read down this far. When Henry says, “the ages at which people most often test HIV-positive are the same as the ages at which people are most likely to die of HIV disease, in the range of 40 ± 5 years”, he is being... well, vaguely correct (or he was in describing the figures for 2004), but this is of very limited value in describing the overall age distributions of diagnoses and deaths. The “ages at which people most often test HIV-positive” (the mode) is not the average (mean) age, nor is it the median (the midway point, with half above and half below): it is substantially older than either of these values, and even further removed from the mean or median ages of seroconversion, because diagnoses at older ages tend to come later in the course of HIV disease than those at younger ages.
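A small sketch of the mode/median/mean distinction, using an entirely hypothetical set of ages at diagnosis (not real surveillance data), shows how the single most common age can sit above both the mean and the median:

```python
# Hypothetical ages at diagnosis, invented only to show that the modal age
# (the single most common value) can differ noticeably from the mean and median.
from statistics import mean, median, mode

ages = ([22] * 8 + [27] * 10 + [32] * 12 + [37] * 14 +
        [42] * 16 + [47] * 9 + [52] * 5 + [57] * 3 + [62] * 2)

print("mode:  ", mode(ages))            # 42 - the most common single age bin
print("median:", median(ages))          # 37 - half of diagnoses occur at younger ages
print("mean:  ", round(mean(ages), 1))  # ~37.9
# Quoting only the mode ("the age at which people most often test positive")
# therefore says little about the typical age at diagnosis, let alone at infection.
```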



The “40 ± 5 years” is a fudge, too, spanning as it does an entire decade. Median age at diagnosis and median age at death have both been increasing over the course of the epidemic, the latter rising more quickly than the former. In fact both the median and modal ages of incident deaths (not predicted lifespan) are now in the 45 to 50 range, with the median increasing by around 0.67 years per year.



8 comments:

Anonymous said...

"Snout's harebrained disproofs III: “Longitudinal studies produce facts”

Snout, your obviously unscientific mind shines through quite clearly in such childish statements such as:

"Furthermore, the period between HIV seroconversion and death is not “a hypothesis” – it is a fact established through numerous longitudinal studies following subjects with a known time of seroconversion, and is a fact that exists independently of any HIV/AIDS theory".

Obviously, Snout, you do not know the difference between data and information and facts and knowledge and wisdom and opinions.

You obviously do not know that findings from longitudinal studies are simply creating information, not facts, and that they are fully dependant upon many variables such as the choices of what data to use, as well as the opinions of those who decide which data to use in such a study.

Furthermore, such a study can only create information, not facts, for those who then interpret the symbology of such data.

Longitudinal studies do not ever result in facts, because they are based upon variables.

Longitudinal studies only produce information, which may or may not be reliable, according to those who teach how to do such studies.

The information produced creates opportunities to make individual and again highly variable extrapolations and conlusions and opinions of what the symbology of the information is are then formed from the data, and again are not ever considered as facts.

Data is simply known to be crude information and not knowledge by itself, let alone considered to be a fact. The sequence from data to knowledge is: from Data to Information, from Information to Facts, and finally, from Facts to Knowledge. Data becomes information, when it becomes relevant to your decision problem. Information becomes fact, when the data can support it. Facts are what the data reveals. However the decisive instrumental knowledge resulting from interpreting the symbology of the information(i.e., applied knowledge) is only expressed together with some statistical degree of variable confidence in the resultant perceptions of FACTS.

Considering this uncertain environment, the chance that "good decisions" are made increases with the availability of "good information" to begin with. And that requires a lack of bias and lack of opinionation in the originating choices of basic data gathering. The chance that "good information" is available also increases with the level of structuring the process of Knowledge Management. As the exactness of a statistical model increases, the level of improvements in decision-making increases.

Knowledge is more than knowing something technical. Knowledge needs wisdom. Wisdom is the power to put time and knowledge to the proper use. Wisdom is the accurate application of ACCURATE KNOWLEDGE and its key component is to knowing the limits of your own knowledge. Wisdom is about knowing how something technical can be best used to meet the needs of the decision-maker.

So please, Snout, do not even pretend to be wise, or knowledgeable or that you are recognising "FACTS".

You have already presented us your disqualifications by presenting that data is facts or that facts are knowledge.

Every single person who is taught to create longitudinal studies is taught to know full well that such thinking is wrong, and knows that they are simply gathering data to create other data that creates information, not facts.

And I am also sure you are well aware that in 1987, it was considered by by the childish and unwise such as yourself to be a "FACT" that people would only live 2 to 5 years after hiv diagnosis. Yet many thousands of long term nonprogressors, though individuals such as yourself usually refuse to acknowledge them, have proven that such "facts" are wrong, regardless of what others say or believe who refuse to acknowledge this.

But try doing the same study and include these LTNP individuals as the majority of the data base and see what "FACTS" you find!

Such previously presumed "FACTS" regarding life expectancies have been stretched and reconsidered many times over by various other "longitudinal information producing studies", and went from 2 to 5, to 5 to 10, to 10 to 15, to 15 to 20, to 20 to 25, to 25 to 30, and so on, and at this point concensus of opinion regarding lifespan is often considered by those doing such studies to be as high as 30 years or longer.

So, Gee Golly, Snout! You have clearly showed us the difference between you and Dr. Bauer.

At least what Dr. Bauer presents comes from knowledge and wisdom, and is based on presenting the FACTS of exactly what information has been presented in silly biased studies of the data, and not on harebrained juvenile fear induced preprogrammed biased self protecting egotistical consensus beliefs.

SteveN said...

Another great post, Snout!

As you have pointed out, Bauer's habit of extrapolating from highly selective groups to the population as a whole is the basis for a lot of his misinterpretations. I still find it hard to believe that he's that incompetent.

Snout said...

Thanks Steve.

And also thank you, “Anonymous”.

Please note, Anonymous, that when copying and pasting slabs of other people’s work it is customary to attribute it, otherwise it looks like plagiarism. It is also fairly obvious when you insert three paragraphs of the work of a competent academic such as Professor Hossein Arsham into the middle of your own rambling and incoherent rant. It tends to stick out.

The three paragraphs from “Data is simply known to be crude information…” to “... best used to meet the needs of the decision-maker” were copied and pasted from Prof Arsham’s statistics site or one of the other sites carrying his work. Next time, please acknowledge him.

Now when you sober up, could you please clarify:

Are you saying that when people diagnosed with HIV infection die this is typically years or even decades after the original diagnosis, as is shown by both longitudinal studies and also by CDC reports of incident diagnoses and incident deaths among people diagnosed with HIV/AIDS?

Or not, as Henry claims with his bizarre 'No Latent Period' and 'Age Distribution of Positive Diagnoses Superposes on that of Deaths' arguments?

jtdeshong said...

Snout,
Great post, as usual. However, I must say that the comments section is even better!! I love how you nailed "Anonymous" to the wall!!
I guess posting as "anonymous" is the only smart thing they ever did.
Todd.

Anonymous said...

Good job Snout! Bauer is loads of laughs, but taking on his wacky convoluted arguments requires some time commitment. Like unplugging a clogged toilet, it takes a bit of effort to wade through the crap. So good for you!

PhiJ said...

That side note - surely it is not a clearly visible double hump, or a refutation of Bauer's claims? It looks to my untrained eye like it is possibly a "similar bimodal pattern [that] is starting to emerge", but possibly a random bump which just appeared for 2006.

Okay I've just looked up the 2007 data, and it does look like there is an emerging peak (it's non-existent in 2005 and larger in 2007), but your data doesn't show that (does it?)

Snout said...

I thought given the magnitude of the bump it was unlikely to be random, and it was also notable that a similar bump first appeared for syphilis in 2006 and grew in 2007. But you're right - while the bump is visible it's not in itself definitive evidence of an emerging trend.

I have not been able to find the 2007 data: can you tell me where you found it?

PhiJ said...

It's here. I found it by following your link to table 1, clicking on the 'reports' link at the top of that page - the 2007 issue is near the top of the page and then it's table 1 again. :D

It also seems to have data for an extra state - maybe one which didn't provide info in 2003.