Archive for January 2011
Individual science fiction stories may seem as trivial as ever to the blinder critics and philosophers of today — but the core of science fiction, its essence, the concept around which it revolves, has become crucial to our salvation if we are to be saved at all.
The above was written by Isaac Asimov in 1978. Five years later, in 1983, he would undergo bypass surgery. The older and/or more attentive of you might see where this is going. Asimov died in 1992. It would be ten years before his family made public that he had died of AIDS. Not only did HIV rob us of one of the greatest writers of the 1900s, but it took ten years before his family felt that they might be safe from the stigma attached to the disease. I find the above quote fascinating and haunting because of how pertinent it seems to the manner of Mr. Asimov’s death. He, and so many, many others, quite simply did not have to die. They died because simple-minded people could not expand their thinking into areas that they didn’t want to.
Infections from a tainted blood supply are an often overlooked subset of HIV infections. Why? Certainly not for any good reason; this information could only help us understand the current crisis. I feel that in all likelihood society is so embarrassed by how horribly it handled the early days of the HIV epidemic that it tries to pretend they never happened. And almost nothing from those days is more starkly horrific than the way we handled the blood supply.
See, blood is more than just a nutrient delivery system, it’s a very lucrative industry. Put yourself in the shoes of an executive of that industry circa 1983. So there is this “gay cancer” going around and everyone is up in arms about it, well…everyone except the current presidential administration which is just ignoring the problem and hoping it will go away. Who knows how you feel about it? Maybe you think that it is a tragedy, maybe you think that it is God’s judgement. Regardless, you get called into a meeting with some guys down at the CDC in January 1983. All of a sudden you are introduced to a new idea: gay cancer might be transmittable by blood. Moreover the CDC actually wants you and your company to take some minimum of responsibility. The horror.
If you were to actually screen your blood it could cost as much as 5 million (1983) US dollars! And if this is true, how come only expert virologists at the CDC know it? Aren’t all those guys just alarmist elites anyway? And hey, they only have clear evidence that three people have contracted the disease from tainted blood so far. That whole thing about 10% of all haemophiliacs being infected by transfusion is rubbish, isn’t it? And it isn’t like a public outcry is likely. Moreover, what gives big government the right to poke its nose into your business anyway? They’re the bad guys here. The government needs to get the CDC under control. So what if about 5% of diagnosed AIDS sufferers had donated blood within the past year? There is just too much of a financial stake for you to bother listening, to bother believing, or to even bother imagining that these experts might be right.
In the meantime, Isaac Asimov would undergo bypass surgery and be supplied blood from a donor with AIDS. That donor had given that blood in good faith, with the intent of saving a life, but the pathogen hiding within would instead claim another victim. And a host more would follow in the wake.
The status quo is often assumed to be good. The first response to troubling information that would change it is generally disbelief. We have seen this before and will see it again on a grand scale. Sometimes that disbelief leads to inconvenience; sometimes it leads to no consequence at all; this time it led to an ever-escalating body count. This time it was just one more step in letting a horrifying pathogen rage out of control. And all for want of a little thinking about what could be. For want of a little open-mindedness, a little evidence-based speculation and extrapolation on the future, the blood banks voted to willingly go on acting as a vector for HIV. The irony that these institutions earned their keep by saving lives is not lost on me.
At the end of the meeting, CDC’s Jeffrey Koplan, who was chairing it, began proposing consensus recommendations. Bruce Voeller suggested a resolution opposed to deferral of high-risk donors; the proposal was defeated soundly on a voice vote. Other proposals met similar fates or were modified so extensively that they were rendered meaningless. The meeting adjourned with no recommendation or agreed-upon course of action. Things would simply go on as they were, as if nothing was happening.
- Randy Shilts, And the Band Played On
Anyone who seriously has to ask: “How much will it cost me not to kill people? Is it worth it?” Fuck you. I know some of those blood bank execs that attended that meeting are still alive and probably still working today. I just hope they realize the extent of the blood on their hands. And I hope all of you realize just how important the questioning wonder of speculative fiction is if we are to be saved at all.
Sources & Further Reading
- And the Band Played On, Randy Shilts
- Seriously, if you don’t read pretty much any Asimov that you can get your hands on, start.
- Not directly related, but there was a recent interesting pictorial history of the epidemic by Life.
So once again it is Friday, it is late, and I really have no topic. Going to do my best to get this blog better organised in the coming weeks. Really. Perhaps now that I have the burden of writing lectures (or at least, writing my lectures, I am slated to help a dear friend over the weekend) off my chest I can finally cover some ground.
Anyway, I thought I would spend this time and technically get a blog post out before midnight, although I am sorry that it isn’t really more academic. I just wanted to talk for a minute about why I do what I do. That is to say, why I bust my ass in grad school and am driving myself nuts trying to get a lab fellowship. Quite simply: I am a very afraid person who wants to save humanity. That doesn’t sound too hubristic, right? Although the devil of the thing is trying to figure out why I would want to do the latter.
Here is the deal: species rise and species fall, and in the end ΔS is greater than zero for any real system. That’s the world we live in, in a nutshell. Anyone who tells you that humanity is different, that humanity is special, that humanity is outside of these rules, is most likely selling something. Either we die out, are naturally selected into something we won’t recognise, or live to the limit of macroscopic life before heat death. And those are listed in exponentially decreasing order of likelihood. But it can take a shorter or longer time to reach that point of extinction, and the quality of the living we do in the time our species has is important.
Disease is rampant, disease will always be rampant; it is the nature of things that bacteria will want to feast on us, viruses will want to reproduce off us, and parasites of many forms and flavours will want to drain us dry. And the worst part is that people will largely ignore it. 2,752 people died in the September 11th attacks. America has spent a fundamentally staggering amount of money and life waging war overseas as a result. Now, almost 3,000 dead is a tragedy, and it deserves a response. However, ~40,000 people die each year in America from influenza, and ~500,000 worldwide. Now, look at the relative amounts of governmental spending on warfare versus influenza vaccine research. And that is one subtype of one disease. Where is the outrage? Where is the proportional response? Death and suffering due to disease are accepted as a status quo, and certain groups will even fight for it (anti-vaxers, certain religious groups, many in the ‘alternative’ medicine crowd, etc.). But I don’t think they need to be. And I am dedicating my life to the idea that I am right.
Okay, so here is the promised speculation:
If we try to build a logically consistent hypothesis as to the steps between “not life” and “life”, our thinking naturally centres around something very much like RNA. In general, inorganic chemicals do not become organic chemicals without an organic catalyst (thus the need for metabolism). So there has always been a gulf there between the inorganic world and the so-called “genetic takeover”. However, it seems that under certain circumstances two of the key components of life, uracil and cytosine, can arise from inorganic constituents. So we can hypothesise a form of proto-RNA.
However, these molecules would be very unstable and would need to be kept in a particular aqueous medium. Thus we hypothesise the proto-cell. And more than just hypothesise: we can build such things in the lab out of simple fatty acids. This proto-cell would have to be mostly permeable (as opposed to our modern semi-permeable membranes), since the absence of proteins precludes a complex membrane transport system, and otherwise the proto-cell would lack the nutrients to thrive.
All of these things are nicely laid out in the review “The Origin of Life on Earth” by Ricardo & Szostak, 2009. However, there is one thing they notably do not cover. Let us imagine this environment: proto-cells floating around in a soup of proto-RNA and various proto-metabolites, which drift in and out through their fatty acid membranes. But here is the thing. If genetic material can freely diffuse through these membranes (and it can, since there is no way to stop it without stopping everything else), then how does any kind of proto-genome get selected for and developed? If the genetic material of any cell is constantly being polluted by foreign material, nothing is going to happen; we are never going to get past this proto-stage. Indeed, this invasion of foreign material seems a great deal like our modern problem with viruses. So what is a cell to do? Exactly what single-celled organisms do against viruses today.
See, bacteria generate a basic self/other dichotomy and protective response based on enzymes known as restriction/modification enzymes. This can be seen as the simplest instance of our much more complex immune systems, which work through very different mechanisms but stem from a similar system of self/other recognition at a molecular level. Restriction enzymes cut DNA at specific recognition sequences. Modification enzymes tag DNA to indicate that it should not be cut. When the system is working properly, all the DNA in a given cell is tagged. So any foreign DNA that enters is untagged, therefore not self, therefore cleaved to pieces by the restriction enzymes. It is conceivable that this system has its first roots in an RNA world, where a membrane-based defence against foreign genetic material was impossible, leaving the proto-cell to rely on a catalytic defence.
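The self/other logic above can be sketched in a few lines of toy code. To be clear, this is an illustration of the tagging scheme only, not of real enzyme biochemistry: the recognition motif (EcoRI’s GAATTC) is just a familiar example, and real modification enzymes methylate bases rather than remembering a set of positions.

```python
# Toy model of a restriction/modification system (illustrative only).
RECOGNITION_SITE = "GAATTC"  # EcoRI's motif, used here as an example

def methylate(genome: str) -> set[int]:
    """Modification enzyme: tag every recognition site in 'self' DNA."""
    return {i for i in range(len(genome))
            if genome.startswith(RECOGNITION_SITE, i)}

def restrict(dna: str, tagged_sites: set[int]) -> list[str]:
    """Restriction enzyme: cleave at any untagged recognition site."""
    cuts = [i for i in range(len(dna))
            if dna.startswith(RECOGNITION_SITE, i) and i not in tagged_sites]
    fragments, prev = [], 0
    for cut in cuts:
        fragments.append(dna[prev:cut])
        prev = cut
    fragments.append(dna[prev:])
    return fragments

self_dna = "ATGAATTCCGGAATTCAA"
tags = methylate(self_dna)               # all of self's sites are tagged
assert restrict(self_dna, tags) == [self_dna]   # self survives intact

foreign_dna = "TTGAATTCGG"               # untagged invader
assert len(restrict(foreign_dna, set())) == 2   # cleaved to pieces
```

The point of the sketch is that the defence needs no memory of specific enemies, only a single universal test: tagged means self, untagged means cut.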
However, we must remember that this is an RNA world we are talking about. No DNA. No proteins. So, does that mean no enzymes? Well…you might think so. One of the things that tends to get glossed over in modern bio is the wonder of the ribozyme. Ribozymes are catalytic RNA: RNA that serves the same function as an enzyme (a brilliant example of RNA with catalytic activity is the ribosome, which is ~70% RNA, including all of its catalytic sites). And ribozymes that function like restriction enzymes are extant today.
Now, it may never be possible to know what actually occurred at the dawn of life, but trying to produce logically consistent hypotheses is entertaining, and I think these putative proto-ribozymes fill a gap in the current theory and explain how the proto-genome could have the protection it needed to grow towards something resembling our modern day biology.
Hah, made it before midnight!
For reasons outlined yesterday this is going to be pretty short.
In 1994, Francis Crick published a book called The Astonishing Hypothesis: the hypothesis that the mind, the “you” that is you, is nothing more than the behaviour of a vast assembly of nerve cells. While I find this hypothesis a little less than astonishing (indeed, I feel that any serious reflection on neurobiology makes it apparent and wholly unsurprising), the book is a fantastic overview of evidence and experiment towards demonstrating this principle (and one I will write a proper review of as soon as I can find the time to finish reading it).
Let us accept this hypothesis. Consciousness is wholly rooted in biology. Now, we do the evobio thing, and things look less astonishing and more alarming. What causes selection for consciousness? And what could cause selection against it?
To understand why that is chilling, we need to first redefine the way we look at things. I often accuse humans of being anthropocentric, but if you limit the system to our own bodies, we are sapiocentric. We think of the “I that is I” as our minds more often than we think of it as our flesh. But in truth the conscious mind does incredibly little. You don’t need it to breathe, you don’t need it to pump your blood, you don’t need it to digest your food. On a lower level, the conscious mind has absolutely no control over the cellular workings of the body, or, say, the immune system, or growth. Indeed, repetitive tasks can be done without the conscious mind: take your daily commute or any routine chore. As another example, walking does not require that we thoughtfully place each step. Tossing something onto your desk from across the room requires no conscious kinematic calculation. The conscious mind does make some decisions, but it does not make, and indeed cannot make, many more (and recent cognitive research makes even this suspect: it has been suggested that the impulse to move is sent to the muscles before the conscious mind even decides to act; this is something I should explore in more detail and with proper citation at a later date). Complicated ideas and even mathematical equations can work themselves out best when we are actively not thinking about them. And as for memory, the conscious mind hardly has any. It can only hold a very small number of variables before shunting them off to the unconscious to be recalled later (with varying degrees of success). So it seems apparent that the conscious mind is not necessary for action, even so-called intelligent action.
Feeling it yet?
Now, it cannot be the case that there is no benefit to consciousness. If it were merely an anchor around the neck of our body, it would (most likely) have been cast off long ago. Consciousness gives you up-to-the-minute control over a certain limited subset of systems. And in a number of cases this could be useful. For instance, it is unclear how well the unconscious can do foresight and preparation. The same goes for making logical jumps and providing important feedback to the systems consciousness does have a say in (for instance, realising through cues other than direct sensation that an atmosphere is poisonous, and taking respiration under conscious control). Moreover, there could be certain actions best performed by the union of conscious and unconscious into a unified whole: creativity and technology development come to mind as potential examples.
All of that said, however, it does seem in some ways to be horribly detrimental. Let’s take paraphilia. The sex drive serves an evolutionary purpose: to push us towards reproduction. A number of fetishes involve pleasure taken from something other than the actual sex act, and this separate act cannot lead to the passing along of genes. The conscious mind has developed a hang-up to the detriment of the body’s reproduction. Let it be clear I am not saying that sexual fetishes are bad (I personally have a few…dozen >_<). However, it seems absurd to argue that they stem from anywhere but the ‘self’ or have any evolutionary benefit. I use sexual fetish as an example because the line is very easy to draw, but one could say the same of any entertainment. There is no biological requirement for entertainment, or aesthetics, or indeed even philosophy. But the mind stagnates without them. Any way you slice it, from a biological standpoint this is a flaw. We can hypothesise that this flaw crept into the schema over time, or that it was there all along and consciousness was selected for anyway because the benefit outweighed the cost. For the sake of the “us that is us”, let us hope for the latter, because if not, there is a very troubling possibility: that the age of the “I that is I” will be a footnote in the history of life.
Okay, so I suck at being brief. Also, credit must go to Peter Watts and his novel Blindsight for planting this line of inquiry in my mind.
So there is a lot of talk going around right now about the antibiotic crisis. And a good bit of it is on HuffPo. No, I won’t link you. You will never find a more wretched hive of anti-vaxers and homeopaths. Suffice to say, Dan Rather, writing on the antibiotic crisis, made the following comment about a week ago:
[…] there isn’t a single known antibiotic to which bacteria have not become resistant.
I think that everyone who is taking the time to read this probably already has a good idea of my attitude towards scientific misinformation. And the above quote is a particularly good one because it manages to pack two mistakes into only thirteen words. What a value! Now, let’s be clear about one thing: there is an antibiotic crisis, but the nature of it is subtly and importantly different from the picture many people are trying to paint.
Okay, the first mistake is a high school biology one. And it’s pedantic of me to even point it out, yet still important to know. Bacteria do not “become” resistant to antibiotics. That’s not how evolution works. A certain small percentage of bacteria already are resistant. In the absence of antibiotics this percentage remains small, because antibiotic resistance (like every trait) has a cost, and it is not efficient to carry it when there are no antibiotics around. When we start using antibiotics, that small percentage grows while the rest of the bacteria die off. Thus the pre-existing resistant bacteria replace the vulnerable ones. This was demonstrated quite dramatically when penicillin-resistant bacteria were isolated from among the gut flora of polar explorers who died and were frozen in permafrost in 1845 (Medical Tribune, December 29, 1988, p. 1, 23). This one just bugs me.
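The replacement dynamic above can be sketched numerically. The fitness values and the 1% starting resistant fraction below are invented for the sake of illustration; the point is only that the resistant subpopulation pre-exists and is amplified, not created, by the drug.

```python
def simulate(generations: int, antibiotic: bool, p_resistant: float = 0.01) -> float:
    """Return the resistant fraction after some number of generations.

    Without antibiotic, resistance carries a small fitness cost, so the
    resistant fraction stays small; with antibiotic, susceptible cells
    barely reproduce and the pre-existing resistant minority takes over.
    (Fitness values are illustrative, not measured.)
    """
    fit_r = 1.00 if antibiotic else 0.95  # resistant fitness
    fit_s = 0.05 if antibiotic else 1.00  # susceptible fitness
    for _ in range(generations):
        w_r = p_resistant * fit_r
        w_s = (1.0 - p_resistant) * fit_s
        p_resistant = w_r / (w_r + w_s)   # next generation's fraction
    return p_resistant

print(f"no drug,   50 generations: {simulate(50, antibiotic=False):.5f}")
print(f"with drug, 50 generations: {simulate(50, antibiotic=True):.5f}")
```

Starting from the same 1% resistant fraction, a few generations under the drug drive the fraction towards 1, while without the drug the costly trait dwindles further: replacement, not transformation.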
The second mistake is the big one: “there isn’t a single known antibiotic”. Do you have any idea how many antibiotics we know? Dan Rather certainly doesn’t. I don’t either, so I guess I shouldn’t throw stones. But Mr. Rather seems to think that the number of antibiotics known is somehow equivalent to the number of antibiotics in use. I can guarantee you that he is at least an order of magnitude off. And if he wants to talk about the antibiotics that exist, then he is multiple orders of magnitude off. See, we tend to be anthropocentric, another pet peeve of mine. This means that we naturally want to view the battle against microbes as us vs. them. And that’s not true. For the most part, we do not make antibiotic agents de novo to wage our war against tiny invaders (the sulfonamides and quinolones are notable synthetic exceptions). Rather, we subvert the microbes’ own weapons (or the mechanisms of other forms of life) and use those. No one is going to waste the money designing compounds from scratch when they can be plucked from almost anywhere. These things are literally more numerous than if they grew on trees.
So why the crisis? Why, if there are so many antibiotic compounds known, are we running out of ones that are effective to use? Because it is just not profitable. Because there are other, higher-profit things that the pharmaceutical industry could be doing, and it doesn’t want to waste the time and resources on deriving, testing, and going through all the governmental red tape to get approval for a new antibiotic. Not to mention they are under no pressure. In terms of the numbers game, the number of people dying from resistant strains is too small to raise a great public outcry. And because of people like Mr. Rather, everyone is going to be too busy pointing the finger at doctors and ag business to ask why the hell the pharmaceutical industry wasn’t prepared.
Should we stop being so careless with our antibiotics? Certainly. But are we going to run out? Well, no. Not unless we sit around twiddling our thumbs and playing the blame game when we could be isolating and testing more.
[featured image source: By Janice Haney Carr, Centers for Disease Control and Prevention (Public domain)]
As much as I take notice of the Nobel prizes from time to time, they are far from any kind of perfect system for delineating and recognising the most important research and researchers in the sciences. This is nowhere more apparent than in the case of Oswald Avery. There were many great contributions to biology over the course of the 1900s, but arguably none more important than the Avery, et al. 1944 paper “Studies on the chemical nature of the substance inducing transformation of pneumococcal types.” Now, the last one hundred years have seen some amazing biological feats, and the Nobel foundation has caught a lot of them: double-helix nature of DNA, PCR, etc. However, without this one 1944 paper those discoveries wouldn’t have been possible, because we wouldn’t have been looking.
As with most things, this needs a little history to understand. Oswald Avery was born in Canada in 1877. His family moved to America while Avery was still young, and as a youth he displayed a great talent for oratory, music, and art. But he didn’t pursue any of these paths. Instead, he went into biology and medicine. Avery did not have a natural talent for this, at least so it seemed. His career was almost entirely unremarkable, that is until he was picked up by the Rockefeller Institute (and even there his first several publications were complete flops). This trend would continue until the eve of America’s involvement in WWI.
The Rockefeller Institute had essentially become the medical R&D arm of the government as Woodrow Wilson slowly forged America into a (really quite horrifying) war machine. The camps where soldiers lived during training (overcrowded, in complete disregard for military hygiene rules) were the most perfect breeding ground that disease could ask for. And it struck hard. Even before the infamous 1918 flu, the American training camps were hit by an epidemic of measles. Now, measles itself rarely kills you directly, but it does leave you open to secondary infections, most notably pneumococcus, and it fell to Oswald Avery, now a private in the US Army (the entire institute had been inducted into the military practically overnight), to deal with the problem. And he did. Avery went after pneumococcus with a passion, studying it, cataloguing it, and working tirelessly to find some kind of vaccine.
It is difficult for me to believe that Avery did not first come across the transforming principle in his own work, since we know that he tried mixing live pneumococcus and heat-killed pneumococcus in his attempts to create a vaccine. But perhaps not. Or perhaps in the midst of war he failed to realise the significance of what he had done. But around ten years later, Frederick Griffith didn’t. Griffith was a British researcher trying to solve the same problem Avery had been: how to inoculate people against pneumococcus. Once again he tried mixing live strains and dead strains of the bacteria. But something strange happened when he did, and he took note. If he took a non-lethal strain and injected a mouse with it, the mouse was fine. If he took a heat-killed lethal strain and injected a mouse with it, the mouse was fine. However, if he took a non-lethal strain and a heat-killed lethal strain and injected a mouse with both at the same time, the mouse would die. Moreover, live, lethal-strain pneumococcus could be isolated from the corpse. He happened upon the idea that the non-lethal strain could somehow be made into the lethal strain merely by being put in close contact with it. Something passed between them. This became known as “the transforming principle”.
No one really knew what to make of this. Although of course, the main theory was that it was due to genetic material being taken up by the living bacteria. Y’know, genetic material, a.k.a. nuclear proteins. Because, after all, proteins are large and dynamic; that nucleic acid junk floating around in there was obviously just extraneous trash. Everyone believed this. Probably even Avery. When he returned to research on pneumococcus in the 1930s in an attempt to isolate the transforming principle, he most likely thought he was going in search of a protein. But bias has never been a friend to bleeding-edge research, and fortunately Avery was able to look past it. He, Colin MacLeod, and Maclyn McCarty performed a series of experiments spanning more than a decade and published their findings in 1944: DNA was the transforming principle. Therefore, DNA was the most likely candidate for the genetic material.
This was it. This was the big one. We lost a century of scientific progress to phlogiston theory; how much longer would molecular biology have remained stymied without Avery? Indeed, the understanding that DNA is the hereditary material laid the foundation for every advance that came afterwards. This is the lynchpin of modern biology, and every one of us who works in the field is personally indebted to him and his fellows. However, the initial reaction to Avery’s publication was very negative. Many scientists had a lot invested in the nuclear protein theory, and some of Avery’s fellows at the Rockefeller Institute took the opportunity to slander him and his research at every turn, including to the Nobel Foundation. Of course, once it became generally recognised (around 1950) that Avery was correct, the Nobel Foundation couldn’t just swallow its pride and admit error. So they staunchly continued denying Avery the award until his death in 1955, at which point he became ineligible.
Fortunately, by all accounts Avery really didn’t care about recognition (or really much besides his work), so we can hope he didn’t take offence. Of course, given the recent behaviour of some Nobel recipients perhaps we should be glad.
- “Studies on the chemical nature of the substance inducing transformation of pneumococcal types”, Avery, et al.
- The Transforming Principle: Discovering That Genes Are Made of DNA, Maclyn McCarty
Abraham Lincoln probably had syphilis. Mary Todd Lincoln certainly had it (leading to her dementia and institutionalisation) and most likely contracted it from her husband. Accounts from Lincoln’s friend, law partner, and biographer, William Herndon, indicate that Lincoln claimed to have contracted it sometime in 1835-36. It is a matter of record that Lincoln was medicated with “blue mass” (a mercury salt pill), which was the treatment of the day (and which in and of itself is a little bit horrifying). Lincoln’s personal medical history is no concern of mine, but what I do find troubling is the way people react to the suggestion (just look at some of the negative response that Deborah Hayden and Gore Vidal received when they expressed this possibility in a public forum) and a general lack of knowledge or curiosity about the subject.
This is a generally observable response to the subject of sexually transmitted infections in the 19th and 20th centuries. Take as an example a few more instances that American history has conveniently tried to forget regarding “social diseases”: (1) in 1890, one in every 7 marriages was sterile due to venereal infection; (2) in 1910, Army hospital admissions figures show that the rate of venereal disease among troops was ~20%, and epidemiologists noted that admissions figures would only show the worst cases and hypothesised that many more minor infections went entirely unreported; (3) in 1918, the Commission on Training Camp Activities, in conjunction with the Law Enforcement Division, started wide-scale programs to intern any woman “reasonably suspected” of carrying a venereal disease or of being a prostitute or “charity girl” in detention centres, as an attempt to stop venereal infection “at its source”; (4) the very first PSA film the government would ever make was “Fit to Fight”, an announcement on the evils of prostitution and the importance of combating venereal disease for achieving victory in WWI. It would later be re-imagined as “Fit to Win” and intended for general release, but theatres at the time declared it morally licentious and obscene, destroying many copies.
Do note that these are just a smattering of examples of the extraordinary impact that a single type of disease had over a relatively short span of years (for a meticulously detailed account of all the ways that venereal disease has shaped American society from 1880 on, I cannot recommend enough Allan M. Brandt’s No Magic Bullet, it is the source for the above examples and manages to pack worlds of exhaustive research all into a relatively slender book that remains quite accessible). Yet due to the nature of these illnesses, and shame over certain responses to them, prudishness balks at presenting the history accurately.
And the reason is pretty straightforward: how many people really want a bunch of syphilitics clogging up romantic fantasies about the past? How many historians really want to admit that the actions of nations and the victories in wartime might be more influenced by our biology than any amount of zeitgeist or any number of charismatic and skilled leaders? It doesn’t fit into our preferred narrative and as a result we scrub the very consideration out of history.
Only that doesn’t always work out so well. I really hate how trite and overused the aphorism “Those who don’t learn from history are doomed to repeat it” has become, but at times I have to accept that it might be appropriate. In a number of ways, the HIV epidemic in America mirrors our struggle with syphilis. And while the two pathogens were nothing alike, it turns out that humanity remained exactly the same and made a number of the same mistakes (silence due to prudery, the unwillingness of leaders to directly address the problem, scapegoating, scaremongering, misinformation, capitalising on tragedy for religious/political power; we even talked about internment camps again for a while there, but thankfully that never became a reality). How many times had we made those mistakes before, and how many times will we make them again in the future? This isn’t some abstract question; the behaviour of society at large in these two instances can be directly tied to a body count. One that I would really not like to see get any higher.
Note: In this particular case I have used venereal disease as an example because I believe that an immature/prudish strain in society is particularly reticent to discussing it. However, you can see some measure of the same reticence to discussing any past disease provided it has not become remote enough from the present day. For instance, much more is said about the bubonic plague than is said about the 1918 flu or polio, possibly because the plague seems so remote in time and thus safer (despite the people that still die from it each year >_< I have actually had people argue with me when I tell them that bubonic plague still kills people today because they swear up and down that it went extinct in “like the 1300s or something”). I vividly remember reading an account in Nancy Tomes’ The Gospel of Germs (a discussion of the social impacts of germ theory in the Progressive era) that in the aftermath of the 1918 flu a sort of collective amnesia fell upon society after the disease was over and no one seemed to want to acknowledge it. It is a shame she didn’t develop that point more.
Further Note: This article speaks from an American point of view because that is what I have to work with and against on a day to day basis. If any of my (unfortunately still very small) pool of readers have noticed similar or contradictory cases in their country of origin, I would love to hear about them in the comments.
Sources & Further Reading
- AIDS: Science and Society, Fan, et al.
- And the Band Played On: Politics, People, and the AIDS Epidemic, Randy Shilts
- No Magic Bullet: A Social History of Venereal Disease in the United States Since 1880, Allan M. Brandt
- Pox: Genius, Madness, and the Mysteries of Syphilis, Deborah Hayden