Wednesday, November 30, 2011

Blinded by Science - Part 1

“Do not worry about your problems with Mathematics, I assure you mine are far greater.” - Albert Einstein

Take Einstein's quote, insert the word Science in place of Mathematics, and you're at the crux of my problem.  I'm not a scientist.  Thank GOD the "far greater" problems (quotes are there not because I'm implying their problems aren't in fact far greater, but because it's from the Einstein quote above) scientists face in studying something are not the problems I face as a non-scientist.  And, while I graduated with a Bachelor of Science in Chemical Engineering, let's face it, I was drunk 50% of the time I was in college and copying off fellow students' papers the other 50% of the time.  I possess a decent working brain and understand rudimentary scientific process, and the fact is I have some worries about the science of autism.  These are not the worries of a scientist.  These are the worries of a parent who is trying to get his daughter the best care he can.

My current worries about Science:

1)  How to determine what treatments/therapies/diets/etc. are backed by science.
While my understanding of the science of autism research is not what worries me, my understanding of how to interpret its results is.  Why?  Because scientists tell me that the best, most effective treatment for my daughter comes only from those treatments backed by solid scientific data.  So how do I determine that?  Yes. . . yes. . . I know. . . Testimonials are not the same as Data.  Stuff like that helps. . . but I need more.

2)  How to determine whose science is real science.
So.  Who tells me, the primary caregiver, which treatments are or are not backed by solid scientific data?  Scientists?  My pediatrician?  That in itself is a problem.  "The Tale of Two Scientists".  The easiest and most polarizing example of this for folks sympathetic to autism issues is, of course, Wakefield vs. The World.  Or Anti-vax vs. Vax.  Leaving all the conspiracy theories. . . drug companies, corporate greed, government cover-ups, etc. . . . completely out of the equation, we're left with a group of scientists who made a claim, and another group of scientists who later disputed the claim.  And yes, I know all about what happened to Wakefield in the . . . heh. . . wake of it all.  It doesn't change the fact that as a parent, I can't even begin to tell which scientist is credible and which is not.  How do I do that?

3)  How to understand what constitutes an effective scientific study.
I have read descriptions of treatments that claim to be backed by science that have been ADAMANTLY refuted as quackery by scientists.  When I look at the claimant's study. . . it looks very sciency.  (Totally a word).  I need to better understand how to read the studies themselves to determine WHY the study was quackery.  The most recent example I can think of is the Mercola study with the National Vaccine Information Center.  I read a review that sliced and diced it.  It was shady, it was shifty, NVIC was in business with Mercola, the participants were being treated by Mercola using Mercola's products. . . how could this study be "Scientific"?  And yet, as a parent, looking at it from the outside. . . I have no knowledge of the participants, or the relationship between NVIC and Mercola.  To ME. . . it looks sciency!  Are there things I can look at to see that things were done right?  Buzzwords like double blind, control, group sizes, peer-reviewed journal, etc.?

4)  And while we're on the topic (see above) how can I tell what's a reputable peer-reviewed journal, and what's trash?  Because I've seen criticisms of studies that were nothing more than "it came from (implied scoff) The Journal of Insert Technical Sounding Title, so you know it's crap".  All those journals?  Yeah, they look the same to me.  What are the 'reputable' journals?

5)  How to understand whether I give a shit whether the study is sciency or not.  Because, let's face it, before ANY of these studies were backed by science, there was someone who was using them to treat kids with Autism, and at that time, they were NOT backed by science.  I think as a parent this is one of the toughest and most guilt-filled decisions:  Deciding to attempt a treatment even though it hasn't been "adequately" studied.  Whose fault is that?  The lack of science doesn't necessarily disprove the treatment, it just means more study is needed.  I want to make the "right" decision, but science is telling me that the right decision is only to use treatments with proper scientific data backing them up.  And what we, as parents of kiddos on the spectrum, know is that time is of the essence.  So it's not like I can really afford to wait around letting my child's best, most treatable years tick away while scientists reach their conclusions.  Or can I afford NOT to wait?  If I decide to try a treatment or therapy or participate in a study with my daughter that later turns out to be (in hindsight) backed by science, I just bought that much more time.  If I decide to try a treatment with my daughter that is later debunked by science, I'm the rube who wasted valuable therapy time on quackery.

Science is heady stuff, and scientists are a snobby and defensive (but ADORABLE) little group of know-it-alls.  No offense, scientists, but you know you are.  Hell, it's practically a point of pride.  When I read some of the stuff you write I think, "Wow, these people are really bright!"  And then in the next paragraph I read this same seemingly brilliant person pooh-poohing the right of someone to criticize a study or therapy not based on the merits of that person's comment or criticism, but solely because that person lacks an advanced degree in that specific field of study.  When it comes to choosing care for ourselves or for our children, it's too important NOT to have an opinion and/or take a stand, and none of us have time to go get that doctorate in neurology just so that we are then welcomed to the debate on the efficacy of the treatment du jour.

"I don't believe it!
There she goes again!
She's tidied up and I can't find anything!
All my tubes and wires
And careful notes
And antiquated notions"

What I'm hoping I can generate from this blog, on its very own page (I'll assign a new tab to it after it's been up a few days), is a list of sciency links.  I have some already.  And I don't mean sciency bloggers necessarily, although I'll certainly post those as well.  I mean links to trusted sites that compile treatments or therapies that are (at least in the eyes of some scientists) appropriately sciency.  I don't intend to provide a database of what is or isn't. . . just links to resources to help me (and you, if you want) find out which studies make the cut, or perhaps links to "how to" posts that help you, a non-sciency parent, make sense of the data.

I don't know how much I'll break it out, categorize and subcategorize, etc.  So I'll sort of play it by ear right now.  I don't have the answers.  Just lots and lots of questions.  I'm hoping you folks in the blogosphere can help me with this.  I know I'm still new to this autism parent blogging thing, so I'm worried I won't get the sort of feedback I'll need, but it's definitely worth a shot.

We need all the help we can get.


  1. I love the honesty and frankness here!!! Would love to start the conversation and see where it takes us....check out my two blogs and we can go from there! :)

  2. Hi Jim,

    I’ve been pondering your dilemma, not that I can offer you any solid science sites that will help with your situation. I’ve worked in the medical field for many years, as a nurse, but have never participated in studies to any great extent. I do know that enough quackery exists to be suspicious of most unscientific studies, and that those using double-blind methods, etc., are the more reliable.

    However, in certain cases the quackery produces amazing results and scientific studies fall short of their miraculous claims.

    Case study: My father.

    About twelve years ago, my father had end-stage heart disease. He’d undergone a quintuple bypass a few years prior to this and the doctors refused to do another. He was loaded with meds, sent home and he could barely walk from the bedroom to the kitchen. Desperate, he turned to quackery. He’d heard of chelation, but the doctors all advised him not to pursue it. He found a source in the US that offered oral chelation, a cheaper alternative – the product created by another desperate man - a chemist who was also dying of heart disease. My father took a full course of this treatment and within days he began to feel better. He was able to walk without getting short of breath, work again at gardening and carpentry, all the activities he’d always enjoyed but had been unable to do (along with walking up a flight of stairs). The Nitropaste and other cardiac meds he relied on, simply to breathe without chest pain, he could now dispense with. My father has lived twelve years longer than he should have because of quackery.

    I’ve always been a skeptic and distrusted any treatment without solid scientific backing, but my father was dying, so I was willing to suspend my distrust and see what would happen, since this was his choice. Now I’m much more open-minded.

    I don’t advise you to trust studies that don’t seem sufficiently tried and tested, especially for the treatment of a child. I’m just saying there are cases . . . Hopefully you can find a decent specialist to advise you and give Lily the best of what’s out there. But keep your eye on everything that arises and don’t always trust the MD to have your best interest at heart.

    My father’s cardiologist still won’t test him to see how he managed such a miraculous recovery.

  3. This is an excellent post, and the reasons you list are some of the many reasons I refuse to place myself at either pole (in this and many other instances) when there is so much space to navigate in between. I am a college student with autism, planning to do autism research, so this post is valuable to me not just because it explains what it's like from a parent's perspective (as opposed to the perspective of someone with autism), but also because it explains the gap between research and the people who serve to benefit from it (or suffer as the result of it) the most.

    As a general rule, I don't consider anything sciency reported in the press to be completely accurate, unless it's from a sciency publication with a good reputation, in which case it might be. My developmental psychology class actually had an assignment this term in which we compared press articles to the actual journal articles they mentioned. There was definitely some shoddy journalism afoot - lots of oversimplification, failing to mention factors that could have influenced study outcomes, and even creating results that weren't reported in the journal articles. Unfortunately, the press is where most people understandably get their information.

  4. (I guess I should clarify that I do believe that science is the way to go, but that I also understand why it's not always that simple or straightforward, and I understand even more from your perspective after reading this post.)

  5. Thanks for your comments. I agree with your point about the press. That's at least a starting point. More often than not I'm not even aware of a study in the first place. The press at least makes me aware of it.

    But being aware of a study, and being able to evaluate whether it's a "good" study . . . or "sciency" are two entirely different things.

  6. Oh, oh oh--I think I can help. I have a sciency background (although I never finished a degree). I apologize in advance if some of this is repetitive. Part of having Asperger's is not always knowing when to shut the hell up about something. I try so hard, though.

    The *best* studies out there are set up in ways that reduce as much as possible, if not eliminate, subjectivity and instances of witting or unwitting manipulation. That "double-blind" buzzword, for example, means that both participants AND researchers are in the dark as to which participants are receiving, say, a placebo vs. the real medicine, or something like that. Which means that researchers can only go by the data they receive and not their perceptions of what the drug SHOULD do.

    A "bad" study can be sciency as hell, but still be a bad study. A bad study is one that is set up to favor a pre-determined conclusion, or one in which the data collected becomes so subjective or corrupted that it's meaningless. Subjective data could come from a study where participants were not treated uniformly, for instance--maybe it's a weight loss study, and two different researchers are meeting with participants from Group A, but one is giving them a lot of diet tips and the other is not talking about weight loss at all. Then you'd have some participants from the same group who were getting an extra nudge without it being part of the plan, which skews the end results of the study, if that makes sense.

    A good study will have, when possible and ethical, a control group and at least one experimental group. The control group should be of the same makeup as the experimental groups--age, sex, those sorts of things should be uniform in both groups. Each experimental group should, ideally, only have one thing that is done differently from the control group or from another experimental group. If you had an experimental group that had TWO different variables--we'll take the weight loss example, if you had a control group doing nothing different and your experimental group was both dieting AND exercising differently--then it would be impossible to sort out which thing was causing the result. (Although you could definitely set up that experiment using a control group, an experimental group that ONLY dieted, one that ONLY exercised, and then one that did BOTH, because you would have the diet-only and exercise-only groups to compare. Does that make sense? Because you could then see how diet by itself works, and how exercise by itself works, and you could see if both work better together or if one is stronger than the other.)

    (Part 2 forthcoming)

  7. (Part 2 has arrived!)

    The experimental group participants should be undergoing strictly the same procedures or regimens. With the weight loss groups example, again--everyone should be following the same diet in the Diet group, but they should also all be presented the diet in the same way, with the same materials and instructors (or instructors carefully trained to follow a specific script and present the same way), so that there is uniformity. The smallest thing--maybe one instructor presents the diet enthusiastically, and the other thinks it's a waste of time, so half the participants are excited and half dread it--could skew the results. And if that diet were to be measured in a second group, the Diet and Exercise group, it also has to be replicated exactly the same, or else variables in procedure could skew the data.

    The idea is to make the diet, or the exercise, the ONLY variable present, so you have to work really hard to eliminate any variables that might exist within the experimental group. This also has a lot to do with randomizing selection: you wouldn't want the control group to be all men and the experimental group to be all women, because that's another variable that would make it hard to tell whether the different results are coming from significant differences among participants or from the variables that were introduced by researchers. A random selection minimizes the possibility that it would be because of a significant difference between groups.

    The major thing to be on the lookout for when you're reading through studies is: what ELSE could have caused the result that they got? Did they do something that inadvertently or subconsciously--or on purpose, even--caused the result, other than what they claim caused it? Like, they were testing a drug that they claimed made ADHD children calmer, but maybe they were monitoring their medicated group later in the day when more of the kids might be tired, and their placebo group earlier in the day when they were more energetic, or maybe the placebo group was in a space that had more stimuli for the kids, etc., and it wasn't just the effect of the drug. You're basically looking for any other possible explanation for how the results could have turned out the way they said, and also looking to make sure the results really *did* turn out the way that they said, because sometimes, they get mighty interpret-y. If there's any other variable at play that wasn't taken into account, it makes the study kind of garbage.

    (Also, the study should be able to be replicated by any scientist with reasonable access to equipment/participants/funding/etc--if the results can't be replicated because some of the procedure wasn't released or because it is just impossible to reproduce for whatever reason, the study is garbage.)

    1. Awesome stuff. Thanks for the reply. I love the summary, and it's a fantastic refresher for someone (like me) who had enough of a science background in college to know how an experiment (or study) SHOULD be conducted in order to get results that are meaningful.

      What I really need to know is how, once the study is published, do I as a concerned reader, determine whether those principles as summarized so fantastically above were followed.

    2. Several years ago I had a one-day training course on "Terms and Conditions". . . essentially, how do I, as a Project Manager, evaluate contractual terms and modify them so that my company's best interests were represented. One of the things I found useful was an exercise where an actual contract was presented, and we marked it up, then the instructor went through and analyzed it paragraph by paragraph to show us what was being said in the contract, and how we SHOULD have marked it up to prevent our company from being taken advantage of.

      I think that would be a magnificent exercise here. I may see if I can find a study, print it out, annotate it, and look for signs that show the principles were followed. It might even be MORE instructive (or if not instructive. . . "telling") if it APPEARS that the principles were followed, but was actually a junk study.

      This requires additional thought. . .

    3. Unfortunately, it's really something that just requires a lot of scrutiny of actual studies. Researchers are tricky bastards at times, too, so they'll do their best to make something sound whiz-bang awesome when it's not, either to make themselves look good, or to get funding, or just because they fully and wholly believe that their hypothesis MUST be correct.

      Sorry, my brain is a little ... different, as I said previously, and I'm not entirely connecting with your need here--but am trying! I'm not like, a full-on scientist with a degree, but I could help you look over a study to see if I can help you learn to analyze them.

      Maybe it would help to take notes when reading the studies? Science papers can get a little dense because they're crammed with information and often with the scientist's own subjective ideas smeared all over them (I feel a little swimmy when I read some of them--not all scientists are excellent writers, either, heh). If it were me, I'd start by going through the paper and organizing their procedures into an easy-to-read list. Then I could see more objectively if the procedure seemed legit, or make notes to myself if I thought it was inconsistent--I could even make separate lists of what was done to each group and compare for consistency. If the procedure passed muster, I would then look at the results--listing them out without interpretation, just to see if I thought the numbers told the same story on their own.

    4. No no. . . I think your comments are great. I'm thinking I now need to put it to practical use; look at a study and try to spot the buzzwords and see if I can!
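The study-design basics spelled out in the comments above--random assignment, a control group, one variable at a time--can be boiled down to a toy simulation. This sketch is purely illustrative and not from any real study: the participants, seeds, and "effect" are all made up. But it shows why randomly split groups let a real effect stand out from individual variation:

```python
import random
import statistics

def randomize(participants, seed=42):
    """Randomly split participants into two equal groups, so neither
    group is systematically different from the other."""
    rng = random.Random(seed)
    shuffled = list(participants)
    rng.shuffle(shuffled)
    half = len(shuffled) // 2
    return shuffled[:half], shuffled[half:]

def mean_outcome(group, true_effect, seed):
    """Simulate one outcome measurement per participant:
    random individual variation plus whatever real effect exists."""
    rng = random.Random(seed)
    return statistics.mean(rng.gauss(0, 1) + true_effect for _ in group)

participants = list(range(100))
control, treatment = randomize(participants)

# The control group gets a placebo (no effect); the treatment group
# gets a made-up true effect of 2.0 on top of the same random noise.
control_mean = mean_outcome(control, true_effect=0.0, seed=1)
treatment_mean = mean_outcome(treatment, true_effect=2.0, seed=2)

print(round(control_mean, 2), round(treatment_mean, 2))
```

If the true effect were zero, the two averages would land close together; the gap between them is exactly what a well-designed study is built to detect--and what a badly designed one can fake by letting some other variable sneak in.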