
Friday, December 24, 2010

Placebos - now with added ethics!

Placebo effects can be induced in patients without deception, according to a new study in PLoS ONE. The study was an open-label (no blinding for patients), single-blind (the investigators did not know which treatment participants were given) controlled trial in 80 people suffering from Irritable Bowel Syndrome.

The patients were followed for one month, with assessments at baseline, midpoint and endpoint. According to validated self-report measures for the syndrome, the patients who were given the pill improved much more than the patients who received no treatment. It's important to note that the groups were randomised, and they were also matched in the amount of interaction they had with the medical providers.

An interesting point in the study (which doesn't appear to have been picked up by other science bloggers) is that some of the patients (N=17) also received counterbalanced provider interaction: they saw a male doctor once, and also a female nurse. Contrary to some conceptions of the authority of the provider having an impact on the response to treatment, there were no differences in outcome which could be attributed to this difference. Given the small number of patients in this group, that's not very surprising. I really wish this trial had gone the whole hog and randomised everyone to see both practitioners, as that might have provided some very useful data.

Open-label placebo produced significantly higher mean (±SD) global improvement scores (IBS-GIS) at both 11-day midpoint (5.2±1.0 vs. 4.0±1.1, p<.001) and at 21-day endpoint (5.0±1.5 vs. 3.9±1.3, p = .002). Significant results were also observed at both time points for reduced symptom severity (IBS-SSS, p = .008 and p = .03) and adequate relief (IBS-AR, p = .02 and p = .03); and a trend favoring open-label placebo was observed for quality of life (IBS-QoL) at the 21-day endpoint (p = .08).

The above is from the findings section of the abstract, and it cogently sums up the major results of the study. Now, there are a number of important caveats to a straightforward interpretation of the study. There are also a number of interesting implications arising from both the study and the reactions of some of the better known science bloggers.

My first issue with this study is the number of statistical analyses which were carried out: with only 80 participants and at least ten significance tests reported in the article (and probably more which were not reported), the authors probably should have corrected for multiple tests (the most popular approach, the Bonferroni correction, is to divide the required p value by the number of tests). That being said, this was a pilot study, so the results will require replication in a larger trial, which would ideally have a protocol with details of the planned analyses published beforehand (what can I say, I'm an optimist).
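To make the Bonferroni idea concrete, here's a two-line sketch in R using the p values quoted from the abstract above (taking p<.001 as .001; this is an illustration, not a reanalysis):

p <- c(0.001, 0.002, 0.008, 0.03, 0.02, 0.03)  # IBS-GIS, IBS-SSS, IBS-AR at midpoint and endpoint
p.adjust(p, method = "bonferroni")  # equivalent to multiplying each p by the number of tests
p.adjust(p, method = "holm")        # Holm's step-down method, uniformly more powerful

With six tests, the two global improvement results and the midpoint severity result survive the correction; the others do not.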

Orac (of Respectful Insolence fame) critiqued this study on a number of grounds. The first was (he claimed) a failure of randomisation. He based this on the numbers with each type of IBS (diarrhea- or constipation-predominant) and argued that this could be responsible for the observed improvement. While I do take his point, I would suggest to my readers (all three of you) that they look at the table itself as published in the article. Now, it can be seen from the table that the groups did indeed not appear matched on type of IBS.


However, if you look a little more closely at the table, it can be seen that the open-label group had a longer mean duration of IBS, a higher initial mean symptom severity score, and a lower initial quality of life. If anything, had there been no change over the course of the trial, the no-treatment group should have come out superior. Given that only stable IBS patients were admitted into the study (look at the confidence intervals for the length of time with IBS), it seems unlikely that regression to the mean could account for these results.

The effect size for the mean difference between groups at the end of the study was d=.79, which is a large effect by anyone's standards (see the pun? you're a nerd if you do). To explain the effect size measure: it's the difference between the group means expressed in standard deviations, and one standard deviation is the difference between your high school teacher and Einstein, as measured by IQ (assuming an IQ of 115 for the teacher and giving Einstein 130).
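As a sanity check, you can reconstruct something close to that number in R from the endpoint means and SDs quoted in the abstract above (a rough sketch using a simple pooled SD; the authors' exact calculation may differ):

m1 <- 5.0; s1 <- 1.5                # open-label placebo: endpoint IBS-GIS mean (SD)
m2 <- 3.9; s2 <- 1.3                # no-treatment control
pooled <- sqrt((s1^2 + s2^2) / 2)   # pooled SD, assuming roughly equal group sizes
(m1 - m2) / pooled                  # Cohen's d: about 0.78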

This is not a small difference, and yet Ed Yong reports that Edzard Ernst claims it is too small to be clinically significant, which makes me wonder what effect sizes he sees in everyday practice, 'cos that's a large effect to me (I'd kill for an effect size that large, I'd get t-shirts printed and everything).

Orac also takes the researchers to task for deception, as the placebo pills were described as inert sugar pills which have been proven in rigorous clinical testing to have an impact on self-healing processes. He claims that this is the worst deception of all, far nastier than those involved in ordinary randomised trials. Frankly, I don't agree. There have been a number of meta-analyses conducted on the placebo, as well as re-analyses of data from many, many clinical trials, and what participants were told was not a deception, unless telling people what clinical trials have shown about any medical treatment is a deception. So, I really don't see why this bothers him.

Orac also uses guilt by association when he notes that the study was funded by the National Center for Complementary and Alternative Medicine, but I believe that argument is beneath any self-respecting scientist, so I'll ignore it.

PalMD also takes note of the study, and claims that what was compared here was one non-biological treatment against another. I would disagree: what this trial shows is that care and some kind of medical ritual (take these pills twice daily) are much better in combination than they are apart. That, to me, is perhaps the most interesting finding of this study.

Something which may also interest students of the placebo is the theoretical implications of this study. I've talked elsewhere about theories of placebo, and briefly, I think that this study shows that the effects of expectancies are subservient to those of ritual. This is a clear mark in favour of Hyland's theory of motivational concordance, which basically holds that placebo effects arise from what we do that we find meaningful, rather than what we think (or expect) about a treatment. See the link to my previous post on theories if you'd like a more in-depth discussion of the theoretical approaches in the field.

One brief detail that I would like to know more about in the study is how many pills the open-label group took, and whether those who "overdosed" got more benefit. The authors note that a pill count was taken, but they do not report the results of this measure, which is a shame. This measure would have been especially great as it could be modelled as a Poisson variable (appropriate for count data), and then demographics and the other measures collected could have been regressed against it to understand the causes and correlates of this interesting variable a little better.
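The analysis I have in mind would look something like this in R. Everything below is hypothetical: the simulated data merely stand in for the unreported pill counts.

set.seed(42)
n <- 40                                    # roughly the size of one trial arm
dat <- data.frame(age = rnorm(n, 47, 12),
                  baseline_severity = rnorm(n, 290, 80),
                  pill_count = rpois(n, lambda = 40))   # counts suit a Poisson model
fit <- glm(pill_count ~ age + baseline_severity, family = poisson, data = dat)
summary(fit)   # coefficients are on the log scale; exponentiate them for rate ratios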

Now, my own thoughts on the future of this line of research are as follows: it will be difficult to replicate this effect when a drug is being used in the study. This seems intuitive to me, given that the meaning of getting placebo or drug is very different from the meaning of getting placebo or no treatment. Mind you, I hope I'm wrong here. An interesting line of research which bears upon this study is the work using the balanced placebo design. Essentially, this work combines the drug and placebo arms of your standard clinical trial with two deceptive conditions, where the participant is told they get a drug but get placebo and vice versa.
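To spell out that design (my own summary, not taken from any particular paper), the four cells are:

1) Told drug, given drug
2) Told drug, given placebo (deceptive)
3) Told placebo, given drug (deceptive)
4) Told placebo, given placebo (the honest cell, closest to this trial's open-label group)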

I personally would find a replication of this study using that design to be far more interesting (though ethically challenging), as the effects of the placebo-placebo condition could be contrasted with the others. Again, this is probably just a pipe dream, but if I ever have enough funding, I'd like to make this happen.

One way in which these findings can be explained is as follows. I've spoken before about the effects of self-monitoring of internal processes on the placebo effect (essentially, it increases them). It may be that the open-label group, while taking the pills, paid more attention to their bodies, and this attention increased their self-healing processes (which is all the placebo effect really is). This work on somatic awareness also ties into the results of a recent meta-analysis on the conditions in which the placebo is effective. Just a thought; it probably needs a little more refining into a useful predictive theory.

Anyway, thanks for reading. You may have noted that I haven't really talked that much about the paper itself. It's in PLoS ONE, which is free to everyone, and it's very clearly and engagingly written, so go read it yourselves, rather than relying on me or any other blogger to give you their perspective.


Kaptchuk, T., Friedlander, E., Kelley, J., Sanchez, M., Kokkotou, E., Singer, J., Kowalczykowski, M., Miller, F., Kirsch, I., & Lembo, A. (2010). Placebos without Deception: A Randomized Controlled Trial in Irritable Bowel Syndrome. PLoS ONE, 5 (12). DOI: 10.1371/journal.pone.0015591

Wednesday, November 17, 2010

Placebos: All you never wanted to know (Part 4) - Neurobiology

Before I begin this, I'd like to note that I am not a neurobiologist, and this is a weak area of mine, so please be gentle and tell me if I make a glaring error.

A strand of placebo research which has become more and more important with time is the increasing focus on the brain correlates of placebo responses. The biochemical history of the placebo begins with Levine and his demonstration that naloxone blocks many placebo analgesia responses, from which was induced the notion that placebo analgesia is mediated by the endogenous opioid system. This is not always true, and seems to vary based on the type of response which is wanted. The lasting contribution of this research is that it paved the way for the placebo to come in from the fringes of medical science, as even the most dogmatic materialist could not dismiss the biochemical evidence as demand characteristics.

A recent meta-analytic review argues that placebo effects in pain are quite large (d=.89) and that naloxone is quite effective in reducing them (d=.55), pointing towards placebo effects in pain being substantially mediated by endogenous opioids.

In this area, the work of Benedetti and his colleagues has been instrumental in unveiling the biochemical pathways through which placebos exert their effects, and much of this work is summarised in his book along with the work of others. It appears that both the opioid and dopaminergic systems are involved in the placebo effect. While Benedetti and others have done much of the research into the opioid system, De La Fuente-Fernandez \cite{DeLaFuente-Fernandez2002} has blazed a trail in looking at the dopaminergic system.

It has been observed that the dopamine system activates not just to reward, but to the expectancy of reward, and that this release varies as a function of the certainty of the expectancies. In one study, the activation of the dopaminergic system during placebo analgesia was correlated with activity observed during a monetary reward task, suggesting that the mechanisms of reward are a common feature of placebo effects \cite{Scott2007a}. It has been argued that there is a descending link from the OFC to the periaqueductal gray (PAG) and from there to the amygdala, and that this link is responsible for the observed placebo effects.


Another interesting suggestion is that placebo analgesia experiments showing altered brain activity in the rACC and OFC demonstrate the existence of a generalised expectancy network. This hypothesis received some support from a recent experimental study which used either true or false sound cues to create expectancies for particular aversive tastes. The study showed that the rACC and OFC, and to a lesser extent the DLPFC, activated in response to these expectations, suggesting that these parts of the brain may well be associated with expectancy itself.

An interesting finding arose from an experimental study of patients suffering from Irritable Bowel Syndrome (IBS) \cite{Lieberman2004}. This study looked at placebo using a disruption theory account, which explains neural changes due to placebo in terms of inhibition. The authors found that although the right ventrolateral prefrontal cortex was activated by expectancies of analgesia, this activity was totally mediated by the dorsal anterior cingulate cortex, which argues that this part of the brain is more foundational to the placebo response. Another study of patients suffering from IBS found that naloxone did not reduce the size of placebo effects, which would suggest that these were not opioid mediated \cite{Vase2005}.

A further discovery around placebo analgesia is that it can be directed at specific sites in the body \cite{Benedetti1999}. This study induced expectancies of placebo responses at either the right or the left hand, and demonstrated the expected placebo effects. These effects were completely antagonised by naloxone, which suggests that they were mediated by the endogenous opioid system. This finding is interesting, as it suggests that the opioid systems can be activated at specific parts of the body, and not just globally as some earlier theorists claimed. A more recent study \cite{Watson2006} found that perhaps 50% of participants in a placebo analgesia study generalised a placebo response across both arms, even though cream was applied to only one arm per person. This would suggest that the placebo analgesia phenomenon is quite malleable and subject to individual interpretation.

There is some evidence to suggest that some of these effects involve both descending and ascending pathways within the brain, judging from the results of a study on mechanical hyperalgesia \cite{Goffaux2007}. This study used a counter-irritation technique, with a basin of water acting as either a placebo or a nocebo. The authors reasoned that spinal reflexes in the arm should not change if the placebo effect were completely cortically mediated, but the results suggest that descending pathways are just as important in placebo analgesia. These pathways are controlled from the mid-brain, and the findings suggest that the placebo effect exerts changes over large portions of the body and is not exclusively a cortical phenomenon.

Further evidence in favour of this idea comes from the study of Matre et al. \cite{Matre2006a}, who noted large differences between placebo and control areas of mechanical hyperalgesia, again suggesting the involvement of the whole body in the response. In this context, the results of Roelofs et al. \cite{Roelofs2000} are worth considering. Using techniques similar to the two other studies referenced in this paragraph, they found no evidence that placebo effects cause changes in spinal reflex activity. However, this study also found no evidence for a placebo effect in general, which weakens their conclusions. It is worth mentioning that even though they found no significant effects, they did find a correlation between brain activity and spinal reflexes, which suggests that there was an effect but their study was either underpowered or used a badly designed expectancy manipulation (most likely the latter) \cite{Goffaux2007}.

An interesting idea which has come about through placebo research is what is known as the uncertainty principle in analgesia \cite{Colloca2005}, where it is argued that the effects of any analgesic cannot be accurately measured in a clinical situation, as the awareness of being given the substance will activate the opioid system and further reduce pain. This arises from earlier work showing that open injections of painkillers or placebo produced far more variable responses than hidden injections, suggesting that while physiological responses to analgesics may be similar across people, the awareness of treatment may invoke differential activation of endogenous painkilling systems, causing the total effects to appear to vary quite substantially \cite{Amanzio2001}. Some research has also confirmed that placebo and opioid analgesia share the same neural patterns of activation in the brain \cite{Petrovic2005}.

Some sterling work has also been done in the area of depression and placebo response. A fascinating study suggests that prior to treatment, placebos may induce changes in neurophysiology which predict later treatment response. This is an extremely interesting finding; however, the authors used a new measure (EEG cordance) developed by themselves, and to date there have been no replications of the study. Another useful study of placebo neural activity in depression compared the activation of particular brain regions following treatment with either Prozac or placebo \cite{Mayberg2002}. One fascinating finding of the Mayberg et al study is that areas of the striatum were activated, and this region of the brain is known to be rich in dopamine receptors. This may suggest that while the placebo response in depression is primarily opioid mediated, the effects of SSRIs may also influence the dopamine systems, which may account for their (slightly) superior effectiveness overall. However, some research shows that psychotherapy activates different brain regions in the treatment of depression, which argues against the existence of a common depression treatment pathway in the brain.

So far, so interesting. My one complaint about this stream of research is that much of it reiterates old findings, backed up with correlational evidence about which parts of the brain are involved. To the extent that it furthers understanding, it's great; to the extent that it substitutes for understanding, it's worthless.

Weeks, S., & Tsao, J. (2009). Fabrizio Benedetti, Placebo Effects: Understanding the Mechanisms in Health and Disease, Oxford University Press (2009), ISBN-13: 978-0-19-955912-1, 310 pages, $59.95. Journal of the Neurological Sciences, 281 (1-2), 130-131. DOI: 10.1016/j.jns.2009.03.013

Thursday, September 16, 2010

Sweave, LaTeX and R

Yesterday, I finally got the hang of Sweave, R and LaTeX.

This essentially means that I can write my scientific paper in LaTeX, insert code chunks in the text, feed it to R (through Sweave) and get perfectly formatted output in APA style for any paper I choose to write. It's taken me a few months of on-and-off trying, but I've finally done it. That being said, I'd like to share some of the things that caught me out, so that others can benefit.


Before Installation:
If you've been using LaTeX or R on Windows, they were probably installed in the Program Files folder. This will cause you no end of problems, as LaTeX doesn't like spaces in path names.
Re-install these programs on C:, in paths with no spaces. For example, install LaTeX in C:\Miktex and install R in C:\bin\R. This will head off a lot of problems that you would otherwise encounter.

When installing LaTeX, be aware that there are a number of editors which you can use. Of these, I am using TeXnicCenter, as it came with my distribution. It's also open source, which is a plus. Others include WinEdt, which is shareware and apparently quite good. Vim and Emacs (with Emacs Speaks Statistics) are the only text editors that provide completion for R code, but both of these take a lot of effort to learn. In any case, working entirely from LaTeX from the start is very difficult.

The next step is learning to use R. If you are a psychologist, download the arm package and the psych package: arm will give you useful regression diagnostics, and psych provides all the psychometric tools one could need (see here for the author's website, which I devoured when I started learning R). Unfortunately psych doesn't provide IRT methods, but these can be accessed through the ltm and eRm packages, which are also easy to obtain (select the install packages option in the R menus, select a mirror site close to you and select the package name - done).
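If you prefer the console to the menus, the same thing is a one-liner:

install.packages(c("arm", "psych", "ltm", "eRm"))  # fetches all four from your chosen CRAN mirror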

For exporting to LaTeX, there are a number of packages which do different things. I'm currently using xtable, but this doesn't have a ready-made method for factor loading matrices, which is a pain. The manual does show you how to define methods for new classes though, and I will share my results when I have made this work.
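In the meantime, xtable works out of the box for the usual suspects. A minimal sketch, using a built-in dataset rather than my own results:

library(xtable)
fit <- lm(mpg ~ wt + hp, data = mtcars)   # any standard model object will do
print(xtable(fit), type = "latex")        # emits a LaTeX table of the coefficients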

One extremely important thing to remember (and something that stumped me for a while) is the syntax for inserting R code chunks:
<<echo=FALSE, results=tex>>=
some R code
@
The double angle brackets and the equals sign signal the start of the R code chunk; the options within the brackets define how the output looks (echo=FALSE means that the R code will not be shown, and results=tex tells Sweave to pass the chunk's output through as LaTeX). The @ sign ends the code chunk. Now, the part that got me was this: the R code, the angle brackets and the @ need to be left-justified, otherwise this does not work. This means that if you want to insert a table from your results, do this after running the code through Sweave.

At the moment (since I am neither an expeRt nor a TeXpert) I am creating the LaTeX objects in R, and then telling R to print them to LaTeX. This allows me to ensure that the objects are created properly before I send them to LaTeX. Sweave will tell you if it has a coding problem and where it occurred, but some things look OK until you actually see them.
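For reference, the whole cycle from the R console is just the following (paper.Rnw being a hypothetical file name; you can equally compile the .tex in your LaTeX editor):

Sweave("paper.Rnw")           # runs the code chunks and writes paper.tex
tools::texi2pdf("paper.tex")  # compiles the LaTeX to PDF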

The next step is to download the apa package for LaTeX, which will allow you to format the paper in APA style. This is the part that tends not to work if your LaTeX distribution path has spaces in it, so make sure that doesn't happen (I actually reinstalled R and LaTeX on my machine in the recommended places, and now it works like a dream).

You will probably need to learn a little LaTeX, but if you use WinEdt, TeXnicCenter or LyX, then there is a GUI with menus that can aid in this. There are some Sweave templates scattered about the web, and you should probably use one of these. It's probably worth reading either this or this (or both) guide to using LaTeX.

With R, as long as you understand some statistics, it's easy enough to Google and then read the recommended files. The official introduction is extremely terse and focuses on fundamentals rather than applied analysis, but it's useful for its description of summary, plot, lm and the other really useful generic functions.
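To see what I mean about the generics, try this on a dataset that ships with R:

fit <- lm(Fertility ~ Education, data = swiss)  # swiss is built in
summary(fit)   # coefficient table, R-squared and friends
plot(fit)      # diagnostic plots (residuals, QQ, leverage) with no extra work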

Friday, September 3, 2010

My blogging motivations

1. Sum up your blogging motivation, philosophy and experience in exactly 10 words:

A ranty blog about life and science which I feel gets ignored.

Pass it on to 10 others. 

If you read this and are blogging and have not yet done this, consider yourself tagged.

From: Dr. Girlfriend

LaTeX success!

Today, I successfully typeset my first paper using LaTeX and BibTeX.

I know that no-one else cares, but nonetheless, I feel much better about my life.

LaTeX, for those of you who don't know, is an open source typesetting system which allows you to create all kinds of text documents through the use of simple scripting commands, and to output the results in a variety of formats (I tend to use PDF).

LaTeX can also be used with the open source software R, to embed the results of analyses neatly in one file, which then creates your paper (I haven't made this work yet, but tomorrow is another day). 

The major advantages (as I see it) are that you can update analyses and the finished paper much more easily, and LaTeX draws all the tables for you (which is good, because R is not good at producing pretty tables).

LaTeX was originally invented to produce mathematical documents, so equations and the like are very easy to do.

LaTeX is awesome and the future of science and reproducible research.

R is also awesome, but I've talked about that before.

With these tools, I shall be a publication machine! (If I can collect enough usable data.)

Another advantage is that all three tools are available on all platforms, and so soon (perhaps next month) I shall delete Windows and go completely free software. You should too.

Sunday, August 22, 2010

Grad school, Irish style.

Taking some time off from my placebo series, I'd like to talk about my experience as a PhD student in Ireland.

This is somewhat inspired by the zomg grad school blog carnival, but I was too busy to submit in time.

It's also inspired by the fact that everyone who submitted to that carnival was a natural scientist, which impels me to give the social science side of the equation.

First, a few notes on Irish PhDs versus the American grad school experience.
First off, there is very little funding: Ireland is in a depression at the moment, and never really put much money into the social sciences before that.
Secondly, what funding there is (in my area at least) tends to be awarded to the student rather than to a PI. Luckily enough, I did get funding (although it doesn't cover conferences or expenses, which sucks).

Also, there tend not to be many courses; you are essentially thrown into research, which I prefer but which many people would not find appealing. I sometimes wish that I'd had people to explain methods and stats in a lot of detail to me at the start, but then again, learning that stuff myself has been extremely rewarding.

So, without further ado, here are my top ten tips for surviving a PhD.

1) Do something you like - this is extremely important: if you don't like your thesis, it's unlikely that you will finish on time or that anyone else will care. Liking your PhD also makes it easier to write good grant applications.

2) Try to figure out what you want to do, in some detail, ASAP. This again is critical to finishing on time. Don't worry if your methods or approach changes, just figure out your key question and how you are going to assess it. Then draw up a schedule. You won't stick to it, but it can often be a spur to ensure that you keep working.

3) Work consistently. This was really difficult for me, as I was always a crammer in school and undergrad. However, this will not work for a PhD (if you want to finish on time at least) so get into the habit of doing some work at least 4 days a week. This is very important when you are, like me, an independent scholar without compatriots in a lab somewhere.

4) Read outside your discipline, especially for methods. Often, the methods in your field will be some amalgam of tradition, stupidity and lack of thought. Other disciplines can often point out the blind spots of your own.

5) Read, read, read. Spend at least six months reading before you start collecting data. Make sure you read around any instrument you plan on using. This can often give you a good idea of unanswered questions, which can help you get published (which is important if you want to stay in academia).

6) In total contrast to the last point, start collecting and analysing data ASAP. There's nothing like trying to figure out your own data to help you understand the methods you are using. If something doesn't make sense, Google it and read some papers. It's likely that someone else has had the same problems, and they may know how to solve them. If you can't collect data quickly for some reason, search the internet and start analysing other people's data for practice.

7) Use R - seriously, if you intend to do any kind of hardcore statistical analysis, use R. It's the best stats program out there, and is constantly having new packages added. It's made me a much better scientist, both by forcing me to learn exactly what I'm doing (to decipher the error messages) and by centralising all of the routines I need in one place. Most psychologists end up using SPSS, some IRT program, some SEM program and various other odds and ends. R does all of this, so just learn it now before you get left behind.

8) Take some time off. I've lost count of the number of times I've been stumped on a problem, taken a couple of hours or a day off, and had the solution come to me while I was relaxing. Creative thought and hard slog do not often co-occur, so make time for both.

9) Use as many useful computer tools as possible. Get a computerised reference manager, 'cos references are annoying. Get a good stats program (use R). Get a good qualitative analysis program (I'm using NVivo, but there's probably a good open source alternative). Learn LaTeX, lest you lose a whole chapter to the demons that infest Word.

10) Write, write, write. It's often easier to understand what the problems are once you try to explain yourself. Aim to write a few hundred words a day. Take notes on absolutely everything you read; this will save you time in the long run.

Finally, have fun! Doing research is supposed to be fun, and you can bet your ass that all the greats enjoyed their work. To paraphrase something I heard once: Doing a PhD is like living your life; if you're not enjoying it, neither the life nor the PhD will turn out to be any good.

Sunday, August 8, 2010

Placebos: All you never wanted to know (Part 3a) - Experimental Evidence

Well, here we are again, continuing our tour of research surrounding the knife in Descartes' eye: the placebo effect.

I sometimes wonder: if mind-body dualism hadn't become so popular, would we have learned to understand the placebo effect long before now?

In any case, Part 3 (Parts 1 and 2) of this series is going to look at experimental evidence surrounding the placebo effect. This section may end up getting broken into more parts, but we'll cross that bridge when we come to it.

An interesting study took place in the heady days of 2006 (Kaptchuk et al 2006) and involved some of the biggest names in the field. The study set out to examine the differential effects of two wholly placebo treatments: a sham acupuncture treatment and the traditional sugar pill. Now, if placebo effects were an illusion, one would not expect to see a difference between these two treatments, but that's not what was seen.

The study was an 8-week randomised controlled trial (sort of, given that there was no active ingredient), and the results showed that the two placebos were mostly the same, except that the pill group showed increased hand grip strength (which was measured objectively, for those of you who care about such things).

The next study is far more interesting, however. Again it's Kaptchuk et al, this time from 2008. Essentially it was a randomised controlled trial of sham acupuncture with three groups of patients suffering from Irritable Bowel Syndrome.
The first group was the no-treatment control: they came in, they took part, but they didn't receive any treatment.
The second group was the minimal contact group: these people received sham acupuncture delivered in a businesslike fashion, with the practitioner spending very little time with each patient.
The third group received enhanced acupuncture: the therapist came in and talked to them for about half an hour before putting in the needles.

At the six-week outcome point, 3% of group 1, 20% of group 2 and 37% of group 3 had significant improvement (all differences between the groups significant at the p<0.0001 level). This, to me, is pretty amazing. If a treatment which doesn't tend to do well in clinical trials (acupuncture) can be this effective when augmented by warm and friendly interaction, how much more effective would a well validated treatment be? It's perhaps sad that doctors are now so focused on diagnosing and dismissing that they are not making much of an effort with their relationship with their patients, and that this is having a measurable impact on their healing capacities.

Anyway, moving on to some fascinating work by Geers et al. In this study, student participants took a pill which purported to increase their anxiety and irritability. Geers added a number of interesting modifications. Again, there were three main groups in the study:
Group 1 - told that the drug is active and will increase your irritability
Group 2 - told that you may or may not get the active drug
Group 3 - told that you are getting a placebo
Geers also measured optimism levels at baseline.

The major findings were as follows. Participants in the deceptive administration group tended to show more of an effect than those in groups 2 or 3. However, optimism mediated these results: those high in optimism tended not to respond to the nocebo suggestion, while those high in pessimism tended to respond much more. These interesting findings may explain why optimists tend to have better health outcomes than pessimists.

So, the take-home message is this: be optimistic about your medical treatments, it just might save your life. I hope to get another post on this experimental evidence section up for all of you sometime in the next few days.


Kaptchuk, T. (2006). Sham device v inert pill: randomised controlled trial of two placebo treatments BMJ, 332 (7538), 391-397 DOI: 10.1136/bmj.38726.603310.55

Kaptchuk, T., Kelley, J., Conboy, L., Davis, R., Kerr, C., Jacobson, E., Kirsch, I., Schyner, R., Nam, B., Nguyen, L., Park, M., Rivers, A., McManus, C., Kokkotou, E., Drossman, D., Goldman, P., & Lembo, A. (2008). Components of placebo effect: randomised controlled trial in patients with irritable bowel syndrome BMJ, 336 (7651), 999-1003 DOI: 10.1136/bmj.39524.439618.25

Geers, A., Helfer, S., Kosbab, K., Weiland, P., & Landry, S. (2005). Reconsidering the role of personality in placebo effects: Dispositional optimism, situational expectations, and the placebo response. Journal of Psychosomatic Research, 58 (2), 121-127. DOI: 10.1016/j.jpsychores.2004.08.011

Tuesday, August 3, 2010

Placebos: All you never wanted to know (Part 2) - Theories

Welcome back to this, the second post in my review of placebos. Now, to the fodder that makes us scientists: theories about the placebo. This is important, as a good theory can both account for our data and predict new data, while also giving us something to falsify, which is apparently how science progresses.


Steve Stewart-Williams (all-round good guy) reviewed the state of placebo theorising in 2004, and he specified a number of criteria that a successful placebo theory should meet.

1) Account for both objective and subjective placebo effects (that whole obj vs subj dichotomy is very unsatisfying to me, but that's another post)
2) Should also account for nocebo effects (negative placebo effects)
3) Should account for dose-response effects of placebos (take more fake pills, get a bigger effect)
4) Should account for differences between different forms of placebo treatments
5) Should account for stronger effects on subjective than objective measures
6) Must account for active placebos producing stronger effects (active placebos are real drugs prescribed either for conditions for which they have no specificity, or in doses too small to have a biological effect)
7) Must explain placebo effects in healthy people in non clinical settings
8) Must account for the general and local effects observed in placebo analgesia



That's a tall order, and none of the theories we have can account for all of these items. That may be because the placebo is ill-defined, or down to a lack of awareness of key features of the effect (a problem in many young sciences and fields of study).

Drumroll please......
Theory the First: Conditioning. This draws on work by behaviourists and physiologists. Basically, the idea is that when given an active drug, we learn to associate the physiological response with the treatment, and a fake treatment can then activate the same response in our bodies.
The trouble with this theory is that it cannot account for placebo effects from new drugs or substances that a person has not experienced before.
That being said, conditioning by means of an active drug or experimental manipulation is probably the most reliable way of inducing a placebo response in an experimental setting.
It's also worth noting that conditioning can occur to the other stimuli present in the environment at the time of the active treatment (like a white coat, hospital or doctor), so figuring out what we have conditioned someone to can be quite difficult.

Theory the Second: The current 800-pound gorilla in the ring, expectancy theory (developed by the inestimable Irving Kirsch), suggests that we have certain cognitions relating to particular treatments, and that these can activate physiological responses which create placebo effects. Expectancy theory is very good at explaining placebo effects, but it raises the question of how to explain "expectancies". To be precise, Kirsch talks about response expectancies, which he defines as expectations of non-volitional responses (Kirsch 1985). However, given that much of what Kirsch calls non-volitional can be manipulated by hypnosis and meditation, this creates problems for the definition (but that seems to be quite common in this field). In fairness to Kirsch, he knows this, as he has worked with hypnosis for many years; but it's a point that gets missed by many people.

Theory the Third: Michael Hyland's motivational concordance is the new kid on the block, having been put forward recently. This theory suggests that placebo effects are generated by a ritual which fits the belief system or goals of the person concerned. The theory is basically a synthesis of the expectancy and conditioning theories, drawing upon both cognitive and behavioural features for its explanation. I do like this theory's focus on action, as this seems to be the essential part of placebo responses.

Theory the Fourth: Allan and Siegel proposed a signal detection model of the placebo effect some years ago, which is elegant in its simplicity. They suggest that placebos alter response thresholds, encouraging false positive signals on the part of patients. In other words, there is no placebo effect, save that people alter their way of responding because they feel they should be better. Unfortunately, this theory does not fit the data on release of opioids at specific sites to create analgesia, so it must be discarded.

Theory the Fifth: Motivation. This theory is synonymous with Andrew L. Geers, who has examined it in many different contexts in his own work. He suggests that placebo effects are mediated by the desire to reach a particular goal, and demonstrates that priming techniques can enhance placebo effects. This theory is useful, as it can account for clinical placebo effects being larger than those typically seen in the lab. However, the problem here is that one would expect motivation to be high in patients suffering from chronic pain, yet they typically do not respond well to placebo treatment. It could be that negative expectancies and conditioning built up from previous experience inhibit the response, but that's just a hypothesis.

So, what can we make of these theories?
Expectancy definitely seems to exert a large influence. Motivation is also quite important, and conditioning probably aids the retention of placebo effects over time. I personally like the motivational concordance theory quite a lot, as it seems to suggest why so many people report good results from alternative medicines when the RCTs suggest (but only suggest) that these are indistinguishable from placebos. The optimal path (for now) seems to be to include measures for all the various effects in large-scale studies, to examine which of them contribute the most to healing.

P.S. Some anthropologists propose an embodied cognition approach to placebo, but I currently don't know enough about it to talk about it here.

Next on this series, Interesting Experimental Placebo evidence (now with added optimism!).


Stewart-Williams, S. (2004). The Placebo Puzzle: Putting Together the Pieces. Health Psychology, 23 (2), 198-206 DOI: 10.1037/0278-6133.23.2.198

Thursday, July 29, 2010

Placebo response without placebos

Often, I hear that the placebo response is an artifact, merely a control for the "real" treatment. Today, I'd like to blog about a paper which suggests that every treatment is partially placebo. The paper is Benedetti et al (2003), and it is probably one of the most interesting papers I have read.

Essentially, the study looked at whether or not the awareness of treatment had any impact on the response to real drugs. To do this, they used (mostly) post-operative patients and looked at pain & anxiety.


Each treatment was given in two conditions: open and hidden. In the open condition, patients were given a drug by a doctor who told them what they were getting. In the hidden condition, the drug was given without the knowledge of the patients.

The study also looked at open and hidden interruptions in treatment, and the results were essentially the same (i.e. pain/anxiety levels were higher after the open interruptions). 

The results were clear for pain and anxiety. Open infusions were much more effective than hidden ones, with pain decreasing much more in the open condition than in the hidden one. The drug given for anxiety was diazepam (Valium), and this drug was COMPLETELY ineffective in reducing anxiety in the hidden condition. One could take these results to mean that Valium is a placebo, and only works because people believe it will. Could the cultural lore that has developed around Valium be the only reason it's effective? Shocking stuff, and food for thought the next time someone argues that placebos are "just" controls or have "no clinical significance".

Now, it's worth being aware of a few caveats to this study. Firstly, the open condition was actually measuring the combined effect of the presence of a doctor and the awareness of treatment. This could be gotten around by using a prerecorded voice telling participants that they were about to get medication. Unfortunately, no one appears to have done this study yet, but it's an interesting question nonetheless.


Benedetti, F., Maggi, G., Lopiano, L., Lanotte, M., Rainero, I., Vighetti, S., & Pollo, A. (2003). Open versus hidden medical treatments: The patient's knowledge about a therapy affects the therapy outcome. Prevention & Treatment, 6 (1) DOI: 10.1037/1522-3736.6.0001a

Tuesday, July 27, 2010

No (living) man is an island

Wow, just wow.

I just finished reading a new paper published in PLoS Medicine (Holt-Lunstad et al 2010) on the association between social support and health.

The take-home message: social support can increase your odds of living longer by 50%. That's, frankly, amazing. As the authors themselves note, this is an increase in longevity comparable to that gained by quitting smoking (something that I really need to do).

They examined 180 studies involving large samples (mostly community based). They had four coders assessing the quality of the studies (which is pretty impressive; one rarely sees more than two), and they estimated that it would take over 4000 studies with null results to reduce their findings to a clinically insignificant level.

Normally, I would start talking here about some of the flaws I saw in the study. However, from my perspective, there are none. It's a wonderful study, and you all should read it (especially since access is totally free).

The question that we need to ask now is: how are these effects mediated? Is it talking to friends, the idea that people care about you, or what? I foresee huge interest in this paper from sociologists, anthropologists, psychologists, doctors and everyone who is the least bit interested in health.

If you're still here, go call a friend - it might save your life.

Holt-Lunstad, J., Smith, T. B., & Layton, J. B. (2010). Social Relationships and Mortality Risk: A Meta-analytic Review. PLoS Medicine. DOI: 10.1371/journal.pmed.1000316

Woo reconsidered!?

Today, a very interesting paper was revealed to me, by the magic and mystery that is Google Reader.

Now, as some of you may know, I am currently researching the placebo. As part of this, I've read a lot about alternative medicine and interviewed some of its practitioners. This has all been very interesting, but until quite recently I wasn't aware of any high quality studies which suggested that there are measurable effects (apart from placebo). It appears that this may be changing. Lutgendorf et al, writing in Brain, Behavior, and Immunity, suggest that Healing Touch may contribute to improved immune function in women with cervical cancer.

I read this paper quite closely, so here's the deal.
It was a randomised controlled trial, which had three groups.
The first group was the Healing Touch group, the second was a relaxation group, and the third was usual care. The study was not blind, given that it's difficult to conceal treatment allocation for psychosocial interventions. This may (or may not) be a fatal flaw, depending on your way of thinking.

Now, there were a number of outcomes and covariates: immune function, depression, anxiety, you know, all the good stuff. The major finding of the study was that the patients in the Healing Touch group maintained NK cell activity throughout the course of chemotherapy, while the other two groups showed declines. Pretty crazy, eh? Maybe faith healing does work after all....

It's interesting that the authors actually considered the biofield hypothesis, albeit while seeming to prefer others.

Now, I have a few caveats about the study, coming from my perspective as a placebo researcher.

1) The HT was given by nurses, while the relaxation technique was facilitated by graduate students. It's quite possible that the patients attributed more credibility to the nurses than to the grad students (a pain I know all too well....).
2) The second issue is that the nurses were licensed practitioners of HT, and as such may have been far more enthusiastic about the treatment, which can definitely exert influences on healing.
3) The authors note that they measured expectancies at baseline and after treatment, and that these did not contribute to outcomes. This is very weird, given that there is a lot of literature suggesting that the perceived reality of a treatment may be an important predictor of outcome, for acupuncture at least (Bausell et al 2005; Linde et al 2007).
4) The impact of touch: there was no touch in the relaxation group while there was in the Healing Touch group (obvious, but still important). It seems plausible (warning, speculation ahead) that the touch of others can contribute significantly to a placebo response. For my money, I would have preferred a comparison of a real HT group run by professionals versus a sham HT group run by naive people taught the movements, but not the energy manipulations regarded as important by practitioners. Alternatively, use practitioners with different levels of training, to examine whether there is a specific HT effect rather than a placebo/expectancy effect.

All of that being said, it's an extremely interesting study, which builds on a recent meta-analysis of MBSR in cancer suggesting that psychosocial factors may have measurable impacts on physical (d=.2) and large impacts on psychological (d=1) measures of well-being. Interesting stuff.

Funnily enough, there was a Yale professor, Harold Saxton Burr, who claimed that electromagnetic fields were a prime mover in health and disease. He was mostly ignored, as were his students. I find it quite sad that such an obvious explanation for biofields (if they do exist) is ignored, given the potential rewards from this kind of research. Then again, I'm not a biologist or a physicist, so I might be horribly confused here. It does, however, remind me of the case of Wilhelm Reich, a student of Freud's who claimed to have discovered a universal energy. I suppose at least Burr wasn't thrown into prison, which is progress (of a sort).

I do also note, however, a recent paper in Medical Hypotheses (I know, I know...) by Irmak, in which he argues that Merkel cells are specially adapted for electromagnetic perception and hypothesises that these are responsible for the effects of Reiki and other healing touch modalities. It's all very strange, but it makes you think (or at least it makes me think).

That being said, I love a controversial theory, so your mileage may vary.

Coming up next: theories about the placebo (unless I get distracted again)

Lutgendorf, S. K., Mullen-Houser, E., Russell, D., DeGeest, K., Jacobson, G., Hart, L., Bender, D., Anderson, B., Buekers, T. E., et al. (2010). Preservation of immune function in cervical cancer patients during chemoradiation using a novel integrative approach. Brain, Behavior, and Immunity. DOI: 10.1016/j.bbi.2010.06.014

Thursday, July 22, 2010

Placebo commentary

Taking a little break from my placebo rundown, as I found these old blog posts on placebos, both inspired by a Wired article which notes the problem of decreasing drug-placebo differences in recent clinical trials.

The link to White Coat Underground is here, and Greg Laden's rather more thoughtful piece is here.

The first point I would like to make about PalMD's article is that he seems to be unaware of all the experimental work done on placebos, focusing instead on the clinical trial use of placebos.

Recent work has shown that the nature of clinical trials actually decreases the placebo effect, for reasons explained by expectancy theory. It goes like this: in a trial, participants are told that they may get placebo or they may get the drug. This is a conditional expectancy. In experimental research, they are told that the treatment is a powerful painkiller (an unconditional expectancy). UCEs have been shown to produce much better results: fewer painkillers required following surgery, stronger coffee effects, and effects even on measured sleep patterns.

So, what does this tell us? To me, it seems to indicate that placebo effects are underestimated in clinical trials, and if they appear to be getting stronger there then something weird is going on. I actually agree with Greg Laden's idea that it may be down to stronger cultural associations between pills and healing, which create stronger expectancies that then interfere with the testing of new drugs. This could be tested properly by a meta-analysis comparing drug advertising and placebo response across countries, and hopefully I'll get a chance to do this review after I finish my doctorate.

Something else that struck me, as I read through the comments, was that they were recapitulating the history of placebo studies. We had people claiming that placebos were useless, only a control and an artifact, while others claimed that modern medicine was a lie and only the mind had power (the burt dude at the end of Greg Laden's comments was hilarious, especially the way he made so many people angry).

Anyway, a few issues came up, and I thought I should post some more recent research which can hopefully illuminate the debate.

Firstly, the meta-analysis by Hróbjartsson and Gøtzsche (god those names are hard to spell). Now, this meta-analysis claimed that placebo was not significantly different from no treatment across 114 clinical trials spanning a wide range of conditions.

Personally, I can't see how they were able to do this meta-analysis, considering the extreme heterogeneity of the database (which is supposed to mean that you avoid meta-analysis altogether). Anyway, this review drove a lot of placebo researchers into studying pain, as it was the only area the review didn't slate.

Now, more recently, another meta-analysis was done by Meissner, Distel and colleagues. This meta-analysis separated the trials they found into different outcome types, and found that there was a large placebo effect (d=.5 approx) when the outcome variable was a physical parameter, but none when the outcome variable was a biochemical one such as a hormone level. They re-analysed H&G's sample and replicated their results. For some reason, this study didn't get nearly as much attention as the negative one.

So, essentially, we can see placebo effects in some areas but not in others, which makes a lot of sense, if you study the literature. An alternative explanation is that the clinical trials only activated the expectancy pathways which affect these outcomes, while the conditioning pathways were not activated. This makes sense if you look at some of the work by Benedetti et al.

Another canard raised in the comments was the notion of response bias, popularised by Allan and Siegel in their signal detection theory of the placebo effect. While I like SDT, I don't think it can account for all of the observed placebo effects. Referring to the surgery paper above, what the results suggest is that most of the variability in response to opioids is the result of placebo effects, which are (mostly) the result of the endogenous opioid system. Response bias cannot account for these well documented effects.

Anyway, that's probably enough for now. Hopefully I'll get the second part of the placebo review done today; if not, my cousin's wedding will intervene and it'll be next Monday or Tuesday.

Tuesday, July 20, 2010

Placebos: All you never wanted to know (Part 1)

Well, it's that time of the week again when I can't put off blogging any longer. I have a terrible habit of putting off blogging (which I enjoy) to ensure that I actually complete my PhD. Therefore, I've decided to start blogging about my actual research.

To wit, everybody's favourite sugar pill: the placebo!

This will be a relatively long series, with about seven parts. Essentially, I'm updating my literature review this week, so I'll blog about each section as I do it (perhaps before, if I get really into this series).

Anyway, we'll start with the hard part: definitions. The placebo is something that most people in our society have an idea about, but it's a surprisingly difficult phenomenon to define. That being said, almost everyone in the field has tried their hand at it, so there's a lot to choose from.

The first, classic definition is from Shapiro & Shapiro (1997): the placebo effect is the result of a placebo treatment.
Pretty illuminating, eh? The sad part is that this definition was the end of their long and ultimately fruitless search for a good way of describing the phenomenon.

That being said, it has its good points. Firstly, it can account for all placebo effects, it doesn't presuppose any mechanisms, and it doesn't limit the phenomenon unduly.

However, its bad points are legion too, the largest being that it's a tautology, and not in the universal-truth sense.

Probably the definition most people are familiar with is this one: the placebo effect is the effect seen in the placebo arm of a double-blind trial. However, this one also has large problems, the major issue being that not all of the response in a placebo arm will be down to the placebo.

One thing that can happen to mess up this definition is a funny little phenomenon called regression to the mean. Regression to the mean is a statistical phenomenon that works as follows: there are sick people, whom you select for a trial on the basis of their sickness; say, if sickness is measured on a ten-point scale, they score a seven. Now, even if the treatment you give them is harmful, it is likely that some of them will report less sickness after a week, because the next measurement is more likely to be closer to the mean. I'm relatively sure that this could be eliminated with a perfectly reliable instrument, but we don't have any of those (certainly not in psychology).

Warning: the previous example assumes a normal distribution. If in doubt, consult a friendly statistician (if you can find one). Update: apparently it only requires a distribution with equal marginal probabilities - I do remember seeing an explanation that used the normal distribution though.
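If you'd like to watch regression to the mean happen, here's a toy simulation in R (normal distributions assumed, as per the warning above):

set.seed(1)
true_score <- rnorm(10000, mean = 5, sd = 1)    # stable "sickness" on a ten-point scale
measure1 <- true_score + rnorm(10000, sd = 1)   # noisy measurement at recruitment
measure2 <- true_score + rnorm(10000, sd = 1)   # equally noisy follow-up, no treatment given
sick <- measure1 >= 7                           # recruit only those who look sick
mean(measure1[sick])   # about 7.6
mean(measure2[sick])   # about 6.3 - closer to the mean, with no treatment at all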

Another feature that can cause issues in estimating the placebo effect is the natural history of a sickness. The major problem here is that people's health may wax and wane, and again if you select a person for inclusion on the basis of sickness, the natural history effect could cause them to report feeling better even in the absence of any real effect from your treatment.

So, if you actually want to estimate the placebo effect accurately, you need a no-treatment group. These poor suckers are recruited into the trial on the basis of sickness, and then don't get anything to help, except to be poked and prodded by doctors and nurses. Many clinical trials don't include these groups, and it's easy to see why. Bad enough that you have to give half the participants placebo, but to give another group of people nothing? That's way too harsh. (We'll get back to clinical trials with no-treatment groups later, I promise.)

So, following on from this long and rambling excursion into clinical trials, we can update our definition of the placebo effect as follows: the placebo effect is the improvement seen in the placebo arm less the improvement in the no-treatment arm.
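In made-up numbers (a toy calculation, nothing to do with any real trial):

    improvement_placebo      <- 3.0   # mean improvement in the placebo arm
    improvement_no_treatment <- 1.8   # regression to the mean, natural history, etc.
    improvement_placebo - improvement_no_treatment   # placebo effect of 1.2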

So, this is the workhorse of placebo definitions, but it still won't do. This definition requires a particular setting, and many placebo effects take place outside it. For example, the response shown by a patient to the archetypal sugar pill after a visit to the doctor cannot be accounted for by this particular definition. So, we'll have to move on.

A more recent definition came from Price et al. (2008), who claimed that the placebo effect is any effect which simulates a treatment.

A fascinating recent study by Oken et al. gives us a good test case. Essentially, it was an RCT which randomised seniors (aged 65-80) to either placebo or no treatment. The placebo group were told that the pill would improve their memory, and lo and behold, it did: they scored better on measures of verbal and working memory (interestingly enough, only the men showed this effect).

This is a problem for definitions of the placebo which rely on the notion of treatment. I can't really see how the effects of this pill could be considered a treatment: it acted as a neuro-enhancer rather than something to stave off decline. So, it looks like we may have to consign the Price et al. definition to the fire.

A definition which can account for the experiment noted above is that of Daniel Moerman, an anthropologist: the placebo effect is the positive mental or physical effects induced by the meaning of a substance or procedure. He prefers to call it the meaning response, which is a much nicer phrase than placebo (or at least has fewer negative associations).

I really like Moerman's definition (and his book is really very good, even if you're not a specialist). However, there are some weasel words in there, the main culprit being "meaning".
So, boys and girls, what does meaning mean?

Presumably it refers to the interpretation one gives to something, but it's a hard word to define and, even worse, a horrible word to attempt to operationalise (i.e. figure out how to measure it). Although, that being said, I suppose we could just substitute expectancy for meaning and get on with our research.

That, dear readers, would probably be letting you off a little lightly, though. So, let's move on to another definition, this one by a wonderful scientist and human being, Dr Zelda Di Blasi (2001). She and her colleagues renamed the placebo (everyone loves doing this) context effects (which, again, is nice and doesn't have negative associations) and said: a placebo is an inert substance which has an effect due to context.

This is nice: it again leaves the mechanisms open and, wonderfully enough, doesn't preclude non-health-related placebos. However, context (to me at least) means what surrounds the patient, and this ignores the fact that focusing on bodily sensations increases the size of the placebo effect.

The other issue with this definition is that it somewhat marginalises the role of the person who experiences the placebo effect, as it implies that all the impetus comes from outside, when clearly the internal experience is perhaps the defining characteristic.

Moving on, I think the term placebo is growing more and more useless. These days, it's used by many people and seems (in psychology at least) to be a convenient shorthand for the effects of the mind on the body. My first exhibit for this kind of thing is the 2007 paper by Crum and Langer (if you're Irish you probably giggled at that last name; otherwise, carry on), which is called Mind-Set Matters: Exercise and the Placebo Effect.

The study itself is really interesting: they took a large group of hotels, matched them, and randomised the hotels to either control or treatment. In the treatment hotels, they told the cleaning staff how many calories they burned in the course of their work. In the control hotels, they just talked to the staff for a while and got them to fill out some forms.

The really interesting part was that the women (I believe the entire sample was female) who were told about their calorie-burning habits lost more weight over the next month, and were both healthier and happier by the end of the study. I suppose the take-home message from this study is that you should learn how many calories you burn in your daily activities if you want to lose weight.

However, my point here is that using the term placebo effect for this sort of thing is confusing and causes problems for our understanding of the concept. I personally would much prefer a placebo effect that related only to healthcare and medicine, with mind/body effects or expectancy effects reserved for the Oken and Crum studies I noted above.

To be honest though, I'm not going to lose too much sleep over the definition of the effect. Having read some of the Shapiro papers where they grapple with the construct over the years, I've come to the conclusion that it's a waste of effort and time that could be better spent figuring out how to induce the damn thing (whatever we call it) reliably.

Tuesday, July 13, 2010

Science, Religion and Evidence

I read a lot of blogs, especially the ones over at Science Blogs. I'm also quite lazy, so I merely subscribed to the three channels I was most interested in (Brain and Behaviour; Humanities and Social Sciences; and Medicine and Health).

Now, many of the science bloggers seem to be quite virulently sciency, in that they appear to regard the mere existence of religion as a personal affront. Personally, I don't follow any religion (raised Catholic, abandoned it after reading up on Church history at around 12), but I am very interested in the experiences recorded throughout time by mystics, monks and saints.

I personally reckon that there may be some truth in all this religion stuff, at least the idea that humans can experience the numinous and/or sacred by working at particular practices. It is a fact that religiosity is associated with better health, and that forgiveness, gratitude and compassion appear to have substantial health benefits.

Now, finally, we reach the meat of the post. Recently, a science blogger wrote an article entitled "Does theology progress?". His major point appears to be that science jettisons theories the moment they contradict the evidence (well, mostly, but that's a whole other post), while religions do not typically do this. While I would agree that many religious believers do not do this, I would suggest that the impetus of religion and spirituality is to keep searching until whatever it is that humans look for has been found.

Another issue that illuminates the science/religion divide is this: science offers descriptions, while religions seem to offer interpretations. Put another way, science deals with information while religion deals with meaning. Now, I would argue that the social sciences, properly done, can investigate particular sets of meanings, but I am doubtful that we will ever be able to reduce them to information or discover mathematical laws that govern their experience (then again, I could well be wrong on that).

I suppose my major point here is that while religion, defined as the book or practices on which particular faiths are founded, may not progress, the people reading the book certainly do, and this is what causes such wildly different interpretations of the same book and teachings. Of course, many people who profess to believe do not follow the teachings exactly (or even at all - how you can be a Christian and refuse benefits for the long-term unemployed is beyond me). But the point is that the interpretations and meanings given to a particular scripture are not inherent in the text; rather, they emerge from the interaction of text and reader, and as such, the idea that religion is stale and unchanging seems to me to be absurd.

Woah, went a bit post-modern there. I think I should set out my stall somewhat more clearly, though. I believe (this is my faith) that what people call God is an experience which we all have the potential to achieve through diligent work. I believe that this process is entirely amenable to study by well-conducted science. I do not believe that the experience itself can be reduced to neural firings, but again, I could be wrong there. The remembrance of it certainly is related to particular patterns of neural activity, though.

Here's the kind of research agenda I would like to see:

1) Large, globally diverse sample
2) Longitudinal design
3) Measurement of practice, mood and other personality variables daily
4) Measurement of physiological data, both by self-administration (BP, HRV, etc.) and by clinicians on a weekly or monthly basis
5) Examination of different cultural beliefs and their relationship to the outcomes of practice.

This study would probably need to continue for 5 years minimum, to give us a decent chance of observing one of these experiences in controlled conditions.

Now, the funny thing is that the groundwork for this study has been done. The use of Mindfulness-Based (insert problem here) therapies has become very popular in the last few years, and the people going through them are exactly the kind we should follow. They tend to come from different walks of life and cultures, and they have already been trained in meditation with minimal preconceptions.

Of course, we'd need to examine the different kinds of meditative practices, as they may well have differential effects. Again, some of the groundwork has been done on this but the long term focus is lacking.

That's where I stand on this whole thing, anyway.

Also, I regard evolution (the arguments about it, not the FACT of it) as a distraction from this grand project; in many ways it's more important to understand where we are going than where we came from. Of course, the two are not mutually exclusive either.

Wednesday, July 7, 2010

Placebos and Power

My research is very much focused on placebos. Therefore, I'm at least tangentially interested in homeopathy and its use. Recently, the BMA came out against homeopathy (and, by extension, the placebo). This has been picked up by a BMJ blogger and the Guardian.

Now, this is obviously a subject of great interest to me, regardless of the efficacy or otherwise of homeopathy. The placebo has been demonstrated to be very effective in relieving pain, depression and ulcers. The issue then becomes: if one is aware that placebo can help, what are the grounds for denying this effective treatment to a patient?

Many doctors would argue that the use of placebos has the potential to diminish trust, and research has shown that this trust can be a powerful healing force (the therapeutic alliance, as it were).

However, in this case, they should probably not have come out against homeopathy, and indeed should probably encourage people to try alternative medicines more generally. My reasoning for this is as follows:
a) Doctors do not wish to prescribe placebos due to the deception
b) Even if they did, their knowledge that they were doing so would probably reduce the efficacy of the placebo
c) Homeopathy is a placebo (just assume this is true for the moment)
d) Homeopathists believe in their treatments.

Therefore, it's a win-win for doctors to encourage (privately, of course) patients to see homeopathists. The patients will gain some benefit, the homeopathist will be able to give them more time and attention (which is critical for the placebo effect), and the doctors need not engage in any unethical behaviour.

It's simple really, but I'm not really surprised that the BMA didn't go for it. The placebo and this kind of stuff pushes against most of what doctors believe, and it's very difficult to go against one's beliefs, even for the best of causes.

Friday, June 25, 2010

Error messages and their value

I use R for most of my statistical work these days. Why, you ask? Well, it's open source, free, and has the most comprehensive set of add-on packages I have ever seen.

It's also sometimes incomprehensible and annoying. Take, for instance, an error message I got a while back: Error in cov.wt(z) : 'x' must contain finite values only

I was attempting to do a factor analysis, and the above popped up. Naturally I was a little confused, as I hadn't allowed for participants in my surveys to respond "infinite". However, upon Googling, I discovered that factor analysis (along with most statistical methods) is highly sensitive to missing values, and that R's cov.wt() refuses anything non-finite, which includes missing values (NA). I'd have preferred an error message that actually said "missing", but I didn't write R.
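Here's a minimal sketch of the problem and the simplest fix, with made-up data (I'm using factanal() for the factor analysis here, which calls cov.wt() under the hood):

    set.seed(42)
    dat <- data.frame(matrix(rnorm(600), ncol = 6))   # 100 fake respondents, 6 items
    dat[3, 2] <- NA                                   # a single missing answer is enough

    # factanal(dat, factors = 2)   # Error in cov.wt(z) : 'x' must contain finite values only

    factanal(na.omit(dat), factors = 2)   # drop incomplete rows first and it runs

Dropping rows wholesale (listwise deletion) is the bluntest instrument available; with a lot of missing data you'd want to look at imputation instead.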

The same thing happened to me today when working out some correlations: I kept getting NA as the result. Given that I knew these correlations should exist in some form, I was confused. However, the problem was again missing values, and when I passed use = "pairwise.complete.obs" to cor() I got some sensible results.
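In case anyone else hits the same wall, a toy illustration (made-up vectors):

    x <- c(1, 2, 3, 4, NA)
    y <- c(2, 4, 5, 8, 10)

    cor(x, y)                                  # NA: the default propagates missing values
    cor(x, y, use = "pairwise.complete.obs")   # computed from the four complete pairs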

The point (insofar as I have one) is that I had been using SPSS for many years and never really copped on to what an issue missing values are. The convenience of the GUI was preventing me from learning about the methods I was using.

And that, ladies and gentlemen, is one of the many reasons why I will continue to use R. (Don't worry, I'll go into excruciating detail about its benefits another time.)

Monday, June 21, 2010

On publishing and journals

So, I'm currently writing my first paper for publication. Woo hoo, and what not.

Therefore, I've started to pay attention to things like impact factors. Impact factors, for those of you who don't know, are numbers that reflect how often the average paper from a journal has been cited over a given window (five years, in the version I've been looking at). Think of it as a journal's reputation, if you will.
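In back-of-the-envelope form (entirely made-up numbers, and glossing over exactly which years count):

    citations_to_window <- 1500   # citations this year to the journal's last five years of papers
    papers_in_window    <- 400    # papers the journal published in those five years
    citations_to_window / papers_in_window   # impact factor of 3.75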

Many people claim that the bigger the impact factor, the better the journal. This came up most recently in the Chronicle of Higher Education, where a number of people moaned about the amount of research that goes uncited. Of course, that doesn't control for the number of people in a field or the "sexiness" of a topic, so it's obviously not the whole story.

Now, the other major factor (for me at least) is the time taken to review. Psychology apparently has long, long review times for journal articles. I've seen many papers that show a two-year lag between submission and print publication. Most journals operate a pre-print service these days, which means that one might wait only a year before others see your work.

So, when choosing a journal, I find myself making a trade-off. Should I go for a lower impact journal that reviews quickly, or a higher impact journal that will take longer, but make my research more visible?

Another point to remember is that you can't submit to multiple journals at once, so the reviewing time is an opportunity cost for the researcher. This is of particular relevance for students like myself, who need to get papers published quickly in order to show them on a CV and thus get a job (and the opportunity to do more research).

I'm still not decided on which route to go, and time is running out. The deadline is somewhat external and somewhat self-imposed: my funders want a report in ten days, and I'd like to be able to claim that a paper is under review by that time. If anyone is reading and has advice, it would be greatly appreciated.

Thursday, June 17, 2010

On the absurdity of marking schemes

So, I'm a PhD student somewhere in the South of Ireland.
Recently, I taught my very first class, which was nice.

Even more recently, I had to mark all the scripts, which was not so nice.

You see, in my university, psychology (which I have been assured is, in fact, a science) is examined like a liberal arts degree, i.e. with essay questions.

All well and good, you say. However, the marking scheme - which is handed down from on high - is crazy. And not in a good, funny, sort-of-entertaining way, but in the hair-pulling, chair-destroying, data-falsifying kind of way.

Here's a breakdown of how the marks work:
A (or First): 70-100%
B (or 2.1): 60-69%
C (or 2.2): 50-59%
D (or 3.1): 45-49%
E (or pass): 40-44%
F (or fail): 0-39%

Now, I'm sure that many of you can spot the issues here, but I'll illustrate anyway. The A covers 30% of the scale and is subdivided into three (as are the other grades). However, the A sub-grades are separated by 10% each (75, 85, 95), while the E sub-grades are separated by only 1% each.

So essentially, what the marking scheme dictates is that there is ten times more difference between the A grades than between the E grades. It's absurd, and yet it occurs everywhere on this emerald isle (and also in the UK, but don't quote me on that).
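Just to hammer the point home, here's the scale in R (the band boundaries are from the scheme above; the widths are my arithmetic):

    bands <- data.frame(
      grade = c("A", "B", "C", "D", "E", "F"),
      lower = c(70, 60, 50, 45, 40, 0),
      upper = c(100, 69, 59, 49, 44, 39)
    )
    bands$width <- bands$upper - bands$lower + 1
    bands   # A spans 31 marks, E spans 5 - yet both get three sub-grades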

The worst part, for me at least, is that an A is rare - very rare, in fact - and most of the marks are squashed into the 55-69% range, which gives students a very misleading idea of their relative standing within and between classes.

There's a sizeable majority of the scale (the A and F bands) that is used perhaps 3-5% of the time, and everyone else just gets pushed into the dank and unwholesome middle. Personally, I'd prefer it if the scale were divided into 15 points per grade and if As were a real possibility, rather than a carrot used to urge undergraduates into insane amounts of study for very little reward.

Unfortunately, it's not up to me, but rather up to the NUI, and I hardly think they'll change it because of this blog post. In the unlikely event that they do, I would of course accept recompense from any grateful students or teachers.