Stat Counter

Friday, December 24, 2010

Placebos - now with added ethics!

Placebo effects can be induced in patients without deception, according to a new study in PLoS ONE. The study was open label for the patients (they knew they were getting a placebo) but single blind for the investigators (who did not know which treatment participants had been given), and it was a controlled trial in 80 people suffering from Irritable Bowel Syndrome.

The patients were followed for one month, with baseline, midpoint and endpoint assessments. According to validated self-report measures for the syndrome, the patients who were given the pill improved much more than the patients who received no treatment. It's important to note that the groups were randomised, and they were also matched in the amount of interaction they had with the medical providers.

An interesting point in the study (which doesn't appear to have been picked up by other science bloggers) is that some of the patients (N=17) also received counterbalanced provider interaction - they saw a male doctor once, and also a female nurse. Contrary to the idea that the authority of the provider shapes the response to treatment, there were no differences in outcome which could be attributed to this difference. Given the small number of patients in this group, that's not very surprising. I really wish this trial had gone the whole hog and randomised everyone to see both practitioners, as that might have provided some very useful data.

Open-label placebo produced significantly higher mean (±SD) global improvement scores (IBS-GIS) at both 11-day midpoint (5.2±1.0 vs. 4.0±1.1, p<.001) and at 21-day endpoint (5.0±1.5 vs. 3.9±1.3, p = .002). Significant results were also observed at both time points for reduced symptom severity (IBS-SSS, p = .008 and p = .03) and adequate relief (IBS-AR, p = .02 and p = .03); and a trend favoring open-label placebo was observed for quality of life (IBS-QoL) at the 21-day endpoint (p = .08).

The above is from the findings section of the abstract, but it cogently sums up the major results of the study. Now, there are a number of important caveats to the straightforward interpretation of the study. There are also a number of interesting implications arising from both the study and the reactions of some of the better known science bloggers.

My first issue with this study is the number of statistical analyses which were carried out. With only 80 participants and at least ten significance tests reported in the article (and probably more which were not reported), the authors probably should have corrected for multiple tests (the most popular approach is to divide the required p value by the number of tests). That being said, this was a pilot study, so the results will require replication in a larger trial, which would ideally have a protocol with details of the planned analyses published beforehand (what can I say, I'm an optimist).
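To make that concrete, here is a minimal sketch in R of a Bonferroni-style correction applied to the p values quoted in the abstract above (the figure of ten tests is my reading of the article, not something the authors report):

pvals <- c(.001, .002, .008, .03, .02, .03, .08)  # the p values quoted in the abstract
p.adjust(pvals, method = "bonferroni")            # scales each p up by the number of tests supplied
0.05 / 10                                         # or equivalently, shrink the alpha threshold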

Orac (of Respectful Insolence fame) critiqued this study on a number of grounds. The first was (he claimed) a failure of randomisation. He based this on the numbers with each type of IBS (diarrhea- or constipation-predominant) and argued that this could be responsible for the observed improvement. While I do take his point, I would suggest to my readers (all three of you) to look at the table itself as published in the article. It can indeed be seen from the table that the groups did not appear matched on type of IBS.


However, if you look a little more closely at the table, it can be seen that the open label group had a longer mean duration of IBS, a higher initial mean symptom severity score, and a lower initial quality of life. If anything, had there been no change over the course of the trial, the no treatment group should have come out superior. Given that only stable IBS patients were admitted into the study (look at the confidence intervals for the length of time with IBS), it seems unlikely that regression to the mean could account for these results.

The effect size for the mean difference between groups at the end of the study was d=.79, which is a large effect by anyone's standards (see the pun? you're a nerd if you do). To explain the effect size measure: it's the difference between the group means expressed in standard deviations, and one standard deviation is the difference between your high school teacher and Einstein, as measured by IQ (assuming an IQ for the teacher of 115 and giving Einstein 130).
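For anyone who wants to check the arithmetic, here is a small sketch in R of Cohen's d using the endpoint IBS-GIS means and SDs quoted in the abstract above; the equal group sizes of 40 are my assumption for illustration, not the paper's actual allocation:

cohens_d <- function(m1, m2, sd1, sd2, n1, n2) {
  # pooled standard deviation, then standardised mean difference
  sp <- sqrt(((n1 - 1) * sd1^2 + (n2 - 1) * sd2^2) / (n1 + n2 - 2))
  (m1 - m2) / sp
}
cohens_d(5.0, 3.9, 1.5, 1.3, 40, 40)  # roughly 0.78, close to the reported .79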

This is not a small difference, and yet Ed Yong reports that Edzard Ernst claims it is too small to be clinically significant, which makes me wonder what effect sizes he sees in everyday practice, cos that's a large effect to me (I'd kill for an effect size that large, I'd get t-shirts printed and everything).

Orac also takes the research to task for deception, as the placebo pills were described as empty sugar pills which have been proven in rigorous clinical testing to have an impact on self healing processes. He claims that this is the worst deception of all, far nastier than those involved in ordinary randomised trials. Frankly, I don't agree. There have been a number of meta-analyses conducted on the placebo, as well as re-analyses of data from many, many clinical trials, and what participants were told was not a deception, unless telling people what clinical trials have shown about any medical treatment is a deception. So I really don't see why this bothers him.

Orac also uses guilt by association when he notes that the study was funded by the National Center for Complementary and Alternative Medicine, but I believe that argument is beneath any self respecting scientist, so I'll ignore it.

PalMD also takes note of the study, and claims that what was compared here was one non-biological treatment against another. I would disagree: what this trial shows is that care and some kind of medical ritual (take these pills two times daily) are much better in combination than they are apart. That, to me, is perhaps the most interesting finding of this study.

Something which may also interest students of the placebo is the theoretical implications of this study. I've talked elsewhere about theories of placebo, and briefly, I think that this study shows that the effects of expectancies are subservient to those of ritual. This is a clear mark in favour of Hyland's theory of motivational concordance, which basically holds that placebo effects arise from what we do that we find meaningful, rather than what we think (or expect) about a treatment. See the link to my previous post on theories if you'd like a more in-depth discussion of the theoretical approaches in the field.

One brief detail that I would like to know more about is how many pills the open label group took, and whether those who took more than prescribed got more benefit. The authors note that a pill count was taken, but they do not report the results of this measure, which is a shame. This measure would have been especially interesting, as it could be modelled as a Poisson variable (for count data), and the demographics and other measures collected could then have been regressed against it to better understand the causes and correlates of this variable.
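As a purely hypothetical sketch (the variable and dataset names are invented, since the paper does not report this analysis), a Poisson regression of pill count on a few covariates in R might look like this:

fit <- glm(pill_count ~ age + sex + baseline_severity,
           family = poisson(link = "log"), data = ibs)
summary(fit)  # coefficients are on the log scale; exponentiate for rate ratios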

Now, my own thoughts on the future of this line of research are as follows: it will be difficult to replicate this effect when a drug is being used in the study. This seems intuitive to me, given that the meaning of getting placebo or drug is very different from the meaning of getting placebo or no treatment. Mind you, I hope I'm wrong here. An interesting line of research which bears upon this study is the work using the balanced placebo design. Essentially, this work combines the drug and placebo arms of your standard clinical trial with two deceptive conditions, where the participant is told they get a drug but get placebo and vice versa.

I personally would find a replication of this study using that design far more interesting (though ethically challenging), as the effects of the placebo-placebo condition could be contrasted with the others. Again, this is probably just a pipe dream, but if I ever have enough funding, I'd like to make it happen.

One way in which these findings can be explained is as follows. I've spoken before about the effects of self monitoring of internal processes on the placebo effect (essentially, it increases them). It may be that the open label group, while taking the pills, paid more attention to their bodies, and this attention increased their self healing processes (which is all the placebo effect really is). This work on somatic awareness also ties into the results of a recent meta-analysis on the conditions in which the placebo is effective. Just a thought; it probably needs a little more refining into a useful predictive theory.

Anyway, thanks for reading. You may have noted that I haven't really talked that much about the paper itself. It's in PLoS ONE, which is free to everyone, and it's very clearly and engagingly written, so go read it yourselves, rather than relying on me or any other blogger to give you their perspective.


Kaptchuk, T., Friedlander, E., Kelley, J., Sanchez, M., Kokkotou, E., Singer, J., Kowalczykowski, M., Miller, F., Kirsch, I., & Lembo, A. (2010). Placebos without Deception: A Randomized Controlled Trial in Irritable Bowel Syndrome. PLoS ONE, 5(12). DOI: 10.1371/journal.pone.0015591

Wednesday, November 17, 2010

Placebos: All you never wanted to know (Part 4) - Neurobiology

Before I begin, I'd like to note that I am not a neurobiologist, and this is a weak area of mine, so please be gentle and tell me if I make a glaring error.

A strand of placebo research which has become more and more important with time is the increasing focus on brain correlates of placebo responses. The biochemical history of placebo begins with Levine and his demonstration that naloxone blocks many placebo pain responses. From this came the notion that placebo analgesia is mediated by the endogenous opioid system. This is not always true, and seems to vary based on the type of response which is sought. The lasting contribution of this research is that it paved the way for the placebo to come in from the fringes of medical science, as even the most dogmatic materialist could not dismiss the biochemical evidence as demand characteristics.

A recent meta-analytic review argues that placebo effects in pain are quite large (d=.89) and that naloxone is quite effective in reducing them (d=.55), pointing towards an interpretation of placebo effects in pain as being substantially mediated by endogenous opioids.

In this area, the work of Benedetti and his colleagues has been instrumental in unveiling the biochemical pathways through which placebos exert their effects, and much of this work is summarised in his book along with the work of others. It appears that both the opioid and dopaminergic systems are involved in the placebo effect. While Benedetti and others have done much of the research into the opioid system, De la Fuente-Fernández \cite{DeLaFuente-Fernandez2002} has blazed a trail in looking at the dopaminergic system.

It has been observed that the dopamine system activates not just to reward, but rather to the expectancy of reward, and that this release varies as a function of the certainty of the expectancies. In one study, the activation of the dopaminergic system during placebo analgesia was correlated with activity observed during a monetary reward task, suggesting that the mechanisms of reward are a common feature of placebo effects \cite{Scott2007a}. It has been argued that there is a descending link from the OFC to the periaqueductal gray (PAG) and from there to the amygdala, and that this link is responsible for the observed placebo effects.


Another interesting suggestion is that placebo analgesia experiments which show altered brain activity in the rACC and OFC demonstrate the existence of a generalised expectancy network. This hypothesis received some support from a recent experimental study which used either true or false sound cues to create expectancies for particular aversive tastes. This study showed that the rACC and OFC, and to a lesser extent the DLPFC, activated in response to these expectations, suggesting that these parts of the brain may well be associated with expectancies.

An interesting finding arose from an experimental study of patients suffering from Irritable Bowel Syndrome (IBS) \cite{Lieberman2004}. This study looked at placebo using a disruption theory account, which explains neural changes due to placebo in terms of inhibition. The authors found that although the right ventrolateral prefrontal cortex was activated by expectancies of analgesia, this activity was totally mediated by the dorsal anterior cingulate cortex, which argues that this part of the brain is more foundational to the placebo response. Another study of patients suffering from IBS found that naloxone did not reduce the size of placebo effects, which would suggest that these were not opioid mediated \cite{Vase2005}.

A further discovery around placebo analgesia is that it can be directed at specific sites in the body \cite{Benedetti1999}. This study induced expectancies of placebo responses at either the right or the left hand, and demonstrated the expected placebo effects. These effects were completely antagonised by naloxone, which suggests that they were mediated by the endogenous opioid system. This finding is interesting, as it suggests that the opioid systems can be activated at specific parts of the body, and not just globally as some earlier theorists have claimed. A more recent study \cite{Watson2006} found that perhaps 50% of participants in a placebo analgesia experiment generalised the placebo response across both arms, even though cream was only applied to one arm for each person. This would suggest that placebo analgesia is quite malleable and subject to individual interpretation.

There is some evidence to suggest that some of these effects may involve both descending and ascending pathways within the brain, judging from the results of a study on mechanical hyperalgesia \cite{Goffaux2007}. This study used a counter-irritation technique, with a basin of water acting as either a placebo or a nocebo. The authors reasoned that the reflexes in the arm should not change if the placebo effect was completely cortically mediated, but the results suggest that descending pathways are equally important in placebo analgesia. These pathways are controlled from the mid-brain, and these findings suggest that the placebo effect exerts changes in large portions of the body and is not exclusively a cortical phenomenon.

Further evidence in favour of this idea comes from the study of Matre \cite{Matre2006a}, who noted large differences between placebo and control areas of mechanical hyperalgesia, again suggesting the involvement of the whole body in the response. In this context, the results of Roelofs et al. \cite{Roelofs2000} are worth considering. Using similar techniques to the two other studies referenced in this paragraph, they found no evidence that placebo effects cause changes in spinal reflex activity. However, this study also found no evidence for a placebo effect in general, which weakens their conclusions. It is worth mentioning that even though they found no significant effects, they did find a correlation between brain activity and spinal reflexes, which suggests that there was an effect, but their study was either underpowered or used a badly designed expectancy manipulation (most likely the latter) \cite{Goffaux2007}.

An interesting idea which has come about through placebo research is what is known as the uncertainty principle in analgesia \cite{Colloca2005}, where it is argued that the effects of any analgesic cannot be accurately measured in a clinical situation, as the awareness of being given the substance will activate the opioid system, which will further reduce pain. This arises from earlier work showing that open injections of painkillers or placebo produced far more variability than hidden injections, suggesting that while physiological responses to analgesia may be similar across people, awareness of treatment may invoke differential activation of endogenous painkilling systems which cause the total effects to appear to vary quite substantially \cite{Amanzio2001}. Some research has also confirmed that placebo and opioid analgesia share the same neural patterns of activation in the brain \cite{Petrovic2005}.

Some sterling work has also been done in the area of depression and placebo response. A fascinating study suggests that prior to treatment, placebos may induce changes in neurophysiology which predict later treatment response. This is an extremely interesting finding; however, the authors used a new measure (EEG cordance) developed by themselves, and to date there have been no replications of the study. Another useful study of placebo neural activity in depression compared the activation of particular brain regions following treatment with either Prozac or placebo \cite{Mayberg2002}. One fascinating finding of the Mayberg et al. study is that areas of the striatum were activated, and this region of the brain is known to be rich in dopamine receptors, which may suggest that while the placebo response in depression is primarily opioid mediated, the effects of SSRIs may also influence the dopamine systems, which may account for their (slightly) superior effectiveness overall. However, some research shows that psychotherapy activates different brain regions in the treatment of depression, which argues against the existence of a common depression treatment pathway in the brain.

So far, so interesting. My one complaint about this stream of research is that much of it reiterates old findings, backed up with correlational evidence about which parts of the brain are involved. To the extent that it furthers understanding, it's great, but to the extent that it substitutes for it, it's worthless.

Weeks, S., & Tsao, J. (2009). Fabrizio Benedetti, Placebo Effects: Understanding the Mechanisms in Health and Disease, Oxford University Press (2009), ISBN 978-0-19-955912-1, 310 pages, $59.95. Journal of the Neurological Sciences, 281(1-2), 130-131. DOI: 10.1016/j.jns.2009.03.013

Thursday, September 16, 2010

Sweave, LaTeX and R

Yesterday, I finally got the hang of Sweave, R and LaTeX.

This essentially means that I can write my scientific paper in LaTeX, insert code chunks in the text, feed it to R (through Sweave) and get perfectly formatted output in APA style for any paper I choose to write. It's taken me a few months of on and off trying, but I've finally done it. That being said, I'd like to share some of the things that caught me out, so that others can benefit.


Before Installation:
If you've been using LaTeX or R on Windows, they were probably installed in the Program Files folder. This will cause you no end of problems.
Reinstall these programs on the C: drive, in a path with no spaces, as LaTeX doesn't like spaces in path names. For example, install LaTeX in C:\Miktex and install R in C:\bin\R. This will head off a lot of problems that you would otherwise encounter.

When installing LaTeX, be aware that there are a number of editors which you can use for writing. Of these, I am using TeXnicCenter, as it came with my distribution. It's also open source, which is a plus. Others include WinEdt, which is shareware and apparently quite good. Vim and Emacs (with Emacs Speaks Statistics) are the only text editors that provide completion for R code, but both of these programs take a lot of effort to learn. In any case, trying to work entirely from the LaTeX side from the start is very difficult.

The next step is learning to use R. If you are a psychologist, download the arm package and the psych package: arm gives you useful regression diagnostics, and psych provides all the psychometric tools one could need (see here for the author's website, which I devoured when I started learning R). Unfortunately it doesn't provide IRT methods, but these can be accessed through the ltm and eRm packages, which are also easy to obtain (select the install packages option in the R menus, select a mirror site close to you and select the package name - done).
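If you prefer to do it from the console, a minimal sketch looks like this (the built-in attitude dataset is used purely for illustration):

install.packages(c("arm", "psych", "ltm", "eRm"))  # pick a nearby CRAN mirror when prompted
library(psych)
describe(attitude)   # item-level descriptives for a built-in example dataset
alpha(attitude)      # Cronbach's alpha, treating its columns as one scale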

For exporting to LaTeX, there are a number of packages which do different things. I'm currently using xtable, but this doesn't have a defined output method for factor analysis loadings, which is a pain. The manual does show you how to define methods for new classes though, and I will share my results when I have made this work.
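In the meantime, a workaround sketch that seems to do the job (using a built-in dataset purely for illustration) is to strip the loadings class and hand xtable a plain matrix:

library(xtable)
fit <- factanal(mtcars[, 1:6], factors = 2)  # built-in data, just for illustration
loads <- unclass(fit$loadings)               # drop the "loadings" class so xtable accepts a plain matrix
print(xtable(loads, digits = 3))             # emits a LaTeX tabular to the console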

One extremely important thing to remember (and something that stumped me for a while) is the syntax for inserting R code.
<<echo=FALSE, results=tex>>=
some R code
@
The double arrows and the equals sign signal the start of the R code chunk, and the options within the arrows define how the output looks (echo=FALSE means that the R code itself will not be shown, and results=tex tells Sweave that the chunk's output is already LaTeX and should be passed through as-is). The @ sign ends the code chunk. Now, the part that got me was this: the R code, the arrows and the @ need to be left justified, otherwise this does not work. This means that if you want to insert a table from your results, do this after running the code through Sweave.
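To put the pieces together, here is a minimal sketch of a complete .Rnw file (the file name and the use of the built-in mtcars data are mine, purely for illustration):

% minimal.Rnw
\documentclass{article}
\begin{document}
The mean mpg in the built-in mtcars data is \Sexpr{round(mean(mtcars$mpg), 1)}.
<<echo=FALSE, results=tex>>=
library(xtable)
print(xtable(head(mtcars[, 1:3]), caption = "First rows of mtcars"))
@
\end{document}
% run Sweave("minimal.Rnw") in R, then compile minimal.tex with pdflatex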

At the moment (since I am neither an expeRt nor a TeXpert) I am creating the LaTeX objects in R, and then telling R to print them to LaTeX. This allows me to ensure that the objects are created properly before I send them to LaTeX. Sweave will tell you if it has a coding problem and where that occurred, but some things look OK until you actually see them.

The next step is to download the apa package for LaTeX, which will allow you to format the paper in APA style. This is the part that tends not to work if your LaTeX distribution path has spaces in it, so make sure that doesn't happen (I actually reinstalled R and LaTeX on my machine in the recommended places, and now it works like a dream).

You will probably need to learn a little LaTeX, but if you use WinEdt, TeXnicCenter or LyX, then there is a GUI with menus that can aid in this. There are some Sweave templates scattered about the web, and you should probably use one of these. It's probably worth reading either this or this (or both) guide to using LaTeX.

With R, as long as you understand some statistics, it's easy enough to Google and then read the recommended files. The official introduction is extremely terse and focuses on fundamentals rather than applied analysis, but it's useful for its description of summary, plot, lm and the other really useful generic functions.
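As a quick illustration of those generics (a hedged sketch using the built-in mtcars data, chosen by me for convenience):

fit <- lm(mpg ~ wt + hp, data = mtcars)               # ordinary least squares regression
summary(fit)                                          # coefficients, R-squared, residual SE
plot(fit)                                             # the standard diagnostic plots
predict(fit, newdata = data.frame(wt = 3, hp = 120))  # fitted value for a new case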

Friday, September 3, 2010

My blogging motivations

1. Sum up your blogging motivation, philosophy and experience in exactly 10 words:

A ranty blog about life and science which I feel gets ignored.

Pass it on to 10 others. 

If you read this and are blogging and have not yet done this, consider yourself tagged.

From: Dr. Girlfriend

LaTeX success!

Today, I successfully typeset my first paper using LaTeX and BibTeX.

I know that no-one else cares, but nonetheless, I feel much better about my life.

LaTeX, for those of you who don't know, is an open source typesetting program which allows you to create all kinds of text documents through the use of simple scripting commands and output the results in a variety of file formats (I tend to use PDF).

LaTeX can also be used with the open source software R, to embed the results of analyses neatly in one file, which then creates your paper (I haven't made this work yet, but tomorrow is another day). 

The major advantages (as I see it) are that you can update analyses and the finished paper much more easily, and LaTeX draws all the tables for you (which is good, because R is not good at producing pretty tables).

LaTeX was originally designed for typesetting mathematical documents, so equations and the like are very easy to do.

LaTeX is awesome and the future of science and reproducible research.

R is also awesome, but I've talked about that before.

With these tools, I shall be a publication machine! (if I can collect enough usable data).

Another advantage is that all three tools are available on every platform, and so soon (perhaps next month) I shall delete Windows and go completely free software. You should too.

Sunday, August 22, 2010

Grad school, Irish style.

Taking some time off from my placebo series, I'd like to talk about my experience as a PhD student in Ireland.

This is somewhat inspired by the zomg grad school blog carnival, but I was too busy to submit in time.

It's also inspired by the fact that everyone who submitted to that carnival was a natural scientist, which impels me to give the social science side of the equation.

First, a few notes on Irish PhDs versus the American grad school experience.
First off, there is very little funding: Ireland is in a depression at the moment, and it never really put much money into the social sciences before that.
Secondly, what funding there is (in my area at least) tends to be awarded to the student rather than to a PI. Luckily enough, I did get funding (although it doesn't cover conferences or expenses, which sucks).

Also, there tend not to be many courses; you are essentially thrown into research, which I prefer but which many people would not find appealing. I sometimes wish that I'd had people to explain methods and stats to me in detail at the start, but then again, learning that stuff myself has been extremely rewarding.

So, without further ado, here are my top ten tips for surviving a PhD.

1) Do something you like - this is extremely important, as if you don't like your thesis, it's unlikely that you will finish on time or that anyone else will care. Liking your PhD also makes it easier to write good grant applications.

2) Try to figure out what you want to do, in some detail, ASAP. This again is critical to finishing on time. Don't worry if your methods or approach changes, just figure out your key question and how you are going to assess it. Then draw up a schedule. You won't stick to it, but it can often be a spur to ensure that you keep working.

3) Work consistently. This was really difficult for me, as I was always a crammer in school and undergrad. However, this will not work for a PhD (if you want to finish on time at least) so get into the habit of doing some work at least 4 days a week. This is very important when you are, like me, an independent scholar without compatriots in a lab somewhere.

4) Read outside your discipline, especially for methods. Often, the methods in your field will be some amalgam of tradition, stupidity and lack of thought. Other disciplines can often point out the blind spots of your own.

5) Read, read, read. Spend at least six months reading before you start collecting data. Make sure you read around any instrument you plan on using. This can often give you a good idea of unanswered questions, which can help you get published (which is important if you want to stay in academia).

6) In total contrast to the last point, start collecting and analysing data ASAP. There's nothing like trying to figure out your own data to help you understand the methods you are using. If something doesn't make sense, Google it and read some papers. It's likely that someone else has had the same problem, and they may know how to solve it. If you can't collect data quickly for some reason, search the internet and start analysing other people's data for practice.

7) Use R - seriously, if you intend to do any kind of hardcore statistical analysis, use R. It's the best stats program out there, and is constantly having new packages added. It's made me a much better scientist, both by forcing me to learn exactly what I'm doing (to decipher the error messages) and by centralising all of the routines I need in one place. Most psychologists end up using SPSS, some IRT program, some SEM program and various other odds and ends. R does all of this, so just learn it now before you get left behind.

8) Take some time off. I've lost count of the number of times I've been stumped on a problem, have taken a couple of hours or a day off, and the solution has come to me while I was relaxing. Creative thought and hard slog do not often co-occur, so make time for both.

9) Use as many useful computer tools as possible. Get a computerised reference manager, cos references are annoying. Get a good stats program (use R). Get a good qualitative analysis program (I'm using NVivo, but there's probably a good open source alternative). Learn LaTeX, lest you lose a whole chapter to the demons that infest Word.

10) Write, write, write. Its often easier to understand what the problems are once you try to explain yourself. Aim to write a few hundred words a day. Take notes on absolutely everything you read, this will save you time in the long run.

Finally, have fun! Doing research is supposed to be fun, and you can bet your ass that all the greats enjoyed their work. To paraphrase something I heard once: Doing a PhD is like living your life; if you're not enjoying it, neither the life nor the PhD will turn out to be any good.

Sunday, August 8, 2010

Placebos: All you never wanted to know (Part 3a) - Experimental Evidence

Well, here we are again to continue our tour of research surrounding the knife in Descartes' eye, the placebo effect.

I wonder sometimes if mind-body dualism hadn't become so popular, would we have learned to understand the placebo effect long before now?

In any case, Part 3 (Parts 1 and 2) of this series is going to look at experimental evidence surrounding the placebo effect. This section may end up getting broken up some more, but we will cross that bridge when we come to it.

An interesting study took place in the heady days of 2006 (Kaptchuk et al., 2006) and involved some of the biggest names in the field. The study set out to examine the differential effects of two wholly placebo treatments. They used a sham acupuncture treatment and also the traditional sugar pill. Now, if placebo effects were an illusion, one would not expect to see a difference between these two treatments, but that's not what was seen.

The study was an 8 week randomised controlled trial (sort of, given that there was no active ingredient), and the results showed that the two placebos were mostly the same, except that the pill group showed increased hand grip (which was measured objectively, for those of you who care about such things).

The next study is far more interesting, however. Again it's Kaptchuk et al., this time from 2008. Essentially it was a randomised controlled trial of acupuncture with three groups, using patients suffering from Irritable Bowel Syndrome.
The first group was the no treatment control. They came in, they took part, but they didn't have any treatment.
The second group was the minimal contact group. These people received either real or fake acupuncture, but delivered in a businesslike fashion, and the practitioners didn't spend very much time with each patient.
The third group was the enhanced acupuncture group, in which the therapist came in and talked to them for about half an hour before putting in the needles.

At the six week outcome measure point, 3% of group 1, 20% of group 2 and 37% of group 3 had significant improvement (all differences between the groups significant at the p<0.0001 level). This, to me, is pretty amazing. If a treatment which doesn't tend to do well in clinical trials (acupuncture), augmented by warm and friendly interaction, can be this effective, how much more effective would it be combined with a well validated treatment? It's perhaps sad that doctors are now so focused on diagnosing and dismissing that they are not making much of an effort with their relationships with their patients, and this is having a measurable impact on their healing capacities.

Anyway, moving on to some fascinating work by Geers et al. In this study, student participants took a pill which purported to increase their anxiety and irritability. However, Geers added a number of interesting modifications, and again there were three main groups in the study. Group 1 was told: this drug is active and will increase your irritability. Group 2 was told: you may or may not get the active drug. Group 3 was told: you are getting a placebo. Geers also measured optimism levels at baseline.

The major findings were as follows. The participants in the deceptive administration group tended to show more of an effect than those in groups 2 or 3. However, optimism mediated these results: those high in optimism tended not to respond to the nocebo suggestion, while those high in pessimism tended to respond much more to it. These interesting findings may explain why optimists tend to have better health outcomes than pessimists. So, the take home message is this: be optimistic about your medical treatments; it just might save your life. I hope to get another post on this experimental evidence section up for all of you sometime in the next few days.


Kaptchuk, T. (2006). Sham device v inert pill: randomised controlled trial of two placebo treatments. BMJ, 332(7538), 391-397. DOI: 10.1136/bmj.38726.603310.55

Kaptchuk, T., Kelley, J., Conboy, L., Davis, R., Kerr, C., Jacobson, E., Kirsch, I., Schyner, R., Nam, B., Nguyen, L., Park, M., Rivers, A., McManus, C., Kokkotou, E., Drossman, D., Goldman, P., & Lembo, A. (2008). Components of placebo effect: randomised controlled trial in patients with irritable bowel syndrome. BMJ, 336(7651), 999-1003. DOI: 10.1136/bmj.39524.439618.25

Geers, A., Helfer, S., Kosbab, K., Weiland, P., & Landry, S. (2005). Reconsidering the role of personality in placebo effects: Dispositional optimism, situational expectations, and the placebo response. Journal of Psychosomatic Research, 58(2), 121-127. DOI: 10.1016/j.jpsychores.2004.08.011