Monday, August 20, 2012
Clinical trials: enrollment targets vs. valid hypothesis testing
The questions raised in this Scientific American article ought to concern all of us, and I want to take some of these questions further. But let me first explain the problem.
Clinical trials and observational studies of drugs, biologics, and medical devices pose huge logistical challenges, not the least of which is finding physicians and patients to participate. The thesis of the article is that the classical methods of finding participants, mostly compensation, create perverse incentives for participants to lie about their medical condition.
I think there is a more subtle issue, and it struck me when one of our clinical people expressed a desire not to put enrollment caps on large hospitals, for the sake of fast enrollment. In our race to finish the trial and collect data, we bias our studies toward larger centers, where care may be better. This effect is exactly the opposite of the one posited in the article, where the treatment effect is biased downward. Here, the treatment effect is biased upward: doctors at large centers tend to be more familiar with best delivery practices (many of the drugs I study are IV or hospital-based) and best treatment practices, and they deliver more efficient care.
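To make the direction of this bias concrete, here is a toy simulation (all numbers invented) in which the per-center treatment effect grows with enrollment; an uncapped, enrollment-weighted trial then overstates the effect a typical center would see:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical setup: 20 centers, where larger centers deliver the drug
# more effectively, so the center-level treatment effect grows with size.
n_centers = 20
enrollment = rng.integers(5, 200, size=n_centers)
true_effect = 1.0 + 0.005 * enrollment  # bigger center, bigger effect

# Enrollment-weighted effect: what an uncapped trial effectively estimates.
uncapped = np.average(true_effect, weights=enrollment)

# Equal-weight effect: closer to what a typical center would see.
capped = true_effect.mean()

print(f"enrollment-weighted effect: {uncapped:.2f}")
print(f"equal-center effect:        {capped:.2f}")
```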
We statisticians can start to characterize the problem by looking at the treatment effect across sites, or by using hierarchical models to separate the center effect from the drug effect. But this isn’t always a great solution: low-enrolling sites, by definition, contribute far fewer patients, and pooling them is problematic because low-enrolling centers tend to show much more variation in the level and quality of care than high-enrolling centers.
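For the hierarchical-model approach, here is a minimal sketch using a random intercept for center (statsmodels’ MixedLM) on simulated patient-level data; the effect sizes are made up:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)

# Simulated patient-level data: the outcome depends on treatment plus a
# center-specific shift (quality of care) that we want to separate out.
rows = []
for center, n in enumerate(rng.integers(5, 80, size=15)):
    center_effect = rng.normal(0, 1)  # invented center-level care quality
    for _ in range(n):
        treated = int(rng.integers(0, 2))
        outcome = 2.0 * treated + center_effect + rng.normal(0, 1)
        rows.append({"center": center, "treated": treated, "outcome": outcome})
df = pd.DataFrame(rows)

# A random intercept per center soaks up the center effect, leaving the
# fixed-effect coefficient on `treated` as the drug effect.
fit = smf.mixedlm("outcome ~ treated", df, groups=df["center"]).fit()
print(fit.summary())
```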
We can get creative on the statistical analysis end of studies, but I think the best solution is going to involve stepping back at the clinical trial logistics planning stage and recasting the recruitment problem in terms of a generalizability/speed tradeoff.
Monday, August 13, 2012
Observational data is valuable
I’ve heard way too many times that observational studies are flawed, and that to really confirm a hypothesis you have to run randomized controlled trials. Indeed, this was an argument in the hormone replacement therapy (HRT) controversy (scroll down for the article). Now that I’ve worked with both observational and randomized data, here are a few observations:
- The choice of observational vs. randomized is an important, but not the only, study design choice.
Studies involve many different design choices: follow-up length, measurement schedule, when during the disease course to observe, assumptions about risk groups, assumptions about the stability of risk over time (which was important in the HRT discussion about breast cancer), and the list goes on. A well-designed observational study can give far more valid information than a poorly designed randomized trial.
- Only one aspect of a randomized trial is randomized (usually). Covariates and subgroups are not randomized.
- Methods exist to make valid comparisons in an observational study. The data have to be handled much more carefully, and the assumptions behind the statistical methods have to be examined more closely, but powerful methods such as causal analysis or case-control studies can be used to draw strong conclusions (a sketch of one such method follows this list).
- Observational studies can complement or replace randomized designs. In fact, in controversies such as the use of thimerosal in vaccines, observational studies have had to supply all the evidence (randomizing children to thimerosal and non-thimerosal groups to see whether they develop autism is not ethical). In post-marketing research and development for drugs, observational studies are used to further establish safety, determine the rate of rare serious adverse events, and assess the effects of real-world usage on the efficacy established in randomized trials.
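As a sketch of what I mean by causal analysis in the third bullet above, here is inverse-probability weighting with an estimated propensity score, applied to simulated confounded data; the confounder and effect sizes are invented:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(7)

# Simulated observational data: sicker patients (higher severity) are more
# likely to be treated, so the naive comparison is confounded.
n = 5000
severity = rng.normal(0, 1, n)
treated = (rng.uniform(size=n) < 1 / (1 + np.exp(-severity))).astype(int)
outcome = 1.5 * treated - 2.0 * severity + rng.normal(0, 1, n)  # true effect 1.5

naive = outcome[treated == 1].mean() - outcome[treated == 0].mean()

# Estimate the propensity score and form inverse-probability weights.
p = LogisticRegression().fit(severity.reshape(-1, 1), treated).predict_proba(
    severity.reshape(-1, 1))[:, 1]
w = treated / p + (1 - treated) / (1 - p)
ipw = (np.sum(w * treated * outcome) / np.sum(w * treated)
       - np.sum(w * (1 - treated) * outcome) / np.sum(w * (1 - treated)))

print(f"naive difference: {naive:.2f}")  # biased downward by confounding
print(f"IPW estimate:     {ipw:.2f}")    # close to the true effect of 1.5
```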
Through careful planning, observational studies can generate new results, extend the results of randomized trials, or even set up new randomized trials.