Showing posts with label randomization.
Sunday, May 25, 2008
Blinding and randomization, Part II
I could title this post "things that go bump with blinding and randomization." The nice, clean picture I presented in the first part of this series holds much of the time, but problems crop up more often than you'd like.
Before I go into them, there's one aspect I didn't touch on in the first part: the business aspect. That's what concerns Schering-Plough -- if executives were unblinded to trial results, they could make financial decisions based on them, such as "cashing out" while shareholders are left footing the bill. I usually don't deal with that end of things, but it's a very important one for company management.
Ok, so back to things that go bump:
- It's hard to make a placebo. Sometimes it's really hard to match the drug. In an active-controlled trial, what happens if the active control is an intravenous injection and the experimental treatment is a pill? You could use a double-dummy design, where everybody gets both an IV and a pill (only one of which is active), but the more complicated the scheme, the more room there is for error.
- The primary endpoint is not the only observable effect of a drug. For example, if your drug is known to dry out skin and a patient presents with a severe skin-drying adverse event, your investigator has a pretty good idea of the assigned treatment.
- If all outcomes come back close to each other, relative to the variability in the data, you have a pretty good idea the treatment has no effect. This may not unblind individual patients, but it gives whoever writes the analysis programs -- and presumably senior management -- a good sense of the likely trial results before the codes are ever broken.
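A toy sketch of that last point, with entirely hypothetical numbers: if the pooled, still-blinded outcomes span a range far smaller than the effect the trial was powered to detect, then no assignment of treatment labels to those values could reveal that effect.

```python
# Hypothetical blinded outcomes -- no treatment labels attached.
outcomes = [5.0, 5.2, 4.9, 5.1, 5.1, 4.9, 5.0, 5.2]

# The minimally important difference the trial was designed to detect
# (an assumed value, purely for illustration).
mid = 1.0

spread = max(outcomes) - min(outcomes)

# If the entire pooled range is much smaller than the targeted effect,
# no split of these values into two arms can show that effect.
print(spread < mid)  # prints True
```

No formal unblinding has happened here, yet the blinded data alone strongly suggest a null result -- which is exactly the leak described above.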
Wednesday, May 14, 2008
Blinding and randomization: the basics
Before I start talking too much about whether it's possible to effectively unblind a study without knowing treatment codes, it will be helpful to establish why blinding (also called masking) is important. In a nutshell, these techniques, when applied correctly, correct our unconscious tendencies to bias results in favor of an experimental treatment.
Randomization is the deliberate introduction of chance into the process of assigning subjects to treatments. This technique not only establishes a statistical foundation for the analysis methods used to draw conclusions from the trial data, but also corrects the well-known tendency for doctors to assign sicker patients to one treatment group (for example, the experimental treatment in a placebo-controlled trial, or the active control when it is well established).
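For concreteness, here is a minimal sketch of one common way that chance is introduced: permuted-block randomization, which keeps the arms balanced as subjects accrue. The function name, block size, and seed are my own choices for illustration, not anything from a specific trial.

```python
import random

def block_randomize(n_subjects, block_size=4, arms=("A", "B"), seed=2008):
    """Generate a permuted-block randomization schedule.

    Each block contains an equal number of each arm in random order,
    so allocation stays balanced throughout enrollment.
    """
    rng = random.Random(seed)
    per_arm = block_size // len(arms)
    schedule = []
    while len(schedule) < n_subjects:
        block = list(arms) * per_arm   # e.g. ["A", "B", "A", "B"]
        rng.shuffle(block)             # permute within the block
        schedule.extend(block)
    return schedule[:n_subjects]

schedule = block_randomize(12)
print(schedule)
```

In practice the schedule is generated once, kept under lock by an unblinded party, and only the assignment for the next enrolling subject is revealed to the site.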
Blinding is keeping the treatment assignment secret from the patient, the doctor, or both. (Statisticians and other study personnel are kept blinded as well.) Single-blind studies withhold the treatment assignment from the subject; double-blind studies withhold it from both the subject and the doctor. I have run across a case where the doctor was blinded to treatment assignment but not the subject, but those are rare.
For some examples of kinds of bias handled by these techniques, see here.
If a particular patient experiences problems with treatment in such a way that the treatment assignment has to be known, we have ways of exposing just that one patient's assignment without exposing everybody else's. If all goes well, this is a relatively rare event. That's a big "if."
At the end of the study, ideally we produce the statistical analysis with dummy randomization codes in order to get a "shell" of what the final analysis will look like. This analysis follows a prespecified plan documented in the study protocol and a statistical analysis plan. In many cases we draw up, in Microsoft Word or other editing software, a shell of what the tables will look like. (I've heard of some efforts to use the OASIS table model for both shells and analysis.) When we are satisfied with the results, we drop in the true randomization codes (seen for the first time) and hope nothing strange happens. (Usually, very little goes awry unless there was a problem in specifying the data structure of the true randomization codes.)
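The dummy-code workflow can be sketched as follows. In practice this is done in a statistical package against real trial datasets; the tiny "analysis program" and every number and code below are made up for illustration. The point is that the same program runs twice: once against dummy codes to validate the table shell, then once against the true codes for the final result.

```python
import statistics

def summarize_by_arm(outcomes, codes):
    """Mean outcome per treatment arm -- a stand-in 'analysis program'."""
    by_arm = {}
    for value, arm in zip(outcomes, codes):
        by_arm.setdefault(arm, []).append(value)
    return {arm: statistics.mean(vals) for arm, vals in by_arm.items()}

outcomes = [5.1, 4.8, 6.0, 5.5, 4.9, 5.7]   # hypothetical endpoint data

# Step 1: develop and debug the program against dummy codes.
dummy_codes = ["A", "B", "A", "B", "A", "B"]
shell = summarize_by_arm(outcomes, dummy_codes)   # checks layout only

# Step 2: once satisfied, drop in the true codes (seen for the first time)
# and rerun the identical program.
true_codes = ["B", "A", "A", "B", "B", "A"]
final = summarize_by_arm(outcomes, true_codes)
```

Because the program itself is frozen before the true codes arrive, the only thing that changes between the two runs is the assignment column -- which is what keeps the analysis honest.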
Any analysis that occurs afterward might be used to generate hypotheses, but isn't used to support an efficacy or safety claim. If something interesting does come up, it has to be confirmed in a later study.
Ideally.
What happens when things aren't so ideal? Stay tuned.
Thursday, May 8, 2008
Can the blind really see?
That's Sen. Grassley's concern, stated here. (A thorough and well-done blog with some eye candy, though I don't agree with a lot of opinions expressed there.)
I've wondered about this question even before the ENHANCE trial came to light, but since I'm procrastinating on getting out a deliverable (at 11:30pm!), I'll just say that I plan to write about this soon.