Friday, August 29, 2014

Subgroup Analysis in Clinical Trials - Revisited

I previously wrote an article about subgroup analysis in clinical trials, and I would like to revisit the topic. Subgroup analysis has been a regular discussion topic at statistical conferences recently. The pitfalls of subgroup analyses are well understood in the statistical community. However, subgroup analyses remain quite complicated in regulatory settings for product approval, in multi-regional clinical trials, and in confirmatory trials.

EMA is again ahead of FDA in issuing regulatory guidance on this topic. Following an expert workshop on subgroup analysis, EMA issued a draft guideline titled “Guideline on the investigation of subgroups in confirmatory clinical trials”. In addition to general considerations, it provides guidance on issues to be addressed during the study planning stage and on issues to be addressed during the assessment stage.

In practice, subgroup analyses are almost always conducted. For a study with negative results, the purpose is usually to see whether there is a subgroup in which statistically significant results can be found. For a study with positive results, the purpose is usually to see whether the result is robust across different subgroups. Subgroup analyses are not performed only in industry-sponsored trials; they may be performed even more often in academic clinical studies for publication purposes.

Sometimes it is not easy to explain the caveats of subgroup analysis (especially unplanned subgroup analysis) to non-statisticians. Explaining the issues requires a good understanding of multiplicity adjustments and statistical power. I recently saw some presentation slides in which the pitfalls of subgroup analysis were well explained, summarized in the table below. Either problem, inflated type I error when H0 is true or reduced power when H1 is true, can make subgroup analysis results unreliable; the simulation sketch after the table illustrates both.


Dr. George (2004), “Subgroup analyses in clinical trials”

When H0 is true: increased probability of type I error (too many “differences”)
  • Each “statistically significant difference” has a 5% probability of arising by chance alone
  • Across many subgroup tests, these 5% chances add up
  • Some of the apparent effects (somewhere) will not be real
  • We have no way of knowing which ones are real and which ones are not

When H1 is true: decreased power (increased type II error) in individual subgroups (not enough “differences”)
  • The more data we have, the higher the probability of detecting a real effect (“power”)
  • But subgroup analyses “cut the data”
  • Trials are expensive, and we usually fix the size of the trial to give high “power” to detect important differences overall (primary efficacy endpoint)
  • When we start splitting the data (only look at men, or only at women, or only at the renally impaired, or only at the elderly, etc.), the sample size is smaller … and the power is much reduced
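Both pitfalls are easy to demonstrate with a small simulation. The following Python sketch (the sample sizes, effect size, and number of subgroups are arbitrary assumptions for illustration, not from Dr. George's slides) shows that under H0 the chance of at least one “significant” subgroup finding far exceeds 5%, and that under H1 the power within each subgroup is much lower than the overall power:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_per_arm, n_subgroups, n_sims, alpha = 200, 10, 2000, 0.05

def subgroup_tests(effect):
    """Simulate one trial and t-test each (equal-sized) subgroup.

    Returns (any subgroup significant?, fraction of subgroups significant).
    """
    treat = rng.normal(effect, 1, n_per_arm)
    ctrl = rng.normal(0, 1, n_per_arm)
    m = n_per_arm // n_subgroups
    pvals = [stats.ttest_ind(treat[i*m:(i+1)*m], ctrl[i*m:(i+1)*m]).pvalue
             for i in range(n_subgroups)]
    return min(pvals) < alpha, np.mean(np.array(pvals) < alpha)

# Under H0 (no true effect): the family-wise false positive rate inflates
# toward 1 - (1 - alpha)^k, roughly 40% for k = 10 tests at alpha = 0.05.
hits = [subgroup_tests(0.0)[0] for _ in range(n_sims)]
print("P(at least one false 'subgroup effect'):", np.mean(hits))

# Under H1 (true effect = 0.3 SD): the overall test is well powered, but
# each subgroup has only n/k subjects per arm, so per-subgroup power is low.
per_subgroup_power = np.mean([subgroup_tests(0.3)[1] for _ in range(n_sims)])
overall_power = np.mean(
    [stats.ttest_ind(rng.normal(0.3, 1, n_per_arm),
                     rng.normal(0, 1, n_per_arm)).pvalue < alpha
     for _ in range(n_sims)])
print("Overall power:", overall_power,
      "Average per-subgroup power:", per_subgroup_power)
```

With these assumed numbers, the overall comparison has roughly 85% power, while each one-tenth-sized subgroup has well under 20%, which is exactly the “cutting the data” problem in the table above.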

In clinical trials for licensure, regulatory agencies such as FDA may require subgroup analyses (planned or unplanned) to see whether the results are consistent across different subgroups or whether the risk-benefit profiles differ across subgroups. The reviewers may also perform their own subgroup analyses. However, they are aware of the pitfalls of these analyses. Zontivity, recently approved by FDA, is a good example of this exact issue; see the Pink Sheet article "FDA Changed Course On Zontivity Because Of Skepticism Of Subgroups At High Levels". Initially, FDA reviewers performed subgroup analyses and identified that subjects weighing less than 60 kg had a different risk-benefit profile compared with subjects weighing more than 60 kg. An advisory committee meeting was organized to discuss whether the approved indication should be limited to the specific subgroup. Eventually, however, FDA changed course and did not impose a label restriction to the specific subgroup, commenting that “The point is that one has to be careful not to over-interpret these subgroup findings.”

Friday, August 15, 2014

SAE Reconciliation and Determining/Recording the SAE Onset Date


Traditionally, clinical operations and drug safety/pharmacovigilance departments have elected to independently collect somewhat different sets of safety data from clinical trials. For serious adverse events (SAEs), the drug safety/pharmacovigilance department collects the information through the SAE form, and the information is maintained in a safety database. The clinical operations or data management department collects adverse events (AEs), including SAEs, on case report forms (CRFs), or eCRFs if it is an EDC study. For SAEs, the information in the safety database and in the clinical database comes from the same source (the investigational sites). During the study or at the end of the study, the key fields regarding the SAEs in the two independently maintained databases need to be reconciled, and the key data fields must match in both databases.

A poster by Chamberlain et al, “Safety Data Reconciliation for Serious Adverse Events (SAE)”, nicely describes the SAE reconciliation process. They stated that of the fields to be reconciled, “some will require a one to one match with no exception, while some may be deemed as acceptable discrepancies based on logical match.”
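As a rough illustration of this distinction (the field names and the date window below are my own assumptions, not taken from the poster), a minimal Python sketch of such a reconciliation check might treat the subject ID and event term as exact-match fields and the onset date as a logical-match field:

```python
from datetime import date

# Hypothetical SAE records from the two independently maintained databases.
safety_db = {"subject": "001-004", "term": "MYOCARDIAL INFARCTION",
             "onset": date(2014, 3, 10)}
clinical_db = {"subject": "001-004", "term": "MYOCARDIAL INFARCTION",
               "onset": date(2014, 3, 12)}

def reconcile(safety, clinical, onset_window_days=3):
    """Return a list of discrepancy messages for one SAE record pair."""
    issues = []
    # Exact-match fields: must agree with no exception.
    for field in ("subject", "term"):
        if safety[field] != clinical[field]:
            issues.append(f"{field}: '{safety[field]}' != '{clinical[field]}'")
    # Logical-match field: a small date difference may be an acceptable
    # discrepancy rather than a hard mismatch.
    delta = abs((safety["onset"] - clinical["onset"]).days)
    if delta > onset_window_days:
        issues.append(f"onset dates differ by {delta} days")
    return issues

print(reconcile(safety_db, clinical_db))  # [] -> within the assumed window
```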

They also gave examples of fields that require an exact match or a logical determination, shown in Table 1 below.



Among these fields, the onset date is the one that usually causes problems, owing to different interpretations of the regulatory guidelines by the clinical operations and drug safety/pharmacovigilance departments. The onset date of an SAE could be reported as the first date when signs and symptoms appear, or as the date when the event meets one of the following SAE criteria (as defined in ICH E2A):

* results in death,
* is life-threatening,
* requires inpatient hospitalisation or prolongation of existing hospitalisation,
* results in persistent or significant disability/incapacity, or
* is a congenital anomaly/birth defect.

Klepper and Edwards did a survey and published the results in their paper “Individual Case Safety Reports – How to Determine the Onset Date of an Adverse Reaction”. The results indicated considerable variability in determining the onset date of a suspected adverse reaction. They recommended that a single criterion for onset time, i.e., the beginning of signs or symptoms of the event, or the date of diagnosis, be chosen as the standard.

However, many companies and organizations (such as NIH and NCI) indicate in their SAE form completion guidelines that the event start date should be the date when the event satisfied one of the serious event criteria (for example, if the criterion “requires hospitalization” was met, the date of admission to the hospital would be the event start date). If the event started prior to becoming serious (i.e., was less severe), it should be recorded on the AE page as a non-serious AE with a different severity.

In the NIDCR Serious Adverse Event Form Completion Instructions, the SAE onset date is to “record the date that the event became serious”.

In the SAE Recording and Reporting Guidelines for Multiple Study Products by the Division of Microbiology and Infectious Diseases, NIH, the onset date of an SAE is instructed to be the date the investigator considers the event to meet one of the serious categories.

In the HIV Prevention Trials Network, the Adverse Event Reporting and Safety Monitoring section indicates that:

“If an AE increases in severity or frequency (worsens) after it has been reported on an Adverse Experience Log case report form, it must be reported as a new AE, at the increased severity or frequency, on a new AE Log. In this case, the status outcome of the first AE will be documented as “severity/frequency increased.” The status of the second AE will be documented as “continuing”. The outcome date of the first AE and the onset date of the new (worsened) AE should be the date upon which the severity or frequency increased.”

In the Serious Adverse Event Form Instructions for Completion by the National Cancer Institute Division of Cancer Prevention, the event onset date is to be entered as the date the event fulfilled one of the serious criteria.

The Good Clinical Practice Q&A: Focus on Safety Reporting column in the Journal of Clinical Research Best Practice contains the following example for reporting the SAE onset date:

What would an SAE’s onset date be if a patient on study develops symptoms of congestive heart failure (CHF) on Monday and is admitted to the hospital the following Friday?

“If known, the complete onset date (month-day-year) of the first signs and/or symptoms of the most recent CHF episode should be recorded. In this case, it would be Monday. If the onset date of the first signs and/or symptoms is unknown, the date of hospitalization or diagnosis should be recorded.”

If the SAE onset date is recorded as the date when one of the SAE criteria is met (this seems to be the more popular approach in practice), it may essentially require splitting the event. If an event starts as non-serious and later meets one of the serious criteria, the same event will be recorded as two events: one as a non-serious event with the onset date being the date of the first sign or symptom, and one as a serious adverse event with the onset date being the date when one of the SAE criteria was met. This approach therefore results in a later onset date and a shorter SAE duration, but it perhaps double counts the same event.


If the SAE onset date is recorded as the date when the first sign or symptom appeared, it will result in an earlier onset date and a longer SAE duration. Since SAE reporting to regulatory authorities/IRBs is based on the SAE onset date, this may be more stringent in meeting the SAE reporting requirements. The sketch below contrasts the two conventions.
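To make the contrast concrete, here is a minimal Python sketch (the event, dates, and record layout are hypothetical assumptions for illustration) showing how the two conventions yield different onset dates and durations for the same underlying event:

```python
from datetime import date

# Hypothetical event: CHF symptoms begin as non-serious, the patient is
# hospitalized (meets an SAE criterion) four days later, and the event
# eventually resolves.
first_symptom = date(2014, 8, 4)   # Monday: first signs/symptoms
hospitalized = date(2014, 8, 8)    # Friday: "requires hospitalization" met
resolved = date(2014, 8, 20)

# Convention 1: SAE onset = date a serious criterion is met (event is split).
records_split = [
    {"event": "CHF", "serious": False, "onset": first_symptom,
     "end": hospitalized},          # non-serious portion on the AE page
    {"event": "CHF", "serious": True, "onset": hospitalized,
     "end": resolved},              # SAE portion: later onset, shorter duration
]

# Convention 2: SAE onset = first sign/symptom (one record, longer duration).
records_single = [
    {"event": "CHF", "serious": True, "onset": first_symptom,
     "end": resolved},
]

for rec in records_split + records_single:
    duration = (rec["end"] - rec["onset"]).days
    print(rec["event"], "serious:", rec["serious"],
          "onset:", rec["onset"], "duration (days):", duration)
```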