In radiologic diagnostic imaging studies, the goal is typically to compare the performance of readers (usually radiologists) across two or more tests or modalities (e.g., digital versus film mammograms, CT versus MRI) to determine which performs better. The most common design for such studies is a paired design in which each reader assigns confidence-of-disease ratings to the same images using each test, with each reader's performance summarized by a function of his or her estimated receiver operating characteristic (ROC) curve. Examples of such reader-performance measures include the sensitivity achieved at a given specificity and the area under the ROC curve (AUC), which estimates the probability of correctly discriminating between a randomly chosen pair of normal and abnormal images.
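To make the AUC interpretation concrete, the following is a minimal sketch of the empirical (nonparametric) AUC, computed as the proportion of (normal, abnormal) image pairs in which the abnormal image receives the higher rating, with ties counted as one half; the function name and the sample ratings are illustrative, not from the source.

```python
import numpy as np

def empirical_auc(normal_scores, abnormal_scores):
    """Empirical AUC: fraction of (normal, abnormal) pairs ranked correctly,
    with ties counted as half (equivalent to a scaled Mann-Whitney U)."""
    normal = np.asarray(normal_scores, dtype=float)
    abnormal = np.asarray(abnormal_scores, dtype=float)
    # Pairwise differences: abnormal rating minus normal rating.
    diff = abnormal[:, None] - normal[None, :]
    return np.mean(diff > 0) + 0.5 * np.mean(diff == 0)

# Hypothetical confidence-of-disease ratings (1-5 scale) for one reader on one test.
normal_ratings = [1, 2, 2, 3, 1, 2]
abnormal_ratings = [3, 4, 5, 4, 2, 5]
print(empirical_auc(normal_ratings, abnormal_ratings))
```

In a paired multireader study, one such AUC estimate is computed for every reader-test combination, and these estimates become the outcomes to be compared.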
Typically the researcher wants to account for two sources of variation in these studies: variation across patients and variation across readers. Although standard statistical methods exist for accounting for multiple sources of variation, these imaging studies present a unique challenge for the statistician because the outcome of interest, the reader-performance measure, is not indexed by case. Thus, for example, a conventional linear or generalized linear mixed model with reader and patient treated as random effects cannot be used. Presently, the standard analysis approach is the Obuchowski-Rockette method. I will present an introduction to the Obuchowski-Rockette method, describe its current state of development, and outline future areas of research.
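To indicate the flavor of the approach, the Obuchowski-Rockette method can be sketched as a test-by-reader ANOVA on the performance estimates themselves; the notation below is one common presentation and is not drawn from the source:
\[
\hat{\theta}_{ij} = \mu + \tau_i + R_j + (\tau R)_{ij} + \epsilon_{ij},
\]
where \(\hat{\theta}_{ij}\) is the performance estimate (e.g., AUC) for test \(i\) and reader \(j\), \(R_j\) and \((\tau R)_{ij}\) are random reader and test-by-reader effects, and the errors \(\epsilon_{ij}\) are allowed to be correlated across readers and tests because all readers interpret the same cases. The error covariances are estimated from the case-level data (e.g., by jackknifing cases) and used to adjust the conventional ANOVA \(F\) test for the test effect.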