Eva A. Enns, PhD

It is well established that contact network structure strongly influences infectious disease dynamics. Less well studied, however, is the impact of network structure on the effectiveness and efficiency of disease control strategies. In this talk, I will present an evaluation of partner management strategies for a hypothetical bacterial sexually transmitted infection (STI). I will compare the costs, disease outcomes, and cost-effectiveness of three partner management interventions (partner notification, expedited partner therapy, and contact tracing) in populations with the same average behavior, but configured according to different network structures. This case study demonstrates how network structure can influence both the effectiveness and the efficiency of infectious disease interventions, and illustrates the interplay between intervention capacity constraints, disease dynamics, and network connectivity patterns.
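
To make the comparison concrete, a minimal simulation sketch follows. It is not the model from the talk: the network generators (Erdos-Renyi versus Barabasi-Albert), the discrete-time SIS dynamics, and every parameter are assumptions chosen only to illustrate how populations with the same average contact rate but different network structures can produce different epidemic outcomes.

```python
# Illustrative sketch only: all parameters, the network generators, and the
# simple discrete-time SIS dynamics are assumptions for demonstration, not
# the model presented in the talk.
import random

import networkx as nx

def simulate_sis(G, beta=0.05, gamma=0.2, steps=200, n_seed=5, rng=None):
    """Run a discrete-time SIS epidemic on G; return final prevalence."""
    rng = rng or random.Random(42)
    infected = set(rng.sample(list(G.nodes), n_seed))
    for _ in range(steps):
        nxt = set(infected)
        for v in infected:
            # Each infected node transmits independently along each contact.
            for u in G.neighbors(v):
                if u not in infected and rng.random() < beta:
                    nxt.add(u)
            # Recovery returns the node to the susceptible pool.
            if rng.random() < gamma:
                nxt.discard(v)
        infected = nxt
    return len(infected) / G.number_of_nodes()

n, mean_degree = 2000, 6
# Same average degree, different connectivity patterns.
er = nx.gnp_random_graph(n, mean_degree / (n - 1), seed=1)   # homogeneous
ba = nx.barabasi_albert_graph(n, mean_degree // 2, seed=1)   # heavy-tailed hubs

for name, G in [("Erdos-Renyi", er), ("Barabasi-Albert", ba)]:
    print(name, "final prevalence:", simulate_sis(G))
```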


Stephen L. Hillis, PhD

In radiologic diagnostic imaging studies, the goal is typically to compare the performance of readers (usually radiologists) across two or more tests or modalities (e.g., digital versus film mammograms, or CT versus MRI) to determine which performs better.  The most common design for such studies is a paired design in which each reader assigns confidence-of-disease ratings to the same images using each test, with reader-performance outcomes estimated by functions of the estimated receiver operating characteristic (ROC) curve.  Examples of such reader-performance measures are the sensitivity achieved at a given specificity and the area under the ROC curve (AUC), which estimates the probability of correctly discriminating between a randomly chosen pair of normal and abnormal images.
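
Because the AUC equals the probability that a randomly chosen abnormal image receives a higher rating than a randomly chosen normal image, the empirical AUC can be computed directly from pairwise rating comparisons (the Mann-Whitney form). The following minimal Python sketch, with invented example ratings, makes that interpretation concrete.

```python
import numpy as np

def empirical_auc(abnormal_ratings, normal_ratings):
    """Empirical AUC: the fraction of (abnormal, normal) image pairs in which
    the abnormal image gets the higher confidence-of-disease rating, with
    ties counted as one-half (the Mann-Whitney form of the AUC)."""
    a = np.asarray(abnormal_ratings, dtype=float)[:, None]
    b = np.asarray(normal_ratings, dtype=float)[None, :]
    return float(np.mean((a > b) + 0.5 * (a == b)))

# Invented 5-point confidence-of-disease ratings for one reader on one test.
print(empirical_auc([4, 5, 3, 5, 2], [1, 2, 3, 1, 4]))
```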

Typically the researcher wants to account for two sources of variation in these studies: variation across patients and variation across readers. Although there are standard statistical methods for accounting for multiple sources of variation, these imaging studies present a unique challenge for the statistician because the outcome of interest, the reader-performance measure, is not indexed by case.  Thus, for example, a conventional linear or generalized linear mixed model with reader and patient treated as random effects cannot be used. Presently, the standard analysis approach is the Obuchowski-Rockette method.  I will present an introduction to the Obuchowski-Rockette method, describe its present level of development, and outline future areas of research.
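
For orientation, a sketch of the Obuchowski-Rockette model as it is commonly written in the ROC literature follows (standard notation, not necessarily the exact form presented in the talk): the reader-level performance estimate itself is the outcome, and the case-level variation that cannot be indexed enters through the correlation structure of the error term.

```latex
% Obuchowski-Rockette model, standard presentation (the talk may use a more
% general form). \hat{\theta}_{ij} is the performance estimate (e.g., the
% empirical AUC) for test i and reader j:
\hat{\theta}_{ij} = \mu + \tau_i + R_j + (\tau R)_{ij} + \epsilon_{ij}
% Here R_j and (\tau R)_{ij} are random reader and test-by-reader effects,
% and the errors \epsilon_{ij} have common variance \sigma^2_{\epsilon} with
% covariances Cov_1 (same reader, different tests), Cov_2 (different readers,
% same test), and Cov_3 (different readers, different tests), typically
% estimated from the case-level data by jackknife or bootstrap resampling.
```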


Barry D. Nussbaum, PhD

Data are quite important.  And with big data, there are more and more data elements to contend with.  The 3 V’s of big data (velocity, volume, and variety) attest to this.  But are all data created equal?  NO.  So the statistician has an ongoing and increasingly important role in assuring that relevant, representative data are being analyzed.  This talk will discuss where data analytics meets statistics and some of the great potential and, yes, the pitfalls of deriving useful information from all those data.  It also includes examples from the author’s real-life experience involving court cases and presentations for the President.

Key words: Big data, analytics, statistician’s role, pitfalls

Ying Zhang, PhD

In many longitudinal studies, outcomes are assessed on time scales anchored by certain clinical events. When the anchoring events are unobserved, the study timeline becomes undefined, and traditional longitudinal analysis loses its temporal reference. We consider analytical situations in which the anchoring events are interval censored. We show that by expressing the regression parameter estimators as stochastic functionals of a plug-in estimate of the unknown anchoring event distribution, standard longitudinal models can be modified and extended to accommodate the less well-defined time scale. This extension enhances the existing tools for longitudinal data analysis. Under mild regularity conditions, we show that for a broad class of models, including the frequently used generalized mixed-effects models, the functional parameter estimates are consistent and asymptotically normally distributed with an n^{1/2} convergence rate. For implementation, we developed a hybrid computational procedure combining the strengths of the Fisher scoring method and the expectation-maximization (EM) algorithm. We conducted a simulation study to validate the asymptotic properties and to examine the finite-sample performance of the proposed method. A real data analysis is used to illustrate the proposed method.
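
One way to write the plug-in construction in generic notation is sketched below; the notation is illustrative only (it is not taken from the talk, whose exact formulation may differ).

```latex
% Illustrative notation only, not the speaker's. Let S_i denote subject i's
% unobserved anchoring event time, known only to lie in an interval
% (L_i, R_i], and let \hat{F} be a plug-in estimate of the anchoring event
% distribution F computed from the interval-censored data. The regression
% parameter is then estimated as a stochastic functional of \hat{F}:
\hat{\beta} = \beta(\hat{F})
  = \arg\max_{\beta} \sum_{i=1}^{n} \int \ell_i(\beta; s)\, d\hat{F}(s \mid L_i, R_i)
% where \ell_i(\beta; s) is subject i's longitudinal log-likelihood evaluated
% as if the anchoring event occurred at time s, and \hat{F}(\cdot \mid L_i, R_i)
% conditions on the observed censoring interval. Under the stated regularity
% conditions, \sqrt{n}\,(\hat{\beta} - \beta_0) is asymptotically mean-zero normal.
```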

Uche Nwoke and Nina Kim

Uche Nwoke
Diagnostics for Data Quality and Stability in Undergraduate Predictive Models  (Advisor: Grant Brown)

Nina Kim
Identifying At-Risk Geographic Regions of Stillbirth Events in Iowa Using a Bayesian Poisson Point Process Analysis  (Advisor: Jacob Oleson)

Melissa Jay, Elliot Burghardt and Evan Walser-Kuntz

Melissa Jay
Estimating Lung Cancer Mortality Rates in U.S. Counties using Bayesian Hierarchical Poisson Regression Models (Advisor: Jacob Oleson)

Elliot Burghardt
Reproducibility of Living Data – Validation of Published Research Using the Parkinson’s Progression Marker Initiative Living Database (Advisor: Christopher Coffey)

Evan Walser-Kuntz
Evaluation of Glaucoma Change Probability Methods on Clinical Data (Advisor: Gideon Zamba)


Eva Petkova, PhD

In a randomized clinical trial (RCT), it is often of interest not only to estimate the effect of various treatments on the outcome, but also to determine whether any patient characteristic has a different relationship with the outcome depending on treatment. In regression models for the outcome, if there is a non-zero interaction between the treatment indicator and a predictor, that predictor is called an “effect modifier”. Identification of such effect modifiers is crucial as we move towards precision medicine, that is, optimizing individual treatment assignment based on a patient’s assessment when he or she presents for treatment. In most settings, there will be several baseline predictor variables that could potentially modify the treatment effects. I will introduce optimal methods of constructing a composite variable (defined as a linear combination of pre-treatment patient characteristics) in order to generate a strong effect modifier in an RCT setting. This is a parsimonious alternative to existing methods for developing individualized treatment decision rules: it combines baseline covariates into a single strong moderator of the treatment effect, called a Generated Effect Modifier (GEM). The GEM can be constructed and used in the framework of the classic linear model. While the meaning and characteristics of “a moderator” of the treatment effect are well understood when the outcome is linearly related to a predictor, the meaning is less obvious when the outcome is related to the predictors nonlinearly. A GEM for a flexible nonlinear model is presented as well. I will discuss similarities between the GEM approach and single index models (SIM). I will present an illustration using data from an RCT designed to discover biosignatures for treatment response to antidepressants.
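
As a rough sketch of the linear-model version of this idea, the following Python code searches for unit-norm covariate weights whose composite best explains the outcome through main-effect and treatment-interaction terms. This is a generic illustration of the concept, not the published GEM algorithm; the variable names, the optimization routine, and the simulated data are all assumptions.

```python
# Generic illustration of building a composite effect modifier in the linear
# model; not the published GEM algorithm. Names, data, and the optimizer
# are assumptions made for this sketch.
import numpy as np
from scipy.optimize import minimize

def fit_rss(gamma, X, t, y):
    """RSS of the model y ~ 1 + t + z + t*z with composite z = X @ gamma."""
    z = X @ gamma
    design = np.column_stack([np.ones_like(y), t, z, t * z])
    coef, *_ = np.linalg.lstsq(design, y, rcond=None)
    resid = y - design @ coef
    return float(resid @ resid)

def generated_effect_modifier(X, t, y):
    """Search for unit-norm weights whose composite z = X @ gamma best
    explains the outcome through main-effect and interaction terms."""
    def obj(g):
        nrm = np.linalg.norm(g)
        return np.inf if nrm < 1e-12 else fit_rss(g / nrm, X, t, y)
    res = minimize(obj, x0=np.ones(X.shape[1]), method="Nelder-Mead")
    return res.x / np.linalg.norm(res.x)

# Hypothetical data: 200 patients, 3 baseline covariates, binary treatment.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
t = rng.integers(0, 2, size=200).astype(float)
z_true = X @ np.array([0.8, -0.6, 0.0])  # true composite modifier
y = 1.0 + 0.5 * t + z_true + 1.5 * t * z_true + rng.normal(size=200)
print(generated_effect_modifier(X, t, y))  # approx. (0.8, -0.6, 0.0) up to sign
```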


Yuko Y. Palesch, PhD

A multitude of NIH-supported networks now exists to maximize efficiency in developing, promoting, and conducting high-quality, multi-site clinical trials.  The efficiencies are gained from establishing and maintaining an infrastructure consisting of coordinating center(s) and multiple clinical sites that enroll patients into multiple, concurrently ongoing trials.

Currently, the National Institute of Neurological Disorders and Stroke (NINDS) funds four networks to conduct clinical trials, one of which is NeuroNEXT, managed by the Massachusetts General Hospital (for clinical coordination) and the University of Iowa (for statistics and data management).  The other three – Neurological Emergencies Treatment Trials (NETT), the Stroke Trials Network (NIH StrokeNet), and Strategies to Innovate Emergency Care Clinical Trials (SIREN) – are managed operationally (i.e., project management and network coordination) by the University of Michigan (NETT and SIREN) and the University of Cincinnati (StrokeNet). The Medical University of South Carolina (MUSC) acts as the statistics and data management center (SDMC) for all three.

The lecture briefly describes the three networks and presents the experiences of the Data Coordination Unit at MUSC in establishing and managing the SDMCs, comparing and contrasting the three networks with respect to the infrastructure and processes for developing projects, obtaining funding, and implementing clinical trials.  In addition, the expanded roles of the SDMC statisticians in the networks, above and beyond analyzing data, are discussed.

With each network, there have been growing pains, along with new rules and regulations to address and comply with (e.g., single IRBs, grant submission formats, etc.). The magnitude of the networks, as well as of the projects, has also posed some challenges (“trials”).  Nevertheless, efficiencies in initiating and managing clinical trials through the networks are emerging.  Furthermore, participating in the three networks has provided the MUSC statistical and information systems teams with great opportunities to be informed and innovative in designing and conducting clinical trials (“jubilation”).