Bayesian analyses have made important inroads in modern clinical research, due in part to the incorporation of traditional tools such as noninformative priors as well as modern innovations such as adaptive randomization and predictive power. Presenting an introductory perspective on modern Bayesian procedures, Elementary Bayesian Biostatistics explores Bayesian principles and illustrates their application to healthcare research. Building on the basics of classic biostatistics and algebra, this easy-to-read book provides a clear overview of the subject. It focuses on the history and mathematical foundation of Bayesian procedures before discussing their implementation in healthcare research from first principles. The author also elaborates on the current controversies between Bayesian and frequentist biostatisticians. The book concludes with recommendations for Bayesians to improve their standing in the clinical trials community. Calculus derivations are relegated to the appendices so as not to overly complicate the main text. As Bayesian methods gain more acceptance in healthcare, it is necessary for clinical scientists to understand Bayesian principles. Applying Bayesian analyses to modern healthcare research issues, this lucid introduction helps readers make the correct choices in the development of clinical research programs.
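The conjugate Beta-binomial update is the simplest concrete example of the Bayesian machinery such a book introduces. The sketch below (with hypothetical trial numbers, not an example from the book) updates a noninformative Beta(1, 1) prior with binomial data:

```python
# Illustrative Beta-binomial update with a noninformative prior.
# With s successes in n trials and a Beta(1, 1) (uniform) prior,
# conjugacy gives a Beta(1 + s, 1 + n - s) posterior.
s, n = 14, 20                            # hypothetical: 14 responders of 20
a_post, b_post = 1 + s, 1 + n - s        # posterior Beta parameters
post_mean = a_post / (a_post + b_post)   # posterior mean response rate
```

The posterior mean (15/22 ≈ 0.68) shrinks the observed rate of 0.70 slightly toward the prior mean of 0.5, a pattern that holds for any conjugate Beta update.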
Interval-Censored Time-to-Event Data: Methods and Applications collects the most recent techniques, models, and computational tools for interval-censored time-to-event data. Top biostatisticians from academia, biopharmaceutical industries, and government agencies discuss how these advances are impacting clinical trials and biomedical research. Divided into three parts, the book begins with an overview of interval-censored data modeling, including nonparametric estimation, survival functions, regression analysis, multivariate data analysis, competing risks analysis, and other models for interval-censored data. The next part presents interval-censored methods for current status data, Bayesian semiparametric regression analysis of interval-censored data with monotone splines, Bayesian inferential models for interval-censored data, an estimator for identifying the causal effect of a treatment, and consistent variance estimation for interval-censored data. In the final part, the contributors use Monte Carlo simulation to assess biases in progression-free survival analysis and to correct bias in interval-censored time-to-event applications. They also present adaptive decision-making methods to optimize the rapid treatment of stroke, explore practical issues in using weighted logrank tests, and describe how to use two R packages. A practical guide for biomedical researchers, clinicians, biostatisticians, and graduate students in biostatistics, this volume covers the latest developments in the analysis and modeling of interval-censored time-to-event data. It shows how up-to-date statistical methods are used in biopharmaceutical and public health applications.
Author: Kelly H. Zou
Publisher: CRC Press
Release Date: 2016-04-19
Statistical evaluation of diagnostic performance in general, and Receiver Operating Characteristic (ROC) analysis in particular, is important for assessing the performance of medical tests and statistical classifiers, as well as for evaluating predictive models or algorithms. This book presents innovative approaches in ROC analysis, which are relevant to a wide variety of applications, including medical imaging, cancer research, epidemiology, and bioinformatics. Statistical Evaluation of Diagnostic Performance: Topics in ROC Analysis covers areas including monotone-transformation techniques in parametric ROC analysis, ROC methods for combined and pooled biomarkers, Bayesian hierarchical transformation models, sequential designs and inferences in the ROC setting, predictive modeling, multireader ROC analysis, and free-response ROC (FROC) methodology. The book is suitable for graduate-level students and researchers in statistics, biostatistics, epidemiology, public health, biomedical engineering, radiology, medical imaging, biomedical informatics, and other closely related fields. Additionally, clinical researchers and practicing statisticians in academia, industry, and government could benefit from the presentation of such important and yet frequently overlooked topics.
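A useful anchor for the nonparametric side of ROC methodology is the identity between the empirical AUC and the Mann-Whitney statistic: the area equals the probability that a randomly chosen diseased score exceeds a randomly chosen healthy one, with ties counted as one half. A minimal sketch on hypothetical scores (not code from the book):

```python
from itertools import product

def empirical_auc(diseased, healthy):
    """Empirical AUC as the Mann-Whitney probability that a diseased
    score exceeds a healthy score; ties count as 1/2."""
    pairs = list(product(diseased, healthy))
    wins = sum(1.0 if d > h else 0.5 if d == h else 0.0 for d, h in pairs)
    return wins / len(pairs)

auc = empirical_auc([3.1, 2.7, 4.0, 3.6], [1.9, 2.7, 2.2, 3.0])  # hypothetical scores
```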
Author: Robert G. Newcombe
Publisher: CRC Press
Release Date: 2012-08-25
Confidence Intervals for Proportions and Related Measures of Effect Size illustrates the use of effect size measures and corresponding confidence intervals as more informative alternatives to the most basic and widely used significance tests. The book provides you with a deep understanding of what happens when these statistical methods are applied in situations far removed from the familiar Gaussian case. Drawing on his extensive work as a statistician and professor at Cardiff University School of Medicine, the author brings together methods for calculating confidence intervals for proportions and several other important measures, including differences, ratios, and nonparametric effect size measures generalizing Mann-Whitney and Wilcoxon tests. He also explains three important approaches to obtaining intervals for related measures. Many examples illustrate the application of the methods in the health and social sciences. Requiring little computational skill, the book offers user-friendly Excel spreadsheets for download at www.crcpress.com, enabling you to easily apply the methods to your own empirical data.
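Among the intervals covered in this literature, the Wilson score interval is a standard improvement over the naive Wald interval for a single proportion. A minimal sketch of the textbook formula (with hypothetical data; this is not the book's spreadsheets):

```python
from math import sqrt

def wilson_interval(successes, n, z=1.96):
    """Wilson score interval for a binomial proportion
    (z = 1.96 gives a 95% interval)."""
    p = successes / n
    denom = 1 + z ** 2 / n
    centre = (p + z ** 2 / (2 * n)) / denom
    half = (z / denom) * sqrt(p * (1 - p) / n + z ** 2 / (4 * n ** 2))
    return centre - half, centre + half

lo, hi = wilson_interval(8, 10)  # e.g., 8 successes in 10 trials
```

Unlike the Wald interval, these limits never fall outside [0, 1] and remain sensible even when the observed proportion is 0 or 1.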
Written by a biostatistics expert with over 20 years of experience in the field, Bayesian Methods in Epidemiology presents statistical methods used in epidemiology from a Bayesian viewpoint. It employs the software package WinBUGS to carry out the analyses and offers the code in the text and for download online. The book examines study designs that investigate the association between exposure to risk factors and the occurrence of disease. It covers introductory adjustment techniques to compare mortality between states and regression methods to study the association between various risk factors and disease, including logistic regression, simple and multiple linear regression, categorical/ordinal regression, and nonlinear models. The text also introduces a Bayesian approach for the estimation of survival by life tables and illustrates other approaches to estimate survival, including a parametric model based on the Weibull distribution and the Cox proportional hazards (nonparametric) model. Using Bayesian methods to estimate the lead time of the screening modality, the author explains how to screen for a disease among individuals who do not exhibit any symptoms of the disease. With many examples and end-of-chapter exercises, this book is the first to introduce epidemiology from a Bayesian perspective. It shows epidemiologists how these Bayesian models and techniques are useful in studying the association between disease and exposure to risk factors.
Praise for the Second Edition: "... this is a useful, comprehensive compendium of almost every possible sample size formula. The strong organization and carefully defined formulae will aid any researcher designing a study." – Biometrics "This impressive book contains formulae for computing sample size in a wide range of settings. One-sample studies and two-sample comparisons for quantitative, binary, and time-to-event outcomes are covered comprehensively, with separate sample size formulae for testing equality, non-inferiority, and equivalence. Many less familiar topics are also covered ..." – Journal of the Royal Statistical Society

Sample Size Calculations in Clinical Research, Third Edition presents statistical procedures for performing sample size calculations during various phases of clinical research and development. A comprehensive and unified presentation of statistical concepts and practical applications, this book includes a well-balanced summary of current and emerging clinical issues, regulatory requirements, and recently developed statistical methodologies for sample size calculation.

Features:
- Compares the relative merits and disadvantages of statistical methods for sample size calculations
- Explains how the formulae and procedures for sample size calculations can be used in a variety of clinical research and development stages
- Presents real-world examples from several therapeutic areas, including cardiovascular medicine, the central nervous system, anti-infective medicine, oncology, and women’s health
- Provides sample size calculations for dose response studies, microarray studies, and Bayesian approaches

This new edition is updated throughout, includes many new sections, and adds five new chapters on emerging topics: two-stage seamless adaptive designs, cluster randomized trial design, the zero-inflated Poisson distribution, clinical trials with extremely low incidence rates, and clinical trial simulation.
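To make the flavor of such formulae concrete, the most basic case is the per-arm sample size for a two-sided test of equality of two means, n = 2(z_{1-α/2} + z_{1-β})²σ²/δ². A minimal sketch of this generic textbook formula (not the book's notation), with quantiles hard-coded for α = 0.05 and 80% power:

```python
from math import ceil

def n_per_group(sigma, delta, z_alpha=1.96, z_beta=0.8416):
    """Per-arm sample size for a two-sided test of equality of two
    means; defaults correspond to alpha = 0.05 and 80% power."""
    return ceil(2 * (z_alpha + z_beta) ** 2 * sigma ** 2 / delta ** 2)

n = n_per_group(sigma=1.0, delta=0.5)  # standardized effect size 0.5
```

With a standardized effect of 0.5 this gives the familiar 63 patients per arm; halving the detectable difference roughly quadruples the requirement.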
Get Up to Speed on Many Types of Adaptive Designs

Since the publication of the first edition, there have been remarkable advances in the methodology and application of adaptive trials. Incorporating many of these new developments, Adaptive Design Theory and Implementation Using SAS and R, Second Edition offers a detailed framework to understand the use of various adaptive design methods in clinical trials.

New to the Second Edition:
- Twelve new chapters covering blinded and semi-blinded sample size reestimation design, pick-the-winners design, biomarker-informed adaptive design, Bayesian designs, adaptive multiregional trial design, SAS and R for group sequential design, and much more
- More analytical methods for K-stage adaptive designs, multiple-endpoint adaptive design, survival modeling, and adaptive treatment switching
- New material on sequential parallel designs with rerandomization and the skeleton approach in adaptive dose-escalation trials
- Twenty new SAS macros and R functions
- Enhanced end-of-chapter problems that give readers hands-on practice addressing issues encountered in designing real-life adaptive trials

Covering even more adaptive designs, this book provides biostatisticians, clinical scientists, and regulatory reviewers with up-to-date details on this innovative area in pharmaceutical research and development. Practitioners will be able to improve the efficiency of their trial design, thereby reducing the time and cost of drug development.
Bayesian Modeling in Bioinformatics discusses the development and application of Bayesian statistical methods for the analysis of high-throughput bioinformatics data arising from problems in molecular and structural biology and disease-related medical research, such as cancer. It presents a broad overview of statistical inference, clustering, and classification problems in two main high-throughput platforms: microarray gene expression and phylogenetic analysis. The book explores Bayesian techniques and models for detecting differentially expressed genes, classifying differential gene expression, and identifying biomarkers. It develops novel Bayesian nonparametric approaches for bioinformatics problems, measurement error and survival models for cDNA microarrays, a Bayesian hidden Markov modeling approach for CGH array data, Bayesian approaches for phylogenetic analysis, sparsity priors for protein-protein interaction predictions, and Bayesian networks for gene expression data. The text also describes applications of mode-oriented stochastic search algorithms, in vitro to in vivo factor profiling, proportional hazards regression using Bayesian kernel machines, and QTL mapping. Focusing on design, statistical inference, and data analysis from a Bayesian perspective, this volume explores statistical challenges in bioinformatics data analysis and modeling and offers solutions to these problems. It encourages readers to draw on the evolving technologies and to promote statistical development in this area of bioinformatics.
Reliably optimizing a new treatment in humans is a critical first step in clinical evaluation since choosing a suboptimal dose or schedule may lead to failure in later trials. At the same time, if promising preclinical results do not translate into a real treatment advance, it is important to determine this quickly and terminate the clinical evaluation process to avoid wasting resources. Bayesian Designs for Phase I–II Clinical Trials describes how phase I–II designs can serve as a bridge or protective barrier between preclinical studies and large confirmatory clinical trials. It illustrates many of the severe drawbacks with conventional methods used for early-phase clinical trials and presents numerous Bayesian designs for human clinical trials of new experimental treatment regimes. The first two chapters minimize the technical language to make them accessible to non-statisticians. These chapters discuss the severe drawbacks of the conventional paradigm used for early-phase clinical trials and explain the phase I–II paradigm for optimizing dose, or more general treatment regimes, based on both efficacy and toxicity. The remainder of the book covers a wide variety of clinical trial methodologies, including designs to optimize the dose pair of a two-drug combination, jointly optimize dose and schedule, identify optimal personalized doses, optimize novel molecularly targeted agents, and choose doses in two treatment cycles. Written by research leaders from the University of Texas MD Anderson Cancer Center, this book shows how Bayesian designs for early-phase clinical trials can explore, refine, and optimize new experimental treatments. It emphasizes the importance of basing decisions on both efficacy and toxicity.
Although the popularity of the Bayesian approach to statistics has been growing for years, many still think of it as somewhat esoteric, not focused on practical issues, or generally too difficult to understand. Bayesian Analysis Made Simple is aimed at those who wish to apply Bayesian methods but either are not experts or do not have the time to create WinBUGS code and ancillary files for every analysis they undertake. Accessible to even those who would not routinely use Excel, this book provides a custom-made Excel GUI, immediately useful to those users who want to be able to quickly apply Bayesian methods without being distracted by computing or mathematical issues. From simple NLMs to complex GLMMs and beyond, Bayesian Analysis Made Simple describes how to use Excel for a vast range of Bayesian models in an intuitive manner accessible to the statistically savvy user. Packed with relevant case studies, this book is for any data analyst wishing to apply Bayesian methods to analyze their data, from professional statisticians to statistically aware scientists.
There are numerous advantages to using Bayesian methods in diagnostic medicine, which is why they are employed more and more today in clinical studies. Exploring Bayesian statistics at an introductory level, Bayesian Biostatistics and Diagnostic Medicine illustrates how to apply these methods to solve important problems in medicine and biology. After focusing on the wide range of areas where diagnostic medicine is used, the book introduces Bayesian statistics and the estimation of accuracy by sensitivity, specificity, and positive and negative predictive values for ordinal and continuous diagnostic measurements. The author then discusses patient covariate information and the statistical methods for estimating the agreement among observers. The book also explains the protocol review process for cancer clinical trials, how tumor responses are categorized, how to use WHO and RECIST criteria, and how Bayesian sequential methods are employed to monitor trials and estimate sample sizes. With many tables and figures, this book enables readers to conduct a Bayesian analysis for a large variety of interesting and practical biomedical problems.
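The predictive values mentioned above follow from sensitivity, specificity, and prevalence by a direct application of Bayes' theorem. A minimal sketch with hypothetical accuracy figures:

```python
def predictive_values(sens, spec, prev):
    """Positive and negative predictive values via Bayes' theorem."""
    ppv = sens * prev / (sens * prev + (1 - spec) * (1 - prev))
    npv = spec * (1 - prev) / (spec * (1 - prev) + (1 - sens) * prev)
    return ppv, npv

# Hypothetical test: 90% sensitive, 95% specific, 10% disease prevalence
ppv, npv = predictive_values(sens=0.90, spec=0.95, prev=0.10)
```

Even with these strong accuracy figures, the low prevalence pulls the PPV down to about two thirds, which is why predictive values must always be read against the prevalence in the population tested.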
Author: Scott M. Berry
Publisher: CRC Press
Release Date: 2010-07-19
Already popular in the analysis of medical device trials, adaptive Bayesian designs are increasingly being used in drug development for a wide variety of diseases and conditions, from Alzheimer’s disease and multiple sclerosis to obesity, diabetes, hepatitis C, and HIV. Written by leading pioneers of Bayesian clinical trial designs, Bayesian Adaptive Methods for Clinical Trials explores the growing role of Bayesian thinking in the rapidly changing world of clinical trial analysis. The book first summarizes the current state of clinical trial design and analysis and introduces the main ideas and potential benefits of a Bayesian alternative. It then gives an overview of basic Bayesian methodological and computational tools needed for Bayesian clinical trials. With a focus on Bayesian designs that achieve good power and Type I error control, the next chapters present Bayesian tools useful in early (Phase I) and middle (Phase II) clinical trials as well as two recent Bayesian adaptive Phase II studies: the BATTLE and I-SPY 2 trials. In the following chapter on late (Phase III) studies, the authors emphasize modern adaptive methods and seamless Phase II–III trials for maximizing information usage and minimizing trial duration. They also describe a case study of a recently approved medical device to treat atrial fibrillation. The concluding chapter covers key special topics, such as the proper use of historical data, equivalence studies, and subgroup analysis. For readers involved in clinical trials research, this book significantly updates and expands their statistical toolkits. The authors provide many detailed examples drawing on real data sets. The R and WinBUGS codes used throughout are available on supporting websites. Scott Berry talks about the book on the CRC Press YouTube Channel.
Author: Karl E. Peace
Publisher: CRC Press
Release Date: 2009-04-23
Using time-to-event analysis methodology requires careful definition of the event and censored observations, provision of adequate follow-up and a sufficient number of events, and independence or "noninformativeness" of the censoring mechanisms relative to the event. Design and Analysis of Clinical Trials with Time-to-Event Endpoints provides a thorough presentation of the design, monitoring, analysis, and interpretation of clinical trials in which time-to-event is of critical interest. After reviewing time-to-event endpoint methodology, clinical trial issues, and the design and monitoring of clinical trials, the book focuses on inferential analysis methods, including parametric, semiparametric, categorical, and Bayesian methods; an alternative to the Cox model for small samples; and estimation and testing for change in hazard. It then presents descriptive and graphical methods useful in the analysis of time-to-event endpoints. The next several chapters explore a variety of clinical trials, from analgesic, antibiotic, and antiviral trials to cardiovascular and cancer prevention, prostate cancer, astrocytoma brain tumor, and chronic myelogenous leukemia trials. The book then covers areas of drug development, medical practice, and safety assessment. It concludes with the design and analysis of clinical trials of animals required by the FDA for new drug applications. Drawing on the expert contributors’ experiences working in biomedical research and clinical drug development, this comprehensive resource covers an array of time-to-event methods and explores an assortment of real-world applications.
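The best-known descriptive method for such endpoints is the Kaplan-Meier estimator, which steps the survival curve down at each observed event time while letting censored observations leave the risk set. A bare-bones sketch on toy data (not an example from the book):

```python
def kaplan_meier(times, events):
    """Kaplan-Meier survival estimate; events[i] is 1 for an observed
    event at times[i] and 0 for a censored observation. Returns
    (time, S(t)) pairs at each distinct event time."""
    order = sorted(range(len(times)), key=lambda i: times[i])
    at_risk, surv, curve = len(times), 1.0, []
    i = 0
    while i < len(order):
        t = times[order[i]]
        d = removed = 0                      # events and subjects leaving at t
        while i < len(order) and times[order[i]] == t:
            d += events[order[i]]
            removed += 1
            i += 1
        if d:
            surv *= 1 - d / at_risk          # step down at event times only
            curve.append((t, surv))
        at_risk -= removed
    return curve

curve = kaplan_meier([1, 2, 2, 3, 5], [1, 1, 0, 1, 0])  # toy data
```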
Fundamental Concepts for New Clinical Trialists describes the core scientific concepts of designing, data monitoring, analyzing, and reporting clinical trials as well as the practical aspects of trials not typically discussed in statistical methodology textbooks. The first section of the book provides background information about clinical trials. It defines and compares clinical trials to other types of research studies and discusses clinical trial phases, registration, the protocol document, ethical issues, product development, and regulatory processes. It also includes a special chapter outlining the valuable attributes that statisticians can develop to maximize their contributions to a clinical trial. The second section examines scientific issues faced in each progressive step of a clinical trial. It covers issues in trial design, such as randomization, blinding, control-group selection, endpoint selection, superiority versus noninferiority, and parallel group versus crossover designs; data monitoring; analyses of efficacy, safety, and benefit-risk; and the reporting/publication of clinical trial results. As clinical trials remain the gold standard research studies for evaluating the effects of a medical intervention, newcomers to the field must have a fundamental understanding of the concepts to tackle real-world issues in all stages of trials. Drawing on their experiences in academia and industry, the authors provide a foundation for understanding the fundamental concepts necessary for working in clinical trials.
As clinicians begin to realize the important role of dose-finding in the drug development process, there is an increasing openness to "novel" methods proposed in the past two decades. In particular, the Continual Reassessment Method (CRM) and its variations have drawn much attention in the medical community, though the method has yet to become a commonplace tool. To overcome the status quo in phase I clinical trials, statisticians must be able to design trials using the CRM in a timely and reproducible manner.

A self-contained theoretical framework of the CRM for researchers and graduate students who set out to learn and do research in the CRM and dose-finding methods in general, Dose Finding by the Continual Reassessment Method features:
- Real clinical trial examples that illustrate the methods and techniques throughout the book
- Detailed calibration techniques that enable biostatisticians to design a CRM in a timely manner
- An outline of the CRM's limitations to aid in correct use of the method

This book supplies practical, efficient dose-finding methods based on cutting-edge statistical research. More than just a cookbook, it provides full, unified coverage of the CRM in addition to step-by-step guidelines for automating and parameterizing the methods used on a regular basis. A detailed exposition of the calibration of the CRM for applied statisticians working with dose-finding in phase I trials, the book focuses on the R package ‘dfcrm’ for the CRM and its major variants. The author recognizes clinicians’ skepticism of model-based designs and addresses their concerns that the time, professional, and computational resources necessary for accurate model-based designs can be major bottlenecks to the widespread use of appropriate dose-finding methods in phase I practice.
The theoretically and empirically based methods in Dose Finding by the Continual Reassessment Method will lessen the statistician’s burden and encourage the continuing development and implementation of model-based dose-finding methods.
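To indicate what a CRM calculation involves, the sketch below implements the one-parameter power model on a discrete grid: dose i has toxicity probability skeleton_i ** exp(a), the prior on a is normal with standard deviation 1.34, and the next recommended dose is the one whose posterior-mean toxicity probability is closest to the target. This is a bare-bones illustration of the general approach with hypothetical numbers; it is not the interface or the calibration of the ‘dfcrm’ package.

```python
from math import exp

def crm_next_dose(skeleton, tox, n, target=0.25):
    """One-parameter power-model CRM on a grid: P(tox at dose i) =
    skeleton[i] ** exp(a), with a normal(0, 1.34**2) prior on a.
    tox/n give toxicities and patients treated per dose so far."""
    grid = [i / 100.0 - 3.0 for i in range(601)]   # a in [-3, 3]
    post = []
    for a in grid:
        w = exp(-a * a / (2 * 1.34 ** 2))          # unnormalized prior density
        for sk, t, m in zip(skeleton, tox, n):
            p = sk ** exp(a)
            w *= p ** t * (1 - p) ** (m - t)       # binomial likelihood
        post.append(w)
    z = sum(post)
    # Posterior-mean toxicity probability at each dose
    p_hat = [sum(w / z * sk ** exp(a) for a, w in zip(grid, post))
             for sk in skeleton]
    # Recommend the dose whose estimated toxicity is closest to the target
    return min(range(len(skeleton)), key=lambda i: abs(p_hat[i] - target))

# Hypothetical trial state: one toxicity among three patients at dose 3
dose = crm_next_dose([0.05, 0.12, 0.25, 0.40], tox=[0, 0, 1, 0], n=[3, 3, 3, 0])
```

Real CRM designs add the calibration steps and safety constraints (such as no dose skipping) that the book develops in detail.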