Following the chronological development of sample surveys, this book provides an analysis of the mathematical and statistical theory of the subject. The text begins with the mathematics of randomized sampling designs, as well as a general treatment of the estimation of population totals through the Horvitz-Thompson estimator and its variants. The book then examines approximations and limit theorems for the distribution of the estimators, and design-based estimation of other population quantities. It concludes with chapters concerning inference from surveys. Theory of Sample Surveys will assist in a range of applications, including auditing, quality monitoring, market research, wildlife surveys, mining exploration, agriculture and business surveys, and population health studies. This book acts as an exceptional resource for survey methodologists in government organizations as well as lecturers and graduate students in statistics and biostatistics.
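As a small illustration of the Horvitz-Thompson estimator mentioned above, the population total is estimated by weighting each sampled value by the inverse of its inclusion probability. The sample values and inclusion probabilities below are invented for the example:

```python
import numpy as np

# Hypothetical sample: observed values y_i and their first-order
# inclusion probabilities pi_i under the sampling design.
y = np.array([12.0, 7.5, 20.0, 3.2])
pi = np.array([0.10, 0.05, 0.20, 0.05])

# Horvitz-Thompson estimator of the population total:
# T_hat = sum_i y_i / pi_i  (each unit weighted by 1 / pi_i)
t_hat = np.sum(y / pi)
print(t_hat)  # -> 434.0
```

The estimator is design-unbiased for the population total whatever the values y_i, provided every unit has a known, nonzero inclusion probability.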
Assuming a basic knowledge of the frequentist approach to finite population sampling, Bayesian Methods for Finite Population Sampling describes Bayesian and predictive approaches to inferential problems with an emphasis on the likelihood principle. The authors demonstrate that a variety of levels of prior information can be used in survey sampling in a Bayesian manner. Situations considered range from a noninformative Bayesian justification of standard frequentist methods, when the only prior information available is the belief in the exchangeability of the units, to a full-fledged Bayesian model. Intended primarily for graduate students and researchers in finite population sampling, this book will also be of interest to statisticians who use sampling and lecturers and researchers in general statistics and biostatistics.
Author: Raymond L. Chambers
Publisher: CRC Press
Release Date: 2012-05-02
Sample surveys provide data used by researchers in a large range of disciplines to analyze important relationships using well-established and widely used likelihood methods. The methods used to select samples often result in the sample differing in important ways from the target population, and standard application of likelihood methods can lead to biased and inefficient estimates. Maximum Likelihood Estimation for Sample Surveys presents an overview of likelihood methods for the analysis of sample survey data that account for the selection methods used, and includes all necessary background material on likelihood inference. It covers a range of data types, including multilevel data, and is illustrated by many worked examples using tractable and widely used models. It also discusses more advanced topics, such as combining data, non-response, and informative sampling. The book presents and develops a likelihood approach for fitting models to sample survey data. It explores and explains how the approach works in tractable though widely used models for which we can make considerable analytic progress. For less tractable models, numerical methods are ultimately needed to compute the score and information functions and to compute the maximum likelihood estimates of the model parameters. For these models, the book shows what has to be done conceptually to develop analyses to the point that numerical methods can be applied. Designed for statisticians who are interested in the general theory of statistics, Maximum Likelihood Estimation for Sample Surveys is also aimed at statisticians focused on fitting models to sample survey data, as well as researchers who study relationships among variables and whose sources of data include surveys.
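To make the role of the score and information functions concrete, here is a minimal sketch of Newton-Raphson maximum likelihood in a toy exponential model (not a model from the book; the data are invented). The closed-form MLE exists here, which lets us check the numerical answer:

```python
import numpy as np

# Illustrative data, assumed exponential with unknown rate lam.
x = np.array([0.8, 1.3, 0.4, 2.1, 0.9])
n = len(x)

def score(lam):
    # Derivative of the log-likelihood n*log(lam) - lam*sum(x)
    return n / lam - x.sum()

def information(lam):
    # Observed (= expected) information: minus the second derivative
    return n / lam**2

# Newton-Raphson on the score function: lam <- lam + score / information
lam = 1.0
for _ in range(25):
    lam += score(lam) / information(lam)

print(lam)  # converges to the closed-form MLE n / sum(x)
```

For less tractable models the score and information have no closed form, but the same iteration applies once they can be evaluated numerically.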
This book is devoted to biased sampling problems (also called choice-based sampling in econometrics parlance) and over-identified parameter estimation problems. Biased sampling problems appear in many areas of research, including medicine, epidemiology and public health, the social sciences, and economics. The book addresses a range of important topics, including case-control studies, causal inference, missing data problems, meta-analysis, renewal processes and length-biased sampling problems, capture-recapture problems, case-cohort studies, exponential tilting, genetic mixture models, and more. The goal of this book is to make it easier for Ph.D. students and new researchers to get started in this research area. It will be of interest to all those who work in the health, biological, social and physical sciences, as well as those who are interested in survey methodology and other areas of statistical science, among others.
Several years ago our statistical friends and relations introduced us to the work of Amari and Barndorff-Nielsen on applications of differential geometry to statistics. This book has arisen because we believe that there is a deep relationship between statistics and differential geometry, and moreover that this relationship uses parts of differential geometry, particularly its 'higher-order' aspects, not readily accessible to a statistical audience from the existing literature. It is, in part, a long reply to the frequent requests we have had for references on differential geometry! While we have not gone beyond the path-breaking work of Amari and Barndorff-Nielsen in the realm of applications, our book gives some new explanations of their ideas from a first principles point of view as far as geometry is concerned. In particular it seeks to explain why geometry should enter into parametric statistics, and how the theory of asymptotic expansions involves a form of higher-order differential geometry. The first chapter of the book explores exponential families as flat geometries. Indeed the whole notion of using log-likelihoods amounts to exploiting a particular form of flat space known as an affine geometry, in which straight lines and planes make sense, but lengths and angles are absent. We use these geometric ideas to introduce the notion of the second fundamental form of a family, whose vanishing characterises precisely the exponential families.
This venture aspires to serve both as a textbook at the undergraduate and postgraduate levels and as a monograph for researchers in the theoretical and practical aspects of survey sampling. It offers a comprehensive review of the useful material that has come before, with an eye to the developments that lie ahead.
Author: Ettore Lanzarone
Publisher: Springer Science & Business Media
Release Date: 2013-11-22
The first Bayesian Young Statisticians Meeting, BAYSM 2013, provided a unique opportunity for young researchers, M.S. students, Ph.D. students, and post-docs working in Bayesian statistics to connect with the Bayesian community at large, exchange ideas, and network with scholars in their field. The workshop, which took place on June 5-6, 2013 at CNR-IMATI, Milan, promoted further research in all the fields where Bayesian statistics may be employed, under the guidance of renowned plenary lecturers and senior discussants. A selection of the contributions to the meeting and the summary of one of the plenary lectures compose this volume.
Author: Timothy P. Johnson
Publisher: John Wiley & Sons
Release Date: 2014-10-13
A comprehensive guidebook to the current methodologies and practices used in health surveys. A unique and self-contained resource, Handbook of Health Survey Methods presents techniques necessary for confronting challenges that are specific to health survey research. The handbook guides readers through the development of sample designs, data collection procedures, and analytic methods for studies aimed at gathering health information on general and targeted populations. The book is organized into five well-defined sections: Design and Sampling Issues, Measurement Issues, Field Issues, Health Surveys of Special Populations, and Data Management and Analysis. Maintaining an easy-to-follow format, each chapter begins with an introduction, followed by an overview of the main concepts, theories, and applications associated with each topic. Finally, each chapter provides connections to relevant online resources for additional study and reference. The Handbook of Health Survey Methods features: 29 methodological chapters written by highly qualified experts in academia, research, and industry; a treatment of the best statistical practices and specific methodologies for collecting data from special populations such as sexual minorities, persons with disabilities, patients, and practitioners; discussions of issues specific to health research, including developing physical health and mental health measures, collecting information on sensitive topics, sampling for clinical trials, collecting biospecimens, working with proxy respondents, and linking health data to administrative and other external data sources; and numerous real-world examples from the latest research in the fields of public health, biomedicine, and health psychology. Handbook of Health Survey Methods is an ideal reference for academics, researchers, and practitioners who apply survey methods and analyze data in the fields of biomedicine, public health, epidemiology, and biostatistics.
The handbook is also a useful supplement for upper-undergraduate and graduate-level courses on survey methodology.
This four-volume reference work builds upon the success of past editions of Elsevier's Corrosion title (by Shreir, Jarman, and Burstein), covering the range of innovations and applications that have emerged in the years since its publication. Developed in partnership with experts from the Corrosion and Protection Centre at the University of Manchester, Shreir's Corrosion meets the research and productivity needs of engineers, consultants, and researchers alike. It incorporates coverage of all aspects of the corrosion phenomenon, from the science behind corrosion of metallic and non-metallic materials in liquids and gases to the management of corrosion in specific industries and applications; features cutting-edge topics such as medical applications, metal matrix composites, and corrosion modeling; and covers the benefits and limitations of techniques from scanning probes to electrochemical noise and impedance spectroscopy.
Filling a gap in current Bayesian theory, Statistical Inference: An Integrated Bayesian/Likelihood Approach presents a unified Bayesian treatment of parameter inference and model comparisons that can be used with simple diffuse prior specifications. This novel approach provides new solutions to difficult model comparison problems and offers direct Bayesian counterparts of frequentist t-tests and other standard statistical methods for hypothesis testing. After an overview of the competing theories of statistical inference, the book introduces the Bayes/likelihood approach used throughout. It presents Bayesian versions of one- and two-sample t-tests, along with the corresponding normal variance tests. The author then thoroughly discusses the use of the multinomial model and noninformative Dirichlet priors in "model-free" or nonparametric Bayesian survey analysis, before covering normal regression and analysis of variance. In the chapter on binomial and multinomial data, he gives alternatives, based on Bayesian analyses, to current frequentist nonparametric methods. The text concludes with new goodness-of-fit methods for assessing parametric models and a discussion of two-level variance component models and finite mixtures. Emphasizing the principles of Bayesian inference and Bayesian model comparison, this book develops a unique methodology for solving challenging inference problems. It also includes a concise review of the various approaches to inference.
Author: Richard Valliant
Publisher: Springer Science & Business Media
Release Date: 2013-05-16
Genre: Social Science
Survey sampling is fundamentally an applied field. The goal in this book is to put an array of tools at the fingertips of practitioners by explaining approaches long used by survey statisticians, illustrating how existing software can be used to solve survey problems, and developing some specialized software where needed. This book serves at least three audiences: (1) Students seeking a more in-depth understanding of applied sampling either through a second semester-long course or by way of a supplementary reference; (2) Survey statisticians searching for practical guidance on how to apply concepts learned in theoretical or applied sampling courses; and (3) Social scientists and other survey practitioners who desire insight into the statistical thinking and steps taken to design, select, and weight random survey samples. Several survey data sets are used to illustrate how to design samples, to make estimates from complex surveys for use in optimizing the sample allocation, and to calculate weights. Realistic survey projects are used to demonstrate the challenges and provide a context for the solutions. The book covers several topics that either are not included or are dealt with in a limited way in other texts. These areas include: sample size computations for multistage designs; power calculations related to surveys; mathematical programming for sample allocation in a multi-criteria optimization setting; nuts and bolts of area probability sampling; multiphase designs; quality control of survey operations; and statistical software for survey sampling and estimation. An associated R package, PracTools, contains a number of specialized functions for sample size and other calculations. The data sets used in the book are also available in PracTools, so that the reader may replicate the examples or perform further analyses.
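The book's associated R package, PracTools, provides functions for sample-size and allocation calculations of the kind described above. As a language-neutral sketch of one classical such calculation, here is Neyman allocation of a fixed total sample across strata (the stratum sizes and standard deviations below are invented for the example):

```python
import numpy as np

# Hypothetical strata: population sizes N_h and unit standard deviations S_h.
N = np.array([5000, 3000, 2000])
S = np.array([10.0, 25.0, 40.0])
n_total = 500  # overall sample size to allocate

# Neyman allocation: n_h proportional to N_h * S_h, which minimizes the
# variance of the stratified estimator for a fixed total sample size.
weights = N * S
n_h = n_total * weights / weights.sum()
print(np.round(n_h).astype(int))
```

Note how the most variable stratum receives the largest allocation even though it is the smallest; proportional-to-size allocation would order them the other way.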
Author: Martin A. Tanner
Publisher: CRC Press
Release Date: 2001-07-09
Exactly what is the state of the art in statistics as we move forward into the 21st century? What promises, what trends does its future hold? Through the reflections of 70 of the world's leading statistical methodologists, researchers, theorists, and practitioners, Statistics in the 21st Century answers those questions. Originally published in the Journal of the American Statistical Association, this collection of vignettes examines our statistical past, comments on our present, and speculates on our future. Although the coverage is broad and the topics diverse, it reveals the essential intellectual unity of the field as we see the same themes recurring in different contexts. We see how the development of statistics has been driven by the unprecedented and still growing range of applications, by the explosion in computer technology, and by the new types of data that continue to emerge and advance the discipline. Organized around major areas of application and leading up to vignettes on theory and methods, Statistics in the 21st Century forms a landmark record of the progress and perceived future of the discipline. No student, researcher, or practitioner of statistics should miss this extraordinary opportunity to view the past, present, and future world of statistics through the eyes of its foremost thinkers.
Although there has been a surge of interest in density estimation in recent years, much of the published research has been concerned with purely technical matters, with insufficient emphasis given to the technique's practical value. Furthermore, the subject has been rather inaccessible to the general statistician. The account presented in this book places emphasis on topics of methodological importance, in the hope that this will facilitate broader practical application of density estimation and also encourage research into relevant theoretical work. The book also provides an introduction to the subject for those with general interests in statistics. The important role of density estimation as a graphical technique is reflected by the inclusion of more than 50 graphs and figures throughout the text. Several contexts in which density estimation can be used are discussed, including the exploration and presentation of data, nonparametric discriminant analysis, cluster analysis, simulation and the bootstrap, bump hunting, projection pursuit, and the estimation of hazard rates and other quantities that depend on the density. This book includes a general survey of methods available for density estimation. The kernel method, both for univariate and multivariate data, is discussed in detail, with particular emphasis on ways of deciding how much to smooth and on computational aspects. Attention is also given to adaptive methods, which smooth to a greater degree in the tails of the distribution, and to methods based on the idea of penalized likelihood.
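The kernel method discussed above is simple enough to sketch directly: the estimated density at a point is the average of smooth bumps of bandwidth h centred at each observation. A minimal Gaussian-kernel version, with invented data, looks like this:

```python
import numpy as np

def kde(x, data, h):
    """Gaussian kernel density estimate of `data` evaluated at points `x`."""
    u = (x - data[:, None]) / h                  # shape (n_data, n_points)
    k = np.exp(-0.5 * u**2) / np.sqrt(2 * np.pi) # Gaussian kernel values
    return k.mean(axis=0) / h                    # average bump height

data = np.array([1.0, 1.2, 2.8, 3.1, 3.3])
grid = np.linspace(0.0, 5.0, 6)
print(kde(grid, data, h=0.5))
```

The choice of h is the "how much to smooth" question the book emphasizes: a small h reproduces the individual bumps, while a large h blurs distinct modes together.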
Author: Wayne A. Fuller
Publisher: John Wiley & Sons
Release Date: 2011-09-20
Discover the latest developments and current practices in survey sampling. Survey sampling is an important component of research in many fields, and as the importance of survey sampling continues to grow, sophisticated sampling techniques that are both economical and scientifically reliable are essential to planning statistical research and the design of experiments. Sampling Statistics presents estimation techniques and sampling concepts to facilitate the application of model-based procedures to survey samples. The book begins with an introduction to standard probability sampling concepts, which provides the foundation for studying samples selected from a finite population. The development of the theory of complex sampling methods is detailed, and subsequent chapters explore the construction of estimators, sample design, replication variance estimation, and procedures such as nonresponse adjustment and small area estimation where models play a key role. A final chapter covers analytic studies in which survey data are used for the estimation of parameters for a subject matter model. The author draws upon his extensive experience with survey samples in the book's numerous examples. Both the production of "general use" databases and the analytic study of a limited number of characteristics are discussed. Exercises at the end of each chapter allow readers to test their comprehension of the presented concepts and techniques, and the references provide further resources for study. Sampling Statistics is an ideal book for courses in survey sampling at the graduate level. It is also a valuable reference for practicing statisticians who analyze survey data or are involved in the design of sample surveys.
This book describes an array of power tools for data analysis that are based on nonparametric regression and smoothing techniques. These methods relax the linear assumption of many standard models and allow analysts to uncover structure in the data that might otherwise have been missed. While McCullagh and Nelder's Generalized Linear Models shows how to extend the usual linear methodology to cover analysis of a range of data types, Generalized Additive Models enhances this methodology even further by incorporating the flexibility of nonparametric regression. Clear prose, exercises in each chapter, and case studies enhance this popular text.