Author: Joseph K. Blitzstein
Publisher: CRC Press
Release Date: 2015-09-15
Developed from celebrated Harvard statistics lectures, Introduction to Probability provides essential language and tools for understanding statistics, randomness, and uncertainty. The book explores a wide variety of applications and examples, ranging from coincidences and paradoxes to Google PageRank and Markov chain Monte Carlo (MCMC). Additional application areas explored include genetics, medicine, computer science, and information theory. The print book version includes a code that provides free access to an eBook version. The authors present the material in an accessible style and motivate concepts using real-world examples. Throughout, they use stories to uncover connections between the fundamental distributions in statistics and conditioning to reduce complicated problems to manageable pieces. The book includes many intuitive explanations, diagrams, and practice problems. Each chapter ends with a section showing how to perform relevant simulations and calculations in R, a free statistical software environment.
Based on a popular course taught by the late Gian-Carlo Rota of MIT, with many new topics covered as well, Introduction to Probability with R presents R programs and animations to provide an intuitive yet rigorous understanding of how to model natural phenomena from a probabilistic point of view. Although the R programs are short, they are as sophisticated and powerful as much longer programs in other languages. This brevity makes it easy for students to become proficient in R. This calculus-based introduction organizes the material around key themes. One of the most important themes centers on viewing probability as a way to look at the world, helping students think and reason probabilistically. The text also shows how to combine and link stochastic processes to form more complex processes that are better models of natural phenomena. In addition, it presents a unified treatment of transforms, such as Laplace, Fourier, and z; the foundations of fundamental stochastic processes using entropy and information; and an introduction to Markov chains from various viewpoints. Each chapter includes a short biographical note about a contributor to probability theory, exercises, and selected answers. The book has an accompanying website with more information.
Based on the authors’ lecture notes, Introduction to the Theory of Statistical Inference presents concise yet complete coverage of statistical inference theory, focusing on the fundamental classical principles. Suitable for a second-semester undergraduate course on statistical inference, the book offers proofs to support the mathematics. It illustrates core concepts using cartoons and provides solutions to all examples and problems. Highlights: basic notations and ideas of statistical inference explained in a mathematically rigorous but understandable form; a classroom-tested presentation designed for students of mathematical statistics; examples, applications of the general theory to special cases, exercises, and figures that provide deeper insight into the material; solutions for the problems formulated at the end of each chapter; a theoretical basis for statistical inference combined with a useful applied toolbox that includes linear models; and markers identifying theoretical, difficult, or frequently misunderstood problems. The book is aimed at advanced undergraduate students, graduate students in mathematics and statistics, and theoretically interested students from other disciplines. Results are presented as theorems and corollaries. All theorems are proven, and important statements are formulated as guidelines in prose. With its multipronged and student-tested approach, this book is an excellent introduction to the theory of statistical inference.
Author: Michael A. Proschan
Publisher: CRC Press
Release Date: 2016-03-23
Essentials of Probability Theory for Statisticians provides graduate students with a rigorous treatment of probability theory, with an emphasis on results central to theoretical statistics. It presents classical probability theory motivated with illustrative examples in biostatistics, such as outlier tests, monitoring clinical trials, and using adaptive methods to make design changes based on accumulating data. The authors explain different methods of proofs and show how they are useful for establishing classic probability results. After building a foundation in probability, the text intersperses examples that make seemingly esoteric mathematical constructs more intuitive. These examples elucidate essential elements in definitions and conditions in theorems. In addition, counterexamples further clarify nuances in meaning and expose common fallacies in logic. This text encourages students in statistics and biostatistics to think carefully about probability. It gives them the rigorous foundation necessary to provide valid proofs and avoid paradoxes and nonsensical conclusions.
Since 1975, The Analysis of Time Series: An Introduction has introduced legions of statistics students and researchers to the theory and practice of time series analysis. With each successive edition, bestselling author Chris Chatfield has honed and refined his presentation, updated the material to reflect advances in the field, and presented interesting new data sets. The sixth edition is no exception. It provides an accessible, comprehensive introduction to the theory and practice of time series analysis. The treatment covers a wide range of topics, including ARIMA probability models, forecasting methods, spectral analysis, linear systems, state-space models, and the Kalman filter. It also addresses nonlinear, multivariate, and long-memory models. The author has carefully updated each chapter, added new discussions, incorporated new datasets, and made those datasets available for download from www.crcpress.com. A free online appendix on time series analysis using R can be accessed at http://people.bath.ac.uk/mascc/TSA.usingR.doc. Highlights of the Sixth Edition: a new section on handling real data; a new discussion of prediction intervals; a completely revised and restructured chapter on more advanced topics, with new material on the aggregation of time series, analyzing time series in finance, and discrete-valued time series; a new chapter of examples and practical advice; and thorough updates and revisions throughout the text that reflect recent developments and dramatic changes in computing practices over the last few years. The analysis of time series can be a difficult topic, but as this book has demonstrated for two-and-a-half decades, it does not have to be daunting. The accessibility, polished presentation, and broad coverage of The Analysis of Time Series make it simply the best introduction to the subject available.
Author: Thomas A. Severini
Publisher: CRC Press
Release Date: 2017-07-06
Genre: Business & Economics
This book provides an introduction to the use of statistical concepts and methods to model and analyze financial data. The ten chapters of the book fall naturally into three sections. Chapters 1 to 3 cover some basic concepts of finance, focusing on the properties of returns on an asset. Chapters 4 through 6 cover aspects of portfolio theory and the methods of estimation needed to implement that theory. The remainder of the book, Chapters 7 through 10, discusses several models for financial data, along with the implications of those models for portfolio theory and for understanding the properties of return data. The audience for the book is students majoring in Statistics and Economics as well as in quantitative fields such as Mathematics and Engineering. Readers are assumed to have some background in statistical methods along with courses in multivariate calculus and linear algebra.
Offering deep insight into the connections between design choice and the resulting statistical analysis, Design of Experiments: An Introduction Based on Linear Models explores how experiments are designed using the language of linear statistical models. The book presents an organized framework for understanding the statistical aspects of experimental design as a whole within the structure provided by general linear models, rather than as a collection of seemingly unrelated solutions to unique problems. The core material can be found in the first thirteen chapters. These chapters cover a review of linear statistical models, completely randomized designs, randomized complete block designs, Latin squares, analysis of data from orthogonally blocked designs, balanced incomplete block designs, random block effects, split-plot designs, and two-level factorial experiments. The remainder of the text discusses factorial group screening experiments, regression model design, and an introduction to optimal design. To emphasize the practical value of design, most chapters contain a short example of a real-world experiment. Details of the calculations performed using R, along with an overview of the R commands, are provided in an appendix. This text enables students to fully appreciate the fundamental concepts and techniques of experimental design as well as the real-world value of design. It gives them a profound understanding of how design selection affects the information obtained in an experiment.
Author: Simon N. Wood
Publisher: CRC Press
Release Date: 2017-05-18
The first edition of this book has established itself as one of the leading references on generalized additive models (GAMs), and the only book on the topic to be introductory in nature with a wealth of practical examples and software implementation. It is self-contained, providing the necessary background in linear models, linear mixed models, and generalized linear models (GLMs), before presenting a balanced treatment of the theory and applications of GAMs and related models. The author bases his approach on a framework of penalized regression splines, and while firmly focused on the practical aspects of GAMs, discussions include fairly full explanations of the theory underlying the methods. Use of R software helps explain the theory and illustrates the practical application of the methodology. Each chapter contains an extensive set of exercises, with solutions in an appendix or in the book’s R data package gamair, to enable use as a course text or for self-study. Simon N. Wood is a professor of Statistical Science at the University of Bristol, UK, and author of the R package mgcv.
Author: Alan M. Polansky
Publisher: CRC Press
Release Date: 2011-01-07
Helping students develop a good understanding of asymptotic theory, Introduction to Statistical Limit Theory provides a thorough yet accessible treatment of common modes of convergence and their related tools used in statistics. It also discusses how the results can be applied to several common areas in the field. The author explains as much of the background material as possible and offers a comprehensive account of the modes of convergence of random variables, distributions, and moments, establishing a firm foundation for the applications that appear later in the book. The text includes detailed proofs that follow a logical progression of the central inferences of each result. It also presents in-depth explanations of the results and identifies important tools and techniques. Through numerous illustrative examples, the book shows how asymptotic theory offers deep insight into statistical problems, such as confidence intervals, hypothesis tests, and estimation. With an array of exercises and experiments in each chapter, this classroom-tested book gives students the mathematical foundation needed to understand asymptotic theory. It covers the necessary introductory material as well as modern statistical applications, exploring how the underlying mathematical and statistical theories work together.
Author: Peter Watts Jones
Publisher: CRC Press
Release Date: 2017-10-30
Based on a well-established and popular course taught by the authors over many years, Stochastic Processes: An Introduction, Third Edition, discusses the modelling and analysis of random experiments, where processes evolve over time. The text begins with a review of relevant fundamental probability. It then covers gambling problems, random walks, and Markov chains. The authors go on to discuss random processes continuous in time, including Poisson, birth and death processes, and general population models, and present an extended discussion on the analysis of associated stationary processes in queues. The book also explores reliability and other random processes, such as branching, martingales, and simple epidemics. A new chapter describing Brownian motion, where the outcomes are continuously observed over continuous time, is included. Further applications, worked examples and problems, and biographical details have been added to this edition. Much of the text has been reworked. The appendix contains key results in probability for reference. This concise, updated book makes the material accessible, highlighting simple applications and examples. A solutions manual with fully worked answers to all end-of-chapter problems, and Mathematica® and R programs illustrating many processes discussed in the book, can be downloaded from crcpress.com.
Author: Michael W. Trosset
Publisher: CRC Press
Release Date: 2009-06-23
Emphasizing concepts rather than recipes, An Introduction to Statistical Inference and Its Applications with R provides a clear exposition of the methods of statistical inference for students who are comfortable with mathematical notation. Numerous examples, case studies, and exercises are included. R is used to simplify computation, create figures, and draw pseudorandom samples—not to perform entire analyses. After discussing the importance of chance in experimentation, the text develops basic tools of probability. The plug-in principle then provides a transition from populations to samples, motivating a variety of summary statistics and diagnostic techniques. The heart of the text is a careful exposition of point estimation, hypothesis testing, and confidence intervals. The author then explains procedures for 1- and 2-sample location problems, analysis of variance, goodness-of-fit, and correlation and regression. He concludes by discussing the role of simulation in modern statistical inference. Focusing on the assumptions that underlie popular statistical methods, this textbook explains how and why these methods are used to analyze experimental data.
Bayesian statistical methods have become widely used for data analysis and modelling in recent years, and BUGS has become the most widely used software for Bayesian analysis worldwide. Authored by the team that originally developed this software, The BUGS Book provides a practical introduction to this program and its use. The text presents complete coverage of all the functionalities of BUGS, including prediction, missing data, model criticism, and prior sensitivity. It also features a large number of worked examples and a wide range of applications from various disciplines. The book introduces regression models, techniques for criticism and comparison, and a wide range of modelling issues before going into the vital area of hierarchical models, one of the most common applications of Bayesian methods. It deals with essentials of modelling without getting bogged down in complexity. The book emphasises model criticism, model comparison, sensitivity analysis to alternative priors, and thoughtful choice of prior distributions—all those aspects of the "art" of modelling that are easily overlooked in more theoretical expositions. More pragmatic than ideological, the authors systematically work through the large range of "tricks" that reveal the real power of the BUGS software, for example, dealing with missing data, censoring, grouped data, prediction, ranking, parameter constraints, and so on. Many of the examples are biostatistical, but they do not require domain knowledge and are generalisable to a wide range of other application areas. Full code and data for examples, exercises, and some solutions can be found on the book’s website.
Author: Annette J. Dobson
Publisher: Chapman and Hall/CRC
Release Date: 2008-05-12
Continuing to emphasize numerical and graphical methods, An Introduction to Generalized Linear Models, Third Edition provides a cohesive framework for statistical modeling. This new edition of a bestseller has been updated with Stata, R, and WinBUGS code as well as three new chapters on Bayesian analysis. Like its predecessor, this edition presents the theoretical background of generalized linear models (GLMs) before focusing on methods for analyzing particular kinds of data. It covers normal, Poisson, and binomial distributions; linear regression models; classical estimation and model fitting methods; and frequentist methods of statistical inference. After forming this foundation, the authors explore multiple linear regression, analysis of variance (ANOVA), logistic regression, log-linear models, survival analysis, multilevel modeling, Bayesian models, and Markov chain Monte Carlo (MCMC) methods. Using popular statistical software programs, this concise and accessible text illustrates practical approaches to estimation, model fitting, and model comparisons. It includes examples and exercises with complete data sets for nearly all the models covered.
Emphasizing the use of WinBUGS and R to analyze real data, Bayesian Ideas and Data Analysis: An Introduction for Scientists and Statisticians presents statistical tools to address scientific questions. It highlights foundational issues in statistics, the importance of making accurate predictions, and the need for scientists and statisticians to collaborate in analyzing data. The WinBUGS code provided offers a convenient platform to model and analyze a wide range of data. The first five chapters of the book contain core material that spans basic Bayesian ideas, calculations, and inference, including modeling one and two sample data from traditional sampling models. The text then covers Monte Carlo methods, such as Markov chain Monte Carlo (MCMC) simulation. After discussing linear structures in regression, it presents binomial regression, normal regression, analysis of variance, and Poisson regression, before extending these methods to handle correlated data. The authors also examine survival analysis and binary diagnostic testing. A complementary chapter on diagnostic testing for continuous outcomes is available on the book’s website. The last chapter on nonparametric inference explores density estimation and flexible regression modeling of mean functions. The appropriate statistical analysis of data involves a collaborative effort between scientists and statisticians. Exemplifying this approach, Bayesian Ideas and Data Analysis focuses on the necessary tools and concepts for modeling and analyzing scientific data. Data sets and codes are provided on a supplemental website.
One of the most popular introductory texts in its field, Statistics for Technology: A Course in Applied Studies presents the range of statistical methods commonly used in science, social science, and engineering. The mathematics is simple and straightforward; statistical concepts are explained carefully; and real-life (rather than contrived) examples are used throughout the chapters. The book is divided into three parts: the Introduction describes some simple methods of summarizing data; Theory examines the basic concepts and theory of statistics; and Applications covers the planning and procedures of experiments, quality control, and life testing. Revised throughout, this Third Edition places a higher priority on the role of computers in analysis, and many new references have been incorporated. A new appendix describes general methods of tackling statistical problems, including guidance on literature searching and report writing.