Designed for an intermediate undergraduate course, Probability and Statistics with R shows students how to solve various statistical problems using both parametric and nonparametric techniques via the open source software R. It provides numerous real-world examples, carefully explained proofs, end-of-chapter problems, and illuminating graphs to facilitate hands-on learning. Integrating theory with practice, the text briefly introduces the syntax, structures, and functions of the S language before covering important graphical and numerical descriptive methods. The next several chapters elucidate topics in probability and random variables, including univariate and multivariate distributions. After exploring sampling distributions, the authors discuss point estimation, confidence intervals, hypothesis testing, and a wide range of nonparametric methods. With a focus on experimental design, the book also presents fixed- and random-effects models as well as randomized block and two-factor factorial designs. The final chapter describes simple and multiple regression analyses. Demonstrating that R can be used as a powerful teaching aid, this comprehensive text presents extensive treatments of data analysis using parametric and nonparametric techniques. It effectively links statistical concepts with R procedures, enabling the application of the language to the vast world of statistics.
Author: Joseph L. Fleiss
Publisher: John Wiley & Sons
Release Date: 1973
Includes a new chapter on logistic regression. Discusses the design and analysis of randomized trials. Explores the latest applications of sample size tables. Contains a new section on the binomial distribution.
Author: Regina Y. Liu
Release Date: 2016-09-20
Over the last 30 years, robust rank-based and nonparametric methods have developed considerably. These procedures generalize traditional Wilcoxon-type methods for one- and two-sample location problems, and research into them has culminated in complete analyses for many of the models used in practice, including linear, generalized linear, mixed, and nonlinear models, in both univariate and multivariate settings. With the development of R packages in these areas, the computation of these procedures is easily implemented and shared with readers. The contributors to this volume include many of the distinguished researchers in this area; many of these scholars have collaborated with Joseph McKean to develop the underlying theory for these methods, obtain small-sample corrections, and develop efficient algorithms for their computation. The papers cover the scope of the area, ranging from robust nonparametric rank-based procedures through Bayesian and big-data rank-based analyses, with areas of application including biostatistics and spatial statistics. This book is developed from the International Conference on Robust Rank-Based and Nonparametric Methods, held at Western Michigan University in April 2015.
This book comprises presentations delivered at the 5th Workshop on Biostatistics and Bioinformatics held in Atlanta on May 5-7, 2017. Featuring twenty-two selected papers from the workshop, it showcases the most current advances in the field, presenting new methods, theories, and case applications at the frontiers of biostatistics, bioinformatics, and interdisciplinary areas. Biostatistics and bioinformatics have been playing a key role in statistics and other scientific research fields in recent years. The goal of the 5th Workshop on Biostatistics and Bioinformatics was to stimulate research, foster interaction among researchers in the field, and offer opportunities for learning and facilitating research collaborations in the era of big data. The resulting volume offers timely insights for researchers, students, and industry practitioners.
Author: W. J. Conover
Publisher: John Wiley & Sons Inc
Release Date: 1999
Genre: Business & Economics
This highly regarded text serves as a quick reference book that offers clear, concise instructions on how and when to use the most popular nonparametric procedures. This edition features procedures that have withstood the test of time and are now used by many practitioners, such as the Fisher Exact Test for two-by-two contingency tables, the Mantel-Haenszel Test for combining several contingency tables, the Kaplan-Meier estimates of the survival curve, the Jonckheere-Terpstra Test and the Page Test for ordered alternatives, and a discussion of the bootstrap method.
Author: Douglas C. Montgomery, George C. Runger
Publisher: John Wiley & Sons
Release Date: 2003-06-23
This is an introductory textbook for a first course in applied statistics and probability for undergraduate students in engineering and the physical or chemical sciences. These individuals play a significant role in designing and developing new products and manufacturing systems and processes, and they also improve existing systems. Statistical methods are an important tool in these activities because they provide the engineer with both descriptive and analytical methods for dealing with the variability in observed data. Although many of the methods we present are fundamental to statistical analysis in other disciplines, such as business and management, the life sciences, and the social sciences, we have elected to focus on an engineering-oriented audience. We believe that this approach will best serve students in engineering and the chemical/physical sciences and will allow them to concentrate on the many applications of statistics in these disciplines. We have worked hard to ensure that our examples and exercises are engineering- and science-based, and in almost all cases we have used examples of real data—either taken from a published source or based on our consulting experiences. We believe that engineers in all disciplines should take at least one course in statistics. Unfortunately, because of other requirements, most engineers will only take one statistics course. This book can be used for a single course, although we have provided enough material for two courses in the hope that more students will see the important applications of statistics in their everyday work and elect a second course. We believe that this book will also serve as a useful reference.
ORGANIZATION OF THE BOOK
We have retained the relatively modest mathematical level of the first two editions. We have found that engineering students who have completed one or two semesters of calculus should have no difficulty reading almost all of the text.
It is our intent to give the reader an understanding of the methodology and how to apply it, not the mathematical theory. We have made many enhancements in this edition, including reorganizing and rewriting major portions of the book. Perhaps the most common criticism of engineering statistics texts is that they are too long. Both instructors and students complain that it is impossible to cover all of the topics in the book in one or even two terms. For authors, this is a serious issue because there is great variety in both the content and level of these courses, and the decisions about what material to delete without limiting the value of the text are not easy. After struggling with these issues, we decided to divide the text into two components: a set of core topics that are most likely to be covered in an engineering statistics course, and a set of supplementary topics that will be useful for some but not all courses. The core topics are in the printed book, and the complete text (both core and supplementary topics) is available on the CD that is included with the printed book. Decisions about which topics to include in print and which to include only on the CD were made based on the results of a recent survey of instructors. The Interactive e-Text consists of the complete text and a wealth of additional material and features. The text and links on the CD are navigated using Adobe Acrobat™.
The links within the Interactive e-Text include the following: (1) from the Table of Contents to the selected eText sections, (2) from the Index to the selected topic within the e-Text, (3) from reference to a figure, table, or equation in one section to the actual figure, table, or equation in another section (all figures can be enlarged and printed), (4) from end-of-chapter Important Terms and Concepts to their definitions within the chapter, (5) from in-text boldfaced terms to their corresponding Glossary definitions and explanations, (6) from in-text references to the corresponding Appendix tables and charts, (7) from boxed-number end-of-chapter exercises (essentially most odd-numbered exercises) to their answers, (8) from some answers to the complete problem solution, and (9) from the opening splash screen to the textbook Web site. Chapter 1 is an introduction to the field of statistics and how engineers use statistical methodology as part of the engineering problem-solving process. This chapter also introduces the reader to some engineering applications of statistics, including building empirical models, designing engineering experiments, and monitoring manufacturing processes. These topics are discussed in more depth in subsequent chapters. Chapters 2, 3, 4, and 5 cover the basic concepts of probability, discrete and continuous random variables, probability distributions, expected values, joint probability distributions, and independence. We have given a reasonably complete treatment of these topics but have avoided many of the mathematical or more theoretical details. Chapter 6 begins the treatment of statistical methods with random sampling; data summary and description techniques, including stem-and-leaf plots, histograms, box plots, and probability plotting; and several types of time series plots. Chapter 7 discusses point estimation of parameters. 
This chapter also introduces some of the important properties of estimators, the method of maximum likelihood, the method of moments, sampling distributions, and the central limit theorem. Chapter 8 discusses interval estimation for a single sample. Topics included are confidence intervals for means, variances or standard deviations, and proportions, as well as prediction and tolerance intervals. Chapter 9 discusses hypothesis tests for a single sample. Chapter 10 presents tests and confidence intervals for two samples. This material has been extensively rewritten and reorganized. There is detailed information on, and examples of, methods for determining appropriate sample sizes. We want the student to become familiar with how these techniques are used to solve real-world engineering problems and to get some understanding of the concepts behind them. We give a logical, heuristic development of the procedures rather than a formal mathematical one. Chapters 11 and 12 present simple and multiple linear regression. We use matrix algebra throughout the multiple regression material (Chapter 12) because it is the only easy way to understand the concepts presented. Scalar arithmetic presentations of multiple regression are awkward at best, and we have found that undergraduate engineers are exposed to enough matrix algebra to understand the presentation of this material. Chapters 13 and 14 deal with single- and multifactor experiments, respectively. The notions of randomization, blocking, factorial designs, interactions, graphical data analysis, and fractional factorials are emphasized.
Chapter 15 gives a brief introduction to the methods and applications of nonparametric statistics, and Chapter 16 introduces statistical quality control, emphasizing the control chart and the fundamentals of statistical process control. Each chapter has an extensive collection of exercises, including end-of-section exercises that emphasize the material in that section, supplemental exercises at the end of the chapter that cover the scope of chapter topics, and mind-expanding exercises that often require the student to extend the text material somewhat or to apply it in a novel situation. As noted above, answers are provided to most odd-numbered exercises and the e-Text contains complete solutions to selected exercises.
USING THE BOOK
This is a very flexible textbook because instructors’ ideas about what should be in a first course on statistics for engineers vary widely, as do the abilities of different groups of students. Therefore, we hesitate to give too much advice but will explain how we use the book. We believe that a first course in statistics for engineers should be primarily an applied statistics course, not a probability course. In our one-semester course we cover all of Chapter 1 (in one or two lectures); overview the material on probability, putting most of the emphasis on the normal distribution (six to eight lectures); discuss most of Chapters 6 through 10 on confidence intervals and tests (twelve to fourteen lectures); introduce regression models in Chapter 11 (four lectures); give an introduction to the design of experiments from Chapters 13 and 14 (six lectures); and present the basic concepts of statistical process control, including the Shewhart control chart, from Chapter 16 (four lectures). This leaves about three to four periods for exams and review.
Let us emphasize that the purpose of this course is to introduce engineers to how statistics can be used to solve real-world engineering problems, not to weed out the less mathematically gifted students. This course is not the “baby math-stat” course that is all too often given to engineers. If a second semester is available, it is possible to cover the entire book, including much of the e-Text material, if appropriate for the audience. It would also be possible to assign and work many of the homework problems in class to reinforce the understanding of the concepts. Obviously, multiple regression and more design of experiments would be major topics in a second course.
USING THE COMPUTER
In practice, engineers use computers to apply statistical methods to solve problems. Therefore, we strongly recommend that the computer be integrated into the class. Throughout the book we have presented output from Minitab as typical examples of what can be done with modern statistical software. In teaching, we have used other software packages, including Statgraphics, JMP, and Statistica. We did not clutter up the book with examples from many different packages because how the instructor integrates the software into the class is ultimately more important than which package is used. All text data are available in electronic form on the e-Text CD. In some chapters, there are problems that we feel should be worked using computer software. We have marked these problems with a special icon in the margin. In our own classrooms, we use the computer in almost every lecture and demonstrate how the technique is implemented in software as soon as it is discussed in the lecture. Student versions of many statistical software packages are available at low cost, and students can either purchase their own copy or use the products available on the PC local area networks. We have found that this greatly improves the pace of the course and student understanding of the material.
Additional resources for students and instructors can be found at www.wiley.com/college/montgomery/.
ACKNOWLEDGMENTS
We would like to express our grateful appreciation to the many organizations and individuals who have contributed to this book. Many instructors who used the first two editions provided excellent suggestions that we have tried to incorporate in this revision. We also thank Professors Manuel D. Rossetti (University of Arkansas), Bruce Schmeiser (Purdue University), Michael G. Akritas (Penn State University), and Arunkumar Pennathur (University of Texas at El Paso) for their insightful reviews of the manuscript of the third edition. We are also indebted to Dr. Smiley Cheng for permission to adapt many of the statistical tables from his excellent book (with Dr. James Fu), Statistical Tables for Classroom and Exam Room. John Wiley and Sons, Prentice Hall, the Institute of Mathematical Statistics, and the editors of Biometrics allowed us to use copyrighted material, for which we are grateful. Thanks are also due to Dr. Lora Zimmer, Dr. Connie Borror, and Dr. Alejandro Heredia-Langner for their outstanding work on the solutions to exercises.
Douglas C. Montgomery
George C. Runger
Author: Paul H. Kvam
Publisher: John Wiley & Sons
Release Date: 2007-08-24
A thorough and definitive book that fully addresses traditional and modern-day topics of nonparametric statistics. This book presents a practical approach to nonparametric statistical analysis and provides comprehensive coverage of both established and newly developed methods. With the use of MATLAB, the authors present information on theorems and rank tests in an applied fashion, with an emphasis on modern methods in regression and curve fitting, bootstrap confidence intervals, splines, wavelets, empirical likelihood, and goodness-of-fit testing. Nonparametric Statistics with Applications to Science and Engineering begins with succinct coverage of basic results for order statistics, methods of categorical data analysis, nonparametric regression, and curve fitting methods. The authors then focus on nonparametric procedures that are becoming more relevant to engineering researchers and practitioners. The important fundamental materials needed to effectively learn and apply the discussed methods are also provided throughout the book. Complete with exercise sets, chapter reviews, and a related Web site that features downloadable MATLAB applications, this book is an essential textbook for graduate courses in engineering and the physical sciences and also serves as a valuable reference for researchers who seek a more comprehensive understanding of modern nonparametric statistical methods.
Author: Gerald J. Hahn
Release Date: 1991-09-02
Genre: Business & Economics
Presents a detailed exposition of statistical intervals and emphasizes applications in industry. The discussion differentiates at an elementary level among different kinds of statistical intervals and gives instruction, with numerous examples and simple math, on how to construct such intervals from sample data. These include confidence intervals to contain a population percentile, confidence intervals on the probability of meeting a specified threshold value, and prediction intervals to include an observation in a future sample. Also includes an appendix containing computer subroutines for nonparametric statistical intervals.
Author: Thomas P. Hettmansperger
Publisher: John Wiley & Sons
Release Date: 1984-07-30
A coherent, unified set of statistical methods, based on ranks, for analyzing data resulting from various experimental designs. Uses MINITAB, a statistical computing system, for the implementation of the methods. Assesses the statistical and stability properties of the methods through asymptotic efficiency, influence curves, and tolerance values. Includes exercises and problems.
The importance of nonparametric methods in modern statistics has grown dramatically since their inception in the mid-1930s. Requiring few or no assumptions about the populations from which data are obtained, they have emerged as the preferred methodology among statisticians and researchers performing data analysis. Today, these highly efficient techniques are being applied to an ever-widening variety of experimental designs in the social, behavioral, biological, and physical sciences. This long-awaited Second Edition of Myles Hollander and Douglas A. Wolfe's successful Nonparametric Statistical Methods meets the needs of a new generation of users, with completely up-to-date coverage of this important statistical area. Like its highly acclaimed predecessor, the revised edition, along with its companion FTP site, aims to equip students with the conceptual and technical skills necessary to select and apply the appropriate procedures for a given situation. An extensive array of examples drawn from actual experiments illustrates clearly how to use nonparametric approaches to handle one- or two-sample location and dispersion problems, dichotomous data, and one-way and two-way layout problems. Rewritten and updated, this Second Edition now includes new or expanded coverage of:
* Nonparametric regression methods.
* The bootstrap.
* Contingency tables and the odds ratio.
* Life distributions and survival analysis.
* Nonparametric methods for experimental designs.
* More procedures, real-world data sets, and problems.
* Illustrated examples using Minitab and StatXact.
An ideal text for an upper-level undergraduate or first-year graduate course, this text is also an invaluable source for professionals who want to keep abreast of the latest developments within this dynamic branch of modern statistics. An Instructor's Manual presenting detailed solutions to all the problems in the book is available upon request from the Wiley editorial department.
Basic concepts and models; Life tables, graphs, and related procedures; Inference procedures for exponential distributions; Inference procedures for Weibull and extreme value distributions; Inference procedures for some other models; Parametric regression models; Distribution-free methods for the proportional hazards and related regression models; Nonparametric and distribution-free methods; Goodness of fit tests; Multivariate and stochastic process models.
Author: Brani Vidakovic
Publisher: John Wiley & Sons
Release Date: 2009-09-25
A comprehensive, step-by-step introduction to wavelets in statistics. What are wavelets? What makes them increasingly indispensable in statistical nonparametrics? Why are they suitable for "time-scale" applications? How are they used to solve such problems as denoising, regression, or density estimation? Where can one find up-to-date information on these newly "discovered" mathematical objects? These are some of the questions Brani Vidakovic answers in Statistical Modeling by Wavelets. Providing a much-needed introduction to the latest tools afforded statisticians by wavelet theory, Vidakovic compiles, organizes, and explains in depth research data previously available only in disparate journal articles. He carefully balances both statistical and mathematical techniques, supplementing the material with a wealth of examples, more than 100 illustrations, and extensive references, with data sets and S-Plus wavelet overviews made available for downloading over the Internet. Both introductory and data-oriented modeling topics are featured, including:
* Continuous and discrete wavelet transformations.
* Statistical optimality properties of wavelet shrinkage.
* Theoretical aspects of wavelet density estimation.
* Bayesian modeling in the wavelet domain.
* Properties of wavelet-based random functions and densities.
* Several novel and important wavelet applications in statistics.
* Wavelet methods in time series.
Accessible to anyone with a background in advanced calculus and algebra, Statistical Modeling by Wavelets promises to become the standard reference for statisticians and engineers seeking a comprehensive introduction to an emerging field.
Author: Rupert G. Miller
Publisher: John Wiley & Sons
Release Date: 1981
A concise summary of the statistical methods used in the analysis of survival data with censoring. Emphasizes recently developed nonparametric techniques. Outlines methods in detail and illustrates them with actual data. Discusses the theory behind each method. Includes numerous worked problems and numerical exercises.