  • 1
    Online Resource
    Author: Beaumont, G. P.
    Dordrecht : Springer Netherlands
    ISBN: 9789400957947
    Language: English
    Pages: Online resource
    Edition: Springer eBook Collection. Humanities, Social Sciences and Law
    Parallel Title: Also published as
    Keywords: Science (General); Social sciences; Humanities
    Abstract: This book covers those basic topics which usually form the core of intermediate courses in statistical theory; it is largely about estimation and hypothesis testing. It is intended for undergraduates following courses in statistics but is also suitable preparatory reading for some postgraduate courses. It is assumed that the reader has completed an introductory course which covered probability, random variables, moments and the sampling distributions. The level of mathematics required does not go beyond first year calculus. In case the reader has not acquired much facility in handling matrices, the results in least squares estimation are first obtained directly and then given an (optional) matrix formulation. If techniques for changing from one set of variables to another have not been met, then the appendix on these topics should be studied first. The same appendix contains essential discussion of the order statistics, which are frequently used for illustrative purposes. Introductory courses usually include the elements of hypothesis testing and of point and interval estimation, though the treatment must perforce become rather thin, since at that stage it is difficult to provide adequate justifications for some procedures, plausible though they may seem. This text discusses these important topics in considerable detail, starting from scratch. The level is nowhere advanced and proofs of asymptotic results are omitted. Methods deriving from the Bayesian point of view are gradually introduced and alternate with the more usual techniques.
    Description / Table of Contents: 1 Sufficiency -- 1.1 Introduction -- 1.2 Factorization criterion -- 1.3 Distribution of statistics conditional on a sufficient statistic -- 1.4 Joint sufficiency -- 1.5 Minimal sufficiency -- 2 Unbiased point estimators -- 2.1 Introduction -- 2.2 Rao-Blackwell theorem -- 2.3 The role of sufficient statistics -- 2.4 Completeness -- 2.5 Joint completeness -- 2.6 Sufficiency, completeness and independence -- 2.7 Minimum-variance bounds -- 2.8 Computation of a minimum-variance bound -- 2.9 Minimum attainable variance -- 2.10 Mean square error -- 2.11 Two parameters -- 3 Elementary decision theory and Bayesian methods -- 3.1 Comments on classical techniques -- 3.2 Loss functions -- 3.3 Decision theory -- 3.4 Bayes decisions -- 3.5 Using data -- 3.6 Computing posterior distributions -- 3.7 Conjugate distributions -- 3.8 Distribution of the next observation -- 3.9 More than one parameter -- 3.10 Decision functions -- 3.11 Bayes estimators -- 3.12 Admissibility -- 4 Methods of estimation -- 4.1 Introduction -- 4.2 Maximum likelihood estimation -- 4.3 Locating the maximum likelihood estimator -- 4.4 Estimation of a function of a parameter -- 4.5 Truncation and censoring -- 4.6 Estimation of several parameters -- 4.7 Approximation techniques -- 4.8 Large-sample properties -- 4.9 Method of least squares -- 4.10 Normal equations -- 4.11 Solution of the normal equations (non-singular case) -- 4.12 Use of matrices -- 4.13 Best unbiased linear estimation -- 4.14 Covariance matrix -- 4.15 Relaxation of assumptions -- 5 Hypothesis testing I -- 5.1 Introduction -- 5.2 Statistical hypothesis -- 5.3 Simple null hypothesis against simple alternative -- 5.4 Applications of the Neyman-Pearson theorem -- 5.5 Uniformly most powerful tests for a single parameter -- 5.6 Most powerful randomized tests -- 5.7 Hypothesis testing as a decision process -- 5.8 Minimax and Bayes tests -- 6 Hypothesis testing II -- 6.1 Two-sided tests for a single parameter -- 6.2 Neyman-Pearson theorem extension (nonrandomized version) -- 6.3 Regular exponential family of distributions -- 6.4 Uniformly most powerful unbiased test of θ = θ₀ against θ ≠ θ₀ -- 6.5 Nuisance parameters -- 6.6 Similar tests -- 6.7 Composite hypotheses: several parameters -- 6.8 Likelihood ratio tests -- 6.9 Bayes methods -- 6.10 Loss function for one-sided hypotheses -- 6.11 Testing θ = θ₀ against θ ≠ θ₀ -- 7 Interval estimation -- 7.1 One parameter, Bayesian confidence intervals -- 7.2 Two parameters, Bayesian confidence regions -- 7.3 Confidence intervals (classical) -- 7.4 Most selective limits -- 7.5 Relationship to best tests -- 7.6 Unbiased confidence intervals -- 7.7 Nuisance parameters -- 7.8 Discrete distributions -- 7.9 Relationship between classical and Bayesian intervals -- 7.10 Large-sample confidence intervals -- Appendix 1 Functions of random variables -- A1.1 Introduction -- A1.2 Transformations: discrete distributions -- A1.3 Continuous distributions -- A1.4 The order statistics -- Appendix 2 The regular exponential family of distributions -- A2.1 Single parameter -- A2.2 Several parameters -- A2.3 The regular exponential family of bivariate distributions -- Further exercises -- Brief solutions to further exercises -- Further reading -- Author index.
    URL: Full text (license required)
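
The factorization criterion listed under Section 1.2 of the contents above is a standard result; the following is a minimal sketch in conventional notation (the symbols are assumed here, not quoted from the book). A statistic T is sufficient for θ precisely when the joint density of the sample factors through T:

% Factorization criterion (standard statement; notation assumed)
\[
  f(x_1, \ldots, x_n; \theta)
    = g\bigl(T(x_1, \ldots, x_n); \theta\bigr) \, h(x_1, \ldots, x_n).
\]
% Example: n independent Bernoulli(theta) observations
\[
  f(x_1, \ldots, x_n; \theta)
    = \theta^{\sum x_i} (1 - \theta)^{\,n - \sum x_i},
\]

so T = \sum x_i is sufficient for θ, with h ≡ 1.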
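
As the abstract notes, the book obtains the least squares results directly before giving an optional matrix formulation. A minimal sketch of that matrix form, in standard notation assumed here rather than taken from the text: for the linear model y = Xβ + ε with X of full column rank, minimizing the residual sum of squares

% Residual sum of squares (standard notation assumed)
\[
  S(\beta) = (y - X\beta)^\top (y - X\beta)
\]
% yields the normal equations and their solution (non-singular case):
\[
  X^\top X \hat{\beta} = X^\top y,
  \qquad
  \hat{\beta} = (X^\top X)^{-1} X^\top y,
\]
% covariance matrix of the estimator under Var(epsilon) = sigma^2 I:
\[
  \operatorname{Cov}(\hat{\beta}) = \sigma^2 (X^\top X)^{-1}.
\]

This \hat{\beta} is the best linear unbiased estimator when the errors are uncorrelated with constant variance, which corresponds to the material of Sections 4.9 to 4.14 above.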