AP Statistics Chapter 11: Inference for Distributions

11.1 – Significance Tests: The Basics

Null and Alternative Hypotheses

The statement being tested is called the null hypothesis. The significance test is designed to assess the strength of the evidence against the null hypothesis. Usually the null hypothesis is a statement of "no effect," "no difference," or "no change" from historical values.

The claim about the population that we are trying to find evidence for is called the alternative hypothesis. Usually the alternative hypothesis is a statement of "an effect," "a difference," or "a change" from historical values.

Test Statistics

To assess how far the estimate is from the parameter, standardize the estimate. In many common situations, the test statistic has the form

test statistic = (estimate - hypothesized value) / (standard deviation of the estimate)

Z-test for a Population Mean

The formula for the z test statistic is:

z = (x-bar - mu_0) / (sigma / sqrt(n))

where mu_0 is the value of mu specified by the null hypothesis. The test statistic z says how far x-bar is from mu_0 in standard deviation units.
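As a quick sketch of the formula above (not from the notes; all numbers here are hypothetical), the z statistic can be computed directly:

```python
import math

def z_statistic(xbar, mu0, sigma, n):
    """Standardize the sample mean: how many standard errors x-bar lies from mu_0."""
    return (xbar - mu0) / (sigma / math.sqrt(n))

# Hypothetical example: testing H0: mu = 100 with known sigma = 15 and n = 36
z = z_statistic(xbar=104.5, mu0=100, sigma=15, n=36)
# z = (104.5 - 100) / (15 / 6) = 4.5 / 2.5 = 1.8
```

Note that this z test assumes sigma (the population standard deviation) is known; when it must be estimated from the sample, a t statistic is used instead.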

 

P-value

The P-value of a test is the probability of getting a sample result at least as extreme as the one observed, assuming the null hypothesis is true. The smaller the P-value, the stronger the evidence against the null hypothesis provided by the data.

Statistical Significance

If the P-value is as small as or smaller than alpha, we say that the data are statistically significant at level alpha. In general, use alpha = 0.05 unless otherwise noted.
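The P-value and significance comparison can be sketched with only the standard library (a two-sided z test is assumed here; the z value 1.8 is a hypothetical input, not from the notes):

```python
import math

def normal_cdf(x):
    # Phi(x), the standard normal CDF, via the error function
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

def two_sided_p_value(z):
    # Probability of a result at least this extreme in either tail, if H0 is true
    return 2 * (1 - normal_cdf(abs(z)))

alpha = 0.05
p = two_sided_p_value(1.8)      # roughly 0.072
significant = p <= alpha        # False: not significant at the 0.05 level
```

A larger |z| gives a smaller P-value: the same call with z = 2.5 would fall below 0.05 and be declared significant.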

 

11.2 – Carrying out Significance Tests

Follow this plan when doing a significance test:

  1. Hypotheses: State the null and alternative hypotheses.
  2. Conditions: Check the conditions for the appropriate test.
  3. Calculations: Compute the test statistic and use it to find the P-value.
  4. Interpretation: Use the P-value to state a conclusion, in context, in a sentence or two.
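The four steps above can be sketched end to end. This is a minimal illustration, not a worked example from the notes: the scenario (a two-sided z test of H0: mu = 128) and every number in it are hypothetical.

```python
import math

def normal_cdf(x):
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

# Step 1 (Hypotheses): H0: mu = 128 vs Ha: mu != 128
# Step 2 (Conditions): assumed met here -- SRS, sigma known, Normal population or large n
xbar, mu0, sigma, n, alpha = 131.1, 128, 9.3, 25, 0.05

# Step 3 (Calculations): test statistic and P-value
z = (xbar - mu0) / (sigma / math.sqrt(n))
p_value = 2 * (1 - normal_cdf(abs(z)))

# Step 4 (Interpretation): conclusion in context
if p_value <= alpha:
    conclusion = "Reject H0: the data give convincing evidence that mu differs from 128."
else:
    conclusion = "Fail to reject H0: the data do not give convincing evidence that mu differs from 128."
```

With these numbers, z is about 1.67 and the P-value is about 0.096, so we fail to reject H0 at the 0.05 level.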

 

11.3 – Use and Abuse of Tests

There are four important concepts you should remember from this section:
  1. Finding a level of significance (read p. 717)
  2. Statistical significance does not mean practical importance (see example 11.13)
  3. Statistical inference is not valid for all sets of data (see exercise 46)
  4. Beware of multiple analyses (see example 11.17)

 

11.4 – Using Inference to Make Decisions

Type I and Type II Errors

There are two types of errors that can be made using inferential techniques. In both cases, the sample leads us to a conclusion (either for or against Ho), but sometimes a bad sample fails to reveal the truth. Here are the two types of errors:

Type I – Rejecting Ho when it is actually true (a false positive)
Type II – Failing to reject Ho when it is actually false (a false negative)

Be prepared to write, in sentence form, the meaning of a Type I and a Type II error in the language of the given situation.

The probability of a Type I error is the same as alpha, the significance level. You will not be asked to find the probability of a Type II error.
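The claim that P(Type I error) = alpha can be checked by simulation. This sketch (not from the notes; the population parameters are hypothetical) repeatedly draws samples with H0 actually true and records how often the test rejects:

```python
import math
import random

def normal_cdf(x):
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

random.seed(1)
mu0, sigma, n, alpha = 50, 10, 25, 0.05
trials = 10_000
rejections = 0

for _ in range(trials):
    # Draw a sample from a population where H0 is TRUE (true mean = mu0)
    xbar = sum(random.gauss(mu0, sigma) for _ in range(n)) / n
    z = (xbar - mu0) / (sigma / math.sqrt(n))
    p = 2 * (1 - normal_cdf(abs(z)))
    if p <= alpha:        # a rejection here is a Type I error
        rejections += 1

type1_rate = rejections / trials   # should land near alpha = 0.05
```

Over many trials the observed rejection rate settles near 0.05, matching the significance level.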

Power of a Test
The power of a test is the probability that we will reject Ho when it is indeed false; equivalently, power = 1 - P(Type II error). Another way to think of it: power is the strength of our case against Ho when Ho is false. The farther the truth is from Ho, the greater the power. Also, increasing the sample size lowers the chance of making a Type II error, thus increasing power.
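Both effects described above can be seen in a power formula for an upper-tail z test. This is a sketch under stated assumptions (one-sided test at alpha = 0.05, so the critical value z* is about 1.645; the means and sigma used are hypothetical):

```python
import math

def normal_cdf(x):
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

def power_one_sided(mu0, mu_a, sigma, n, z_star=1.645):
    # P(reject H0 | true mean = mu_a) for an upper-tail z test at alpha = 0.05.
    # "effect" is the true difference from mu0, measured in standard-error units.
    effect = (mu_a - mu0) / (sigma / math.sqrt(n))
    return normal_cdf(effect - z_star)

# Hypothetical mu0 = 0, true mean mu_a = 2, sigma = 10:
p_small_n = power_one_sided(0, 2, 10, n=25)    # modest power
p_large_n = power_one_sided(0, 2, 10, n=100)   # larger n -> more power
```

Increasing n, or moving the true mean farther from mu0, both increase the returned probability, which is exactly the behavior the paragraph describes.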

 

© DanShuster.com