The F Distribution and the F-Ratio

Learning Outcomes

  • Interpret the F probability distribution as the number of groups and the sample size change

The distribution used for the hypothesis test is a new one. It is called the F distribution, named after Sir Ronald Fisher, an English statistician. The F statistic is a ratio (a fraction). There are two sets of degrees of freedom: one for the numerator and one for the denominator.

For example, if F follows an F distribution and the number of degrees of freedom for the numerator is four, and the number of degrees of freedom for the denominator is ten, then F ~ F4,10.

Note

The F distribution is derived from the Student’s t-distribution: the values of an F distribution with one numerator degree of freedom are the squares of the corresponding values of the t-distribution with the same denominator degrees of freedom. One-Way ANOVA expands the t-test for comparing more than two groups. The scope of that derivation is beyond the level of this course.
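Although the derivation is beyond the scope of this course, the square relationship is easy to check numerically. Here is a minimal Python sketch (SciPy is an illustrative tool, not part of the text):

```python
from scipy import stats

# If T follows a t-distribution with 10 degrees of freedom, then T^2
# follows an F(1, 10) distribution. Matching quantiles show this:
# both tails of t fold into the right tail of F, so the 97.5th t
# percentile squared equals the 95th F percentile.
t_975 = stats.t.ppf(0.975, df=10)
f_95 = stats.f.ppf(0.95, dfn=1, dfd=10)
print(t_975**2, f_95)   # both approximately 4.9646
```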

To calculate the F ratio, two estimates of the variance are made.

  1. Variance between samples: An estimate of σ2 that is the variance of the sample means multiplied by n (when the sample sizes are the same). If the samples are different sizes, the variance between samples is weighted to account for the different sample sizes. This variance is also called variation due to treatment or explained variation.
  2. Variance within samples: An estimate of σ2 that is the average of the sample variances (also known as a pooled variance). When the sample sizes are different, the variance within samples is weighted. The variance is also called the variation due to error or unexplained variation.
  • SSbetween = the sum of squares that represents the variation among the different samples
  • SSwithin = the sum of squares that represents the variation within samples that is due to chance.

To find a “sum of squares” means to add together squared quantities that, in some cases, may be weighted.

MS means “mean square.” MSbetween is the variance between groups, and MSwithin is the variance within groups.

Calculation of Sum of Squares and Mean Square

k = the number of different groups

nj = the size of the jth group

sj = the sum of the values in the jth group

n = total number of all the values combined (total sample size: ∑nj)

x = one value: ∑x = ∑sj

Sum of squares of all values from every group combined: [latex]\displaystyle\sum{x}^{2}[/latex]

Total sum of squares:
[latex]\displaystyle{SS}_{\text{total}}=\sum{x}^{2}-\frac{\left(\sum{x}\right)^{2}}{n}[/latex]

Explained variation: sum of squares representing variation among the different samples:
[latex]\displaystyle{SS}_{\text{between}}=\sum\left[\frac{\left({s}_{j}\right)^{2}}{{n}_{j}}\right]-\frac{\left(\sum{s}_{j}\right)^{2}}{n}[/latex]

Unexplained variation: sum of squares representing variation within samples due to chance:
[latex]\displaystyle{S}{S}_{{\text{within}}}={S}{S}_{{\text{total}}}-{S}{S}_{{\text{between}}}[/latex]

df’s for different groups (df’s for the numerator): dfbetween = k – 1

Equation for errors within samples (df’s for the denominator):

dfwithin = n – k

Mean square (variance estimate) explained by the different groups:
[latex]\displaystyle{M}{S}_{{\text{between}}}=\frac{{{S}{S}_{{\text{between}}}}}{{{d}{f}_{{\text{between}}}}}[/latex]

Mean square (variance estimate) that is due to chance (unexplained):
[latex]\displaystyle{M}{S}_{{\text{within}}}=\frac{{{S}{S}_{{\text{within}}}}}{{{d}{f}_{{\text{within}}}}}[/latex]

MSbetween and MSwithin can be written as follows:

  • [latex]\displaystyle{M}{S}_{{\text{between}}}=\frac{{{S}{S}_{{\text{between}}}}}{{{d}{f}_{{\text{between}}}}}=\frac{{{S}{S}_{{\text{between}}}}}{{{k}-{1}}}[/latex]
  • [latex]\displaystyle{M}{S}_{{\text{within}}}=\frac{{{S}{S}_{{\text{within}}}}}{{{d}{f}_{{\text{within}}}}}=\frac{{{S}{S}_{{\text{within}}}}}{{{n}-{k}}}[/latex]
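These formulas translate almost line for line into code. Below is a minimal Python sketch of the computation; the function name and the use of NumPy are my own illustrative choices, not part of the text:

```python
import numpy as np

def one_way_anova(groups):
    """Sum of squares, mean squares, and F for one-way ANOVA.

    `groups` is a list of 1-D NumPy arrays, one per group; the group
    sizes may differ. Implements the formulas given above.
    """
    k = len(groups)                      # number of groups
    x = np.concatenate(groups)
    n = x.size                           # total number of observations
    s = [g.sum() for g in groups]        # s_j = sum of the values in group j

    ss_total = (x**2).sum() - x.sum()**2 / n
    ss_between = sum(sj**2 / g.size for sj, g in zip(s, groups)) - sum(s)**2 / n
    ss_within = ss_total - ss_between

    ms_between = ss_between / (k - 1)    # df numerator = k - 1
    ms_within = ss_within / (n - k)      # df denominator = n - k
    return ms_between, ms_within, ms_between / ms_within
```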

The one-way ANOVA test depends on the fact that MSbetween can be influenced by population differences among means of the several groups. Since MSwithin compares values of each group to its own group mean, the fact that group means might be different does not affect MSwithin.

The null hypothesis says that all groups are samples from populations having the same normal distribution. The alternate hypothesis says that at least two of the sample groups come from populations with different normal distributions. If the null hypothesis is true, MSbetween and MSwithin should both estimate the same value.

Note

The null hypothesis says that all the group population means are equal. The hypothesis of equal means implies that the populations have the same normal distribution, because it is assumed that the populations are normal and that they have equal variances.

F-Ratio or F Statistic

[latex]\displaystyle{F}=\frac{{{M}{S}_{{\text{between}}}}}{{{M}{S}_{{\text{within}}}}}[/latex]

If MSbetween and MSwithin estimate the same value (following the belief that H0 is true), then the F-ratio should be approximately equal to one. Mostly, just sampling errors would contribute to variations away from one. As it turns out, MSbetween consists of the population variance plus a variance produced from the differences between the samples. MSwithin is an estimate of the population variance. Since variances are always positive, if the null hypothesis is false, MSbetween will generally be larger than MSwithin. Then the F-ratio will be larger than one. However, if the population effect is small, it is not unlikely that MSwithin will be larger in a given sample.
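A small simulation makes this concrete. The sketch below forces H0 to be true by drawing every group from the same normal population and shows that the F-ratios cluster near one (NumPy and SciPy are illustrative tools; the population parameters are made up):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Force H0 to be true: draw three groups from the SAME normal population,
# many times, and record the F-ratio each time.
fs = []
for _ in range(10_000):
    groups = [rng.normal(loc=5, scale=2, size=10) for _ in range(3)]
    fs.append(stats.f_oneway(*groups).statistic)

# Under H0 the F-ratios cluster near 1; the exact long-run mean of an
# F(2, 27) distribution is 27/25 = 1.08.
print(np.mean(fs))   # approximately 1.08
```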

The foregoing calculations were done with groups of different sizes. If the groups are the same size, the calculations simplify somewhat and the F-ratio can be written as:

F-Ratio Formula when the groups are the same size

[latex]\displaystyle{F}=\frac{{{n}\cdot{{s}_{\overline{{x}}}^{{ {2}}}}}}{{{{s}_{{\text{pooled}}}^{{2}}}}}[/latex]

where …

  • n = the size of each group (all groups are the same size)
  • dfnumerator = k – 1
  • dfdenominator = kn – k (the total number of observations minus the number of groups)
  • s2pooled = the mean of the sample variances (pooled variance)
  • [latex]\displaystyle{{s}_{\overline{{x}}}^{{ {2}}}}[/latex] = the variance of the sample means
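The shortcut formula and the general MSbetween/MSwithin definition give the same F when the groups are the same size, which a few lines of Python can confirm (the data here are made up; SciPy's f_oneway supplies the general computation):

```python
import numpy as np
from scipy import stats

# Three groups of equal size n = 10, drawn from an arbitrary normal (made-up data).
rng = np.random.default_rng(1)
groups = [rng.normal(loc=5, scale=2, size=10) for _ in range(3)]

n = 10
means = [g.mean() for g in groups]
s2_xbar = np.var(means, ddof=1)                        # variance of the sample means
s2_pooled = np.mean([g.var(ddof=1) for g in groups])   # mean of the sample variances

f_shortcut = n * s2_xbar / s2_pooled
f_general = stats.f_oneway(*groups).statistic          # MS_between / MS_within
print(f_shortcut, f_general)                           # the two values agree
```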

Data are typically put into a table for easy viewing. One-Way ANOVA results are often displayed in this manner by computer software.

| Source of Variation | Sum of Squares (SS) | Degrees of Freedom (df) | Mean Square (MS) | F |
|---|---|---|---|---|
| Factor (Between) | SS(Factor) | k – 1 | MS(Factor) = SS(Factor)/(k – 1) | F = MS(Factor)/MS(Error) |
| Error (Within) | SS(Error) | n – k | MS(Error) = SS(Error)/(n – k) | |
| Total | SS(Total) | n – 1 | | |

Example

Three different diet plans are to be tested for mean weight loss. The entries in the table are the weight losses for the different plans. The one-way ANOVA results are shown in the table below.

| Plan 1: n1 = 4 | Plan 2: n2 = 3 | Plan 3: n3 = 3 |
|---|---|---|
| 5 | 3.5 | 8 |
| 4.5 | 7 | 4 |
| 4 | 4.5 | 3.5 |
| 3 | | |

s1 = 16.5, s2 = 15, s3 = 15.5

Following are the calculations needed to fill in the one-way ANOVA table. The table is used to conduct a hypothesis test.

[latex]\displaystyle{SS}_{\text{between}}=\sum\left[\frac{\left({s}_{j}\right)^{2}}{{n}_{j}}\right]-\frac{\left(\sum{s}_{j}\right)^{2}}{n}[/latex]

[latex]\displaystyle=\frac{{s}_{1}^{2}}{4}+\frac{{s}_{2}^{2}}{3}+\frac{{s}_{3}^{2}}{3}-\frac{\left({s}_{1}+{s}_{2}+{s}_{3}\right)^{2}}{10}[/latex]

where

n1 = 4, n2 = 3, n3 = 3 and n = n1 + n2 + n3 = 10

[latex]\displaystyle=\frac{({16.5})^{2}}{4}+\frac{({15})^{2}}{3}+\frac{({15.5})^{2}}{3}-\frac{\left({16.5}+{15}+{15.5}\right)^{2}}{10}[/latex]

[latex]\displaystyle{SS}_{\text{between}}={2.2458}[/latex]

[latex]\displaystyle{SS}_{\text{total}}=\sum{x}^{2}-\frac{\left(\sum{x}\right)^{2}}{n}[/latex]

[latex]\displaystyle=\left({5}^{2}+{4.5}^{2}+{4}^{2}+{3}^{2}+{3.5}^{2}+{7}^{2}+{4.5}^{2}+{8}^{2}+{4}^{2}+{3.5}^{2}\right)[/latex]

[latex]\displaystyle{-}\frac{{{\left({5}+{4.5}+{4}+{3}+{3.5}+{7}+{4.5}+{8}+{4}+{3.5}\right)}^{2}}}{{10}}[/latex]

[latex]\displaystyle={244}-\frac{{47}^{2}}{10}={244}-{220.9}={23.1}[/latex]

[latex]\displaystyle{SS}_{\text{within}}={SS}_{\text{total}}-{SS}_{\text{between}}={23.1}-{2.2458}={20.8542}[/latex]
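As a check on the arithmetic, the same sums of squares can be computed in a few lines of Python. This is a sketch for verification only; NumPy and SciPy are illustrative tools, not part of the text:

```python
import numpy as np
from scipy import stats

plan1 = np.array([5, 4.5, 4, 3])
plan2 = np.array([3.5, 7, 4.5])
plan3 = np.array([8, 4, 3.5])

x = np.concatenate([plan1, plan2, plan3])
n = x.size                                        # 10
s = [p.sum() for p in (plan1, plan2, plan3)]      # 16.5, 15, 15.5

ss_between = sum(sj**2 / p.size
                 for sj, p in zip(s, (plan1, plan2, plan3))) - sum(s)**2 / n
ss_total = (x**2).sum() - x.sum()**2 / n
ss_within = ss_total - ss_between
print(ss_between, ss_total, ss_within)            # 2.2458, 23.1, 20.8542

# Cross-check the F-ratio with SciPy's built-in one-way ANOVA:
print(stats.f_oneway(plan1, plan2, plan3).statistic)   # approximately 0.3769
```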

Using a Calculator

One-Way ANOVA Table: The formulas for SS(Total), SS(Factor) = SS(Between), and SS(Error) = SS(Within) are as shown previously.

The same information is provided by the TI calculator hypothesis test function ANOVA in STAT TESTS (syntax is ANOVA(L1, L2, L3) where L1, L2, L3 have the data from Plan 1, Plan 2, Plan 3 respectively).

| Source of Variation | Sum of Squares (SS) | Degrees of Freedom (df) | Mean Square (MS) | F |
|---|---|---|---|---|
| Factor (Between) | SS(Factor) = SS(Between) = 2.2458 | k – 1 = 3 – 1 = 2 | MS(Factor) = SS(Factor)/(k – 1) = 2.2458/2 = 1.1229 | F = MS(Factor)/MS(Error) = 1.1229/2.9792 = 0.3769 |
| Error (Within) | SS(Error) = SS(Within) = 20.8542 | n – k = 10 – 3 = 7 | MS(Error) = SS(Error)/(n – k) = 20.8542/7 = 2.9792 | |
| Total | SS(Total) = 2.2458 + 20.8542 = 23.1 | n – 1 = 10 – 1 = 9 | | |

Try it

As part of an experiment to see how different types of soil cover would affect slicing tomato production, Marist College students grew tomato plants under different soil cover conditions. Groups of three plants each had one of the following treatments:

  • bare soil
  • a commercial ground cover
  • black plastic
  • straw
  • compost

All plants grew under the same conditions and were the same variety. Students recorded the weight (in grams) of tomatoes produced by each of the n = 15 plants:

| Bare: n1 = 3 | Ground Cover: n2 = 3 | Plastic: n3 = 3 | Straw: n4 = 3 | Compost: n5 = 3 |
|---|---|---|---|---|
| 2,625 | 5,348 | 6,583 | 7,285 | 6,277 |
| 2,997 | 5,682 | 8,560 | 6,897 | 7,818 |
| 4,915 | 5,482 | 3,830 | 9,230 | 8,677 |

Create the one-way ANOVA table.

Enter the data into lists L1, L2, L3, L4, and L5. Press STAT and arrow over to TESTS. Arrow down to ANOVA(. Press ENTER, then enter L1, L2, L3, L4, L5). Press ENTER. The table below is filled in with the results from the calculator.

One-Way ANOVA table:

| Source of Variation | Sum of Squares (SS) | Degrees of Freedom (df) | Mean Square (MS) | F |
|---|---|---|---|---|
| Factor (Between) | 36,648,561 | 5 – 1 = 4 | 36,648,561/4 = 9,162,140 | 9,162,140/2,044,672.6 = 4.4810 |
| Error (Within) | 20,446,726 | 15 – 5 = 10 | 20,446,726/10 = 2,044,672.6 | |
| Total | 57,095,287 | 15 – 1 = 14 | | |

The one-way ANOVA hypothesis test is always right-tailed because larger F-values fall far out in the right tail of the F-distribution curve and tend to make us reject H0.
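To see the right-tail area for the Try It data, here is a short verification sketch (SciPy is an illustrative tool; the variable names are mine):

```python
from scipy import stats

bare    = [2625, 2997, 4915]
ground  = [5348, 5682, 5482]
plastic = [6583, 8560, 3830]
straw   = [7285, 6897, 9230]
compost = [6277, 7818, 8677]

res = stats.f_oneway(bare, ground, plastic, straw, compost)
print(res.statistic)    # approximately 4.4810, matching the table above
print(res.pvalue)       # approximately 0.025

# The p-value is the right-tail area under F(4, 10) beyond the observed F:
print(stats.f.sf(res.statistic, dfn=4, dfd=10))
```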

Notation

The notation for the F distribution is F ~ Fdf(num),df(denom)

where df(num) = dfbetween and df(denom) = dfwithin.

The mean of the F distribution is [latex]\displaystyle\mu=\frac{df(\text{denom})}{df(\text{denom})-{2}}[/latex], provided df(denom) > 2.
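SciPy can confirm this formula directly (a one-line check; the degrees of freedom are those from the Try It example):

```python
from scipy import stats

# Mean of F(4, 10) depends only on the denominator df: 10 / (10 - 2) = 1.25
print(stats.f.mean(dfn=4, dfd=10))   # 1.25
```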

References

Tomato Data, Marist College School of Science (unpublished student research)

Concept Review

Analysis of variance compares the means of a response variable for several groups. ANOVA compares the variation among the group means to the variation within the groups. The ratio of these two estimates of variance is the F statistic from an F distribution with (number of groups – 1) numerator degrees of freedom and (number of observations – number of groups) denominator degrees of freedom. These statistics are summarized in the ANOVA table.

Formula Review

[latex]\displaystyle{SS}_{\text{between}}=\sum\left[\frac{\left({s}_{j}\right)^{2}}{{n}_{j}}\right]-\frac{\left(\sum{s}_{j}\right)^{2}}{n}[/latex]

SStotal = [latex]\displaystyle\sum{x}^{2}-\frac{\left(\sum{x}\right)^{2}}{n}[/latex]

[latex]\displaystyle{S}{S}_{{\text{within}}}={S}{S}_{{\text{total}}}-{S}{S}_{{\text{between}}}[/latex]

dfbetween = df(num) = k – 1

dfwithin = df(denom) = n – k

[latex]\displaystyle{M}{S}_{{\text{between}}}=\frac{{{S}{S}_{{\text{between}}}}}{{{d}{f}_{{\text{between}}}}}[/latex]

[latex]\displaystyle{M}{S}_{{\text{within}}}=\frac{{{S}{S}_{{\text{within}}}}}{{{d}{f}_{{\text{within}}}}}[/latex]

[latex]\displaystyle{F}=\frac{{{M}{S}_{{\text{between}}}}}{{{M}{S}_{{\text{within}}}}}[/latex]

F ratio when the groups are the same size: [latex]\displaystyle{F}=\frac{{{n}{{s}_{\overline{{x}}}^{{ {2}}}}}}{{{s}_{{\text{pooled}}}^{{2}}}}[/latex]

Mean of the F distribution: [latex]\displaystyle\mu=\frac{df(\text{denom})}{df(\text{denom})-{2}}[/latex], for df(denom) > 2

where:

  • k = the number of groups
  • nj = the size of the jth group
  • sj = the sum of the values in the jth group
  • n = the total number of all values (observations) combined
  • x = one value (one observation) from the data
  • [latex]\displaystyle{{s}_{\overline{{x}}}^{{ {2}}}}[/latex] = the variance of the sample means
  • s2pooled = the mean of the sample variances (pooled variance)

