PRESTIGE ED
N9: Confidence Intervals: Two-Sample & Variance Ratio
Node N9 — Section 1

Why This Concept Exists

In N8, we constructed confidence intervals for a single population parameter. But the most common inferential question in practice is not "what is the mean?" but "are the two groups different?" This requires estimating the difference between two population parameters, not just each one individually.

Two-sample CIs are essential because simply comparing two separate one-sample CIs is not a valid method for comparing groups. Two overlapping one-sample CIs do not imply that the difference is zero; conversely, two non-overlapping one-sample CIs are a much stricter criterion than needed. The correct approach is to construct a single CI for \(\mu_1 - \mu_2\) using the sampling distribution of the difference.

Leverage: Two-sample CIs and variance-ratio intervals appear on every PTS2 paper since 2015, typically worth 8-14 marks. The F-based CI for \(\sigma_1^2/\sigma_2^2\) is particularly examinable because it tests whether you understand that the F-distribution is asymmetric and requires two distinct quantiles.

By the end of this node, you will be able to:

  • Construct a two-sample z-CI for \(\mu_1 - \mu_2\) when both variances are known.
  • Construct a two-sample t-CI for \(\mu_1 - \mu_2\) using the pooled variance estimator.
  • Construct a CI for the difference of two proportions \(p_1 - p_2\).
  • Construct a variance-ratio CI \(\sigma_1^2 / \sigma_2^2\) using the F-distribution.

Node N9 — Section 2

Prerequisites

Before engaging with this node, you must be comfortable with:

  • One-sample CIs (N8): You must know how to construct z-intervals, t-intervals, proportion CIs, and variance CIs for a single population parameter. The pivotal quantity method from N8 is reused here with more complex quantities.
  • Sampling distributions (N6): \(\bar{X}_1 - \bar{X}_2 \sim N(\mu_1 - \mu_2, \sigma_1^2/n_1 + \sigma_2^2/n_2)\) when the two samples are independent. You must understand why the variances add (independence). You must also know how the F-distribution arises: \((n_1-1)S_1^2/\sigma_1^2\) and \((n_2-1)S_2^2/\sigma_2^2\) are independent chi-squared variables, and the ratio of each divided by its degrees of freedom, \(\dfrac{S_1^2/\sigma_1^2}{S_2^2/\sigma_2^2}\), follows an F-distribution.
  • Pooled variance: \(S_p^2 = \dfrac{(n_1-1)S_1^2 + (n_2-1)S_2^2}{n_1+n_2-2}\) is the best estimator of the common variance \(\sigma^2\) when \(\sigma_1^2 = \sigma_2^2\).
  • F-distribution critical values: \(F_{\alpha, \nu_1, \nu_2}\) denotes the upper \(\alpha\)-quantile. You must know the reciprocity property: \(F_{1-\alpha, \nu_1, \nu_2} = 1/F_{\alpha, \nu_2, \nu_1}\).
  • Normal approximation to the binomial: For large \(n\), \(\hat{p} \approx N\big(p, p(1-p)/n\big)\).

Key idea: The entire node hinges on one principle: independent variances add. Whether we're dealing with \(\bar{X}_1 - \bar{X}_2\), \(\hat{p}_1 - \hat{p}_2\), or \(S_1^2/S_2^2\), the standard error always involves a sum (or ratio) of variance components.
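The reciprocity property is easy to verify numerically. A quick sketch in Python, assuming SciPy is available (note that scipy.stats.f.ppf takes a lower-tail probability, so the upper \(\alpha\)-quantile \(F_{\alpha, \nu_1, \nu_2}\) is f.ppf(1 - alpha, nu1, nu2); the helper name upper_f is mine):

```python
from scipy.stats import f

def upper_f(alpha, nu1, nu2):
    # Upper alpha-quantile F_{alpha, nu1, nu2}; scipy's ppf is lower-tail.
    return f.ppf(1 - alpha, nu1, nu2)

# Reciprocity: F_{1-alpha, nu1, nu2} = 1 / F_{alpha, nu2, nu1}
alpha, nu1, nu2 = 0.025, 9, 7
lhs = upper_f(1 - alpha, nu1, nu2)   # lower-tail 2.5% point of F(9, 7)
rhs = 1 / upper_f(alpha, nu2, nu1)   # reciprocal of the upper 2.5% point of F(7, 9)
print(lhs, rhs)
```

The two printed values agree, which is exactly why exam tables only need to list upper quantiles.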

Node N9 — Section 3

Core Exposition

3.1 Two-Sample z-CI for \(\mu_1 - \mu_2\) (Known Variances)

When both population variances \(\sigma_1^2\) and \(\sigma_2^2\) are known, the pivotal quantity is:

\[Z = \frac{(\bar{X}_1 - \bar{X}_2) - (\mu_1 - \mu_2)}{\sqrt{\sigma_1^2/n_1 + \sigma_2^2/n_2}} \sim N(0, 1)\]

The \((1-\alpha) \times 100\%\) CI for \(\mu_1 - \mu_2\) is:

\[(\bar{X}_1 - \bar{X}_2) \pm z_{\alpha/2}\,\sqrt{\frac{\sigma_1^2}{n_1} + \frac{\sigma_2^2}{n_2}}\]
Why the SE is a square root of a sum: because the two samples are independent,
\(\text{Var}(\bar{X}_1 - \bar{X}_2) = \text{Var}(\bar{X}_1) + \text{Var}(\bar{X}_2) = \dfrac{\sigma_1^2}{n_1} + \dfrac{\sigma_2^2}{n_2}\).
The SE is the square root of this variance. Notice that we add the variances, not subtract them; for independent samples, variances never subtract.
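For checking hand computations, the z-interval can be scripted with the Python standard library alone. A minimal sketch (the function name two_sample_z_ci is illustrative, not part of the syllabus), run on the machine data of Example 1 in Section 4:

```python
from math import sqrt
from statistics import NormalDist

def two_sample_z_ci(xbar1, xbar2, sigma1, sigma2, n1, n2, conf=0.95):
    # CI for mu1 - mu2 when both population variances are known.
    z = NormalDist().inv_cdf(1 - (1 - conf) / 2)   # z_{alpha/2}
    se = sqrt(sigma1**2 / n1 + sigma2**2 / n2)     # independent variances add
    diff = xbar1 - xbar2
    return diff - z * se, diff + z * se

# Machine A: n=36, mean 25.3, sigma 0.8; Machine B: n=49, mean 24.7, sigma 1.0
lo, hi = two_sample_z_ci(25.3, 24.7, 0.8, 1.0, 36, 49)
print(round(lo, 3), round(hi, 3))
```

The output reproduces the interval [0.217, 0.983] obtained by hand in Example 1.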

3.2 Two-Sample t-CI for \(\mu_1 - \mu_2\) (Equal but Unknown Variances)

When \(\sigma_1^2 = \sigma_2^2 = \sigma^2\) (unknown), we pool the sample variances:

\[S_p^2 = \frac{(n_1-1)S_1^2 + (n_2-1)S_2^2}{n_1 + n_2 - 2}, \qquad T = \frac{(\bar{X}_1 - \bar{X}_2) - (\mu_1 - \mu_2)}{S_p\sqrt{1/n_1 + 1/n_2}} \sim t(n_1 + n_2 - 2)\]

The \((1-\alpha) \times 100\%\) CI for \(\mu_1 - \mu_2\) is:

\[(\bar{X}_1 - \bar{X}_2) \pm t_{n_1+n_2-2, \alpha/2}\, S_p\sqrt{\frac{1}{n_1} + \frac{1}{n_2}}\]
When can you assume equal variances? In PTS2 exams, the question will explicitly state "assume equal variances" or "assume \(\sigma_1^2 = \sigma_2^2\)". If not stated, use the unpooled Welch-Satterthwaite approach (which is non-examinable in PTS2). Always check the problem statement carefully.
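A companion sketch for the pooled interval (stdlib only; the tabulated critical value \(t_{n_1+n_2-2,\,\alpha/2}\) is passed in, just as you would read it from tables; pooled_t_ci is an illustrative name). The data are those of Example 2 in Section 4:

```python
from math import sqrt

def pooled_t_ci(xbar1, xbar2, s1, s2, n1, n2, t_crit):
    # CI for mu1 - mu2 under equal variances; t_crit = t_{n1+n2-2, alpha/2} from tables.
    sp2 = ((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2)  # pooled variance
    se = sqrt(sp2) * sqrt(1 / n1 + 1 / n2)
    diff = xbar1 - xbar2
    return diff - t_crit * se, diff + t_crit * se

# Method 1: n=15, mean 72.4, s=6.1; Method 2: n=12, mean 68.9, s=5.4; t_{25, 0.025} = 2.060
lo, hi = pooled_t_ci(72.4, 68.9, 6.1, 5.4, 15, 12, 2.060)
print(round(lo, 3), round(hi, 3))
```

The interval straddles zero, matching the [-1.129, 8.129] result of Example 2.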

3.3 CI for the Difference of Two Proportions

For two independent binomial samples, the large-sample pivotal quantity is:

\[Z = \frac{(\hat{p}_1 - \hat{p}_2) - (p_1 - p_2)}{\sqrt{\dfrac{\hat{p}_1(1-\hat{p}_1)}{n_1} + \dfrac{\hat{p}_2(1-\hat{p}_2)}{n_2}}} \approx N(0, 1)\]

Key point: we use the separate sample proportions in the SE (not pooled). The pooled proportion is only used in hypothesis testing when \(H_0: p_1 = p_2\).

The approximate \((1-\alpha)\) CI for \(p_1 - p_2\) is:
\[(\hat{p}_1 - \hat{p}_2) \pm z_{\alpha/2}\,\sqrt{\frac{\hat{p}_1(1-\hat{p}_1)}{n_1} + \frac{\hat{p}_2(1-\hat{p}_2)}{n_2}}\]
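A stdlib sketch of this interval with illustrative figures (\(\hat{p}_1 = 0.42\), \(n_1 = 150\), \(\hat{p}_2 = 0.35\), \(n_2 = 180\); prop_diff_ci is my name). Note that the SE uses the separate sample proportions:

```python
from math import sqrt
from statistics import NormalDist

def prop_diff_ci(p1, n1, p2, n2, conf=0.95):
    # Large-sample CI for p1 - p2; separate (unpooled) proportions in the SE.
    z = NormalDist().inv_cdf(1 - (1 - conf) / 2)
    se = sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
    diff = p1 - p2
    return diff - z * se, diff + z * se

lo, hi = prop_diff_ci(0.42, 150, 0.35, 180)
print(round(lo, 3), round(hi, 3))
```

Here the interval contains zero, so this particular difference in proportions is not significant at the 5% level.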

3.4 Variance-Ratio CI via the F-Distribution

When both populations are normal, and the samples are independent:

\[\frac{S_1^2/\sigma_1^2}{S_2^2/\sigma_2^2} \sim F(n_1-1,\; n_2-1)\]
Derivation of the CI. Start with:
\[P\!\left(F_{1-\alpha/2, \nu_1, \nu_2} \leq \frac{S_1^2/\sigma_1^2}{S_2^2/\sigma_2^2} \leq F_{\alpha/2, \nu_1, \nu_2}\right) = 1 - \alpha\]

Rearrange to isolate \(\sigma_1^2/\sigma_2^2\):
\[P\!\left(\frac{S_1^2/S_2^2}{F_{\alpha/2, \nu_1, \nu_2}} \leq \frac{\sigma_1^2}{\sigma_2^2} \leq \frac{S_1^2/S_2^2}{F_{1-\alpha/2, \nu_1, \nu_2}}\right) = 1 - \alpha\]

Using the reciprocity property \(F_{1-\alpha/2, \nu_1, \nu_2} = 1/F_{\alpha/2, \nu_2, \nu_1}\):
\[\left[\;\frac{S_1^2/S_2^2}{F_{\alpha/2, n_1-1, n_2-1}}\;,\;\; \left(S_1^2/S_2^2\right) \cdot F_{\alpha/2, n_2-1, n_1-1}\;\right]\]
Exam-critical: The two F-quantiles have swapped degrees of freedom. The upper bound uses \(F_{\alpha/2, n_2-1, n_1-1}\), not \(F_{\alpha/2, n_1-1, n_2-1}\). This is the single most common trap in this topic.
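Once the two F values are in hand, the interval is mechanical. A stdlib sketch (critical values supplied from tables; var_ratio_ci is an illustrative name), run on the data of Example 3 in Section 4:

```python
def var_ratio_ci(s1_sq, s2_sq, f_a_12, f_a_21):
    # CI for sigma1^2 / sigma2^2.
    # f_a_12 = F_{alpha/2, n1-1, n2-1}; f_a_21 = F_{alpha/2, n2-1, n1-1} (swapped d.f.).
    ratio = s1_sq / s2_sq
    return ratio / f_a_12, ratio * f_a_21

# s1^2 = 36.5 (n1 = 10), s2^2 = 18.2 (n2 = 8); F_{0.025,9,7} = 4.823, F_{0.025,7,9} = 4.197
lo, hi = var_ratio_ci(36.5, 18.2, 4.823, 4.197)
print(round(lo, 3), round(hi, 2))
```

Passing the two critical values as separate named arguments forces you to write the d.f. ordering explicitly, which is exactly the discipline the exam trap demands.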

Node N9 — Section 4

Worked Examples

Example 1: Two-Sample z-CI for \(\mu_1 - \mu_2\) (Known Variances)

A factory uses two machines to produce steel rods. Machine A has known standard deviation \(\sigma_A = 0.8\) mm; Machine B has known \(\sigma_B = 1.0\) mm. Independent samples yield:

Machine A: \(n_A = 36\), \(\bar{x}_A = 25.3\) mm
Machine B: \(n_B = 49\), \(\bar{x}_B = 24.7\) mm

Construct a 95% CI for \(\mu_A - \mu_B\).

Step 1: Compute the point estimate \(\bar{x}_A - \bar{x}_B = 25.3 - 24.7 = 0.6\) mm.
Step 2: Compute the standard error \[SE = \sqrt{\frac{\sigma_A^2}{n_A} + \frac{\sigma_B^2}{n_B}} = \sqrt{\frac{0.64}{36} + \frac{1.00}{49}} = \sqrt{0.01778 + 0.02041} = \sqrt{0.03819} = 0.1954\]
Step 3: Form the CI. \(z_{0.025} = 1.96\).
Margin of error: \(1.96 \times 0.1954 = 0.3830\).
95% CI: \(0.6 \pm 0.3830 = \mathbf{[0.217,\; 0.983]}\).

Interpretation: We are 95% confident that the true difference in mean lengths is between 0.217 mm and 0.983 mm. Since the entire interval is above zero, Machine A produces rods that are significantly longer on average.

Example 2: Two-Sample t-CI with Pooled Variance

Two teaching methods are compared. Method 1: \(n_1 = 15\), \(\bar{x}_1 = 72.4\), \(s_1 = 6.1\). Method 2: \(n_2 = 12\), \(\bar{x}_2 = 68.9\), \(s_2 = 5.4\). Assume equal population variances and normal populations.

Construct a 95% CI for \(\mu_1 - \mu_2\).

Step 1: Pooled variance \[S_p^2 = \frac{(14)(6.1)^2 + (11)(5.4)^2}{15 + 12 - 2} = \frac{14 \times 37.21 + 11 \times 29.16}{25} = \frac{520.94 + 320.76}{25} = \frac{841.70}{25} = 33.668\] \[S_p = \sqrt{33.668} = 5.803\]
Step 2: Standard error \[SE = S_p\sqrt{\frac{1}{n_1} + \frac{1}{n_2}} = 5.803 \times \sqrt{\frac{1}{15} + \frac{1}{12}} = 5.803 \times \sqrt{0.1500} = 5.803 \times 0.3873 = 2.247\]
Step 3: Form the CI. d.f. = 25, \(t_{25, 0.025} = 2.060\).
Point estimate: \(72.4 - 68.9 = 3.5\).
Margin of error: \(2.060 \times 2.247 = 4.629\).
95% CI: \(3.5 \pm 4.629 = \mathbf{[-1.129,\; 8.129]}\).

Interpretation: The interval includes zero. We cannot conclude at the 5% level that the teaching methods differ in their mean outcomes. The data are consistent with both methods being equally effective.

Example 3: Variance-Ratio CI via the F-Distribution

Two ore-processing plants, OreToGo and Ore.com, were tested for consistency in their daily throughput (in tonnes). Independent samples:

OreToGo: \(n_1 = 10\), \(s_1^2 = 36.5\)
Ore.com: \(n_2 = 8\), \(s_2^2 = 18.2\)

Construct a 95% CI for \(\sigma_1^2/\sigma_2^2\).

Step 1: Point estimate. Ratio of sample variances: \(S_1^2/S_2^2 = 36.5/18.2 = 2.0055\).
\(\nu_1 = 10-1 = 9\), \(\nu_2 = 8-1 = 7\).
Step 2: F critical values. From F-tables:
\(F_{0.025, 9, 7} = 4.823\) (upper 2.5% point, d.f. = 9, 7)
\(F_{0.025, 7, 9} = 4.197\) (upper 2.5% point, d.f. = 7, 9) — note the swapped d.f.
Step 3: Form the CI. Lower bound: \(\dfrac{2.0055}{4.823} = 0.4158\).
Upper bound: \(2.0055 \times 4.197 = 8.417\).
95% CI for \(\sigma_1^2/\sigma_2^2\): \(\mathbf{[0.416,\; 8.42]}\).

Interpretation: Since the interval includes 1.0, we cannot reject the hypothesis that the two population variances are equal at the 5% level. The apparent difference in sample variances could be due to sampling variability.

Node N9 — Section 5

Pattern Recognition & Examiner Traps

Trap 1: Asymmetric F-quantiles (the #1 exam trap in N9). The F-distribution is not symmetric. Students often mistakenly believe that \(F_{1-\alpha/2}\) is simply the negative of \(F_{\alpha/2}\), or that the two bounds use the same d.f. ordering. In reality, the two critical values involve swapped degrees of freedom.
WRONG: Using \(F_{0.025, 9, 7}\) for both bounds, or trying to negate the critical value. This gives a completely wrong interval.
RIGHT: The lower bound divides by \(F_{\alpha/2, \nu_1, \nu_2}\); the upper bound multiplies by \(F_{\alpha/2, \nu_2, \nu_1}\) (swapped d.f.). Always write out the d.f. explicitly for each bound.
Trap 2: Using the separate SE in a pooled t-CI. When the problem states "assume equal variances," you must use the pooled variance \(S_p^2\). Using the unpooled SE \(\sqrt{s_1^2/n_1 + s_2^2/n_2}\) with a t-critical value on \(n_1+n_2-2\) d.f. is internally inconsistent.
WRONG: \((\bar{x}_1 - \bar{x}_2) \pm t \cdot \sqrt{s_1^2/n_1 + s_2^2/n_2}\): an unpooled SE with pooled d.f. is inconsistent.
RIGHT: Use \(S_p\sqrt{1/n_1 + 1/n_2}\) with d.f. = \(n_1+n_2-2\) when equal variances are assumed. Use Welch's method only if the problem explicitly requires it (but Welch's is non-examinable in PTS2).
Trap 3: Overlapping one-sample CIs vs. a CI for the difference. A frequent conceptual error is concluding "the means are not significantly different" because two one-sample CIs overlap. This is wrong. Overlapping 95% individual CIs do not imply the difference is non-significant at the 5% level.
WRONG: "CI for \(\mu_1\) is [20, 30] and CI for \(\mu_2\) is [22, 32]; they overlap, so there is no significant difference."
RIGHT: You must construct a CI for \(\mu_1 - \mu_2\) directly. Only if that interval contains zero can you conclude no significant difference.
Trap 4: Using pooled proportion in the CI (wrong!) In a CI for \(p_1 - p_2\), the standard error uses the separate sample proportions \(\hat{p}_1\) and \(\hat{p}_2\). The pooled proportion is only used in the hypothesis test when \(H_0: p_1 = p_2\) (see N12). A CI does not assume the null hypothesis is true.
Examiner patterns:
  • "Assume the population variances are equal" — immediately signals use of \(S_p^2\) and the pooled t-CI.
  • "Construct a CI for the ratio of population variances" — immediately signals the F-distribution approach. Check carefully: which sample is in the numerator?
  • When given \(F_{\alpha, \nu_1, \nu_2}\) in the question — examiners often only give one F-value and expect you to compute the other using the reciprocity property.
  • If zero is not in the CI for \(\mu_1 - \mu_2\), that is equivalent to rejecting \(H_0: \mu_1 = \mu_2\) at level \(\alpha\).

Node N9 — Section 6

Connections

N9 connects forward and backward:
  • ← N8 (One-Sample CIs): The pivotal quantity framework from N8 is reused here with multi-sample quantities. Every CI formula in N9 is just the N8 template applied to a difference or ratio.
  • ← N6 (Sampling Distributions): The formulas for \(\text{Var}(\bar{X}_1 - \bar{X}_2)\) and the F-distribution ratio come directly from N6.
  • → N10-N12 (Hypothesis Testing): Every two-sample CI has a direct hypothesis-test counterpart. A two-sample z-test for \(\mu_1 - \mu_2\) uses the same SE as the two-sample z-CI. The decision "reject/do not reject" based on a hypothesis test at level \(\alpha\) is exactly equivalent to checking whether the null value is inside/outside the \((1-\alpha)\) CI.
  • → N11 (Power): The power of a two-sample test depends on the same SE that appears in the N9 CI formula.

Node N9 — Section 7

Summary Table

Parameter: \(\mu_1 - \mu_2\). Conditions: independent samples, both \(\sigma\) known.
  Pivotal quantity: \(Z = \dfrac{(\bar{X}_1-\bar{X}_2)-(\mu_1-\mu_2)}{\sqrt{\sigma_1^2/n_1+\sigma_2^2/n_2}} \sim N(0,1)\)
  CI: \((\bar{X}_1-\bar{X}_2) \pm z_{\alpha/2}\sqrt{\sigma_1^2/n_1+\sigma_2^2/n_2}\)

Parameter: \(\mu_1 - \mu_2\). Conditions: independent, normal, \(\sigma_1^2 = \sigma_2^2\) unknown.
  Pivotal quantity: \(T = \dfrac{(\bar{X}_1-\bar{X}_2)-(\mu_1-\mu_2)}{S_p\sqrt{1/n_1+1/n_2}} \sim t(n_1+n_2-2)\)
  CI: \((\bar{X}_1-\bar{X}_2) \pm t_{n_1+n_2-2,\,\alpha/2} \cdot S_p\sqrt{1/n_1+1/n_2}\)

Parameter: \(p_1 - p_2\). Conditions: large \(n_1, n_2\).
  Pivotal quantity: \(Z \approx N(0,1)\) with separate \(\hat{p}_i\) in the SE
  CI: \((\hat{p}_1-\hat{p}_2) \pm z_{\alpha/2}\sqrt{\frac{\hat{p}_1(1-\hat{p}_1)}{n_1}+\frac{\hat{p}_2(1-\hat{p}_2)}{n_2}}\)

Parameter: \(\sigma_1^2/\sigma_2^2\). Conditions: independent, normal.
  Pivotal quantity: \(\dfrac{S_1^2/\sigma_1^2}{S_2^2/\sigma_2^2} \sim F(n_1-1, n_2-1)\)
  CI: \(\left[\dfrac{S_1^2/S_2^2}{F_{\alpha/2,\nu_1,\nu_2}},\;\; \left(S_1^2/S_2^2\right) \cdot F_{\alpha/2,\nu_2,\nu_1}\right]\)
Variances Add, Never Subtract: The SE for a difference always involves \(\sqrt{v_1 + v_2}\), never \(\sqrt{v_1 - v_2}\). Variances are always non-negative, and the variance of a difference of independent variables is the sum of their variances.
F-Distribution, Swapped d.f.: The upper bound of the variance-ratio CI uses \(F_{\alpha/2, \nu_2, \nu_1}\), not \(F_{\alpha/2, \nu_1, \nu_2}\). The degrees of freedom swap between the lower and upper bounds. This is the most common numerical trap in N9.
CI vs Test for Proportions: In a CI for \(p_1 - p_2\), use separate proportions in the SE. In a hypothesis test of \(H_0: p_1 = p_2\), use the pooled proportion. These are different formulas for different purposes.
CI Contains Zero ⇔ Do Not Reject: If 0 is inside the \((1-\alpha)\) CI for \(\mu_1 - \mu_2\), you cannot reject \(H_0: \mu_1 = \mu_2\) at level \(\alpha\). This is a direct consequence of the inversion principle.

Node N9 — Section 8

Self-Assessment

Test your understanding before moving to N10:

Can you do all of these?
  • Given \(n_1 = 40\), \(\bar{x}_1 = 55.2\), \(\sigma_1 = 5.0\) and \(n_2 = 35\), \(\bar{x}_2 = 52.1\), \(\sigma_2 = 4.5\), construct a 90% z-CI for \(\mu_1 - \mu_2\). [Answer: SE \(= \sqrt{25/40 + 20.25/35} = 1.097\); \(3.1 \pm 1.645 \times 1.097 = [1.295,\; 4.905]\).]
  • Given \(n_1 = 10\), \(\bar{x}_1 = 18.4\), \(s_1 = 3.2\) and \(n_2 = 14\), \(\bar{x}_2 = 15.7\), \(s_2 = 2.8\), with the assumption of equal variances, construct a 95% two-sample t-CI for \(\mu_1 - \mu_2\). [Answer: pooled \(S_p^2 = 194.08/22 = 8.822\), \(S_p = 2.970\), SE = 1.230, d.f. = 22, \(t_{22, 0.025} = 2.074\), margin of error = 2.551, CI = [0.149, 5.251].]
  • Explain why you would NOT construct a two-sample t-CI with pooled variance if the sample standard deviations are \(s_1 = 1.0\) and \(s_2 = 8.0\). [Answer: the sample variances differ by a factor of 64, strongly suggesting unequal population variances.]
  • Given \(s_1^2 = 25\), \(n_1 = 8\), \(s_2^2 = 9\), \(n_2 = 12\), construct a 90% CI for \(\sigma_1^2/\sigma_2^2\). Use \(F_{0.05, 7, 11} = 2.76\) and \(F_{0.05, 11, 7} = 3.36\). [Answer: ratio = 2.778; lower = \(2.778/F_{0.05, 7, 11} = 2.778/2.76 = 1.006\), upper = \(2.778 \times F_{0.05, 11, 7} = 2.778 \times 3.36 = 9.333\). Note the d.f. ordering: divide by \(F_{\alpha/2, \nu_1, \nu_2}\), multiply by \(F_{\alpha/2, \nu_2, \nu_1}\).]
  • Two proportion CIs: \(\hat{p}_1 = 0.42\) from \(n_1 = 150\), \(\hat{p}_2 = 0.35\) from \(n_2 = 180\). Construct a 95% CI for \(p_1 - p_2\). [Answer: SE = 0.0537, CI = \(0.07 \pm 1.96 \times 0.0537 = [-0.035,\; 0.175]\).]
  • Explain, in words, what it means if a 95% CI for \(\mu_1 - \mu_2\) is [3.2, 8.7]. [Answer: We are 95% confident the true difference is between 3.2 and 8.7. Since the entire interval is positive, \(\mu_1\) is significantly larger than \(\mu_2\) at the 5% level.]
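The numerical items above can be checked by machine. A quick stdlib sketch for the z-interval and variance-ratio questions (F critical values taken from the question as given):

```python
from math import sqrt

# 90% z-CI: n1=40, xbar1=55.2, sigma1=5.0; n2=35, xbar2=52.1, sigma2=4.5
se = sqrt(5.0**2 / 40 + 4.5**2 / 35)            # standard error of the difference
print(round(se, 3))
print(round(3.1 - 1.645 * se, 3), round(3.1 + 1.645 * se, 3))

# 90% CI for sigma1^2/sigma2^2: divide by F_{0.05,7,11}=2.76, multiply by F_{0.05,11,7}=3.36
ratio = 25 / 9
print(round(ratio / 2.76, 3), round(ratio * 3.36, 3))
```

Scripting the check is a good habit: the d.f. ordering in the variance-ratio bounds is precisely where hand calculations go wrong.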

High-Leverage Questions

HLQ: Exam-Style Question with Worked Solution

14 MARKS · VARIANCE RATIO + MEAN DIFFERENCE · MULTI-PART

Two mining companies, OreToGo and Ore.com, extract and process ore. Independent samples of daily throughput (in tonnes) were taken:

OreToGo: \(n_1 = 13\), \(\bar{x}_1 = 182.5\), \(s_1 = 7.3\)
Ore.com: \(n_2 = 10\), \(\bar{x}_2 = 175.2\), \(s_2 = 5.8\)

You may assume the daily throughputs are normally distributed for both companies.

(a) Calculate the pooled sample variance \(S_p^2\). (2 marks)

(b) Construct a 95% confidence interval for the difference in mean daily throughputs \(\mu_1 - \mu_2\), assuming equal population variances. (5 marks)

(c) You are given that \(F_{0.025, 12, 9} = 3.868\) and \(F_{0.025, 9, 12} = 3.439\). Construct a 95% confidence interval for the variance ratio \(\sigma_1^2/\sigma_2^2\). (5 marks)

(d) Based on your answers to parts (b) and (c), comment on whether the two companies have significantly different means and significantly different variances. (2 marks)


Part (a): Pooled Sample Variance \[S_p^2 = \frac{(n_1-1)S_1^2 + (n_2-1)S_2^2}{n_1 + n_2 - 2}\] \[= \frac{12 \times 7.3^2 + 9 \times 5.8^2}{13 + 10 - 2} = \frac{12 \times 53.29 + 9 \times 33.64}{21}\] \[= \frac{639.48 + 302.76}{21} = \frac{942.24}{21} = 44.869\] \[S_p = \sqrt{44.869} = 6.698\]
Part (b): 95% CI for \(\mu_1 - \mu_2\). Point estimate: \(182.5 - 175.2 = 7.3\).
Standard error: \[SE = S_p\sqrt{\frac{1}{n_1} + \frac{1}{n_2}} = 6.698 \sqrt{\frac{1}{13} + \frac{1}{10}} = 6.698 \times \sqrt{0.17692} = 6.698 \times 0.4206 = 2.817\]
d.f. = 21, \(t_{21, 0.025} = 2.080\).
Margin of error: \(2.080 \times 2.817 = 5.860\).
95% CI: \[7.3 \pm 5.860 = \mathbf{[1.44,\; 13.16]}\]
Part (c): 95% CI for \(\sigma_1^2/\sigma_2^2\). Ratio of sample variances: \(S_1^2/S_2^2 = 53.29/33.64 = 1.5842\).
\(\nu_1 = 12\), \(\nu_2 = 9\).

Lower bound: \(\dfrac{1.5842}{F_{0.025, 12, 9}} = \dfrac{1.5842}{3.868} = 0.4096\).
Upper bound: \(1.5842 \times F_{0.025, 9, 12} = 1.5842 \times 3.439 = 5.448\).
95% CI: \(\mathbf{[0.410,\; 5.45]}\)
Part (d): Interpretation. Means (part b): The CI [1.44, 13.16] does not contain zero, so at the 5% level, OreToGo's mean throughput is significantly higher than Ore.com's.

Variances (part c): The CI [0.410, 5.45] does contain 1.0, so we cannot reject the hypothesis of equal variances at the 5% level. The assumption of equal variances used in part (b) is reasonable.

Combined: The two companies differ significantly in their means but not in their variances. OreToGo processes more ore on average with comparable consistency.
Summary of answers:
(a) \(S_p^2 = 44.869\), \(S_p = 6.698\).
(b) 95% CI for \(\mu_1 - \mu_2\): [1.44, 13.16]. Significantly different from zero.
(c) 95% CI for \(\sigma_1^2/\sigma_2^2\): [0.410, 5.45]. Includes 1, so variances may be equal.
(d) Means differ significantly; variances do not differ significantly.
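A quick numerical check of the HLQ arithmetic (stdlib sketch; tabulated critical values taken from the question as given):

```python
from math import sqrt

# Part (a): pooled variance from n1=13, s1=7.3 and n2=10, s2=5.8
sp2 = (12 * 7.3**2 + 9 * 5.8**2) / (13 + 10 - 2)
sp = sqrt(sp2)

# Part (b): 95% t-CI for the mean difference, t_{21, 0.025} = 2.080
se = sp * sqrt(1 / 13 + 1 / 10)
lo_b, hi_b = 7.3 - 2.080 * se, 7.3 + 2.080 * se

# Part (c): variance-ratio CI with F_{0.025,12,9} = 3.868 and F_{0.025,9,12} = 3.439
ratio = 7.3**2 / 5.8**2
lo_c, hi_c = ratio / 3.868, ratio * 3.439

print(round(sp2, 3), round(lo_b, 2), round(hi_b, 2), round(lo_c, 3), round(hi_c, 2))
```

The printed values reproduce the hand-worked answers: \(S_p^2 = 44.869\), CI (b) = [1.44, 13.16], CI (c) = [0.410, 5.45].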