Table of Contents for Practical Statistics for Students

Chapter/Section Title | Page # | Page Count
PREFACE | 1 | 2
1 INTRODUCTION | 3 | 2
  1.1 What do we mean by statistics? | 3 | 1
  1.2 Why is statistics necessary? | 3 | 1
  1.3 The purpose of the text | 4 | 1
  1.4 The limitations of statistics | 4 | 1
2 MEASUREMENT CONCEPTS | 5 | 3
  2.1 Sources of data | 5 | 1
  2.2 Populations, parameters, samples, and statistics | 5 | 1
  2.3 Descriptive and inferential statistics | 6 | 1
  2.4 Parametric and non-parametric statistics | 7 | 1
3 CLASSIFYING DATA | 8 | 6
  3.1 Scales of measurement | 8 | 1
  3.2 The nominal scale | 8 | 1
  3.3 The ordinal scale | 8 | 1
  3.4 The interval scale | 9 | 1
  3.5 The ratio scale | 9 | 1
  3.6 Discrete and continuous variables | 9 | 1
  3.7 Limits of numbers | 10 | 1
  3.8 The frequency table | 11 | 1
  3.9 Steps in the construction of Table 3 | 12 | 2
4 PRESENTING DATA | 14 | 8
  4.1 Introduction | 14 | 1
  4.2 Bar graph | 14 | 1
  4.3 Histogram | 14 | 1
  4.4 Frequency polygon | 15 | 2
  4.5 Smoothed frequency polygon | 17 | 2
  4.6 Cumulative frequency graph or ogive | 19 | 1
  4.7 The circle or pie graph | 19 | 3
5 MEASURING TYPICAL ACHIEVEMENT | 22 | 15
  5.1 Introduction | 22 | 1
  5.2 Calculating the mean from ungrouped data | 22 | 1
  5.3 Calculating the mean from grouped data | 23 | 1
  5.4 A short method of calculating the mean from grouped data | 24 | 1
  5.5 The median | 25 | 1
  5.6 Calculating the median | 26 | 1
  5.7 Summary | 27 | 1
  5.8 The mode | 27 | 1
  5.9 Choosing a measure of central tendency | 28 | 1
  5.10 Use the mean | 29 | 1
  5.11 Use the median | 29 | 1
  5.12 Use the mode | 29 | 1
  5.13 The normal curve | 29 | 3
  5.14 A practical application of the normal probability curve | 32 | 1
  5.15 Some mathematical characteristics of the normal probability curve | 33 | 4
6 MEASURING VARIATIONS IN ACHIEVEMENT | 37 | 9
  6.1 Introduction | 37 | 1
  6.2 The range | 37 | 1
  6.3 Average deviation (A.D.) | 37 | 1
  6.4 The standard deviation (S.D.) | 38 | 1
  6.5 Calculating the standard deviation from ungrouped data | 39 | 2
  6.6 Calculating the standard deviation from grouped data | 41 | 2
  6.7 Variance | 43 | 1
  6.8 Coefficient of variation (V) | 43 | 1
  6.9 The quartile deviation (Q) | 43 | 2
  6.10 The usefulness of Q | 45 | 1
7 MEASURING RELATIVE ACHIEVEMENT | 46 | 20
  7.1 Introduction | 46 | 1
  7.2 Percentiles | 46 | 1
  7.3 Method 1: Calculating percentile points | 47 | 2
  7.4 Method 2: Calculating percentile ranks for individual scores | 49 | 1
  7.5 Standard scores or Z scores | 50 | 1
  7.6 Example 1 | 51 | 1
  7.7 Example 2 | 52 | 3
  7.8 More examples | 55 | 3
  7.9 Sigma, Hull and T-scales | 58 | 1
  7.10 Sigma scale | 59 | 1
  7.11 The Hull scale | 60 | 1
  7.12 T-scale | 61 | 1
  7.13 Example problem | 62 | 2
  7.14 Grading | 64 | 1
  7.15 Example | 65 | 1
8 MEASURING ASSOCIATION | 66 | 22
  8.1 Introduction | 66 | 1
  8.2 Departure from independence between two factors | 67 | 1
  8.3 Magnitude of subgroup differences | 68 | 1
  8.4 Summary of pair-by-pair comparisons | 69 | 5
  8.5 Proportional reduction in error measures of association | 74 | 2
  8.6 Measures involving correlation | 76 | 5
  8.7 Calculating the product moment correlation coefficient r; method 1 | 81 | 1
  8.8 Calculating the product moment correlation coefficient r; method 2 | 82 | 1
  8.9 Rank order correlation coefficients | 83 | 1
  8.10 Kendall's rank order correlation coefficient (tau) | 84 | 1
  8.11 The correlation coefficient eta (η) | 84 | 2
  8.12 Some further thoughts on relationships | 86 | 1
  8.13 The coefficient of determination | 86 | 2
9 REGRESSION ANALYSIS | 88 | 14
  9.1 Introduction | 88 | 1
  9.2 Simple linear regression | 88 | 7
  9.3 Multiple regression | 95 | 3
  9.4 Using the coefficient of determination in multiple regression analysis | 98 | 1
  9.5 Calculating the coefficient of multiple determination; method 1 | 98 | 2
  9.6 Calculating the coefficient of multiple determination; method 2 | 100 | 2
10 INFERENTIAL STATISTICS | 102 | 18
  10.1 Introduction | 102 | 1
  10.2 Sampling methods | 102 | 1
  10.3 Simple random sampling | 102 | 1
  10.4 Systematic sampling | 103 | 1
  10.5 Stratified sampling | 103 | 1
  10.6 Cluster sampling | 103 | 1
  10.7 Stage sampling | 103 | 1
  10.8 Sampling error | 103 | 3
  10.9 Levels of confidence | 106 | 3
  10.10 t distributions | 109 | 3
  10.11 Degrees of freedom | 112 | 1
  10.12 Hypothesis formulation and testing | 113 | 1
  10.13 Statistical significance | 114 | 2
  10.14 One-tailed and two-tailed tests | 116 | 1
  10.15 Type 1 and Type 2 errors | 116 | 1
  10.16 Independent and dependent variables | 117 | 1
  10.17 Correlated and uncorrelated data | 118 | 1
  10.18 Parametric and non-parametric statistics: some further observations | 118 | 2
11 CHOOSING AN APPROPRIATE TEST | 120 | 5
  11.1 Advice | 120 | 1
  11.2 Format tabulation | 121 | 4
12 DESIGN 1 ONE GROUP DESIGN: SINGLE OBSERVATIONS ON ONE VARIABLE | 125 | 13
  12.1 Using the chi square one-sample test | 126 | 1
  12.2 Using the G-test | 127 | 2
  12.3 Using the binomial test | 129 | 3
  12.4 Using the Kolmogorov-Smirnov one-sample test | 132 | 2
  12.5 Using the one-sample runs test | 134 | 1
  12.6 A probability test for use with Likert-type scales | 135 | 3
13 DESIGN 2 ONE GROUP DESIGN: ONE OBSERVATION PER SUBJECT ON EACH OF TWO OR MORE VARIABLES | 138 | 32
  13.1 Using the Pearson product moment correlation coefficient | 138 | 3
  13.2 Using simple linear regression | 141 | 2
  13.3 Using Spearman's rank order correlation coefficient (rho) | 143 | 5
  13.4 Using Kendall's rank order correlation coefficient (tau) | 148 | 3
  13.5 Using the point biserial correlation coefficient | 151 | 2
  13.6 Using the correlation coefficient tetrachoric r | 153 | 1
  13.7 Using partial correlation | 154 | 2
  13.8 Using Kendall's partial rank correlation coefficient (tau 12.3) | 156 | 1
  13.9 Using the multiple correlation coefficient R | 157 | 2
  13.10 Using multiple regression analysis | 159 | 4
  13.11 Using the percentage difference | 163 | 1
  13.12 Using the phi coefficient (φ) | 164 | 1
  13.13 Using Yule's Q | 165 | 1
  13.14 Using the contingency coefficient | 166 | 2
  13.15 Using Cramer's V | 168 | 1
  13.16 Choosing a measure of association | 169 | 1
14 DESIGN 3 ONE GROUP DESIGN: REPEATED OBSERVATIONS ON THE SAME SUBJECTS UNDER TWO CONDITIONS OR BEFORE AND AFTER TREATMENT | 170 | 11
  14.1 Using the t test for correlated data | 170 | 2
  14.2 Using the Wilcoxon matched-pairs signed-ranks test | 172 | 4
  14.3 Using the McNemar test for the significance of change | 176 | 2
  14.4 Using the sign test | 178 | 3
15 DESIGN 4 ONE GROUP--MULTI-TREATMENT (TRIALS): TREATMENTS AS INDEPENDENT VARIABLE | 181 | 16
  15.1 Using the one-way analysis of variance for correlated means (with repeated measures on the same sample or separate measures on matched samples) | 181 | 4
  15.2 Using the Friedman two-way analysis of variance by ranks (χ²r) | 185 | 4
  15.3 Using Kendall's coefficient of concordance (W) | 189 | 4
  15.4 Using the Cochran Q test | 193 | 2
  15.5 Using Page's L trend test | 195 | 2
16 DESIGN 5 TWO GROUP DESIGNS: STATIC COMPARISONS ON ONE OR MORE VARIABLES | 197 | 44
  16.1 Using the t test for independent samples (pooled variance) | 197 | 3
    The difference between two proportions in two independent samples | 200 | 1
  16.2 Using the t test for independent samples (separate variance) | 201 | 1
  16.3 Using the Mann-Whitney U test (for small samples, N2 no greater than 8) | 202 | 2
  16.4 Using the Mann-Whitney U test (for moderately large samples, N2 between 9 and 20) | 204 | 2
  16.5 Using the Mann-Whitney U test (for large samples, N2 greater than 20) | 206 | 2
  16.6 Using chi square (2 x k) | 208 | 2
  16.7 Using the G-test in a 2 x k contingency table | 210 | 1
  16.8 Using the Kolmogorov-Smirnov two-sample test | 211 | 2
  16.9 Using the Walsh test | 213 | 2
  16.10 Using the Wald-Wolfowitz runs test | 215 | 3
  16.11 Using the Fisher exact probability test | 218 | 2
  16.12 The analysis of 2 x 2 contingency tables | 220 | 9
  16.13 A median test for 2 x 2 tables | 229 | 3
  16.14 The analysis of k x n contingency tables | 232 | 9
17 DESIGN 6 MULTI GROUP DESIGN: MORE THAN TWO GROUPS, ONE SINGLE VARIABLE | 241 | 30
  17.1 Using one-way analysis of variance, independent samples (fixed effects, completely randomized model) | 241 | 7
  17.2 Using the Kruskal-Wallis one-way analysis of variance by ranks | 248 | 4
  17.3 Using a k-sample slippage test | 252 | 2
  17.4 Using the Jonckheere trend test | 254 | 3
  17.5 Using chi square in k x n tables | 257 | 7
  17.6 Using different measures of association between ordered variables: Goodman and Kruskal's gamma, Somers' d and Kendall's tau-b | 264 | 3
  17.7 Using Guttman's lambda (λyx) as an asymmetrical measure of association for nominal level data | 267 | 4
18 DESIGN 7 MULTIVARIATE ANALYSIS. FACTORIAL DESIGNS--THE EFFECT OF TWO INDEPENDENT VARIABLES ON THE DEPENDENT VARIABLE | 271 | 12
  18.1 Multivariate analysis in contingency tables | 276 | 1
  (a) No repeated measures on factors
  18.2 Using two-way analysis of variance--single observation on separate groups | 277 | 6
19 DESIGN 8 FACTORIAL DESIGNS--THE EFFECT OF TWO INDEPENDENT VARIABLES ON THE DEPENDENT VARIABLE | 283 | 6
  (b) Repeated measures on ONE factor
  19.1 Using two-way analysis of variance--repeated observations on one factor | 283 | 6
20 DESIGN 9 FACTORIAL DESIGNS--THE EFFECT OF TWO INDEPENDENT VARIABLES ON THE DEPENDENT VARIABLE | 289 | 8
  (c) Repeated measures on BOTH factors | 289 | 8
  20.1 Using the two-way analysis of variance--repeated observations on both factors | 290 | 7
Appendices | 297 | 63
  1 Table of random numbers | 297 | 1
  2 Table of factorials | 298 | 2
  3(a) Percentage scores under the normal curve from 0 to Z | 300 | 2
  3(b) Probabilities associated with values as extreme as observed values of Z in the normal curve of distribution | 302 | 2
  3(c) Table of Z values for r | 304 | 1
  4 Chi square (χ²) distribution | 305 | 1
  5 t distribution | 306 | 2
  6 F distribution | 308 | 4
  7 Tukey test: percentage points (q) of the Studentized range | 312 | 4
  8 Critical values of the product moment correlation coefficient | 316 | 1
  9 Critical values of the Spearman rank correlation coefficient | 317 | 1
  10 Critical values of the Kendall rank correlation coefficient | 318 | 2
  11 Critical values of the multiple correlation coefficient | 320 | 2
  12 Estimates of the tetrachoric correlation coefficient for various values of ad/bc | 322 | 1
  13(a) Binomial test | 323 | 1
  13(b) Binomial coefficients | 323 | 1
  14 Critical values of D in the Kolmogorov-Smirnov one-sample test | 324 | 1
  15 Critical values of K in the Kolmogorov-Smirnov two-sample test | 325 | 1
  16 Critical values of R in the one-sample runs test | 326 | 1
  17 Critical values of T in the Wilcoxon test for two correlated samples | 327 | 1
  18 Critical values of χ²r in the Friedman two-way analysis of variance | 328 | 1
  19 Critical values of L in the Page trend test | 329 | 1
  20 Critical values of the Mann-Whitney U test | 330 | 1
  21 Critical values for the Walsh test | 331 | 1
  22 Wald-Wolfowitz runs test | 332 | 4
  23 Critical values for the Fisher exact test | 336 | 15
  24 Probabilities associated with values as large as observed values of H in the Kruskal-Wallis one-way analysis of variance by ranks | 351 | 1
  25 Critical values of S in the Kendall coefficient of concordance | 352 | 1
  26 Minimum values of S (one-tailed) in the Jonckheere trend test | 353 | 1
  27 Critical values of A (Sandler's statistic) | 354 | 4
  28 Natural cosines | 358 | 2
  29 Critical values in the k-sample slippage test | 358 | 2
Bibliography | 360 | 2
Index | 362 |