Table of Contents for Modern Statistics for the Life Sciences

How to teach this text    xiv
An introduction to analysis of variance    1
Model formulae and geometrical pictures    1
The basic principles of ANOVA    2
    What happens when we calculate a variance?    3
    Partitioning the variability    4
    Partitioning the degrees of freedom    8
    F-ratios    9
    Presenting the results    14
The geometrical approach for an ANOVA    16
    Melons    20
    Dioecious trees    21
What kind of data are suitable for regression?    22
How is the best fit line chosen?    23
The geometrical view of regression    26
Confidence and prediction intervals    33
    Confidence intervals    33
    Prediction intervals    33
Conclusions from a regression analysis    35
    A strong relationship with little scatter    35
    A weak relationship with lots of noise    36
    Small datasets and pet theories    38
    Significant relationships-but that is not the whole story    39
    Large residuals    40
    Influential points    41
The role of X and Y-does it matter which is which?    42
    Does weight mean fat?    45
    Dioecious trees    46
Models, parameters and GLMs    47
Populations and parameters    47
Expressing all models as linear equations    48
Turning the tables and creating datasets    52
    Influence of sample size on the accuracy of parameter estimates    54
    How variability in the population will influence our analysis    55
Using more than one explanatory variable    56
Why use more than one explanatory variable?    56
    Leaping to the wrong conclusion    56
    Missing a significant relationship    57
Elimination by considering residuals    59
Two types of sum of squares    61
    Eliminating a third variable makes the second less informative    62
    Eliminating a third variable makes the second more informative    64
Urban foxes-an example of statistical elimination    65
Statistical elimination by geometrical analogy    68
    Partitioning and more partitioning    68
    Picturing sequential and adjusted sums of squares    71
    The cost of reproduction    73
    Investigating obesity    75
Designing experiments-keeping it simple    76
Three fundamental principles of experimental design    76
    Replication    76
    Randomisation    78
    Blocking    80
The geometrical analogy for blocking    85
    Partitioning two categorical variables    85
    Calculating the fitted model for two categorical variables    86
The concept of orthogonality    88
    The perfect design    88
    Three pictures of orthogonality    91
    Growing carnations    93
    The dorsal crest of the male smooth newt    95
Combining continuous and categorical variables    96
Reprise of models fitted so far    96
Combining continuous and categorical variables    97
    Looking for a treatment for leprosy    97
    Sex differences in the weight-fat relationship    99
Orthogonality in the context of continuous and categorical variables    102
Treating variables as continuous or categorical    104
The general nature of General Linear Models    106
    Conservation and its influence on biomass    108
    Determinants of the Grade Point Average    109
Interactions-getting more complex    110
The factorial principle    110
Analysis of factorial experiments    112
What do we mean by an interaction?    115
Presenting the results    117
    Factorial experiments with insignificant interactions    117
    Factorial experiments with significant interactions    120
    Error bars    123
Extending the concept of interactions to continuous variables    127
Mixing continuous and categorical variables    127
Adjusted means (or least square means in models with continuous variables)    129
Confidence intervals for interactions    130
Interactions between continuous variables    131
Is the story simple or complicated?    133
Is the best model additive?    133
Checking the models I: independence    136
    Same conclusion within and between subsets    140
    Creating relationships where there are none    140
    Concluding the opposite    141
    Single summary approach    142
    The multivariate approach    145
Detecting non-independence    148
    Germination of tomato seeds    149
    How non-independence can inflate sample size enormously    151
    Combining data from different experiments    152
Checking the models II: the other three assumptions    153
Homogeneity of variance    153
Model criticism and solutions    157
    Histogram of residuals    158
    Normal probability plots    160
    Plotting the residuals against the fitted values    163
    Transformations affect homogeneity and normality simultaneously    166
    Plotting the residuals against each continuous explanatory variable    167
    Solutions for nonlinearity    168
    Hints for looking at residual plots    172
Predicting the volume of merchantable wood: an example of model criticism    173
Selecting a transformation    178
    Stabilising the variance    181
    Stabilising the variance in a blocked experiment    181
    Lizard skulls    183
    Checking the 'perfect' model    184
Model selection I: principles of model choice and designed experiments    186
The problem of model choice    186
Three principles of model choice    189
    Economy of variables    189
    Multiplicity of p-values    191
    Considerations of marginality    192
    Model choice in the polynomial problem    193
Four different types of model choice problem    195
Orthogonal and near orthogonal designed experiments    196
    Model choice with orthogonal experiments    196
    Model choice with loss of orthogonality    198
Looking for trends across levels of a categorical variable    201
    Testing polynomials requires sequential sums of squares    206
    Partitioning a sum of squares into polynomial components    207
Model selection II: datasets with several explanatory variables    209
Economy of variables in the context of multiple regression    210
    R-squared and adjusted R-squared    210
    Prediction intervals    213
Multiplicity of p-values in the context of multiple regression    217
    The enormity of the problem    217
    Possible solutions    217
Automated model selection procedures    220
    How stepwise regression works    220
    The stepwise regression solution to the whale watching problem    221
Whale watching: using the GLM approach    225
    Finding the best treatment for cat fleas    229
    Multiplicity of p-values    231
What are random effects?    232
    Distinguishing between fixed and random factors    232
    Why does it matter?    234
Four new concepts to deal with random effects    234
    Components of variance    234
    Expected mean square    235
    Nesting    236
    Appropriate denominators    237
A one-way ANOVA with a random factor    238
A two-level nested ANOVA    241
    Nesting    241
Mixing random and fixed effects    244
Using mock analyses to plan an experiment    247
    Examining microbial communities on leaf surfaces    253
    How a nested analysis can solve problems of non-independence    254
Categorical data: the basics    255
    Contingency table analysis    255
    When are data truly categorical?    257
The Poisson distribution    258
    Two properties of a Poisson process    258
    The mathematical description of a Poisson distribution    259
    The dispersion test    261
The chi-squared test in contingency tables    265
    Derivation of the chi-squared formula    265
    Inspecting the residuals    267
General linear models and categorical data    269
    Using contingency tables to illustrate orthogonality    269
    Analysing by contingency table and GLMs    271
    Omitting important variables    276
    Analysing uniformity    277
    Soya beans revisited    279
    Fig trees in Costa Rica    280
Generalised Linear Models    281
Multiple y variables, repeated measures and within-subject factors    283
Answers to exercises    285
Revision section: The basics    317
Populations and samples    317
Three types of variability: of the sample, the population and the estimate    318
    Variability of the sample    318
    Variability of the population    319
    Variability of the estimate    319
Confidence intervals: a way of precisely representing uncertainty    322
The null hypothesis-taking the conservative approach    324
    Two sample t-test    327
    Alternative tests    328
    One and two tailed tests    329
Appendix 1: The meaning of p-values and confidence intervals    332
What is a confidence interval?    334
Appendix 2: Analytical results about variances of sample means    335
Introducing the basic notation    335
Using the notation to define the variance of a sample    335
Using the notation to define the mean of a sample    336
Defining the variance of the sample mean    336
To illustrate why the sample variance must be calculated with n - 1 in its denominator (rather than n) to be an unbiased estimate of the population variance    337
Appendix 3: Probability distributions    339
Confirming simulations    341