**Mixed models. Experimental design.**

*Topics*

1. Introduction. Mixed models.
2. Estimation method. The H-method.
3. Randomized designs.
4. Split-plot experiments.
5. Repeated measures data.
6. Case study.
7. Random effects models.
8. Analysis of covariance.
9. Examples.
10. Random coefficient models.
11. Between and within variation.
12. Graphics in analysis.
13. Spatial models.
14. Generalized linear mixed models in regression.
15. Case study.
16. Non-linear mixed models.
17. Confidence intervals.
18. Detection of special features.
19. Sensitivity analysis.
20. Case study.
21. Guidelines for presentation of results.

In the analysis of industrial and
scientific data it is often useful to be able to formulate a detailed model for
the given data. Mixed model theory provides this flexibility. In
general terms a mixed model can be written as

**y** = **Xb** + **Zu** + **e**,

where **X** and **Z** are the design matrices, **b** is the vector of
fixed-effects parameters, and **u** is the vector of random-effects parameters.
The standard methods assume that the data provide a full-rank model; the
traditional estimation methods therefore break down when the data have reduced rank.
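The model equation **y** = **Xb** + **Zu** + **e** can be made concrete with a small simulation. The sketch below (all sizes, variances, and variable names are invented for illustration) builds a random-intercept model: **X** holds the fixed-effects design, **Z** assigns each observation to a group, and **u** and **e** are drawn from normal distributions.

```python
import numpy as np

rng = np.random.default_rng(0)

n, p, q = 12, 2, 3  # observations, fixed effects, random effects (illustrative sizes)

# Fixed-effects design: intercept plus one covariate
X = np.column_stack([np.ones(n), rng.normal(size=n)])

# Random-effects design: each observation belongs to one of q groups
groups = rng.integers(0, q, size=n)
Z = np.zeros((n, q))
Z[np.arange(n), groups] = 1.0

b = np.array([1.0, 2.0])             # fixed-effects parameters
u = rng.normal(scale=0.5, size=q)    # random effects, here u ~ N(0, 0.25 I)
e = rng.normal(scale=0.1, size=n)    # residual error

y = X @ b + Z @ u + e                # the mixed model y = Xb + Zu + e
print(y.shape)                       # (12,)
```

Each row of **Z** contains a single 1, so **Zu** simply adds the group's random intercept to every observation in that group.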
The idea of the H-method is to build up a solution by parts, where each part is
optimized with respect to the predictive performance of the model. The estimation in
mixed models stops when the predictions derived from the model cannot be improved
by including further parts. Significance testing becomes more reliable when
the parameters have been estimated in a model that gives stable predictions.
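The stopping rule described above (keep adding parts only while predictions improve) can be illustrated with a generic forward-selection sketch. This is only an illustration of the stopping idea, not the actual H-method algorithm; the function names and the cross-validation criterion are assumptions made here for the example.

```python
import numpy as np

def cv_mse(X, y, k=3):
    """Average squared prediction error of least squares over k folds."""
    n = len(y)
    folds = np.array_split(np.arange(n), k)
    errs = []
    for test in folds:
        train = np.setdiff1d(np.arange(n), test)
        beta, *_ = np.linalg.lstsq(X[train], y[train], rcond=None)
        errs.append(np.mean((y[test] - X[test] @ beta) ** 2))
    return float(np.mean(errs))

def build_by_parts(X, y):
    """Add columns (parts) one at a time; stop when prediction error
    can no longer be improved by including a further part."""
    selected = [0]                      # start with the first column (intercept)
    best = cv_mse(X[:, selected], y)
    for j in range(1, X.shape[1]):
        trial = cv_mse(X[:, selected + [j]], y)
        if trial < best:                # keep the part only if predictions improve
            selected.append(j)
            best = trial
        else:
            break                       # stop: no further improvement
    return selected, best
```

The key point the sketch shares with the text is the stopping criterion: the model grows part by part, and estimation halts as soon as an added part fails to improve the predictions.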