Paul J. Mitchell

Experimental Design and Statistical Analysis for Pharmacology and the Biomedical Sciences



Over the last 25 years or so, I have become increasingly involved in teaching the fundamentals of statistical analysis of experimental data: initially to pharmacy and pharmacology undergraduates but, more recently, to undergraduates in other disciplines (e.g. natural sciences, biomedical sciences, biology, biochemistry, psychology, and toxicology) and to postgraduate students and early‐career researchers in these and more specific areas of pharmacological research (e.g. neuropharmacology). Throughout this time, I have become increasingly aware of the statistical rigour that scientific journals require before papers are approved for publication. This, however, has been coupled with increased anxiety on the part of both new and experienced researchers as to whether their statistical approach is correct for the data generated by their studies. These observations suggest that, in the past, the teaching of experimental design and statistical analysis has been poor across the sector. Indeed, if I mention stats (sic) to most researchers, they hold their hands to their face and perform an imitation of Edvard Munch's Der Schrei der Natur (‘The Scream of Nature’, more commonly known simply as ‘The Scream’)! Statistical analysis is often viewed as a burdensome and inconvenient chore, an attitude generally born of ignorance and a lack of appreciation of how useful rigorous statistical analysis can be.

       [Image: Der Schrei der Natur (circa 1893), Edvard Munch (1863–1944)]

      I'll give you three examples:

      Example 1:

      The principal problem here is ignorance of the fact that rigorous statistical analysis is an integral component of good experimental design; consequently, the statistical methodology to be employed must be decided before the experiments are performed. In my experience, this stems from historically poor teaching of statistics in pharmacology across the sector, such that those now responsible for teaching pharmacology to current undergraduates or newly qualified graduates (whether in academia or the pharmaceutical industry) are themselves at a disadvantage and too naive to understand the importance of rigorous statistical analysis. As a result, they are unable to provide the high‐quality supervision that less experienced individuals need to develop and hone their experimental technique.

      Example 2:

      I was once stopped in the corridor by a fellow post‐doc (and close friend) who described a series of experiments, involving cell culture in different mediums, which they were unsure how to analyse. Essentially, the post‐doc had a single flask of a particular CHO cell line and was trying to determine which of three mediums promoted the best cell growth. Three further flasks were prepared, each containing a different medium, and a sample of the cell line was decanted into each of the three test flasks. Sometime later, three samples were taken from each flask (so nine samples in total) and the number of cells per unit volume was determined. The question was: ‘How do I analyse the data? Do I do a number of t‐tests (are they paired)? Do I do ANOVA? And if so, which post hoc test?’ (Don't worry, I'll explain all these terms later in the book.)

      I looked at the data, checked I had the right information about the design of this simple experiment and said, ‘Sorry, you don't have enough data for statistical analysis – you only have an n of one in each case’. The post‐doc stared at me quizzically and said, ‘Don't be daft, I have n of 3 for each medium!’. ‘Er…, no!’, I replied, ‘You estimated the cell numbers in triplicate, but that only gives you n = 1 in each case. All you've done is obtain an estimate of the precision, and hopefully the accuracy, of your measurements, but that doesn't change the fact that you've only got n = 1 for each flask’. ‘No, no, no!’ the post‐doc strenuously exclaimed, ‘I have n = 3 in each case, three samples from each flask for the different mediums!’. ‘Er, no!’, I replied (at the risk of repeating myself), ‘If you wanted to do this properly then you should have prepared three flasks for each medium (so nine flasks in total) and decanted the same volume of CHO cells into each flask. Sometime later, you should then have taken three samples from each flask (so 27 samples in total) and estimated the cell number in each case. You would then calculate the average for each flask, giving you a more accurate measure of the cell concentration in each flask (thanks to the measures in triplicate). This would give you three independent measures for each medium, which you could analyse by one‐way ANOVA followed by a Tukey All Means post hoc test’. (Don't worry about these terms; all will become clear later in the book. I just included them to impress you, whet your appetite for what is to come and to try to convince you I know what I'm talking about!)

      The post‐doc looked at me aghast! ‘I don't have time for that!’, came the reply, ‘I have a group meeting with my Prof. this afternoon and I need to present this data so we can discuss which medium to use in our future studies – our latest grant proposal depends on demonstrating that one of these mediums is significantly different from the others, so I need to subject these data to statistical analysis!’. I looked at the summary bar chart the post‐doc had prepared from the data, and it was clear from the eye‐ball test (this is probably one of the best tests to use to appreciate data and is very simple to perform – I'll reveal how later in the book!) that one of the mediums showed clear advantages in terms of cell growth over the others. ‘Just look at your data’, I said, ‘Medium X is