Experimental Design and Statistical Analysis for Pharmacology and the Biomedical Sciences
statement refers to one of the other very common statistical tests: the IBO test (Er…, IBO is defined as ‘It's Bloody Obvious’). ‘That's no good’, replied the post doc, ‘Prof likes to see stars (so thump him, I thought but didn't say out loud), and the more stars the better and happier he'll be’. (So, thump him harder? At this point, I'm left with the thought: what is the difference between a five-year-old starting Primary School and a 50-year-old Prof leading a prestigious research group? Answer: absolutely nothing, they both like stars, preferably gold!) ‘Not with these data’, I said resignedly, ‘you've only got n = 1 for each medium and you can't perform rigorous statistical analysis with such a paucity of data.’ ‘Never mind’, said the post doc, ‘I'll do it my way. Thanks.’ and disappeared down the corridor repeatedly muttering ‘1-way ANOVA and some bloody post hoc test’. My words were left hanging in the ether!
The problem here is ignorance about n numbers leading to poor experimental design: a lack of understanding of the term ‘triplicate’ and of the consequences this has for statistical analysis. It is a very simple error, and one that is commonly seen.
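To make the ‘triplicate’ point concrete, here is a minimal sketch (in Python, with invented absorbance values – the medium names and numbers are purely illustrative and not taken from the study above) of why three technical replicates from a single culture still amount to n = 1:

```python
import numpy as np

# Hypothetical absorbance readings: ONE culture per growth medium,
# each culture measured in triplicate (technical replicates).
medium_a = np.array([0.52, 0.55, 0.53])  # three readings, one culture
medium_b = np.array([0.61, 0.63, 0.60])  # three readings, one culture

# WRONG: treating the technical replicates as independent observations
# inflates n to 3 per group and invites spurious 'stars'.
# CORRECT: the triplicate readings are averaged, so each culture
# contributes a single value and n = 1 per medium.
print("Medium A, n = 1:", medium_a.mean())
print("Medium B, n = 1:", medium_b.mean())

# With n = 1 per group, no rigorous statistical test (1-way ANOVA or
# otherwise) can be performed; the experiment must be repeated with
# independent cultures to build up a genuine n.
```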
Example 3:
Publication of research in scientific journals depends on the peer review system (where experienced researchers act as referees on papers submitted to journals for publication). A friend of mine excitedly showed me a paper he had just been requested to review for a very reputable journal. He was thrilled on two counts: first, the fact that he had been asked to act as referee by this journal was a form of tacit recognition of his own expertise; second, the paper was written by a group he had long admired and, indeed, I think he hoped he could join this group in the future to further his own career. The following day we met, and he looked very downcast. It transpired that he had spent a couple of hours looking through the manuscript the previous evening, and it soon became apparent that the statistical analysis used by the authors was not appropriate to the experimental design and the type of data generated by the studies described. It was clear that he was going to have to reject the paper in its current state and advise that publication would only be acceptable if the statistical analysis was completely revised. He duly submitted his report to the journal. I didn't hear anything more for a month or so, but eventually he showed me a letter he had subsequently received from the journal. The letter thanked him for his time and effort in reviewing the manuscript, the tactful way he had written his review and the conscientious and constructive way he had dealt with the statistical shortcomings of the paper. Included also were the comments made by the other referees (they were all in agreement in rejecting the manuscript, and for the same reasons) and the subsequent response from the authors. To cut a long story short, the authors rejected the comments made by the editor and referees and, in particular, were scathing about the comments regarding their statistical analysis. Their final comment was (and I paraphrase here), ‘This is the way we've always analysed our data. We see no reason to change now and so we'll continue to do it our way!’. The paper was rightly rejected by the journal. However, six months later (or so) my friend barged into my office and threw a paper on my desk. ‘Look at this!’, he exclaimed (getting increasingly louder as he muttered words which I can't repeat here but which seriously questioned the marital status of the parents of those unfortunate individuals related in some way to the document now lying in front of me). The paper was the manuscript he (and others) had rejected earlier, but published almost word for word in a different journal! So, while it may be difficult to teach a dog new tricks (see above), success probably depends on the dog being willing to learn something afresh – it's just unfortunate that some dogs are too arrogant and set in their ways to consider and adopt new methods, or to accept that perhaps what they did in the past was not the best approach.
The problem here is that we all get set in our ways and once we know that a particular technique works then we are loath to change it – even in the face of convincing arguments that our methodology could (should?) be improved or that our current methods are just plain wrong! The use of statistics in pharmacology has improved markedly over the last 15–20 years and bears little resemblance to the techniques used when I first started my career in pharmacology over 45 years ago. Even so, I still see example after example of manuscripts submitted to reputable scientific journals where the statistical analysis employed is wrong; indeed, the majority of manuscripts that I reject (I would estimate about 90%) are rejected on the basis of inappropriate (wrong) statistical analysis.
Experimental design and statistical analysis is not a burdensome and inconvenient chore – stats is fun and very rewarding! In fact, statistics is only burdensome and inconvenient to those who don't know what they're doing (or why) and are not prepared to learn or understand a few simple basic rules. I am an experimental pharmacologist with over 45 years' experience in designing a wide range of preclinical experiments in pharmacology (both in vitro and in vivo), and through those years I have learnt the basic rules of statistical analysis and how such analyses should be coupled with the process of experimental design. Statistics, to me, is not an academic exercise but simply, in general terms, a tool by which I better understand the data generated by my experiments, one that enables me to make clear, concise and accurate conclusions about the changes I observe in my data (usually induced by drug treatment – I am a pharmacologist after all). Statistics is a tool (not in the derogatory sense), and just as a cabinet maker knows his tools and produces exquisite pieces of furniture (with each tool having a specific purpose), so the scientist needs to understand his statistical tools to analyse data and draw appropriate conclusions.
I have taught statistics for the better part of 25 years and throughout this time I have always focused on three points.
1 There are different types of data. Identifying the type of data you have is important since this directly determines the type of statistical analysis that may be used on that data.
2 Once both the type of data and statistical test are identified, it is then a simple matter of running the appropriate test on the data. These days, with current computing power and the plethora of statistical analysis packages available, it is no longer necessary to perform statistical tests by hand using lookup tables; it is simply a matter of ensuring the data is input into the stats package in the right format and then pressing the right button. This book will provide screen images of the output from some of the most common commercially available statistical packages (principally GraphPad Prism, InVivoStat, Minitab, and SPSS) so you'll know what to expect and look for when you use these packages to analyse your own experimental data.
3 Each statistical test you perform will produce an output. But how do you interpret that output? This book will explain how and help you to draw the appropriate conclusions from the statistical analysis of your own data (a brief illustration of points 2 and 3 follows this list).
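As a foretaste of points 2 and 3, the following minimal sketch (in Python using SciPy, rather than the packages named above, and with invented data) shows the workflow of running an appropriate test and then interpreting its output:

```python
from scipy import stats

# Hypothetical data: a response measured in control and drug-treated
# groups, n = 6 independent observations per group.
control = [4.1, 3.8, 4.5, 4.0, 3.9, 4.3]
treated = [5.2, 4.9, 5.5, 5.1, 4.8, 5.4]

# Point 2: the data are continuous measurements from two independent
# groups, so an unpaired t-test is an appropriate choice here.
t_stat, p_value = stats.ttest_ind(control, treated)

# Point 3: interpret the output -- the p-value indicates how likely a
# difference at least this large would be if the null hypothesis were true.
print(f"t = {t_stat:.3f}, p = {p_value:.4f}")
if p_value < 0.05:
    print("Difference unlikely to be due to chance alone (p < 0.05).")
```

The order matters, of course: identifying the type of data (point 1) must come before pressing the button, whichever package you use.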
So, you have a set of data and you know what type of data it is and what type of test you need to use to analyse that data. That directly leads to the correct conclusions – easy, isn't it! Well, perhaps not quite that easy, but this book will show you how. It will demystify statistical analysis so that, hopefully, if at some time in the future I said to you ‘Stats’, you wouldn't run into the distance under a tumultuous orange sky with your hands held to your face and your mouth in the shape of a polo mint, but instead you will smile benignly and, with a glint in your eye, say ‘Bring it on!’.
PJM 04/01/2021
1 Introduction
Experimental design: the important decision about statistical analysis
Whenever you make plans for your annual holiday, you do not just pack your suitcase willy‐nilly without first making plans about what you want to do, where you want to go, how you are going to get there, etc. For example, if your idea is to go trekking around the coast of Iceland, then you would look really stupid if, on arrival in Reykjavik, you opened your suitcase only to find beachwear and towels! Indeed, identifying what you want to do on holiday and where you intend to go determines what you need to take with you and what travel arrangements you need to make. In fact, what you do on holiday can be viewed as the final output of your holiday arrangements. The same can be said for the design of any well‐planned, robust, scientific experiment. The final output of your