Abstract
This article clarifies the role and value of three types of evidence used in empirical research – anecdotes derived from case studies or small samples of data, fictions (including both thought experiments and works of art such as novels and plays) and statistics. The conclusion is that all three have an important part to play. Many conventional stereotypes are deeply unhelpful: contrary to the usual assumptions, science is often dependent on anecdote and fiction for exploring possibilities, qualitative research is often statistical in spirit, and social science is more likely to lead to useful conclusions about future possibilities if it draws on anecdotes and fictions.
Introduction
I wrote the first version of this article nearly 20 years ago when I was teaching the statistical end of research methods for students in a business school. This was frustrating because it side-lined the idea of searching for, and exploring, interesting and potentially useful possibilities. Things that have only happened occasionally, or have never happened, or are impossible in some sense, were just not of interest to the statistical mindset. I submitted my paper to a journal which seemed appropriate, but they turned it down because, I suspect, the words ‘anecdote’ and ‘fiction’ were anathema to serious scientists. So, I turned to other projects until, many years later, I received an email about Possibility Studies and Society, read the Manifesto (Glăveanu, 2023), and the possibility of resuscitating a revised version of the article seemed viable. Empirical methodologies – that is, ‘combination[s] of techniques used to enquire into a specific situation’ (Easterby-Smith et al., 2002) – obviously need to incorporate approaches for obtaining information about the situation in question. I want to suggest three general categories for these approaches which I will call anecdote, fiction, and statistics. My initial focus was management research, but essentially the same points apply to social sciences in general, and many apply to natural science as well.
By anecdote I mean a description of, or a story about, a particular case or event: for example, qualitative researchers often collect anecdotes from interview data. Fiction, of course, is similar to anecdote except that it is invented, and so at first sight seems irrelevant to researching the real world. This is, however, an unhelpfully restrictive view: thought experiments are, by definition, fictional, and yet are an important technique in many spheres of inquiry, and the process of imagining future, as yet unrealised, possibilities – which, of course, is what some science fictions do – is of obvious benefit in many social inquiries. And statistics are, of course, widespread in research in many areas. The purpose of the article is to explore the role and value of all three of these building blocks of empirical inquiry.
For example, Popper’s falsificationism (Popper, 1980) recommends an initial period of hypothesis generation – which may be inspired by anecdotes, or by fiction or by statistical results – and then a testing phase which is essentially a search for anecdotes which falsify the hypothesis. And there are many recipes for various kinds of research based on statistics, and on case studies. My focus here is on the building blocks – anecdote, fiction, and statistics – not on the recipe for the whole methodology.
The next three sections explore what I mean by anecdote, fiction, and statistics in more detail. I then discuss the problem of stereotypes – such as the idea that case study research should not involve statistics, and the idea that scientific approaches should be statistical – which, if taken seriously, seriously restrict the range of options considered by researchers in many disciplines. The penultimate section sketches an example to show how anecdote, fiction and statistics might fit together. This example is based on a real study, but is fictionalised, which illustrates one of the advantages of fiction discussed below.
Anecdote
By anecdotes I mean descriptions of particular cases or events. They are the basis of case study research, any research where the detail of unique cases is reported or analysed, journalism in the sense of reporting the news, and ‘creative nonfiction’ (Akin, 2000). The use of the word anecdote is often taken to imply that they are amusing or unreliable, but my use of the term here presupposes only that anecdotes are interesting from the perspective of the research. The key property is that they focus on particular incidents or cases.
The word ‘anecdotal’ is often used in a derogatory sense because anecdotes may be selected to prove whatever the narrator wants to prove (‘just an anecdote’), and because they do not appear to be a suitable basis for generalising to a wider context. There are, however, circumstances in which anecdotes are of obvious value. Take the case of a management creed promoted by the gurus but which does not usually work in practice. Suppose that an example of successful practice has been discovered. This would be an example of what is possible which may be worth studying and perhaps emulating. The example may be chosen because it shows the creed in a positive light, and there may be no basis for statistical generalisations, but it still illustrates an interesting possibility: what can happen, not necessarily what will happen with any specifiable probability. As well as explaining, or providing hints about, how the creed has worked, the detail in such an illustrative case may also be helpful for bringing concepts to life, for making them ‘stick’ (see the discussion of fiction below). There are many other contexts in which such illustrative research may be useful for similar reasons: for example, an analysis of cases of people with natural immunity to a disease may be helpful for the management of that disease.
I have chosen to use the word ‘anecdote’ rather than ‘case study’ because it covers incidental observations as well as formal case studies. Imagine we are conducting a survey on people’s experiences of internet scams, and one respondent comments that ze fell for a scam late at night after having too much to drink – this is a potentially interesting comment but not part of a formal case study.
To be studied, these possibilities must first be discovered. This may entail studying large populations, not to arrive at statistical generalisations, but to find illustrative examples from whose study useful progress may be made. Decision procedures for guiding such ‘prospecting research’ have been derived (Christy & Wood, 1999; Wood & Christy, 1999, 2001); these are, of course, entirely distinct from statistical decision procedures, and provide guidance on how large a sample should be and whether it is worth extending a given sample. The concept of ‘saturation’ (Saunders et al., 2017) is another approach to this issue.
Having been discovered, these possibilities may be useful for helping to develop theories about the phenomena in question, and for testing predictions about individual cases (Robinson & McAdams, 2015). However, anecdotes cannot, of course, be used in any rigorous sense for demonstrating the validity or invalidity of statistical laws. They are also of limited value for supporting universal laws which assert that something is always the case. They may, however, be useful for demonstrating that such universal laws are false. The assertion, for example, that a particular management creed always works, can be falsified by a single anecdote about its failure. This is the basis of Popper’s description of how science works (Popper, 1980). It is, however, of limited interest in many social sciences because non-trivial universal laws (as opposed to statistical patterns) are rare. The main value of anecdote in social science is that it can illustrate what is possible.
A good example (i.e. an anecdote) of the consequences of ignoring this role for anecdote is provided by the results of an experiment on the test for athletes to see if they have been taking the drug nandrolone (Mackay, 2000). This demonstrated, on the basis of a very small sample, that a positive test result could be caused by ingesting legitimate food supplements. This result was dismissed by a spokesman for the International Amateur Athletic Federation as having no scientific validity, largely because of the small sample used. However, it is this argument that has no validity: the experiment demonstrates the possibility, which is all that is being claimed. This episode illustrates well the dangers of the blinkers imposed by the crude statistical paradigm, which tends to dismiss research which demonstrates that something is possible as ‘mere anecdote’.
In the natural sciences anecdotes often play an even stronger role than this. Contrary to Popper’s ideas, Einstein’s theory of general relativity was widely regarded as being confirmed by observations made during the total eclipse of 1919. The general law was considered confirmed by an anecdote about what happened at a single event. There was no sample of eclipses; just the single event was viewed as sufficient.
Some social science research does claim that similar generalisations can be made from a single case: for example, Newton et al. (2003, p. 152) argue that their case study organisation was ‘typical’, and so their findings are ‘generalisable’ to other ‘similar’ situations. The difficulty here, of course, is that ‘similar’ in this context is a fuzzy concept, and the noise created by uncontrollable variables means that generalisations are rarely as secure as they are in physics.
The relativity example also illustrates one of the problems with anecdotes: what is ostensibly the same event may be interpreted in very different ways. According to Hawking (1988) ‘later examination of the photographs taken on that expedition [to view the eclipse] showed the errors were as great as the effect they were trying to measure. Their measurement had been sheer luck, or a case of knowing the result they wanted to get, not an uncommon occurrence in science’. (p. 32)
This emphasises the importance of a critical attitude to anecdotes, but it does not detract from the value of anecdotes for demonstrating what is possible.
Fiction
Anecdotes can only be used to explore possibilities that have happened, and that we can find. To go beyond these, to possibilities that may be feasible but of which we can’t find examples, to possibilities that may or may not be feasible, and to ‘possibilities’ that can be imagined but are not in fact possible, we need to turn to thought experiments or hypothetical examples. I will use the term ‘fiction’ to cover these because they are, in essence, inventions, made up, but also because some of these inventions may stem from literary fictions, films, plays and so on.
Anecdotes and statistics are empirical assertions about reality in a reasonably straightforward sense. Fictions are not intended as empirical assertions about reality, but they can be regarded as assertions about a possible or hypothetical reality, which may be useful for understanding ‘real’ reality. In what circumstances, then, is it a valuable research ploy to make fictional ‘empirical’ assertions?
There are a surprising variety of such circumstances – although this is perhaps less surprising if we reflect on the role of fiction in everyday life. Many books, TV programmes and films tell fictional stories. All cultures throughout human history have had important myths and stories. We often find fictional stories more interesting than accounts of real events. And apart from storytelling, we need to consider hypothetical futures – most of which will never come to pass – which are used to plan for the future. From this perspective it is hardly surprising if fiction has an important role to play in research.
Some people claim they can learn more about life – real life, one assumes – from fictional works than from science. For example, the novelist, Julian Barnes, is quoted in the (UK) Guardian newspaper (29 July 2000): ‘When I read non-fiction I am often aware that it is merely a masquerade of the truth. When you read the great and beautiful liars of fiction you feel that this is what life is. This is true even though it is all made up’.
In a similar vein, the ‘parables of leadership’ recounted by Kim and Mauborgne (1992) were intended to ‘capture the unseen space of leadership’ – which they feel is difficult to achieve by more direct approaches.
Mar and Oatley (2008) provide what is in effect an explanation for the power of some fictions, which is neatly summarised by the title of their article: ‘The Function of Fiction is the Abstraction and Simulation of Social Experience’. Literary fictions ‘offer models or simulations of the social world via abstraction, simplification, and compression’ which are of obvious relevance to theorising about the real world.
The insights these writers are referring to tend to be of the form: ‘in this sort of situation it is plausible that this sort of thing will happen because …’ and the fiction will then delve into details of characters’ state of mind, and so on. The result may be that readers end up with a deeper understanding of the situations depicted in the stories. Taylor (2000) refers to the audience for his plays ‘get[ting] the idea in the gut. The ideas may then bubble up to their heads. This is important to me because I believe that ideas that are communicated intellectually generally don’t stick. The idea needs to be in someone’s gut and their head for it to stick’ (p. 305). And of course, if these ideas are to result in productive research, it can only help if they are understood, at a deep level, by researchers and their audience.
Fiction is also used in sciences such as physics. It is not, however, called fiction; instead, fabricated stories are called thought experiments, or examples, or, if aimed at beginners, problems or exercises. The conclusions are not fictions, but the ‘data’ on which they are based are.
Some of the great theories of physics derive from thought experiments. Einstein’s special relativity (one of whose conclusions is that E = mc²) was prompted by imagining someone travelling at the speed of light (Bernstein, 1973, p. 39). This would be impossible in practice but is possible in a thought experiment. Such thought experiments can do more than provide inspiration: they can provide a convincing means of establishing the truth of a conjecture. To take a more accessible example, it is possible to demonstrate that heavy objects must fall at the same speed as light objects (ignoring the effect of air resistance) by the following thought experiment:
Imagine two 1 kg weights falling freely. They will obviously fall at the same speed. Now imagine the two weights are joined by a piece of string. This will obviously make no difference to the speed of fall, but the two weights can now be regarded as one 2 kg weight, which obviously falls at the same speed as the two 1 kg weights. The argument can obviously be extended to demonstrate that any two weights will fall at the same speed.
In this case, it would be quite easy to perform these experiments for real, but if you accept the assertions described as ‘obvious’ as being obvious, then there is no need. A thought experiment proves the point. We can deduce real truths from fictitious stories. (This is not to deny that the term ‘obvious’ deserves some consideration, and that there might be situations where one can be misled by what appears obvious but is in fact false. Reality may be a helpful check on thought experiments.)
One of the common approaches used in logic, which underpins a lot of mathematics and the sciences which depend on mathematics, is reductio ad absurdum. This involves imagining something is possible and then proving that its consequences are impossible, so the original assumption must be wrong. For example, to prove there can be no lowest positive rational number (a number that can be expressed as a fraction like 2/5 or 31/67), we imagine there is a lowest positive rational number and divide it by 2. The answer is obviously a lower positive rational number, so the supposed lowest rational number cannot be the lowest because we have found one which is lower. The possibility we started with turns out to be a fiction. Arguably, Einstein’s thought experiment described above is a similar example, because one of the results of special relativity is that travel at the speed of light is impossible.
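The reductio can be set out as a short formal sketch (with ‘positive’ made explicit, since the rational numbers as a whole are unbounded below):

```latex
% Sketch: there is no smallest positive rational number.
\begin{proof}
Suppose, for contradiction, that $q$ is the smallest positive rational
number, so $q = a/b$ for some positive integers $a$ and $b$. Then
$q/2 = a/(2b)$ is also a positive rational number, and $0 < q/2 < q$,
contradicting the minimality of $q$. Hence no smallest positive rational
number exists.
\end{proof}
```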
On a more mundane level, the ‘examples’ in mathematics and science textbooks represent the application of general principles to particular circumstances. The (fictional) stories behind these particular circumstances are necessary to make the ideas ‘stick’, as anyone who has tried to understand a piece of mathematical reasoning without working through examples will appreciate.
For these reasons, stories of various types are often important for theorising. Factual anecdotes may play a similar role, but fiction has several advantages over fact:
Fictions generally require less research. It may be easier to make things up than to find out real facts. I have used fictional examples in this article when real examples would have added little to the argument. I invented the assertion below that men get paid 30% more than women: this is a trivial example where little would have been gained by using a genuine statistic. On the other hand, this is not to deny that many writers of fiction devote considerable effort to researching their topic so that they get the background right. The details may be invented, but the background context, and the way the situation ‘works’ are, in essence, true. Einstein’s use of thought experiments obviously depended on a very deep intuitive understanding of nature.
Fictions can be designed to test particular parts of a theory. A fictional story – a hypothetical situation – can be made up to put a theory to the most severe test imaginable – thus satisfying Popper’s requirement that one should really try hard to find stringent tests for a theory.
Fictional stories can be substituted for anecdotes that could not be researched and told for practical or ethical reasons, or because of the sensitivities of the people or organisations involved. Journalists may fictionalise details of individuals to protect their identities; management researchers may write up an analysis of a fictional organisation for similar reasons; and in the example below about the professor of management I have substituted a fictional story for a true account.
Fictions can explore circumstances that have never happened. These may be possibilities which are worth encouraging, or they may be hypothetical circumstances whose feasibility we wish to explore. Writers of fantasy and science fiction may take this principle much further than conventional novelists – see, for example, Hanchett Hanson (2023). They may explore what happens if the background rules change. Other genres worth noting here are utopias and dystopias. There is a fascinating anthology of such visions over the last four millennia in Carey (1999).
Fictions can also be designed for impact and ‘stickiness’ – they can be designed to make the points in a way that ‘sticks’.
On the other hand, obviously, the advantage of factual anecdotes is that they have a built-in reality check. Accounts of the moon based on NASA data have more credibility than science fiction. But the fictional accounts do still have their place – for the reasons explained above.
These arguments apply to physical science and social science. However, in social sciences the rules tend to be less firm and well understood; we can play out a fictional situation using Newton’s laws and be pretty sure the answer will be right, but with a social situation can we have the same level of certainty that our imaginary playing out of the situation will correspond to what would happen in reality? Probably not.
This means that the role of fiction in social sciences is likely to lie in establishing possibilities rather than general or statistical laws: in demonstrating what might happen, rather than what will happen. Prospecting research (Christy & Wood, 1999) involves looking for empirical illustrations of possibilities, but the fictional mode extends this by including fictional possibilities – things that might happen in the future although they may not have done so far.
The value of this goes beyond exploring general principles. A key concern of many social sciences is how to best manage the future: a crucial aspect of this is envisaging a wide range of possibilities – some of which may be worth striving for, whereas others may prompt avoiding action. Initially, some of these possibilities must be invented; they start their life as fictions.
The fictional stories useful to social research are of a variety of types ranging from fully fledged stories to trivial examples. There are also simulations and role plays, which are designed to explore the consequences of hypothetical, or fictional, scenarios. A computer simulation of a variety of new production systems, for example, may enable problems to be foreseen and the best system to be selected without the necessity to perform a real experiment. What-if models on spreadsheets, and mathematical optimisation models, are all designed to compare several hypothetical scenarios with a view to choosing the best in some specified sense.
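As a minimal sketch of this what-if idea, the following invented example compares two hypothetical production-system designs by Monte Carlo simulation. Every detail here (cycle times, failure rates, the 8-hour shift) is made up purely for illustration; the point is only that fictional scenarios can be ‘played out’ and compared before committing to any of them in reality.

```python
import random

def simulate_throughput(mean_cycle_time, failure_rate, n_runs=10_000, seed=42):
    """Crude Monte Carlo of average daily output for a hypothetical
    production line. mean_cycle_time is minutes per unit; failure_rate is
    the chance a finished unit is scrapped. All figures are invented."""
    rng = random.Random(seed)
    outputs = []
    for _ in range(n_runs):
        minutes, made = 480.0, 0  # one 8-hour shift
        while minutes > 0:
            # Time to produce the next unit, drawn from an exponential
            # distribution with the given mean.
            minutes -= rng.expovariate(1 / mean_cycle_time)
            if minutes > 0 and rng.random() > failure_rate:
                made += 1
        outputs.append(made)
    return sum(outputs) / len(outputs)

# Compare two fictional designs: A is faster but scraps more units;
# B is slower but more reliable.
system_a = simulate_throughput(mean_cycle_time=5.0, failure_rate=0.10)
system_b = simulate_throughput(mean_cycle_time=6.0, failure_rate=0.02)
```

Neither system need exist for the comparison to be informative: the simulation tells us which hypothetical future is worth pursuing, which is exactly the role claimed for fiction above.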
Statistics
Statistics underpin much of the research reported in many social science journals. A sample of instances of whatever is of interest is observed and used as the basis for statistical generalisations. On average men get paid 30% more than women; performance related pay usually fails to improve productivity; standards of numeracy are lower in the UK than in other comparable countries; and so on. These are fictional examples, but they are adequate for my purposes, as discussed above.
The essential feature of statistical assertions is that they are based on the frequency of particular values of measurements, or categorisations, in the sample (Wood & Christy, 1999). To find out how much more men earn we need to know the frequency of earnings in each range so that we can work out averages. Similarly, it is not enough to have anecdotes of performance related pay failing to improve productivity; we need to know how often it happens so that we can see if words like ‘usually’ are justified.
The instances on which the statistics are based are, in effect, factual anecdotes, but the statistical researcher is not interested in the uniqueness of each case; all that is reported and analysed is the statistical summary of the sample, which, of course, is presumed to tell us about the underlying population. One of the key assumptions of statistical analysis is that cases are exchangeable (Draper et al., 1993); the sense in which they are unique is considered irrelevant.
In natural science, non-probabilistic laws – E = mc² on every occasion, not merely on average or on 80% of occasions – are feasible and so they are the goal; in the social sciences, everyone acknowledges that this is not often possible, so assertions qualified by provisos like ‘usually’ or ‘on average’ must do instead. Statistical assertions like these may be all that is feasible.
Most statistical research in the social sciences is observational; the situation is observed without any experimentation or manipulation of the situation. Sometimes, an experimental approach is taken – something is changed and the effects are assessed and compared with a control.
In either case, if the aim is to predict and manage what happens in the future (as it surely usually is), there are two important presuppositions:
The future situation will resemble the past – from which the sample of data is inevitably drawn – in relevant respects (This obviously does not apply if there is no intention to extrapolate to the future.).
The features of interest can be meaningfully and usefully summarised so that they can be compared or aggregated across cases.
To illustrate the problems when these presuppositions do not hold, consider research on companies whose main business is based on the internet. Statistical survey results based on the past – as they inevitably must be – are of limited use because the business and technological context is changing so rapidly. In particular, the most interesting types of internet business may not have been invented yet, so no amount of diligent statistical research can possibly uncover anything about them. The ideology of this mode of research – its unspoken assumptions – are conservative, in the sense that it cannot describe new possibilities, only old ones. If statistical research is to be useful, it is necessary to research a level at which the situation is not likely to change – for example, the biological level, or a behavioural, organisational or economic level at which internet-based companies are similar to other businesses. Only then is the conservatism inherent in the statistical approach reasonable.
The second presupposition is that the features of interest can be summarised as a number, a series of numbers, or a category, or in some other way that can be processed by statistics. If every case is seen as an individual, so that population summaries or comparisons cannot be made, then the statistical approach is not possible. The detail of the way an individual company works, or of an individual human being, may be too subtle to be captured in a form which can be processed statistically. Forcing it into a statistical straitjacket may result in a very shallow level of analysis devoid of any real insight. On the other hand, it is possible to take this argument too far; the surprisingly common assertion that no aspects of human subjectivity can be captured statistically is clearly dubious given the size of the attitude scaling industry.
As another illustration, consider the first regression model (Model 1) described by Dissanaike (1999). This model is based on a sample of share prices of large companies over a number of years – it is firmly based on statistical data. The model predicts the return which investors would receive from investing in one of the securities for a period of 4 years from the return they would have received if they had invested in it over the previous 4 years. The regression coefficient is −0.112, which means that, on average, a security with a level of returns 10% above the mean for the last 4 years would produce an expected return 1.12% below the mean over the next 4 years. On the other hand, if the returns over the previous 4 years were below the mean, the expected return over the next 4 years would be slightly above the mean. Needless to say, these are averages over a large number of companies and time periods; the R² value quoted (0.0413) suggests that this prediction is extremely unreliable. However, the negative regression coefficient does show that there is a very weak tendency for stocks which have done well over the last 4 years to do badly over the next 4 years, and vice versa. This provides support for the hypothesis of ‘investor overreaction’.
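The arithmetic behind such a coefficient can be illustrated with invented data. The sketch below fits an ordinary least squares slope to synthetic ‘past’ and ‘future’ 4-year returns in which a weak overreaction effect (a true slope of −0.1) is deliberately buried in heavy noise; this is not Dissanaike’s data or model, just a demonstration that a small negative slope can be recovered from very noisy observations.

```python
import random

def ols_slope(x, y):
    """Ordinary least squares slope for a one-predictor regression."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    var = sum((xi - mx) ** 2 for xi in x)
    return cov / var

rng = random.Random(0)
# Synthetic data: 'future' return = -0.1 * 'past' return + heavy noise.
past = [rng.gauss(0, 10) for _ in range(2000)]
future = [-0.1 * p + rng.gauss(0, 10) for p in past]

slope = ols_slope(past, future)  # recovers a weakly negative coefficient
```

As in the real study, the estimated slope is reliably negative even though the noise swamps the signal for any individual case, which is precisely why the conclusion is an average tendency rather than a prediction about particular stocks.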
What it does not show, of course, is that the fortunes of every stock will change, or that every investor overreacts. It just demonstrates a very slight tendency for the overreactors to outweigh the underreactors and the non-reactors but gives no further clues about the underlying causes. It is an empirical assertion, but, in many respects, rather remote from reality, although of obvious interest to investors.
It is also a rather fragile conclusion because it depends on the stability of investor psychology. If, say, this overreaction hypothesis became well known, and substantial numbers of investors start to buy stocks which have done badly in the past, the price of these stocks will be driven up and the statistical pattern would no longer be valid. The first of the two presuppositions above may not then be justified.
The obvious approach for exploring the reasons for investor overreaction, and the reasons why some investors do not overreact, would be to seek out some typical investors, and to study some of their decisions, and the detailed reasoning behind them. Such anecdotes may lead to further insights, which may be worth testing statistically.
The Stereotyping of Methodology
In many areas of social science discussions of methodology tend to distinguish two broad styles of research. For example, Easterby-Smith et al. (2002) contrast positivism and social constructionism; although they point out this is a stereotype, it is a stereotype which dominates many researchers’ thinking. In the positivist corner (Easterby-Smith et al., 2002, p. 30), we have ‘statistical probability’ and concepts that ‘can be measured’, implying a quantitative approach; these are both absent in the list of features of social constructionism. Hence, to adopt a positivist approach, statistical techniques and quantitative methods are necessary. Alternative terms for the first (positivist) style of research are ‘scientific’ or ‘quantitative’, and the second style is sometimes referred to as ‘qualitative’ or ‘phenomenological’.
It is obvious that terms such as ‘scientific’ and ‘quantitative’, and ‘social constructionist’ and ‘qualitative’, do not mean the same thing. But their use reveals the stereotype of two types of researchers. Either you are a positivist researcher, and you use statistics and numbers and do not use qualitative data and do not believe that reality is socially constructed, or you are in the other qualitative camp, and you use neither statistics nor numbers.
At first sight, the three categories explored in this paper seem to reinforce and extend this stereotype. Positivist research is based on statistics, qualitative research is based on anecdotes, and fiction corresponds to another category not normally dignified by the title research – the idea that insights gained from works of imagination are of value for understanding reality. However, this simple correspondence is far from accurate.
In practice, qualitative researchers often study a sample of cases and arrive at generalisations such as ‘most X are Y’. This is a statistical conclusion. The features studied may be subtle, and the research may require detailed investigation of individual cases, and the conclusion may be, from a statistical viewpoint, very simple, but it is still a statistical conclusion in that it concerns how frequently different things happen. Qualitative research is typically partly based on anecdotes, and partly based on statistics. As Miles and Huberman (1994, p. 40) put it in their book on qualitative data analysis: ‘… numbers and words are both needed if we are to understand the world’.
This means that some of the methods and precautions of statistics may be appropriate for some qualitative research. Statistical conclusions are largely meaningless if based on inappropriate samples, and if the samples are appropriate but small, confidence intervals or hypothesis tests may be a useful way of assessing the extent of sampling error, and answering questions about whether other samples would yield similar results.
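To make the point concrete, a standard Wilson score interval shows how much sampling uncertainty attaches to a ‘most X are Y’ claim from a small sample. The figures here (9 of 12 interviewees reporting some experience) are hypothetical:

```python
import math

def wilson_interval(successes, n, z=1.96):
    """95% Wilson score interval for a proportion -- a standard interval
    that behaves sensibly even for the small samples typical of
    qualitative studies."""
    p = successes / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return centre - half, centre + half

# Hypothetical study: 9 of 12 interviewees report a given experience (75%),
# but the interval runs from roughly 47% to 91%.
low, high = wilson_interval(9, 12)
```

The width of the interval is the statistical caution: with 12 cases, ‘most’ is plausible but far from demonstrated, and another sample of the same size could easily tell a different story.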
We have seen above that natural science – the supposed source of the idea of positivism and using scientific method in management research – makes use of fiction and anecdotes, and that many of its conclusions are not at all statistical. Science typically involves carefully contrived experiments to test hypotheses or measure parameters. The application of statistical methods to random samples of people or events is not part of the standard approach. Statistics may have a role on the fringes – analysing errors for example – but it is not central to the way most science works. Science is about imagining possibilities and searching out extreme or otherwise interesting circumstances to test ideas. Statistics is only necessary when there is no powerful theory; when all we can do is try to find out what will probably happen, or what will happen on average. It is the last resort, not the pinnacle of the scientific method. There is thus no justification for the often-automatic identification of a scientific approach with statistics.
The categories presented here – anecdote, fiction and statistics – cut across these stereotypes. Any crude stereotyping of types of research is unhelpful and may restrict the variety of approaches researchers use. As Wood and Welch (2010) argue, ‘qualitative’ and ‘quantitative’, and similar dichotomies, are not useful terms for describing research.
The example in the next section shows how a typical research project might use anecdote, fiction and statistics.
An Example
I had a colleague, a professor of management, who was engaged in a project for a large public sector organisation. She was looking at a key service process in this organisation: there were many delays, errors and anomalies in this process, and her task was to identify these, assess their extent and help the organisation to improve the system from the point of view of the main stakeholders. I am not at liberty to divulge the name of the organisation or the business it is in, so I have fictionalised the case study and transported it into another, very different business, in line with the third of the reasons for using fiction above. The story I will use is from a university setting: the process by which assignments for students are set and marked. This is a much more mundane process than the real process under investigation, but the two processes are sufficiently similar to enable me to use the university story to explain how the professor’s research used anecdote, fiction and statistics.
The (fictional) process involves lecturers setting the assignment, then the students completing the task and handing it in by the deadline, after which the completed assignments are marked and feedback comments written by the academics; some are second-marked and sent to the external examiner as a check on standards of marking; and finally the marks are recorded in the database and the scripts, feedback comments and marks are returned to students. The process is beset with problems: assignments get lost, marks are wrongly entered, the second marking fails to happen, and delays are frequent, with students sometimes waiting months for work to be returned. The professor’s job was to look at the administrative processes: it was not part of her brief to look into the reliability or validity of the marks awarded – which some would say is an even greater problem.
The detail of the case and the professor’s findings are not relevant here. What is relevant is the way she approached the reality of this situation. Her initial phase was one of qualitative data gathering – talking to key stakeholders and collecting anecdotes about problems, good practices and so on. She also asked many of her interviewees to draw flowcharts of the process as they saw it. No attempt was made to gather statistical data; the aim was simply to catalogue the variety of ways in which the system worked and failed to work. Much of the data gathered in this phase illustrated the importance of recognising that different stakeholders had very different interests, and that simplistic models which ignored this were unlikely to be helpful. The professor’s search for such anecdotes was not haphazard: she needed to be systematic in her selection of stakeholders to interview to reduce the danger of missing important points.
She then went on to explore some fictions. These were of two types. The first involved imagining alternative ways of running the system and then running a what-if analysis to see the impact each would have – usually in the imagination, but occasionally using a spreadsheet. One suggestion, for example, was that only a limited sample of assignments – chosen at random – should be marked. This had clear advantages in terms of the workload for academic and administrative staff, and equally clear problems in terms of the validity of the final marks and incentives for students. These factors needed to be balanced against each other: to some extent this was possible in the imagination, but the research team decided that a small-scale trial (or experiment), analysed statistically, was necessary to check whether the idea would work as expected.
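The spreadsheet-style what-if analysis described above can be sketched as a small simulation. The numbers below are invented purely for illustration; the point is only to show how one could gauge how much a cohort-mean estimate would wobble if just a random 20% of assignments were marked:

```python
import random
import statistics

random.seed(1)

# Hypothetical cohort: 200 assignments with 'true' marks out of 100
true_marks = [random.gauss(62, 10) for _ in range(200)]

def sampled_estimate(marks, fraction):
    """Mark only a random fraction of scripts and estimate the cohort mean."""
    k = max(1, int(len(marks) * fraction))
    return statistics.mean(random.sample(marks, k))

# Repeat the thought experiment many times to see the variability
estimates = [sampled_estimate(true_marks, 0.2) for _ in range(1000)]
print(f"cohort mean: {statistics.mean(true_marks):.1f}")
print(f"spread of 20%-sample estimates (sd): {statistics.stdev(estimates):.2f}")
```

A fiction of this kind can quickly show whether the workload saving is worth the loss of precision, before any real trial is run.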
The second type of fiction involved imagining how the process – the actual one or an imagined alternative – would react to unusual situations – perhaps students trying to manipulate the marking system. There are obvious advantages in being able to play these scenarios out in the imagination. Both types of fiction required creative input from people familiar with the system: several brainstorming sessions were run for this purpose.
The professor’s final level of analysis was the statistical level. There is obviously no guarantee that things that seem to work in the imagination will work reliably and consistently in practice. Similarly, anecdotes can reveal possibilities such as documentation remaining unread, but cannot give any reliable information about how widespread this problem is. Statistical analysis of data from carefully designed samples was necessary to ascertain the overall pattern and to establish levels of confidence in conclusions. The professor used a modified version of the SERVQUAL instrument (Zeithaml et al., 1990) for much of this work.
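A SERVQUAL-style analysis rests on ‘gap scores’ – the difference between what stakeholders expect of each service dimension and what they perceive (Zeithaml et al., 1990). A minimal sketch, with wholly illustrative scores rather than anything from the professor’s study:

```python
# SERVQUAL-style gap scores: perception minus expectation per dimension.
# Dimension names follow the standard instrument; the scores are invented.
dimensions = {
    "reliability":    {"expectation": 6.4, "perception": 4.1},
    "responsiveness": {"expectation": 6.1, "perception": 3.9},
    "assurance":      {"expectation": 5.9, "perception": 5.2},
    "empathy":        {"expectation": 5.5, "perception": 4.9},
    "tangibles":      {"expectation": 5.0, "perception": 4.8},
}

gaps = {d: s["perception"] - s["expectation"] for d, s in dimensions.items()}
for d, g in sorted(gaps.items(), key=lambda kv: kv[1]):
    print(f"{d:15s} gap = {g:+.1f}")  # most negative = worst shortfall
```

Computed over a carefully designed sample, such scores indicate where the service falls furthest short of expectations, rather than relying on the most memorable complaints.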
This research project illustrates the importance of the three approaches to reality which are the subject of this article. Each had a vital role to play. Anecdotes alert us to possibilities which may be important and may teach us how a situation works in some detail. Without such an anecdotal base we may miss much of importance. Fictions allow us to consider new possibilities and so are necessary for genuine innovations. History may be necessary for improving the future, but it needs to be spiced up with some imagination if we are to avoid repeating the mistakes of the past. And statistics are necessary to ensure that we have a grasp of the pattern in the whole population and are not being misled by memorable incidents or stories which are not common in the real world.
This is a small, fictional, example, but essentially the same issues are relevant in any research which aims to improve some aspect of social or business life.
Conclusions
Researchers can approach reality in three ways: by looking at true stories or anecdotes, by making up fictional stories, and by the statistical analysis of samples of data. I have tried to demonstrate that these modes are far more interdependent than they may appear and that many common assumptions are misguided. To summarise some of the points made above:
* Anecdotes are useful for demonstrating that something is possible, and for seeing how it works in practice.
* Fiction includes any approach based on imaginary events or cases. This includes thought experiments, simulations and role plays, examples constructed to test a theory, as well as imaginative works of art. Fictions may have the advantage over anecdotes of being more flexible and easier to derive – especially where sensitivities and confidentialities make factual anecdotes impossible to obtain. And, of course, possibilities that we have not managed to find, or that have not yet happened, must initially be explored in the imagination. New possibilities for the future must start their life as fictions.
* Statistical concepts and methods are important to see the general pattern in a whole population or process. This is important to avoid being misled by memorable examples. However, it is crucial to remember that statistical approaches are only part of the toolkit of researchers – even, perhaps especially, those who describe their approach as scientific.
All three approaches may have a role to play in a typical research project. Research restricted to one of these three approaches may be seriously impoverished: we often need all three. In practice, however, some researchers tend to favour statistical research, others focus on anecdotes from case studies, and others take the view that the art of storytelling is most likely to lead to illumination. Each group may dismiss the other approaches out of hand, and the first two may join in dismissing fiction because it lacks an empirical basis. This, however, ignores the fact that it may not be helpful to focus exclusively on what is, and ignore what might be, or what should be.
Footnotes
Declaration of conflicting interests
The author declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Funding
The author received no financial support for the research, authorship, and/or publication of this article.
