Abstract
This paper aims to bring into the ethical debate on covert research two aspects that have been neglected to date: the perspective of the research subjects and the special responsibility of investigators towards their observers. Both aspects receive too little attention, especially in quantitative social research. From a methodological point of view, quantitative forms of covert observation involve a great distance between the researcher and the research subjects. When human observers are involved, the focus is usually on the reliable application of the measuring instrument. Therefore, herein, a quantitative study is used as an example to show how the protection needs of both the observed persons and the observers can be met in practice. The study involved 40 student observers who covertly captured everyday conversations in real-world settings (e.g. in cafés or trains) using a highly standardised observation scheme. The study suggests that the anonymity of the research subjects and their trust in the observers are crucial for their subsequent consent. However, many participants showed little or no interest in the written information they were provided. Further, this study strongly emphasises how mentally stressful covert observations are for the observers. Almost all observers worried in advance that the people they were observing would discover them prematurely and confront them. Role-playing and in-depth discussions in teams are good strategies to alleviate such fears and to prepare student assistants well for their demanding work in the field.
Introduction
Covert research is inextricably linked with the suspicion of the dishonest and unethical (Spicker, 2011). Whether it is justified for researchers to investigate people’s behaviour without their knowledge and, even more, to feign a false identity, has been the subject of intense debate in the social sciences for several decades (e.g. Calvey, 2017; Homan, 1980; Roulet et al., 2017). Initially, the debates centred on qualitative, ethnographic field studies in which researchers or their student assistants covertly gained access to groups on the margins of society over longer periods of time. Well-known subjects and contexts of investigations are patients in psychiatric institutions (Caudill et al., 1952), cult followers (Festinger et al., 1956), mafia members (Ianni and Reuss-Ianni, 1972), or homosexual encounters in public toilets (Humphreys and Rainwater, 1975).
In the course of the computational turn in the social sciences (Alvarez, 2016), new, quantitative forms of covert field observation became the focus of interest (Giglietto and Rossi, 2012; van Atteveldt and Peng, 2018). Depending on the platform, the data people generate through their online-mediated communication allow conclusions about the everyday behaviour of groups ranging from specific communities to broader segments of the population. Provided no communication behaviour is manipulated (an extreme example of this is the Facebook Contagion Experiment by Kramer et al., 2014), these studies do not expose the ‘data subjects’ to the risk of deception in the communication or interaction situation studied. The research ethics discussions here are primarily concerned with questions of data protection or the dangers of de-anonymisation and misuse of data (Niemann-Lenz et al., 2019).
Two aspects that are neglected in the ethical debates about the dangers of old and new forms of covert observation are, however, the perspective of the research subjects and – especially with regard to studies involving human observers – the special responsibility of the investigator towards his or her research assistants, who in most cases are students. Both aspects receive too little attention, as I will show in the following section, especially in quantitative social research. This can be problematic for two reasons: First, if we, as researchers, are wrong in our assessment of which aspects of covert observation pose a danger to the research subjects, our protective measures will probably not be (sufficiently) effective. Second, students usually stand in multiple relationships of dependency on us, as we are not only their supervisors in research projects but also their lecturers and examiners. This, as I will argue, makes them a vulnerable group for whom we have a special responsibility.
The aim of this paper is to bring both aspects into the discussion and to show how they can be addressed in research practice. Following a case-based approach to ethical research (McKee and Porter, 2009; Schlütz and Möhring, 2018), a quantitative, non-participant field observation from the field of communication studies will serve as an example. To assess how people deal with mass media content, 40 student observers captured a total of over 2500 natural groups’ everyday conversations in real-world settings (e.g. cafés, trains, public places). The case study demonstrates how observers can be prepared for such a demanding and challenging job. It also highlights the greatest difficulties of covert observation from the viewpoint of the observers and the research subjects. Although the case study refers to offline contexts of human interaction, its findings also have important implications for an ethically responsible approach to people who are covertly researched in online contexts.
Research ethical implications of quantitative research
Quantitative researchers, of whom I am one, do not adequately discuss how research subjects evaluate the risks and dangers of covert observations and under what circumstances participation seems acceptable to them. This omission is closely related to their methodological approach. The aim of quantitative social research is to find distinct and generalisable explanations of people’s experiences and behaviour using standardised measurement procedures. Following the natural sciences, which have decisively shaped the quantitative social sciences, human participants are primarily regarded as carriers of traits. As interactions with the research subjects are a potential disturbance to the research process and thus are to be avoided or standardised as far as possible, quantitative researchers see themselves and their assistants primarily in the role of neutral, reliable observers of the events under investigation (Wimmer and Dominick, 2014: 114–119). According to Clegg and Slife (2009), the positivist paradigm underlying quantitative research translates into a modern (as distinct from a postmodern) research ethic. This treats empirical research and research ethics as two separate fields, as quantitative social research is ideally objective and free of value judgements. According to this understanding, research ethics problems can be solved across situations and contexts using a standard repertoire of predefined rules.
Although the gap between quantitative and qualitative social researchers has narrowed in recent years and triangulation has gained importance, quantitative methods still dominate my subject, communication studies, and related disciplines such as political science or psychology (Wimmer and Dominick, 2014: 49). This is reflected not only in the methods education of young researchers, but also in our debates on research ethics (Schlütz and Möhring, 2018). The risks and dangers that covert observations entail for the research subjects, and how the subjects can be protected from them, are usually discussed by researchers exclusively among themselves. This is especially true when the covert observation procedures involve little or no contact with the research subjects. For example, the mere fact that users express themselves on publicly accessible platforms is often interpreted as implicit consent (Mahrt and Scharkow, 2013). Whether the research subjects themselves regard the behaviour they exhibit on the internet or in physical public spaces as a scientific common good, however, is contentious. The same applies to the anonymity of the research subjects, which is considered a kind of ethical gold standard in big-data research.
According to a survey of Twitter users (Fiesler and Proferes, 2018), most would like to be informed about the use of their data for scientific purposes even if they can remain anonymous: two-thirds would feel uncomfortable if one of their tweets was used in a research study without their knowledge. Every second respondent would feel this way even if he or she was debriefed afterwards. Contrary to current research practice, user postings cannot be equated with news items that are specifically addressed to the public and may therefore be investigated without informed consent. Whether user postings are private or public depends on the assessment of their authors and should not be determined for them (Henderson et al., 2013; Ziegele and Quiring, 2013). The idea that the individual is best protected from harm by disappearing into the mass is countered by the empirical finding that people from certain cultural backgrounds may well be concerned with remaining recognisable as persons (Ntseane, 2009). Consequently, some researchers are already arguing that the principle of anonymisation can contradict that of personhood and lead to a dehumanisation of research subjects (e.g. Neuhaus and Webmoor, 2012).
When it comes to the protection of the students we involve in our research, they, too, are mainly considered in their role as research subjects. For example, the ethics codes of various disciplines (e.g. APA, 2017; ASA, 2018; DGPuK, 2017) emphasise that the principle of voluntary participation also applies to students, which is why the acquisition of course credits should not be made dependent on it. Surprisingly little information can be found, however, on the question of how to protect students from harm they may suffer as a result of their work as research assistants. Even in the lively discussions around studies in which students work undercover in the field, their protection is only mentioned in passing. Erikson (1967), for example, pointed out that students acting as covert observers are exposed to enormous stress and bear a great burden due to their dependent position. Homan (1980: 52) was also aware that students ‘may involve [. . .] in a crisis of conscience [. . .] when lying [about their role as participant observers] or “acting a lie”’.
In contrast to qualitative social research, where the researcher or research assistant is understood as the measuring instrument, in quantitative social research he or she is merely its operator (Wimmer and Dominick, 2014: 116). In this vein, the success of quantitative studies depends – with regard to the results’ reliability – to a large extent on the application of a measuring instrument leading to identical, or at least similar, results independent of the person coding, interviewing, or observing. The striving for objectivity and standardisation leaves little room for subjective sensitivities. Correspondingly, the burden that media content analyses, a basic method in communication science, place on human coders has received almost no attention to date. This is reflected, for example, in the fact that the corresponding textbooks (e.g. Krippendorff, 2019; Riffe et al., 2019) usually do without a separate chapter on research ethics. Since every media effects researcher should know what serious consequences the intensive reception of, for example, pornographic, violent, or racist content might have, this omission is hard to comprehend (Recuber, 2016).
The methodological procedure of quantitative observation corresponds to that of quantitative content analysis, with the difference that the focus is not on the products of authors’ actions but on people’s everyday actions (Gehrau, 2017: 13). As the observers here do not actively participate in what is happening, they leave the research subjects unaware of the study. This passive deception can feel extremely uncomfortable even if it is carried out in a public space, even if the observers do not have to make any effort to hide themselves, and even if the behaviours in question are completely trivial. With regard to student observers, this mental burden has a special significance, since they depend on us in various ways: We are not only their superiors when we hire them for research projects, but often also their lecturers and examiners, because we know and recruit them from our courses. That they want to meet our (supposed) expectations and to present as good an image as possible can lead to students not actively approaching us with their fears. This particular social situation makes them vulnerable and requires us as researchers to be proactive in protecting them from harm.
In the following, I suggest a way to include the perspective of the research subjects and the concerns of student assistants in the planning, implementation, and evaluation of empirical studies. A quantitative, non-participant covert observation serves as a case study, as both research ethical shortcomings are particularly evident here: the application of this method theoretically requires no contact with the research subjects, which can lead to their perspective being ignored completely. Nevertheless, carrying out such observations represents one of the greatest methodological and mental challenges for student assistants.
Applying covert observation in practice: A case study
Methodological considerations
Good research practice must be measured against methodological and ethical standards (Schlütz and Möhring, 2018). Before going into the ethical challenges of my study, I will first explain why I decided to conduct a covert observation. My study’s main aim was to examine how people deal with mass media content in their everyday conversations. It was inspired by a covert field observation by Kepplinger and Martin (1986), which is unique in communication studies but had never been replicated, owing to methodological and ethical difficulties. Usually, communication scientists investigate the contents of interpersonal communication through surveys (e.g. de Vreese and Boomgaarden, 2006; Eveland and Hively, 2009; Mutz, 2001). From a methodological point of view, however, self-reports have limited suitability for capturing the content and processes of interpersonal communication (Eveland et al., 2011). Since conversations about the mass media are not isolated events but closely interwoven with other topics, they are hard to recall retrospectively. Accordingly, the results of surveys are distorted in favour of unusual, long, or particularly controversial conversations.
By contrast, direct observations are much better suited to defining the role of the mass media in average, ordinary everyday conversations. Qualitative studies following a cultural studies approach provide an example by openly observing conversations within the family context (Klemm and Michel, 2015). However, due to the small number of families observed, they do not allow for generalisable statements about how frequently and in which other contexts people talk about mass media content. More recently, follow-up communication has been observed under laboratory conditions (e.g. Geber, 2019; Sommer, 2013). The participants were first shown a TV news report and then asked to discuss it in groups of two. From a methodological viewpoint, it is questionable whether the participants would voluntarily have watched the news report and whether they would have discussed it with others. Such investigations have been criticised not only for methodological reasons, but also on the grounds of an ethics of inclusion (Eynon et al., 2009), because politically less interested or less educated people rarely respond to requests for participation.
The methodological problems outlined can be solved by covert field observation following the model of Kepplinger and Martin (1986). In the present study, which was partly designed as a replication, 40 student observers captured more than 2500 conversations of natural small groups within 4 weeks in spring 2016. The study was part of a project funded by the German Research Foundation that was completed in 2018 (Podschuweit, 2019). The covert field observation covered four relevant contexts of interpersonal communication (Wyatt et al., 2000): (1) restaurants and bars; (2) public places and public transport; (3) the university; and (4) people’s homes. The groups were selected according to quotas (e.g. location, group size, and socio-structural composition) to cover the widest possible range of social contexts. In public spaces, the role of the students was limited to that of non-participant observers. In private spaces, students tried to stay out of the observed conversations among their family members or flatmates as much as possible. Conversations in public spaces were coded in real time using a highly standardised observation scheme. For 20 minutes, the observers captured particular characteristics of the group members (e.g. estimated age, gender, and conversation role), their conversation characteristics (e.g. vividness), and the conversation content (e.g. topics, occurrence, and functions of media references) using specific letters and numbers. In private rooms, the observers first recorded the conversations and encoded them afterwards.
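To make the structure of such a standardised observation scheme concrete, the sketch below models one observation as a small data record. All field names, code values, and the two-part group identifier format are hypothetical illustrations of the kind of letter-and-number coding described above, not the study's actual codebook.

```python
from dataclasses import dataclass, field

@dataclass
class GroupMember:
    age_bracket: str        # estimated, e.g. "16-25", "26-45", "46+" (assumed brackets)
    gender: str             # e.g. "f" / "m"
    conversation_role: str  # e.g. "A" = mostly speaking, "P" = mostly listening

@dataclass
class ObservationRecord:
    observer_id: int
    group_id: int
    context: str  # e.g. "restaurant", "public", "university", or "home"
    members: list = field(default_factory=list)
    vividness: int = 1  # e.g. coded 1 (calm) to 5 (very lively) - assumed scale
    topic_codes: list = field(default_factory=list)
    media_reference_codes: list = field(default_factory=list)

    def group_code(self) -> str:
        """Anonymised identifier combining observer and group number."""
        return f"{self.observer_id:02d}-{self.group_id:04d}"

# One hypothetical coded conversation: a two-person group in a restaurant
record = ObservationRecord(
    observer_id=7,
    group_id=42,
    context="restaurant",
    members=[GroupMember("26-45", "f", "A"), GroupMember("16-25", "m", "P")],
    vividness=3,
    topic_codes=["P1"],             # hypothetical code for a politics topic
    media_reference_codes=["TV2"],  # hypothetical code for a TV news reference
)
print(record.group_code())
```

Note that the record holds only estimated social data and numeric/letter codes, never names or conversation wording, which mirrors the anonymisation principle of the scheme.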
A central, much-cited methodological advantage of covert research findings is their high ecological validity (Hammersley, 2007; Spicker, 2011). In the present case, this was because the observed groups chose the environment of their conversations, the media content they referred to, and the type and number of their interlocutors, free from the specifications and (anticipated) expectations of the researchers. Capturing the conversations directly avoided the distortions caused by faded or false memories in retrospective accounts. This also applies to the observers, who were not involved in the conversations and could therefore fully concentrate on the data collection. In addition, the quantitative approach and the variety of conversational contexts allowed for reliable and generalisable results. However, the question of how such a study can meet ethical requirements is a separate matter.
Ethical considerations regarding the research subjects
From an ethical perspective, the priority was to weigh the expected benefit of the study for society against the possible harm to the people involved. How interpersonal communication moderates media effects can be regarded as one of the central questions of media effects research that has not yet been adequately answered (Podschuweit, 2017). For society and politics, a realistic assessment of the impact of the mass media and its limits is relevant for developing appropriate measures to protect people from harmful and undesired media influences (e.g. through legal regulations or the promotion of media competence). In typical media-psychological laboratory settings, the effects of mass media are likely to be overestimated, as their effects on isolated individuals are usually measured directly after (forced) exposure. In online contexts, there has recently been increased research into the links between interpersonal and mass media communication (Ziegele and Quiring, 2013), but not in face-to-face communication, which can still be considered the most important and widespread form of interpersonal exchange (Berger, 2014).
There is no doubt that Kepplinger and Martin (1986) developed a methodologically promising approach to fill this research gap. From an ethical point of view, however, their approach is inadequate, as they left the observed interlocutors completely uninformed about the investigation. For such cases, current ethical standards (e.g. APA, 2017; ASA, 2018; DGPuK, 2017) strongly suggest debriefing as a substitute for informed consent. Following the authors of the pioneer study, it can be argued that no behaviour from the private sphere was observed, but only media-related conversations that were accessible to every other person present. The exact wording of the conversations was not captured; rather, numbers or letters were used to represent, for example, the type of media or the function of a media reference.
Furthermore, no personal data were recorded in either the public or the private sphere; only social data (gender and age) were collected. It should also be noted that vulnerable groups were not the focus of the research and that observers in public spaces had no personal relationship with the research subjects (McKee and Porter, 2009; Schlütz and Möhring, 2018). From a utilitarian perspective, the research subjects might even be protected by the absence of a debriefing. In this sense, Homan (1980: 54) already pointed out that ‘the truth often hurts, causes discomfort or disturbs behaviour’. One fear concerning this study was that a debriefing would not only trigger mistrust in the actions of scientists but, in the worst case, paranoia: based on their knowledge of the study, extremely anxious or suspicious persons might also fear being unknowingly under observation in other situations.
The problem with the arguments put forward so far is that most of them are speculations about how research subjects feel about an investigation. The observer can infer from the conversational behaviour of the persons observed whether they do not want outsiders to listen to their conversation. However, as long as the interlocutors do not know about the study and thus cannot give feedback to the observer, he or she does not know whether capturing and using the information in their conversation for scientific purposes is in their best interest. For example, from the research subjects’ viewpoint, the pioneer study might well have violated their privacy regardless of whether the research interest was in public topics. As people do not talk about public issues all day, it is likely that the observers also witnessed private conversations that were none of their business. From a legal perspective, people who withdraw into local seclusion and recognisably want to be alone or among themselves have a right to privacy even in public space.
Moreover, it is doubtful that publicly accessible data – or in this case, public behaviour – automatically implies consent. As already noted above, the decisive factor is the appraisal of the people who generate the data or show the behaviour of interest (Henderson et al., 2013; Ziegele and Quiring, 2013). With regard to covert observations, interactions between observers and research subjects also offer a possibility to increase people’s trust in, and improve their understanding of, a method that will otherwise continue to lead a hidden existence. Overall, the above considerations did not permit waiving the debriefing.
Ethical considerations regarding the student observers
My first pre-tests made it clear to me that even the activity of non-participant observation entails – as Erikson (1967: 369) aptly put it – ‘a good deal of personal discomfort’. However, the extent of the concerns and fears of my student observers only became clear to me in the course of their recruitment. During the job interviews, many applicants expressed great interest in my unusual methodological approach. However, they had only very vague ideas about their work in the field. This is because the method of observation is usually only marginally considered in the communication studies curriculum, and very few students gain practical experience in applying it during their studies. The job interviews revealed some students’ concerns. For example, it was not clear to them whether it was legally permissible for them to – as they put it – ‘eavesdrop’ on strangers, or what consequences they had to expect if they were unable to gain the interlocutors’ consent to use their data. Overall, the job interviews made me aware that these and other questions had to be clarified at the beginning of the training phase: On the one hand, it would have been irresponsible if the observation on my behalf had been a mere burden for my student assistants. On the other hand, it was clear that a positive attitude of the observers towards their work was a prerequisite for the successful implementation of my study. On this point, I was responsible not only to myself, but also to my third-party funding provider.
Regarding an ethically responsible approach to students in research contexts, the Code of Ethics of my scientific community addresses students only in their role as research participants, stressing the principle of voluntariness (DGPuK, 2017). However, neighbouring disciplines additionally state that non-participation or premature termination of participation must not have any negative consequences for students (e.g. APA, 2017; ASA, 2018). For example, students must be made aware of equivalent alternatives for passing a course. This point is relevant for numerous studies that are based on student convenience samples or use students as interviewers or coders for course credits (Meltzer et al., 2012). In the present case, the students applied voluntarily to work as observers and were paid for their efforts according to the applicable hourly rates for undergraduate or graduate students. Irrespective of this, there was an unequal balance of power between them and me, which resulted not only from my role as project leader and supervisor within the research context but, in some cases, also from my role as their lecturer.
Against this background, three main questions concerning the observers arose before the start of their training: First, how could I assure the observers that they were not doing the research subjects an injustice, and prevent them from actually doing so? Second, how could I prevent the observers from being harmed by research subjects who could not understand why their conversations had been deliberately observed without their knowledge? Third, how could I relieve the observers of the pressure resulting from their belief that they had to obtain the research subjects’ consent under all circumstances?
Measures for the research subjects’ protection
How the observers were supposed to deal with research subjects was regulated in detail in the observation instructions. In private homes, much stricter requirements had to be set for the research subjects’ protection, not only for ethical but also for legal reasons (Shils, 1982; Spicker, 2011). In contrast to public spaces, the conversations there were coded on the basis of a sound recording. Making recordings without the research subjects’ knowledge would have violated their right to their own spoken word as part of the general right of personality. To protect the research subjects’ privacy, the observers sought their informed consent but did not divulge the exact date and time of the investigation. Research participants agreed that a 20-minute sequence from one of their conversations at home, within a given period of 2 weeks, could be recorded and used for the study in coded and anonymised form. They were also informed about why the exact time of the recording could not be announced in advance. In addition, each research participant was given a numerical code consisting of the observer number and the group number that each group received consecutively. In this way, they could request the deletion of the data relating to their conversation at any time while maintaining their anonymity. Research participants were informed immediately after the sound recording that the observation had taken place. Again, the observers pointed out to them the possibility of withdrawing their consent at any time. The observers also signed a written declaration not to disclose sound recordings to third parties and to make available to me, as the project leader, only the anonymised data entered in the observation scheme.
To protect the privacy of interlocutors observed in public spaces, the observers did not hide from them; they were just as recognisable as the other persons present. However, they deceived the observed groups in so far as it seemed to outsiders that they were busy working on their laptops. In principle, any German-speaking person aged 16 or older could be included in the study. Whether the observers had correctly estimated the minimum age was checked during the debriefing. No observations were made if there were indications that a group did not want to be listened to; such indicators were, for example, spatial separation, quiet speech, or an averted posture. In public spaces, too, the observers made contact with the research subjects immediately after the 20 minutes had elapsed. They introduced themselves and the study and then handed each interlocutor a written information sheet with further details (e.g. my contact details). During the debriefing, the observers were available to answer questions and, on request, gave the research subjects access to the completed observation schemes. If interlocutors refused to participate, the observers immediately deleted their data from their computers. If they consented, the observers also noted a four-digit code on the information sheet by which the data on a conversation could subsequently be removed from the data set while maintaining the research subjects’ anonymity. The procedure described and the information sheets received ethical approval from the IRB of the Faculty of Social and Behavioural Sciences of the Friedrich Schiller University Jena.
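The withdrawal mechanism described above – deletion by quoting an anonymous four-digit code, with no identifying information required – can be sketched as follows. The function names and the storage layout are assumptions made for illustration; the actual study used paper information sheets and observation schemes rather than software.

```python
import secrets

def assign_code(dataset: dict, record: dict) -> str:
    """Store a conversation record under a fresh random four-digit code."""
    while True:
        code = f"{secrets.randbelow(10000):04d}"
        if code not in dataset:  # retry on the rare collision
            dataset[code] = record
            return code

def withdraw(dataset: dict, code: str) -> bool:
    """Delete the record filed under a code; True if anything was removed."""
    return dataset.pop(code, None) is not None

# Hypothetical usage: a consenting group gets a code noted on its
# information sheet; quoting the code later removes the data, and the
# researcher never learns who made the request.
conversations = {}
code = assign_code(conversations, {"context": "train", "topic_codes": ["S1"]})
removed = withdraw(conversations, code)        # data deleted on request
removed_again = withdraw(conversations, code)  # a second request finds nothing
```

The design point is that the code is the only link between a person and their data, so withdrawal preserves exactly the anonymity the scheme promises.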
Measures for the student observers’ protection
Each of the 40 student observers underwent 21 hours of training and pre-tests before entering the field. The first of six training sessions served to explain to the observers why their job was scientifically relevant and how it could be classified from an ethical point of view. It was also intended to relieve the observers of the fears they had expressed during their job interviews. Since no one in my scientific community had experience in conducting a comparable study and the relevant methodological literature did not contain any tips for implementing appropriate training, I developed a novel concept. To support me, I engaged a professional communication trainer, who also accompanied the first session, which was composed of three parts. In the first part, we examined the observers’ concerns in detail. In the second part, we developed appropriate concepts for contacting and debriefing the research subjects. In the third part, the observers practised applying these concepts in role plays.
In the beginning, the observers wrote down all their concerns on index cards and pinned them to a whiteboard. The concerns could be divided into five groups: first, the violation of the privacy or even the intimate sphere of the observed persons; second, the premature discovery of the observers; third, the research subjects’ incomprehension of the procedure; fourth, negative or even aggressive reactions as a result; and fifth, the disappointment of my presumed expectations due to a failure to obtain consent. We then discussed each concern in the group. We distinguished the social sphere, on which our investigation focused, from the private and intimate spheres. We agreed that nobody had to listen to a conversation if doing so made him or her uncomfortable. We concluded that we were not ‘eavesdropping’ but listening to conversations, because the interlocutors were aware of our presence. We made it clear that the anonymous coding of public conversations for scientific purposes was harmless from a legal point of view. Finally, I tried to take the pressure off the observers by not setting a fixed number of conversations for which they had to obtain consent. The decisive factors were that they worked the specified number of hours, mastered the use of the measuring instruments, adhered as far as possible to the quota specifications, and presented themselves as friendly and competent towards the research subjects.
In the second part of the training, the observers thought through the course of interactions with the research subjects and wrote down various scenarios. In particular, they considered how best to approach the research subjects, explain the study to them, and ask for their consent to use their data. The communication trainer and I had prepared text modules intended to help the students develop individually appropriate conversation strategies. The observers’ main task was to provide the research subjects with all important information in an understandable way without arousing unnecessary distrust. They were, therefore, urged to avoid terms such as ‘eavesdropping’ or ‘recording’, as they were doing neither. A concrete example of how to initiate contact was: ‘I am a student at the University of Erfurt, where we are currently conducting a study about conversations on public issues. Since your conversation was about the upcoming presidential election in America, I made some notes about it. . .’. It was no less important for the observers to be prepared for negative or even aggressive reactions. In such cases, the observers were not to make any effort to obtain consent but, rather, to withdraw immediately.
After discussing the observers’ proposals in the group, interactions with the research subjects were practised in role plays. In groups of three to four, the observers sometimes took on their later role and sometimes the role of friendly, disinterested, or aggressive research subjects, or the role of waiters exercising their authority over the premises. The top priority was a friendly, calm and professional appearance and the greatest possible transparency. Among other things, the observers had to be able to answer questions about the collection and the use of the data. For their protection, the observers also practised friendly withdrawal. If the observed persons yelled at them or wanted to be left alone, the observers apologised politely for the disturbance and withdrew immediately. If the situation allowed, they were to leave the written information sheet with my contact details.
In the subsequent pre-tests, the observers practised contacting and debriefing interlocutors under field conditions, both individually and in teams. In the teams, I always paired an experienced graduate student with a less-experienced undergraduate student. The team members observed conversations in the same environment and then exchanged results and experiences. In the following training session, the observers also shared their individual experiences with the team. They demonstrated an exemplary ability to deal with unpredictable and challenging situations in the field.
Evaluation of the measures
Since the observers documented the total number of conversations they listened to, it was possible to calculate that 89% of the interlocutors gave their permission for the use of their data. This high consent rate indicates that the observers did their job very well. However, how they experienced the fieldwork and the contact with the research subjects only becomes clear through a more detailed evaluation in the form of a survey. Thirty-seven of the 40 observers took part in the anonymous online survey immediately after the end of the observation. The central question in the present context was developed based on the fears the observers had expressed during the first training. These negative expectations were condensed into six items, which also reflect the ethical concerns raised about covert observation in general. To counterbalance these fears, several positive expectations were added as response options. In the first step, the observers were asked to recall how strongly they had expected, before the start of the training, that the observed persons’ privacy would be disturbed or that the observed persons would react negatively to the debriefing (5-level scale from ‘firmly expected’ to ‘not expected at all’). In the second step, the observers were asked to assess whether their expectations had been fulfilled in the field ‘frequently’, ‘sometimes’, ‘rarely’ or ‘never’.
Negative expectations, which predominated before the training (Figure 1), were rarely confirmed in the field. Positive experiences were typical: in most cases, the observed interlocutors consented to the use of their data and took the observers seriously (Figure 2). Most of the observers felt that the majority of the people they contacted wanted to participate in the study. However, the concern of invading the research subjects’ privacy remained an issue from the observers’ viewpoint: despite all precautions, about one in three at least ‘sometimes’ had this impression. Overall, the survey suggests that covert observations such as this one can be realised in such a way that they represent a predominantly positive experience for both the observed persons and the observers. However, it also illustrates how stressful the very idea of covert fieldwork can be for student assistants. From an ethical viewpoint, this makes it all the more important for researchers to take possible fears seriously and mitigate them as much as possible. In this context, the survey findings suggest that an intensive exchange about these fears and dealing with them in concrete, practical exercises are a promising approach. Thus, 31 of the 37 interviewed observers felt ‘very well’ or ‘well’ prepared for their fieldwork after the first training. Four of them were still ambivalent at this point, and only two felt that they were not sufficiently prepared at this stage. This, of course, raises the question of opting out, an option the researcher should also point out to student assistants. In this case, there was a continuous exchange between my student assistants and me during the field phase; the students shared their experiences, for example, in further training sessions and in a closed Facebook group.
Figure 1. Observers’ expectations before the first training.
Figure 2. Observers’ experiences in the field.
Discussion
This paper draws attention to two aspects of research ethics that have been marginal in earlier (e.g. Bulmer, 1982) and current debates (e.g. Giglietto and Rossi, 2012) about covert observations: the evaluation of the approach and its risks by the persons whose behaviours are covertly observed, and the concerns of students in their role as covert observers. First, reasons were given as to why quantitative social research in particular runs the risk of neglecting these aspects. How researchers can protect both their student observers and the observed people from harm, and evaluate how well their protective measures are working, was then demonstrated using the example of a quantitative covert field observation of everyday conversations in real-world settings. Accordingly, a central concern of this paper is to assist other researchers in planning and implementing covert observations, which this study explicitly seeks to encourage.
My first thesis was that the great distance between researchers and research subjects, which quantitative research requires for reasons of objectivity (Wimmer and Dominick, 2014: 116), can lead to misjudging the research subjects’ (protective) needs (Clegg and Slife, 2009 argue similarly). This danger is even greater in the age of digitalisation, where automated analysis procedures make contact between researchers and ‘data subjects’ completely unnecessary from a methodological point of view. Nevertheless, recent empirical findings (Fiesler and Proferes, 2018) suggest that research subjects want to decide whether researchers are allowed to examine their communication behaviour or communicative traces even if these are publicly accessible. This need poses major challenges, especially for big data research. However, this does not change the fact that the concerns and fears of the people behind the anonymous data must be taken seriously from a research ethics perspective. In this context, it is worth pointing out the first practicable solutions, such as coupling automated content analyses with upstream user surveys (Stier et al., 2020).
However, debriefing is also a high hurdle for scientists planning covert observations in offline environments. The main reason for this is that an observation conducted for scientific purposes should, in advance, be known to as few people as possible. According to my observers, there were two main reasons why most research subjects reacted so positively to the debriefing: they were able to remain anonymous, and they trusted the observers as students at their local university. However, the observers also frequently reported that the research subjects refused the written information sheet or left it unread at the observation site. In these cases, too, the research subjects usually reacted in a friendly manner but showed little interest in the fact of the observation. Such behaviour, however, contradicts the wishes expressed by many Twitter users (Fiesler and Proferes, 2018). How (extensively) research subjects want to be informed, and which information they are willing to disclose for research, are ultimately empirical questions. One way to address them would be to interview the very groups whose behaviour is to be researched, whether online or offline. By presenting them with different variants of a disclosure sheet or asking them to recall details, it would be possible to find out what information they value and whether they can understand and retain it (Escobedo et al., 2007).
My second thesis was that the multiple dependencies that student assistants have on us as researchers make them a vulnerable group and thus worthy of special protection. As Roulet et al. (2017) explain in detail, covert participant observations pose a particular challenge for (student) observers because they have to gain access to the field, record data unnoticed in the field, and leave the field at the right time. In the case of covert non-participant observations, access to the observed event can be a problem in other ways. As this study has shown, not every publicly accessible situation is suitable for documenting (conversational) behaviour, for example, because the observers cannot get close enough to the research subjects to understand and classify their conversational behaviour or the content of their conversations. Furthermore, this study suggests that the risk of the observers’ cover being prematurely exposed is extremely low in public spaces. Nevertheless, from the observers’ perspective, it can be a major problem: many of my 40 student observers were worried in advance that the people they were observing would prematurely blow their cover and confront them. Because a significant proportion of them were initially afraid that expressing this concern would mean failing to meet my requirements, the fear only became apparent in the course of detailed discussions within the team.
In my personal experience, covert observation is one of the biggest challenges for student assistants. However, even seemingly harmless procedures such as media content analysis can cause great harm to the people who conduct them. Thus, we have a great responsibility towards our student assistants in a wide variety of empirical contexts. For the protection of student research assistants, I would argue in favour of more frequent interpersonal exchange. Undoubtedly, students are convenient research participants and assistants (Meltzer et al., 2012). However, since we as investigators, examiners, and lecturers stand above our students in the hierarchy, we should be careful not to exploit this unequal power relationship (Brabeck and Brabeck, 2009). When working with students, we must consider that they might not express their displeasure or discomfort to us because they are afraid of getting a bad grade or of fostering a negative reputation. In my experience, the best way for researchers to find out how stressful a job is for student assistants, or what aspects their fears relate to, is regular team meetings and training sessions. This is where the well-known phenomenon of conformity (Asch, 1956) can be used: if some students dare to express their concerns, their peers are likely to follow. Finally, I would recommend systematically evaluating how student assistants experience their empirical work, be it in real-world settings or in online environments. Provided that the number of students is large enough to maintain their anonymity, online surveys present a good opportunity to check how well the ethical measures for the protection of the persons involved in an investigation have worked in practice.
Funding
All articles in Research Ethics are published as open access. There are no submission charges and no Article Processing Charges, as these are fully funded by institutions through Knowledge Unlatched, resulting in no direct charge to authors.
