Abstract
Increasingly, evidence-based policymaking, particularly in the form of randomized controlled trials (RCTs), is advocated as a means of studying the effects of planned social policy measures. One such undertaking, the Finnish basic income experiment, was conducted in 2017–2018 as an RCT exploring alternative policy solutions, and it gained widespread national and international political, media and scholarly attention. Despite the popularity of RCTs, studies of participants’ experiences of taking part in social policy RCTs are lacking. In this article, we depart from the notion of ‘lived experiences’ to investigate a bottom-up participant perspective on the Finnish social policy experiment, with the purpose of contributing to the understanding and future planning of ethically and methodologically sustainable policy experiments. Drawing on a qualitative, in-depth interview study of 81 Finnish basic income experiment participants, we examined their lived experiences and related views on the experiment. The analysis shows that although participants supported in principle the idea of experimenting to demonstrate ‘what works’ in social policy, various questions arose concerning both the tactical and political purposes of the experiment and the nature of scientific ‘evidence’. Furthermore, the results demonstrate that the media and political attention often surrounding more controversial policy experiments, like the Finnish one, can also challenge the RCT principle of ‘non-contamination’. Participants in highly politicized experiments also easily feel that they become objects of strong moral expectations and judgements, which in the Finnish basic income case clearly resulted in feelings of frustration and personal failure.
Keywords
Introduction
Increasingly, evidence-based policymaking (EBPM), particularly in the form of randomized controlled trials (RCTs), is advocated as a means of studying the effects of (planned) social policy measures, including a universal basic income (UBI) (for example, Neuwinger, 2022). The main advantage of RCTs is the opportunity they give to compare statistically randomly selected ‘treatment’ and ‘control’ groups in order to detect causal effects of the treatment. The Finnish UBI experiment, conducted in 2017–2018 as a means of exploring alternative policy solutions, in other words ‘what works’ (Kangas et al., 2021), an undertaking which gained widespread international media and scholarly attention, was also designed as an RCT (Jauhiainen et al., 2021). While there is a growing body of literature on various aspects of EBPM, including RCTs, empirical research focusing on participants’ experiences of, and reflections on, taking part in such experiments remains scarce (see also Cox and McDonald, 2013).
Nevertheless, bottom-up approaches that focus on participants’ experiences and draw on qualitative material and methodology would add to the knowledge base on experiments, including their methodological and ethical aspects, informing both the social policy debate and the planning of future social policy experiments, which often concern politically contested solutions.
By investigating the case of the Finnish UBI experiment from the perspective of participants’ lived experiences, this article aims to enhance knowledge on bottom-up perspectives when conducting RCTs as part of EBPM approaches to social policy in general.
Our specific research question is as follows: How did participants experience and view the experiment and their participation in it?
Empirically, we examine in-depth accounts by 81 Finnish UBI experiment participants, obtained through face-to-face interviews, on being part of the trial and on the idea of social policy experimentation in general. While we focus only on recipients of the UBI, not on persons belonging to the ‘control group’ (see further below), we assume that the RCT-type experiment design utilized, including the various constraints that followed from the policy context in which it was conducted, had an important impact on the ways in which our interviewees experienced and accounted for their participation in the experiment.
In the next section, we present previous studies on participants’ views on RCTs – most of which relate to experiences of medical treatments – and the theoretical framework of ‘lived experience’ that we will use as a frame of reference. Then we present our case, the Finnish UBI experiment, including conditions leading up to and related to it and some of its central characteristics. After presenting our data and methods, the empirical results section focuses on the lived experiences of trial participants. We sum up by discussing the results in light of the specific conditions surrounding the experiment, as well as its various implications for conducting policy trials regarding complex and politically controversial issues such as a UBI.
Participant experiences of experiments and policies
Existing qualitative studies embedded in RCT designs have to a great extent focused on medical treatments and have highlighted trial participants’ (patients’) complex experiences and understandings of RCT participation (see, for example, Midgley et al., 2016; Morris and Balmer, 2006; Norris et al., 2019). On the one hand, trial participation can have beneficial effects, such as the satisfaction of feeling ‘useful’ for future research and of being able to make a ‘worthwhile’ contribution (Carey et al., 2001; Naidoo et al., 2020). On the other hand, study participation has caused unanticipated harm, distress and burden to participants (Tarrant et al., 2015; Tross et al., 2018), including difficulties in dealing with the variety and complexity of information, discomfort related to being randomized, struggles to answer questionnaires meaningfully, fear of being a ‘guinea pig’ and a sense of loss at the end of the trial (Naidoo et al., 2020).
While behavioural RCTs may not involve the same magnitude of potential harm to bodily integrity as medical trials, non-medical trials are not devoid of risks relating to participants’ perceptions of the research experience and trial participation (Weitlauf et al., 2007). While some trial experiences could be of similar types, policy-related behavioural experiments may also contain various specific characteristics potentially affecting participant experiences and may provide additional reasons for reflecting on participant roles and protection during and after trials (see for example, McDonald and Cox, 2009).
Such experiment characteristics could relate to the not-always-unproblematic assumption of a one-way relationship between science and politics (see Teele, 2014) regarding highly politically controversial social policy solutions like the UBI (see Perkiö, 2020). For example, Neuwinger (2022), studying UBI experiments planned as RCTs in various countries, detected political-administrative as well as legal and economic constraints, which resulted in interference by non-scientific parties that affected the original study designs, even to the extent of jeopardizing the original principles of the experiments. A further aspect of the interaction between politics, in a broad sense, and science as regards social policy experiments is the public and media discussion that often arises, which is known to have impacted both policy processes and people’s perception of reality in general (Couldry and Hepp, 2018; Mäkkylä, 2021). Such discussions may thus also have (re)shaped experiment participants’ perceptions of what they had been chosen to be a part of.
Therefore, we argue that valuing a plurality of sources and forms of knowledge (Fleming and Rhodes, 2018) by researching participant experiences is an important additional aspect of evidence-based social policymaking. As has been pointed out regarding general social policy reforms (see Speed and Reeves, 2023), the exclusion of those directly affected from policy design could also be viewed as a democratic deficit in policy experiments. Allowing participants to identify and articulate their basis for participation, as well as their experiences of the research process – such as how they articulate their sense of agency or powerlessness and how they understand the risks, benefits and, more generally, the overall experience of being involved in research – is also essential from an ethical perspective (Cox and McDonald, 2013).
Against the backdrop of such considerations, we regard the notion of ‘lived experience’ – used as an intuitive concept rather than having a precise definition, but often thought of as containing aspects of feelings as well as thought (McIntosh and Wright, 2019) – as an inspiring theoretical framework when striving to give a voice to experiment participants. While constituting the basis for a growing body of studies on social policy-related matters (see McIntosh and Wright, 2019: 450–1, for a conceptual discussion), to our knowledge, it has not yet been utilized in studies on social policy RCTs. However, as McIntosh and Wright (2019) argue, the concept of people’s ‘lived experiences’ seems especially fruitful when applied to subjective perspectives that evolve over time. They also regard lived experiences as ‘especially relevant where they are shaped and mediated by policies, policy-related discourses and the practices of frontline agencies’ (McIntosh and Wright, 2019: 452).
Considering that social policy RCTs like the Finnish UBI experiment explicitly depart from the idea of affecting experiment participants’ experiences and behaviour by (temporarily) altering policies for a limited test group, and that such measures are, as in our case, often accompanied by struggles between policy actors and discourses, this approach would seem to offer fruitful insights for our analysis of the accounts of UBI trial participants. In their aggregate longitudinal study on welfare conditionality, Wright and Patrick (2019) identify a set of ‘shared typical’ lived experiences that ‘reaches beyond the uniqueness of the individual and particularities of their circumstances to reveal broader tendencies of major consequence’ (Wright and Patrick, 2019: 17), thus providing a useful complement to the policy insights provided, for example, by traditional RCT methodology. Given the substantially differing types of UBI trials that have so far been based on RCT principles, investigating the ‘shared typical’ lived experiences of participants in each trial separately, in relation to its design, would seem to be the most that can be done while awaiting research from sufficiently similar trials.
The Finnish UBI experiment and its implementation
In order to understand the lived experiences documented in our study, some details are needed on the history, goals and implementation of the UBI trial, and on the media attention it generated.
While the political debate on UBIs has a relatively long history in Finland (see Perkiö, 2020), where a UBI has been framed as an adaptation to general changes in Nordic welfare policies, especially regarding the dominant forms of labour market ‘activation’ policies (Halmetoja et al., 2019), some scholars have argued that the introduction of a Finnish UBI experiment instead echoes a cultural change in policymaking towards an entrepreneurial attitude and a shift towards knowledge production involving the testing and development of novel policies (Ylöstalo, 2020). As part of this ‘attitude’, the Finnish UBI experiment has been associated with Prime Minister Sipilä’s centre/conservative/nationalist-populist government’s (2015–2019) introduction of a ‘culture of experimentation’, consisting of extensive trials as well as several smaller experiments intended to cast the government as a primarily pragmatic, evidence-based, ‘what works’ government (Ylöstalo, 2020).
While basic income experiments can take many forms and have differing goals (for example, Neuwinger, 2022), the 2017–2018 Finnish experiment came to be a partial, (in principle) unconditional UBI targeted at the long-term unemployed. The experiment, initiated by the state government and planned by a bid-winning consortium led by the Social Insurance Institution (Kela), was designed as an RCT in which 2,000 randomly selected unemployment benefit recipients (aged 25–58) across the country received an unconditional monthly payment of €560 for a two-year period instead of the means-tested unemployment benefit of the same amount. The aim was ‘to explore whether a basic income could be used to reform the social security system so as to reduce incentive traps relating to working’ (Kangas, 2021a: 31–32), with the remaining benefit recipients (some 173,000 persons) serving as a control group.
This UBI experiment was the first to adopt a nationwide RCT design with mandatory participation of randomly selected participants, based on a special law. The Finnish Constitutional Law Committee accepted the experiment design with obligatory participation in order to avoid selection bias. Thus, improving welfare policy through evidence-based policymaking was regarded as an acceptable justification for deviating from the basic rights of non-discrimination and free will (Kalliomaa-Puha et al., 2016).
Regardless of such leeway, several aspects and factors contributed to changing and challenging the originally proposed experiment design, among them political, legal, institutional/administrative and budgetary considerations and constraints that emerged during the planning process (De Wispelaere et al., 2019: 16; Kangas et al., 2021). For instance, UBI participants’ rights to any additional benefits and services had to remain unchanged, which in practice, for example due to benefit conditions, could have affected incentives, as could the fact that, contrary to original intentions, the UBI remained untaxed regardless of possible additional incomes (Simanainen, 2021).
Parliamentary scrutiny also led to additional aspects being evaluated, largely comprising issues stressed by traditional UBI proponents (for example, effects on health, well-being and capabilities). These additions also serve as a justification for the present study. All in all, both the proposed bill and final act have been described as a compromise between practical, economic, and scientific arguments (Kangas, 2021a: 33).
Notwithstanding the extended goals of the experiment, and following the principles of RCTs, the UBI experiment was to be evaluated only a posteriori, to avoid ‘contamination’, and to minimize the so-called Hawthorne effect (the treatment group changing its behaviour due to awareness of being under evaluation) (Jauhiainen et al., 2021). The trial group, however, knew not only that they were participating in the experiment, but also what was ‘expected’ of them, since the ‘primary’ aim of the experiment (to improve employment) was mentioned in the decision letter that they received from Kela just before the beginning of the experiment, informing them they had been chosen to receive a basic income. The employment aim of the government was also discussed widely in the media (see further below), which might have impacted both the treatment and the control group.
Thus, while the bid-winning evaluating research team (which partly consisted of the same researchers who had taken part in planning the experiment) was tasked with conducting the evaluation – by means of labour market statistics analyses, telephone surveys and in-depth interviews – only after the experiment had finished, participants may have been affected by many other factors, which might have complicated the evaluation of the experiment’s effects, above all relating to employment (see below and also Kangas et al., 2021).
Among such factors were the political and media connotations surrounding the experiment, as well as unpredicted political manoeuvres. In addition to attracting significant national and international media interest before, during and after its implementation, the experiment triggered a plethora of (national) political reactions promoting or opposing various views on UBIs in general. In the media debate, too, the experiment itself was often framed in terms of its effects on incentives, work and entrepreneurship (Mäkkylä, 2021), thus highlighting economic/labour-related effects rather than the other effects named in the experiment act (Act on The Basic Income Experiment 1528/2016), and thereby echoing the dominant framing in the Finnish debate on the UBI prior to the experiment (Perkiö, 2020). The final Basic Income Experiment Act was also criticized in the parliamentary debate, for example for being too expensive and unrealistic, too employment-focused and poorly prepared. Despite this critique, which was frequently reported in the media, only the small Christian Democratic Party voted against the experiment in the parliamentary session on 20 December 2016. The experiment then started on 1 January 2017 (Kangas, 2021a: 30–31).
Halfway through the experiment, in 2018, the government signalled that it was not planning an extension of the experiment as originally proposed (De Wispelaere et al., 2019) while also introducing a new ‘activation model’ which meant stricter conditionality rules for receiving unemployment benefits, altering benefit conditions for those in the control group halfway through the experiment (Kangas et al., 2021).
Additionally, due to political pressure from the government, results based on register data from the first year of the experiment only, as well as preliminary results based on survey data, were published in early 2019, immediately after the experiment period (see Kangas et al., 2019). The results based on register data, which showed no statistically significant differences in unemployment levels between the treatment and control groups at that point, gained widespread media attention and resulted in a subsequent decrease in political and media interest (Parth and Nyby, 2022), although the final results were yet to be published.
Data
In early 2019, soon after the experiment had ended, Kela (responsible for the evaluation of the experiment) delivered our research group’s interview invitation by mail to 988 experiment participants. Those willing to participate in a face-to-face interview were asked to mail their consent forms directly to our research group, which was the only party with access to the interview data at any stage. Since participation was voluntary, self-selection bias was possible.
The transfer of personal data was contingent on the explicit consent of the basic income experiment participants. Anonymity was guaranteed, and no incentives were offered. The participants did not give written consent for their data to be shared publicly. Due to the sensitive nature of the research, the supporting data is not available.
The actual interviews took place between February and June 2019, when the preliminary, first experiment year statistical results – showing no employment rate effects – had already been published.
Our 81 interviewees were aged between 27 and 61 years, resided in all parts of the country and represented diverse backgrounds (that is, education, marital status and so on) and current labour market statuses. Around half of them identified as female (N = 42), around half as male (N = 38) and one as non-binary. The face-to-face interviews took place in the interviewees’ homes, workplaces, public cafes, libraries or meeting rooms, depending on the interviewee’s preference. Interviews were conducted mainly in Finnish (N = 74), with a few in Swedish (N = 4) or English (N = 3), and lasted between 27 minutes and 2 hours and 22 minutes. The tape-recorded material was transcribed verbatim, resulting in 3,893 pages of transcripts.
We chose to conduct semi-structured interviews, enabling us to discuss freely several themes relating to participants’ experiences of taking part in the experiment and encouraging them to express their feelings, thoughts and standpoints freely. The interviewees were free to discuss any aspects of life they found meaningful and were assured that they did not have to discuss any subject they did not wish to. They were asked to consider three distinct time perspectives: before, during and after the experiment. The interview guide included open-ended questions on, for example, one’s experiences, thoughts and ideas concerning the experiment, its public image and the discussion of it, and the idea of social policy experimenting in general.
Elsewhere (Blomberg et al., 2021), various other effects on participants have been reported. For example, for some interviewees the experiment meant significantly enhanced opportunities for making meaningful and sustainable decisions in the labour market and gaining a sense of security and autonomy in their lives. For others, effects were minimal or non-existent or caused disappointment. Concerning the experiment itself – despite possible personal benefits – the accounts were highly critical, as will be illustrated below.
Analysis
The analysis followed the principles of inductive thematic analysis (see Braun and Clarke, 2006). First, the researchers became closely familiar with the data by reading the interview transcripts and noting down initial ideas concerning the interviewees’ experiences of experiment participation. Second, all meaningful data segments in the interviewees’ recorded speech that had some relevance to experiment participation, and to experimenting in general, were collected. Third, all the collected data segments were coded in a table and the most relevant data extracts were chosen for further, closer analysis. During the coding process, patterns within the data, as well as commonalities and differences in the interviewees’ accounts, were identified. Finally, initial interpretations of the most meaningful text segments and individual codes were made and then united into broader, overarching themes, which were abstracted in relation to our research questions. As a result of the analysis, three core themes related to participants’ experiences of the experiment were defined and named – ‘on experimental design and implementation’, ‘on media attention’ and ‘on political experiment interpretations’ – under which the key findings are presented. All data extracts have been drawn from different interviews. Some translations of the verbatim data have been included to reinforce the arguments. The translations have been tidied up slightly to make them easier to read in English.
Results
On experimental design and implementation
Typically, the participant interviews started with statements of a general need for experimentation and support for a ‘culture of experimentation’. Many interviewees were also quite positive towards the idea of a basic income per se. Soon after, however, they often brought up several flaws in the design of the Finnish basic income experiment, relating, for example, to the (too low) basic income amount, the (too limited) duration and the (unjustified) targeting. Thus, many interviewees highlighted the need to plan the experiment design with more care prior to implementation: An engineer like myself wants to experiment with all kinds of things, and experimenting is fun. […] It is my pleasure to be a guinea pig in these kinds of matters. […] I warmly support the idea that experiments are conducted for these kinds of things [basic income]. Rather than just deciding that this is the way to do something. […] But then again, it is completely another matter whether I think that this experiment was successful as an experiment.
The degree to which interviewees comprehended the experiment, however, varied, which might have affected experiment outcomes. Some interviewees expressed that they had not been able to fully understand the experiment, voicing confusion about crucial elements of the RCT design. For instance, some thought that they had been purposely chosen rather than randomly allocated to the experiment, and some were unaware that they could have kept the untaxed basic income on top of any earnings, showing that they had not comprehended key experiment features explained in the selection letter. This was particularly the case when the interviewee had known nothing about the experiment prior to receiving the selection letter in late December 2016, only a few days before the experiment started.
Despite some confusion related to the experimental design, the vast majority of the interviewees reported that they were pleased in principle to be chosen for an experiment on UBI, referring to it as a ‘lottery win’. However, three interviewees recounted starkly opposite experiences, identifying themselves as ‘public test animals’ who had been forcibly included in ‘an unsuccessful human experiment’. Some interviewees had also wished for an opportunity to terminate their participation at will, rather than being obliged to participate.
Halfway through the experiment, the government signalled that it was not planning an extension for the experiment and introduced a so-called ‘activation model’, meaning stricter conditionality rules for those in the control group, thereby also affecting the RCT design. This resulted in disappointment, confusion and frustration among some interviewees: It was very poor judgement that certain parts of the experiment were abandoned […] and that the experiment was not extended and was badly prepared. Well, it’s no surprise from this [Sipilä’s] government. But I don’t think […] it is good that the government even started to experiment [basic income] in the first place. And then, at the same time came this activation model […] from the same government. So to me, in my opinion, it was a very unclear idea. […] First, we are experimenting with a thing like this [basic income] and then comes this kind of a punishment-based system [activation model]. And I knew that these two years would provide no solution, especially towards the end [of the experiment], you knew that it would lead to no solution at all.
The interviewees also expressed that they had hoped the experiment’s implementation would take into account participants’ individual and varied life situations. Some, for example, had wished to receive some other type of help – such as someone to talk to – during the experiment and particularly during the transition phase. They described the ending of the experiment as very stressful, calling for ethical responsibility when conducting human experiments. I am really angry […] I mean that, for the most part, all the good things I was able to achieve during these two years, to move forward, and all the things that were good that I was able to accomplish and get done are all draining away […] So in that sense this [experiment] was badly organised. […] It was poorly thought out that two thousand of us would participate in it. […] Everywhere it was glorified and they said, ‘Yes, yes, now they will receive free money.’ […] And, I guess no one had any kind of idea what would happen to us when these two years came to an end. It was imagined that all [of us] would jump back into an aquarium like a goldfish. […] And everything would move on as usual. […] But people’s heads do not work that way. […] So, you need to have something else than be given an ice-cream and told that it will last for a year, so be happy now. […] And after a year, puff, it is gone. […] And you will never ever get it back. During these two years, it would have been much better if there would have been some kind of statistics or even personal meetings like this […] Where people’s situations could have been mapped. […] And that we could have been offered some kind of path, a beginning of a road. […] For those of us, who will end up in unclear situations.
Many of the interviewees also resisted the causality assumption embedded in the RCT design, arguing that people do not live in laboratories and that multiple issues affect their lives.
In sum, expressions of confusion and disappointment regarding the experiment were common, relating to the somewhat complicated experiment design, the implementation as well as the way it was evaluated.
On media attention
The experiment resulted in great media interest before, during and after its implementation. Our data includes critical accounts of these media portrayals as many interviewees expressed their frustration at how the different stages of the experiment were reported and how this had impacted their lives.
Media representations were in most cases portrayed as harmful, since they included misleading information about the experiment, which also affected the public image of, and opinions towards, the experiment. The interviewees also stated that the media has its own logic, which it uses to sensationalize things by, for example, reporting false or extreme details and narratives about the experiment.
Some interviewees had also been interviewed by the Finnish and international media about their experiences of the experiment. In most cases these experiences were negative: first, because the interviewees thought that the reporters may have had a limited understanding of the existing Finnish social security system and unemployment benefits; second, because the journalists were reported to have reached out to participants in a sometimes overactive manner, which some described as ‘distressing’. There were concerns about whether participants would be understood correctly in interviews, whether false information would be published, and whether wide public attention might come with a loss of privacy and personal stigma, such as ‘giving a face’ to online discussions that claimed basic income recipients were ‘slobs’ who received ‘free money for nothing at our expense’. For the same reason, many of the interviewees stated that they had kept their involvement in the experiment secret, or shared it with only a few of the people closest to them, to escape moral judgement.
The framing of news in the traditional media concerning individual experiment participants’ lives was often seen as quite positive, especially during the first experiment year. However, some interviewees challenged these positive framings of the success of individual experiment participants vis-à-vis labour market participation, referring to them as ‘media hype’: Well, I have read interviews by some people from Iltalehti or Iltasanomat [ = Finnish tabloids] in which the experiment participants have praised this experiment to the skies and I have been very surprised by their opinion. […] But they have been people who have been able to participate in paid employment, so of course it has been nice to get a bit of money on top of your salary. But I haven’t had any benefit on top of this or any chance to access the labour market during this experiment. So those who have been interviewed in these media reports have been the ones who have succeeded, but us Donald Ducks who always fail, no one interviewed us.
After the preliminary results for the first experiment year were released many interviewees expressed their frustration concerning how the results were communicated and interpreted in the media. Some interviewees mentioned that the experiment was portrayed as a ‘failure’ before anyone had talked to or met the participants: And of course, you read the results in the papers, and no, [it] did not help at all. The entire experiment was described as crap. Money was lost, so I thought, well. […] it was typical Finnish mentality again. […] We will hit the less well-off [people] and at the same time say that we have this social democratic welfare state. They say, ‘You poor person there, shut up!’ […] We are back there again. […] I would have hoped that […] things would be more thoroughly studied during [the experiment] because no one has asked me any of these things. Just now, you are the first people to come and ask how this has influenced my life. Now afterwards, even though at the same time a result is released claiming that it [basic income experiment] sucked.
Immediately after the experiment period, a (non-mandatory) telephone survey was carried out by the bid-winning research team, which obtained a low response rate (Jauhiainen et al., 2021). Some interviewees suggested that people may have had reasonable motives not to participate in this evaluation study, given the enormous attention the experiment and its preliminary results received in the Finnish media. As seen in the following extract, some also assumed that the media portrayals included moral judgements related to study participation, which may have affected their willingness to participate in the evaluation study as ‘public guinea pigs’. In particular, the modest reported employment effects were experienced as personal failures (see also Blomberg et al., 2021): I was irritated how in public, or in the media it was reported that none of the study participants even picked up their phones. […] I understand very well those people who did not answer, because if you have been unemployed for a long time, it perhaps feels that you should have been successful in getting a job when you got into this experiment. […] And then when you don’t succeed, it must feel that you are some sort of a loser. Some might think that now everyone thinks that this was a bad experiment and […] I guess one can think that way too. […] To be ashamed of it. […] So, I think it is really nice [ = sarcastically] to be this kind of a ‘public guinea pig’ here.
In sum, while the media framing of participants was felt to have changed over time (from ‘slobs’ to ‘losers’), the moral expectations and judgements concerning UBI participants were felt to have prevailed during the entire experiment and even after it.
On political experiment interpretations
As mentioned above, the government decided to proceed with a partial basic income model that did not count the basic income for tax purposes, which means that the experiment operated with a model substantially different from what would be feasible if a basic income were introduced as a regular part of the welfare system (De Wispelaere et al., 2019; Kangas, 2021b). Thus, some interviewees criticized the experiment’s design, and the knowledge it produced, as unrealistic, as the following extract illustrates: Well. I was wondering whether you want to study the real thing here. […] It must be difficult to experiment with something within an experiment that does not correspond to the thing you want to find out. I think it is good to experiment and then to carry out real research on it. And of course you have to test a model before implementing it, but if it does not fully match the model that will be introduced, then there is a weakness.
Some interviewees also felt that the experiment, by design, was unfair towards, or had very little to do with, the idea of a basic income as an unconditional and universal benefit for all members of society. In these accounts, the experiment was portrayed as a political employment project targeting unemployed people, intended to push them towards (precarious) labour market participation, rather than a ‘real’ attempt to experiment with a universal basic income. The targeting of the experiment was described as a weakness, since people with prolonged unemployment histories may, for example, have multiple work ability limitations hampering their labour market access. Thus, to experiment properly and assess the benefits and pitfalls of introducing a basic income, the interviewees emphasized the need to include students, self-employed people and people living in low-income households in such experiments.
The interviewees also discussed the conflicting policy goals and objectives reflected in the experiment design, finding fault with the aim of activating the target group to find employment while possibly neglecting other policy goals relevant to a basic income.
It was not only the UBI design itself that was regarded as a result of partisan politics. The interviewees also reported that the evaluation results were used, not only by the media, for partisan purposes to support arguments either against or in favour of a basic income, depending on the interpreter’s political standing. In their accounts, the interviewees emphasized tensions concerning the nature of evidence in the policymaking process.
Some of the interviewees questioned whether the experiment was purposely designed to fail as part of a partisan ‘game’ to show ‘evidence’ that a basic income is not a feasible option. These accounts highlight that particular policy aspects are emphasized and other – often unquantifiable – aspects are disregarded when the results are assessed by policymakers: I don’t know if this experiment has been made to fail as here you do a so-called experiment that you already know the answer to. […] That was my assumption. And it still feels that those good sides [are disregarded], and here you focus so much on those negative [aspects] so this experiment has been made to fail. […] And it will never be [implemented]. These policy makers will never understand the value of it. Well-being or anything that has no direct monetary value. […] I would have wished for more. So it [basic income] could have been a noteworthy option. […] So it was disappointing to notice that it was just an experiment, and everything continued as normal after it.
Some interviewees also found it disappointing that the experiment did not lead to any concrete action and was conducted just for the sake of experimentation and research. Some were also sceptical about how the evidence from the experiment might disregard important knowledge. Thus, the interviewees had observed that policy analysis of the experiment involves an interplay between facts, values and norms, in which ‘evidence’, ‘effectiveness’ and relevant information are diverse, contestable and usable for tactical partisan and ideological purposes (see Head, 2008). In addition, the interviewees resisted the idea that such evidence could be presented as neutral and reliable information, questioning what counts as evidence and who decides the nature of knowledge (see Pearce and Raman, 2014).
In sum, according to the participants, the feeling of being part of a highly politicized experiment was present throughout the experiment. The political debate surrounding it and unexpected political decisions caused confusion among many participants and left them feeling that their experiences of the experiment’s effects, whether negative or positive (see Blomberg et al., 2021), had no bearing as policy evidence.
Conclusions
Despite our thematically open interviews, clearly ‘shared typical’ traits can be discerned in the ‘lived experiences of the experiment’. Although most interviewees expressed that it is, in principle, desirable and necessary to experiment with social policy reforms – such as a basic income – prior to implementing them, they also raised multiple problems which they associated with this particular experiment.
The traits identified relate to ambiguities regarding the objectives of the experiment, factors affecting the implementation process, and political and media (re)actions regarding the experiment’s processes and outcomes. These factors echo RCT experiment challenges identified using other methods (see Neuwinger, 2022).
First, we find a varying degree of comprehension regarding the UBI experiment design and its RCT-based methodology: overall, the experiment design and its implementation resulted in feelings of uncertainty and confusion among many participants during and after the experiment. In part, these related to the way in which participants were informed about becoming part of the experiment. While some participants expressed that they had difficulty understanding the principles of the experiment and its RCT-based methodology, others, often more accustomed to bureaucratic information and the principles of experiments, had difficulty comprehending the nature of this particular UBI design: it was not seen to correspond to a ‘real UBI’ at all.
Second, while our bottom-up perspective revealed some accounts of the experiment’s RCT design similar to those detected in previous studies, including studies of voluntary medical trials (see, for example, Naidoo et al., 2020), our study also highlighted additional ‘shared typical’ traits of lived experiences of the UBI experiment, related to the vast national and international media (including social media) and political attention paid to this particular social policy experiment. These aspects concerned feelings of a lack of privacy and perceived moral expectations and judgements. Consequently, many interviewees had kept their involvement in the experiment a secret.
Experiment participants were clearly affected by the often rather negative political discourses reported in the media, which pointed out that the experiment had been a ‘failure’. In combination with media portrayals of Gladstone Gander-like participants who had been successful on the basic income, feelings of being a loser who had failed on the basic income were not uncommon among the interviewees (also Blomberg et al., 2021). Other interviewees, however, took a clear stand against such prevailing positive news framings of individual participants’ success and noted that the media was not interested in the ‘Donald Ducks who always fail’. Such results illustrate how participants in highly politicized experiments, in contrast to medical experiments, easily feel that they become objects of strong moral expectations and judgements, which in the Finnish UBI case clearly affected participants’ self-image negatively.
Third, and again largely with reference to what was said before, during and after the experiment by politicians and experts in the media, including social media content, many interviewees felt that the experiment and its scientific results were being used for tactical partisan purposes. Questions were also raised concerning the nature of the scientific ‘evidence’. Interviewees were frustrated that only certain aspects of ‘what works’ seemed to interest the government, and that only certain ways of measuring ‘what works’ were considered relevant. Participants called for a more personalized, individualized way of measuring effects, in a ‘what works for me’ oriented way, considering a much larger array of outcomes. The simple causality assumption embedded in the research design was questioned: people do not live in laboratories, multiple issues affect people’s lives and choices, and ‘controlling for life’ was not perceived to be feasible. The interviewees argued that the employment effects (the main goal according to the government) should not be the only, or even the main, indicator of the success of the BI experiment.
Therefore, when evaluating and developing evidence-based policymaking in general, and RCTs in particular, it is crucial to consider the impact of the media and political attention that often surrounds larger and/or more controversial policy experiments, and how this can challenge the principle of ‘non-contamination’. While, in accordance with the RCT design, the evaluating researchers refrained from any contact with the research population, the interview accounts show that ‘contamination’, in the sense of being continually reminded of being part of an experiment (with a certain aim), was provided by the public political discussion reported in various media, strongly affecting participants’ experiences and thoughts on many aspects of the experiment. Further, the media had an important independent impact on how interviewees experienced the experiment, often resulting in feelings of frustration, stress and personal failure during and after it. Hence, we believe that our results not only demonstrate how political and (subsequent) media debates surrounding UBI experiments impact participants, but also suggest that the results might apply to more extensive socio-political experiments in general, especially if they are politically controversial.
For the reasons mentioned above, our findings are based on accounts produced after the experiment had finished, and self-selection has probably affected the sample. However, our interviewees were not all individuals who had ‘succeeded’ on the basic income; according to a previous analysis of the same data, they showed a large diversity in life situations and conditions (Blomberg et al., 2021). In future studies, it would nevertheless be worthwhile to consider gathering interview accounts from the initial moments of participation onwards. In experiments in which the ‘Hawthorne effect’ can hardly be avoided, a research design allowing for recurring data collection, utilized at least as a complementary strategy comprising a smaller group of willing participants, combined with an approach treating participants as active, aware and critical ‘experience agents’, would be one way of gaining valuable additional information on the policy solutions under testing, including additional insights into factors relating to the choices participants made during the experiment.
In conclusion, our results (also) show that people utilizing benefits are often capable of political analysis, notwithstanding the fact that participants do not always obtain a complete picture of the experiment, its (changing) design or its (blurred) objectives. Thus, the findings highlight not only various methodological challenges, including the potentially (unclear) effects of such insights and reasoning on participants’ behaviour vis-à-vis the ‘dependent variable’ of an RCT, but also various ethical and democratic issues. One aspect that was found to be important for participants in this UBI RCT concerns the role of public debate and the media: while the media is otherwise expected in a democracy to critically examine policy reforms and their outcomes and to serve as an arena for political debate, from an RCT perspective media discussion could rather be viewed as a source of experiment distortion and of frustration, confusion and feelings of shame and guilt among participants. Nonetheless, our study also shows that participants actively exercise agency in relation to (social) policies, prevailing discourses and challenging experiment designs.
The qualitative approach of this study also highlighted ethical perspectives on the untypical (see Weitlauf et al., 2007) involuntariness of participation characterizing the chosen RCT design, pointing instead to the need for a caring and interactive two-way dialogue that invites participant input on the problematic and ethical aspects of experiments (see Cox and McDonald, 2013; McDonald and Cox, 2009).
All in all, the lived experience perspective applied here helps highlight the wide variety of aspects that must be considered when deciding on social policy experiments and on the choice of RCTs for answering the policy questions at hand. These include, for example, an ethically sustainable experiment design and the handling of arising political debates and media involvement, while respecting participants as citizens who actively reflect on and make choices based on their lived experiences during the experiment.
Declaration of conflicting interests
The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Funding
The author(s) received no financial support for the research, authorship, and/or publication of this article.
