Abstract
Moral framing and reframing strategies persuade people holding moralized attitudes (i.e., attitudes having a moral basis). However, these strategies may have unintended side effects: They have the potential to moralize people’s attitudes further and as a consequence lower their willingness to compromise on issues. Across three experimental studies with adult U.S. participants (Study 1:
Keywords
Moralized attitudes are attitudes that are embedded in people’s core beliefs and convictions and are related to what people believe to be fundamentally right or wrong (Skitka et al., 2005; Skitka & Morgan, 2014). People who hold moralized attitudes are generally harder to persuade (e.g., Aramovich et al., 2012) and unwilling to compromise on their positions (e.g., Ryan, 2017), leading to moral and political divides. However, some strategies can persuade individuals who hold moralized attitudes. These strategies are designed to counter moralized attitudes by casting persuasive messages in a new moral light, either by highlighting how a position on a moralized attitude may in fact be immoral (
This may be premature. Moral framing and reframing strategies could have unintended side effects that limit their potential to bridge divides. We consider two. First, these strategies could increase the moral relevance people attach to an attitude, leaving people persuaded and with their attitudes moralized. Second, these strategies could decrease people’s willingness to compromise, leaving people persuaded but with their attitudes entrenched. For example, an individual who thinks the use of hiring algorithms to hire employees is morally right because it is fairer and more accurate could be persuaded with moral arguments highlighting that these technologies can be biased and unfair. However, this may lead to moralization of the attitude as well as a decreased willingness to compromise on the issue. Persuading and entrenching people may be a viable goal if one considers the changed attitude to be the morally correct one, but if moral framing and reframing are to be used to bridge political divides, such side effects are antithetical to the approach.
Potential Side Effects
In both moral framing and reframing strategies, the moral arguments used for persuasion could also induce change in people’s moral convictions. The content of the moral arguments is purposefully similar to factors that drive the process of moralization. For example, research suggests that moralization is based on the intuitive perception of harm (e.g., Schein & Gray, 2018), strong emotional reactions (Brandt et al., 2015; Wisneski & Skitka, 2017), or the linking of an attitude with a broader moral principle (Feinberg et al., 2019; Rozin, 1999). Moral arguments often contain all of these elements, tapping into people’s moral emotions, perceptions of harm, and their broader moral principles (e.g., Feinberg & Willer, 2015; Luttrell et al., 2016, 2019). These elements likely make the argument persuasive (Feinberg & Willer, 2015), but they could also moralize the target attitude.
A secondary moralization effect may not be worrisome. Moralized attitudes can be constructive because they can increase people’s political engagement and lead to more collective action and greater civic participation (Mazzoni et al., 2015; Skitka & Bauman, 2008; van Zomeren et al., 2011). However, moralized attitudes are a double-edged sword and can also have effects that may be less constructive (at least in certain situations) because people who hold moralized attitudes are less willing to compromise (e.g., Delton et al., 2020), show more anger (e.g., Mullen & Skitka, 2006), and are intolerant toward those with whom they disagree (e.g., Garrett & Bankert, 2020).
Statement of Relevance
Societies are divided over moral issues. One set of strategies to bridge these divides is to frame persuasive arguments in moral terms (e.g., “new technologies can cause harm and be used to discriminate against people”) or use alternative moral values. These strategies have unintended side effects that reduce the possibility that they can bridge moral divides. We found two such side effects. The first is that moral frames increase moralization (one’s attitude having a moral basis), and the second is that moral frames lower people’s willingness to compromise. These results imply that current moral-persuasion strategies designed to bridge moral divides by changing attitudes could unintentionally increase those divides by further moralizing and entrenching people’s attitudes. Scholars and practitioners should use these strategies cautiously and test for potential side effects in the domains in which they plan to use them. We also found that nonmoral frames were persuasive and de-moralized people’s attitudes. This strategy has the potential to persuade people but could also reduce the moral stakes by reducing levels of moralization.
We focus on a side effect that is particularly relevant for efforts at bridging moral and political divides: the willingness to compromise. Willingness to compromise in a democratic system recognizes pluralistic values and acts as an instrument to achieve mutual respect and stability. Resisting compromise and strongly favoring only one outcome can lead to a stalemate in governments in which problems go unresolved (see Ryan, 2017). People who hold strong moral convictions about their attitudes are less likely to compromise (Clifford, 2019; Delton et al., 2020; Ryan, 2017) and are even less likely to identify procedures for resolving issues (Skitka et al., 2005). This is because moralized attitudes are particularly strong attitudes, connected to right and wrong, and are often viewed like objective facts (Goodwin & Darley, 2008; Skitka et al., 2021). If one perceives the other side as holding an objectively wrong position, it does not make sense to compromise. For people who hold truly strong moral convictions, it would be akin to compromising on the answer to 2 + 2. Notably, if moral framing and reframing strategies induce an unwillingness to compromise, their utility in bridging divides will be curtailed.
There is some initial evidence for this curtailing. One study found that people exposed to moral rhetoric (compared with pragmatic rhetoric) used more absolutist reasoning and expressed more intense political attitudes (at least for two of the attitudes considered; Marietta, 2008). This study, however, was underpowered, did not directly measure moralization or compromise, and did not include a control condition. The latter omission is important because without a control condition, one cannot determine whether moral framing increases moralization or whether pragmatic framing decreases moralization. Another study (Van Zant & Moore, 2015) that included a moral, ambiguous, and pragmatic frame did not find any differences in moralization across the frames. However, very brief frames were used, which may not be sufficient to affect moralization. Nonmoral messages that contain pragmatic arguments highlighting economic and feasibility concerns can be persuasive for people who hold nonmoral attitudes and unpersuasive for those who hold moralized attitudes (Luttrell et al., 2019, Study 1). However, how these messages might affect moralization and the willingness to compromise is not known. Some research suggests that the consideration of financial costs can reduce the influence of moralization (Bastian et al., 2015), and others hint at using emotional de-escalation to reduce moralization (Clifford, 2019; Skitka et al., 2021). For example, emotional frames lead to greater attitude moralization compared with a control frame (Clifford, 2019), but whether nonemotional frames do the opposite is an empirical question yet to be tested. Nonmoral messages devoid of emotional content and containing economic concerns could potentially result in de-moralized attitudes and a greater willingness to compromise. By including moral, nonmoral, and control conditions, it is possible to test for unintended side effects of moral framing and reframing strategies.
The Current Research
We assessed whether moral and nonmoral frames affected people’s moral convictions (Studies 1–3) and their willingness to compromise (Study 3) on their position. We also tested whether the frames were persuasive to ensure that any differences in moral convictions or compromise were not due to differential effectiveness at changing attitudes (Studies 1–3). We also explored potential mechanisms (e.g., emotions, perceptions of harm) driving changes in moral convictions (Studies 2 and 3). All the studies focused on persuading people to oppose new big-data technologies because these issues involve relatively new attitudes that are often discussed using moral language (Corlett, 2002; Kleinberg et al., 2018) and because they have the potential to moralize those attitudes (Kodapanakkal et al., 2021).
Method
We describe the method of all studies in parallel, highlighting the similarities and differences. These are summarized in Table 1. We used a pretest/posttest design with two time points for Studies 1 and 2. Study 3 had only one time point. In all studies, we randomly assigned participants to at least one moral frame, one nonmoral frame, or a control condition.
Design of Studies 1 to 3 and Demographic Statistics of Participants
There was only one time point in Study 3.
In Study 1, we assessed whether moral and nonmoral frames were persuasive and whether they affected moral conviction. These frames presented arguments opposing crime-surveillance technologies. The primary analyses in Study 1 were exploratory.1 We found that the moral frames were persuasive and moralized people’s attitudes, whereas nonmoral frames were persuasive but (marginally) de-moralized their attitudes.
We had three aims in Study 2. First, we wanted to replicate the moralization and de-moralization findings of Study 1. We predicted that the results would be the same as in Study 1 (the preregistration can be viewed at https://osf.io/7rzx8/). Second, we wanted to explore possible cognitive and affective mechanisms that could drive the effects of moralization and de-moralization. Third, we wanted to see whether the findings of Study 1 would replicate in a different technology setting—hiring algorithms.
We had three aims in Study 3. First, we aimed to replicate the moralization and de-moralization effects of Studies 1 and 2. Second, we aimed to further explore mechanisms of the de-moralization process intended to tap into a pragmatic reasoning style that might temper moralization. Third, we aimed to assess a second possible side effect: people’s willingness to compromise. We expected that people in the moral condition would be less willing to compromise, whereas people in the nonmoral condition would be more willing to compromise. These predictions were preregistered (https://osf.io/sqa9w/). The studies were reviewed and approved by the ethics review board of Tilburg University School of Social and Behavioral Sciences.
Participants
All studies were conducted online on Prolific (www.prolific.co) with participants from the United States. Given the similarities in all the studies, participants who participated in one study were excluded from the participant pool of subsequent studies. In Studies 1 and 2, we conducted power analyses with the R package
For Study 3, we calculated that a minimum sample size of 950 would be required to achieve a standardized effect size (Cohen’s
Design and procedure
In Studies 1 and 2, participants first read a neutral description of the technology under consideration at Time 1. This description included factual information about who uses the technology and what the technology does. The wording was as neutral as possible without any persuasive arguments for or against the technology, and it did not mention any benefits or downsides of the technology. Participants read about a crime-surveillance technology in Study 1 and a hiring algorithm in Study 2. We used hiring algorithms in Study 2 because they differed from crime-surveillance technologies in two ways: Hiring algorithms are used mostly by private companies (not the government) and have the potential for discrimination instead of privacy violations, which are more problematic in crime-surveillance technologies (see Kodapanakkal et al., 2020, for a justification of various technology domains).
After reading the descriptions, participants reported their support for the technology and the degree to which they felt that their attitude was based on a moral conviction. They also reported the extent to which their attitudes were grounded in specific moral foundations (see Note 1). Finally, they answered demographic questions related to age, gender, and political ideology. (See Tables S1 and S2 in the Supplemental Material available online for full descriptions of the technologies.)
At Time 2, participants in Study 1 were randomly assigned to four conditions (harm-based moral, liberty-based moral, nonmoral, and control) and participants in Study 2 were randomly assigned to three conditions (harm or fairness based, nonmoral, and control). The control message was the same as the neutral description presented at Time 1 for each study. The first part of all the other messages was the same as the control message. The second part of the messages included the potential disadvantages of the respective technology, and the third part presented a factual example of the disadvantage. In Study 1, the harm-based moral message included arguments that used keywords such as
At Time 2, after reading the different messages, participants reported their support for the technology and the degree to which they felt that their attitude was based on a moral conviction. In Study 2, we additionally assessed potential mechanisms of moralization and de-moralization. Participants reported perceived risks and benefits of the technology and emotional reactions of anger, disgust, fear, feeling creeped out, and gratefulness toward the technology.
The procedure for Study 3 was exactly the same as in Time 2 of Study 2, in which participants were assigned to the three conditions that were used in Study 2. Next, they reported their attitude toward the technology in the study (
Measures
Attitude support
Participants rated their attitude toward the respective technology with the following item on a 7-point Likert scale (1 =
Moral conviction
We assessed participants’ moral conviction with a two-item moral-conviction scale (e.g., Skitka et al., 2005): “How much is your position on the use of this technology connected to your core moral beliefs and convictions?” and “How much is your position on the use of this technology connected to your beliefs about fundamental right or wrong?” Participants responded to the items on a 7-point Likert scale (1 =
Potential mechanisms
Participants reported their perception of the risks of the technology by responding to questions such as, “This technology would be risky for people” (Study 1: α = .82; Study 2: α = .82; 1 =
Willingness to compromise
Support for compromising and uncompromising political candidates
In Study 3, participants reported their likelihood of supporting two candidates who were competing for a mayoral nomination. The description was written such that, without mentioning “oppose” or “support,” both candidates were portrayed as agreeing with the participant’s position. Participants read the following: Both candidates agree with your position on the use of this hiring algorithm. Candidate A is uncompromising and will vote against any proposal that does not support your position. Candidate B will dislike proposals that do not support your position, but will be willing to negotiate and make concessions in this area if it leads to a gain in other areas that are important to you.
Participants reported their support for the uncompromising and compromising candidates by answering the question, “How likely are you to support Candidate [A/B] for the nomination?” using a 7-point Likert scale (1 =
Willingness to work with compromising and uncompromising managers
In the second measure of compromise, participants reported their willingness to work with two managers who had the power to decide whether they would use the hiring algorithm or not. Again, the description was written such that, without mentioning “oppose” or “support,” both managers were portrayed as agreeing with the participant’s position. Participants read the following: Both managers agree with your position on this algorithm. Manager A is uncompromising and is not open to views on this algorithm that do not support your position. Manager B will dislike views that do not support your position, but is willing to negotiate and make concessions if it leads to a gain in other areas of the company that are important to you.
Participants reported their willingness to work with the uncompromising and compromising managers by answering the question, “How likely are you to work with Manager [A/B]?” using a 7-point Likert scale (1 =
Incentivized compromise game
The third measure of compromise was a fully incentivized economic game based on a modified version of the game used in Delton et al. (2020). In this game, participants were presented with six different policies that ranged from fully implementing the technology to not implementing the technology at all. On the basis of their reported attitude, we told participants that they would be paired with a participant who had the opposite attitude. If participants selected the midpoint of the scale, they reported in a follow-up question whether they would support or oppose the algorithm if they really had to choose one side. Participants who supported the algorithm saw this description: “You said you SUPPORT the implementation of this algorithm. The other participant in this negotiation OPPOSES the implementation of this algorithm.” Similarly, participants who opposed the algorithm saw this description: “You said you OPPOSE the implementation of this algorithm. The other participant in this negotiation SUPPORTS the implementation of this algorithm.” Participants could choose policies that corresponded to different levels of compromise, and there would be a deal only if both participants picked the same policy. We operationalized compromise as the proportion of payoff to the opponent. The proportion of payoff could be 0, .2, .4, .6, .8, or 1, depending on the policy chosen. A higher payoff for the opponent indicated higher compromise. For more details on the game, see “Details of Willingness to Compromise Measures” in the Supplemental Material.
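The payoff logic of the game can be sketched as follows. This is a hypothetical reconstruction based only on the description above: the policy indices, the mirroring for opponents of the technology, and the function names are assumptions, not the study’s actual implementation.

```python
# Hypothetical sketch of the incentivized compromise game's payoff logic.
# Six policies range from full implementation (index 0) to no
# implementation (index 5). A participant's opponent receives a larger
# share of the payoff the further the participant moves away from the
# policy that fully matches their own position.

OPPONENT_SHARE = [0.0, 0.2, 0.4, 0.6, 0.8, 1.0]

def opponent_payoff(policy_choice: int, supports_technology: bool) -> float:
    """Proportion of the payoff the opposing participant receives.

    For a supporter of the technology, policy 0 (full implementation)
    concedes nothing (share 0.0) and policy 5 concedes everything
    (share 1.0). For an opponent of the technology, the scale is mirrored.
    """
    if supports_technology:
        return OPPONENT_SHARE[policy_choice]
    return OPPONENT_SHARE[len(OPPONENT_SHARE) - 1 - policy_choice]

def deal(choice_a: int, choice_b: int) -> bool:
    """There is a deal only if both participants pick the same policy."""
    return choice_a == choice_b
```

Under this sketch, a higher opponent share directly operationalizes a higher level of compromise, matching the description of the measure above.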
Results
The means of the baseline attitudes and moral-conviction measures are shown in Table S4 in the Supplemental Material. Results were output into Word using the R package
Effect of condition on attitude support
We first tested whether the persuasive conditions were effective at persuading participants. To test this, we dummy-coded the condition variable (reference: control condition) in all three studies. In Studies 1 and 2, we regressed attitude support at Time 2 on dummy-coded condition and attitude support at Time 1, so that the effects of condition indicated changes in attitude support between Time 1 and Time 2. In Study 3, we regressed attitude support on dummy-coded condition. Results are shown in Table 2 and Figure 1. Across all three studies, we found that compared with messages in the control condition, messages in both the moral and nonmoral conditions significantly persuaded participants to oppose the technology (
Effect of Condition on Attitude Support and Moral Conviction in Studies 1 to 3
Note: Standard errors are given in parentheses. The reference group for the dummy-coded conditions is the control condition. In Studies 2 and 3, the moral condition included both harm- and fairness-based arguments, and there was no liberty-based moral condition. Attitude support refers to participants’ attitude toward the technology in the study.
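The regression setup described above can be sketched as follows. This is an illustrative Python reconstruction with simulated data; the original analyses were run in R, and the sample size, effect sizes, and variable names here are assumptions chosen only to make the dummy-coding logic concrete, not the study’s values.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 300

# Simulated data: condition (0 = control, 1 = moral, 2 = nonmoral),
# Time 1 attitude support, and Time 2 attitude support with an assumed
# (purely illustrative) persuasion effect for both message conditions.
condition = rng.integers(0, 3, size=n)
support_t1 = rng.normal(4.0, 1.0, size=n)
effect = np.where(condition == 0, 0.0, -1.0)  # both frames reduce support
support_t2 = 0.6 * support_t1 + effect + rng.normal(0, 0.5, size=n)

# Dummy-code condition with control as the reference group, and include
# Time 1 support as a covariate so that the condition coefficients
# reflect change in support from Time 1 to Time 2.
moral = (condition == 1).astype(float)
nonmoral = (condition == 2).astype(float)
X = np.column_stack([np.ones(n), moral, nonmoral, support_t1])
b, *_ = np.linalg.lstsq(X, support_t2, rcond=None)

intercept, b_moral, b_nonmoral, b_t1 = b
# Negative coefficients for the two dummies indicate that, at equal
# Time 1 support, participants in the moral and nonmoral conditions
# ended up less supportive of the technology than control participants.
```

The same design matrix, with moral conviction substituted for attitude support, corresponds to the moral-conviction analyses reported below.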

Effect of condition on support for the technology at Time 2 in Studies 1 and 2 and effect of condition on support for the technology at Time 1 in Study 3. Colored dots represent observed data for each participant in each condition, and the accompanying distributions represent the density of the data. Black dots represent estimated means (controlling for Time 1 attitude support in Study 1 and Study 2). Error bars around estimated means denote 95% confidence intervals.
Side Effect 1: effect of condition on moral conviction
The persuasive conditions worked as intended, but did they also produce a side effect on moral conviction? To test this, we dummy-coded the condition variable (reference: control condition) in all three studies. In Studies 1 and 2, we regressed moral conviction at Time 2 on dummy-coded condition and moral conviction at Time 1, so that the effects of condition indicated changes in moral conviction between Time 1 and Time 2. In Study 3, we regressed moral conviction on dummy-coded condition. Results are shown in Table 2 and Figure 2. Across all three studies, we found that, compared with participants in the control condition, participants’ attitudes in the moral conditions were significantly more moralized (

Effect of condition on moral conviction for the technology at Time 2 in Studies 1 and 2 and effect of condition on moral conviction at Time 1 in Study 3. Colored dots represent observed data for each participant in each condition, and the accompanying distributions represent the density of the data. Black dots represent estimated means (controlling for Time 1 moral conviction in Study 1 and Study 2). Error bars around estimated means denote 95% confidence intervals.
In Study 3, we also tested whether the conditions similarly affected other dimensions of attitude strength (for full details, see Table S5 and Fig. S6 in the Supplemental Material). Moral frames increased all other dimensions of attitude strength (
Side Effect 2: effect of condition on willingness to compromise
We now turn to willingness to compromise, which was assessed only in Study 3 using two self-report measures and one behavioral measure. Each section below presents results for each variable. Results for all the variables are shown in Table 3 and Figure 3. For details regarding the association between moral conviction and willingness to compromise, see Table S6 and Figure S7 in the Supplemental Material.
Effect of Condition and Moral Conviction on Willingness to Compromise in Study 3
Note: Standard errors are given in parentheses.

Effect of the conditions on willingness to compromise in Study 3. Results are shown separately for support for the uncompromising and compromising political candidate (top row), willingness to work with an uncompromising and a compromising manager (middle row), and the proportion of payoff that the matched partner received in the incentivized compromise game (bottom row). Colored dots represent observed data for each participant in each condition, and the accompanying distributions represent the density of the data. Black dots represent estimated means. Error bars around estimated means denote 95% confidence intervals.
Self-reported willingness to compromise
To assess the effect of the condition on support for the uncompromising candidate and uncompromising manager, we regressed support for the uncompromising candidate or manager on dummy-coded condition (reference: moral condition). We used the moral condition as the reference group for these analyses because our hypothesis predicted a difference between the moral condition and the other two conditions. As predicted, we found that people were more likely to support the uncompromising candidate in the moral condition compared with both the control and nonmoral conditions (
To assess the effect of the condition on support for the compromising candidate or compromising manager, we regressed support for the compromising candidate or manager on dummy-coded conditions (reference: nonmoral condition). We used the nonmoral condition as the reference group for these analyses because our hypothesis predicted a difference between the nonmoral condition and the other two conditions. The results for the candidate were not in line with our predictions. We found that people did not differ in their support for the compromising candidate in the nonmoral condition compared with the control condition or the moral condition (
Incentivized compromise game
Next, using the incentivized compromise game, we assessed whether condition affected people’s willingness to pick policies that represented a compromise of their position. To test this, we regressed the payoff for the opponent (indicating more compromise of the participant’s position) on dummy-coded conditions (reference: control condition). As predicted, we found that people in the moral condition were less likely to compromise than people in the control condition (
Potential mechanisms of moralization and de-moralization
Across the three studies, we found that moral frames and nonmoral frames were equally persuasive but that moral frames increased the strength of people’s moral convictions and made them less willing to compromise, whereas nonmoral frames decreased the strength of people’s moral convictions. This shows that moral frames can be effective persuasive tools and at the same time cause side effects. It is less clear why the moral frames have these effects. That is, what about the frames might cause the moralization and de-moralization effects we observed? In Studies 2 and 3, we explored whether emotional reactions and perceptions of risks and benefits are impacted by the experimental conditions and correlated with moralization. In Study 3, we additionally explored the impact of condition on finding the technology financially costly and on weighing costs and benefits. If these are candidates for mechanisms, they should be differentially affected by the two persuasive conditions. The factors that are higher in the moral condition should also be positively correlated with moral conviction, and the factors that are higher in the nonmoral condition should be negatively correlated with moral conviction (for Study 2, this is moral conviction at Time 2). We conducted separate regression analyses with each of the possible mechanism variables as the dependent variable. Condition was dummy coded (reference: control condition). The main results are shown in Figures 4, 5, and 6. (More details are available in Fig. S8 and Table S7 in the Supplemental Material.)
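The screening logic for candidate mechanisms described above can be sketched as follows. This is a hypothetical illustration of the two stated criteria; the function name, condition coding, and toy data are assumptions, not the study’s materials or analysis code.

```python
import numpy as np

def is_candidate_mechanism(mech, condition, moral_conviction):
    """Screen a variable against the two criteria described in the text.

    1. The variable must be differentially affected by the two
       persuasive conditions (higher in one than in the other).
    2. Its correlation with moral conviction must match in direction:
       positive if it is elevated in the moral condition, negative if
       it is elevated in the nonmoral condition.
    Condition codes: 0 = control, 1 = moral, 2 = nonmoral.
    """
    mech = np.asarray(mech, dtype=float)
    moral_conviction = np.asarray(moral_conviction, dtype=float)
    m_moral = mech[condition == 1].mean()
    m_nonmoral = mech[condition == 2].mean()
    r = np.corrcoef(mech, moral_conviction)[0, 1]
    if m_moral > m_nonmoral:      # e.g., anger, disgust
        return r > 0
    if m_nonmoral > m_moral:      # e.g., perceived financial cost
        return r < 0
    return False                  # no differential effect of the frames
```

For example, a variable elevated by both frames alike (such as perceived risk in these data) fails the first criterion, whereas a variable elevated only in the nonmoral condition must correlate negatively with moral conviction to pass.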

Effect of condition on perceived risks (top row) and perceived benefits (bottom row) in Study 2 (left) and Study 3 (right). Colored dots represent observed data for each participant in each condition, and the accompanying distributions represent the density of the data. Black dots represent estimated means. Error bars around estimated means denote 95% confidence intervals.

Effect of condition on each of five perceived emotions in Study 2 (left) and Study 3 (right). Colored dots represent observed data for each participant in each condition, and the accompanying distributions represent the density of the data. Black dots represent estimated means. Error bars around estimated means denote 95% confidence intervals.

Effect of condition on the extent of weighing costs and benefits (left) and perceived financial cost (right) in Study 3. Colored dots represent observed data for each participant in each condition, and the accompanying distributions represent the density of the data. Black dots represent estimated means. Error bars around estimated means denote 95% confidence intervals.
For the sake of brevity, we focus only on the results that provide some evidence that the variable is a potential mechanism. These variables were anger, disgust, and perceptions of financial cost. In both Study 2 and Study 3, participants reported significantly more anger (Study 2: β = 0.15,
Notably, as detailed in the Supplemental Material (see Fig. S8 and Table S7), other potential mechanisms did differ by condition or were correlated with moral conviction. We do not think that they represent likely mechanisms because either both the moral and nonmoral frames affected the measure in the same way (e.g., both increased perceived risks) or the measure was not correlated with moral conviction (e.g., fear was unassociated with moral conviction).
General Discussion
We tested for side effects of moral framing and reframing strategies on people’s moral convictions and willingness to compromise. We found that moral frames are persuasive and moralize people’s attitudes, whereas nonmoral frames are persuasive and de-moralize people’s attitudes. People who read moral frames are more likely to support uncompromising individuals and less willing to compromise themselves. We also found that anger and disgust potentially drive moralization and that considering how financially costly a technology is potentially drives de-moralization.
Theoretical and practical implications
We indeed found moralization and compromise side effects of moral framing and reframing strategies. Whether these side effects are an unexpected benefit or harm depends on the goals of the persuader. If the changed attitude is considered the morally correct one, these side effects may be beneficial. However, if the goal is to bridge divides, these side effects may be detrimental because they could entrench rather than bridge divides. For example, less willingness to compromise can delay policymakers from coming to a solution and cause a stalemate. Before these framing strategies are used to address delicate situations (e.g., the COVID-19 pandemic; Van Bavel et al., 2020), they should be tested in the specific context with careful attention paid to their side effects.
Our results confirm that moral frames are associated with the moral emotions of anger and disgust, as shown previously (e.g., Feinberg et al., 2019; Wisneski & Skitka, 2017). We additionally found that these emotions are elicited specifically by moral frames and not by nonmoral frames, which further supports their role in moralization.
Importantly, moralization is not the only possible outcome. We found a de-moralization effect that occurs when people read the nonmoral frames. Previous studies have examined differences between moral and pragmatic rhetoric, but either they did not find an effect (Van Zant & Moore, 2015) or it was unclear whether moralization or de-moralization occurs because there was no control condition (Marietta, 2008). In contrast, we directly examined de-moralization and found that nonmoral frames reduce moralization compared with a control condition. We also found initial evidence for why de-moralization occurs. People consider the technology more financially costly, specifically in the nonmoral frame, and this is negatively associated with moral conviction. This is in line with the findings of Bastian et al. (2015), who showed that monetary costs diminished the negative effect of moral conviction on the acceptance of mining. Nonmoral frames also increased certainty and extremity, even as they reduced the strength of moral convictions, providing further evidence that moral conviction is a unique dimension of attitude strength.
Strengths and limitations
Our study had several strengths. First, the pretest/posttest design in Studies 1 and 2 measured change in people’s moral convictions. Second, multiple measures of willingness to compromise, including a behavioral measure, more comprehensively assessed situations in which people do or do not compromise. Third, we compared moral and nonmoral frames with a neutral control condition, providing differential evidence for moralization and de-moralization and teasing out mechanisms specific to each of these processes.
There are, however, constraints on the generalizability of the findings. First, new big-data technologies may not be politicized in the same way as other issues that have been studied. Although the baseline measures for moral conviction (
Although the effects related to de-moralization and compromise are small in magnitude, they are similar to effect sizes found in the modern persuasion literature (e.g., reducing prejudice; Broockman & Kalla, 2021; Paluck et al., 2021). Effect sizes might be increased by using reinforcing persuasive messages at various time intervals with multiple exposures to persuasion. Future research could test this.
Finally, our study relied on U.S. participants recruited through Prolific. This was to maintain comparability with prior studies in moral framing and reframing (Feinberg & Willer, 2013; Luttrell et al., 2019); however, testing in other contexts is necessary.
Conclusion
Moral frames are persuasive and moralize people’s attitudes, whereas nonmoral frames are persuasive and de-moralize people’s attitudes. Moral frames also reduce compromise. The use of moral frames as a persuasion tool should be considered cautiously and assessed for potential side effects; otherwise, attempts to bridge moral divides with these tools may backfire.
Supplemental Material
sj-docx-1-pss-10.1177_09567976211040803: Supplemental material for “Moral Frames Are Persuasive and Moralize Attitudes; Nonmoral Frames Are Persuasive and De-Moralize Attitudes” by Rabia I. Kodapanakkal, Mark J. Brandt, Christoph Kogler, and Ilja van Beest in Psychological Science.
Footnotes
Transparency
All authors developed the study concept and contributed to the study design. Testing, data collection, and data analysis were performed by R. I. Kodapanakkal under the supervision of M. J. Brandt, C. Kogler, and I. van Beest. R. I. Kodapanakkal drafted the manuscript, and M. J. Brandt, C. Kogler, and I. van Beest provided critical revisions. All authors approved the final manuscript for submission.