Abstract
Purpose
This review paper addresses three major challenges hindering progress in the field of social-emotional learning in education.
Design/Approach/Methods
We first discuss how social-emotional skills can be conceptualized, contrasting identity and self-efficacy approaches. Second, we argue for the development of more objective assessments of social-emotional skills and their development, as a supplement to the currently and frequently used self- and informant descriptions relying on Likert-type scales. Third and finally, we discuss recurrent issues that are raised when debating the implementation of social-emotional skill learning in the classroom.
Findings
The review suggests that, at this point in time, both self-efficacy and identity conceptualizations of social-emotional skills are valuable approaches. Future research will need to explore which conceptualization for which skills demonstrates the best predictive validity, taking into account the nature of outcomes and contexts. The desired characteristics of more objective social-emotional skill measures are defined, and we encourage researchers to examine how these overlap with self-reports using construct maps. Finally, we underscore the importance of having a comprehensive and evidence-informed taxonomy of social-emotional skills to guide the debate on the inclusion of social-emotional skill learning in curricula and its implementation in the classroom.
Originality/Value
This short review is primarily written for practitioners and policymakers to give a quick and accessible update on the key challenges in social-emotional learning today. The review should trigger more in-depth reading on the topic, but also support readers in understanding current problems and help them deal with questions and potential objections that might be raised by different stakeholders.
The past decade witnessed increasing attention to the concept and development of social-emotional skills in education (Abrahams et al., 2019), in addition to long-term interest in achievement in math, languages, and sciences, as reflected in educational monitoring studies like the Programme for International Student Assessment (PISA; OECD, 2023). Although various labels and models have been suggested to denote social-emotional skills (e.g., Chernyshenko et al., 2018; John & De Fruyt, 2015; Kankaras & Suarez-Alvarez, 2019), there is now consensus that these skills can be conceptually grouped into five broad domains (see Shiner et al., 2022 for an extensive overview and integration of models). Interpersonal skills are grouped into “Engaging with Others” and “Collaboration/Amity,” reflecting initiating and maintaining contact, and the quality of social interactions, respectively. Skills important for task completion, such as organizing, implementing, and focusing are grouped under “Self-Management.” Skills like creativity, cultural sensitivity, and out-of-the-box thinking are subsumed under the domain of “Open-Mindedness.” Finally, skills such as dealing with stress, calming down, and controlling anger are grouped into “Emotion Regulation.”
Although there is considerable consensus on the comprehensiveness of this five-factor social-emotional skill model, scholars still diverge on how to define social-emotional skills. For example, the Collaborative for Academic, Social, and Emotional Learning (CASEL; Payton et al., 2000) and the Organisation for Economic Co-operation and Development (OECD; Kankaras et al., 2019) described social-emotional skills as having a more typical nature, representing how well an individual typically manifests a particular skill, hence introducing a more identity or personality-oriented operationalization of skills. In contrast, Soto et al. (2021, 2022a) proposed a self-efficacy operationalization of skills, indicating how well an individual can use a skill when the situation calls for it, to make skills conceptually more distinct from personality traits. An example is presentation skills (i.e., “How well can you present something in front of an audience?”) that most individuals only need to demonstrate when the situation requires that skill.
A second point of intense debate is the way in which social-emotional skills are assessed. In most studies, skills are assessed via self-reports using Likert-based scales, sometimes supplemented with ratings by parents or teachers (Kankaras et al., 2019; OECD, 2013). However, self-reports have considerable limitations, especially when they are used in summative evaluation contexts or for making international comparisons (such as in PISA). These drawbacks are due to different rater characteristics and a range of potential answering biases (Duckworth & Yeager, 2015) that may vary across student age and cultures. There is hence a strong call for alternative and more objective social-emotional skill assessments (MOSA). The existing MOSA measures, though, often rely on “camouflaged” forms of self-report, are confounded with cognitive ability, or exhibit a lack of correlation with self/observer reports, thereby raising concerns about their construct validity (Berg et al., 2021; Humburg & van der Velden, 2015). In other words, it is often not clear what MOSA measures really evaluate. In addition, it remains to be investigated how MOSA measures can inform us about how social-emotional skills develop.
A third point of debate is on the best way to implement social-emotional learning in education and how to support teachers to achieve this goal. Social-emotional learning can be implemented in various ways, for example, through running specific social-emotional learning programs explicitly scheduled in the curriculum and the teaching program (Taylor et al., 2017). It can also be achieved, however, when studying other learning content, such as languages, science, technology, or mathematics (Sklad et al., 2012). For example, when learning vocabulary and facts on historical landmarks and political orientations, students can also practice and learn skills to respect each other, be open to multiple points of view, or take turns. Alternatively, peer tutoring, in the form of students helping their peers who experience difficulties with math, will require perspective taking and creativity to come up with alternative ways of explaining, and will develop their empathic skills. Acquiring knowledge and learning skills are hence not contradictory, but can develop hand-in-hand. Moreover, the meta-analytic work by Sklad et al. (2012) showed that external consultants coming into the classroom to roll out social-emotional learning programs are not necessarily required, but that teachers can also effectively achieve skill learning. The critical question then becomes: “How can we optimally prepare teachers for students’ social-emotional learning and support their efforts?”
The discussions on the conceptual definition of social-emotional skills, the way skills are assessed, and how social-emotional skill learning is implemented hamper progress in this field, both at the level of theory development and in terms of the design of curricula and educational policies. The significance of these themes cannot be overestimated, given the considerable attention the results of these assessments receive in the mass media and society, and their implications for education and labor market policies in countries around the world. To feed the debate and update practitioners and policymakers, we discuss these three challenges in more detail, at the same time highlighting opportunities for future research and development.
Conceptualizing social-emotional skills
Most assessments of social-emotional skills in education rely on Likert-based scale ratings in which students and their acquainted observers describe how students typically act, think, and feel (i.e., an identity conceptualization of skills). For example, the OECD in its first large-scale study on monitoring social-emotional skills in large cities (Kankaras et al., 2019) opted for such a conceptualization of skills, following similar approaches by CASEL (Taylor et al., 2017) and many others (Abrahams et al., 2019). They additionally chose this identity conceptualization because it allows ratings to be corrected for the influence of acquiescence response bias, also called “yea-saying” bias (Primi et al., 2020). Acquiescence is the tendency to agree with items, regardless of their content. This bias is known for its tendency to distort ratings, especially within younger and socio-economically disadvantaged groups, further complicating the replication of factor structures (Rammstedt & Farmer, 2013; Rammstedt et al., 2013; Soto et al., 2008) and cross-cultural comparisons. Identity-based operationalizations allow the use of positively and negatively keyed items, which has been demonstrated to be an effective method to control for acquiescence (Primi et al., 2018, 2020; Soto et al., 2008).
Soto et al. (2021, 2022a) recently proposed an alternative, self-efficacy-based operationalization of social-emotional skills. To make the assessment of skills more distinct from personality, they changed rating instructions from: “How characteristic is this for you?”—requiring a description of typical behavior—to “How well can you do this, when the situation calls for it?” asking for a self-efficacy description. To construct the Behavioral, Emotional, and Social Skills Inventory (BESSI), Soto et al. (2022a) started from the personality literature and Big Five framework and compiled an accompanying skill set with self-efficacy rating instructions. They subsequently examined the predictive validity of their BESSI skills relative to Big Five Inventory-2 (BFI-2; Soto & John, 2017) personality scales, demonstrating that BESSI skills predicted variance on top of personality traits for some specific outcomes in a sample of young adults (Soto et al., 2022b). Counter to the recommendations provided by Rammstedt and Farmer (2013), Rammstedt et al. (2013), and Soto et al. (2008), self-efficacy scales do not provide opportunities to correct for acquiescent responding, because self-efficacy items are by definition positively keyed.
The distinction between the two measurement conceptualizations proposed by Soto et al. (2021, 2022a) is not new, but goes back to a longstanding debate in differential psychology regarding maximal versus more typical operationalizations of individual differences, that is, cognitive abilities versus personality traits (Shiner et al., 2022). Cognitive abilities reflect a more maximal capacity (to solve a particular problem), whereas personality traits are indicative of how a person typically behaves across time and situations. Both cognitive ability (maximal performance) and personality (typical performance) have been shown to predict important subjective and more objective outcomes such as well-being and achievement in education, but also work and training performance and salary (De Fruyt & Bolstad Karevold, 2021). To bridge both conceptualizations with respect to learning, the construct of typical intellectual engagement has been proposed in the past (Ackerman & Heggestad, 1997; Goff & Ackerman, 1992; von Stumm & Ackerman, 2013; von Stumm et al., 2011). Alternatively, self-estimates of intelligence have also been studied (Syzmanowicz & Furnham, 2011), though the overall conclusion is that these cannot be considered as proxy assessments of cognitive ability.
Although cognitive ability and personality can be clearly differentiated at the level of their indicators, the concepts of personality and social-emotional skills represent a more complicated case, because the same item basis (technically called a “stem”) can—depending on the instructions—refer to a personality trait or to a skill, in terms of Soto et al.'s (2022a, 2022b) definition. Moreover, from an applied perspective, it is not a priori clear which conceptualization of skills shows the best utility: conceptualized as typically manifested behaviors or as self-efficacy constructs that are activated only when the situation demands them. For example, you only need presentation skills in particular situations, such as when reading a poem in front of the classroom or pitching a product to a customer audience, so in this case, a self-efficacy conceptualization is straightforward. A more typical, day-to-day manifestation pattern, however, is perhaps required for skills such as acting with care, demonstrating learning curiosity, behaving responsibly, or treating people with respect. The conceptual distinction between identity and self-efficacy approaches becomes even more permeable when skills are contextualized within a specific profession, like teaching, nursing, or law enforcement. In such cases, the skills demanded by the situation are clearly defined and permanently at stake. Therefore, it might be more accurate and useful to conceptualize the skills required in such cases as reflecting typical behavior instead of skills that emerge when needed.
The preceding discussion and examples may lead us to consider adopting a more hybrid assessment approach, combining both a self-efficacy and identity operationalization of social-emotional skills, so we can empirically examine which approach has the highest validity in predicting specific consequential outcomes. Primi et al. (2016, 2021) have proposed such a hybrid assessment approach with their SENNA measure designed to assess social-emotional skills in Brazilian students attending public education. They distinguish between six identity (three positively and three negatively keyed) and three self-efficacy items for each of their 16 skills grouped into the five social-emotional domains. An acquiescence score for each individual can be computed from the positively and negatively paired identity items and used to correct the self-efficacy items (Primi et al., 2020). With the currently available knowledge, it is likely premature to favor one conceptual definition over the other, so adopting a hybrid approach is probably a good strategy to foster our knowledge and empirically discriminate which definition has the best predictive validity for which kinds of outcome measures we consider important.
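The acquiescence correction underlying this hybrid approach can be sketched as follows. This is a minimal illustration with invented response data, not the actual SENNA scoring procedure; it only shows the general logic of estimating yea-saying from balanced identity items and removing it from positively keyed self-efficacy items.

```python
from statistics import mean

# Hypothetical 1-5 Likert responses of one student on one skill,
# following the item layout described above: three positively and
# three negatively keyed identity items, plus three self-efficacy
# items (positively keyed by definition). All values are invented.
pos_identity = [4, 5, 4]
neg_identity = [4, 3, 4]   # high agreement here too suggests yea-saying
self_efficacy = [5, 4, 5]

# Acquiescence index: the mean raw response across the balanced
# identity pairs. Because item content cancels out over opposite
# keyings, the deviation from the scale midpoint (3) estimates the
# tendency to agree regardless of content.
acquiescence = mean(pos_identity + neg_identity)
bias = acquiescence - 3

# Remove the bias estimate from the self-efficacy items before
# averaging them into a skill score.
corrected_skill_score = mean(x - bias for x in self_efficacy)
```

A student who agrees indiscriminately inflates both the positively and negatively keyed identity items, which is exactly what the index picks up; without the balanced identity items, this correction would not be possible for the all-positive self-efficacy scale.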
Assessing social-emotional skills
So far, skills in large-scale applications have been mainly assessed via self- and observer reports. For example, the OECD in its study on social-emotional skills in large cities (Chernyshenko et al., 2018; Kankaras et al., 2019) has relied on self-reports supplemented with parent-, teacher-, and school director reports. Self- and observer reports have been very popular because they are easy to use, can cover a broad set of skills, and can be administered quickly and at low cost in large-scale cross-cultural evaluations. They have considerable limitations, however, including various rater biases like acquiescence, group-reference bias, and socially desirable responding (Duckworth & Yeager, 2015), making them less suited for summative assessments where the result of the assessment has important policy or financial consequences. Also, from a research perspective, there is an urgent need for more MOSA measures so we can run impact evaluations, such as randomized controlled trials, of manualized educational interventions.
As far as we know, there is no comprehensive review available yet in the international literature on MOSA measures. This may be partly due to the fact that many of the attempts to construct MOSA measures are undertaken by (semi-)commercial organizations that provide little insight into the construction and psychometric properties of their measures. Characteristic of several of these attempts and developments is that they often use technology (e.g., virtual reality) or elements of gamification (Marengo et al., 2023; Stoeffler et al., 2018, 2020) to enhance face validity and make assessments more attractive for participants to improve customer experience. Alternatively, task-based assessments have been developed, as well as situational judgment tests (SJTs) that present situation descriptions and ask for a self-report on how one would or should react in those situations (Lievens, 2013).
The few MOSA measures available, however, have important limitations. First, although some are task-based, they still rely on one or another form of self-report (e.g., allocating 500 EUR to various goals to assess altruism). Others are developed in a gamified environment, but rely on self-report measures to assess the outcomes they are assumed to predict (Nikolaou et al., 2019). Second, several of these assessments are confounded with general mental ability and/or attention/concentration, which is often a concern with gamified assessments and SJTs, which have been criticized for being knowledge-based (Krumm et al., 2015). Gamified assessments have gained significant popularity in the past years, given their attractive and engaging character (especially for students), though there is often a problem with their construct validity (Berg et al., 2021). Moreover, they correlate poorly or not at all with self- and observer measures (Humburg & van der Velden, 2015), making it hard to evaluate what they really measure and how they can be used in triangulated assessments.
As far as we know, the developmental underpinnings of performance on MOSA measures have not been well documented. This is a serious limitation for the use of such measures in education, because we need to describe skill performance relative to students’ chronological age and mastery level of skills. Task-based assessments, such as investigating how children share with others at different ages, reveal that the same behavior indicates very different levels of a skill depending on the child's age. For example, a 6-year-old is considered “advanced” in the skill “sharing with others/compassion” when sharing a piece of a cookie with another child. However, for a 12-year-old, this behavior is considered “poorly developed” because a child at that age is expected to first offer the whole cookie to their friend, before (eventually) taking a piece themselves. These examples illustrate the necessity of developing age-sensitive MOSA tasks and/or scoring rules. Finally, MOSA measures are particularly sensitive to practice effects and teaching-to-the-test strategies, meaning students can be trained to improve their performance, thereby constraining their utility for summative assessment purposes. Beyond these technical and psychometric issues, there are also concerns about the ecological validity and equivalence of these methods, so it needs to be examined explicitly whether such assessments can be used in cross-cultural comparative studies.
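The age-dependence of scoring described above can be made concrete with a toy scoring rule. The behavior categories, age cut-off, and level labels below are purely illustrative inventions, not taken from any validated instrument:

```python
def sharing_skill_level(age: int, behavior: str) -> str:
    """Toy age-sensitive scoring rule for the cookie-sharing example:
    the same observed behavior maps onto different mastery levels
    depending on the child's chronological age."""
    if behavior == "offers_whole_cookie_first":
        return "advanced"
    if behavior == "shares_a_piece":
        # Sharing a piece is advanced at age 6, but below the
        # developmental expectation at age 12.
        return "advanced" if age <= 6 else "poorly developed"
    return "poorly developed"
```

Scaling this idea up would require normative behavioral data per age group, which is precisely the developmental documentation currently lacking for most MOSA measures.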
Ideally, MOSA measures should have the following characteristics to increase their utility for large-scale summative assessment: (a) various measures are needed covering the five main social-emotional skill domains, (b) they should be easy to implement in large-scale assessments, (c) they should be brief or have short administration time, (d) they should have demonstrated construct validity (i.e., being poorly correlated with psychometric intelligence and visual attention, for example), (e) they need to be ecologically valid within the culture in which they are implemented, (f) they need to converge to some extent with self- and observer reports (e.g., the Balloon Analogue Risk Task; Lejuez et al., 2002), and (g) it should be possible to map individuals’ scores obtained on objective measures onto more subjective survey-based ratings via construct maps (see Pancorbo et al., 2023 for a cross-validation example). This last crucial aspect needs thorough examination to determine whether MOSA measures can substitute or complement survey-based assessments and vice versa. Specifically, it involves investigating how levels on self-report-based measures can be effectively translated into scores on MOSA measures and vice versa.
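One simple way to approach requirement (g), translating between survey-based levels and MOSA scores, is rank-based linking: mapping a survey score onto the task score occupying the same relative position in its own distribution. The sketch below uses invented calibration data and is only a crude stand-in for the construct-mapping methodology of Pancorbo et al. (2023):

```python
import bisect


def make_link(survey_scores, task_scores):
    """Build a rank-based mapping from survey scores to task scores:
    a survey score is mapped to the task score at the same relative
    rank in the sorted task-score distribution."""
    s, t = sorted(survey_scores), sorted(task_scores)

    def link(x):
        rank = bisect.bisect_left(s, x)               # scores strictly below x
        idx = min(rank * len(t) // len(s), len(t) - 1)  # same relative rank
        return t[idx]

    return link


# Invented calibration data: 1-5 survey ratings and 0-100 task scores
# for the same (hypothetical) group of students.
survey = [2, 3, 3, 4, 4, 5]
task = [35, 40, 55, 60, 70, 90]
to_task_scale = make_link(survey, task)
```

In practice such a link would need to be estimated on a large calibration sample, validated across age groups and cultures, and applied in both directions, which is what makes construct maps demanding to build.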
Implementing social-emotional skills
Whereas the previous themes are mainly discussed among academics, social-emotional learning is also a subject of debate among practitioners, politicians, and the general public (CASEL, 2023). The arguments usually center on four key issues.
A first point of discussion revolves around the feasibility and necessity of focusing on social-emotional skill development in education. People from across the political spectrum sometimes argue against paying explicit attention to social-emotional learning in education. Some contend that schools should not prepare students for the (capitalist) labor market, but instead need to focus on integral education, developing children to become fulfilled individuals, prioritizing their happiness in the first place. Others argue against focusing on specific skills, like those involving the appreciation of cultural diversity and openness to values different from one's own cultural group. To deal with such criticisms, the availability of an evidence-based taxonomy of social-emotional skills becomes crucial. A comprehensive taxonomy is imperative to prevent the discussion from becoming overly focused on just one or a few skills. Developing the variety of skills included in such a taxonomy is not only beneficial for students’ labor market position, but also for becoming happy and contributing individuals, parents, and citizens. While learning social-emotional skills holds benefits for all individuals regardless of their socioeconomic background, it does not inherently guarantee a substantial reduction in the socioeconomic achievement gap (Gruijters et al., 2024), so one should be careful with over-promising. Social-emotional learning, like any other measure or program, is probably not the solution to all problems.
A second obstacle when discussing implementation of social-emotional learning is that some people argue that there is already enough to learn. Students and teachers are already assumed to have a full-time responsibility focused on learning how to read, write, and calculate, and on studying various sciences. Adding social-emotional learning to this task set is considered an extra that is difficult or impossible to achieve. Some may also argue that social-emotional learning is fundamentally the responsibility of parents rather than a burden for teachers. Positioning learning content and knowledge against the acquisition of skills implies a perceived challenge, if not an impossibility, in combining the two. In this context, it is important to underscore that meta-analytic research has demonstrated that social-emotional learning not only facilitates academic achievement and classroom functioning (Durlak et al., 2011; Taylor et al., 2017), but can also be effectively integrated into various other learning activities. Adding social-emotional learning to the agenda should thus not necessarily increase the workload for teachers and students.
Once consensus on the importance of skills is reached, the discussion starts on how to represent this learning in educational curricula, and how to assess whether learning objectives are met. Again, having agreement on a taxonomy of skills (Abrahams et al., 2019) is crucial in ensuring that all important skill groups are represented in the curriculum and operationally defined, so these can be subsequently evaluated. In the absence of a well-defined and evidence-informed taxonomy, there is a high risk of ending up with multi-dimensional and value-driven curriculum objectives that are unclear in terms of implementation, operationalization, and evaluation.
A final consideration is how to strengthen and support teachers in this social-emotional learning process. In 2024, there is already a considerable shortage of teachers in several Western European countries, and up to 40% leave the profession within the first 5 years of their professional career (Smith & Ulvik, 2017). Beyond these challenges, several current teacher training programs do not pay specific attention to social-emotional skill learning, so these programs need to be revised urgently. Ideally, the teacher training curriculum must be strengthened to include a specific course on students’ social-emotional development. Such courses should further provide specific didactics on how to support students’ social-emotional skill development and how to monitor and assess these processes and advancements. In addition, it is clear that we will also have to invest in the social-emotional skill development of teachers themselves (Scheirlinckx et al., 2023). Teachers serve as role models for the social-emotional learning of students, making their own social-emotional skills directly significant in both professional training and ongoing development.
Conclusion
The present review has underscored the importance of social-emotional skills as critical assets for students to become happy and contributing citizens, affecting their personal and professional lives. Together with the development of psychomotor and cognitive skills and learning various academic subjects, social-emotional skills should be included as learning goals in education curricula. They serve as both means and end points of learning. Although they have moved more to the forefront in the past two decades, social-emotional skills have a long-standing history in education, with a considerable and rich expertise practiced on a daily basis by teachers. The upcoming decade will provide further insights into the optimal conceptualization and assessment of these skills, their developmental trajectories across adolescence, and how teachers and schools, in collaboration with parents and other stakeholders, can effectively facilitate their growth.
Footnotes
Contributorship
Filip De Fruyt and Joyce Scheirlinckx equally contributed to the paper and its conceptual ideas.
Declaration of conflicting interests
The authors declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Funding
The authors disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: Filip De Fruyt received funding while preparing this manuscript by FWO-FAPESP grant G0F7719N.
