Abstract
Critical thinking stands out as one of the most important cognitive abilities needed for effective adaptation to a knowledge-based society in the 21st century. Despite its significance, there remains a lack of consensus regarding the conceptual and methodological frameworks for measuring it. This study aimed to design and validate a comprehensive assessment scale for critical thinking. A specification table was constructed based on the critical thinking components agreed upon in the current literature, from which corresponding items were formulated and subsequently validated by expert judges. Following adjustments to the test, 258 Colombian participants completed it. Sample adequacy (KMO), Bartlett’s sphericity, and collinearity were confirmed, and the results underwent exploratory factor analysis. Reliability analysis was conducted using McDonald’s ω, Cronbach’s α, Guttman’s λ6, and Greatest Lower Bound (GLB) statistics. The final test comprised 17 items organized into 2 constituent factors, demonstrating robust content and internal structure validity, as well as high levels of precision and internal consistency (overall GLB of .93). The Critical Thinking Evaluation Scale (CTES) exhibits validity and reliability for use within the Colombian population. Its adaptation for other contexts and countries, both Spanish and English-speaking, is recommended. The Spanish version, along with the validated English version for potential adaptations, and scoring norms are provided in the attached documents.
Plain language summary
Critical thinking is one of the most important cognitive skills, but there is no comprehensive instrument for its evaluation. Therefore, this study aimed to design and validate a comprehensive evaluation scale for critical thinking. A specifications table of the components of critical thinking was constructed, and the items were designed and validated by expert judges. After adjustments to the test, 258 Colombian participants answered it, and validity and reliability analyses were carried out. The final test was composed of 17 items organized into 2 constituent factors, demonstrating strong content and internal structural validity, as well as high levels of precision and internal consistency (overall GLB of .93). The Critical Thinking Evaluation Scale (CTES) is a test with high validity and reliability for use in the Colombian population. Its adaptation is recommended for other contexts and countries, both Spanish- and English-speaking. The Spanish version, along with the validated English version for possible adaptations and scoring standards, are provided in the attached documents.
Keywords
Critical thinking (CT) stands as a fundamental cognitive capacity essential for successful integration into contemporary knowledge societies of the 21st century (Alsaleh, 2020; Kocak et al., 2021; Nussbaum et al., 2021; Wechsler et al., 2018). It entails developing robust decision-making and problem-solving skills applicable in diverse practical contexts within an increasingly complex environment, while also addressing significant issues within specific disciplinary domains (Butler et al., 2012; Dwyer et al., 2014; Niu et al., 2013). Recent studies highlight that good critical thinkers demonstrate better decision-making capabilities, even under pressure (Ellerton, 2022; Gambrill, 2006; Nussbaum et al., 2021); exhibit fewer cognitive biases (Facione & Facione, 2001; Georgiadou et al., 2018; Hong & Choi, 2015); engage more actively as well-informed citizens (Shutaleva, 2021); and frequently possess enhanced employability prospects (Dwyer et al., 2014). Despite this prominence, CT remains a competence with little conceptual and methodological consensus regarding its measurement instruments, owing to the multifaceted attention it has garnered from diverse scholars and educators interested in the development of thinking skills (Bernard et al., 2008; Niu et al., 2013). Consequently, multiple conceptualizations of CT persist contingent upon the field or disciplinary context under study (Butler et al., 2012; Ossa-Cornejo et al., 2017; Valenzuela & Nieto, 2008a).
Existing theoretical frameworks characterize CT as a purposeful, reasoned, and goal-directed thinking process, comprising a set of fundamental cognitive skills (An Le & Hockey, 2022; Black, 2012; Dwyer et al., 2014; Nieto & Saiz, 2008; Valenzuela & Nieto, 2008a). These competences enable individuals to discern and interpret information (Valenzuela & Nieto, 2008a), scrutinize its validity, assess its reliability, interrogate its origins (Halpern, 2014; Shutaleva, 2021), and construct coherent explanations and conclusions (Nussbaum et al., 2021; Schroyens, 2005).
While the cognitive aspect predominates (Ossa-Cornejo et al., 2017), CT cannot be solely delineated by its constituent skills, as proficiency in these skills does not guarantee adept critical thinking (Nieto & Saiz, 2008; Saiz et al., 2015; Wechsler et al., 2018). Moreover, individuals must discern when it is appropriate to use these skills and be willing and motivated to do so when necessary (Dwyer et al., 2014; Ku, 2009; Valenzuela & Nieto, 2008a). Thus, the behavioral component of CT manifests in the synergy between these components and their practical application (Halpern, 1998, 2014).
In an attempt to resolve the conceptual discrepancy, an interdisciplinary and international panel of CT experts formulated the Delphi Report (American Philosophical Association [APA], 1990), presenting CT as a construct organized around two dimensions: cognitive abilities and affective dispositions. CT is defined as: "Intentional, self-regulatory judgment resulting in interpretation, analysis, evaluation, and inference, as well as an explanation of the evidential, conceptual, methodological, criteriological, or contextual considerations on which that judgment is based" (Facione, 1990, p. 3).
This conceptual perspective underscores CT as a multidimensional construct, consolidating the principal components agreed upon in current literature and representing the widely accepted definition of proficient CT (Alsaleh, 2020; Beckie et al., 2001; Dwyer et al., 2014; Miele & Wigfield, 2014; Sorensen & Yankech, 2008; Wechsler et al., 2018). Under this approach, CT comprises six core cognitive skills: interpretation, analysis, evaluation, inference, explanation, and self-regulation, each with their respective sub-skills, with analysis, evaluation, and inference holding particular significance (Dwyer et al., 2014); and two affective dispositions: approach to life and living and approach to specific themes, questions, or problems, along with their sub-components (Facione, 1990, 2011; Ossa-Cornejo et al., 2021; Valenzuela & Nieto, 2008a). The definition of each component is presented below (Table 1).
CT’s Cognitive Abilities Definitions (Facione, 1990, 2011).
Furthermore, the affective dispositions characterizing proficient critical thinkers are: curiosity regarding diverse issues; acquiring and maintaining well-rounded knowledge; readiness to recognize and capitalize on opportunities for critical thinking; trust in structured deliberative processes; self-assurance in reasoning abilities; receptiveness to diverse perspectives; adaptability in considering alternative viewpoints; comprehension of others’ perspectives; impartiality in evaluating reasoning; honesty in confronting personal biases, prejudices, stereotypes, and inclinations; caution in suspending, formulating, or revising judgments; willingness to reevaluate positions where honest introspection warrants change; clarity in articulating questions or concerns; organization in handling complex tasks; diligence in seeking pertinent information; rationality in selecting and applying standards; attentiveness to current issues; perseverance in the face of challenges; and a degree of precision appropriate to the subject and context (Facione, 1990, 2011).
A literature review revealed prominent instruments for assessing CT based on the Delphi panel’s definition, including the California Critical Thinking Skills Test (CCTS), the Test for Everyday Reasoning (TER), and the Critical Thinking Disposition Inventory (Facione, 2011; Ricketts & Rudd, 2004). However, none of these instruments simultaneously measure both dimensions of CT, as the first two evaluate only cognitive skills components, while the third focuses solely on assessing related dispositions.
Other instruments that are not based on the Delphi panel’s framework, such as the Watson-Glaser Critical Thinking Appraisal (Watson & Glaser, 1980), the Ennis-Weir Critical Thinking Essay Test (Werner, 1991), the Cornell Test of Critical Thinking (Ennis & Millman, 2005), the Halpern Critical Thinking Assessment Using Everyday Situations (Halpern, 1998), and the Salamanca Critical Thinking Test (Rivas & Saiz, 2012), evaluate CT solely based on its cognitive abilities. Meanwhile, tests like the Motivational Scale of Critical Thinking (EMPC) (Valenzuela & Nieto, 2008b) are grounded solely in motivational dispositions.
Given the contemporary understanding of CT as a synthesis of highly interrelated skills and dispositions operating jointly and complementarily (Bernard et al., 2008; Ossa-Cornejo et al., 2017), the lack of psychometric tests that assess CT as a multidimensional concept comprising cognitive skills and affective dispositions, and notably, the absence of Latin American assessments (Ossa-Cornejo et al., 2017), the present study aimed to design and validate a CT assessment scale based on the theoretical framework provided by the Delphi report.
Method
Design
The present study adopts a quantitative empirical approach with an instrumental design, aiming to develop and validate a critical thinking assessment scale (Ato et al., 2013).
Participants
A non-probabilistic convenience sampling method was employed virtually, resulting in a sample of 258 individuals (55.04% women) aged between 18 and 63 years (M = 39.68; SD = 14.48). Participants represented diverse educational backgrounds, including high school (3.87%), technologist (3.49%), technician (5.43%), undergraduate (18.22%), professional (27.90%), specialization (19.38%), master’s (19.38%), and doctorate (2.33%). Inclusion criteria included being of legal age and of Colombian nationality or residence. The adequacy of the sample size was confirmed with a Kaiser-Meyer-Olkin (KMO) statistic of .907, surpassing the minimum acceptable value of .8 (Pérez & Medrano, 2010).
Procedure
Initially, a literature review identified the APA Delphi Panel CT conceptualization as the most appropriate framework (APA, 1990). The primary CT components were delineated, and a specifications table was developed, allocating each factor’s percentage (%) load and the appropriate number of items (see Appendix A). Expert validation was then conducted independently by five psychologists, including three Ph.D. holders and two candidates, all possessing extensive research experience pertinent to the design and subject matter of the present study. Additionally, one expert specialized in assessment and evaluation, while another had considerable expertise in university teaching, research methodology, and psychometrics. The remaining three experts specialized in thinking skills and cognitive development, teaching and consulting in life-skills education, and linguistic and decision-making processes, respectively. The items were evaluated based on relevance, clarity, sufficiency, and necessity using a scoring scheme adapted from Escobar-Pérez and Cuervo-Martínez (2008). The scores underwent analysis using Lawshe’s Content Validity Index (CVI), with values exceeding 0.6 deemed satisfactory (Tristán-López, 2008). Subsequent adjustments were made to the scale based on the evaluation results.
The scale was administered and validated using the Microsoft Forms tool. Participants were required to confirm eligibility, provide demographic information (age in years, sex, and academic level), and follow instructions for responding to the scale items. Validity and reliability tests were then conducted on the collected data, and a database was established.
Data Analysis
To analyze the internal structure of the test, sample adequacy (KMO), Bartlett’s test of sphericity (p < .05 expected), and collinearity were assessed, verifying that correlation values were below .9 (Pérez & Medrano, 2010). Once these assumptions were confirmed, exploratory factor analysis (EFA) was performed using weighted least squares extraction with promax oblique rotation, given the scalar condition of each item and the theoretical relation of the factors (Lloret-Segura, 2014). Factor loadings below 0.4 and factors comprising fewer than three items were eliminated iteratively until all retained items exhibited unique factor loadings.
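For illustration, the two adequacy checks can be computed directly from a data matrix. This is a minimal Python sketch with simulated responses that reproduces the standard KMO and Bartlett formulas; it is not the study's JASP output, and the simulated data are ours:

```python
import numpy as np

def kmo_statistic(data):
    """Kaiser-Meyer-Olkin measure of sampling adequacy for an
    (n_observations x n_variables) data matrix."""
    corr = np.corrcoef(data, rowvar=False)
    inv = np.linalg.inv(corr)
    # Anti-image (partial) correlations from the inverse correlation matrix
    d = np.sqrt(np.outer(np.diag(inv), np.diag(inv)))
    partial = -inv / d
    off = ~np.eye(corr.shape[0], dtype=bool)  # off-diagonal mask
    r2 = (corr[off] ** 2).sum()
    p2 = (partial[off] ** 2).sum()
    return r2 / (r2 + p2)

def bartlett_statistic(data):
    """Chi-square statistic of Bartlett's test of sphericity."""
    n, p = data.shape
    corr = np.corrcoef(data, rowvar=False)
    return -(n - 1 - (2 * p + 5) / 6) * np.log(np.linalg.det(corr))

# Simulated correlated responses: a shared factor plus noise
rng = np.random.default_rng(0)
factor = rng.normal(size=(300, 1))
data = factor + 0.7 * rng.normal(size=(300, 6))
print(round(kmo_statistic(data), 3))  # sampling adequacy, ideally > .8
print(round(bartlett_statistic(data), 1))  # large values reject sphericity
```

Values of KMO above .8 and a significant Bartlett statistic, as obtained in this study, indicate the correlation matrix is suitable for factoring.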
Reliability analysis involved evaluating McDonald’s ω, Cronbach’s α, Guttman’s λ6, and Greatest Lower Bound (GLB) statistics for the complete test and each factor. Values exceeding .7 were taken to indicate internal consistency (Chadha, 2009). In addition, sample normality was assessed with the Kolmogorov-Smirnov test. As normality was not supported, Pearson’s product-moment correlations (r) between each item and the test and Spearman’s rank correlations between items of the same factor were analyzed, expecting values above .3 for all (Chadha, 2009). Finally, the rules for scoring and interpreting the test were developed. Given the sample size (n > 200) (Aragón, 2011), raw scores were consolidated into a scale using Z scores for direct score interpretation (Valero, 2013) (see Appendix C). All analyses were conducted using JASP software (JASP Team, 2022).
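The reliability and standardization steps can be sketched as follows, with Cronbach's α shown as one representative coefficient of the four reported. This is a minimal Python illustration; the simulated Likert responses and seed are ours, not the study's data:

```python
import numpy as np

def cronbach_alpha(scores):
    """Cronbach's alpha for an (n_respondents x n_items) score matrix."""
    x = np.asarray(scores, dtype=float)
    k = x.shape[1]
    item_var = x.var(axis=0, ddof=1).sum()   # sum of item variances
    total_var = x.sum(axis=1).var(ddof=1)    # variance of total scores
    return (k / (k - 1)) * (1 - item_var / total_var)

def z_score_norms(raw_totals):
    """Standardize raw totals so results can be read against Z norms."""
    x = np.asarray(raw_totals, dtype=float)
    return (x - x.mean()) / x.std(ddof=1)

# Simulated 4-point Likert responses driven by a common trait
rng = np.random.default_rng(1)
trait = rng.normal(size=(258, 1))
items = np.clip(np.round(2.5 + trait + 0.8 * rng.normal(size=(258, 17))), 1, 4)

alpha = cronbach_alpha(items)        # internal consistency, ideally > .7
z = z_score_norms(items.sum(axis=1)) # standardized totals, mean 0, SD 1
```

By construction, the resulting Z scores have mean 0 and standard deviation 1, which is what allows a respondent's raw total to be placed directly on a normative scale.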
Ethical Considerations
This research received approval from the Research and Ethics Subcommittee of the researchers’ Faculty of Psychology, with record number 158. Furthermore, participants’ rights were upheld throughout the research, as their participation was entirely voluntary, and their dignity, integrity, privacy, and autonomy were all maintained. They were provided with the opportunity to give informed consent before the questionnaire was administered, which included information about the study’s authors, its purpose, justification, the advantages of participating, the procedure to be followed, and confidentiality and anonymity agreements (American Psychological Association [APA], 2017).
Participants were assured of no risk to their well-being, in accordance with Article 11 of Resolution 8430 of 1993 (Colombian Health Ministry, 1993), and their data were used strictly for research purposes, with no feedback provided on the results due to the ongoing validation process.
Results
The specifications table, constructed in alignment with the two dimensions of Critical Thinking (CT) proposed by the Delphi report and its sub-components, guided the development of the Critical Thinking Evaluation Scale (CTES) (see Appendix A). Results of the validation process by expert judges of the corresponding items are shown in Table 2.
Lawshe’s Content Validity Index (CVI) for Each Item According to the Scoring Criteria by Judges and Subsequent Decisions on Their Permanence in the Test.
Note. For the decision, S means that the item was kept the same, M means that it was kept and modified, and D means that it was deleted.
Based on the obtained Content Validity Index, the validation process by expert judges resulted in the deletion of 22 items (CVI < 0.6), the retention of 32 items (CVI > 0.7), and the modification of 25 items based on qualitative feedback from each judge. Two motivational disposition sub-categories were removed because none of their items met the minimum sufficient value for two or more rating criteria. Additionally, items that less frequently received CVI ratings of 1 or 0.75 were deleted to preserve the percentage loadings of the original theoretical proposal. Consequently, the scale comprised 57 items for validation and administration to the 258 participants. Factor analysis was conducted to examine the underlying factor structure of the CTES (see Table 3).
Correlation Matrix of the Exploratory Factor Analysis.
Note. Items’ order was randomized for test administration purposes.
The final scale consisted of 17 items distributed across 2 factors: Factor 1 comprised 8 items, while Factor 2 included 9 items. Reliability values for each factor and the overall scale demonstrated high internal consistency and appropriate reliability (see Table 4).
Reliability Statistics of the General Test and Each Factor.
All statistics indicated values exceeding .8, signifying robust internal consistency. Positive and significant correlations were observed within both factors (p < .01) and between all items, with item-test correlations exceeding .4 (Table 5). Each item contributed significantly to the high-reliability values of the test, as demonstrated by the hypothetical decrease in reliability values upon item elimination.
Item Hypothetical Elimination and Item-Test Correlation.
According to the results, the cumulative proportion of variance explained by the final scale was 40.2%.
Discussion
This study aimed to design and validate the Critical Thinking Evaluation Scale, acknowledging the pivotal role of critical thinking (CT) in 21st-century society, as underscored by various scholars (Alsaleh, 2020; Kocak et al., 2021; Nussbaum et al., 2021; Wechsler et al., 2018). Despite the acknowledged significance of CT, there remains a notable scarcity of psychometric instruments that comprehensively assess it as a multidimensional construct (Ossa-Cornejo et al., 2017). Therefore, our endeavor sought to address this gap by constructing a scale grounded in the theoretical framework provided by the Delphi panel, which integrates the main components of CT agreed upon in the literature: cognitive skills and affective dispositions.
Our methodology involved constructing a specifications table based on CT components, followed by item construction, expert validation, and analysis using Lawshe’s Content Validity Index. Upon adjustments, the scale was administered to a sample of 258 Colombian individuals. Subsequently, the assumptions of sample adequacy (KMO), Bartlett’s sphericity, and collinearity were confirmed, and exploratory factor and reliability analyses were conducted.
The results yielded a CT evaluation scale comprising 17 items distributed across two factors, demonstrating high indices of general reliability and supporting the accuracy and internal consistency of the test (Chadha, 2009). Factor 1, termed Analytical Ability, comprises 8 items and primarily encompasses cognitive skills related to evaluation and analytical information processing (6 of 7 items), while Factor 2, termed Argumentative Ability, comprises 9 items distributed between motivational dispositions (3 items) and cognitive skills, reflecting the strategic application of skills and cognitive strategies in generating and utilizing information.
These findings align with established definitions of CT as an active and skillful application, analysis, and evaluation of information (Alsaleh, 2020; Choy & Cheah, 2009; Nussbaum et al., 2021; Paul & Elder, 2003; Paz et al., 2010; Tung & Chan, 2009). Moreover, this scale ensures comprehensive coverage of the construct by incorporating both cognitive skills and affective dispositions within a unified measurement framework, marking a significant contribution to the existing literature. Notably, this scale represents a pioneering effort as the first to encompass CT as a multidimensional construct. The validity of the instrument is supported by its internal structure, as evidenced by the alignment between factor analysis clusters and the theoretical proposal, high explained variance, expert judgment validation, and adequate item-test correlations (Barraza, 2007).
The only initially integrated component of CT whose items are absent from the scale’s final version after the corresponding analyses is the cognitive skill of self-regulation. However, while some literature suggests including self-regulation as a metacognitive component of CT (Facione, 1990), the scale’s theoretical coverage encompasses this aspect within its broader framework. Moreover, empirical evidence suggests that metacognition, while related, constitutes a distinct cognitive process that enhances the direction and prediction of CT outcomes (Choy & Cheah, 2009; Dawson, 2008; Dwyer, 2011; Dwyer et al., 2014; Ghanizadeh, 2011; Heydarnejad et al., 2021; Kuhn & Dean, 2004; Magno, 2010; Melsert & Bicalho, 2012).
The present study has limitations, including the lack of predictive validity testing and convergent validity analysis. Since this study marks, within the current literature review, the first instance of designing a scale that thoroughly covers the Delphi panel’s CT conceptualization, no attempt was made to obtain validity evidence based on response processes and relations to other variables. Therefore, it is imperative to conduct predictive validity studies with other variables, such as measuring and correlating CT with academic performance across various knowledge areas or comparing CT-trained and untrained individuals. Similarly, only exploratory factor analyses were performed, given the primary aim of designing a scale tested for its metric qualities for the first time. Consequently, it is recommended that future research validate this factorial structure through confirmatory factor analysis with independent samples in diverse contexts. Another limitation of this study is the absence of a convergent validity analysis with other established measures of critical thinking. Although, as the present study shows, no existing instrument comprehensively covers all dimensions, it is advisable to apply the current scale alongside others to assess the correlation between their measurements.
In conclusion, this research presents a robust and reliable psychometric instrument for evaluating CT, in its analytical and argumentative skills and dispositions, within the Colombian population. The scale was named the Critical Thinking Evaluation Scale (CTES), and an application-ready version and norms for scoring and interpretation are provided, facilitating its use in various contexts to enhance CT skills and motivational dispositions (see Appendices B, C, and D). Furthermore, CT is a fundamental skill for students, enabling them to effectively plan their learning, evaluate their performance, and monitor their progress (Silva & Rodriguez, 2011; Alwehaibi, 2012). This skill is equally applicable in scientific and business contexts (Lin, 2014). Moreover, within organizations, strong CT abilities facilitate problem identification, contextualization based on complexity, and the application of methodologically sound solutions (Zúñiga, 2015). Thus, the CTES is expected to help address the challenge of constructing adequate instruments for measuring and evaluating CT amid the existing conceptual diversity (Dwyer et al., 2014; Ossa-Cornejo et al., 2017). In today’s rapidly evolving information society, the ability to identify individuals with adequate CT skills is more crucial than ever.
Appendix A
Specifications Table.
CT (100%; 94 items)
    Cognitive skills (59.574%; 56 items)
        Analysis (12.766%; 12 items)
            Ideas examination (4.255%; 4 items): Examine the role an idea plays in the context of an argument.
            Arguments identification (4.255%; 4 items): Identify the role an argument plays in concluding to support or deny a statement, opinion, or point of view.
            Argument analysis (4.255%; 4 items): Analyze the structure of an argument to detect its propositions, the conceptual relationships between them, and the role they play in concluding, determining whether it expresses sufficient reasons to justify it or not.
        Evaluation (8.511%; 8 items)
            Claims evaluation (4.255%; 4 items): Evaluate the acceptability, credibility, probability of truth, and confidence level of the relevant factors of a counterargument.
            Arguments evaluation (4.255%; 4 items): Assess the credibility of an argument and whether its degree of acceptability justifies accepting it as accurate or most likely true by anticipating and raising objections against its weaknesses.
        Inferences (12.766%; 12 items)
            Evidence consultation (4.255%; 4 items): Search and collect pertinent elements to decide the information’s acceptability, plausibility, or credibility.
            Alternatives consideration (4.255%; 4 items): Postulate and consider alternative information to develop a range of possible conclusions.
            Conclusion creation (4.255%; 4 items): State and justify reasonable conclusions considering, among several options, the appropriate point of view according to the consulted evidence.
        Explanation (9.574%; 9 items)
            Results statement (3.191%; 3 items): Describe the results of reasoning processes using precise statements or representations.
            Procedures justification (3.191%; 3 items): Indicate the evidential, conceptual, methodological, criteriological, and contextual considerations that were executed in generating the results of the reasoning activities.
            Arguments presentation (3.191%; 3 items): Present the results of one’s reasoning activities, giving reasons for accepting them and facing objections to evaluative or analytical criteria.
        Interpretation (9.574%; 9 items)
            Categorization (3.191%; 3 items): Distinguish and classify the meaning of a situation or argument, grouping it into an appropriate information condition.
            Decoding (3.191%; 3 items): Identify and describe the informational content, affective meaning, and directive purpose (expressed through language or behavior) of a situation or argument.
            Meaning clarification (3.191%; 3 items): Clarify a situation or argument’s contextual, conventional, or intended meanings to eliminate unintended confusion, vagueness, or ambiguity.
        Self-regulation (6.383%; 6 items)
            Self-examination (3.191%; 3 items): Conscious and objective monitoring of one’s reasoning to question it, judging the extent to which it is influenced by limiting factors such as prejudices and emotions.
            Self-correction (3.191%; 3 items): Conscious, objective, and reflective monitoring of one’s reasoning to validate its results. Design adequate procedures to remedy or correct the errors and their causes if necessary.
    Affective dispositions (40.426%; 38 items)
        Approach to life and living in general (25.532%; 24 items)
            Curiosity regarding various issues (2.128%; 2 items)
            Concern about being and staying well informed (2.128%; 2 items)
            Alertness to opportunities to use critical thinking (2.128%; 2 items)
            Confidence in reasoned consultation processes (2.128%; 2 items)
            Self-confidence in reasoning ability (2.128%; 2 items)
            Open-mindedness to diverse points of view (2.128%; 2 items)
            Flexibility in considering alternatives and opinions (2.128%; 2 items)
            Understanding others’ opinions (2.128%; 2 items)
            Fairness in reasoning evaluation (2.128%; 2 items)
            Honesty in facing biases, prejudices, stereotypes, and one’s tendencies (2.128%; 2 items)
            Prudence in withholding, making, or altering judgments (2.128%; 2 items)
            Willingness to reconsider and revise positions where honest reflection suggests that change is warranted (2.128%; 2 items)
        Approach to specific issues, questions, or problems (14.894%; 14 items)
            Clarity in raising questions or concerns (2.128%; 2 items)
            Order in complex work (2.128%; 2 items)
            Diligence in searching for relevant information (2.128%; 2 items)
            Rationality in selecting and applying criteria (2.128%; 2 items)
            Care in focusing on the current concern (2.128%; 2 items)
            Persistence in facing difficulties (2.128%; 2 items)
            Degree of precision allowed by the subject and the circumstances (2.128%; 2 items)
Appendix B. Critical Thinking Evaluation Scale
Please complete the following information:
Age: _____ Gender: F___ M___
Current Academic Level: ______________________
You will find a list of questions linked to critical thinking’s analytical and argumentative abilities below. Please carefully read each one and select only one response for each statement based on how much you believe each statement pertains to you. You will have four response options: Strongly Disagree, Disagree, Agree, and Strongly Agree. Use the scale at the top of the questionnaire to rate each statement.
Please be as honest as possible, and remember that there are no right or wrong answers; what really matters is that you give the answer that best suits you.
Appendix C. Escala Evaluativa de Pensamiento Crítico
Por favor complete la siguiente información:
Edad: __________
Sexo: M__ F__
Nivel Académico actual: ______________________________
A continuación, usted encontrará una serie de preguntas relacionadas con las capacidades argumentativas y analíticas relacionadas al pensamiento crítico. Por favor, léalas con atención y elija una sola respuesta para cada uno, según el grado en el que considere que cada enunciado aplique para usted. Tendrá 4 opciones de respuesta: Totalmente en desacuerdo, En desacuerdo, De acuerdo, y Totalmente de acuerdo. Use la escala que encontrará en la parte superior del cuestionario para puntuar cada uno de los enunciados.
Por favor, sea lo más sincero posible, recuerde que no hay respuestas correctas o incorrectas, lo importante es que dé la respuesta que más se ajuste a usted mismo. Agradecemos su sinceridad.
Appendix D. Qualification and interpretation rules
The following steps must be followed for the evaluation:
For results interpretation, compare the obtained score with the following table:
Declaration of Conflicting Interests
The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Funding
The author(s) disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: This article is derived from the project “Risk and Protection Factors Associated with Risk Behaviors and Problems Affecting Mental Health in Children and Adolescents: Understanding, Analysis and Modification of Risk and Protection Factors Associated with Risk Behaviors”, with funding code PSIPHD-4-2023, of the Faculty of Psychology and Behavioral Sciences, of the Universidad de La Sabana.
Data Availability Statement
Data are available upon request to the authors.
