Abstract
The well-known tripartite expression, ‘Tell me and I’ll forget; show me and I may remember; involve me and I’ll understand’, is a powerful expression of what remains the struggle of learning: to make learners knowledgeable, skilled and therefore competent. The expression reflects well the basic concept applied in a recent initiative to measure and compare learning in higher education in Europe. This paper outlines the philosophy of the CALOHEE project, the context in which it was set up, the phases distinguished and the process followed, the outcomes so far, the intended testing model and next steps.
Introduction
The well-known tripartite expression, ‘Tell me and I’ll forget; show me and I may remember; involve me and I’ll understand’, is a powerful expression of what remains the struggle of learning: to make learners knowledgeable, skilled and therefore competent. The expression – in a slightly different form – appears to have entered the English language as early as 1928, but can be traced back to Chinese proverbs (Quote Investigator). I came across it again in February 2019 in an elementary school in Manila, the Philippines, as part of a collection of famous education-related expressions featured in the reception hall of the school.
The expression reflects well the basic concept applied in a recent initiative to measure and compare learning in higher education in Europe, although the initiative does not stop at the notion that learning should lead to understanding. It makes the point that higher education in the 21st century actually has to achieve – as a final outcome – that the learner is able to show and apply what he or she has learned and to take initiative, act with authority and with responsibility. Insights collected by the Tuning Educational Structures in the World initiative (Wagenaar, 2019, chapter 10) show that this aim is not met by the vast majority of higher education systems in the world. The same observation has been made by the Organisation for Economic Co-operation and Development (OECD) (Van Damme, 2018). This should be perceived as worrying given the continuously identified skills gap between higher education learning and the needs of society (Chinn et al., 2020; Moore and Morton, 2017; National Association of Colleges and Employers, 2017; Society for Human Resource Management (SHRM), 2019). To formulate it differently: is the higher education sector able to satisfy its stakeholders in terms of having prepared its graduates for taking up their role in society? And how can this be determined?
These questions lead to a set of others. Have we defined sufficiently well what we expect from a 21st-century graduate in his or her field of study? If so, do we evidence the learning in a convincing way? Are we able to define more effective learning and teaching strategies and approaches in an international perspective, taking globalization as a given? Do we have the mechanisms for assessing student learning to assure that the learner has the most appropriate set of competences at graduation, whether at bachelor, master or doctorate level? Does present learning take into account the flexibility required in the workplace, and does it prepare sufficiently well for civic, social and cultural engagement – which in combination contribute to the welfare and sustainability of society at large? To put it more bluntly: is higher education doing its job?
With these notions in mind, in 2016 the project Measuring and Comparing Achievements of Learning Outcomes in Higher Education in Europe (CALOHEE) was initiated in close cooperation with and co-financed by the European Commission of the European Union (CALOHEE Website, Partnerships). The project built on the work done in the many Tuning projects in Europe and other parts of the world and on the experience resulting from the OECD Assessment of Higher Education Learning Outcomes (AHELO) feasibility study (Tremblay et al., 2012–2013). This paper outlines the CALOHEE project’s philosophy, the context in which it was set up, the phases distinguished and the process followed, the outcomes so far, the intended testing model and next steps. The key question answered is whether it is desirable and possible to evidence learning by developing and applying instruments which, on the one hand, respect diversity, the autonomy of higher education institutions and the particular mission and profile of individual study programmes and, on the other hand, allow for measuring the achievements of learning on the basis of internationally agreed references or standards, in order to judge whether these are respected and achieved.
Tuning-CALOHEE philosophy
The Tuning-CALOHEE project started from the assumption that ‘measuring is knowing’. It set itself the ambitious goal of developing a sophisticated and comprehensive model and instrument to measure and compare learning in an international perspective, initially limited to the European Higher Education Area (EHEA) for reasons of consistency and comparability of the learning environment. Having in mind the disappointing experience of the top-down approach of the AHELO global study, it opted for a multi-level involvement of a maximum but feasible number of key players from the higher education sector and a bottom-up approach offering the academic experts a central role. These experts, representing their department and institution, were not only to be directly involved in all steps, but should also feel real ownership of both the development process and the actual assessment instruments and formats. The project also involved European university networks and organisations as well as European quality assurance organisations to guarantee understanding and commitment at policy level (CALOHEE Website, Partnerships).
Autonomy of higher education institutions was (and is) the driving concept here: the higher education sector is in the very first place (made) responsible for defining, delivering and upholding the quality of its learning and degrees. As a consequence, it has the moral obligation, and should have the capability, to perform well by creating trust and confidence in its products among its stakeholders in particular, and society in general. With the notion in mind that the proof of quality and relevance is ‘in the eating of the pudding’, the European ministers of education asked as early as 2003 for European standards and guidelines for quality assurance, which were agreed in the framework of the Bologna Process in 2005 and updated a decade later (European Association for Quality Assurance in Higher Education (ENQA), 2005; ENQA et al., 2015). Although perceived as a major step forward (but not yet fully implemented in all European countries) (European Commission/EACEA/Eurydice, 2018), these standards and guidelines have clear limitations, because they are mainly process-oriented and give little insight into the actual level, societal relevance and quality of achievement in an international perspective. This approach of ‘quality control’ has the additional challenge that the quality assurance systems (as they operate in European countries and beyond) are based on peer assessment. This requires knowledgeable and skilled reviewers, who are well informed about current pedagogies and state-of-the-art strategies, methods and approaches for (aligned) learning, teaching and assessment. Such experts are rare, because most academics operating in higher education have not received much training as teachers, if any. It is widely accepted now that they ‘drive without a license’ or at best ‘fly planes building on the experience of a passenger’ (High Level Group on the Modernisation of Higher Education, 2013; Wagenaar, 2019).
Up-to-date staff training and development based on the concepts of student-centred and active learning is limited to some European countries and rather exceptional in others, as a result of a lack of informed trainers. It is no wonder that the (very) limited progress made in implementing the paradigm change of learning (European Students’ Union (ESU), 2018; European University Association, 2018; Hoidn & Klemenčič, 2020) – that is, focusing on the needs of the student to be able to function successfully in society – has led to stressing the need for staff development in the 2018 Paris Bologna Process Communiqué of the European ministers of education (Paris Communiqué, 2018), a message that is repeated in the 2020 Rome Communiqué (Rome Ministerial Communiqué, 2020).
Context
Based on some fifteen years of experience in contributing to the reform of higher education, initially in Europe and later worldwide, by initiating and promoting the paradigm shift from expert-driven education to student-centred and active learning, it was concluded by Tuning (but also by the Bologna Follow-up Group, the governing body of the Process) that the modernization of higher education learning was not landing widely (Bologna Follow-up Group BFUG, 2014). Only in countries where incentives were defined and implemented in terms of additional budgets to reform degree programmes, and where quality assurance mechanisms were set up which enforced outcome-based learning, could real change be observed. On a lower level, that of European-wide cooperation projects supported by the European Union, it was noted that (often substantial) reforms had been implemented as a direct result of these projects. However, they were limited to pockets of modernization without much impact on other programmes or academic sectors. Nevertheless, these projects, initiated both by Tuning and by so-called Thematic Network Programmes (TNPs), which were consortium frameworks serving as platforms for cooperation in academic fields, proved to have a lasting, although limited, impact. The TNPs were a key action in the educational EU-SOCRATES programme, which ran from the final years of the previous century until 2006. Many TNPs applied the Tuning approach to modernize higher education programmes (Birtwistle et al., 2016).
For a large range of subject areas (a concept broader than disciplines), Tuning reference point documents for the design and delivery of degree programmes, at both bachelor and master level, were published in 2008–2009 (Tuning Europe Website). They were followed up by Tuning Sectoral Qualifications Frameworks (SQFs) for the Social Sciences (2010) and the Humanities and the (Creative and Performing) Arts (2012) (Tuning Educational Structures in Europe, 2010, 2012). The SQFs in particular were related to one of the two existing overarching European qualifications frameworks, which define what characterises a typical bachelor and a typical master degree independent of the subject area. SQFs were meant as sets of intermediate-level descriptors between the overarching qualifications frameworks and the subject area ones. The structure might seem straightforward, having three types of level descriptors, but in practice this was not the case. Two obstacles were observed over time. At European level, two competing meta-frameworks were developed. The first one, the Qualifications Framework for the EHEA, was defined in the setting of the Bologna Process and endorsed in 2005. A second, covering all learning, was prepared by the European Commission and adopted by the EU decision-making bodies in 2008. Based on different concepts and parameters, they proved not to be fully compatible. The situation was further complicated by transferring these two overarching frameworks into two types of qualifications frameworks at national level. The variety of (inconsistent) meta-frameworks resulted in limited use in day-to-day practice as a basis for grounding and developing outcome-based learning.
A second challenge – perceived as a serious weakness – was the insufficient alignment of the meta-QFs, the Sectoral QFs and the Tuning subject area based QFs. Solving this situation became one of the targets of the CALOHEE project. The solution was found in merging the two overarching European Qualifications Frameworks – something which had already been proposed by Tuning in 2013 with the SQF experience in mind (Wagenaar, 2013).
In designing CALOHEE, additional lessons were learned from the AHELO experience. First of all, the opposition within the higher education sector against standardized testing – perceived as not doing justice to the peculiarities of (national) educational cultures and to the profiles of individual degree programmes related to the mission and orientation of a higher education institution (e.g. regional, national and/or international competitive player; research and/or applied driven) – was well understood. Because Tuning was praised for its role in the Bologna Process championing autonomy and diversity, it was in a more comfortable position than the OECD to develop an international measurement model for comparing learning (Van Damme, 2001). It was clear from the very start that such a model should be fully flexible and fair, to avoid comparing apples and oranges (Morgan, 2015c). AHELO showed that developing a test to measure and compare the level of learning at global level was simply too ambitious, because of significant cultural differences, besides differences in educational traditions (Altbach, 2015; Ashwin, 2015; Douglass et al., 2012); it provoked severe opposition (American Council on Education/Universities Canada, 2015; Morgan, 2015a, 2015b, 2015d) and doubts about the intentions of the OECD (Harmsen & Braband, 2019; Morgan and Shahjahan, 2014).
It was well understood that these concerns were also valid for Europe, encompassing at least four established educational traditions – the Napoleonic, Anglo-Saxon, Humboldtian and Soviet models – representing different educational philosophies and cultures (Kuraev, 2016; Sam & van der Sijde, 2014). However, the advantage in Europe – compared to other world regions – was (and is) that more than 30 years of experience in multi-million student mobility and operating with a common credit system had significantly helped to build trust and confidence among higher education institutions and their academics. What was also learned from and confirmed by the AHELO study is that general academic skills/competences should not be decoupled from their domain of learning: abstract thinking, analytical and synthesising skills in engineering, for example, require a different mindset than in the humanities. There are also significant differences and requirements between sectors and subject areas regarding oral and writing skills, creativity, problem solving, leadership, teamwork, decision-making, project management, etc. These are all developed in the context of a particular field of study and/or body of knowledge. The generic skills test implemented through the standardized testing model Collegiate Learning Assessment (CLA), focusing on critical thinking, proved to be one of the weakest elements of AHELO, not appropriate for covering a wide range of different cultures in comparative perspective (Douglass et al., 2012). At the same time, AHELO showed us that there was sufficient common ground to find agreement on what a subject area/discipline represents (OECD, 2011a, 2011b, 2012a, 2012b). Nevertheless, an ultimate attempt to continue AHELO in 2015 by the OECD’s Dirk Van Damme failed (Van Damme, 2015).
From its launch in 2001, Tuning had two ambitions: (1) to offer a sustainable and convincing methodology to reform higher education degrees, in order to boost their quality and make them more relevant for the learner and society, and (2) to showcase what can and (probably) should be learned in current high-level higher education bachelor and master programmes by offering clear reference points for individual subject areas (González & Wagenaar, 2003, 2005). This proved not to be sufficient for accelerating change. With the experience in mind of countries more successful in modernising their higher education programmes, such as the Netherlands, Finland, Norway and Flanders in Belgium, it was concluded that an additional step had to be taken – so far left to quality assurance organisations and agencies – that is, to develop peer pressure on the management and teaching staff of higher education institutions by offering a diagnostic instrument for their own performance. At the same time, far better qualifications reference frameworks were to be offered, complemented with much more detailed assessment reference frameworks and examples of good practice of (aligned) learning, teaching and assessment, to support the move to student-centred/output-based learning.
Phases, process and outcomes
When designing the CALOHEE project, it was well understood that it would take time and effort to come to feasible results. Therefore, three phases were distinguished, each of which would offer useful outcomes to be applied by the academic world at large. The first phase, which covered the period 2016–2018, was devoted to developing sophisticated qualifications reference frameworks plus assessment reference frameworks for five subject areas, representing five academic sectors and therefore a good representation of all academic fields: history (humanities), teacher education (social sciences), nursing (health care), physics (natural sciences) and civil engineering (engineering). The second phase, which will cover the period 2020–2022, is reserved for developing and validating actual tests by defining a large set of test formats and assessment items for the five subject areas mentioned, involving also generic competences. The third phase is planned to consist of the actual testing of final-year bachelor students throughout Europe.
As part of the first phase, five working groups with an international membership were established, each consisting of around 10–12 subject area/disciplinary experts and one or two students enrolled in the subject area involved. The selection of experts was based on an open call. The student representatives were appointed by the European Students’ Union. Each group was coordinated by two leading experts, who were fully acquainted with the Tuning methodology. The groups met three times over two years. They all followed a set agenda. The project was supervised by a large steering group representing European university networks and organisations and quality assurance organisations. The strategic consideration here was that this group would be a sounding board for the project, but also an intermediary with the outside world. From the first day, experts from Educational Testing Service (ETS), based in Princeton, were also involved to inform the expert groups about the possibilities and limitations of comparative testing in an international environment (CALOHEE Website, Partnership and Subject Area Groups).
The first assignment for the groups was to develop a qualifications reference framework, which in practice meant completing a table based on a merger of the two European meta-frameworks. See Figure 1. This table has two axes: the vertical one is based on the five descriptors of the Bologna Process QF for the EHEA (knowledge and understanding, applying knowledge, judgement, communication and learning skills), and the horizontal one on the three ‘learning types’ of the EU EQF for LLL – knowledge, skills, and autonomy and responsibility – representing an ascending level, knowledge being the lowest and autonomy and responsibility the highest (CALOHEE, 2018-1).
As a first step, each group was asked to identify a set of dimensions, which are in practice labels for the constitutive key elements that together represent the subject area. The list of dimensions varies from 5 to 9 per subject area, nursing having the shortest list and civil engineering the longest. The outcome was a set or menu of 15 to 27 learning outcome descriptors, indicating three levels for 5 to 9 dimensions of learning. In Tuning terminology, a learning outcome descriptor represents a level of competence. The purpose of the table is twofold: it is meant as a reference for each degree programme and as a basis for developing test items of varied complexity for each of the domains identified. In practice, the groups did not have much difficulty in defining the descriptors for the first two types of learning, ‘knowledge’ and ‘skills’, skills being understood as the practicing of knowledge on the basis of assignments. This was completely different for the third type: autonomy and responsibility. From the consultations which were completed by each expert, it became clear that this type of learning was not practiced/developed in the vast majority of programmes (CALOHEE Website, Working papers).
This is a rather troublesome conclusion, because it is precisely this highest level of learning that should prepare for the societal role of graduates, that is, the workplace and civic, social and cultural engagement. It took the explanation of a CALOHEE policy paper and extensive and deep discussions to define the descriptors for the learning type ‘autonomy and responsibility’. To develop understanding of the concept, the project referred to existing competency reference frameworks for employability produced by a number of organisations and companies, and developed in the setting of the project a competency reference framework for civic, social and cultural engagement. For the latter framework, four dimensions were identified: societies and cultures: interculturalism; processes of information and communication; processes of governance and decision making; and ethics, norms, values and professional standards. As a response to the UN Sustainable Development Goals (SDGs), a fifth dimension will be added in the next phase of the project: ‘sustainable development’. All qualifications reference frameworks prepared by the five subject area groups have included employability and the first four civic competences in their descriptors. These qualifications frameworks became the core of the 2018 edition of the Tuning Reference Points for the Design and Delivery of Degree Programmes brochures (CALOHEE Website, Publications for Printing).
The second challenging task for each of the subject area groups was to develop Assessment Reference Frameworks for the bachelor and master by breaking down the descriptors of the Qualifications Reference Frameworks into more detailed sub-descriptors, and to underpin descriptors and sub-descriptors with the most appropriate (aligned) learning, teaching and assessment strategies for developing the competences involved (Wagenaar, 2018b). This work resulted in a separate publication (Wagenaar, 2018a). These assessment frameworks offer the basis for developing the assessment formats and assessment items in phase 2 of the CALOHEE project. They cover a very rich menu of detailed learning formulated as learning outcomes. The basic assumption is that individual programmes will only cover a part of these statements, based on their mission and profile. This allows for the rich and wide diversity of degree programmes in Europe, which has been promoted by Tuning since its launch. However, the strength of both the subject area based qualifications reference frameworks and the related assessment reference frameworks is that they make the actual learning (and intended learning outcomes of individual degree programmes) comparable and compatible.
In retrospect it is remarkable how easily the experts from a wide range of countries found common ground in defining both the qualifications reference frameworks and the assessment reference frameworks. Only one group, Teacher Education, experienced this as more challenging due to the different national arrangements, which initially proved a barrier to reaching agreement at a more abstract level. It is interesting to note that the historians struggled less with this issue, being fully aware that the focus should be on theoretical and methodological principles and the more general movements of European and world history, not on regional and national ones. This showed that the often applied argument of academic freedom to oppose comparative testing is not very valid, in particular because the five groups also found common ground in the tasks and responsibilities their graduates had taken on in the workplace and society at large.
However, it is fair to stipulate that the impression of all five groups, once the frameworks were defined, was that the descriptors of the learning to be achieved are rather ambitious and that the highest level of learning, autonomy and responsibility, is only (very) partly covered in reality. It showed that the learning, teaching and assessment strategies for this type of learning were not really in place yet, which confirmed the outcomes of the consultations implemented among the participating institutions during the process of composing these frameworks – institutions which perceived themselves as a good representation of their country and the education offered.
Testing model and next steps
As part of phase 1, the CALOHEE project developed an assessment model based on four parameters: knowledge (theory and methodology); application of knowledge and skills; employability; and civic, social and cultural engagement. These parameters should identify the common body expressed as knowledge, skills and autonomy and responsibility (wider competences), but also allow for identifying variety in the level of achievement related to the parameters (Wagenaar, 2018b). The assessment formats and items to be developed should reflect these. This model is used as the foundation for phase 2, covering, as said, the period 2020–2022, for which five working groups of 7–8 high-level subject area experts have been established. Their members, who are a good representation of their academic field in terms of countries of origin and specialisation, have a double task:
(1) matching their own first and (if appropriate) second cycle programmes (bachelor and master) in detail against the CALOHEE frameworks, these being representative of degree programmes of the same subject area and country; and (2) developing, as content experts, a series of test formats and assessment items which reflect the assessment reference frameworks, based on the dimensions that organise the core competences identified for a particular academic sector/subject area, for all three learning types/levels identified. This implies that the experts are asked to develop test items of progressive complexity which do justice not only to the academic field but also to the preparation for a societal role.
The first action, the detailed comparison of existing programmes with the reference frameworks, which was executed in the period May–October 2020, confirmed the impression that these have been formulated ambitiously, but not unrealistically. For the vast majority of universities, independent of the discipline, it proved difficult to match the highest level of learning for many of the dimensions defined.
For the regulated professions, civil engineering and nursing, as well as physics, a substantial overlap was found between both the learning defined and the intended level to be achieved. This proved to be different for history and teacher education, representing the humanities and social sciences respectively, where substantial differences were identified in the coverage of both the (sub)dimensions and the intended levels to be achieved according to the programme learning outcomes. The outcomes of the analyses have been recorded in a report for each of the five subject areas (CALOHEE Website, Publications for Printing).
In the final months of 2020, the five groups made a start on identifying the main overlap in what is actually assessed in their programmes and continued the work of identifying good practices of assessment to be taken into account when developing the test formats and assessment items.
For reasons of reliability, the test formats and items are expected to allow for machine-based testing. This might create the impression of standardized testing, but that is not intended, although the outcome of this phase will be a blueprint/test item bank for each of the subject areas involved. What is sought are assessment approaches which allow for identifying/measuring (real) understanding, analytical and critical thinking/awareness, making solid judgements and preparing for a societal role, both for the world of work and for civic, social and cultural engagement. They should also allow for measuring the level of application of the most relevant generic competences/transferable skills, such as working in teams, leadership, problem solving and entrepreneurial thinking, in the setting of the subject area involved.
This is ambitious and means (partly) entering new territory, asking for sophisticated formats which take into account different cultures and educational traditions. It will require ‘adaptive’ testing, for example, implying differentiation in questioning – from less complicated to more complicated – on the basis of the responses obtained. It will also imply scenario-building, where academics can learn from strategic computer gaming based on algorithms. It will in addition require defining complex problems whose solution calls not only for knowledge and understanding, but also for generic competences. One can also imagine assessment questions that have to be answered by a short essay which is analysed by computer. Another form is showing footage of a real event, about which questions have to be answered. All these formats are already in use, although in some cases still (very) limited or in an experimental phase of development.
In three meetings, initially spread over 14 months but as a result of the COVID-19 pandemic now extended to 26 months, with homework in between, the groups are expected to come up with a test blueprint. This timeframe is required given the complexity and number of test items to be developed. It is expected that the experts consult other disciplinary experts in the process. Each meeting should result in a progress report and an initial draft. Both drafts and final products will be evaluated and ultimately validated by test experts. Some initial testing with a control group of students is foreseen as part of the validation process. From the test bank, tests can be constructed which cover different aspects. They can be taken by the same group of students/learners, but also by split groups of learners. The main objective of developing and ultimately implementing the test is to find out whether the learning of a particular programme meets the agreed standards, as well as the exceptions defined as part of the profile of a particular programme. This implies that the tests to be developed are meant to be diagnostic and will not measure the ultimate achievement of learning of individual learners. The charm of the format chosen is that it allows for ordering questions and answers according to the type of programme, so that apples are compared to apples and oranges to oranges. Another way to make distinctions is to tailor the assessment criteria to specific, priority-defined groups of higher education institutions and programmes with comparable profiles.

Figure 1. Template first cycle – bachelor – level 6 European Qualifications Framework.
As an outcome of this phase, the project will make public its policy and procedure for developing its test formats and assessment items. This will offer degree programmes the opportunity to reference their programmes against these materials to identify possible gaps and weaknesses in the learning offered. Starting from the assumption that the project is able to develop the blueprint for testing and the underpinning test item bank successfully, a third phase can be implemented. This is the actual transnational comparative test, which is expected to offer evidence regarding performance, in addition to the existing quality assurance methods. This phase will require a substantial budget for computerizing the test items, translating these items into a variety of languages, and setting up the test environment making use of a large variety of multi-dimensional test formats. The challenges involved are well known to the project, with the experience of the AHELO feasibility study in mind. In principle, it is intended to develop two test environments: a first one in which groups of students at selected and interested higher education institutions take the test, and a second one in which individuals take the test to check their level of learning. By aggregating data, it will ultimately also be possible to say something about the performance of educational systems, the objective of the OECD’s AHELO.
Conclusion
This paper started with the question ‘Is higher education doing its job?’ and the statement ‘measuring is knowing’. To the question could be added ‘sufficiently well to meet the needs of the learner and society at large’. The assumption expressed implicitly in this paper is that this is not the case. The Tuning initiative was originally taken in 2000 because it noted a disconnect between what was taught in higher education programmes and what is required to operate successfully in society. Meta-qualifications frameworks as a basis for defining level and quality, which were introduced worldwide from 2000, proved in practice an insufficient trigger for change, although they were (often) meant to be. The intended change is to move from an expert-driven educational model to a student-centred one, which today is widely perceived as a requirement for developing the competences required. CALOHEE was initiated with the notion in mind that an appropriate format had to be found to define the competences to be learned and the level to be achieved. This should be done at subject area level to have measurable and fair indicators. This exercise proved to be successful and resulted in very sophisticated qualifications reference frameworks and much more detailed assessment reference frameworks. Both offer menus for comparison of existing programmes and offer inspiration for further enhancement. Past experience has shown that showcasing what can and probably should be learned is an insufficient incentive for upgrading degree programmes and applying state-of-the-art learning, teaching and assessment strategies and methods. Much more informative is to actually check – by measuring in comparative perspective – whether the learning is up to standard. It is assumed that such information will be perceived as a more effective means of (peer) pressure and a motivator for change.
CALOHEE has shown us that common ground can be found to decide what to expect from a degree programme, and therefore what is to be measured. The prospective challenge is to define testing formats and assessment items which allow for computerised measuring of complex learning, such as analytical thinking, as well as of generic competences. This brings us into partly new territory, which should offer the evidence to answer the question whether higher education institutions and their degree programmes are developing the set of competences required by society.
Declaration of conflicting interests
The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Funding
The author(s) disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: Tuning and CALOHEE Projects are co-funded by the European Commission of the European Union.
