Abstract
English is required as the official working language for oceangoing seafarers across the world. In China, much time and effort have been dedicated to improving maritime students’ Maritime English (ME). However, low English proficiency remains one of the main barriers for Chinese seafarers competing in the international maritime labor market. Online technology has introduced great opportunities for ME education in China, and a deep understanding of its current status is essential for enhancing its quality and effectiveness. This article investigates the current status of online ME education in China from the perspectives of Chinese maritime students and ME teachers by examining four key factors: online ME materials, online assessment and feedback, online interactions, and related online support. In total, 255 maritime students and 34 ME teachers from different maritime education and training (MET) institutions in China participated in this research. The study finds that online ME education in China remains underdeveloped. Only limited and simple online methods were being used in ME teaching and learning, the exam-oriented teaching mode greatly impeded the implementation of creative online methods, many online functions had not been fully exploited, and greater attention should be paid to individual learning needs.
Introduction
Maritime English (ME), as a branch of English for Specific Purposes (ESP), aims to ensure that seafarers worldwide can communicate effectively with each other. Shipping is a highly globalized industry (Kahveci et al., 2002). In such a globalized working environment, communication failure (Park, 2017; Ziarati, 2006), including seafarers’ deficiency in English (Ahmmed, 2018; Apostol-Mates & Barbu, 2015; Ziarati et al., 2009), is considered one of the major factors responsible for maritime accidents. As such, ME proficiency is essential for multicultural and multilingual crews to ensure a safe and secure working environment (Progoulaki & Roe, 2011).
However, research shows that Chinese maritime students and Chinese seafarers still have low ME proficiency (Fan, 2017; Fu, 2008). Although a number of ME classes and many methods have been put in place to improve students’ English proficiency, many Chinese seafarers still regard English deficiency as the major obstacle to effective communication on board (Kang et al., 2013; Tang et al., 2016). The quality of ME education in China does not meet the expectations of ME teachers or seafarers’ employers (Fan et al., 2017; Wang et al., 2017).
In view of the language challenges faced by Chinese seafarers, it is imperative to develop strategies to improve the outcomes of ME education in China. Information and communication technology (ICT) provides new forms and possibilities that can complement traditional teaching and learning (Kentnor, 2015). While ICT plays an increasingly important part in ME education worldwide (Valle & de la Campa Portela, 2011), to date few online courses or strategies have been applied to ME education in China (Shi, 2019). Cole and Trenkner (2012) point out that although online methods are appropriate for improving students’ ME ability, online learning has developed much more slowly in ME education in China than in the field of general English. Most current students live in a world inseparable from the internet and digital devices (Wet, 2013). To enhance the learning outcomes of ME education, the teaching and learning mode needs to be adjusted to match the learning habits of these students. Further investigation shows a dearth of research exploring the feasibility of online ME education in China. Therefore, this research endeavors to fill the gap between thriving online learning in general and its relatively meager practice within ME education in China.
Based on this gap, two research questions were proposed:
What is the current status of online ME education in China?
What are the recommendations for implementing online ME education in China?
The first research question examines how various online methods are currently integrated into ME education in China. Its aim is to identify key issues, such as the current teaching mode of ME education in China, assessment and feedback, teaching materials, learners’ interactions, the availability and usability of online technology, and the support for online ME education.
The second research question focuses on exploring the ways in which ME education can be enhanced by online technologies to better meet the language requirements for future Chinese seafarers. Knowledge of such recommendations is derived from the literature review and the findings of this research.
Literature Review
A Review of Online ME Education in China
To provide a full picture of the current status of online ME education in China, a review was conducted of the published studies associated with this topic. The review attempted to identify the overall status and critical issues in the field of online ME education in China. Publications on this topic are mainly written in two languages: Chinese and English. For Chinese publications, China National Knowledge Infrastructure (CNKI) was selected because it is the largest mainland Chinese full-text database, containing no less than 99% of all Chinese journals and papers (Zheng & Zheng, 2013). ProQuest, Informit, ERIC, Web of Science and Scopus were chosen as the databases for English articles because they are comprehensive and widely used databases covering the majority of journals in social science, arts, and humanities. In addition, the International Maritime English Conference (IMEC) enjoys a high reputation in the field. IMEC was established in 2002; before that, it was called the Workshop on Maritime English (WOME). As an important source of ME research, the previous IMEC proceedings available on the IMEC website (http://www.imla.co/imec/) were searched for this review.
The World Wide Web was introduced in 1992 (Harasim, 2000), so the review covered papers from 1992 to 2018. The same inclusion criterion was applied to the search of both Chinese and English publications: the papers should be related to practical online ME teaching and learning in China. The online library of the University of Tasmania was used to access the databases. Given that a keyword-only search might exclude potentially relevant studies, this review searched the “titles,” “themes” and “keywords” of papers in the databases using several combinations. The search terms included (“Maritime” OR “Marine Engineering” OR “Navigation” AND “English”) AND (“online” OR “e-learning” OR “blended learning” OR “internet” OR “virtual learning” OR “mobile learning” OR “multimedia” OR “flipped class*” OR “MOOC” OR “micro learning” OR “independent learning” OR “digitalisation” OR “informatisation”) AND (“education” OR “teaching” OR “learning”) AND (“China”). Chinese has far fewer synonyms for these terms than English does, so the search did not use additional Chinese synonyms of the above keywords. The search was completed in June 2018.
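For replication, the combination of search-term groups above can be assembled into a single boolean string programmatically. The following sketch is purely illustrative; the helper function and the truncated synonym lists are ours, not part of the original search protocol:

```python
def build_query(term_groups):
    """Join synonyms with OR inside each group, then join the groups with AND."""
    grouped = ["(" + " OR ".join(f'"{t}"' for t in group) + ")" for group in term_groups]
    return " AND ".join(grouped)

# Hypothetical reconstruction; the online-method synonym list is truncated for brevity
query = build_query([
    ["Maritime", "Marine Engineering", "Navigation"],
    ["online", "e-learning", "blended learning"],
    ["education", "teaching", "learning"],
    ["China"],
])
```

The resulting string can then be pasted into each database's advanced-search form.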
After screening and selection against the above criteria, a total of 40 articles were assessed as relevant. Among them, 34 were papers written in Chinese retrieved from CNKI, including one master’s thesis and two papers published in core journals (in this research, core journals refer to the journals listed in the “List of Core Journals of China” developed by Peking University Library). As for English papers, five relevant articles were found in IMEC proceedings and one in the English databases listed above. Chinese researchers authored almost all of the English papers; only one was co-authored with a foreign researcher. Regarding online ME education in China, the earliest article found in CNKI was published in 2001, the earliest English paper in 1999, and the latest papers in both languages in 2017. Figure 1 shows the distribution of publications on online ME education in China.

Figure 1. Distribution of publications on online Maritime English education in China.
It can be seen from Figure 1 that the volume of published literature fluctuated over the years. Despite a sharp rise in 2010, when the requirements for ME communications were modified considerably in the STCW Manila Amendments, there was a drop in the following years. In spite of these fluctuations, however, publications were generally on the rise, and more papers have been published in this area since 2010 than in previous years. The low publication volume before 2003 implies that this research topic had not attracted most researchers’ attention until then. For Chinese papers, apart from the master’s thesis, around 85% of the journal articles were limited to one to three pages in length; the longest article written in Chinese was six pages. Most articles of fewer than three pages did not explore the research topics fully or in detail. In addition, Chinese papers published in core journals accounted for only a very small proportion (5.88%). Although English papers were normally longer than Chinese ones, ranging from three to 10 pages, most of them (up to 83.3%) were conference proceedings.
To identify the critical issues in this field, the full texts of the 40 publications were consulted to determine their themes. Through this classification process, the themes of the publications were grouped into four categories, including suggestions or implications for using online tools or methods.
Most of the reviewed papers (87.5%) were related to non-empirical studies and only five empirical ones were identified. A closer look at the methodological approaches adopted in the empirical studies revealed that four of them used quantitative methods in the form of questionnaires or tests; only one study adopted a mixed methods approach. The main data analysis methods were descriptive or inferential statistics. Specifically, Yang (2008) designed a questionnaire to evaluate the strengths and limitations of MarEng (an online ME learning resource). Y. Liu and Yu (2016) investigated the possibilities of implementing online ME education by analyzing the availability of digital devices to maritime students and the length of their daily online time. They found that most Chinese maritime students could access various digital devices and had time to learn ME online. T. Yan and Hu (2016), J. Liu (2016) and Weng (2015) conducted surveys to examine the effects of particular online learning tools, such as an online learning platform, a network corpus and online resources. They all concluded that the online methods used enhanced students’ learning outcomes and interest.
For the non-empirical research, the majority of the researchers introduced online methods or resources, such as online software (Zhong, 2010) and ME dictionaries and online question banks (Ma, 2010); examined the characteristics of new online educational forms or technologies, such as online learning platforms (Ma, 2008), network corpora (J. Wang, 2011), P2P (Yuan, 2009), MOOCs (Yu, 2017), multimedia (L. Liu, 2001), and micro-courses and flipped classes (L. Chen, 2016); analyzed their feasibility (Lü & Liu, 2014; T. Yan, 2015) or possible problems (Song, 2013); and offered suggestions (B. Chen, 2009; J. Zhou, 2012), evaluations (J. Wang, 2010), or implications (Zhang, 2009) regarding their applications. These articles indicate that online technologies have the potential to be highly useful tools for improving the quality of ME education. Although many of the published papers provided suggestions (B. Chen, 2009; J. Zhou, 2012) or analyzed the feasibility (Lü & Liu, 2014; T. Yan, 2015) of applying certain online instructional methods, few of them designed a systematic approach for applying such methods or evaluated the effects of such applications. Network corpora and multimedia were two hot topics in this area. Many researchers provided ideas on enhancing the outcomes of ME education by incorporating network corpora (Gu, 2011; Z. Sun, 2006; J. Wang, 2010, 2011; Zhao, 2012) or multimedia (X. Li, 2005; L. Liu, 2001; Su, 2005; F. Wang & Wang, 2014) into the teaching and learning process. However, more studies are needed to put these ideas into practice and report the actual effects with substantial data or literature support. Generally, there was a lack of quality papers in this area, which implies that research on online ME education in China is still in its infancy.
The review revealed a dearth of research into online ME education in China. Although many courses and resources are available online, access to online ME courses or resources in China is limited. The major search engines available in China, such as Baidu, Sogou and Youdao, are better at processing Chinese information (Ursell, 2017) but provide limited access to information in other languages. Therefore, Chinese learners have few chances to learn from well-developed online ME resources. Furthermore, few online methods have been implemented in ME education in China. Many studies in this review were limited to suggestions for applying online methods to ME teaching and learning without empirical evidence. Scant research was performed from learners’ perspectives, for example, taking into account their needs and readiness for online learning. Thus, there is a great need for in-depth research into online ME teaching and learning that accommodates learners’ practical needs and circumstances.
Quality Matters Rubric Standards
The Quality Matters (QM) Rubric Standards were first initiated by MarylandOnline, Inc. to provide a set of replicable and scalable methods to measure and guarantee online education quality (MarylandOnline, 2018). They involve the use of rubrics and peer review to evaluate the quality of an online course. The QM developers drew on best practices and research findings to develop a faculty-centered, peer-review process that could be applied to various online courses (Legon, 2015). The QM program has now received wide acceptance for its research-based rubrics and its inter-institutional, peer-review processes. It currently has more than 1,300 institutional subscribers and 52,000 trainees throughout the world (QM, 2018a).
QM (2018c) developers state that QM is founded on four underlying principles: continuous, centered, collegial and collaborative. The continuous principle indicates that QM is an ongoing program for educational institutions to assure the quality of online courses. The centered principle means QM is centered on three aspects: research, student learning, and quality. The third principle indicates that, as part of a faculty-driven, peer-review process, the QM review process strives to be diagnostic and collegial rather than evaluative and judgmental. The last principle means the QM review process uses collaboratively identified evidence reported in online courses, and many ways can be used to meet every standard. The QM rubrics are a set of guides that are useful not only in creating or evaluating online courses but also in adapting them when needed (Puzziferro & Shelton, 2008). The aim of the QM rubrics is to assure the quality of online course design (MarylandOnline, 2018), which is one aspect of this research topic. Therefore, the QM rubrics served as one resource for this study.
To date, there have been six editions of the QM (2018b) Higher Education Rubric; the latest version, with eight general areas and 42 specific standards, was released in mid-2018. Compared with the fifth edition, the sixth edition made some updates to its rubric standards, such as merging a few existing standards and adding items relating to digital literacy skills, modeling academic integrity, commitment to accessibility, and including a variety of technology (K. Wilson, 2019). The standards listed in QM are grouped into eight general standards that are essential for online or blended course design (Legon, 2015): Course Overview and Introduction, Learning Objectives (Competencies), Assessment and Measurement, Instructional Materials, Learner Activities and Learner Interaction, Course Technology, Learner Support, and Accessibility and Usability. Some of these standards are applicable to this research.
However, some QM rubrics may not be suitable for evaluating the present situation of online ME education in China. For example, providing links to the institution’s accessibility policies and services is not a common practice yet (Alizadeh, 2019). In addition, QM was designed for trained peer-review teams to evaluate the quality of online learning (Alizadeh et al., 2019). In this research, the major participants were current maritime students and ME teachers, who normally did not know whether “The instructional materials represent up-to-date practice in the discipline” or “The assessments used are sequenced” both of which are listed in the QM rubrics. Furthermore, effective online courses should accommodate the differing needs and backgrounds of multicultural learners (Gao & Legon, 2015). The different Chinese cultural characteristics require some adaptations of the QM standards. Therefore, this research adapted the QM rubrics to make them more relevant to the research aim, the practical situation, the specific quality criteria of ME education in China and the Chinese background. For this purpose, this research deleted three sections in QM which were not applicable or suitable in ME education in China: Course Overview and Introduction, Learning Objectives (Competencies) and Accessibility and Usability. Feedback was added into the assessment because feedback is a critical component of online assessment to ensure the quality of online courses (Moore & Kearsley, 2011). As such, the QM rubrics were adapted into the following five aspects: online assessment and feedback, online ME learning materials, online learning interactions, technologies related to online ME study, and technical support related to online ME study.
Method
Research Instruments
To obtain a comprehensive portrait of the research topic, the researcher needs not only the in-depth thoughts of the participants but also a general picture of the related issues. Therefore, a mixed methods design was employed to achieve this objective and provide a more comprehensive understanding of the research topic (Creswell, 2013).
Questionnaires provide an objective method to gather information about respondents’ views (Sapsford, 2006), so they were employed as a research instrument to collect quantitative data. The questionnaires were composed of 30 closed-ended questions that investigated the current status of online ME education in China. Except for the eight questions in the biographical section, the questionnaire items used 5-point Likert-type scales to assess the respondents’ attitudes toward online ME education in China, since a Likert-type scale is easy for respondents to understand and its data are relatively easy to analyze (LaMarca, 2011). Taking into account the financial and time constraints of this research, an online questionnaire was adopted as the main means of obtaining the quantitative data. The questionnaires were posted on the free survey website QuestionPro.
Interviews are particularly useful for exploring in-depth information about a research topic (McNamara, 2006); hence, interviews were employed to collect the qualitative data for this study. Although the research questions were addressed in the prepared interview questions, the respondents were able to express their ideas in a flexible way that did not constrain the discussion to the questions in the schedule. Therefore, semi-structured interviews were conducted so that the researcher had control over the topics of the interview while a flexible range of responses to each question remained possible.
Recruitment
The target population of this research was current maritime students and ME teachers from various MET institutions in China. The participants were invited from the lists of MET institutions that were available at the website of the Ministry of Transport of the People’s Republic of China.
For questionnaires, an information sheet with a description of the research project as well as the investigators’ contact details was provided to all the potential participants through a link in the advertisement. Advertisements and posters of the questionnaires were used in permissible and appropriate places, such as public areas of popular websites for maritime students and ME teachers. Advertisements were also circulated in the discussion areas of the campus website or Bulletin Board System of the campus networks of the MET institutions.
To recruit interview participants, this research first grouped the MET institutions according to their geographical locations and educational tiers. In each group, the researcher randomly selected two MET institutions from the official list available on the website of the Ministry of Transport of the People’s Republic of China. The researcher then contacted the administration staff of these institutions (e.g., a director or manager) using the contact information provided on their websites, either by phone or email depending on what was provided. If permission to carry out the interviews was obtained from the management board, the relevant staff were asked to send an invitation email together with the information sheet to all the maritime students and ME teachers. If a potential participant was interested in participating, a consent form was provided by email or mail before the interview; the signed consent form could be returned to the researcher by mail or email or collected at the interview, after which the interview was arranged. If the number of respondents had not reached the required sample size, another round of random selection was applied to recruit more participants.
Data Collection
The questionnaires were completed and collected online. The participants were provided with a link to the information sheet and instructions about how to complete the questionnaires before they began to answer the questions. When the participants finished the questionnaires and clicked on the “Submit” button, their answers to the questionnaires were automatically stored in the QuestionPro website.
The semi-structured interviews of this research were conducted either in a face-to-face setting, online or over the phone. The interview conversations were either recorded by digital devices or written in a notepad with the interviewees’ permission. After the interviews, such information was transcribed into textual data for further analysis.
Reliability and Validity
Reliability can be divided into external reliability and internal reliability (McLeod, 2007). Internal reliability measures the consistency of results across the items that assess the same construct within a test (McLeod, 2007). The internal reliability of this research was explored through Cronbach’s Alpha values; a scale is considered acceptable if its Alpha value is greater than .70 (Wessmann et al., 2014). The Alpha value for related online support was .842, for online interactions .836, for online feedback and materials .749, and for online assessment .726. All the results were above .70, indicating good internal reliability of the questionnaire.
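For readers who wish to reproduce such reliability checks, Cronbach's Alpha can be computed directly from raw item scores using the standard formula α = k/(k − 1) × (1 − Σ var(itemᵢ)/var(total)). The sketch below uses invented data, not the study's questionnaire responses:

```python
def cronbach_alpha(items):
    """items: list of columns, one list of scores per questionnaire item."""
    k = len(items)          # number of items in the scale
    n = len(items[0])       # number of respondents

    def variance(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)  # sample variance

    item_var_sum = sum(variance(col) for col in items)
    # Per-respondent total score across all items on the scale
    totals = [sum(col[i] for col in items) for i in range(n)]
    return k / (k - 1) * (1 - item_var_sum / variance(totals))

# Three perfectly correlated items yield an Alpha of 1.0
print(round(cronbach_alpha([[1, 2, 3, 4, 5], [1, 2, 3, 4, 5], [1, 2, 3, 4, 5]]), 3))  # prints 1.0
```

In practice a statistics package (e.g., SPSS or a Python library) would be used, but the formula itself is this simple.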
The external reliability evaluates the extent to which consistent results can be obtained across a range of measurements (McLeod, 2007). Generally, two kinds of bias, namely respondent bias and instrumentation bias, may jeopardize the external reliability of research (Hair et al., 2015). Instrumentation bias occurs when research instruments are not appropriately designed or presented (Hair et al., 2015). To avoid possible instrumentation bias, every effort was made at the design stage. The drafts of the questionnaire and interview questions were examined by the members of the research team, academic staff, colleagues of the researcher, maritime students and ME teachers. Their feedback provided the researcher with a broader view for avoiding bias and prejudice when revising and refining the final version. Effort was made to avoid ambiguous or double-barrelled questions, the accuracy of translation was guaranteed by a rigorous process, and sensitive information was avoided so that the participants would not feel offended or challenged in answering the questions.
Respondent bias occurs when the participants are influenced by certain factors during the survey (Hair et al., 2015). To reduce respondent bias, as suggested by Neuman (2012), the study included multiple sources of data, multiple instruments and multiple participant groups, and the questions were asked from different angles. In this way, the study is considered reliable, as the results would not be significantly different if it were conducted again under the same circumstances. Merriam (1998) holds that ensuring reliability involves conducting the research in an ethical manner. Before taking part in the research, the participants were informed that the collected data would be kept confidential and anonymous, so they could feel free to express their ideas.
Validity indicates the extent to which the measuring device or technique truly assesses what the research intends to measure (Pallant, 2016). There are also two kinds of validity: internal and external. Internal validity includes criterion validity, construct validity, and content validity, while external validity refers to the extent to which the findings can be generalized to other settings (Punch, 2013). This study focuses more on achieving internal validity than external validity because generalizability was not directly related to the aim of this research.
Common threats to internal validity are defined as: history, testing, instrumentation, maturation, selection, regression, experimental mortality, and interaction of threat (Slack & Draugalis, 2001). These threats were regarded as minor in this research since it involved no experiment and was conducted during a relatively short period. In addition, some methods employed to guarantee reliability were also beneficial to the internal validity of this study, such as using multiple sources of data, the rigorous translation process and consulting experts as well as the potential participants about the research instruments.
Results
Participants
A total of 289 volunteers participated in this research, including 255 maritime students and 34 ME teachers, who were currently learning or teaching ME in MET institutions in China. The questionnaires involved the participation of 243 maritime students and 22 ME teachers. Among them, 10 completed questionnaires (nine from maritime students and one from an ME teacher) had more than 50% of the data missing and were therefore excluded from data analysis. The remaining 255 responses, 234 from maritime students and 21 from ME teachers, were used for quantitative data analysis. The interviews recruited 24 participants: 12 maritime students and 12 ME teachers.
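The exclusion rule applied here (discarding any questionnaire with more than 50% of the data missing) amounts to a simple filter. A minimal illustration with invented data, in which None stands in for an unanswered item:

```python
def filter_responses(responses, max_missing=0.5):
    """Keep a response only if its fraction of missing answers is at most max_missing."""
    kept = []
    for answers in responses:
        missing = sum(1 for a in answers if a is None) / len(answers)
        if missing <= max_missing:
            kept.append(answers)
    return kept

responses = [
    [4, 5, 3, None],        # 25% missing -> retained for analysis
    [None, None, None, 2],  # 75% missing -> excluded
]
print(len(filter_responses(responses)))  # prints 1
```

Note that the threshold is "more than 50%", so a response with exactly half its items missing would still be retained under this rule.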
Biographic Information
The biographic part of the questionnaires includes eight questions, which refer to students’ educational qualification, major and grade, the dominant teaching mode, the employment of an online learning platform, students’ ME ability, online experience and class size. The biographic information, grouped by maritime students and ME teachers, is shown in Table 1.
Biographic Information of Maritime Students and ME Teachers.
From the students’ perspective, it was found that the traditional teaching mode (96.6%,
Similar to the responses of students, ME teachers also believed that the traditional teaching mode (95.2%,
Quantitative Data Analysis
Descriptive analysis
Part B of the questionnaires was composed of 22 questions that investigated the current status of online ME education in China, divided into five sections. The first section, from Question 9 to Question 13, focused on the current status of online assessment and feedback in ME study. The second section, about online learning materials, had three questions. Online learning interactions were investigated in the third section, from Question 17 to Question 22. The next four questions, Question 23 to Question 26, examined the online technologies currently applied in ME study. The last section, Question 27 to Question 30, related to the relevant technical support. The values on the Likert-type scale represent the degree of agreement, ranging from 1 (“Strongly Disagree”) to 5 (“Strongly Agree”). Table 2 provides a descriptive analysis of each section in Part B; frequency, the percentage of each choice, and the mean, median and mode values are presented to summarize the participants’ responses.
Participants’ Responses to Part B.
From Table 2, it can be seen that the medians of most of the statements were 3 or 4, meaning that a majority of the participants held a relatively positive view toward most of the statements. The participants generally held a negative view regarding statements Q10, Q12, and Q13, whose median values were 2. This is in alignment with the percentage of each item: only around 15% of the participants agreed or strongly agreed with Q13 that “diverse types of online feedback are provided to me/my students, such as in written, video or audio forms,” and around 30% of the participants chose either “Agree” or “Strongly Agree” on Q10 and Q12. This indicates that the forms of online assessment were not varied and online feedback was not widely used; the reasons for this should be explored further.
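The descriptive measures reported in Table 2 (frequency, percentage of each choice, mean, median and mode) can be reproduced for any 5-point Likert item with standard library functions. The responses below are invented, not drawn from the actual dataset:

```python
from collections import Counter
from statistics import mean, median, mode

def describe_item(scores):
    """Summarise one 5-point Likert item: counts, percentages, mean, median, mode."""
    counts = Counter(scores)
    n = len(scores)
    freq = {level: counts.get(level, 0) for level in range(1, 6)}
    pct = {level: 100 * freq[level] / n for level in range(1, 6)}
    agree_pct = pct[4] + pct[5]  # share choosing "Agree" or "Strongly Agree"
    return {"freq": freq, "pct": pct, "mean": mean(scores),
            "median": median(scores), "mode": mode(scores), "agree_pct": agree_pct}

# Hypothetical responses for one statement, coded 1 ("Strongly Disagree") to 5 ("Strongly Agree")
summary = describe_item([2, 2, 3, 3, 3, 4, 4, 5, 1, 3])
```

A median of 3 with a low agree percentage would correspond to the lukewarm pattern observed for items such as Q10 and Q12.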
Exploratory factor analysis
Exploratory factor analysis (EFA) is a data reduction method that is employed to examine the underlying constructs within the collected data (DeCoster, 1998). By clustering variables along dimensions, factor analysis summarizes a large set of variables into a smaller set of meaningful components (Pallant, 2016). Although the design of the questionnaires was developed from some research findings and frameworks in the existing literature, the correlations among the variables need to be tested and verified. For modified or newly developed scales, EFA would be appropriate in exploring or verifying the underlying structure of observed variables (Pallant, 2016).
The pattern matrix in Table 3 shows four components: (1) related online support; (2) online materials and feedback; (3) online interactions; and (4) online assessments. Communalities, which represent the extent of variance explained in each item, indicate a poor fit if very low (<0.3) (Pallant, 2016). All the loadings shown in Table 3 exceeded the absolute value of 0.3, which is considered acceptable for factor analysis (Hori et al., 2011). The four-factor solution represented the expected underlying theoretical constructs, and the Cronbach’s Alpha values for the four sections were .841, .733, .836 and .714, suggesting acceptable internal consistency among the items (Pallant, 2016). Therefore, a decision was made to retain the four-factor solution for the subsequent analyses. However, the cross-loading items Q21 and Q29 were deleted (shown with a strikethrough in Table 3) because their loadings on two components differed by less than 0.2.
Pattern Matrix of Part B.
Rotation converged in 19 iterations.
The first extracted component shown in Table 3 included seven distinct items (Q23, Q24, Q25, Q26, Q27, Q28 and Q30) regarding related online support. This component accounted for most of the total variance in Part B (35.613%). The second component, composed of Q12, Q13, Q14, Q15, Q16, and Q22, was mainly related to online materials and feedback: Q12 and Q13 were associated with online feedback while the other four items were related to online materials. The third component (Q17, Q18, Q19, and Q20) was identified and labeled as online interactions, and accounted for 6.580% of the total variance. Very high loadings were found for the asynchronous and synchronous online interaction items, with values of .912 and .908 respectively. The fourth component (Q9, Q10, and Q11) was related to online assessments and accounted for 5.153% of the total variance.
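The internal-consistency figures reported for the four components follow the standard Cronbach’s alpha formula, alpha = k/(k−1) × (1 − sum of item variances / variance of total scores). A minimal sketch (with illustrative data only; `cronbach_alpha` is a hypothetical helper, not part of the study’s analysis software) is:

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, n_items) score matrix:
    alpha = k/(k-1) * (1 - sum of item variances / variance of totals)."""
    X = np.asarray(items, dtype=float)
    k = X.shape[1]
    item_vars = X.var(axis=0, ddof=1).sum()   # sample variance of each item
    total_var = X.sum(axis=1).var(ddof=1)     # variance of respondents' totals
    return k / (k - 1) * (1 - item_vars / total_var)

# Perfectly consistent (identical) items yield alpha = 1.0
print(cronbach_alpha([[1, 1, 1], [2, 2, 2], [4, 4, 4], [5, 5, 5]]))
```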
Kruskal–Wallis and Mann–Whitney U-tests
This section reports the factors that influenced the respondents’ opinions. Kruskal–Wallis and Mann–Whitney U-tests were performed on the individual questions and on the four components obtained from the EFA, respectively, to explore the factors that influenced the participants’ choices on specific issues. The individual questions were Q10, Q12, and Q13, each of which had a relatively low median and percentage of agreement. The four components were (1) related online support, (2) online interactions, (3) online assessment and feedback, and (4) online materials. Nine independent variables were tested against each question and component to determine which were associated with the participants’ views: participants’ occupation (ME teacher or maritime student), students’ educational qualification, students’ major, students’ grade, dominant teaching mode, employment of an online learning platform, students’ ME ability, online experience, and class size. It should be noted that only a limited number of participants were in “the 5th grader or higher” group; in the Kruskal–Wallis and Mann–Whitney U-tests, the data of this group were used for reference purposes only.
The author first applied the Kruskal–Wallis test to the selected questions and components for each independent variable. Where a significant association between an independent variable and a statement or component was found (marked in Table 4), Mann–Whitney U-tests were used to determine where the significant difference lay between the groups (Pallant, 2016). Cohen’s d was adopted to calculate effect size. According to Cohen (1988), d values of 0.2, 0.5, and 0.8 are interpreted as small, medium, and large effects, respectively.
Factors That Influence the Participants’ Opinions.
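To illustrate the testing procedure described above (with hypothetical group scores, since the study’s raw responses are not reproduced here, and a `cohens_d` helper of our own), the omnibus Kruskal–Wallis test, follow-up Mann–Whitney U-tests, and Cohen’s d could be computed with `scipy` along these lines:

```python
import numpy as np
from scipy.stats import kruskal, mannwhitneyu

def cohens_d(a, b):
    """Cohen's d using a pooled standard deviation."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    na, nb = len(a), len(b)
    pooled = np.sqrt(((na - 1) * a.var(ddof=1) + (nb - 1) * b.var(ddof=1))
                     / (na + nb - 2))
    return (a.mean() - b.mean()) / pooled

# Hypothetical Likert scores from three groups (e.g. different teaching modes)
g1 = [2, 2, 3, 2, 3, 2, 1, 2]
g2 = [3, 4, 4, 3, 4, 5, 4, 3]
g3 = [2, 3, 3, 2, 4, 3, 2, 3]

h_stat, p_value = kruskal(g1, g2, g3)       # omnibus test across all groups
if p_value < 0.05:                          # follow up pairwise where significant
    u_stat, p12 = mannwhitneyu(g1, g2, alternative="two-sided")
    effect = cohens_d(g1, g2)               # interpret per Cohen (1988)
```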
Qualitative Data Analysis
The coding process of this study involved the breaking down, comparison, and categorization of data (Punch, 2013). The first coding stage was open coding. At this stage, as suggested by Neuman (2012), the researcher carefully went through the raw data line by line, identified and compared the conceptual units that emerged from the data, and placed related units in the same coding group. As a result, 15 open codes were generated.
The second stage of the qualitative data analysis was axial coding. During this process, the researcher interconnected the open codes by examining the patterns that emerged from them. Some existing categories from the questionnaires of this research, such as online assessment and feedback, online learning materials, and online interactions, were adopted when establishing the categories for the interview data analysis. Beyond these, emerging categories were also established, such as the functions of multimedia classrooms, ME learning applications (Apps), and online platforms. The open codes obtained from the first stage were reclassified into six categories.
The third coding stage was selective coding, in which the categories obtained from the second stage were further integrated and refined into core generalizations and ideas (Bryman, 2015). In this study, the theme was labeled the current status of online ME education. Table 5 shows the distribution of categories and codes under this theme.
Distribution of Categories and Codes Under the Theme of Current Status of Online ME Education.
The category of “Functions of multimedia classrooms” was the most frequently mentioned one in the interviews, and the most frequent code was “Computers are used without internet access.” According to the interviewees, the multimedia classrooms were not sufficiently utilized for online learning and mainly served the dominant teacher-centered teaching mode.
For the category of “Online assessment and feedback,” four of the six interviewees stated that online feedback was automatically generated by the teaching and learning software. Rather than offering constructive feedback, the software simply provided the correct answers to multiple-choice questions. However, two interviewees stated that some software was useful for listening, speaking, or writing classes.
For the category of “Online learning materials,” many interviewees raised concerns about the lack of access to resources relevant to ME. This can be partly explained by the fact that many overseas resources are inaccessible due to censorship. Apart from multiple-choice questions for ME exams, few online resources could be found in China.
The less frequently mentioned categories included “ME learning applications (Apps),” “Online platforms” and “Online interactions.” Apps were not fully explored for the purpose of ME teaching and learning. As one teacher interviewee stated, preparing ME Apps required a great deal of extra time and effort from the teacher, and it was suggested that there should be incentives for ME teachers to do this work. Online platforms were also not used properly or effectively, since teaching and learning activities were carried out in classrooms; more effort is required to motivate students and teachers to use them. Even though online interactions were convenient, interactions among students and teachers were not as popular as expected, which may partly explain why online platforms and Apps were underutilized.
Discussion
General Status of Online ME Education
This study reveals that online ME education in China was underdeveloped. Many aspects of online ME education, such as online learning materials and online interactions, were merely repetitions or compilations of classroom-based courses. Some online functions had not been fully exploited; for example, online learning platforms were largely ignored in practical learning and teaching.
The data from this research show that the vast majority of ME classes in China were dominated by the traditional teaching mode. This finding is consistent with the literature, which states that current ME learning in China is mainly conducted in an old-fashioned manner (Weng, 2015; S. Zhou et al., 2013). It indicates that limited online teaching and learning facilities are applied in the current system of ME education. However, it was found that the application of some online methods, such as using an online learning platform and online assessment, influenced the participants’ views toward online learning. In this study, compared with the group mainly taught in a traditional teaching mode, the group utilizing online methods had a more positive attitude toward all the factors related to the current status of online ME education in China. Therefore, to make online learning more appreciated by students and teachers, much more effort should be put into overcoming the obstacles to implementing online education.
Although ICT has been applied in ME education to varying degrees, the application is generally limited to the adjunct mode, meaning ICT is used as an auxiliary tool to assist traditional teaching rather than as an inseparable component of the learning process. This finding is consistent with the survey results of Tham and Tham (2011), which indicate that blended learning in Asian countries, including China, is only a form of support for learning, since little consideration is given to instructional design and strategies or to the varied learning needs and styles of learners.
Current Status of Online ME Materials
Online learning materials must be properly designed to ensure the effectiveness of online courses (Ally, 2008). Constructivism places emphasis on personal differences; thus online learning materials should address the needs of individual learners, such as learning styles and learning ability (Ally, 2008), which are largely neglected in ME learning in China. The questionnaire data from this research show that although some online learning materials were provided in ME courses, fewer than half of the participants agreed that the online learning materials provided were appropriate to their English level, and online ME learning materials were not provided in various forms. This implies that the online ME materials presently in use do not meet the individual needs of many maritime students. Research shows that students are more motivated in their learning when a range of relevant online materials is provided to them in different formats (Condie & Livingston, 2007).
The information obtained from the interviews of this research shows some specific problems with online materials. Although some ME learning resources were available online, most of them were reportedly used as self-study materials without proper guidance or requirements. In addition, according to the interviewees, few of these online resources had been systematically applied in ME teaching and learning. The online materials currently in use were mainly presented as e-versions of traditional textbooks and exam question banks. This may explain the findings from the questionnaires that the online materials were not presented in various forms and that the content of current ME textbooks did not satisfactorily meet the practical needs of learners. These features of the current online learning resources not only blurred the boundary between ME and other professional courses, but also made the books boring, inauthentic and impractical (W. Wang, 2015). Exam question banks were compiled to help maritime students obtain competency certificates (Shen & Wang, 2011). However, the widely used question banks turned ME education into urging students to recite questions rather than improving practical language abilities, since many test questions were directly adopted from the question banks (L. Yan & Pyne, 2013). Fan et al. (2017) believe some exam questions lack practical value, since they focus on testing technical knowledge so tricky that even a captain would choose wrong answers. Given the above, it was not surprising to find, as indicated in the questionnaires, that the online materials were inappropriate to the individual English level of a high proportion of students. In this case, there was little difference between online and printed materials in terms of content and format if the online materials were mainly used to pass exams.
In fact, under such a teaching mode, printed materials are preferable to online ones, since printed materials impose less cognitive load than online materials (Chang & Ley, 2006). This can partly explain why some interviewees in this research said they preferred the traditional mode for passing ME exams.
Current Status of Online Assessment and Feedback
Assessment is among the key factors that influence learning and teaching (Gaytan & McEwen, 2007). However, assessment alone does not ensure high-quality learning; only carefully and systematically planned assessments can improve learning outcomes (H. W. Wilson, 2004). Gaytan and McEwen (2007) maintain that a variety of assessments should be employed to meet students’ different learning preferences and needs. However, the results of this research show not only that online assessments were not widely used in current ME study, but also that most of the participants believed their online assessments were not varied in form and did not take students’ individual learning preferences into account. In addition, the content of current online ME assessments was largely not authentic for the purpose of real-life communication, whereas authentic learning and testing content is one of the features of an ESP course (Carver, 1983; Gatehouse, 2001).
Feedback regarding the quality of students’ work is a critical component of online assessment to ensure the quality of online courses (Moore & Kearsley, 2011). Learners should be provided with feedback on their assessments so that they know how they are doing and how to improve (Ally, 2008). Multiple forms of feedback, with consideration of individual needs, can lead to better learning outcomes and higher satisfaction by correcting and improving students’ learning activities from different perspectives (P.-C. Sun et al., 2008). As for online feedback, the questionnaire data indicate that providing online feedback was not a common practice in ME education in China. Consideration of individual needs in feedback was still scarce in practice: only 15.7% of the questionnaire participants and two interviewees reported that some kind of personalized feedback was provided.
This research found that multiple-choice questions, which could be automatically evaluated by software, were the main form of online assessment. Generally, the design of such software was very simple, providing little constructive feedback or explanation beyond the correct answers. As such, the benefit of online assessment and feedback was limited to instant feedback without further constructive comments. If feedback consists simply of the answers to multiple-choice questions, it can hardly contribute to the development of students’ critical thinking or the improvement of their English proficiency. However, some ME teachers favored such a simple form of feedback because the instant statistical data provided by the software made it easy to check the learning progress of individuals and the whole class, greatly reducing ME teachers’ workload in terms of marking and monitoring learning progress.
The majority of the participants in this research were not satisfied with the current ME assessments and feedback, which were largely centered on ME exams. Many of them believed that the current ME exams could only evaluate students’ familiarity with the exam questions rather than their real ME proficiency. Although Chinese seafarers have passed ME exams and obtained seafarers’ certificates of competency, many of them still have difficulty communicating with foreign seafarers on board (Fan et al., 2017). Therefore, the validity of the current ME assessment is open to doubt.
Current Status of Online Interactions
Online interaction is an inseparable constituent of effective online instruction (Swan, 2002). Studies have found that increased interaction is related to better performance and higher learning satisfaction (Bocchi et al., 2004; Swan, 2001; G. Wilson & Stacey, 2004). This research indicates that the practice of online interaction in ME education still needs a great deal of improvement: none of the interaction types investigated had an agreement rate of over 50%. Synchronous interaction was the least frequently used in ME instruction, while student-teacher interaction was the most frequently performed type. More asynchronous than synchronous online interaction was conducted, and students interacted more frequently with ME teachers than with their peers.
The interviews revealed that student-teacher interaction mainly focused on explaining the correct answers to exam questions or completing assignments. Students normally interacted online with the teacher in chat groups or through email, so text became the main communication channel. This finding is consistent with the conclusion of C. Li (2016) that text communication is the main form of online communication for Chinese students. Teachers’ limited technical skills may be one of the major contributors to this problem (Su et al., 2005). As found in this research, although teachers now perceive interaction as an important aspect of successful learning, many of them had difficulty making their online courses as interactive as they wished. Some teachers admitted they would avoid using sophisticated technologies when designing or delivering online courses.
However, text communication alone has limitations in addressing complicated issues (Swan, 2001). For example, this form of interaction cannot develop the real communication skills that seafarers need in their careers. Moreover, text communication lacks other cues, such as facial expressions and body language, to aid learning, and can lead to textual ambiguities (C.-Y. Chen et al., 2011). Many opportunities that ICT provides, such as voice communication, visual aids and learning platforms, cannot be realized through text communication. As such, the text-driven nature of the technology has been a point of criticism of online learning (Thorne, 2003). More innovative online interaction strategies, such as case studies, debates, role-plays and games, should be introduced into ME teaching and learning (Northrup, 2009).
This research found that most online interactions among Chinese students occurred out of class rather than in class. The main form was discussion in chat groups formed in mobile apps such as Tencent or WeChat. Within these groups, members could send messages in the form of text, pictures, voice and video clips. However, in most cases found in this study, online interactions were largely limited to posting text questions on the screen and seeking answers to them. The role of online interactions needs to be emphasized to develop students’ critical thinking, increase their engagement and encourage them to express their ideas.
It is noticeable that even in an online environment, Chinese students rely heavily on their teachers during the learning process (L. Chen et al., 2015). As reported in the questionnaires, student-teacher interaction occurred more frequently than student-student interaction. Some interviewees believed that without proper guidance and supervision from ME teachers, the learning quality of the chat groups could not be guaranteed, and the groups would easily drift to other topics or go silent. Rourke et al. (2007) believe that even simple online interactions among students require a great deal of facilitation. The high reliance on the teacher can lead to a low level of engagement in online interactions, which is a major concern for online ME courses in China (Davis et al., 2016). One reason for this reliance is that Chinese students, influenced by traditional Chinese culture, are usually too timid to openly express their ideas (Gao & Legon, 2015). Another is that some exam questions are very tricky, and most students lacked the confidence to figure out the correct answers by themselves (Fan et al., 2017); reliable answers were therefore expected from their teachers.
Currently, the online interactions conducted for ME studies were quite simple and limited, and did little to promote interaction compared with the traditional teaching mode. As reported in this research, a large proportion of the participants thought the online interactions conducted for ME studies were not beneficial enough. The current forms of online interaction may limit their practical effect: as discussed above, text communication became the main form of online interaction, yet it alone cannot embody all the opportunities that ICT provides. Its limitations would quickly drain the learning interest of some students, especially those with a low level of English proficiency and a lack of online communication skills, who need outside incentives (K. Li, 2017). In this research, only a small proportion of maritime students considered their ME level good (9.8%) or excellent (2.6%), and no ME teachers regarded their students’ ME level as good or excellent. Considering the relatively low language proficiency of Chinese maritime students, more interesting and sophisticated online interactions, such as online games and online cooperation, should be introduced into ME education to engage them. Special attention should also be paid to improving students’ internet language as well as their basic English competence (Fu, 2008).
Current Status of Related Online Support
This research found that the majority of Chinese MET institutions provided some kinds of online tools and peripheral support for students to study ME. A survey shows that most Chinese maritime students possessed at least one online device (Y. Liu & Yu, 2016). Possession of such online tools and support is one of the prerequisites for implementing online ME education in China. However, it was reported that some online tools did not perform stably during the learning process, technical support was not provided in a timely manner, and the online devices provided on campus were insufficient for students. Improvement is needed in internet access and quality. For example, internet access in ME classes was sometimes blocked to avoid possible learning distractions. In addition, WiFi was far less reliable in Chinese universities than in Western universities (Zhu & Krever, 2017).
Most of the present online support for ME education was limited to technical assistance and library resources. In fact, online learning support comprises a much wider range of considerations, such as online educational counseling, study skills assistance, ongoing program advising and access for students with disabilities (Moisey & Hughes, 2008). More supportive aspects of online learning should be taken into consideration.
It should also be noted that the value of online support depends on how it is managed and designed. The online tools provided will be perceived as being of little value when their implementation is ineffective (Armstrong, 2011). For example, timely technical support is closely related to instructors’ adoption of online teaching (W.-T. Wang & Wang, 2009) and students’ level of learning satisfaction (Lim et al., 2007). If an online library is not well designed, most of its information repository will remain untouched (Moisey & Hughes, 2008). As such, every effort should be made to make online support effective.
Conclusion
This study mainly explored four components of the current status of online ME education in China through analyzing the perceptions of Chinese maritime students and ME teachers: online materials, online assessment and feedback, online interactions and related online support.
It found that the forms of online materials applied in ME education were simple and outdated, and their content could not meet practical needs and standards. Few online assessments and little feedback were provided with consideration for individual needs. Online interactions with ME teachers were largely limited to seeking explanations for exam questions, and students were dependent on their teachers in online interactions. Some basic online support was provided by the majority of Chinese MET institutions, although most of it was limited to technical assistance and library resources.
Consideration for individual needs was highlighted in this research. This element should be incorporated into every aspect of blended learning, such as designing assessment and feedback, choosing learning materials, performing online interactions and providing online support.
The participants in this research were limited to ME teachers and students. Other interested parties, such as institutional policymakers, administrators, and technical support staff, may have their own opinions on online ME education. Further research could investigate online ME education in a specific MET institution, taking these interested parties into account.
