Abstract
Manufacturing is a core subject in most undergraduate engineering programs. The hours dedicated to it comprise theory, exercises, training with specific software, and laboratory activities. Evaluating students’ work typically involves several activities, including submitting lab reports, solving numerical problems, and taking exams. Each method has drawbacks and advantages, and instructors choose the examination type based on their experience and expertise. Multiple-choice exams are usually selected when the number of students is large, when the time available for grading and revising grades is short, or when the instructor needs to assess specific theoretical concepts. These exams must be carefully designed to ensure a fair evaluation, with particular attention to the number of questions and the penalty applied to incorrect answers. Images further complicate exam design. Artificial intelligence is being tested in education for a variety of tasks. The present study tests ChatGPT as a tool for balancing the difficulty of manufacturing multiple-choice exams that include both image- and text-based questions, illustrated with examples. The methodology can help instructors reflect on exam design, correct errors, revise wording, and write questions that are easier to interpret. It can also help instructors adjust the difficulty of a question when needed by modifying or substituting some of the answer options and by choosing suitable images.
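The penalty for incorrect answers mentioned above is commonly set using the standard correction-for-guessing rule (a general scoring convention, not a formula stated in this abstract): for questions with $k$ answer options, a raw score of

$$S = R - \frac{W}{k-1}$$

where $R$ is the number of right answers and $W$ the number of wrong answers, makes the expected score of pure random guessing equal to zero, since a guesser gets each question right with probability $1/k$ and wrong with probability $(k-1)/k$.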
