Abstract
Artificial Intelligence (AI) is increasingly used in ergonomics, particularly for assessing musculoskeletal disorder (MSD) risks. Recent advances in vision-based AI have enabled monitoring of MSD risks with ordinary cameras, providing a more accessible and less intrusive alternative to traditional observation-based methods. However, existing AI models, trained on generic computer-vision datasets, lack the keypoints necessary for calculating intricate angles at high-degree-of-freedom (DoF) joints. We present the design and construction of a large-scale 3D human motion dataset for training vision-based AI models for ergonomic risk assessment. The dataset provides 47-keypoint 3D human poses, with keypoints selected to support high-DoF joint angle calculation and vision-based pose estimation, comprising 7 million frames of 10 subjects performing 9 categories of manual material handling tasks. A baseline MotionBert model trained on our dataset achieved a mean absolute angle error of 3.5° and demonstrated its generalization capability on real-world industry videos.
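To illustrate the kind of keypoint-to-angle computation such a dataset enables, the sketch below derives a single joint angle from three 3D keypoints via the angle between the two adjoining segments. The function name `joint_angle` and the three-point formulation are illustrative assumptions, not the paper's exact angle definitions for high-DoF joints.

```python
import numpy as np

def joint_angle(a, b, c):
    """Angle at keypoint b (degrees) formed by segments b->a and b->c.

    a, b, c are 3D keypoint coordinates. This is a generic planar-angle
    illustration; the paper's high-DoF joint angles may be defined differently.
    """
    u = np.asarray(a, dtype=float) - np.asarray(b, dtype=float)
    v = np.asarray(c, dtype=float) - np.asarray(b, dtype=float)
    cos = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    # Clip to guard against floating-point values slightly outside [-1, 1]
    return float(np.degrees(np.arccos(np.clip(cos, -1.0, 1.0))))

# Example: perpendicular segments meeting at the origin give 90 degrees
print(round(joint_angle([1, 0, 0], [0, 0, 0], [0, 1, 0]), 1))
```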
