Abstract
As Artificial Intelligence (AI) evolves from tool to teammate, traditional approaches to evaluating human-AI team (HAT) performance no longer capture the dynamic complexity of these interactions. This panel brings together experts from industry and academia to address critical challenges in modeling and evaluating human-agent teams in the era of agentic AI. Distinguished panelists will examine novel methodologies that integrate objective performance metrics with subjective assessments to capture the emergent team states, trust calibration, and communication patterns that define effective human-AI collaboration. Through interdisciplinary dialogue, the panel aims to establish research priorities and methodological approaches for developing evaluation frameworks that can adapt to increasingly fluid team compositions and operational contexts. Outcomes will include a roadmap for future research on HAT effectiveness in high-stakes operational environments where humans, AI systems, and autonomous agents will increasingly share roles and responsibilities.
