Abstract
The integration of artificial intelligence into high-risk domains presents unique design challenges in balancing automation efficiency with necessary human oversight. This article presents a case study of an AI-assisted regulation change management platform designed for financial institutions, where compliance failures can result in significant penalties. The two-stage verification process implemented in this system demonstrates how human-centered design principles can effectively mitigate known psychological pitfalls in human-AI interaction, including automation bias and complacency. Through careful interface design and workflow structuring, the platform addresses the accuracy gap in the LLM's predictions that could otherwise lead to substantial financial and reputational risks.
