Algorithmic dashboards promise sharper performance yet often erode autonomy and trust, a contradiction that Goal-Setting Theory (GST) cannot explain. Guided by paradox theory, this study constructs the first competence-paradox model for AI-mediated work. A PRISMA-ScR review of Scopus, Web of Science, and PsycINFO (n = 82) was followed by qualitative meta-synthesis. The synthesis reveals four mutually reinforcing tensions: metric specificity versus discretion, feedback cadence versus psychological safety, optimization versus learning depth, and transparency-plus-voice versus trust. Thresholds surface when prompts refresh as often as every five minutes, exceed 30 per hour, or deliver more than 20 optimization nudges per shift; beyond these points, autonomy falls by 0.40 SD, psychological safety by 0.50 SD, and exploratory behavior by 15%. To offset these losses, a five-lever HRD sequence (goal calibration, hybrid coaching, explainable dashboards, rotational upskilling, and moderated voice forums) converts them into gains. The model equips scholars with falsifiable propositions and gives practitioners a roadmap for steering AI systems toward both productivity and human growth.