Abstract
Background
Patient safety and improved outcomes are core priorities in healthcare, and effective handoffs are essential to achieving them. Validating handoff tools through simulation is a novel approach.
Methods
The construct validity and instrument reliability of the I-BIDS© tool were tested. In Phase I, construct validity was substantiated with a convenience sample of 21 healthcare providers through an electronic survey, and the Content Validity Ratio (CVR) for each item was calculated using Lawshe's method. In Phase II, interrater reliability was tested in a simulated handoff scenario with graduate nursing students and two raters, and students assessed simulation effectiveness.
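For reference, Lawshe's CVR for a single item is the standard formula below (general background, not restated in this abstract), where $n_e$ is the number of panelists rating the item "essential" and $N$ is the total number of panelists; the critical value of 0.42 reported in the Results is consistent with Lawshe's table for a panel of about this size.

```latex
CVR = \frac{n_e - \tfrac{N}{2}}{\tfrac{N}{2}}
```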
Results
Construct validity was evaluated, and 17 of the 25 items were significant at the critical value (0.42). Items scoring below this threshold were removed, and the tool was reduced by one category. Weighted kappa (Kw) with quadratic weights was computed from the scenario data to assess agreement between the raters on handoff performance. There was a statistically significant agreement between the two raters, Kw = .627 (95% CI: .549–.705), p < .001, indicating good strength of agreement. The SET-M total mean was 55.64 (SD = 2.46).
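The quadratic-weighted kappa reported above can be reproduced with standard tooling. A minimal sketch follows, using scikit-learn's `cohen_kappa_score`; the rating data are illustrative placeholders, not the study's data.

```python
# Sketch: quadratic-weighted Cohen's kappa for two raters.
# The ratings below are hypothetical ordinal scores (e.g., 0-3 per
# handoff item), included only to show the computation.
from sklearn.metrics import cohen_kappa_score

rater_a = [3, 2, 3, 1, 0, 2, 3, 1, 2, 2]
rater_b = [3, 2, 2, 1, 0, 2, 3, 2, 2, 1]

# weights="quadratic" penalizes larger disagreements more heavily,
# which suits ordinal performance scales
kw = cohen_kappa_score(rater_a, rater_b, weights="quadratic")
print(f"Weighted kappa (quadratic): {kw:.3f}")
```

Quadratic weighting is a common choice for ordinal data because a two-point disagreement counts four times as much as a one-point disagreement.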
Discussion
The tool demonstrated initial validity and interrater reliability. The SET-M Learning subscale showed the widest range of scores, suggesting the greatest opportunity for improvement. Using the tool in further simulated scenarios may be one way to test the items further.
Conclusions
Simulation was effective in facilitating the evaluation of the tool.