Abstract
Practitioners of Human Factors and Ergonomics (HF/E) must frequently adapt to the constraints of organizations, multidisciplinary teams, changing customer requirements, and development lifecycles. Tradeoffs are made in applying techniques and levels of rigor to ensure that the highest caliber of HF/E is achieved while remaining adoptable and impactful in practice. HF/E professionals, and particularly early career professionals, benefit from positive examples of HF/E practice in a variety of applied settings. In this panel, we will present perspectives and success stories that enable community dialogue about the art of the possible for HF/E in application. Success stories of applying HF/E methods and practices at work are a thread deserving of continual discussion, one that adds to the adaptive repertoire of our field and supports the next generation of HF/E professionals.
Glenn Lematta, The Mitre Corporation
Panel Chair
As an early career professional in Human Factors and Ergonomics (HF/E), my role over the last two years has focused on increasing adoption of HF/E methods in applied settings. I am currently developing guidance for Human-Machine Teaming assessment (Lematta et al., 2024). This work aspires to make methods and tools more accessible to audiences that generally lack expertise in HF/E and related fields. Developing practical guidance requires confronting a question that is central to applied HF/E and the focus of this panel: How do we achieve a high caliber of HF/E practice given the constraints of organizations, multidisciplinary teams, changing customer requirements, and development lifecycles?
Klein et al. (2023) introduced the concept of “Minimum Necessary Rigor” to make sense of this question in the context of human-AI work system assessment. They posited eleven requirements for assessment, including “The participant’s task should be ecologically valid.” Their article adds to the relatively small number of published articles that describe tradeoffs made to implement HF/E techniques in practice. There is a strong demand signal to elevate discussion on the adaptive repertoire of our field: to solve practical problems, help early career professionals, and cultivate expert consensus and novel research topics on the art of practice. Panelists will share success stories and lessons learned applying HF/E in a variety of domains, such as healthcare, aviation, and the military. They will also share their interpretations of “Minimum Necessary Rigor” and discuss ways to further the adoption of HF/E in practice. Each panelist below was asked:
• Question #1: What is at least one example of a success story applying HF/E in systems development?
• Question #2: How do new HF/E practitioners know they are doing a good job?
• Question #3: What is your perspective on tailoring assessments to work constraints and “Minimum Necessary Rigor”? What tradeoffs are you comfortable making and what are your non-negotiables?
• Question #4: What are some ways to further adoption of HF/E in practice? What has worked well for you?
Russell J. Branaghan, Research Collective and Arizona State University
I am excited for this panel because the questions span the practice of HFE in multiple domains. This reflects my 35-year career progression. I started in industry (HP and IBM), worked in consulting (e.g., Fitch and Lextant), spent 15 years as an academic (Arizona State University and Northwestern University), and have found myself right back in consulting (Research Collective). Some of my most relevant work has been in consulting, but I cannot share details of those proprietary projects. Instead, I will mention a few examples from my academic days.
In graduate school, I became interested in how domain experts organize their knowledge. I focused on uncovering and representing that knowledge in a way that informs product design. It enabled the design of superior menu-based systems, instructions for use, infographics, educational curricula, and so on. For example, Branaghan et al. (2010) redesigned a police cruiser information system to display only the most pertinent information for each call. This decluttered the display so that information could be found more quickly and easily.
Branaghan et al. (2011) used a similar approach to redesign an instructor-operator station for fighter pilots. The redesigned system yielded significantly faster performance due to a more intuitive clustering of commands. Importantly, the improvement was immediate, even though the operators had used the previous design for years. Finally, Jolly et al. (2013) used cognitive task analysis to redesign and validate a cognitive aid for reprocessing endoscopes. This significantly reduced the number of reprocessing errors, which in turn can reduce endoscope-acquired infections.
Regarding necessary rigor, my colleague David Huron, at Ohio State, convinced me to think of each research project as a transaction: I invest time, effort, and money to answer a research question. The opinion of Klein et al. (2023) is related to this. In every environment, even the most buttoned-up academic ones, we attempt to do our best work with limited resources and numerous constraints. One problem, however, lies in the reporting and retelling of the research. People often fail to remember the methods and constraints of the work, instead remembering only the results. Because of this, results from even limited work receive just as much credence in the retelling as those from the most rigorous, controlled experiments.
I find it useful to ask people, especially executives, to use their own products. When they do this, problems become abundantly clear. Similarly, I often require developers to observe usability tests. These two activities enable colleagues to recognize the importance of HFE.
Sylvain Bruni, Aptima, Inc.
Four years into my tenure at Aptima, I was asked to join my boss in writing a proposal with a local hospital in Boston. The goal was to humanize patients and reduce harm in intensive care units (ICUs) through technology insertion. This was quite a departure from the typical work I performed in the defense domain. We won the work and, 12 years later, it is still ongoing! Leveraging core human factors techniques (persona and use case generation; co-design with patients, families, and clinical and nursing personnel; rapid prototyping; iterative testing and feedback collection; human-centered agile engineering), we developed a series of applications aimed at fostering better patient and family engagement, identifying and responding to risks, and optimizing rounds in ICUs (Bruni & Cocchi, 2016; Bruni & Sarnoff Lee, 2016; Bruni, 2018a). The work was such a success that the hospital invested its own funds into transitioning our patient engagement application into a commercial-grade product: MyNICU (to support parent-clinician engagement in the neonatal ICU) was first deployed in December 2017 (Bruni, 2018b). Six years later, version 3 of the product is now available in multiple languages and has supported over 2,200 families and 2,250 babies! Feedback from parents and hospital personnel has been exceptionally positive (Bruni & Erickson, 2022). We believe this is in no small part due to our practice of human factors engineering, which created a solid core foundation for the product.
Measuring success is a tricky question because the definition of “success” varies, for example, depending on the organization an HF/E practitioner belongs to and the domain in which they practice. At a high level, I believe there are three core types of measures I look for:
(1) User feedback: How do my users respond to what my team delivers to them (whether that’s a product itself or a report)? Is it useful, usable, understandable? Does it improve their performance (however defined) and satisfaction?
(2) Peer feedback: How do my colleagues (whether other HF/E practitioners or not) respond to what I contribute? Are my skills and my work aligned with the larger context of work?
(3) Business metrics: How does my work contribute to higher technology/product adoption, or transition, or revenue?
Ultimately, measures of success are dynamic and adaptive. As others have suggested, engaging with more senior HF/E practitioners and mentors is an excellent way to learn how to navigate these waters.
My experience has been that tailoring methods to achieve minimum rigor is often a requirement. While some of my colleagues work on furthering the science of human performance, I practice human factors engineering on the D side of the R&D spectrum, meaning an end “product” needs to be created, from which value can be demonstrated. In my 20 years in the field, only once has a customer requested (i.e., funded) what I call a “no-kidding assessment,” that is, a controlled experiment as learned in grad school! Most often, user feedback is collected at various levels of iterative design and development, but from a very limited or approximate pool of users. One key lesson learned from this is the need for instrumentation of prototypes and products, such that performance data can be collected unobtrusively and consistently, without creating a full experiment or test session. In other words, assessment continuously occurs through primary use.
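To make that concrete, here is a minimal sketch of such instrumentation (the names and structure are illustrative assumptions, not Aptima’s actual tooling), written in Python:

    import json
    import time
    from pathlib import Path

    class InteractionLogger:
        """Unobtrusively records user interactions during normal prototype use."""

        def __init__(self, log_path="interaction_log.jsonl"):
            self.log_path = Path(log_path)

        def log(self, event_type, **details):
            # Append one timestamped JSON record per interaction; no experimenter,
            # no test session, and no user-facing overhead.
            record = {"timestamp": time.time(), "event": event_type, **details}
            with self.log_path.open("a") as f:
                f.write(json.dumps(record) + "\n")

    # Illustrative use: hooks placed in the prototype's own task flow
    logger = InteractionLogger()
    logger.log("task_started", task_id="order_entry")
    logger.log("task_completed", task_id="order_entry", errors=0, duration_s=42.7)

With hooks like these in place, every ordinary use of the prototype contributes performance data, which is the sense in which assessment occurs continuously through primary use.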
I have several recommendations for furthering adoption of HF/E. First, I advise that we start showing and stop telling. HF/E has been in practice for decades, and examples abound to make the case for human factors (whether with before/after use cases or hands-on demos of human-centered technology). Trying to convince decision-makers with a scientific or academic lecture on the values and benefits of HF/E is doomed. That also means celebrating small victories and milestones within a project and showing leadership the accomplishments achieved through the practice of human factors engineering. Second, I have observed changes in the role of human factors engineers in industry, and I support fully embracing those changes. HF/E practitioners are uniquely positioned to be project managers, product managers, or product owners. In such roles, we advocate for our users and can lead HF/E practice in support of that advocacy. In so doing, we expose non-HF/E staff to human factors techniques and methods. Ultimately, HF/E becomes the underlying foundation for great product development, even if it is not explicitly affirmed as its own discipline.
Steve Dorton, The Mitre Corporation
One success story was in applying a human factors approach to the development of a common user interface for satellite command and control. We employed a variety of methods (participatory design tools, focus groups, cognitive walkthroughs, etc.) to drive requirements and designs, which were implemented in a prototype. With less than 2 hr of training on the prototype, real-world operators had better performance, lower workload, and higher situation awareness than with the current user interface, with which they had years of experience (Dorton et al., 2021).
There are several measures of success as an HFE, none of which is perfect by itself. Concurrence from senior practitioners (especially ones who may be jaded from being embedded in non-mature organizations) can be helpful, as can impact on the project, and the project’s impact on the users and their work. These things are not always measurable and are often out of your control (e.g., you can do everything right, and the project will still fail). Because of these challenges with outcome-based measures, it’s important to focus on process-based measures. Are you bringing to bear the breadth and depth of your knowledge? Are you using the right methods? Are you translating findings into actionable outputs for other engineers and scientists? Ultimately, is the user satisfied with what you’re doing on their behalf?
Minimum necessary rigor should be embraced in practice, where we demonstrate return on investment (ROI) as a function of impact versus investment in HFE (dollars, schedule, etc.). One of the easiest ways to increase ROI is by reducing the denominator: choosing leaner methods. There can be a powerful and subconscious draw to view HFE activities as a controlled experiment, especially for new graduates having recently completed a thesis or dissertation. In practice, we should err on the side of rapid and lightweight methods, not relying on analyses with inferential statistics unless necessary. There have been numerous calls for developing more pragmatic tools for human factors, cognitive, and resilience engineering (e.g., Bruni, 2022; Dorton et al., 2020; Dorton et al., 2023a; Dorton & Stanley, 2024), especially as agile development becomes the default technology development lifecycle. As such, I believe scenario-based methods such as tabletop exercises and wargames provide immense ROI, elegantly gleaning insights on human cognition in the context of complex sociotechnical systems (Dorton et al., 2023b).
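Read as a simple ratio (the notation is ours; the remarks above imply it without writing it out):

    ROI = Impact / Investment (dollars, schedule, etc.)

Halving the investment, for example by substituting a cognitive walkthrough for a controlled experiment, roughly doubles the ROI whenever the impact is preserved.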
Furthering HF/E adoption boils down to impact and the communication of that impact. As I’ve argued in the past (Dominguez et al., 2021), the best way to increase adoption is to be a competent and enthusiastic practitioner of your tradecraft. Having great impact but not communicating it effectively is a problem. Improvements in situation awareness, workload, usability, resilience, and even mission performance require some translation to make their impact obvious to program managers, who are usually measured by none of those things but rather by the budget and schedule of development. Conversely, having great communication but poor technical work with little or no impact is also a problem: it dulls the brand and reinforces tropes that HFE is “common sense” and that untrained engineers from other disciplines can do it as a collateral duty. We must be both impactful and communicate that impact effectively.
Eric Holder, DEVCOM Army Research Laboratory
I have been working as an HF professional for more than 20 years; I have been the new practitioner, mentored new practitioners, and advocated for as well as explained HF results to management and other stakeholders. Here are some of the core lessons I have learned for new practitioners.
First, you will almost always need to adapt your methods and approach to the given situation. It will rarely match exactly any book description, academic experience, or even your prior work. The flexibility to find creative ways to get good information to answer the key questions will be your best friend. This can be for many reasons, but is often driven by the access you have to representative users and their time, related knowledge, context, and system readiness level.
As an example, more and more HF practitioners are evaluating systems, such as artificial intelligence and other cutting-edge tools, that users are not familiar with. This forces HF practitioners to find surrogates or to focus on existing workflows and how they will have to be adapted. You will need to be very clear in explaining the proposed system, address assumptions and realistic expectations, and often extrapolate from current practice to potential value added. There are many ways to accomplish this, such as task analysis methods (with or without cognitive components), but the core of your work will be driven by understanding the target users, workflows, gaps, needs, and constraints.
One of my early mentors advised me, “Eric, you need to get out there with users and start asking questions and writing down what you learn.” Find ways to identify and document those findings in ways that can help guide others on the team; a long formal report will rarely be the way to do that. Related to this is taking the time to identify the fidelity you need to get those answers. As an example, I was working on a project developing a visual flight tool for pilots. The team needed a way to conduct task analysis and test out concepts and prototypes. Flight time added big costs and safety concerns, and the available flight simulators did not model the real-world features of the environment. In this case, comparison to real paper maps was an important aspect of our assessment. We found a flight simulation tool buried in Google Earth that contained the realistic world features we needed, though not realistic flight features, and that worked well for the data and questions we needed to answer.
Knowing that you are doing a good job is about recognizing your impact on design, development, or whatever outputs you are involved with. In the real world, there are often no direct feedback loops for what your input did. Features or other improvements you identify may show up months to years later, and tracing them directly back to your input may not be possible. Often vendors and developers will have a large backlog of feature improvements and, unless you work for them, it is not in their interest to give you credit. A good indicator of your usefulness is finding yourself invited to more and more emerging events, asked to provide input regularly, or seeing your input show up in other people’s products and briefings. Another indicator that you are on the right path is establishing a useful panel of users for feedback and knowing whom to ask which questions. This will position you to be adaptable in getting the answers you need.
To increase adoption of HF input, I will talk about two channels: formal requirements and leadership buy-in. Regardless, you will need leadership to buy into the need for HF if you want to get paid to do your work. This buy-in will remain vulnerable to leadership changes until the need gets formalized in requirements for a domain (e.g., safety, medical, military), such as Human Systems Integration requirements for military acquisition programs. In any other situation, management needs to see the value before spending money on it, and what gets justified is often a very lean HF thread: mostly interface testing rather than more conceptual or developmental testing. Sometimes learning to moonlight in other roles (e.g., customer representative) on a diverse team and then showing value is likely the best path. Quick and easy access products, like personas and use cases, have shown value in helping larger teams start to think about users earlier. We have also had success organizing user engagement sessions and inviting the whole development team to attend, as they often learn things that open their eyes to the need to understand the users. Do your due diligence and learn as much as you can about the users and the workflow before these engagements, as it will improve your outputs. This can be reviewing documents, online research, informal initial interactions with SMEs, or whatever you can find that helps. It may be disappointing, but one of the most effective focus areas for gaining buy-in is identifying human-related hazards that could lead to legal liability.
Emily Patterson, The Ohio State University
I have worked as an HF academic practitioner in intelligence analysis, NASA mission control, DOD research, DARPA, and numerous healthcare settings. I have also been an applied usability and user experience professional at Apple and the VA. I draw on all these experiences for these questions.
For success stories, I tend to lean into experiences where there was a positive impact on operations. These include:
• adding an overview display in Bar Code Medication Administration software to show planned activities that had fallen through the cracks (Chapman & Carlson, 2004).
• modifying an Electronic Health Record’s clinical reminder function and demonstrating improved learnability and usability in a simulation (Saleem et al., 2007); these results were framed as “design seeds” that informed re-engineering efforts for the VA’s electronic health record (CPRS) (Militello et al., 2008).
• implementing new auditory tones on a Secondary Alarm Notification System for continuous cardiac monitoring in three affiliated hospitals, which contributed to a clinically significant reduction in response time for Code Blue alarms (Hansen et al., 2023).
An indicator that you are doing a good job is being invited to support strategic decision making about new opportunities. For example, as a hospital considers integrating new wearable sensors that could impact alert fatigue, or a new automated “bot” scheduler that autonomously calls patients to prepare for surgery, are you invited to join the discussion about whether to purchase the hardware or software and how to safely integrate it into operations, including adequate assessments prior to implementation? The goal is a long-term partnership with domain experts and administrators who are considering making a change, supporting their vision while advancing their cause by providing specialized expertise.
Regarding tailoring assessments to work constraints, when you are supporting design and implementation, nearly everything is negotiable that does not impact safety or the core mission of the domain. If the role is that of an objective third party validating effectiveness or safety, then following best practices that are well documented in standards is more important. There are some special cases where there is a “floor” for participation, such as being informed about how a “black box” algorithm works in order to assess it with challenging evaluation scenarios.
To further adoption of HF/E, some strategies that have worked for me are:
• “Design seeds” enhance the ability for others to incorporate portions of an integrated concept without requiring the addition of a new software application (McDonald et al., 2017)
• Adding an overview display, potentially on a tab, to avoid changing interfaces that users are already comfortable with and trained to use
• Separating recommendations that require different timelines and different stakeholders’ agreement for successful implementation; for example, hospital and unit business rules are under the local authority of implementers, whereas features in a software package require developers to change their product. Similarly, a national clinical reminders committee for a medical system with standardization requirements is a different stakeholder group from someone who has the authority to modify the interface and content of a locally developed and implemented clinical reminder.
Footnotes
Declaration of Conflicting Interests
The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Funding
The author(s) received no financial support for the research, authorship, and/or publication of this article.
