
It is well known that healthcare systems face continuous resourcing challenges due to increased demand and a highly stretched workforce. As a consequence, significant funding is being channelled into AI development in an attempt to address these challenges. Undoubtedly AI will shape the future of healthcare, but the question is how seamlessly it will be adopted and integrated into clinical pathways to genuinely support better outcomes.

Expectations surrounding AI are high, and its promise for addressing the challenges faced by the NHS, for example, is considerable. Among the potential benefits cited are enhanced diagnostic accuracy and speed, predictive analytics for early intervention, reduced healthcare costs, support for clinical decision making, and improved access to care.

However, there are significant risks and challenges to overcome. These include difficult integration, over-reliance on AI leading to clinician de-skilling and medical errors, data bias in AI models, and ethical concerns. While AI holds real potential, it also has limitations that developers and users should understand in order to apply the technology effectively.

Real World Issues with AI Development

There are already examples of AI applications in healthcare; however, some of these have experienced teething issues.

IBM’s Watson supercomputer faced criticism for providing inaccurate and unsafe treatment recommendations while being used for diseases like cancer and multiple sclerosis. The AI suggested treatment plans that neither aligned with standard clinical practices nor matched real-world patient data. These issues raised concerns regarding the reliability and safety of AI in making complex medical decisions, leading to decreased confidence in the system.

Google Health’s DeepMind developed an AI system that accurately detects diabetic retinopathy and age-related macular degeneration from retinal scans. The AI achieved high accuracy rates in identifying these conditions, often matching or exceeding the performance of expert ophthalmologists. However, the AI struggled with generalisation across different populations and varied imaging conditions, leading to inconsistent performance compared to initial research trials. This highlights the challenge of ensuring that AI systems are effective and reliable across diverse clinical environments and patient populations.

CureMetrix developed an AI system to assist radiologists in interpreting mammograms. While the AI showed potential in early studies, there were real-world application issues. The AI sometimes produced false positives or missed abnormalities due to variations in mammogram quality and differences in radiologist practices.

While AI holds promise for transforming healthcare, its successful implementation is dependent on the resolution of challenges relating to data quality, system validation, user acceptance, and integration into clinical practice.

Human Factors and Usability in AI

Applying human factors principles and usability engineering methods to the development of AI systems for healthcare applications could address many of the associated limitations, challenges, and risks, which in turn, could potentially influence the adoption and acceptance of such devices.

In addition, integrating usability engineering methods into the development of AI systems for medical devices will ensure that these systems are user-friendly, effective, and aligned with the needs of healthcare professionals.

Here are five key ways in which usability engineering can support AI development for medical devices:

User Research

The application of new AI technologies to medical devices and diagnostics is likely to be novel; and the more novel the application, the more useful the early-stage user insights are for development success. User research is not only helpful for spotting opportunities, by identifying clinical pain points and challenges, but also for understanding the broader clinical context and pathways in which the new technology would be used, the demands on the users, and what might be most useful for supporting clinical decision making.

Safe, efficient and effective AI systems are ideally driven by user research, and should be optimised for the real world using insights that uncover the ‘actual’ versus ‘assumed’ user need. User and clinical needs are ideally derived from primary research rather than assumptions made by development teams.

Mapping Clinical and User Workflows

One significant challenge encountered with AI is integration into existing healthcare workflows and IT systems. Poor integration can result in workflow disruption, increased cognitive load on healthcare professionals, and decreased overall efficiency. Outcomes depend on factors beyond the AI's performance in one specific task, and poor integration may simply shift bottlenecks elsewhere in the clinical pathway. For example, an AI tool that requires extensive manual patient data entry or diverges from established procedures can cause frustration and inhibit adoption.

It can be helpful to map both the clinical pathways and the user workflows surrounding your AI technology, because you cannot completely take the human out of the equation. Humans still play a role at key steps across the whole care pathway, and a feedback loop exists between the AI and the clinician; for example, system feedback may be required to inform clinical decision making.

Usability engineering methods can be used to support, rather than hinder, workflow efficiency, allowing clinicians to focus on patient care, rather than the tech.

Applying Human-Centred Design Principles

Applying human factors and human-centred design principles to AI development for medical devices can help humanise the algorithms used, by representing the diversity of user and patient populations, and humanise the user interface design.

A 2020 study published in Nature found that AI models used to analyse medical images, such as chest X-rays and mammograms, showed performance disparities based on patient demographics. Some models performed worse for women and racial minorities, as they were often trained on datasets that were not representative of diverse populations. These biases can lead to less accurate diagnoses for underrepresented groups, affecting the quality of care they receive. Usability engineering can help address bias by ensuring that AI systems are designed and tested with diverse user groups in mind.
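As an illustration of the kind of check a development team could run, the sketch below compares a model's sensitivity (true-positive rate) across demographic subgroups and flags large disparities. The field names, data, and disparity threshold are illustrative assumptions, not a standard method.

```python
# Hypothetical subgroup-bias check: compare a model's sensitivity
# (true-positive rate) across demographic groups. Field names and
# the disparity threshold are assumptions for illustration only.

def sensitivity(records):
    """True-positive rate over records whose true label is positive."""
    positives = [r for r in records if r["label"] == 1]
    if not positives:
        return None
    return sum(r["prediction"] == 1 for r in positives) / len(positives)

def subgroup_report(records, group_key="sex", max_gap=0.05):
    """Per-group sensitivity, plus a flag if the gap exceeds max_gap."""
    groups = {}
    for r in records:
        groups.setdefault(r[group_key], []).append(r)
    rates = {g: sensitivity(rs) for g, rs in groups.items()}
    known = [v for v in rates.values() if v is not None]
    flagged = len(known) > 1 and (max(known) - min(known)) > max_gap
    return rates, flagged

# Toy example: the model misses more positive cases for one group.
data = (
    [{"sex": "F", "label": 1, "prediction": 1}] * 70
    + [{"sex": "F", "label": 1, "prediction": 0}] * 30
    + [{"sex": "M", "label": 1, "prediction": 1}] * 90
    + [{"sex": "M", "label": 1, "prediction": 0}] * 10
)
rates, flagged = subgroup_report(data)
print(rates, flagged)  # sensitivity 0.7 (F) vs 0.9 (M): gap flagged
```

Running this kind of audit against representative, diverse test data is one concrete way the bias concern above can be surfaced before deployment.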

Usability engineering helps in designing intuitive and efficient user interfaces that facilitate easy interaction. This includes developing clear visual displays, straightforward navigation, and responsive controls. A well-designed UI reduces the cognitive load on users, minimises use errors, and improves overall user satisfaction. It ensures that healthcare professionals can quickly and accurately use the AI system in their clinical workflows.

Optimising Human-Machine Interfaces

While AI may be suited to specific computational tasks within a clinical care pathway, it certainly will not be best placed for all clinical tasks, such as those that require sense checking, creativity in problem solving, or using intuition derived from years of clinical experience.

Optimising the allocation of tasks and functions between human and machine is important for improving outcomes. Playing to the strengths of each helps optimise the human-machine interface, which may increase efficiency, prevent medical error, and improve patient care.

Determining the right degree of automation is an important consideration. Should the AI system support decision making, or should it be autonomous? Automation does not replace people; it changes the nature of their tasks. A risk with AI technology is that hands-on clinical tasks become more of a ‘remote supervising’ or ‘monitoring’ activity, which can divert the clinician’s attention away from the patient and reduce their situational awareness.
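One common pattern for keeping the clinician in the loop is a confirmation gate: the AI proposes, the clinician disposes. The sketch below illustrates that idea; the class, function names, and confidence threshold are hypothetical, not taken from any real system.

```python
# Illustrative "human-in-the-loop" automation gate: the AI suggests,
# but the clinician always confirms, and low-confidence outputs are
# escalated to full review. All names and thresholds are assumptions.

from dataclasses import dataclass

@dataclass
class Recommendation:
    finding: str
    confidence: float  # model's self-reported confidence, 0..1

def route(rec, review_threshold=0.9):
    """Decide how a recommendation is presented to the clinician."""
    if rec.confidence >= review_threshold:
        return "present as suggested finding (clinician confirms)"
    return "flag for full clinician review"

print(route(Recommendation("no abnormality detected", 0.97)))
print(route(Recommendation("possible lesion", 0.62)))
```

Where that threshold sits, and whether a gate like this exists at all, is exactly the "degree of automation" decision discussed above, and it should be informed by user research rather than chosen by the development team alone.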

Consideration should also be given to the limitations of both humans and AI in information processing, as these can affect the safe and effective use of a medical device. This spans the acquisition of information (perception and sensing), analysis and understanding, decision making, and determining suitable actions to implement.

User Testing

As with user research, user testing is also beneficial when developing novel technologies to ensure that devices are safe and effective for the clinical context in which they will be used.

Usability engineering involves conducting iterative usability testing with real users to gather feedback on the AI system’s performance and functionality. This iterative process helps to identify and address usability issues early in the development cycle, making the integration of such technologies as seamless as possible.

Development teams working on new technologies often forget that users are not always aligned in their thinking, and teams can make incorrect assumptions about how products or systems are used in the clinical setting. I see this all the time: assumptions about use prove incorrect because of the broader context of use, or because clinical implications are not fully appreciated or understood. I see clever ideas implemented that simply do not work optimally in a real-world clinical context. Engineers are often surprised when users do not think like them or perform tasks differently than anticipated, and it often comes down to a lack of upfront user research.

By refining developments based on user feedback, the final product or system is more likely to meet user needs, fit seamlessly into clinical practices, and have a reduced likelihood of user errors or dissatisfaction.

Conclusion

Applying human factors to AI development for medical devices could improve usability and efficiency, enhance safety, and reduce use error. This in turn will facilitate integration into clinical workflows, increasing user acceptance and trust in the technology while addressing diverse user needs, and is crucial for ensuring that these technologies are effective, safe, and user-friendly.

Applying usability engineering methods and human factors principles to the development of AI systems for medical devices and diagnostics can help ensure these devices are not only functional but also user-centred. This will improve their adoption and success in real-world clinical settings, leading to greater benefits, smoother integration, an enhanced user experience, and ultimately better patient outcomes.

If you need help integrating human factors and usability engineering into your development or would like to chat with one of our team about your product design and development requirements, please do not hesitate to get in touch:

Via email on design@egtechnology.co.uk, by giving us a call on +44 01223 813184, or by clicking here.