Evaluation of electronic decision support systems

The National Electronic Decision Support Taskforce report Electronic Decision Support for Australia's Health Sector (published January 2003) identified the need for evaluation of electronic decision support systems (EDSS). In particular, it highlighted the importance of promoting evaluation of the efficacy and effectiveness of EDSS as a matter of course, using rigorous and validated methodologies.

It would be difficult to propose a single evaluation methodology that meets the diverse needs of the EDSS community. Different user groups have different evaluation tasks and objectives, depending on factors such as the stage of system development and the intended goals for the system.

By providing a set of evaluation guidelines, this web site represents the initial stage in promoting the evaluation of EDSS. Over time, guidelines will be added to this initial set, creating what is hoped will be an evolving resource for the EDSS community.

Guideline development

The topics of these guidelines are based upon typical EDSS evaluation questions. These questions were identified during focus groups and individual interviews with those involved in the development, purchase and evaluation of EDSS. The Centre for Health Informatics, University of New South Wales, authored these guidelines based on experience, literature reviews and consultations with local and international experts in the field.

These guidelines are intended for those who are new to the evaluation of EDSS, rather than for experts. Their aim is to raise understanding of each topic area, with pointers to useful journal references, books and web sites for those seeking more information. They are not intended to cover every aspect of each topic, but to stimulate thinking around key techniques and to foster an appreciation of the importance of evaluation.

Evaluation - an ongoing process

Øvretveit (1998) provides the following definition of evaluation:

"Evaluation is making a comparative assessment of the value of the evaluated or intervention, using systematically collected and analysed data, in order to decide how to act. Evaluation is attributing value to an intervention by gathering reliable and valid information about it in a systematic way, and by making comparisons, for the purposes of making more informed decisions or understanding causal mechanisms or general principles".

Ammenwerth et al (2004) use the concept in the following sense:

"Evaluation is the act of measuring or exploring properties of a health information system (in planning, development, implementation, or operation), the result of which informs a decision to be made concerning that system in a specific context".

While evaluation of an operational system is important, evaluation during system development is also a priority (Ammenwerth et al 2004; Brender 1998; Miller 1996; Moehr 2002). The medical informatics community recognises the evaluation of EDSS as a global priority. Iterative cycles of design and evaluation at each stage in the development of an EDSS, with refinement based on the results of each evaluation, will lead to improvements in the quality and safety of these systems.

Miller (1996) points out that evaluation needs to be part of core activities: not just when a system is developed, trialled in laboratory and then clinical settings, or implemented, but as part of its ongoing maintenance. Ideally, system evaluation should be an ongoing, strategically planned process, not a single event or a small number of episodes.

Such a process would ensure that if changes are made to the system (such as modification of a knowledge base, or an upgrade to the system's software), their impact is evaluated.

Using these guidelines

People, technologies (such as EDSS) and conversations interact as a complex "system" within the specific context in which health care occurs (Coiera 2004). To design and evaluate systems that take into account both social and technical influences, all three components, and the interactions between them, must be understood. These guidelines are intended to provide an understanding of how aspects of these three key areas can be evaluated.

Each guideline provides an overview of its topic, with references to other guidelines and to specific evaluation techniques (such as how to conduct a focus group). A glossary of terms is also provided. Guideline support is provided for the following areas:

Objectives of the EDSS

You may need to assess whether the EDSS is achieving its objectives, for example whether its use improves compliance with recommended treatment protocols. If the EDSS does not support users in achieving the intended outcomes, it is unlikely that work practices are being supported. For guidance in this area please refer to the guidelines:

  • How do I evaluate the clinical impact of an electronic decision support system?
  • How do I evaluate the effect of an electronic decision support system on work practices?

Functionality and requirements of the EDSS

An EDSS can over-support or under-support its users. An EDSS situated in an under-resourced environment, such as one without enough terminals, is likely to prevent its users from working at their optimal level. To ensure that you have a set of requirements that meets the needs of health professionals and their work context, it is important to evaluate the knowledge content of the EDSS and to understand what questions need to be asked about the technical aspects of the system. For guidance in this area please refer to the guidelines:

  • How do I determine the requirements for a system?
  • How do I evaluate the knowledge content of an electronic decision support system?
  • What must I consider when conducting a technical evaluation of an electronic decision support system?

User acceptance of an EDSS

For a system to become part of routine practice, it needs to be usable and accepted by its users. A system that is not user friendly is unlikely to be used or accepted. An EDSS that increases unnecessary cognitive load is likely to increase the chance of errors and to encourage poor work practices, such as working around the system. To avoid these problems, please refer to the guidelines:

  • How is the usability of an EDSS evaluated?
  • How do I evaluate user acceptance of an electronic decision support system?

Systems integration

An EDSS that is not compatible with other systems can create unnecessary workload for its users, such as having to enter a drug order twice, or having to obtain decision support from one system and carry out that decision using another. For guidance on evaluating how well an EDSS is integrated with other information systems, please refer to the guideline:

  • How do I evaluate the interoperability of an EDSS?

Managing an evaluation

A foundation for the conduct of any evaluation is the structure and management of the evaluation project. An overview of why this matters, and principles to consider, is included in the guideline:

  • How do I manage an EDSS evaluation project?

Feedback

As with any guidelines, and particularly in an emerging area such as medical informatics, it is expected that evaluation techniques will be refined over time.

We would welcome your feedback on this set of guidelines, as well as suggestions for topics for future guidelines. Please send your feedback or suggestions to [email protected].

References

  • Ammenwerth, E., Brender, J., Nykänen, P., Prokosch, H.U., Rigby, M. and Talmon, J. (2004) Visions and strategies to improve evaluation of health information systems: reflections and lessons based on the HIS-EVAL workshop in Innsbruck. International Journal of Medical Informatics, 73, 479-491.
  • Brender, J. (1998) Trends in assessment of IT-based solutions in healthcare and recommendations for the future. International Journal of Medical Informatics, 52, 217-227.
  • Coiera, E. (2004) Four rules for the reinvention of health care. British Medical Journal, 328, 1197-1199.
  • Miller, R.A. (1996) Evaluating evaluations of medical diagnostic systems. Journal of the American Medical Informatics Association, 3, 429-431.
  • Moehr, J.R. (2002) Evaluation: salvation or nemesis of medical informatics? Computers in Biology and Medicine, 32, 113-125.
  • Øvretveit, J. (1998) Evaluating Health Interventions. London, Open University Press, pp. 158-180.