Abstract
Use of medical imaging continues to increase, making the largest contribution to the exposure of populations from artificial sources of radiation worldwide. The principle of optimisation of protection is that ‘the likelihood of incurring exposures, the number of people exposed, and the magnitude of their individual doses should all be kept as low as reasonably achievable (ALARA), taking into account economic and societal factors’. Optimisation for medical imaging involves more than ALARA – it requires keeping individual patient exposures to the minimum necessary to achieve the required medical objectives. In other words, the type, number, and quality of images must be adequate to obtain the information needed for diagnosis or intervention. Dose reductions for imaging or x-ray-image-guided procedures should not be used if they degrade image quality to the point where the images are inadequate for the clinical purpose. The move to digital imaging has provided versatile acquisition, post-processing, and presentation options, and enabled wide and often immediate availability of image information. However, because images are adjusted for optimal viewing, the appearance may not give any indication if the dose is higher than necessary. Nevertheless, digital images provide opportunities for further optimisation, and allow the application of artificial intelligence methods.
Optimisation of radiological protection for digital radiology (radiography, fluoroscopy, and computed tomography) involves selection and installation of equipment, design and construction of facilities, choice of optimal equipment settings, day-to-day methods of operation, quality control programmes, and ensuring that all personnel receive proper initial and career-long training. The radiation dose levels that patients receive also have implications for doses to staff. As new imaging equipment incorporates more options to improve performance, it becomes more complex and less easily understood, so operators have to be given more extensive training. Ongoing monitoring, review, and analysis of performance is required that feeds back into the improvement and development of imaging protocols. Several different aspects relating to optimisation of protection that need to be developed are set out in this publication. The first is collaboration between radiologists/other radiological medical practitioners, radiographers/medical radiation technologists, and medical physicists, each of whom have key skills that can only contribute to the process effectively when individuals work together as a core team. The second is appropriate methodology and technology, with the knowledge and expertise required to use each effectively. The third relates to organisational processes which ensure that required tasks, such as equipment performance tests, patient dose surveys, and review of protocols, are carried out.
There is wide variation in equipment, funding, and expertise around the world, and the majority of facilities do not have all the tools, professional teams, and expertise to fully embrace all the possibilities for optimisation. Therefore, this publication sets out broad levels for aspects of optimisation that different facilities might achieve, and through which they can progress incrementally: Level D – preliminary; Level C – basic; Level B – intermediate; and Level A – advanced. Guidance from professional societies can be invaluable in helping users to evaluate systems and aid in adoption of best practice. Examples of systems and activities that should be in place to achieve the different levels are set out. Imaging facilities can then evaluate the arrangements they already have, and use this publication to guide decisions about the next actions to be taken in optimising their imaging services.
Keywords
MAIN POINTS
• Optimisation of radiological protection in diagnostic imaging and image-guided procedures requires provision of clinical images for individual patients that are of sufficient quality to ensure an accurate and reliable diagnosis, with radiation exposure minimised according to the applied imaging technology.
• In medical imaging, optimisation of protection is at two levels: (i) the design and construction of the equipment and the installation where it is used; and (ii) the day-to-day working procedures performed by the staff involved.
• Optimisation will only occur if all staff are properly trained in their roles, and equipment operation is assured through a comprehensive quality assurance programme, with ongoing review of performance that feeds into affirmation and development of protocols.
• Different aspects contribute to optimisation: professionalism within optimisation teams comprising radiologists, radiographers, and medical physicists, each using their unique sets of skills to improve imaging performance; methodology and technology coupled with the necessary expertise to evaluate performance; and organisational processes to manage quality improvement within a structured framework.
• Complex digital x-ray equipment allows dose levels to be reduced without compromising image quality. This requires high levels of knowledge and skill from imaging professionals because, if features are used incorrectly, patient doses can be unnecessarily high without this being apparent. All members of the imaging team must be given the necessary expertise through training, updated regularly, so that they fully understand equipment operation.
• The degree to which an organisation has implemented optimisation will depend on the personnel, facilities, level of knowledge and experience available, and regulatory oversight.
This publication sets out a layered approach to the development of optimisation with broad categories for systems that might be expected to be in place to achieve different levels: Level D – preliminary; Level C – basic; Level B – intermediate; and Level A – advanced. The aim is to guide managers and staff in decisions about the next step to take in their programme of optimisation.
EXECUTIVE SUMMARY
(a) Optimisation is a key principle of radiological protection. Medical exposures make the largest contribution to the exposure of populations from artificial sources of ionising radiation worldwide, so optimisation of such exposures is particularly important. Optimisation of radiological protection for imaging requires radiation dose to be minimised in a manner that is consistent with providing the images/information required for the intended purpose. Digital radiology encompasses all radiological techniques that present images in digital form, for which the appearance can be manipulated to display the image in a form that best suits the purpose, and includes digital radiography, fluoroscopic techniques, and computed tomography. The emphasis on image quality has become crucial in digital radiology with more versatile image acquisition, post-processing, and presentation options. These techniques require a more rigorously defined optimisation process, awareness of underlying technical factors that are not always obvious, and comprehension of the impact of information technology on the displayed image. If the dose is reduced to the point at which image quality is insufficient to allow changes in diseased or damaged tissue to be characterised, the clinical risk of patient mismanagement is likely to be high compared with any additional risk from a higher radiation exposure that gives sufficient image quality. However, cumulative radiation doses from the ever-increasing use of radiology may result in health consequences that, although not immediately apparent, could manifest at a later point in time. Thus, it is a question of balance between different types of risk (potential long-term effects from dose and more immediate clinical consequences), and achieving the correct balance is a challenging task from both technical and professional perspectives.
(b) To achieve successful optimisation, a facility must have sufficient imaging equipment, and enough staff who have been adequately trained in use of the equipment and the information technology features that are available. The optimisation process starts with specification of the equipment required to fulfil the clinical need, and continues through its purchase, installation, acceptance, and commissioning. It includes maintenance and the quality assurance programme which continue throughout the life cycle of the equipment. Optimisation then continues during clinical use of the equipment, with requirements for provision of necessary clinical information by those referring patients for examinations based on accepted guidelines, and appropriate processes for reporting and acting on results of imaging procedures.
(c) Optimisation requires the input of knowledge and skills on many different aspects of how radiological images are formed, and so requires contributions from different healthcare professionals working together as a team. A radiologist, other appropriately trained radiological medical practitioner, or radiographer can judge whether the image quality is sufficient for the diagnostic purpose. A radiographer should know the practical operation and limitations of the equipment and associated information technology, and have a basic knowledge of the physical principles of image formation and interpretation of measurements on images. A medical physicist should have a deeper understanding of the physical principles behind image formation, and be able to perform and interpret measurements of dose and image quality. In order to achieve optimisation, the three specialities, together with other healthcare professionals who will sometimes be involved, must have mutual respect for their individual skills and work together as a cohesive group (i.e. professionalism). Unfortunately, at the time of preparation of this publication, the levels of knowledge and skills in many countries are often inadequate to achieve good optimisation on more complex digital radiology systems due to lack of resources. Increasing technical and computational complexity in radiology equipment and applications underlines the importance of multi-professional collaboration and dependency on the combined knowledge of different professionals. Dedicated time must be made available for professionals to work together to meet emerging challenges in optimisation as applications of new equipment are developed.
(d) Digital imaging provides the potential for images to be obtained with lower exposures than previously possible using film screen combinations, enabling levels to be adapted to the diagnostic requirements of specific examinations. New techniques are continuously becoming available that can improve image quality and potentially enable diagnostic images to be obtained with lower patient doses. As an example, automated exposure control systems are continuously developed to be more effective in ensuring consistent image quality while reducing patient dose by adapting the radiation level to each procedure and the patient. However, all of these features introduce additional complexities and require settings to be chosen correctly for proper operation of software controls. If users do not deploy them effectively due to limited awareness of their mode of operation, the doses received by patients may not be optimal, but this will not be apparent to the user. Therefore, more complex equipment requires knowledgeable staff with more extensive training for its operation. Knowledge and skills, in combination with the instruments and test objects to evaluate the performance of the equipment, form the basis of optimisation (i.e. methodology).
(e) A key component of optimisation is keeping the radiation dose to the patient as low as reasonably practicable, while maintaining an adequate level of image quality and diagnostic information. At the basic level, this requires regular assessments of doses from groups of patients to determine the dose levels, and comparisons with diagnostic reference levels to confirm acceptability. Evolving technical optimisation features and quality management systems will enable extension of the optimisation process to individual patients and procedures based on clinical indication. Operators must have the knowledge and skills to use such features appropriately, and if they do not, important opportunities will be lost. Such an indication-oriented and patient-specific level of optimisation is applied routinely every day in radiology departments, and is a fundamental extension of the conventional optimisation principle (known as ‘as low as reasonably achievable’) as applied to patients. Indication orientation and patient specificity connect the optimisation process directly to the justification process, and enable them to be mutually supportive and comprehensive, forming a unitary process for radiological protection.
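The basic-level dose audit described above, in which doses for a group of standard-sized patients are compared with a diagnostic reference level (DRL), can be sketched in a few lines. The function name, the sample dose values, and the DRL below are all hypothetical, chosen only to illustrate the comparison of a distribution's median with a reference level:

```python
# Illustrative sketch of a DRL comparison; all numbers are hypothetical.
import statistics

def compare_with_drl(doses, drl):
    """Return the median dose of a patient sample and whether it exceeds the DRL.

    In DRL methodology, the median (not the mean) of the dose distribution
    for a group of standard-sized patients is compared with the reference level.
    """
    median_dose = statistics.median(doses)
    return median_dose, median_dose > drl

# Hypothetical dose-area product values (Gy.cm2) for a chest radiography sample
sample = [0.08, 0.12, 0.10, 0.09, 0.15, 0.11, 0.10, 0.13, 0.09, 0.12]
median_dose, exceeds = compare_with_drl(sample, drl=0.25)
```

A median below the DRL confirms acceptability at this basic level; it does not by itself demonstrate that image quality is adequate, which is why dose surveys are paired with image quality assessment in the text above.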
(f) Evaluation of image quality as part of quality assurance/quality control programmes typically involves evaluation of clinical images by an experienced radiologist, other appropriate radiological medical practitioner, or radiographer against established good image quality criteria, and objective analysis of phantom images by a medical physicist. Further net improvements could be gained in the future through automated image quality evaluation based directly on clinical patient images, and may involve artificial intelligence algorithms implemented directly into image archives or imaging modalities. Regardless of the present or future methodology, the process of measuring image quality involves many interdependent parameters and, due to this comprehensive nature, is a pivotal part of the overall assessment of performance. Results from evaluations of clinical image quality, coupled with results from patient dose and image quality measurements, feed into the development of examination protocols optimised for the clinical purpose. To ensure that optimisation processes are carried out consistently, management systems need to be in place to confirm that measurements and assessments are made, to ensure that available data from clinical use and performance measurements are used in making adjustments to protocols to address any deficiencies, and to monitor the progress that is made (i.e. process management).
(g) The degree to which any organisation has implemented optimisation in digital radiology will depend on the personnel, facilities, and level of knowledge and experience available. Within the aspects of professionalism, methodology, and process, there will be different levels of performance that radiology facilities will have achieved. This publication sets out broad categories for the systems that would be in place to achieve different levels of optimisation: Level D – preliminary; Level C – basic; Level B – intermediate; and Level A – advanced. It is hoped that evaluation of the arrangements that radiology facilities already have in place will provide a guide to decisions about what actions should be taken next to improve optimisation of their imaging service. It is also noted that these categories (Levels D, C, B, and A) with increasingly advanced optimisation methods also reflect the increasing capability to reach indication-oriented and patient-specific optimisation processes.
(h) There is a need for a cultural change in order to enable improvements and developments in optimisation methods, and to avoid key processes being overlooked. Optimisation will only be achieved through facilities investing in adequate staffing levels to operate their imaging equipment, and providing the appropriate training, together with continuing professional development opportunities for their staff. This begins at the stage of entry into medical imaging professions, with sufficient courses for the education of trainees and opportunities to learn under the guidance of experienced practitioners. Knowledge and understanding are key to successful optimisation of radiological imaging. The required cultural shift towards multi-professionalism can only occur if professional roles and competences are built to support it.
(i) This publication provides guidance on the adaptation of levels of dose and image quality to clinical tasks, taking advantage of the wide dynamic range offered by digital imaging equipment. Practical aspects that depend on specific x-ray image acquisition techniques are covered in a separate companion publication.
1. INTRODUCTION
1.1. Background
(2) X rays have been used to obtain images of the body to aid in diagnosis of disease since their discovery by Roentgen in 1895. X-ray imaging has provided an invaluable aid in diagnosis, follow-up, and management of patient treatments, and over the last few decades, with the rapid development of interventional techniques, it has allowed many complex procedures in cardiology and in specialities dealing with other parts of the body to be performed with reduced surgical intervention, improving patient comfort and survival. X-ray imaging procedures are the most widely used form of medical imaging, and make the largest contribution to human exposure to ionising radiation from artificial sources. As such, an x-ray examination carries an associated risk that, although not large, must be taken into account when patients are imaged.

(3) The benefits to the patient undergoing a radiological procedure that is going to influence management of their treatment or aid in diagnosis will almost always outweigh the risk resulting from the radiation exposure. The term ‘patient’ includes not only persons undergoing medical treatment, but also volunteers subject to exposure as part of a programme of biomedical research. Similar principles with regard to optimisation will also apply to planned non-medical imaging exposures carried out for legal purposes of any kind.

(4) If there is no benefit from performing an exposure, it is not justified. Awareness of associated risks has encouraged the development of facilities and tools on equipment to allow radiation doses to be kept as low as reasonably achievable (ALARA), consistent with the intended clinical purpose (i.e. according to the traditional ALARA principle).
Those using x rays need to understand the imaging process and the interplay between equipment factors and settings, as well as being trained in the practical techniques for use of information technology, in order to ensure that patient radiation doses are kept to the minimum for obtaining the image quality required for the specific imaging task. The need for this understanding has become more crucial with the increased complexity of digital radiology techniques, which include digital radiography, computed tomography (CT), and interventional x-ray equipment, which can deliver significant radiation doses if used incorrectly.
1.2. Image quality levels and clinical diagnostic requirements
(5) Optimisation in simple terms involves achieving a balance between clinical benefit and risk from radiation exposure. First and foremost, it requires provision of clinical images for individual patients that are of sufficient quality to ensure accurate and reliable diagnoses, so that correct care decisions can be made. In addition, the radiation doses used in acquiring such clinical images should be adjusted so that, while being adequate to produce the images, they are minimised to the level appropriate for the applied imaging technology.

(6) The level of image quality can affect the diagnosis, and the aim is to achieve a balance between the clinical and radiation risks in order to minimise the overall risk. In many imaging indications, the clinical risk related to possible sub-optimal image quality from an examination, for which the exposure has been reduced more than it should have been, is likely to outweigh the small additional risk from using a higher radiation exposure. A patient will not benefit from an examination that is incapable of visualising the appropriate pathology, and the dose will be wasted no matter how low it might be. Thus, there could be a consequent clinical risk of misdiagnosis, which may increase as image quality declines (Fig. 1.1); in such situations, there may be a need to increase the dose (Samei et al., 2018). Therefore, while in the general context of the System of Radiological Protection, optimisation is understood as keeping doses ALARA, in the case of medical imaging, this means delivering the lowest possible dose necessary to acquire adequate diagnostic images. This is best described as ‘managing the radiation dose to be commensurate with the medical purpose’ (ICRP, 2007a,c). Managing the radiation dose for any application requires an understanding of the way in which an image is formed, and how different factors influence both the image quality and the radiation dose received by the patient (Martin et al., 1999).
Fig. 1.1. The total net risk from a radiological examination is a sum of radiation risk and clinical risk. The radiation risk is assumed to increase linearly with dose according to the linear non-threshold model. Clinical risk is assumed to decrease with dose as the image quality is improved to provide adequate clinical information. In this example, the clinical risk decreases according to an exponential model, but there will be a lower limit where residual clinical risk is maintained regardless of the imaging method. The minimum net risk for the summed components will be the optimisation target point. Adapted from: Samei et al. (2018).

(7) This publication addresses both radiation dose and image quality. Since assessment of an image is both a clinical task and reader dependent, it is not a trivial matter to decide what quality of image is adequate for the clinical task in hand (NCRP, 2015). Significant reliance is placed on judgements made by radiologists or other radiological medical practitioners, but opinions vary about image quality requirements, and less experienced practitioners may need a higher level of image quality to make a diagnosis. Thus, quantification that can aid in a decision about the appropriate level of image quality is difficult when it is based on subjective evaluation of image quality against quality criteria (Section 5.4).

(8) The tools used for measurement of image quality for image receptors during performance tests relate to the ability to detect low-contrast objects of varying size and shape within a uniform background, and depend on the noise level and texture. These cannot easily be compared and translated into analogous clinical tasks. Research groups are investigating methods of image quality analysis that can be more closely allied with clinical tasks using simulations with model observers or by artificial intelligence (AI)-based methods (Section 5.3).
Although the field is developing, there is still some way to go before such methods provide solutions that can be implemented more widely in clinical practice, but their application is likely to be important in the future.

(9) The optimisation process may involve not only selection of the appropriate level of image quality, but also tailoring of the examination protocol to the clinical needs of each individual patient (Samei et al., 2018). Decisions may have to be made about the extent of the imaging required to answer the clinical question (e.g. this might include the possible need for rotational three-dimensional (3D) imaging in image-guided procedures). In some countries, this option is included in the initial justification, but reasons for electing to carry it out are part of optimisation. As more technical optimisation features and quality management systems (QMSs) evolve, the optimisation process will extend to focus more on the needs of individual patients.

(10) The ultimate objective is to maximise the benefit-to-risk ratio for the patient; this is related to clinical effectiveness and, in more controlled scenarios, to clinical efficacy. Since clinical outcome depends on a very large number of factors and clinical data types, such outcome quantification cannot be done with any simple model or modality. Adapting the protocol to individual patients will also have to take their medical conditions/limitations into account. Therefore, a big data approach using methods related to AI would seem to offer considerable potential for the handling of such multi-dimensional data, constructed from many clinical data types and involving complex correlations and interdependencies, and is likely to become increasingly important in the future. This will include AI algorithms tailored to the needs of individual patients, for example based on physical characteristics and body habitus as well as clinical indication.
This would enable an indication-oriented and patient-specific optimisation methodology to be implemented as an organisation-wide and consistent process, with measurable effectiveness and extensive use of other performance indicators.
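The balance pictured in Fig. 1.1, a linearly increasing radiation risk summed with an exponentially decreasing clinical risk, can be sketched numerically. The functional form and all coefficients below are illustrative assumptions for this sketch, not values taken from this publication:

```python
import math

# Sketch of the two-component net-risk model of Fig. 1.1 (after Samei et al., 2018):
# a linear radiation-risk term plus an exponentially decaying clinical-risk term.
# The functional form and all coefficients here are illustrative assumptions.

def total_risk(dose, a=1e-5, b=1e-2, c=2.0):
    """Net risk at a given dose (arbitrary units).

    a    : slope of the linear (LNT-like) radiation-risk term
    b, c : amplitude and dose constant of the decaying clinical-risk term
    """
    return a * dose + b * math.exp(-dose / c)

def optimum_dose(a=1e-5, b=1e-2, c=2.0):
    """Dose minimising the summed risk: d* = c * ln(b / (a * c)).

    Valid when b > a * c; otherwise the minimum lies at zero dose.
    """
    return c * math.log(b / (a * c)) if b > a * c else 0.0

d_star = optimum_dose()  # the 'optimisation target point' for these coefficients
```

Setting the derivative a - (b/c)exp(-d/c) to zero gives the closed form above. The lower limit of residual clinical risk mentioned in the Fig. 1.1 caption would add a constant offset and is omitted here for simplicity.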

1.3. Risks from radiation exposure due to medical imaging
(11) Radiation risks should be outlined briefly in order to establish the context for optimisation. Potential effects of radiation exposure are tissue reactions (deterministic effects that occur in the days, weeks, and months following an exposure) and stochastic effects (risk of induced cancer or hereditary effects in the long term). In interventional cardiology and radiology, a risk of tissue reactions arises only for a very limited cohort of patients with serious medical conditions who undergo one or more complex procedures within a period of a few months. However, there could be a risk of lens opacities from cumulative exposures of the eyes to doses >500 mGy received over an extended period, and these may develop with time (ICRP, 2012). Methods for the avoidance of tissue reactions are addressed in

(12) Although the evidence for stochastic effects is derived predominantly from the atomic bomb survivors, other studies on radiation workers in the nuclear industry, patients receiving high localised radiation doses from medical therapies, and individuals exposed during radiation accidents provide further evidence that the risk exists. It is often only through meta-analyses combining data from several studies that results for population sizes with sufficient statistical power to show a link between radiation and cancer are obtained. The epidemiological results are consistent with a linear relationship between the risk of cancer induction and mean absorbed organ dose at doses extending below 100 mGy. Based on this, for purposes of radiological protection, a linear non-threshold (LNT) model is used to extrapolate down to lower doses in order to estimate potential risks (ICRP, 2005).
Recently, the National Council on Radiation Protection and Measurements (NCRP) has published a review of epidemiological studies, including those of the atomic bomb survivors, pooled results for nuclear industry workers, and data from other exposed populations, undertaken in order to assess the quality of the data and evaluate the support that they provide for the LNT model (NCRP, 2018; Shore et al., 2018). NCRP judged that, although the risks are small and uncertain, the available evidence provided broad support for an LNT model as the most pragmatic approach for radiological protection. Although the atomic bomb survivors received their radiation dose as a single exposure at one point in time, other populations such as nuclear industry workers were exposed to small doses incrementally over time, and the cumulative value was tens of mSv effective dose. In addition, more evidence is emerging from recent reviews of epidemiological data that doses from exposures <100 mSv are associated with cancer risks in both children and adults (Lubin et al., 2017; Little et al., 2018, 2022; Hauptmann et al., 2020; Rühm et al., 2022). It is within this context that the risks from medical exposures should be appraised.

(13) Medical imaging modalities are designed to investigate conditions that generally only affect certain parts of the body, so regions of the body irradiated in any imaging procedure are localised. Moreover, the x rays are attenuated as they pass through the body, so superficial tissues receive higher radiation doses than those deeper within the body. Therefore, the organs and tissues irradiated and the distributions of radiation dose within individual tissues are different for every type of examination, and also depend on the size and shape of the body for each patient.
As individual tissues also vary in their sensitivity to radiation, this means that the risk of any stochastic effect from every examination will be different, and will depend on the exact conditions of exposure, and the age and size of the patient (ICRP, 2021). The mean absorbed doses to organs and tissues from diagnostic radiology are generally in the range from fractions of a mGy to tens of mGy. The potential detriment to health from sequences of exposures performed for diagnosis and management of disease could be significant if dose levels are higher than they need to be or if examinations are repeated unnecessarily, although patients form a sub-group of the general population who may have other competing morbidities.
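Under the LNT model described above, the nominal stochastic risk for protection purposes scales linearly with dose. The sketch below uses effective dose and a single nominal coefficient of the order of the ICRP detriment-adjusted nominal risk coefficient for a whole population; treat the exact value as an assumption for illustration, and note that it deliberately glosses over the organ-, size-, and age-dependence discussed in the preceding paragraphs:

```python
# Illustrative LNT extrapolation: nominal stochastic risk proportional to
# effective dose. The coefficient is an assumed round figure of the order of
# the ICRP whole-population nominal risk coefficient; it is not suitable for
# individual risk assessment.

NOMINAL_RISK_PER_SV = 5e-2  # assumed nominal risk coefficient, per sievert

def nominal_risk(effective_dose_msv):
    """Nominal additional stochastic risk for an effective dose given in mSv."""
    return effective_dose_msv * 1e-3 * NOMINAL_RISK_PER_SV

# e.g. a 10 mSv examination corresponds to a nominal risk of the order of 5 in 10,000
risk = nominal_risk(10.0)
```

Such figures are population-level planning quantities; as the text notes, the actual risk for any examination depends on the tissues irradiated and the age and size of the patient.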
1.4. Justification and optimisation of medical exposures
(14) (15) Justification requires the radiologist or other radiological medical practitioner to weigh the expected benefits from imaging against the potential cost, including radiation detriment, and to consider available alternative techniques that do not involve exposure to radiation. For radiological medical practitioners to make such decisions, they should understand the clinical indications and the health status of their patients in order to determine which imaging tests or fluoroscopically guided interventions (FGIs) are appropriate. The process of justification in the medical context will not be considered here, except in relation to highlighting the need for radiological medical practitioners, whether they be radiologists or other clinicians, to always be provided with the relevant clinical history for the patient who is to undergo the procedure, in order that the justification process can take place. Collaboration between the referring clinician or healthcare professional and the radiological medical practitioner to provide the information about the patient’s condition is the first step in the process, and is crucial for ensuring that the imaging task is adapted to the clinical need of the patient so that optimisation is carried out satisfactorily.

(16) Optimisation is defined as the process of determining the level of protection and safety to make exposures ALARA, with economic and societal factors being taken into account (ICRP, 2007b). The Commission explained the concept and principle of optimisation as applied to medical exposures in

(17) The two levels of optimisation outlined above are not sufficient to ensure that the radiological protection of procedures is optimal, as there will be continual development in equipment facilities and knowledge and skill of the operators that should feed into a process of steady improvement.
Therefore, the proper training of operators, with periodic sessions to update knowledge on new techniques and improved facilities on equipment, is essential. Evolving technical optimisation features and QMSs will extend the optimisation process to focus more on individual patients and procedures based on clinical indication, which is an extension of the ALARA optimisation principle as applied to patients (Oenning et al., 2018). Optimisation is not a static process to be ignored and forgotten once it has been achieved. It requires constant attention with frequent monitoring and analysis of performance, reject analysis, feedback of experience, and regular review to provide continual improvement in every aspect of the imaging process and refinement of the service to the patient. This last component is key to achieving higher levels of optimisation.

(18) Technical requirements for optimisation for the various modalities used within radiology, namely radiography, fluoroscopy, and CT, are very different. Therefore, subsequent to

(19) Both justification and optimisation have become increasingly important with the passage of time as part of the effort to ensure that patients receive the best-possible service from their imaging departments. The two principles are to ensure that patient doses are not only low enough to justify a particular examination, but also, through optimisation, are kept ALARA without being reduced to the extent that the level of image quality required for the clinical task is jeopardised. The mutual connection between optimisation and justification may be strengthened with more indication-oriented and patient-specific optimisation processes in the more advanced categories described in this publication.

(20) Individual patients may undergo many imaging procedures from which they could receive effective doses reaching hundreds of millisieverts (Brambilla et al., 2020; Rehani et al., 2020).
Although the majority of patients who receive recurrent exposures will be in the later stages of life, when risks will be lower (Martin and Barnard, 2022), there are some children with particular health problems that require frequent follow-up with recurrent imaging. Particular attention should be paid to developing care plans for these individuals in which the frequency and performance of imaging are optimised (IAEA, 2021a). Significant further reductions in dose from recurrent imaging procedures may be possible through optimisation based on information from earlier procedures.

(21) Furthermore, increasing access to diagnostic and clinical data through evolving radiological information systems (RISs), picture archiving and communication systems (PACSs), and hospital information systems (HISs) will help to implement a more advanced, comprehensive process of justification and optimisation. The expansion in the use of radiological imaging worldwide in recent decades, coupled with the introduction of new digital technologies that require higher levels of expertise to operate, makes the effective practice of optimisation techniques more important than ever. The welfare of patients and the population at large will be enhanced if radiation exposures resulting from x-ray examinations can be kept to a minimum without reducing the medical benefits.
1.5. The scope for optimisation with digital imaging modalities
(22) Radiographic imaging is, in essence, a fairly simple procedure, with x rays being used to produce a ‘shadow’ image of tissues in the body. As components of the tissue attenuate the x rays to different extents, structures can be visualised within tissues. The denser abdominal and pelvic tissues attenuate x-ray beams more than lung tissue, but attenuation of the x-ray beams also depends on the energies of the x-ray photons. Any potential health detriment will depend on the tissues and organs irradiated, and the distribution of absorbed dose within them. Simple examples of poor optimisation include using a larger field size than necessary for a radiographic exposure, or performing a chest x ray with a lower-energy beam (e.g. 70–80 kV), which generates little scattered radiation, while leaving in place behind the patient an anti-scatter grid that also attenuates the primary beam; both will increase the dose to the patient. Radiographers or radiological medical practitioners will operate the imaging equipment, and the skill of these radiological professionals encompasses selection of the best imaging exposure factors, equipment, and technique available for each type of examination, personalised for each patient depending on their size, shape, and weight. Approaches have changed as techniques with digital equipment have evolved and become more sophisticated, with the need for a greater knowledge of information technology and system software.

(23) The appearance of images recorded and stored in digital form can be adjusted through post-processing to give an acceptable range of grey levels for optimal viewing (ICRP, 2004). If a much higher dose is given than is appropriate, the image may appear slightly better or essentially the same, but this may be difficult to determine from simply viewing the image.
A high dose in digital radiography will not produce a black image, as it would with film, so more attention needs to be paid to monitoring dose levels.

(24) Digital images offer many advantages and have the potential to allow images to be obtained with lower exposures, adapted to the diagnostic requirements of particular examinations. However, this facility is often not considered, and standard image detector exposure levels are often used for a wide variety of examinations. Image processing is more significant in digital radiology than might be anticipated at first glance, as the digital image data typically include a range of thousands or even tens of thousands of grey-scale values, whereas the human eye can distinguish <1000 separate grey-scale levels even in optimal lighting conditions and when using advanced medical displays (Kimpe and Tuytschaever, 2007). Therefore, there is the potential for digital image processing to enhance the features relevant to diagnosis, and present them more clearly in the final images. Achieving the correct balance between dose and image quality becomes a more complex task with the additional need to understand the operation of the software controls. However, proper application of digital radiology should enable sufficient image quality to be achieved, often with lower dose levels.

(25) CT scanners have become more complex, and although they have more capabilities to enable doses to be kept at a reasonable level, achieving this requires a high standard of knowledge, skills, and scientific expertise from the healthcare professionals involved in radiological imaging and optimisation. If these are not in place, doses delivered to patients could be unnecessarily high without staff being aware that anything is wrong. Even in countries with highly developed healthcare systems, optimisation is frequently not fully implemented.
For example, radiation accidents involving tissue reactions from CT scanners have been reported in the USA, where the necessary expertise might have been expected to be available (ICRP, 2007a; Martin et al., 2017). The availability of staff fully trained in use of all the hardware and software features when new equipment is purchased is of utmost importance.

(26) There have been substantial developments in the application of FGIs during recent decades. These allow surgical procedures to be performed with less invasion of the body than is required by conventional surgery, resulting in lower risks, shorter recovery times, and lower costs (Maudgil, 2021). FGI is frequently the method of choice for complex interventions by a variety of medical specialists (UNSCEAR, 2008), so the number of procedures has increased substantially. FGIs may be performed in a variety of settings, and sometimes by radiological medical practitioners with less training in radiological techniques and awareness of radiation exposure than radiologists. In addition, the increasing complexity of the procedures that are now possible means that longer exposure times may be required, which carry a potential risk of radiation tissue reactions in the skin (ICRP, 2000a, 2010, 2013a; IAEA, 2010).

(27) As the level of sophistication develops, the variety and complexity of procedures that are possible increases (NCRP, 2019), and the level of optimisation should be increased in parallel. Recent technological innovations that are now being implemented have the potential to provide a higher degree of optimisation through analysis of the levels of image quality necessary for imaging different organs, tissues, and pathologies, and through the collation and analysis of image-related data. However, effective use of these techniques requires continual attention to monitoring the performance of equipment and developing examination protocols based on experience gained.
Practical optimisation applied to the different imaging modalities will be considered in a separate publication.
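To illustrate the post-processing adjustment described above, the following Python sketch (not part of this publication; all values hypothetical) applies a simple window/level transform of the kind used in digital radiology viewers, mapping a wide range of raw detector values onto the limited grey scale of a display. It also shows why the displayed image gives no indication of the absolute exposure level:

```python
import numpy as np

def apply_window(image, center, width):
    """Map raw detector values onto 8-bit display grey levels.

    Values below the window are clipped to black, values above to
    white, and values inside are spread linearly across the display
    range - so the displayed appearance gives no indication of the
    absolute exposure the detector received.
    """
    lo = center - width / 2.0
    hi = center + width / 2.0
    fraction = np.clip((image - lo) / (hi - lo), 0.0, 1.0)
    return (fraction * 255).astype(np.uint8)

# Hypothetical 14-bit raw pixel values reduced to 256 display levels.
raw = np.array([[1000, 8000, 15000]])
display = apply_window(raw, center=8000, width=4000)
```

Doubling every raw value and widening the window accordingly would produce an identical displayed image, which is why dose must be monitored independently of image appearance.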
1.6. Adaptation of patient dose levels
(28) The dose that a patient receives from imaging should be consistent with the clinical question that needs to be answered. Radiology and other medical imaging facilities aim to keep doses at a reasonable level based on good practice, using reference dose levels as a guide. Achieving this requires input from radiological medical practitioners, radiographers, and medical physicists working together as a team within an organisation that provides a structure which facilitates the process. An important component of the optimisation process is having information on the doses that patients are receiving, and knowledge of whether these dose levels are reasonable.

(29) Although many users may only have limited awareness of radiation doses for the examinations they perform, dose is a quantity that can be measured or calculated with relative ease. So, when optimisation programmes are set up, there can be a tendency to place undue emphasis on dose reduction, which can be quantified or read directly from the equipment, while ignoring the potential detriment to the provision of clinical information, which, in almost all cases, is a far more important factor for the quality of care and effective clinical outcome.

(30) A tool that ICRP adopted over 20 years ago to aid in ensuring that doses for procedures are at reasonable levels is the diagnostic reference level (DRL), the application of which is described in detail in
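A hedged sketch of the kind of comparison a facility might make when reviewing its dose data against a DRL; the dose sample and the DRL value below are hypothetical, and a median above the DRL triggers investigation rather than proving poor practice:

```python
import statistics

def compare_with_drl(doses, drl):
    """Compare a facility's typical (median) dose for an examination
    with the corresponding diagnostic reference level (DRL).

    A median above the DRL does not prove poor practice, but flags
    the examination for investigation of equipment and protocols.
    """
    median = statistics.median(doses)
    return median, median > drl

# Hypothetical dose-per-examination sample (mGy) and national DRL.
sample = [1.8, 2.1, 2.4, 1.9, 3.0, 2.2, 2.6, 2.0]
median_dose, investigate = compare_with_drl(sample, drl=2.5)
```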
1.7. The process of optimisation
Fig. Components of optimisation.
(32) Optimisation of radiological protection requires input from several groups of staff with different skill sets acquired through proper training. Through training and experience, these staff groups need to be aware of dose levels and how the different factors that affect them also affect image quality. They need to be able to make judgements and determine the reasons for any deficiencies, and be able to adjust protocols and procedures to address them.
(33) Optimisation also requires collaboration between the professional groups; without this, progress is unlikely to occur. Performance tests on equipment may be carried out by medical physics staff, but without feedback of information from physicists to users, assessment of the optimal equipment settings, and adjustment to clinical protocols, there will be little progress. Physicists might provide dose information from surveys and trace technical aspects related to image quality, but it is the radiographer and radiologist or other radiological medical practitioner who can judge whether the quality of the clinical image is adequate. Unless the three groups work together to identify when doses for any procedure are higher or substantially lower than expected, or the image quality is higher than necessary or too poor for diagnosis, there will not be any change in practice. Encouraging staff engagement in these aspects enables optimisation to become a habit that is part of routine practice.
(34) There need to be systems in place to manage optimisation, as follows: (i) ensure that monitoring, review, and analysis of performance are part of an ongoing process; (ii) establish and modify clinical protocols, taking account of available data; and (iii) apply the results across the whole organisation.
(35) This publication considers how the different aspects of the optimisation process might be addressed by radiology services and countries with varying levels of infrastructure and optimisation tools. It attempts to provide guidance across the full spectrum from countries with limited expertise through to advanced services with access to patient radiation exposure monitoring and image quality assessment software, taking into account the greater flexibility in image processing and presentation afforded through new techniques. Section 2 will deal with management of x-ray equipment through its life cycle. Section 3 will examine the structure of the optimisation process. Section 4 will review practices in measuring and analysing patient dose data. Section 5 will consider the assessment and requirements for image quality in more detail than in previous reports. Finally, Section 6 will consider the requirements and provision of training, which is a key element in establishing a successful optimisation programme.
(36) The target audience for this publication includes not only radiologists, radiographers, medical physicists, cardiologists, other radiological medical practitioners, and healthcare professionals operating x-ray equipment, who deal directly with the processes described, but also managers involved in allocation of resources for equipment and training, x-ray equipment vendors, x-ray engineers, applications specialists, and regulators.
2. THE X-RAY INSTALLATION AND X-RAY EQUIPMENT LIFE CYCLE
2.1. The life cycle of medical imaging equipment
(38) Setting up a new x-ray imaging service, or replacing an existing service, requires careful planning by a team that includes radiological professionals, radiology managers, facilities personnel, and clinical engineers. The equipment life cycle is a well-understood concept, and describes medical equipment, including imaging equipment, from ‘cradle to grave’. X-ray equipment is procured through a tender process wherein equipment suppliers are invited to submit a bid to supply the equipment or services. The team needs to prepare a technical and operational specification based on the clinical requirements, stating what the equipment is to be used for, where it is to be installed, the major system components, any accessories (such as contrast injectors and QA test objects) that might be required, and the maintenance and repair arrangements. They should also agree an evaluation/scoring system to select the equipment that meets their needs. Once a contract has been agreed, the equipment will be installed and commissioned according to agreed standards, with personnel trained in its use, and a QA programme put in place to ensure that standards are maintained.
(39) The initial conception of the clinical need for medical imaging must first be developed into a proper robust justification for procurement. This is the embryo stage of the life cycle shown at the top of Fig. 2.1. The life cycle of imaging equipment should be included in a healthcare organisation’s planning process, which should aspire to incorporate a systemic approach to the procurement, deployment, maintenance, quality control (QC), repair, and disposal or alternative use of imaging equipment. Every stage in the life cycle is critical in terms of optimisation of patient protection. Professional skills, methodology, and process all play a vital role in the management of the equipment life cycle; understanding and managing this appropriately is essential if optimisation is to be achieved.
(40) Fig. 2.1 shows the basic life cycle of x-ray equipment, and how it involves a continual sub-cycle to maintain performance and improve optimisation once the equipment is put into clinical use. It also shows the acquisition process in some detail; appropriate acquisition is essential if optimisation is to be achieved. The stages are described below, with an emphasis on relevance to optimisation.
Fig. 2.1. The imaging equipment life cycle.
2.2. Acquisition of x-ray equipment
2.2.1. Justification of equipment
(41) The stages in the life cycle of equipment include justification, procurement, installation, acceptance, commissioning, user training, clinical use, and disposal or alternative use. The procurement of all medical imaging equipment should be justified in terms of clinical need and radiation dose. Justification should be evidence driven, and should consider present and future clinical applications and revisions of workflow whilst ensuring that there is no unnecessary proliferation of equipment. Justification of new or replacement equipment requires the involvement of radiologists or other radiological medical practitioners, radiographers, medical physicists, and administrators.
2.2.2. The acquisition and procurement process
2.2.2.1. Specification
(42) Once procurement of equipment has been justified, it is essential that a full performance specification of the entire system is established before any purchases are made in order to reduce the possibility of inappropriate devices being purchased. In the context of optimisation, the performance specification should include consideration of the intended clinical use of the equipment and technical requirements relating to patient dose and image quality.
(43) The type and amount of training required should be specified, as should the manner (e.g. procedures and their resulting technical documentation) in which the manufacturer/installer demonstrates that the equipment supplied meets the performance specification and local regulatory requirements (see Section 2.3.1). Maintenance requirements should also be included in the specification, as should detail of any regulatory requirements that the equipment will be expected to meet. Delivery timescales should also form part of the specification.
(44) Specification is a task that requires input from radiologists or other radiological medical practitioners, radiographers, managers, medical physicists, information and communications technology (ICT) professionals, engineers, and procurement experts. The specification document should address any enabling infrastructure work required; for example, what level of connectivity is required for the equipment to function appropriately, and how will the vendor address those requirements within the organisation’s ICT infrastructure? Specifications should also include the resources and vendor activity required for the initial optimisation of equipment imaging or exposure protocols. This will ensure that the purchase includes not only the technology and applications, but also the correct setting of the technology appropriate for the first practical phase of optimisation. Given the wide network connectivity of modern equipment, it is also important that data security and data safety issues are considered in the specification.
(45) In the case of used or refurbished equipment, the specification should be clear that the equipment should function as originally intended, and meet all the performance and safety requirements that it did when new. IEC 63077 describes and defines the process of refurbishment of used medical imaging equipment (IEC, 2019b).
2.2.2.2. Tender, evaluation, and acquisition
(46) A tender comprises the specification and the terms and conditions under which the equipment is to be procured. A tender is normally released after a call for expressions of interest has been issued to potentially interested parties. Responses to the tender will form the basis for the evaluation process, so it is important that the questions posed by the specification document and the stipulations regarding terms and conditions are formulated correctly. The tender may require the vendor to identify options for the disposal of redundant equipment.
(47) On receipt of tender returns, a multi-disciplinary group comprising radiologists or other radiological medical practitioners, radiographers, radiology managers, medical physicists, biomedical engineers, ICT professionals, and procurement experts should convene to consider the responses from those vendors offering their products. Evaluation should be carried out in an objective manner against predetermined criteria to maintain neutrality and to ensure that the most suitable equipment or system is chosen. After evaluation, a purchase order can be placed, and lead-in times identified. The contract should address all the items included in the specification and the associated terms and conditions, including the initial protocol settings.
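The predetermined-criteria evaluation can be as simple as a weighted scoring sheet agreed before tenders are opened. The following sketch is illustrative only; the criteria, weights, and panel scores are assumptions to be agreed locally:

```python
def weighted_score(scores, weights):
    """Combine per-criterion scores (0-10) using weights agreed by the
    multi-disciplinary group before tender responses are evaluated."""
    assert abs(sum(weights.values()) - 1.0) < 1e-9, "weights must sum to 1"
    return sum(scores[criterion] * w for criterion, w in weights.items())

# Hypothetical criteria, weights, and one vendor's panel scores.
weights = {"image quality": 0.35, "dose performance": 0.25,
           "service support": 0.20, "cost": 0.20}
vendor_a = {"image quality": 8, "dose performance": 7,
            "service support": 9, "cost": 6}
score_a = weighted_score(vendor_a, weights)
```

Fixing the weights in advance is what keeps the evaluation objective: they cannot be adjusted afterwards to favour a particular bid.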
2.3. Enabling and installation of x-ray equipment
(48) Enabling and installation works are essential components of the equipment life cycle. Planning and construction of the x-ray room, protection, and electrical and other services all need to be prepared beforehand, and consideration needs to be given to facilitating the appropriate movement of the patient and positioning of the attending staff. If the installation is not completed correctly, or the necessary infrastructure and building work is not carried out appropriately, then at best delays will be encountered, and at worst there will be ongoing issues throughout the life of the equipment. Basic connectivity issues and possible mitigation should be identified at this stage, as should issues around licensing and registration (WHO, 2019).
2.3.1. Acceptance
(49) Acceptance testing is the process whereby the purchaser satisfies themselves that the equipment supplier has provided what has been ordered, that it is safe to use, and that it functions according to the manufacturer’s and purchaser’s specification. This will involve both medical physicists and radiographers, in consultation with radiologists or other radiological medical practitioners, and will include identifying the inventory, and performing electrical and mechanical safety checks. Regulations may require demonstration of radiation safety, which should be carried out at this stage. Acceptance tests often involve quantitative measurements to demonstrate that the equipment specification is met. These tests are vendor-specific and follow the vendor’s methodology. Information technology (IT) connectivity and configurations to PACSs and other relevant IT systems (e.g. image processing workstations and analysis servers) should also be verified during acceptance testing.
(50) Depending on the complexity of the equipment and its specification, the manufacturer/equipment supplier may be best placed to demonstrate conformity with certain aspects of the specification. For example, if the equipment specification quotes a modulation transfer function (MTF) at a particular spatial frequency, the installer can reasonably be expected to demonstrate this in some way, which may be by presenting the factory test sheet or direct measurement. Section 2.2.2.1 discusses specification during the procurement phase. The presence of operator and service manuals should be verified at this stage.
2.3.2. Commissioning
(51) In the commissioning phase, the purchaser should ensure that the equipment is ready for clinical use, and establish baseline values against which the results of subsequent routine performance tests (constancy or QC tests) can be compared (IPEM, 2005; Stevens, 2021). The set of QC tests should ensure that the system parameters, modes, and programmes are optimised for the intended clinical use, and that any deviations during the life of the equipment remain within acceptable limits. Protocols to be used for performance testing should be identified; if clinical protocols are to be used for performance testing, commissioning should not take place until they have been installed.
(52) After any major work on the equipment, the relevant baseline test may have to be repeated; for example, when a detector or x-ray tube is replaced. Commissioning should also address issues of interoperability in the case of highly complex digital imaging equipment (AAPM, 2019b).
(53) Clinical protocols for acquiring images should be evaluated at the commissioning phase, and checked for consistency with other equipment operated by the healthcare organisation to ensure that, to as great a degree as possible, there is a systemic approach to imaging. In the case of digital radiography, for example, expected values of exposure index and technique factors should be established for routine examinations. Another example is that of CT, where all examinations for specific clinical indications in an organisation should be performed with similar protocols, or protocols matched to give as similar a level of performance as equipment factors permit. As mentioned before, the purchase should not only include the technology and applications, but also the optimised initial setting of the technology for clinical use.
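For digital radiography, the deviation of the detector exposure from its established target can be tracked with the deviation index defined in IEC 62494-1, DI = 10·log10(EI/EI_T), where EI is the exposure index and EI_T the target value set at commissioning. A minimal sketch (the example EI values are hypothetical):

```python
import math

def deviation_index(ei, ei_target):
    """Deviation index per IEC 62494-1: DI = 10 * log10(EI / EI_T).

    DI = 0 means the detector received the target exposure; roughly
    +3 corresponds to double, and -3 to half, the target exposure.
    """
    return 10.0 * math.log10(ei / ei_target)

# A detector exposure of twice the target gives a DI of about +3.
di = deviation_index(ei=400.0, ei_target=200.0)
```

Routinely monitoring DI values for each examination type is one way of detecting protocol drift that the displayed images themselves would not reveal.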
2.3.3. User training for clinical use
(54) User training is critical for safe, optimised use of any imaging equipment. Organisations should have a policy for user training, which should be part of the quality management programme where one exists. Vendors have responsibility for providing users with training that includes a full understanding of the imaging options available that can enable full optimisation. Initial user training should ideally be provided by the representative of the installer/manufacturer (applications specialist) following acceptance and before the equipment is put into clinical use. Different users may well require different levels of training; for example, medical physics personnel may need to use equipment in service or similar modes. Training should also be given to anyone who is required to use the equipment after installation, including qualified medical physicists; as some end users will inevitably be unable to attend the initial sessions, their training should be delivered by an agreed cascade process. It is important that suitably knowledgeable ‘superusers’ are identified for dissemination of user knowledge; as members of the local multi-professional team, they should provide practical guidance for subsequent refinement of protocol optimisation.
(55) Users need to understand the intended use and normal functioning of the device in order to use it effectively and safely. Training should cover requirements for equipment once in clinical use. For example, the UK Medicines and Healthcare Products Regulatory Agency (MHRA, 2015) requires that, where relevant, training should cover:
any limitations on equipment use;
how to fit accessories and be aware of how they may increase or limit use of the device;
how to use controls appropriately;
the meaning of any displays, indicators, alarms, etc., and how to respond to them;
requirements for maintenance and decontamination, including cleaning;
how to recognise when the device is not working properly, and know what to do about it;
understanding the known pitfalls in use of the device, including those identified in safety advice from government, vendors, and other relevant bodies; and
understanding the importance of reporting device-related adverse incidents (a fault book or similar should be linked to the equipment, and should include details of faults and when they were rectified).
Training should be recorded for quality, continuing professional development (CPD), and safety purposes.
2.4. Operational requirements for x-ray equipment in clinical use
2.4.1. Quality control
(56) QC in medical imaging is a continual multi-disciplinary process, and should not be confined to performance or compliance testing. QC involves collecting and analysing data, investigating results that are outside the acceptable tolerance levels for the QC programme, and taking corrective action to bring these results back to an acceptable level (Jones et al., 2015). Establishing baseline equipment performance and a QC programme constitutes a tool in the process of optimisation for all radiology equipment. A QC programme should be structured, and should involve radiologists or other radiological medical practitioners, radiographers, and medical physicists. A medical physicist or a senior radiographer should be appointed to supervise the whole programme, oversee the records, and review the data, especially in larger departments (IPEM, 2005; Stevens, 2021). Ideally, a QC programme should form part of a wider, managed QA programme (see Section 3.7).
(57) The move to digital imaging has resulted in a need to change the approach to QC in a radiology department, especially, but not exclusively, in the field of plain film imaging. In traditional ‘pre-digital’ imaging, the film itself acted as a final QC tool. Inappropriate exposure or processing would result in a film being marked as ‘reject’. This is no longer the case. However, standardised tools are now available to identify inappropriate exposures, and these should be put into routine use. Reject analysis and artefact identification should form an essential part of radiographer-led QC.
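As a simple sketch of radiographer-led reject analysis, the snippet below summarises a reject log; the log format (one entry per acquired image, with `None` for an accepted image) is a hypothetical illustration:

```python
from collections import Counter

def reject_analysis(log):
    """Summarise a reject log: overall reject rate (%) and a breakdown
    of rejects by recorded cause, for review at QC meetings."""
    rejects = [cause for cause in log if cause is not None]
    rate = 100.0 * len(rejects) / len(log)
    return rate, Counter(rejects)

# Hypothetical log: None marks an accepted image, a string the cause.
log = [None, None, "positioning", None, "artefact",
       None, None, "positioning", None, None]
rate, causes = reject_analysis(log)
```

The breakdown by cause is what makes the exercise corrective rather than merely descriptive: recurrent positioning rejects, for example, point to a training need rather than an equipment fault.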
(58) Many QC measurements may be undertaken by radiographers, but the programme, especially for more complex systems, should be performed with the guidance and advice of a qualified medical physicist. The radiographers and medical physicists should understand how the system works, its characteristics, modes of operation and image acquisition, image quality requirements, and image processing for different clinical programmes and clinical uses. In addition, they should be able to interpret test results and advise on parameters to be measured. Close cooperation with the equipment vendor and service engineers is needed, as well as involvement of clinical staff operating and using the equipment.
(59) Routine performance testing should include task-specific evaluation of the imaging system to reflect the intended clinical use of the equipment, and to ensure the production of the required image quality at a reasonably low dose, commensurate with the desired clinical outcome. The test results should be compared with baseline performance values recorded at installation, and there should be criteria for acceptable changes in performance. These limiting values and the test frequency should be specific to the clinical task for which the equipment is used. If the clinical use of the equipment changes during its life, the system settings and default programmes will need to change, and the QC baseline values and QC programme will need to be modified accordingly. These tests should be carried out at regular intervals, and after service or repair.
(60) The level of complexity of the performance test often dictates who performs it and how often it is performed. In some regions, performance testing is split into two levels: Level 1 and Level 2. Level 1 tests are generally of a simple pass/fail nature, and do not require sophisticated test equipment or analysis. They are performed by radiographers at regular intervals that may be weekly or even daily depending on the equipment. Level 2 tests are carried out less frequently, perhaps at intervals of 6, 12, or even 24 months depending on the complexity of the system, and require more resources and expertise. They are usually performed by a medical physicist, biomedical engineer, or vendor service engineer (IPEM, 2005; Jones et al., 2015), and the results are reported to radiology staff. Medical physicists should also undertake investigations when regular Level 1 QC tests identify performance factors that are out of tolerances, and after any relevant changes in the system’s acquisition (e.g. an x-ray tube change) or major post-processing software updates.
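A Level 1 constancy check of the kind described above typically reduces to comparing a measured value with its commissioning baseline against an agreed tolerance. A minimal sketch (the quantity, values, and ±10% tolerance are illustrative assumptions):

```python
def constancy_check(measured, baseline, tolerance_pct):
    """Pass/fail check of a routine QC result against its commissioning
    baseline; returns the verdict and the percentage deviation."""
    deviation_pct = 100.0 * (measured - baseline) / baseline
    return abs(deviation_pct) <= tolerance_pct, deviation_pct

# e.g. tube output in mGy per set exposure, against a +/-10% tolerance.
passed, deviation = constancy_check(measured=0.92, baseline=1.00,
                                    tolerance_pct=10.0)
```

Recording the deviation itself, not just the pass/fail verdict, lets a gradual drift be spotted before it crosses the tolerance and triggers a Level 2 investigation.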
(61) Simpler tests of image quality characteristics are based on observer evaluation using test objects (AAPM, 2001; IPEM, 2005; Stevens, 2021), and the user should follow guidance on use of the specific object and be aware of its limitations. Reproducibility of the measurement conditions, including geometry, exposure settings, and the viewing conditions, could have a significant impact on the results. More detailed image quality assessment may involve physical measurements to define conventional system characteristics, such as contrast, noise, and resolution parameters represented objectively by technical parameters such as MTF, noise power spectrum (NPS), and detective quantum efficiency (DQE) (Annex A). The future trend is towards more clinically realistic test objects that enable task-based evaluations of system imaging performance (see Section 5.3.3 and Annexes B, C, and D).
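As an illustrative example of the objective parameters mentioned, the MTF can be obtained as the normalised magnitude of the Fourier transform of a measured line spread function (LSF). The sketch below assumes a noiseless, finely sampled Gaussian LSF; a practical measurement would start from an edge or slit image and require care with sampling and noise:

```python
import numpy as np

def mtf_from_lsf(lsf):
    """MTF as the normalised magnitude of the discrete Fourier
    transform of a sampled line spread function (LSF)."""
    lsf = np.asarray(lsf, dtype=float)
    lsf = lsf / lsf.sum()              # area = 1 so that MTF(0) = 1
    mtf = np.abs(np.fft.rfft(lsf))
    return mtf / mtf[0]

# Gaussian LSF: a broader spread function gives a faster-falling MTF.
x = np.linspace(-3.0, 3.0, 61)
mtf = mtf_from_lsf(np.exp(-x**2))
```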
(62) Optimisation is a continual process and is inextricably bound up with the minutiae of the imaging equipment life cycle. Each element of the life cycle contributes to successful optimisation. QA of the whole system helps to ensure that this is achieved through focusing attention on the many different aspects of performance that need to be maintained.
2.4.2. Upgrades and refresher training
(63) Upgrades occur at all points during the life cycle of imaging equipment. It is important that the purpose of an upgrade is understood by users and radiology management. It is equally important that appropriate commissioning tests are performed after an upgrade (software or hardware), and that staff groups are properly trained, either by an applications expert from the company or via cascaded documentation, as training is critical for safe, optimised use of any imaging equipment. Staff should be provided with refresher training throughout the life of the equipment and after any upgrade. All training should be recorded, which might be through a QMS to provide ready access and traceability.
2.4.3. Safety issues
(64) An adverse incident is an event that causes, or has the potential to cause, unexpected or unwanted effects involving the safety of patients or other persons (MHRA, 2015). In the context of the optimisation of medical imaging, the definition of adverse incident could include exceeding a notification level for deterministic effects in an FGI (NCRP, 2010; ICRP, 2013b). Equally, any overexposure to a patient (or staff member) that required reporting to a regulator would count as a safety issue. However, it is important to consider incidents with the potential to cause harm, so near-miss evaluation and local adverse event reviews should be integral to the routine use of medical imaging equipment.
2.4.4. Contract management and maintenance
(65) All medical imaging equipment must be maintained appropriately. Equipment often comes with a limited warranty, providing maintenance to vendors’ specifications for a set time. In addition to this traditional model, there are other arrangements such as those whereby equipment purchased is part of a comprehensive supply and ongoing maintenance and repair arrangement for a set period (i.e. a ‘managed service contract’). Whatever the model, subsequent arrangements should be made using an evidence- and risk-based approach to decision making – costs alone should not be the determining factor. Decisions about maintenance and contract management are often made by radiology management, and it is important that these key stakeholders understand the clinical implications of any decisions made. Maintenance contracts should be specific and auditable, and those personnel (in-house or external) performing service and maintenance should be adequately trained and competent on the equipment they work with. Appropriate calibration of measuring equipment used in maintenance (to verify the performance or radiation output of the imaging equipment) should be a requirement of maintenance contracts. Contracts must ensure that schedules are available for planned preventative maintenance (PPM), and when equipment is returned to clinical use from either PPM or repair, service personnel should leave an indication of what changes they have made and whether those changes could affect patient dose or image quality. If a repair or PPM has resulted in a potential change to image quality or dose, the radiographer should perform a predetermined QC test in collaboration with, or with advice from, a qualified medical physicist to confirm that the equipment is safe to return to clinical use.
2.5. The end of clinical use and equipment disposal
(66) At some point during its life cycle, the equipment will become a candidate for retirement and disposal. This may be, for example, because it can no longer be repaired or be brought economically back to acceptable specification by the manufacturer, it is no longer supported by the manufacturer, a lease or managed service contract has expired, it is obsolete, its clinical performance is no longer sufficient for the task, or repurposing is required. At that point, a decision to remove it from service might be made. However, a policy on removal from service is an essential part of device management (MHRA, 2015), and planning for replacement should be in hand before any decision is necessary. The planning cycle should include considerations on the justification for the new equipment that is to be obtained, and go on to consider all of the other items in the equipment life cycle identified above. The cycle should consider Health Technology Assessments where they exist.
(67) Due to their diversity and complexity, there are many methods for disposal of medical devices such as x-ray equipment. Options range from scrappage to resale for subsequent reuse. In most cases, consultation between the user and manufacturer or perhaps prospective reseller is critical, especially for high-technology items, in order to decide the best method for disposal (WHO, 2017). No equipment should be scrapped without appropriate consideration of environmental impact and relevant regulatory controls.
(68) Charitable donations of x-ray equipment can be very helpful; may improve the efficiency of health facilities; may save the costs of purchasing new equipment; and may make some diagnoses or therapies accessible to patients, especially in resource-limited settings. Such donations can also cause health risks if their safety and performance are not verified prior to donation. They should also be furnished with full documentation sets in the correct language. The donor should ensure that the infrastructure exists for appropriate and cost-effective maintenance and QC in the recipient country. As emphasised earlier, used or refurbished equipment should function as originally intended, and meet all the performance and safety requirements that it did when new (IEC, 2019a).
(69) According to the World Health Organization (WHO), quality problems associated with donated medical devices have been reported in many countries. These problems often result in receiving countries incurring unwanted costs for maintenance and disposal, and may also create the impression that the equipment is ‘substandard’ and has been ‘dumped’ on a receiving country (WHO, 2017). Specific advice on the donation of medical imaging equipment can be found in WHO (2011) and THET (2013).
(70) All donated equipment should meet the suitability criteria defined by WHO (2011), namely:
the equipment is appropriate to the setting;
the equipment is good quality and safe;
the equipment is affordable and cost-effective;
the equipment is easy to use and maintain; and
the equipment conforms to the recipient's policies, plans, and guidelines.
3. THE OPTIMISATION PROCESS
Radiological professionals working in the facility and their communication at different levels. Levels D, C, B, and A represent levels of service performance development, categorised as preliminary, basic, intermediate, and advanced, respectively.
Aspects of methodology that should be in place to achieve different levels of optimisation. QC, quality control.
Processes that should be in place for organisations at different levels in optimisation.
Components of a radiology quality system. DRL, diagnostic reference level.
Impact of basic factors on patient dose and image quality if kV is kept constant in conventional radiographic imaging (assumes field size remains the same at the patient). PKA, kerma area product; Ka,e, entrance surface air kerma; FOV, field of view; FRD, focus to image receptor distance.
Arrangements that should be in place for facilities at different levels. KAP, kerma area product; CTDIvol, volume-averaged computed tomography dose index; DLP, dose length product; CT, computed tomography; DRL, diagnostic reference level.
3.1. The status of optimisation and the challenges
(72) The areas that need to be tackled first to improve optimisation in any facility depend heavily on the available tools, the technical infrastructure, and the professional expertise. At the present time, the majority of facilities around the world do not have the necessary tools, teams, or expertise to fully embrace optimisation and take it forward to the same end-point. There are specific concerns when digital imaging equipment is introduced into centres for the first time. The replacement of older equipment with digital equipment often creates a perception that the digital equipment is ‘intrinsically’ better and safer just because it is newer or digital. However, because the grey-scale images are scaled based on the recorded data, the dose levels could be unreasonably high or too low without anyone realising. Which actions matter most depends on the tools and expertise available. Lower-income countries with less-developed facilities may be unable to make full use even of the methods accessible to them, given the existing technical infrastructure and the limited availability of multi-professional expertise.
(73) Access to diagnostic imaging facilities enables accurate diagnosis, treatment, management, and optimal outcomes, but this access is limited in lower-income and low–middle-income countries, and in some rural parts of high-income countries, due to a lack of adequate resources (DeStigter et al., 2021). Access to imaging services therefore needs to be developed at every level of the health system. This includes provision of radiographic x-ray equipment for community primary healthcare services, as this is a mainstay for the investigation of common conditions such as pneumonia and fractures in many parts of the world, as well as CT and interventional equipment in specialised hospitals.
(74) This publication attempts to address this range through introduction of a layered approach, with resources and activities being added as a radiology facility develops more of the requirements for optimisation (Fig. 3.1). An imaging service would move upwards through Levels D to A as more aspects to improve optimisation are added or achieved. It is hoped that imaging facilities will be able to identify arrangements that they have in place, and use the information provided to prioritise the next step. When a primary care radiography facility is set up, this would be at Level D, and basic prescriptive requirements in terms of staff and equipment would need to be put in place to achieve optimisation at Level C. The majority of established x-ray facilities are likely to be at Levels B and C, and the aim to achieve Level A will require considerable development of multiple aspects of the service. Regardless of level, optimisation requires a continuous process of improvement through a quality dose management programme. The model of level-based imaging optimisation is designed to inform and guide policymakers and radiology managers in prioritising requirements and budgetary decisions.
Fig. 3.1. Illustration showing layers linked to resources and activities in the implementation and development of optimisation within a radiology department. DRL, diagnostic reference level.

3.2. Steps in the development of optimisation
(75) Optimisation depends on a comprehensive set of factors which have to work together in order to reach a continuous and effective process. Continuous improvement and consistency of the outcome do not occur with separate functions in compartmentalised environments. The goals and development steps can be described in terms of three different perspectives or aspects in the order in which they would evolve chronologically:
professionalism (professional skills and collaboration);
methodology (methodology and technology); and
processes (organisational processes and documentation).
(76) This is a development of proposals by Samei et al. (2018). Within each of these areas, there are different levels of performance and optimisation that radiology facilities will have achieved. A combination of multi-professional skills, utilisation of clinically relevant parameters for measurement and evaluation, and integration with organisation-wide processes with continuous monitoring are required to enable an effective optimisation process. Fig. 3.2 sets out broad categories for the system that would be in place to achieve different levels of optimisation.
Fig. 3.2. The three main components in the development and maturation of optimisation. Processes are placed on the left, as once the system has been set up, these would set in motion the performance of other tasks. The levels represent different stages in achievement, moving upwards from Level D towards Level A. Level D represents a basic infrastructural level as a prerequisite for initiation of the optimisation process. Levels A, B, and C set out the arrangements that will be in place for each component when that level is achieved. DRL, diagnostic reference level.
(77) The first step is for facilities to evaluate the arrangements that are in place, using this system to identify how much of each component they have in place in order to guide them in decisions about what actions need to be taken. Use of the model can be flexible, in that it might potentially be applied either to x-ray rooms within a single hospital, or to several facilities that come under the management of one organisation. The levels achieved within each component have been labelled Level A – advanced, Level B – intermediate, Level C – basic, and Level D – preliminary for centres that have been set up recently. The layer levels currently fulfilled and those that it is possible for different facilities to achieve will vary by facility type and by country.
(78) Facilities may be at different levels in the three components in Fig. 3.2. For instance, medical physicists may undertake all the compliance testing needed to check that dose and image quality performance is maintained, but communication channels with radiographers and radiologists may be limited, perhaps because testing is done by an external medical physics group, and arrangements may vary from one facility to another within a multi-site organisation. Thus, the levels within the model would be: professional skills, Level C; methodology, Level B; and processes, Level C. Taking another example, there may be a medical physicist based within the radiology department with regular communication with other specialities, but still undergoing training and accumulating experience, who has only limited equipment for testing x-ray equipment performance, and whose arrangements with other sites are still under development. The levels for this organisation within the model would be: professional skills, Level C; methodology, Level C; and processes, Level C, but with the potential to move to Level B in each component, and onward, over time.
(79) The next step in the improvement process in each aspect will depend on the level (Level D, C, B, or A) of performance for the facility, linked to the professional expertise, technical optimisation tools available, and organisational infrastructure. Level C for each component represents a basic acceptable standard of performance, and the starting blocks that need to be in place for the optimisation process to move forward. However, there are many centres throughout the world that will not be able to achieve this basic level at the present time, because of limited input or skills of professional groups, particularly medical physicist availability (professional skills), limited equipment and experience in performance testing (methodology), or an inadequate organisational support network with only ad-hoc arrangements to address failures (processes). For facilities in this preliminary stage (Level D), the personnel, tools, and structure will need to be put in place to start the optimisation process.
3.3. Professional collaboration and the team approach
(80) Professionalism covers the behaviours, attitudes, and roles of management, radiologists, other radiological medical practitioners, radiographers, medical physicists, and supporting professionals. Through developing collaboration, staff should aim to move away from traditional, hierarchical cultures to multi-disciplinary approaches with multi-professional teams to enable continuous improvement.
(81) This section will cover the main professional roles of those involved in optimisation, starting from management and including radiologists, other radiological medical practitioners (especially those carrying out interventional procedures), radiographers, medical physicists, and supporting professionals such as nurses, vendor application specialists, biomedical engineers, data scientists, and IT and informatics specialists. It will show the path from traditional and hierarchical organisational cultures to more multi-disciplinary and jointly organised tasks, finally reaching multi-professional teams to ensure continuous improvement of knowledge in the fast-developing field of medical imaging. To enable improvement and development in optimisation methods and processes, there must be a cultural change, strong leadership, and organisational buy-in. A cultural shift towards multi-professionalism can only occur if the professional roles and competence are built to support this fundamental shift. Although much of this publication refers explicitly to radiologists as the predominant clinicians using x rays, similar principles of collaboration and a team approach apply to cardiologists and other radiological medical practitioners involved with the operation and use of x-ray equipment.
(82) Features which may provide indicators for this evolving culture are consistency, a systematic approach, and coherence. Consistency means that tasks are performed according to the same set of rules and principles, regardless of time and location. It also refers to consistency of quality, where variation is reduced to produce more homogeneous outcomes. A systematic approach means that all operations are planned and can be described as processes, and duty assignments and responsibilities are clearly determined. Coherence refers to the principle that ‘each piece of information is only stored in one location with regular back-ups of data’, aiming to avoid contradictory or overlapping information.
(83) However, these cultural and system aspects are dependent on more fundamental levels of safety and trust within an organisation. This can be seen as an aim to move towards an open and non-incriminating culture where faults and deviations do not lead to personal accusation, but to a search for any faults in the process so that they can be fixed. Thus, any deviations are actively reported to enable corrective actions. Leadership commitment is a prerequisite for this cultural maturity, which ultimately lays the foundation for other development areas of optimisation: methodology and processes.
(84) Management plays a key role in the organisation and staffing of the radiology service as a whole, whilst ensuring that staff are appropriately trained. This also involves decisions on equipment replacement including specification, procurement, installation, acceptance, commissioning into clinical use (Sections 2.2 and 2.3), optional alternative uses, and decommissioning (Section 2.5). The preparation of specifications, review of tenders, and selection of imaging equipment requires input from all members of the imaging team, as well as biomedical engineers. The operations undertaken to achieve optimisation require support from healthcare facility managers to ensure safe use and adequate training of staff.
(85) At the start of the referral process, communication between the referring clinician and the radiologist or other radiological medical practitioner is essential for appropriate justification. If radiologists do not have access to the relevant aspects of their patients’ clinical histories, they cannot determine what imaging is appropriate. Communication between medical physicists, radiographers, radiologists, cardiologists, and other radiological medical practitioners is key to achieving optimisation of imaging, and establishing and reviewing clinical protocols.
(86) In some countries where the process has not yet begun, the various tasks that contribute to optimisation can be undertaken by different groups that may not even communicate. For example, equipment testing can be performed by government medical physics personnel, images can be evaluated by radiologists and other radiological medical practitioners, dose audits can be undertaken by university researchers, and radiographers have responsibility for operating the equipment (Martin et al., 2013). If there is little communication between these groups, effective optimisation cannot take place. Practical advice is needed for all these groups on the importance of developing a team approach and pooling information. The communication channels between professional groups that would be in place to achieve various levels are set out in Table 3.1.
(87) Regulators, health authorities, and professional societies have important collaborative roles to play in setting acceptability criteria for equipment, staffing requirements, and other aspects important for optimisation. There will be a need to update radiation protection legislation in many countries, including requirements in regulatory inspections to ensure that appropriate actions are taken. Requirements for optimisation and radiological protection should be embedded into the quality requirements for medical practices developed by the health authorities, and, where appropriate, linked to the rules of reimbursement for medical procedures.
(88) ‘Optimisation teams’ comprising radiologists, other radiological medical practitioners, radiographers, and medical physicists should be established to deal with each type of procedure. The preferred approach is for these multi-professional teams to focus on specific radiological areas (e.g. organ-specific sub-specialities in radiology practices). Ideally, the leading expert available in each type of procedure should be utilised. This team-based process should be implemented consistently throughout the radiological organisation in order to achieve appropriate coverage for all relevant diagnostic and image-guided procedures. Building such teams also requires sufficient allocation of resources to make them work effectively in routine practice, in order to support continuous improvement, which is a built-in principle of optimisation.
(89) Advice is also needed for regulators and hospital administrators, as well as healthcare staff, on how processes should operate, together with suggestions on how they might promote greater degrees of collaboration in practice. Finally, there needs to be communication between radiological medical practitioners and radiographers, and the patients they are treating and their carers (outward facing), in conveying and following the appropriate processes for management of the patient’s treatment. Everyone involved has a responsibility to understand the radiological protection requirements in medicine at some level. The groups and individuals involved will vary in different regions, so an approach of ‘adopt and adapt’ by region is appropriate. Each unit must decide on priorities based on their clinical indicators and budgets, but guidance is required about the decisions to be made for those centres with more limited experience.
3.4. Methodology, technology, and expertise
(90) Methodologies should move from basic performance tests and evaluations to multi-modal monitoring of performance and functions, eventually using patient-specific parameters linked to care outcomes with more clinically relevant metrics for evaluating image quality and clinical information.
(91) This component concerns the different levels of methodology used for optimisation, starting from the primary level with basic performance tests and evaluations, moving ahead to more inclusive and multi-modal monitoring of performance and functions, finally dealing with the care outcome and patient-specific parameters. Therefore, the methodology aims to utilise more clinically relevant metrics for evaluating image quality in order to provide more effective results.
(92) The practical process of optimisation begins with understanding the performance of the equipment hardware and software, and this requires both a necessary level of expertise and access to tools for testing the equipment. The next stage involves setting up clinical protocols using a team approach that includes communication between professionals working together, each bringing their own expertise. The final stage is the analysis of results from surveys of dose and image quality which are then fed into the knowledge base to refine protocols. The main aspects of this are dependent on the skill of the operator, the influence of training (Section 6), and methods of improvement through self-evaluation.
(93) The approaches and requirements for optimisation in centres and countries with varying levels of facilities, access to tools, and radiological and scientific expertise will differ. Steps in the optimisation process need to be prioritised in terms of increasing requirements for tools, facilities, and expertise in practice in order to set goals that can reasonably be achieved with the available resources. Steps will include: basic exposure factor optimisation; adjustments to automatic exposure control (AEC) devices; evaluations of equipment performance and patient dose; adjustments to equipment settings; and software-supported patient-specific optimisation using dose management systems.
(94) Examples of steps in the optimisation process that would be in place at the various stages include:
Level C: set basic parameters common to the equipment type or modality (e.g. distance or projection direction).
Level B: adjust indication-specific parameters to maximise image quality per dose unit (e.g. spectral optimisation – kV and filtration); and patient-specific parameters (typically mAs by AEC device) in individual examinations to achieve diagnostic image quality with the lowest dose.
Level A: harmonise exposure parameters in order to achieve consistent image quality throughout the organisation.
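As a concrete example of the Level B adjustments, if the focus to image receptor distance (FRD) is changed while kV is held constant, the mAs must be rescaled by the inverse-square law to keep the receptor air kerma unchanged. A simplified sketch (it ignores the accompanying changes in patient attenuation and scatter fraction):

```python
def rescaled_mas(mas_old, frd_old, frd_new):
    """mAs needed at a new focus-to-image-receptor distance (FRD) to keep
    receptor air kerma constant at fixed kV, using kerma ~ mAs / FRD**2.

    Simplified: ignores changes in patient attenuation and scatter with
    geometry; units cancel, so FRD may be in cm or m, used consistently.
    """
    return mas_old * (frd_new / frd_old) ** 2

# Example: moving from 100 cm to 180 cm FRD at fixed kV
print(round(rescaled_mas(10.0, 100.0, 180.0), 1))  # 32.4 mAs
```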
(95) The practical tasks that would be implemented to achieve the various levels of performance in optimisation will depend on the equipment, facilities, and expertise available. The most important are listed in order of priority in Table 3.2, where they would be expected to lie within the levels of expertise set out in Figs 3.1 and 3.2.
3.5. Processes, control, and documentation
(96) Systematic processes with documentation should be implemented to ensure that results from performance testing, clinical surveys, and patient dose audits are used in review of protocols, and aim eventually to achieve harmonisation of organisation-wide protocols utilising the connectivity provided by IT systems.
(97) This component concerns the means and motivation of systematic processes, process description, process flow, and related documentation within the organisation. The stages in this component begin with isolated activities such as practical performance testing, development of clinical protocols, and performance of patient dose audits, but without a consistent approach across an organisation. This needs to be developed into a structured system that encourages, facilitates, and, to some extent, controls the regular performance of the various functions by different professionals within the imaging team. The input from the different groups then needs feeding into a review and development process.
(98) As the processes become more comprehensive, clinical protocols become harmonised and practical activities are managed as part of a quality agenda. The aim is to achieve organisation-wide systems and eventually connectivity through IT systems with high levels of sophistication. This includes verification of the quality of archived data, and implementation, training, and assessment of changes to achieve optimisation. Some guidance on the different aspects that would be expected, linked to the level in the model, is given in Table 3.3.
(99) Requirements for more advanced processes include modelling of the clinical task and observation in order to obtain image quality metrics (e.g. detectability of low-contrast objects) which would be objective, quantitative, standardisable, and more relevant to clinical scenarios than the current metrics. In other words, the development of image quality parameters bridging the gap from technical parameters to clinical parameters (i.e. describing in numerical terms what is needed to reach sufficient diagnostic accuracy for specific indications and diagnostic tasks).
(100) Acceptable differences in imaging parameters and patient doses relate to the estimated uncertainty (accuracy and precision) of the objective image quality metrics. The aim is to reach not only sufficient diagnostic image quality, but also to do so consistently.
3.6. Levels of performance and approaches required for optimisation
(101) Facilities should analyse arrangements they have in place to identify which criteria set out in Tables 3.1–3.3 they fulfil currently, and use this to guide decisions about what actions need to be taken next to progress the optimisation process in their organisation. Each unit must decide on priorities based on the level of performance (Level D, C, B, or A), the equipment, tools available, staffing and level of expertise, prevalent clinical indications, and budgets. The analysis should be used to set objectives that are achievable within the organisation.
(102) Different levels of organisational and technical development (Levels C, B, and A) have been identified in Figs 3.1 and 3.2, and Tables 3.1–3.3 give examples of practical tasks and processes to help in ranking facilities based on the availability of multi-professional expertise and steps that have been achieved in developing the technical infrastructure. Sequences of actions setting out possible approaches that might be used to take the optimisation process forward are given here. These may be followed in full, or specific components may be picked out to suit the situation. The aim is to provide guidance on the approach based on what might be achievable in different facilities.
(103) Level D – preliminary: isolated radiological professionals with little or no diagnostic radiology medical physics support and limited organisational structure for implementation of optimisation:
set up links with professionals in larger hospitals to mentor and exchange ideas;
employ, educate, and train radiological staff in radiological imaging science, technology, and practice, including optimisation techniques, through attendance at external courses;
ensure access and sufficient allocation of medical physicist involvement in radiological protection, QC, dosimetry, and optimisation;
prepare clinical protocols using evidence from professional societies with assistance from radiological colleagues from other facilities, and utilise web resources of professional societies; and
purchase equipment for measuring performance of x-ray equipment for individual facilities or groups of hospitals.
(104) Level C – basic: exposure factors based on historical practices with centres in initial stages of developing expertise in performance testing:
established referral guidelines used in evaluation of all requests for imaging;
review of protocols in the main centres by a small team of national or international experts to assess the level of optimisation for all types of x-ray procedures, especially CT;
initial small-scale survey of patient doses to establish the level of optimisation already in place;
review of protocols in the main centres by groups of national or international experts with the aim of identifying where effort is required;
aim to train a multi-disciplinary group of local professionals in optimisation requirements through visits to international centres, followed by joint visits to at least one centre within the country with a view to optimising protocols;
provision of training courses in optimisation techniques for radiologists, other radiological medical practitioners, radiographers, and medical physicists at both local and national levels; and
a national team of experts may be set up to visit some centres to optimise protocols, especially for CT, using results from dose surveys; these visits could be linked to any national training courses in optimisation.
(105) Level B – intermediate: knowledge of optimisation practices is widespread but may not be put into practice in all centres:
referral guidelines provided to all referrers for guidance;
national survey of patient doses, initially for CT and radiography to evaluate the level of optimisation. Results can be used to identify broad needs for optimisation, and may also be applied in establishing DRLs that can be used in identifying target facilities where optimisation is required more urgently;
set up optimisation teams comprising radiographers, radiologists or other radiological medical practitioners, and medical physicists for each modality with the aim of reviewing and optimising clinical protocols;
provision of advanced training courses in optimisation techniques for individual modalities for radiologists, other radiological medical practitioners, radiographers, and medical physicists in optimisation teams; and
members of the optimisation teams provide cascade training for other members of the radiology department.
(106) Level A – advanced: optimisation undertaken routinely involving multi-disciplinary teams:
national survey of patient doses for all types of x-ray procedures to establish DRLs, and identify where further optimisation is required. Possible utilisation of national dose registries in large-scale optimisation process, benchmarking, advanced data analytics (e.g. machine learning), and radiological research;
continual sharing of experiences in optimisation techniques in order to maintain the level of optimisation, and ensure that procedures using new techniques are optimised as soon as they are introduced;
implementing more clinically relevant assessment of image quality by, for example, utilising clinical task models with model observers to achieve objective quantification of detectability for diagnostically meaningful contrast targets;
utilising exposure monitoring systems for wider scale determination of patient doses, including organ dose estimates;
utilising integrated systems for protocol management and equipment management to enable more consistent quality from the technical to the clinical level, and to strengthen the harmonisation and standardisation of diagnostic and care processes; and
utilising referral guidelines integrated into HISs/RISs to implement clinical decision support (CDS) in order to ensure correct examinations are performed on the right patients (based on indication) at the right time. CDS should optimally connect to modalities to take into account both the available access to different modalities and the queue status (length of worklists) with consequent delays in performance.
3.7. Quality systems
(107) A QMS can provide a framework to facilitate a systematic organisational approach, through aiding the identification of risks and possibilities for improvement, and so establishing a strategy to aid the achievement of optimisation.
3.7.1. Quality management
(108) If professionalism, methodology, and process are to be harnessed in such a way that the optimisation goal is achieved, there needs to be an underpinning framework that facilitates the systematic organisational approach to achieving that goal. This requires a system which ensures that all of the tasks, including QC tests, dose audits, investigation of failures to meet set standards, use of clinical and QC data in protocol optimisation, and communication of updated information, are carried out and recorded on an ongoing basis (Annex F). Fig. 3.3 provides a diagrammatic representation of the relationships between quality management, QA, and QC. One approach is through the adoption of QMSs with that explicit aim.
Fig. 3.3. Relationships between processes and tasks relating to optimisation within a quality system. DRL, diagnostic reference level.
(109) The ISO publication ‘Quality Management Systems – Fundamentals and Vocabulary’ (ISO, 2015a) defines a QMS as a suite of activities by which an organisation identifies its objectives, and determines the processes and resources required to achieve the desired results. A QMS can therefore be viewed as an enabler for identifying actions required to address both intended and unintended consequences encountered in the provision and development of a service. It can also be used as a tool to manage resources in both the long and short term.
(110) Successful implementation of a QMS should enable an organisation to identify both risks and opportunities, and identify possibilities for improvement and change. It will also give the organisation the ability to demonstrate conformity with specified obligations, such as legislative requirements concerning radiological protection, provided they are incorporated within the objectives of the QMS. In this context, one of the requirements of a QMS might be the involvement or establishment of a working group to establish a vision and a strategy for optimisation. A QMS could also be used as the framework supporting the development of the levels of optimisation outlined in Section 3.2 and depicted in Figs 3.1 and 3.2. The successful execution of a QMS is dependent on the systematic definition and oversight of processes and adoption of the Plan–Do–Check–Act (PDCA) cycle (ISO, 2015b). The PDCA cycle is analogous to an audit cycle in which the steps are:
plan – define the objectives and the processes required to achieve those objectives;
do – implement the plan;
check – monitor/measure the outcomes against relevant comparators; and
act – take action where required to improve.
(111) To aid the process, there are seven key principles that should be used to support the development and subsequent maintenance of a QMS. There is a focus on management responsibility; resource management; service realisation; and measurement, analysis, and improvement. There is strong emphasis placed on providing high-quality, reliable, and consistent customer service as well as leadership. The seven principles are shown in Fig. 3.4 but are not depicted in any order. The relative importance of the principles will vary from organisation to organisation, and will change over time (ISO, 2015c).
(112) Each of the seven principles in Fig. 3.4 will play a role to some extent in progression from Level D to Level A in the professional, methodological, and process elements of the optimisation strategy outlined in Fig. 3.2.
3.7.2. Quality assurance
(113) QA is an essential part of a QMS and is defined by the International Organization for Standardization (ISO) as being the part of quality management that is focused on fulfilling quality requirements. In essence, this means the planning and documentation of policies, procedures, and processes that underpin an organisation’s approach to quality management. For example, suppose that one of the objectives specified by an organisation was the routine implementation of equipment performance testing for regulatory or optimisation purposes. The objective would be reflected in a written policy which would require the generation of procedures and work instructions relating to the operational implementation of equipment performance testing. The policy would include procedures concerned with what expectations there are on the equipment, when to test it, approaches to testing it, training and competence of staff who will do the testing, and what to do with the results of the tests. Key performance indicators could also be set to measure specific activities and assess aspects of performance (ESR, 2020).
Fig. 3.4. The seven principles underpinning quality management and their interconnection.
3.7.3. Quality control
(114) QC, also an essential part of a QMS, is defined as being the part of quality management focused on providing confidence that quality requirements will be fulfilled. Put another way, QC can be thought of as being the actual work done to meet the requirements of the QA programme. In the context of the example above, it involves carrying out, recording, and analysing the measurements performed in accordance with an equipment performance testing schedule. The American Association of Physicists in Medicine point out that QA is a proactive process that seeks to prevent defects in products or deliverables (e.g. medical images), while QC is a reactive process that seeks to identify defects in products or deliverables (AAPM, 2015). QC is itself an essential part of the equipment life cycle (see Section 2.4.1). Table 3.4 lists components of the quality system that would be regarded as quality management, QA, and QC.
(115) Formal QMSs under ISO standards can encourage the development and maintenance of an optimisation strategy by setting goals and monitoring performance. Large healthcare organisations with one or many radiology departments will find that appointing a member of staff to perform the role of quality manager, with clearly defined responsibilities and resources, is a definite advantage. This identifies an individual who is given the power and responsibility to ensure that the QA programme is kept up to date, and evolves as circumstances require. There will be reassurance that changes in procedures and protocols are shared across all relevant facilities and staff within the organisation. Audits performed by the quality manager will ensure that regular QC tests are recorded and kept up to date. The quality manager can monitor planned developments in the optimisation programme, and ensure that any incidents are followed up and appropriate actions taken to prevent reoccurrence in the future.
(116) However, formal QMSs, such as those adhering to the ISO procedures, are not essential. They are valuable aids in improving performance and sustaining optimisation, but during the early stages of setting up and establishing procedures, organisations may find it more cost-effective in terms of staff time to concentrate on setting up individual components of the overall system, optimising protocols, and carrying out QC tests. These are the steps required to develop Level D facilities to meet Level C and start on the road to optimisation. Once this has been achieved, implementation of a QMS can ensure that optimisation is performed, and that progress in developing further improvements is maintained.
4. ANALYSES OF PATIENT DOSES
4.1. The influence of exposure factors on radiological images
(118) A key component of optimisation in medical imaging is keeping the radiation dose to the patient ALARA, while maintaining a level of image quality that is sufficient for the diagnostic purpose. The magnitude of the radiation dose is not immediately obvious from the appearance of a digital x-ray image, so assessments of doses from groups of patients play an important role in demonstrating what the dose levels are so that they can be considered in the context of the optimisation process. The level of doses is determined by the exposure settings used for imaging, and the optimum choices may not be immediately apparent.
(119) The process of image interpretation is both task and reader dependent, so the choice of factors that influence both patient dose and image quality depends on the patient, the clinical question, the examination, the operator performing the procedure, the equipment used to image the patient, and the person interpreting the eventual image.
(120) Exposure factors have a significant effect on patient dose and image quality. For example, an increase in mA (or mAs) without any adjustment to kV will result in an increased photon fluence at the image receptor and the patient entrance surface. There will be a consequent increased radiation dose to the patient and an improvement in the contrast-to-noise ratio (CNR), an indicator of image quality (see Section 5.2), because of the Poisson nature of the image formation process. An increase in kV without adjusting the mA (or mAs) will cause an increased photon fluence at the image receptor and a higher patient dose, but may also reduce the CNR because of the variation in tissue mass attenuation coefficient with energy.
(121) In practice, therefore, the outcome of an increase in mA (or mAs) will be increased CNR at the expense of increased patient dose. Use of a higher kV will result in a greater relative number of high-energy photons reaching the image receptor, and will therefore necessitate a reduction in mA (or mAs) to achieve the same dose to the image receptor. The net effect will be to reduce the patient entrance surface air kerma and, to some extent, the radiation doses to the exposed organs, especially those nearer to the surface. There will be a consequent reduction in effective dose.
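These trade-offs can be sketched numerically. The following illustrative model is a rule-of-thumb approximation, not a measured relationship: entrance air kerma is taken to scale linearly with mAs and approximately with the square of kV (real tube output exponents are typically in the range 2–3), and CNR follows the square root of the detected fluence because of Poisson statistics. The function names and reference settings are assumptions for illustration only.

```python
def relative_entrance_kerma(mas, kv, mas_ref=10.0, kv_ref=80.0):
    """Entrance air kerma relative to a reference exposure.

    Rule-of-thumb model: kerma scales linearly with mAs and
    approximately with kV**2 (the exponent is an illustrative
    assumption; measured exponents vary with beam quality).
    """
    return (mas / mas_ref) * (kv / kv_ref) ** 2


def relative_cnr(relative_fluence):
    """CNR relative to the reference, from Poisson statistics:
    noise grows as sqrt(N), so CNR grows as sqrt(fluence)."""
    return relative_fluence ** 0.5


# Doubling mAs at fixed kV doubles the relative entrance kerma...
print(relative_entrance_kerma(20, 80))  # → 2.0
# ...but improves CNR only by sqrt(2)
print(relative_cnr(2.0))                # ≈ 1.41
```

The diminishing return (dose doubles, CNR improves only by √2) is the quantitative reason why increasing mAs is an expensive way to improve image quality.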
(122) Other external factors, such as the incorporation of anti-scatter grids into the imaging chain, field size, beam filtration, the use of differing focus to image receptor distances, different focal spot sizes, and anode angulation, have well-documented effects on patient dose and/or image quality. These are summarised for conventional radiographic imaging in Table 4.1. A fuller discussion on factors affecting patient dose and image quality in conventional radiographic imaging, CT, and fluoroscopy can be found in the companion publication.
4.2. Surveys and audit of patient doses
(123) Knowledge of the doses delivered to patients by imaging procedures is the first step in the optimisation process. This can only be gained by surveys of doses to real patients because of the nature of the distributions in dose. Patient dose surveys are essential in the development and implementation of an organisation’s dose management strategy.
(124) Correctly performed patient dose surveys will provide information about the range and distribution of doses delivered to real patients for each of a range of examinations at a facility. The use of phantoms as a surrogate for patients in a dose measurement programme is not appropriate as this approach effectively assesses machine output alone. However, another important element of the optimisation process – understanding the performance characteristics of the equipment – may well involve the use of phantoms to assess output. Examples would be the measurement of CT dose quantities or calibration of AEC devices.
(125) Patient dose audit is the process whereby the results of a patient dose survey are compared against relevant standards – the most relevant current standard is the DRL. DRLs that are used as the comparator in dose audits can be set at either national or local level (ICRP, 2017). An essential component of audit is that actions are assigned based on the outcome of the comparison. Depending on the complexity of the comparison task, the outcome will either be ‘do something’ or ‘do nothing’. In either event, it should be recorded, and if intervention is required, it should be undertaken prior to the next audit being initiated. Audit is, by nature, a cyclical process, and Fig. 4.1 shows an example of how the dose audit cycle can be carried out (ICRP, 2017). It is essential that the basis of any comparison should include the overall uncertainty associated with the physical measurements recorded during the dose survey.
(126) Patient dose surveys and subsequent audit should be carried out in a scientifically justifiable manner. A full survey programme in a hospital should ideally cover representative examinations from all radiological tasks performed within the hospital, and should include equipment from across the hospital and work done by a range of operators. The programme should include work done outside the radiology department (e.g. in cardiology and theatres). Priority should result from consideration of the highest dose examinations, the most common examinations, and the most relevant patient cohorts. Prior to surveys being initiated, the standard against which the results are to be compared at audit must be known.
Fig. 4.1. The patient dose audit cycle (ICRP, 2017). DRL, diagnostic reference level.
(127) ICRP recommends that a survey of any particular examination should involve the collection of data from a minimum of 20 patients, and for diagnostic fluoroscopy, a group of at least 30 patients is preferable (ICRP, 2017). However, Sutton et al. (2021) have suggested that the minimum sample size for patient dose audits should be 300–400. In practice, many more will usually be included if the data are extracted from an RIS, radiation exposure monitoring system, or other information system. Any constraints on patient weight and age should ideally be those associated with the standard that is being used as a comparator. If this is not the case, some attempt should be made to, at a minimum, understand the overall uncertainty associated with the ensuing comparison against the standard. The clinical task associated with each procedure surveyed should be recorded to facilitate appropriate comparison.
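The core audit comparison described above can be sketched as follows. The function name, sample-size check, and outcome labels are illustrative, not ICRP terminology; the median-versus-DRL comparison and the minimum of 20 patients follow the text.

```python
import statistics


def audit_against_drl(dose_values, drl, min_sample=20):
    """Compare the median of a patient dose survey (e.g. KAP in
    Gy*cm^2) against a DRL, returning the median and a simple
    audit outcome. Illustrative sketch of the audit step only."""
    if len(dose_values) < min_sample:
        raise ValueError(
            f"survey needs at least {min_sample} patients, got {len(dose_values)}")
    median = statistics.median(dose_values)
    outcome = "investigate" if median > drl else "no action required"
    return median, outcome


# Hypothetical survey of 20 patients against a hypothetical DRL
survey = [2.1, 2.4, 2.8, 3.0, 3.1, 3.3, 3.5, 3.6, 3.8, 4.0,
          4.1, 4.3, 4.5, 4.8, 5.0, 5.2, 5.5, 5.9, 6.3, 7.0]
median, outcome = audit_against_drl(survey, drl=4.0)
```

In practice the comparison should also account for the overall measurement uncertainty, as noted in para. 125, before an 'investigate' outcome is acted upon.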
(128) In regions with limited infrastructure for data collection, survey intervals of approximately 3 years will be appropriate for many diagnostic radiography and diagnostic fluoroscopy examinations, provided that there are no substantial changes in equipment or software. Annual surveys are recommended for CT and image-guided procedures because they subject patients to higher doses of radiation. As automated systems for patient data collection and management become more widely available, the dose audit process may take the form of a regular review (ICRP, 2017).
4.3. Measurement and retrieval of patient dose data
(129) Metrics used in surveys should be representative of how the dose to the patient varies, so quantities such as air kerma-area product (KAP, PKA), entrance surface air kerma (ESAK, Ka,e), dose length product (DLP, PKL), and volume CT dose index (CTDIvol, Cvol) are preferred. These tend to be those set as standards for comparative purposes. Abbreviations will be quoted in the text, but the symbols approved by the International Commission on Radiation Units and Measurements, included as the second term in the brackets, should be used in equations. These are all measurable dose quantities and are not linked directly to doses to patients’ organs, which will not be considered in this publication.
(130) Ideally, the metrics recorded should be transferred automatically to, and retrieved from, the RIS, radiation exposure monitoring system, or other information system to avoid issues caused by transcription errors. In practice, automatic transfer is aspirational in many situations, and is not possible in the case of the majority of computed radiography installations. In this case, relevant metrics need to be recorded manually in the RIS. In some cases, manual recording on paper may be the sole practical method of recording the data required for the survey. This may be the method used in the early stages of establishing a patient dose survey programme. Whatever the method by which information transfer or recording is performed, it is good practice to use standard examination codes for the different types and variants of radiological procedures to avoid introducing errors due to incorrect categorisation of examination types. It is often difficult to incorporate patient weight into the results of patient dose surveys, as it is often not assessed in the first place. The caveat is that the assessment of patient size, whether using weight or some other metric, is of great importance if paediatric dose audit is to be undertaken (ICRP, 2017).
(131) The Digital Imaging and Communications in Medicine (DICOM) standard has defined the radiation dose structured report (RDSR) (IEC, 2014; Sechopoulos et al., 2015; DICOM, 2017; NEMA, 2020) to handle the recording and storage of radiation dose information from imaging modalities. Patient dose data monitoring is facilitated by transferring this information to PACSs, RIS/HISs, and dedicated vendor neutral electronic dose registries (AAPM, 2019a), interoperability among which is guided by the Integrating the Healthcare Enterprise Radiation Exposure Monitoring Profile. Patient radiation exposure monitoring systems are now available and facilitate the establishment of databases as repositories of dosimetry data. These, therefore, have the potential to be used as a convenient way of carrying out patient dose surveys for those who have them.
(132) The use of RISs and patient radiation exposure monitoring/management systems for data retrieval enables large numbers of patients to be included in dose surveys. Commercial exposure monitoring systems or functionalities as integrated into PACS/RIS software provide access to substantial amounts of data, and so enable an overview of the doses associated with specific examinations to be obtained more easily, and, for example, allow comparisons between different CT scanners (Nicol et al., 2016).
(133) A problem that might occur when downloading data for large numbers of patient examinations is the lack of a standard nomenclature for procedures. There may be variations in names for certain examinations used by different departments across an organisation, or even by different staff within the same department. There may also be variations in the interpretation of protocols by different radiographers, and use of the same protocol for different clinical objectives. For example, a chest abdomen pelvis CT scan might be used for cancer staging or for follow-up of treatment, and each requires a different level of image quality.
(134) When dose data are submitted continuously to an automatic electronic database or registry, review of the registry data should be performed regularly and at least annually. When no automatic dose registry exists, audits could be performed by means of annual surveys, collecting data manually from dose displays, DICOM headers, or PACS/RIS archives (ICRP, 2017; ACR, 2022).
(135) As optimisation is continued, all the protocols across different departments and hospitals coming under the same organisation should be aligned. This can only be achieved by establishing a process to ensure that the level of image quality delivered is optimal for the intended purpose, has been agreed by all members of the clinical team, and does not differ according to reviewer preference. Such developments may well form part of a quality management programme. The delivery of varying levels of image quality to cater for the preferences of individual reviewers cannot be justified.
(136) The calibration of equipment used in patient dose surveys, including meters and displays, should be verified regularly, preferably at intervals of no more than 1–2 years, and should be traceable to national standards. Several studies have shown that KAP values indicated by x-ray unit consoles may deviate by 10–40% from the real value, and the variation in the calibration factor as a function of beam quality for a given x-ray set-up was typically within 10–20% (Vañó et al., 2008; Jarvinen et al., 2015). The International Electrotechnical Commission (IEC) allows a tolerance of 25% for KAP meter calibration using a coverage factor of 2 (IEC, 2020). The results should be incorporated into the measure of overall uncertainty associated with the survey, as should the results of radiological equipment QC tests.
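The comparison of a console-displayed KAP value against a reference (traceably calibrated) measurement can be sketched as below. The function name and return convention are illustrative; the 25% figure is the IEC tolerance quoted above.

```python
def kap_calibration_check(displayed_kap, reference_kap, tolerance=0.25):
    """Relative deviation of a console-displayed KAP value from a
    reference measurement, checked against the IEC tolerance of
    +/-25% (IEC, 2020). Returns (deviation, within_tolerance)."""
    deviation = (displayed_kap - reference_kap) / reference_kap
    return deviation, abs(deviation) <= tolerance


# A display reading 20% high is within tolerance; 40% high is not
dev_ok, ok = kap_calibration_check(1.2, 1.0)
dev_bad, bad_ok = kap_calibration_check(1.4, 1.0)
```

Whatever the outcome, the measured deviation (or a calibration factor derived from it) should feed into the uncertainty budget of subsequent dose surveys, as the text notes.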
4.4. Analysis and feedback of patient dose data
(137) Without sufficient feedback on doses from digital images, there is a risk of increasing dose over time or leaving doses at a high level to ensure that image quality is good. Such exposure creep will not be recognised unless dose levels are monitored.
(138) The comparator that is most often used when patient dose survey data are used in a clinical audit is the DRL (ICRP, 2017). In general, the outcome of the comparison is a decision on whether the radiation dose delivered for a particular examination exceeds that which the majority of radiologists agree will produce images that are sufficient for the clinical purpose. There is also an argument that patient dose survey data can be used to identify where patient doses are not high enough, as this may imply that adequate diagnosis cannot be achieved. However, patient dose survey data collection is predicated on the fact that only examinations with sufficient image quality to achieve a diagnosis should theoretically be entered into the survey in the first place; any non-diagnostic examination should be rejected after being taken (see Section 2.4.1), included in the departmental review of practices, and excluded from the survey post hoc. This issue has become more complex because of the wide dynamic range associated with digital imaging modalities. Previously, the image on a film acted as its own QC: if it was overexposed, the film was too black, and if it was underexposed, the film was too light. With digital radiography, this is largely no longer the case, and underexposed images may be considered to be diagnostic unless an appropriate QC regime involving the use of exposure indices is in place along with a reject analysis programme (IEC, 2008; Jones et al., 2015; Dave et al., 2018). If such a QC programme is not in place, it might not be possible to satisfy the underlying premise used in the setting of DRLs that all images must be diagnostic in the first place.
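One concrete tool in such an exposure index QC regime is the deviation index defined in the IEC exposure index standard (IEC 62494-1, the IEC, 2008 citation above): DI = 10 log10(EI/EIT), where EI is the achieved exposure index and EIT the target value. A minimal sketch, with illustrative threshold commentary:

```python
import math


def deviation_index(ei, ei_target):
    """Deviation index per IEC 62494-1: DI = 10 * log10(EI / EI_T).

    DI = 0 means the detector exposure is on target; DI = +3
    corresponds to roughly double the target exposure, and DI = -3
    to roughly half (a candidate for underexposure review)."""
    return 10.0 * math.log10(ei / ei_target)
```

Trending DI values per examination type makes underexposed (noisy) digital images visible to a QC programme even though, unlike film, the displayed image no longer reveals them directly.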
(139) DRLs are often referred to as being the first step on the path to optimisation; this is a reference to the action that should be taken if a dose survey reveals doses that exceed the DRL. If this is so, an investigation should be undertaken to identify why it is the case for the examination in question, and action should be taken to effect remediation if necessary. The investigation should include a review of equipment performance, the settings used, and the examination protocols. The factors most likely to be involved are survey methodology, equipment performance, procedure protocol, case mix, operator skill, and procedure complexity. Framing the bounds for, and reflecting on the results of, the investigation should be carried out in a multi-disciplinary manner, and should include input from appropriate professionals. For example, whilst a medical physicist may be able to comment on the performance of the measuring equipment used (and relate it to the results of QC performance tests), operator training issues or issues concerning patient case mix are more the remit of those clinical staff involved. The establishment of multi-disciplinary optimisation teams is of great value in this regard. Detail concerning setting up and reflecting on investigations was first developed by the Institute of Physics and Engineering in Medicine (IPEM, 2004), and subsequently extended by ICRP (ICRP, 2017).
(140) All personnel involved in x-ray imaging examinations should have a feeling of ownership or involvement in the process of dose audit and be familiar with the DRL concept. This multi-disciplinary team approach helps to ensure that results of dose surveys and any consequent changes that need to be made are fed back to equipment operators. Patient dose surveys and subsequent analysis should be performed with the collaboration of, and input from, these people using readily understood aids, such as bar charts and tables. Dissemination of results should be similarly presented using easily accessible tools, such as histograms of dose distributions. The process then becomes a natural part of clinical audit. These personnel are best placed to understand the clinical implications and reasons for any findings from a dose survey. They are also essential when it comes to the enactment of any clinical remediation that might be required as a result of the audit process – for example, the adjustment of protocols or operator training.
4.5. The outcome of the audit process
(141) DRLs should not become, or be thought of as, dose limits; this must be recognised in the audit process, and is why DRLs are regarded as only a first step in optimisation. The fact that doses are below a DRL does not mean that there is no further scope for optimisation. Median values of DRL quantities at a health facility that are above or below a particular value do not indicate that images are adequate or inadequate for a particular clinical purpose. Substituting compliance with national or local DRL values for evaluation of image quality is not appropriate.
(142) In this context, the concept of ‘achievable dose’ has been defined as a level of patient dose (metric) achievable by standard techniques and technologies in widespread use, without compromising adequate image quality. NCRP suggested that achievable dose values should be set at the median value of the distribution of a national DRL quantity (NCRP, 2012), and ICRP concluded that this approach may be useful as an additional tool for improving optimisation (ICRP, 2017). Local optimisation teams are ideally placed to consider adoption of this principle, and to compare patient dose results with the 50th percentile value of the data used to derive national DRLs as well as with the DRL itself. Such a comparison is especially important since the median value of the distribution used to derive the DRL can be considered to be one below which image quality should be regarded as being of greater priority than dose when additional optimisation efforts are performed (ICRP, 2017). Consideration of such issues makes the use of patient dose surveys and audit an integral part of an organisation’s dose management strategy.
(143) In situations where DRLs do not exist at a national level, local DRLs can be set using data from 10–20 x-ray rooms in a local area based on the third quartile of the distribution, and the results obtained can be used as the basis for operation by local optimisation teams. The feasibility of this approach will depend on the number of x-ray rooms from which data collection is possible on a national or local basis. For assessments on smaller numbers of rooms, ‘typical values’ based on the median values of a distribution might be used. These alternative local values are useful because they encourage users to identify units that require optimisation in the earlier stages of setting up programmes to survey patient doses, so that actions required can be investigated and taken soon after the survey has been completed. Alternatively, values from other centres, or values reported in the scientific literature, can be used as an initial guide. The adoption of DRL values from other countries should be done with great caution, given the potential for differences in technical aspects of practice. One example of the establishment of international DRLs in paediatric CT that can be used in countries without sufficient medical physics support to identify non-optimised practice is given in Vassileva et al. (2015).
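The derivation of a local DRL (third quartile) and a typical value (median) from the per-room medians described above can be sketched as follows; the function name is illustrative.

```python
import statistics


def local_drl_and_typical_value(room_medians):
    """Given the median dose quantity (e.g. KAP or DLP) observed in
    each of a set of x-ray rooms, return (local DRL, typical value):
    the third quartile and the median of that distribution, as used
    when no national DRL exists."""
    third_quartile = statistics.quantiles(room_medians, n=4)[2]
    return third_quartile, statistics.median(room_medians)
```

For small numbers of rooms the third quartile is poorly determined, which is why the text recommends falling back on median-based 'typical values' in that situation.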
(144) DRLs set at national level tend to be based on anatomical regions, such as thorax, pelvis, and skull. The result of an investigation into why such a DRL is exceeded might reveal the cause to be case mix; for example, the requirements for a chest x ray for a cohort of patients attending a chronic obstructive pulmonary disease clinic are different from those for a chest x ray for the general population, and may well result in higher patient doses. The result of the investigation might well be to establish a local indication-specific DRL (comparator) for that specific patient cohort. There is no reason why other indication-specific comparators cannot be developed, and this may well be more easily managed at a local level than a national level. One example of an examination suitable for an indication-based DRL is the evaluation of the cerebrospinal fluid shunt function in hydrocephalus using CT, where a lower dose of radiation is required than for a skull CT to achieve the required outcome. Another commonly quoted example is the use of imaging for the evaluation of renal stones. The identification of suitable examinations and consequent development of indication-based comparators is a task well suited to a multi-disciplinary optimisation team as described above, and is a natural evolution of the optimisation process. It represents a further stage in optimisation over and above the comparison with anatomically based DRLs. Values of indication-specific DRLs have been proposed by some centres, and work is ongoing in this field (Treier et al., 2010; Jarvinen et al., 2015; Lajunen, 2015; Brat et al., 2019; Paulo et al., 2020; Jaschke et al., 2021; Tsapaki et al., 2021).
(145) When RIS and patient radiation exposure monitoring/management systems are used for data retrieval, task-based dose surveys are conceptually no more complex than anatomically-based ones, provided that appropriate task-related codes are in place. Task-specific coding is an important step on the route to achieving a systematic optimisation process which may be targeted in a clear way at various types of procedures and enable benchmarking of results between examinations, examination groups, vendors, equipment models, organisations, regions, and countries. However, it is very unlikely to be in place in the majority of healthcare facilities at the present time.
(146) Exceeding a DRL value should trigger investigation and, if appropriate, corrective actions should be taken to optimise patient protection. In addition, if the median dose is substantially less than the DRL, a check should be made to ensure that image quality is not adversely affected. One of the outcomes from a patient dose audit might be a desire to change a protocol associated with image acquisition. The effect on patient dose metrics of simple changes involving adjustments in kV and mA (mAs) can easily be determined experimentally or by calculation. More sophisticated dosimetry of any such proposed alteration can be assessed using patient dose modelling software based on anthropomorphic phantoms and Monte Carlo transport modelling. The effects of more subtle changes, such as those achievable by adjusting the performance of an AEC device or alteration to tube current modulation for CT, whilst being very important, are more difficult to characterise accurately because of the influence of individual patient anatomy. This can only be taken into account fully after the examination has been performed, when patient-specific estimates of organ and effective dose can be derived using the information obtained from the examination itself. This approach requires sophisticated modelling and software as is provided in some patient radiation exposure monitoring software or other bespoke products. Patient-specific dosimetry does not have a role in patient dose audit, other than in the widest sense. Patient dose audit against DRLs will only contribute to optimisation if action is taken to address dose levels that are high, and any other deficiencies when they are identified.
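The effect of simple kV and mAs adjustments on dose, mentioned above, can be approximated with a rule-of-thumb calculation. The sketch below assumes entrance dose scales linearly with mAs and roughly as kV squared; the exponent of 2 is a common approximation only, as the true dependence varies with beam filtration, patient thickness, and the dose metric of interest, so measurement or Monte Carlo calculation remains the reference.

```python
def relative_dose_change(kv_old, kv_new, mas_old, mas_new, kv_exponent=2.0):
    """Rough relative change in entrance dose for a kV/mAs adjustment.

    Assumes dose scales linearly with mAs and as kV**n, with n ≈ 2 as a
    common rule of thumb for entrance air kerma. This is an
    order-of-magnitude estimate, not a substitute for measurement.
    """
    return (kv_new / kv_old) ** kv_exponent * (mas_new / mas_old)

# Example: 120 kV / 10 mAs lowered to 100 kV / 8 mAs.
factor = relative_dose_change(120, 100, 10, 8)
print(f"Estimated dose factor: {factor:.2f}")  # ≈ 0.56
```

Note that such a change also alters contrast and noise, so the image quality consequences must be assessed alongside the dose estimate.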
4.6. Patient radiation exposure monitoring/management systems
(147) The process of dose audit based on analysis of downloads of patient dose data, followed by protocol adjustment and regular re-audit, can make major contributions to optimisation. However, this is not the limit of the improvement that can be made if data are analysed in greater detail. One possible next stage requires implementation of patient radiation exposure monitoring systems in which exposure data are fed in from the RDSRs linked to each imaging device (Loose et al., 2021). This can, in theory, provide a wealth of data, but to make best use of the opportunity provided, examination protocols need to be standardised and systems set up to carry out the analyses, feed back results, and implement changes to improve protocols at regular intervals.
(148) The analysis of dose data may often be done predominantly by medical physicists, but to take full advantage of the facilities offered, results need to be readily available to both radiographers and radiologists or other radiological medical practitioners. Feedback of information could be realised, for example, through use of interactive dashboards that can provide fast access to enable analysis of data, as well as allowing progress to be tracked. Whichever method is implemented, it needs to be easy to use and interrogate, and enable data to be easily found and shared, to encourage appropriate actions.
(149) Such systems make follow-up of patients conceptually easier, so that checks can be made readily to identify problems, trace them, and find out whether the problems have been fixed. Processes could be set up to check protocol use, follow dose trends, provide current dose values, and identify outliers. Results could be highlighted in dose histograms showing dose distributions, and individual examination data could be interrogated to investigate possible causes of anomalies. Inclusion of the weight or patient dimensions in such a system would provide even more potential for analysis and improvement. Such systems will, however, require increased human resources to implement adequately and will need to be subject to QC tests.
(150) Some of the steps discussed in this section are set out in Table 4.2 in terms of the levels of optimisation discussed in Section 3.
5. EVALUATION OF IMAGE QUALITY
5.1. Introduction
(152) Image quality in medical imaging relates to the capability of providing anatomical or functional information that enables accurate diagnosis, and informs care decisions or provides guidance for image-guided intervention. The provision of information depends on the image data itself, but also on the interpreting observer, who can be a radiologist (or other radiological medical practitioner) or a computer application. When ionising radiation is used for medical imaging, as is done in radiological x-ray modalities, there is always a trade-off between the image quality achieved (in terms of noise) and radiation exposure. Thus, the optimisation task is characterised by the balance between reaching an adequate image quality for diagnosis and avoiding excessive x-ray dose to the patient. The dose is not governed by strict limits for any individual patient at a particular exposure. However, if image quality is insufficient for adequate clinical interpretation, the reliability of the diagnosis is at risk, and the correct care decision for that specific patient is jeopardised (i.e. unnecessary clinical risk is caused). In such a case, the radiation dose aspect becomes subsidiary. Therefore, sufficient clinical image quality is an absolute requirement and should be considered thoroughly in the overall optimisation process.
(153) The clinical value of images is dependent on physical characteristics of the imaging modality, image capture and presentation system, and also the interpreter who reviews the images. One of the main reasons why optimisation has frequently focused on the dose aspect (according to the ALARA principle) is the ease of acquiring radiation dose information from x-ray equipment. The physical dose information is standardised and available through dose displays in most modern x-ray devices after the medical exposure has been made. On the other hand, image quality information is not provided automatically by imaging equipment, but has to be determined separately and retrospectively, and this typically involves laborious evaluation by expert radiologists (clinical image quality, generally based on scoring patient images subjectively) and medical physicists (technical image quality based on objective phantom image analysis). Efforts to create automated objective methods for clinical image quality measurement and monitoring, utilising model observers or AI, are the subject of extensive research and development. These methods are expected to be important in the future, but are not yet available on a wide scale for clinical application.
(154) From the physical imaging chain and parameter perspective, image quality extends further in the imaging process compared with radiation dose. Overall, the process of measuring image quality is more demanding, complicated, and involves a larger amount of dependent and intertwined parameters compared with standard physical radiation dose metrics in radiology. However, regular evaluation of clinical image quality is the backbone of a successful optimisation process, and therefore should be given sufficient resources, methods, references, and tools to make it an ongoing activity of the radiological department.
(155) For the sake of conciseness, more information about the image quality metric descriptors discussed in the following sections can be found in Annex A.
5.2. General image quality metrics: contrast, spatial resolution, and noise
(156) Basic image quality is characterised by contrast, resolution, and noise. Contrast and resolution describe how different targets are represented in terms of grey-scale and sharpness. Noise represents a distractor that affects image texture and visual detection of features.
5.2.1. Contrast
(157) In x-ray imaging techniques, contrast (or contrast resolution) is fundamentally based on the differences in x-ray attenuation between target and background materials, providing signals seen as differences in grey-scale in resulting images. Due to the characteristics of the primary physical interactions (absorption and scattering) between x rays and human tissue materials, radiological contrast resulting from small naturally occurring variations in x-ray attenuation between pathological and normal soft tissue only produces subtle differences (NIST, 2009). On the other hand, contrast between bone and soft tissue, and between soft tissue and air, is far greater. The soft tissue contrast can be improved by using contrast agents (e.g. iodine) injected into the blood stream.
5.2.2. Spatial resolution
(158) Spatial resolution describes the level of detail which can be observed on a medical image. It may concern boundaries between tissue types or structural patterns within tissue such as bone fractures. Comprehensive methods to determine spatial resolution span a continuous range of object dimensions in order to evaluate image system performance, not only for the smallest details, but also for all other spatially distributed features in the image. Traditional spatial resolution measurements have been made using a high-contrast target and a high radiation dose level, where the effect of image noise is minimal, enabling higher precision of the assessment method.
(159) In digital radiology, images are composed of discrete picture elements where the pixel size sets a clear boundary on what can be resolved spatially in the image. However, if a very small object has sufficient contrast to boost the integrated signal within a pixel to make that pixel stand out among the neighbouring pixel grey-scale background, it is still possible to detect such an object even if it is smaller than a pixel.
(160) On the other hand, there are many relevant object features that are significantly larger than the pixel dimension. The typical digital radiography detector pixel size is of the order of 150 microns, which is small enough for many clinical imaging purposes if the other imaging factors are optimal. Imaging detector resolution capabilities are changing rapidly, enabling the visualisation of smaller structures [e.g. with photon counting techniques (in 3D for CT) or smaller detector elements].
(161) At a physical level, spatial resolution is fundamentally described as the spread of the image signal about the true original location corresponding to a signal source object. This spread is referred to as a point spread function (PSF). Basically, PSF describes the blurring of an image due to all relevant factors in the imaging chain. The same blurring occurs with line objects, contrast edges, and tissue textures in the image. Therefore, spatial resolution does not only affect the sharpness of small focal details but, together with contrast and noise characteristics, affects the overall appearance of the image.
(162) The practical spatial resolution of an imaging system is a combination of several technical factors and mathematical operations. Spatial resolution may be described in the spatial domain (e.g. in the form of high-contrast line-pair patterns as seen in Fig. 5.1) or in the frequency domain. In the frequency domain, MTF provides a comprehensive description of contrast representation with a continuous range of spatial frequencies. Spatial frequencies can be thought of as a visual line-pair pattern of white and black lines next to each other, where the density of the lines increases at higher frequencies (e.g. 10 line pairs per cm) until the imaging system starts to lose the original black and white contrast, and eventually just becomes grey as white and black parts are averaged. A line-pair pattern can also be described as a sinusoidal contrast signal with a wavelength corresponding to the visual line-pair pattern size.
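As a minimal sketch of the frequency-domain description above, the following code estimates an MTF from a synthetic edge profile: the edge spread function (ESF) is differentiated to give the line spread function (LSF), whose normalised Fourier magnitude is the MTF. This is a simplified illustration; practical measurements (e.g. angled-edge methods of the IEC 62220-1 type) add oversampling, windowing, and noise suppression, and the Gaussian blur and pixel pitch below are assumed values.

```python
import numpy as np
from math import erf, sqrt

def mtf_from_edge(esf, pixel_pitch_mm):
    """Presampled MTF estimate from a 1D edge spread function (ESF).

    Differentiating the ESF gives the LSF; the magnitude of its Fourier
    transform, normalised to the zero-frequency value, is the MTF.
    """
    lsf = np.gradient(np.asarray(esf, dtype=float))
    mtf = np.abs(np.fft.rfft(lsf))
    mtf /= mtf[0]                                        # MTF(0) = 1
    freqs = np.fft.rfftfreq(len(lsf), d=pixel_pitch_mm)  # cycles/mm
    return freqs, mtf

# Synthetic edge: ideal step blurred by a Gaussian PSF (sigma = 2 pixels).
sigma = 2.0
x = np.arange(-64, 64)
esf = np.array([0.5 * (1.0 + erf(xi / (sigma * sqrt(2.0)))) for xi in x])
freqs, mtf = mtf_from_edge(esf, pixel_pitch_mm=0.15)
```

The resulting curve falls smoothly from 1 at zero frequency, mirroring the progressive loss of line-pair contrast described above.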
5.2.3. Noise
(163) Small details and lower contrast structures may be hidden under the noise texture which is seen as the graininess of the image. The main component of image noise is provided by quantum noise. Quantum noise is governed by Poisson statistics, such that the observed noise, defined as the standard deviation of the grey-scale pixel values in a certain homogeneous part of the image, is inversely proportional to the square root of the dose. For example, by lowering the dose to one-quarter of the original level, the image noise is doubled. This gives a simple rule for predicting the effect on image noise if there is a change in the radiation dose level:

σ₂ = σ₁ √(D₁/D₂)

where σ₁ and σ₂ are the image noise levels at relative dose levels D₁ and D₂, respectively. This relationship applies in an approximately similar manner for all x-ray modalities, but may not apply when iterative or AI-based reconstruction is used. It is demonstrated visually in Fig. 5.2.
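The inverse square-root relationship can be checked with a small Monte Carlo simulation, assuming pure quantum (Poisson) noise; the image size and count levels below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(42)

def relative_noise(mean_counts, shape=(512, 512)):
    """Relative noise (sigma/mean) of a simulated quantum-limited flat image."""
    img = rng.poisson(mean_counts, size=shape).astype(float)
    return img.std() / img.mean()

n_d1 = relative_noise(100)   # relative dose 1
n_d4 = relative_noise(400)   # relative dose 4

# Quadrupling the dose halves the noise: n_d1 / n_d4 ≈ 2, matching
# sigma_2 = sigma_1 * sqrt(D_1 / D_2).
print(f"noise at D=1: {n_d1:.4f}, at D=4: {n_d4:.4f}, ratio: {n_d1 / n_d4:.2f}")
```

As noted above, this simple scaling should not be assumed when iterative or AI-based reconstruction modifies the noise behaviour.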
(164) There are also other noise components in addition to quantum noise which play a role in x-ray imaging modalities, such as electronic noise (especially with low-dose imaging with a lower level of detected x-ray signal) and anatomical noise (interference of anatomical structures and tissue textures).
Fig. 5.1. Examples of spatial resolution line-pair patterns with varying spatial frequency presented in computed tomography (CT) images reconstructed with a standard reconstruction kernel (left) and a high-detail kernel (right). The image pair demonstrates the significant effect of image reconstruction on the sharpness/blurring of the final axial image. Images have been acquired with 64-slice CT using a commercially available image quality phantom, and the same raw data were used in both images. On the standard kernel image, the visually limiting spatial resolution is approximately 8 line pairs cm−1 (approximately 0.63 mm), whereas for the high-detail images, it is approximately 11 line pairs cm−1 (0.46 mm). Although not apparent from the image, the image noise is increased significantly in the high-resolution image on the right, which may limit the use of this option, especially in low contrast diagnostic tasks. The high-contrast point-source for frequency domain analysis of spatial resolution (by modulation transfer function) is shown in the central part of both images. Source: Mika Kortesniemi, Finland.

Fig. 5.2. Schematic picture of image quantum noise values described in terms of grey-scale standard deviation σ measured from a homogeneous region, ranging from 100 (left) to 25 (right) in parallel with the corresponding exposure level (relative dose D, ranging from 1 to 16) used in x-ray imaging. The noise in the image decreases as the exposure level increases, and vice versa. The relationship follows the inverse square-root law. One of the most effective optimisation steps in x-ray imaging is the definition of appropriate balance between image noise and radiation dose. Source: Mika Kortesniemi, Finland.

(165) As with spatial resolution, noise can also be presented in the frequency domain by determining NPS, which can be thought of as the grain size distribution of the noise. NPS is an important descriptor of image quality because it represents the image texture (the noise structure of homogeneous parts of the image). The human visual system is fairly sensitive to differences in noise texture. This may become relevant with image post-processing in digital radiography, fluoroscopy, and CT. NPS has particular importance in CT due to the available options in terms of reconstruction methods.
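A minimal NPS estimator can be sketched as follows, assuming an ensemble of homogeneous ROIs is available; the white-noise example verifies, via Parseval's theorem, that integrating the NPS over spatial frequency recovers the pixel variance. Practical implementations additionally detrend slowly varying background before the Fourier transform, and the ROI size, pixel pitch, and noise level used here are illustrative.

```python
import numpy as np

def nps_2d(rois, pixel_mm):
    """2D noise power spectrum estimate from homogeneous image ROIs.

    rois: array of shape (n_rois, N, N) extracted from uniform regions.
    Only the ROI mean is subtracted in this simplified sketch.
    """
    n_rois, N, _ = rois.shape
    detrended = rois - rois.mean(axis=(1, 2), keepdims=True)
    ft = np.fft.fft2(detrended)
    nps = (pixel_mm ** 2 / (N * N)) * np.mean(np.abs(ft) ** 2, axis=0)
    return np.fft.fftshift(nps)     # zero frequency at the centre

# Sanity check with white noise (sigma = 10): the NPS integral over
# spatial frequency should recover the pixel variance of 100.
rng = np.random.default_rng(0)
rois = rng.normal(0.0, 10.0, size=(64, 64, 64))
nps = nps_2d(rois, pixel_mm=0.15)
recovered_variance = nps.sum() / (64 * 0.15) ** 2
```

For white noise the NPS is flat; reconstruction kernels and post-processing reshape it, which is why the NPS, not the pixel standard deviation alone, characterises noise texture.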
5.2.4. Combined effects of basic parameters
(166) The main image quality descriptors may be used in combination to provide more comprehensive estimation parameters for the observed image quality. CNR can be used as a simple measurable physical image quality parameter to describe how a certain level of contrast may be detected by signal as compared with noise. As such, it provides the simplest type of observer model, trying to estimate the level of contrast detection with only two measurable parameters. However, as anticipated, the actual observed clinical image quality is a much more complicated entity which entails many more image-quality-related features. Such additional and clinically relevant object (e.g. lesion) features include target size, shape, texture, edge profile, etc. An example demonstrating different levels for visualisation of circular objects depending on object contrast, dimension, and image noise is shown in Fig. 5.3.
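As a concrete illustration of CNR as the simplest measurable detection-related parameter, the sketch below computes CNR for a synthetic low-contrast disc in a noisy uniform background; the disc size, contrast, and noise level are illustrative.

```python
import numpy as np

def cnr(image, target_mask, background_mask):
    """Contrast-to-noise ratio: |mean_target - mean_background| / sigma_background."""
    target = image[target_mask]
    background = image[background_mask]
    return abs(target.mean() - background.mean()) / background.std()

# Synthetic disc with contrast 10 in Gaussian noise with sigma = 5,
# so the expected CNR is approximately 2.
rng = np.random.default_rng(1)
N = 128
yy, xx = np.mgrid[:N, :N]
disc = (xx - N // 2) ** 2 + (yy - N // 2) ** 2 < 15 ** 2
image = rng.normal(100.0, 5.0, size=(N, N))
image[disc] += 10.0
value = cnr(image, disc, ~disc)
print(f"CNR ≈ {value:.1f}")
```

As the text notes, identical CNR values can still correspond to very different detectability when target size, shape, or texture differ, which motivates the model observer approaches of Section 5.3.3.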
(167) In addition to fundamental contrast, spatial resolution, and noise evaluation, other factors, such as image uniformity and image artefacts, are also relevant for traditional image quality assessment. Image uniformity describes the ability of the imaging chain to keep the contrast representation constant (i.e. no additional contrast gradients or alteration to the background signal level) in the entire image field of view. Image artefacts refer to additional contrast features in the image which are not present in the imaged object. There are many types of artefacts in imaging systems, and they vary between radiological modalities (e.g. radiography plates may have scratches and punctate densities that mimic stones, while CT images may present ring artefacts if detector air calibration has not been performed successfully). Artefacts may also be caused by physiological and non-physiological patient motion, or medical devices that are inside, on, next to, or under a patient. Overall, image non-uniformities and artefacts should be monitored and avoided as they interfere with the image review regardless of the x-ray modality.
(168) The preceding image quality description concerns the technical imaging chain. At the image review and observer level, other parameters such as room illumination, display monitor performance, display viewing distance, and even operator noise (concerning inter- and intra-observer variability) eventually have an effect on image quality optimisation. Thus, a comprehensive evaluation of clinical image quality for optimisation is a highly challenging task. To develop the optimisation process and methods further, specifically to move further from the traditional image quality parameters and utilise more clinically relevant image quality metrics, model observer methods are introduced in Section 5.3.3 and expanded in Annex B.
Fig. 5.3. Example of circular contrast targets with two different contrast levels (stronger contrast on lower row images) and varying noise level (increasing noise from left to right images). Each image includes five targets with varying target size and random positions in the field of view. The images demonstrate the different visibility of targets, and how smaller targets are more difficult to detect despite the same contrast-to-noise ratio as the larger targets in the same image frame. Source: Mika Kortesniemi, Finland.
5.3. Objective technical image quality assessment: metrics and phantoms
5.3.1. Geometric image quality phantoms: requirements
(169) The three x-ray modalities – digital radiography, fluoroscopy, and CT – have in common certain basic tests and hence phantom design requirements to measure the technical performance of the systems in terms of image quality (Mah et al., 2001; Xu and Eckerman, 2009; DeWerd and Kissick, 2014; Hernandez-Giron et al., 2016). The phantoms used should contain patterns or test objects to enable measurement of uniformity, noise level, spatial resolution, low contrast detectability (threshold for the smallest object and/or contrast level of similar attenuation to the surrounding background) in terms of CNR, and the presence of possible artefacts in the images (an example for fluoroscopy is shown in Fig. 5.4). These basic image quality parameters do not require sophisticated tools for quantification, but they still reflect the most important features contributing to technical image quality. Therefore, they will define the Level C methods.
Fig. 5.4. Fluoroscopy images of an image quality phantom containing two radial distributions of low contrast targets (highlighted in green), dynamic contrast targets (highlighted in blue), and spatial resolution patterns (middle square). The selected abdominal protocol offered a low-dose (left) or high-dose (right) set-up. The low-dose protocol provided less x-ray quanta reaching the detector, and hence a lower image quality level, as can be seen in this image pair. Source: Irene Hernandez-Giron, The Netherlands.
(170) Image quality evaluation may cover an extensive set of methods beginning from the signal generation of the imaging modality, and ending with the diagnostic task using clinical data. An example of the fundamental technical level is the testing of the calibration curves used to translate the signal reaching the detector into the grey values in the images. As another example, a technical CT image quality phantom with a set of materials with well-described attenuation for the applied x-ray energy can be used (Fig. 5.5, left) to test the correct contrast performance of the scanner.
Fig. 5.5. Computed tomography (CT) image of a commercial phantom used to measure linearity, containing different materials with known attenuation properties (left). On the right is an example of a CT vendor-specific phantom used in regular testing. Source: Irene Hernandez-Giron, The Netherlands.
(171) There are numerous commercial phantoms that are widely accepted worldwide for these tasks by medical physics organisations, and used as standard in guidelines for QA. These phantoms are usually called ‘geometric phantoms’ because they have simple geometric shapes (such as cylinders, squares, lines, line pairs, and points). They contain patterns of objects in a uniform material background to measure the image quality metrics. The vendors of imaging devices often have their own basic phantoms that can be used to quickly check most of these parameters, if the specifications of such phantoms are known (Fig. 5.5, right).
(172) For all three modalities, especially for optimisation of clinical protocols, it is recommended to mimic (at least) the total attenuation of the patient. In the case of digital radiography and fluoroscopy, commercial phantoms are usually thin and do not reproduce the total attenuation of the patient’s head or body. They should be combined with polymethyl methacrylate blocks or copper plates placed at the x-ray tube exit to reach an equivalent total attenuation to a patient. This enables the imaging system to perform in automatic mode as it would in clinical use, and results in measurements of image quality and dose closer to the clinical set-up. If such an attenuator is not available, even a simple water container may be used to create the relevant net attenuation. In the case of CT, the phantoms used for protocol optimisation should not only reproduce the total attenuation of the patient, but also their shape and overall size in the x–y direction for the investigated indication (body region). This is particularly crucial in CT protocols combined with AEC. Some commercial phantoms have external rings that can be bought separately for this purpose (Fig. 5.6). As an alternative for soft tissue, water slabs or bolus can be used to increase the diameter and attenuation of the phantoms to make them closer to the desired patient size (Gardner et al., 2014).
Fig. 5.6. External Teflon ring to mimic the attenuation of the skull (left) placed around the Catphan phantom low contrast module, computed tomography image of this configuration, and the effect on low contrast detectability (centre), and two abdominal rings of different sizes (right). Source: Irene Hernandez-Giron, The Netherlands.
5.3.2. Low contrast detectability
(173) Low contrast detectability is frequently assessed by human observers estimating the number of objects (with a similar attenuation to the surrounding background) that can be detected in the images for CT (Fig. 5.7), digital radiography, and fluoroscopy. The patterns and object distribution are frequently known beforehand by the observers, which introduces bias in the results.
Fig. 5.7. Examples of the improvement in low contrast detectability in computed tomography phantom images with round contrast targets. Different visualisation of low contrast detectability is apparent and improves when dose is increased (between A and B) and slice thickness is increased (between B and C). Source: Irene Hernandez-Giron, The Netherlands.
5.3.3. Model observers
(174) Human observer studies are generally applied to validate and optimise the image quality of novel imaging systems before entering routine clinical use. This approach is time consuming, complex, and expensive. A simplified version with skilled observers (e.g. medical physicists) performing simple detection tasks, such as the assessment of low contrast detectability of targets (surrounded by uniform backgrounds) in geometric phantoms, is widely used. The results of these perception studies are constrained to the range of conditions and type of images analysed, which rarely represent all the available options in the imaging device. Besides this, wide intra- and inter-observer variability may appear, as will be discussed in the next section. Thus, alternative objective and reproducible methods are needed to avoid these bottlenecks of human observer studies.
(175) Consequently, there is a growing trend to use statistical decision theory for image quality assessment in medicine. Model observers are mathematical algorithms that were first introduced into medical imaging as surrogates of human observers for the detection and discrimination tasks of simple objects. As such, they are not a substitute for the clinical validation of protocols or systems, which is still crucial and needs the intervention of radiologists scoring patient images (Hernandez-Giron et al., 2011; Solomon and Samei, 2016; Ba et al., 2018; Viry et al., 2021). Furthermore, model observers cannot currently be regarded as routine tools for optimisation, but as a more advanced methodology which requires specific image processing or medical physicist knowledge for successful implementation. Nevertheless, model observers provide valuable tools as they pursue the characterisation of image quality in diagnostic tasks in an objective way, although in an approximate manner.
(176) Model observers have two main applications in medical imaging that will influence which model to select and how it should be implemented. The simplest application is to evaluate and optimise the acquisition performance of the medical imaging system. In this case, it may be sufficient for the ideal model observer to approximate a human observer (Barrett et al., 1993; He and Park, 2013). The second application is to test the image reconstruction process. This is a more demanding task, especially with modern CT scanners that include more complex image reconstruction algorithms. More complex model observers must be applied in this case, leading to anthropomorphic model observers.
(177) The anthropomorphic model observers include approximations to certain aspects of the visual perception process and its spatial frequency dependence in their implementation, expressed in mathematical form. These aspects can be related to the way that the human eye filters the frequencies present in the images or how the detection process is triggered in the human visual cortex. Two main subclasses of anthropomorphic model observers are used in medical imaging: the non-prewhitening matched filter with an eye filter, and the channelised Hotelling model observer. More detailed information about implementation of the model observers can be found in Annex B.
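To make the decision step concrete, the sketch below implements the simplest related model, a non-prewhitening matched filter without the eye filter or channels, and estimates a detectability index d′ from simulated signal-present and signal-absent images. The disc stimulus and white-noise ensemble are illustrative; the anthropomorphic variants described above add a visual-response filter (NPWE) or a channel matrix (channelised Hotelling) before this decision step.

```python
import numpy as np

def npw_detectability(signal_template, present_imgs, absent_imgs):
    """Detectability index d' for a non-prewhitening (NPW) matched filter.

    Test statistic per image g: lambda = s.T @ g, with s the noise-free
    expected signal; d' compares the lambda distributions for the
    signal-present and signal-absent ensembles.
    """
    s = np.asarray(signal_template, dtype=float).ravel()
    lam_p = present_imgs.reshape(len(present_imgs), -1) @ s
    lam_a = absent_imgs.reshape(len(absent_imgs), -1) @ s
    pooled_var = 0.5 * (lam_p.var(ddof=1) + lam_a.var(ddof=1))
    return (lam_p.mean() - lam_a.mean()) / np.sqrt(pooled_var)

# Two-alternative ensemble: low-contrast disc (amplitude 0.3) in unit
# white noise, 400 signal-present and 400 signal-absent trial images.
rng = np.random.default_rng(7)
N, n_trials = 32, 400
yy, xx = np.mgrid[:N, :N]
disc = (((xx - N / 2) ** 2 + (yy - N / 2) ** 2) < 5 ** 2).astype(float)
present = rng.normal(size=(n_trials, N, N)) + 0.3 * disc
absent = rng.normal(size=(n_trials, N, N))
d_prime = npw_detectability(0.3 * disc, present, absent)
```

For uncorrelated (white) noise the NPW observer coincides with the ideal linear observer; correlated noise, as produced by CT reconstruction, is where prewhitening or channelisation becomes important.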
(178) Besides the basic image metrics already mentioned, there is a more complex level related to the individual diagnostic tasks that will be dependent on the indication, disease, and patient variability. This would be related to applying model observers to anthropomorphic phantoms containing lesions or even patient images, and is an active field of research. In future, model observers are expected to be applied in connection with AI-based image quality assessment methods in order to provide these new methods with a well-established reference (ground truth) for training, validation, and testing.
5.4. Subjective evaluation of image quality
5.4.1. Evaluation of clinical image quality
(179) Subjective evaluation of clinical image quality by an expert reviewer is a cornerstone of practical radiological optimisation. Judgements of image quality must be made by professionals with appropriate training and experience. Regular and systematic clinical image quality evaluation has been undertaken as part of QA, or as the part of that process referred to as ‘self-assessment’. Clinical image quality evaluation is also an ongoing task of the radiographer during the normal clinical workflow, in cooperation with medical physicists when required (e.g. when a previously unseen or unknown artefact appears). For example, in projection radiography, this will be done immediately after the image acquisition, in order to verify that the image has been successfully produced with appropriate projection, collimation, and post-processing. Clinical image quality evaluation should also be an integral part of image review by radiologists and other radiological medical practitioners, with prompt feedback to those involved in image acquisition if sufficient image quality is not achieved due to technical or procedural reasons.
(180) Subjective expert evaluation of clinical image quality by radiologists forms part of the routine self-assessment process included in the QA programme of the radiological department. The subjective evaluation of clinical image quality should be graded based on image quality criteria, when available, for each modality and clinical indication. Ideally, this would be paired with patient dose audit. Applied image quality criteria explicitly describe the anatomical features that should be seen in a patient image, the coverage or projections that should be included, and how the patient should be aligned to secure a reliable and reproducible appearance of possible pathological findings. Eventually, this diagnostic quality and reliability can be described by sensitivity, specificity, accuracy, and predictive value related to specific clinical indications.
(181) To illustrate good image quality criteria with an example, a regular chest x-ray postero-anterior projection will be used. Many professional societies in medical imaging have offered guidelines for this traditional projection radiography view. Hereafter, the European guidelines that have already been in use for over two decades are summarised (EC, 1996).
(182) According to these guidelines, the postero-anterior chest radiograph (e.g. Fig. 5.8) diagnostic requirements should fulfil the image criteria shown in Table 5.1. The diagnostic requirements in any modality and examination should be accompanied by criteria for patient dose in terms of DRLs (and possible local DRLs), and recommended criteria for exposure or acquisition technique applied to the available imaging equipment.
Fig. 5.8. An example x-ray chest postero-anterior projection image produced according to the image quality criteria. Source: Ninewells Hospital, Dundee, UK.
(183) Establishment of simple image quality scoring criteria for subjective evaluation of clinical images for a range of scenarios, based on adequate visualisation of pertinent anatomical structures and the usefulness of the image, could be used to assess images in busy departments and help to reduce variability between observers. An example of this approach using image quality scoring criteria developed for paediatric CT images has been reported by Padole et al. (2019). Simple (and practical) scoring criteria of this type should be ‘indication-based’, and radiologists participating in the evaluations should first ensure that the criteria are applied consistently, as discussed earlier in this section.
(184) Subjective image quality evaluation is also related to the concept of visual grading characteristics (VGC) analysis, in which the main task is to score or grade how well relevant anatomical structures are reproduced in the images for a given indication. For example, several sets of images of the same patient (with varying acquisition and/or reconstruction parameters) can be presented to the radiologists, who have to determine whether the relevant anatomical or pathological structures are represented adequately (Båth and Månsson, 2007; Verdun et al., 2015). Another approach is to ask the observers to pick the image set they prefer in terms of diagnostic image quality. The outcome of optimisation based on VGC scoring of patient images will be highly dependent on the characteristics of the selected patient cohort, which should be representative of the general target population for the indication being studied. The final stage of the optimisation process should be tailored to the clinical application, and involve the multi-professional team of radiographers, medical physicists, and radiologists.
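To illustrate the non-parametric idea behind VGC analysis, the area under the VGC curve can be estimated as the probability that a randomly chosen grade from one image set exceeds a randomly chosen grade from the other, with ties counted as half. The following sketch uses hypothetical 5-point grades; the function name and data are illustrative, not taken from the cited studies.

```python
def vgc_auc(grades_a, grades_b):
    """Non-parametric area under the VGC curve.

    Estimates the probability that a randomly chosen image-quality
    grade from condition A exceeds one from condition B, counting
    ties as half. A value of 0.5 means the two conditions are rated
    equally; values above 0.5 favour condition A.
    """
    wins = ties = 0
    for a in grades_a:
        for b in grades_b:
            if a > b:
                wins += 1
            elif a == b:
                ties += 1
    return (wins + 0.5 * ties) / (len(grades_a) * len(grades_b))


# Hypothetical 5-point grades for the same cases under two protocols
standard_dose = [4, 5, 4, 3, 4]
reduced_dose = [3, 4, 3, 3, 4]
print(vgc_auc(standard_dose, reduced_dose))  # → 0.74
```

An AUC of 0.74 here would indicate that the standard-dose images are graded higher overall, quantifying the quality cost of the dose reduction for the team to weigh.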
(185) This approach to protocol optimisation, although necessary, is complex, expensive, and time consuming. As an alternative in some cases, anthropomorphic phantoms, which represent to some extent normal patient anatomy and even disease stages (e.g. different types of lung nodules or liver lesions, with varying composition and shape), can be used for concrete task-oriented protocol optimisation. Although these phantoms mimic patient anatomy and attenuation, and are realistic for dosimetric purposes, some of them lack certain relevant tissues (e.g. lung parenchyma), realistic tissue texture, or sufficient variability in terms of pathology distribution or characteristics (Gavrielides et al., 2017; Hernandez-Giron et al., 2019). Although they are a good starting point for clinical protocol optimisation and testing of artefacts, and are more realistic than traditional geometric phantoms, extrapolation of the outcomes of such phantom studies to patients has to be done with caution.
(186) The recent developments in 3D printing, as a customisable and low-cost alternative to create image quality phantoms, will probably improve the ability of phantoms to replicate tissue characteristics, and widen the range of patient and disease variability that can be used in evaluations (Filippou and Tsoumpas, 2018). An example of such phantoms for CT, mimicking a small section of the lung vessel distribution and combined with nodule surrogates (whose detectability can be analysed with model observers as a function of the selected protocol), is shown in Fig. 5.9 (Hernandez-Giron et al., 2019; Zhai et al., 2019).
(187) Various techniques can be useful in the detection of features or abnormalities, and these are described in more detail in the annexes to this publication. Receiver operating characteristic curve analysis can be used to compare performance between observers, or between two imaging protocols, in detection tasks involving decisions as to whether a case is ‘normal’ or ‘abnormal’ (Annex C). In a multi-alternative forced choice study, several images containing different alternatives are displayed simultaneously, and the observer is forced to choose one of the images as being ‘abnormal’ (Annex D).
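As a minimal sketch of how a multi-alternative forced choice study is scored: in each trial the observer rates all displayed images, and the trial counts as correct when the abnormal image receives the strictly highest rating. The data and function name below are hypothetical; for the two-alternative case, the proportion correct equals the area under the ROC curve.

```python
def mafc_percent_correct(trials):
    """Proportion of M-alternative forced choice trials answered correctly.

    Each trial is a pair (abnormal_rating, [normal_ratings]); the trial
    is 'correct' when the abnormal image has the strictly highest rating.
    """
    correct = sum(
        1
        for signal, normals in trials
        if all(signal > n for n in normals)
    )
    return correct / len(trials)


# Hypothetical 4-AFC ratings: one abnormal image vs three normals per trial
trials = [
    (0.9, [0.2, 0.4, 0.3]),  # correct
    (0.6, [0.7, 0.1, 0.2]),  # incorrect (a normal image rated higher)
    (0.8, [0.5, 0.6, 0.4]),  # correct
    (0.5, [0.1, 0.3, 0.2]),  # correct
]
print(mafc_percent_correct(trials))  # → 0.75
```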
Computed tomography (CT) images of a thorax phantom with two three-dimensional printed inserts mimicking lung vessels, combined with lung nodule surrogates. For an ultra-low-dose CT protocol (dose equivalent to a chest x-ray), the images were reconstructed with filtered back-projection on the left and with iterative reconstruction on the right. Source: Irene Hernandez-Giron and Wouter J.H. Veldkamp, The Netherlands. CLUES project (CLUES, 2021).
5.4.2. Role of medical displays and their performance in the image quality chain
(188) The scoring of medical images should be performed under internationally recommended visualisation conditions in a darkened room appropriate for diagnostic purposes (AAPM, 2019d). Medical diagnostic displays have to be calibrated to visualise DICOM images and comply with the pixel-size requirements of the relevant imaging modality. For instance, the requirements for mammography images in terms of pixel size are more demanding than those for CT or digital radiography. The most widely used criteria are those proposed by the American Association of Physicists in Medicine (AAPM) Task Group 270 on medical displays (AAPM, 2019d). AAPM has also released a set of test images (phantom and patient) that can be used to check display performance.
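Display calibration is typically verified by measuring luminance at a series of gray levels and comparing the step-to-step contrast response with targets derived from the DICOM Grayscale Standard Display Function. The sketch below assumes the GSDF target luminances are supplied as data (the standard's model is not reproduced here), and the 10% tolerance is an illustrative value, not a quoted requirement.

```python
def contrast_response(luminances):
    """Relative contrast (dL divided by mean L) between adjacent gray levels."""
    return [
        (hi - lo) / ((hi + lo) / 2)
        for lo, hi in zip(luminances, luminances[1:])
    ]


def check_display(measured, target, tolerance=0.10):
    """Compare measured vs target contrast response; return failing step indices.

    `target` would come from the DICOM Grayscale Standard Display Function
    evaluated over the display's luminance range; the default 10% tolerance
    is an assumed example value.
    """
    failures = []
    pairs = zip(contrast_response(measured), contrast_response(target))
    for step, (m, t) in enumerate(pairs):
        if abs(m - t) / t > tolerance:
            failures.append(step)
    return failures


# Hypothetical luminance measurements (cd/m2) at five gray levels
target = [1.0, 2.0, 4.0, 8.0, 16.0]
measured = [1.0, 2.1, 4.0, 7.0, 16.0]
print(check_display(measured, target))  # → [2, 3]
```

Here the dimmer steps track the target, while the two upper steps deviate by more than the assumed tolerance and would prompt recalibration.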
(189) The use of mobile devices, such as mobile phones and tablets, has been proposed as an alternative for the visualisation of medical images. The lifespan and performance of these types of screen have not been thoroughly studied so far, and with the available technology, they should never replace a calibrated diagnostic display. These mobile devices might be given clearance for use for medical applications, but they should always be calibrated to display medical images and also used in the correct ambient luminance conditions. Due to the shortage of radiologists worldwide, and, to a greater extent, in low- and middle-income countries, wide expansion of the utilisation of mobile phones has already started and is expected to increase markedly in the next few years. Therefore, concurrent evidence-based research studies are required to ensure the best performance of the mobile screens to be used in medical diagnostic applications (AAPM, 2018).
5.5. Future aspects of artificial intelligence in determining image quality
(190) AI and its subsets, machine learning and deep learning, are developing quickly to provide versatile methods for a wide range of optimisation-related tasks, and the image quality framework needs to evolve to address their impact on the imaging chain and on patient outcomes.
(191) AI was originally defined as an area of science where machines perform tasks which typically require human thinking (Boden, 1977). Within the concept of AI, machine learning is seen as a subset of AI methods aiming towards data-driven decisions via models created from large-scale training data (Natarajan et al., 2017). As such, machine learning may provide outcome prediction on new unseen data based entirely on earlier training data without previous programming or hand-crafted models. Therefore, machine learning methods learn from experience (Meyer et al., 2018). Further in the hierarchy of machine learning methods, deep learning forms a subset of machine learning with a gradually increasing level of abstraction as the data are fed through several data processing layers in a neural network architecture, providing higher abstraction level feature maps from the original input data (Krizhevsky et al., 2012; Adam et al., 2017; Meyer et al., 2018). The hierarchy of these methods is presented in Fig. 5.10.
(192) Various methods related to AI, machine learning, and deep learning are developing rapidly in health care, as in all sectors of science and industry (Ranschaert et al., 2019). Deep learning is already being used in CT image reconstruction. Increasing interest has been shown in machine learning for radiology because typical imaging objects, such as lesions and organs, appearing in medical images are, in practice, far too complex to be described by a simple equation or hand-crafted model as used in conventional computer-aided diagnostics (Litjens et al., 2017; Suzuki, 2017). Deep learning methods, especially convolutional neural networks and their variants, have already shown convincing results in medical imaging related to many diagnostic tasks traditionally handled by human experts. Such tasks include lesion or tissue localisation, segmentation, classification, and clinical outcome prediction (Litjens et al., 2017; van Assen et al., 2020; Barragán-Montero et al., 2021; van Leeuwen et al., 2021; Castiglioni et al., 2021).
Hierarchical concepts from artificial intelligence to machine learning and deep learning. Methods of artificial intelligence cover a wide range of applications where the availability of big data and increased computing power has enabled the rapid development of machine learning. As a subset of machine learning, deep learning has demonstrated versatility in application to several tasks also related to optimisation.
(193) The main challenge in AI methods (in supervised learning) has been access to a sufficient amount of annotated and representative training and validation data, which is a fundamental prerequisite to achieve sufficient robustness in making AI methods more applicable to clinical regimes (Adam et al., 2017; Litjens et al., 2017; Meyer et al., 2018). This robustness must also be proven with retrospective and prospective clinical validation trials extending to varying multi-centre data (involving, for example, multi-vendor data) before such methods can be safely applied in wider clinical routine.
(194) AI methods can also be applied directly in the optimisation of the radiological chain. Image quality classification and grading, in addition to patient-specific dosimetry, may be realised with a machine learning/deep learning approach (Samei et al., 2018). These fast-developing objective and efficient AI algorithms may complement and ultimately replace traditional methods such as model observers for image quality assessment and Monte Carlo simulations for dosimetry calculations (Lee et al., 2018; Maier et al., 2018). The first attempts to develop model observers that deal with patient images were based on deep learning, to tackle the variability of anatomical background and its influence on low-contrast detectability (Gong et al., 2019). The principle of optimisation goes much further than just balancing image quality and dose. In medical imaging, it should ultimately lead to objective and reliable quantification of diagnostic value in terms of care outcome. Therefore, the final conceptual level of optimisation should concern risk vs benefit assessment performed for individual patients and clinical procedures (Samei et al., 2018). In order to achieve such a comprehensive level of optimisation, many types of clinical and healthcare data are likely to be needed in combination with the diagnostic imaging data to produce adequate clinical metrics and multi-dimensional features for clinical outcome classification and prediction models (Esteva et al., 2019).
(195) In general, AI in health care can develop in synergy with the exponential growth of available curated data to create possibilities for better-informed decisions. Finally, these developments are expected to improve quality and safety of health care, and also reduce costs by enabling more predictive, preventive, and personalised care (Adam et al., 2017; Mollura et al., 2020).
5.6. Overview of stages of development
(196) Some of the steps discussed in Section 5 are set out in Table 5.2 in terms of the levels of optimisation discussed in Section 3. More information on implementation of image quality assessment relating to different modalities in terms of levels is given in Annex E.
6. EDUCATION AND TRAINING OF CLINICAL STAFF IN OPTIMISATION METHODS
6.1. Introduction
(198) The use of ionising radiation in medicine may result in unnecessary radiation exposure where equipment is in the hands of untrained or undertrained operators. However, this could be largely avoided if the operators were adequately trained in techniques for the optimisation of protection (Bor et al., 2008). Although the delay in manifestation of long-term health effects resulting from exposure to ionising radiation makes the associated risks difficult to comprehend or monitor, the overarching requirement ‘to do more good than harm’ makes radiological protection of patients an important ethical duty (ICRP, 2018). There are differences in perception of radiation safety among professionals (Moore et al., 2022). Education and training in radiological protection can enhance the understanding of personnel; foster the development of a culture of safety, teamwork, and professionalism; and improve workers’ satisfaction and commitment to radiological protection principles (ICRP, 2018). Investment in an adequate staffing level, with trained healthcare staff and a commitment to their CPD, is essential when considering investment in new imaging equipment and software.
6.2. Professionals with a role in the optimisation process
(200) Key professional groups each need a specific set of knowledge, skills, and competencies (KSCs) to ensure their effective contribution and participation as a team in the optimisation process. These KSCs are obtained during undergraduate education, and maintained throughout their careers during periods spent in training positions, residencies, and through focused courses for CPD.
(201) Education and training of health professionals involved in medical imaging should provide the KSCs needed for them to perform optimisation effectively as part of their role. This is not constrained to radiology, radiography, and medical physics professionals, but applies to the full range of professionals involved in medical imaging, as exemplified in Table 6.1. The training of these individuals needs to be built on throughout their careers, with the level and detail being dependent on their role. For technologically-based roles, it will include new protocols, software, imaging methods, and technologies as they become available. For other roles, such as those of the anaesthesiologist and management, simple awareness of the issues may be enough. To facilitate targeted and appropriate delivery of training, up-to-date training plans should be developed based on assessment of the needs of the local facility and staff (e.g. infrastructure, staffing, clinical workload, and available optimisation options).
(202) A core team for optimisation should include the medical physicist, the radiologist or other radiological medical practitioner, and the radiographer. It is imperative that an appropriate training regime is developed for this core group to become familiar with each other and with the new technologies. Once such training is in place, the team will be more responsible and prudent, more productive, and better able to fine-tune equipment settings to achieve better results for the specific imaging modality.
(203) Each of the professionals shown in Table 6.1 has an important role to play in optimisation but, due to their specific education as healthcare scientists, medical physicists have a key role in ensuring a link between the equipment/software and its clinical users. In many circumstances, the lack of access to a medical physicist qualified in medical imaging is an obstacle to optimisation. This problem is of particular importance for rural/small facilities, and for low- or middle-income countries where the profession does not exist or is not recognised as a healthcare profession. In addition to being responsible for technical QC and dosimetry, clinically qualified medical physicists have specific skills and competencies in optimisation. The Commission recommends that access to medical physicists qualified in medical imaging is ensured in all activities related to diagnostic and interventional radiology, and that their education and clinical training are adequate for performing their role in optimisation.
6.3. Understanding requirements of equipment operation for optimisation
6.3.1. The two pillars of optimisation
(204) There are many aspects to optimisation for building quality and safety in diagnostic imaging (see Section 1.2); these come under the two pillars of optimisation from
(205) It is important that training reinforces the concept that optimisation is iterative: it requires ongoing monitoring, team review, and analysis of performance to maintain and improve protocols, with dose reduced where possible or increased when image quality is not adequate, aided by continuous learning and feedback. Radiological protection culture relies on, and is built on, a safety culture existing within a healthcare facility or organisation (IRPA, 2014).
Two foundational pillars of optimisation on which quality and safety in diagnostic imaging are built: the facility design, equipment, and software on the left; and the trained professionals performing the workflow process and imaging protocols on the right.
6.3.2. Training issues arising from the complexity of digital imaging equipment
(206) In response to increased awareness of the need for patient radiation exposure management, vendors of medical imaging equipment have developed many technological solutions to improve image quality and reduce patient dose (AAPM, 2019c; Balter, 2019). Modern imaging equipment has more automatic and user-friendly control functions, allowing for easier day-to-day operation and improved optimisation. However, this can create a false perception that the equipment almost works by itself (rather analogous to the driverless car) in acquiring perfect images at the lowest possible doses for patient and staff, but this is far from the reality. Moreover, vendor pre-set protocols are often not configured adequately to provide the best optimisation.
(207) The automated systems to reduce patient dose in modern digital imaging equipment are complex. If they are set up correctly, they will provide a much better service with lower doses; but if they are set up incorrectly, features that could potentially reduce dose can have the opposite effect. Staff may be unaware of this because the images look good and the dose reduction tool has been switched on (Trianni et al., 2005). More complex equipment therefore requires more knowledgeable and skilled users behind the machine, so the need for careful, continuous staff training has never been more crucial than it is now.
(208) Facility managers and clinical stakeholders may be keen to invest in purchasing expensive, high-profile, imaging equipment, but if they do this, they must also support appropriate training programmes tailored to the imaging device for all the staff involved. An efficient strategy may be one of ‘cascade training’, where a few staff learn how to optimise the new device/software in more depth in order that they can then pass on their knowledge to more staff internally. Otherwise, the full potential of the equipment will not be realised, and patient doses could be increased rather than reduced. Radiology professionals responsible for management, quality, and patient and staff safety have a responsibility to ensure that facility management are aware of, and support, the need for the provision of adequate training. The same is true of vendors and their representatives. In this context, vendors also have a responsibility for the provision of tools and support to implement such specific end-user training.
(209) Developments in application and use of AI, notably machine learning, relating to optimisation of imaging are expanding rapidly and have, in some cases, demonstrated improvements in standardisation and optimisation of protocols compared with expert radiologists (Mukherjee et al., 2020; Pinto et al., 2021). As this rapidly expanding field moves forward, further developments will require validation, policy, and ethical oversight. This will, in turn, have particular implications for staff training, with a need for teamwork to achieve implementation and establish adequate QA processes (Levenson, 2012). The associated investment in training needs to be made now in order to avoid future clinical errors with potentially catastrophic consequences.
6.3.3. Understanding the concept of optimisation and the team approach
(210) Basic medical education complemented with specific clinical and imaging knowledge is assumed for medical imaging specialists, and such education is available from many sources. In addition to this basic and specialist education, imaging professionals must learn the radiological protection principle of optimisation, understand why they should care about it, and know how to work as members of the core team to implement it, and remain engaged in it, across a growing variety of imaging modalities, complex protocols, and patient sizes. These are important goals of the training for the core team professionals in optimisation that should be considered when developing learning objectives.
(211) For best results, optimisation education and training should aim to improve patient care and optimise clinical outcome rather than focus on ALARA or dose reduction alone. This is an iterative process (ICRP, 2006, 2017) and is strongly related to quality improvement, as well as the principles of biomedical ethics (ICRP, 2018; Beauchamp et al., 2019). In this context, training on optimisation should include means to improve professional knowledge, skills, and attitudes, and develop the competencies needed to effectively implement optimisation, which will also contribute to building stronger teams. Regular reflective meetings on optimisation and review of lessons learned from safety and near-miss events will support ongoing education.
(212) When training on optimisation is provided by a multi-disciplinary team, members will complement each other, improve mutual understanding, and foster a team culture. Multiple studies show that one of the obstacles for optimisation is the limited appreciation by different professionals of their respective roles and competencies. This creates a barrier to effective communication which leads to delays, protocol errors, sometimes repeat imaging, and patient safety concerns (Hyer and Novello, 2006; EPA, 2007; Lau et al., 2011a,b).
6.4. Provision of training
6.4.1. Ways of sharing information for learning
(213) Guidance on radiological protection education and training of healthcare professionals has been developed, for example, by ICRP and the European Commission, including optimisation among the list of essential learning topics (ICRP, 2009; EC, 2014). Other professional organisations have developed a variety of learning resources on efficient approaches for optimisation (IAEA, 2021a,b).
(214) Knowledge (theoretical basis), skills (ability to apply this knowledge), and attitudes (the personal and interpersonal behaviour needed to perform duties with high quality and safety), as components of a person’s KSCs, relevant to optimisation are obtained and maintained during university education, postgraduate training and residencies, and focused courses for CPD. Depending on the scope and purpose, these courses could either be targeted at one specific group of healthcare providers or encompass a broader multi-disciplinary group of professionals. The latter is particularly important for optimisation, so staff members can better understand and appreciate their respective roles and the roles of other professional groups. Radiology and facility management should make attendance at such courses accessible to all, rather than specific staff groups.
(215) Training on optimisation can be provided in structured courses and shorter focused sessions to stimulate interaction and knowledge/opinion sharing between professionals with different roles. Refresher courses and special focused sessions organised during professional congresses and scientific conferences, as well as learning initiatives from professional societies, vendors, or other organisations, have an important role in updating staff about new technological developments and sharing optimisation experiences.
(216) Records should be kept of training provided, and organisations that provide formal courses should be accredited. More information on methods for accreditation of courses, certification of individuals, evaluation of knowledge gained, and obtaining feedback from course participants is given in
(217) A combination of lectures, practical training, and hands-on sessions in small groups in a hospital environment has proved to be an effective learning approach for optimisation. Other options include using simulators, video tutorials, and e-learning tools. Some radiology student learning includes cartoon-based self-study, and other approaches use competitive 3D gaming virtual worlds such as ‘Second Life’ (Rudolphi‑Solero et al., 2021). This can be complemented with on-the-job training through scientific visits to imaging facilities with recognised good practice (Vassileva et al., 2012, 2013; Rastogi et al., 2020).
(218) Regular departmental meetings provide opportunities for imaging team members (i.e. radiologists, radiographers, and medical physicists) to discuss optimisation and quality improvement, identify priority actions, and distribute roles. Each facility should assess the training needs, taking into account the local conditions, and more information on training plans, the design of programmes, and different formats that might be considered for provision of training is discussed in
6.4.2. Improving teamwork skills
(219) Fostering multi-disciplinary teamwork is an essential component of optimisation training, and staff should receive instruction on team building and communication with other disciplines. The trainers themselves should be role models for proper teamwork, and this can be facilitated by cascade training activities on optimisation involving multi-disciplinary teams. This approach will support team culture and safety culture through improving mutual understanding and respect for others.
(220) Building a teamwork atmosphere as part of training on optimisation can be achieved by sharing activities; dividing the responsibility between team members; allocating multiple tasks so that individuals can substitute for each other whenever possible, and can easily relocate; sharing rewards and accountability; encouraging positive competition between the team members; and helping new members through the sharing of experiences within the team itself.
(221) Working in a team, in which there is a positive attitude towards improvement with encouragement to share lessons learned from mistakes or near-misses, and where members are open to positive criticism and willing to adapt their practices, will motivate team members to develop an innovative approach to optimisation (Moore, 2016, 2017).
6.5. Knowledge content adaptation as a basis for optimisation
(222) Each of the key professional groups needs a specific set of KSCs essential for their effective participation in the optimisation process. Competencies define the application of the knowledge, skills, and behaviours in the setting of daily practice.
(223) Current thinking would suggest that education and training in optimisation in medical imaging should be based on Bloom’s taxonomy of learning. It has long been recognised that learning takes place at an increasing level of complexity from the simple recall of facts to the process of analysis and evaluation (Fig. 6.2). This ascending order of complexity was first described by Benjamin Bloom, an American educationalist (Bloom and Krathwohl, 1956), and has since been revised to reflect more current approaches to teaching, learning, and evaluation (Anderson and Krathwohl, 2001). The taxonomy classifies forms and levels of learning based on the premise that an individual cannot apply or evaluate something until it is understood, and that learning at the higher level is dependent on having acquired the prerequisite knowledge and skills at lower levels. This is the basis for qualifications frameworks for lifelong learning worldwide (EPC, 2008; UNESCO, 2018; ACGME, 2019). Educational curricula that use Bloom’s taxonomy should be applied throughout the radiological protection worker’s career to ensure lifelong learning. Modules have been created entitled ‘entrustable professional activities’ which provide measurable assessment of individual KSCs (AAMC, 2014).
The forms and levels of learning identified in Bloom’s taxonomy, with brief description of the processes to which they might apply in the context of optimisation.
(224) This model enables the educator to define the student learning outcomes based on the KSCs that are necessary for radiological protection professionals to apply to optimisation at various levels in the clinical setting. Most of the topics are common for all groups, but the content may need to be adapted to the basic knowledge of each professional group. Examples of key KSCs that enable the development of training modules on optimisation as part of a radiological protection education and training programme are given in Annex F, and more comprehensive lists have been published elsewhere (ICRP, 2009; EC, 2014).
6.6. Responsibility for training
(225) Earlier recommendations of the Commission define responsibilities of different parties in respect of radiological protection education and training, and also apply to training related to optimisation (ICRP, 2009). Organisations highlighted in particular are: universities, training institutions, and scientific societies; radiological protection regulatory bodies and health authorities; international organisations; and radiology equipment vendors.
(226) Professional societies should provide training programmes, and regulatory bodies and health authorities have a critical role in requiring that training providers authorised to give certification for medical professionals have sufficient infrastructure and qualified staff for organisation of the training programmes. In addition, regulatory officers need to have a basic knowledge about optimisation approaches in different modalities to understand and appreciate the importance of optimisation. They need to understand the concept of DRLs and dose audits, and require their implementation during authorisation and inspection processes.
(227) The Bonn Call for Action jointly issued by IAEA and WHO in 2012 identified the need to enhance implementation of the principle of optimisation of protection and safety, and the need to strengthen radiological protection education and training of health professionals as two of the 10 priority actions to improve radiological protection in medicine in the next decade (IAEA/WHO, 2012). An online Bonn Call for Action implementation toolkit has been published recently by IAEA, including online resources for training in optimisation in several languages (IAEA, 2021a). Virtual and on-demand web-based packages can improve access to training and enable review of material independent of time and location. Online training materials could play a significant role for facilities in developing countries with fewer resources by reducing the demands of travelling and scheduling, and improve overall cost-efficiency. Annex C of
(228) Equipment vendors have an important role to play in providing training for new technologies that is relevant to optimisation. Training materials should be produced in parallel with the introduction of new imaging technology and software. Emphasis should be placed on the correct use of new equipment features that have the potential to reduce patient doses, the understanding of settings so that system features function correctly, adaptations for different patients and imaging tasks, and an appreciation of the significance of displays of dose quantities.
(229) Healthcare facility and radiology management have an important responsibility for ensuring sufficient human and financial resources for optimisation and associated training of staff (ICRP, 2007c). They should understand that investing in an adequate staffing level, staff training, and professional development helps to minimise errors and risks, and improve clinical results, and this applies to training in optimisation. Improvements in patient care and staff satisfaction that result increase the standing of the medical facility. Hospital management need to be made aware of training requirements linked to medical imaging, and the roles and responsibilities of different staff members, and allocate staff sufficient time to enable them to achieve and maintain competences.
(230) Healthcare professionals performing medical imaging have to assume their own responsibility for acquiring and maintaining their KSCs, including those in respect of their role in optimisation, as a basic requirement to practice their profession, and to keep themselves updated throughout their professional careers. Equal opportunities should be given for education and training to all staff, and this applies to training in optimisation. All trainers and staff should be treated equitably relative to training and in-services without regard to gender, seniority, ethnicity, or familial relationships.
