Last Updated: February 27, 2024
Artificial intelligence (AI) is an umbrella term for a variety of approaches that let computer programs perform tasks traditionally done by humans.
In diagnostics, AI can be used in cancer screening (e.g., for quality assurance in breast cancer screening), pathology (e.g., to automatically grade tumours), and critical care medicine (e.g., for the prediction and detection of sepsis). Automating tasks or generating informed predictions in these settings can increase the efficiency of clinical workflows, reduce costs, shorten the time needed to perform tasks and report findings, and free up costly and limited health human resources (e.g., specialized technicians or radiologists).
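To make the predictive use described above concrete, the sketch below trains a minimal logistic regression classifier on toy data to estimate a risk score such as sepsis likelihood. All features, values, and labels are hypothetical illustrations, not a clinical model, and real systems involve far more data, validation, and regulatory oversight:

```python
import math

# Hypothetical toy data: (heart-rate z-score, temperature z-score) -> label
# (1 = patient later developed sepsis). Values are invented for illustration.
data = [
    ((-1.0, -0.8), 0),
    (( 1.2,  1.1), 1),
    ((-0.6, -0.5), 0),
    (( 1.5,  1.4), 1),
    ((-1.1, -0.9), 0),
    (( 0.9,  1.0), 1),
]

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Fit logistic regression weights with plain stochastic gradient descent.
w = [0.0, 0.0]
b = 0.0
lr = 0.5
for _ in range(2000):
    for (x1, x2), y in data:
        p = sigmoid(w[0] * x1 + w[1] * x2 + b)  # predicted probability
        err = p - y                              # gradient of log loss
        w[0] -= lr * err * x1
        w[1] -= lr * err * x2
        b -= lr * err

# Score a new (hypothetical) patient with elevated vital signs.
risk = sigmoid(w[0] * 1.3 + w[1] * 1.2 + b)
print(f"Predicted sepsis risk: {risk:.2f}")
```

In practice, such a score would only flag patients for clinician review; the design choice of a simple, interpretable model reflects the transparency concerns discussed later in this article.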
In the field of public health, AI may help improve disease surveillance, detection, and mitigation by facilitating analyses of large volumes of complex, multi-sourced data from around the world. By using AI to support health data collection, researchers can investigate new methods of assessing the effectiveness of public health interventions, informing targeted health promotion activities, and forecasting disease incidence.
The adoption of AI in health care is disruptive because it will transform traditional approaches to clinical and health care decision-making, which can also introduce certain challenges. For example, there are concerns that some communities may be underrepresented in the data used to train AI algorithms, which could result in health disparities and discrimination, exacerbate access and trust issues in the health care system, and worsen health-related outcomes. Transparency, ongoing assessment of AI safety, and attention to ethical and equity concerns are key considerations for decision-makers.
Emerging examples of AI include: