<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE article PUBLIC "-//NLM//DTD Journal Publishing DTD v3.0 20080202//EN" "journalpublishing3.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink" xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xml:lang="en" article-type="review-article">
<?release-delay 0|0?>
<front>
<journal-meta>
<journal-id journal-id-type="publisher-id">MI</journal-id>
<journal-title-group>
<journal-title>Medicine International</journal-title>
</journal-title-group>
<issn pub-type="ppub">2754-3242</issn>
<issn pub-type="epub">2754-1304</issn>
<publisher>
<publisher-name>D.A. Spandidos</publisher-name>
</publisher>
</journal-meta>
<article-meta>
<article-id pub-id-type="publisher-id">MI-6-2-00304</article-id>
<article-id pub-id-type="doi">10.3892/mi.2026.304</article-id>
<article-categories>
<subj-group subj-group-type="heading">
<subject>Review</subject>
</subj-group>
</article-categories>
<title-group>
<article-title>Artificial intelligence in oncology: Current status and possibilities (Review)</article-title>
</title-group>
<contrib-group>
<contrib contrib-type="author">
<name><surname>Roy</surname><given-names>Abhavya</given-names></name>
<xref rid="af1-MI-6-2-00304" ref-type="aff">1</xref>
</contrib>
<contrib contrib-type="author" corresp="yes">
<name><surname>Bhoyar</surname><given-names>Apurva</given-names></name>
<xref rid="af2-MI-6-2-00304" ref-type="aff">2</xref>
<xref rid="c1-MI-6-2-00304" ref-type="corresp"/>
</contrib>
<contrib contrib-type="author">
<name><surname>Ahirwar</surname><given-names>Ashok</given-names></name>
<xref rid="af3-MI-6-2-00304" ref-type="aff">3</xref>
</contrib>
<contrib contrib-type="author">
<name><surname>Pawade</surname><given-names>Yogesh</given-names></name>
<xref rid="af2-MI-6-2-00304" ref-type="aff">2</xref>
</contrib>
<contrib contrib-type="author">
<name><surname>Chandra</surname><given-names>Nilesh</given-names></name>
<xref rid="af4-MI-6-2-00304" ref-type="aff">4</xref>
</contrib>
</contrib-group>
<aff id="af1-MI-6-2-00304"><label>1</label>University College of Medical Sciences, Guru Teg Bahadur Hospital, Delhi 110095, India</aff>
<aff id="af2-MI-6-2-00304"><label>2</label>Department of Biochemistry, All India Institute of Medical Sciences, Nagpur, Maharashtra 441108, India</aff>
<aff id="af3-MI-6-2-00304"><label>3</label>Department of Laboratory Medicine, All India Institute of Medical Sciences, New Delhi 110029, India</aff>
<aff id="af4-MI-6-2-00304"><label>4</label>Indian Council of Medical Research, Ansari Nagar, New Delhi 110029, India</aff>
<author-notes>
<corresp id="c1-MI-6-2-00304"><italic>Correspondence to:</italic> Dr Apurva Bhoyar, Department of Biochemistry, All India Institute of Medical Sciences, Plot no. 2, Sector 20, MIHAN, Nagpur, Maharashtra 441108, India <email>apurvasakarde@aiimsnagpur.edu.in</email></corresp>
</author-notes>
<pub-date pub-type="collection"><season>Mar-Apr</season><year>2026</year></pub-date>
<pub-date pub-type="epub"><day>19</day><month>02</month><year>2026</year></pub-date>
<volume>6</volume>
<issue>2</issue>
<elocation-id>20</elocation-id>
<history>
<date date-type="received">
<day>30</day>
<month>07</month>
<year>2025</year>
</date>
<date date-type="accepted">
<day>05</day>
<month>02</month>
<year>2026</year>
</date>
</history>
<permissions>
<copyright-statement>Copyright: &#x00A9; 2026 Roy et al.</copyright-statement>
<copyright-year>2026</copyright-year>
<license license-type="open-access">
<license-p>This is an open access article distributed under the terms of the <ext-link ext-link-type="uri" xlink:href="https://creativecommons.org/licenses/by/4.0/">Creative Commons Attribution License</ext-link>, which permits unrestricted use, distribution, reproduction and adaptation in any medium and for any purpose provided that it is properly attributed. For attribution, the original author(s), title, publication source (Medicine International) and either DOI or URL of the article must be cited.</license-p></license>
</permissions>
<abstract>
<p>Artificial intelligence (AI) is increasingly reshaping oncology by enhancing diagnostic accuracy, improving prognostication and enabling personalized treatment planning. The present review aimed to critically synthesize the contemporary landscape of AI applications across cancer imaging, digital pathology, clinical outcome prediction, chemotherapy and radiotherapy. Recent advances in machine learning and deep learning, particularly convolutional neural networks and transformer-based architectures, have demonstrated robust performance in lesion detection, tumour grading, survival prediction and treatment optimization, in several instances approaching or exceeding expert-level accuracy. Despite these advances, translation into routine clinical practice remains limited due to dataset bias, limited generalizability, the lack of standardized data protocols, insufficient interpretability and regulatory barriers. Ethical challenges related to fairness, transparency and equitable access are especially relevant in low- and middle-income countries. Emerging frontiers, including multimodal AI, foundation models, federated learning, and explainable AI, provide potential solutions to these challenges. Multidisciplinary collaboration, rigorous prospective validation and robust ethical governance will be essential to realize the full potential of AI in advancing precision oncology and improving global cancer outcomes.</p>
</abstract>
<kwd-group>
<kwd>oncology</kwd>
<kwd>machine learning</kwd>
<kwd>artificial intelligence</kwd>
<kwd>deep learning</kwd>
<kwd>radiomics</kwd>
<kwd>predictive oncology</kwd>
<kwd>cancer imaging</kwd>
<kwd>personalized cancer treatment</kwd>
<kwd>digital pathology</kwd>
</kwd-group>
<funding-group>
<funding-statement><bold>Funding:</bold> No funding was received.</funding-statement>
</funding-group>
</article-meta>
</front>
<body>
<sec>
<title>1. Introduction</title>
<p>Cancer remains a leading cause of morbidity and mortality worldwide, accounting for almost ten million deaths annually and imposing a growing burden on healthcare systems. Modern oncology requires the integration of complex diagnostic, prognostic and therapeutic information derived from medical imaging, histopathology, molecular profiling and longitudinal clinical data. Conventional workflows rely heavily on human interpretation, which is inherently subjective and prone to inter- and intra-observer variability, cognitive bias and information overload. These limitations contribute to variability in diagnosis, risk stratification and treatment selection.</p>
<p>Artificial intelligence (AI), defined as computational systems capable of learning, reasoning and pattern recognition, provides a paradigm shift by enabling automated, objective and data-driven clinical decision support (<xref rid="b1-MI-6-2-00304" ref-type="bibr">1</xref>). Advances in machine learning (ML) and deep learning (DL), particularly convolutional neural networks (CNNs), have driven major breakthroughs in medical image analysis and predictive modelling (<xref rid="b2-MI-6-2-00304" ref-type="bibr">2</xref>,<xref rid="b3-MI-6-2-00304" ref-type="bibr">3</xref>). Unlike early rule-based systems, contemporary AI models can extract high-dimensional features from complex data and model nonlinear relationships that are difficult to capture using traditional statistical approaches.</p>
<p>Multiple systematic reviews and meta-analyses have demonstrated that AI systems can achieve diagnostic performance comparable to, and in some contexts exceeding, that of healthcare professionals in selected oncologic tasks (<xref rid="b4-MI-6-2-00304 b5-MI-6-2-00304 b6-MI-6-2-00304 b7-MI-6-2-00304" ref-type="bibr">4-7</xref>). Despite this promise, widespread clinical adoption remains limited. Barriers include dataset bias, limited external generalizability, the lack of standardized data acquisition and annotation protocols, insufficient interpretability of model outputs and complex regulatory pathways (<xref rid="b8-MI-6-2-00304" ref-type="bibr">8</xref>). In addition, concerns regarding data privacy, accountability and equity have become increasingly prominent (<xref rid="b9-MI-6-2-00304" ref-type="bibr">9</xref>).</p>
<p>The present review aimed to provide a critical, clinically oriented synthesis of AI applications across the oncology continuum. Rather than offering a purely descriptive overview, it discusses methodological strengths and limitations, unmet clinical needs and emerging technological paradigms, in order to provide a practical roadmap for the responsible and equitable integration of AI into oncology practice.</p>
</sec>
<sec>
<title>2. AI in cancer imaging</title>
<p>Medical imaging represents the most mature and clinically advanced domain of AI applications in oncology. AI has been applied across the entire imaging pipeline, including acquisition, reconstruction, segmentation, detection, classification and prognostication, spanning modalities such as computed tomography (CT), magnetic resonance imaging (MRI), positron emission tomography, mammography and ultrasound.</p>
<p>DL-based reconstruction algorithms have enabled substantial improvements in image quality by reducing noise and artefacts, facilitating lower radiation doses and shorter acquisition times without compromising diagnostic accuracy. These advances are particularly relevant in CT and MRI, where dose reduction and accelerated imaging have direct implications for patient safety and workflow efficiency. AI-driven image reconstruction has also enabled improved visualization of subtle lesions, potentially enhancing early cancer detection (<xref rid="b10-MI-6-2-00304" ref-type="bibr">10</xref>).</p>
<p>CNN-based computer-aided diagnosis (CADx) systems analyse lesion morphology, texture and spatial context to support malignancy detection, staging and risk stratification (<xref rid="b11-MI-6-2-00304" ref-type="bibr">11</xref>). Robust performance has been demonstrated in melanoma classification (<xref rid="b12-MI-6-2-00304" ref-type="bibr">12</xref>), real-time colorectal polyp detection during colonoscopy (<xref rid="b13-MI-6-2-00304" ref-type="bibr">13</xref>), the identification of nodal metastases in head and neck cancer (<xref rid="b14-MI-6-2-00304" ref-type="bibr">14</xref>) and breast cancer risk prediction using mammography (<xref rid="b15-MI-6-2-00304" ref-type="bibr">15</xref>). Several of these systems have progressed to regulatory approval, reflecting the increasing acceptance of AI-enabled imaging tools by agencies such as the US Food and Drug Administration (<xref rid="b16-MI-6-2-00304" ref-type="bibr">16</xref>).</p>
<p>Beyond detection, AI-based imaging facilitates tumour phenotyping and prognostication by integrating information across multiple modalities (<xref rid="tI-MI-6-2-00304" ref-type="table">Table I</xref>). Radiomic and DL features extracted from imaging data have been associated with molecular subtypes, treatment response and survival outcomes (<xref rid="b17-MI-6-2-00304" ref-type="bibr">17</xref>). However, many published studies remain retrospective and single-centre, with limited demographic and technical diversity, increasing the risk of overfitting and dataset bias (<xref rid="b18-MI-6-2-00304 b19-MI-6-2-00304 b20-MI-6-2-00304" ref-type="bibr">18-20</xref>). Variability in scanner hardware, acquisition protocols and reconstruction parameters further constrains generalizability (<xref rid="tII-MI-6-2-00304" ref-type="table">Table II</xref>) (<xref rid="b21-MI-6-2-00304" ref-type="bibr">21</xref>).</p>
<p>Another persistent challenge is interpretability. The black-box nature of DL models complicates the understanding of decision pathways and limits clinician trust. Explainable AI techniques, including saliency mapping and concept-based models, are increasingly explored to address this limitation (<xref rid="b22-MI-6-2-00304" ref-type="bibr">22</xref>). Federated learning offers a promising strategy for multi-institutional model development while preserving data privacy (<xref rid="tII-MI-6-2-00304" ref-type="table">Table II</xref>) (<xref rid="b23-MI-6-2-00304" ref-type="bibr">23</xref>). Future research should prioritize prospective, multi-centre validation studies, standardized benchmarking datasets and transparent reporting to enable meaningful clinical translation.</p>
</sec>
<sec>
<title>3. AI in digital pathology</title>
<p>The digitization of histopathology through whole-slide imaging has catalysed rapid growth in AI-driven computational pathology (<xref rid="tI-MI-6-2-00304" ref-type="table">Table I</xref>). Digital slides enable large-scale analysis using DL architectures such as CNNs, fully convolutional networks and, more recently, vision transformers. These models have demonstrated high accuracy in tumour detection, grading and prognostication across multiple cancer types (<xref rid="tII-MI-6-2-00304" ref-type="table">Table II</xref>) (<xref rid="b24-MI-6-2-00304" ref-type="bibr">24</xref>).</p>
<p>Weakly supervised learning approaches have been particularly impactful, enabling model training using slide-level labels rather than exhaustive pixel-level annotations. Clinical-grade performance has been reported for tasks such as Gleason grading in prostate cancer and lymph node metastasis detection (<xref rid="b25-MI-6-2-00304" ref-type="bibr">25</xref>). Beyond morphological assessment, AI models can infer genomic alterations and clinically actionable mutations directly from routine haematoxylin and eosin-stained slides, including in lung and gastrointestinal cancers (<xref rid="b26-MI-6-2-00304" ref-type="bibr">26</xref>,<xref rid="b27-MI-6-2-00304" ref-type="bibr">27</xref>).</p>
<p>Several studies have demonstrated that AI-derived histopathological features are associated with survival outcomes and therapeutic response, particularly when integrated with molecular and clinical data. These findings suggest that AI-enabled pathology may serve not only as a diagnostic adjunct, but also as a prognostic and predictive tool. In resource-limited settings, the ability to infer molecular information from routine histology has potential implications for cost-effective precision oncology (<xref rid="b26-MI-6-2-00304" ref-type="bibr">26</xref>).</p>
<p>Despite these advances, digital pathology faces substantial challenges. Variability in staining protocols, scanner technologies and laboratory workflows limits reproducibility across institutions. Dataset bias and relatively small sample sizes further restrict external validity (<xref rid="b28-MI-6-2-00304" ref-type="bibr">28</xref>). Regulatory approval is complicated by limited interpretability and the paucity of prospective validation studies.</p>
<p>Future efforts should focus on standardized pre-processing pipelines, harmonized data annotation practices and pathology-specific reporting guidelines, including extensions of frameworks such as TRIPOD (<xref rid="b29-MI-6-2-00304" ref-type="bibr">29</xref>). Explainable AI methods that highlight diagnostically relevant histological features may improve transparency and pathologist confidence. Federated learning frameworks can facilitate multi-centre collaboration while preserving patient privacy, and multimodal integration with radiologic and molecular data may further enhance predictive performance (<xref rid="tII-MI-6-2-00304" ref-type="table">Table II</xref>). The integration of AI across oncological pathology, highlighting its role in digital slide analysis, biomarker profiling, tumour detection and predictive prognostication, is illustrated in <xref rid="f1-MI-6-2-00304" ref-type="fig">Fig. 1</xref>; the schematic emphasizes multimodal data processing from histopathology and molecular features to outcome prediction, demonstrating how AI supports diagnosis, grading and personalized therapy planning within a unified pathology workflow.</p>
</sec>
<sec>
<title>4. AI in predicting clinical outcomes</title>
<p>Predicting clinical outcomes, such as survival, treatment toxicity and therapeutic response, is central to personalized oncology (<xref rid="tI-MI-6-2-00304" ref-type="table">Table I</xref>). AI models integrating radiologic, histopathologic, genomic and clinical data have shown promise in improving risk stratification beyond traditional staging systems (<xref rid="b30-MI-6-2-00304 b31-MI-6-2-00304 b32-MI-6-2-00304 b33-MI-6-2-00304" ref-type="bibr">30-33</xref>). Ensemble ML methods and deep neural networks have been applied to predict progression-free and overall survival in lung, colorectal, breast and head and neck cancers (<xref rid="b34-MI-6-2-00304 b35-MI-6-2-00304 b36-MI-6-2-00304" ref-type="bibr">34-36</xref>).</p>
<p>These models provide several potential advantages, including the early identification of high-risk patients, the proactive modification of treatment strategies and the improved selection of therapeutic modalities. However, many outcome prediction models have been developed using limited or single-institution datasets, increasing susceptibility to overfitting and reducing external validity (<xref rid="b37-MI-6-2-00304 b38-MI-6-2-00304 b39-MI-6-2-00304 b40-MI-6-2-00304" ref-type="bibr">37-40</xref>). Interpretability remains a major challenge, particularly when models generate probabilistic outputs without clear clinical actionability.</p>
<p>Another limitation is the limited incorporation of longitudinal data. Cancer progression and treatment response are dynamic processes, and static baseline models may fail to capture temporal changes. Recent research has emphasized the importance of longitudinal modelling using real-world data and post-deployment auditing to ensure safety and performance stability over time (<xref rid="b4-MI-6-2-00304" ref-type="bibr">4</xref>,<xref rid="b41-MI-6-2-00304" ref-type="bibr">41</xref>,<xref rid="b42-MI-6-2-00304" ref-type="bibr">42</xref>) (<xref rid="tIII-MI-6-2-00304" ref-type="table">Table III</xref>). Prospective, multi-centre validation and close collaboration between clinicians and data scientists are essential to align AI tools with real clinical needs.</p>
</sec>
<sec>
<title>5. AI in chemotherapy</title>
<p>Chemotherapy selection and dosing require careful consideration of tumour biology, patient characteristics and the risk of toxicity (<xref rid="tI-MI-6-2-00304" ref-type="table">Table I</xref>). ML and DL approaches have improved the prediction of drug response by modelling complex pharmacogenomic and molecular interactions that are difficult to capture using conventional statistical methods (<xref rid="b43-MI-6-2-00304 b44-MI-6-2-00304 b45-MI-6-2-00304 b46-MI-6-2-00304 b47-MI-6-2-00304" ref-type="bibr">43-47</xref>). Models trained on large cancer cell-line datasets and multi-omics data have demonstrated superior performance in predicting drug sensitivity and resistance (<xref rid="b48-MI-6-2-00304" ref-type="bibr">48</xref>).</p>
<p>Despite encouraging preclinical and retrospective results, translation to routine clinical care remains limited. High-quality, well-annotated clinical pharmacogenomic datasets are scarce, and the majority of models lack prospective validation (<xref rid="b1-MI-6-2-00304" ref-type="bibr">1</xref>,<xref rid="b49-MI-6-2-00304" ref-type="bibr">49</xref>). Poor interpretability further constrains clinician confidence and shared decision-making (<xref rid="tII-MI-6-2-00304" ref-type="table">Table II</xref>) (<xref rid="b50-MI-6-2-00304" ref-type="bibr">50</xref>,<xref rid="b51-MI-6-2-00304" ref-type="bibr">51</xref>). Reporting guidelines for clinical trials evaluating AI interventions underscore the need for rigorous study design and transparency.</p>
<p>Future directions include the integration of real-world clinical data, the development of explainable AI frameworks and prospective clinical trials evaluating AI-guided chemotherapy strategies. Reinforcement learning approaches may enable adaptive treatment optimization based on individual patient response trajectories.</p>
</sec>
<sec>
<title>6. AI in radiotherapy</title>
<p>Radiotherapy is a highly data-intensive discipline, making it particularly amenable to AI-driven optimization. Applications include automated contouring, dose calculation, treatment planning, toxicity prediction and adaptive radiotherapy (<xref rid="tI-MI-6-2-00304" ref-type="table">Table I</xref>). While Monte Carlo simulations remain the gold standard for dose calculation, they are computationally intensive (<xref rid="b52-MI-6-2-00304 b53-MI-6-2-00304 b54-MI-6-2-00304" ref-type="bibr">52-54</xref>). DL models can generate accurate dose distributions rapidly, improving workflow efficiency (<xref rid="b55-MI-6-2-00304" ref-type="bibr">55</xref>).</p>
<p>Reinforcement learning has been explored for adaptive radiotherapy, enabling treatment plans to evolve in response to anatomical and biological changes during the course of therapy (<xref rid="b56-MI-6-2-00304" ref-type="bibr">56</xref>). Generative adversarial networks have also been investigated for synthetic data generation to address limited sample sizes and class imbalance. However, challenges related to interpretability, regulatory approval and integration into existing planning systems persist (<xref rid="tI-MI-6-2-00304" ref-type="table">Table I</xref>) (<xref rid="b57-MI-6-2-00304" ref-type="bibr">57</xref>).</p>
<p>Future research should prioritize transparent and interpretable models that provide clinically meaningful rationales for dose modification and toxicity prediction. Standardized validation protocols, prospective clinical evaluation and integration with clinical decision-support systems will be critical for safe and effective deployment.</p>
</sec>
<sec>
<title>7. Challenges and ethical considerations</title>
<p>The clinical translation of AI in oncology is hampered by regulatory requirements demanding robust evidence of safety, efficacy and generalizability. Data-related challenges (<xref rid="tIII-MI-6-2-00304" ref-type="table">Table III</xref>) include inconsistent acquisition, annotation and pre-processing protocols, as well as sampling and observation bias. Models trained on unrepresentative datasets may perform poorly in underrepresented populations, exacerbating health disparities, particularly in low- and middle-income countries (<xref rid="b58-MI-6-2-00304 b59-MI-6-2-00304 b60-MI-6-2-00304" ref-type="bibr">58-60</xref>).</p>
<p>Several AI systems in oncology have indeed been withdrawn or scaled back following deployment due to inadequate validation and poor real-world performance. Perhaps the most notable example is IBM Watson for Oncology, which was marketed as an AI-driven treatment recommendation tool but lacked robust clinical validation. Its recommendations often diverged from real-world practice, and the system was ultimately discontinued as hospitals withdrew and IBM sold its Watson Health division, having failed to demonstrate clinical utility and safety (<xref rid="b61-MI-6-2-00304" ref-type="bibr">61</xref>,<xref rid="b62-MI-6-2-00304" ref-type="bibr">62</xref>). Earlier oncology decision support prototypes, including rule-based systems such as OncoDoc and its successors, were never widely adopted outside research settings due to high rates of guideline-discordant recommendations and insufficient validation on diverse clinical data (<xref rid="b63-MI-6-2-00304" ref-type="bibr">63</xref>). These cases underscore the risks of implementing AI in oncology without rigorous external validation and prospective outcome evaluation before broad clinical use.</p>
<p>Ethical governance frameworks need to prioritize fairness, transparency, accountability and inclusivity to prevent AI from reinforcing existing inequities (<xref rid="tIII-MI-6-2-00304" ref-type="table">Table III</xref>). Robust data governance policies are required to balance innovation with privacy, informed consent and security. The engagement of all stakeholders, including patients, clinicians, developers, regulators and ethicists, is essential, as is continuous performance monitoring and post-deployment auditing.</p>
</sec>
<sec>
<title>8. Emerging frontiers</title>
<p>Several emerging paradigms have the potential to overcome persistent scientific, clinical and implementation barriers in AI-enabled oncology. Among these, multimodal AI (<xref rid="tII-MI-6-2-00304" ref-type="table">Table II</xref>) represents a critical advancement by integrating heterogeneous data sources, namely radiological imaging, digital pathology, genomics, proteomics, laboratory parameters and longitudinal electronic health records, into unified predictive frameworks. Such models better capture tumour heterogeneity, temporal disease evolution and patient-specific context, thereby enabling more accurate diagnosis, risk stratification, treatment selection and outcome prediction compared with unimodal approaches (<xref rid="b64-MI-6-2-00304" ref-type="bibr">64</xref>).</p>
<p>Foundation models (<xref rid="tII-MI-6-2-00304" ref-type="table">Table II</xref>) pretrained on large, diverse and multi-institutional datasets are increasingly influential in oncology. These models leverage self-supervised or weakly supervised learning to acquire generalizable representations that can be efficiently fine-tuned for specific cancer types, modalities, or clinical tasks. Foundation models reduce the dependence on extensive labelled datasets, enhance robustness across populations and scanners, and facilitate rapid deployment in resource-variable settings, including smaller centres (<xref rid="b65-MI-6-2-00304 b66-MI-6-2-00304 b67-MI-6-2-00304" ref-type="bibr">65-67</xref>).</p>
<p>Federated learning (<xref rid="tII-MI-6-2-00304" ref-type="table">Table II</xref>) provides a pragmatic solution to data-sharing constraints by enabling collaborative model training across institutions without centralized transfer of sensitive patient data. This paradigm is particularly valuable in oncology, where data scarcity, privacy regulations, and institutional silos limit model generalizability. Federated approaches can improve performance across diverse demographic and clinical settings while maintaining compliance with data protection frameworks (<xref rid="tII-MI-6-2-00304" ref-type="table">Table II</xref>) (<xref rid="b68-MI-6-2-00304" ref-type="bibr">68</xref>,<xref rid="b69-MI-6-2-00304" ref-type="bibr">69</xref>).</p>
<p>Despite these technical advances, explainable AI remains central to clinical adoption, regulatory approval and medicolegal accountability. Transparent models that provide interpretable features, uncertainty estimates, and clinically meaningful visualizations foster clinician trust and support safe integration into decision-making workflows (<xref rid="tII-MI-6-2-00304" ref-type="table">Table II</xref>) (<xref rid="b70-MI-6-2-00304" ref-type="bibr">70</xref>,<xref rid="b71-MI-6-2-00304" ref-type="bibr">71</xref>).</p>
<p>Finally, ethical and equity considerations are especially salient in low- and middle-income countries. Challenges related to infrastructure, data quality, workforce training and algorithmic bias (<xref rid="tIII-MI-6-2-00304" ref-type="table">Table III</xref>) need to be addressed through targeted capacity building, inclusive dataset development and international collaboration. Without deliberate governance and context-aware implementation, AI risks exacerbating existing disparities (<xref rid="b72-MI-6-2-00304" ref-type="bibr">72</xref>,<xref rid="b73-MI-6-2-00304" ref-type="bibr">73</xref>). Ensuring fairness, transparency and accessibility will be essential for realizing the global promise of AI-driven oncology.</p>
</sec>
<sec>
<title>9. Conclusion</title>
<p>AI holds transformative potential to advance precision oncology by improving diagnostic accuracy, prognostication and treatment optimization. Realizing this potential requires rigorous prospective validation, transparent model design, multidisciplinary collaboration and robust ethical safeguards. With responsible development and implementation, AI can become an integral component of oncology practice and contribute meaningfully to improved cancer outcomes worldwide.</p>
</sec>
</body>
<back>
<ack>
<title>Acknowledgements</title>
<p>Not applicable.</p>
</ack>
<sec sec-type="data-availability">
<title>Availability of data and materials</title>
<p>Not applicable.</p>
</sec>
<sec>
<title>Authors&#x0027; contributions</title>
<p>AR and AB jointly conceptualized the review. AR, AB, AA and NC generated the outline of the review. AR, AB, AA and YP drafted the manuscript. AA, YP and NC reviewed the manuscript. Data authentication is not applicable. All authors have read and approved the final version of the manuscript.</p>
</sec>
<sec>
<title>Ethics approval and consent to participate</title>
<p>Not applicable.</p>
</sec>
<sec>
<title>Patient consent for publication</title>
<p>Not applicable.</p>
</sec>
<sec sec-type="COI-statement">
<title>Competing interests</title>
<p>The authors declare that they have no competing interests.</p>
</sec>
<ref-list>
<title>References</title>
<ref id="b1-MI-6-2-00304"><label>1</label><element-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Topol</surname><given-names>EJ</given-names></name></person-group><article-title>High-performance medicine: The convergence of human and artificial intelligence</article-title><source>Nat Med</source><volume>25</volume><fpage>44</fpage><lpage>56</lpage><year>2019</year><pub-id pub-id-type="pmid">30617339</pub-id><pub-id pub-id-type="doi">10.1038/s41591-018-0300-7</pub-id></element-citation></ref>
<ref id="b2-MI-6-2-00304"><label>2</label><element-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Esteva</surname><given-names>A</given-names></name><name><surname>Chou</surname><given-names>K</given-names></name><name><surname>Yeung</surname><given-names>S</given-names></name><name><surname>Naik</surname><given-names>N</given-names></name><name><surname>Madani</surname><given-names>A</given-names></name><name><surname>Mottaghi</surname><given-names>A</given-names></name><name><surname>Liu</surname><given-names>Y</given-names></name><name><surname>Topol</surname><given-names>E</given-names></name><name><surname>Dean</surname><given-names>J</given-names></name><name><surname>Socher</surname><given-names>R</given-names></name></person-group><article-title>Deep learning-enabled medical computer vision</article-title><source>NPJ Digit Med</source><volume>4</volume><issue>5</issue><year>2021</year><pub-id pub-id-type="pmid">33420381</pub-id><pub-id pub-id-type="doi">10.1038/s41746-020-00376-2</pub-id></element-citation></ref>
<ref id="b3-MI-6-2-00304"><label>3</label><element-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Biswas</surname><given-names>M</given-names></name><name><surname>Kuppili</surname><given-names>V</given-names></name><name><surname>Saba</surname><given-names>L</given-names></name><name><surname>Edla</surname><given-names>DR</given-names></name><name><surname>Suri</surname><given-names>HS</given-names></name><name><surname>Cuadrado-Godia</surname><given-names>E</given-names></name><name><surname>Laird</surname><given-names>JR</given-names></name><name><surname>Marinhoe</surname><given-names>RT</given-names></name><name><surname>Sanches</surname><given-names>JM</given-names></name><name><surname>Nicolaides</surname><given-names>A</given-names></name><name><surname>Suri</surname><given-names>JS</given-names></name></person-group><article-title>State-of-the-art review on deep learning in medical imaging</article-title><source>Front Biosci (Landmark Ed)</source><volume>24</volume><fpage>392</fpage><lpage>426</lpage><year>2019</year><pub-id pub-id-type="pmid">30468663</pub-id><pub-id pub-id-type="doi">10.2741/4725</pub-id></element-citation></ref>
<ref id="b4-MI-6-2-00304"><label>4</label><element-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Xu</surname><given-names>Y</given-names></name><name><surname>Li</surname><given-names>Y</given-names></name><name><surname>Wang</surname><given-names>F</given-names></name><name><surname>Zhang</surname><given-names>Y</given-names></name><name><surname>Huang</surname><given-names>D</given-names></name></person-group><article-title>Addressing the current challenges in the clinical application of AI-based radiomics for cancer imaging</article-title><source>Front Med (Lausanne)</source><volume>12</volume><issue>1674397</issue><year>2025</year><pub-id pub-id-type="pmid">41090135</pub-id><pub-id pub-id-type="doi">10.3389/fmed.2025.1674397</pub-id></element-citation></ref>
<ref id="b5-MI-6-2-00304"><label>5</label><element-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Abbas</surname><given-names>Q</given-names></name><name><surname>Jeong</surname><given-names>W</given-names></name><name><surname>Lee</surname><given-names>SW</given-names></name></person-group><article-title>Explainable AI in clinical decision support systems: A meta-analysis of methods, applications, and usability challenges</article-title><source>Healthcare (Basel)</source><volume>13</volume><issue>2154</issue><year>2025</year><pub-id pub-id-type="pmid">40941506</pub-id><pub-id pub-id-type="doi">10.3390/healthcare13172154</pub-id></element-citation></ref>
<ref id="b6-MI-6-2-00304"><label>6</label><element-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Chua</surname><given-names>IS</given-names></name><name><surname>Gaziel-Yablowitz</surname><given-names>M</given-names></name><name><surname>Korach</surname><given-names>ZT</given-names></name><name><surname>Kehl</surname><given-names>KL</given-names></name><name><surname>Levitan</surname><given-names>NA</given-names></name><name><surname>Arriaga</surname><given-names>YE</given-names></name><name><surname>Jackson</surname><given-names>GP</given-names></name><name><surname>Bates</surname><given-names>DW</given-names></name><name><surname>Hassett</surname><given-names>M</given-names></name></person-group><article-title>Artificial intelligence in oncology: Path to implementation</article-title><source>Cancer Med</source><volume>10</volume><fpage>4138</fpage><lpage>4149</lpage><year>2021</year><pub-id pub-id-type="pmid">33960708</pub-id><pub-id pub-id-type="doi">10.1002/cam4.3935</pub-id></element-citation></ref>
<ref id="b7-MI-6-2-00304"><label>7</label><element-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Sultan</surname><given-names>AS</given-names></name><name><surname>Elgharib</surname><given-names>MA</given-names></name><name><surname>Tavares</surname><given-names>T</given-names></name><name><surname>Jessri</surname><given-names>M</given-names></name><name><surname>Basile</surname><given-names>JR</given-names></name></person-group><article-title>The use of artificial intelligence, machine learning and deep learning in oncologic histopathology</article-title><source>J Oral Pathol Med</source><volume>49</volume><fpage>849</fpage><lpage>856</lpage><year>2020</year><pub-id pub-id-type="pmid">32449232</pub-id><pub-id pub-id-type="doi">10.1111/jop.13042</pub-id></element-citation></ref>
<ref id="b8-MI-6-2-00304"><label>8</label><element-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Hong</surname><given-names>GS</given-names></name><name><surname>Jang</surname><given-names>M</given-names></name><name><surname>Kyung</surname><given-names>S</given-names></name><name><surname>Cho</surname><given-names>K</given-names></name><name><surname>Jeong</surname><given-names>J</given-names></name><name><surname>Lee</surname><given-names>GY</given-names></name><name><surname>Shin</surname><given-names>K</given-names></name><name><surname>Kim</surname><given-names>KD</given-names></name><name><surname>Ryu</surname><given-names>SM</given-names></name><name><surname>Seo</surname><given-names>JB</given-names></name><etal/></person-group><article-title>Overcoming the challenges in the development and implementation of artificial intelligence in radiology: A comprehensive review of solutions beyond supervised learning</article-title><source>Korean J Radiol</source><volume>24</volume><fpage>1061</fpage><lpage>1080</lpage><year>2023</year><pub-id pub-id-type="pmid">37724586</pub-id><pub-id pub-id-type="doi">10.3348/kjr.2023.0393</pub-id></element-citation></ref>
<ref id="b9-MI-6-2-00304"><label>9</label><element-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Ueda</surname><given-names>D</given-names></name><name><surname>Kakinuma</surname><given-names>T</given-names></name><name><surname>Fujita</surname><given-names>S</given-names></name><name><surname>Kamagata</surname><given-names>K</given-names></name><name><surname>Fushimi</surname><given-names>Y</given-names></name><name><surname>Ito</surname><given-names>R</given-names></name><name><surname>Matsui</surname><given-names>Y</given-names></name><name><surname>Nozaki</surname><given-names>T</given-names></name><name><surname>Nakaura</surname><given-names>T</given-names></name><name><surname>Fujima</surname><given-names>N</given-names></name><etal/></person-group><article-title>Fairness of artificial intelligence in healthcare: Review and recommendations</article-title><source>Jpn J Radiol</source><volume>42</volume><fpage>3</fpage><lpage>15</lpage><year>2024</year><pub-id pub-id-type="pmid">37540463</pub-id><pub-id pub-id-type="doi">10.1007/s11604-023-01474-3</pub-id></element-citation></ref>
<ref id="b10-MI-6-2-00304"><label>10</label><element-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Melazzini</surname><given-names>L</given-names></name><name><surname>Bortolotto</surname><given-names>C</given-names></name><name><surname>Brizzi</surname><given-names>L</given-names></name><name><surname>Achilli</surname><given-names>M</given-names></name><name><surname>Basla</surname><given-names>N</given-names></name><name><surname>D&#x0027;Onorio De Meo</surname><given-names>A</given-names></name><name><surname>Gerbasi</surname><given-names>A</given-names></name><name><surname>Bottinelli</surname><given-names>OM</given-names></name><name><surname>Bellazzi</surname><given-names>R</given-names></name><name><surname>Preda</surname><given-names>L</given-names></name></person-group><article-title>AI for image quality and patient safety in CT and MRI</article-title><source>Eur Radiol Exp</source><volume>9</volume><issue>28</issue><year>2025</year><pub-id pub-id-type="pmid">39987533</pub-id><pub-id pub-id-type="doi">10.1186/s41747-025-00562-5</pub-id></element-citation></ref>
<ref id="b11-MI-6-2-00304"><label>11</label><element-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Nie</surname><given-names>Y</given-names></name><name><surname>Sommella</surname><given-names>P</given-names></name><name><surname>Carrat&#x00F9;</surname><given-names>M</given-names></name><name><surname>O&#x0027;Nils</surname><given-names>M</given-names></name><name><surname>Lundgren</surname><given-names>J</given-names></name></person-group><article-title>A deep CNN transformer hybrid model for skin lesion classification of dermoscopic images using focal loss</article-title><source>Diagnostics (Basel)</source><volume>13</volume><issue>72</issue><year>2022</year><pub-id pub-id-type="pmid">36611363</pub-id><pub-id pub-id-type="doi">10.3390/diagnostics13010072</pub-id></element-citation></ref>
<ref id="b12-MI-6-2-00304"><label>12</label><element-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Tschandl</surname><given-names>P</given-names></name><name><surname>Rinner</surname><given-names>C</given-names></name><name><surname>Apalla</surname><given-names>Z</given-names></name><name><surname>Argenziano</surname><given-names>G</given-names></name><name><surname>Codella</surname><given-names>N</given-names></name><name><surname>Halpern</surname><given-names>A</given-names></name><name><surname>Janda</surname><given-names>M</given-names></name><name><surname>Lallas</surname><given-names>A</given-names></name><name><surname>Longo</surname><given-names>C</given-names></name><name><surname>Malvehy</surname><given-names>J</given-names></name><etal/></person-group><article-title>Human-computer collaboration for skin cancer recognition</article-title><source>Nat Med</source><volume>26</volume><fpage>1229</fpage><lpage>1234</lpage><year>2020</year><pub-id pub-id-type="pmid">32572267</pub-id><pub-id pub-id-type="doi">10.1038/s41591-020-0942-0</pub-id></element-citation></ref>
<ref id="b13-MI-6-2-00304"><label>13</label><element-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Goyal</surname><given-names>H</given-names></name><name><surname>Mann</surname><given-names>R</given-names></name><name><surname>Gandhi</surname><given-names>Z</given-names></name><name><surname>Perisetti</surname><given-names>A</given-names></name><name><surname>Ali</surname><given-names>A</given-names></name><name><surname>Aman Ali</surname><given-names>K</given-names></name><name><surname>Sharma</surname><given-names>N</given-names></name><name><surname>Saligram</surname><given-names>S</given-names></name><name><surname>Tharian</surname><given-names>B</given-names></name><name><surname>Inamdar</surname><given-names>S</given-names></name></person-group><article-title>Scope of artificial intelligence in screening and diagnosis of colorectal cancer</article-title><source>J Clin Med</source><volume>9</volume><issue>3313</issue><year>2020</year><pub-id pub-id-type="pmid">33076511</pub-id><pub-id pub-id-type="doi">10.3390/jcm9103313</pub-id></element-citation></ref>
<ref id="b14-MI-6-2-00304"><label>14</label><element-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Ho</surname><given-names>TY</given-names></name><name><surname>Chao</surname><given-names>CH</given-names></name><name><surname>Chin</surname><given-names>SC</given-names></name><name><surname>Ng</surname><given-names>SH</given-names></name><name><surname>Kang</surname><given-names>CJ</given-names></name><name><surname>Tsang</surname><given-names>NM</given-names></name></person-group><article-title>Classifying neck lymph nodes of head and neck squamous cell carcinoma in MRI images with radiomic features</article-title><source>J Digit Imaging</source><volume>33</volume><fpage>613</fpage><lpage>618</lpage><year>2020</year><pub-id pub-id-type="pmid">31950301</pub-id><pub-id pub-id-type="doi">10.1007/s10278-019-00309-w</pub-id></element-citation></ref>
<ref id="b15-MI-6-2-00304"><label>15</label><element-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Yala</surname><given-names>A</given-names></name><name><surname>Lehman</surname><given-names>C</given-names></name><name><surname>Schuster</surname><given-names>T</given-names></name><name><surname>Portnoi</surname><given-names>T</given-names></name><name><surname>Barzilay</surname><given-names>R</given-names></name></person-group><article-title>A deep learning mammography-based model for improved breast cancer risk prediction</article-title><source>Radiology</source><volume>292</volume><fpage>60</fpage><lpage>66</lpage><year>2019</year><pub-id pub-id-type="pmid">31063083</pub-id><pub-id pub-id-type="doi">10.1148/radiol.2019182716</pub-id></element-citation></ref>
<ref id="b16-MI-6-2-00304"><label>16</label><element-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Sivakumar</surname><given-names>R</given-names></name><name><surname>Lue</surname><given-names>B</given-names></name><name><surname>Kundu</surname><given-names>S</given-names></name></person-group><article-title>FDA approval of artificial intelligence and machine learning devices in radiology: A systematic review</article-title><source>JAMA Netw Open</source><volume>8</volume><issue>e2542338</issue><year>2025</year><pub-id pub-id-type="pmid">41201805</pub-id><pub-id pub-id-type="doi">10.1001/jamanetworkopen.2025.42338</pub-id></element-citation></ref>
<ref id="b17-MI-6-2-00304"><label>17</label><element-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Evangelou</surname><given-names>K</given-names></name><name><surname>Kotsantis</surname><given-names>I</given-names></name><name><surname>Kalyvas</surname><given-names>A</given-names></name><name><surname>Kyriazoglou</surname><given-names>A</given-names></name><name><surname>Economopoulou</surname><given-names>P</given-names></name><name><surname>Velonakis</surname><given-names>G</given-names></name><name><surname>Gavra</surname><given-names>M</given-names></name><name><surname>Psyrri</surname><given-names>A</given-names></name><name><surname>Boviatsis</surname><given-names>EJ</given-names></name><name><surname>Stavrinou</surname><given-names>LC</given-names></name></person-group><article-title>Artificial intelligence in the diagnosis and treatment of brain gliomas</article-title><source>Biomedicines</source><volume>13</volume><issue>2285</issue><year>2025</year><pub-id pub-id-type="pmid">41007844</pub-id><pub-id pub-id-type="doi">10.3390/biomedicines13092285</pub-id></element-citation></ref>
<ref id="b18-MI-6-2-00304"><label>18</label><element-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Kelly</surname><given-names>CJ</given-names></name><name><surname>Karthikesalingam</surname><given-names>A</given-names></name><name><surname>Suleyman</surname><given-names>M</given-names></name><name><surname>Corrado</surname><given-names>G</given-names></name><name><surname>King</surname><given-names>D</given-names></name></person-group><article-title>Key challenges for delivering clinical impact with artificial intelligence</article-title><source>BMC Med</source><volume>17</volume><issue>195</issue><year>2019</year><pub-id pub-id-type="pmid">31665002</pub-id><pub-id pub-id-type="doi">10.1186/s12916-019-1426-2</pub-id></element-citation></ref>
<ref id="b19-MI-6-2-00304"><label>19</label><element-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Roberts</surname><given-names>M</given-names></name><name><surname>Driggs</surname><given-names>D</given-names></name><name><surname>Thorpe</surname><given-names>M</given-names></name><name><surname>Gilbey</surname><given-names>J</given-names></name><name><surname>Yeung</surname><given-names>M</given-names></name><name><surname>Ursprung</surname><given-names>S</given-names></name><name><surname>Aviles-Rivero</surname><given-names>AI</given-names></name><name><surname>Etmann</surname><given-names>C</given-names></name><name><surname>McCague</surname><given-names>C</given-names></name><name><surname>Beer</surname><given-names>L</given-names></name><etal/></person-group><article-title>Common pitfalls and recommendations for using machine learning to detect and prognosticate for COVID-19 using chest radiographs and CT scans</article-title><source>Nat Mach Intell</source><volume>3</volume><fpage>199</fpage><lpage>217</lpage><year>2021</year></element-citation></ref>
<ref id="b20-MI-6-2-00304"><label>20</label><element-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Zech</surname><given-names>JR</given-names></name><name><surname>Badgeley</surname><given-names>MA</given-names></name><name><surname>Liu</surname><given-names>M</given-names></name><name><surname>Costa</surname><given-names>AB</given-names></name><name><surname>Titano</surname><given-names>JJ</given-names></name><name><surname>Oermann</surname><given-names>EK</given-names></name></person-group><article-title>Variable generalization performance of a deep learning model to detect pneumonia in chest radiographs: A cross-sectional study</article-title><source>PLoS Med</source><volume>15</volume><issue>e1002683</issue><year>2018</year><pub-id pub-id-type="pmid">30399157</pub-id><pub-id pub-id-type="doi">10.1371/journal.pmed.1002683</pub-id></element-citation></ref>
<ref id="b21-MI-6-2-00304"><label>21</label><element-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Tran</surname><given-names>AT</given-names></name><name><surname>Zeevi</surname><given-names>T</given-names></name><name><surname>Payabvash</surname><given-names>S</given-names></name></person-group><article-title>Strategies to improve the robustness and generalizability of deep learning segmentation and classification in neuroimaging</article-title><source>BioMedInformatics</source><volume>5</volume><issue>20</issue><year>2025</year><pub-id pub-id-type="pmid">40271381</pub-id><pub-id pub-id-type="doi">10.3390/biomedinformatics5020020</pub-id></element-citation></ref>
<ref id="b22-MI-6-2-00304"><label>22</label><element-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Chaddad</surname><given-names>A</given-names></name><name><surname>Peng</surname><given-names>J</given-names></name><name><surname>Xu</surname><given-names>J</given-names></name><name><surname>Bouridane</surname><given-names>A</given-names></name></person-group><article-title>Survey of explainable AI techniques in healthcare</article-title><source>Sensors (Basel)</source><volume>23</volume><issue>634</issue><year>2023</year><pub-id pub-id-type="pmid">36679430</pub-id><pub-id pub-id-type="doi">10.3390/s23020634</pub-id></element-citation></ref>
<ref id="b23-MI-6-2-00304"><label>23</label><element-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Yang</surname><given-names>M</given-names></name><name><surname>Huang</surname><given-names>D</given-names></name><name><surname>Wan</surname><given-names>W</given-names></name><name><surname>Jin</surname><given-names>M</given-names></name></person-group><article-title>Federated learning for privacy-preserving medical data sharing in drug development</article-title><source>Appl Comput Eng</source><volume>134</volume><fpage>80</fpage><lpage>84</lpage><year>2025</year><pub-id pub-id-type="pmid">40978510</pub-id><pub-id pub-id-type="doi">10.3389/fdsfr.2025.1579922</pub-id></element-citation></ref>
<ref id="b24-MI-6-2-00304"><label>24</label><element-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Iizuka</surname><given-names>O</given-names></name><name><surname>Kanavati</surname><given-names>F</given-names></name><name><surname>Kato</surname><given-names>K</given-names></name><name><surname>Rambeau</surname><given-names>M</given-names></name><name><surname>Arihiro</surname><given-names>K</given-names></name><name><surname>Tsuneki</surname><given-names>M</given-names></name></person-group><article-title>Deep learning models for histopathological classification of gastric and colonic epithelial tumours</article-title><source>Sci Rep</source><volume>10</volume><issue>1504</issue><year>2020</year><pub-id pub-id-type="pmid">32001752</pub-id><pub-id pub-id-type="doi">10.1038/s41598-020-58467-9</pub-id></element-citation></ref>
<ref id="b25-MI-6-2-00304"><label>25</label><element-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Butt</surname><given-names>MA</given-names></name><name><surname>Kaleem</surname><given-names>MF</given-names></name><name><surname>Bilal</surname><given-names>M</given-names></name><name><surname>Hanif</surname><given-names>MS</given-names></name></person-group><article-title>Using multi-label ensemble CNN classifiers to mitigate labelling inconsistencies in patch-level Gleason grading</article-title><source>PLoS One</source><volume>19</volume><issue>e0304847</issue><year>2024</year><pub-id pub-id-type="pmid">38968206</pub-id><pub-id pub-id-type="doi">10.1371/journal.pone.0304847</pub-id></element-citation></ref>
<ref id="b26-MI-6-2-00304"><label>26</label><element-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Wang</surname><given-names>X</given-names></name><name><surname>Jiang</surname><given-names>Y</given-names></name><name><surname>Yang</surname><given-names>S</given-names></name><name><surname>Wang</surname><given-names>F</given-names></name><name><surname>Zhang</surname><given-names>X</given-names></name><name><surname>Wang</surname><given-names>W</given-names></name><name><surname>Chen</surname><given-names>Y</given-names></name><name><surname>Wu</surname><given-names>X</given-names></name><name><surname>Xiang</surname><given-names>J</given-names></name><name><surname>Li</surname><given-names>Y</given-names></name><etal/></person-group><article-title>Foundation model for predicting prognosis and adjuvant therapy benefit from digital pathology in GI cancers</article-title><source>J Clin Oncol</source><volume>43</volume><fpage>3468</fpage><lpage>3481</lpage><year>2025</year><pub-id pub-id-type="pmid">40168636</pub-id><pub-id pub-id-type="doi">10.1200/JCO-24-01501</pub-id></element-citation></ref>
<ref id="b27-MI-6-2-00304"><label>27</label><element-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Qaiser</surname><given-names>T</given-names></name><name><surname>Lee</surname><given-names>CY</given-names></name><name><surname>Vandenberghe</surname><given-names>M</given-names></name><name><surname>Yeh</surname><given-names>J</given-names></name><name><surname>Gavrielides</surname><given-names>MA</given-names></name><name><surname>Hipp</surname><given-names>J</given-names></name><name><surname>Scott</surname><given-names>M</given-names></name><name><surname>Reischl</surname><given-names>J</given-names></name></person-group><article-title>Usability of deep learning and H&#x0026;E images predict disease outcome-emerging tool to optimize clinical trials</article-title><source>NPJ Precis Oncol</source><volume>6</volume><issue>37</issue><year>2022</year><pub-id pub-id-type="pmid">35705792</pub-id><pub-id pub-id-type="doi">10.1038/s41698-022-00275-7</pub-id></element-citation></ref>
<ref id="b28-MI-6-2-00304"><label>28</label><element-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Hsu</surname><given-names>CY</given-names></name><name><surname>Askar</surname><given-names>S</given-names></name><name><surname>Alshkarchy</surname><given-names>SS</given-names></name><name><surname>Nayak</surname><given-names>PP</given-names></name><name><surname>Attabi</surname><given-names>KAL</given-names></name><name><surname>Khan</surname><given-names>MA</given-names></name><name><surname>Mayan</surname><given-names>JA</given-names></name><name><surname>Sharma</surname><given-names>MK</given-names></name><name><surname>Islomov</surname><given-names>S</given-names></name><name><surname>Soleimani Samarkhazan</surname><given-names>H</given-names></name></person-group><article-title>AI-driven multi-omics integration in precision oncology: Bridging the data deluge to clinical decisions</article-title><source>Clin Exp Med</source><volume>26</volume><issue>29</issue><year>2025</year><pub-id pub-id-type="pmid">41266662</pub-id><pub-id pub-id-type="doi">10.1007/s10238-025-01965-9</pub-id></element-citation></ref>
<ref id="b29-MI-6-2-00304"><label>29</label><element-citation publication-type="preprint"><person-group person-group-type="author"><name><surname>Lekadir</surname><given-names>K</given-names></name><name><surname>Feragen</surname><given-names>A</given-names></name><name><surname>Fofanah</surname><given-names>AJ</given-names></name><name><surname>Frangi</surname><given-names>AF</given-names></name><name><surname>Buyx</surname><given-names>A</given-names></name><name><surname>Emelie</surname><given-names>A</given-names></name><name><surname>Lara</surname><given-names>A</given-names></name><name><surname>Porras</surname><given-names>AR</given-names></name><name><surname>Chan</surname><given-names>AW</given-names></name><name><surname>Navarro</surname><given-names>A</given-names></name><etal/></person-group><article-title>FUTURE-AI: International consensus guideline for trustworthy and deployable artificial intelligence in healthcare</article-title><source>arXiv</source><year>2023</year><comment>arXiv: 2309.12325</comment></element-citation></ref>
<ref id="b30-MI-6-2-00304"><label>30</label><element-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Elemento</surname><given-names>O</given-names></name><name><surname>Khozin</surname><given-names>S</given-names></name><name><surname>Sternberg</surname><given-names>CN</given-names></name></person-group><article-title>The use of artificial intelligence for cancer therapeutic decision-making</article-title><source>NEJM AI</source><volume>2</volume><year>2025</year><pub-id pub-id-type="pmid">41112204</pub-id><pub-id pub-id-type="doi">10.1056/aira2401164</pub-id></element-citation></ref>
<ref id="b31-MI-6-2-00304"><label>31</label><element-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Shao</surname><given-names>J</given-names></name><name><surname>Ma</surname><given-names>J</given-names></name><name><surname>Zhang</surname><given-names>Q</given-names></name><name><surname>Li</surname><given-names>W</given-names></name><name><surname>Wang</surname><given-names>C</given-names></name></person-group><article-title>Predicting gene mutation status via artificial intelligence technologies based on multimodal integration (MMI) to advance precision oncology</article-title><source>Semin Cancer Biol</source><volume>91</volume><fpage>1</fpage><lpage>15</lpage><year>2023</year><pub-id pub-id-type="pmid">36801447</pub-id><pub-id pub-id-type="doi">10.1016/j.semcancer.2023.02.006</pub-id></element-citation></ref>
<ref id="b32-MI-6-2-00304"><label>32</label><element-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Awuah</surname><given-names>WA</given-names></name><name><surname>Ben-Jaafar</surname><given-names>A</given-names></name><name><surname>Roy</surname><given-names>S</given-names></name><name><surname>Nkrumah-Boateng</surname><given-names>PA</given-names></name><name><surname>Tan</surname><given-names>JK</given-names></name><name><surname>Abdul-Rahman</surname><given-names>T</given-names></name><name><surname>Atallah</surname><given-names>O</given-names></name></person-group><article-title>Predicting survival in malignant glioma using artificial intelligence</article-title><source>Eur J Med Res</source><volume>30</volume><issue>61</issue><year>2025</year><pub-id pub-id-type="pmid">39891313</pub-id><pub-id pub-id-type="doi">10.1186/s40001-025-02339-3</pub-id></element-citation></ref>
<ref id="b33-MI-6-2-00304"><label>33</label><element-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Bera</surname><given-names>K</given-names></name><name><surname>Braman</surname><given-names>N</given-names></name><name><surname>Gupta</surname><given-names>A</given-names></name><name><surname>Velcheti</surname><given-names>V</given-names></name><name><surname>Madabhushi</surname><given-names>A</given-names></name></person-group><article-title>Predicting cancer outcomes with radiomics and artificial intelligence in radiology</article-title><source>Nat Rev Clin Oncol</source><volume>19</volume><fpage>132</fpage><lpage>146</lpage><year>2022</year><pub-id pub-id-type="pmid">34663898</pub-id><pub-id pub-id-type="doi">10.1038/s41571-021-00560-7</pub-id></element-citation></ref>
<ref id="b34-MI-6-2-00304"><label>34</label><element-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Wang</surname><given-names>S</given-names></name><name><surname>Zhang</surname><given-names>H</given-names></name><name><surname>Liu</surname><given-names>Z</given-names></name><name><surname>Liu</surname><given-names>Y</given-names></name></person-group><article-title>A novel deep learning method to predict lung cancer long-term survival with biological knowledge incorporated gene expression images and clinical data</article-title><source>Front Genet</source><volume>13</volume><issue>800853</issue><year>2022</year><pub-id pub-id-type="pmid">35368657</pub-id><pub-id pub-id-type="doi">10.3389/fgene.2022.800853</pub-id></element-citation></ref>
<ref id="b35-MI-6-2-00304"><label>35</label><element-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Akselrod-Ballin</surname><given-names>A</given-names></name><name><surname>Chorev</surname><given-names>M</given-names></name><name><surname>Shoshan</surname><given-names>Y</given-names></name><name><surname>Spiro</surname><given-names>A</given-names></name><name><surname>Hazan</surname><given-names>A</given-names></name><name><surname>Melamed</surname><given-names>R</given-names></name><name><surname>Barkan</surname><given-names>E</given-names></name><name><surname>Herzel</surname><given-names>E</given-names></name><name><surname>Naor</surname><given-names>S</given-names></name><name><surname>Karavani</surname><given-names>E</given-names></name><etal/></person-group><article-title>Predicting breast cancer by applying deep learning to linked health records and mammograms</article-title><source>Radiology</source><volume>292</volume><fpage>331</fpage><lpage>342</lpage><year>2019</year><pub-id pub-id-type="pmid">31210611</pub-id><pub-id pub-id-type="doi">10.1148/radiol.2019182622</pub-id></element-citation></ref>
<ref id="b36-MI-6-2-00304"><label>36</label><element-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Chiu</surname><given-names>YC</given-names></name><name><surname>Chen</surname><given-names>HIH</given-names></name><name><surname>Zhang</surname><given-names>T</given-names></name><name><surname>Zhang</surname><given-names>S</given-names></name><name><surname>Gorthi</surname><given-names>A</given-names></name><name><surname>Wang</surname><given-names>LJ</given-names></name><name><surname>Huang</surname><given-names>Y</given-names></name><name><surname>Chen</surname><given-names>Y</given-names></name></person-group><article-title>Predicting drug response of tumors from integrated genomic profiles by deep neural networks</article-title><source>BMC Med Genomics</source><volume>12 (Suppl 1)</volume><issue>S18</issue><year>2019</year><pub-id pub-id-type="pmid">30704458</pub-id><pub-id pub-id-type="doi">10.1186/s12920-018-0460-9</pub-id></element-citation></ref>
<ref id="b37-MI-6-2-00304"><label>37</label><element-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Huynh</surname><given-names>BN</given-names></name><name><surname>Groendahl</surname><given-names>AR</given-names></name><name><surname>Tomic</surname><given-names>O</given-names></name><name><surname>Liland</surname><given-names>KH</given-names></name><name><surname>Knudtsen</surname><given-names>IS</given-names></name><name><surname>Hoebers</surname><given-names>F</given-names></name><name><surname>van Elmpt</surname><given-names>W</given-names></name><name><surname>Malinen</surname><given-names>E</given-names></name><name><surname>Dale</surname><given-names>E</given-names></name><name><surname>Futsaether</surname><given-names>CM</given-names></name></person-group><article-title>Head and neck cancer treatment outcome prediction: A comparison between machine learning with conventional radiomics features and deep learning radiomics</article-title><source>Front Med (Lausanne)</source><volume>10</volume><issue>1217037</issue><year>2023</year><pub-id pub-id-type="pmid">37711738</pub-id><pub-id pub-id-type="doi">10.3389/fmed.2023.1217037</pub-id></element-citation></ref>
<ref id="b38-MI-6-2-00304"><label>38</label><element-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Luo</surname><given-names>Y</given-names></name><name><surname>Tseng</surname><given-names>HH</given-names></name><name><surname>Cui</surname><given-names>S</given-names></name><name><surname>Wei</surname><given-names>L</given-names></name><name><surname>Ten Haken</surname><given-names>RK</given-names></name><name><surname>El Naqa</surname><given-names>I</given-names></name></person-group><article-title>Balancing accuracy and interpretability of machine learning approaches for radiation treatment outcomes modeling</article-title><source>BJR Open</source><volume>1</volume><issue>20190021</issue><year>2019</year><pub-id pub-id-type="pmid">33178948</pub-id><pub-id pub-id-type="doi">10.1259/bjro.20190021</pub-id></element-citation></ref>
<ref id="b39-MI-6-2-00304"><label>39</label><element-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Sinha</surname><given-names>T</given-names></name><name><surname>Khan</surname><given-names>A</given-names></name><name><surname>Awan</surname><given-names>M</given-names></name><name><surname>Bokhari</surname><given-names>SFH</given-names></name><name><surname>Ali</surname><given-names>K</given-names></name><name><surname>Amir</surname><given-names>M</given-names></name><name><surname>Jadhav</surname><given-names>AN</given-names></name><name><surname>Bakht</surname><given-names>D</given-names></name><name><surname>Puli</surname><given-names>ST</given-names></name><name><surname>Burhanuddin</surname><given-names>M</given-names></name></person-group><article-title>Artificial intelligence and machine learning in predicting the response to immunotherapy in non-small cell lung carcinoma: A systematic review</article-title><source>Cureus</source><volume>16</volume><issue>e61220</issue><year>2024</year><pub-id pub-id-type="pmid">38939246</pub-id><pub-id pub-id-type="doi">10.7759/cureus.61220</pub-id></element-citation></ref>
<ref id="b40-MI-6-2-00304"><label>40</label><element-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Pesapane</surname><given-names>F</given-names></name><name><surname>Nicosia</surname><given-names>L</given-names></name><name><surname>D&#x0027;Amelio</surname><given-names>L</given-names></name><name><surname>Quercioli</surname><given-names>G</given-names></name><name><surname>Pannarale</surname><given-names>MR</given-names></name><name><surname>Priolo</surname><given-names>F</given-names></name><name><surname>Marinucci</surname><given-names>I</given-names></name><name><surname>Farina</surname><given-names>MG</given-names></name><name><surname>Penco</surname><given-names>S</given-names></name><name><surname>Dominelli</surname><given-names>V</given-names></name><etal/></person-group><article-title>Artificial intelligence-driven personalization in breast cancer screening: From population models to individualized protocols</article-title><source>Cancers (Basel)</source><volume>17</volume><issue>2901</issue><year>2025</year><pub-id pub-id-type="pmid">40940998</pub-id><pub-id pub-id-type="doi">10.3390/cancers17172901</pub-id></element-citation></ref>
<ref id="b41-MI-6-2-00304"><label>41</label><element-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Ennab</surname><given-names>M</given-names></name><name><surname>Mcheick</surname><given-names>H</given-names></name></person-group><article-title>Enhancing interpretability and accuracy of AI models in healthcare: A comprehensive review on challenges and future directions</article-title><source>Front Robot AI</source><volume>11</volume><issue>1444763</issue><year>2024</year><pub-id pub-id-type="pmid">39677978</pub-id><pub-id pub-id-type="doi">10.3389/frobt.2024.1444763</pub-id></element-citation></ref>
<ref id="b42-MI-6-2-00304"><label>42</label><element-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Niu</surname><given-names>S</given-names></name><name><surname>Ma</surname><given-names>J</given-names></name><name><surname>Yin</surname><given-names>Q</given-names></name><name><surname>Wang</surname><given-names>Z</given-names></name><name><surname>Bai</surname><given-names>L</given-names></name><name><surname>Yang</surname><given-names>X</given-names></name></person-group><article-title>Modelling patient longitudinal data for clinical decision support: A case study on emerging AI healthcare technologies</article-title><source>Inf Syst Front</source><volume>27</volume><fpage>409</fpage><lpage>427</lpage><year>2025</year></element-citation></ref>
<ref id="b43-MI-6-2-00304"><label>43</label><element-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Sartori</surname><given-names>F</given-names></name><name><surname>Codic&#x00E8;</surname><given-names>F</given-names></name><name><surname>Caranzano</surname><given-names>I</given-names></name><name><surname>Rollo</surname><given-names>C</given-names></name><name><surname>Birolo</surname><given-names>G</given-names></name><name><surname>Fariselli</surname><given-names>P</given-names></name><name><surname>Pancotti</surname><given-names>C</given-names></name></person-group><article-title>A comprehensive review of deep learning applications with multi-omics data in cancer research</article-title><source>Genes (Basel)</source><volume>16</volume><issue>648</issue><year>2025</year><pub-id pub-id-type="pmid">40565540</pub-id><pub-id pub-id-type="doi">10.3390/genes16060648</pub-id></element-citation></ref>
<ref id="b44-MI-6-2-00304"><label>44</label><element-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Clayton</surname><given-names>EA</given-names></name><name><surname>Pujol</surname><given-names>TA</given-names></name><name><surname>McDonald</surname><given-names>JF</given-names></name><name><surname>Qiu</surname><given-names>P</given-names></name></person-group><article-title>Leveraging TCGA gene expression data to build predictive models for cancer drug response</article-title><source>BMC Bioinformatics</source><volume>21 (Suppl 14)</volume><issue>S364</issue><year>2020</year><pub-id pub-id-type="pmid">32998700</pub-id><pub-id pub-id-type="doi">10.1186/s12859-020-03690-4</pub-id></element-citation></ref>
<ref id="b45-MI-6-2-00304"><label>45</label><element-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Liu</surname><given-names>X</given-names></name><name><surname>Song</surname><given-names>C</given-names></name><name><surname>Huang</surname><given-names>F</given-names></name><name><surname>Fu</surname><given-names>H</given-names></name><name><surname>Xiao</surname><given-names>W</given-names></name><name><surname>Zhang</surname><given-names>W</given-names></name></person-group><article-title>GraphCDR: A graph neural network method with contrastive learning for cancer drug response prediction</article-title><source>Brief Bioinform</source><volume>23</volume><issue>bbab457</issue><year>2021</year><pub-id pub-id-type="pmid">34727569</pub-id><pub-id pub-id-type="doi">10.1093/bib/bbab457</pub-id></element-citation></ref>
<ref id="b46-MI-6-2-00304"><label>46</label><element-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Ali</surname><given-names>M</given-names></name><name><surname>Aittokallio</surname><given-names>T</given-names></name></person-group><article-title>Machine learning and feature selection for drug response prediction in precision oncology applications</article-title><source>Biophys Rev</source><volume>11</volume><fpage>31</fpage><lpage>39</lpage><year>2019</year><pub-id pub-id-type="pmid">30097794</pub-id><pub-id pub-id-type="doi">10.1007/s12551-018-0446-z</pub-id></element-citation></ref>
<ref id="b47-MI-6-2-00304"><label>47</label><element-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Kalinin</surname><given-names>AA</given-names></name><name><surname>Higgins</surname><given-names>GA</given-names></name><name><surname>Reamaroon</surname><given-names>N</given-names></name><name><surname>Soroushmehr</surname><given-names>S</given-names></name><name><surname>Allyn-Feuer</surname><given-names>A</given-names></name><name><surname>Dinov</surname><given-names>ID</given-names></name><name><surname>Najarian</surname><given-names>K</given-names></name><name><surname>Athey</surname><given-names>BD</given-names></name></person-group><article-title>Deep learning in pharmacogenomics: From gene regulation to patient stratification</article-title><source>Pharmacogenomics</source><volume>19</volume><fpage>629</fpage><lpage>650</lpage><year>2018</year><pub-id pub-id-type="pmid">29697304</pub-id><pub-id pub-id-type="doi">10.2217/pgs-2018-0008</pub-id></element-citation></ref>
<ref id="b48-MI-6-2-00304"><label>48</label><element-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Sharifi-Noghabi</surname><given-names>H</given-names></name><name><surname>Jahangiri-Tazehkand</surname><given-names>S</given-names></name><name><surname>Smirnov</surname><given-names>P</given-names></name><name><surname>Hon</surname><given-names>C</given-names></name><name><surname>Mammoliti</surname><given-names>A</given-names></name><name><surname>Nair</surname><given-names>SK</given-names></name><name><surname>Mer</surname><given-names>AS</given-names></name><name><surname>Ester</surname><given-names>M</given-names></name><name><surname>Haibe-Kains</surname><given-names>B</given-names></name></person-group><article-title>Drug sensitivity prediction from cell line-based pharmacogenomics data: Guidelines for developing machine learning models</article-title><source>Brief Bioinform</source><volume>22</volume><issue>bbab294</issue><year>2021</year><pub-id pub-id-type="pmid">34382071</pub-id><pub-id pub-id-type="doi">10.1093/bib/bbab294</pub-id></element-citation></ref>
<ref id="b49-MI-6-2-00304"><label>49</label><element-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Beam</surname><given-names>AL</given-names></name><name><surname>Kohane</surname><given-names>IS</given-names></name></person-group><article-title>Big data and machine learning in health care</article-title><source>JAMA</source><volume>319</volume><fpage>1317</fpage><lpage>1318</lpage><year>2018</year><pub-id pub-id-type="pmid">29532063</pub-id><pub-id pub-id-type="doi">10.1001/jama.2017.18391</pub-id></element-citation></ref>
<ref id="b50-MI-6-2-00304"><label>50</label><element-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Vamathevan</surname><given-names>J</given-names></name><name><surname>Clark</surname><given-names>D</given-names></name><name><surname>Czodrowski</surname><given-names>P</given-names></name><name><surname>Dunham</surname><given-names>I</given-names></name><name><surname>Ferran</surname><given-names>E</given-names></name><name><surname>Lee</surname><given-names>G</given-names></name><name><surname>Li</surname><given-names>B</given-names></name><name><surname>Madabhushi</surname><given-names>A</given-names></name><name><surname>Shah</surname><given-names>P</given-names></name><name><surname>Spitzer</surname><given-names>M</given-names></name><name><surname>Zhao</surname><given-names>S</given-names></name></person-group><article-title>Applications of machine learning in drug discovery and development</article-title><source>Nat Rev Drug Discov</source><volume>18</volume><fpage>463</fpage><lpage>477</lpage><year>2019</year><pub-id pub-id-type="pmid">30976107</pub-id><pub-id pub-id-type="doi">10.1038/s41573-019-0024-5</pub-id></element-citation></ref>
<ref id="b51-MI-6-2-00304"><label>51</label><element-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Holzinger</surname><given-names>A</given-names></name><name><surname>Langs</surname><given-names>G</given-names></name><name><surname>Denk</surname><given-names>H</given-names></name><name><surname>Zatloukal</surname><given-names>K</given-names></name><name><surname>M&#x00FC;ller</surname><given-names>H</given-names></name></person-group><article-title>Causability and explainability of artificial intelligence in medicine</article-title><source>Wiley Interdiscip Rev Data Min Knowl Discov</source><volume>9</volume><issue>e1312</issue><year>2019</year><pub-id pub-id-type="pmid">32089788</pub-id><pub-id pub-id-type="doi">10.1002/widm.1312</pub-id></element-citation></ref>
<ref id="b52-MI-6-2-00304"><label>52</label><element-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Damilakis</surname><given-names>J</given-names></name><name><surname>Stratakis</surname><given-names>J</given-names></name></person-group><article-title>Descriptive overview of AI applications in x-ray imaging and radiotherapy</article-title><source>J Radiol Prot</source><volume>44</volume><issue>041001</issue><year>2024</year><pub-id pub-id-type="pmid">39681008</pub-id><pub-id pub-id-type="doi">10.1088/1361-6498/ad9f71</pub-id></element-citation></ref>
<ref id="b53-MI-6-2-00304"><label>53</label><element-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Psoroulas</surname><given-names>S</given-names></name><name><surname>Paunoiu</surname><given-names>A</given-names></name><name><surname>Corradini</surname><given-names>S</given-names></name><name><surname>H&#x00F6;rner-Rieber</surname><given-names>J</given-names></name><name><surname>Tanadini-Lang</surname><given-names>S</given-names></name></person-group><article-title>MR-linac: Role of artificial intelligence and automation</article-title><source>Strahlenther Onkol</source><volume>201</volume><fpage>298</fpage><lpage>305</lpage><year>2025</year><pub-id pub-id-type="pmid">39843783</pub-id><pub-id pub-id-type="doi">10.1007/s00066-024-02358-9</pub-id></element-citation></ref>
<ref id="b54-MI-6-2-00304"><label>54</label><element-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Smolders</surname><given-names>A</given-names></name><name><surname>Lomax</surname><given-names>A</given-names></name><name><surname>Weber</surname><given-names>DC</given-names></name><name><surname>Albertini</surname><given-names>F</given-names></name></person-group><article-title>Deep learning based uncertainty prediction of deformable image registration for contour propagation and dose accumulation in online adaptive radiotherapy</article-title><source>Phys Med Biol</source><volume>68</volume><issue>245027</issue><year>2023</year><pub-id pub-id-type="pmid">37820691</pub-id><pub-id pub-id-type="doi">10.1088/1361-6560/ad0282</pub-id></element-citation></ref>
<ref id="b55-MI-6-2-00304"><label>55</label><element-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Chen</surname><given-names>X</given-names></name><name><surname>Men</surname><given-names>K</given-names></name><name><surname>Li</surname><given-names>Y</given-names></name><name><surname>Yi</surname><given-names>J</given-names></name><name><surname>Dai</surname><given-names>J</given-names></name></person-group><article-title>A feasibility study on an automated method to generate patient-specific dose distributions for radiotherapy using deep learning</article-title><source>Med Phys</source><volume>46</volume><fpage>56</fpage><lpage>64</lpage><year>2019</year><pub-id pub-id-type="pmid">30367492</pub-id><pub-id pub-id-type="doi">10.1002/mp.13262</pub-id></element-citation></ref>
<ref id="b56-MI-6-2-00304"><label>56</label><element-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Li</surname><given-names>C</given-names></name><name><surname>Guo</surname><given-names>Y</given-names></name><name><surname>Lin</surname><given-names>X</given-names></name><name><surname>Feng</surname><given-names>X</given-names></name><name><surname>Xu</surname><given-names>D</given-names></name><name><surname>Yang</surname><given-names>R</given-names></name></person-group><article-title>Deep reinforcement learning in radiation therapy planning optimization: A comprehensive review</article-title><source>Phys Med</source><volume>125</volume><issue>104498</issue><year>2024</year><pub-id pub-id-type="pmid">39163802</pub-id><pub-id pub-id-type="doi">10.1016/j.ejmp.2024.104498</pub-id></element-citation></ref>
<ref id="b57-MI-6-2-00304"><label>57</label><element-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Akpinar</surname><given-names>MH</given-names></name><name><surname>Sengur</surname><given-names>A</given-names></name><name><surname>Salvi</surname><given-names>M</given-names></name><name><surname>Seoni</surname><given-names>S</given-names></name><name><surname>Faust</surname><given-names>O</given-names></name><name><surname>Mir</surname><given-names>H</given-names></name><name><surname>Molinari</surname><given-names>F</given-names></name><name><surname>Acharya</surname><given-names>UR</given-names></name></person-group><article-title>Synthetic data generation via generative adversarial networks in healthcare: A systematic review of image- and signal-based studies</article-title><source>IEEE Open J Eng Med Biol</source><volume>6</volume><fpage>183</fpage><lpage>192</lpage><year>2024</year><pub-id pub-id-type="pmid">39698120</pub-id><pub-id pub-id-type="doi">10.1109/OJEMB.2024.3508472</pub-id></element-citation></ref>
<ref id="b58-MI-6-2-00304"><label>58</label><element-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Chen</surname><given-names>Y</given-names></name><name><surname>Clayton</surname><given-names>EW</given-names></name><name><surname>Novak</surname><given-names>LL</given-names></name><name><surname>Anders</surname><given-names>S</given-names></name><name><surname>Malin</surname><given-names>B</given-names></name></person-group><article-title>Human-centered design to address biases in artificial intelligence</article-title><source>J Med Internet Res</source><volume>25</volume><issue>e43251</issue><year>2023</year><pub-id pub-id-type="pmid">36961506</pub-id><pub-id pub-id-type="doi">10.2196/43251</pub-id></element-citation></ref>
<ref id="b59-MI-6-2-00304"><label>59</label><element-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Abr&#x00E0;moff</surname><given-names>MD</given-names></name><name><surname>Tarver</surname><given-names>ME</given-names></name><name><surname>Loyo-Berrios</surname><given-names>N</given-names></name><name><surname>Trujillo</surname><given-names>S</given-names></name><name><surname>Char</surname><given-names>D</given-names></name><name><surname>Obermeyer</surname><given-names>Z</given-names></name><name><surname>Eydelman</surname><given-names>MB</given-names></name><name><surname>Maisel</surname><given-names>WH</given-names></name></person-group><comment>Foundational Principles of Ophthalmic Imaging and Algorithmic Interpretation Working Group of the Collaborative Community for Ophthalmic Imaging Foundation, Washington, DC</comment><article-title>Considerations for addressing bias in artificial intelligence for health equity</article-title><source>NPJ Digit Med</source><volume>6</volume><issue>170</issue><year>2023</year><pub-id pub-id-type="pmid">37700029</pub-id><pub-id pub-id-type="doi">10.1038/s41746-023-00913-9</pub-id></element-citation></ref>
<ref id="b60-MI-6-2-00304"><label>60</label><element-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Tejani</surname><given-names>AS</given-names></name><name><surname>Ng</surname><given-names>YS</given-names></name><name><surname>Xi</surname><given-names>Y</given-names></name><name><surname>Rayan</surname><given-names>JC</given-names></name></person-group><article-title>Understanding and mitigating bias in imaging artificial intelligence</article-title><source>Radiographics</source><volume>44</volume><issue>e230067</issue><year>2024</year><pub-id pub-id-type="pmid">38635456</pub-id><pub-id pub-id-type="doi">10.1148/rg.230067</pub-id></element-citation></ref>
<ref id="b61-MI-6-2-00304"><label>61</label><element-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Ross</surname><given-names>C</given-names></name><name><surname>Swetlitz</surname><given-names>I</given-names></name></person-group><comment>IBM&#x0027;s Watson supercomputer recommended &#x2018;unsafe and incorrect&#x2019; cancer treatments, internal documents show. STAT, Boston, MA, 2018.</comment></element-citation></ref>
<ref id="b62-MI-6-2-00304"><label>62</label><element-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Strickland</surname><given-names>E</given-names></name></person-group><article-title>IBM Watson, heal thyself: How IBM overpromised and underdelivered on AI health care</article-title><source>IEEE Spectr</source><volume>56</volume><fpage>24</fpage><lpage>31</lpage><year>2019</year></element-citation></ref>
<ref id="b63-MI-6-2-00304"><label>63</label><element-citation publication-type="journal"><person-group person-group-type="author"><name><surname>S&#x00E9;roussi</surname><given-names>B</given-names></name><name><surname>Laou&#x00E9;nan</surname><given-names>C</given-names></name><name><surname>Gligorov</surname><given-names>J</given-names></name><name><surname>Uzan</surname><given-names>S</given-names></name><name><surname>Mentr&#x00E9;</surname><given-names>F</given-names></name><name><surname>Bouaud</surname><given-names>J</given-names></name></person-group><article-title>Which breast cancer decisions remain non-compliant with guidelines despite the use of computerised decision support?</article-title><source>Br J Cancer</source><volume>109</volume><fpage>1147</fpage><lpage>1156</lpage><year>2013</year><pub-id pub-id-type="pmid">23942076</pub-id><pub-id pub-id-type="doi">10.1038/bjc.2013.453</pub-id></element-citation></ref>
<ref id="b64-MI-6-2-00304"><label>64</label><element-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Jandoubi</surname><given-names>B</given-names></name><name><surname>Akhloufi</surname><given-names>MA</given-names></name></person-group><article-title>Multimodal artificial intelligence in medical diagnostics</article-title><source>Information</source><volume>16</volume><issue>591</issue><year>2025</year></element-citation></ref>
<ref id="b65-MI-6-2-00304"><label>65</label><element-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Tak</surname><given-names>D</given-names></name><name><surname>Garomsa</surname><given-names>BA</given-names></name><name><surname>Chaunzwa</surname><given-names>TL</given-names></name><name><surname>Zapaishchykova</surname><given-names>A</given-names></name><name><surname>Climent Pardo</surname><given-names>JC</given-names></name><name><surname>Ye</surname><given-names>Z</given-names></name><name><surname>Zielke</surname><given-names>J</given-names></name><name><surname>Ravipati</surname><given-names>Y</given-names></name><name><surname>Vajapeyam</surname><given-names>S</given-names></name><name><surname>Mahootiha</surname><given-names>M</given-names></name><etal/></person-group><comment>A foundation model for generalized brain MRI analysis. medRxiv &#x005B;Preprint&#x005D;: 2024.12.02.24317992, 2024.</comment></element-citation></ref>
<ref id="b66-MI-6-2-00304"><label>66</label><element-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Yan</surname><given-names>S</given-names></name><name><surname>Yu</surname><given-names>Z</given-names></name><name><surname>Primiero</surname><given-names>C</given-names></name><name><surname>Vico-Alonso</surname><given-names>C</given-names></name><name><surname>Wang</surname><given-names>Z</given-names></name><name><surname>Yang</surname><given-names>L</given-names></name><name><surname>Tschandl</surname><given-names>P</given-names></name><name><surname>Hu</surname><given-names>M</given-names></name><name><surname>Ju</surname><given-names>L</given-names></name><name><surname>Tan</surname><given-names>G</given-names></name><etal/></person-group><article-title>A multimodal vision foundation model for clinical dermatology</article-title><source>Nat Med</source><volume>31</volume><fpage>2691</fpage><lpage>2702</lpage><year>2025</year><pub-id pub-id-type="pmid">40481209</pub-id><pub-id pub-id-type="doi">10.1038/s41591-025-03747-y</pub-id></element-citation></ref>
<ref id="b67-MI-6-2-00304"><label>67</label><element-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Ding</surname><given-names>T</given-names></name><name><surname>Wagner</surname><given-names>SJ</given-names></name><name><surname>Song</surname><given-names>AH</given-names></name><name><surname>Chen</surname><given-names>RJ</given-names></name><name><surname>Lu</surname><given-names>MY</given-names></name><name><surname>Zhang</surname><given-names>A</given-names></name><name><surname>Vaidya</surname><given-names>AJ</given-names></name><name><surname>Jaume</surname><given-names>G</given-names></name><name><surname>Shaban</surname><given-names>M</given-names></name><name><surname>Kim</surname><given-names>A</given-names></name><etal/></person-group><article-title>A multimodal whole-slide foundation model for pathology</article-title><source>Nat Med</source><volume>31</volume><fpage>3749</fpage><lpage>3761</lpage><year>2025</year><pub-id pub-id-type="pmid">41193692</pub-id><pub-id pub-id-type="doi">10.1038/s41591-025-03982-3</pub-id></element-citation></ref>
<ref id="b68-MI-6-2-00304"><label>68</label><element-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Hao</surname><given-names>R</given-names></name><name><surname>Chang</surname><given-names>WC</given-names></name><name><surname>Hu</surname><given-names>J</given-names></name><name><surname>Gao</surname><given-names>M</given-names></name></person-group><comment>Federated learning-driven health risk prediction on electronic health records under privacy constraints. Preprints: <ext-link xmlns:xlink="http://www.w3.org/1999/xlink" ext-link-type="uri" xlink:href="https://doi.org/10.20944/preprints202510.1471.v1">https://doi.org/10.20944/preprints202510.1471.v1</ext-link>.</comment></element-citation></ref>
<ref id="b69-MI-6-2-00304"><label>69</label><element-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Mu</surname><given-names>J</given-names></name><name><surname>Kadoch</surname><given-names>M</given-names></name><name><surname>Yuan</surname><given-names>T</given-names></name><name><surname>Lv</surname><given-names>W</given-names></name><name><surname>Liu</surname><given-names>Q</given-names></name><name><surname>Li</surname><given-names>B</given-names></name></person-group><article-title>Explainable federated medical image analysis through causal learning and blockchain</article-title><source>IEEE J Biomed Health Inform</source><volume>28</volume><fpage>3206</fpage><lpage>3218</lpage><year>2024</year><pub-id pub-id-type="pmid">38470597</pub-id><pub-id pub-id-type="doi">10.1109/JBHI.2024.3375894</pub-id></element-citation></ref>
<ref id="b70-MI-6-2-00304"><label>70</label><element-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Rezaeian</surname><given-names>O</given-names></name><name><surname>Bayrak</surname><given-names>AE</given-names></name><name><surname>Asan</surname><given-names>O</given-names></name></person-group><comment>Explainability and AI confidence in clinical decision support systems: Effects on trust, diagnostic performance, and cognitive load in breast cancer care. arXiv: <ext-link xmlns:xlink="http://www.w3.org/1999/xlink" ext-link-type="uri" xlink:href="https://doi.org/10.48550/arXiv.2501.16693">https://doi.org/10.48550/arXiv.2501.16693</ext-link>.</comment></element-citation></ref>
<ref id="b71-MI-6-2-00304"><label>71</label><element-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Salimparsa</surname><given-names>M</given-names></name><name><surname>Sedig</surname><given-names>K</given-names></name><name><surname>Lizotte</surname><given-names>DJ</given-names></name><name><surname>Abdullah</surname><given-names>SS</given-names></name><name><surname>Chalabianloo</surname><given-names>N</given-names></name><name><surname>Muanda</surname><given-names>FT</given-names></name></person-group><article-title>Explainable AI for clinical decision support systems: Literature review, key gaps, and research synthesis</article-title><source>Informatics</source><volume>12</volume><issue>119</issue><year>2025</year></element-citation></ref>
<ref id="b72-MI-6-2-00304"><label>72</label><element-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Marey</surname><given-names>A</given-names></name><name><surname>Ambrozaite</surname><given-names>O</given-names></name><name><surname>Afifi</surname><given-names>A</given-names></name><name><surname>Agarwal</surname><given-names>R</given-names></name><name><surname>Chellappa</surname><given-names>R</given-names></name><name><surname>Adeleke</surname><given-names>S</given-names></name><name><surname>Umair</surname><given-names>M</given-names></name></person-group><comment>A perspective on AI implementation in medical imaging in LMICs: Challenges, priorities, and strategies. Eur Radiol: October 23, 2025 (Epub ahead of print).</comment></element-citation></ref>
<ref id="b73-MI-6-2-00304"><label>73</label><element-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Kaushik</surname><given-names>A</given-names></name><name><surname>Barcellona</surname><given-names>C</given-names></name><name><surname>Mandyam</surname><given-names>NK</given-names></name><name><surname>Tan</surname><given-names>SY</given-names></name><name><surname>Tromp</surname><given-names>J</given-names></name></person-group><article-title>Challenges and opportunities for data sharing related to artificial intelligence tools in health care in low- and middle-income countries: Systematic review and case study from Thailand</article-title><source>J Med Internet Res</source><volume>27</volume><issue>e58338</issue><year>2025</year><pub-id pub-id-type="pmid">39903508</pub-id><pub-id pub-id-type="doi">10.2196/58338</pub-id></element-citation></ref>
</ref-list>
</back>
<floats-group>
<fig id="f1-MI-6-2-00304" position="float">
<label>Figure 1</label>
<caption><p>AI in oncological pathology: AI integration across digital pathology, biomarker analysis, tumour diagnosis and grading, and predictive prognostication, highlighting how multimodal data processing supports pattern recognition and outcome prediction to enable precision oncology. AI, artificial intelligence.</p></caption>
<graphic xlink:href="mi-06-02-00304-g00.tif"/>
</fig>
<table-wrap id="tI-MI-6-2-00304" position="float">
<label>Table I</label>
<caption><p>AI in clinical fields: Applications and limitations.</p></caption>
<table frame="hsides" rules="groups">
<thead>
<tr>
<th align="left" valign="middle">Clinical field</th>
<th align="center" valign="middle">Key AI applications</th>
<th align="center" valign="middle">Representative outcomes</th>
<th align="center" valign="middle">Major limitations</th>
</tr>
</thead>
<tbody>
<tr>
<td align="left" valign="middle">Cancer imaging (CT, MRI, PET, mammography, USG)</td>
<td align="left" valign="middle">Lesion detection, segmentation, staging, radiomics-based prognostication, image reconstruction</td>
<td align="left" valign="middle">Improved detection of melanoma, breast cancer risk prediction, colorectal polyp identification, nodal metastasis classification</td>
<td align="left" valign="middle">Retrospective single-centre datasets, scanner/protocol variability, limited interpretability, dataset bias</td>
</tr>
<tr>
<td align="left" valign="middle">Digital pathology</td>
<td align="left" valign="middle">Tumour detection and grading, lymph node metastasis identification, genomic mutation inference from H&#x0026;E slides, survival prediction</td>
<td align="left" valign="middle">Gleason grading, gastric/colonic tumour classification, prediction of actionable mutations</td>
<td align="left" valign="middle">Staining and scanner heterogeneity, lack of standardization, insufficient prospective validation</td>
</tr>
<tr>
<td align="left" valign="middle">Clinical outcome prediction</td>
<td align="left" valign="middle">Survival prediction, toxicity risk estimation, response assessment</td>
<td align="left" valign="middle">Enhanced risk stratification beyond conventional staging</td>
<td align="left" valign="middle">Overfitting, poor external generalizability, limited longitudinal modelling</td>
</tr>
<tr>
<td align="left" valign="middle">Chemotherapy</td>
<td align="left" valign="middle">Drug response prediction, pharmacogenomic modelling, resistance detection</td>
<td align="left" valign="middle">Improved <italic>in silico</italic> prediction of sensitivity using multi-omics data</td>
<td align="left" valign="middle">Scarcity of high-quality clinical datasets, limited explainability, lack of prospective trials</td>
</tr>
<tr>
<td align="left" valign="middle">Radiotherapy</td>
<td align="left" valign="middle">Automated contouring, dose calculation, adaptive planning, toxicity prediction</td>
<td align="left" valign="middle">More rapid treatment planning and adaptive workflows</td>
<td align="left" valign="middle">Regulatory hurdles, integration challenges, black-box models</td>
</tr>
</tbody>
</table>
<table-wrap-foot>
<fn><p>The information presented in the table is derived from previous studies (<xref rid="b17-MI-6-2-00304" ref-type="bibr">17</xref>,<xref rid="b21-MI-6-2-00304 b22-MI-6-2-00304 b23-MI-6-2-00304 b24-MI-6-2-00304" ref-type="bibr">21-24</xref>,<xref rid="b30-MI-6-2-00304 b31-MI-6-2-00304 b32-MI-6-2-00304 b33-MI-6-2-00304 b34-MI-6-2-00304 b35-MI-6-2-00304 b36-MI-6-2-00304 b37-MI-6-2-00304 b38-MI-6-2-00304 b39-MI-6-2-00304 b40-MI-6-2-00304 b41-MI-6-2-00304" ref-type="bibr">30-41</xref>,<xref rid="b52-MI-6-2-00304 b53-MI-6-2-00304 b54-MI-6-2-00304" ref-type="bibr">52-54</xref>,<xref rid="b57-MI-6-2-00304" ref-type="bibr">57</xref>,<xref rid="b68-MI-6-2-00304" ref-type="bibr">68</xref>,<xref rid="b69-MI-6-2-00304" ref-type="bibr">69</xref>). CT, computed tomography; MRI, magnetic resonance imaging; PET, positron emission tomography; USG, ultrasonography.</p></fn>
</table-wrap-foot>
</table-wrap>
<table-wrap id="tII-MI-6-2-00304" position="float">
<label>Table II</label>
<caption><p>AI methodologies in oncology: Applications and challenges.</p></caption>
<table frame="hsides" rules="groups">
<thead>
<tr>
<th align="left" valign="middle">Methodology</th>
<th align="center" valign="middle">Typical applications</th>
<th align="center" valign="middle">Strengths</th>
<th align="center" valign="middle">Challenges</th>
</tr>
</thead>
<tbody>
<tr>
<td align="left" valign="middle">Machine learning (ML)</td>
<td align="left" valign="middle">Survival prediction, drug response modelling, radiomics</td>
<td align="left" valign="middle">Handles structured clinical and molecular data</td>
<td align="left" valign="middle">Feature engineering dependence, limited scalability</td>
</tr>
<tr>
<td align="left" valign="middle">Deep learning (CNNs, transformers)</td>
<td align="left" valign="middle">Imaging analysis, digital pathology, outcome prediction</td>
<td align="left" valign="middle">Automatic feature extraction, high diagnostic accuracy</td>
<td align="left" valign="middle">Poor interpretability, large data requirements</td>
</tr>
<tr>
<td align="left" valign="middle">Radiomics</td>
<td align="left" valign="middle">Tumour phenotyping, prognostication</td>
<td align="left" valign="middle">Quantifies imaging heterogeneity</td>
<td align="left" valign="middle">Sensitive to acquisition variability</td>
</tr>
<tr>
<td align="left" valign="middle">Multimodal AI</td>
<td align="left" valign="middle">Integration of imaging, pathology, genomics, EHRs</td>
<td align="left" valign="middle">Captures tumour biology and patient context comprehensively</td>
<td align="left" valign="middle">Complex model design, data harmonization</td>
</tr>
<tr>
<td align="left" valign="middle">Foundation models</td>
<td align="left" valign="middle">Transfer learning across cancers and institutions</td>
<td align="left" valign="middle">Reduced labelling needs, improved robustness</td>
<td align="left" valign="middle">Computationally intensive, transparency concerns</td>
</tr>
<tr>
<td align="left" valign="middle">Federated learning</td>
<td align="left" valign="middle">Multi-centre model training without data sharing</td>
<td align="left" valign="middle">Preserves privacy, improves generalizability</td>
<td align="left" valign="middle">Infrastructure demands, communication overhead</td>
</tr>
<tr>
<td align="left" valign="middle">Explainable AI (XAI)</td>
<td align="left" valign="middle">Clinical decision support transparency</td>
<td align="left" valign="middle">Builds clinician trust</td>
<td align="left" valign="middle">Often adds complexity, limited standardization</td>
</tr>
</tbody>
</table>
<table-wrap-foot>
<fn><p>The information presented in the table is derived from previous studies (<xref rid="b4-MI-6-2-00304" ref-type="bibr">4</xref>,<xref rid="b21-MI-6-2-00304 b22-MI-6-2-00304 b23-MI-6-2-00304 b24-MI-6-2-00304 b25-MI-6-2-00304 b26-MI-6-2-00304 b27-MI-6-2-00304 b28-MI-6-2-00304 b29-MI-6-2-00304" ref-type="bibr">21-29</xref>,<xref rid="b41-MI-6-2-00304 b42-MI-6-2-00304 b43-MI-6-2-00304" ref-type="bibr">41-43</xref>,<xref rid="b47-MI-6-2-00304" ref-type="bibr">47</xref>,<xref rid="b52-MI-6-2-00304 b53-MI-6-2-00304 b54-MI-6-2-00304 b55-MI-6-2-00304 b56-MI-6-2-00304 b57-MI-6-2-00304" ref-type="bibr">52-57</xref>,<xref rid="b64-MI-6-2-00304 b65-MI-6-2-00304 b66-MI-6-2-00304 b67-MI-6-2-00304 b68-MI-6-2-00304 b69-MI-6-2-00304 b70-MI-6-2-00304 b71-MI-6-2-00304 b72-MI-6-2-00304 b73-MI-6-2-00304" ref-type="bibr">64-73</xref>). AI, artificial intelligence; CNNs, convolutional neural networks; EHRs, electronic health records.</p></fn>
</table-wrap-foot>
</table-wrap>
<table-wrap id="tIII-MI-6-2-00304" position="float">
<label>Table III</label>
<caption><p>AI-driven challenges in oncology: Computational, ethical/regulatory and data quality domains.</p></caption>
<table frame="hsides" rules="groups">
<thead>
<tr>
<th align="left" valign="middle">Domain</th>
<th align="center" valign="middle">Key challenges</th>
</tr>
</thead>
<tbody>
<tr>
<td align="left" valign="middle">Computational</td>
<td align="left" valign="middle">High hardware requirements, model instability over time, lack of standardized benchmarking, workflow integration difficulties</td>
</tr>
<tr>
<td align="left" valign="middle">Ethical/regulatory</td>
<td align="left" valign="middle">Limited transparency, accountability concerns, unclear liability, bias amplification, delayed regulatory approval, and high-profile failures attributable to inadequate validation, such as IBM Watson for Oncology</td>
</tr>
<tr>
<td align="left" valign="middle">Data quality</td>
<td align="left" valign="middle">Dataset bias, small sample sizes, inconsistent acquisition and annotation, missing longitudinal data, underrepresentation of LMIC populations</td>
</tr>
<tr>
<td align="left" valign="middle">Equity (LMICs)</td>
<td align="left" valign="middle">Infrastructure gaps, workforce shortages, limited digitization, algorithmic bias, restricted access to validated AI tools</td>
</tr>
</tbody>
</table>
<table-wrap-foot>
<fn><p>The information presented in the table is derived from previous studies (<xref rid="b4-MI-6-2-00304" ref-type="bibr">4</xref>,<xref rid="b23-MI-6-2-00304" ref-type="bibr">23</xref>,<xref rid="b28-MI-6-2-00304" ref-type="bibr">28</xref>,<xref rid="b41-MI-6-2-00304" ref-type="bibr">41</xref>,<xref rid="b42-MI-6-2-00304" ref-type="bibr">42</xref>,<xref rid="b52-MI-6-2-00304 b53-MI-6-2-00304 b54-MI-6-2-00304" ref-type="bibr">52-54</xref>,<xref rid="b58-MI-6-2-00304 b59-MI-6-2-00304 b60-MI-6-2-00304 b61-MI-6-2-00304 b62-MI-6-2-00304 b63-MI-6-2-00304" ref-type="bibr">58-63</xref>,<xref rid="b72-MI-6-2-00304" ref-type="bibr">72</xref>,<xref rid="b73-MI-6-2-00304" ref-type="bibr">73</xref>). AI, artificial intelligence; LMICs, low- and middle-income countries.</p></fn>
</table-wrap-foot>
</table-wrap>
</floats-group>
</article>
