Introduction
In the field of medical research, conducting systematic and other forms of literature reviews and managing the associated data are critical yet intricate tasks. Systematic reviews play a crucial role in generating knowledge for a range of stakeholders, including patients, healthcare providers, researchers, and policymakers (Page et al., 2021). Traditional approaches to literature reviews and data management in medical research have historically relied on manual database searching, indexing, and organising of sources (Alzyood et al., 2018). These traditional methods are often time-intensive and laborious, posing significant challenges for medical researchers, and a systematic review can take between 12 and 24 months to complete (Ganann et al., 2010; Alzyood et al., 2018). Manual database searching requires meticulous attention to detail and can be time-consuming, especially when multiple databases need to be searched to ensure comprehensive coverage of the literature (Clark et al., 2020). For instance, manual database searching involves the use of specific keywords and Medical Subject Headings (MeSH) terms to retrieve relevant literature from databases such as Embase, MEDLINE, PubMed, Scopus, and Web of Science (Alzyood et al., 2018; Gusenbauer and Haddaway, 2020). Completing a systematic integrative literature review, which involves indexing, categorising sources, and manual coding, is a laborious task, especially when dealing with a large volume of literature (Alzyood et al., 2018). The question is whether the integration of artificial intelligence (AI) can make conducting systematic reviews, and managing and organising their data, more efficient and less time-consuming.
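To illustrate the kind of database searching described above, the sketch below queries PubMed's public E-utilities interface with a combined keyword and MeSH search string. It is a minimal, illustrative example only; the search term is hypothetical and not a validated review strategy.

```python
# Illustrative sketch: querying PubMed's public E-utilities API with a
# keyword/MeSH search string of the kind a reviewer would run manually.
# The query below is a hypothetical example, not a validated search strategy.
import requests

ESEARCH_URL = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

def search_pubmed(term: str, retmax: int = 20) -> list[str]:
    """Return a list of PubMed IDs (PMIDs) matching the search term."""
    params = {"db": "pubmed", "term": term, "retmax": retmax, "retmode": "json"}
    response = requests.get(ESEARCH_URL, params=params, timeout=30)
    response.raise_for_status()
    return response.json()["esearchresult"]["idlist"]

if __name__ == "__main__":
    # Combine free-text keywords with a MeSH heading, as in a manual search.
    query = '("hand hygiene"[MeSH Terms]) AND (compliance) AND (hospital)'
    pmids = search_pubmed(query)
    print(f"{len(pmids)} records retrieved: {pmids}")
```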
In this editorial, we argue that the advent of AI tools heralds a transformative era in medical research. AI tools can serve as formidable allies, offering automation of labour-intensive processes and delivering insights that might otherwise elude manual methods (Mohammed et al., 2022). We advocate for the integration of AI tools in healthcare literature reviews and in the organisation and management of data for systematic reviews, and we explore how AI technologies can augment the efficiency, accuracy, accessibility, and overall quality of these critical research activities, ultimately leading to more informed and effective healthcare research (Hinojo-Lucena et al., 2019; Aldosari, 2020).
The integration of AI in medical research is not merely about replacing traditional methods of manual database searching and looking for evidence through different online platforms; rather, it is about enhancing and refining them. Advanced AI algorithms excel at sifting through vast databases, identifying relevant studies with remarkable precision, and discerning patterns and correlations that might be overlooked in manual, human-driven systematic literature reviews (Yang et al., 2019). This capability is particularly invaluable in systematic reviews and meta-analyses, where comprehensiveness and accuracy are paramount (Marshall et al., 2018). Additionally, AI's prowess in data management (organising, categorising, and synthesising information) transforms unwieldy datasets into coherent, accessible knowledge (He et al., 2019). This not only saves medical researchers valuable time but also allows for a more thorough and nuanced understanding of complex healthcare topics. By examining specific case studies and recent advancements in AI-driven research tools, we can better appreciate the nuanced ways in which AI is reshaping the landscape of healthcare literature review and data management, setting a new standard for precision and efficiency in medical research.
The benefits of integrating AI into medical literature reviews, data organisation and management are becoming increasingly clear (Jiang et al., 2017). For instance, AI can save medical researchers time when searching for studies and organising datasets, allowing them to focus on analysis and interpretation (Eshwar, 2023). Furthermore, machine learning (ML), a subfield of AI that enables computers to learn human-like decision-making from data, is adept at uncovering hidden patterns and trends in large datasets, facilitating a level of insight and depth in data analysis that is challenging to achieve manually (Rouzrokh et al., 2022). For systematic reviews, the integration of AI would not only accelerate the review process but also significantly raise the quality and accuracy of the evidence extracted from the included studies.
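As a simplified illustration of this kind of pattern discovery, the sketch below clusters a handful of hypothetical abstracts by their TF-IDF representations to surface thematic groupings that would be tedious to identify manually. It is a toy example under our own assumptions, not a production workflow.

```python
# Minimal sketch of ML-driven pattern discovery: cluster abstracts by their
# TF-IDF representations to surface thematic groupings. Abstracts are hypothetical.
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

abstracts = [
    "Hand hygiene compliance among nurses in intensive care units.",
    "Patient involvement in promoting hand hygiene in hospital wards.",
    "Machine learning models for predicting sepsis from vital signs.",
    "Deep learning applied to early sepsis detection in emergency care.",
]

vectorizer = TfidfVectorizer(stop_words="english")
X = vectorizer.fit_transform(abstracts)          # sparse document-term matrix

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
for label, text in zip(kmeans.labels_, abstracts):
    print(label, text)                            # records grouped by theme
```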
The transformative impact of AI in medical research becomes particularly evident through real-world examples. Here we consider a few notable instances where AI has made significant strides in literature reviews and data management, starting with the use of AI to facilitate searching for relevant studies to include in literature reviews. For instance, Dijk et al. (2023) reported that AI saved them time, as they needed to assess only 23% of the identified articles to find those that were relevant. The authors used the AI tool ASReview (v0.17.1) (Schoot et al., 2021) to facilitate their searches and to report findings from database searching. Findings from this study provide valuable insights into the application of AI in healthcare literature reviews and systematic review methodologies. The use of AI tools such as ASReview has demonstrated promising potential to accelerate the systematic review process, resulting in significant time savings and improved efficiency. Additionally, when appropriately used, such tools have been shown to be effective in conducting transparent and reliable systematic reviews, particularly by accelerating the literature selection process in the title and abstract screening phase (Dijk et al., 2023). In their systematic review, Dijk et al. (2023) claimed that the use of AI tools saved 77% of the time spent on title and abstract screening. In medical research practice, using AI and machine learning to screen thousands of trial reports and identify those most likely to be relevant to Cochrane Reviews has reduced the workload considerably for Cochrane Review authors, freeing their time to focus on more in-depth analysis work (Cochrane, 2017). Nevertheless, there is a lack of research on AI's role in searching for relevant studies when the AI is provided with pre-identified inclusion and exclusion criteria.
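The prioritised screening offered by tools such as ASReview can be pictured as an active learning loop: a model is repeatedly retrained on the reviewer's title and abstract decisions and used to push the most likely relevant records to the top of the queue. The sketch below is our own simplified rendering of that general idea, not the ASReview code base; the function and parameter names are hypothetical.

```python
# Simplified, illustrative active-learning screening loop (not ASReview itself):
# retrain on the reviewer's decisions so far, then ask about the record the model
# currently rates as most likely relevant.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

def prioritised_screening(records, initial_labels, label_fn, budget):
    """Screen up to `budget` records, always showing the most promising one next.

    records        : list of title+abstract strings
    initial_labels : dict {index: 0/1} with seed decisions from the reviewer;
                     must contain at least one relevant (1) and one irrelevant (0)
    label_fn       : callable(index) -> 0/1, the reviewer's decision
    """
    X = TfidfVectorizer(stop_words="english").fit_transform(records)
    labels = dict(initial_labels)
    for _ in range(budget):
        unlabelled = [i for i in range(len(records)) if i not in labels]
        if not unlabelled:
            break
        idx = list(labels)
        model = LogisticRegression(max_iter=1000)
        model.fit(X[idx], [labels[i] for i in idx])
        scores = model.predict_proba(X[unlabelled])[:, 1]   # P(relevant)
        next_i = unlabelled[int(np.argmax(scores))]
        labels[next_i] = label_fn(next_i)                   # reviewer decides
    return labels
```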
The challenges of integrating AI into systematic reviews in medical research are multifaceted and ever-evolving. One challenge is managing the sheer volume of data and literature available (Cozzoli et al., 2022). Human-driven systematic literature searches for evidence from peer-reviewed studies, while thorough, often fall short of keeping pace with rapidly growing healthcare information (Kelly et al., 2019). Such a scenario can lead to an excess of information in which valuable insights are lost, and to difficulty staying abreast of the latest studies and findings (Sauerbrei et al., 2023). In addressing these challenges, we argue that AI emerges as a beacon of innovation in completing systematic reviews. AI's ability to process and analyse large datasets swiftly, and with a level of precision unattainable by human effort alone, presents a timely solution (Esmaeilzadeh, 2020). In our experience, a wide range of AI-powered tools are capable of sifting through an extensive body of literature, efficiently extracting relevant information, and presenting it in a manner that is both accessible and actionable for novice researchers, research students, experienced researchers, and healthcare professionals.
While the promise of AI in enhancing medical research is undeniable, it is also imperative to recognise its limitations and the ethical considerations it raises. For instance, AI systems are only as good as the data they are trained on, making them susceptible to biases present in the source data (Varsha, 2023). In this context, bias is a deviation from the expected performance of machine learning algorithms, resulting from either the implicit choices made by algorithm designers or the characteristics of the data used for training and testing (Dilmegani, 2022). Additionally, the 'black box' nature of some AI algorithms can raise transparency and interpretability issues, which are crucial in healthcare decision-making (Linardatos et al., 2020). Ethical concerns, particularly regarding patient data privacy and the potential misuse of AI tools, also warrant careful consideration (Farhud and Zokaei, 2021; Jabarulla and Lee, 2021). These challenges necessitate a cautious and informed approach to the adoption of AI in medical research, underscoring the need for robust ethical guidelines and transparent methodologies (Petersson et al., 2022).
A further significant challenge is bias and quality in the data used to train AI systems, and there have been calls for sufficient transparency about the types of data used and any shortcomings in the software, for instance data bias (Gerke et al., 2020). AI in healthcare has also posed several legal challenges in the USA and Europe (Gerke et al., 2020), as AI algorithms can inadvertently perpetuate and amplify biases present in the source data, potentially leading to skewed or unethical outcomes (Chen, 2023). For instance, if an AI system is trained predominantly on literature from high-income countries, its applicability and relevance to low-income settings might be limited, thereby perpetuating healthcare disparities (Celi et al., 2022). Evidence from Celi et al. (2022) suggests that the data came primarily from the United States and China, and that at least the first ten contributing countries were high-income countries, indicating the importance of developing AI infrastructure in data-poor regions and low-income countries.
From our experience, an additional risk of bias is associated with human–AI interaction and the type of questions usually asked of the AI tool. Inevitably, AI algorithms inherit our mistakes and biases because they are designed by humans, and they have been described as 'weapons of math destruction' (Verma, 2019). It has been argued that AI algorithms use historical data to identify patterns and predict future events based on what happened in the past (Vicente and Matute, 2023). The difficulty of obtaining high-quality datasets, especially in clinical contexts, can lead to biases in AI systems, as they often learn and potentially replicate the imbalances present in their training data (Obermeyer et al., 2019; Wiens et al., 2019; Larrazabal et al., 2020). One example is the misdiagnosis of a pathology based on misidentification of a particular pattern of coloured pixels in an image (Vicente and Matute, 2023). We therefore recommend that, when using AI tools for literature searching, making decisions about inclusion or exclusion of studies, screening titles or abstracts, and reaching conclusions, bias be treated as a key factor to consider.
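One simple, practical check that follows from this discussion is to measure whether an AI screening model performs equally well across subgroups of the literature, for example by study setting or country of origin. The sketch below is a hypothetical illustration of such a check; the data and group labels are assumptions for the example.

```python
# Illustrative check only: compare a screening model's recall across subgroups.
# Systematically lower recall for one subgroup suggests it is under-represented
# in the training data. All data and group labels below are hypothetical.
from sklearn.metrics import recall_score

def recall_by_group(y_true, y_pred, groups):
    """Return recall per subgroup so imbalances become visible."""
    out = {}
    for g in set(groups):
        idx = [i for i, grp in enumerate(groups) if grp == g]
        out[g] = recall_score([y_true[i] for i in idx],
                              [y_pred[i] for i in idx],
                              zero_division=0)
    return out

# Hypothetical inclusion decisions for records from two study settings.
y_true = [1, 1, 0, 1, 1, 0, 1, 0]
y_pred = [1, 1, 0, 1, 0, 0, 0, 0]
groups = ["high-income"] * 4 + ["low-income"] * 4
print(recall_by_group(y_true, y_pred, groups))   # recall differs markedly by group
```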
An additional challenge to the use of AI tools for searching databases lies in the nature of the tools themselves. For instance, the majority of AI literature-searching tools are owned and controlled by private entities (Murdoch, 2021). From our experience, AI tools such as Rayyan, ASReview, DistillerSR, EPPI-Reviewer, PICO Portal, RobotAnalyst, and SWIFT-Active Screener require paid subscriptions to fully access their 'premium' features. These tools offer a free trial version limited to certain basic features and require researchers to purchase a full package in order to use important features. For instance, a paid full version typically enables enhanced statistical tools, complex data visualisation options, shared workspaces, real-time editing, summarising large sets of data or research findings, enhanced data security features, training sessions, and exporting data and findings in various formats (Jiménez et al., 2022; Dijk et al., 2023; Dwivedi et al., 2023). Hence, limited access to these premium features is particularly challenging for novice researchers and students, who may also lack the skills and knowledge to make the best use of these tools. We therefore recommend that medical and educational institutions provide researchers with tailored training and support on how to use AI tools.
This section has highlighted some of the potential and the challenges of integrating AI into systematic literature reviews, data management and organisation. The next section explores the future prospects of AI in this field, considering potential advancements and the evolving role of AI in facilitating more efficient literature reviewing in medical research.
Future prospects of AI in medical research
The future of AI in medical research and education holds immense promise. Looking ahead, we can anticipate several advancements and evolving roles of AI that could further revolutionise this field. Concerning the completion of systematic reviews, data management and organisation, future advancements in AI algorithms are expected to provide even greater accuracy and efficiency. With the development of more sophisticated machine learning models, AI will be able to understand context and nuance in medical literature more effectively, leading to more precise and comprehensive systematic reviews. As AI becomes more ingrained in medical research, there will be a greater emphasis on developing 'ethical AI' that is fair, transparent, and respectful of privacy and rights (Munn, 2022). To fully capitalise on the potential of AI, medical researchers must stay abreast of AI's advancements and actively engage in shaping the future of AI in medical research. This includes continuous education, interdisciplinary collaboration, and proactive policy-making. Concurrently, the development of comprehensive policies and frameworks governing the use of AI in medical research will be crucial in ensuring these technologies are used responsibly. In synthesising the insights from our exploration of the use of AI in systematic reviews, data abstraction, management and organisation, it becomes evident that AI holds transformative potential for this field. However, realising this potential requires a balanced approach that acknowledges both the capabilities and the limitations of AI. It is therefore necessary for medical researchers to report how AI was used in their systematic reviews, including what questions were asked and whether researchers were involved in data validation and verification processes.
Recommendations for medical researchers
- For medical researchers, it is essential to embrace AI tools while maintaining a critical eye towards their output, ensuring that AI complements rather than replaces their expertise. Hence, the main argument of this editorial is that AI tools can help to foster literature searching when conducting systematic reviews rather than replace the role of researchers.
- Transparency and reproducibility are vital: medical researchers are expected to report and clearly explain how AI tools were used, including the types of questions asked, leaving a clear trail of steps that future researchers can follow (see the sketch after this list).
- Continuous education in AI and its applications in medical research is paramount. More research is needed to assess, through rigorous methodologies, the benefits and challenges of AI applications and their potential to enhance the systematic review process in medical research. This can be supported by medical and educational institutions providing full, prepaid access to AI tools that aid systematic reviews, alongside financial and infrastructural support.
- It is crucial to integrate AI training into the medical and health informatics curricula. This will equip future medical researchers with the necessary skills to effectively utilise AI in their research. Librarians at medical and educational institutions play a pivotal role in offering training sessions on the use of AI tools. Their expertise in information management and technology makes them ideally suited to guide researchers in navigating AI tools.
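As a minimal illustration of the transparency recommendation above, the sketch below records a hypothetical AI-assisted screening step in a machine-readable log. The field names, figures, and file name are assumptions for the example, not a formal reporting standard.

```python
# Hypothetical audit-trail sketch: record which AI tool was used, its version,
# the queries posed, and how its output was verified, so the step can be reproduced.
import json
from datetime import date

screening_log = {
    "review_stage": "title and abstract screening",
    "tool": "ASReview",                  # example tool named in the text
    "tool_version": "0.17.1",
    "date": date.today().isoformat(),
    "search_query": '("hand hygiene"[MeSH Terms]) AND (compliance)',  # illustrative
    "records_screened_by_ai": 1250,      # hypothetical figure
    "records_verified_by_humans": 290,   # hypothetical figure
    "human_validation": "two reviewers independently checked all AI exclusions",
}

with open("ai_screening_log.json", "w") as fh:
    json.dump(screening_log, fh, indent=2)
```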
In conclusion, we have seen how AI can greatly enhance the efficiency, accuracy, and depth of medical research when conducting systematic literature reviews. Case studies have demonstrated its ability to streamline literature reviews and manage complex datasets. At the same time, we have discussed the challenges, particularly regarding data bias, transparency, and ethical considerations such as privacy and rights. Above, we have offered recommendations for better use of AI, for instance when searching the literature for peer-reviewed studies to include in a systematic review. As we look to the future, the role of AI in medical research is poised to grow, and ongoing advancements in AI technology promise even more powerful tools for researchers. The key to leveraging these advancements lies in our collective commitment to the ethical, transparent, and responsible use of AI in medical research when conducting systematic literature reviews.
References:
- Alzyood, M., Jackson, D., Brooke, J., & Aveyard, H. (2018). An integrative review exploring the perceptions of patients and healthcare professionals towards patient involvement in promoting hand hygiene compliance in the hospital setting. Journal of Clinical Nursing, 27(7-8), 1329-1345. https://doi.org/10.1111/jocn.14305
- Amedior, N. C. (2023). Ethical implications of artificial intelligence in the healthcare sector. Advances in Multidisciplinary and Scientific Research Journal Publication, 36, 1-12. https://doi.org/10.22624/aims-/accrabespoke2023p1
- Aldosari, S. A. (2020). The future of higher education in the light of artificial intelligence transformations. International Journal of Higher Education, 9(3), 145. https://doi.org/10.5430/ijhe.v9n3p145
- Celi, L. A., Cellini, J., Charpignon, M., Dee, E. C., Dernoncourt, F., Eber, R., … & Yao, J. S. (2022). Sources of bias in artificial intelligence that perpetuate healthcare disparities—a global review. PLOS Digital Health, 1(3), e0000022. https://doi.org/10.1371/journal.pdig.0000022
- Chen, Z. (2023). Ethics and discrimination in artificial intelligence-enabled recruitment practices. Humanities and Social Sciences Communications, 10(1). https://doi.org/10.1057/s41599-023-02079-x
- Clark, J., Sanders, S., Carter, M., Honeyman, D., Cleo, G., Auld, Y., … & Beller, E. (2020). Improving the translation of search strategies using the polyglot search translator: a randomized controlled trial. Journal of the Medical Library Association, 108(2). https://doi.org/10.5195/jmla.2020.834
- Cozzoli, N., Salvatore, F. P., Faccilongo, N., & Milone, M. (2022). How can big data analytics be used for healthcare organization management? Literary framework and future research from a systematic review. BMC Health Services Research, 22(1). https://doi.org/10.1186/s12913-022-08167-z
- Dijk, S., Brusse-Keizer, M., Bucsán, C., Palen, J., Doggen, C., & Lenferink, A. (2023). Artificial intelligence in systematic reviews: promising when appropriately used. BMJ Open, 13(7), e072254. https://doi.org/10.1136/bmjopen-2023-072254
- Dwivedi, Y. K., Kshetri, N., Hughes, L., Slade, E., Jeyaraj, A., Kar, A. K., … & Wright, R. (2023). Opinion paper: "So what if ChatGPT wrote it?" Multidisciplinary perspectives on opportunities, challenges and implications of generative conversational AI for research, practice and policy. International Journal of Information Management, 71, 102642. https://doi.org/10.1016/j.ijinfomgt.2023.102642
- Eshwar, M. S. (2023). Exploring the potential of artificial intelligence in healthcare: possibilities and challenges. International Scientific Journal of Engineering and Management, 02(04). https://doi.org/10.55041/isjem00408
- Esmaeilzadeh, P. (2020). Use of AI-based tools for healthcare purposes: a survey study from consumers' perspectives. BMC Medical Informatics and Decision Making, 20(1). https://doi.org/10.1186/s12911-020-01191-1
- Farhud, D. D., & Zokaei, S. (2021). Ethical issues of artificial intelligence in medicine and healthcare. Iranian Journal of Public Health, 50(11), i-v. https://doi.org/10.18502/ijph.v50i11.7600
- Ganann, R., Ciliska, D., & Thomas, H. (2010). Expediting systematic reviews: methods and implications of rapid reviews. Implementation Science, 5(1). https://doi.org/10.1186/1748-5908-5-56
- Gerke, S., Minssen, T., & Cohen, G. (2020). Ethical and legal challenges of artificial intelligence-driven healthcare. Artificial Intelligence in Healthcare, 295-336. https://doi.org/10.1016/b978-0-12-818438-7.00012-5
- Gusenbauer, M. and Haddaway, N. R. (2020). Which academic search systems are suitable for systematic reviews or meta-analyses? Evaluating retrieval qualities of Google Scholar, PubMed, and 26 other resources. Research Synthesis Methods, 11(2), 181-217. https://doi.org/10.1002/jrsm.1378
- Hinojo-Lucena, F., Díaz, I. A., Reche, M. d. P. C., & Rodríguez, J. M. R. (2019). Artificial intelligence in higher education: a bibliometric study on its impact in the scientific literature. Education Sciences, 9(1), 51. https://doi.org/10.3390/educsci9010051
- He, J., Baxter, S. L., Xu, J., Xu, J., Zhou, X., & Zhang, K. (2019). The practical implementation of artificial intelligence technologies in medicine. Nature Medicine, 25(1), 30-36. https://doi.org/10.1038/s41591-018-0307-0
- Jabarulla, M. Y. and Lee, H. (2021). A blockchain and artificial intelligence-based, patient-centric healthcare system for combating the covid-19 pandemic: opportunities and applications. Healthcare, 9(8), 1019. https://doi.org/10.3390/healthcare9081019
- Jiang, F., Jiang, Y., Zhi, H., Dong, Y., Li, H., Ma, S., … & Wang, Y. (2017). Artificial intelligence in healthcare: past, present and future. Stroke and Vascular Neurology, 2(4), 230-243. https://doi.org/10.1136/svn-2017-000101
- Jiménez, R., Lee, T., Rosillo, N., Córdova, R., Cree, I. A., González, Á. F., … & Ruiz, B. I. I. (2022). Machine learning computational tools to assist the performance of systematic reviews: a mapping review. BMC Medical Research Methodology, 22(1). https://doi.org/10.1186/s12874-022-01805-4
- Jobin, A. and Ienca, M. (2019). The global landscape of AI ethics guidelines. Nature Machine Intelligence, 1(9), 389-399. https://doi.org/10.1038/s42256-019-0088-2
- Kelly, C., Karthikesalingam, A., Suleyman, M., Corrado, G. S., & King, D. (2019). Key challenges for delivering clinical impact with artificial intelligence. BMC Medicine, 17(1). https://doi.org/10.1186/s12916-019-1426-2
- Larrazabal, A. J., Nieto, N., Peterson, V., Milone, D. H., & Ferrante, E. (2020). Gender imbalance in medical imaging datasets produces biased classifiers for computer-aided diagnosis. Proceedings of the National Academy of Sciences, 117(23), 12592-12594. https://doi.org/10.1073/pnas.1919012117
- Marshall, I., Noel-Storr, A., Kuiper, J., Thomas, J., & Wallace, B. (2018). Machine learning for identifying randomized controlled trials: an evaluation and practitioner’s guide. Research Synthesis Methods, 9(4), 602-614. https://doi.org/10.1002/jrsm.1287
- Mohammed, M. A., Mohammed, M. A., & Mohammed, V. A. (2022). Impact of artificial intelligence on the automation of digital health system. International Journal of Software Engineering & Applications, 13(6), 23-29. https://doi.org/10.5121/ijsea.2022.13602
- Murdoch, B. (2021). Privacy and artificial intelligence: challenges for protecting health information in a new era. BMC Medical Ethics, 22(1). https://doi.org/10.1186/s12910-021-00687-3
- Munn, L. (2022). The uselessness of AI ethics. AI and Ethics, 3(3), 869-877. https://doi.org/10.1007/s43681-022-00209-w
- Obermeyer, Z., Powers, B., Vogeli, C., & Mullainathan, S. (2019). Dissecting racial bias in an algorithm used to manage the health of populations. Science, 366(6464), 447-453. https://doi.org/10.1126/science.aax2342
- Onwumere, J., Amaral, F., & Valmaggia, L. (2018). Digital technology for caregivers of people with psychosis: systematic review. JMIR Mental Health, 5(3), e55. https://doi.org/10.2196/mental.9857
- Page, M., McKenzie, J., Bossuyt, P., Boutron, I., Hoffmann, T., Mulrow, C., … & Moher, D. (2021). The PRISMA 2020 statement: an updated guideline for reporting systematic reviews. Systematic Reviews, 10(1). https://doi.org/10.1186/s13643-021-01626-4
- Petersson, L., Larsson, I., Nygren, J., Nilsen, P., Neher, M., Reed, J., … & Svedberg, P. (2022). Challenges to implementing artificial intelligence in healthcare: a qualitative interview study with healthcare leaders in Sweden. BMC Health Services Research, 22(1). https://doi.org/10.1186/s12913-022-08215-8
- Rouzrokh, P., Khosravi, B., Vahdati, S., Moassefi, M., Faghani, S., Mahmoudi, E., … & Erickson, B. J. (2022). Machine learning in cardiovascular imaging: a scoping review of published literature. Current Radiology Reports, 11(2), 34-45. https://doi.org/10.1007/s40134-022-00407-8
- Salah, H., Muhsen, I. N., Salama, M. E., Owaidah, T., & Hashmi, S. K. (2019). Machine learning applications in the diagnosis of leukemia: current trends and future directions. International Journal of Laboratory Hematology, 41(6), 717-725. https://doi.org/10.1111/ijlh.13089
- Sauerbrei, A., Kerasidou, A., Lucivero, F., & Hallowell, N. (2023). The impact of artificial intelligence on the person-centred, doctor-patient relationship: some problems and solutions. BMC Medical Informatics and Decision Making, 23(1). https://doi.org/10.1186/s12911-023-02162-y
- Schoot, R., Bruin, J., Schram, R., Zahedi, P., Boer, J., Weijdema, F., … & Oberski, D. (2021). An open source machine learning framework for efficient and transparent systematic reviews. Nature Machine Intelligence, 3(2), 125-133. https://doi.org/10.1038/s42256-020-00287-7
- Varsha, P. (2023). How can we manage biases in artificial intelligence systems – a systematic literature review. International Journal of Information Management Data Insights, 3(1), 100165. https://doi.org/10.1016/j.jjimei.2023.100165
- Verma, S. (2019). Weapons of math destruction: how big data increases inequality and threatens democracy. Vikalpa: The Journal for Decision Makers, 44(2), 97-98. https://doi.org/10.1177/0256090919853933
- Vicente, L. and Matute, H. (2023). Humans inherit artificial intelligence biases. Scientific Reports, 13(1). https://doi.org/10.1038/s41598-023-42384-8
- Wiens, J., Saria, S., Sendak, M., Ghassemi, M., Vincent, J., Doshi‐Velez, F., … & Goldenberg, A. (2019). Do no harm: a roadmap for responsible machine learning for health care. Nature Medicine, 25(9), 1337-1340. https://doi.org/10.1038/s41591-019-0548-6
- Yang, X., Wang, Y., Byrne, R., Schneider, G., & Yang, S. (2019). Concepts of artificial intelligence for computer-assisted drug discovery. Chemical Reviews, 119(18), 10520-10594. https://doi.org/10.1021/acs.chemrev.8b00728

