ChatGPT in its Infancy — A Chat with its Pediatrician: Dr. Junaid Bajwa
Navigating the complexities of healthcare and artificial intelligence brings to mind the nurturing of a child through its stages of development. As one of the few practicing and academic physicians within Microsoft Research, Dr. Junaid Bajwa provides a distinctive lens through which to examine the evolving role of ChatGPT in healthcare. With an interdisciplinary background spanning clinical practice, digital health innovation, and scholarly research, Dr. Bajwa offers a multifaceted perspective on this intriguing subject.
John Nosta: Dr. Bajwa, it’s always a pleasure chatting with you. Serving as a sort of functional ‘pediatrician for ChatGPT’ at Microsoft, could you elaborate on the AI’s first steps in healthcare? Specifically, what early roles are best suited for ChatGPT?
Dr. Junaid Bajwa: Thanks, John. Much like a pediatrician or family physician monitoring early developmental milestones, we’re looking at foundational sectors within healthcare as initial playgrounds for ChatGPT: ideally low-risk, low-complexity, but high-impact areas.
These could include a diverse array of areas from administrative tasks like appointment scheduling to more analytical domains such as financial modeling, and even extend to preliminary work in clinical research. The intention here is to create an iterative learning environment, not unlike a developmental sandbox, wherein both the technology and the human stakeholders can adapt and refine their interactions. This creates a symbiotic relationship that will ultimately facilitate broader application in more complex and high-stakes healthcare sectors.
John Nosta: Trust is crucial in healthcare, from doctor-patient relationships to new technological interventions. Could you outline a roadmap for clinicians to cultivate a trusting relationship with ChatGPT and, by extension, other Large Language Models?
Dr. Junaid Bajwa: Without a doubt, trust is fundamental, especially in healthcare where the margin for error can be life-altering. Establishing trust with a novel AI technology like ChatGPT is a complex undertaking that must be both rigorous and dynamic. It begins with scientific validation — rigorous testing of the AI’s performance under multiple clinical and non-clinical scenarios, involving varied patient populations and healthcare settings. However, trust isn’t just built through scientific rigor; it must also be nurtured through experiential evidence. Clinicians and non-clinicians should have the opportunity to ‘test drive’ the technology, providing real-world feedback that can be looped back into system improvements. It’s these lived experiences with the technology that can make life easier for the healthcare teams and logically fit into a workflow. Additionally, transparent, interdisciplinary dialogues are essential for addressing ethical and operational concerns, such as data privacy and potential biases.
John Nosta: Technology adoption in healthcare often faces resistance due to fears and uncertainties. How should healthcare professionals navigate the “fear of adoption” when it comes to ChatGPT?
Dr. Junaid Bajwa: The apprehension to adopt new technologies in healthcare is completely understandable and is a complex mosaic of various concerns. There’s a justified need for caution given the life-critical nature of healthcare services. There are also ethical and legal concerns, including those regarding data privacy and the explainability of AI decision-making processes. Navigating this labyrinth of “fear of adoption” requires a multi-faceted strategy. This includes not only strong empirical evidence of efficacy and safety but also education and training programs to ensure that healthcare professionals feel confident and competent in using the technology. Establishing advisory panels that include medical professionals, ethicists, and patients could also be instrumental in providing a rounded, comprehensive view of adoption challenges. In the final analysis, the clinical community is our partner in both learning and adoption.
John Nosta: What’s your take on patient acceptance and utility of ChatGPT as a healthcare tool?
Dr. Junaid Bajwa: Patient acceptance is an integral pillar for the successful integration of any healthcare tool, ChatGPT included. Patients are increasingly becoming active participants in their healthcare journeys, and technologies that empower this agency are likely to find greater acceptance. ChatGPT can serve as an educational tool, an interactive interface for care plans, or even a medium for mental health support. The critical determinants for patient acceptance will be the tool’s user-friendliness, the perceived added value, and, above all, a demonstrable track record of maintaining the privacy and integrity of their data. In addition, the information provided needs to come from trusted resources or references, delivered in a language and approach that make sense to the end user.
From a utility perspective, the technology should not just be a “nice-to-have” but should provide concrete benefits like reducing time to diagnosis, aiding in treatment plans, or enhancing overall patient experience.
Navigating the complexities of care, with care.
Our dialogue with Dr. Bajwa shines a spotlight on the nuanced pathways through which ChatGPT could find its footing and mature within the healthcare ecosystem. Analogous to the critical nurturing required in human development, the technology’s integration into healthcare requires systematic validation, trust-building, and collaborative engagement from all stakeholders. These pillars provide the foundational base upon which ChatGPT can evolve from an intriguing concept into an indispensable tool in modern healthcare — impacting clinicians, administrators, and patients alike.
This story was written in collaboration with Microsoft.