
Not long after the artificial intelligence company OpenAI released its ChatGPT chatbot, the application went viral, garnering 1 million users within five days of its release. Since then, it has been called world-changing, a tipping point for artificial intelligence, and the start of a new technological revolution.

Like many others, we began exploring potential applications of ChatGPT, which was trained on more than 570 gigabytes of online text data, including books, web articles, Wikipedia entries, and other internet content, in health care. Although we are excited about the use of AI tools like ChatGPT for medical applications, inaccuracies, fabricated information, and biases make us hesitant to support its use outside of certain circumstances. Those circumstances include streamlining educational and administrative tasks; using it to aid clinical decision-making still faces significant challenges.

As an educational aid

In the United States, medical education continues to inch away from a system that revolves around memorizing and retaining information. AI systems like ChatGPT can help medical students and physicians learn more effectively, from creating novel mnemonics (“generate a mnemonic for the names of the cranial nerves”) to explaining complex concepts at different levels of sophistication (“explain tetralogy of Fallot to me as if I were a 10th grader, a first-year medical student, or a cardiology fellow”).


By querying ChatGPT ourselves, we found that it can help with studying for standardized medical exams by generating high-quality practice questions along with detailed explanations of why answers are correct or incorrect. Perhaps it is no surprise, then, that in a recently released preprint, on which ChatGPT is listed as a co-author, the chatbot passed the first two steps of the United States Medical Licensing Examination, the exam medical students must pass to qualify for a medical license.

ChatGPT’s conversational design also lets it simulate a patient: it can be asked to provide a medical history, physical exam findings, lab results, and more. Because it can answer follow-up questions, ChatGPT may offer opportunities for physicians to test their diagnostic skills and general clinical knowledge, albeit with a considerable degree of uncertainty.


Although ChatGPT can help physicians, they should tread carefully and not use it as a primary source without verification.

For administrative work

In 2018, the last year for which hard statistics are available, 70% of physicians reported spending at least 10 hours on paperwork and administrative tasks, with one-third spending 20 hours or more.

ChatGPT could help health care workers save time on non-clinical tasks, which contribute to burnout and take time away from interacting with patients. We found that ChatGPT’s training data include Current Procedural Terminology (CPT) codes, the standardized system for identifying the medical procedures and services that most physicians bill for. To test how well this worked, we asked ChatGPT for several billing codes: it gave us the correct code for a Covid-19 vaccination, but incorrect codes for amniocentesis and a sacral X-ray. In other words, for now, close but no cigar.

Clinicians also spend an inordinate amount of time writing letters advocating for their patients’ needs to insurers and other third parties. ChatGPT could help with this time-consuming task. We asked it: “Can you write a prior-authorization letter to Blue Cross about the use of a transesophageal echocardiogram in a patient with valvular disease? The service is not covered by the insurance provider. Please include references to supporting scientific research.” Within seconds, we received a personalized letter that could serve as a time-saving template for such requests. It needed some editing, but overall it got the message across.
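A minimal sketch of how such a prompt might be sent programmatically through OpenAI’s application programming interface (mentioned below); the openai Python package, the model name, and the prompt wording here are illustrative assumptions, and any draft it returns, like the one we received, would still need a clinician’s editing and verification of its references:

from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

prompt = (
    "Can you write a prior-authorization letter to Blue Cross about the use of a "
    "transesophageal echocardiogram in a patient with valvular disease? The service "
    "is not covered by the insurance provider. Please include references to "
    "supporting scientific research."
)

# Illustrative model name; swap in whichever model is available.
response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": prompt}],
)

# The draft letter; a clinician must edit it and verify every citation.
print(response.choices[0].message.content)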

Clinical applications

The use of ChatGPT in clinical practice should be approached with more caution than its use in educational and administrative work. In the clinic, ChatGPT could streamline documentation, generating medical charts, progress notes, and discharge instructions. For example, Jeremy Faust, an emergency medicine physician at Brigham and Women’s Hospital in Boston, put ChatGPT to the test by asking it to generate a chart for a fictional patient with a cough, and said the result was “pretty good.” The potential is clear: helping health care workers identify symptoms, determine dosages, recommend courses of action, and more. But the risks are high.

One of the main issues with ChatGPT is its potential to generate inaccurate or fabricated information. When we asked it about the diagnostic workup for postpartum hemorrhage, it appeared to perform at the level of an expert, even providing supporting scientific references. But when we checked those sources, none of them were real. Faust caught ChatGPT in a similar fabrication: it stated that costochondritis, a common cause of chest pain, can be caused by oral contraceptive pills, then produced a bogus research paper to support the claim.

This potential for fabrication is all the more worrisome because a recent preprint found that scientists had trouble distinguishing real research abstracts from fake ones generated by ChatGPT. And just as many patients now turn to Google and other search engines for medical information, patients who turn to ChatGPT risk being misinformed. Indeed, ChatGPT produced an eerily convincing explanation of how “crushed clay added to breast milk can support the infant digestive system.”

Concerns about clinical misinformation are heightened further by bias in ChatGPT’s responses. When one user asked ChatGPT to generate computer code to determine whether a person would make a good scientist based on their race and gender, the program concluded that a good scientist would be a white male. While OpenAI can filter out some overt biases, we are concerned about subtler biases that could perpetuate stigma and discrimination in health care. Bias can arise from training data with small sample sizes and limited variability, but because ChatGPT was trained on more than 570 gigabytes of online text, its biases may simply reflect the biases of the internet.

What’s next?

Artificial intelligence tools are here to stay. They are already being used as clinical decision aids to predict kidney disease, simplify radiology reports, and accurately predict leukemia remission rates. The recent release of Google’s Med-PaLM, a similar AI model geared toward medicine, and of the OpenAI application programming interface for building ChatGPT into health care software, further underscores the technological revolution that stands to transform health care.

But in this headlong rush of progress, an imperfect tool is being deployed without the necessary guardrails. Although there may be acceptable uses of ChatGPT in medical education and administrative work, we cannot support using the program for clinical purposes, at least not in its current form.

Released to the public as a beta product, ChatGPT will undoubtedly improve, and we await the arrival of GPT-4, which we hope will bring gains in quality and reliability. The release of a tool as powerful as ChatGPT can be intimidating, but medicine should take appropriate steps to assess its potential, minimize its harms, and facilitate its responsible use.

Rushabh H. Doshi is a medical student at Yale School of Medicine. Simar S. Bajaj is an undergraduate student at Harvard University. The authors thank Harlan M. Krumholz, director of the Center for Outcomes Research and Evaluation at Yale New Haven Hospital, for his input and help with this essay.




