Abstract

Background: Large language models (LLMs) are an emerging artificial intelligence (AI) technology reshaping research and healthcare, and their use in medicine has seen numerous recent applications. One area where LLMs have shown particular promise is the provision of medical information and guidance to practitioners.

Objective: This study aims to assess three prominent LLMs (Google's BARD, BingAI and ChatGPT‐4) in providing management advice for melanoma by comparing their responses to current clinical guidelines and the existing literature.

Methods: Five questions on melanoma pathology were posed to each of the three LLMs. A panel of three experienced Board‐certified plastic surgeons evaluated the responses for readability using established readability metrics (the Flesch Reading Ease Score, the Flesch–Kincaid Grade Level and the Coleman–Liau Index), for suitability using a modified DISCERN score, and by comparison against existing guidelines. A t‐test was performed to assess differences in mean readability and suitability scores between the LLMs, with p < 0.05 considered statistically significant.

Results: Mean readability scores were comparable across the three LLMs. ChatGPT recorded a Flesch Reading Ease Score of 35.42 (±21.02), a Flesch–Kincaid Grade Level of 11.98 (±4.49) and a Coleman–Liau Index of 12.00 (±5.10), but none of these differences reached statistical significance (p > 0.05). On suitability, ChatGPT's modified DISCERN score of 58 (±6.44) was significantly higher than BARD's 36.2 (±34.06) (p = 0.04), while the difference from BingAI's 49.8 (±22.28) was not significant.

Conclusion: This study demonstrates that ChatGPT marginally outperforms BARD and BingAI in providing reliable, evidence‐based clinical advice, although all three models remain limited in depth and specificity. Future research should improve LLM performance by integrating specialized databases and expert knowledge to support patient‐centred care.
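As a minimal illustrative sketch (not the study's actual analysis pipeline), the snippet below shows how the three readability indices named in the Methods can be computed from their published formulas, and how a two-sample t-test could compare per-question scores between two models. The syllable counter is a crude vowel-run heuristic, and the score arrays are hypothetical placeholders, not data from this study.

```python
# Sketch only: published readability formulas plus a Welch t-test.
# Syllable counting is heuristic; sample scores are hypothetical.
import re
from scipy import stats

def count_syllables(word: str) -> int:
    """Approximate syllables as runs of vowels (heuristic, not exact)."""
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def readability(text: str) -> dict:
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z]+", text)
    n_words = max(1, len(words))
    n_syllables = sum(count_syllables(w) for w in words)
    n_letters = sum(len(w) for w in words)
    # Flesch Reading Ease: higher score = easier to read
    fres = 206.835 - 1.015 * (n_words / sentences) - 84.6 * (n_syllables / n_words)
    # Flesch-Kincaid Grade Level: estimated US school grade
    fkgl = 0.39 * (n_words / sentences) + 11.8 * (n_syllables / n_words) - 15.59
    # Coleman-Liau Index: letters and sentences per 100 words
    cli = (0.0588 * (n_letters / n_words * 100)
           - 0.296 * (sentences / n_words * 100) - 15.8)
    return {"FRES": fres, "FKGL": fkgl, "CLI": cli}

# Hypothetical per-question FRES scores for two models (placeholder values).
chatgpt_fres = [35.4, 30.1, 42.7, 28.9, 40.0]
bard_fres = [33.0, 25.6, 38.2, 30.4, 36.1]

# Welch's t-test (equal_var=False), since the reported standard deviations
# suggest the per-model variances differ substantially.
t_stat, p_value = stats.ttest_ind(chatgpt_fres, bard_fres, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")  # p < 0.05 -> significant
```

Welch's variant is chosen here over the pooled-variance t-test because the abstract's standard deviations (e.g. ±6.44 vs. ±34.06 for DISCERN) indicate unequal variances between models; whether the study itself used this variant is not stated.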