Can Artificial Intelligence Improve the Readability of Patient Education Materials on Aortic Stenosis? A Pilot Study.
Cardiol Ther 2024-03-01
Performance of AI-powered chatbots in diagnosing acute pulmonary thromboembolism from given clinical vignettes.
Acute Med 2024-08-12
Radiologic Decision-Making for Imaging in Pulmonary Embolism: Accuracy and Reliability of Large Language Models-Bing, Claude, ChatGPT, and Perplexity.
Indian J Radiol Imaging 2024-09-25
An Observational Study to Evaluate Readability and Reliability of AI-Generated Brochures for Emergency Medical Conditions.
Cureus 2024-10-01
Empowering patients: how accurate and readable are large language models in renal cancer education.
Front Oncol 2024-10-12
Use of generative large language models for patient education on common surgical conditions: a comparative analysis between ChatGPT and Google Gemini.
Updates Surg 2025-01-15
Artificial intelligence in healthcare education: evaluating the accuracy of ChatGPT, Copilot, and Google Gemini in cardiovascular pharmacology.
Front Med (Lausanne) 2025-03-06
This study analyzed the performance of three generative AI tools (ChatGPT-4, Copilot, and Google Gemini) on cardiovascular pharmacology questions. The evaluation used 45 multiple-choice questions and 30 short-answer questions, with experts grading the accuracy of the AI-generated answers. All three tools performed well on easy and moderately difficult multiple-choice questions but poorly on high-difficulty items, with Gemini showing the weakest results. ChatGPT-4 performed best across all question types, followed by Copilot, while Gemini required improvement. These findings highlight both the potential and the challenges of AI in medical education.
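A minimal sketch of how per-model, per-difficulty accuracy tallies of the kind reported above could be computed from expert-graded answers; the data, field names, and code below are illustrative assumptions, not the study's actual materials or methods.

```python
from collections import defaultdict

# Hypothetical graded answers: (model, difficulty, correct) triples.
# These values are placeholders, not data from the study.
graded_answers = [
    ("ChatGPT-4", "easy", True),
    ("ChatGPT-4", "hard", True),
    ("Copilot", "easy", True),
    ("Copilot", "hard", False),
    ("Gemini", "easy", True),
    ("Gemini", "hard", False),
]

# Tally correct and total counts per (model, difficulty) pair.
totals = defaultdict(lambda: [0, 0])  # (model, difficulty) -> [n_correct, n_total]
for model, difficulty, correct in graded_answers:
    totals[(model, difficulty)][1] += 1
    if correct:
        totals[(model, difficulty)][0] += 1

# Report accuracy for each model at each difficulty level.
for (model, difficulty), (n_correct, n_total) in sorted(totals.items()):
    print(f"{model:10s} {difficulty:6s} accuracy = {n_correct / n_total:.0%}")
```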