Reference Hallucination Score for Medical Artificial Intelligence Chatbots: Development and Usability Study.
JMIR Med Inform 2024-07-31
Exploring the Boundaries of Reality: Investigating the Phenomenon of Artificial Intelligence Hallucination in Scientific Writing Through ChatGPT References.
Cureus 2023-05-16
ChatGPT and artificial hallucinations in stem cell research: assessing the accuracy of generated references - a preliminary study.
Ann Med Surg (Lond) 2023-10-18
Performance of AI chatbots on controversial topics in oral medicine, pathology, and radiology.
Oral Surg Oral Med Oral Pathol Oral Radiol 2024-03-29
How artificial intelligence can provide information about subdural hematoma: Assessment of readability, reliability, and quality of ChatGPT, BARD, and perplexity responses.
Medicine (Baltimore) 2024-05-03
Hallucination Rates and Reference Accuracy of ChatGPT and Bard for Systematic Reviews: Comparative Analysis.
J Med Internet Res 2024-05-22
AI chatbots show promise but limitations on UK medical exam questions: a comparative performance study.
Sci Rep 2024-08-14
Assessment of readability, reliability, and quality of ChatGPT®, BARD®, Gemini®, Copilot®, Perplexity® responses on palliative care.
Medicine (Baltimore) 2024-08-16
Human versus Artificial Intelligence: ChatGPT-4 Outperforming Bing, Bard, ChatGPT-3.5 and Humans in Clinical Chemistry Multiple-Choice Questions.
Adv Med Educ Pract 2024-09-25