Evaluating the Reliability and Quality of Sarcoidosis-Related Information Provided by AI Chatbots.
Healthcare (Basel) 2025-06-13
This study found that AI chatbots using retrieval-augmented techniques (such as ChatGPT-4o Deep Research) provided more accurate and reliable information about sarcoidosis than general-purpose AI. However, their responses were often written at too advanced a reading level for patients to understand easily, and the practical recommendations they offered were not specific enough. Overall, while AI can deliver high-quality information, there is still room for improvement in readability and practical usefulness.
Related articles on this site
Assessing the readability, quality and reliability of responses produced by ChatGPT, Gemini, and Perplexity regarding most frequently asked keywords about low back pain.
PeerJ 2025-01-27
Evaluating the Quality and Readability of Information Provided by Generative Artificial Intelligence Chatbots on Clavicle Fracture Treatment Options.
Cureus 2025-02-10
A Future of Self-Directed Patient Internet Research: Large Language Model-Based Tools Versus Standard Search Engines.
Ann Biomed Eng 2025-03-02
Assessing the Quality and Reliability of ChatGPT's Responses to Radiotherapy-Related Patient Queries: Comparative Study With GPT-3.5 and GPT-4.
JMIR Cancer 2025-04-16
Evaluation of AI-Based Chatbots in Liver Cancer Information Dissemination: A Comparative Analysis of GPT, DeepSeek, Copilot, and Gemini.
Oncology 2025-06-10
Evaluating the readability, quality, and reliability of responses generated by ChatGPT, Gemini, and Perplexity on the most commonly asked questions about Ankylosing spondylitis.
PLoS One 2025-06-18
The Reliability Gap: How Traditional Search Engines Outperform Artificial Intelligence (AI) Chatbots in Rosacea Public Health Information Quality.
Cureus 2025-07-23