Evaluating the Appropriateness, Consistency, and Readability of ChatGPT in Critical Care Recommendations.
J Intensive Care Med 2024-08-09
Evaluating the Feasibility of ChatGPT in Healthcare: An Analysis of Multiple Clinical and Research Scenarios.
J Med Syst 2023-12-31
ChatGPT and large language model (LLM) chatbots: The current state of acceptability and a proposal for guidelines on utilization in academic medicine.
J Pediatr Urol 2023-10-02
ChatGPT's Response Consistency: A Study on Repeated Queries of Medical Examination Questions.
Eur J Investig Health Psychol Educ 2024-03-29
This study compared the performance of ChatGPT 3.5 and ChatGPT 4 in answering medical examination questions and found that ChatGPT 4 showed marked improvement in both accuracy (85.7% vs. 57.7%) and consistency (77.8% vs. 44.9%). This suggests ChatGPT 4 is more reliable for medical education and clinical decision-making. Human medical services nevertheless remain indispensable, and the use of AI should be continuously evaluated.
ChatGPT as a Tool for Medical Education and Clinical Decision-Making on the Wards: Case Study.
JMIR Form Res 2024-05-08
Assessing Generative Pretrained Transformers (GPT) in Clinical Decision-Making: Comparative Analysis of GPT-3.5 and GPT-4.
J Med Internet Res 2024-06-27
The potential and pitfalls of using a large language model such as ChatGPT, GPT-4, or LLaMA as a clinical assistant.
J Am Med Inform Assoc 2024-07-17
Comparison of the Usability and Reliability of Answers to Clinical Questions: AI-Generated ChatGPT versus a Human-Authored Resource.
South Med J 2024-08-02
Comparing ChatGPT and a Single Anesthesiologist's Responses to Common Patient Questions: An Exploratory Cross-Sectional Survey of a Panel of Anesthesiologists.
J Med Syst 2024-08-22