Preventing unrestricted and unmonitored AI experimentation in healthcare through transparency and accountability.
透過透明度和問責制防止在醫療保健中進行不受限制和不受監控的人工智慧實驗。
NPJ Digit Med 2025-01-18
Artificial intelligence speaks up. <b>These Strange New Minds: How AI Learned to Talk and What It Means</b> <i>Christopher Summerfield</i> Viking, 2025. 384 pp.
人工智慧發聲。<b>這些奇怪的新思維:AI 如何學會說話及其意義</b> <i>克里斯多福·薩默菲爾德</i> Viking, 2025年。384頁。
Science 2025-03-06
[Legal Risk Assessment and Prevention in Artificial Intelligence-Assisted Health Care].
人工智慧輔助醫療中的法律風險評估與預防。
Sichuan Da Xue Xue Bao Yi Xue Ban 2025-03-20
Generative AI and LLMs for Critical Infrastructure Protection: Evaluation Benchmarks, Agentic AI, Challenges, and Opportunities.
用於關鍵基礎設施保護的生成式 AI 與 LLMs:評估基準、Agentic AI、挑戰與機會。
Sensors (Basel) 2025-04-28
Helpful, harmless, honest? Sociotechnical limits of AI alignment and safety through Reinforcement Learning from Human Feedback.
有益、無害、誠實?透過人類回饋強化學習(Reinforcement Learning from Human Feedback, RLHF)實現 AI 對齊與安全性的社會技術極限。
Ethics Inf Technol 2025-06-09
This paper critiques existing approaches to aligning AI with human values (such as RLHF), arguing that pursuing only "helpful, harmless, honest" is not comprehensive enough and that these goals can even conflict with one another. The authors stress that AI safety cannot rest on technical measures alone; it must also account for societal dimensions such as ethics, institutions, and politics.