Current safeguards, risk mitigation, and transparency measures of large language models against the generation of health disinformation: repeated cross sectional analysis.
BMJ 2024-03-28
Can Large Language Models Counter the Recent Decline in Literacy Levels? An Important Role for Cognitive Science.
Cogn Sci 2024-08-18
Artificial intelligence speaks up. <b>These Strange New Minds: How AI Learned to Talk and What It Means</b> <i>Christopher Summerfield</i> Viking, 2025. 384 pp.
Science 2025-03-06
Large language models can consistently generate high-quality content for election disinformation operations.
PLoS One 2025-03-17