Scale matters: Large language models with billions (rather than millions) of parameters better match neural representations of natural language.
bioRxiv 2024-07-15
Symbol ungrounding: what the successes (and failures) of large language models reveal about human cognition.
Philos Trans R Soc Lond B Biol Sci 2024-08-19
A functional contextual, observer-centric, quantum mechanical, and neuro-symbolic approach to solving the alignment problem of artificial general intelligence: safe AI through intersecting computational psychological neuroscience and LLM architecture for emergent theory of mind.
Front Comput Neurosci 2024-08-23
Redefining Cognitive Domains in the Era of ChatGPT: A Comprehensive Analysis of Artificial Intelligence's Influence and Future Implications.
Med Res Arch 2024-10-29
Artificial intelligence speaks up. <b>These Strange New Minds: How AI Learned to Talk and What It Means</b> <i>Christopher Summerfield</i> Viking, 2025. 384 pp.
Science 2025-03-11
AI in Neurology: Everything, Everywhere, All at Once PART 2: Speech, Sentience, Scruples, and Service.
Ann Neurol 2025-05-27