S<sup>2</sup>AF: An action framework to self-check the Understanding Self-Consistency of Large Language Models.
Neural Netw 2025-03-18
Can large language models reason and plan?
Ann N Y Acad Sci 2024-03-06
Framework-based qualitative analysis of free responses of Large Language Models: Algorithmic fidelity.
PLoS One 2024-03-15
Symbol ungrounding: what the successes (and failures) of large language models reveal about human cognition.
Philos Trans R Soc Lond B Biol Sci 2024-08-19
A functional contextual, observer-centric, quantum mechanical, and neuro-symbolic approach to solving the alignment problem of artificial general intelligence: safe AI through intersecting computational psychological neuroscience and LLM architecture for emergent theory of mind.
Front Comput Neurosci 2024-08-23
Large Language Models, scientific knowledge and factuality: A framework to streamline human expert evaluation.
J Biomed Inform 2024-09-14
A Comprehensive Analysis of a Social Intelligence Dataset and Response Tendencies Between Large Language Models (LLMs) and Humans.
Sensors (Basel) 2025-01-25