Exploring the relationship between features calculated from contextual embeddings and EEG band power during sentence reading in Chinese.
Front Neurosci 2025-08-14
Emerging trends in multi-modal artificial intelligence for clinical decision support: A narrative review.
Health Informatics J 2025-08-14
Multimodal Sensing-Enabled Large Language Models for Automated Emotional Regulation: A Review of Current Technologies, Opportunities, and Challenges.
Sensors (Basel) 2025-08-14
Which AI Sees Like Us? Investigating the Cognitive Plausibility of Language and Vision Models via Eye-Tracking in Human-Robot Interaction.
Sensors (Basel) 2025-08-14
This study uses human eye-tracking data to benchmark the cognitive plausibility of AI models, finding that vision-language models (such as LLaVA) mimic human attention best when no memory is added. With short-term memory added, only some models (such as DeepSeek) improve, while others degrade. This indicates that memory has a nuanced effect on AI attention and offers a new way to evaluate the cognitive plausibility of AI models.