Shadows of wisdom: Classifying meta-cognitive and morally grounded narrative content via large language models.
Behav Res Methods 2024-05-29
Scale matters: Large language models with billions (rather than millions) of parameters better match neural representations of natural language.
bioRxiv 2024-07-15
Family lexicon: Using language models to encode memories of personally familiar and famous people and places in the brain.
PLoS One 2024-11-22
From statistics to deep learning: Using large language models in psychiatric research.
Int J Methods Psychiatr Res 2025-01-08