What social stratifications in bias blind spot can tell us about implicit social bias in both LLMs and humans.
Sci Rep 2025-08-19
Comparing diversity, negativity, and stereotypes in Chinese-language AI technologies: an investigation of Baidu, Ernie and Qwen.
PeerJ Comput Sci 2025-03-26
Measuring gender and racial biases in large language models: Intersectional evidence from automated resume evaluation.
PNAS Nexus 2025-03-27
Stereotypical bias amplification and reversal in an experimental model of human interaction with generative artificial intelligence.
R Soc Open Sci 2025-04-10
Potential to perpetuate social biases in health care by Chinese large language models: a model evaluation study.
Int J Equity Health 2025-07-15