Measuring gender and racial biases in large language models: Intersectional evidence from automated resume evaluation.
PNAS Nexus 2025-03-27
Potential to perpetuate social biases in health care by Chinese large language models: a model evaluation study.
Int J Equity Health 2025-07-15
What social stratifications in bias blind spot can tell us about implicit social bias in both LLMs and humans.
Sci Rep 2025-08-19