Measuring gender and racial biases in large language models: Intersectional evidence from automated resume evaluation.
PNAS Nexus 2025-03-27
Evaluation and Bias Analysis of Large Language Models in Generating Synthetic Electronic Health Records: Comparative Study.
J Med Internet Res 2025-05-12
What social stratifications in bias blind spot can tell us about implicit social bias in both LLMs and humans.
Sci Rep 2025-08-19