Implicit biases refer to automatic mental processes that shape perceptions, judgments, and behaviors. Previous research on "implicit bias'' in LLMs focused primarily on outputs rather than the processes underlying the outputs. We present the Reasoning Model Implicit Association Test (RM-IAT) to study implicit bias-like processing in reasoning models, which are LLMs that use step-by-step reasoning for complex tasks. Using RM-IAT, we find that reasoning models like o3-mini, DeepSeek-R1, gpt-oss-20b, and Qwen-3 8B consistently expend more reasoning tokens on association-incompatible tasks than association-compatible tasks, suggesting greater computational effort when processing counter-stereotypical information. In contrast, Claude 3.7 Sonnet exhibited reversed or inconsistent patterns, likely due to embedded safety mechanisms that flagged or rejected socially sensitive associations. These divergent behaviors highlight important differences in how alignment and safety processes shape model reasoning. As reasoning models become increasingly integrated into real-world decision-making, understanding their implicit bias-like patterns and how alignment methods influence them is crucial for ensuring fair and trustworthy AI systems.
翻译:暂无翻译