Cognitive science has identified hundreds of human biases. Most of them arise from heuristics—mental shortcuts that simplify complex decisions but can lead to flawed conclusions, misinterpretations, or illogical judgments. LLMs don’t “think” like people, but they reproduce some of these patterns because they’re trained on human text and then further tuned by humans. Fine-tuning, safety layers, and preference models all put a thumb on the scale in ways that can privilege some answers over others. 

“…experiments demonstrate that all LLMs exhibit susceptibility to cognitive biases, with susceptibility rates ranging from 17.8% to 57.3% on average across models and biases…” (Knipper et al., 2025).

You can't eliminate bias, but you can probe for it. The challenge is that prompts carry semantic weight: what seems neutral to you may still activate strong patterns in the model's training data. And like humans falling back on familiar examples, LLMs overrepresent common, well-documented, Western, and high-frequency contexts and struggle more with unfamiliar or underrepresented ones.

The list below presents common bias patterns with practical checks to detect them. Some of these interact; for example, confirmation can reinforce authority, and recency can amplify anchoring. Mix and match checks when you audit a response. Also note that generative AI models evolve quickly, so which biases are most pronounced will vary by model and version.

Recency Bias 

We pay more attention to the last thing we heard and forget what came earlier, even when it was just as important.

LLM manifestation – In chain-of-thought or long prompts, some models give more weight to the instructions or info that appear last, so earlier constraints may get dropped or softened.

Check:

  • Tell me which parts of my prompt you gave the most weight to and why. 
  • Re-send the same prompt with the key instruction moved to the top and compare the results (a scripted version of this check follows the list).
  • Before you answer, restate every instruction I gave you in order, and mark each one as ‘handled’ or ‘not yet handled.’
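
The reorder-and-compare check can also be scripted. Below is a minimal sketch, assuming the OpenAI Python SDK and a placeholder model name; swap in whichever client and model you are actually auditing, and note that the instruction and notes are made-up examples. It sends the same content with the key instruction last and then first, and reports whether a word-count constraint survived in each ordering.

    from openai import OpenAI

    client = OpenAI()

    key_instruction = "Keep the summary under 100 words."   # the constraint we want to protect
    body = "Summarize these meeting notes for executives: <paste notes here>"

    def ask(prompt: str) -> str:
        # One single-turn request; the model name is a placeholder, not a recommendation.
        resp = client.chat.completions.create(
            model="gpt-4o-mini",
            messages=[{"role": "user", "content": prompt}],
        )
        return resp.choices[0].message.content

    # Variant A: key instruction last (recency should help it survive).
    # Variant B: key instruction first (most at risk of being dropped or softened).
    answer_last = ask(f"{body}\n\n{key_instruction}")
    answer_first = ask(f"{key_instruction}\n\n{body}")

    print("Instruction last: ", len(answer_last.split()), "words")
    print("Instruction first:", len(answer_first.split()), "words")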

Authority Bias 

We trust something more just because it seems to come from an expert, not because the evidence is better.

LLM manifestation – If your prompt invokes an authority (“according to HBR…”), the model often treats that frame as settled and builds around it, sometimes even supplying what the authority would say.

Check: 

  • Rewrite this answer without citing authorities—rank claims by strength of evidence.
  • How would this differ if an equally reputable source disagreed?
  • Highlight any lines that depended on source prestige rather than facts.

Confirmation Bias

We look for, notice, and repeat things that agree with what we already think, and we overlook or downplay things that don’t.

LLM manifestation – If your prompt already takes a side, e.g., “Explain why X is bad…,” the model will usually cooperate with that stance and give you mostly supporting reasons. That’s because it’s optimized to be helpful and aligned with your intent, rather than to argue with you.

Check:

  • Identify the stance you inferred from my prompt.
  • Now make a steelman argument for the opposite view with the same level of detail.
  • Tell me which pieces of evidence you left out in the first version.

Availability Bias

We think something is common or likely just because examples pop into our head easily—not because we’ve actually counted how often something happens.

LLM manifestation – Models are often most fluent on popular, well-documented, Western/online topics and will present those as the main answer; niche or local details may be thin or invented.

Check:

  • Give 3 standard examples and 3 underrepresented/non-Western examples.
  • What’s missing from typical web discussions of this topic?

Framing Bias

We make different choices depending on how something is worded; for example, "gains" vs. "losses," "advantages" vs. "risks," even when the facts are basically the same.

LLM manifestation – The model takes your framing as intent: ask for benefits and it’s upbeat; ask for risks and it’s cautious.

Check:

  • Ask the opposite: Now do risks/costs instead of benefits. Then compare the two answers (see the sketch after this list).
  • Point out which parts changed just because of my wording.
  • Rewrite in neutral, balanced language.
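
A quick way to see how much of an answer is driven by wording alone is to ask both framings and diff them. The sketch below is illustrative only: it assumes the OpenAI Python SDK, a placeholder model name, and an example topic, and uses Python's difflib to show what changed between the “benefits” and “risks” framings.

    import difflib
    from openai import OpenAI

    client = OpenAI()

    def ask(prompt: str) -> str:
        resp = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder; use whatever model you are auditing
            messages=[{"role": "user", "content": prompt}],
        )
        return resp.choices[0].message.content

    benefits = ask("What are the benefits of a four-day workweek?")
    risks = ask("What are the risks of a four-day workweek?")

    # Diff the two framings; large non-overlapping blocks suggest the model is
    # answering the wording rather than the underlying question.
    for line in difflib.unified_diff(benefits.splitlines(), risks.splitlines(),
                                     fromfile="benefits framing", tofile="risks framing",
                                     lineterm=""):
        print(line)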

Anchoring Bias

The first number or idea we see sticks in our mind and pulls our later thinking toward it, even when it shouldn’t.

LLM manifestation – The model locks onto initial parameters, scales, or examples you provide and rarely challenges them, even when they're arbitrary or suboptimal. First examples become templates that constrain subsequent variations, and initial framing sets boundaries the model treats as fixed rather than questioning them.

Check:

  • List every constraint, example, or parameter you used—which came from me versus your own judgment?

Representativeness Bias

We decide “this looks like one of those” and treat it that way, even if the real group is more mixed or the odds don’t support it.

LLM manifestation – When you ask for “a typical X,” models often give you the stereotype or prototype that appears most often in their training data, not the full range. Difference awareness is difficult for LLMs.

Check:

  • Identify the population you assumed I was talking about (country, sector, scale). List 4–5 plausible variants. 
  • Which parts of your original answer were based on the most common or most visible variant?

LLMs are not irrational; they are statistical pattern machines. If your prompt supplies a biased pattern, they will usually follow it. The goal is not to eliminate bias but to surface and counter it with targeted checks. The checks above can help turn LLM interactions into an opportunity to interrogate not just the model's outputs, but the assumptions baked into your own questions. Further biases are discussed in the references below, including the Conjunction Fallacy, the Sunk Cost Fallacy, the Halo Effect, and the loss-aversion effects described by Prospect Theory.

Best practices for interacting with LLMs also include:

  • Provide more details when prompting – situational context can help reduce bias
  • Avoid ambiguous language
  • Avoid leading or biased language
  • Use an appropriately sized model for the task: larger models are generally less susceptible to bias
  • Systematically change the order of items in prompts and compare results (a sketch follows this list) – or learn more about prompt debiasing (Learn Prompting, 2024)
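
One way to run the order check systematically is to permute the items and see whether the model's top pick is stable. This is a minimal sketch, assuming the OpenAI Python SDK, a placeholder model name, and made-up options; the label extraction at the end is deliberately crude and would need hardening for real use.

    from collections import Counter
    from itertools import permutations
    from openai import OpenAI

    client = OpenAI()

    options = [
        "Option A: hire a contractor",
        "Option B: build it in-house",
        "Option C: buy an off-the-shelf tool",
    ]

    def top_pick(ordered_options) -> str:
        # Ask the same question with the options in a given order and
        # return just the option label the model leads with.
        prompt = (
            "Which of these is the best first step, and why? "
            "Start your answer with the option label.\n" + "\n".join(ordered_options)
        )
        resp = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder model name
            messages=[{"role": "user", "content": prompt}],
        )
        return resp.choices[0].message.content.split(":")[0].strip()  # crude label parsing

    # Six calls for three options; a stable preference should dominate the tally.
    picks = Counter(top_pick(order) for order in permutations(options))
    print(picks)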

 

References

Knipper, R. A., Knipper, C. S., Zhang, K., Sims, V., Bowers, C., & Karmaker, S. (2025). The Bias is in the Details: An Assessment of Cognitive Bias in LLMs. https://arxiv.org/pdf/2509.22856 

Learn Prompting. (2024, August 7). Prompt debiasing. https://learnprompting.org/docs/reliability/debiasing

Roy-Stang, Z., & Davies, J. (2025). Human biases and remedies in AI safety and alignment contexts. AI and Ethics, 1-23.

Saeedi, P., Goodarzi, M., & Canbaz, M. A. (2025, May). Heuristics and biases in AI decision-making: Implications for responsible AGI. In 2025 6th International Conference on Artificial Intelligence, Robotics and Control (AIRC) (pp. 214-221). IEEE. https://arxiv.org/pdf/2410.02820