
Bias in artificial intelligence (AI) has been a prominent concern among researchers, ethicists and policymakers. While AI systems may appear neutral, they often encode, reflect and amplify the societal biases embedded in the data on which they are trained. These biases manifest across many sectors and appear in language models, facial recognition systems, hiring tools, policing algorithms and healthcare diagnostics.

The following offers a fresh look at the latest research, investigative reporting and policy developments on AI bias. It highlights how far we’ve come in understanding and addressing the issue, as well as the ongoing efforts to make AI fairer and more equitable.

Bias in language models and natural language processing

Large language models (LLMs) such as GPT and Llama have demonstrated persistent biases based on race, gender, religion and social class. A 2024 study analyzing 77 LLMs found that most exhibited ingroup favouritism and outgroup derogation, mirroring human social biases.

Further analyses revealed that most mitigation strategies have been applied primarily in English, leading to the spread of Western-centric stereotypes into other languages—a phenomenon described as “digital colonialism.” For example, AI systems have carried gendered tropes like the “dumb blonde” stereotype into languages where it did not previously exist.

Some models also reinforce bias by fabricating scholarly citations to justify prejudiced claims, a practice one researcher termed “hallucinated justifications.”

Bias in facial recognition systems

Facial recognition software has demonstrated significant disparities in performance across demographic groups. The U.S. National Institute of Standards and Technology (NIST) reported in 2019 that most commercial systems showed higher false-positive (wrong match) rates for Black, Asian and female faces. Researcher and author Joy Buolamwini recounted how facial recognition tools failed to detect her face until she wore a white mask, highlighting systemic racial bias in computer vision models.

When used by law enforcement, facial recognition has led to wrongful arrests, disproportionately affecting Black men. Amnesty International characterized such applications in the UK as a form of “automated racism.”

Bias in hiring and recruitment algorithms

AI-driven hiring tools have shown a tendency to replicate historical gender and racial biases. Amazon’s scrapped AI resume-screening tool, for example, was trained on a decade of resumes submitted mostly by men and learned to downgrade applications that included the word “women’s,” as in “women’s chess club captain.”
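
To make that mechanism concrete, here is a minimal sketch in Python with scikit-learn, using entirely invented resume data and labels (not Amazon’s system or data): when the historical hiring decisions are skewed against resumes containing a gendered token, a simple classifier picks up a negative weight for that token even though it says nothing about skill.

```python
# Illustrative sketch only: a toy model trained on invented, skewed "historical"
# hiring data. All resumes and labels below are hypothetical.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Synthetic resumes: in this invented history, resumes mentioning "women's"
# were hired less often, purely because past decisions were biased.
resumes = [
    "captain women's chess club, python developer",
    "women's soccer team, data analyst internship",
    "chess club captain, python developer",
    "soccer team, data analyst internship",
    "robotics club lead, software engineer intern",
    "women's robotics club lead, software engineer intern",
] * 50
hired = [0, 0, 1, 1, 1, 0] * 50  # biased historical labels (1 = hired)

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(resumes)
model = LogisticRegression(max_iter=1000).fit(X, hired)

# The model reproduces the historical pattern: the token "women" gets a
# negative coefficient even though it carries no information about ability.
weights = dict(zip(vectorizer.get_feature_names_out(), model.coef_[0]))
print("coefficient for 'women':", round(weights["women"], 3))
```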

Video-interview tools have exhibited linguistic and ableist biases, unfairly penalizing non-native speakers and people with disabilities due to vocal patterns or atypical facial cues that algorithms interpret negatively.

Bias in predictive policing and criminal justice

Critics have widely condemned predictive policing software for reinforcing racial profiling. A 2023 investigation found that PredPol software disproportionately focused on communities of colour, despite low prediction accuracy.

Amnesty International found that 32 of 45 UK police forces used predictive tools that reinforced racist enforcement patterns. In the U.S., wrongful arrests linked to flawed facial recognition tools have prompted lawsuits and public backlash.

Bias in healthcare algorithms

A 2019 study revealed that a widely used healthcare risk algorithm discriminated against Black patients, assigning them lower risk scores than equally sick white patients. The algorithm used healthcare spending as a proxy for need, disadvantaging Black patients who, due to reduced access to care, typically spend less even when their health needs are higher.
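
The proxy problem can be illustrated with a small, fully synthetic sketch (the groups, distributions and enrollment cutoff below are invented for illustration, not the study’s data): when a care program enrolls the highest predicted spenders, a group with equal need but less access to care is systematically under-enrolled.

```python
# Minimal sketch with synthetic numbers, illustrating ranking by *spending*
# rather than by *need*. Group sizes and dollar figures are invented.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Both groups have the same distribution of true health need.
need_a = rng.gamma(shape=2.0, scale=1.0, size=n)
need_b = rng.gamma(shape=2.0, scale=1.0, size=n)

# Group B faces access barriers, so equal need translates into less spending.
spend_a = need_a * 1000
spend_b = need_b * 1000 * 0.7

# "Risk score" = predicted spending; enroll the top 20% of scores in extra care.
scores = np.concatenate([spend_a, spend_b])
group = np.array(["A"] * n + ["B"] * n)
cutoff = np.quantile(scores, 0.80)
enrolled = scores >= cutoff

print("share of group A enrolled:", round(enrolled[group == "A"].mean(), 3))
print("share of group B enrolled:", round(enrolled[group == "B"].mean(), 3))
# Despite identical need, group B is enrolled far less often because the
# spending proxy encodes its reduced access to care.
```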

Other studies have shown that diagnostic tools underperform on darker skin tones and that clinical notes often contain racialized language, which can bias downstream AI systems.

Broader manifestations of AI bias

AI image generators frequently depict high-status professions as white and male, while portraying low-status roles or criminals as people of colour. Similarly, automated content moderation systems have demonstrated gender bias, flagging women’s images as explicit more often than men’s in equivalent contexts.

Mitigation efforts

Numerous organizations are working to address AI bias, including the Algorithmic Justice League, the Distributed AI Research Institute (DAIR) and Hugging Face.

Policy responses include the U.S. White House’s Blueprint for an AI Bill of Rights, the EU’s AI Act, and Canada’s AI and Data Act (AIDA). Health-specific regulations from the FDA and HHS now require bias disclosures and transparency in AI tools.

Tackling AI bias requires collective action

Bias in AI is not merely a technical flaw; it reflects structural inequalities. While awareness is growing and regulatory action is underway, mitigation remains uneven. Developers, auditors and affected communities must engage in inclusive development, conduct independent audits and participate in decision-making to ensure AI serves everyone equitably.