Researchers are exploring whether DEI principles can be built directly into AI hiring tools. In one recent study, HR professionals assisted by an inclusion-focused AI chose equally qualified candidates with disabilities 70.2% of the time, compared with 36.2% for those using a standard AI tool.
“As organizations increasingly turn to AI-based tools in talent acquisition, hopes of ‘objective hiring’ are often offset by concerns that AI can encode or even reinforce existing biases,” said Yang, via email. “Standard AI tools often act as ‘black boxes’ that mirror and amplify historical biases.”
Understanding how bias may get embedded into AI learning has led researchers to consider whether AI could actively be trained to recognize and mitigate stereotypes. Could an AI hiring tool be taught to reduce rather than replicate human biases?
That question has fueled the burgeoning field of “inclusive AI” development. The goal is to build DEI principles into the design of generative AI, so that AI tools guide human evaluators to focus on concrete, job-related competencies. By embedding “explicit fairness logic” into AI decision making, says Yang, “we can transform AI into an ethical tool that helps managers override the mental shortcuts that lead to discriminatory hiring and potential legal risks.”
Yang believes that inclusive AI works best with a “human-in-the-loop configuration,” which “supports rather than replaces human judgment.” The idea is not simply to fix bias within existing algorithms that act as passive decision makers. The idea is to design an interactive AI tool that actively helps human decision makers confront their own biases.
“Inclusion-focused AI is a ‘fairness infrastructure’ for the modern workplace,” says Yang. “It doesn’t just filter data; it acts as a cognitive partner that guides recruiters to stay anchored in merit and job-relevant qualifications, especially in complex situations where human subconscious bias is most likely to take over.”
Yang’s 2026 study tested whether an inclusion-focused AI tool would reduce hiring bias compared to a standard AI hiring tool in one specific context: job candidates with disabilities. Hiring bias against individuals with visible or known disabilities is well documented. Applicants with disabilities are often perceived as less capable than equally qualified nondisabled candidates, resulting in significant disparities in interview, callback and hiring rates.
Yang’s study involved 238 human resources professionals from diverse industries, including healthcare, engineering and IT. The HR professionals were asked to make hiring decisions for a job that required logical reasoning ability. Each HR professional made multiple hiring decisions between paired candidate descriptions with varied qualifications and physical disability status.
All of the HR professionals were offered an interactive AI tool, built on OpenAI's GPT-4 model, to assist with their hiring decisions. But the HR professionals were randomly split into two different AI conditions.
In the Standard AI condition, ChatGPT was given the candidate profiles and instructed to: “Help the employer decide which candidate is better suited for tasks requiring strong logical reasoning skills.” ChatGPT was also instructed to focus “exclusively on assessing the potential logical reasoning capabilities of both candidates.”
In the Inclusion-Focused AI condition, ChatGPT was given the same candidate profiles and instructions as in the standard condition. However, ChatGPT was additionally instructed to consider commitment to diversity and inclusion as a decision factor. This included direction to: (1) ensure that hiring decisions are grounded in merit and relevant qualifications, free from bias, and inclusive of all candidates; (2) recognize that physical disabilities do not affect a candidate’s logical reasoning abilities; and (3) emphasize fair assessments that appreciate the unique perspectives and strengths brought by individuals from diverse backgrounds.
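The two conditions differ only in the system-level instructions given to the model. The study's verbatim prompts are not public, so the sketch below paraphrases the instructions as reported above; the function and variable names are illustrative, not from the study.

```python
# Sketch of the two prompt conditions described above. The strings paraphrase
# the reported instructions; they are not the study's verbatim prompts.

BASE_PROMPT = (
    "Help the employer decide which candidate is better suited for tasks "
    "requiring strong logical reasoning skills. Focus exclusively on assessing "
    "the potential logical reasoning capabilities of both candidates."
)

INCLUSION_ADDENDUM = (
    " Additionally: (1) ensure that hiring decisions are grounded in merit and "
    "relevant qualifications, free from bias, and inclusive of all candidates; "
    "(2) recognize that physical disabilities do not affect a candidate's "
    "logical reasoning abilities; and (3) emphasize fair assessments that "
    "appreciate the unique perspectives and strengths brought by individuals "
    "from diverse backgrounds."
)

def build_system_prompt(condition: str) -> str:
    """Return the system prompt for a given experimental condition."""
    if condition == "standard":
        return BASE_PROMPT
    if condition == "inclusion":
        # The inclusion condition is the standard prompt plus fairness logic.
        return BASE_PROMPT + INCLUSION_ADDENDUM
    raise ValueError(f"unknown condition: {condition}")
```

Framing the inclusion condition as an addendum to the standard prompt mirrors the study's design: everything else about the tool is held constant, so any difference in outcomes can be attributed to the added fairness instructions.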
In both conditions, the AI tools were designed as a “human-in-the-loop” interactive assistant, with the HR professional making the final hiring decisions. As each HR professional was assessing the candidate profiles, the AI tool was available to provide feedback.
In both conditions, the AI assistants initially provided the HR professionals with factual summaries of the candidates. In the Inclusion-Focused AI condition, however, the AI tool would also highlight the benefits of diverse insights and experiences that a candidate with a disability might bring to the workplace.
In addition to providing candidate summaries, the AI assistants also served as a source of dynamic dialogue by responding to questions or impressions from the HR professionals as they were making their decisions. The key difference between the Standard AI tool and the Inclusion-Focused AI tool was in how they responded during these interactions.
“In the standard condition, the AI remains a neutral information provider, sticking strictly to the data provided in the profiles,” said Yang. In the Inclusion-Focused condition, in contrast, the AI tool offers “a fair evaluative framework.” While both AI versions focus on technical criteria for the job, the Inclusion-Focused AI tool actively confronts bias by identifying and guiding decision makers away from stereotypes and toward job-relevant competencies.
“For instance, if a participant expresses doubt about a candidate’s productivity due to their disability, the AI provides context on workplace accommodations,” explained Yang. Or the AI tool might remind decision makers that physical mobility is irrelevant to cognitive tasks. “This iterative process ensures that, for the inclusion group, the focus on fairness remains a persistent guide throughout the entire decision-making process, rather than a fleeting suggestion at the start of the task,” said Yang.
Did the two different AI designs produce different hiring decisions? The study found that the Inclusion-Focused AI significantly reduced disability bias relative to the Standard AI in complex hiring decisions.
The use of Inclusion-Focused AI nearly doubled the likelihood of hiring candidates with disabilities compared to the Standard AI hiring tool. The HR professionals who used the Inclusion-Focused AI hired disabled candidates 70.2% of the time, while the HR professionals who used the Standard AI hired disabled candidates only 36.2% of the time.
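The "nearly doubled" claim follows directly from the two reported rates:

```python
# Hire rates for candidates with disabilities, as reported in the study.
inclusion_rate = 0.702  # Inclusion-Focused AI condition
standard_rate = 0.362   # Standard AI condition

# Ratio of the two rates: 0.702 / 0.362 ≈ 1.94, i.e. nearly double.
ratio = round(inclusion_rate / standard_rate, 2)
print(ratio)  # → 1.94
```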
This finding reveals the potential for properly designed AI tools to assist human evaluators in mitigating hiring bias. “Inclusion prompts encouraged competency-based evaluations and reduced stereotype reliance,” concluded Yang. The study also shows that it is not enough for AI to merely follow technical, efficiency-driven logic. To achieve fair results, AI must explicitly integrate diversity, fairness and inclusion principles.