How AI Can Go Terribly Wrong: 5 Biases That Create Failure

AI is a game-changing technology for companies, but it can go terribly wrong.

As AI-based systems become more critical to companies, we all need to understand the issue of bias in AI. Bias in AI can result in reputational damage, poor results, and outright errors. This article will help boards and senior executives ask the right questions about five dangerous biases in AI.

Artificial Intelligence Biases

1)   Human Bias

One reason bias exists in an AI-based system is that the data we feed AI systems is biased. That data is often biased because it comes from real-world business decisions made by humans.

In other words, humans are biased, but we have rarely looked carefully at the decision-making biases of our own employees. Now that we are examining what comes out of an AI system, we are horrified to see that the AI appears biased, when it was us, humans, all along.

For example, a bank may discover that its AI-based loan evaluator approves loans for minority applicants at a lower rate than for others. Compare the AI’s approval rate to the bank’s historical approval rate for minority applicants, and the two will very likely match, because the AI learned its behavior from the humans who made those past decisions.

This discovery of bias in AI can be a good thing. It surfaces bias that exists within a decision-making process and provides the company with an opportunity to course-correct. Everybody wins.
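In practice, the comparison described above is a simple calculation. Here is a minimal sketch of the kind of check the bank could run, assuming decision records carry an approved flag and a demographic group label (the field names are hypothetical):

```python
# Compare the AI's approval rate for a group against the historical
# human approval rate for the same group. Field names are hypothetical.
def approval_rate(decisions, group):
    """Share of applicants in `group` whose loans were approved."""
    in_group = [d for d in decisions if d["group"] == group]
    if not in_group:
        return 0.0
    return sum(d["approved"] for d in in_group) / len(in_group)

def compare_rates(human_decisions, ai_decisions, group):
    human = approval_rate(human_decisions, group)
    ai = approval_rate(ai_decisions, group)
    print(f"{group}: human {human:.1%} vs. AI {ai:.1%}")
    # If the two rates match, the model has most likely reproduced the
    # historical human bias rather than invented a new one.
```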

2)   Hidden Bias

One of the most insidious biases in AI is hidden bias—meaning unintentional bias that may never be seen or discovered.

Take the example of a highly qualified person who never made it through the screening process for a job. This candidate had what looked to be a perfect resume for her target company. However, the company’s AI-based HR system rejected her, and she never even made it to the first interview.

At one point, the candidate met representatives of the company at a job fair. When they reviewed her resume, they were excited about her background and invited her to interview with the company.

The candidate explained that she had been rejected several times before and wondered what was different this time. It took a while, but the company finally discovered that the candidate had a BA in Computer Science while the AI was searching only for people with a BS in Computer Science. As a result, the system determined—incorrectly—that she wasn’t qualified.
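A failure like this often comes down to a rule as small as an exact string match. The sketch below, with hypothetical resume text, shows how such a filter goes wrong and how normalizing equivalent credentials fixes it:

```python
# Brittle filter: an exact string match silently rejects equivalent degrees.
REQUIRED_DEGREE = "BS in Computer Science"

def qualifies_naive(resume_text):
    return REQUIRED_DEGREE in resume_text

# Safer: accept any credential from a reviewed list of equivalents.
EQUIVALENT_DEGREES = {
    "bs in computer science", "b.s. in computer science",
    "ba in computer science", "b.a. in computer science",
}

def qualifies_normalized(resume_text):
    text = resume_text.lower()
    return any(degree in text for degree in EQUIVALENT_DEGREES)

print(qualifies_naive("BA in Computer Science, 2015"))       # False: rejected
print(qualifies_normalized("BA in Computer Science, 2015"))  # True
```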

Had that candidate not highlighted the problem, the HR team never would have known their system was rejecting perfectly qualified candidates.

The “not knowing” is the scary part. For that company, and for that HR team, if this bias hadn’t been brought to their attention, they would have gone on their merry way, missing out on highly qualified candidates and not knowing why.

Companies need to periodically put humans in the loop of important decisions to uncover any potential hidden biases.
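One lightweight way to do that, sketched below on the assumption that the system logs every automated rejection, is to route a small random sample of those rejections to a human reviewer:

```python
import random

def sample_for_human_review(rejected_applications, rate=0.05, seed=42):
    """Send a random slice of automated rejections to a human reviewer.

    If reviewers keep finding qualified candidates in this sample, the
    automated screen has a hidden bias worth investigating.
    """
    rng = random.Random(seed)
    k = max(1, int(len(rejected_applications) * rate))
    return rng.sample(rejected_applications, k)
```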

3)   Data Sampling Bias

An AI system is only as good as the data it is trained on. When the data fed into the system carries a sampling bias, the AI becomes biased too.

In one example, an AI system being trained to understand natural language exhibited gender bias. The system was fed news articles that led it to conclude, “Man is to Doctor as Woman is to Nurse.” The data itself carried this bias, and the AI learned it.
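That kind of learned association is easy to reproduce with off-the-shelf word embeddings. The sketch below uses gensim’s pretrained GloVe vectors; the exact words returned depend on the corpus the vectors were trained on:

```python
import gensim.downloader as api

# Load pretrained GloVe word vectors (downloaded on first use).
model = api.load("glove-wiki-gigaword-100")

# Analogy query: doctor - man + woman = ?
# Embeddings trained on biased text tend to rank "nurse" near the top.
for word, score in model.most_similar(
        positive=["woman", "doctor"], negative=["man"], topn=5):
    print(f"{word}: {score:.3f}")
```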

In another example, Amazon stopped using a hiring algorithm after finding it favored applicants based on words like “executed” or “captured” that were more commonly found on men’s resumes.

Once again, the good news is that these biases can be teased out and eliminated once discovered. A human needs to be part of the process and look for biases.
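For the resume case, one audit a human can run, assuming access to historical resumes grouped by gender, is to look for words whose frequency differs sharply between the groups; any such word the model weights heavily is a potential proxy for gender:

```python
from collections import Counter

def skewed_words(resumes_a, resumes_b, min_count=20, eps=1e-9):
    """Rank words by how much more often group A uses them than group B.

    `resumes_a` and `resumes_b` are lists of resume strings. Words like
    "executed" or "captured" would surface here if one group used them
    far more often than the other.
    """
    count_a = Counter(w for r in resumes_a for w in r.lower().split())
    count_b = Counter(w for r in resumes_b for w in r.lower().split())
    total_a = sum(count_a.values()) or 1
    total_b = sum(count_b.values()) or 1
    skew = {}
    for word, a in count_a.items():
        b = count_b.get(word, 0)
        if a + b >= min_count:  # ignore rare words
            skew[word] = (a / total_a + eps) / (b / total_b + eps)
    return sorted(skew.items(), key=lambda kv: kv[1], reverse=True)
```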

4)   Long-Tail Bias

Long-tail bias happens when certain categories are missing from the training data. For example, suppose an AI doing facial recognition encounters a person with lots of freckles. The AI likely won’t know what to do with that image: the person may be categorized as black, white, or brown, or even as something other than human.

When an AI encounters something for the first time, it often gets it very wrong. In one example, an image-recognition system was shown a picture of a stop sign with stickers on it and labeled it a refrigerator.

This is one of the obstacles to implementing autonomous vehicles as well. AI trained the way we train it today doesn’t know what to do when it encounters something rare or unique it hasn’t seen before, like a paper bag blowing across the road.
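A first line of defense is simply to measure how thin the tail of the training data is. Here is a minimal sketch, assuming labeled training examples and a list of categories the system is expected to handle:

```python
from collections import Counter

def long_tail_report(labels, expected_classes, min_samples=100):
    """Flag categories the model cannot have learned well.

    `labels` is the list of class labels across the training set. Classes
    missing entirely, or represented by too few examples, are long-tail
    risks the model will likely handle badly in production.
    """
    counts = Counter(labels)
    missing = set(expected_classes) - set(counts)
    thin = {cls: n for cls, n in counts.items() if n < min_samples}
    return missing, thin
```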

5)   Intentional Bias

Intentional bias may be the most dangerous of all. Nefarious actors could seek to attack AI systems by intentionally introducing bias into them. Not only that, but those actors will do everything they can to hide the bias they have introduced.

Think of this as a new dimension of a cyberattack. Imagine you are training an AI system for your company to optimize your supply chain. This seems like a reasonably benevolent exercise. Now imagine that a state-sponsored competitor decides to target your company to damage your ability to do business.

A capable competitor can hack into your databases without your knowledge, lurk there for a long time to avoid being noticed, and study what you are trying to accomplish with your supply chain project.

Once they understand that you are using a specific database to train your AI to optimize your supply chain, they can modify the training data so the AI learns the wrong things. For example, they might slightly inflate the recorded costs of the suppliers that are actually your most cost-effective. The AI will learn that those vendors aren’t a good fit for your supply chain, and once the system is deployed, it may direct you to eliminate them, ultimately increasing your costs.
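One defense is to treat training data like any other critical asset: snapshot a trusted baseline and compare fresh extracts against it before every training run. Here is a minimal sketch, with hypothetical field names, that flags suppliers whose recorded costs have drifted suspiciously:

```python
def flag_cost_drift(baseline_costs, current_costs, tolerance=0.05):
    """Compare current supplier costs against a trusted baseline snapshot.

    `baseline_costs` and `current_costs` map supplier name -> average unit
    cost. Any drift beyond `tolerance` is flagged for human review before
    the data is allowed into a training run.
    """
    flagged = {}
    for supplier, base in baseline_costs.items():
        current = current_costs.get(supplier, base)
        drift = (current - base) / base
        if abs(drift) > tolerance:
            flagged[supplier] = drift
    return flagged
```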

The scary part of this scenario is that no one in your company will know that you are now paying more than you should be. Your profits will go down, and your competitors’ profits will increase. Not only will you not know why, but you won’t know how to fix the problem caused by this manipulation of training data.

Boards and senior executives need to ask what the company is doing to identify, remove, and prevent bias in its systems. If the company isn’t diligent in this effort, the very systems designed to make it more competitive can wind up doing damage in unforeseen ways. All of these biases can be adequately addressed; it’s up to the company’s leaders to ensure that they are.

Contact Glenn Gow the “AI Guy” Today!

Glenn Gow is “The AI Guy”. He is a former CEO and has been a board member of four companies. He is currently a board member and CEO coach. Follow him on LinkedIn or email him at glenn@glenngow.com.