What if artificial intelligence is biased?

Discussion
Oct 29, 2018

Artificial Intelligence, which promises to eliminate the inefficiencies that come from subjective human assessments and assumptions, is facing heightened concerns that the technology includes hidden biases.

Amazon.com, for instance, early last year stopped using a recruiting and hiring tool powered by AI because it was biased against women.

The tool observed patterns in resumes of successful hires over a 10-year period to rate the resumes of prospective hires, according to Reuters. Because many of the hires were men, it showed a preference for male candidates and favored resumes using more masculine terms. Resumes including the word “women’s” as in “women’s chess club captain” and candidates who attended all-women’s colleges were downgraded.

Reuters noted that even though the algorithms were edited to mitigate those biases, the tool was scrapped because Amazon could not be sure it wouldn't devise other ways to discriminate.

A number of other studies, including one showing a similar tendency in LinkedIn’s search engine, have likewise found biases for groups underrepresented in AI datasets. Google was criticized a few years ago after its image recognition algorithm identified African Americans as “gorillas.”

“AI algorithms are not inherently biased,” Venkatesh Saligrama, a professor at Boston University’s Department of Electrical and Computer Engineering who has studied word-embedding algorithms, told PC Magazine. “They have deterministic functionality and will pick up any tendencies that already exist in the data they train on.”
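Prof. Saligrama's point, that models simply absorb whatever tendencies exist in their training data, can be sketched with a toy resume scorer. Everything below is a hypothetical illustration, not Amazon's actual system: the term weights are learned from a made-up, male-skewed hiring history, and the word "women's" ends up penalized purely because it never appears in a successful past hire.

```python
# Illustrative sketch with invented data: a naive resume scorer that
# learns term weights from historical hiring outcomes. Because the
# historical hires skew male, the term "women's" acquires a negative
# weight -- the model "picks up" the tendency already in its data.
from collections import Counter

past_hires = [  # hypothetical training data, skewed toward male hires
    "captain chess club executed engineering projects",
    "executed competitive programming captain",
    "engineering internship executed deliverables",
    "women's chess club member engineering degree",
]
hired = [1, 1, 1, 0]  # 1 = hired; the lone "women's" resume was rejected

# Weight each term: +1 per hire containing it, -1 per rejection
weights = Counter()
for resume, label in zip(past_hires, hired):
    for term in set(resume.split()):
        weights[term] += 1 if label else -1

def score(resume):
    # Counter returns 0 for unseen terms, so unknown words are neutral
    return sum(weights[t] for t in set(resume.split()))

# Two otherwise-identical resumes; the one mentioning "women's" scores lower
print(score("captain women's chess club engineering"))
print(score("captain chess club engineering"))
```

No rule about gender was ever written here; the disparity emerges entirely from the skew in the labels, which is exactly the failure mode the Amazon example describes.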

Because deep-learning software perceives patterns in human decision making, AI algorithms can also pick up covert or overt biases from their human creators when the algorithms are written.

The feedback loop from a machine learning system, particularly as humans increasingly rely on its assessments, could generate yet more biased data for algorithms to analyze and train on in the future.
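One way to see the feedback loop's effect is a tiny simulation with entirely invented numbers: a "model" re-learns per-group hiring rates from accumulated data each round, and because ranking tends to over-favor the majority pattern (crudely modeled here by squaring the learned rates), an initial 60/40 skew compounds rather than washing out.

```python
# Hypothetical feedback-loop simulation (all numbers invented).
# Each round the "model" learns per-group hire rates from accumulated
# history, then allocates 100 new hires by confidence-weighted ranking.
# Squaring the learned rate is a crude stand-in for how such ranking
# over-favors the majority pattern.
history = {"group_a": 60, "group_b": 40}  # seed data: a mild 60/40 skew

for round_num in range(5):
    total = sum(history.values())
    scores = {g: (n / total) ** 2 for g, n in history.items()}
    norm = sum(scores.values())
    for g in history:
        history[g] += round(100 * scores[g] / norm)

share_a = history["group_a"] / sum(history.values())
print(history, share_a)  # the majority share has grown past its seed 0.60
```

Each round's biased output becomes the next round's training data, so the imbalance amplifies with no further human input.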

While programmers seek to reduce prejudices, there are growing calls for diversity in the technology sector where the algorithms originate, as well as for greater transparency and accountability.

AI bias is a significant concern in the health care and law enforcement sectors as well. It is less clear how any covert biases in algorithms would affect the personalized experiences AI promises to deliver for retailers.

DISCUSSION QUESTIONS: How big a concern are latent biases embedded in artificial intelligence? Are retailers using artificial intelligence equipped to uncover and fix these biases?

Braintrust
"The inscrutable nature of AI makes it difficult, or impossible, to understand prediction logic and to pinpoint flaws of bias; this is still very much a research problem. "
"Ethics are usually playing catchup with tech developments. But what is all that development for in the first place? "
"Next logical question, if AI “judgement” is based on historical patterns, where does innovation come from?"


27 Comments on "What if artificial intelligence is biased?"


Chris Petersen, PhD.
Guest

Beauty and bias typically lie in the eyes of the AI creator. Worse yet, a “neutral” algorithm will develop bias if trained on data that has particular trends and patterns. The only hope may be to develop AI designed to be “neutral” and to uncover biased trends in decisions. While AI can be amazing technology for automating tasks and operations, we may never be able to fully “trust” AI to make significant decisions about humans.

Mark Ryski
BrainTrust

Bias can be an unintended consequence of applying AI algorithms, as the examples in the article show, and so practitioners must remain vigilant in reviewing and monitoring the outcomes these systems generate. AI and machine learning are still in their infancy, and so too is retailers’ ability to uncover and fix biases. As with Amazon and others actively applying AI, this is still a work in progress.

Ken Lonyai
BrainTrust

The myth around AI is that a neutral “machine” independently develops its own intelligence. In reality, AI and machine learning, even instances that are to a strong degree “self-educating,” originate from human-derived algorithms. Humans are flawed, and often through their early intervention in creating/training artificially intelligent systems, can introduce bias. I listed it as a probable issue with Amazon Scout in my article “Amazon is Scouting the Wrong Approach to Improving Product Discovery.”

I doubt that many AI systems anywhere are evaluating for much more than obvious bias, because they are designed to enhance biased outcomes: that is, designed to create outcomes that support a business case or need, which is a purposeful interest.

Retailers do need to use AI cautiously, both for bias issues that can affect and offend customers and for biases that can hurt profitability via missed opportunities and wrong assumptions.

Bob Amster
BrainTrust

Almost every technology and technology-based decision has some unintended consequence. Since AI software does not develop itself but, rather, is developed by people, there is always the probability that AI will contain some human bias, whether intended or not. Of course, these unintended errors will be discovered — eventually, and corrected — hopefully.

Doug Garnett
BrainTrust

You might want to read “Weapons of Math Destruction” regarding the question of discovering the errors. In many situations, errors are virtually impossible to detect.

Neil Saunders
BrainTrust

As machines do not, as of yet, have volitional consciousness they are not biased by themselves. Bias comes from human programming or from machines observing human patterns of behavior which exhibit bias.

Sometimes this bias is useful. If I, as a consumer, have a bias towards buying certain products then that helps retailers target me more effectively. Sometimes, the bias is less helpful such as in recruiting or profiling people when there is no need.

The key thing is to ensure that algorithms are auditable, to review outcomes, and to ensure that these comply with ethical frameworks and policies. Given that most retailers haven’t really fully got to grips with AI itself, I think exploring the ethical dimension is far from advanced.

Nikki Baird
BrainTrust

Everyone should be highly concerned about latent biases. We don’t fully understand our own biases, and without that understanding, we can never assume that AI won’t “learn the wrong things.” This is why I think there will be an “AI 2.0” in our future – one that has been developed specifically to make its assumptions or lessons learned clear to an end user, who can then better monitor the things that AI learns. The reality is, we WANT AI to learn “ideal” behaviors, but we’ve released it into a world far less than ideal. And it will do with that data what AI does — draw conclusions and apply them going forward.

Laura Davis
BrainTrust
Founder, Branded Ground
4 years 4 months ago

Beautifully stated Nikki. As always.

Charles Dimov
Guest

When using AI, we all need to be aware of the potential of bias. Then watch for it in the results. This second step is particularly important to avoid a future problem. If there is a bias that is not appropriate then it needs to be fixed. The danger is NOT having this feedback loop. If missed, eventually it may come up as a public embarrassment or brand-eroding incident.

Right now I have not perceived any self-correcting or bias-detecting elements in AI software. It’s a call to the software vendors. It isn’t just a responsibility. It is a risk-reducing feature that customers (retailers) will want.

Ian Percy
BrainTrust

You’ve provoked my thinking, Charles. Even “bias-detecting” AI software will be biased and will perpetuate a never-ending spiral of “bias-ness.” We’ve learned this from software production generally. Software fixes account for almost half of all software faults. Will there ever be fault-free software? Or unbiased AI? I’m working on the former but am not quite so sure of the latter. Appreciated your contribution.

Zel Bianco
BrainTrust

Garbage in — garbage out. If the data the algorithm is being trained on is not somehow scrutinized or scrubbed, it will likely be based on bias and bad assumptions.

Shawn Harris
BrainTrust
Board Advisor, Light Line Delivery
4 years 4 months ago

The view of the world an AI has is not the world; it’s its training data. The inscrutable nature of AI makes it difficult, or impossible, to understand prediction logic and to pinpoint flaws of bias; this is still very much a research problem. However, this is where I believe that diversity of thought among applied data scientists, data engineers, and ethics validators will pay dividends.

Herb Sorensen
BrainTrust

Artificial Intelligence (AI) is not really “intelligence” at all, but rather simply information, hidden in vast skeins of data, about what IS, that is not readily NOTICED by our limited minds. However, it is doubtful that that data accounts for all the biases produced by programmers from demonstrated human monocultures, politically, socially, etc.

Some of the biases may be historically justifiable, but maybe not, too. It’s a mess. But THAT’s the human race, which is on a largely unrecognized positive trajectory. See: Steven Pinker, “Is the World Getting Better or Worse?”

Paula Rosenblum
BrainTrust

I agree with Neil’s comments. AI, at the moment, is another magic bullet that is perceived to allow us to alter retail. It’s not clear that this is actually widely in use.

It will be an interesting problem to solve once it has been deployed and is mature enough to truly support decision-making at an individual level, but meanwhile I’m happy to see it getting closer to an accurate weather forecast.

Ian Percy
BrainTrust

EVERYTHING not of nature, that is, built or manifested by humankind, came from imagination and thought. Those of us with a creationist bent believe even our natural world came to us that way. Algorithms themselves came into existence through the same magical influences that created the paintings of the great masters. Their “biases,” the way they see and experience the world, are what makes those works treasured for all time. So what is the difference between the creation of AI technology and these masterpieces? The answer lies in the “Why?” da Vinci, Rembrandt and others, I believe, were channels of what they produced. To get more into the science side, this was true of Fleming and Tesla too. They did their work because their destiny called them to do it. And that is what we’re rapidly losing today. Everything that will ever be possible already exists on a non-material plane. Our work, as creative instruments of the future, is to pick up on which possibilities have our name on them. When we find them,… Read more »
Ian Percy
BrainTrust

As usual, a further thought came to me after I submitted my comment. I referenced the work of the great masters. We heard that an AI-generated portrait just sold for $432,500. So should it be listed along with the Mona Lisa or The Man With The Golden Helmet? Or can you get another copy by hitting “enter?” What really is the difference? Understanding that may help us rethink retail.

Ryan Mathews
BrainTrust

Let me start by questioning whether or not anyone — other than zealots — believes AI “promises to eliminate the inefficiencies that come from subjective human assessments and assumptions.” AI recognizes patterns in data. That’s like thought, but it isn’t totally objective thought. So if there are biases in the data, they will be mirrored in the analysis. These are not independent systems, and they are only as objective as the developers who set them up and the data fields they are analyzing. On any given application they aren’t inherently any more or less biased than a human being, just faster and less prone to computational error.

Joanna Rutter
Guest
4 years 4 months ago

Ethics are usually playing catchup with tech developments. But what is all that development for in the first place? (Paraphrasing a quote from The Wind Singer: “If everything we do is in order to get somewhere else, when’s the end of it all?”) What’s coming to mind: Walmart was developing facial recognition tech to catch shoplifters and detect dissatisfaction last year. I wonder what an AI would do with millions of shoppers’ unhappy/happy faces? What would it find and recommend based on its instructions to detect dissatisfaction (and what a “dissatisfied” face looks like from the POV of the race, gender, and cultural background of the developers)? How would that impact the demographics they market certain products to, or even what a “good worker’s” face should look like? No easy answers here other than rooting for purposeful AI developed intentionally within as diverse and crowded a room as possible.

Doug Garnett
BrainTrust

Retailer hiring algorithms may be where the most egregious AI errors lie. The excellent book “Weapons of Math Destruction” by Cathy O’Neil discusses hiring algorithms for their secrecy and the complete inability to improve the algorithms by testing.

For proof, we have the recent choice by Amazon to stop using their AI hiring algorithm because it showed bias against women.

But I don’t think the problem is fixable. These biases are hidden and nearly impossible to test. AI can ONLY be led to succeed when the algorithms are trainable by evaluating results. But in hiring software there’s no way to train with “person X was hired elsewhere and performed outstanding work.”

Glenn Cantor
Guest
4 years 4 months ago

It seems to me that AI cannot be completely unbiased because the inherent objective to building a database is to make determinations. If the output is biased, then the information used to create that database must also have some kind of bias.

Ken Morris
BrainTrust
Managing Partner, Cambridge Retail Advisors
4 years 4 months ago

AI is not biased; it is the data that is biased, and the humans who created the data. If the human decisions that shape data have been biased, then the output from AI will perpetuate that bias. Garbage in, garbage out. That old phrase is as appropriate today as it ever was. Systemic bias needs to be scrubbed out of the data so neural networks will apply the proper framework of algorithms to make AI unbiased.

Retailers need to be aware of the potential mistakes that AI can make based on historical data and biases and monitor the results of AI to help catch faulty logic. Humans are not perfect and neither is AI.

Ralph Jacobson
Guest

The fact remains that machines learn from the data they ingest. The challenge to “feed” unbiased data will lie squarely upon those humans who augment their intelligence with these machines. Data scientists need to be able to balance the inputs to ensure desired unbiased outputs.

Peter Charness
BrainTrust

As already noted, the bias is likely in the data, which influences the algorithms. Next logical question: if AI “judgement” is based on historical patterns, where does innovation come from? In merchandising, the challenge has always been the self-fulfilling prophecy. Don’t retire the creative types in your organization yet …

Ananda Chakravarty
BrainTrust

What a great set of thoughts already. Big concern — yes. Biases, if uncovered, can topple brand value rapidly, especially as it dehumanizes engagement with customers. We are still at the early stages of AI and the problem that occurs is that our bias corrections are not objective enough to be bias free, so we just introduce another set of biases into the training data or apply fixed business rules on top of results, diminishing the value of AI tools to uncover things that humans wouldn’t normally find easily. AI is quite promising, but we’re not at a point where the concerns are well founded nor can we deliver results that can be used readily in retail — except in select cases such as logistics or demand planning.

Harley Feldman
BrainTrust

Like any new science, AI is going through learning lessons, including latent biases. Researchers are working on many techniques to eliminate the bias. By studying the results of AI, biases will typically be found if they exist. Retailers are probably not performing much analysis to determine if biases exist, as any AI bias will have little impact on people. The biases may cause the retailer to have the wrong assortment or products in the store at the wrong time, nothing that will affect the shopper personally.

Shep Hyken
BrainTrust

There should be a big concern about AI being biased or making bad decisions based on biased programming or human behaviors that cause bias. It has been my position for several years that AI is not ready for advanced decision making. It can help, but it can’t be used (yet) for big issues and decisions. It’s perfect for lower-level support and functions. In the HR world, it can be used to create a streamlined application process, help with repetitive tasks and more basic functions. To use it as the sole decision maker for hiring is too soon, if ever at all.

Michael La Kier
BrainTrust

AI is still relatively new and still being developed as a solution. Inherent biases are hard to unearth in people, which means AI solutions are bound to have bias (even if vetted). The key is to find and understand how the biases impact the end result.


Take Our Instant Poll

How big a concern for retailers are latent biases that may be hiding in artificial intelligence-driven algorithms?
