Do retailers have a recommendation bias problem?


Cleber Ikeda is Investigative Analytics and Intelligence Director at Walmart. Any views and opinions expressed herein are personal and may not represent those of Walmart. The content of this article neither contains nor references confidential or proprietary information of Walmart.
Recommendation algorithms help connect customers to products they need or want to buy. They also increase the visibility of promotions and, in some cases, make shopping more fun. But they have also, in some cases, crossed serious ethical lines that have damaged the trust consumers place in retailers.
There have been unfortunate instances in which recommendation algorithms produced misleading profiling outputs and generated discrimination (e.g., in job ads). The root cause usually traces back to input data tainted with prejudice, extremism, harassment or discrimination. Combined with a careless approach to privacy and aggressive advertising practices, such data can become the raw material for a terrible customer experience. Irresponsible use of data can even generate severe, undesirable outcomes, like threats to human rights.
To address the risks of discrimination and unfairness, retailers must assess whether their algorithms discriminate against specific groups, subgroups or minorities. They need to know whether profiling techniques are denying some customer segments full visibility of comparable products and whether unsound algorithm design is preventing less affluent customers from accessing good deals. A simple exposure audit, sketched below, is one place to start.
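As an illustration only: the sketch below compares how often a recommender surfaces discounted deals across customer segments. The data, column names and the 80 percent threshold are hypothetical assumptions, not a prescribed standard.

```python
import pandas as pd

# Hypothetical recommendation log: one row per recommendation served.
# Segment labels and the deal flag are illustrative assumptions.
recs = pd.DataFrame({
    "customer_segment": ["A", "A", "B", "B", "B", "C", "C"],
    "is_deal":          [True, False, True, True, False, False, False],
})

# Share of served recommendations that were discounted deals, per segment.
deal_exposure = recs.groupby("customer_segment")["is_deal"].mean()

# Flag segments whose deal exposure falls well below the overall rate.
# The 80 percent threshold loosely echoes the "four-fifths rule" and is
# an assumption, not a legal or statistical standard.
overall_rate = recs["is_deal"].mean()
flagged = deal_exposure[deal_exposure < 0.8 * overall_rate]

print(deal_exposure)
print("Segments needing review:", list(flagged.index))
```

A real audit would run over production logs and properly defined segments, but even a check this simple can surface skews worth investigating.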
Training machine learning developers on how prejudice, discrimination and biases can impact algorithmic design is a powerful tool. Without proper training and clear communication from leadership, developers might consciously or unconsciously build values into their algorithms that are not aligned with their company's ethical standards.
Another concern is privacy. Many data points used to profile customers and predict shopping decisions are personally identifiable information or protected attributes. Marketers must observe domestic and international privacy regulations, but it's also just good business to understand customers' expectations when it comes to privacy. Violations of trust are business killers.
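One common, though imperfect, safeguard is to keep protected attributes out of the model's feature set entirely and retain them only for auditing. A minimal sketch, with hypothetical column names (note that removing protected attributes alone does not prevent proxy discrimination):

```python
import pandas as pd

# Hypothetical customer table; all column names are illustrative.
PROTECTED = ["gender", "age", "zip_code"]  # attributes kept out of modeling

customers = pd.DataFrame({
    "customer_id": [101, 102, 103],
    "gender": ["F", "M", "F"],
    "age": [34, 61, 27],
    "zip_code": ["10001", "60614", "94103"],
    "avg_basket": [52.10, 18.75, 96.30],
    "visits_per_month": [4, 1, 7],
})

# Features the recommender is allowed to train on.
features = customers.drop(columns=PROTECTED)

# Protected attributes are retained separately, solely for bias audits,
# and should sit behind stricter access controls. This alone does not
# stop proxy discrimination: other features correlated with, say, zip
# code can still reintroduce the signal.
audit_only = customers[["customer_id"] + PROTECTED]
```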
Retailers also need to exercise caution when it comes to retargeting ads online. There is a line to be drawn between helpful product reminders and what comes across as intrusive.
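One pragmatic way to draw that line is a frequency cap with a cooling-off period. A minimal sketch; the thresholds here are arbitrary assumptions for illustration, not recommendations:

```python
from datetime import datetime, timedelta

# Illustrative retargeting policy; both thresholds are assumptions.
MAX_IMPRESSIONS = 3            # stop after this many reminders
COOL_OFF = timedelta(days=14)  # minimum quiet period between reminders

def should_retarget(impressions: int, dismissed: bool,
                    last_shown: datetime) -> bool:
    """Return True only when another reminder is likely to be helpful."""
    if dismissed or impressions >= MAX_IMPRESSIONS:
        return False
    return datetime.now() - last_shown >= COOL_OFF

# Example: shown twice, never dismissed, last shown three weeks ago.
print(should_retarget(2, False, datetime.now() - timedelta(days=21)))  # True
```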
State-of-the-art artificial intelligence is not yet able to “fix” real-world data. Nevertheless, that is no excuse for recommendation algorithm owners to be negligent about the data they feed their models.
More diversity on data science teams would help, given that the marginalized, vulnerable groups that suffer inequities the most in the digital world are not well represented there. Companies can also look outside, offering “bias bounties” in which hackers compete to identify inherent bias in the code.
- Facebook’s ad algorithms are still excluding women from seeing jobs – MIT Technology Review
- Empathic media and advertising: Industry, policy, legal and citizen perspectives (the case for intimacy) – Sage Journals
- YouTube ran ads from hundreds of brands on extremist channels – CNN
- Companies place ‘bias bounties’ on AI algorithms – RetailWire
DISCUSSION QUESTIONS: What is the role of C-suite executives in the development of ethical recommendation algorithms in retail? How should retailers address the potential for discrimination, intrusion of privacy and even threats to human rights in AI-powered interactions with shoppers?
22 Comments on "Do retailers have a recommendation bias problem?"
Principal and Founder, Retail Strategy Group
The role of C-suite executives is clear. They need to have skin in the game and enable diverse and inclusive teams to keep algorithms in check.
Representation from the leadership team to the shop floor keeps retailers learning from the ever-evolving customer, empowering them to build authentic relationships with their shoppers. Deeper engagement with customers will keep leaders learning about who their customer is and will help to address and eliminate bias and discrimination in data inputs.
Investigative Analytics and Intelligence Director, Walmart
Thanks for your comments, Liza.
Principal, Retail Technology Group
There is a difference between being “negligent” and not knowing there is a bug in the algorithm logic. I trust that most retailers using AI make “best efforts” to ensure that content is not racially biased and that contact is not offensive or intrusive. Yet the possibility of a huge problem always looms over software. It’s the nature of the beast. In our litigious society, this could lead to lawsuits by ambulance chasers and overzealous citizens. Are the users of AI always exposed? If a horrible problem was found and fixed in a day, would retailers be liable for having crossed the line? I should think not.
President, Global Collaborations, Inc.
Members of the C-suite do not want to be surprised when someone outside the company discovers an embarrassing issue with a company algorithm. C-suite executives may not have the expertise to fully understand or question the algorithms being used within their company, or perhaps only one person in the C-suite has that knowledge. In either case, that is not sufficient for making decisions that affect the company. It is also not reasonable to expect the person creating the algorithm to also step back and consider all the possible consequences. However, none of this excuses people in the C-suite from understanding the consequences. It may be necessary to have a team of employees and/or an outside expert think about, test, and understand the consequences of the algorithms in use and keep members of the C-suite informed. Ignorance is not a sufficient defense.
Investigative Analytics and Intelligence Director, Walmart
Thanks for your comment, Camille!
Professor, International Business, Guizhou University of Finance & Economics and University of Sanya, China.
This is a three-fold issue. The first part is that the people responsible for implementing the algorithms in their companies do not understand them. The second is that the algorithms are written by people, and people, no matter how hard they try not to, let biases creep into their basic assumptions. The third is that the company can’t and doesn’t react until a serious issue surfaces.
How should retailers address it? Management should be keenly aware that it WILL happen. Fix it immediately. But the real challenge is that the fix may bring other unanticipated issues.
Investigative Analytics and Intelligence Director, Walmart
Thanks for your comment, Gene. Wondering what those “unanticipated issues” could be. Would you like to share a bit more with us?
Independent Board Member, Investor and Startup Advisor
If C-suite executives and, even more importantly, their boards are hyper-focused on growth and profitability regardless of the means, then the AI bias problem is guaranteed to exist in their algorithms. Ensuring diversity in the AI team is a start. Reviewing key assumptions and a variety of training data is another step. But paramount is putting in place directives that guide the building of algorithms and actively eliminate embedded bias.
Investigative Analytics and Intelligence Director, Walmart
Thank you, Mohamed. Agree 100% with you.
Board Advisor, Light Line Delivery
It is incumbent upon C-suite executives to set the expectation for how these AI-powered solutions should interact with their shoppers; however, a key output of this expectation setting is for staff to develop policies and audit practices for enforcement. The root of the issue is the training data used to develop models. Remember, AI models are only as “smart” as the data you train them on. The AI dev team needs to consider the potential unintended consequences of a given model, then work to mitigate those potential issues. That takes a diverse staff with varying points of view, as there is a deeply human aspect to ideating potential model efficacy challenges. However, these solutions still need to be thoroughly tested to uncover missed opportunities for improvement. None of this is easy, but it’s required.
Investigative Analytics and Intelligence Director, Walmart
Thank you, Shawn. Indeed, not an easy task, and the whole matter involving algorithmic bias is still relatively new for us. Lots of opportunities there!
Founder, CEO, Black Monk Consulting
As an academically trained philosopher, I find this a very complicated set of questions, since the “right” answer depends on one’s much larger ethical view of the world. Let’s start with a pragmatic look. How many C-suite executives know enough about AI/ML, coding, or algorithm architecture to have an informed position on ethical algorithm development? Hint: not too many. So the best that most C-suite denizens can do is set a general ethical position and pray the CIO/CTO can find a way to effectively embed it in their programming. As for what retailers can do today: AI systems can only do what they are told, so the first step in preventing discrimination is to screen for bias, bigotry, prejudice, and even ethical positions among programmers, software architects, and coders. That is very dicey ground, requiring input from legal, HR, and, if I can be so bold, somebody trained in ethics and meta-ethical analysis.
Investigative Analytics and Intelligence Director, Walmart
Thanks for your comments, Ryan. I particularly liked your suggestion of adding other departments to support the analysis of this type of problem. Depending on the magnitude and extent of the algorithmic bias, their perspective and input can be very valuable.
Chief Data Officer, CaringBridge
The introduction of artificial intelligence into customer interactions has tremendous benefits. At the same time, it is subject to the law of unintended consequences. There are always unintended consequences. In many cases the designers of algorithms do not have ill intentions; rather, the design and the data can lead artificial intelligence to unwittingly reinforce bias and prejudice.
The C-suite is not responsible for monitoring the algorithms, but it is clearly responsible for ensuring that company values are transmitted and maintained throughout the organization. In many areas, executives receive reports that monitor compliance with values and HR standards; recommendation engines and artificial intelligence need to be monitored and tracked just like any other area.
Investigative Analytics and Intelligence Director, Walmart
Thanks for your comments, Mark. Totally agree with your perspective that the C-suite has an important sponsorship role on algorithmic bias assessments.
Vice President, Research at IDC
The C-level executive still bears responsibility for this problem, even if it is created by machines. Execs need to put in place a system of checks and balances to ensure that discrimination is minimal and inherent biases are weeded out. It is not an easy task: a team should be assigned responsibility for validating and testing the recommendation system. This would include data cleaning, metadata management, and initial training of the AI tools. As with most solutions, there are different levels of engagement, and the C-suite will need to treat this just as they would preventative security measures in IT. The fallout could be significant. CIOs and CSOs (security or data privacy) should own this task as part of their responsibility.
Investigative Analytics and Intelligence Director, Walmart
Thanks for your comments, Ananda.
President, Protonik
This is an excellent article and an important topic. I’d go one caution further, though. Any use of metrics in this process has a fundamental problem: for something to be a metric, it must reject important real-world truth. While humans might be able to keep this in mind, algorithms can’t. Programming any metrics into algorithms risks straying too far from the real world, hence the increase in the application of prejudice we sometimes see in machine learning or AI output.
Investigative Analytics and Intelligence Director, Walmart
Thank you, Doug!
Partner, Candezent & Retail Cities Consultant
There seem to be two parts to this: the ethics of job listings and customer targeting. In addressing the latter, I think this goes back to the early days of CRM. The objectives have mostly been about catering to shoppers with more personalized and relevant content/offers. There are many examples of getting “too personal” (maternity offers when it’s a bit too soon) and of just plain getting it wrong (a birthday promotional gift delivered for someone who passed away).
Now with AI, we layer on the ethical issues with data, micro segmentation, ad serving, etc. There may be analogies with the media to learn from. Is the news outlet I access from a tablet serving too narrow a newsfeed? Is a retailer missing the opportunity to expand my exposure to their categories and product range based on purchase history?
Investigative Analytics and Intelligence Director, Walmart
Thanks for your comments, Gwen!
Investigative Analytics and Intelligence Director, Walmart
My apologies to all the experts who commented on my article. After a couple of weeks on PTO, only now have I had the opportunity to read them and reply. Thank you so much for your comments and great input.