Companies place ‘bias bounties’ on AI algorithms
At least five large companies will introduce “bias bounties” or hacker competitions to identify bias in artificial intelligence (AI) algorithms, predicts the just-released “North American Predictions 2022” from Forrester.
Bias bounties are modeled on bug bounties, which reward hackers or coders (often, outside the organizations) who detect problems in security software. In late July, Twitter launched the first major bias bounty and awarded $3,500 to a student who proved that its image cropping algorithm favors lighter, slimmer and younger faces.
“Finding bias in machine learning (ML) models is difficult, and sometimes, companies find out about unintended ethical harms once they’ve already reached the public,” wrote Rumman Chowdhury, director of Twitter META, in a blog entry. “We want to change that.”
Coders have been unearthing biases in AI-driven algorithms on social media since a programmer in 2015 called out a search feature of the Google Photos app that mistakenly tagged photos of Black people as gorillas.
Twitter in May admitted its automatic cropping algorithm repeatedly cropped out Black faces in favor of white ones and favored men over women.
AI biases can influence which advertisements or products an individual is shown online or the recommendations they receive on Netflix, but they can also lead to prejudicial outcomes in job hiring, loan applications, health care decisions and criminal justice.
Machine learning algorithms can pick up the covert or overt biases of their human developers. Biases also often stem from training on historical data that is itself biased.
Companies using AI claim to be taking steps to use more representative training data and to regularly audit their systems to check for unintended bias and disparate impact against certain groups.
Forrester predicted that in 2022 other major tech companies, such as Google and Microsoft, will implement bias bounties, as will non-technology companies, such as banks and healthcare providers.
Wrote Forrester in its predictions report, “AI professionals should consider using bias bounties as a canary in the coal mine for when incomplete data or existing inequity may lead to discriminatory outcomes from AI systems. With trust high on the agenda of stakeholders, organizations will have to drive decision-making based on levers of trust such as accountability and integrity, making bias elimination ever more critical.”
- Forrester’s North America Predictions 2022: The 30% Of Companies That Insist On A Fully In-Office Model Will Find That Their Employees Simply Won’t Accept It (press release) – Forrester
- Forrester North American Predictions 2022 (study) – Forrester
- Introducing Twitter’s first algorithmic bias bounty challenge – Twitter
- How to Make Artificial Intelligence Less Biased – The Wall Street Journal
- Why algorithms can be racist and sexist – Vox
- Google Mistakenly Tags Black People as ‘Gorillas,’ Showing Limits of Algorithms – The Wall Street Journal
- What Do We Do About the Biases in AI? – Harvard Business Review
- Google Built the Pixel 6 Camera to Better Portray People With Darker Skin Tones. Does It? – The Wall Street Journal
- To stop algorithmic bias, we first have to define it – Brookings Institution
- Facebook Dataset Addresses Algorithmic Bias – The Wall Street Journal
BrainTrust
Jenn McMillen
Chief Accelerant at Incendio & Forbes Contributing Writer
Liz Crawford
VP Planning, TPN Retail
Jeff Weidauer
President, SSR Retail LLC
Discussion Questions
What are the pros and cons of using bounties to root out bias in artificial intelligence (AI) algorithms? Do you see any other newer actions that hold greater promise for reducing AI bias?
Finally – a ray of hope for digital interfaces!
We have seen protections for AI in the Robots’ Bill of Rights – rethought as recently as 2019. And of course, we have all sorts of protections for companies. But now there may be long-needed initiatives to protect consumers from the invisible hand of Artificial Intelligence.
I could see these kinds of cases ultimately hitting the courts, with a seller or platform arguing that their algorithm is legal versus a representative of consumers claiming harm.
On the surface this strikes me as a very innovative and speedy way to uncover potential issues with biases that are affecting specific populations. Having outsiders participate in finding these issues has to be more efficient than trying to do it with internal resources, who, I suspect, bring their own biases about their companies' software to the problem.
Absolutely. How many times do you proof your own work and still miss some of the typos or word choices? They need new eyes and many eyes to take on the complexity of this challenge.
Not many realize that the AI we see is based on decisions made by humans: the training set they choose, the assumptions they make, and the conclusions they draw from their hypotheses. Numerous human decisions are made before AI-driven models make their way into production and are used by everyday users.
As such, there is a high chance of bias creeping in. It is especially concerning where people are directly impacted: recruitment, shaping perspectives through recommended news and articles, and so on.
Much like paying ethical hackers to find bugs, paying a bounty to detect bias is a form of self-regulation and should be welcomed.
AI is designed to learn and emulate human behavior, so it stands to reason that the more human hands involved in shaping the algorithms the better. Bias is especially hard to suss out because it’s embedded in the hundreds of subconscious words and thoughts we’ve acquired over multiple generations. It’s going to take conscious awareness to root out these subtleties, and bounties will help build better digital intelligence for the good of all.
With the “it takes a village” mentality, crowdsourcing to make an algorithm better is a good idea since AI is not infallible.
Opening up algorithms and their inherent biases (because they are human creations) won’t solve all the problems, but it’s a great step in the right direction.
I only see pros to using bias bounties; I just wish we could get to a place where they aren't needed. Until then, this is an important measure to ensure that technology doesn't perpetuate discrimination. Like autonomous vehicles, these efforts are a prime example of how technology can make the world safer by working against inherent human flaws.
This is a great way to police potentially biased insights being delivered to enterprises and consumers. Moreover, as companies expand their use of AI-based algorithms across more of their business operations, I could see an entire cottage industry not only emerge, but also mature into money-making enterprises in the future.