Digital Bounty Hunters Want To Help Businesses Track Down Hidden AI Biases That Can Prevent People Getting Jobs And Loans

Could an AI program be preventing you from landing a dream job? New York’s city council wants to make sure that’s not the case for people looking for work in the Big Apple. The council recently passed a bill that would require providers of automated employment decision tools used by recruiters in the city to have their underlying AI algorithms audited each year and to share the results with companies using their services.

If the measure is passed into law, it will be one of the first significant legal moves in the U.S. that attempts to ensure AI-driven software tools don’t have biases embedded in them that discriminate against people on racial, ethnic or other grounds. If more measures along these lines are passed, they could spark a boom in demand for digital bounty hunters who use computers to track down their quarry.

Many companies now offer cybersecurity “bug bounties,” which can sometimes reach hundreds of thousands of dollars or more, to people who help them find previously undetected security flaws in their software. The business has grown to the point where it has spawned startups such as Bugcrowd and HackerOne that help CIOs and other executives launch bounty programs and recruit ethical hackers to work on them.

Now the platforms say they’re seeing growing interest in programs that reward ethical hackers and researchers for flagging unforeseen algorithmic biases. As well as leading to prejudices in hiring, such biases can affect everything from loan applications to policing strategies. They can be deliberately or inadvertently introduced by developers themselves or by the choice of data sets algorithms are trained on.
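Audits of the kind the New York bill envisions often begin with a simple statistical check: comparing selection rates across demographic groups. A minimal sketch of such a check is below; the group names and data are illustrative assumptions, and the 0.8 threshold refers to the U.S. EEOC’s “four-fifths” rule of thumb, not to any requirement in the bill itself.

```python
# Minimal sketch of a disparate-impact check on hiring outcomes.
# Data and group names are illustrative; real audits use far richer methods.

def selection_rates(outcomes):
    """outcomes maps group name -> list of 1 (selected) / 0 (rejected)."""
    return {g: sum(v) / len(v) for g, v in outcomes.items()}

def disparate_impact(outcomes, reference_group):
    """Ratio of each group's selection rate to the reference group's.

    Under the EEOC "four-fifths" rule of thumb, a ratio below 0.8 is
    commonly treated as evidence of adverse impact worth investigating.
    """
    rates = selection_rates(outcomes)
    ref = rates[reference_group]
    return {g: r / ref for g, r in rates.items()}

outcomes = {
    "group_a": [1, 1, 0, 1, 1, 0, 1, 1],  # 6 of 8 selected -> rate 0.75
    "group_b": [1, 0, 0, 0, 1, 0, 0, 1],  # 3 of 8 selected -> rate 0.375
}
ratios = disparate_impact(outcomes, reference_group="group_a")
print(ratios["group_b"])  # 0.375 / 0.75 = 0.5 -> flags potential bias
```

A ratio this far below 0.8 would not prove discrimination on its own, but it is the kind of signal that tells an auditor, or a bounty hunter, where to dig deeper.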

Almost all the AI bounty initiatives to date have been kept private, with small groups of hackers invited to work on them so companies can get a feel for what’s possible. “There’s a lot of tire-kicking going on,” says Casey Ellis, founder and chairman of Bugcrowd.

Twitter bounties

One business that has gone a step further and run a public experiment is Twitter. In July, the social media giant launched an algorithmic bias bounty program that paid rewards of up to $3,500 for an analysis of a photo-cropping algorithm. The company had already acknowledged the algorithm was repeatedly cropping out Black faces in favor of white ones and favoring men over women. Those findings came after a public outcry over potential bias led Twitter to launch a review of the algorithm.

Some critics saw the subsequent decision to launch the AI bias bounty program publicly as a PR move, but Twitter’s engineers argued that getting a diverse group of people to scrutinize its algorithms would be more likely to help it surface biases. The company ended up awarding $7,000 in bounties, with the top prize going to a person who showed that Twitter’s AI model tended to favor stereotypical beauty traits, such as a preference for slimmer, younger, feminine and lighter-skinned faces.

The company stressed that one reason the exercise had been valuable was because it pulled in a broad geographic spread of contributors. Alex Rice, the CTO of HackerOne, which helped Twitter run its program, believes bounties can help other businesses identify issues with algorithms by subjecting AI models to this kind of broader scrutiny. “The idea is to put as much diversity as we can on the problem in the most real-world environment we can create,” he says.

Although Twitter hasn’t committed to run another program yet, tech research firm Forrester predicts that at least five major companies, including banks and healthcare businesses, will launch their own AI bias bounty offerings next year. Brandon Purcell, one of the firm’s analysts, thinks that within a few years, the number of programs will start growing exponentially and says CIOs will likely be key promoters of them, along with human resources directors.

Wanted: AI bounty hunters

To meet future demand, the world’s going to need many more AI sleuths. Cybersecurity experts are in short supply, but there are even fewer people with a deep understanding of how AI models work. Some security-focused hackers are likely to hunt for AI biases too, assuming the bounties are big enough, but experts say there are key differences that make bias-hunting more challenging.

One of them is that algorithms evolve constantly over time as they feed on more data. Cybersecurity systems morph, too, but generally at a slower pace. AI bias hunters also need to be more willing to look at how algorithms interact with broader systems within a business, whereas many cyber challenges are more circumscribed.

Some ethical hackers who’ve also hunted for security bugs say those challenges are what makes AI bias hunting so intellectually stimulating. “It’s more of a creative process and less of a logical process that involves going through trying to break something using a lot of predefined methods,” says Megan Howell, one of the bounty hunters who took part in the Twitter challenge.

People with deep industry expertise in areas such as credit assessment and health screening but who don’t yet have AI skills could help to close the talent gap. Bugcrowd’s Ellis points out that some of the most accomplished security bug hunters in the automotive field are car enthusiasts who got so interested in the safety issues facing the industry that they taught themselves to use coding tools.


While bounty programs could be useful in identifying bias, CIOs say that they should never be treated as a first line of defense. Instead, the goal should be to use tools and processes to build algorithms in ways that enable companies to explain clearly the results they produce. For instance, training algorithms using supervised learning, which involves feeding them prelabeled data sets, rather than unsupervised learning, which leaves algorithms to work out the structure of data by themselves, can help reduce the risk that biases will creep in.
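The supervised/unsupervised distinction drawn above can be sketched in a few lines: a supervised model learns from examples a human has already labeled, so each prediction can be traced to labeled data, while an unsupervised one invents its own groupings. This is a toy illustration with made-up numbers, not a description of any production system.

```python
# Toy contrast between supervised and unsupervised learning.
# Single feature (a score between 0 and 1); task: "approve" vs "reject".

# Supervised: prelabeled examples make learned behavior easier to audit.
labeled = [(0.2, "reject"), (0.3, "reject"), (0.7, "approve"), (0.9, "approve")]

def supervised_predict(x):
    # Nearest-neighbor on the labeled data: every prediction is
    # attributable to a specific human-labeled example.
    return min(labeled, key=lambda ex: abs(ex[0] - x))[1]

# Unsupervised: the algorithm groups unlabeled data on its own, so the
# meaning of each cluster (and any bias baked into it) is harder to audit.
unlabeled = [0.2, 0.3, 0.7, 0.9]

def unsupervised_cluster(points, iters=10):
    """Tiny 1-D k-means with k=2: returns the two cluster centroids."""
    c0, c1 = min(points), max(points)  # initial centroids
    for _ in range(iters):
        near0 = [p for p in points if abs(p - c0) <= abs(p - c1)]
        near1 = [p for p in points if abs(p - c0) > abs(p - c1)]
        c0, c1 = sum(near0) / len(near0), sum(near1) / len(near1)
    return c0, c1

print(supervised_predict(0.8))          # "approve"
print(unsupervised_cluster(unlabeled))  # two centroids, no labels attached
```

The supervised model’s output always points back to a labeled example a human can review; the clusters carry no labels at all, which is one reason the article’s sources favor supervised models for customer-facing decisions.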

Tech leaders in sectors such as banking are paying especially close attention to how their algorithms are built and perform. “Our industry being regulated . . . naturally lends itself to being more stringent with AI models,” says Sathish Muthukrishnan, the chief information, data and digital officer of $16.8 billion market cap Ally Financial. “We start with developing supervised models for customer-facing use cases.”

HackerOne’s Rice agrees that plenty can and should be done to eliminate biases in AI models during their development. Still, he thinks bounty programs are something CIOs and other executives should still be considering as a complement to their upfront efforts. “You want to find [biases] through automation, scanning, developer training, vulnerability management tools,” says Rice. “The problem is that all of these are insufficient.”


I am the editor of the CIO Network at Forbes, leading coverage of the rapidly evolving role of senior technology leaders. I also develop topics and programming for Forbes CIO events.


