Artificial Intelligence Will Help Determine If You Get Your Next Job

With parents using artificial intelligence to scan prospective babysitters’ social media and an endless slew of articles explaining how your résumé can “beat the bots,” you might be wondering whether a robot will be offering you your next job.

We’re not there yet, but recruiters are increasingly using AI to make the first round of cuts and to determine whether a job posting is even advertised to you. Often trained on data collected about previous or similar applicants, these tools can cut down on the effort recruiters need to expend in order to make a hire. Last year, 67 percent of hiring managers and recruiters surveyed by LinkedIn said AI was saving them time.

But critics argue that such systems can introduce bias, lack accountability and transparency, and aren’t guaranteed to be accurate. Take, for instance, the Utah-based company HireVue, which sells a job interview video platform that can use artificial intelligence to assess candidates and, it claims, predict their likelihood to succeed in a position. The company says it uses on-staff psychologists to help develop customized assessment algorithms that reflect the ideal traits for a particular role a client (usually a company) hopes to hire for, like a sales representative or computer engineer.

Output of a Google Vision artificial intelligence system performing facial recognition on a photograph of a man in San Ramon, California, on November 22, 2019. (Smith Collection/Gado/Getty Images)

That algorithm is then used to analyze how individual candidates answer preselected questions in a recorded video interview, grading their verbal responses and, in some cases, facial movements. HireVue claims the tool — which is used by about 100 clients, including Hilton and Unilever — is more predictive of job performance than human interviewers conducting the same structured interviews.

But last month, lawyers at the Electronic Privacy Information Center (EPIC), a privacy rights nonprofit, filed a complaint with the Federal Trade Commission, pushing the agency to investigate the company for potential bias, inaccuracy, and lack of transparency. It also accused HireVue of engaging in “deceptive trade practices” because the company claims it doesn’t use facial recognition. (EPIC argues HireVue’s facial analysis qualifies as facial recognition.)

The lawsuit follows the introduction of the Algorithmic Accountability Act in Congress earlier this year, which would grant the FTC authority to create regulations to check so-called “automated decision systems” for bias. Meanwhile, the Equal Employment Opportunity Commission (EEOC) — the federal agency that deals with employment discrimination — is reportedly now investigating at least two discrimination cases involving job decision algorithms, according to Bloomberg Law.

AI can pop up throughout the recruitment and hiring process

Recruiters can make use of artificial intelligence throughout the hiring process, from advertising and attracting potential applicants to predicting candidates’ job performance. “Just like with the rest of the world’s digital advertisement, AI is helping target who sees what job descriptions [and] who sees what job marketing,” explains Aaron Rieke, a managing director at Upturn, a DC-based nonprofit digital technology research group.

And it’s not just a few outlier companies, like HireVue, that use predictive AI. Vox’s own HR staff use LinkedIn Recruiter, a popular tool that uses artificial intelligence to rank candidates. Similarly, the jobs platform ZipRecruiter uses AI to match candidates with nearby jobs that are potentially good fits, based on the traits the applicants have shared with the platform — like their listed skills, experience, and location — and previous interactions between similar candidates and prospective employers. For instance, because I applied for a few San Francisco-based tutoring gigs on ZipRecruiter last year, I’ve continued to receive emails from the platform advertising similar jobs in the area.

Overall, the company says its AI has trained on more than 1.5 billion employer-candidate interactions.

Platforms like Arya — which says it’s been used by Home Depot and Dyson — go even further, using machine learning to find candidates based on data that might be available on a company’s internal database, public job boards, social platforms like Facebook and LinkedIn, and other profiles available on the open web, like those on professional membership sites.

Arya claims it’s even able to predict whether an employee is likely to leave their old job and take a new one, based on the data it collects about a candidate, such as their promotions, movement between previous roles and industries, and the predicted fit of a new position, as well as data about the role and industry more broadly.

Another use of AI is to screen application materials, like résumés and assessments, in order to recommend which candidates recruiters should contact first. Somen Mondal, the CEO and co-founder of one such screening and matching service, Ideal, says these systems do more than automatically search résumés for relevant keywords.

For instance, Ideal can learn to understand and compare experiences across candidates’ résumés and then rank the applicants by how closely they match an opening. “It’s almost like a recruiter Googling a company [listed on an application] and learning about it,” explains Mondal, who says his platform is used to screen 5 million candidates a month.
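To give a rough sense of what ranking applicants against an opening can look like under the hood, here is a minimal sketch in Python using simple bag-of-words cosine similarity. This is an illustration only — the function names and the matching approach are my own, and commercial systems like Ideal use far more sophisticated, proprietary models:

```python
from collections import Counter
import math

def cosine_similarity(a, b):
    """Cosine similarity between two bag-of-words Counters."""
    common = set(a) & set(b)
    dot = sum(a[w] * b[w] for w in common)
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

def rank_resumes(job_description, resumes):
    """Return résumé texts ordered by textual similarity to the job description."""
    job_bag = Counter(job_description.lower().split())
    return sorted(
        resumes,
        key=lambda r: cosine_similarity(job_bag, Counter(r.lower().split())),
        reverse=True,
    )
```

A résumé sharing more vocabulary with the posting ranks higher; real systems go further by learning what terms and employers are related, rather than requiring exact word overlap.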

But AI doesn’t just operate behind the scenes. If you’ve ever applied for a job and then found yourself in a text conversation, there’s a chance you were talking to a recruitment bot. Chatbots with natural-language understanding, built by companies like Mya, can help automate the process of reaching out to previous applicants about a new opening at a company, or of finding out whether an applicant meets a position’s basic requirements — like availability — thus eliminating the need for human phone-screening interviews. Mya, for instance, can reach out over text and email, as well as through messaging applications like Facebook and WhatsApp.

Another burgeoning use of artificial intelligence in job selection is talent and personality assessments. One company championing this application is Pymetrics, which sells neuroscience computer games for candidates to play (one such game involves hitting the spacebar whenever a red circle, but not a green circle, flashes on the screen).

These games are meant to predict candidates’ “cognitive and personality traits.” Pymetrics says on its website that the system studies “millions of data points” collected from the games to match applicants to jobs judged to be a good fit, based on Pymetrics’ predictive algorithms.
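The spacebar game described above is a classic “go/no-go” task from cognitive psychology. As a hedged illustration of how such a game could be scored (the function and metric names are mine; Pymetrics’ actual scoring is proprietary), one simple sketch tracks how often a player responds to the “go” stimulus versus mistakenly responding to the “no-go” stimulus:

```python
def score_go_no_go(trials):
    """trials: (stimulus, responded) pairs, stimulus 'red' (go) or 'green' (no-go).
    Returns (hit_rate, false_alarm_rate): sustained attention shows up as a
    high hit rate, impulse control as a low false-alarm rate."""
    reds = [responded for stim, responded in trials if stim == "red"]
    greens = [responded for stim, responded in trials if stim == "green"]
    hit_rate = sum(reds) / len(reds) if reds else 0.0
    false_alarm_rate = sum(greens) / len(greens) if greens else 0.0
    return hit_rate, false_alarm_rate
```

Aggregated over many trials (and many games), metrics like these become the “data points” that a matching algorithm compares against profiles of existing employees.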

Proponents say AI systems are faster and can consider information human recruiters can’t calculate quickly

These tools help HR departments move more quickly through large pools of applicants and ultimately make it cheaper to hire. Proponents say they can be more fair and more thorough than overworked human recruiters skimming through hundreds of résumés and cover letters.

“Companies just can’t get through the applications. And if they do, they’re spending — on average — three seconds,” Mondal says. “There’s a whole problem with efficiency.” He argues that using an AI system can ensure that every résumé, at the very least, is screened. After all, one job posting might attract thousands of applications, with a huge share from people who are completely unqualified for a role.

Such tools can automatically recognize traits in the application materials from previous successful hires and look for signs of that trait among materials submitted by new applicants. Mondal says systems like Ideal can consider between 16 and 25 factors (or elements) in each application, pointing out that, unlike humans, it can calculate something like commute distance in “milliseconds.”
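Commute distance is one of the easier factors to make concrete. A minimal sketch of how a screening system might compute it and fold it into an overall score — assuming, hypothetically, a weighted-sum scoring model, which is my simplification rather than Ideal’s disclosed method — looks like this:

```python
import math

def commute_km(lat1, lon1, lat2, lon2):
    """Great-circle (haversine) distance between two coordinates, in kilometres."""
    earth_radius_km = 6371.0
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlam / 2) ** 2
    return 2 * earth_radius_km * math.asin(math.sqrt(a))

def score_application(features, weights):
    """Weighted sum over whatever normalized factors the system tracks."""
    return sum(weights[name] * features.get(name, 0.0) for name in weights)
```

The point of the sketch is speed: a distance like this really is computable in well under a millisecond, which is what makes it feasible to evaluate a couple dozen factors across thousands of applications.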

“You can start to fine-tune the system with not just the people you’ve brought in to interview, or not just the people that you’ve hired, but who ended up doing well in the position. So it’s a complete loop,” Mondal explains. “As a human, it’s very difficult to look at all that data across the lifecycle of an applicant. And [with AI] this is being done in seconds.”

These systems typically operate on a scale greater than a human recruiter. For instance, HireVue claims the artificial intelligence used in its video platform evaluates “tens of thousands of factors.” Even if companies are using the same AI-based hiring tool, they’re likely using a system that’s optimized to their own hiring preferences. Plus, an algorithm is likely changing if it’s continuously being trained on new data.

Another service, Humantic, claims it can get a sense of candidates’ psychology based on their résumés, LinkedIn profiles, and other text-based data an applicant might volunteer to submit, by mining through and studying their use of language (the product is inspired by the field of psycholinguistics). The idea is to eliminate the need for additional personality assessments. “We try to recycle the information that’s already there,” explains Amarpreet Kalkat, the company’s co-founder. He says the service is used by more than 100 companies.
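Psycholinguistic analysis of this kind typically starts by counting how often a text uses words from curated categories. Here’s a toy sketch of that first step — the category names and word lists below are invented for illustration, not Humantic’s lexicon, and a real system would use a validated dictionary and a trained model on top:

```python
CATEGORIES = {  # toy word lists; a real system would use a validated lexicon
    "achievement": {"led", "launched", "won", "built", "delivered"},
    "affiliation": {"team", "we", "together", "partnered", "collaborated"},
}

def category_profile(text):
    """Fraction of words in the text that fall into each category."""
    words = text.lower().split()
    total = len(words) or 1
    return {cat: sum(w in vocab for w in words) / total
            for cat, vocab in CATEGORIES.items()}
```

From profiles like these, a model would then try to infer personality traits — which is exactly the kind of leap from word counts to psychology that critics question.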

Proponents of these recruiting tools also claim that artificial intelligence can be used to avoid human biases, like an unconscious preference for graduates of a particular university, or a bias against women or a racial minority. (But AI often amplifies bias; more on that later.) They argue that AI can help strip out — or abstract — information related to a candidate’s identity, like their name, age, gender, or school, and more fairly consider applicants.

The idea that AI might clamp down on — or at least do better than — biased humans inspired California lawmakers earlier this year to introduce a bill urging fellow policymakers to explore the use of new technology, including “artificial intelligence and algorithm-based technologies,” to “reduce bias and discrimination in hiring.”

AI tools reflect who builds and trains them

These AI systems are only as good as the data they’re trained on and the humans that build them. If a résumé-screening machine learning tool is trained on historical data, such as résumés collected from a company’s previously hired candidates, the system will inherit both the conscious and unconscious preferences of the hiring managers who made those selections. That approach could help find stellar, highly qualified candidates. But Rieke warns that method can also pick up “silly patterns that are nonetheless real and prominent in a data set.”

One such résumé-screening tool identified being named Jared and having played lacrosse in high school as the best predictors of job performance, as Quartz reported.

If you’re a former high school lacrosse player named Jared, that particular tool might not sound so bad. But systems can also learn to be racist, sexist, ageist, and biased in other nefarious ways. For instance, Reuters reported last year that Amazon had created a recruitment algorithm that unintentionally tended to favor male applicants over female applicants for certain positions. The system was trained on a decade of résumés submitted to the company, which Reuters reported were mostly from men.

A visitor at Intel’s Artificial Intelligence (AI) Day walks past a signboard in Bangalore, India on April 4, 2017.
Manjunath Kiran/AFP via Getty Images

(An Amazon spokesperson told Recode that the system was never used and was abandoned for several reasons, including because the algorithms were primitive and that the models randomly returned unqualified candidates.)

Mondal says there is no way to use these systems without regular, extensive auditing. That’s because, even if you explicitly instruct a machine learning tool not to discriminate against women, it might inadvertently learn to discriminate against other proxies associated with being female, like having graduated from a women’s college.

“You have to have a way to make sure that you aren’t picking people who are grouped in a specific way and that you’re only hiring those types of people,” he says. Ensuring that these systems are not introducing unjust bias means frequently checking that new hires don’t disproportionately represent one demographic group.

But there’s skepticism that efforts to “de-bias” algorithms and AI are a complete solution. And Upturn’s report on equity and hiring algorithms notes that “[de-biasing] best practices have yet to crystallize [and] [m]any techniques maintain a narrow focus on individual protected characteristics like gender or race, and rarely address intersectional concerns, where multiple protected traits produce compounding disparate effects.”

And if a job is advertised on an online platform like Facebook, it’s possible you won’t even see a posting because of biases produced by that platform’s algorithms. There’s also concern that systems like HireVue’s could inherently be built to discriminate against people with certain disabilities.

Critics are also skeptical of whether these tools do what they say, especially when they make broad claims about a candidate’s “predicted” psychology, emotion, and suitability for a position. Adina Sterling, an organizational behavior professor at Stanford, also notes that, if not designed carefully, an algorithm could drive its preferences toward a single type of candidate. Such a system might miss a more unconventional applicant who could nevertheless excel, like an actor applying for a job in sales.

“Algorithms are good for economies of scale. They’re not good for nuance,” she explains, adding that she doesn’t believe companies are being vigilant enough when studying the recruitment AI tools they use and checking what these systems actually optimize for.

Who regulates these tools?

Employment lawyer Mark Girouard says AI and algorithmic selection systems fall under the Uniform Guidelines on Employee Selection Procedures, guidance established in 1978 by federal agencies that guides companies’ selection standards and employment assessments.

Many of these AI tools say they follow the four-fifths rule, a statistical “rule of thumb” benchmark established under those employee selection guidelines. The rule is used to compare the selection rate of applicant demographic groups and investigate whether selection criteria might have had an adverse impact on a protected minority group.
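The four-fifths rule itself is simple arithmetic: compute each group’s selection rate, divide by the highest group’s rate, and flag any ratio below 0.8. A minimal sketch (function name mine) of that check:

```python
def four_fifths_check(applicants, selected):
    """applicants/selected: dicts mapping demographic group -> counts.
    Compares each group's selection rate against the highest-rate group;
    any impact ratio below 0.8 flags possible adverse impact."""
    rates = {g: selected[g] / applicants[g] for g in applicants}
    best = max(rates.values())
    impact_ratios = {g: rate / best for g, rate in rates.items()}
    return impact_ratios, all(r >= 0.8 for r in impact_ratios.values())
```

For example, if one group is selected at a 50 percent rate and another at 30 percent, the impact ratio is 0.6 and the tool fails the check — but, as the critics quoted below note, passing it proves very little about whether the tool actually predicts job performance.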

But experts have noted that the rule is just one test, and Rieke emphasizes that passing the test doesn’t imply these AI tools do what they claim. A system that picked candidates randomly could pass the test, he says. Girouard explains that as long as a tool does not have a disparate impact on race or gender, there’s no law on the federal level that requires that such AI tools work as intended.

In its case against HireVue, EPIC argues that the company has failed to meet established AI transparency guidelines, including artificial intelligence principles outlined by the Organization for Economic Co-operation and Development that have been endorsed by the U.S and 41 other countries. HireVue told Recode that it follows the standards set by the Uniform Guidelines, as well as guidelines set by other professional organizations. The company also says its systems are trained on a diverse data set and that its tools have helped its clients increase the diversity of their staff.

At the state level, Illinois has made some initial headway in promoting the transparent use of these tools. Its Artificial Intelligence Video Interview Act, which takes effect in January, requires employers that use artificial intelligence-based video analysis technology to notify applicants, explain how the system works, and obtain their consent.

Still, Rieke says few companies release the methodologies used in their bias audits in “meaningful detail.” He’s not aware of any company that has released the results of an audit conducted by a third party.

Meanwhile, senators have pushed the EEOC to investigate whether biased facial analysis algorithms could violate anti-discrimination laws, and experts have previously warned the agency about the risk of algorithmic bias. But the EEOC has yet to release any specific guidance regarding algorithmic decision-making or artificial intelligence-based tools and did not respond to Recode’s request for comment.

Rieke did highlight one potential upside for applicants. Should lawmakers one day force companies to release the results of their AI hiring selection systems, job candidates could gain new insight into how to improve their applications. But as to whether AI will ever make the final call, Sterling says that’s a long way off.

“Hiring is an extremely social process,” she explains. “Companies don’t want to relinquish it to tech.”


Open Sourced is made possible by Omidyar Network. All Open Sourced content is editorially independent and produced by our journalists.


Source: Artificial intelligence will help determine if you get your next job

Video: The Future of Your Job in the Age of AI | Robots & Us | WIRED

This A.I. Bot Writes Such Convincing Ads, Chase Just ‘Hired’ It to Write Marketing Copy

Here are two headlines. One was written by a human. One was written by a robot. Can you guess which?

  • Access cash from the equity in your home. Take a look.

  • It’s true–You can unlock cash from the equity in your home. Click to apply.

Both lines of marketing copy were used to pitch home equity lines of credit to JPMorgan Chase customers. The second garnered nearly twice as many applications, according to the Wall Street Journal. It was generated by Persado’s artificial intelligence tool.

This is why Chase just signed a five-year deal with Persado Inc., a software company that uses artificial intelligence to tweak marketing language for its clients. After a trial period with the company, Chase has found Persado’s bot-generated copy incredibly effective. “Chase saw as high as a 450 percent lift in click-through rates on ads,” Persado said in a statement.

That email might have been written by a bot.

Chase says it will use Persado’s tool to rewrite language for email promotions, online ads, and potentially snail mail promotions. It’s also looking into using the tool for internal communications and customer service communications.

When asked if this might lead to downsizing, a Chase spokesperson told AdAge: “Our relationship with Persado hasn’t had an impact on our structure.”

Persado’s tool starts with human-written copy and analyzes it for six elements (narrative, emotion, descriptions, calls-to-action, formatting, and word positioning). It then creates thousands of combinations by making tweaks to those elements.
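Generating thousands of combinations from a handful of element variants is straightforward combinatorics. The sketch below illustrates the idea with three hypothetical pools (the variant wording is invented; Persado’s actual element pools, models, and selection process are proprietary):

```python
from itertools import product

# Hypothetical variant pools for three of the six elements; a real system
# would also vary narrative, formatting, and word positioning.
emotions = ["It's true:", "Good news:", ""]
descriptions = [
    "you can unlock cash from the equity in your home",
    "you can access cash from your home's equity",
]
calls_to_action = ["Click to apply.", "Take a look."]

def generate_variants():
    """Yield every combination of the element pools as one line of copy."""
    for emotion, description, cta in product(emotions, descriptions, calls_to_action):
        parts = [p for p in (emotion, description, cta) if p]
        yield " ".join(parts)
```

Even these tiny pools yield 3 × 2 × 2 = 12 variants; with realistic pool sizes across six elements, the space quickly reaches thousands of candidates to test against click-through data.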

Kristin Lemkau, chief marketing officer at JPMorgan Chase, is fully on board with Persado. Chase began experimenting with its software three years ago. Sometimes the tool would recommend a wordier headline, which goes against marketing 101. But that longer headline garnered more clicks.

“They made a couple of changes that made sense and I was like, ‘Why were we so dumb that we didn’t figure that out?'” she told the Journal.

By: Betsy Mikel, Owner, Aveck (@BetsyM)

Source: This A.I. Bot Writes Such Convincing Ads, Chase Just ‘Hired’ It to Write Marketing Copy

The Amazing Ways Dubai Airport Uses Artificial Intelligence

As one of the world’s busiest airports (ranked No. 3 in 2018, according to Airports Council International’s world traffic report), Dubai International Airport is also a leader in using artificial intelligence (AI). In fact, the United Arab Emirates (UAE) leads the Arab world in adopting artificial intelligence across sectors and areas of life, and its government has made AI a priority, with a national AI strategy and a Ministry of Artificial Intelligence mandated to invest in AI technologies and tools.

AI Customs Officials

The Emirates Ministry of the Interior has said that by 2020, immigration officers will no longer be needed in the UAE; they will be replaced by artificial intelligence. The plan is for travelers simply to walk through an AI-powered security system that scans them without their having to take off shoes or belts or empty their pockets. The airport has already experimented with a virtual-aquarium smart gate: travelers walk through a small tunnel surrounded by virtual fish, and while they look at the fish swimming around them, cameras capture every angle of their faces, allowing quick identification.

AI Baggage Handling

Tim Clark, the president of Emirates, the world’s biggest long-haul carrier, believes robots powered by artificial intelligence should already be handling baggage: identifying bags, putting them in the appropriate bins, and taking them out of the aircraft without any human intervention. He envisions robots similar to the automation and robotics used in Amazon.com’s warehouses.

Air Traffic Management

In a partnership with Canada-based Searidge Technologies, the UAE General Civil Aviation Authority (GCAA) is researching the use of artificial intelligence in the country’s air traffic control process. In a statement announcing the partnership in 2018, the director-general of the GCAA confirmed that it is UAE’s strategy to explore how artificial intelligence and other new technologies can enhance the aviation industry. With goals to optimize safety and efficiency within air traffic management, this is important work that could ultimately impact similar operations worldwide.

Automated Vehicles

Self-driving cars powered by artificial intelligence and 100% solar or electrical energy will soon help Dubai Airport increase efficiency in its day-to-day operations, including the handoff between ground transportation and air travel. Imagine artificial intelligence orchestrating passenger movement from your arrival at the airport all the way to your departure from your destination’s airport. In the future, autonomous vehicles (already loaded with your luggage) could meet you at the curb, and AI-driven luggage carts could autonomously deliver your bags to your hotel or home, eliminating the need for baggage carousels and the hassle of dealing with your luggage.

While much attention is given to vetting passengers to ensure safe air travel, artificial intelligence can also improve the staff clearance process. Some see the most significant security threat airports and airlines face as coming from airport personnel: an EgyptAir mechanic, a baggage handler, and two police officers were arrested in connection with the bombing of Metrojet Flight 9268, in which all 224 people on board died, and several Australian border force officers have been arrested for links to international drug smugglers. Part of the effort to improve staff clearance involves enhancing staff entrances with biometrics, advanced facial recognition, and artificial intelligence, rather than just the CCTV cameras and police monitoring used now. Artificial intelligence can look for areas of concern in a staff member’s behavior and record of crime and violence even before they are hired; after hiring, AI algorithms can continue to look for changes that could indicate a security problem.

AI Projects Being Explored for the Future

Emirates is developing AI projects in its lab at the Dubai Future Accelerators facility. Some of these include using AI to assist passengers when picking their onboard meals, scheduling a pickup by a taxi as well as personalizing the experience of every Emirates passenger throughout the entire journey. They are also exploring how AI can help Emirates teach cabin crew. We can expect that artificial intelligence will be put to work to solve the problems of airplane boarding by looking at the issue in a way humans have been unable to. The goal would be for AI to architect a queue-less experience.

AI at Other Airports

The first biometric airport terminal is already running at the Hartsfield-Jackson Atlanta International Airport, and a similar system is running at Dubai International Airport for first- and business-class passengers. Here are some other ways airports and airlines around the world are using artificial intelligence or plan to:

·         Cybersecurity: With an AI assist, airports and airlines have shifted from identifying cybersecurity threats to preventing them, in response to the expansion of digitalization across aviation.

·         Immersive experiences: Augmented reality might be the future of helping travelers find their way through an airport.

·         Voice recognition technology: At Heathrow Airport, passengers can already ask Alexa to get flight updates. United Airlines allows travelers to check in to their flight through Google Assistant by simply stating, “Hey Google, check in to my flight.”

As innovation is pushed by the UAE, Dubai International Airport, and other technology innovators around the world, these new AI tools and capabilities for air travel will bring opportunities for abuse and raise privacy concerns. But if artificial intelligence can remove the biggest headaches from travel, some people (possibly most) will be more than ready to trade a bit of privacy for a better experience when AI takes over.

 


Bernard Marr is an internationally best-selling author, popular keynote speaker, futurist, and a strategic business and technology advisor to governments and companies. He helps organisations improve their business performance, use data more intelligently, and understand the implications of new technologies such as artificial intelligence, big data, blockchains, and the Internet of Things.

Source: The Amazing Ways Dubai Airport Uses Artificial Intelligence
