
Artificial Intelligence Will Help Determine If You Get Your Next Job

With parents using artificial intelligence to scan prospective babysitters’ social media and an endless slew of articles explaining how your résumé can “beat the bots,” you might be wondering whether a robot will be offering you your next job.

We’re not there yet, but recruiters are increasingly using AI to make the first round of cuts and to determine whether a job posting is even advertised to you. Often trained on data collected about previous or similar applicants, these tools can cut down on the effort recruiters need to expend in order to make a hire. Last year, 67 percent of hiring managers and recruiters surveyed by LinkedIn said AI was saving them time.

But critics argue that such systems can introduce bias, lack accountability and transparency, and aren’t guaranteed to be accurate. Take, for instance, the Utah-based company HireVue, which sells a job interview video platform that can use artificial intelligence to assess candidates and, it claims, predict their likelihood to succeed in a position. The company says it uses on-staff psychologists to help develop customized assessment algorithms that reflect the ideal traits for a particular role a client (usually a company) hopes to hire for, like a sales representative or computer engineer.

Output of a Google Vision artificial intelligence system performing facial recognition on a photograph of a man in San Ramon, California, on November 22, 2019. (Smith Collection/Gado/Getty Images)

That algorithm is then used to analyze how individual candidates answer preselected questions in a recorded video interview, grading their verbal responses and, in some cases, facial movements. HireVue claims the tool — which is used by about 100 clients, including Hilton and Unilever — is more predictive of job performance than human interviewers conducting the same structured interviews.

But last month, lawyers at the Electronic Privacy Information Center (EPIC), a privacy rights nonprofit, filed a complaint with the Federal Trade Commission, pushing the agency to investigate the company for potential bias, inaccuracy, and lack of transparency. It also accused HireVue of engaging in “deceptive trade practices” because the company claims it doesn’t use facial recognition. (EPIC argues HireVue’s facial analysis qualifies as facial recognition.)

The complaint follows the introduction of the Algorithmic Accountability Act in Congress earlier this year, which would grant the FTC authority to create regulations to check so-called “automated decision systems” for bias. Meanwhile, the Equal Employment Opportunity Commission (EEOC) — the federal agency that deals with employment discrimination — is reportedly now investigating at least two discrimination cases involving job decision algorithms, according to Bloomberg Law.

AI can pop up throughout the recruitment and hiring process

Recruiters can make use of artificial intelligence throughout the hiring process, from advertising and attracting potential applicants to predicting candidates’ job performance. “Just like with the rest of the world’s digital advertisement, AI is helping target who sees what job descriptions [and] who sees what job marketing,” explains Aaron Rieke, a managing director at Upturn, a DC-based nonprofit digital technology research group.

And it’s not just a few outlier companies, like HireVue, that use predictive AI. Vox’s own HR staff use LinkedIn Recruiter, a popular tool that uses artificial intelligence to rank candidates. Similarly, the jobs platform ZipRecruiter uses AI to match candidates with nearby jobs that are potentially good fits, based on the traits the applicants have shared with the platform — like their listed skills, experience, and location — and previous interactions between similar candidates and prospective employers. For instance, because I applied for a few San Francisco-based tutoring gigs on ZipRecruiter last year, I’ve continued to receive emails from the platform advertising similar jobs in the area.

Overall, the company says its AI has trained on more than 1.5 billion employer-candidate interactions.

Platforms like Arya — which says it’s been used by Home Depot and Dyson — go even further, using machine learning to find candidates based on data that might be available on a company’s internal database, public job boards, social platforms like Facebook and LinkedIn, and other profiles available on the open web, like those on professional membership sites.

Arya claims it’s even able to predict whether an employee is likely to leave their old job and take a new one, based on the data it collects about a candidate, such as their promotions, movement between previous roles and industries, and the predicted fit of a new position, as well as data about the role and industry more broadly.

Another use of AI is to screen through application materials, like résumés and assessments, in order to recommend which candidates recruiters should contact first. Somen Mondal, the CEO and co-founder of one such screening and matching service, Ideal, says these systems do more than automatically search résumés for relevant keywords.

For instance, Ideal can learn to understand and compare experiences across candidates’ résumés and then rank the applicants by how closely they match an opening. “It’s almost like a recruiter Googling a company [listed on an application] and learning about it,” explains Mondal, who says his platform is used to screen 5 million candidates a month.

But AI doesn’t just operate behind the scenes. If you’ve ever applied for a job and then been engaged by a text conversation, there’s a chance you’re talking to a recruitment bot. Chatbots that use natural-language understanding created by companies like Mya can help automate the process of reaching out to previous applicants about a new opening at a company, or finding out whether an applicant meets a position’s basic requirements — like availability — thus eliminating the need for human phone-screening interviews. Mya, for instance, can reach out over text and email, as well as through messaging applications like Facebook and WhatsApp.

Another burgeoning use of artificial intelligence in job selection is talent and personality assessments. One company championing this application is Pymetrics, which sells neuroscience computer games for candidates to play (one such game involves hitting the spacebar whenever a red circle, but not a green circle, flashes on the screen).

These games are meant to predict candidates’ “cognitive and personality traits.” Pymetrics says on its website that the system studies “millions of data points” collected from the games to match applicants to jobs judged to be a good fit, based on Pymetrics’ predictive algorithms.

Proponents say AI systems are faster and can consider information human recruiters can’t calculate quickly

These tools help HR departments move more quickly through large pools of applicants and ultimately make it cheaper to hire. Proponents say they can be more fair and more thorough than overworked human recruiters skimming through hundreds of résumés and cover letters.

“Companies just can’t get through the applications. And if they do, they’re spending — on average — three seconds,” Mondal says. “There’s a whole problem with efficiency.” He argues that using an AI system can ensure that every résumé, at the very least, is screened. After all, one job posting might attract thousands of applications, with a huge share from people who are completely unqualified for a role.

Such tools can automatically recognize traits in the application materials from previous successful hires and look for signs of that trait among materials submitted by new applicants. Mondal says systems like Ideal can consider between 16 and 25 factors (or elements) in each application, pointing out that, unlike humans, it can calculate something like commute distance in “milliseconds.”
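The commute-distance factor Mondal mentions is simple arithmetic once two coordinates are known. A minimal sketch of how such a factor could be scored (the linear scoring scheme and the 50 km cutoff are invented for illustration, not Ideal's actual method):

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometers between two (lat, lon) points."""
    r = 6371.0  # mean Earth radius, km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlam / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def commute_score(candidate_loc, office_loc, max_km=50.0):
    """Map commute distance onto a 0-1 score: 1.0 next door,
    falling linearly to 0.0 at or beyond max_km."""
    d = haversine_km(*candidate_loc, *office_loc)
    return max(0.0, 1.0 - d / max_km)
```

In a ranking system this would be just one of the 16 to 25 factors combined into a candidate's overall score.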

“You can start to fine-tune the system with not just the people you’ve brought in to interview, or not just the people that you’ve hired, but who ended up doing well in the position. So it’s a complete loop,” Mondal explains. “As a human, it’s very difficult to look at all that data across the lifecycle of an applicant. And [with AI] this is being done in seconds.”

These systems typically operate on a scale greater than a human recruiter. For instance, HireVue claims the artificial intelligence used in its video platform evaluates “tens of thousands of factors.” Even if companies are using the same AI-based hiring tool, they’re likely using a system that’s optimized to their own hiring preferences. Plus, an algorithm is likely changing if it’s continuously being trained on new data.

Another service, Humantic, claims it can get a sense of candidates’ psychology based on their résumés, LinkedIn profiles, and other text-based data an applicant might volunteer to submit, by mining through and studying their use of language (the product is inspired by the field of psycholinguistics). The idea is to eliminate the need for additional personality assessments. “We try to recycle the information that’s already there,” explains Amarpreet Kalkat, the company’s co-founder. He says the service is used by more than 100 companies.

Proponents of these recruiting tools also claim that artificial intelligence can be used to avoid human biases, like an unconscious preference for graduates of a particular university, or a bias against women or a racial minority. (But AI often amplifies bias; more on that later.) They argue that AI can help strip out — or abstract — information related to a candidate’s identity, like their name, age, gender, or school, and more fairly consider applicants.
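Stripping identity-linked fields is the simplest form of the abstraction proponents describe. A minimal sketch, with a hypothetical field list (a real system would also have to catch proxies such as names embedded in free text or graduation years):

```python
# Hypothetical identity-linked fields to withhold from ranking.
IDENTITY_FIELDS = {"name", "age", "gender", "school", "photo"}

def abstract_application(application):
    """Return a copy of an application record with identity-linked
    fields removed, so downstream ranking sees only job-relevant data."""
    return {k: v for k, v in application.items() if k not in IDENTITY_FIELDS}
```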

The idea that AI might clamp down on — or at least do better than — biased humans inspired California lawmakers earlier this year to introduce a bill urging fellow policymakers to explore the use of new technology, including “artificial intelligence and algorithm-based technologies,” to “reduce bias and discrimination in hiring.”

AI tools reflect who builds and trains them

These AI systems are only as good as the data they’re trained on and the humans that build them. If a résumé-screening machine learning tool is trained on historical data, such as résumés collected from a company’s previously hired candidates, the system will inherit both the conscious and unconscious preferences of the hiring managers who made those selections. That approach could help find stellar, highly qualified candidates. But Rieke warns that method can also pick up “silly patterns that are nonetheless real and prominent in a data set.”

One such résumé-screening tool identified being named Jared and having played lacrosse in high school as the best predictors of job performance, as Quartz reported.

If you’re a former high school lacrosse player named Jared, that particular tool might not sound so bad. But systems can also learn to be racist, sexist, ageist, and biased in other nefarious ways. For instance, Reuters reported last year that Amazon had created a recruitment algorithm that unintentionally tended to favor male applicants over female applicants for certain positions. The system was trained on a decade of résumés submitted to the company, which Reuters reported were mostly from men.

A visitor at Intel’s Artificial Intelligence (AI) Day walks past a signboard in Bangalore, India, on April 4, 2017. (Manjunath Kiran/AFP via Getty Images)

(An Amazon spokesperson told Recode that the system was never used and was abandoned for several reasons, including because the algorithms were primitive and that the models randomly returned unqualified candidates.)

Mondal says there is no way to use these systems without regular, extensive auditing. That’s because, even if you explicitly instruct a machine learning tool not to discriminate against women, it might inadvertently learn to discriminate against other proxies associated with being female, like having graduated from a women’s college.

“You have to have a way to make sure that you aren’t picking people who are grouped in a specific way and that you’re only hiring those types of people,” he says. Ensuring that these systems are not introducing unjust bias means frequently checking that new hires don’t disproportionately represent one demographic group.

But there’s skepticism that efforts to “de-bias” algorithms and AI are a complete solution. And Upturn’s report on equity and hiring algorithms notes that “[de-biasing] best practices have yet to crystallize [and] [m]any techniques maintain a narrow focus on individual protected characteristics like gender or race, and rarely address intersectional concerns, where multiple protected traits produce compounding disparate effects.”

And if a job is advertised on an online platform like Facebook, it’s possible you won’t even see a posting because of biases produced by that platform’s algorithms. There’s also concern that systems like HireVue’s could inherently be built to discriminate against people with certain disabilities.

Critics are also skeptical of whether these tools do what they say, especially when they make broad claims about a candidate’s “predicted” psychology, emotion, and suitability for a position. Adina Sterling, an organizational behavior professor at Stanford, also notes that, if not designed carefully, an algorithm could drive its preferences toward a single type of candidate. Such a system might miss a more unconventional applicant who could nevertheless excel, like an actor applying for a job in sales.

“Algorithms are good for economies of scale. They’re not good for nuance,” she explains, adding that she doesn’t believe companies are being vigilant enough when studying the recruitment AI tools they use and checking what these systems actually optimize for.

Who regulates these tools?

Employment lawyer Mark Girouard says AI and algorithmic selection systems fall under the Uniform Guidelines on Employee Selection Procedures, guidance established in 1978 by federal agencies that guides companies’ selection standards and employment assessments.

Many of these AI tools say they follow the four-fifths rule, a statistical “rule of thumb” benchmark established under those employee selection guidelines. The rule is used to compare the selection rate of applicant demographic groups and investigate whether selection criteria might have had an adverse impact on a protected minority group.
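The four-fifths rule itself is simple arithmetic: divide each group's selection rate by the highest group's rate, and flag any ratio below 0.8. A minimal sketch:

```python
def four_fifths_check(counts):
    """counts: {group: (selected, applicants)}.
    Returns (impact_ratios, passes): each group's selection rate divided
    by the highest group's rate, and whether every ratio clears the
    conventional 0.8 ("four-fifths") threshold."""
    rates = {g: sel / n for g, (sel, n) in counts.items()}
    top = max(rates.values())
    ratios = {g: rate / top for g, rate in rates.items()}
    return ratios, all(r >= 0.8 for r in ratios.values())
```

For example, a tool that selected 48 of 100 men but only 30 of 100 women would give women an impact ratio of 0.625 and fail the check.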

But experts have noted that the rule is just one test, and Rieke emphasizes that passing the test doesn’t imply these AI tools do what they claim. A system that picked candidates randomly could pass the test, he says. Girouard explains that as long as a tool does not have a disparate impact on race or gender, there’s no law on the federal level that requires that such AI tools work as intended.

In its case against HireVue, EPIC argues that the company has failed to meet established AI transparency guidelines, including artificial intelligence principles outlined by the Organization for Economic Co-operation and Development that have been endorsed by the U.S. and 41 other countries. HireVue told Recode that it follows the standards set by the Uniform Guidelines, as well as guidelines set by other professional organizations. The company also says its systems are trained on a diverse data set and that its tools have helped its clients increase the diversity of their staff.

At the state level, Illinois has made some initial headway in promoting the transparent use of these tools. In January, its Artificial Intelligence Video Interview Act will take effect, requiring employers that use artificial intelligence-based video analysis technology to notify applicants, explain how the technology works, and obtain their consent.

Still, Rieke says few companies release the methodologies used in their bias audits in “meaningful detail.” He’s not aware of any company that has released the results of an audit conducted by a third party.

Meanwhile, senators have pushed the EEOC to investigate whether biased facial analysis algorithms could violate anti-discrimination laws, and experts have previously warned the agency about the risk of algorithmic bias. But the EEOC has yet to release any specific guidance regarding algorithmic decision-making or artificial intelligence-based tools and did not respond to Recode’s request for comment.

Rieke did highlight one potential upside for applicants. Should lawmakers one day force companies to release the results of their AI hiring selection systems, job candidates could gain new insight into how to improve their applications. But as to whether AI will ever make the final call, Sterling says that’s a long way off.

“Hiring is an extremely social process,” she explains. “Companies don’t want to relinquish it to tech.”


Open Sourced is made possible by Omidyar Network. All Open Sourced content is editorially independent and produced by our journalists.


Source: Artificial intelligence will help determine if you get your next job



How Artificial Intelligence Could Save Psychiatry

Five years from now, the United States’ already overburdened mental health system may be short by as many as 15,600 psychiatrists as the growth in demand for their services outpaces supply, according to a 2017 report from the National Council for Behavioral Health. But some proponents say that, by then, an unlikely tool — artificial intelligence — may be ready to help mental health practitioners mitigate the impact of the deficit.

Medicine is already a fruitful area for artificial intelligence; it has shown promise in diagnosing disease, interpreting images and zeroing in on treatment plans. Though psychiatry is in many ways a uniquely human field, requiring emotional intelligence and perception that computers can’t simulate, even here, experts say, AI could have an impact. The field, they argue, could benefit from artificial intelligence’s ability to analyze data and pick up on patterns and warning signs so subtle humans might never notice them.

“Clinicians actually get very little time to interact with patients,” says Peter Foltz, a research professor at the University of Colorado Boulder who this month published a paper about AI’s promise in psychiatry. “Patients tend to be remote, it’s very hard to get appointments and oftentimes they may be seen by a clinician [only] once every three months or six months.”

AI could be an effective way for clinicians to both make the best of the time they do have with patients, and bridge any gaps in access, Foltz says. AI-aided data analysis could help clinicians make diagnoses more quickly and accurately, getting patients on the right course of treatment faster—but perhaps more excitingly, Foltz says, apps or other programs that incorporate AI could allow clinicians to monitor their patients remotely, alerting them to issues or changes that arise between appointments and helping them incorporate that knowledge into treatment plans. That information could be lifesaving, since research has shown that regularly checking in with patients who are suicidal or in mental distress can keep them safe.

Some mental-health apps and programs already incorporate AI—like Woebot, an app-based mood tracker and chatbot that combines AI and principles from cognitive behavioral therapy—but it’ll probably be some five to 10 years before algorithms are routinely used in clinics, according to psychiatrists interviewed by TIME.

Even then, Dr. John Torous, director of digital psychiatry at Beth Israel Deaconess Medical Center in Boston and chair of the American Psychiatric Association’s Committee on Mental Health Information Technology, cautions that “artificial intelligence is only as strong as the data it’s trained on,” and, he says, mental health diagnostics have not been quantified well enough to program an algorithm. It’s possible that will happen in the future, with more and larger psychological studies, but, Torous says “it’s going to be an uphill challenge.”

Not everyone shares that position. Speech and language have emerged as two of the clearest areas for applying AI in psychiatry, says Dr. Henry Nasrallah, a psychiatrist at the University of Cincinnati Medical Center who has written about AI’s place in the field. Speech and mental health are closely linked, he explains.

Talking in a monotone can be a sign of depression; fast speech can point to mania; and disjointed word choice can be connected to schizophrenia. When these traits are pronounced enough, a human clinician might pick up on them—but AI algorithms, Nasrallah says, could be trained to flag signals and patterns too subtle for humans to detect.

Foltz and his team in Boulder are working in this space, as are big-name companies like IBM. Foltz and his colleagues designed a mobile app that takes patients through a series of repeatable verbal exercises, like telling a story and answering questions about their emotional state. An AI system then assesses those soundbites for signs of mental distress, both by analyzing how they compare to the individual’s previous responses, and by measuring the clips against responses from a larger patient population.

The team tested the system on 225 people living in either Northern Norway or rural Louisiana—two places with inadequate access to mental health care—and found that the app was at least as accurate as clinicians at picking up on speech-based signs of mental distress.
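The two comparisons the app makes, against the individual's own baseline and against a larger population, can be sketched as a simple outlier check (the z-score threshold and the feature name are invented for illustration; the actual system is far more sophisticated):

```python
from statistics import mean, stdev

def distress_flags(new_features, personal_history, population, z_cut=2.0):
    """new_features: {feature: value} from the latest recording (e.g. speech
    rate). personal_history / population: lists of past feature dicts.
    Flags any feature whose value deviates sharply (beyond z_cut standard
    deviations) from BOTH the individual's baseline and the population."""
    flags = []
    for feat, value in new_features.items():
        for reference in (personal_history, population):
            vals = [d[feat] for d in reference if feat in d]
            if len(vals) < 2 or stdev(vals) == 0:
                break  # not enough data to judge this feature; skip it
            if abs(value - mean(vals)) / stdev(vals) < z_cut:
                break  # within normal range for this reference group
        else:
            flags.append(feat)  # outlier versus both references
    return flags
```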


Written language is also a promising area for AI-assisted mental health care, Nasrallah says. Studies have shown that machine learning algorithms trained to assess word choice and order are better than clinicians at distinguishing between real and fake suicide notes, meaning they’re good at picking up on signs of distress. Using these systems to regularly monitor a patient’s writing, perhaps through an app or periodic remote check-in with mental health professionals, could feasibly offer a way to assess their risk of self-harm.

Even if these applications do pan out, Torous cautions that “nothing has ever been a panacea.” On one hand, he says, it’s exciting that technology is being pitched as a solution to problems that have long plagued the mental health field; but, on the other hand, “in some ways there’s so much desperation to make improvements to mental health that perhaps the tools are getting overvalued.”

Nasrallah and Foltz emphasize that AI isn’t meant to replace human psychiatrists or completely reinvent the wheel. (“Our brain is a better computer than any AI,” Nasrallah says.) Instead, they say, it can provide data and insights that will streamline treatment.

Alastair Denniston, an ophthalmologist and honorary professor at the U.K.’s University of Birmingham who this year published a research review about AI’s ability to diagnose disease, argues that, if anything, technology can help doctors focus on the human elements of medicine, rather than getting bogged down in the minutiae of diagnosis and data collection.

Artificial intelligence “may allow us to have more time in our day to spend actually communicating effectively and being more human,” Denniston says. “Rather than being diagnostic machines… [doctors can] provide some of that empathy that can get swallowed up by the business of what we do.”

By Jamie Ducharme

November 20, 2019

Source: How Artificial Intelligence Could Save Psychiatry | Time


This A.I. Bot Writes Such Convincing Ads, Chase Just ‘Hired’ It to Write Marketing Copy

Here are two headlines. One was written by a human. One was written by a robot. Can you guess which?

  • Access cash from the equity in your home. Take a look.

  • It’s true–You can unlock cash from the equity in your home. Click to apply.

Both lines of marketing copy were used to pitch home equity lines of credit to JPMorgan Chase customers. The second garnered nearly twice as many applications, according to the Wall Street Journal. It was generated by Persado’s artificial intelligence tool.

This is why Chase just signed a five-year deal with Persado Inc., a software company that uses artificial intelligence to tweak marketing language for its clients. After a trial period with the company, Chase has found Persado’s bot-generated copy incredibly effective. “Chase saw as high as a 450 percent lift in click-through rates on ads,” Persado said in a statement.

That email might have been written by a bot.

Chase says it will use Persado’s tool to rewrite language for email promotions, online ads, and potentially snail mail promotions. It’s also looking into using the tool for internal communications and customer service communications.

When asked if this might lead to downsizing, a Chase spokesperson told AdAge: “Our relationship with Persado hasn’t had an impact on our structure.”

Persado’s tool starts with human-written copy and analyzes it for six elements (narrative, emotion, descriptions, calls-to-action, formatting, and word positioning). It then creates thousands of combinations by making tweaks to those elements.
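That combinatorial step can be sketched with a few variant pools per element (the element names come from the article; the variant strings are invented for illustration):

```python
from itertools import product

# Hypothetical variant pools for three of the six elements the tool is
# described as tweaking; the strings are invented for illustration.
variants = {
    "emotion":        ["It's true:", "Good news:", ""],
    "narrative":      ["You can unlock cash from the equity in your home.",
                       "Access cash from the equity in your home."],
    "call_to_action": ["Click to apply.", "Take a look."],
}

def generate_copy(pools):
    """Yield every combination of element variants as one candidate line."""
    keys = list(pools)
    for combo in product(*(pools[k] for k in keys)):
        yield " ".join(part for part in combo if part)
```

Three pools of 3 x 2 x 2 variants already yield 12 candidate lines; six elements with a handful of variants each quickly reach the thousands of combinations to test.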

Kristin Lemkau, chief marketing officer at JPMorgan Chase, is fully on board with Persado. Chase began experimenting with its software three years ago. Sometimes the tool would recommend a wordier headline, which goes against marketing 101. But that longer headline garnered more clicks.

“They made a couple of changes that made sense and I was like, ‘Why were we so dumb that we didn’t figure that out?'” she told the Journal.

By: Betsy Mikel Owner, Aveck @BetsyM

Source: This A.I. Bot Writes Such Convincing Ads, Chase Just ‘Hired’ It to Write Marketing Copy

The Amazing Ways Dubai Airport Uses Artificial Intelligence

As one of the world’s busiest airports (ranked No. 3 in 2018, according to Airports Council International’s world traffic report), Dubai International Airport is also a leader in using artificial intelligence (AI). In fact, the United Arab Emirates (UAE) leads the Arab world in adopting artificial intelligence across sectors and areas of life, with a government that prioritizes AI through a national AI strategy and a Ministry of Artificial Intelligence mandated to invest in AI technologies and tools.

AI Customs Officials

The Emirates Ministry of the Interior has said that by 2020, immigration officers would no longer be needed in the UAE; they would be replaced by artificial intelligence. The plan is for travelers simply to walk through an AI-powered security system and be scanned without taking off shoes or belts or emptying pockets. The airport has already experimented with a virtual aquarium smart gate: travelers walk through a small tunnel surrounded by virtual fish, and while they look at the fish swimming around them, cameras capture every angle of their faces, allowing for quick identification.

AI Baggage Handling

Tim Clark, the president of Emirates, the world’s biggest long-haul carrier, believes artificial intelligence, specifically robots, should already be handling baggage: identifying bags, placing them in the appropriate bins, and unloading them from the aircraft without any human intervention. He envisions robots similar to the automation and robotics used in Amazon.com’s warehouses.

Air Traffic Management

In a partnership with Canada-based Searidge Technologies, the UAE General Civil Aviation Authority (GCAA) is researching the use of artificial intelligence in the country’s air traffic control process. In a statement announcing the partnership in 2018, the director-general of the GCAA confirmed that it is UAE’s strategy to explore how artificial intelligence and other new technologies can enhance the aviation industry. With goals to optimize safety and efficiency within air traffic management, this is important work that could ultimately impact similar operations worldwide.

Automated Vehicles

Self-driving cars powered by artificial intelligence and 100% solar or electrical energy will soon help Dubai Airport increase efficiency in its day-to-day operations, including smoothing the handoff between ground transportation and air travel. Imagine artificial intelligence orchestrating passenger movement from your arrival at the airport to your departure from your destination’s airport. In the future, autonomous vehicles (already loaded with your luggage) could meet you at the curb, and AI-directed luggage carts could act autonomously to deliver your luggage to your hotel or home, eliminating baggage carousels and the hassle of dealing with your luggage.

While much attention is given to vetting passengers to ensure safe air travel, artificial intelligence can also improve the staff clearance process. Some airports consider airport personnel the most significant security threat that airports and airlines face. An EgyptAir mechanic, a baggage handler and two police officers were arrested in connection with the bombing of Metrojet Flight 9268, in which all 224 people on board died, and several Australian border force officers have been arrested for links to international drug smugglers. Efforts to improve the staff clearance process include enhancing staff entrances with biometrics, advanced facial recognition and artificial intelligence, rather than relying only on the CCTV cameras and police monitoring used now. Artificial intelligence can screen a prospective staff member’s behavior and record for crime and violence even before they are hired, and after hiring, AI algorithms can continue to watch for changes that could indicate a security problem.

AI Projects Being Explored for the Future

Emirates is developing AI projects in its lab at the Dubai Future Accelerators facility, including using AI to help passengers pick their onboard meals, to schedule taxi pickups and to personalize the experience of every Emirates passenger throughout the entire journey. The airline is also exploring how AI can help train cabin crew. We can expect artificial intelligence to be put to work on the problems of airplane boarding, approaching the issue in ways humans have been unable to, with the goal of architecting a queue-less experience.

AI at Other Airports

The first biometric airport terminal is already running at the Hartsfield-Jackson Atlanta International Airport, and a similar system is running at Dubai International Airport for first- and business-class passengers. Here are some other ways airports and airlines around the world are using artificial intelligence or plan to:

·         Cybersecurity: In response to expanding digitalization across aviation, airports and airlines have shifted, with an AI assist, from merely identifying cybersecurity threats to actively preventing them.

·         Immersive experiences: Augmented reality might be the future of helping travelers find their way through an airport.

·         Voice recognition technology: At Heathrow Airport, passengers can already ask Alexa to get flight updates. United Airlines allows travelers to check in to their flight through Google Assistant by simply stating, “Hey Google, check in to my flight.”

As innovation is pushed forward by the UAE, Dubai International Airport and other technology innovators around the world, these new AI tools and capabilities for air travel will bring opportunities for abuse and raise privacy considerations. But if artificial intelligence can remove the biggest headaches from travel, some people (possibly most) will be more than ready to trade a bit of privacy for a better experience.

 


Bernard Marr is an internationally best-selling author, popular keynote speaker, futurist, and a strategic business & technology advisor to governments and companies. He helps organisations improve their business performance, use data more intelligently, and understand the implications of new technologies such as artificial intelligence, big data, blockchains, and the Internet of Things. Why don’t you connect with Bernard on Twitter (@bernardmarr), LinkedIn (https://uk.linkedin.com/in/bernardmarr) or instagram (bernard.marr)?

Source: The Amazing Ways Dubai Airport Uses Artificial Intelligence

How AI Is Revolutionizing Healthcare

Great strides are being made by AI in the healthcare sector. The AI market in healthcare is due to increase tenfold by 2025, becoming a $13 billion industry, according to Global Market Insights. But currently, advances are generally tied to frontline medicine, rather than back-office administrative and finance functions. “Most of the machine learning and artificial intelligence gains we’re seeing right now are on the clinical and diagnostic sides,” explains Brian Sanderson, National Managing Principal of Healthcare Services at the accounting, consulting and technology firm Crowe.

But there’s a value opportunity in harnessing machine learning and AI beyond the bedside, too, one that can help hospitals save money on administration and allow health system leadership to focus more on what should be at the core of everything they do: keeping people healthy. While the revolution is well underway in frontline medicine, hospital administrators are just beginning to recognize the power and applications of AI. Here are three areas of back-office healthcare ripe for an AI-aided revolution: exceptions management, hospital administration and revenue cycle operations.

1. Exceptions Management: Reducing Errors All Around

For decades, hospital business office personnel have been attempting to recognize, resolve and prevent billing exceptions, i.e., claims that did not smoothly complete the payment cycle. But with machine learning and AI, it’s possible to put real computing power to work spotting patterns that even the most skilled humans cannot.

Crowe worked with a large health system, using AI to analyze its credit balances — patient accounts that did not resolve to zero. “They had anomalies, and they had exceptions,” says Sanderson. “There shouldn’t be any if your manufacturing process is running correctly.” The system had 17 people working on resolving and processing credit exceptions.

As soon as Crowe put AI on the case, it discovered that a single compliance issue was occurring thousands of times per month. “We found 16,000 of them by using AI, and were able to turn it off and fix it,” says Sanderson. “Suddenly 16,000 exceptions stop coming.”
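Crowe hasn't published its method, but the pattern it describes (one root cause recurring thousands of times) can be illustrated with a simple frequency pass over exception records; the field names and reason codes below are hypothetical:

```python
from collections import Counter

def recurring_exceptions(records, min_count=3):
    """Group billing exceptions by root-cause code and flag codes that recur
    often enough to suggest a systemic issue rather than one-off errors."""
    counts = Counter(r["reason_code"] for r in records)
    return [(code, n) for code, n in counts.most_common() if n >= min_count]

# Hypothetical exception records; in practice these would come from the
# health system's billing platform, with thousands of rows per month.
records = [
    {"account": "A1", "reason_code": "DUP_PAYMENT"},
    {"account": "A2", "reason_code": "DUP_PAYMENT"},
    {"account": "A3", "reason_code": "BAD_ADJUSTMENT"},
    {"account": "A4", "reason_code": "DUP_PAYMENT"},
]
print(recurring_exceptions(records))  # DUP_PAYMENT recurs; BAD_ADJUSTMENT does not
```

Once a code surfaces this way, the fix happens upstream, which is why the 16,000 exceptions "stop coming" rather than needing to be worked one by one.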

“Cost-driven automation,” as Sanderson calls it, is a transformative innovation for the healthcare space.

2. Administration: Keeping Hospital Operations From Flatlining

Finance-focused C-suite executives juggle plenty of responsibilities. The chief financial officer (CFO) keeps an eye on revenues, while the chief operating officer (COO) watches the bottom line and keeps costs low. But aided by AI, the CFO can oversee both sides of the equation, freeing the COO to keep services running smoothly day to day. This kind of leadership and staffing efficiency matters because hospitals are always at risk of losing sight of the main goal: keeping people healthy while ensuring the day-to-day operation of the health system runs smoothly.

The AI revolution will involve feeding in and parsing data from entire specialty wings and specific beds within a hospital or hospital group to better allocate resources automatically, Sanderson believes. “I think you will be able to look at the trends and diagnoses that are within the four walls of your hospital and be able to use that as an operational managerial tool,” he says. “You’ll be able to determine what your labor needs are, your food supplies and your medications” — all with better precision than ever before.

This is just around the corner, he notes, and is likely to manifest in the next few years. “It’s about as hot as it can be right now, with respect to interest and applicability,” he says of the AI buzz in hospital finance. Crowe, for one, is using technology that helps CFOs at its client organizations get a better handle on the financial position they’ll finish the year in. That use of technology is likely to expand in the near future, combining present information (including an organization’s current financials, sickness levels and hospital performance) with broader industry trends to project a healthcare company’s future financial performance.
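Crowe's tooling is proprietary; as a minimal sketch of the idea, a year-end position can be projected by fitting a least-squares trend line to the months observed so far and extrapolating the rest of the year (the monthly revenue figures are invented):

```python
def project_year_end(monthly_revenue):
    """Fit a least-squares line to the months observed so far and extrapolate
    the remaining months to estimate total revenue for the year."""
    n = len(monthly_revenue)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(monthly_revenue) / n
    slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, monthly_revenue)) / \
            sum((x - mean_x) ** 2 for x in xs)
    intercept = mean_y - slope * mean_x
    projected = [slope * m + intercept for m in range(n, 12)]
    return sum(monthly_revenue) + sum(projected)

# Six months observed (in $M); the line rises $0.5M/month, so the
# projection fills in months 7-12 along the same trend.
print(project_year_end([10.0, 10.5, 11.0, 11.5, 12.0, 12.5]))  # 153.0
```

A production system would fold in the external signals Sanderson mentions (flu season, weather, competitor moves) as additional regression features rather than relying on a single trend line.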

Better prediction and projection can help health systems take better risks, too, says Sanderson. Bolstered by big data, hospitals know when to take the plunge on investing several million in a new wing or diagnostic machine, for example, and when they’ll need to funnel all resources into keeping pace with more immediate concerns. “It can incorporate things like what happens when flu season hits, if there are implications from weather or if competitors open up particular facilities.”

3. Revenue Cycles: It’s a Journey To Automation

“Every health system has to become more efficient to reduce costs,” explains Sanderson. It’s a simple fact of business. But to truly bolster the revenue cycle, health systems must follow a multi-stage journey to reach maximum efficiency, according to Crowe. The first step in the process is to recognize people and processes that set the standard for optimal operations.

The second is to standardize processes to mimic the highest achievers: encourage everyone to follow the path that one high performer takes. “A lot of consulting sort of stops there,” says Sanderson: “‘This is the way you should be doing it; we want everybody doing it this way.’” But taking it a step further and systematizing your processes can unlock even greater efficiency. That means utilizing the appropriate technology, either by ensuring the effective use of systems already in place or by investing in technology that enables your teams to complete, and repeat, the correct process. Equipping standardized human workflows with AI tools and big data helps people work smarter, but it is still not the most efficient way to handle the revenue cycle. That comes with automation: using AI across organizations to determine best industry practice and delegating repetitive tasks to machines through robotic process automation (RPA), freeing human workers to take on more uniquely human problems and requiring fewer staff to monitor machine performance.
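The final automation stage can be pictured as a simple routing rule: tasks whose type is fully specified and repetitive go to an RPA bot, while judgment calls queue for a human reviewer. The task types here are invented for illustration:

```python
# Hypothetical rule: claim-handling tasks of a known, fully-specified type are
# safe to delegate to the RPA bot; everything else needs human judgment.
AUTOMATABLE = {"resubmit_claim", "post_payment", "update_demographics"}

def route_tasks(tasks):
    """Split a work queue into bot-handled and human-handled lists."""
    bot, human = [], []
    for task in tasks:
        (bot if task["type"] in AUTOMATABLE else human).append(task)
    return bot, human

tasks = [
    {"id": 1, "type": "resubmit_claim"},
    {"id": 2, "type": "appeal_denial"},   # judgment call: needs a human
    {"id": 3, "type": "post_payment"},
]
bot, human = route_tasks(tasks)
print(len(bot), len(human))  # 2 1
```

In practice the `AUTOMATABLE` set grows as standardized processes prove themselves, which is why Sanderson insists the journey's earlier steps can't be skipped.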

“Where are most [organizations] on that spectrum?” asks Sanderson. “Most are somewhere in the middle of that journey.” Progress is linear and must incorporate every step, he says. It’s not possible to skip straight to automation, since the processes being automated might still incorporate glitches.

But as more health systems progress further along that journey, feeding more data into the bigger picture, the benefits become greater too. Crowe currently has access to 1,200 hospitals’ data, across a wide geographic span, and is leveraging that to improve performance across the board. It allows the company to take in the entire scope of current innovation — and help clients learn from best practices and peers. “The future is an amalgamation of data to allow for the best of the best,” says Sanderson. “The idea is to take an entire industry worth of data and build something scalable, and adoptable, for the industry.” In so doing, healthcare organizations allow their professionals to focus on the real goal: keeping people healthy and providing better care for all.

Crowe Contributor

Crowe LLP is a public accounting, consulting, and technology firm with offices around the world.

Source: How AI Is Revolutionizing Healthcare

The Soup Has a Familiar Face: How Artificial Intelligence Is Changing Kroger, Walgreens And Others

In their efforts to eliminate marketing misfires in the aisles, more retailers are investing in ways to physically connect with their customers within their stores. From cooler doors that recognize a face to dressing room mirrors that can dim the lights, retailers are investing in artificial intelligence (AI) for one key purpose: to accurately anticipate customer behavior at scale. This was a recent theme of the National Retail Federation’s Big Show in New York. Specifically, retailers are using AI, facial recognition and other advanced technologies for their physical tracking capabilities…

Source: The Soup Has a Familiar Face: How Artificial Intelligence Is Changing Kroger, Walgreens And Others

China’s Ping An Insurance Firm Partners With Sanya City Authorities to Build DLT-Powered Smart City – Ogwu Osaemezu Emmanuel


Ping An Insurance Group, a highly reputed China-based insurance corporation, has joined forces with the Sanya municipal government to develop a “smart city” powered by blockchain technology, artificial intelligence (AI) and other new technologies, according to a report in the local news source People’s Daily on November 14, 2018. Per sources close to the matter, in a bid to contribute to urban development in China, Ping An Group has reportedly inked a strategic agreement with the Sanya Municipal People’s Government to construct a “Smart City” run entirely by innovative technologies, including the revolutionary blockchain, big data and artificial intelligence…

Read more: https://btcmanager.com/chinas-insurance-dlt-powered-smart-city/

Your kind donations would greatly help us fulfill our future research and endeavors – thank you.

These 5 Innovative AI Companies Are Changing The Way We Live – Rosie Brown


It’s 2018 and the world doesn’t quite look like a scene from “The Jetsons.” However, technological innovation spurred by advancements in computing has allowed artificial intelligence to bring significant changes to the way businesses operate, impacting our everyday lives. Here are five industries impacted by AI…

Read more: https://www.forbes.com/sites/nvidia/2018/11/01/these-5-innovative-ai-companies-are-changing-the-way-we-live/#2dc9a60e5a7f

A.I. Is Helping Scientists Predict When and Where the Next Big Earthquake Will Be – Thomas Fuller & Cade Metz


Countless dollars and entire scientific careers have been dedicated to predicting where and when the next big earthquake will strike. But unlike weather forecasting, which has improved significantly with better satellites and more powerful mathematical models, earthquake prediction has been marred by repeated failure. Some of the world’s most destructive earthquakes, China in 2008, Haiti in 2010 and Japan in 2011 among them…

Read more: https://www.nytimes.com/2018/10/26/technology/earthquake-predictions-artificial-intelligence.html

AI Innovators: This Researcher Uses Deep Learning To Prevent Future Natural Disasters – Lisa Lahde


Meet Damian Borth, chair of the Artificial Intelligence & Machine Learning department at the University of St. Gallen (HSG) in Switzerland and past director of the Deep Learning Competence Center at the German Research Center for Artificial Intelligence (DFKI). He is also a founding co-director of Sociovestix Labs, a social enterprise in the area of financial data science. Damian’s background is in research, where he focuses on large-scale multimedia opinion mining, applying machine learning, and in particular deep learning, to mine insights (trends, sentiment) from online media streams. Damian talks about his work in deep learning and shares why it is an important part of helping prevent future natural disasters…

Read more: https://www.forbes.com/sites/nvidia/2018/09/19/ai-innovators-this-researcher-uses-deep-learning-to-prevent-future-natural-disasters/#be6f7b16cd16

 
