Facebook Will Shut Down Facial Recognition System

Facebook Inc (FB.O) announced on Tuesday it is shutting down its facial recognition system, which automatically identifies users in photos and videos, citing growing societal concerns about the use of such technology.

“Regulators are still in the process of providing a clear set of rules governing its use,” Jerome Pesenti, vice president of artificial intelligence at Facebook, wrote in a blog post. “Amid this ongoing uncertainty, we believe that limiting the use of facial recognition to a narrow set of use cases is appropriate.”

The removal of face recognition by the world’s largest social media platform comes as the tech industry has faced a reckoning over the past few years over the ethics of using the technology.

Critics say facial recognition technology – which is popular among retailers, hospitals and other businesses for security purposes – could compromise privacy, target marginalized groups and normalize intrusive surveillance. IBM has permanently ended facial recognition product sales, and Microsoft Corp (MSFT.O) and Amazon.com Inc (AMZN.O) have suspended sales to police indefinitely.

The news also comes as Facebook has been under intense scrutiny from regulators and lawmakers over user safety and a wide range of abuses on its platforms.

The company, which last week renamed itself Meta Platforms Inc, said more than one-third of Facebook’s daily active users have opted into the face recognition setting on the social media site, and the change will now delete the “facial recognition templates” of more than 1 billion people.

The removal will roll out globally and is expected to be complete by December, a Facebook spokesperson said. Privacy advocacy and digital rights groups welcomed the move.

Alan Butler, executive director of the Electronic Privacy Information Center, said, “For far too long Internet users have suffered personal data abuses at the whims of Facebook and other platforms. EPIC first called for an end to this program in 2011,” though he said comprehensive data protection regulations were still needed in the United States.

Adam Schwartz, senior staff attorney at the Electronic Frontier Foundation, said that although Facebook’s action comes after moves from other tech companies, it could mark a “notable moment in the national turning-away from face recognition.”

Facebook added that its automatic alt text tool, which creates image descriptions for visually impaired people, will no longer include the names of people recognized in photos after the removal of face recognition, but will otherwise function normally.

Facebook did not rule out using facial recognition technology in other products, saying it still sees it as a “powerful tool” for identity verification, for example.

The company’s facial recognition software has long been the subject of scrutiny. The U.S. Federal Trade Commission included it among the concerns when it fined Facebook $5 billion to settle privacy complaints in 2019. A judge this year approved Facebook’s $650 million settlement of a class action in Illinois over allegations it collected and stored biometric data of users without proper consent.


Source: Facebook will shut down facial recognition system | Reuters



How AI Is Revolutionizing Health Care


The market value of AI in the health care industry is predicted to reach $6.6 billion by 2021. Artificial intelligence is growing in popularity throughout various industries. Most of us associate AI with things like robots, Alexa and self-driving cars.

But AI is a lot more than that. AI experts see it as a revolutionary technology that could benefit many industries.

The impact of AI in the health care sector is genuinely life-changing. It is driving innovations in clinical operations, drug development, surgery and data management. AI technology is also rapidly finding its way into hospitals.

AI applications are centered on three main investment areas: digitization, engagement and diagnostics. Looking at some examples of artificial intelligence in health care, it is clear that there are exciting breakthroughs in incorporating AI in medical services.

Let’s explore some of the amazing applications of AI that are revolutionizing health care.

Robot Doctors

AI does not get more exciting than robots. However, these are not the humanlike droids from science fiction films. We are talking complex and intelligent machines designed for specific tasks.

In 2017, a robot in China passed the medical licensing exam using only its AI brain. In the same year, the first semi-automated surgical robot was used to suture blood vessels as narrow as 0.03 mm.

Today, top-of-the-line hospitals are awash with intelligent machines. Surgical robots operate with a precision rivaling that of the most skilled surgeons. A Chinese robot dentist equipped with AI can autonomously perform complex and delicate dental procedures.

What about robot-assisted surgery?


Intelligent robots are also used for transport, recovery assistance and consultation. Transport nurse robots navigate hospital pathways to deliver medical supplies. Most of these robots are not fully automated, but they show great potential to change the way medical procedures are performed.

Clinical Diagnosis

AI algorithms can diagnose some diseases faster and more accurately than doctors. They are particularly successful at detecting diseases from image-based test results.

Late last year, Google’s DeepMind trained a neural network to accurately detect over 50 types of eye diseases simply by analyzing 3D retinal scans. This shows just how effective AI technology can be at identifying anomalies.

Effective treatment of cancer depends heavily on early detection and preemptive measures. Certain types of cancer, such as melanoma, are notoriously difficult to detect during the early stages. AI algorithms can scan and analyze biopsy images and MRI scans 1,000 times faster than doctors and can diagnose with an 87% accuracy rate, helping make diagnosis errors and delays far less common.
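To make the idea of image-derived diagnosis concrete, here is a minimal, hedged sketch that trains a simple classifier on scikit-learn’s built-in Wisconsin breast cancer dataset, whose features are computed from digitized biopsy images. It is an illustration of the general workflow only, not any of the clinical systems described above, which rely on deep learning over raw scans and far larger datasets.

```python
# A minimal sketch, not a clinical system: classify tumors from features that
# were computed from digitized biopsy images (scikit-learn's built-in
# Wisconsin breast cancer dataset). Real diagnostic AI uses deep learning on
# raw scans and far larger datasets.
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)   # 30 image-derived features per biopsy sample
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
model.fit(X_train, y_train)

accuracy = accuracy_score(y_test, model.predict(X_test))
print(f"Held-out diagnostic accuracy: {accuracy:.1%}")
```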


Precision Medication

Precision medication means dispensing the right treatment for each patient based on their characteristics and behavior. Providing the appropriate treatment is just as essential as a correct diagnosis, and it mostly comes down to the exact prescription and recovery routine that will produce the best outcome.

Precision medicine depends on the interpretation of vast volumes of data. The patient’s data is used in determining the most effective medication. The data includes treatment history, restrictions, hereditary traits and lifestyle.

Data organization happens to be a strong suit for machine learning and AI algorithms. AI-powered data management systems seamlessly store and organize large amounts of data to draw meaningful conclusions and predictions.

Hospitals and other health care facilities collect a lot of information from their patients. The data ends up sitting on a hard drive or in a file cabinet. AI medication systems can browse through these archives to assist doctors in formulating precision medication for individual patients.

AI prescription systems are now equipped to deal with non-adherence to medical prescriptions. They do this by studying the patient’s medical history and determining the likelihood that the patient will take the medication as prescribed.

Drug Discovery

Drug development is a tedious venture that may take years and thousands of failed attempts. It can cost medical researchers billions of dollars in the process. Only five in 5,000 drugs that begin pre-clinical trials ever make it to human testing. And only one of the five may find its way to pharmacies.

Pharmaceutical giants like Sanofi and Pfizer are teaming up with tech companies such as IBM and Google, which are already heavily invested in AI technology. The idea is to build drug discovery programs using deep learning and AI, and the results are already paying off.

Drug discovery is now data-driven rather than relying on the traditional trial-and-error approach. Intelligent simulations of better cures are possible through analysis of existing medicines, patients and pathogens. Researchers have even been able to repurpose existing drugs to combat new infections, a process that now takes days rather than months or years, thanks to AI research platforms.

Personal Health Assistants

An everyday example of artificial intelligence in health care is personal health monitoring.

Thanks to the internet of medical things (IoMT) and advanced AI, there is a host of consumer-oriented products geared toward promoting good health. Over the last few years, we have seen mobile apps, wearables, and discrete monitors that continually collect data and check vital signs.


These gadgets use the data to make recommendations aimed at remedying any irregularities. Most of these devices store data locally or online, where it can be retrieved and used by medical practitioners as a medical report.

Adopting Examples Of Artificial Intelligence In Health Care

AI is here to stay. It will not replace doctors with machines but work alongside them. The goal is to achieve cheaper and more efficient health care services. Being a relatively new technology in health care, AI still has a long way to go, but the progress is impressive.

We can expect improvements and new applications as this amazing technology continues to advance with time. The improvements will not only be in the health care industry but in other areas as well.


Terence Mills, CEO of AI.io and Moonshot, is an AI pioneer and digital technology specialist.

Source: How AI Is Revolutionizing Health Care


Artificial Intelligence Will Help Determine If You Get Your Next Job

With parents using artificial intelligence to scan prospective babysitters’ social media and an endless slew of articles explaining how your résumé can “beat the bots,” you might be wondering whether a robot will be offering you your next job.

We’re not there yet, but recruiters are increasingly using AI to make the first round of cuts and to determine whether a job posting is even advertised to you. Often trained on data collected about previous or similar applicants, these tools can cut down on the effort recruiters need to expend in order to make a hire. Last year, 67 percent of hiring managers and recruiters surveyed by LinkedIn said AI was saving them time.

But critics argue that such systems can introduce bias, lack accountability and transparency, and aren’t guaranteed to be accurate. Take, for instance, the Utah-based company HireVue, which sells a job interview video platform that can use artificial intelligence to assess candidates and, it claims, predict their likelihood to succeed in a position. The company says it uses on-staff psychologists to help develop customized assessment algorithms that reflect the ideal traits for a particular role a client (usually a company) hopes to hire for, like a sales representative or computer engineer.

Photo: Output of a Google Vision artificial intelligence system performing facial recognition on a photograph of a man in San Ramon, California, on November 22, 2019. (Smith Collection/Gado/Getty Images)

That algorithm is then used to analyze how individual candidates answer preselected questions in a recorded video interview, grading their verbal responses and, in some cases, facial movements. HireVue claims the tool — which is used by about 100 clients, including Hilton and Unilever — is more predictive of job performance than human interviewers conducting the same structured interviews.

But last month, lawyers at the Electronic Privacy Information Center (EPIC), a privacy rights nonprofit, filed a complaint with the Federal Trade Commission, pushing the agency to investigate the company for potential bias, inaccuracy, and lack of transparency. It also accused HireVue of engaging in “deceptive trade practices” because the company claims it doesn’t use facial recognition. (EPIC argues HireVue’s facial analysis qualifies as facial recognition.)

The complaint follows the introduction of the Algorithmic Accountability Act in Congress earlier this year, which would grant the FTC authority to create regulations to check so-called “automated decision systems” for bias. Meanwhile, the Equal Employment Opportunity Commission (EEOC) — the federal agency that deals with employment discrimination — is reportedly now investigating at least two discrimination cases involving job decision algorithms, according to Bloomberg Law.

AI can pop up throughout the recruitment and hiring process

Recruiters can make use of artificial intelligence throughout the hiring process, from advertising and attracting potential applicants to predicting candidates’ job performance. “Just like with the rest of the world’s digital advertisement, AI is helping target who sees what job descriptions [and] who sees what job marketing,” explains Aaron Rieke, a managing director at Upturn, a DC-based nonprofit digital technology research group.

And it’s not just a few outlier companies, like HireVue, that use predictive AI. Vox’s own HR staff use LinkedIn Recruiter, a popular tool that uses artificial intelligence to rank candidates. Similarly, the jobs platform ZipRecruiter uses AI to match candidates with nearby jobs that are potentially good fits, based on the traits the applicants have shared with the platform — like their listed skills, experience, and location — and previous interactions between similar candidates and prospective employers. For instance, because I applied for a few San Francisco-based tutoring gigs on ZipRecruiter last year, I’ve continued to receive emails from the platform advertising similar jobs in the area.

Overall, the company says its AI has trained on more than 1.5 billion employer-candidate interactions.

Platforms like Arya — which says it’s been used by Home Depot and Dyson — go even further, using machine learning to find candidates based on data that might be available on a company’s internal database, public job boards, social platforms like Facebook and LinkedIn, and other profiles available on the open web, like those on professional membership sites.

Arya claims it’s even able to predict whether an employee is likely to leave their old job and take a new one, based on the data it collects about a candidate, such as their promotions, movement between previous roles and industries, and the predicted fit of a new position, as well as data about the role and industry more broadly.

Another use of AI is to screen through application materials, like résumés and assessments, in order to recommend which candidates recruiters should contact first. Somen Mondal, the CEO and co-founder of one such screening and matching service, Ideal, says these systems do more than automatically search résumés for relevant keywords.

For instance, Ideal can learn to understand and compare experiences across candidates’ résumés and then rank the applicants by how closely they match an opening. “It’s almost like a recruiter Googling a company [listed on an application] and learning about it,” explains Mondal, who says his platform is used to screen 5 million candidates a month.
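As a rough illustration of how a screening system can score applications against an opening, the sketch below ranks made-up résumés against a made-up job description using TF-IDF cosine similarity, a common text-matching baseline. It is not Ideal’s actual (undisclosed) method; all names and text are invented.

```python
# Hypothetical sketch: rank candidate résumés against a job description using
# TF-IDF cosine similarity. Commercial screening tools use richer models; this
# only illustrates the general idea of scoring text against an opening.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

job_description = "Sales representative with CRM experience and strong communication skills"
resumes = {
    "candidate_a": "Five years in B2B sales, managed CRM pipeline, strong communicator",
    "candidate_b": "Software engineer experienced in Python and distributed systems",
    "candidate_c": "Retail sales associate, customer service and communication skills",
}

vectorizer = TfidfVectorizer(stop_words="english")
matrix = vectorizer.fit_transform([job_description] + list(resumes.values()))

# First row is the job description; remaining rows are the résumés.
scores = cosine_similarity(matrix[0:1], matrix[1:]).ravel()
for name, score in sorted(zip(resumes, scores), key=lambda pair: -pair[1]):
    print(f"{name}: similarity {score:.2f}")
```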

But AI doesn’t just operate behind the scenes. If you’ve ever applied for a job and then been engaged by a text conversation, there’s a chance you’re talking to a recruitment bot. Chatbots that use natural-language understanding created by companies like Mya can help automate the process of reaching out to previous applicants about a new opening at a company, or finding out whether an applicant meets a position’s basic requirements — like availability — thus eliminating the need for human phone-screening interviews. Mya, for instance, can reach out over text and email, as well as through messaging applications like Facebook and WhatsApp.

Another burgeoning use of artificial intelligence in job selection is talent and personality assessments. One company championing this application is Pymetrics, which sells neuroscience computer games for candidates to play (one such game involves hitting the spacebar whenever a red circle, but not a green circle, flashes on the screen).

These games are meant to predict candidates’ “cognitive and personality traits.” Pymetrics says on its website that the system studies “millions of data points” collected from the games to match applicants to jobs judged to be a good fit, based on Pymetrics’ predictive algorithms.

Proponents say AI systems are faster and can consider information human recruiters can’t calculate quickly

These tools help HR departments move more quickly through large pools of applicants and ultimately make it cheaper to hire. Proponents say they can be more fair and more thorough than overworked human recruiters skimming through hundreds of résumés and cover letters.

“Companies just can’t get through the applications. And if they do, they’re spending — on average — three seconds,” Mondal says. “There’s a whole problem with efficiency.” He argues that using an AI system can ensure that every résumé, at the very least, is screened. After all, one job posting might attract thousands of applications, with a huge share from people who are completely unqualified for a role.

Such tools can automatically recognize traits in the application materials from previous successful hires and look for signs of that trait among materials submitted by new applicants. Mondal says systems like Ideal can consider between 16 and 25 factors (or elements) in each application, pointing out that, unlike humans, it can calculate something like commute distance in “milliseconds.”
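The commute-distance factor is easy to make concrete: the short sketch below uses the standard haversine formula to compute the great-circle distance between a candidate’s home and an office. The coordinates are invented, and this is not any vendor’s actual code.

```python
# Hypothetical sketch: great-circle (haversine) distance between a candidate's
# home and an office -- the kind of factor a screening system can compute instantly.
from math import asin, cos, radians, sin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometres between two (lat, lon) points."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371.0 * asin(sqrt(a))   # mean Earth radius ~6371 km

# Example: candidate in Oakland, office in downtown San Francisco (approximate coordinates)
print(f"Commute distance: {haversine_km(37.8044, -122.2712, 37.7749, -122.4194):.1f} km")
```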

“You can start to fine-tune the system with not just the people you’ve brought in to interview, or not just the people that you’ve hired, but who ended up doing well in the position. So it’s a complete loop,” Mondal explains. “As a human, it’s very difficult to look at all that data across the lifecycle of an applicant. And [with AI] this is being done in seconds.”

These systems typically operate on a scale greater than a human recruiter. For instance, HireVue claims the artificial intelligence used in its video platform evaluates “tens of thousands of factors.” Even if companies are using the same AI-based hiring tool, they’re likely using a system that’s optimized to their own hiring preferences. Plus, an algorithm is likely changing if it’s continuously being trained on new data.

Another service, Humantic, claims it can get a sense of candidates’ psychology based on their résumés, LinkedIn profiles, and other text-based data an applicant might volunteer to submit, by mining through and studying their use of language (the product is inspired by the field of psycholinguistics). The idea is to eliminate the need for additional personality assessments. “We try to recycle the information that’s already there,” explains Amarpreet Kalkat, the company’s co-founder. He says the service is used by more than 100 companies.

Proponents of these recruiting tools also claim that artificial intelligence can be used to avoid human biases, like an unconscious preference for graduates of a particular university, or a bias against women or a racial minority. (But AI often amplifies bias; more on that later.) They argue that AI can help strip out — or abstract — information related to a candidate’s identity, like their name, age, gender, or school, and more fairly consider applicants.

The idea that AI might clamp down on — or at least do better than — biased humans inspired California lawmakers earlier this year to introduce a bill urging fellow policymakers to explore the use of new technology, including “artificial intelligence and algorithm-based technologies,” to “reduce bias and discrimination in hiring.”

AI tools reflect who builds and trains them

These AI systems are only as good as the data they’re trained on and the humans that build them. If a résumé-screening machine learning tool is trained on historical data, such as résumés collected from a company’s previously hired candidates, the system will inherit both the conscious and unconscious preferences of the hiring managers who made those selections. That approach could help find stellar, highly qualified candidates. But Rieke warns that method can also pick up “silly patterns that are nonetheless real and prominent in a data set.”

One such résumé-screening tool identified being named Jared and having played lacrosse in high school as the best predictors of job performance, as Quartz reported.

If you’re a former high school lacrosse player named Jared, that particular tool might not sound so bad. But systems can also learn to be racist, sexist, ageist, and biased in other nefarious ways. For instance, Reuters reported last year that Amazon had created a recruitment algorithm that unintentionally tended to favor male applicants over female applicants for certain positions. The system was trained on a decade of résumés submitted to the company, which Reuters reported were mostly from men.

Photo: A visitor at Intel’s Artificial Intelligence (AI) Day walks past a signboard in Bangalore, India, on April 4, 2017. (Manjunath Kiran/AFP via Getty Images)

(An Amazon spokesperson told Recode that the system was never used and was abandoned for several reasons, including because the algorithms were primitive and that the models randomly returned unqualified candidates.)

Mondal says there is no way to use these systems without regular, extensive auditing. That’s because, even if you explicitly instruct a machine learning tool not to discriminate against women, it might inadvertently learn to discriminate against other proxies associated with being female, like having graduated from a women’s college.

“You have to have a way to make sure that you aren’t picking people who are grouped in a specific way and that you’re only hiring those types of people,” he says. Ensuring that these systems are not introducing unjust bias means frequently checking that new hires don’t disproportionately represent one demographic group.

But there’s skepticism that efforts to “de-bias” algorithms and AI are a complete solution. And Upturn’s report on equity and hiring algorithms notes that “[de-biasing] best practices have yet to crystallize [and] [m]any techniques maintain a narrow focus on individual protected characteristics like gender or race, and rarely address intersectional concerns, where multiple protected traits produce compounding disparate effects.”

And if a job is advertised on an online platform like Facebook, it’s possible you won’t even see a posting because of biases produced by that platform’s algorithms. There’s also concern that systems like HireVue’s could inherently be built to discriminate against people with certain disabilities.

Critics are also skeptical of whether these tools do what they say, especially when they make broad claims about a candidate’s “predicted” psychology, emotion, and suitability for a position. Adina Sterling, an organizational behavior professor at Stanford, also notes that, if not designed carefully, an algorithm could drive its preferences toward a single type of candidate. Such a system might miss a more unconventional applicant who could nevertheless excel, like an actor applying for a job in sales.

“Algorithms are good for economies of scale. They’re not good for nuance,” she explains, adding that she doesn’t believe companies are being vigilant enough when studying the recruitment AI tools they use and checking what these systems actually optimize for.

Who regulates these tools?

Employment lawyer Mark Girouard says AI and algorithmic selection systems fall under the Uniform Guidelines on Employee Selection Procedures, guidance established in 1978 by federal agencies that guide companies’ selection standards and employment assessments.

Many of these AI tools say they follow the four-fifths rule, a statistical “rule of thumb” benchmark established under those employee selection guidelines. The rule is used to compare the selection rate of applicant demographic groups and investigate whether selection criteria might have had an adverse impact on a protected minority group.
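As a worked example with invented numbers, the sketch below applies the four-fifths rule by comparing each group’s selection rate to the highest group’s rate and flagging ratios below 0.8.

```python
# Worked example of the four-fifths (80%) rule with invented numbers.
# A group's selection rate below 80% of the highest group's rate is treated
# as evidence of possible adverse impact.
applicants = {"group_a": 200, "group_b": 150}   # hypothetical applicant counts
hired      = {"group_a": 40,  "group_b": 18}    # hypothetical hires

rates = {group: hired[group] / applicants[group] for group in applicants}
best_rate = max(rates.values())

for group, rate in rates.items():
    ratio = rate / best_rate
    flag = "potential adverse impact" if ratio < 0.8 else "passes four-fifths rule"
    print(f"{group}: selection rate {rate:.1%}, impact ratio {ratio:.2f} -> {flag}")
```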

But experts have noted that the rule is just one test, and Rieke emphasizes that passing the test doesn’t imply these AI tools do what they claim. A system that picked candidates randomly could pass the test, he says. Girouard explains that as long as a tool does not have a disparate impact on race or gender, there’s no law on the federal level that requires that such AI tools work as intended.

In its case against HireVue, EPIC argues that the company has failed to meet established AI transparency guidelines, including artificial intelligence principles outlined by the Organization for Economic Co-operation and Development that have been endorsed by the U.S. and 41 other countries. HireVue told Recode that it follows the standards set by the Uniform Guidelines, as well as guidelines set by other professional organizations. The company also says its systems are trained on a diverse data set and that its tools have helped its clients increase the diversity of their staff.

At the state level, Illinois has made some initial headway in promoting the transparent use of these tools. Its Artificial Intelligence Video Interview Act takes effect in January and requires employers using artificial intelligence-based video analysis technology to notify applicants, explain how the technology works, and obtain their consent.

Still, Rieke says few companies release the methodologies used in their bias audits in “meaningful detail.” He’s not aware of any company that has released the results of an audit conducted by a third party.

Meanwhile, senators have pushed the EEOC to investigate whether biased facial analysis algorithms could violate anti-discrimination laws, and experts have previously warned the agency about the risk of algorithmic bias. But the EEOC has yet to release any specific guidance regarding algorithmic decision-making or artificial intelligence-based tools and did not respond to Recode’s request for comment.

Rieke did highlight one potential upside for applicants. Should lawmakers one day force companies to release the results of their AI hiring selection systems, job candidates could gain new insight into how to improve their applications. But as to whether AI will ever make the final call, Sterling says that’s a long way off.

“Hiring is an extremely social process,” she explains. “Companies don’t want to relinquish it to tech.”


Open Sourced is made possible by Omidyar Network. All Open Sourced content is editorially independent and produced by our journalists.


Source: Artificial intelligence will help determine if you get your next job

Video: “The Future of Your Job in the Age of AI | Robots & Us” (WIRED). Robot co-workers and artificial intelligence assistants are becoming more common in the workplace. Could they edge human employees out?

How Artificial Intelligence Could Save Psychiatry

Five years from now, the U.S.’ already overburdened mental health system may be short by as many as 15,600 psychiatrists as the growth in demand for their services outpaces supply, according to a 2017 report from the National Council for Behavioral Health. But some proponents say that, by then, an unlikely tool—artificial intelligence—may be ready to help mental health practitioners mitigate the impact of the deficit.

Medicine is already a fruitful area for artificial intelligence; it has shown promise in diagnosing disease, interpreting images and zeroing in on treatment plans. Though psychiatry is in many ways a uniquely human field, requiring emotional intelligence and perception that computers can’t simulate, even here, experts say, AI could have an impact. The field, they argue, could benefit from artificial intelligence’s ability to analyze data and pick up on patterns and warning signs so subtle humans might never notice them.

“Clinicians actually get very little time to interact with patients,” says Peter Foltz, a research professor at the University of Colorado Boulder who this month published a paper about AI’s promise in psychiatry. “Patients tend to be remote, it’s very hard to get appointments and oftentimes they may be seen by a clinician [only] once every three months or six months.”

AI could be an effective way for clinicians to both make the best of the time they do have with patients, and bridge any gaps in access, Foltz says. AI-aided data analysis could help clinicians make diagnoses more quickly and accurately, getting patients on the right course of treatment faster—but perhaps more excitingly, Foltz says, apps or other programs that incorporate AI could allow clinicians to monitor their patients remotely, alerting them to issues or changes that arise between appointments and helping them incorporate that knowledge into treatment plans. That information could be lifesaving, since research has shown that regularly checking in with patients who are suicidal or in mental distress can keep them safe.

Some mental-health apps and programs already incorporate AI—like Woebot, an app-based mood tracker and chatbot that combines AI and principles from cognitive behavioral therapy—but it’ll probably be some five to 10 years before algorithms are routinely used in clinics, according to psychiatrists interviewed by TIME.

Even then, Dr. John Torous, director of digital psychiatry at Beth Israel Deaconess Medical Center in Boston and chair of the American Psychiatric Association’s Committee on Mental Health Information Technology, cautions that “artificial intelligence is only as strong as the data it’s trained on,” and, he says, mental health diagnostics have not been quantified well enough to program an algorithm. It’s possible that will happen in the future, with more and larger psychological studies, but, Torous says “it’s going to be an uphill challenge.”

Not everyone shares that position. Speech and language have emerged as two of the clearest applications for AI in psychiatry, says Dr. Henry Nasrallah, a psychiatrist at the University of Cincinnati Medical Center who has written about AI’s place in the field. Speech and mental health are closely linked, he explains.

Talking in a monotone can be a sign of depression; fast speech can point to mania; and disjointed word choice can be connected to schizophrenia. When these traits are pronounced enough, a human clinician might pick up on them—but AI algorithms, Nasrallah says, could be trained to flag signals and patterns too subtle for humans to detect.
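As a loose illustration of the kind of signal involved (not any clinical or commercial tool), the sketch below computes two crude features from a pre-extracted pitch contour and word timings: pitch variability, where a flat contour can suggest monotone delivery, and speaking rate, which can be elevated in mania. The inputs are assumed to come from an upstream pitch tracker and speech recognizer, and the thresholds are arbitrary examples, not validated cut-offs.

```python
# Hypothetical sketch: crude speech features of the kind an AI screening tool
# might track. Inputs are assumed to come from an upstream pitch tracker and
# speech recognizer; the thresholds are arbitrary illustrations, not
# clinically validated cut-offs.
import numpy as np

def speech_flags(pitch_hz: np.ndarray, word_count: int, duration_s: float) -> dict:
    voiced = pitch_hz[pitch_hz > 0]                 # ignore unvoiced/silent frames
    pitch_sd = float(np.std(voiced))                # low variability ~ monotone delivery
    words_per_min = 60.0 * word_count / duration_s  # fast speech can accompany mania
    return {
        "pitch_sd_hz": round(pitch_sd, 1),
        "words_per_min": round(words_per_min, 1),
        "monotone_flag": pitch_sd < 15.0,           # example threshold only
        "fast_speech_flag": words_per_min > 200.0,  # example threshold only
    }

# Example with synthetic data: a fairly flat pitch contour over 30 seconds of speech
contour = 110 + 5 * np.random.randn(3000)           # ~110 Hz with little variation
print(speech_flags(contour, word_count=70, duration_s=30.0))
```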

Foltz and his team in Boulder are working in this space, as are big-name companies like IBM. Foltz and his colleagues designed a mobile app that takes patients through a series of repeatable verbal exercises, like telling a story and answering questions about their emotional state. An AI system then assesses those soundbites for signs of mental distress, both by analyzing how they compare to the individual’s previous responses, and by measuring the clips against responses from a larger patient population.

The team tested the system on 225 people living in either Northern Norway or rural Louisiana—two places with inadequate access to mental health care—and found that the app was at least as accurate as clinicians at picking up on speech-based signs of mental distress.


Written language is also a promising area for AI-assisted mental health care, Nasrallah says. Studies have shown that machine learning algorithms trained to assess word choice and order are better than clinicians at distinguishing between real and fake suicide notes, meaning they’re good at picking up on signs of distress. Using these systems to regularly monitor a patient’s writing, perhaps through an app or periodic remote check-in with mental health professionals, could feasibly offer a way to assess their risk of self-harm.

Even if these applications do pan out, Torous cautions that “nothing has ever been a panacea.” On one hand, he says, it’s exciting that technology is being pitched as a solution to problems that have long plagued the mental health field; but, on the other hand, “in some ways there’s so much desperation to make improvements to mental health that perhaps the tools are getting overvalued.”

Nasrallah and Foltz emphasize that AI isn’t meant to replace human psychiatrists or completely reinvent the wheel. (“Our brain is a better computer than any AI,” Nasrallah says.) Instead, they say, it can provide data and insights that will streamline treatment.

Alastair Denniston, an ophthalmologist and honorary professor at the U.K.’s University of Birmingham who this year published a research review about AI’s ability to diagnose disease, argues that, if anything, technology can help doctors focus on the human elements of medicine, rather than getting bogged down in the minutiae of diagnosis and data collection.

Artificial intelligence “may allow us to have more time in our day to spend actually communicating effectively and being more human,” Denniston says. “Rather than being diagnostic machines… [doctors can] provide some of that empathy that can get swallowed up by the business of what we do.”

By Jamie Ducharme

November 20, 2019

Source: How Artificial Intelligence Could Save Psychiatry | Time

Video: Chris Lovejoy, a doctor and clinical data scientist in London, discusses AI in psychiatry, covering diagnosis, monitoring, treatment, clinician efficiency, and key concerns. Accompanying article: https://chrislovejoy.me/psychiatry/

The 7 Biggest Technology Trends In 2020 Everyone Must Get Ready For Now

We are amidst the 4th Industrial Revolution, and technology is evolving faster than ever. Companies and individuals that don’t keep up with some of the major tech trends run the risk of being left behind. Understanding the key trends will allow people and businesses to prepare and grasp the opportunities. As a business and technology futurist, it is my job to look ahead and identify the most important trends. In this article, I share with you the seven most imminent trends everyone should get ready for in 2020.

AI-as-a-service

Artificial Intelligence (AI) is one of the most transformative tech evolutions of our times. As I highlighted in my book ‘Artificial Intelligence in Practice’, most companies have started to explore how they can use AI to improve the customer experience and to streamline their business operations. This will continue in 2020, and while people will increasingly become used to working alongside AIs, designing and deploying our own AI-based systems will remain an expensive proposition for most businesses.

For this reason, most AI applications will continue to be delivered through providers of as-a-service platforms, which allow us to simply feed in our own data and pay for the algorithms or compute resources as we use them.

Currently, these platforms, provided by the likes of Amazon, Google, and Microsoft, tend to be somewhat broad in scope, with (often expensive) custom-engineering required to apply them to the specific tasks an organization may require. During 2020, we will see wider adoption and a growing pool of providers that are likely to start offering more tailored applications and services for specific or specialized tasks. This will mean no company will have any excuses left not to use AI.


5G data networks

The 5th generation of mobile internet connectivity is going to give us super-fast download and upload speeds as well as more stable connections. While 5G mobile data networks became available for the first time in 2019, they were mostly still expensive and limited to functioning in confined areas or major cities. 2020 is likely to be the year when 5G really starts to fly, with more affordable data plans as well as greatly improved coverage, meaning that everyone can join in the fun.

Super-fast data networks will not only give us the ability to stream movies and music at higher quality when we’re on the move. The greatly increased speeds mean that mobile networks will become more usable even than the wired networks running into our homes and businesses. Companies must consider the business implications of having super-fast and stable internet access anywhere. The increased bandwidth will enable machines, robots, and autonomous vehicles to collect and transfer more data than ever, leading to advances in the Internet of Things (IoT), smart machinery and smart cities.

Autonomous Driving

While we still aren’t at the stage where we can expect to routinely travel in, or even see, autonomous vehicles in 2020, they will undoubtedly continue to generate a significant amount of excitement.

Tesla chief Elon Musk has said he expects his company to create a truly “complete” autonomous vehicle by this year, and the number of vehicles capable of operating with a lesser degree of autonomy – such as automated braking and lane-changing – will become an increasingly common sight. In addition to this, other in-car systems not directly connected to driving, such as security and entertainment functions – will become increasingly automated and reliant on data capture and analytics. Google’s sister-company Waymo has just completed a trial of autonomous taxis in California, where it transported more than Xk people.

It won’t just be cars, of course – trucking and shipping are becoming more autonomous, and breakthroughs in this space are likely to continue to hit the headlines throughout 2020.

With the maturing of autonomous driving technology, we will also increasingly hear about the measures that will be taken by regulators, legislators, and authorities. Changes to laws, existing infrastructure, and social attitudes are all likely to be required before autonomous driving becomes a practical reality for most of us. During 2020, it’s likely we will start to see the debate around autonomous driving spread outside of the tech world, as more and more people come round to the idea that the question is not “if,” but “when,” it will become a reality.

Personalized and predictive medicine

Technology is currently transforming healthcare at an unprecedented rate. Our ability to capture data from wearable devices such as smartwatches will give us the ability to increasingly predict and treat health issues in people even before they experience any symptoms.

When it comes to treatment, we will see much more personalized approaches. This is also referred to as precision medicine, which allows doctors to more precisely prescribe medicines and apply treatments, thanks to a data-driven understanding of how effective they are likely to be for a specific patient.

Although precision medicine is not a new idea, recent breakthroughs in technology, especially in the fields of genomics and AI, are giving us a greater understanding of how different people’s bodies are better or worse equipped to fight off specific diseases, as well as how they are likely to react to different types of medication or treatment.

Throughout 2020 we will see new applications of predictive healthcare and the introduction of more personalized and effective treatments to ensure better outcomes for individual patients.

Computer Vision

In computer terms, “vision” involves systems that are able to identify items, places, objects or people from visual images – those collected by a camera or sensor. It’s this technology that allows your smartphone camera to recognize which part of the image it’s capturing is a face, and powers technology such as Google Image Search.

As we move through 2020, we’re going to see computer vision equipped tools and technology rolled out for an ever-increasing number of uses. It’s fundamental to the way autonomous cars will “see” and navigate their way around danger. Production lines will employ computer vision cameras to watch for defective products or equipment failures, and security cameras will be able to alert us to anything out of the ordinary, without requiring 24/7 monitoring.
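A minimal sketch of the basic face-finding step (the kind of capability that lets a phone camera locate a face in the frame) is shown below, using OpenCV’s bundled Haar-cascade detector. This is a classical baseline rather than the proprietary neural systems used by phones, cars or security cameras, and the image path is a placeholder.

```python
# Minimal sketch: locate faces in an image with OpenCV's bundled Haar cascade.
# A classical baseline detector, not the neural systems used by smartphones or
# airports; "photo.jpg" is a placeholder path.
import cv2

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

image = cv2.imread("photo.jpg")  # placeholder path; replace with a real image
if image is None:
    raise FileNotFoundError("photo.jpg not found")

gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

for (x, y, w, h) in faces:
    cv2.rectangle(image, (x, y), (x + w, y + h), (0, 255, 0), 2)  # box around each face

print(f"Detected {len(faces)} face(s)")
cv2.imwrite("photo_with_faces.jpg", image)
```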

Computer vision is also enabling face recognition, which we will hear a lot about in 2020. We have already seen how useful the technology is in controlling access to our smartphones in the case of Apple’s FaceID and how Dubai airport uses it to provide a smoother customer journey. However, as the use cases grow in 2020, we will also have more debates about limiting the use of this technology because of its potential to erode privacy and enable ‘Big Brother’-like state control.

Extended Reality

Extended Reality (XR) is a catch-all term that covers several new and emerging technologies being used to create more immersive digital experiences. More specifically, it refers to virtual, augmented, and mixed reality. Virtual reality (VR) provides a fully digitally immersive experience where you enter a computer-generated world using headsets that blend out the real world. Augmented reality (AR) overlays digital objects onto the real world via smartphone screens or displays (think Snapchat filters). Mixed reality (MR) is an extension of AR, that means users can interact with digital objects placed in the real world (think playing a holographic piano that you have placed into your room via an AR headset).

These technologies have been around for a few years now but have largely been confined to the world of entertainment – with Oculus Rift and Vive headsets providing the current state-of-the-art in videogames, and smartphone features such as camera filters and Pokemon Go-style games providing the most visible examples of AR.

From 2020 expect all of that to change, as businesses get to grips with the wealth of exciting possibilities offered by both current forms of XR. Virtual and augmented reality will become increasingly prevalent for training and simulation, as well as offering new ways to interact with customers.

Blockchain Technology

Blockchain is a technology trend that I have covered extensively this year, and yet you’re still likely to get blank looks if you mention it in non-tech-savvy company. 2020 could finally be the year when that changes, though. Blockchain is essentially a digital ledger used to record transactions but secured due to its encrypted and decentralized nature. During 2019 some commentators began to argue that the technology was over-hyped and perhaps not as useful as first thought. However, continued investment by the likes of FedEx, IBM, Walmart and Mastercard during 2019 is likely to start to show real-world results, and if they manage to prove its case, that could quickly lead to an increase in adoption by smaller players.
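As a minimal sketch of the encrypted-ledger idea described above, the toy chain below links each record to the previous one with a SHA-256 hash, so tampering with any earlier entry invalidates everything that follows. Real blockchains add digital signatures, consensus and a decentralized network on top of this.

```python
# Toy sketch of a hash-linked ledger: each block commits to the previous block's
# hash, so altering any historical record breaks the chain. Real blockchains add
# signatures, consensus, and a decentralized network on top of this idea.
import hashlib
import json

def block_hash(block: dict) -> str:
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def append_block(chain: list, data: str) -> None:
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"index": len(chain), "data": data, "prev_hash": prev})

def is_valid(chain: list) -> bool:
    return all(chain[i]["prev_hash"] == block_hash(chain[i - 1]) for i in range(1, len(chain)))

ledger: list = []
append_block(ledger, "Alice pays Bob 5 units")
append_block(ledger, "Bob pays Carol 2 units")
print("valid after writes:", is_valid(ledger))        # True

ledger[0]["data"] = "Alice pays Bob 500 units"         # tamper with history
print("valid after tampering:", is_valid(ledger))      # False
```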

And if things go to plan, 2020 will also see the launch of Facebook’s own blockchain-based cryptocurrency Libra, which is going to create quite a stir.

If you would like to keep track of these technologies, simply follow me on YouTube, Twitter, LinkedIn, and Instagram, or head to my website for many more in-depth articles on these topics.

Follow me on Twitter or LinkedIn. Check out my website.

Bernard Marr is an internationally best-selling author, popular keynote speaker, futurist, and a strategic business & technology advisor to governments and companies. He helps organisations improve their business performance, use data more intelligently, and understand the implications of new technologies such as artificial intelligence, big data, blockchains, and the Internet of Things. Why don’t you connect with Bernard on Twitter (@bernardmarr), LinkedIn (https://uk.linkedin.com/in/bernardmarr) or instagram (bernard.marr)?

Source: The 7 Biggest Technology Trends In 2020 Everyone Must Get Ready For Now
