AI Breakthrough Could Spark Medical Revolution

Artificial intelligence has been used to predict the structures of almost every protein made by the human body. The development could help supercharge the discovery of new drugs to treat disease, alongside other applications. Proteins are essential building blocks of living organisms; every cell in our bodies is packed with them.

Understanding the shapes of proteins is critical for advancing medicine, but until now, only a fraction of these have been worked out. Researchers used a program called AlphaFold to predict the structures of 350,000 proteins belonging to humans and other organisms. The instructions for making human proteins are contained in our genomes – the DNA contained in the nuclei of human cells.

The human genome expresses around 20,000 of these proteins. Collectively, biologists refer to this full complement as the “proteome”. Commenting on the results from AlphaFold, Dr Demis Hassabis, chief executive and co-founder of artificial intelligence company DeepMind, said: “We believe it’s the most complete and accurate picture of the human proteome to date.

“We believe this work represents the most significant contribution AI has made to advancing the state of scientific knowledge to date. “And I think it’s a great illustration and example of the kind of benefits AI can bring to society.” He added: “We’re just so excited to see what the community is going to do with this.”

Proteins are made up of chains of smaller building blocks called amino acids. These chains fold in myriad different ways, forming a unique 3D shape. A protein’s shape determines its function in the human body. The 350,000 protein structures predicted by AlphaFold include not only the 20,000 contained in the human proteome, but also those of so-called model organisms used in scientific research, such as E. coli, yeast, the fruit fly and the mouse.

This giant leap in capability is described by DeepMind researchers and a team from the European Molecular Biology Laboratory (EMBL) in the prestigious journal Nature. AlphaFold was able to make a confident prediction of the structural positions for 58% of the amino acids in the human proteome.

The positions of 35.7% were predicted with a very high degree of confidence – double the number confirmed by experiments. Traditional techniques to work out protein structures include X-ray crystallography, cryogenic electron microscopy (Cryo-EM) and others. But none of these is easy to do: “It takes a huge amount of money and resources to do structures,” Prof John McGeehan, a structural biologist at the University of Portsmouth, told BBC News.
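These coverage figures come from per-residue scores: AlphaFold assigns each amino acid a pLDDT value between 0 and 100, and residues scoring above roughly 70 are generally treated as confident, above 90 as very high confidence. A minimal sketch of how such coverage percentages are tallied, using made-up scores rather than real model output:

```python
# Sketch: summarizing per-residue confidence the way coverage statistics
# like those above are computed. AlphaFold assigns each amino acid a pLDDT
# score from 0-100; above ~70 is commonly treated as "confident" and above
# ~90 as "very high" confidence. The scores below are invented examples.

def confidence_coverage(plddt_scores, threshold):
    """Fraction of residues whose pLDDT meets or exceeds the threshold."""
    hits = sum(1 for s in plddt_scores if s >= threshold)
    return hits / len(plddt_scores)

scores = [95.2, 88.1, 91.4, 62.0, 71.3, 45.8, 93.7, 78.9, 30.2, 90.0]

confident = confidence_coverage(scores, 70)   # "confident" residues
very_high = confidence_coverage(scores, 90)   # "very high" confidence
print(f"confident: {confident:.0%}, very high: {very_high:.0%}")
```

Applied across millions of residues in a whole proteome, the same tally produces figures like the 58% and 35.7% reported above.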

Therefore, the 3D shapes are often determined as part of targeted scientific investigations, but no project until now had systematically determined structures for all the proteins made by the body. In fact, just 17% of the proteome is covered by a structure confirmed experimentally. Commenting on the predictions from AlphaFold, Prof McGeehan said: “It’s just the speed – the fact that it was taking us six months per structure and now it takes a couple of minutes. We couldn’t really have predicted that would happen so fast.”

“When we first sent our seven sequences to the DeepMind team, two of those we already had the experimental structures for. So we were able to test those when they came back. It was one of those moments – to be honest – where the hairs stood up on the back of my neck because the structures [AlphaFold] produced were identical.”

Prof Edith Heard, from EMBL, said: “This will be transformative for our understanding of how life works. That’s because proteins represent the fundamental building blocks from which living organisms are made.” “The applications are limited only by our understanding.” Those applications we can envisage now include developing new drugs and treatments for disease, designing future crops that can resist climate change, and enzymes that can break down the plastic that pervades the environment.

Prof McGeehan’s group is already using AlphaFold’s data to help develop faster enzymes for degrading plastic. He said the program had provided predictions for proteins of interest whose structures could not be determined experimentally – helping accelerate their project by “multiple years”.

Dr Ewan Birney, director of EMBL’s European Bioinformatics Institute, said the AlphaFold predicted structures were “one of the most important datasets since the mapping of the human genome”. DeepMind has teamed up with EMBL to make the AlphaFold code and protein structure predictions openly available to the global scientific community.

Dr Hassabis said DeepMind planned to vastly expand the coverage in the database to almost every sequenced protein known to science – over 100 million structures.

By : Paul Rincon / Science editor, BBC News website

Source: AI breakthrough could spark medical revolution – BBC News


Digital Transformation Depends on Diversity

Across industries, businesses are now tech and data companies. The sooner they grasp and live that, the quicker they will meet their customers’ needs and expectations, create more business value and grow. It is increasingly important to re-imagine business and use digital technologies to create new business processes, cultures, customer experiences and opportunities.

One of the myths about digital transformation is that it’s all about harnessing technology. It’s not. To succeed, digital transformation inherently requires and relies on diversity. Artificial intelligence (AI) is the result of human intelligence, enabled by its vast talents and also susceptible to its limitations.

Therefore, it is imperative for organizations and teams to make diversity a priority and think about it beyond the traditional sense. For me, diversity centers around three key pillars.

People

People are the most important part of artificial intelligence, for the simple reason that humans create it. The diversity of people — the team of decision-makers in the creation of AI algorithms — must reflect the diversity of the general population.

This goes beyond ensuring opportunities for women in AI and technology roles. It also includes the full dimensions of gender, race, ethnicity, skill set, experience, geography, education, perspectives, interests and more. Why? When you have diverse teams reviewing and analyzing data to make decisions, you mitigate the chances of their own individual and uniquely human experiences, privileges and limitations blinding them to the experiences of others.


Collectively, we have an opportunity to apply AI and machine learning to propel the future and do good. That begins with diverse teams of people who reflect the full diversity and rich perspectives of our world.

Diversity of skills, perspectives, experiences and geographies has played a key role in our digital transformation. At Levi Strauss & Co., our growing strategy and AI team isn’t limited to data and machine learning scientists and engineers. We recently tapped employees from across the organization around the world and deliberately set out to train people with no previous experience in coding or statistics.

We took people in retail operations, distribution centers and warehouses, and design and planning and put them through our first-ever machine learning bootcamp, building on their expert retail skills and supercharging them with coding and statistics.

We did not limit the required backgrounds; we simply looked for people who were curious problem solvers, analytical by nature and persistent in looking for various ways of approaching business issues. The combination of existing expert retail skills and added machine learning knowledge meant employees who graduated from the program now have meaningful new perspectives on top of their business value. This first-of-its-kind initiative in the retail industry helped us develop a talented and diverse bench of team members.

Data

AI and machine learning capabilities are only as good as the data put into the system. We often limit ourselves to thinking of data in terms of structured tables — numbers and figures — but data is anything that can be digitized.

The digital images of the jeans and jackets our company has been producing for the past 168 years are data. The customer service conversations (recorded only with permission) are data. The heatmaps of how people move in our stores are data. The reviews from our consumers are data. Today, everything that can be digitized becomes data. We need to broaden how we think of data and ensure we constantly feed all of it into AI work.

Most predictive models use data from the past to predict the future. But because the apparel industry is still in the nascent stages of digital, data and AI adoption, a lack of relevant past data is a common problem. In fashion, we’re looking ahead to predict trends and demand for completely new products, which have no sales history. How do we do that?

We use more data than ever before, for example, both images of the new products and a database of our products from past seasons. We then apply computer vision algorithms to detect similarity between past and new fashion products, which helps us predict demand for those new products. These applications provide much more accurate estimates than experience or intuition do, supplementing previous practices with data- and AI-powered predictions.
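The similarity-based demand idea can be sketched in a few lines. This is an illustrative toy, not Levi’s actual pipeline: in practice the embeddings would come from a trained computer-vision model, whereas here the vectors, product names and sales figures are all invented.

```python
# Sketch of the idea described above: estimate demand for a brand-new product
# by comparing its image embedding to embeddings of past-season products, then
# weighting each past product's sales by its visual similarity to the new item.
import math

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def predict_demand(new_embedding, past_products):
    """Similarity-weighted average of past sales (hypothetical numbers)."""
    weighted = [(cosine_similarity(new_embedding, emb), sales)
                for emb, sales in past_products]
    total = sum(w for w, _ in weighted)
    return sum(w * s for w, s in weighted) / total

past = [([0.9, 0.1, 0.2], 12000),   # slim jean, last season
        ([0.8, 0.3, 0.1], 9500),    # tapered jean
        ([0.1, 0.9, 0.7], 3000)]    # trucker jacket
new_slim_jean = [0.85, 0.2, 0.15]
print(round(predict_demand(new_slim_jean, past)))
```

The design choice is simply that visually similar past products are taken as the best proxy for a new product that has no sales history of its own.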

At Levi Strauss & Co., we also use digital images and 3D assets to simulate how clothes feel and even create new fashion. For example, we train neural networks to understand the nuances around various jean styles like tapered legs, whisker patterns and distressed looks, and detect the physical properties of the components that affect the drapes, folds and creases. We’re then able to combine this with market data, where we can tailor our product collections to meet changing consumer needs and desires and focus on the inclusiveness of our brand across demographics.

Furthermore, we use AI to create new styles of apparel while always retaining the creativity and innovation of our world-class designers.

Tools and techniques

In addition to people and data, we need to ensure diversity in the tools and techniques we use in the creation and production of algorithms. Some AI systems and products use classification techniques, which can perpetuate gender or racial bias.

For example, classification techniques assume gender is binary and commonly assign people as “male” or “female” based on physical appearance and stereotypical assumptions, meaning all other forms of gender identity are erased. That’s a problem, and it’s on all of us working in this space, in any company or industry, to prevent bias and advance techniques in order to capture all the nuances and ranges in people’s lives. For example, we can take race out of the data to try to render an algorithm race-blind while continuously safeguarding against bias.
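A minimal sketch of that “remove the attribute, keep safeguarding” idea, with invented records and field names: dropping a race column is not enough on its own, because remaining features (here a hypothetical zip-code region) can act as proxies, so the audit also measures how strongly each remaining feature still correlates with the removed attribute.

```python
# Sketch: drop a sensitive attribute, then flag remaining features that still
# encode it. All records and field names are invented for illustration.

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

records = [
    {"race_group": 0, "zip_code_region": 1, "purchases": 12},
    {"race_group": 0, "zip_code_region": 1, "purchases": 8},
    {"race_group": 1, "zip_code_region": 3, "purchases": 11},
    {"race_group": 1, "zip_code_region": 3, "purchases": 9},
]

sensitive = [r["race_group"] for r in records]
blinded = [{k: v for k, v in r.items() if k != "race_group"} for r in records]

# Flag remaining features that still encode the removed attribute.
for feature in blinded[0]:
    r = pearson([row[feature] for row in blinded], sensitive)
    if abs(r) > 0.8:
        print(f"{feature} is a likely proxy (r = {r:.2f})")
```

In this toy data the zip-code region perfectly tracks the removed attribute, so it would be flagged for review even though race itself is gone.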

We are committed to diversity in our AI products and systems and, in striving for that, we use open-source tools. Open-source tools and libraries are by their nature more diverse: they are available to everyone around the world, and people from all backgrounds and fields work to enhance and advance them, enriching them with their experiences and thus limiting bias.

An example of how we do this at Levi Strauss & Company is with our U.S. Red Tab loyalty program. As fans set up their profiles, we don’t ask them to pick a gender or allow the AI system to make assumptions. Instead, we ask them to pick their style preferences (Women, Men, Both or Don’t Know) in order to help our AI system build tailored shopping experiences and more personalized product recommendations.

Diversity of people, data, and techniques and tools is helping Levi Strauss & Co. revolutionize its business and our entire industry, transforming manual to automated, analog to digital, and intuitive to predictive. We are also building on the legacy of our company’s social values, which have stood for equality, democracy and inclusiveness for 168 years. Diversity in AI is one of the latest opportunities to continue this legacy and shape the future of fashion.

By: Katia Walsh

Source: Digital transformation depends on diversity | TechCrunch



Scientists Predict Early Covid-19 Symptoms Using AI (And An App)

Combining self-reported symptoms with artificial intelligence can predict Covid-19 from its early symptoms, according to research led by scientists at King’s College London. Previous studies have predicted whether people will develop Covid using symptoms from the peak of viral infection, which can be less relevant over time — fever is common during later phases, for instance.

The new study reveals which symptoms of infection can be used for early detection of the disease. Published in the journal The Lancet Digital Health, the research used data collected via the ZOE COVID Symptom Study smartphone app. Each app user logged any symptoms that they experienced over the first 3 days, plus the result of a subsequent PCR test for Coronavirus and personal information like age and sex.

Researchers used those self-reported data from the app to assess three models for predicting Covid in advance, which involved using one dataset to train a given model before its performance was tested on another set. The training set included almost 183,000 people who reported symptoms from 16 October to 30 November 2020, while the test dataset consisted of more than 15,000 participants with data between 16 October and 30 November.

The three models were: 1) a statistical method called logistic regression; 2) a National Health Service (NHS) algorithm; and 3) an artificial intelligence (AI) approach known as a ‘hierarchical Gaussian process’. Of the three prediction models, the AI approach performed the best, so it was then used to identify patterns in the data. The AI prediction model was sensitive enough to find which symptoms were most relevant in various groups of people.
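The first of those three models, logistic regression, is simple enough to sketch from scratch. The symptom columns and every number below are toy values for illustration, not the study’s data, and the study itself used far larger datasets and proper statistical tooling.

```python
# Toy logistic regression trained by stochastic gradient descent.
# Each row holds binary early-symptom indicators (loss of smell,
# persistent cough, chest pain) and whether the person later tested positive.
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train_logistic(X, y, lr=0.5, epochs=2000):
    """Fit weights and bias by per-sample gradient steps on logistic loss."""
    w = [0.0] * len(X[0])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            p = sigmoid(sum(wj * xj for wj, xj in zip(w, xi)) + b)
            err = p - yi
            w = [wj - lr * err * xj for wj, xj in zip(w, xi)]
            b -= lr * err
    return w, b

# columns: [loss_of_smell, persistent_cough, chest_pain]
X = [[1, 1, 0], [1, 0, 1], [0, 1, 0], [0, 0, 0], [1, 1, 1], [0, 0, 1]]
y = [1, 1, 0, 0, 1, 0]

w, b = train_logistic(X, y)
risk = sigmoid(sum(wj * xj for wj, xj in zip(w, [1, 0, 0])) + b)
print(f"predicted risk with loss of smell only: {risk:.2f}")
```

In this invented dataset loss of smell separates the outcomes, so the fitted model assigns a high risk to that symptom alone, loosely mirroring the study’s finding that loss of smell was the most relevant early signal.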

The subgroups were occupation (healthcare professional versus non-healthcare), age group (16-39, 40-59, 60-79, 80+ years old), sex (male or female), Body-Mass Index (BMI as underweight, normal, overweight/obese) and several well-known health conditions. According to results produced by the AI model, loss of smell was the most relevant early symptom among both healthcare and non-healthcare workers, and the two groups also reported chest pain and a persistent cough.

The symptoms varied among age groups: loss of smell had less relevance to people over 60 years old, for instance, and seemed irrelevant to those over 80 — highlighting age as a key factor in early Covid detection. There was no big difference between sexes for their reported symptoms, but shortness of breath, fatigue and chills/shivers were more relevant signs for men than for women.

No particular patterns were found in BMI subgroups either and, in terms of health conditions, heart disease was most relevant for predicting Covid. As the study’s symptoms were from 2020, its results might only apply to the original strain of the SARS-CoV-2 virus and Alpha variant – the two variants with highest prevalence in the UK that year.

The predictions wouldn’t have been possible without the self-reported data from the ZOE COVID Symptom Study project, a non-profit collaboration between scientists and personalized health company ZOE, which was co-founded by genetic epidemiologist Tim Spector of King’s College London.

The project’s website keeps an up-to-date ranking of the top 5 Covid symptoms reported by British people who are now fully vaccinated (with a Pfizer or AstraZeneca vaccine), have so far received one of the two doses, or are still unvaccinated. Those top 5 symptoms provide a useful resource if you want to know which signs are common for the most prevalent variant circulating in a population — currently Delta – as distinct variants can be associated with different symptoms.

When a new variant emerges in future, you could pass some personal information (such as age) to the AI prediction model so it shows the early symptoms most relevant to you — and, if you developed those symptoms, take a Covid test and perhaps self-isolate before you transmit the virus to other people. As the new study concludes, such steps would help alleviate stress on public health services:

“Early detection of SARS-CoV-2-infected individuals is crucial to contain the spread of the COVID-19 pandemic and efficiently allocate medical resources.”

I’m a science communicator and award-winning journalist with a PhD in evolutionary biology. I specialize in explaining scientific concepts that appear in popular culture and mainly write about health, nature and technology. I spent several years at BBC Science Focus magazine, running the features section and writing about everything from gay genes and internet memes to the science of death and origin of life. I’ve also contributed to Scientific American and Men’s Health. My latest book is ’50 Biology Ideas You Really Need to Know’.

Source: Scientists Predict Early Covid-19 Symptoms Using AI (And An App)


Critics:

Healthcare providers and researchers are faced with an exponentially increasing volume of information about COVID-19, which makes it difficult to derive insights that can inform treatment. In response, AWS launched CORD-19 Search, a new search website powered by machine learning, that can help researchers quickly and easily search for research papers and documents and answer questions like “When is the salivary viral load highest for COVID-19?”

Built on the Allen Institute for AI’s CORD-19 open research dataset of more than 128,000 research papers and other materials, this machine learning solution can extract relevant medical information from unstructured text and delivers robust natural-language query capabilities, helping to accelerate the pace of discovery.

In the field of medical imaging, meanwhile, researchers are using machine learning to help recognize patterns in images, enhancing the ability of radiologists to indicate the probability of disease and diagnose it earlier.

UC San Diego Health has engineered a new method to diagnose pneumonia earlier, a condition associated with severe COVID-19. This early detection helps doctors quickly triage patients to the appropriate level of care even before a COVID-19 diagnosis is confirmed. Trained with 22,000 notations by human radiologists, the machine learning algorithm overlays x-rays with colour-coded maps that indicate pneumonia probability. With credits donated from the AWS Diagnostic Development Initiative, these methods have now been deployed to every chest x-ray and CT scan throughout UC San Diego Health in a clinical research study.
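The colour-coded overlay can be illustrated with a toy example. Here a small grid of pneumonia probabilities (invented values) is mapped to semi-transparent red RGBA tuples that could be alpha-blended onto the x-ray; the real system of course operates on full-resolution images with a clinically validated colour scheme.

```python
# Sketch: map region-level pneumonia probabilities to overlay colours.
# The 3x3 probability grid and the colour mapping are invented examples.

def probability_to_rgba(p, alpha=0.5):
    """Map a probability in [0, 1] to a semi-transparent red RGBA tuple."""
    return (int(255 * p), 0, 0, int(255 * alpha))

def overlay_map(prob_grid):
    return [[probability_to_rgba(p) for p in row] for row in prob_grid]

probs = [[0.05, 0.10, 0.08],
         [0.12, 0.85, 0.40],   # hot spot in the centre of the lung field
         [0.07, 0.30, 0.09]]

for row in overlay_map(probs):
    print(row)
```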


Artificial Intelligence Is Developing A Sense Of Smell: What Could A Digital Nose Mean In Practice?

We already know we can teach machines to see. Sensors enable autonomous cars to take in visual information and make decisions about what to do next when they’re on the road. But did you know machines can smell, too?

Aryballe, a startup that uses artificial intelligence and digital olfaction technology to mimic the human sense of smell, helps their business customers turn odor data into actionable information.

You’d be surprised how many practical use cases there are for technology like this. I interviewed Sam Guillaume, CEO of Aryballe, and asked him how digital olfaction works, how it’s currently being used on the market, and what his predictions are for the future of fragrance tech.

How the Nose Knows

Our human noses work by processing odor molecules released by organic and inorganic objects. When energy in objects increases (through pressure, agitation, or temperature changes), odors evaporate, making it possible for us to inhale and absorb them through our nasal cavities.

The odors then stimulate our nasal olfactory neurons and the olfactory bulb. Our brains pull together other information (like visual cues and memories of things we’ve smelled before) to identify the smell and decide what to do next.


Digital olfaction mimics the way humans smell by capturing odor signatures using biosensors, then using software solutions to analyze and display the odor data. Artificial Intelligence (AI) interprets the signatures and classifies them based on a database of previously collected smells.

“Over the last few years, technology has allowed us to essentially duplicate the way human olfaction actually works,” Guillaume says. “And by porting this technology to readily available techniques like semiconductors, for instance, we can make sensors that are small, convenient, easy to use, and cheap. In terms of performance and its ability to discriminate between smells, it’s pretty close to the way your nose works.”
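The pipeline Guillaume describes (a sensor signature in, a classification against a database of previously collected smells out) can be sketched as a nearest-neighbour lookup. The signatures and odor labels below are invented for illustration; Aryballe’s actual models are far richer than this.

```python
# Toy odor classifier: a biosensor produces a numeric signature, and the
# software labels it with the closest previously recorded smell by
# Euclidean distance. Signatures and labels are invented examples.
import math

odor_database = {
    "coffee":       [0.9, 0.2, 0.4, 0.1],
    "fresh bread":  [0.3, 0.8, 0.5, 0.2],
    "spoiled milk": [0.1, 0.3, 0.2, 0.9],
}

def classify_odor(signature):
    """Return the database label whose signature is nearest to the reading."""
    return min(odor_database,
               key=lambda name: math.dist(signature, odor_database[name]))

reading = [0.85, 0.25, 0.35, 0.15]   # hypothetical sensor reading
print(classify_odor(reading))
```

This is also where the "database of previously collected smells" matters: the classifier can only name odors it has signatures for, which is why building odor libraries is a business in itself.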

Practical Use Cases for Digital Olfaction

So how does all this digital olfaction data turn into valuable insights for companies?

Odor analytics can help companies do things like:

●    Engineer the perfect “new car” smells in the automotive industry

●    Predict when maintenance needs to be done in industrial or automotive equipment

●    Automatically detect food spoilage in consumer appliances

●    Reject or approve raw material supply

●    Reduce R&D time for new foods and beverages

●    Ensure fragrances of personal care products like deodorants and shampoos last for a long time

●    Give riders peace of mind on public transportation by emitting an ambient smell

●    Create personal care devices and health sensors that use odors to detect issues and alert users

Leveraging the Power of Odor Data

In the future, companies like Aryballe will potentially be collaborating on projects that will create digital odor libraries for companies, or even creating devices that help COVID-19 patients recover their sense of smell.

Look for more advances as we continue to find ways to teach computers how to sense the world around them and use the data they collect to help us in our everyday lives.

Find out more about Aryballe’s technology here, and learn more about machine learning and artificial intelligence on my blog.

Bernard Marr is an internationally best-selling author, popular keynote speaker, futurist, and a strategic business & technology advisor to governments and companies. He helps organisations improve their business performance, use data more intelligently, and understand the implications of new technologies such as artificial intelligence, big data, blockchains, and the Internet of Things. Why don’t you connect with Bernard on Twitter (@bernardmarr), LinkedIn (https://uk.linkedin.com/in/bernardmarr) or instagram (bernard.marr)?

Source: Artificial Intelligence Is Developing A Sense Of Smell: What Could A Digital Nose Mean In Practice?



Big Ethical Questions about the Future of AI

Artificial intelligence is already changing the way we live our daily lives and interact with machines. From optimizing supply chains to chatting with Amazon Alexa, artificial intelligence already has a profound impact on our society and economy. Over the coming years, that impact will only grow as the capabilities and applications of AI continue to expand.

AI promises to make our lives easier and more connected than ever. However, there are serious ethical considerations to any technology that affects society so profoundly. This is especially true in the case of designing and creating intelligence that humans will interact with and trust. Experts have warned about the serious ethical dangers involved in developing AI too quickly or without proper forethought. These are the top issues keeping AI researchers up at night.

Bias: Is AI fair?

Bias is a well-established facet of AI (or of human intelligence, for that matter). AI takes on the biases of the dataset it learns from. This means that if researchers train an AI on data that are skewed for race, gender, education, wealth, or any other point of bias, the AI will learn that bias. For instance, an artificial intelligence application used to predict future criminals in the United States showed higher risk scores and recommended harsher actions for black people than for white people, based on the racial bias in America’s criminal incarceration data.

Of course, the challenge with AI training is there’s no such thing as a perfect dataset. There will always be under- and overrepresentation in any sample. These are not problems that can be addressed quickly. Mitigating bias in training data and providing equal treatment from AI is a major key to developing ethical artificial intelligence.
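One way to make this kind of bias measurable is a simple group audit: compare how often the model flags each demographic group as high risk. The model outputs below are invented for illustration; a large gap between groups is a red flag that the training data’s bias has been learned.

```python
# Toy fairness audit: compare high-risk rates across demographic groups.
# The group labels and model outputs are invented example values.

def high_risk_rate(preds, group):
    """Fraction of a group's predictions that the model flagged high risk."""
    flagged = [p["high_risk"] for p in preds if p["group"] == group]
    return sum(flagged) / len(flagged)

preds = [
    {"group": "A", "high_risk": 1}, {"group": "A", "high_risk": 1},
    {"group": "A", "high_risk": 1}, {"group": "A", "high_risk": 0},
    {"group": "B", "high_risk": 1}, {"group": "B", "high_risk": 0},
    {"group": "B", "high_risk": 0}, {"group": "B", "high_risk": 0},
]

rate_a = high_risk_rate(preds, "A")   # 0.75 in this toy data
rate_b = high_risk_rate(preds, "B")   # 0.25 in this toy data
print(f"group A: {rate_a:.2f}, group B: {rate_b:.2f}, gap: {rate_a - rate_b:.2f}")
```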

Liability: Who is responsible for AI?

Last month when an Uber autonomous vehicle killed a pedestrian, it raised many ethical questions. Chief among them is “Who is responsible, and who’s to blame when something goes wrong?” One could blame the developer who wrote the code, the sensor hardware manufacturer, Uber itself, the Uber supervisor sitting in the car, or the pedestrian for crossing outside a crosswalk.

AI development will involve errors, long-term changes, and unforeseen consequences. Since AI is so complex, determining liability isn’t trivial. This is especially true when AI has serious implications for human lives, like piloting vehicles, determining prison sentences, or automating university admissions. These decisions will affect real people for the rest of their lives. On one hand, AI may be able to handle these situations more safely and efficiently than humans. On the other hand, it’s unrealistic to expect AI will never make a mistake. Should we write that off as the cost of switching to AI systems, or should we prosecute AI developers when their models inevitably make mistakes?

Security: How do we protect access to AI from bad actors?

As AI becomes more powerful across our society, it will also become more dangerous as a weapon. It’s possible to imagine a scary scenario where a bad actor takes over the AI model that controls a city’s water supply, power grid, or traffic signals. Scarier still is the militarization of AI, where robots learn to fight and drones can fly themselves into combat.

Cybersecurity will become more important than ever. Controlling access to the power of AI is a huge challenge and a difficult tightrope to walk. We shouldn’t centralize the benefits of AI, but we also don’t want the dangers of AI to spread. This becomes especially challenging in the coming years as AI systems become vastly faster and more capable than human brains at many tasks.

Human Interaction: Will we stop talking to one another?

An interesting ethical dilemma of AI is the decline in human interaction. Now more than any time in history it’s possible to entertain yourself at home, alone. Online shopping means you don’t ever have to go out if you don’t want to.

While most of us still have a social life, the amount of in-person interactions we have has diminished. Now, we’re content to maintain relationships via text messages and Facebook posts. In the future, AI could be a better friend to you than your closest friends. It could learn what you like and tell you what you want to hear. Many have worried that this digitization (and perhaps eventual replacement) of human relationships is sacrificing an essential, social part of our humanity.

Employment: Is AI getting rid of jobs?

This is a concern that repeatedly appears in the press. It’s true that AI will be able to do some of today’s jobs better than humans. Inevitably, those people will lose their jobs, and it will take a major societal initiative to retrain those employees for new work. However, it’s likely that AI will replace jobs that were boring, menial, or unfulfilling. Individuals will be able to spend their time on more creative pursuits, and higher-level tasks. While jobs will go away, AI will also create new markets, industries, and jobs for future generations.

Wealth Inequality: Who benefits from AI?

The companies that are spending the most on AI development today are companies that have a lot of money to spend. A major ethical concern is that AI will only serve to centralize wealth further. If an employer can lay off workers and replace them with AI that draws no salary, it can generate the same amount of profit without the need to pay employees.

Machines will create more wealth than ever in the economy of the future. Governments and corporations should start thinking now about how we redistribute that wealth so that everyone can participate in the AI-powered economy.

Power & Control: Who decides how to deploy AI?

Along with the centralization of wealth comes the centralization of power and control. The companies that control AI will have tremendous influence over how our society thinks and acts each day. Regulating the development and operation of AI applications will be critical for governments and consumers. Just as we’ve recently seen Facebook get in trouble for the influence its technology and advertising has had on society, we might also see AI regulations that codify equal opportunity for everyone and consumer data privacy.

Robot Rights: Can AI suffer?

A more conceptual ethical concern is whether AI can or should have rights. As a piece of computer code, it’s tempting to think that artificially intelligent systems can’t have feelings. You can get angry with Siri or Alexa without hurting their feelings. However, it’s clear that consciousness and intelligence operate on a system of reward and aversion. As artificially intelligent machines become smarter than us, we’ll want them to be our partners, not our enemies. Codifying humane treatment of machines could play a big role in that.

Ethics in AI in the coming years

Artificial intelligence is one of the most promising technological innovations in human history. It could help us solve a myriad of technical, economic, and societal problems. However, it will also come with serious drawbacks and ethical challenges. It’s important that experts and consumers alike be mindful of these questions, as they’ll determine the success and fairness of AI over the coming years.

By: Steve Kilpatrick
Co-Founder & Director
Artificial Intelligence & Machine Learning


Artificial Human Beings: The Amazing Examples Of Robotic Humanoids And Digital Humans

As artificial intelligence continues to mature, we are seeing a corresponding growth in sophistication for humanoid robots and the applications for digital human beings in many aspects of modern-day life. To help you see the possibilities, we have pulled together some of the best examples of humanoid robots and where you might see digital humans in your everyday life today.

Humanoid Robots

Even though the earliest form of humanoid was created by Leonardo da Vinci in 1495 (a mechanical armored suit that could sit, stand and walk), today’s humanoid robots are powered by artificial intelligence and can listen, talk, move and respond. They use sensors and actuators (motors that control movement) and have features that are modeled after human parts. Whether they are structurally similar to a male (called an Android) or a female (Gynoid), it’s a challenge to create realistic robots that replicate human capabilities.

The first modern-day humanoid robots were created to help develop better prosthetics for humans, but today they are built for many purposes: entertainment, specific jobs such as home health care or manufacturing, and more. Artificial intelligence is what makes these robots human-like, helping humanoids listen, understand, and respond to their environment and to interactions with humans. Here are some of the most innovative humanoid robots in development today:

Atlas: When you see Atlas in action (doing backflips and jumping from one platform to another), you can see why its creators call it “the world’s most dynamic humanoid.” Atlas was unveiled in 2013, and a 2017 video showcased its ability to leap between platforms. It was originally designed to carry out search and rescue missions.

Ocean One: Stanford Robotics Lab developed Ocean One, a bimanual underwater humanoid robot. Since Ocean One can reach depths that humans cannot, it can be instrumental in studying coral reefs and other deep-sea life and features. Its anthropomorphic design and resemblance to a human diver make it highly maneuverable.

Petman: Boston Dynamics, the same company responsible for Atlas, also created Petman (Protection Ensemble Test Mannequin) to test chemical and biological suits for the U.S. military. When you see bipedal Petman in motion, it’s easy to see its human-like characteristics.

Robear: Other humanoid robots such as Robear might look more cartoon than human, but their actions definitely mimic human movement. Robear was developed to possibly help with the shortage of caregivers in Japan as the population ages. As a result, this humanoid has very gentle movements.

Sophia: Developed by Hanson Robotics, Sophia is one of the most human-like robots. She can hold human-like conversations and make many human-like facial expressions. She was named the world’s first robot citizen and serves as Innovation Ambassador for the United Nations Development Programme.

Digital Human Beings

Digital human beings are photorealistic digitized virtual versions of humans. Consider them avatars. While they don’t necessarily have to be created in the likeness of a specific individual (they can be entirely unique), they do look and act like humans. Unlike digital assistants such as Alexa or Siri, these AI-powered virtual beings are designed to interact, sympathize, and have conversations just like a fellow human would. Here are a few digital human beings in development or at work today:

Neons: Samsung’s STAR Labs has created AI-powered “artificial lifeforms” called Neons, with unique personalities such as a banker, a K-pop star, and a yoga instructor. While the technology is still young, the company expects that Neons will ultimately be available on a subscription basis to provide services such as customer service or concierge assistance.

Digital Pop Stars: In Japan, new pop stars made entirely of pixels are getting attention. One member of the band AKB48, Amy, is entirely digital and was created by borrowing features from the group’s human artists. Another Japanese artist, Hatsune Miku, is a virtual character from Crypton Future Media. Although she started out as an illustration promoting a voice synthesizer of the same name, she now draws her own fans to sold-out auditoriums. With Auxuman, artificial intelligence actually makes the music and creates the digital performers who perform the original compositions.

AI Hosts: Virtual copies of celebrities were created by ObEN Inc to host the Spring Festival Gala, a celebration of the Chinese lunar new year. This project illustrates the potential of personal AIs—a substitute for a real person when they can’t be present in person. Similarly, China’s Xinhua news agency introduced an AI news anchor that will report the news 24/7.

Fashion Models and Social Media Influencers: Another way digital human beings are being used is in the fashion world. H&M used computer-generated models on its website, and Artificial Talent Co. created an entire business to generate completely photorealistic and customizable fashion models. And it turns out you don’t have to be a real-life human to attract a social media following. Miquela, an artificial intelligence “influencer,” has 1.3 million Instagram followers.

Digital humans have been used in television, movies, and video games already, but there are limitations to using them to replace human actors. And while it’s challenging to predict exactly how digital humans will alter our futures, there are people pondering what digital immortality would be like or how to control the negative possibilities of the technology.


Bernard Marr is an internationally best-selling author, popular keynote speaker, futurist, and a strategic business & technology advisor to governments and companies. He helps organisations improve their business performance, use data more intelligently, and understand the implications of new technologies such as artificial intelligence, big data, blockchains, and the Internet of Things.

Source: Artificial Human Beings: The Amazing Examples Of Robotic Humanoids And Digital Humans


Artificial Intelligence Will Help Determine If You Get Your Next Job

With parents using artificial intelligence to scan prospective babysitters’ social media and an endless slew of articles explaining how your résumé can “beat the bots,” you might be wondering whether a robot will be offering you your next job.

We’re not there yet, but recruiters are increasingly using AI to make the first round of cuts and to determine whether a job posting is even advertised to you. Often trained on data collected about previous or similar applicants, these tools can cut down on the effort recruiters need to expend in order to make a hire. Last year, 67 percent of hiring managers and recruiters surveyed by LinkedIn said AI was saving them time.

But critics argue that such systems can introduce bias, lack accountability and transparency, and aren’t guaranteed to be accurate. Take, for instance, the Utah-based company HireVue, which sells a job interview video platform that can use artificial intelligence to assess candidates and, it claims, predict their likelihood to succeed in a position. The company says it uses on-staff psychologists to help develop customized assessment algorithms that reflect the ideal traits for a particular role a client (usually a company) hopes to hire for, like a sales representative or computer engineer.

[Image: Output of a Google Vision artificial intelligence system performing facial recognition, with boxes and dots overlaid on a photograph of a man in San Ramon, California, November 22, 2019. Smith Collection/Gado/Getty Images]

That algorithm is then used to analyze how individual candidates answer preselected questions in a recorded video interview, grading their verbal responses and, in some cases, facial movements. HireVue claims the tool — which is used by about 100 clients, including Hilton and Unilever — is more predictive of job performance than human interviewers conducting the same structured interviews.

But last month, lawyers at the Electronic Privacy Information Center (EPIC), a privacy rights nonprofit, filed a complaint with the Federal Trade Commission, pushing the agency to investigate the company for potential bias, inaccuracy, and lack of transparency. It also accused HireVue of engaging in “deceptive trade practices” because the company claims it doesn’t use facial recognition. (EPIC argues HireVue’s facial analysis qualifies as facial recognition.)

The lawsuit follows the introduction of the Algorithmic Accountability Act in Congress earlier this year, which would grant the FTC authority to create regulations to check so-called “automated decision systems” for bias. Meanwhile, the Equal Opportunity Employment Commission (EEOC) — the federal agency that deals with employment discrimination — is reportedly now investigating at least two discrimination cases involving job decision algorithms, according to Bloomberg Law.

AI can pop up throughout the recruitment and hiring process

Recruiters can make use of artificial intelligence throughout the hiring process, from advertising and attracting potential applicants to predicting candidates’ job performance. “Just like with the rest of the world’s digital advertisement, AI is helping target who sees what job descriptions [and] who sees what job marketing,” explains Aaron Rieke, a managing director at Upturn, a DC-based nonprofit digital technology research group.

And it’s not just a few outlier companies, like HireVue, that use predictive AI. Vox’s own HR staff use LinkedIn Recruiter, a popular tool that uses artificial intelligence to rank candidates. Similarly, the jobs platform ZipRecruiter uses AI to match candidates with nearby jobs that are potentially good fits, based on the traits the applicants have shared with the platform — like their listed skills, experience, and location — and previous interactions between similar candidates and prospective employers. For instance, because I applied for a few San Francisco-based tutoring gigs on ZipRecruiter last year, I’ve continued to receive emails from the platform advertising similar jobs in the area.

Overall, the company says its AI has trained on more than 1.5 billion employer-candidate interactions.

Platforms like Arya — which says it’s been used by Home Depot and Dyson — go even further, using machine learning to find candidates based on data that might be available on a company’s internal database, public job boards, social platforms like Facebook and LinkedIn, and other profiles available on the open web, like those on professional membership sites.

Arya claims it’s even able to predict whether an employee is likely to leave their old job and take a new one, based on the data it collects about a candidate, such as their promotions, movement between previous roles and industries, and the predicted fit of a new position, as well as data about the role and industry more broadly.

Another use of AI is to screen through application materials, like résumés and assessments, in order to recommend which candidates recruiters should contact first. Somen Mondal, the CEO and co-founder of one such screening and matching service, Ideal, says these systems do more than automatically search résumés for relevant keywords.

For instance, Ideal can learn to understand and compare experiences across candidates’ résumés and then rank the applicants by how closely they match an opening. “It’s almost like a recruiter Googling a company [listed on an application] and learning about it,” explains Mondal, who says his platform is used to screen 5 million candidates a month.
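As a toy illustration of this kind of ranking (not Ideal's actual method, which is proprietary and far more sophisticated), a screener can score each résumé by how much its vocabulary overlaps with the job description and sort candidates best-first:

```python
# Toy résumé ranker: scores each résumé by word overlap (Jaccard
# similarity) with the job description and returns candidates
# ranked best-first. Illustrative only; real systems use far
# richer text representations than raw word overlap.

def tokenize(text):
    """Lowercase the text and split it into a set of word tokens."""
    return set(text.lower().replace(",", " ").split())

def rank_candidates(job_description, resumes):
    """Return (name, score) pairs sorted by descending similarity."""
    job_tokens = tokenize(job_description)
    scores = []
    for name, resume in resumes.items():
        resume_tokens = tokenize(resume)
        overlap = len(job_tokens & resume_tokens)
        union = len(job_tokens | resume_tokens)
        scores.append((name, overlap / union if union else 0.0))
    return sorted(scores, key=lambda pair: pair[1], reverse=True)

job = "python developer with machine learning experience"
resumes = {
    "alice": "senior python developer, machine learning and data pipelines",
    "bob": "sales manager with retail experience",
}
ranking = rank_candidates(job, resumes)
```

Even this crude version shows the core idea: every application gets the same mechanical comparison against the opening, which is what makes screening thousands of résumés tractable.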

But AI doesn’t just operate behind the scenes. If you’ve ever applied for a job and then been engaged by a text conversation, there’s a chance you’re talking to a recruitment bot. Chatbots that use natural-language understanding created by companies like Mya can help automate the process of reaching out to previous applicants about a new opening at a company, or finding out whether an applicant meets a position’s basic requirements — like availability — thus eliminating the need for human phone-screening interviews. Mya, for instance, can reach out over text and email, as well as through messaging applications like Facebook and WhatsApp.

Another burgeoning use of artificial intelligence in job selection is talent and personality assessments. One company championing this application is Pymetrics, which sells neuroscience computer games for candidates to play (one such game involves hitting the spacebar whenever a red circle, but not a green circle, flashes on the screen).

These games are meant to predict candidates’ “cognitive and personality traits.” Pymetrics says on its website that the system studies “millions of data points” collected from the games to match applicants to jobs judged to be a good fit, based on Pymetrics’ predictive algorithms.

Proponents say AI systems are faster and can consider information human recruiters can’t calculate quickly

These tools help HR departments move more quickly through large pools of applicants and ultimately make it cheaper to hire. Proponents say they can be more fair and more thorough than overworked human recruiters skimming through hundreds of résumés and cover letters.

“Companies just can’t get through the applications. And if they do, they’re spending — on average — three seconds,” Mondal says. “There’s a whole problem with efficiency.” He argues that using an AI system can ensure that every résumé, at the very least, is screened. After all, one job posting might attract thousands of applications, with a huge share from people who are completely unqualified for a role.

Such tools can automatically recognize traits in the application materials from previous successful hires and look for signs of that trait among materials submitted by new applicants. Mondal says systems like Ideal can consider between 16 and 25 factors (or elements) in each application, pointing out that, unlike humans, it can calculate something like commute distance in “milliseconds.”

“You can start to fine-tune the system with not just the people you’ve brought in to interview, or not just the people that you’ve hired, but who ended up doing well in the position. So it’s a complete loop,” Mondal explains. “As a human, it’s very difficult to look at all that data across the lifecycle of an applicant. And [with AI] this is being done in seconds.”
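To make one of those factors concrete: straight-line commute distance is the kind of quantity a machine can compute near-instantly with the haversine formula, assuming the tool has coordinates for the candidate and the office. (This is a simplified sketch; real tools may use road networks or travel-time APIs instead.)

```python
# Straight-line (great-circle) commute distance via the haversine
# formula. The coordinates below are illustrative examples, not
# data from any real hiring tool.
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in km between two (lat, lon) points."""
    r = 6371.0  # mean Earth radius in km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))

# e.g. a candidate in Oakland (37.8044, -122.2712) applying to an
# office in San Francisco (37.7749, -122.4194)
distance = haversine_km(37.8044, -122.2712, 37.7749, -122.4194)
```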

These systems typically operate on a scale greater than a human recruiter. For instance, HireVue claims the artificial intelligence used in its video platform evaluates “tens of thousands of factors.” Even if companies are using the same AI-based hiring tool, they’re likely using a system that’s optimized to their own hiring preferences. Plus, an algorithm is likely changing if it’s continuously being trained on new data.

Another service, Humantic, claims it can get a sense of candidates’ psychology based on their résumés, LinkedIn profiles, and other text-based data an applicant might volunteer to submit, by mining through and studying their use of language (the product is inspired by the field of psycholinguistics). The idea is to eliminate the need for additional personality assessments. “We try to recycle the information that’s already there,” explains Amarpreet Kalkat, the company’s co-founder. He says the service is used by more than 100 companies.

Proponents of these recruiting tools also claim that artificial intelligence can be used to avoid human biases, like an unconscious preference for graduates of a particular university, or a bias against women or a racial minority. (But AI often amplifies bias; more on that later.) They argue that AI can help strip out — or abstract — information related to a candidate’s identity, like their name, age, gender, or school, and more fairly consider applicants.

The idea that AI might clamp down on — or at least do better than — biased humans inspired California lawmakers earlier this year to introduce a bill urging fellow policymakers to explore the use of new technology, including “artificial intelligence and algorithm-based technologies,” to “reduce bias and discrimination in hiring.”

AI tools reflect who builds and trains them

These AI systems are only as good as the data they’re trained on and the humans that build them. If a résumé-screening machine learning tool is trained on historical data, such as résumés collected from a company’s previously hired candidates, the system will inherit both the conscious and unconscious preferences of the hiring managers who made those selections. That approach could help find stellar, highly qualified candidates. But Rieke warns that method can also pick up “silly patterns that are nonetheless real and prominent in a data set.”

One such résumé-screening tool identified being named Jared and having played lacrosse in high school as the best predictors of job performance, as Quartz reported.

If you’re a former high school lacrosse player named Jared, that particular tool might not sound so bad. But systems can also learn to be racist, sexist, ageist, and biased in other nefarious ways. For instance, Reuters reported last year that Amazon had created a recruitment algorithm that unintentionally tended to favor male applicants over female applicants for certain positions. The system was trained on a decade of résumés submitted to the company, which Reuters reported were mostly from men.

[Image: A visitor at Intel’s Artificial Intelligence (AI) Day walks past a signboard in Bangalore, India, April 4, 2017. Manjunath Kiran/AFP via Getty Images]

(An Amazon spokesperson told Recode that the system was never used and was abandoned for several reasons, including because the algorithms were primitive and that the models randomly returned unqualified candidates.)

Mondal says there is no way to use these systems without regular, extensive auditing. That’s because, even if you explicitly instruct a machine learning tool not to discriminate against women, it might inadvertently learn to discriminate against other proxies associated with being female, like having graduated from a women’s college.

“You have to have a way to make sure that you aren’t picking people who are grouped in a specific way and that you’re only hiring those types of people,” he says. Ensuring that these systems are not introducing unjust bias means frequently checking that new hires don’t disproportionately represent one demographic group.

But there’s skepticism that efforts to “de-bias” algorithms and AI are a complete solution. And Upturn’s report on equity and hiring algorithms notes that “[de-biasing] best practices have yet to crystallize [and] [m]any techniques maintain a narrow focus on individual protected characteristics like gender or race, and rarely address intersectional concerns, where multiple protected traits produce compounding disparate effects.”

And if a job is advertised on an online platform like Facebook, it’s possible you won’t even see a posting because of biases produced by that platform’s algorithms. There’s also concern that systems like HireVue’s could inherently be built to discriminate against people with certain disabilities.

Critics are also skeptical of whether these tools do what they say, especially when they make broad claims about a candidate’s “predicted” psychology, emotion, and suitability for a position. Adina Sterling, an organizational behavior professor at Stanford, also notes that, if not designed carefully, an algorithm could drive its preferences toward a single type of candidate. Such a system might miss a more unconventional applicant who could nevertheless excel, like an actor applying for a job in sales.

“Algorithms are good for economies of scale. They’re not good for nuance,” she explains, adding that she doesn’t believe companies are being vigilant enough when studying the recruitment AI tools they use and checking what these systems actually optimize for.

Who regulates these tools?

Employment lawyer Mark Girouard says AI and algorithmic selection systems fall under the Uniform Guidelines on Employee Selection Procedures, guidance established in 1978 by federal agencies that governs companies’ selection standards and employment assessments.

Many of these AI tools say they follow the four-fifths rule, a statistical “rule of thumb” benchmark established under those employee selection guidelines. The rule is used to compare the selection rate of applicant demographic groups and investigate whether selection criteria might have had an adverse impact on a protected minority group.
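The four-fifths rule itself is simple arithmetic, which the following sketch makes concrete (the applicant and hire counts are invented for illustration):

```python
# Four-fifths (80%) rule check: compare each group's selection
# rate to the highest group's rate; a ratio below 0.8 flags
# possible adverse impact. All numbers are made up.

def selection_rates(applicants, hires):
    """Selection rate per group: hires / applicants."""
    return {g: hires[g] / applicants[g] for g in applicants}

def four_fifths_check(applicants, hires):
    """Return (ratio_by_group, flagged_groups) under the 80% rule."""
    rates = selection_rates(applicants, hires)
    top = max(rates.values())
    ratios = {g: r / top for g, r in rates.items()}
    flagged = [g for g, ratio in ratios.items() if ratio < 0.8]
    return ratios, flagged

applicants = {"group_a": 100, "group_b": 100}
hires = {"group_a": 50, "group_b": 30}
ratios, flagged = four_fifths_check(applicants, hires)
# group_a is selected at a 50% rate, group_b at 30%; group_b's
# ratio is 0.30 / 0.50 = 0.6, below the 0.8 threshold, so it is
# flagged for possible adverse impact.
```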

But experts have noted that the rule is just one test, and Rieke emphasizes that passing the test doesn’t imply these AI tools do what they claim. A system that picked candidates randomly could pass the test, he says. Girouard explains that as long as a tool does not have a disparate impact on race or gender, there’s no law on the federal level that requires that such AI tools work as intended.

In its case against HireVue, EPIC argues that the company has failed to meet established AI transparency guidelines, including artificial intelligence principles outlined by the Organization for Economic Co-operation and Development that have been endorsed by the U.S. and 41 other countries. HireVue told Recode that it follows the standards set by the Uniform Guidelines, as well as guidelines set by other professional organizations. The company also says its systems are trained on a diverse data set and that its tools have helped its clients increase the diversity of their staff.

At the state level, Illinois has made some initial headway in promoting the transparent use of these tools. Its Artificial Intelligence Video Interview Act, which takes effect in January, requires employers using artificial intelligence-based video analysis technology to notify applicants, explain how the technology works, and obtain their consent.

Still, Rieke says few companies release the methodologies used in their bias audits in “meaningful detail.” He’s not aware of any company that has released the results of an audit conducted by a third party.

Meanwhile, senators have pushed the EEOC to investigate whether biased facial analysis algorithms could violate anti-discrimination laws, and experts have previously warned the agency about the risk of algorithmic bias. But the EEOC has yet to release any specific guidance regarding algorithmic decision-making or artificial intelligence-based tools and did not respond to Recode’s request for comment.

Rieke did highlight one potential upside for applicants. Should lawmakers one day force companies to release the results of their AI hiring selection systems, job candidates could gain new insight into how to improve their applications. But as to whether AI will ever make the final call, Sterling says that’s a long way off.

“Hiring is an extremely social process,” she explains. “Companies don’t want to relinquish it to tech.”


Open Sourced is made possible by Omidyar Network. All Open Sourced content is editorially independent and produced by our journalists.


Source: Artificial intelligence will help determine if you get your next job


How Artificial Intelligence Could Save Psychiatry

Five years from now, the U.S.’ already overburdened mental health system may be short by as many as 15,600 psychiatrists as the growth in demand for their services outpaces supply, according to a 2017 report from the National Council for Behavioral Health. But some proponents say that, by then, an unlikely tool—artificial intelligence—may be ready to help mental health practitioners mitigate the impact of the deficit.

Medicine is already a fruitful area for artificial intelligence; it has shown promise in diagnosing disease, interpreting images and zeroing in on treatment plans. Though psychiatry is in many ways a uniquely human field, requiring emotional intelligence and perception that computers can’t simulate, even here, experts say, AI could have an impact. The field, they argue, could benefit from artificial intelligence’s ability to analyze data and pick up on patterns and warning signs so subtle humans might never notice them.

“Clinicians actually get very little time to interact with patients,” says Peter Foltz, a research professor at the University of Colorado Boulder who this month published a paper about AI’s promise in psychiatry. “Patients tend to be remote, it’s very hard to get appointments and oftentimes they may be seen by a clinician [only] once every three months or six months.”

AI could be an effective way for clinicians to both make the best of the time they do have with patients, and bridge any gaps in access, Foltz says. AI-aided data analysis could help clinicians make diagnoses more quickly and accurately, getting patients on the right course of treatment faster—but perhaps more excitingly, Foltz says, apps or other programs that incorporate AI could allow clinicians to monitor their patients remotely, alerting them to issues or changes that arise between appointments and helping them incorporate that knowledge into treatment plans. That information could be lifesaving, since research has shown that regularly checking in with patients who are suicidal or in mental distress can keep them safe.

Some mental-health apps and programs already incorporate AI—like Woebot, an app-based mood tracker and chatbot that combines AI and principles from cognitive behavioral therapy—but it’ll probably be some five to 10 years before algorithms are routinely used in clinics, according to psychiatrists interviewed by TIME.

Even then, Dr. John Torous, director of digital psychiatry at Beth Israel Deaconess Medical Center in Boston and chair of the American Psychiatric Association’s Committee on Mental Health Information Technology, cautions that “artificial intelligence is only as strong as the data it’s trained on,” and, he says, mental health diagnostics have not been quantified well enough to program an algorithm. It’s possible that will happen in the future, with more and larger psychological studies, but, Torous says “it’s going to be an uphill challenge.”

Not everyone shares that position. Speech and language have emerged as two of the clearest applications for AI in psychiatry, says Dr. Henry Nasrallah, a psychiatrist at the University of Cincinnati Medical Center who has written about AI’s place in the field. Speech and mental health are closely linked, he explains.

Talking in a monotone can be a sign of depression; fast speech can point to mania; and disjointed word choice can be connected to schizophrenia. When these traits are pronounced enough, a human clinician might pick up on them—but AI algorithms, Nasrallah says, could be trained to flag signals and patterns too subtle for humans to detect.
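To make the idea concrete, here is a deliberately crude, rule-based sketch. The feature names and thresholds below are invented for illustration; real research systems learn subtle, multidimensional patterns from data rather than applying fixed cutoffs, and nothing here has clinical validity.

```python
# Crude illustration of flagging speech features. Thresholds and
# feature names are invented, not clinical values; an actual AI
# system would learn these patterns from labeled data.

def flag_speech_signals(words_per_minute, pitch_variance_hz):
    """Return a list of rule-based flags from two speech features."""
    flags = []
    if pitch_variance_hz < 10.0:   # very flat intonation ("monotone")
        flags.append("monotone")
    if words_per_minute > 200:     # unusually rapid speech
        flags.append("rapid_speech")
    return flags

# A hypothetical clip: very fast speech with almost no pitch variation
flags = flag_speech_signals(words_per_minute=220, pitch_variance_hz=5.0)
```

The point of the contrast is that a trained model replaces these hand-set thresholds with patterns too subtle for either hard-coded rules or human listeners to catch.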

Foltz and his team in Boulder are working in this space, as are big-name companies like IBM. Foltz and his colleagues designed a mobile app that takes patients through a series of repeatable verbal exercises, like telling a story and answering questions about their emotional state. An AI system then assesses those soundbites for signs of mental distress, both by analyzing how they compare to the individual’s previous responses, and by measuring the clips against responses from a larger patient population.

The team tested the system on 225 people living in either Northern Norway or rural Louisiana—two places with inadequate access to mental health care—and found that the app was at least as accurate as clinicians at picking up on speech-based signs of mental distress.


Written language is also a promising area for AI-assisted mental health care, Nasrallah says. Studies have shown that machine learning algorithms trained to assess word choice and order are better than clinicians at distinguishing between real and fake suicide notes, meaning they’re good at picking up on signs of distress. Using these systems to regularly monitor a patient’s writing, perhaps through an app or periodic remote check-in with mental health professionals, could feasibly offer a way to assess their risk of self-harm.

Even if these applications do pan out, Torous cautions that “nothing has ever been a panacea.” On one hand, he says, it’s exciting that technology is being pitched as a solution to problems that have long plagued the mental health field; but, on the other hand, “in some ways there’s so much desperation to make improvements to mental health that perhaps the tools are getting overvalued.”

Nasrallah and Foltz emphasize that AI isn’t meant to replace human psychiatrists or completely reinvent the wheel. (“Our brain is a better computer than any AI,” Nasrallah says.) Instead, they say, it can provide data and insights that will streamline treatment.

Alastair Denniston, an ophthalmologist and honorary professor at the U.K.’s University of Birmingham who this year published a research review about AI’s ability to diagnose disease, argues that, if anything, technology can help doctors focus on the human elements of medicine, rather than getting bogged down in the minutiae of diagnosis and data collection.

Artificial intelligence “may allow us to have more time in our day to spend actually communicating effectively and being more human,” Denniston says. “Rather than being diagnostic machines… [doctors can] provide some of that empathy that can get swallowed up by the business of what we do.”

By Jamie Ducharme

November 20, 2019

Source: How Artificial Intelligence Could Save Psychiatry | Time

Hi! I’m Chris Lovejoy, a doctor working in London and a clinical data scientist working to bring AI to healthcare.

Timestamps:
0:13 – Some general thoughts on artificial intelligence in healthcare
1:41 – AI in diagnosing psychiatric conditions
2:19 – AI in monitoring mental health
3:00 – AI in treatment of psychiatric conditions
4:38 – AI for increasing efficiency for clinicians
5:38 – Important considerations and concerns
6:17 – Good things about AI for healthcare in general
6:38 – Closing thoughts

To download my article on the subject, visit: https://chrislovejoy.me/psychiatry/

Papers referenced in video:
(1) Jaiswal S, Valstar M, Gillott A, Daley D. Automatic detection of ADHD and ASD from expressive behaviour in RGBD data. December 7 2016, ArXiv161202374 Cs. Available from: http://arxiv.org/abs/1612.02374.
(2) Corcoran CM, Carrillo F, Fernández-Slezak D, Bedi G, Klim C, Javitt DC, et al. Prediction of psychosis across protocols and risk cohorts using automated language analysis. World Psychiatry 2018;17(February (1)):67–75.
(3) Place S, Blanch-Hartigan D, Rubin C, Gorrostieta C, Mead C, Kane J, et al. Behavioral indicators on a mobile sensing platform predict clinically validated psychiatric symptoms of mood and anxiety disorders. J Med Internet Res 2017;19(March (3)):e75.
(4) Fitzpatrick KK, Darcy A, Vierhile M. Delivering cognitive behavior therapy to young adults with symptoms of depression and anxiety using a fully automated conversational agent (Woebot): a randomized controlled trial. JMIR Ment Health 2017;4(June (2)):e19.
(5) Standalone effects of a cognitive behavioral intervention using a mobile phone app on psychological distress and alcohol consumption among Japanese workers: pilot nonrandomized controlled trial. JMIR Mental Health. Available from: http://mental.jmir.org/2018/1/e24/.
(6) Lovejoy CA, Buch V, Maruthappu M. Technology and mental health: The role of artificial intelligence. Eur Psychiatry. 2019 Jan;55:1-3. doi: 10.1016/j.eurpsy.2018.08.004. Epub 2018 Oct 28.

This A.I. Bot Writes Such Convincing Ads, Chase Just ‘Hired’ It to Write Marketing Copy

Here are two headlines. One was written by a human. One was written by a robot. Can you guess which?

  • Access cash from the equity in your home. Take a look.

  • It’s true–You can unlock cash from the equity in your home. Click to apply.

Both lines of marketing copy were used to pitch home equity lines of credit to JPMorgan Chase customers. The second garnered nearly twice as many applications, according to the Wall Street Journal. It was generated by Persado’s artificial intelligence tool.

This is why Chase just signed a five-year deal with Persado Inc., a software company that uses artificial intelligence to tweak marketing language for its clients. After a trial period with the company, Chase has found Persado’s bot-generated copy incredibly effective. “Chase saw as high as a 450 percent lift in click-through rates on ads,” Persado said in a statement.
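For context, "lift" here is the relative improvement over a baseline: a 450 percent lift means the new click-through rate is 5.5 times the original. The rates below are made up purely to illustrate the arithmetic.

```python
def lift(baseline_ctr, new_ctr):
    """Relative improvement of new_ctr over baseline_ctr, as a percentage."""
    return (new_ctr - baseline_ctr) / baseline_ctr * 100

# Hypothetical numbers: a 2% baseline CTR rising to 11% is a 450% lift.
print(round(lift(0.02, 0.11)))  # 450
```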


Chase says it will use Persado’s tool to rewrite language for email promotions, online ads, and potentially snail mail promotions. It’s also looking into using the tool for internal communications and customer service communications.

When asked if this might lead to downsizing, a Chase spokesperson told AdAge: “Our relationship with Persado hasn’t had an impact on our structure.”

Persado’s tool starts with human-written copy and analyzes it for six elements (narrative, emotion, descriptions, calls-to-action, formatting, and word positioning). It then creates thousands of combinations by making tweaks to those elements.
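The combinatorial step described above, holding the message constant while swapping in alternatives for each element, can be sketched with `itertools.product`. The element categories come from the article; the alternative phrasings are invented, and Persado's actual system is far more sophisticated.

```python
from itertools import product

# Invented alternatives for three of the six elements Persado analyzes.
elements = {
    "emotion":        ["It's true --", "Don't miss out:", "Good news:"],
    "description":    ["cash from the equity in your home",
                       "the value locked in your home"],
    "call_to_action": ["Take a look.", "Click to apply."],
}

def generate_variants(elements):
    """Yield every combination of one alternative per element."""
    for combo in product(*elements.values()):
        yield " ".join(combo)

variants = list(generate_variants(elements))
print(len(variants))  # 3 * 2 * 2 = 12 combinations
```

With alternatives for all six elements, the space grows multiplicatively into the thousands of combinations the article mentions, which can then be A/B-tested against each other.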

Kristin Lemkau, chief marketing officer at JPMorgan Chase, is fully on board with Persado. Chase began experimenting with its software three years ago. Sometimes the tool would recommend a wordier headline, which goes against marketing 101. But that longer headline garnered more clicks.

“They made a couple of changes that made sense and I was like, ‘Why were we so dumb that we didn’t figure that out?'” she told the Journal.

By Betsy Mikel, Owner, Aveck (@BetsyM)

Source: This A.I. Bot Writes Such Convincing Ads, Chase Just ‘Hired’ It to Write Marketing Copy

The Amazing Ways Dubai Airport Uses Artificial Intelligence

As one of the world’s busiest airports (ranked No. 3 in 2018, according to Airports Council International’s world traffic report), Dubai International Airport is also a leader in using artificial intelligence (AI). In fact, the United Arab Emirates (UAE) leads the Arab world in AI adoption across sectors and areas of life, with a government that prioritizes the technology through a national AI strategy and a Ministry of Artificial Intelligence mandated to invest in AI tools and technologies.

AI Customs Officials

The Emirates Ministry of the Interior said that by 2020, immigration officers would no longer be needed in the UAE; they would be replaced by artificial intelligence. The plan is for travelers simply to walk through an AI-powered security system and be scanned, without taking off shoes or belts or emptying pockets. The airport has already experimented with a virtual aquarium smart gate: travelers walked through a small tunnel surrounded by virtual fish, and while they looked around at the fish swimming past, cameras captured every angle of their faces, allowing for quick identification.

AI Baggage Handling

Tim Clark, the president of Emirates, the world’s biggest long-haul carrier, believes artificial intelligence, specifically robots, should already be handling baggage service: identifying bags, placing them in the appropriate bins and then unloading them from the aircraft without any human intervention. He envisions robots similar to the automation and robotics used in Amazon.com’s warehouses.

Air Traffic Management

In a partnership with Canada-based Searidge Technologies, the UAE General Civil Aviation Authority (GCAA) is researching the use of artificial intelligence in the country’s air traffic control process. In a statement announcing the partnership in 2018, the director-general of the GCAA confirmed that it is UAE’s strategy to explore how artificial intelligence and other new technologies can enhance the aviation industry. With goals to optimize safety and efficiency within air traffic management, this is important work that could ultimately impact similar operations worldwide.

Automated Vehicles

Self-driving cars powered by artificial intelligence and 100% solar or electrical energy will soon be helping the Dubai Airport increase efficiency in its day-to-day operations, including improvements between ground transportation and air travel. Imagine how artificial intelligence could orchestrate passenger movement from arrival to the airport to leaving your destination’s airport. In the future, autonomous vehicles (already loaded with your luggage) could meet you at the curb. Maybe AI could transform luggage carts to act autonomously to get your luggage to your hotel or home, eliminating any need for baggage carousels and the hassle of dealing with your luggage.

While much attention is given to vetting passengers to ensure safe air travel, artificial intelligence can also improve the staff clearance process. Some believe the most significant security threat airports and airlines face comes from airport personnel. An EgyptAir mechanic, a baggage handler and two police officers were arrested in connection with the bombing of Metrojet Flight 9268, in which all 224 people on board died, and there have been several arrests in Australia of border force officers linked to international drug smugglers. Part of the effort to improve staff clearance involves enhancing staff entrances with biometrics, advanced facial recognition and artificial intelligence, rather than just the CCTV cameras and police monitoring used now. Artificial intelligence can screen a prospective employee’s behavior and record for signs of crime and violence even before they are hired, and after hiring, AI algorithms can continue to look for changes that could indicate a security problem.

AI Projects Being Explored for the Future

Emirates is developing AI projects in its lab at the Dubai Future Accelerators facility. These include using AI to assist passengers in picking their onboard meals, to schedule taxi pickups and to personalize the experience of every Emirates passenger throughout the entire journey. The airline is also exploring how AI can help train cabin crew. We can expect artificial intelligence to be put to work on the problem of airplane boarding by examining the issue in ways humans have been unable to; the goal would be for AI to design a queue-less experience.

AI at Other Airports

The first biometric airport terminal is already running at the Hartsfield-Jackson Atlanta International Airport, and a similar system is running at Dubai International Airport for first- and business-class passengers. Here are some other ways airports and airlines around the world are using artificial intelligence or plan to:

·         Cybersecurity: In response to the expansion of digitalization across aviation, airports and airlines have shifted from identifying cybersecurity threats to preventing them, with an assist from AI.

·         Immersive experiences: Augmented reality might be the future of helping travelers find their way through an airport.

·         Voice recognition technology: At Heathrow Airport, passengers can already ask Alexa to get flight updates. United Airlines allows travelers to check in to their flight through Google Assistant by simply stating, “Hey Google, check in to my flight.”

As innovation is pushed forward by the UAE, Dubai International Airport and other technology innovators around the world, these new AI tools and capabilities for air travel will raise privacy concerns and opportunities for abuse. But if artificial intelligence can remove the biggest headaches from travel, some people (possibly most) will be more than ready to exchange a bit of privacy for a better experience.


Bernard Marr is an internationally best-selling author, popular keynote speaker, futurist, and a strategic business & technology advisor to governments and companies. He helps organisations improve their business performance, use data more intelligently, and understand the implications of new technologies such as artificial intelligence, big data, blockchains, and the Internet of Things. You can connect with Bernard on Twitter (@bernardmarr), LinkedIn (https://uk.linkedin.com/in/bernardmarr) or Instagram (bernard.marr).

Source: The Amazing Ways Dubai Airport Uses Artificial Intelligence
