Does AI Have The Answer To The Customer Experience Riddle?

The telecommunications industry is many enterprises wrapped into one—it has to get every aspect of customer experience right. It’s a challenge every organization can learn from.

Everywhere you look, another business is attempting to harness data, analytics, and artificial intelligence to increase sales and crack the code of providing higher-quality, lower-cost goods and services.

Travel and hospitality companies want to make persuasive, personalized offers at just the right moment to drive bookings. Retailers are honing inventory management to better anticipate customer demand and drive same-store sales—while navigating the current supply chain challenges. Hospitals, health insurers, and even governments utilize AI to comb through vast data sets to develop predictive models of disease.

Financial institutions have accelerated credit and risk underwriting decisions using AI/ML models; they’ve also enhanced customer satisfaction online and on the phone with AI-driven virtual assistants. Manufacturers are employing AI to improve process efficiency, enable predictive maintenance, and scale quality control efforts in their core operations. And everyone is trying to reduce customer churn.

When you stop and think about it, the telecommunications industry and its myriad communications service providers (CSPs) do all of this—advertising, supply chain, online and physical stores, operations and maintenance, customer care—and more, for both consumers and businesses. Thus, CSPs offer a unique lens through which to examine how companies in any industry can utilize AI to convert data to insights and information to actions.

The pressure on CSPs to take action, to do more with less, has never been greater.

Growing demands on the network, growing demands for the network

CSPs are in an unusual position: while global demand for data grew 256% between 2016 and 2020, intense competition meant that revenues grew less than 13% over the same period. Operators have so far relied upon technical advances and scale efficiencies gained through consolidation to manage the gap, but one of the greatest untapped opportunities remaining is to become dramatically better providers of customer service.

While the concept of “AI-driven customer service” may seem like an oxymoron—after all, what do algorithms really know about serving people better?—the answer turns out to be, quite literally, more than you could ever know.


In an evolving industry like telecommunications, the race for customer acquisition and retention is paramount. This is driving heightened operator focus on better advertising performance and retail sales—whether in their own stores, through retail partners, or across digital channels. AI can help here by informing target audience creation, creative optimization, and inventory forecasts.


The decline of third-party cookies has many operators renewing their focus on collecting and acting upon their own first-party data across the customer lifecycle. Here, too, AI models can help CSPs identify and act upon signals, such as usage patterns or customer care calls. This type of customer context, an often overlooked signal, can be especially valuable when it comes to identifying “at risk” customers for retention efforts.
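To make this concrete, here is a minimal sketch of how a CSP data science team might score churn risk from first-party signals such as usage and recent care calls. The file name, column names, and model choice are illustrative assumptions, not a description of any operator’s actual pipeline.

```python
# Minimal sketch: scoring "at risk" subscribers from first-party signals.
# Column names (monthly_data_gb, care_calls_90d, etc.) are hypothetical
# placeholders for whatever a CSP actually captures.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

subscribers = pd.read_csv("subscriber_signals.csv")  # hypothetical extract

features = ["monthly_data_gb", "care_calls_90d", "tenure_months", "late_payments_12m"]
X = subscribers[features]
y = subscribers["churned"]  # 1 if the subscriber left in the following quarter

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

model = LogisticRegression(max_iter=1000)
model.fit(X_train, y_train)

print("Holdout AUC:", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))

# Flag the riskiest 5% of subscribers for proactive retention offers.
subscribers["churn_risk"] = model.predict_proba(X)[:, 1]
at_risk = subscribers.nlargest(int(0.05 * len(subscribers)), "churn_risk")
```

In practice the value comes less from the model itself than from wiring the resulting risk scores into the retention team’s existing workflows.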

Contact centers supporting upwards of 100 million subscribers are an expensive undertaking. Several top global operators have turned to conversational AI to reduce the volume of calls reaching human agents, and to document AI tools to shorten call handle times. Some companies report that Google’s conversational AI can cut the number of customer inquiries that need a human agent by half. Besides helping reduce costs and maintain margins for the operator, many customers also appreciate the efficiency and control of self-service.
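As an illustration of the routing idea (not Google’s Contact Center AI itself), the sketch below trains a tiny intent classifier that sends routine inquiries to self-service flows and escalates the rest to a human agent. The example utterances and labels are invented.

```python
# Illustrative sketch only: a tiny classifier that routes routine inquiries
# to self-service and escalates everything else to a human agent.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

training_utterances = [
    ("what is my current balance", "self_service"),
    ("how much data have I used this month", "self_service"),
    ("upgrade my plan to unlimited data", "self_service"),
    ("reset my voicemail password", "self_service"),
    ("I want to dispute a charge on my bill", "agent"),
    ("my service has been down for two days", "agent"),
    ("cancel my account immediately", "agent"),
    ("the technician never showed up", "agent"),
]
texts, labels = zip(*training_utterances)

router = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression(max_iter=1000))
router.fit(texts, labels)

def route(utterance: str) -> str:
    """Return 'self_service' for routine requests, or 'agent' to hand off."""
    return router.predict([utterance])[0]

print(route("how many gigabytes do I have left"))  # likely self_service
print(route("the bill you sent me is wrong"))      # likely agent
```

A production system would use far richer training data and a managed conversational AI platform, but the basic deflect-or-escalate decision looks much like this.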

Furthermore, while CSPs may not have a “factory” in the traditional sense, their network operations are far-flung and national, even global, in scale. They must operate at the industry standard of “five 9’s” (i.e., 99.999%) reliability for emergency communications and simultaneously deliver massive amounts of bandwidth to meet the public’s insatiable demand for communications and data.

And if it seems like a lot now, just consider the 23% annual bandwidth growth the industry will undergo with the rise of 5G and all the IoT, VR, and Web3 experiences that come with it. Keeping up, and keeping customers happy, will take new levels of network automation and predictive maintenance that only AI can provide.
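A hedged sketch of what AI-driven predictive maintenance can look like at its simplest: an anomaly detector over cell-site telemetry that flags degrading equipment before customers notice. The metrics, thresholds, and synthetic data below are assumptions for illustration only.

```python
# Hedged sketch: flagging anomalous network telemetry for proactive maintenance.
# Feature names and contamination rate are illustrative, not an operator's setup.
import numpy as np
import pandas as pd
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(7)
telemetry = pd.DataFrame({
    "packet_loss_pct": rng.normal(0.2, 0.05, 5000),
    "latency_ms": rng.normal(18, 3, 5000),
    "temperature_c": rng.normal(35, 4, 5000),
})
# Inject a handful of degraded readings to stand in for failing equipment.
telemetry.loc[4995:, ["packet_loss_pct", "latency_ms"]] = [[2.5, 80]] * 5

detector = IsolationForest(contamination=0.002, random_state=7)
telemetry["anomaly"] = detector.fit_predict(telemetry)  # -1 marks outliers

flagged = telemetry[telemetry["anomaly"] == -1]
print(f"{len(flagged)} readings flagged for inspection before customers notice an outage")
```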


TELUS, a world-leading communications technology company based in Canada, is already leveraging conversational AI through Google Cloud’s CCAI Insights to better serve its roster of global clients and their customers.


“As a company that supports our customers through many channels, we are able to provide a streamlined experience that transitions from digital support to live agent support,” Phil Schultz, vice-president of customer experience, told us in an interview. “With this new experience, we are able to provide a simple, consistent, intuitive, and friendly experience for simpler tasks, with our agents being able to focus on supporting our customers’ more complex issues. CCAI and Data Insight help TELUS ensure our customers get the support they need, when they need it.”

Realizing the value of AI for customer experience

Of course, all of these grand data aspirations are easy to articulate but hard to implement—at Google Cloud, we know these challenges firsthand. It’s why we empathize with the added challenges CSPs face from their legacy systems, and from the network complexity that has arisen over generations of technology and industry consolidation. It’s also why we’re excited to be partnering with top CSPs to solve these challenges.

Through our experiences in these partnerships, Google Cloud has identified four key success factors for driving business value from AI applied across the customer experience:

  1. Clear Focus. Success starts with a clear and shared understanding of what CSPs are solving for and the business value of doing so. This clarity will drive every activity to follow, with the business value serving as an important motivator to plow through challenges.
  2. No Silos. Nearly all enterprises struggle with how to break down data silos. Successful companies have a proactive strategy for data integration, data management, and analytics platforms to address the current as well as future needs.
  3. Data-driven. Choosing which part of the problem to tackle first and how to do so is a major determinant of value. Leading companies rely on data to help inform their approach to everything from deciding which use cases to tackle first, to developing and optimizing AI-driven virtual assistants.
  4. Shared risk & reward. We have found that success takes a partnership in which incentives are aligned, with partners having skin in the game.

In Google Cloud’s new report, “Using AI to win the customer experience battle in telecommunications,” we delve into these dimensions, using CSPs as a vehicle, and examine new and innovative ways to apply AI, and best practices for building an AI program focused on delivering value, not just promises.

For TELUS, the investment of time and planning required to execute on AI was apparent from the start. “Through our 10-year partnership with Google, TELUS is able to dive into all the phases of our customers’ journey ensuring it is easy for them to get the support they need,” Schultz said. “This allows our customers to more easily service themselves online, and our world class agents to have all of the information they need to provide quicker and easier support to our customers.”

AI solutions offer the exciting potential to transform the customer experience and bend the value curve for enterprises. Realizing this value requires thoughtful preparation, technology excellence, iterative progress, and a committed, aligned partnership. No company—whether an operator, cloud provider, or solution provider—can afford to let the sizable program investment become just another hype-cycle science experiment that fails to deliver business results.

Sean Allbee, Senior Principal, Customer Value and Transformation Advisory, Google Cloud. Sean works with telecommunications and media companies.

Amol Phadke joined Google Cloud in June 2020 as managing director: global telecom industry solutions.

Source: Does AI Have The Answer To The Customer Experience Riddle?


Related Posts

Artificial Intelligence Is Developing A Sense Of Smell: What Could A Digital Nose Mean In Practice?

We already know we can teach machines to see. Sensors enable autonomous cars to take in visual information and make decisions about what to do next when they’re on the road. But did you know machines can smell, too?

Aryballe, a startup that uses artificial intelligence and digital olfaction technology to mimic the human sense of smell, helps their business customers turn odor data into actionable information.

You’d be surprised how many practical use cases there are for technology like this. I interviewed Sam Guillaume, CEO of Aryballe, and asked him how digital olfaction works, how it’s currently being used on the market, and what his predictions are for the future of fragrance tech.

How the Nose Knows

Our human noses work by processing odor molecules released by organic and inorganic objects. When energy in objects increases (through pressure, agitation, or temperature changes), odors evaporate, making it possible for us to inhale and absorb them through our nasal cavities.

The odors then stimulate our nasal olfactory neurons and the olfactory bulb. Our brains pull together other information (like visual cues and memories of things we’ve smelled before) to identify the smell and decide what to do next.

You can watch my full interview with Aryballe CEO Sam Guillaume here:

Digital olfaction mimics the way humans smell by capturing odor signatures using biosensors, then using software solutions to analyze and display the odor data. Artificial Intelligence (AI) interprets the signatures and classifies them based on a database of previously collected smells.

“Over the last few years, technology has allowed us to essentially duplicate the way human olfaction actually works,” Guillaume says. “And by porting this technology to readily available techniques like semiconductors, for instance, we can make sensors that are small, convenient, easy to use, and cheap. In terms of performance and its ability to discriminate between smells, it’s pretty close to the way your nose works.”
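A minimal sketch of the classification step Guillaume describes, assuming an odor signature is simply a vector of biosensor channel responses matched against a library of previously collected smells. The channel count, readings, and labels below are invented for illustration; Aryballe’s actual models are certainly more sophisticated.

```python
# Minimal sketch: matching a new odor signature (a vector of biosensor
# responses) against a library of previously collected, labeled smells.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

# Each row is a hypothetical 8-channel biosensor response for a known odor.
library_signatures = np.array([
    [0.9, 0.1, 0.4, 0.0, 0.2, 0.7, 0.3, 0.1],   # coffee
    [0.8, 0.2, 0.5, 0.1, 0.1, 0.6, 0.4, 0.0],   # coffee
    [0.1, 0.9, 0.2, 0.7, 0.0, 0.1, 0.8, 0.3],   # spoiled milk
    [0.2, 0.8, 0.3, 0.6, 0.1, 0.2, 0.7, 0.4],   # spoiled milk
    [0.4, 0.3, 0.9, 0.2, 0.8, 0.1, 0.1, 0.6],   # leather interior
    [0.5, 0.2, 0.8, 0.3, 0.9, 0.0, 0.2, 0.5],   # leather interior
])
library_labels = ["coffee", "coffee", "spoiled_milk", "spoiled_milk", "leather", "leather"]

classifier = KNeighborsClassifier(n_neighbors=3)
classifier.fit(library_signatures, library_labels)

# Classify a fresh reading by its nearest neighbors in the odor library.
new_reading = np.array([[0.15, 0.85, 0.25, 0.65, 0.05, 0.15, 0.75, 0.35]])
print(classifier.predict(new_reading))  # expected: spoiled_milk
```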

Practical Use Cases for Digital Olfaction

So how does all this digital olfaction data turn into valuable insights for companies?

Odor analytics can help companies do things like:

●    Engineer the perfect “new car” smells in the automotive industry

●    Predict when maintenance needs to be done in industrial or automotive equipment

●    Automatically detect food spoilage in consumer appliances

●    Reject or approve raw material supply

●    Reduce R&D time for new foods and beverages

●    Ensure fragrances of personal care products like deodorants and shampoos last for a long time

●    Give riders peace of mind on public transportation by emitting an ambient smell

●    Create personal care devices and health sensors that use odors to detect issues and alert users

Leveraging the Power of Odor Data

In the future, companies like Aryballe will potentially be collaborating on projects that will create digital odor libraries for companies, or even creating devices that help COVID-19 patients recover their sense of smell.

Look for more advances as we can continue to find ways to teach computers how to sense the world around them and use the data they collect to help us in our everyday lives.

Find out more about Aryballe’s technology here, and learn more about machine learning and artificial intelligence on my blog.

Bernard Marr is an internationally best-selling author, popular keynote speaker, futurist, and a strategic business & technology advisor to governments and companies. He helps organisations improve their business performance, use data more intelligently, and understand the implications of new technologies such as artificial intelligence, big data, blockchains, and the Internet of Things. Why don’t you connect with Bernard on Twitter (@bernardmarr), LinkedIn (https://uk.linkedin.com/in/bernardmarr) or instagram (bernard.marr)?

Source: Artificial Intelligence Is Developing A Sense Of Smell: What Could A Digital Nose Mean In Practice?


More Contents:

What AI Practitioners Could Learn From A 1989 MIT Dissertation


More than thirty years ago, Fred Davis developed the Technology Acceptance Model (TAM) as part of his dissertation at MIT. It’s one of the most widely cited papers in the field of technology acceptance (a.k.a. adoption). Since 1989, it’s spawned an entire field of research that extends and adds to it. What does TAM convey and how might today’s AI benefit from it?

TAM is an intuitive framework. It feels obvious yet powerful and has withstood the test of time. Davis started with a premise so simple that it’s easy to take it for granted: A person will only try, use and ultimately adopt technology if they are willing to exert some effort. And what could motivate users to expend this effort?

He outlined several variables that could motivate users, but two proved most important: 1. Does it look easy to use? 2. Will it be useful? If the learning curve doesn’t look too steep and there’s something in it for them, a user will be inclined to adopt. Researchers have added to this foundation over the years; for example, we’ve learned that a user’s intention can also be influenced by subjective norms.

We’re motivated to adopt new tech at work when senior leadership thinks it’s important. Perceived usefulness can also be influenced by image, as in, “Does adopting this tech make me look good?” And lastly, usefulness is high if relevance to the job is high.
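Purely as an illustration of how TAM’s two core variables combine, here is a toy scoring function. Real TAM studies estimate these relationships from survey data using structural models; the weights and the logistic form below are assumptions, not Davis’s formulation.

```python
# Illustrative only: a toy function showing how perceived ease of use and
# perceived usefulness might combine into an adoption-intention estimate.
# The weights are invented; real TAM studies fit them from survey data.
import math

def adoption_intention(ease_of_use: float, usefulness: float,
                       w_ease: float = 0.4, w_useful: float = 0.6) -> float:
    """Map 1-7 Likert ratings to a 0-1 likelihood-of-adoption score."""
    # Center the 1-7 scale at 4 so a neutral rating gives a neutral score.
    linear = w_ease * (ease_of_use - 4) + w_useful * (usefulness - 4)
    return 1 / (1 + math.exp(-linear))

print(adoption_intention(ease_of_use=6, usefulness=6))  # high on both: likely adopter
print(adoption_intention(ease_of_use=6, usefulness=2))  # easy but not useful: unlikely
```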

TAM can be a powerful concept for an AI practitioner. It should be front-of-mind when embedding AI in an existing tool or process and when developing an AI-first product, as in, one that’s been designed with AI at the center of its functionality from the start. (Think Netflix.) Furthermore, AI can be used to drive adoption by leveraging TAM principles that increase user motivation.

Making AI more adoptable

With the proliferation of AI in sales organizations, AI algorithms are increasingly embedded in tools and processes leveraged by sales representatives and sales managers. Adding decision engines to assist sales representatives is becoming increasingly common. A sales organization may embed models that help determine a customer’s propensity to buy or churn, recommend next best actions or communications and more. The problem is, many of these initiatives don’t work because of a lack of adoption.

TAM can help us design these initiatives more carefully, so that we maximize the chances of acceptance. For example, if these models surface recommendations and results that fit seamlessly into reps’ tools and processes, they would perceive them as easy to use.

And if the models make recommendations that help a sales person land a new customer, prevent one from leaving and help them upsell or cross-sell when appropriate, reps would perceive them as useful. In other words, if the AI meets employees where they are and offers timely, beneficial support, adoption becomes a no-brainer.

We also see many new products and services that are AI first. For these solutions, if perceived ease of use or perceived usefulness are not high, there would be no adoption. Consider a bank implementing a tech-enabled solution like mobile check deposits. This service depends on customers having a trouble-free experience.

The Global Entry system at Newark airport uses facial recognition to scan international flyers’ faces. It’s voluntary, and the experience is fantastic. The kiosk recognizes my face, and a ticket is printed for me to take to the immigration officer. Personally, I find this AI-first process a better experience than the previous system that depended on fingerprints, and now I will always opt for the new one.

Using AI to drive adoption

And perhaps counterintuitively, what if AI was used to drive elements of TAM within existing technology? Can AI impact perceived usefulness? Can AI impact perceived ease of use? Consider CRM. It has been improved and refined over the years and is in use within most sales organizations, yet the level of dissatisfaction with CRM is high and adoption remains a challenge.

How can AI help? A machine learning algorithm that uses location services can recommend that a rep visit a nearby customer, increasing the perceived usefulness of their CRM solution. Intelligent process automation can also help reps see relevant information from a contracting database as information on renewals is being entered. Bots can engage customers on behalf of the representatives to serve up more qualified leads. The possibilities are numerous. All these AI features are designed to ensure that CRM lives up to its promise as a source of value to the sales representative.
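As a sketch of the “visit a nearby customer” recommendation, the snippet below ranks hypothetical accounts by distance from a rep’s current location and surfaces those with upcoming renewals. Coordinates, account names, and the renewal field are made up for illustration.

```python
# Sketch: rank accounts by distance from the rep's current location and
# surface the nearest ones with an upcoming renewal inside the CRM.
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two lat/lon points, in kilometers."""
    r = 6371.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlam / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

rep_location = (40.7484, -73.9857)  # hypothetical: rep is in midtown Manhattan
accounts = [
    {"name": "Acme Corp", "lat": 40.7527, "lon": -73.9772, "renewal_days": 30},
    {"name": "Globex", "lat": 40.6892, "lon": -74.0445, "renewal_days": 12},
    {"name": "Initech", "lat": 40.7580, "lon": -73.9855, "renewal_days": 200},
]

for account in accounts:
    account["distance_km"] = haversine_km(*rep_location, account["lat"], account["lon"])

suggestions = sorted(accounts, key=lambda a: (a["distance_km"], a["renewal_days"]))
for a in suggestions[:2]:
    print(f"Suggest visiting {a['name']} ({a['distance_km']:.1f} km away, renews in {a['renewal_days']} days)")
```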

Outside of sales, consider patients. In the past few years, many new technologies have been introduced to help diabetics. Adoption of this technology is critical to self-management, and self-management is critical to treating the disease. For any new technology in this space, patients need to see that it’s useful to them.

AI can play a role in gathering information such as glucose levels, activity and food intake and make recommendations on insulin dosing or caloric intake. Such information gathering could go a long way toward reducing the fatigue that diabetics feel while they make countless health and nutrition decisions throughout the day.

AI’s algorithmic nature makes it easy to forget that it’s another technology and that it can aid technology. Its novelty can convince us that everything about it is new. TAM holds up because it’s intuitive, straightforward and proven. While we boldly innovate a path forward in the world of AI, shed convention and think like a disruptor, let’s keep an eye on our history too. There’s some useful stuff in there.


Arun provides strategy and advisory services, helping clients build their analytics capabilities and leverage their data and analytics for greater commercial effectiveness. He currently works with clients on a broad range of analytics needs that span multiple industries, including technology, telecommunications, financial services, travel and transportation and healthcare. His areas of focus are AI adoption and ethics, as well as analytics organization design, capability building, AI explainability and process optimization.

Source: What AI Practitioners Could Learn From A 1989 MIT Dissertation


Big Ethical Questions about the Future of AI

Artificial intelligence is already changing the way we live our daily lives and interact with machines. From optimizing supply chains to chatting with Amazon Alexa, artificial intelligence already has a profound impact on our society and economy. Over the coming years, that impact will only grow as the capabilities and applications of AI continue to expand.

AI promises to make our lives easier and more connected than ever. However, there are serious ethical considerations to any technology that affects society so profoundly. This is especially true in the case of designing and creating intelligence that humans will interact with and trust. Experts have warned about the serious ethical dangers involved in developing AI too quickly or without proper forethought. These are the top issues keeping AI researchers up at night.

Bias: Is AI fair?

Bias is a well-established facet of AI (or of human intelligence, for that matter). AI takes on the biases of the dataset it learns from. This means that if researchers train an AI on data that are skewed by race, gender, education, wealth, or any other point of bias, the AI will learn that bias. For instance, an artificial intelligence application used to predict future criminals in the United States showed higher risk scores and recommended harsher actions for Black people than for white people, reflecting the racial bias present in America’s criminal incarceration data.

Of course, the challenge with AI training is there’s no such thing as a perfect dataset. There will always be under- and overrepresentation in any sample. These are not problems that can be addressed quickly. Mitigating bias in training data and providing equal treatment from AI is a major key to developing ethical artificial intelligence.
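One simple, commonly used first check for this kind of bias is to compare a model’s positive-outcome rate across groups. The sketch below computes that comparison (often called the disparate impact ratio) on a hypothetical set of predictions; the data and the four-fifths threshold are illustrative, not a complete fairness audit.

```python
# Hedged sketch: compare a model's positive-outcome rate across groups.
# The data frame and column names are hypothetical.
import pandas as pd

predictions = pd.DataFrame({
    "group":    ["A", "A", "A", "A", "B", "B", "B", "B"],
    "approved": [1,   1,   0,   1,   0,   1,   0,   0],
})

rates = predictions.groupby("group")["approved"].mean()
print(rates)

# A common rule of thumb (the "four-fifths rule") flags ratios below 0.8.
impact_ratio = rates.min() / rates.max()
print(f"Disparate impact ratio: {impact_ratio:.2f}"
      + ("  <- worth investigating" if impact_ratio < 0.8 else ""))
```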

Liability: Who is responsible for AI?

Last month when an Uber autonomous vehicle killed a pedestrian, it raised many ethical questions. Chief among them is “Who is responsible, and who’s to blame when something goes wrong?” One could blame the developer who wrote the code, the sensor hardware manufacturer, Uber itself, the Uber supervisor sitting in the car, or the pedestrian for crossing outside a crosswalk.

AI under development will have errors, change over the long term, and produce unforeseen consequences. Since AI is so complex, determining liability isn’t trivial. This is especially true when AI has serious implications for human lives, like piloting vehicles, determining prison sentences, or automating university admissions. These decisions will affect real people for the rest of their lives. On one hand, AI may be able to handle these situations more safely and efficiently than humans. On the other hand, it’s unrealistic to expect AI will never make a mistake. Should we write that off as the cost of switching to AI systems, or should we prosecute AI developers when their models inevitably make mistakes?

Security: How do we protect access to AI from bad actors?

As AI becomes more powerful across our society, it will also become more dangerous as a weapon. It’s possible to imagine a scary scenario where a bad actor takes over the AI model that controls a city’s water supply, power grid, or traffic signals. Scarier still is the militarization of AI, where robots learn to fight and drones can fly themselves into combat.

Cybersecurity will become more important than ever. Controlling access to the power of AI is a huge challenge and a difficult tightrope to walk. We shouldn’t centralize the benefits of AI, but we also don’t want the dangers of AI to spread. This becomes especially challenging in the coming years as AI becomes more intelligent and faster than our brains by an order of magnitude.

Human Interaction: Will we stop talking to one another?

An interesting ethical dilemma of AI is the decline in human interaction. Now more than any time in history it’s possible to entertain yourself at home, alone. Online shopping means you don’t ever have to go out if you don’t want to.

While most of us still have a social life, the amount of in-person interactions we have has diminished. Now, we’re content to maintain relationships via text messages and Facebook posts. In the future, AI could be a better friend to you than your closest friends. It could learn what you like and tell you what you want to hear. Many have worried that this digitization (and perhaps eventual replacement) of human relationships is sacrificing an essential, social part of our humanity.

Employment: Is AI getting rid of jobs?

This is a concern that repeatedly appears in the press. It’s true that AI will be able to do some of today’s jobs better than humans. Inevitably, the people doing those jobs will be displaced, and it will take a major societal initiative to retrain them for new work. However, it’s likely that AI will replace jobs that were boring, menial, or unfulfilling. Individuals will be able to spend their time on more creative pursuits and higher-level tasks. While jobs will go away, AI will also create new markets, industries, and jobs for future generations.

Wealth Inequality: Who benefits from AI?

The companies spending the most on AI development today are the ones that have a lot of money to spend. A major ethical concern is that AI will only serve to centralize wealth further. If an employer can lay off workers and replace them with unpaid AI, then it can generate the same amount of profit without the need to pay for employees.

Machines will create more wealth than ever in the economy of the future. Governments and corporations should start thinking now about how we redistribute that wealth so that everyone can participate in the AI-powered economy.

Power & Control: Who decides how to deploy AI?

Along with the centralization of wealth comes the centralization of power and control. The companies that control AI will have tremendous influence over how our society thinks and acts each day. Regulating the development and operation of AI applications will be critical for governments and consumers. Just as we’ve recently seen Facebook get in trouble for the influence its technology and advertising has had on society, we might also see AI regulations that codify equal opportunity for everyone and consumer data privacy.

Robot Rights: Can AI suffer?

A more conceptual ethical concern is whether AI can or should have rights. As a piece of computer code, it’s tempting to think that artificially intelligent systems can’t have feelings. You can get angry with Siri or Alexa without hurting their feelings. However, it’s clear that consciousness and intelligence operate on a system of reward and aversion. As artificially intelligent machines become smarter than us, we’ll want them to be our partners, not our enemies. Codifying humane treatment of machines could play a big role in that.

Ethics in AI in the coming years

Artificial intelligence is one of the most promising technological innovations in human history. It could help us solve a myriad of technical, economic, and societal problems. However, it will also come with serious drawbacks and ethical challenges. It’s important that experts and consumers alike be mindful of these questions, as they’ll determine the success and fairness of AI over the coming years.

By Steve Kilpatrick
Co-Founder & Director
Artificial Intelligence & Machine Learning


The Future Of Jobs And Education

The world of work has been changing for some time, with an end to the idea of jobs for life and the onset of the gig economy. But just as in every other field where digital transformation is ongoing, the events of 2020 have accelerated the pace of this change dramatically.

The International Labor Organization has estimated that almost 300 million jobs are at risk due to the coronavirus pandemic. Of those that are lost, almost 40% will not come back. According to research by the University of Chicago, they will be replaced by automation to get work done more safely and efficiently.

Particularly at risk are so-called “frontline” jobs – customer service, cashiers, retail assistants, and public transport being just a few examples. But no occupation or profession is entirely future-proof. Thanks to artificial intelligence (AI) and machine learning (ML), even tasks previously reserved for highly trained doctors and lawyers – diagnosing illness from medical images, or reviewing legal case history, for example – can now be carried out by machines.

At the same time, the World Economic Forum, in its 2020 Future of Jobs report, finds that 94% of companies in the UK will accelerate the digitization of their operations as a result of the pandemic, and 91% are saying they will provide more flexibility around home or remote working.


If you’re in education or training now, this creates a dilemma. Forget the old-fashioned concept of a “job for life,” which we all know is dead – but will the skills you’re learning now even still be relevant by the time you graduate?

One thing that’s sure is that we’re moving into an era where education is life-long. With today’s speed of change, there are fewer and fewer careers where you can expect the knowledge you pick up in school or university to see you through to retirement.

All of this has created a perfect environment for online learning to boom. Rather than moving to a new city and dedicating several years to studying for a degree, it’s becoming increasingly common to simply log in from home and fit education around existing work and family responsibilities.

This fits with the vision of Jeff Maggioncalda, CEO of online learning platform Coursera. Coursera was launched in 2012 by a group of Stanford professors interested in using the internet to widen access to world-class educational content. Today, 76 million learners have taken 4,500 different courses from 150 universities, and the company is at the forefront of the wave of transformation spreading through education.

 “The point I focus on,” he told me during our recent conversation, “is that the people who have the jobs that are going to be automated do not currently have the skills to get the new jobs that are going to be created.”

Without intervention, this could lead to an “everyone loses” scenario, where high levels of unemployment coincide with large numbers of vacancies going unfilled because businesses can’t find people with the necessary skills.


The answer here is a rethink of education from the ground up, Maggioncalda says, and it’s an opinion that is widely shared. Another WEF statistic tells us 66% of employers say they are accelerating programs for upskilling employees to work with new technology and data.

Models of education will change, too, as the needs of industry change. Coursera is preparing for this by creating new classes of qualification such as its Entry-Level Professional Certificates. Often provided directly by big employers, including Google and Facebook, these impart a grounding in the fundamentals needed to take on an entry-level position in a technical career, with the expectation that the student would go on to continue their education to degree level while working, through online courses, or accelerated on-campus semesters.

“The future of education is going to be much more flexible, modular, and online. Because people will not quit their job to go back to campus for two or three years to get a degree, they can’t afford to be out of the workplace that long and move their families. There’s going to be much more flexible, bite-sized modular certificate programs that add up to degrees, and it’s something people will experience over the course of their working careers,” says Maggioncalda.

All of this ties in nicely with industry’s growing requirement for workers who are able to continuously reskill and upskill to keep pace with technological change. It could spell the end of the traditional model, where our status as students expires as we pass into adulthood and employment.

Rather than simply graduating and waving goodbye to their colleges as they throw their mortarboards skywards, students could end up with life-long relationships with their preferred providers of education, paying a subscription to remain enrolled and able to continue their learning indefinitely.

“Because why wouldn’t the university want to be your lifelong learning partner?” Maggioncalda says.

“As the world changes, you have a community that you’re familiar with, and you can continue to go back and learn – and your degree is kind of never really done – you’re getting micro-credentials and rounding out your portfolio. This creates a great opportunity for higher education.”

Personally, I feel that this all points to an exciting future where barriers to education are broken down, and people are no longer blocked from studying by the fact they also need to hold down a job, or simply because they can’t afford to move away to start a university course.

With remote working increasingly common, factors such as where we happen to grow up, or where we want to settle and raise families, will no longer limit our aspirations for careers and education. This could lead to a “democratization of education,” with lower costs to the learner as employers willingly pick up the tab for those who show they can continually improve their skillsets.

As the world changes, education changes too. Austere school rooms and ivory-tower academia are relics of the last century. While formal qualifications and degrees aren’t likely to vanish any time soon, the way they are delivered in ten years’ time is likely to be vastly different than today, and ideas such as modular, lifelong learning, and entry-level certificates are a good indication of the direction things are heading.

You can watch my conversation with Jeff Maggioncalda in full, where, among other topics, we also cover the impact of Covid-19 on building corporate cultures and the implications of the increasingly globalized, remote workforce.

Bernard Marr


World Economic Forum

The Future of Jobs report maps the jobs and skills of the future, tracking the pace of change. It aims to shed light on the pandemic-related disruptions in 2020, contextualized within a longer history of economic cycles and the expected outlook for technology adoption, jobs and skills in the next five years. Learn more and read the report: wef.ch/futureofjobs2020
