Artificial Intelligence Will Improve Medical Treatments – Jennifer Kite-Powell

A digital health company from the UK wants to change the way patients interact with doctors by creating an artificial intelligence (AI) doctor in the form of a chatbot.

Babylon Health raised close to $60 million in April 2017 to diagnose illnesses with an AI chatbot on your smartphone. Around the same time, the Berlin- and London-based startup Ada announced its push into the AI chatbot space.

“The news that Babylon Health has raised near £50M to build an ‘AI doctor’ is a promising development for the health industry; trials are currently ongoing in London, where Babylon’s tech is being used as an alternative to the non-emergency 111 number,” said Dr. Joseph Reger, CTO, Fujitsu EMEIA.

The rapid commercialization of machine learning and big data has helped bring AI to the forefront of healthcare and life sciences and is set to change how the industry diagnoses and treats disease.

In a 2016 study by Frost & Sullivan, the market for AI in healthcare is projected to reach $6.6 billion by 2021, a 40% growth rate. The report goes on to say that clinical support from AI will strengthen medical imaging diagnosis processes and that using AI solutions for hospital workflows will enhance care delivery. Frost & Sullivan also reports that AI has the potential to improve outcomes by 30 to 40 percent while cutting treatment costs by as much as 50%.
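To make the projection concrete, here is a quick back-of-the-envelope check in Python. It assumes the 40% figure is a compound annual growth rate applied from a 2016 baseline, an assumption the summary above does not spell out:

```python
# Hypothetical sanity check of the Frost & Sullivan projection.
# Assumption: 40% is a compound annual growth rate (CAGR) from 2016 to 2021.
target_2021 = 6.6e9        # projected market size in USD
cagr = 0.40                # assumed compound annual growth rate
years = 2021 - 2016        # five growth periods

implied_2016_base = target_2021 / (1 + cagr) ** years
print(f"Implied 2016 baseline: ${implied_2016_base / 1e9:.2f}B")   # roughly $1.23B
```

Under that assumption, the projection implies the market was worth a little over $1 billion in 2016.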

“AI is now disrupting how businesses operate and will change the way that organizations create real value for the customer or patient. Industries can reap huge benefits by developing cooperative models that can quickly combine business needs with AI tech,” said Reger.

In its Fit for Digital research, Fujitsu found that 67% of business leaders believe that partnering with technology experts is essential.

AI In Chinese Hospitals

China has one of the highest lung cancer rates in the world. Forbes reported in April 2017 that there were more than 700,000 new cases of lung cancer in the country in 2015, and that China’s roughly 80,000 radiologists read around 1.4 billion radiology scans a year.

At Shanghai Changzheng Hospital in China, radiologists have been using AI technology from Infervision to read CT scans and X-rays and identify suspicious lesions and nodules in lung cancer patients.

The company, which partners with GE Healthcare, Cisco, and Nvidia and works with 20 tertiary grade A hospitals in China, pairs computerized tomography (CT) scans with AI that learns the core characteristics of lung cancer and then detects suspected cancer features across different CT image sequences. Earlier diagnosis allows doctors to prescribe treatments earlier.
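Infervision has not published its model internals, but detection systems of the kind described here are typically built as convolutional networks over CT slices. The sketch below is a minimal, hypothetical illustration of that pattern in PyTorch; it is not Infervision's architecture, and the input sizes are arbitrary:

```python
# Minimal, hypothetical sketch of a CT-slice classifier (suspicious vs. not suspicious).
import torch
import torch.nn as nn

class NoduleClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * 32 * 32, 64), nn.ReLU(),
            nn.Linear(64, 2),                      # two classes: suspicious / not suspicious
        )

    def forward(self, x):                          # x: batch of 128x128 single-channel CT slices
        return self.head(self.features(x))

model = NoduleClassifier()
scan_batch = torch.randn(4, 1, 128, 128)           # stand-in for preprocessed CT slices
print(model(scan_batch).shape)                     # torch.Size([4, 2])
```

In practice such a network would be trained on large sets of radiologist-labeled scans and paired with a detection stage that localizes nodules; the point here is only the general shape of the approach.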

In a statement, Chen Kuan, founder and CEO of Infervision, said that in no way will this technology ever replace doctors.

“It’s intended to eliminate much of the highly repetitive work and empower doctors to help them deliver faster and more accurate reports,” he said.

Fujitsu’s Reger says machine learning is seen as a time-saver, but it will only succeed if data is treated as the lifeblood of the system.

“In this instance, data will enable AI machines to learn and understand new medical functions, and then critically provide humans (e.g. doctors) with the necessary information to diagnose problems,” added Reger. “The potential application of AI in healthcare could even grow to possibly predict future illnesses even before they manifest, improving the quality of services for patients. All of this will not be achieved without vast swathes of data, an acceptance that AI will supplement jobs, not replace them, and the overall investment in the technology itself.”

The Neuroscience of Attention & Why Instructional Designers Should Know About It – Raluca C

You know all those classic arguments couples have that begin with “I told you but you never listen!”? In truth, the listening part is not the issue; the remembering (or the absence of it) is the real problem. Paying attention is no easy thing, and grabbing and holding someone’s attention is even trickier.

A fairly recent study calculated that the average attention span of a person has dropped from twelve to eight seconds, rendering us below the focusing capabilities of goldfish. Apparently this decrease is due to the fact that heavy multi-screeners find it difficult to filter out irrelevant stimuli; they’re more easily distracted by multiple streams of media.

On the plus side, the report found that people’s ability to multitask has dramatically improved. Researchers concluded that the changes were a result of the brain’s ability to adapt and change itself over time, and that a weaker attention span is a direct consequence of going mobile.

What instructional designers should know about brain wirings…

For e-learning designers who face the challenge of creating quality modules that facilitate information retention and transfer, it’s important to know how the brain works when it comes to attention – the first step in any learning process.

When faced with the challenge of processing the huge amounts of information it is being presented with, the brain brings forth several control measures. First, it prioritizes the different types of stimuli – it chooses what information to recognize and what to ignore, and it establishes a hierarchy of how much concentration each item deserves.

The brain is also wired to connect any new information to prior knowledge to aid the understanding of a new idea as well as to get a better picture of broader concepts.

Last but not least, the amount of time a person spends focusing on a certain topic is also important – some things can be learned in a few minutes, while others take much longer and also require pauses.

Since concentration requires effort, and effort is nobody’s favorite, it’s important for difficult information to be presented in an engaging way.

… And about the cortices involved

What neuroscience tells us is that in order for people to start paying attention, the stimuli need to make the cut. The brain’s capacity to discern between these stimuli is located in two different areas: the prefrontal and parietal cortices.

The first is located behind the forehead, spans the left and right sides of the brain, and has to do with conscious concentration. It is an important wheel of the motivational system and helps a person focus attention on a goal. The parietal cortex lies right behind the ear and is activated when we face sudden events requiring some action – it is what kept the human race alive through numerous encounters with those who considered us dinner.

Of course, throwing in a really big threatening dinosaur at the beginning of an e-learning module is not the way to go but it helps to keep in mind that people become focused when action is required of them or when they see how a certain learning experience might help them achieve a personal goal.

How attention relates to memory

Attention is a cognitive process that is closely related to another very important aspect of learning: memory. A certain learning intervention is deemed successful when the participants are able to remember and apply what was taught. Otherwise it can be the best experience ever but with no real knowledge value.

The brain’s permanent goal is to single out the stimulus that is the most immediately relevant and valuable, so it is easiest to pay attention when information is interesting. Take televised documentaries, for example. If the presentation, the script, the imagery and the voice-over are all working together, even the life of armadillos, who don’t do much over a period of a few months, can seem utterly fascinating.

For effective learning to take place, participants must focus their attention on the learning activity. It is the designer’s job to help them do so by including various elements and levels of interactivity. Simply presenting the information can prove highly counterproductive, since the mind typically wanders up to 40% of the time we read something.

Tips for getting learners’ attention

There are, of course, a lot of great ways to get and keep learner attention. Here are a few examples:

  • Using emotionally charged storytelling – there is nothing as engaging as a good narrative, emotionally spiked at its most important points;
  • Getting the learners involved with the content – interactivity is a must if the goal is to get people on board with learning;
  • Using great visuals – the reason for our decreasing attention is that we are assaulted by imagery; carefully choosing what and how learners see has great bearing on their involvement with the program;
  • Linking new concepts with familiar ones – the brain works by making connections between what we already know and what is novel to us. Designers should facilitate this process by including the best-suited comparisons in the content;
  • Keeping it simple – if something is interestingly presented, people will search for more information on their own. Cluttering screens does not help them learn more but prevents them from taking away what is essential.

Bottom line

If the learning material is not engaging, learners will have a hard time paying attention and that will lead to poor results. In order to create interesting material, instructional designers need to be mindful of what neuroscientists have to say about how the human brain works and include meaningful situations and opportunities throughout the modules.

AI and the Future of Work

Artificial Intelligence is on the verge of penetrating every major industry, from healthcare to advertising, transportation, finance, legal, education, and now the workplace. Many of us may have already interacted with a chatbot (defined as an automated, yet personalized, conversation between software and human users), whether on Facebook Messenger to book a hotel room or to order flowers through 1-800-Flowers. According to Facebook Vice President David Marcus, there are now more than 100,000 chatbots on the Facebook Messenger platform, up from 33,000 in 2016.

As we increase the usage of chatbots in our personal lives, we will expect to use them in the workplace to assist us with things like finding new jobs, answering frequently asked HR-related questions or even receiving coaching and mentoring. Chatbots digitize HR processes and enable employees to access HR solutions from anywhere. Using artificial intelligence in HR will create a more seamless employee experience, one that is nimbler and more user-driven.
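The vendors discussed here do not publish their chatbot internals, but the retrieval pattern behind most HR FAQ bots can be sketched in a few lines: match the employee's question against a library of known questions and return the closest answer. The questions, answers, and threshold below are invented for illustration:

```python
# Minimal, hypothetical HR FAQ bot: return the answer to the closest known question.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

faq = {
    "How much paid time off do I have left?": "Your PTO balance is shown in the HR portal under Time Off.",
    "How do I enroll in health benefits?": "Benefits enrollment opens each November in the HR portal.",
    "How do I report misconduct?": "Contact the ethics hotline or your HR business partner confidentially.",
}

questions = list(faq)
vectorizer = TfidfVectorizer().fit(questions)
question_vectors = vectorizer.transform(questions)

def answer(user_question: str) -> str:
    scores = cosine_similarity(vectorizer.transform([user_question]), question_vectors)[0]
    best = scores.argmax()
    # Route weak matches to a person instead of guessing.
    return faq[questions[best]] if scores[best] > 0.2 else "Let me connect you with an HR specialist."

print(answer("how many paid days off do I have left"))
```

A production chatbot would use richer intent models and connect to live HR systems, but the request-match-respond loop is the same idea.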

Artificial Intelligence Will Transform The Employee Experience

As I detailed in my column, The Intersection of Artificial Intelligence and Human Resources, HR leaders are beginning to pilot AI to deliver greater value to the organization by using chatbots for recruiting, employee service, employee development and coaching. A recent survey of 350 HR leaders conducted by ServiceNow finds 92% of HR leaders agree that the future of providing an enhanced level of employee service will include chatbots.

In fact, you can think of a chatbot as your newest HR team member, one that allows employees to easily retrieve answers to frequently asked questions. According to the ServiceNow survey, more than two-thirds of HR leaders believe employees are comfortable accessing chatbots to get the information they need, at the time they need it. The questions HR leaders believe employees are comfortable asking a chatbot range from the mundane and factual (how much paid time off do I have left?) to the more personal (how do I report a sexual misconduct experience?). The comfort level with using AI to answer employee inquiries is shown in Figure 1:

[Figure 1: Employee comfort with chatbots answering HR questions. Source: Future Workplace]

According to Deepak Bharadwaj, General Manager of the HR Product Line at ServiceNow, “By 2020, based on the adoption of chatbots in our personal lives, I can see how penetration in the workplace could reach adoption rates of as high as 75% with employees accessing a chatbot to resolve frequently asked HR questions and access HR solutions anywhere and anytime.” Bharadwaj points out how fast we are changing our behavior as consumers, given the dramatic rise of conversational AI technology and its ease of use. For example, Amazon’s Alexa now has more than 15,000 “skills” (Amazon’s term for voice-based apps), nearly all of which were created in the two years since Amazon opened Alexa to outside developers. In fact, 10,000 Alexa skills have been created since the fourth quarter of 2016.

Artificial intelligence and chatbots are revolutionizing both the candidate and employee experience. As Diana Wong, Senior Vice President of HR at Capital Group, says, “Technology is an enabler to delivering world-class Advisor and Investor experiences to our customers. So, we believe HR must mirror these best in class experiences by leveraging artificial intelligence for all phases of the employee life cycle from recruiting to on-boarding and developing employees.”

Capital Group is piloting a number of artificial intelligence technologies in HR, from using Textio to write more effective and bias-free job descriptions to using predictive-analytics-driven, web-based video interviewing through the MontageTalent platform. Wong believes the piloting and usage of artificial intelligence not only improves the efficiency and effectiveness of the candidate and employee experience, but also positions Capital Group as a modern employer in the eyes of Millennial workers.

However, there are barriers along the journey as HR experiments with artificial intelligence. I spoke about the impact of artificial intelligence to a group of senior HR leaders in Milan last week. This group identified a number of barriers to using artificial intelligence in HR, namely the fear of job loss among HR team members, a lack of skills to truly embrace these new technologies, and the change management needed to adapt to new ways of sourcing, recruiting, and engaging employees.

Wong emphasizes this when she says, “One of the critical success factors to adopting artificial intelligence for HR is the cultural orientation around change and on-going employee communications on how and why the organization is digitally transforming HR.”

Delivering a compelling employee experience is a competitive advantage in attracting and retaining talent. Companies are realizing that transforming employee experience is not an HR initiative, rather it is a business initiative. This means senior C-level executives from HR, IT, Digital Transformation, Real Estate, and Corporate Communications need to develop one common shared vision on what a memorable and compelling employee experience is and define the elements of the employee experience over the short, medium, and long term.

How Augmented Reality is Being Implemented in the Real World – Umeed Kothavala

Augmented Reality (AR) is the superimposing of digitally generated images onto a viewer’s real-world surroundings. Unlike Virtual Reality, which creates a completely artificial environment, AR uses the existing environment and overlays it with new information. Augmented reality apps are usually written using special 3D programs which allow developers to tie animation in the computer program to an AR “marker” in the real world. AR is now popularly used by advertisers to create 3D renders of products, such as cars, the inside of buildings, and machinery. This provides consumers with a 360-degree product view.
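As a rough, hypothetical sketch of that marker idea (not any particular vendor's pipeline), the snippet below uses OpenCV's ArUco module to find a printed marker in a camera frame and paste a 2D product render over it; the file names are stand-ins, and the API shown is the one in OpenCV 4.7 and later:

```python
# Hypothetical marker-based AR overlay with OpenCV's ArUco module (opencv-contrib-python >= 4.7).
import cv2
import numpy as np

frame = cv2.imread("camera_frame.jpg")            # stand-in for a live camera frame
overlay = cv2.imread("product_render.png")        # 2D render to pin onto the marker

aruco_dict = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)
detector = cv2.aruco.ArucoDetector(aruco_dict, cv2.aruco.DetectorParameters())
corners, ids, _ = detector.detectMarkers(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY))

if ids is not None:
    h, w = overlay.shape[:2]
    src = np.float32([[0, 0], [w, 0], [w, h], [0, h]])
    dst = corners[0][0].astype(np.float32)         # four corners of the first detected marker
    warped = cv2.warpPerspective(overlay, cv2.getPerspectiveTransform(src, dst),
                                 (frame.shape[1], frame.shape[0]))
    mask = warped.sum(axis=2) > 0                  # pixels where the render landed
    frame[mask] = warped[mask]                     # composite the render over the marker

cv2.imwrite("augmented_frame.jpg", frame)
```

A full AR app would do this per video frame and estimate the marker's 3D pose so the render tilts with the camera; the sketch only shows the 2D compositing step.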

The term ‘Augmented Reality’ was coined by Boeing researcher Thomas Caudell in 1990, to explain how head-mounted displays of electricians worked during the assembling of complicated wiring. Since then, the technology has been used in CAD programs for aircraft assembly, architecture, digital advertising, simulation, translation, military, and various medical procedures.

Tech giant Google unveiled Google Glass in 2013, propelling AR to a more wearable interface – glasses. It works by projecting onto the user’s lens while responding to voice commands, overlaying images, videos, and sounds.

Real-World Examples

AR has proven to be very useful across several industries when tied with location-based technology. Investments in this market continue to grow as several applications which leverage the power of AR are now available across different sectors. Its use in marketing is particularly appealing, as more detailed content can be put within a traditional 2D advert, with interactive, engaging results and a high potential for viral campaigns. Other fields utilizing AR with commendable results include:

  • Education: Academic publishers are developing applications which embed text, images, videos, and real-world curriculum with classroom lessons.
  • Travel: AR has enabled travellers to access real-time information about historical places and tourist sites by pointing their cameras’ viewfinders at specific subjects.
  • Translation: Globalization has propelled the development of translation applications that translate text into different languages such as French, Afrikaans, Spanish and many more.
  • Locators: With location applications, users can access information about places near their current location along with user reviews.
  • Gaming: It is being used to develop real-time 3D games, through Unity 3D engines.
  • Defense: Several governments are now implementing AR solutions for their military. The US military has begun to use Google Glass designed for the battlefield. The glasses display virtual icons superimposed on a real-world view, increasing the soldiers’ awareness.
  • Automotive: Back in 2013, Volkswagen launched an application for the Audi luxury brand which enabled potential customers to experience AR based car drives through the use of graphics, audio, and videos to enhance real-world vehicle motions.
  • Healthcare: Some optical manufacturers are now using AR to design smart contact lenses which repel optical radiation that could cause poor sight and eye cancer.

Statistics and Projected Growth

The market for AR applications and platforms in commercial, aviation, defense, and other fields is expected to be worth at least $56.8 billion by 2020, according to a research report by MarketsandMarkets.

The presence of several tech giants such as Google, Qualcomm, and Microsoft in the industry will boost AR growth across several geographies, beginning with North America, followed by the European market due to growth in the automotive and aerospace sectors, and then the Asia Pacific region as a result of its growing industrial and manufacturing sectors, especially in China and Japan.

Industries expected to heavily embrace augmented reality technologies include the healthcare, automotive, defense, education, and travel sectors. In fact, high-calibre investors are planning to invest in AR startups. Magic Leap, a giant AR startup which designs 2D generated imagery, has received over $590 million in investment funding since 2014.

Threats and Barriers to AR

Although AR seems to have huge market potential, there are specific threats which may restrict its mass adoption:

  • Lack of public or social awareness of mobile AR
  • Lack of profitability for enterprises
  • Huge monopolies within the AR market
  • Limitations in user experience
  • Poor marketing and advertising compared to VR
  • Budget limitations, mostly among SMEs
  • Privacy, security issues, and other concerns
  • Poor mobile internet connectivity in emerging markets

With the number of smartphones rising, augmented reality is definitely here to stay. More and more consumers are carrying phones with AR application capabilities. As long as augmented reality content remains engaging and innovative while embracing superior user experience, consumers will gravitate towards AR-friendly applications.

How the European Union’s GDPR Rules Impact Artificial Intelligence and Machine Learning – Mike Kaput

“This Regulation lays down rules relating to the protection of natural persons with regard to the processing of personal data and rules relating to the free movement of personal data.”

It’s no “We hold these truths to be self-evident…” but when the European Union (EU) drafted the General Data Protection Regulation (GDPR) that goes into effect on May 25, 2018, they definitely had individual freedoms in mind.

In this case, it’s freedom of individuals to control their personal data.

The GDPR is a broad regulation that outlines how companies may legally collect and use individual personal data—and what rights EU citizens have concerning their data.

It’s a major regulation with major effects. Companies that collect data from EU citizens must follow a number of regulations around collecting that data in order to legally use it. They must also respond to citizen requests to alter that data in certain circumstances.

Notes Elizabeth Juran, a consultant at marketing agency PR 20/20 (which powers the Marketing AI Institute):

“The GDPR is a European privacy law that protects consumers from unfair, unclear and unethical uses of their data. You may have noticed updates in your automation software or data collection tools like the one below from Google Analytics:

[Screenshot: data-retention notification from Google Analytics]

These aren’t your average skim-and-delete email notifications. The GDPR will change how we, as marketers, use data. Historically, companies haven’t been required to disclose information like the following:

  • What kinds of data they store about consumers.
  • What they’re using consumer data for.
  • Why they ask for (or require) the data they do.

Starting May 25, the rules about data will heavily favor the consumer. The law is specific to individuals who reside in the European Union (EU) and European Economic Community (EEC), but companies all over should be aware. If you have even one person on your contact list from the EU or EEC, your forms, privacy policy and email tactics will likely have to change to avoid breaking the rules for that contact.”

This is big for marketers of all stripes. But what effects might GDPR have on the use of artificial intelligence?

Turns out, a significant part of the regulation also deals with AI and algorithms. Like the rest of GDPR, the language may be construed broadly. No precedents have yet been set with the regulation. So a lot is up in the air as to what will actually be enforced and how it’ll be enforced.

GDPR and AI

According to the Brookings Institution, a US think tank:

The GDPR rules being implemented in Europe place severe restrictions on the use of artificial intelligence and machine learning. According to published guidelines, “Regulations prohibit any automated decision that ‘significantly affects’ EU citizens. This includes techniques that evaluates a person’s ‘performance at work, economic situation, health, personal preferences, interests, reliability, behavior, location, or movements.’” In addition, these new rules give citizens the right to review how digital services made specific algorithmic choices affecting people.

This statement alone creates substantial uncertainty if you know anything about artificial intelligence and machine learning.

Lots of AI systems run into the “black box” problem, in that they’re not very transparent about how their machine learning algorithms reach decisions. For consumers, this means you don’t necessarily know why AI may recommend what it recommends or take the actions it takes.

There’s no doubt the black box problem becomes troublesome the more AI is adopted in marketing and other industries. At some point, marketers will want some idea of how systems make decisions, especially as these systems recommend more sophisticated marketing actions.

For instance, if I have an AI system that prescribes how I should allocate my marketing budget, I’ll at least want some idea what inputs the system uses to make those decisions. (At least, I will if I need to explain any of this to my executive team or board.)

Does that mean you need to know exactly how the AI’s algorithms work? Probably not. But there’s a balance here that likely needs to be established.
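One common, partial way to strike that balance is model-agnostic attribution: shuffle each input in turn and see how much the model's accuracy suffers. The sketch below applies scikit-learn's permutation importance to a stand-in budget model; the features, data, and model are invented for illustration and are not tied to any real product:

```python
# Hypothetical sketch: ask a black-box model which inputs it leans on, via permutation importance.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
features = ["email_sends", "paid_search_spend", "social_posts", "webinar_count"]
X = rng.normal(size=(500, 4))
y = 3.0 * X[:, 1] + 0.5 * X[:, 0] + rng.normal(scale=0.1, size=500)   # spend dominates by design

model = GradientBoostingRegressor().fit(X, y)                          # stand-in "budget" model
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)

for name, score in sorted(zip(features, result.importances_mean), key=lambda p: -p[1]):
    print(f"{name:18s} importance ~ {score:.3f}")
```

An audit like this does not open the black box, but it does give a marketer (or a regulator) a ranked list of which inputs actually drive the recommendations.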

Another problem, however, is that the creators of AI systems can’t always fully explain why the AI makes its decisions. For sophisticated AI, like deep learning and neural networks, it is sometimes extremely difficult for the people who created these systems to pinpoint each and every step in the decision-making process.

Says AI expert Pedro Domingos, author of The Master Algorithm:

“The best learning algorithms are these neural network-based ones inspired by what we find in humans and animals. These algorithms are very accurate as they can understand the world based on a lot of data at a much more complex level than we can. But they are completely opaque. Even we, the experts, don’t understand exactly how they work. We only know that they do. So, we should not allow only algorithms which are fully explainable. It is hard to capture the whole complexity of reality and keep things at the same time accurate and simple.”

Hard as it is to believe, he’s right. There may not be an easy way for an AI system’s creator to explain how the system works. In the meantime, regulations like GDPR that require such explanations may limit the amazing efficacy of these systems, Domingos points out:

“Let’s take the example of cancer research, where machine learning already plays an important role. Would I rather be diagnosed by a system that is 90 percent accurate but doesn’t explain anything, or a system that is 80 percent accurate and explains things? I’d rather go for the 90 percent accurate system.”

GDPR presents some interesting conflicts and considerations for the companies that build AI. Brookings notes that it could hold back AI development in the EU:

“If interpreted stringently, these rules will make it difficult for European software designers (and American designers who work with European counterparts) to incorporate artificial intelligence and high-definition mapping in autonomous vehicles. Central to navigation in these cars and trucks is tracking location and movements. Without high-definition maps containing geo-coded data and the deep learning that makes use of this information, fully autonomous driving will stagnate in Europe. Through this and other data protection actions, the European Union is putting its manufacturers and software designers at a significant disadvantage to the rest of the world.”

Now, a lot of the effects will utterly depend on how lawmakers within the EU interpret GDPR.

Somewhat ironically, it’s not at all clear how they’ll come to their decisions, as they operate in a bit of a black box of their own.

Disclaimer: We love talking about everything related to AI—even legal regulations—but we’re not lawyers, nor should this content be construed in any way as legal advice. We recommend all companies consult with legal counsel about GDPR-related questions and actions.

AI Can Now Identify You By Your Walking Style – Ashley Sams

Like snowflakes, every individual’s walking style is unique. Gizmodo reports that work is being done to create a new footstep-recognition tool that could replace retinal scanners and fingerprinting at security checkpoints.

“Each human has approximately 24 different factors and movements when walking, resulting in every individual person having a unique, singular walking pattern,” says Omar Costilla Reyes, the lead author of the new study and a computer scientist at the University of Manchester.

Reyes created the largest footstep database in existence by collecting 20,000 footstep signals from 120 individuals. Using this database, Reyes trained the artificially intelligent system to scour the data and analyze the weight distribution, gait speed, and three-dimensional measures of each walking style.
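The study's real pipeline uses deep networks over rich spatial, temporal, and pressure representations of each footstep; the sketch below only illustrates the underlying idea of mapping per-step features to an identity, with invented stand-in data for the 120 walkers and 24 gait factors mentioned above:

```python
# Hypothetical sketch of footstep identification: per-step feature vectors -> person ID.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n_people, steps_per_person, n_features = 120, 50, 24    # 24 gait factors, per the article

# Give each person a characteristic profile; individual steps vary around it.
profiles = rng.normal(size=(n_people, n_features))
X = np.repeat(profiles, steps_per_person, axis=0)
X += rng.normal(scale=0.3, size=X.shape)
y = np.repeat(np.arange(n_people), steps_per_person)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)
print(f"held-out identification accuracy: {clf.score(X_test, y_test):.1%}")
```

Because the synthetic profiles are well separated, the toy classifier scores very high; the hard part of the real work is extracting robust features from raw pressure and motion signals.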

The results so far show that, on average, the system is 100 percent accurate in identifying individuals.

Which Online Conversations Will End in Conflict? AI Knows.

According to The Verge, researchers at Cornell University, Google Jigsaw, and Wikimedia have created an artificial intelligence system that can predict whether or not an online conversation will end in conflict.

To do so, they trained the system using the “talk page” on Wikipedia articles—where editors discuss changes to phrasing, the need for better sources, and so on.

The system is trained to look for several indicators to gauge whether the conversation is amicable or unfriendly. Signs of a positive conversation include the use of the word “please,” greetings (“How’s your day going?”), and gratitude (“Thanks for your help”).

By contrast, telling signs of a negative dialogue include direct questioning (“Why didn’t you look at this?”) and use of second person pronouns (“Your sources are incomplete”).
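The researchers' classifier learns these cues (and politeness strategies more generally) from Wikipedia talk-page data; purely as a toy illustration of the marker-counting intuition, an opening comment can be scored by tallying friendly versus confrontational patterns:

```python
# Toy heuristic inspired by the markers above; not the Cornell/Jigsaw/Wikimedia model.
import re

FRIENDLY = [r"\bplease\b", r"\bthanks?\b", r"\bthank you\b", r"how's your day", r"\bappreciate\b"]
HOSTILE = [r"\byou(r)?\b", r"why didn't you", r"why don't you", r"\bwrong\b"]

def tension_score(comment: str) -> int:
    text = comment.lower()
    friendly = sum(len(re.findall(p, text)) for p in FRIENDLY)
    hostile = sum(len(re.findall(p, text)) for p in HOSTILE)
    return hostile - friendly          # positive values suggest the exchange may turn sour

print(tension_score("Thanks for the quick reply, please feel free to add more detail."))   # -2
print(tension_score("Why didn't you look at this? Your sources are incomplete."))          # 3
```

The real system is far subtler: it weighs the cues, accounts for context, and is evaluated on matched pairs of conversations, which is how the 64 percent figure below was obtained.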

Currently, the AI can correctly predict the sentiment outcome of an online discussion 64 percent of the time. Humans still perform the task better, making the right call 72 percent of the time. However, this development shows we’re on the right path to creating machines that can intervene in online arguments.

AI Across Industries

In a recent Forbes article, author Bernard Marr shares 27 examples of artificial intelligence and machine learning currently being implemented across industries. If you don’t have time to read the full list, we’ve shared a few of our favorites below.

In consumer goods, companies like Coca-Cola and Heineken are using artificial intelligence to sort through their mounds of data to improve their operations, marketing, advertising, and customer service.

In energy, GE uses big data, machine learning, and Internet of Things (IoT) technology to build an “internet of energy.” Machine learning and analytics enable predictive maintenance and business optimization for GE’s vision of a “digital power plant.”

In social media, tech giants Twitter, Facebook, and Instagram are using artificial intelligence to fight cyberbullying, racist content, and spam, further enhancing the user experience.

AI Is Already Changing The Way We Think About Photography – Evgeny Tchebotarev

AI is rapidly changing the way we think about photography. Just a couple of years from now, most advances in the photo space will be AI-centric, and not optics or sensor-centric as before. The advancement in photography technology will, for the first time ever, be untethered from physics, and will create a whole new way of thinking about photography. This is how it’s going to happen.

Processing power

Just six months ago we saw the first glimpse of AI entering our consumer world when Apple introduced the A11 Bionic chip with its neural engine, which powers the current generation of iPhones. The A11 is important because the chip is specifically designed for tasks such as image and face recognition and AR applications, where it is far more effective.

I then wrote that the Google Pixel line would introduce its own hardware chips designed for specific tasks, and that indeed happened sooner than anyone—including me—expected, as the Pixel 2 featured a dedicated image-enhancement chip to help with image processing on the fly.

What made it intriguing is that when the Pixels were announced and shipped, there was no mention of the feature; only sometime later did Google admit that the Pixels had a dedicated chip which would be “enabled” sometime in the future (if you own a Pixel 2 today, this hardware feature is already enabled).

Then came Chinese smartphone maker Huawei with the P20 Pro, featuring 4 cameras — 1 in front and 3 in the back. In addition to achieving the highest DxO Mark score to date, the Huawei P20 Pro is packed with AI features, such as real-time scene recognition that can discern 500 scenarios across 19 categories (animals, landscapes, and so on), as well as an advanced night mode in which the AI assists in processing noisy photos, making them almost perfect.

The Verge has great coverage with image samples to provide a good overview of this photo powerhouse. As the next generation of smartphone products are developed, many manufacturers are focused on image capture and real-time processing, partially because it’s a great marketing differentiator, but also because advances in this area are clearly visible to the consumer.

Catering to the pros

But in professional and semi-pro settings, there are several other developments that are key to image quality. First is the processing that has to happen right after the photo is taken. Advances in RAW processing have been steady and predictable (yet very welcome), but AI is ready to supercharge this process. Recently PetaPixel featured a research paper, “Learning to See in the Dark” by Chen Chen, Qifeng Chen, Jia Xu, and Vladlen Koltun, that covers techniques for recovering extremely underexposed RAW files.
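At a high level, the paper packs the camera's Bayer RAW mosaic into four color planes, multiplies it by the desired amplification ratio, and lets a convolutional network map the result to a clean RGB image. The sketch below follows that outline with a tiny network standing in for the paper's U-Net; it is illustrative only:

```python
# Simplified sketch of the "Learning to See in the Dark" pipeline (toy network, not the paper's U-Net).
import torch
import torch.nn as nn

def pack_bayer(raw: torch.Tensor) -> torch.Tensor:
    """Rearrange an HxW Bayer mosaic into 4 half-resolution color planes."""
    return torch.stack([raw[0::2, 0::2], raw[0::2, 1::2],
                        raw[1::2, 0::2], raw[1::2, 1::2]])

enhancer = nn.Sequential(                       # stand-in for the paper's U-Net
    nn.Conv2d(4, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 12, 3, padding=1),            # 12 channels = 3 RGB x a 2x2 sub-pixel block
    nn.PixelShuffle(2),                         # back to full resolution, 3 channels
)

raw_frame = torch.rand(512, 512) * 0.02         # stand-in for a severely underexposed RAW frame
amplification = 100.0                           # ratio of desired exposure to actual exposure
rgb = enhancer(pack_bayer(raw_frame * amplification).unsqueeze(0))
print(rgb.shape)                                # torch.Size([1, 3, 512, 512])
```

The trained network in the paper learns, from pairs of short- and long-exposure captures, to denoise and color-correct in one pass, which is what makes the recovered images look so clean.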

For the consumer, it means that AI-assisted software can create high-quality images well beyond the current physical limit — allowing smaller sensors (such as those found in drones or mirrorless cameras) to leapfrog current top-end DSLRs.

In other applications, it might allow tiny security cameras to yield high-quality imagery, increasing overall surveillance.

Photo optimization

One intriguing technology I had a chance to see recently is AI-powered upscaling, far better in quality than anything currently available to the public. A team of AI developers at Skylum is putting the finishing touches on technology that will allow smartphone images to be upscaled and printed at incredibly high resolution and sharpness. As I’ve previously pointed out, not everyone has an iPhone X in their pocket — hundreds of millions of people today are buying brand new phones that use 4-year-old technology, so sharper, crisper photos from outdated smartphone sensors will allow millions of people to future-proof their precious moments.

Thousands of kilometers from Skylum’s AI research lab is another startup that is stealthily applying quantum mechanics research to RAW files and promising to compress your photos up to 10x without loss of data.

A year ago Apple introduced HEIF, the High Efficiency Image Format. If you use an iPhone with iOS 11, you are likely using HEIF without even knowing it. HEIF allows for higher-quality images (compared to JPEG) at about half the size, letting you keep twice as many photos as before. Dotphoton, a small startup from Switzerland, is aiming to improve on the HEIF format and is focusing on professional applications, from aerial footage to professional photographers.

After a long technological hiatus in image tech, we are yet again seeing an explosion of interest in the space. Photography plays an important role in every tech company, but nowhere is it more important than in the smartphone race. And as September edges closer, Google and Apple will both be aiming to announce cutting-edge photography advances. Yet an influx of smaller players is innovating at a rapid rate and raising the stakes for everyone.

Evgeny Tchebotarev founded 500px (acquired by VCG), backed by A16Z. He is VP of Business Development Asia at Skylum and Director of Startup Grind Taipei. He is based in Taipei, Taiwan.

Artificial Intelligence Could Help Generate the Next Big Fashion Trends – Emily Matchar

A fashion designer working on a new collection has an idea, but wonders if it’s been done before. Another is looking for historical inspiration—1950s-style wasp waists or 80s-era padded shoulders.

Soon, they might turn to Cognitive Prints for help. The suite of AI tools IBM is developing for the fashion industry can take a photo of a dress or a shirt and search for similar garments. It can search for images with specific elements—Mandarin collars, for example, or gladiator laces, or fleur-de-lis prints. It can also design patterns itself, based on any image data set a user inputs—architectural images, amoebas, sunsets.
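IBM has not detailed how Cognitive Prints indexes its swatches, but visual search of this kind is commonly built by embedding every image with a pretrained network and ranking the library by similarity to the query. The sketch below is a minimal, hypothetical version of that pattern (the file names are stand-ins, and it assumes torchvision 0.13 or later):

```python
# Hypothetical visual-similarity search over a swatch library; not IBM's Cognitive Prints.
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
backbone.fc = torch.nn.Identity()               # keep the 512-d feature vector, drop the classifier
backbone.eval()

prep = T.Compose([T.Resize(256), T.CenterCrop(224), T.ToTensor(),
                  T.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])])

@torch.no_grad()
def embed(path: str) -> torch.Tensor:
    return backbone(prep(Image.open(path).convert("RGB")).unsqueeze(0)).squeeze(0)

# "swatches/" stands in for a print library; "query.jpg" is the designer's photo.
library = {name: embed(f"swatches/{name}") for name in ["floral_01.jpg", "fleur_de_lis_03.jpg"]}
query = embed("query.jpg")
ranked = sorted(library, reverse=True,
                key=lambda n: torch.cosine_similarity(query, library[n], dim=0).item())
print("most similar swatches:", ranked[:5])
```

A production system would fine-tune the embeddings on fashion imagery and add the metadata filters (year, designer, inspiration) described below, but nearest-neighbor search over embeddings is the usual core.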

“Fashion designers arduously put in efforts and time in coming up with new designs which could potentially be trend-setters,” says Priyanka Agrawal, a research scientist at IBM Research India, who has worked on Cognitive Prints. “Additionally, they have inspirations like architecture or technology, which they aspire to translate into their work. However, it becomes difficult to do something novel and interesting every single time. We wanted to make it easier for them by augmenting the design lifecycle.”

The AI image search engine, a collaboration between IBM and the Fashion Institute of Technology (FIT), was trained using 100,000 print swatches from 10 years of winning Fashion Week entries. Users can filter results by year, designer or inspiration (say, “Japanese street wear”). Designers can get inspired, or can make sure their inspiration is really their own and not inadvertent plagiarism (Gucci was recently accused of ripping off designs from legendary Harlem tailor Dapper Dan; they’ve since launched a collaboration).

The Cognitive Prints team is looking at making several extensions to the tool’s abilities. They want to enable designers to make custom edits to Cognitive Prints-generated designs, like changing the background color or, say, swapping spirals for circles on a fabric. They’d also like to teach the tool to design entire garments given just a few specifications, like “red one-shoulder dresses with ruffled hem.”

 

The use of AI in fashion has exploded in recent years. Various online services use AI to peruse the internet or your own social media data to suggest new outfits they think might be to your taste. Indian designers Shane and Falguni Peacock used IBM’s AI platform, Watson, to search a half-century of Bollywood and high-fashion images—some 600,000 in total—to help them create a new East-meets-West collection. Tommy Hilfiger partners with IBM and FIT to use AI to help identify trends in real time, for quicker design-to-store turnaround. Amazon has created its own AI designer as well, capable of generating new garments.

Agrawal thinks we’ll be seeing much more of this in the near future.

“As AI progress continues to advance, fashion [will] see more transformations,” she says. “For example, with the rise of conversational agents and virtual reality/augmented reality technology, it should not be long until users can not only query fashion catalogs but also interact, iterate and [be inspired by] the technology.”

Muchaneta Kapfunde, founder and editor-in-chief of the technology and fashion site FashNerd, agrees. AI is becoming common at the retail end of fashion, she says, with stores using algorithms to predict customers’ needs. It’s also being used in attempts to create more sustainable materials, an important consideration in an industry that’s one of the world’s top polluters.

But Kapfunde thinks it will be a while before AI tools like Cognitive Prints are ready to design quality garments on their own.

“The idea of using technology to design a perfect dress, it sounds great in theory, but there’s still a lot of work to be done. It’s not so easy to implement,” she says. “We still need the human touch.”

AI And The Third Wave Of Silicon Processors

The semiconductor industry is currently caught in the middle of what I call the third great wave of silicon development for processing data. This time, the surge in investment is driven by the rising hype and promising future of artificial intelligence, which relies on machine learning techniques referred to as deep learning.

As a veteran with over 30 years in the chip business, I have seen this kind of cycle play out twice before, but the amount of money being plowed into the deep learning space today is far beyond the amount invested during the other two cycles combined.

The first great wave of silicon processors began with the invention of the microprocessor itself in the early 70s. There are several claimants to the title of the first microprocessor, but by the early 1980s, it was clear that microprocessors were going to be a big business, and almost every major semiconductor company (Intel, TI, Motorola, IBM, National Semiconductor) had jumped into the race, along with a number of hot startups.

These startups (Zilog, MIPS, Sun Microsystems, SPARC, Inmos Transputer) took the new invention in new directions. And while Intel clearly dominated the market with its PC-driven volumes, many players continued to invest heavily well into the 90s.

As the microprocessor wars settled into an Intel-dominated détente (with periodic flare-ups from companies such as IBM, AMD, Motorola, HP and DEC), a new focus for the energy of many of the experienced processor designers looking for a new challenge emerged: 3-D graphics.

The highly visible success of Silicon Graphics, Inc. showed that there was a market for beautifully rendered images on computers. The PC standard evolved to enable the addition of graphics accelerator cards by the early 90s, and when SGI released the OpenGL standard in 1992, a market for independently designed graphics processing units (GPUs) was enabled.

Startups such as Nvidia, Rendition, Raycer Graphics, ArtX and 3dfx took their shots at the business. At the end of the decade, ATI bought ArtX, and the survivors of this second wave of silicon processor development were set. While RISC-based architectures like ARM, MIPS, PowerPC and SPARC persisted (and in ARM’s case, flourished), the action in microprocessors never got back to that of the late 80s and early 90s.

Competition between Nvidia and ATI (eventually acquired by AMD) drove rapid advances in GPUs, but the barrier to entry for competitors was high enough to scare off most new entrants.

In 2006, Geoffrey Hinton published a paper that described how a long-known technology referred to as neural networks could be improved by adding more layers to the networks.

This discovery changed machine learning into deep learning. In 2009, Andrew Ng, a researcher at Stanford University, published a paper showing how the computing power of GPUs could be used to dramatically accelerate the mathematical calculations required by convolutional neural networks (CNNs).

These discoveries — along with work by people like Yann LeCun and Yoshua Bengio, among many others — put in place the elements required to accelerate the development of deep learning systems: large labeled datasets, high-performance computing, new deep learning algorithms and the infrastructure of the internet to enable large-scale work and sharing of results around the world.

The final ingredient required to launch a thousand (or at least several hundred) businesses was money, which soon started to flow in abundance with venture capital funding for AI companies almost doubling every year from 2012. In parallel, large companies — established semiconductor heavyweights like Intel and Qualcomm and computing companies like Google, Microsoft, Amazon and Baidu — started to invest heavily, both internally and through acquisition.

Over the past couple of years, we have seen the rapid buildup of the third wave of silicon processor development, which has primarily targeted deep learning. A significant difference between this wave of silicon processor development and the first two waves is that the new AI or deep learning processors rarely communicate directly with user software or human interfaces — instead, these processors operate on data.

Given this relative isolation, AI processors are uniquely able to explore radically different and new implementation alternatives that are more difficult to leverage for processors that are constrained by software or GUI compatibility. There are AI processors being built in almost every imaginable way, from building on traditional digital circuits to relying on analog circuits (Mythic, Syntiant) to derivatives of existing digital signal processing designs (Cadence, CEVA) and special-purpose optimized circuits for deep learning computations (Intel Nervana, Google TPU, Graphcore).

And one popular chip architecture has been revived by a technology from the 30-year-old Inmos Transputer: systolic processing (Wave Computing, TPU), proving that everything does indeed come back in fashion one day. Think of systolic processing as the bell bottoms of the silicon processor business.
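For readers who have not met the term, a systolic array is a grid of simple multiply-accumulate cells through which operands are pumped in lock-step, so each value is reused by many cells without a round trip to memory. The toy simulation below (illustrative only, not any vendor's design) shows an output-stationary array computing a matrix product:

```python
# Toy simulation of an output-stationary systolic array computing C = A @ B.
# Each cell (i, j) multiplies whatever operands reach it on a given tick and accumulates locally;
# the inputs are skewed so A streams across the rows while B streams down the columns.

def systolic_matmul(A, B):
    n, k, m = len(A), len(B), len(B[0])
    C = [[0.0] * m for _ in range(n)]
    for t in range(n + m + k - 2):              # ticks until the last cell finishes
        for i in range(n):                      # in hardware every cell fires in parallel
            for j in range(m):
                s = t - i - j                   # index of the dot-product term arriving now
                if 0 <= s < k:
                    C[i][j] += A[i][s] * B[s][j]
    return C

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
print(systolic_matmul(A, B))                    # [[19.0, 22.0], [43.0, 50.0]]
```

Because the same data rolls past many cells, a systolic design trades flexible control for enormous multiply-accumulate throughput per watt, which is exactly what dense neural-network math rewards.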

There are even companies such as Lightmatter looking to use light itself, a concept known as photonic processing, to implement AI chips. The possibilities for fantastic improvements in performance and energy consumption are mind-boggling — if we can get light-based processing to work.

This massive investment in deep learning chips is chasing what looks to be a vast new market. Deep learning will likely be a new, pervasive, “horizontal” technology, one that is used in almost every business and in almost every technology product. There are deep learning processors in some of our smartphones today, and soon they will be in even lower-power wearables like medical devices and headphones.

Deep learning chips will coexist with industry-standard servers in almost every data center, accelerating new AI algorithms every day. Deep learning will be at the core of the new superchips that will enable truly autonomous driving vehicles in the not-too-distant future. And, on top of all of this silicon, countless software offerings will compete to establish themselves as the new Microsoft, Google or Baidu of the deep learning future.

By: Ty Garibay

UK Officials Reveal 5 New Principles for Businesses Working With AI

These guiding principles may be the first step towards codified laws governing businesses’ use of AI.