Now You Can Rent a Robot Worker For Less Than Paying a Human

Polar Manufacturing has been making metal hinges, locks, and brackets in south Chicago for more than 100 years. Some of the company’s metal presses—hulking great machines that loom over a worker—date from the 1950s. Last year, to meet rising demand amid a shortage of workers, Polar hired its first robot employee.

The robot arm performs a simple, repetitive job: lifting a piece of metal into a press, which then bends the metal into a new shape. And like a person, the robot worker gets paid for the hours it works.

Jose Figueroa, who manages Polar’s production line, says the robot, which is leased from a company called Formic, costs the equivalent of $8 per hour, compared with a minimum wage of $15 per hour for a human employee. Deploying the robot allowed a human worker to do different work, increasing output, Figueroa says.

“Smaller companies sometimes suffer because they can’t spend the capital to invest in new technology,” Figueroa says. “We’re just struggling to get by with the minimum wage increase.”

The fact that Polar didn’t need to pay $100,000 upfront to buy the robot, and then spend more money to get it programmed, was crucial. Figueroa says that he’d like to see 25 robots on the line within five years. He doesn’t envisage replacing any of the company’s 70 employees, but says Polar may not need to hire new workers.
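Those figures make the lease-versus-buy math easy to sketch. In the toy calculation below, the hourly rates and purchase price come from the article; the annual utilization is an illustrative assumption:

```python
# Hypothetical break-even comparison: leasing a robot at $8/hour versus a
# $100,000 upfront purchase. Rates and price are from the article; the
# utilization hours are an assumption for illustration only.
LEASE_RATE = 8            # dollars per hour (article)
HUMAN_WAGE = 15           # minimum wage per hour (article)
PURCHASE_PRICE = 100_000  # upfront robot cost (article)
HOURS_PER_YEAR = 2_000    # assumed single-shift utilization

annual_lease_cost = LEASE_RATE * HOURS_PER_YEAR
annual_human_cost = HUMAN_WAGE * HOURS_PER_YEAR
years_to_match_purchase = PURCHASE_PRICE / annual_lease_cost

print(f"Annual lease cost: ${annual_lease_cost:,}")
print(f"Annual human cost: ${annual_human_cost:,}")
print(f"Years of leasing to equal the purchase price: "
      f"{years_to_match_purchase:.1f}")
```

At that utilization the lease runs $16,000 a year against $30,000 for minimum-wage labor, and it would take more than six years of fees to reach the quoted purchase price, before counting programming and maintenance.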

Formic buys standard robot arms, and leases them along with its own software. They’re among a small but growing number of robots finding their way into workplaces on a pay-as-you-go basis.

The pandemic has led to shortages of workers across numerous industries, but many smaller firms are reluctant to write big checks for automation.

“Anything that can help reduce labor count or the need for labor is obviously a plus at this particular time,” says Steve Chmura, chief operating officer at Georgia Nut, a confectionery company in Skokie, Illinois, that has been struggling to find employees and also rents robots from Formic.

The robot-as-employee approach could help automation spread into smaller businesses more rapidly by changing the economics. Companies such as Formic see an opportunity to build large businesses by serving many small firms. Many are mining the data they collect to help refine their products and improve customers’ operations.

Shahin Farshchi, an investor in Formic, likens the state of robotics today to computing before personal computers took off, when only rich companies could afford to invest in massive computer systems that required considerable expertise to program and maintain. Personal computing was enabled by companies including Intel and Microsoft that made the technology cheap and easy to use. “We’re entering that same time now with robots,” Farshchi says.

Robots have been taking on new jobs in recent years as the technology becomes more capable as well as easier and cheaper to deploy. Some hospitals use robots to deliver supplies and some offices employ robotic security guards. The companies behind these robots often provide them on a rental basis.

Jeff Burnstein, president of the Association for Advancing Automation, an industry body, says rising demand for automation among smaller companies is driving interest in robotics as a service. The approach has seen particular traction among warehouse fulfillment firms, Burnstein says.

It might eventually become normal to pay robots to do all sorts of jobs, Burnstein says, pointing to RoboTire, a startup developing a robot capable of switching the tires on a car. “As more and more companies automate in different industries, you’re seeing more receptivity to robotics as a service,” he says.

The International Federation of Robotics, an organization that tracks robot trends globally, projected in October that the number of robots sold last year would grow 13 percent. One market analysis, from 2018, projected that the number of industrial robots that are leased or that rely on subscription software would grow from 4,442 units in 2016 to 1.3 million in 2026.

“Cost declines are great for the diffusion of a technology,” says Andrew McAfee, a principal research scientist at MIT who studies the economic implications of automation.

McAfee says robots themselves have become cheaper and more user friendly in recent years thanks to the falling cost of sensors and other components, a trend that he expects will continue. “They are the peace dividend of the smartphone wars,” he says.

Dustin Pederson, CFO of Locus Robotics, a company that leases robots for use in warehouses, says his company’s revenue has grown sixfold over the past year amid booming ecommerce and a shortage of workers. “To be able to step in with a subscription model makes automation a lot friendlier,” Pederson says. “And we are still early on in the overall adoption of robotics in the warehousing industry.”

It’s unclear—even to economists—what impact the growing use of robots will have on the supply of jobs. Research from Daron Acemoglu and Pascual Restrepo, economists at MIT and Boston University, respectively, suggests that the adoption of robots from 1990 to 2007 resulted in fewer jobs and lower wages overall.

But one study of robot adoption in Japanese nursing homes, from January 2021, found that the technology helped create more jobs by allowing for more flexibility in working practices. Another study, from 2019, found that robot adoption among Canadian businesses had often affected managers more than workers by changing business processes.

Lynn Wu, an associate professor at the University of Pennsylvania’s Wharton School and a coauthor on the 2019 study, says she expects robots paid by the hour to become more common. But she notes that in contrast to many information technologies, few businesses know how to use robots. “It’s going to take longer than people think,” she says.

For now, most robots found in industrial settings are relatively dumb, following precise movements repetitively. Robots are gradually becoming smarter thanks to use of artificial intelligence, but it remains very challenging for machines to respond to complex environments or uncertainty. Some researchers believe that adding AI to robots will prompt companies to reorganize in ways that have a bigger impact on jobs.

Saman Farid, CEO of Formic, says the company hopes to position itself to be able to offer more capable robots to all sorts of companies in the future. “Robots are going to be able to do a lot more tasks over the next 5 to 10 years,” Farid says. “As machine learning gets better, and you get to a higher level of reliability, then we’ll start implementing those.”

By: Will Knight

Will Knight is a senior writer for WIRED, covering artificial intelligence. He was previously a senior editor at MIT Technology Review, where he wrote about fundamental advances in AI and China’s AI boom. Before that, he was an editor and writer at New Scientist. He studied anthropology and journalism in the UK before turning his attention to machines.

Source: Now You Can Rent a Robot Worker—for Less Than Paying a Human  | WIRED


More Great WIRED Stories

AI Can Write Code Like Humans—Bugs and All

Some software developers are now letting artificial intelligence help write their code. They’re finding that AI is just as flawed as humans.

Last June, GitHub, a subsidiary of Microsoft that provides tools for hosting and collaborating on code, released a beta version of a program that uses AI to assist programmers. Start typing a command, a database query, or a request to an API, and the program, called Copilot, will guess your intent and write the rest.

Alex Naka, a data scientist at a biotech firm who signed up to test Copilot, says the program can be very helpful, and it has changed the way he works. “It lets me spend less time jumping to the browser to look up API docs or examples on Stack Overflow,” he says. “It does feel a little like my work has shifted from being a generator of code to being a discriminator of it.”

But Naka has found that errors can creep into his code in different ways. “There have been times where I’ve missed some kind of subtle error when I accept one of its proposals,” he says. “And it can be really hard to track this down, perhaps because it seems like it makes errors that have a different flavor than the kind I would make.”

The risks of AI generating faulty code may be surprisingly high. Researchers at NYU recently analyzed code generated by Copilot and found that, for certain tasks where security is crucial, the code contains security flaws around 40 percent of the time.

The figure “is a little bit higher than I would have expected,” says Brendan Dolan-Gavitt, a professor at NYU involved with the analysis. “But the way Copilot was trained wasn’t actually to write good code—it was just to produce the kind of text that would follow a given prompt.”

Despite such flaws, Copilot and similar AI-powered tools may herald a sea change in the way software developers write code. There’s growing interest in using AI to help automate more mundane work. But Copilot also highlights some of the pitfalls of today’s AI techniques.

While analyzing the code made available for a Copilot plugin, Dolan-Gavitt found that it included a list of restricted phrases. These were apparently introduced to prevent the system from blurting out offensive messages or copying well-known code written by someone else.

Oege de Moor, vice president of research at GitHub and one of the developers of Copilot, says security has been a concern from the start. He says the percentage of flawed code cited by the NYU researchers is only relevant for a subset of code where security flaws are more likely.

De Moor invented CodeQL, a tool used by the NYU researchers that automatically identifies bugs in code. He says GitHub recommends that developers use Copilot together with CodeQL to ensure their work is safe.

The GitHub program is built on top of an AI model developed by OpenAI, a prominent AI company doing cutting-edge work in machine learning. That model, called Codex, consists of a large artificial neural network trained to predict the next characters in both text and computer code. The algorithm ingested billions of lines of code stored on GitHub—not all of it perfect—in order to learn how to write code.

OpenAI has built its own AI coding tool on top of Codex that can perform some stunning coding tricks. It can turn a typed instruction, such as “Create an array of random variables between 1 and 100 and then return the largest of them,” into working code in several programming languages.
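To give a sense of what that looks like, here is a minimal sketch of the kind of Python such a tool might emit for that prompt. This is illustrative, not a verbatim Codex response; actual output varies by model, language, and run:

```python
# Sketch of plausible generated code for the prompt: "Create an array of
# random variables between 1 and 100 and then return the largest of them."
import random

def largest_random(n=10):
    """Generate n random integers between 1 and 100 and return the largest."""
    values = [random.randint(1, 100) for _ in range(n)]
    return max(values)

print(largest_random())
```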

Codex descends from GPT-3, an earlier OpenAI model that can generate coherent text on a given subject, but that can also regurgitate offensive or biased language learned from the darker corners of the web.

Copilot and Codex have led some developers to wonder if AI might automate them out of work. In fact, as Naka’s experience shows, developers need considerable skill to use the program, as they often must vet or tweak its suggestions.

Hammond Pearce, a postdoctoral researcher at NYU involved with the analysis of Copilot code, says the program sometimes produces problematic code because it doesn’t fully understand what a piece of code is trying to do. “Vulnerabilities are often caused by a lack of context that a developer needs to know,” he says.
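To make Pearce’s point concrete, here is a hypothetical example (not actual Copilot output) of a completion that looks fine without context: splicing values into a SQL string is harmless for trusted constants but opens a SQL injection hole when the value comes from a user, which is exactly the context an autocomplete model lacks:

```python
# Hypothetical illustration of a context-dependent vulnerability.
import sqlite3

def find_user_unsafe(conn, username):
    # Vulnerable: attacker-controlled input is spliced into the query text.
    query = f"SELECT * FROM users WHERE name = '{username}'"
    return conn.execute(query).fetchall()

def find_user_safe(conn, username):
    # Safe: a parameterized query lets the driver escape the input.
    return conn.execute(
        "SELECT * FROM users WHERE name = ?", (username,)
    ).fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")
conn.execute("INSERT INTO users VALUES ('alice')")
print(find_user_safe(conn, "alice"))
```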

Some developers worry that AI is already picking up bad habits. “We have worked hard as an industry to get away from copy-pasting solutions, and now Copilot has created a supercharged version of that,” says Maxim Khailo, a software developer who has experimented with using AI to generate code but has not tried Copilot.

Khailo says it might be possible for hackers to mess with a program like Copilot. “If I was a bad actor, what I would do would be to create vulnerable code projects on GitHub, artificially boost their popularity by buying GitHub stars on the black market, and hope that it will become part of the corpus for the next training round.”

Both GitHub and OpenAI say that, on the contrary, their AI coding tools are only likely to become less error prone. OpenAI says it vets projects and code both manually and using automated tools.

De Moor at GitHub says recent updates to Copilot should have reduced the frequency of security vulnerabilities. But he adds that his team is exploring other ways of improving the output of Copilot. One is to remove bad examples that the underlying AI model learns from. Another may be to use reinforcement learning, an AI technique that has produced some impressive results in games and other areas, to automatically spot bad output, including previously unseen examples. “Enormous improvements are happening,” he says. “It’s almost unimaginable what it will look like in a year.”

Source: AI Can Write Code Like Humans—Bugs and All | WIRED


Related Contents:

AI Breakthrough Could Spark Medical Revolution

Artificial intelligence has been used to predict the structures of almost every protein made by the human body. The development could help supercharge the discovery of new drugs to treat disease, alongside other applications. Proteins are essential building blocks of living organisms; every cell we have in us is packed with them.

Understanding the shapes of proteins is critical for advancing medicine, but until now, only a fraction of these structures have been worked out. Researchers used a program called AlphaFold to predict the structures of 350,000 proteins belonging to humans and other organisms. The instructions for making human proteins are contained in our genomes – the DNA held in the nuclei of human cells.

There are around 20,000 of these proteins expressed by the human genome. Collectively, biologists refer to this full complement as the “proteome”. Commenting on the results from AlphaFold, Dr Demis Hassabis, chief executive and co-founder of artificial intelligence company DeepMind, said: “We believe it’s the most complete and accurate picture of the human proteome to date.

“We believe this work represents the most significant contribution AI has made to advancing the state of scientific knowledge to date.

“And I think it’s a great illustration and example of the kind of benefits AI can bring to society.” He added: “We’re just so excited to see what the community is going to do with this.”

Proteins are made up of chains of smaller building blocks called amino acids. These chains fold in myriad different ways, forming a unique 3D shape. A protein’s shape determines its function in the human body. The 350,000 protein structures predicted by AlphaFold include not only the 20,000 contained in the human proteome, but also those of so-called model organisms used in scientific research, such as E. coli, yeast, the fruit fly and the mouse.

This giant leap in capability is described by DeepMind researchers and a team from the European Molecular Biology Laboratory (EMBL) in the prestigious journal Nature. AlphaFold was able to make a confident prediction of the structural positions for 58% of the amino acids in the human proteome.

The positions of 35.7% were predicted with a very high degree of confidence – double the number confirmed by experiments. Traditional techniques to work out protein structures include X-ray crystallography, cryogenic electron microscopy (Cryo-EM) and others. But none of these is easy to do: “It takes a huge amount of money and resources to do structures,” Prof John McGeehan, a structural biologist at the University of Portsmouth, told BBC News.

Therefore, the 3D shapes are often determined as part of targeted scientific investigations, but no project until now had systematically determined structures for all the proteins made by the body. In fact, just 17% of the proteome is covered by a structure confirmed experimentally. Commenting on the predictions from AlphaFold, Prof McGeehan said: “It’s just the speed – the fact that it was taking us six months per structure and now it takes a couple of minutes. We couldn’t really have predicted that would happen so fast.”

“When we first sent our seven sequences to the DeepMind team, two of those we already had the experimental structures for. So we were able to test those when they came back. It was one of those moments – to be honest – where the hairs stood up on the back of my neck because the structures [AlphaFold] produced were identical.”

Prof Edith Heard, from EMBL, said: “This will be transformative for our understanding of how life works. That’s because proteins represent the fundamental building blocks from which living organisms are made.” “The applications are limited only by our understanding.” Those applications we can envisage now include developing new drugs and treatments for disease, designing future crops that can resist climate change, and enzymes that can break down the plastic that pervades the environment.

Prof McGeehan’s group is already using AlphaFold’s data to help develop faster enzymes for degrading plastic. He said the program had provided predictions for proteins of interest whose structures could not be determined experimentally – helping accelerate their project by “multiple years”.

Dr Ewan Birney, director of EMBL’s European Bioinformatics Institute, said the AlphaFold predicted structures were “one of the most important datasets since the mapping of the human genome”. DeepMind has teamed up with EMBL to make the AlphaFold code and protein structure predictions openly available to the global scientific community.
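For readers who want to explore the data, the sketch below fetches one predicted structure from the public database. The URL pattern and version suffix are assumptions based on the database’s published file layout and may change between releases:

```python
# Fetch one AlphaFold-predicted structure from the EMBL-EBI database.
# The file naming scheme here is an assumption and may change over time.
import urllib.request

uniprot_id = "P69905"  # human hemoglobin subunit alpha, as an example
url = f"https://alphafold.ebi.ac.uk/files/AF-{uniprot_id}-F1-model_v4.pdb"

with urllib.request.urlopen(url) as response:
    pdb_text = response.read().decode("utf-8")

print(pdb_text.splitlines()[0])  # header line of the PDB file
```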

Dr Hassabis said DeepMind planned to vastly expand the coverage in the database to almost every sequenced protein known to science – over 100 million structures.

By: Paul Rincon, Science editor, BBC News website

Source: AI breakthrough could spark medical revolution – BBC News


Digital Transformation Depends on Diversity

Across industries, businesses are now tech and data companies. The sooner they grasp and live that, the quicker they will meet their customer needs and expectations, create more business value and grow. It is increasingly important to re-imagine business and use digital technologies to create new business processes, cultures, customer experiences and opportunities.

One of the myths about digital transformation is that it’s all about harnessing technology. It’s not. To succeed, digital transformation inherently requires and relies on diversity. Artificial intelligence (AI) is the result of human intelligence, enabled by its vast talents and also susceptible to its limitations.

Therefore, it is imperative for organizations and teams to make diversity a priority and think about it beyond the traditional sense. For me, diversity centers around three key pillars.

People

People are the most important part of artificial intelligence; after all, humans create it. The diversity of people — the team of decision-makers in the creation of AI algorithms — must reflect the diversity of the general population.

This goes beyond ensuring opportunities for women in AI and technology roles. It also includes the full dimensions of gender, race, ethnicity, skill set, experience, geography, education, perspectives, interests and more. Why? When you have diverse teams reviewing and analyzing data to make decisions, you mitigate the chances of their own individual and uniquely human experiences, privileges and limitations blinding them to the experiences of others.


Collectively, we have an opportunity to apply AI and machine learning to propel the future and do good. That begins with diverse teams of people who reflect the full diversity and rich perspectives of our world.

Diversity of skills, perspectives, experiences and geographies has played a key role in our digital transformation. At Levi Strauss & Co., our growing strategy and AI team isn’t limited to data and machine learning scientists and engineers. We recently tapped employees from across the organization around the world and deliberately set out to train people with no previous experience in coding or statistics.

We took people from retail operations, distribution centers and warehouses, and design and planning, and put them through our first-ever machine learning bootcamp, building on their expert retail skills and supercharging them with coding and statistics.

We did not limit the required backgrounds; we simply looked for people who were curious problem solvers, analytical by nature and persistent in looking for various ways of approaching business issues. The combination of existing expert retail skills and added machine learning knowledge means employees who graduated from the program now bring meaningful new perspectives on top of their business value. This first-of-its-kind initiative in the retail industry helped us develop a talented and diverse bench of team members.

Data

AI and machine learning capabilities are only as good as the data put into the system. We often limit ourselves to thinking of data in terms of structured tables — numbers and figures — but data is anything that can be digitized.

The digital images of the jeans and jackets our company has been producing for the past 168 years are data. The customer service conversations (recorded only with permission) are data. The heatmaps of how people move through our stores are data. The reviews from our consumers are data. Today, everything that can be digitized becomes data. We need to broaden how we think of data and ensure we constantly feed all of it into our AI work.

Most predictive models use data from the past to predict the future. But because the apparel industry is still in the nascent stages of digital, data and AI adoption, a lack of past data to reference is a common problem. In fashion, we’re looking ahead to predict trends and demand for completely new products, which have no sales history. How do we do that?

We use more data than ever before, for example, both images of the new products and a database of our products from past seasons. We then apply computer vision algorithms to detect similarity between past and new fashion products, which helps us predict demand for those new products. These applications provide much more accurate estimates than experience or intuition do, supplementing previous practices with data- and AI-powered predictions.
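A heavily simplified sketch of that idea follows, assuming image embeddings have already been computed by a vision model. This is illustrative only, not Levi Strauss & Co.’s actual pipeline; the embeddings here are random stand-ins:

```python
# Similarity-weighted demand estimate for a new product with no sales
# history: compare its image embedding to past products' embeddings and
# average their sales, weighted by visual similarity.
import numpy as np

rng = np.random.default_rng(0)
past_embeddings = rng.normal(size=(500, 128))     # 500 past products
past_sales = rng.integers(100, 10_000, size=500)  # units sold per product
new_embedding = rng.normal(size=128)              # the new product

def cosine_similarity(a, b):
    return (b @ a) / (np.linalg.norm(b, axis=-1) * np.linalg.norm(a))

sims = cosine_similarity(new_embedding, past_embeddings)
top = np.argsort(sims)[-10:]              # 10 most visually similar items
weights = np.clip(sims[top], 0, None)     # ignore negative similarity
estimate = np.average(past_sales[top], weights=weights)
print(f"Estimated demand for the new product: {estimate:.0f} units")
```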

At Levi Strauss & Co., we also use digital images and 3D assets to simulate how clothes feel and even create new fashion. For example, we train neural networks to understand the nuances around various jean styles like tapered legs, whisker patterns and distressed looks, and detect the physical properties of the components that affect the drapes, folds and creases. We’re then able to combine this with market data, where we can tailor our product collections to meet changing consumer needs and desires and focus on the inclusiveness of our brand across demographics.

Furthermore, we use AI to create new styles of apparel while always retaining the creativity and innovation of our world-class designers.

Tools and techniques

In addition to people and data, we need to ensure diversity in the tools and techniques we use in the creation and production of algorithms. Some AI systems and products use classification techniques, which can perpetuate gender or racial bias.

For example, classification techniques assume gender is binary and commonly assign people as “male” or “female” based on physical appearance and stereotypical assumptions, meaning all other forms of gender identity are erased. That’s a problem, and it’s incumbent on all of us working in this space, in any company or industry, to prevent bias and advance techniques in order to capture all the nuances and ranges in people’s lives. For example, we can take race out of the data to try to render an algorithm race-blind while continuously safeguarding against bias.
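In code, “taking race out of the data” is as simple as dropping a column, which is also why it is not sufficient on its own: other features, like location, can act as proxies for the removed attribute. A minimal sketch with hypothetical column names:

```python
# Drop a sensitive attribute before training ("race-blind" features).
# Caution: zip_code can still act as a proxy for race, so removal alone
# does not guarantee an unbiased model.
import pandas as pd

df = pd.DataFrame({
    "age": [34, 28, 45, 52],
    "zip_code": ["60601", "60614", "60623", "60607"],
    "race": ["A", "B", "C", "B"],   # sensitive attribute to exclude
    "purchased": [1, 0, 1, 0],      # label
})

features = df.drop(columns=["race", "purchased"])
labels = df["purchased"]
print(features.columns.tolist())    # ['age', 'zip_code']
```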

We are committed to diversity in our AI products and systems and, in striving for that, we use open-source tools. Open-source tools and libraries by their nature are more diverse because they are available to everyone around the world and people from all backgrounds and fields work to enhance and advance them, enriching with their experiences and thus limiting bias.

An example of how we do this at Levi Strauss & Company is with our U.S. Red Tab loyalty program. As fans set up their profiles, we don’t ask them to pick a gender or allow the AI system to make assumptions. Instead, we ask them to pick their style preferences (Women, Men, Both or Don’t Know) in order to help our AI system build tailored shopping experiences and more personalized product recommendations.

Diversity of people, data, and techniques and tools is helping Levi Strauss & Co. revolutionize its business and our entire industry, transforming manual to automated, analog to digital, and intuitive to predictive. We are also building on the legacy of our company’s social values, which has stood for equality, democracy and inclusiveness for 168 years. Diversity in AI is one of the latest opportunities to continue this legacy and shape the future of fashion.

By: Katia Walsh

Source: Digital transformation depends on diversity | TechCrunch


More Contents:

FreshBooks reaches $1B+ valuation with $130.75M for its SMB-focused accounting platform

Latent AI, which says it can compress common AI models by 10x, lands some key backing

5 ways AI can help mitigate the global shipping crisis

Daily Crunch: Bangalore-based UpGrad becomes India’s newest unicorn with $185M funding round

Scientists Predict Early Covid-19 Symptoms Using AI (And An App)

Combining self-reported symptoms with artificial intelligence can predict the early symptoms of Covid-19, according to research led by scientists at King’s College London. Previous studies have predicted whether people will develop Covid using symptoms from the peak of viral infection, which can be less relevant over time — fever is common during later phases, for instance.

The new study reveals which symptoms of infection can be used for early detection of the disease. Published in the journal The Lancet Digital Health, the research used data collected via the ZOE COVID Symptom Study smartphone app. Each app user logged any symptoms that they experienced over the first 3 days, plus the result of a subsequent PCR test for Coronavirus and personal information like age and sex.

Researchers used those self-reported data from the app to assess three models for predicting Covid in advance, which involved using one dataset to train a given model before its performance was tested on another set. The training set included almost 183,000 people who reported symptoms from 16 October to 30 November 2020, while the test dataset consisted of more than 15,000 participants with data between 16 October and 30 November.

The three models were: 1) a statistical method called logistic regression; 2) a National Health Service (NHS) algorithm; and 3) an artificial intelligence (AI) approach known as a ‘hierarchical Gaussian process’. Of the three prediction models, the AI approach performed the best, so it was then used to identify patterns in the data. The AI prediction model was sensitive enough to find which symptoms were most relevant in various groups of people.
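For a sense of what the simplest of the three models involves, here is a sketch of a logistic regression trained on binary symptom reports and evaluated on held-out data. The data is synthetic, not the ZOE app’s, and the planted coefficients are illustrative:

```python
# Logistic regression baseline on synthetic 0/1 symptom reports.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n = 5_000
symptoms = ["loss_of_smell", "cough", "fever", "fatigue", "chest_pain"]
X = rng.integers(0, 2, size=(n, len(symptoms)))

# Synthetic ground truth: loss of smell and cough are most predictive.
logits = X @ np.array([2.0, 1.0, 0.5, 0.3, 0.4]) - 1.5
y = rng.random(n) < 1 / (1 + np.exp(-logits))

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression().fit(X_train, y_train)
probs = model.predict_proba(X_test)[:, 1]

print(f"Held-out ROC AUC: {roc_auc_score(y_test, probs):.2f}")
for name, coef in zip(symptoms, model.coef_[0]):
    print(f"{name:>14}: {coef:+.2f}")  # learned symptom weights
```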

The subgroups were occupation (healthcare professional versus non-healthcare), age group (16-39, 40-59, 60-79, 80+ years old), sex (male or female), body mass index (BMI: underweight, normal, overweight/obese) and several well-known health conditions. According to results produced by the AI model, loss of smell was the most relevant early symptom among both healthcare and non-healthcare workers, and the two groups also reported chest pain and a persistent cough.

The symptoms varied among age groups: loss of smell had less relevance to people over 60 years old, for instance, and seemed irrelevant to those over 80 — highlighting age as a key factor in early Covid detection. There was no big difference between sexes for their reported symptoms, but shortness of breath, fatigue and chills/shivers were more relevant signs for men than for women.

No particular patterns were found in BMI subgroups either and, in terms of health conditions, heart disease was most relevant for predicting Covid. As the study’s symptoms were from 2020, its results might only apply to the original strain of the SARS-CoV-2 virus and Alpha variant – the two variants with highest prevalence in the UK that year.

The predictions wouldn’t have been possible without the self-reported data from the ZOE COVID Symptom Study project, a non-profit collaboration between scientists and personalized health company ZOE, which was co-founded by genetic epidemiologist Tim Spector of King’s College London.

The project’s website keeps an up-to-date ranking of the top 5 Covid symptoms reported by British people who are now fully vaccinated (with a Pfizer or AstraZeneca vaccine), have so far received one of the two doses, or are still unvaccinated. Those top 5 symptoms provide a useful resource if you want to know which signs are common for the most prevalent variant circulating in a population — currently Delta – as distinct variants can be associated with different symptoms.

When a new variant emerges in future, you could pass some personal information (such as age) to the AI prediction model so it shows the early symptoms most relevant to you — and, if you developed those symptoms, take a Covid test and perhaps self-isolate before you transmit the virus to other people. As the new study concludes, such steps would help alleviate stress on public health services:

“Early detection of SARS-CoV-2-infected individuals is crucial to contain the spread of the COVID-19 pandemic and efficiently allocate medical resources.”

I’m a science communicator and award-winning journalist with a PhD in evolutionary biology. I specialize in explaining scientific concepts that appear in popular culture and mainly write about health, nature and technology. I spent several years at BBC Science Focus magazine, running the features section and writing about everything from gay genes and internet memes to the science of death and origin of life. I’ve also contributed to Scientific American and Men’s Health. My latest book is ’50 Biology Ideas You Really Need to Know’.

Source: Scientists Predict Early Covid-19 Symptoms Using AI (And An App)


Critics:

Healthcare providers and researchers are faced with an exponentially increasing volume of information about COVID-19, which makes it difficult to derive insights that can inform treatment. In response, AWS launched CORD-19 Search, a new search website powered by machine learning, that can help researchers quickly and easily search for research papers and documents and answer questions like “When is the salivary viral load highest for COVID-19?”

Built on the Allen Institute for AI’s CORD-19 open research dataset of more than 128,000 research papers and other materials, this machine learning solution can extract relevant medical information from unstructured text and delivers robust natural-language query capabilities, helping to accelerate the pace of discovery.
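The sketch below shows the general shape of extractive question answering over research text, using the open-source Hugging Face transformers pipeline as a stand-in rather than the AWS services behind CORD-19 Search; the context passage is invented for illustration:

```python
# Extractive question answering over a snippet of research text.
from transformers import pipeline

qa = pipeline("question-answering")  # downloads a default QA model

context = (
    "Saliva samples were collected from patients at multiple time points. "
    "Viral load in saliva peaked during the first week of symptom onset "
    "and declined steadily thereafter."
)
result = qa(
    question="When is the salivary viral load highest?",
    context=context,
)
print(result["answer"])  # e.g. "during the first week of symptom onset"
```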

In the field of medical imaging, meanwhile, researchers are using machine learning to help recognize patterns in images, enhancing the ability of radiologists to indicate the probability of disease and diagnose it earlier.

UC San Diego Health has engineered a new method to diagnose pneumonia earlier, a condition associated with severe COVID-19. This early detection helps doctors quickly triage patients to the appropriate level of care even before a COVID-19 diagnosis is confirmed. Trained with 22,000 notations by human radiologists, the machine learning algorithm overlays x-rays with colour-coded maps that indicate pneumonia probability. With credits donated from the AWS Diagnostic Development Initiative, these methods have now been deployed to every chest x-ray and CT scan throughout UC San Diego Health in a clinical research study.
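The color-coded overlay idea can be sketched in a few lines of matplotlib; here the x-ray and the probability map are random stand-ins, not output from the UC San Diego Health model:

```python
# Overlay a pneumonia-probability map on a chest x-ray (illustrative data).
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(1)
xray = rng.random((256, 256))        # stand-in for a grayscale x-ray
prob_map = np.zeros((256, 256))
prob_map[100:160, 80:150] = 0.8      # stand-in for the model's output

fig, ax = plt.subplots()
ax.imshow(xray, cmap="gray")
ax.imshow(prob_map, cmap="jet", alpha=0.35)  # color-coded probability map
ax.set_title("Pneumonia probability overlay (illustrative)")
plt.savefig("overlay.png")
```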

Related Links:

Governments must build trust in AI to fight COVID-19 – Here’s how they can do it

This AI model has predicted which patients will get the sickest from COVID-19

Coalition for Epidemic Preparedness Innovations

What history tells us about pandemics’ impact on inflation

How to back an inclusive post-COVID recovery

Survey: How US employees feel about a full return to the workplace
