Will a Robot Take Your Job? It May Just Make Your Job Worse

The robot revolution is always allegedly just around the corner. In the utopian vision, technology emancipates human labor from repetitive, mundane tasks, freeing us to be more productive and take on more fulfilling work. In the dystopian vision, robots come for everyone’s jobs, put millions and millions of people out of work, and throw the economy into chaos.

Such a warning was at the crux of Andrew Yang’s ill-fated presidential campaign, helping propel his case for a universal basic income that he argued would become necessary once automation pushed so many workers out of their jobs. It’s the argument many corporate executives make whenever there’s a suggestion they might have to raise wages: $15 an hour will just mean machines taking your order at McDonald’s instead of people, they say. It’s an effective scare tactic for some workers.

But we often spend so much time talking about the potential for robots to take our jobs that we fail to look at how they are already changing them — sometimes for the better, but sometimes not. New technologies can give corporations tools for monitoring, managing, and motivating their workforces, sometimes in ways that are harmful. The technology itself might not be innately nefarious, but it makes it easier for companies to maintain tight control on workers and squeeze and exploit them to maximize profits.

“The basic incentives of the system have always been there: employers wanting to maximize the value they get out of their workers while minimizing the cost of labor, the incentive to want to control and monitor and surveil their workers,” said Brian Chen, staff attorney at the National Employment Law Project (NELP). “And if technology allows them to do that more cheaply or more efficiently, well then of course they’re going to use technology to do that.”

Tracking software for remote workers, which saw a bump in sales at the start of the pandemic, can follow every second of a person’s workday in front of the computer. Delivery companies can use motion sensors to track their drivers’ every move, measure extra seconds, and ding drivers for falling short.

Automation hasn’t replaced all the workers in warehouses, but it has made work more intense, even dangerous, and changed how tightly workers are managed. Gig workers can find themselves at the whims of an app’s black-box algorithm, competing against a flood of other workers at a frantic pace for pay so low that whether any given trip or job pays off can depend on the tip. That leaves workers reliant on the generosity of an anonymous stranger. Worse, gig work means they’re doing their jobs without many typical labor protections.

In these circumstances, the robots aren’t taking jobs, they’re making jobs worse. Companies are automating away autonomy and putting profit-maximizing strategies on digital overdrive, turning work into a space with fewer carrots and more sticks.

A robot boss can do a whole lot more watching

In recent years, Amazon has become the corporate poster child for automation in the name of efficiency — often at the expense of workers. There have been countless reports of unsustainable conditions and expectations at Amazon’s fulfillment centers. Its drivers reportedly have to consent to being watched by artificial intelligence, and warehouse workers who don’t move fast enough can be fired.

Demands are so high that there have been reports of people urinating in bottles to avoid taking a break. The robots aren’t just watching, they’re also picking up some of the work. Sometimes, it’s for the better, but in other cases, they may actually be making work more dangerous as more automation leads to more pressure on workers. One report found that worker injuries were more prevalent in Amazon warehouses with robots than warehouses without them.

“It would have been prohibitively expensive to employ enough managers to time each worker’s every move to a fraction of a second or ride along in every truck, but now it takes maybe one,” Josh Dzieza wrote in an investigation of algorithmic management for The Verge. “This is why the companies that most aggressively pursue these tactics all take on a similar form: a large pool of poorly paid, easily replaced, often part-time or contract workers at the bottom; a small group of highly paid workers who design the software that manages them at the top.”

A 2018 Gartner survey found that half of large companies were already using some type of nontraditional technique to keep an eye on their workers, including analyzing their communications, gathering biometric data, and examining how workers use their workspace. Gartner anticipated that by 2020, 80 percent of large companies would be using such methods. Amid the pandemic, the trend picked up pace as businesses sought more ways to keep tabs on the waves of employees newly working from home.

This has all sorts of implications for workers, who lose privacy and autonomy when they’re constantly being watched and directed by technology. Daron Acemoglu, an economist at MIT, warned that they’re also losing money. “Some of these new digital technologies are not simply replacing workers or creating new tasks or changing other aspects of productivity, but they’re actually monitoring people much more effectively, and that means rents are being shared very differently because of digital technologies,” he said.

He offered up a hypothetical example of a delivery driver who is asked to deliver a certain number of packages in a day. Decades ago, the company might pay the driver more to incentivize them to work a little faster or harder or put in some extra time. But now, they’re constantly being monitored so that the company knows exactly what they’re doing and is looking for ways to save time. Instead of getting a bonus for hitting certain metrics, they’re dinged for spending a few seconds too long here or there.

The problem isn’t technology itself, it’s the managers and corporate structures behind it that look at workers as a cost to be cut instead of as a resource.

“A lot of this boom of Silicon Valley entrepreneurship where venture capital made it very easy for companies to create firms didn’t exactly prioritize the well-being of workers as one of their main considerations,” said Amy Bix, a historian at Iowa State University who focuses on technology. “A lot of what goes on in the structure of these corporations and the development of technology is invisible to most ordinary people, and it’s easy to take advantage of that.”

The future of Uber isn’t driverless cars, it’s drivers

Uber’s destiny was supposed to be driverless.

In 2016, then-CEO Travis Kalanick told Bloomberg that making an autonomous vehicle was “basically existential” for the company. After a deadly accident involving an autonomous Uber vehicle in 2018, current chief executive Dara Khosrowshahi reiterated that the company remained “absolutely committed” to the self-driving cause. But in December 2020, after investing $1 billion, Uber sold off its self-driving unit. A little over four months later, its main competitor, Lyft, followed suit. Uber says it’s still not giving up on autonomous technology, but the writing on the wall is clear: driverless cars aren’t core to Uber’s business model, at least in the near future.

“Five or 10 years from now, drivers are still going to be a big piece of the mix on a percentage basis [of Uber’s business], and on an absolute basis, they may be an even bigger piece than they are today even with autonomous in the mix because the business should get bigger as both segments get bigger,” said Chris Frank, director of corporate ratings at S&P Global. “In addition, drivers will need to handle more complex conditions like poorly marked roads or inclement weather.”

In other words, they’re going to need workers to make money — workers they would very much like not to classify as such.

Gig economy companies such as Uber, Lyft, and DoorDash are fighting tooth and nail to make sure the people they enlist to make deliveries or drive people around are not considered their employees. In California last year, such companies poured some $200 million into the campaign to pass Proposition 22, which lets app-based transportation and delivery companies classify their workers as independent contractors and therefore avoid paying for benefits such as sick leave, employer-provided health care, and unemployment insurance. After it passed, a spokesman for the ballot measure campaign said it “represents the future of work in an increasingly technologically-driven economy.”

It’s a future of work that might not be pleasant for gig workers. In California, some workers say they’re not getting the benefits companies promised after Prop 22’s passage, such as health care stipends. Companies said that workers would make at least 120 percent of California’s minimum wage, but that guarantee counts only the time workers spend actively driving, not the time they spend waiting between jobs. Before the ballot initiative passed, research from the UC Berkeley Labor Center estimated that it would guarantee a minimum wage of just $5.64 per hour.
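To see how a guarantee that counts only engaged time can shrink once waiting time and expenses enter the picture, here is a rough back-of-the-envelope sketch in Python. Every number in it (the minimum wage, the hours, the costs) is an illustrative assumption, not a figure from Prop 22 or the Berkeley study.

```python
# Rough sketch of how an "engaged time only" wage floor can translate into a
# much lower effective hourly rate. All numbers are illustrative assumptions.

MINIMUM_WAGE = 14.00          # hypothetical state minimum wage, $/hour
GUARANTEE_MULTIPLIER = 1.20   # "120 percent of minimum wage"

engaged_hours = 6.0           # hours spent actually driving a passenger or order
waiting_hours = 4.0           # hours logged in, waiting for the next job
unreimbursed_costs = 25.00    # gas, maintenance, etc. not covered for the shift

guaranteed_pay = MINIMUM_WAGE * GUARANTEE_MULTIPLIER * engaged_hours
effective_hourly = (guaranteed_pay - unreimbursed_costs) / (engaged_hours + waiting_hours)

print(f"Guaranteed pay for the shift: ${guaranteed_pay:.2f}")                  # $100.80
print(f"Effective rate over all hours worked: ${effective_hourly:.2f}/hour")   # about $7.58
```

Under these made-up numbers, a “120 percent of minimum wage” guarantee works out to roughly half the headline rate once unpaid waiting time and expenses are counted, which is the kind of gap the Berkeley researchers were pointing to.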

Companies say they’ve been clear with drivers about how to qualify for the health care stipend, which is available to drivers with more than 15 engaged hours a week (in other words, time spent waiting around without a ride or delivery doesn’t count). In a statement to Vox, Geoff Vetter, a spokesperson for the Protect App-Based Drivers + Services Coalition, the lobbying group that championed Prop 22, said that 80 percent of drivers work fewer than 20 hours per week, that most work fewer than 10 hours per week, and that many have health insurance through other jobs.

Gig companies have sometimes been cagey about how much their workers make, and they’re often changing their pay formulas. In 2017, Uber agreed to pay the Federal Trade Commission $20 million over charges that it misled prospective drivers about how much they could make with the app. The FTC found that Uber claimed some of its drivers made $90,000 in New York and $74,000 in San Francisco, when their median incomes were actually $61,000 and $53,000, respectively. DoorDash caused controversy over a decision to pocket customer tips and use them to cover delivery workers’ base pay, a policy it has since reversed.

Even though Uber is charging customers more for rides in the wake of the pandemic, that increase isn’t being passed directly on to its drivers. According to the Washington Post, Uber changed the way it paid drivers in California soon after Prop 22 passed so that they were no longer paid a proportion of the cost of the ride but instead by time and distance, with different bonuses and incentives based on market and surge pricing. (This is how Uber does it in most states, but it had changed things up during the push to get Prop 22 passed.) Uber’s CEO pushed back on the Post story in a series of tweets, arguing that decoupling driver pay from customer fares had not hurt California drivers and that some are now getting a higher cut from their rides.
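As a rough illustration of the difference between the two pay models the Post describes, the sketch below compares a commission-style payout (a fixed share of the fare) with a time-and-distance payout. All of the rates, fares, and trip figures are made-up assumptions for illustration; they are not Uber’s actual formula or numbers.

```python
# Hypothetical comparison of two driver-pay models. None of these rates are
# real; they only illustrate how decoupling pay from the fare changes what
# the driver earns when the rider is charged more.

def commission_pay(fare: float, driver_share: float = 0.75) -> float:
    """Driver receives a fixed share of whatever the rider pays."""
    return fare * driver_share

def time_distance_pay(minutes: float, miles: float,
                      per_minute: float = 0.30, per_mile: float = 0.90) -> float:
    """Driver is paid for time and distance, regardless of the fare."""
    return minutes * per_minute + miles * per_mile

trip_minutes, trip_miles = 20, 8

for fare in (18.00, 30.00):  # e.g. a normal fare vs. a surge-priced fare
    print(f"Fare ${fare:.2f}: commission model pays ${commission_pay(fare):.2f}, "
          f"time-and-distance model pays ${time_distance_pay(trip_minutes, trip_miles):.2f}")
```

Under the time-and-distance model, the driver’s base pay stays the same even when the rider is charged more, unless separate bonuses or surge incentives kick in, which is why decoupling pay from fares is contentious.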

In light of a driver shortage, Uber recently announced what it’s billing as a $250 million “driver stimulus” that promises higher earnings to try to get drivers back onto the road. The company acknowledges the initiative is likely to be temporary, lasting only until the supply-demand imbalance works itself out. Still, it’s hard not to notice how quickly Uber and Lyft have been able to corner most of the ride-hailing app market and exert control over their drivers and customers.

“When a new thing like this comes on, there’s huge new consumer benefits, and then over time they are the market, they have less competition except one another, probably they’re a cartel at this point. And then they start doing stuff that’s much nastier,” said David Autor, an economist at MIT.

One of the gig economy’s main selling points to workers is that it offers flexibility and the ability to work when they want. It’s certainly true that an Uber or Lyft driver has much more autonomy on the job than, say, an Amazon warehouse worker. “People drive with Lyft because they prefer the freedom and flexibility to work when, where, and for however long they want,” a Lyft spokesperson said in a statement to Vox.

“They can choose to accept a ride or not, enjoy unlimited upward earning potential, and can decide to take time off from driving whenever they want, for however long they want, without needing to ask a ‘boss’ — all things they can’t do at most traditional jobs.” The spokesperson also noted that most of its drivers work outside of Lyft.

But flexibility doesn’t mean gig companies have no control over their drivers and delivery people. They use all sorts of tricks and incentives to try to push workers in certain directions and manage them, essentially, by algorithm. Uber drivers report being bothered by the constant surveillance, the lack of transparency from the company, and the dehumanization of working with the app. The algorithm doesn’t want to know how your day is, it just wants you to work as efficiently as possible to maximize its profits.

Carlos Ramos, a former Lyft driver in San Diego, described the feeling of being manipulated by the app. He could tell from the incentive structures that the company must have needed morning drivers, but he also often wondered whether he was being “punished” if he didn’t do something right.

“Sometimes, if you cancel a bunch of rides in a row or if you don’t take certain rides to certain things, you won’t get any rides. They’ve shadow turned you off,” he said. Many Lyft and Uber drivers speculate that this kind of secret deprioritization happens. “You also have no way of knowing what’s going on behind there. They have this proprietary knowledge, they have this black box of trade secrets, and those are your secrets you’re telling them,” said Ramos, now an organizer with Gig Workers Rising.

Companies deny that they secretly shut off drivers. “It is in Lyft’s best interests for drivers to have as positive an experience as possible, so we communicate often and work directly with drivers to help them improve their earnings,” a Lyft spokesperson said. “We never ‘shadow ban’ drivers, and actively coach them when they are in danger of being deactivated.”

The future of innovation isn’t inevitable

We often talk about technology and innovation with a language of inevitability. It’s as though whenever wages go up, companies will of course replace workers with robots. Now that the country has turned to online delivery, it can be made to seem like the grocery industry is on an unavoidable path to gig work. After all, that’s what happened at Albertsons, which replaced some of its own delivery drivers with app-based gig workers. But that’s not really the case. There’s plenty of human agency in the technological innovation story.

“Technology of course doesn’t have to exploit workers, it doesn’t have to mean robots are coming for all of our jobs,” Chen said. “These are not inevitable outcomes, they are human decisions, and they are almost always made by people who are driven by a profit motive that tends to exploit the poor and working class historically.”

Chase Copridge, a longtime California worker who’s run the gamut of gig jobs (Instacart, DoorDash, Amazon Flex, Uber, and Lyft), is one of the people stuck in that position, the victim of corporate tendencies on technological overdrive. He described seeing delivery offers that pay as little as $2. He turns those jobs down, knowing they’re not economically worth it for him. But there might be someone else out there who picks them up. “We’re people who desperately need to make ends meet, who are willing to take the bare minimum that these companies are giving out to us,” he said. “People need to understand that these companies thrive off of exploitation.”

Not all automation decisions increase productivity or improve much of anything except corporate profits. Self-checkout stations may reduce the need for cashiers, but are they really making the shopping experience faster or better? Next time you go to the grocery store, inevitably screw up scanning one of your own items, and then wait several minutes for a worker to appear, you tell me.

Despite technological advancements, productivity growth has been on the decline in recent years. “This is the paradox of the last several decades, and especially since 2000, that we had enormous technological changes as we perceive it but measured productivity growth is quite weak,” Autor said. “One reason may be that we’re automating a lot of trivial stuff rather than important stuff. If you compare antibiotics and indoor plumbing and electrification and air travel and telecommunications to DoorDash and smartphones or self-checkout, it may just not be as consequential.”

Acemoglu said that when firms focus so much on automation and monitoring technologies, they might not explore other areas that could be more productive, such as creating new tasks or building out new industries. “Those are the things that I worry have fallen by the wayside in the last several years,” he said. “If your employer is really set on monitoring you really tightly, that biases things against new tasks because those are things that are not easier to monitor.”

It matters what you automate, and not all automation is equally beneficial, not only to workers but also to customers, companies, and the broader economy.

Grappling with how to handle technological advancements and the ways they change people’s lives, including at work, is no easy task. While the robot revolution isn’t taking everyone’s jobs, automation is taking some of them, especially in areas such as manufacturing. And it’s just making work different: A machine may not eliminate a position entirely, but it may turn a middle-skill job into a low-skill one, bringing lower pay with it. Package delivery jobs used to come with a union, benefits, and stable pay; with the rise of the gig economy, that’s declining. If and when self-driving trucks arrive, there will still be some low-quality jobs needed to complete tasks the robots can’t.

“The issue that we’ve faced in the US economy is that we’ve lost a lot of middle-skill jobs so people are being pushed down into lower categories,” Autor said. “Automation historically has tended to take the most dirty and dangerous and demeaning jobs and hand them over to machines, and that’s been great. What’s happened in the last bunch of decades is that automation has affected the middle-skill jobs and left the hard, interesting, creative jobs and the hands-on jobs that require a lot of dexterity and flexibility but don’t require a lot of formal skills.”

But again, none of this is inevitable. Companies are able to leverage technology to get the most out of workers because workers often don’t have power to push back, enforce limits, or ask for more. Unionization has seen steep declines in recent decades. America’s labor laws and regulations are designed around full-time work, meaning gig companies don’t have to offer health insurance or help fund unemployment. But the laws could — and many would argue should — be modernized.

“The key thing is it’s not just technology, it’s a question of labor power, both collectively and individually,” Bix said. “There are a lot of possible outcomes, and in the end, technology is a human creation. It’s a product of social priorities and what gets developed and adopted.”

Maybe the robot apocalypse isn’t here yet. Or it is, and many of us aren’t quite recognizing it, in part because we got some of the story wrong. The problem isn’t really the robot, it’s what your boss wants the robot to do.

Source: Will a robot take your job? It may just make your job worse. – Vox


Critics:

The history of robots has its origins in the ancient world. During the industrial revolution, humans developed the structural engineering capability to control electricity so that machines could be powered with small motors. In the early 20th century, the notion of a humanoid machine was developed.

The first uses of modern robots were in factories as industrial robots. These industrial robots were fixed machines capable of manufacturing tasks which allowed production with less human work. Digitally programmed industrial robots with artificial intelligence have been built since the 2000s.

Concepts of artificial servants and companions date at least as far back as the ancient legends of Cadmus, who is said to have sown dragon teeth that turned into soldiers, and Pygmalion, whose statue of Galatea came to life. Many ancient mythologies included artificial people, such as the talking mechanical handmaidens (Ancient Greek: Κουραι Χρυσεαι (Kourai Khryseai); “Golden Maidens”) built by the Greek god Hephaestus (Vulcan to the Romans) out of gold.

Reference:

Tiny Graphene Microchips Could Make Your Phones & Laptops Thousands of Times Faster, Say Scientists

Graphene strips folded in a similar fashion to origami paper could be used to build microchips up to 100 times smaller than conventional chips, physicists have found, and packing phones and laptops with those tiny chips could significantly boost the performance of our devices.

New research from the University of Sussex in the UK shows that changing the structure of nanomaterials like graphene can unlock electronic properties and effectively enable the material to act like a transistor.  


The scientists deliberately created kinks in a layer of graphene and found that the material could, as a result, be made to behave like an electronic component. Graphene, with its nano-scale dimensions, could therefore be leveraged to design the smallest microchips yet, which would be useful for building faster phones and laptops.


Alan Dalton, professor at the School of Mathematical and Physical Sciences at the University of Sussex, said: “We’re mechanically creating kinks in a layer of graphene. It’s a bit like nano-origami.

“This kind of technology – ‘straintronics’ using nanomaterials as opposed to electronics – allows space for more chips inside any device. Everything we want to do with computers – to speed them up – can be done by crinkling graphene like this.” 

Discovered in 2004, graphene is an atom-thick sheet of carbon atoms, which, because it is only one atom thick, is effectively a 2D material. Graphene is best known for its exceptional strength, but also for its conductivity, which has already generated much interest in the electronics industry, including from Samsung Electronics.


The field of straintronics has already shown that deforming the structure of 2D nanomaterials such as graphene and molybdenum disulfide can unlock key electronic properties, but the exact impact of different “folds” remains poorly understood, the researchers argued.

Yet the behavior of those materials offers huge potential for high-performance devices: for example, changing the structure of a strip of 2D material can change its doping properties, which correspond to electron density, and effectively convert the material into a superconductor.  

The researchers carried out an in-depth study of the impact of structural changes on properties, such as doping, in strips of graphene and of molybdenum disulfide. From kinks and wrinkles to pit-holes, they observed how the materials could be twisted and turned to eventually be used to design smaller electronic components.

Manoj Tripathi, research fellow in nano-structured materials at the University of Sussex, who led the research, said: “We’ve shown we can create structures from graphene and other 2D materials simply by adding deliberate kinks into the structure. By making this sort of corrugation we can create a smart electronic component, like a transistor, or a logic gate.” 


The findings are likely to resonate in an industry pressed to keep pace with Moore’s law, which holds that the number of transistors on a microchip doubles every two years in response to growing demand for faster computing services. The trouble is that engineers are struggling to find ways to fit much more processing power into tiny chips, a major challenge for the traditional semiconductor industry.
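For a sense of the scale that “doubling every two years” implies, here is a quick back-of-the-envelope projection; the starting transistor count and the time span are arbitrary assumptions chosen only to illustrate the exponential curve chipmakers are trying to stay on.

```python
# Back-of-the-envelope Moore's law projection: transistor counts doubling
# every two years. The starting figure is an arbitrary illustrative value.

start_year = 2021
start_transistors = 10e9  # assume a chip with roughly 10 billion transistors

for years_out in (2, 4, 6, 8, 10):
    doublings = years_out / 2
    projected = start_transistors * 2 ** doublings
    print(f"{start_year + years_out}: ~{projected / 1e9:.0f} billion transistors")
```

Staying on that curve with conventional silicon is exactly what is getting harder, which is why atom-thick materials like graphene attract so much attention.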

A tiny graphene-based transistor could significantly help overcome these hurdles. “Using these nanomaterials will make our computer chips smaller and faster. It is absolutely critical that this happens as computer manufacturers are now at the limit of what they can do with traditional semiconducting technology. Ultimately, this will make our computers and phones thousands of times faster in the future,” said Dalton. 

Since it was discovered over 15 years ago, graphene has struggled to find as many applications as was initially hoped for, and the material has often been presented as a victim of its own hype. But then, it took over a century for the first silicon chip to be created after the material was discovered in 1824. Dalton and Tripathi’s research, in that light, seems to be another step towards finding a potentially game-changing use for graphene.


By: Daphne Leprince-Ringuet

Subject Zero Science

Graphene Processors and Quantum Gates

Since the 1960s, Moore’s law has accurately predicted the evolution of processors, with the number of transistors doubling every two years. But lately we’ve seen something odd happening: processor clocks aren’t getting any faster. This has to do with another law, called Dennard scaling, and it seems that the good old days of silicon chips are over. Hello everyone, Subject Zero here! Thankfully, a solution may have been available for quite some time now, and graphene offers something quite unique for this problem, not only for everyday processors but also for quantum computing. In 2009 it was speculated that by now we would have the famous 400GHz processors, but this technology has proven to be a bit more complicated than previously thought. Still, most scientists, including me, believe that in the next five years we will see the first commercial graphene hardware become a reality.

Xerox Awarded Patent for DLT Based Electronic Document Verification System – Aisshwarya Tiwari


According to a patent filing published November 13, 2018, by the U.S. Patent and Trademark Office (USPTO), American print and digital document solutions company Xerox has won a patent for a DLT-based auditing system that tracks revisions to electronic documents.

Increased Authenticity of Electronic Documents

Per the filing, the patent was initially filed on February 16, 2016. The patent describes a blockchain technology-based system for efficient and secure recording of changes made to electronic documents…

Read more: https://btcmanager.com/xerox-awarded-patent-dlt-based-electronic-document-verification-system/

 

 

 


Why Apple Is Finally Ditching Its Proprietary Lightning Connector For USB-C On All iPhones, iPads – Jean Baptiste Su


At the company’s “More in the Making” event on Tuesday, Apple’s vice president of hardware engineering John Ternus revealed that the new iPad Pro will have a USB-C port, already present on the latest MacBooks, instead of the company’s proprietary Lightning connector. “Because a high performance computer deserves a high performance connector. And so in these new iPad Pros, we’re moving to USB-C,” said Ternus. “This brings a whole new set of capabilities to the iPad Pro like connecting to accessories that change how you use your iPad, cameras, musical instruments, or even docks. Or connecting to high-resolution external displays up to 5K…

Read more: https://www.forbes.com/sites/jeanbaptiste/2018/10/31/why-apple-is-finally-ditching-its-proprietary-lightning-connector-for-usb-c-on-all-iphones-ipads/#409f327a434c

 

 

 

 

 


AI And The Third Wave Of Silicon Processors


The semiconductor industry is currently caught in the middle of what I call the third great wave of silicon development for processing data. This time, the surge in investment is driven by the rising hype and promising future of artificial intelligence, which relies on machine learning techniques referred to as deep learning.

As a veteran with over 30 years in the chip business, I have seen this kind of cycle play out twice before, but the amount of money being plowed into the deep learning space today is far beyond the amount invested during the other two cycles combined.

The first great wave of silicon processors began with the invention of the microprocessor itself in the early 70s. There are several claimants to the title of the first microprocessor, but by the early 1980s, it was clear that microprocessors were going to be a big business, and almost every major semiconductor company (Intel, TI, Motorola, IBM, National Semiconductor) had jumped into the race, along with a number of hot startups.

These startups (Zilog, MIPS, Sun Microsystems, SPARC, Inmos Transputer) took the new invention in new directions. And while Intel clearly dominated the market with its PC-driven volumes, many players continued to invest heavily well into the 90s.

As the microprocessor wars settled into an Intel-dominated détente (with periodic flare-ups from companies such as IBM, AMD, Motorola, HP and DEC), a new focus for the energy of many of the experienced processor designers looking for a new challenge emerged: 3-D graphics.

The highly visible success of Silicon Graphics, Inc. showed that there was a market for beautifully rendered images on computers. The PC standard evolved to enable the addition of graphics accelerator cards by the early 90s, and when SGI released the OpenGL standard in 1992, a market for independently designed graphics processing units (GPUs) was enabled.

Startups such as Nvidia, Rendition, Raycer Graphics, ArtX and 3dfx took their shots at the business. At the end of the decade, ATI bought ArtX, and the survivors of this second wave of silicon processor development were set. While RISC-based architectures like ARM, MIPS, PowerPC and SPARC persisted (and in ARM’s case, flourished), the action in microprocessors never got back to that of the late 80s and early 90s.


Competition between Nvidia and ATI (eventually acquired by AMD) drove rapid advances in GPUs, but the barrier to entry for competitors was high enough to scare off most new entrants.

In 2006, Geoffrey Hinton published a paper that described how a long-known technology referred to as neural networks could be improved by adding more layers to the networks.

This discovery changed machine learning into deep learning. In 2009, Andrew Ng, a researcher at Stanford University, published a paper showing how the computing power of GPUs could be used to dramatically accelerate the mathematical calculations required by convolutional neural networks (CNNs).

These discoveries — along with work by people like Yann LeCun and Yoshua Bengio, among many others — put in place the elements required to accelerate the development of deep learning systems: large labeled datasets, high-performance computing, new deep learning algorithms and the infrastructure of the internet to enable large-scale work and sharing of results around the world.

The final ingredient required to launch a thousand (or at least several hundred) businesses was money, which soon started to flow in abundance with venture capital funding for AI companies almost doubling every year from 2012. In parallel, large companies — established semiconductor heavyweights like Intel and Qualcomm and computing companies like Google, Microsoft, Amazon and Baidu — started to invest heavily, both internally and through acquisition.

Over the past couple of years, we have seen the rapid buildup of the third wave of silicon processor development, which has primarily targeted deep learning. A significant difference between this wave of silicon processor development and the first two waves is that the new AI or deep learning processors rarely communicate directly with user software or human interfaces — instead, these processors operate on data.

Given this relative isolation, AI processors are uniquely able to explore radically different and new implementation alternatives that are more difficult to leverage for processors that are constrained by software or GUI compatibility. There are AI processors being built in almost every imaginable way, from building on traditional digital circuits to relying on analog circuits (Mythic, Syntiant) to derivatives of existing digital signal processing designs (Cadence, CEVA) and special-purpose optimized circuits for deep learning computations (Intel Nervana, Google TPU, Graphcore).

And one popular chip architecture has been revived by a technology from the 30-year-old Inmos Transputer: systolic processing (Wave Computing, TPU), proving that everything does indeed come back in fashion one day. Think of systolic processing as the bell bottoms of the silicon processor business.
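For readers unfamiliar with the term, here is a minimal, idealized sketch of the systolic idea: operands flow step by step through a grid of simple multiply-accumulate cells, so a matrix multiplication is computed by neighboring cells passing data along rather than by each cell fetching from a shared memory. This is a toy Python simulation of the timing only, not how Wave Computing’s chips or Google’s TPU actually implement it.

```python
def systolic_matmul(A, B):
    """Toy output-stationary systolic array: cell (i, j) accumulates products
    as row i of A streams in from the left and column j of B streams in from
    the top, skewed so each value arrives one clock tick after its neighbor."""
    m, k = len(A), len(A[0])
    n = len(B[0])
    C = [[0.0] * n for _ in range(m)]

    # Ticks needed for the last (most delayed) operands to reach the far corner.
    for t in range(m + n + k - 2):
        for i in range(m):
            for j in range(n):
                s = t - i - j  # which element of the dot product arrives at (i, j) now
                if 0 <= s < k:
                    # In hardware, A[i][s] would come from the left neighbor and
                    # B[s][j] from the neighbor above; here we simply index them.
                    C[i][j] += A[i][s] * B[s][j]
    return C

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
print(systolic_matmul(A, B))  # [[19.0, 22.0], [43.0, 50.0]]
```

The appeal for deep learning is that the same grid of cells can churn through the huge matrix multiplications behind neural networks with very regular, local data movement, which is part of why the idea has come back into fashion.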


There are even companies such as Lightmatter looking to use light itself, a concept known as photonic processing, to implement AI chips. The possibilities for fantastic improvements in performance and energy consumption are mind-boggling — if we can get light-based processing to work.

This massive investment in deep learning chips is chasing what looks to be a vast new market. Deep learning will likely be a new, pervasive, “horizontal” technology, one that is used in almost every business and in almost every technology product. There are deep learning processors in some of our smartphones today, and soon they will be in even lower-power wearables like medical devices and headphones.

Deep learning chips will coexist with industry-standard servers in almost every data center, accelerating new AI algorithms every day. Deep learning will be at the core of the new superchips that will enable truly autonomous driving vehicles in the not-too-distant future. And, on top of all of this silicon, countless software offerings will compete to establish themselves as the new Microsoft, Google or Baidu of the deep learning future.


By: Ty Garibay