Why Your Workforce Needs Data Literacy

Organizations that rely on data analysis to make decisions have a significant competitive advantage in overcoming challenges and planning for the future. And yet data access and the skills required to understand the data are, in many organizations, restricted to business intelligence teams and IT specialists.

As enterprises tap into the full potential of their data, leaders must work toward empowering employees to use data in their jobs and to increase performance—individually and as part of a team. This puts data at the heart of decision making across departments and roles and doesn’t restrict innovation to just one function. This strategic choice can foster a data culture—transcending individuals and teams while fundamentally changing an organization’s operations, mindset and identity around data.

Organizations can also instill a data culture by promoting data literacy—because in order for employees to participate in a data culture, they first need to speak the language of data. More than technical proficiency with software, data literacy encompasses the critical thinking skills required to interpret data and communicate its significance to others.

Many employees either don’t feel comfortable using data or aren’t completely prepared to use it. To close this skills gap and encourage everyone to contribute to a data culture, organizations need executives who use and champion data; training and community programs that accommodate many learning needs and styles; benchmarks for measuring progress; and support systems that encourage continuous personal development and growth.

Here’s how organizations can improve their data literacy:

1. LEAD

Employees take direction from leaders who signal their commitment to data literacy, from sharing data insights at meetings to participating in training alongside staff. “It becomes very inspiring when you can show your organization the data and insights that you found and what you did with that information,” said Jennifer Day, vice president of customer strategy and programs at Tableau.

“It takes that leadership at the top to make a commitment to data-driven decision making in order to really instill that across the entire organization.” To develop critical thinking around data, executives might ask questions about how data supported decisions, or they may demonstrate how they used data in their strategic actions. And publicizing success stories and use cases through internal communications draws focus to how different departments use data.

Self-Service Learning

This approach is “for the people who just need to solve a problem—get in and get out,” said Ravi Mistry, one of about three dozen Tableau Zen Masters, professionals selected by Tableau who are masters of the Tableau end-to-end analytics platform and now teach others how to use it.

Reference guides for digital processes and tutorials for specific tasks enable people to bridge minor gaps in knowledge, minimizing frustration and the need to interrupt someone else’s work to ask for help. In addition, forums moderated by data specialists can become indispensable roundups of solutions. Keeping it all on a single learning platform, or perhaps your company’s intranet, makes it easy for employees to look up what they need.

3. MEASURE

Success Indicators

Performance metrics are critical indicators of how well a data literacy initiative is working. Identify which metrics need to improve as data use increases and assess progress at regular intervals to know where to tweak your training program. Having the right learning targets will improve data literacy in areas that boost business performance.

And quantifying the business value generated by data literacy programs can encourage buy-in from executives. Ultimately, collecting metrics, use cases and testimonials can help the organization show a strong correlation between higher data literacy and better business outcomes.

4. SUPPORT

Knowledge Curators

Enlisting data specialists like analysts to showcase the benefits of using data helps make data more accessible to novices. Mistry, the Tableau Zen Master, referred to analysts who function in this capacity as “knowledge curators” guiding their peers on how to successfully use data in their roles. “The objective is to make sure everyone has a base level of analysis that they can do,” he said.

This is a shift from traditional business intelligence models in which analysts and IT professionals collect and analyze data for the entire company. Internal data experts can also offer office hours to help employees complete specific projects, troubleshoot problems and brainstorm different ways to look at data.

What’s most effective depends on the company and its workforce: The right data literacy program will implement training, software tools and digital processes that motivate employees to continuously learn and refine their skills, while encouraging data-driven thinking as a core practice.

For more information on how you can improve data literacy throughout your organization, read these resources from Tableau:

The Data Culture Playbook: Start Becoming A Data-Driven Organization

Forrester Consulting Study: Bridging The Great Data Literacy Gap

Data Literacy For All: A Free Self-Guided Course Covering Foundational Concepts

By: Natasha Stokes

Source: Why Your Workforce Needs Data Literacy


Critics:

As data collection and data sharing become routine and data analysis and big data become common ideas in the news, business, government and society, it becomes more and more important for students, citizens, and readers to have some data literacy. The concept is associated with data science, which is concerned with data analysis, usually through automated means, and the interpretation and application of the results.

Data literacy is distinguished from statistical literacy since it involves understanding what data mean, including the ability to read graphs and charts as well as draw conclusions from data. Statistical literacy, on the other hand, refers to the “ability to read and interpret summary statistics in everyday media” such as graphs, tables, statements, surveys, and studies.

As guides for finding and using information, librarians lead workshops on data literacy for students and researchers, and also work on developing their own data literacy skills. A set of core competencies and contents that can be used as an adaptable common framework of reference in library instructional programs across institutions and disciplines has been proposed.

Resources created by librarians include MIT’s Data Management and Publishing tutorial, the EDINA Research Data Management Training (MANTRA), the University of Edinburgh’s Data Library and the University of Minnesota libraries’ Data Management Course for Structural Engineers.


30+ of 2020’s Latest Cloud Computing Trends

Cloud computing is a market that grows consistently, so there is no shortage of trends to discuss.

Last year's growth in the cloud computing market was astonishing, and it's set to grow even more through 2020 and beyond. That momentum is why so many businesses now rely on cloud computing.

Whether you're looking for VPS cloud hosting, application software, virtual networks or databases, there are plenty of cloud services to take advantage of.

If speed and security are important factors in your company, cloud computing is definitely the way forward, and I highly recommend looking further into this world.

General Cloud Computing Statistics

  • By 2025, the cloud computing market is expected to exceed $650 billion
  • 80% of organisations are expected to use cloud services by 2025
  • The main reason people turn to cloud computing is due to being able to access data from everywhere
  • By 2021, cloud data centers will process 94% of workloads (Cisco)
  • In 2018, cloud infrastructure spending surpassed $80 billion (Canalys)

Cloud Type Statistics

  • The average business runs 38% of workloads in public and 41% in private cloud (RightScale)
  • In small to medium-sized businesses, 43% use public cloud (RightScale)
  • In 2019, the revenue from the global public cloud computing market was set to reach $258 billion (Statista)
  • 89% of companies use SaaS (IDG)
  • By 2021, 75% of all cloud workloads will use SaaS (Cisco)
  • IaaS is the fastest-growing cloud spending service with a five-year CAGR of 33.7% (IDC)

Cloud Computing Adoption Statistics

  • 42% of companies say “providing access to data anytime, anywhere” is the main reason for cloud adoption (Sysgroup)
  • 38% of companies choose cloud computing due to disaster recovery (Sysgroup)
  • 37% of businesses would prefer to use cloud computing due to flexibility (Sysgroup)
  • The hybrid cloud adoption rate is 58% (RightScale)
  • 12.2% of global cloud spending is on professional services (IDC)
  • Banking accounts for 10.6% of global cloud spending (IDC)
  • Process manufacturing and retail are expected to be in the top five spenders of cloud spending in 2022 (IDC)

Cloud Security Statistics

  • Almost 2/3 of companies believe security is their biggest challenge in cloud adoption (Logicmonitor)
  • 60% of enterprises worry about privacy and regulatory issues (Logicmonitor)
  • By 2020, public cloud IaaS workloads will experience 60% fewer security incidents than traditional data centers (Gartner)
  • In 2022, at least 95% of security failures in the cloud will be caused by customers (Gartner)

Cloud Spending Statistics

  • In 2018, companies’ average yearly cloud budget was $2.2 million (IDG)
  • Between 2016 and 2018 there was a 36% increase in cloud budget (IDG)
  • In terms of revenue, online backup/recovery is the leading cloud service (15%) followed closely by email hosting (11%) (IDG)
  • Smaller companies dedicate only around 20% of their IT budget towards the cloud (Spiceworks)
  • In 2019, companies planned to spend 24% more on public cloud than they did in 2018 (RightScale)

Cloud Service Provider Statistics

  • In 2018, Amazon, Microsoft and Google accounted for 57% of the global cloud computing market (Canalys)
  • AWS attracts 52% of early-stage cloud users (RightScale)
  • 41% of beginners choose Azure (RightScale)
  • Only 9% of beginners choose Google Cloud (RightScale)
  • AWS earned more than $7.6 billion in the first quarter of 2019 (Canalys)
  • Google’s cloud service revenue was $2.3 billion (Canalys)

By: Jann Chambers


Mirror Review

The COVID-19 crisis has shaken industries globally, but the cloud industry has kept them afloat with its unique offerings. Acknowledging the efforts of cloud solution providers, we have gathered the best cloud solution providers in this pandemic. Here are the 8 Cloud Computing Trends to Watch Post COVID-19 Pandemic.

4 Benefits Of Hybrid Data Management With AWS


Organizations today are witnessing an increase in data volumes across various industries that need addressing to maintain a differentiated data management practice and stay competitive. The cloud offers capabilities to address any data management need; however, not all workloads can migrate to the cloud easily. This could be due to legacy application dependencies residing on-premises, data residency regulations or low-latency computation needs, such as in healthcare, financial and manufacturing industries.

Read on to discover: 

  • Constraints that keep data tied to on-premises environments 
  • Why companies should embrace hybrid data management practice 
  • How AWS Outposts meets your hybrid data needs 

Data management constraints organizations face 

Data residency regulations, low-latency requirements, and complex application migrations are some of the main issues surrounding the management of data. The journey to the cloud also creates challenges for data infrastructure and development teams, which must design data management models that provide consistent and reliable cloud services on-premises. These challenges vary depending on the specific industry and operational requirements.

Hybrid cloud benefits

Organizations can deploy cloud infrastructure on-premises, determine data processing priorities, and when ready, migrate towards the cloud. 

1. Cloud capabilities on-premises 

Amazon EC2 instances featuring Intel® Xeon® Scalable processors bring the same cloud capabilities on-premises. 

2. Seamless migration to the cloud 

Build an application once and deploy it in the cloud, on-premises, or in a hybrid architecture with consistent performance.  

3. Accelerated modernization 

Companies can accelerate the adoption of cloud services on-premises across teams. 

4. Focus on what matters  

Reduce the time, resources, operational risk, and maintenance downtime required for managing IT infrastructure, giving you the ability to focus on what differentiates your business. 

AWS offers a hybrid solution to meet data management needs 

The AWS Outposts catalog includes options supporting the latest-generation Intel-powered EC2 instance types, with or without local instance storage. Organizations can choose from a range of pre-validated Outposts configurations offering a mix of EC2 and EBS capacity designed to meet a variety of data management needs. 

AWS Outposts options: 

  • General-purpose  
  • Compute-optimized  
  • Memory-optimized  
  • I/O optimized 

Innovate with AWS 

Healthcare use case:  

Medical professionals manually collect structured data to store and analyze in vital fields such as cancer staging, medical/family history and patient-reported symptoms. AWS Cloud services automate data collection, and machine learning inference models accelerate data processing and the extraction of valuable insights. AWS provides the tools, services and APIs to deliver real-time video analytics and pattern matching, while delivering on-premises flexibility and access to cloud capabilities when needed. 

Finance use case: 

Financial or government institutions that need to comply with specific data regulations use hybrid cloud to meet their contractual obligations with their customers and demonstrate compliance with legal policies. AWS Outposts allows these organizations to maintain data visibility and process sensitive data locally, including local caching and filtering, and, when needed, to connect to Local Zones or send data to an AWS Region. 
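The "process locally, forward a filtered subset" pattern in this use case can be sketched in a few lines; the record fields, threshold, and redaction rule below are illustrative assumptions, not part of any AWS service API:

```python
# Hypothetical transaction records processed on-premises (e.g. on an Outpost).
transactions = [
    {"id": "t1", "account": "111-222", "amount": 5_000, "country": "DE"},
    {"id": "t2", "account": "333-444", "amount": 250_000, "country": "DE"},
    {"id": "t3", "account": "555-666", "amount": 900_000, "country": "FR"},
]

def redact(record):
    """Strip fields that must stay in-country before anything leaves."""
    return {k: v for k, v in record.items() if k != "account"}

# Full detail stays local; only large, redacted transactions are
# forwarded to the regional analytics pipeline.
to_region = [redact(t) for t in transactions if t["amount"] >= 100_000]
```

In a real deployment the filtering rules would come from the institution's compliance requirements; the point is that redaction happens before data ever crosses the regional boundary.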

Security use case: 

Companies that are interested in using Outposts to run physical security environments, such as video surveillance, badging systems or security systems, can build and run these workflows on Outposts, archiving relevant data to S3/Glacier within the AWS Region for forensic analysis. 

Getting started with AWS 

With a consistent set of infrastructure, services, tools, and APIs, AWS simplifies your data management and data migration process, reducing the effort and complexity involved. Leverage the latest Intel technology innovations to accelerate modernization at your edge too. Find out more about hybrid data management for your organization using AWS Outposts in our full guide here.

AWS Infrastructure Solutions Contributor


AWS infrastructure solutions allow enterprises across all industries the opportunity to bring AWS services closer to where it’s needed, such as on-premises with AWS Outposts, in large metro areas with AWS Local Zones, or at the edge of 5G networks with AWS Wavelength. These solutions offer enterprises the capability to deliver innovative applications and immersive next-generation experiences using AWS cloud services where they need it. Millions of customers—including the fastest-growing startups, largest enterprises, and leading government agencies—are using AWS to lower costs, speed time to market, and become more dynamic. To learn more about AWS infrastructure solutions, visit aws.amazon.com.


AWS Online Tech Talks

Enterprises are rapidly adopting the cloud for greater agility and cost savings. However, they often find that some applications need to be re-architected or “modernized” before they can be moved to the cloud. Others need to remain on-premises due to low-latency or data processing requirements. As a result, enterprises are looking to hybrid cloud architectures to integrate their on-premises and cloud operations to support a broad spectrum of hybrid use cases, such as data center extension, VMware cloud migration, or building and managing applications using a common set of cloud services and APIs across on-premises and cloud environments.

In this tech talk, you will learn how you can build your hybrid cloud architecture with AWS. We will cover our extensive portfolio of services that offer seamless integration between your on-premises and cloud environments for any hybrid use case.

Learning Objectives:

  • Discover AWS services that offer seamless integration across on-premises and cloud environments
  • See how to build the hybrid cloud architecture to support your use case
  • Learn about new services that bring cloud services on-premises


Is Data The Answer To Scaling Compassion In Healthcare?

I completed my internal medicine residency at a large urban hospital system in Boston, Massachusetts. One particularly challenging day, I worked hard to arrange in-hospital dialysis for a patient—only to find out later that day that he left the hospital against medical advice and without receiving dialysis.

His reason for leaving was a complicated yet common social situation. Later, during rounds, I voiced my frustration about this patient’s actions. “He made the wrong choice,” I said. My attending (supervising) physician stopped mid-stride and said, “No, Vick. He made the choice that’s right for him.” My attending physician calmly explained, “This isn’t about you and your frustration; have the courage to admit this will never be about you. This is about him and his life.” At the time, most of my 20 years of formal education had been about me: my work as a physician, striving to execute the treatment plan as I saw best for my patients. 

That day I learned a critical lesson that broadened my perspective on patient needs: Medical knowledge, data, lab tests, and more are incredibly informative and meaningful when guided by compassion for the whole person who needs care. That lesson has stayed with me from residency to my current role at Commonwealth Care Alliance (CCA), where I oversee the application of data science, data engineering, and data-driven decision-making to improve the care that our patients receive.

Data-informed and compassion-guided healthcare during the COVID-19 pandemic

CCA is a community-based healthcare organization that’s nationally recognized as a leader in providing care for high-cost, high-need individuals who are dually eligible for Medicare and Medicaid, including individuals with disabilities. These individuals live with a broad range of complex medical, behavioral, and social needs, which leads to high rates of marginalization and vulnerability. CCA provides services to nearly 40,000 members in Massachusetts, including medical care, behavioral-health care, durable medical equipment, transportation, and social services and supports.

We want our data to serve as a primary vehicle for decision-making and learning, our experience and intuition to provide context, and our compassion to occupy the driver’s seat. 

At CCA, we are pioneering the necessary convergence of data and compassion in healthcare. We recognize that we must employ more than data-driven decision-making; we must be both data-informed and compassion-guided. We define “data-informed” as the combined use of data, experience, and intuition (each with their strengths and weaknesses) to make the best possible choices for a situation despite its complexities. In other words, we want our data to serve as a primary vehicle for decision-making and learning, our experience and intuition to provide context, and our compassion to occupy the driver’s seat. 

The COVID-19 pandemic provided us with a practical example unlike any other scenario. It put stress on all the usual societal supports in Massachusetts (and elsewhere) and magnified the vulnerability of every individual. About 30% of our members are at high risk of complications or death from COVID-19. From our experience, we knew our members would need enhanced support during the pandemic.

Our data-informed approach ensured we were aware—sooner and more accurately—about each individual’s risks and needs, as well as about disruptions to existing community support. Our approach allowed us to proactively engage with our members to keep them safe (for example, avoid hospitalizations, obtain essential medications and home oxygen) and supported (for example, relieve them of the sense of social isolation or fear). 
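A data-informed outreach queue of the kind described can be sketched as a simple composite score; the member attributes and weights below are illustrative assumptions, not CCA's actual risk model:

```python
# Hypothetical member records with a few risk factors.
members = [
    {"name": "A", "age": 81, "on_home_oxygen": True,  "lives_alone": True},
    {"name": "B", "age": 45, "on_home_oxygen": False, "lives_alone": True},
    {"name": "C", "age": 67, "on_home_oxygen": True,  "lives_alone": False},
]

def risk_score(m):
    # Illustrative weights: age over 65, oxygen dependence, isolation.
    return (m["age"] > 65) * 2 + m["on_home_oxygen"] * 3 + m["lives_alone"] * 1

# Highest-risk members surface first in the outreach queue.
outreach_queue = sorted(members, key=risk_score, reverse=True)
```

As the article notes, the real system updated these priorities continuously as community conditions shifted; the sketch shows only the core idea of ranking outreach by need rather than by arrival order.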

Could we build the first platform to scale compassion and value-based healthcare?

We were already building a data platform to support and accelerate CCA’s mission. We had gratefully drawn inspiration from outside of healthcare to design and build a modern solution to meet our needs. Then the COVID-19 pandemic arrived and accelerated our need for such a platform—as seen in CCA members’ needs. Our organizational response revealed the profound benefit provided when data is combined with compassion in healthcare. With essential data easily available, we were freed up to think more holistically and with compassion about our members’ needs. We saw their needs more clearly and how many of them needed support—for example, about 25% of members did not have another person or organization to help keep them safe and supported. 

On average, a CCA clinician consults data nearly 60 times a minute during the working day.

The urgency of COVID-19 accelerated our understanding of how data can improve interactions and overall care. The use of data broadened our compassion; it did not detract from it. Our experience reinforced that you must build the ability to iterate quickly from being wrong to finally getting it right, and the technology and data must allow for that.

We did not need to “pivot” our data platform; the design and technology allowed clinicians and care managers to rapidly receive tailored and instantly updated information to help them prioritize outreach based on quickly shifting factors. This experience reinforced our transformation to a data-informed, compassion-guided healthcare organization; on average a CCA clinician consults data nearly 60 times a minute during the working day. We anticipate that we will continue to lean upon data to serve our members during the potential combination of influenza and COVID-19.

As a servant-leader, caregiver, and builder, I think it’s beautiful to use the best technologies (such as Looker and Google Cloud) to care for the most vulnerable individuals, especially in times of great need. I see the profound benefit of a system that would learn and scale holistic, compassionate, value-based care. Designing data and systems around human needs and compassion (that is, human-centered) lets us make sense of volumes of data so we can care for people who live complex lives.

Without good use of data, we risk only seeing what we already know (or think we know) and reinforcing existing disparities or poor outcomes. Ideally, the technology powering the care we offer should fade into the background, like GPS now does as we’re driving or walking. When essential data is easily available, the data and tools themselves fade into the background; data simply becomes part of the context of doing our jobs and serving people who are vulnerable. 

Keep learning: Does the key to business transformation start with a data-driven culture? Read this whitepaper to learn how to foster a culture that improves agility, intelligence, insights, and trust. And join the author during his session at JOIN@Home.

The author of this article would also like to share these additional resources that inspired and shaped his personal views on this subject: DataKitchen, Sequoia Data Science, Airbnb Engineering and Data Science, and Max Beauchemin on Medium.

Valmeek Kudesia, MD, VP Clinical Informatics and Advanced Analytics, Commonwealth Care Alliance

Valmeek Kudesia is an experienced physician-leader, engineer, and board-certified clinical informatician. He is a servant-leader who transforms healthcare organizations into learning organizations. Valmeek leads interdisciplinary teams to design and equip healthcare organizations with information platforms, data science tools, and change-processes for patient and organization success (particularly in the high-complexity value-based care setting). He describes himself: “I’m a doctor who talks tech, does data, and does systems. I take care of people and build things that take care of people.”


GE Healthcare

Driving innovation and leveraging years of experience in the healthcare industry, GE Healthcare’s eHealth Solutions is a leading provider of health information exchange (HIE). Providing clinical information within existing workflows gives providers the relevant information they need to deliver the best care. Our standards-based technology supports secure exchange of patient data in virtually any scenario, between providers, across regions and provinces.

About GE Healthcare: As a leading global medical technology and digital solutions innovator, GE Healthcare enables clinicians to make faster, more informed decisions through intelligent devices, data analytics, applications and services, supported by its Edison intelligence platform. With over 100 years of healthcare industry experience and around 50,000 employees globally, the company operates at the center of an ecosystem working toward precision health, digitizing healthcare, helping drive productivity and improve outcomes for patients, providers, health systems and researchers around the world. We embrace a culture of respect, transparency, integrity and diversity.

We Need to Change How We Share Our Personal Data Online in the Age of COVID-19


A few months into the coronavirus pandemic, the web is more central to humanity’s functioning than I could have imagined 30 years ago. It’s now a lifeline for billions of people and businesses worldwide. But I’m more frustrated now with the current state of the web than ever before. We could be doing so much better.

COVID-19 underscores how urgently we need a new approach to organizing and sharing personal data. You only have to look at the limited scope and the widespread adoption challenges of the pandemic apps offered by various tech companies and governments.

Think of all the data about your life accumulated in the various applications you use – social gatherings, frequent contacts, recent travel, health, fitness, photos, and so on. Why is it that none of that information can be combined and used to help you, especially during a crisis?

It’s because you aren’t in control of your data. Most businesses, from big tech to consumer brands, have siphoned it for their own agendas. Our global reactions to COVID-19 should present us with an urgent impetus to rethink this arrangement.

For some years now, I, along with a growing number of dedicated engineers, have been working on a different kind of technology for the web. It’s called Solid. It’s an update to the web – a course-correction if you will – that provides you with a trusted place or places to store all your digital information about your life, at work and home, no matter what application you use that produces it. The data remains under your control, and you can easily choose who can access it, for what purpose, and for how long. With Solid, you can effectively decide how to share anything with anyone, no matter what app you or the recipient uses. It’s as if your apps could all talk to one another, but only under your supervision.
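The scoped, time-limited sharing described above can be illustrated with a toy grant store; this is a conceptual sketch of user-controlled, revocable access, not Solid's actual API or its Web Access Control vocabulary:

```python
from datetime import date

# Toy grant store: (data_category, grantee) -> purpose and expiry date.
grants = {}

def grant(category, grantee, purpose, expires):
    """The data owner decides who sees what, for what, and until when."""
    grants[(category, grantee)] = {"purpose": purpose, "expires": expires}

def can_access(category, grantee, today):
    """Access lapses automatically once the granted window closes."""
    g = grants.get((category, grantee))
    return g is not None and today <= g["expires"]

# Share symptom photos with your doctor for the duration of an outbreak...
grant("symptom_photos", "dr_lee", "diagnosis", expires=date(2020, 9, 1))
# ...and privacy is restored without any further action once it passes.
assert can_access("symptom_photos", "dr_lee", date(2020, 8, 1))
assert not can_access("symptom_photos", "dr_lee", date(2020, 10, 1))
```

In Solid itself, grants like these live with the data in the user's pod and are enforced by the server, so every application sees the same owner-defined permissions.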

There’s even more that could have been done to benefit the lives of people impacted by the crisis – simply by linking data between apps. For example:

What if you could safely share photos about your symptoms, your fitness log, the medications you’ve taken, and places you’ve been directly with your doctor? All under your control.

What if your whole family could automatically share location information and daily temperature readings with each other so you’d all feel assured when it was safe to visit your grandfather? And be sure no-one else would see it.

What if health providers could during an outbreak see a map of households flagged as immuno-compromised or at-risk, so they could organize regular medical check-ins? And once the crisis is over, their access to your data could be taken away, and privacy restored.

What if grocery delivery apps could prioritize homes based on whether elderly residents lived there? Without those homes or the people in them having their personal details known by the delivery service.

What if a suddenly unemployed person could, from one simple app, give every government agency access to their financial status and quickly receive a complete overview of all the services for which they’re eligible? Without being concerned that any agency could pry into their personal activity.

None of this is possible within the constructs of today’s web. But all of it and much more could be possible. I don’t believe we should accept the web as it currently is or be resigned to its shortcomings, just because we need it so much. It doesn’t have to be this way. We can make it better.

My goal has always been a web that empowers human beings, redistributes power to individuals, and reimagines distributed creativity, collaboration, and compassion.

Today, developers are creating exciting new applications and organizations are exploring new ways to innovate. The momentum for this new and vibrant web is already palpable, but we must not let the crisis distract us. We must be ready to hit the ground running once this crisis passes so we are better prepared to navigate the next one. To help make this a reality, I co-founded a company, called Inrupt, to support Solid’s evolution into a high-quality, reliable technology that can be used at scale by businesses, developers, and, eventually, by everyone.

Let’s free data from silos and put it to work for our personal benefit and the greater good. Let’s collaborate more effectively and innovate in ways that benefit humanity and revitalize economies. Let’s build these new systems with which people will work together more effectively. Let’s inspire businesses, governments, and developers to build powerful application platforms that work for us, not just for them.

Let’s focus on making the post-COVID-19 world much more effective than the pre-COVID-19 world. Our future depends on it.

BY TIM BERNERS-LEE


Billionaire Tech Entrepreneur Tom Siebel Built A Massive Compendium Of Covid-19 Datasets. Some 2,000 Researchers Now Use It


Billionaire tech entrepreneur Tom Siebel struck gold with Siebel Systems, which he sold to Oracle in 2006, and is trying again with artificial intelligence firm C3.ai, valued at $3.3 billion. But as the pandemic hit, business slowed and he spent weeks immersed in how to use data to help Covid-19 researchers. He set up a so-called “data lake” of Covid-19 information, culled from Johns Hopkins, the World Health Organization, the Institute for Health Metrics and Evaluation, the Covid Tracking Project and dozens of other organizations that researchers could access in one place for free.

All told, he says, some 2,000 active users from around the world are now working with this compendium of datasets to research the course of the disease and ways to mitigate it. Among the users, he says, are researchers at the National Institutes of Health, MIT and various pharmaceutical companies.

“What’s difficult about these data sets is making all the connections. All of these data sets are extraordinarily large with tens of thousands of fields, and hundreds of millions of records. In order to make them useful for analytics you need to connect issues like comorbidity and infection rates,” he says. “The number of things we have connected is mind-numbing.”

Siebel, 67, is in a unique position to create a compendium of data sets. He spent more than a decade and, he says, nearly $1 billion building the technology underlying C3.ai, which offers predictive analytics to customers that include 3M, Royal Dutch Shell and the U.S. Air Force. His Redwood City, California-based business has grown rapidly, passing $160 million in revenue for the fiscal year ended in April. Yet as the pandemic hit the United States this spring, Siebel–who expects both a recession and a massive shakeout among AI companies–became one of more than a dozen billionaires to borrow money from the federal Paycheck Protection Program, accessing between $5 million and $10 million, according to data from the Small Business Administration. (For more on Siebel and other billionaires who’ve borrowed from the PPP, see our online feature; for more on C3.ai, see our 2017 magazine story.)

C3.ai cleaned up the data sets using the automated tools it developed for its corporate customers, so that researchers could access data that is structured, machine-readable and free of anomalies. The effort began with 11 data sets, published in April, and had expanded to 32 by June. Siebel says he intends to continue adding new datasets to the data lake, which is hosted on AWS.
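Cleanup of this kind typically means deduplicating records, normalizing formats, and rejecting impossible values. The sketch below, in plain Python, illustrates that general idea only; the field names, date formats, and rules are invented for illustration and are not C3.ai's actual pipeline:

```python
# Hypothetical raw case records, with the kinds of defects a cleanup
# pass must handle: duplicates, inconsistent date formats, bad values.
from datetime import datetime

raw_records = [
    {"region": "US ", "date": "2020-04-01", "cases": "120"},
    {"region": "US ", "date": "2020-04-01", "cases": "120"},   # exact duplicate
    {"region": "IT",  "date": "04/02/2020", "cases": "95"},    # non-ISO date
    {"region": "FR",  "date": "2020-04-02", "cases": "-3"},    # impossible value
]

def clean(records):
    """Return structured, deduplicated records with ISO dates and valid counts."""
    seen, out = set(), []
    for r in records:
        region = r["region"].strip()
        # Accept either ISO or US-style dates; emit ISO.
        for fmt in ("%Y-%m-%d", "%m/%d/%Y"):
            try:
                date = datetime.strptime(r["date"], fmt).date().isoformat()
                break
            except ValueError:
                continue
        else:
            continue  # unparseable date: drop the record
        cases = int(r["cases"])
        if cases < 0:
            continue  # anomalous negative count: drop
        key = (region, date)
        if key in seen:
            continue  # duplicate: drop
        seen.add(key)
        out.append({"region": region, "date": date, "cases": cases})
    return out

cleaned = clean(raw_records)
print(cleaned)  # two records survive, both with ISO dates
```

A real pipeline would also have to reconcile tens of thousands of fields across sources, which is where the "connections" Siebel describes come in.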

“This is a natural application of AI,” Siebel says. “There are a lot of applications of AI that we both know are a little scary and onerous, and this is one that is potentially enormously socially beneficial.”


The data effort is one of two Covid-19 projects that Siebel launched this spring. The other, called the C3.ai Digital Transformation Institute, is giving away more than $300 million in grants and in-kind resources to data-driven, Covid-19 research projects in partnership with Microsoft. The University of California, Berkeley, and the University of Illinois at Urbana-Champaign are managing that consortium, which has funded 26 projects to date.

“We’re doing our best to help advance the underlying science that will make this problem go away,” Siebel says. “Until we make this problem go away, I don’t think we’re going to get this economy back on its feet.”



I’m a senior editor at Forbes, where I cover manufacturing, industrial innovation and consumer products. I previously spent two years on the Forbes Entrepreneurs team. It’s my second stint here: I learned the ropes of business journalism under Forbes’ legendary editor Jim Michaels in the 1990s. Before rejoining, I was a senior writer or staff writer at BusinessWeek, Money and the New York Daily News. My work has also appeared in Barron’s, Inc., the New York Times and numerous other publications. I’m based in New York, but my family is from Pittsburgh—and I love stories that get me out into the industrial heartland. Ping me with ideas, or follow me on Twitter @amyfeldman.

Source: https://www.forbes.com


How To Succeed Without Data, In A Data Driven World – Chaka Booker


There are some words that inspire confidence when you use them. “Data” is one of those words. Throw “data-driven” in front of “decision-making” and you’ll suddenly find yourself more credible. If someone is sharing an idea, ask about “the data” and your IQ shoots up several points. I believe in data. I understand how data can identify trends, minimize risk and lead to better decisions. Data comforts me. But the fixation on data has a drawback. It leads to the belief that decisions made without data – aren’t as strong. Never mind that bad decisions, based on data, get made all the time…

Read more: https://www.forbes.com/sites/chakabooker/2018/10/06/how-to-make-decisions-without-data-in-a-data-driven-world/#5ba0f01e1d6e

Your kind donations would go a long way toward supporting our future research and endeavors – thank you.

What is The Difference Between Data Analysis and Data Science?


With the current technological transformation of the economy, an enormous range of career options has emerged, and Data Science is among the hottest. According to Glassdoor, Data Science ranks as one of the highest-paid fields. At the same time, a closely related field has been drawing attention for years: Data Analysis. Data Science and Data Analysis are often confused, yet the two differ considerably in their job roles and in the contributions they make to businesses. But are those the only factors that distinguish them? To find out, let’s take a look below:

Data Analysis vs. Data Science:

Data Analysis refers to the process of gathering data and then analyzing it to inform decision making for the business. The analysis is undertaken with a business goal in mind and shapes strategy. Data Science, by contrast, is a much broader concept in which a set of tools and techniques is applied to extract insights from data. It draws on mathematics, statistics, scientific methods and more to drive the analysis of data.

Skills:

People often conflate Data Analysis with Data Science, but the methodologies for the two are diverse, and so are the skill sets. The fundamental skills required for Data Analysis include data visualization, Hive and Pig, communication skills, mathematics, statistics, and an in-depth understanding of R and Python. Data Science, on the other hand, calls for skills such as machine learning, analytical thinking, database coding, SAS/R, and an understanding of Bayesian networks and Hive.

Techniques:

Though Data Analysis and Data Science are often assumed to be similar, their techniques differ as well. The essential techniques used in Data Analysis include data mining, regression, network analysis, simulation, time-series analysis and genetic algorithms. Data Science, meanwhile, involves split testing, problem categorization, cluster analysis and more.
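To make the simplest of the techniques named above concrete, here is a minimal ordinary-least-squares regression written in plain Python. The spend-versus-sales figures are made up purely for demonstration:

```python
# Simple linear regression (y = a + b*x) by ordinary least squares,
# using only the standard library.

def linear_regression(xs, ys):
    """Fit y = a + b*x; returns (intercept a, slope b)."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    # Slope is covariance(x, y) divided by variance(x).
    b = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) \
        / sum((x - mean_x) ** 2 for x in xs)
    a = mean_y - b * mean_x
    return a, b

# Hypothetical monthly ad spend (x, in $k) vs. sales (y, in $k).
spend = [1.0, 2.0, 3.0, 4.0, 5.0]
sales = [2.1, 3.9, 6.2, 7.8, 10.1]
intercept, slope = linear_regression(spend, sales)
print(f"sales = {intercept:.2f} + {slope:.2f} * spend")
```

In practice an analyst would reach for a statistics package rather than hand-rolling the fit, but the underlying calculation is the same.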

Aim:

Just as the fields differ, so do their goals. Data Analysis is essentially about answering questions that have already been posed, for the betterment of the business, while Data Science is concerned with shaping the questions as well as answering them. Data Science, as illustrated above, is the broader concept.

The era of Artificial Intelligence and Machine Learning is reshaping the economy comprehensively. Organizations are moving toward data-driven decision making, and data is becoming imperative to their operations, no longer limited to information-technology companies. It is quickly spreading into industries such as sports, medicine and hospitality.

Such technological advancements have driven a rise in job opportunities in Data Science and Data Analysis. The one significant point to keep in mind is the difference between the two. Big Data is the future, and it is expected to have a considerable impact on the operations of both industry and daily life.

The World’s Most Valuable Resource Is No Longer Oil, But Data – The Economist


A NEW commodity spawns a lucrative, fast-growing industry, prompting antitrust regulators to step in to restrain those who control its flow. A century ago, the resource in question was oil. Now similar concerns are being raised by the giants that deal in data, the oil of the digital era.

These titans—Alphabet (Google’s parent company), Amazon, Apple, Facebook and Microsoft—look unstoppable. They are the five most valuable listed firms in the world. Their profits are surging: they collectively racked up over $25bn in net profit in the first quarter of 2017. Amazon captures half of all dollars spent online in America. Google and Facebook accounted for almost all the revenue growth in digital advertising in America last year.

Such dominance has prompted calls for the tech giants to be broken up, as Standard Oil was in the early 20th century. This newspaper has argued against such drastic action in the past. Size alone is not a crime. The giants’ success has benefited consumers. Few want to live without Google’s search engine, Amazon’s one-day delivery or Facebook’s newsfeed.

Nor do these firms raise the alarm when standard antitrust tests are applied. Far from gouging consumers, many of their services are free (users pay, in effect, by handing over yet more data). Take account of offline rivals, and their market shares look less worrying. And the emergence of upstarts like Snapchat suggests that new entrants can still make waves.

But there is cause for concern. Internet companies’ control of data gives them enormous power. Old ways of thinking about competition, devised in the era of oil, look outdated in what has come to be called the “data economy” (see Briefing). A new approach is needed.

Quantity has a quality all its own

What has changed? Smartphones and the internet have made data abundant, ubiquitous and far more valuable. Whether you are going for a run, watching TV or even just sitting in traffic, virtually every activity creates a digital trace—more raw material for the data distilleries. As devices from watches to cars connect to the internet, the volume is increasing: some estimate that a self-driving car will generate 100 gigabytes per second. Meanwhile, artificial-intelligence (AI) techniques such as machine learning extract more value from data. Algorithms can predict when a customer is ready to buy, a jet engine needs servicing or a person is at risk of a disease. Industrial giants such as GE and Siemens now sell themselves as data firms.

This abundance of data changes the nature of competition. Technology giants have always benefited from network effects: the more users Facebook signs up, the more attractive signing up becomes for others. With data there are extra network effects. By collecting more data, a firm has more scope to improve its products, which attracts more users, generating even more data, and so on.

The more data Tesla gathers from its self-driving cars, the better it can make them at driving themselves—part of the reason the firm, which sold only 25,000 cars in the first quarter, is now worth more than GM, which sold 2.3m. Vast pools of data can thus act as protective moats.

Access to data also protects companies from rivals in another way. The case for being sanguine about competition in the tech industry rests on the potential for incumbents to be blindsided by a startup in a garage or an unexpected technological shift. But both are less likely in the data age. The giants’ surveillance systems span the entire economy: Google can see what people search for, Facebook what they share, Amazon what they buy. They own app stores and operating systems, and rent out computing power to startups. They have a “God’s eye view” of activities in their own markets and beyond. They can see when a new product or service gains traction, allowing them to copy it or simply buy the upstart before it becomes too great a threat.

Many think Facebook’s $22bn purchase in 2014 of WhatsApp, a messaging app with fewer than 60 employees, falls into this category of “shoot-out acquisitions” that eliminate potential rivals. By providing barriers to entry and early-warning systems, data can stifle competition.

Who ya gonna call, trustbusters?

The nature of data makes the antitrust remedies of the past less useful. Breaking up a firm like Google into five Googlets would not stop network effects from reasserting themselves: in time, one of them would become dominant again. A radical rethink is required—and as the outlines of a new approach start to become apparent, two ideas stand out.

The first is that antitrust authorities need to move from the industrial era into the 21st century. When considering a merger, for example, they have traditionally used size to determine when to intervene. They now need to take into account the extent of firms’ data assets when assessing the impact of deals.

The purchase price could also be a signal that an incumbent is buying a nascent threat. On these measures, Facebook’s willingness to pay so much for WhatsApp, which had no revenue to speak of, would have raised red flags. Trustbusters must also become more data-savvy in their analysis of market dynamics, for example by using simulations to hunt for algorithms colluding over prices or to determine how best to promote competition.

The second principle is to loosen the grip that providers of online services have over data and give more control to those who supply them. More transparency would help: companies could be forced to reveal to consumers what information they hold and how much money they make from it.

Governments could encourage the emergence of new services by opening up more of their own data vaults or managing crucial parts of the data economy as public infrastructure, as India does with its digital-identity system, Aadhaar. They could also mandate the sharing of certain kinds of data, with users’ consent—an approach Europe is taking in financial services by requiring banks to make customers’ data accessible to third parties.

Rebooting antitrust for the information age will not be easy. It will entail new risks: more data sharing, for instance, could threaten privacy. But if governments don’t want a data economy dominated by a few giants, they will need to act soon.

