Reducing Risk When Migrating Mission-Critical Applications To The Cloud

Over the last decade, significant strides have been made in cloud computing, and today’s enterprises have substantial data and application footprints in the cloud. Many organizations are moving toward implementing cloud-based operations for their most crucial business applications.

A cloud-first mindset is usually a given for new companies and continues to gain traction among established enterprises. Still, existing legacy infrastructure and large on-premises footprints that don’t map easily to cloud architectures are slowing and even blocking faster adoption.

Organizations are poised to prioritize cloud investments over the next five years, according to the results of IDC’s Future of Operations research. The appeal includes the potential for an improved experience for customers, employees and suppliers/partners, better development agility, improved time to market and increased operational efficiency across organizations.

Although a pivot to the cloud could complete the evolution of the business from an operational and capital perspective, significant barriers to broader adoption still exist.

Cloud spending is on track to surpass $1 trillion by 2024, partly due to urgent changes to business operations driven by the pandemic, which accelerated cloud adoption timelines for many companies. Recent research also finds that optimizing cloud costs tops companies’ priorities for 2021, the fifth year in a row it has led the list.

Increasingly ambitious migration timelines are pushing organizations to make important decisions about moving critical applications without fully understanding the risks. Many are addressing application and data migration with largely ineffective, one-size-fits-all solutions that don’t always meet expectations and often cause more problems than they promise to solve.

Others move with extreme caution when deciding which applications to keep on-prem and which to move, migrate or refactor. Mission-critical apps remain on legacy infrastructure to ensure control over the foundational data and safely maintain business as usual.

Moving from massive, on-prem data centers to the cloud presents a future filled with possibilities but also a level of risk due to the various unknowns within this significant paradigm shift. After all, a mission-critical app can be essential to the immediate viability of an organization and fundamentally necessary for success.

Although moving to the cloud is the way forward for many modern companies, migration can prove time-consuming and highly challenging, often with incomplete or unacceptable results. Successful migration can further business opportunities, but the risk of failure is considerable, and the high visibility that accompanies these major initiatives increases both the exposure and the consequences of that failure.

Mission-critical initiatives often span the length and breadth of an organization, from operational groups up to the C-suite and beyond. But just as all data is not created equal, neither are clouds or migration strategies.

Data Mobility Matters

With increasing cloud investments comes a growing need for more accessible data mobility. As more data moves to the cloud and strategies expand to occasionally include multicloud environments, there’s an expectation that underlying cloud resources deliver about the same level of performance as on-prem. But, often, the required type and volume of cloud resources are not available and deployment is difficult or impossible.

Performance is instrumental in determining where a mission-critical application should live and drives myriad scaling considerations and challenges. Sometimes, particular features, functionality and capabilities are lacking.

Perhaps the data primarily resides in a private or hybrid cloud, with the public cloud used for cloud bursting when capacity needs balloon. The longtime legacy challenge of architecting for the peak versus the average persists. At the same time, cloud decisions have forced IT leaders to relinquish a level of control over the physical infrastructure, significantly increasing risk.

Managing data mobility is challenging. To increase success, plan an approach that minimizes workflow disruptions of critical processes while ensuring sufficient capacity to support expected workloads and providing enough scalability to handle unexpected workloads. Managing random workload fluctuations requires a solid plan and a scalable, flexible and agile architecture to avoid those black swan events that are all too threatening.
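To make the peak-versus-average trade-off concrete, here is a minimal sketch of a capacity-planning decision. The capacity figure, headroom threshold and workload numbers are invented for illustration and are not tied to any particular cloud provider or API.

```python
# Hypothetical sketch: deciding when to "burst" beyond a fixed on-prem capacity.
# The capacity figure, headroom threshold and workloads are invented for illustration.

ON_PREM_CAPACITY = 1000   # requests/sec the on-prem environment can sustain
BURST_HEADROOM = 0.85     # start bursting before on-prem saturates

def plan_capacity(expected_load: float) -> dict:
    """Split an expected workload between on-prem and public-cloud capacity."""
    threshold = ON_PREM_CAPACITY * BURST_HEADROOM
    if expected_load <= threshold:
        return {"on_prem": expected_load, "public_cloud": 0.0}
    # Architect on-prem for the average; burst to the public cloud for the peak.
    return {"on_prem": threshold, "public_cloud": expected_load - threshold}

for load in (600, 900, 1500):   # average day, busy day, unexpected spike
    print(load, plan_capacity(load))
```

The point of the sketch is the design choice it encodes: size the fixed environment for typical demand and treat the public cloud as overflow, rather than owning enough hardware to cover every black swan event.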

Cloud Migration Considerations

Successful migration is never trivial, but many applications can move to a platform as a service (PaaS) or managed service and be up and running fairly quickly. For performance-sensitive, vertically scaled monolithic applications that have run on the most expensive hardware for decades, however, moving can prove challenging and even impossible.

Ideally, refactoring improves an application’s internal architecture without changing its external behavior, while also potentially yielding gains in cost efficiency, maintainability or performance. But not all mission-critical applications are a fit for a refactor. Complexity, cost and the risk of disrupting a mission-critical app that’s performing as expected are valid reasons to leave some apps on-prem.
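To make that definition concrete, here is a minimal, hypothetical before-and-after sketch of a behavior-preserving refactor in Python. The function names and tax rule are invented; the only point is that inputs and outputs stay identical while the internal structure improves.

```python
# Before: the tax logic is buried and duplicated inside one function.
def total_price_before(items):
    total = 0
    for item in items:
        if item["taxable"]:
            total += item["price"] + round(item["price"] * 0.2)
        else:
            total += item["price"]
    return total

# After: same external behavior, with the tax rule extracted and named.
TAX_RATE = 0.2

def _price_with_tax(item):
    tax = round(item["price"] * TAX_RATE) if item["taxable"] else 0
    return item["price"] + tax

def total_price_after(items):
    return sum(_price_with_tax(item) for item in items)

items = [{"price": 1000, "taxable": True}, {"price": 500, "taxable": False}]
assert total_price_before(items) == total_price_after(items) == 1700
```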

Others are constrained by performance requirements that aren’t achievable in the cloud with current offerings. There are fundamental limitations to the types of applications and databases that can quickly move to the cloud, and overhauling those solutions introduces significant risk, possibly resulting in critical delays and higher costs.

Solving Migration Problems

The best plan to mitigate the risks and improve the odds for cloud migration is to eliminate silos between multiple clouds and on-prem — regardless of type or location — facilitating a free flow of information in a simple, resilient, well-understood fashion. The next truth can’t be overstated:

Data is the new oil and should be treated as such. Just as trained specialists are leveraged to find and extract oil, specialized experts should be utilized when performing high-risk, business-changing moves regarding mission-critical data and the application stacks that access it. Ideally, the team migrating mission-critical applications should be proficient in enabling data mobility across environments without refactoring to reduce risk.

The question of cloud migration in 2021 is often no longer “if” but “when and how.” The material risk of maintaining the status quo can be significant, and avoiding moving mission-critical applications to the cloud is often no longer an option. A wise man once said, “What’s dangerous is not to evolve,” and this truism fully applies to an organization’s journey to the cloud.


Source: Reducing Risk When Migrating Mission-Critical Applications To The Cloud

What An Ethical Data Strategy Can Look Like

Angela Benton is the founder and CEO of Streamlytics, a company that collects first-party consumer data transparently and aims to disrupt the current model of third-party mining of data from cookies and other methods that raise privacy and ethics concerns. Most recently, she was named one of Fast Company’s Most Creative People for helping consumers learn what major companies know about them and paying them for the data they create while using streaming services like Netflix or Spotify.

In the latest Inc. Real Talk streaming event, Benton explains that she founded the company with minorities in mind, particularly the Black and Latinx communities, because of the disproportionate way they’ve been affected by data and privacy practices. For example, she points to the recent controversy over facial recognition data being sold to the police; facial recognition has a much higher error rate for Black and Asian male faces, which could lead to wrongful arrests.

“That becomes extremely important when you think of what artificial intelligence is used for in our day-to-day world,” she says, noting that AI is used for everyday interactions like loan applications, car applications, mortgages, and credit cards. Using her company’s methods, Benton says, clients can secure ethically sourced data, so that algorithms won’t negatively affect communities that have historically suffered from discriminatory practices.

Here are a few suggestions from Benton for finding data ethically without relying on third-party cookies.

Do your own combination of data sets.

“How [Streamlytics] gets data is very old school,” Benton says. Instead of relying on tech to combine data points, she says, you can manually compare data you already own and make assumptions using your best judgment. You may have data from a Shopify website, for example, about the demographics of your customers; you can then go to a specific advertiser, such as Hulu, to target people who fit that profile.
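As a rough sketch of that kind of manual combination, the snippet below joins two first-party exports and profiles who buys a given category. The file names, columns and the “skincare” category are invented for illustration, and pandas is just one convenient tool for it.

```python
# Hypothetical sketch of combining first-party data sets you already own.
import pandas as pd

# e.g. a storefront export with basic demographics, and an order-history export
customers = pd.read_csv("storefront_customers.csv")   # customer_id, age_band, region
orders = pd.read_csv("order_history.csv")             # customer_id, category, spend

merged = orders.merge(customers, on="customer_id", how="left")

# Profile of who buys a given category, usable when briefing an advertising partner.
profile = (
    merged[merged["category"] == "skincare"]
    .groupby(["age_band", "region"])["spend"]
    .agg(["count", "sum"])
    .sort_values("sum", ascending=False)
)
print(profile.head())
```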

Use your data to discover new products.

You can also look to your data to find common searches or overlapping interests to get ideas for new products, Benton says. Often, she says, she receives data requests from small business owners to discover ideas that aren’t currently on the market, for example, a vegan searching for a vitamin.

This combination method surprised Benton when she presented clients with data. “I thought it was going to be more focused on just, like, ‘How can I make more money?’” she says. “But we are hearing from folks that they want access to data to use it in more creative ways.”

Don’t take social media data at face value.

Benton and her company purposely do not source social media data because she thinks the data leave too much out of the full picture. You may get a customer’s age and “likes” from a social media page, but that doesn’t tell you what they’re searching for or what their habits are.


“That’s not, to me, meaningful data. That’s not where the real value lies,” she says. “We’re not focused on what people are doing on social media, we’re focused on all of the activities outside of that.” She gave a scenario where a consumer is watching Amazon Prime, while also scrolling on Uber Eats to find dinner.

Data signals are happening at the same time, but they’re not unified. It’s up to businesses to connect the dots. To Benton, that’s more meaningful than what you’re posting and what you’re liking on social media.
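As a hypothetical sketch of what unifying such simultaneous signals could look like (the services, timestamps and the 30-minute window below are illustrative assumptions, not Streamlytics’ actual method), overlapping events from two sources can be aligned per user within a time window:

```python
# Hypothetical sketch: aligning near-simultaneous activity from two services per user.
import pandas as pd

streaming = pd.DataFrame({
    "user_id": [1, 2],
    "ts": pd.to_datetime(["2021-06-01 19:02", "2021-06-01 20:15"]),
    "title": ["Show A", "Show B"],
})
food = pd.DataFrame({
    "user_id": [1, 2],
    "ts": pd.to_datetime(["2021-06-01 19:10", "2021-06-01 22:40"]),
    "restaurant": ["Thai Palace", "Burger Spot"],
})

# Match each food order to the same user's streaming activity within 30 minutes.
unified = pd.merge_asof(
    food.sort_values("ts"), streaming.sort_values("ts"),
    on="ts", by="user_id", tolerance=pd.Timedelta("30min"), direction="backward",
)
print(unified)  # user 1's order pairs with a show; user 2's falls outside the window
```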

Source: What an Ethical Data Strategy Can Look Like | Inc.com



Data Or Processes: Where To Begin With Intelligent Automation

Over the past year, many clients I’ve spoken with have been looking for ways to make processes smarter, more adaptable and more resilient. According to our recent research, many companies see the combination of AI and automation — or intelligent automation — as key to achieving these goals.

Despite the promise of better operational performance with intelligent automation, a common question is where to begin: with the process itself or with the data that will power the process? The answer lies in identifying which outcome you’re trying to achieve. Getting the sequence wrong could counteract the very goal you’re pursuing.

The right starting point 

Here are two examples that distinguish when a process-led vs. data-led approach makes the most sense with intelligent automation:

How can we improve our operational efficiency?

Amid global uncertainty, supply chain disruptions and social distancing requirements, improving operational efficiency has become a priority for many businesses. The goal in this case is to improve speed and accuracy across the value chain, and achieve outcomes faster without cutting corners.

Adding data intelligence can significantly reduce errors, remove process hurdles and reveal where corrections are needed. But doing so requires a strong process automation backbone in order to shape when and how the data is applied. So in this case, a process-led approach is best.

For example, we’re working with a major insurance provider to improve customer lifecycle management. Typically, insurance customers who file a claim experience long decision times, a lack of visibility into decision making and repeated or disconnected requests for information submission.

Insurers can distinguish themselves by being fast, frictionless and responsive in how they handle claims. However, operating in a highly regulated industry and with overt risks around claims fraud, speed can never be a trade-off for accuracy and compliance.

A contributing factor to the insurer’s process challenges was the dependence on third-party systems and disparate data sources to make decisions. We helped the company implement an automated and fully integrated process for claims handling, which was then supported with AI and data modeling to segment customer profiles and personalize services.
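The sketch below is not the insurer’s actual system; it is a minimal, hypothetical illustration of a process-led flow in which fixed compliance and completeness checks form the backbone, and a stand-in risk score shows where data intelligence slots into the process.

```python
# Hypothetical sketch of a process-led claims flow: a fixed process backbone decides
# when and how a data/AI step (here, a stand-in risk score) is applied.
from dataclasses import dataclass

@dataclass
class Claim:
    claim_id: str
    policy_active: bool
    amount: float
    documents_complete: bool

def risk_score(claim: Claim) -> float:
    """Stand-in for an AI/data model; a real system would call a trained model."""
    return min(1.0, claim.amount / 50_000)

def handle_claim(claim: Claim) -> str:
    # Process backbone: compliance and completeness checks always run first.
    if not claim.policy_active:
        return "rejected: policy inactive"
    if not claim.documents_complete:
        return "pending: request missing documents"
    # Data intelligence applied at a defined point in the process.
    if risk_score(claim) < 0.3:
        return "auto-approved"
    return "routed to adjuster for review"

print(handle_claim(Claim("C-1", True, 4_000, True)))    # auto-approved
print(handle_claim(Claim("C-2", True, 45_000, True)))   # routed to adjuster for review
```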

The system has helped reduce the turnaround on claims capture by as much as 80% and shorten overall claims procedure times from 14 days to just two, all while maintaining the necessary high levels of accuracy and regulatory compliance. The insurer has also received positive customer feedback on the effectiveness and quality of services.

How can we be more agile in our product and service offerings?

Leading retailers have an impressive ability to recommend relevant products and anticipate customers’ next actions. Whether shoppers search for a needed item, browse relevant sites or interact with brands across different channels, digitally savvy retailers can connect the dots in real-time and make recommendations with a high degree of precision.

With so many factors and variables at play in dynamic online customer environments, companies need an agile approach that allows them to test the market, gather feedback and continuously improve in order to meet customer needs.

We’re working with an online fashion retailer to deliver this level of personalization. The company is well aware of the speed at which consumers’ tastes and styles change, and realized it needed to move swiftly to gain and keep customers’ attention.

Because it was vital to gain insights into consumer preferences, we took a data-led approach. We helped the retailer use existing data to gain a deeper consumer understanding. Using this insight, we then designed a process that segmented the brand’s customer base and enabled all interactions and product recommendations across channels like chatbots, email and social media to have the highest degree of relevance, timeliness and usefulness.
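The following is a minimal, hypothetical sketch of the data-led first step, not the retailer’s actual pipeline: invented behavioural features are clustered into segments that the downstream process can then act on. In practice the features would be scaled and chosen far more carefully.

```python
# Hypothetical sketch of a data-led step: cluster customers on simple behavioural
# features before designing the downstream process around the segments.
import numpy as np
from sklearn.cluster import KMeans

# Invented features: [orders per month, avg basket value, days since last visit]
customers = np.array([
    [8, 120.0, 2], [7, 95.0, 5], [1, 30.0, 60],
    [2, 25.0, 45], [5, 200.0, 10], [6, 180.0, 7],
])

segments = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(customers)
for features, segment in zip(customers, segments):
    # Segment labels would drive which recommendations and channels apply.
    print(segment, features)
```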

The combination of process improvements and data insights allowed for an integrated digital thread to run through all phases of the customer lifecycle, including product design and development, sales and after-sales. As a result, the retailer can now drive more relevant customer interactions and next-best offers, which in turn has improved customer mindshare, loyalty and revenue.

Accelerating the path to Intelligent Automation

To get the most out of intelligent automation, process and data need to work in harmony. Automated processes enable greater efficiency, while data enables better decision-making.

By coordinating these attributes — and having a clear outcome in mind — businesses can add intelligence to how and where they automate processes in a way that accelerates business outcomes while ensuring the quality of service is enhanced.

To learn more, visit the Intelligent Process Automation section of our website. View our latest webinar on Redesigning Work for the Post-Pandemic Age.

Chakradhar “Gooty” Agraharam is VP and Commercial Head of EMEA for Cognizant’s Digital Business Operations’ IPA Practice. In this role, he leads advisory, consulting, automation and analytics growth and delivery within the region, helping clients navigate and scale their automation and digital transformation journeys. He has over 25 years of consulting experience, working with clients on large systems integration, program management and transformation consulting programs across Asia, Europe and the Americas. Gooty holds an MBA from IIM Calcutta (one of India’s premier business schools) and executive management certifications from Rutgers and Henley Business School. He has won MCA industry awards for his contribution to the digital industry in the UK and is a member of various industry forums. He can be reached at Gooty.Agraharam@cognizant.com

Source: Data Or Processes: Where To Begin With Intelligent Automation



The Online Data That’s Being Deleted

For years, we were encouraged to store our data online. But it’s become increasingly clear that this won’t last forever – and now the race is on to stop our memories being deleted. How would you adjust your efforts to preserve digital data that belongs to you – emails, text messages, photos and documents – if you knew it would soon get wiped in a series of devastating electrical storms?

That’s the future catastrophe imagined by Susan Donovan, a high school teacher and science fiction writer based in New York. In her self-published story New York Hypogeographies, she describes a future in which vast amounts of data get deleted thanks to electrical disturbances in the year 2250.

In the years afterwards, archaeologists comb through ruined city apartments looking for artefacts from the past – the early 2000s.

“I was thinking about, ‘How would it change people going through an event where all of your digital stuff is just gone?’” she says.

In her story, the catastrophic data loss is not a world-ending event. But it is a hugely disruptive one. And it prompts a change in how people preserve important data. The storms bring a renaissance of printing, Donovan writes. But people are also left wondering how to store things that can’t be printed – augmented reality games, for instance.

Data has never been completely safe from obliteration. Just consider the burning of the Great Library of Alexandria – its very destruction is possibly the only reason you’ve heard about it. Digital data does not disappear in huge conflagrations, but rather with a single click or the silent, insidious degradation of storage media over time.


Today, we’re becoming accustomed to such deletions. There are lots of examples – the MySpace profiles that famously vanished in 2019. Or the many Google services that have shut down over the years. And then there are the online data storage companies that have offered to keep people’s data safe for them. Ironically, they have sometimes ended up earmarking it for deletion.

In other cases, these services actually keep running for long periods. But users might lose their login details. Or forget, even, that they had an account in the first place. They’ll probably never find the data stored there again, like they might find a shoebox of old letters in the attic.

Donovan’s interest in the ephemerality of digital data stems from her personal experiences. She studied maths at university and has copies of her handwritten notes. “There’s a point when I started taking digital notes and I can’t find them,” she says with a laugh.

She also had an online diary that she kept in the late 1990s. It’s completely lost now. And she worked on creative projects that no longer survive intact online. When she made them, it felt like she was creating something solid. A film that could be replayed endlessly, for instance. But now her understanding of what digital data is, and how long it might last, has changed.

“It was more like I produced a play, and you got to watch it, and then you just have your memories,” she says.

Thanks to the permanence of stone tablets, ancient books and messages carved into the very walls of buildings by our ancestors, there’s a bias in our culture towards assuming that the written word is by definition enduring. We quote remarks made centuries ago often because someone wrote them down – and kept the copies safe. But in digital form, the written word is little more than a projection of light onto a screen. As soon as the light goes out, it might not come back.

That said, some online data lasts a very long time. There are several examples of websites that are 30 years old or more. And now and again data hangs around even when we don’t want it to. Hence the emergence of the “right to be forgotten”. As tech writer and BBC web product manager Simon Pitt writes in the technology and science publication OneZero, “The reality is that things you want will disappear and things you don’t will be around for forever.”

Someone who aims to redress this balance is Jason Scott. He runs Archive Team, a group dedicated to preserving data, especially from websites that get shut down.

He has presided over dozens of efforts to capture and store information in the nick of time. But often it’s not possible to save everything. When MySpace accidentally deleted an estimated 50 million songs that were once held by the social network, an anonymous academic group gave Archive Team a collection of nearly half a million tracks they had previously backed up.


“There were bands for whom MySpace was their only presence,” says Scott. “This entire cultural library got wiped out.”

MySpace apologised for the data loss at the time.

“Once you delete the stuff it just disappears utterly,” says Scott, explaining the significance of proactive efforts to preserve data. He also argues that society has, to an extent, sleepwalked into this situation: “We did not expect the online world was going to be as important as it was.”

It should be clear by now that digital data is, at best, slippery. But how to curb its habit of disappearing?

Scott says he thinks there should be legal or regulatory requirements on companies that give people the option to retrieve their data, for a certain period – say, five years – after an online service is due to shut down. Within that time, anyone who wants their information could download it, or at least pay for a CD copy of it to be sent to them.

Not all of the data we accumulate each day will be worth preserving forever (Credit: Alamy)


A small number of companies have set a good example, he adds. Scott points to Glitch, a 2D online multiplayer game that was removed from the web in 2012, just over a year after it was launched. Its liquidation, in data terms, was “basically perfect”, says Scott. Others, too, have praised the fact that the game’s developers acknowledged players’ frustrations and gave them ample opportunity to download their data from the company’s servers before they were switched off.

Some of the game’s code was even made public and multiple remakes of Glitch, developed by fans, have emerged in the years since. Should this approach be mandatory, though?

“We should have real-time rights, for example to ask for data deletion, data download, or data portability – to take the data from one source to another,” argues Teemu Ropponen at MyData.

He and his colleagues are working on systems designed to make it easier for people to transfer important data about themselves, such as their family history or CV, between services or institutions.

Ropponen argues that there are efforts within the European Union to enshrine this sort of data portability in law. But there is a long way to go.

Even if the technology and regulations were in place, that doesn’t mean that preserving data would become easy overnight. We have so much of it that it is actually quite hard to fathom.


Around 150 years ago, making a photograph of a family member was a luxury available only to the wealthiest in society. For decades, this more or less remained the case. Even when the technology became more broadly available, it wasn’t cheap to take lots of snaps at once. Photographs became treasured items as a result. Today, smartphone cameras mean it feels like second nature to take literally hundreds or even thousands of photographs every year.

“What are my children or any potential grandchildren […] going to do with the 400 pictures of my pet that are on my phone?” says Paul Royster at the University of Nebraska-Lincoln. “What’s that going to mean to them?”

Royster argues that saving all of our data won’t necessarily be very useful to our descendants. And he disagrees with Scott and Ropponen that laws are the answer. Governments and legislators are often behind the curve on technology issues and sometimes don’t understand the systems they intend to regulate, he says.

Instead, people ought to get into the habit of selecting and preserving the data that is most important to them. “We should set aside one day of the year when we all go through our data – data preservation day,” he says.

Unlike old letters, which are often rediscovered years after being forgotten, online memories are unlikely to last unless you take active steps to preserve them (Credit: Alamy)

Scott also suggests that we should think about what we really want to keep, just in case it gets deleted. “Nobody is thinking of it as the stuff that we have to preserve at all costs, it’s just more data,” he says. “If it’s written, I would print it out.”

There is another option, though. Miia Kosonen at South-Eastern Finland University of Applied Sciences and her colleagues have been working on solutions for storing digital data in archives and national institutions.

“We converted more than 200,000 old emails from former chief editors of Helsingin Sanomat – the largest newspaper in Finland,” she says, referring to a pilot project by Digitalia, a digital data preservation project. The converted emails were later stored in a digital archive.
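As a generic illustration of that kind of conversion (not the Digitalia project’s actual tooling; the file names are invented), old mailboxes can be read with standard libraries and rewritten as simple, preservation-friendly records:

```python
# Generic sketch: reading an mbox mailbox and writing each message
# to a simple JSON record suitable for long-term archiving.
import json
import mailbox

archive = []
for message in mailbox.mbox("old_editor_mail.mbox"):   # hypothetical file name
    payload = message.get_payload(decode=True)          # None for multipart messages
    archive.append({
        "from": message.get("From", ""),
        "to": message.get("To", ""),
        "date": message.get("Date", ""),
        "subject": message.get("Subject", ""),
        "body": payload.decode("utf-8", errors="replace") if payload else "",
    })

with open("converted_mail.json", "w", encoding="utf-8") as f:
    json.dump(archive, f, ensure_ascii=False, indent=2)
```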

The US Library of Congress famously keeps a digital archive of tweets, though it has stopped recording every single public tweet and is now preserving them “on a very selective basis” instead.

Could public institutions do some digital data curation and preservation on our behalf? If so, we could potentially submit information to them such as family history and photographs for storage and subsequent access in the future.

Kosonen says that such projects would naturally require funding, probably from the public. Institutions would also be more inclined to retain information that is considered of significant cultural or historical interest.

At the heart of this discussion lies a simple fact: it’s hard for us to know – here in the present – what we, or our descendants, will actually value in the future.

Archival or regulatory interventions could go some way to addressing the ephemerality of data. But that ephemerality is something we will probably always live with, to some extent. Digital data is just too convenient for everyday purposes and there’s little rationale for trying to store everything.

The question has become, at best, one of personal motivation. Today, we decide either to make or not make the effort to save things. Really save them. Not just on the nearest hard-drive or cloud storage device. But also to backup drives or more permanent media, with instructions for how to maintain the storage over time.

This might sound like an exceptionally dry endeavour, but it need not be. A cultural movement might be all it takes to spur us on.

Many audiophiles insist on buying vinyl in an age of music streaming. Booklovers still make the effort to acquire physical copies of their favourite author’s new work. Perhaps we need an analogue-cool movement for preservationists. People who devote themselves to making physical photo albums again. Who go out of their way to write handwritten notes or letters.

These things might just end up being far easier to keep than anything digital, which will likely always require you to trust a system you haven’t built, or a service you don’t own. As Donovan says, “If something is precious, it’s dangerous, I think, to leave it in someone else’s hands.”

By Chris Baraniuk

Source: The online data that’s being deleted – BBC Future


Best Cloud Storage Services of 2021

IDrive cloud storage

Backblaze cloud storage

pCloud cloud storage

IceDrive cloud storage

NordLocker cloud storage

Microsoft OneDrive cloud storage

Google Drive cloud storage

Most Popular Cloud Storage Providers

Comparison of the Best Free Online Cloud Storage

#1) pCloud
#2) Sync.com
#3) Livedrive
#4) Icedrive
#5) Polarbackup
#6) Zoolz BigMIND
#7) IBackup
#8) IDrive
#9) Amazon Cloud Drive
#10) Dropbox
#11) Google Drive
#12) Microsoft OneDrive
#13) Box
#14) iCloud
#15) OpenDrive
#16) Tresorit
#17) Amazon S3

Why Your Workforce Needs Data Literacy

Organizations that rely on data analysis to make decisions have a significant competitive advantage in overcoming challenges and planning for the future. And yet data access and the skills required to understand the data are, in many organizations, restricted to business intelligence teams and IT specialists.

As enterprises tap into the full potential of their data, leaders must work toward empowering employees to use data in their jobs and to increase performance—individually and as part of a team. This puts data at the heart of decision making across departments and roles and doesn’t restrict innovation to just one function. This strategic choice can foster a data culture—transcending individuals and teams while fundamentally changing an organization’s operations, mindset and identity around data.

Organizations can also instill a data culture by promoting data literacy—because in order for employees to participate in a data culture, they first need to speak the language of data. More than technical proficiency with software, data literacy encompasses the critical thinking skills required to interpret data and communicate its significance to others.

Many employees either don’t feel comfortable using data or aren’t completely prepared to use it. To best close this skills gap and encourage everyone to contribute to a data culture, organizations need executives who use and champion data, training and community programs that accommodate many learning needs and styles, benchmarks for measuring progress and support systems that encourage continuous personal development and growth.

Here’s how organizations can improve their data literacy:

1. Lead

Employees take direction from leaders who signal their commitment to data literacy, from sharing data insights at meetings to participating in training alongside staff. “It becomes very inspiring when you can show your organization the data and insights that you found and what you did with that information,” said Jennifer Day, vice president of customer strategy and programs at Tableau.

“It takes that leadership at the top to make a commitment to data-driven decision making in order to really instill that across the entire organization.” To develop critical thinking around data, executives might ask questions about how data supported decisions, or they may demonstrate how they used data in their strategic actions. And publicizing success stories and use cases through internal communications draws focus to how different departments use data.

2. Train

Self-Service Learning

This approach is “for the people who just need to solve a problem—get in and get out,” said Ravi Mistry, one of about three dozen Tableau Zen Masters, professionals selected by Tableau who are masters of the Tableau end-to-end analytics platform and now teach others how to use it.

Reference guides for digital processes and tutorials for specific tasks enable people to bridge minor gaps in knowledge, minimizing frustration and the need to interrupt someone else’s work to ask for help. In addition, forums moderated by data specialists can become indispensable roundups of solutions. Keeping it all on a single learning platform, or perhaps your company’s intranet, makes it easy for employees to look up what they need.

3. Measure

Success Indicators

Performance metrics are critical indicators of how well a data literacy initiative is working. Identify which metrics need to improve as data use increases and assess progress at regular intervals to know where to tweak your training program. Having the right learning targets will improve data literacy in areas that boost business performance.

And quantifying the business value generated by data literacy programs can encourage buy-in from executives. Ultimately, collecting metrics, use cases and testimonials can help the organization show a strong correlation between higher data literacy and better business outcomes.
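As a small, hypothetical illustration of quantifying that relationship (the figures are invented), a data-literacy measure such as training completion can be correlated with an operational metric tracked over the same periods:

```python
# Illustrative sketch with invented numbers: relating a data-literacy measure
# (share of staff trained) to a business metric tracked over the same periods.
from statistics import correlation  # available in Python 3.10+

training_completion = [0.20, 0.35, 0.50, 0.62, 0.75, 0.90]   # share of staff trained
report_turnaround_days = [12, 11, 9, 8, 6, 5]                 # time to produce reports

r = correlation(training_completion, report_turnaround_days)
print(f"correlation: {r:.2f}")  # a strong negative value: faster reporting as literacy rises
```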

4. Support

Knowledge Curators

Enlisting data specialists like analysts to showcase the benefits of using data helps make data more accessible to novices. Mistry, the Tableau Zen Master, referred to analysts who function in this capacity as “knowledge curators” guiding their peers on how to successfully use data in their roles. “The objective is to make sure everyone has a base level of analysis that they can do,” he said.

This is a shift from traditional business intelligence models in which analysts and IT professionals collect and analyze data for the entire company. Internal data experts can also offer office hours to help employees complete specific projects, troubleshoot problems and brainstorm different ways to look at data.

What’s most effective depends on the company and its workforce: The right data literacy program will implement training, software tools and digital processes that motivate employees to continuously learn and refine their skills, while encouraging data-driven thinking as a core practice.

For more information on how you can improve data literacy throughout your organization, read these resources from Tableau:

The Data Culture Playbook: Start Becoming A Data-Driven Organization

Forrester Consulting Study: Bridging The Great Data Literacy Gap

Data Literacy For All: A Free Self-Guided Course Covering Foundational Concepts

By: Natasha Stokes

Source: Why Your Workforce Needs Data Literacy


Critics:

As data collection and data sharing become routine and data analysis and big data become common ideas in the news, business, government and society, it becomes more and more important for students, citizens, and readers to have some data literacy. The concept is associated with data science, which is concerned with data analysis, usually through automated means, and the interpretation and application of the results.

Data literacy is distinguished from statistical literacy since it involves understanding what data mean, including the ability to read graphs and charts as well as draw conclusions from data. Statistical literacy, on the other hand, refers to the “ability to read and interpret summary statistics in everyday media” such as graphs, tables, statements, surveys, and studies.

As guides for finding and using information, librarians lead workshops on data literacy for students and researchers, and also work on developing their own data literacy skills. A set of core competencies and contents that can be used as an adaptable common framework of reference in library instructional programs across institutions and disciplines has been proposed.

Resources created by librarians include MIT‘s Data Management and Publishing tutorial, the EDINA Research Data Management Training (MANTRA), the University of Edinburgh’s Data Library and the University of Minnesota libraries’ Data Management Course for Structural Engineers.

