Meaningful data are extremely valuable for small businesses, says Streamlytics founder and CEO Angela Benton, but it’s your responsibility to find and use that information ethically. We’re currently in a new era of data collection, and that’s for the better.
That’s according to Angela Benton, the founder and CEO of Streamlytics, a company that collects first-party consumer data transparently and aims to disrupt the current model of third-party data mining via cookies and other methods that raise privacy and ethics concerns. Most recently, she was named one of Fast Company’s Most Creative People for helping consumers learn what major companies know about them, and for paying them for the data they create while using streaming services like Netflix or Spotify.
In the latest Inc. Real Talk streaming event, Benton explains that she founded the company with minorities in mind, particularly the Black and Latinx communities, because of the disproportionate way they’ve been affected by data and privacy issues. For example, she points to the recent controversy over facial recognition data being sold to police; facial recognition has a much higher error rate on Black and Asian male faces, which could lead to wrongful arrests.
“That becomes extremely important when you think of what artificial intelligence is used for in our day-to-day world,” she says, noting that AI is used for everyday interactions like loan applications, car applications, mortgages, and credit cards. Using her company’s methods, Benton says, clients can secure ethically sourced data, so that algorithms won’t negatively affect communities that have historically suffered from discriminatory practices.
Here are a few suggestions from Benton for finding data ethically without relying on third-party cookies.
Combine your own data sets.
“How [Streamlytics] gets data is very old school,” Benton says. Instead of relying on tech to combine data points, she says, you can manually compare data you already own and make assumptions using your best judgment. You may have data from a Shopify website, for example, about the demographics of your customers; you can then go to a specific advertiser, such as Hulu, to target people who fit that profile.
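As an illustration of this kind of manual combination, here is a minimal Python sketch. The field names and records are invented, standing in for a Shopify customer export; it simply takes the most common value of each attribute as a rough target profile:

```python
from collections import Counter

# Hypothetical customer records, standing in for a Shopify export.
customers = [
    {"age_band": "25-34", "region": "US-CA", "interest": "fitness"},
    {"age_band": "25-34", "region": "US-NY", "interest": "fitness"},
    {"age_band": "35-44", "region": "US-CA", "interest": "cooking"},
    {"age_band": "25-34", "region": "US-CA", "interest": "fitness"},
]

def dominant_profile(records, fields=("age_band", "region", "interest")):
    """Pick the most common value of each field to sketch a target profile."""
    return {f: Counter(r[f] for r in records).most_common(1)[0][0] for f in fields}

print(dominant_profile(customers))
# {'age_band': '25-34', 'region': 'US-CA', 'interest': 'fitness'}
```

The resulting profile is exactly the kind of human-readable summary you could hand to an advertising platform, no third-party cookies required.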
Use your data to discover new products.
You can also look to your data to find common searches or overlapping interests to get ideas for new products, Benton says. Often, she says, she receives data requests from small business owners to discover ideas that aren’t currently on the market, for example, a vegan searching for a vitamin.
This combination method surprised Benton when she presented clients with data. “I thought it was going to be more focused on just, like, ‘How can I make more money?’” she says. “But we are hearing from folks that they want access to data to use it in more creative ways.”
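The gap-finding idea Benton describes can be sketched in a few lines of Python; the search terms and catalog here are invented for illustration:

```python
from collections import Counter

# Hypothetical search logs and product catalog.
searches = ["vegan vitamin", "vegan vitamin", "protein bar",
            "vegan vitamin", "gluten-free pasta"]
catalog = {"protein bar", "gluten-free pasta"}

def product_gaps(searches, catalog):
    """Rank search terms with no matching product: candidate product ideas."""
    return [(term, n) for term, n in Counter(searches).most_common()
            if term not in catalog]

print(product_gaps(searches, catalog))  # [('vegan vitamin', 3)]
```

Frequently repeated searches that match nothing in your catalog are, in effect, customers describing a product you don’t make yet.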
Don’t take social media data at face value.
Benton and her company purposely do not source social media data because she thinks the data leave too much out of the full picture. You may get a customer’s age and “likes” from a social media page, but that doesn’t tell you what they’re searching for or what their habits are.
“That’s not, to me, meaningful data. That’s not where the real value lies,” she says. “We’re not focused on what people are doing on social media; we’re focused on all of the activities outside of that.” She gives a scenario in which a consumer is watching Amazon Prime while also scrolling through Uber Eats to find dinner.
Data signals are happening at the same time, but they’re not unified. It’s up to businesses to connect the dots. To Benton, that’s more meaningful than what you’re posting and what you’re liking on social media.
Over the past year, many clients I’ve spoken with have been looking for ways to make processes smarter, more adaptable and more resilient. According to our recent research, many companies see the combination of AI and automation — or intelligent automation — as key to achieving these goals.
Despite the promise of better operational performance with intelligent automation, a common question is where to begin: with the process itself or with the data that will power the process? The answer lies in identifying which outcome you’re trying to achieve. Getting the sequence wrong could counteract the very goal you’re pursuing.
The right starting point
Here are two examples that distinguish when a process-led vs. data-led approach makes the most sense with intelligent automation:
How can we improve our operational efficiency?
Amid global uncertainty, supply chain disruptions and social distancing requirements, improving operational efficiency has become a priority for many businesses. The goal in this case is to improve speed and accuracy across the value chain, and achieve outcomes faster without cutting corners.
Adding data intelligence can significantly reduce errors, remove process hurdles and reveal where corrections are needed. But doing so requires a strong process automation backbone in order to shape when and how the data is applied. So in this case, a process-led approach is best.
For example, we’re working with a major insurance provider to improve customer lifecycle management. Typically, insurance customers who file a claim experience long decision times, a lack of visibility into decision making and repeated or disconnected requests for information submission.
Insurers can distinguish themselves by being fast, frictionless and responsive in how they handle claims. However, operating in a highly regulated industry and with overt risks around claims fraud, speed can never be a trade-off for accuracy and compliance.
A contributing factor to the insurer’s process challenges was the dependence on third-party systems and disparate data sources to make decisions. We helped the company implement an automated and fully integrated process for claims handling, which was then supported with AI and data modeling to segment customer profiles and personalize services.
The system has helped reduce the turnaround on claims capture by as much as 80% and shorten overall claims procedure times from 14 days to just two, all while maintaining the necessary high levels of accuracy and regulatory compliance. The insurer has also received positive customer feedback on the effectiveness and quality of services.
How can we be more agile in our product and service offerings?
Leading retailers have an impressive ability to recommend relevant products and anticipate customers’ next actions. Whether shoppers search for a needed item, browse relevant sites or interact with brands across different channels, digitally savvy retailers can connect the dots in real time and make recommendations with a high degree of precision.
With so many factors and variables at play in dynamic online customer environments, companies need an agile approach that allows them to test the market, gather feedback and continuously improve in order to meet customer needs.
We’re working with an online fashion retailer to deliver this level of personalization. The company is well aware of the speed at which consumers’ tastes and styles change, and realized it needed to move swiftly to gain and keep customers’ attention.
Because it was vital to gain insights into consumer preferences, we took a data-led approach. We helped the retailer use existing data to gain a deeper consumer understanding. Using this insight, we then designed a process that segmented the brand’s customer base and enabled all interactions and product recommendations across channels like chatbots, email and social media to have the highest degree of relevance, timeliness and usefulness.
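A much-simplified, rule-based version of that segmentation might look like the following Python sketch. The thresholds, segment names and offers are invented for illustration, not the retailer’s actual model:

```python
def segment(customer):
    """Assign a customer to a coarse segment from two behavioural signals."""
    if customer["orders_last_90d"] >= 3:
        return "loyal"
    if customer["days_since_last_visit"] > 60:
        return "lapsing"
    return "browsing"

# One next-best action per segment, reused across chatbot, email and social.
NEXT_BEST_OFFER = {
    "loyal": "early access to the new collection",
    "lapsing": "win-back discount via email",
    "browsing": "chatbot styling suggestions",
}

shopper = {"orders_last_90d": 0, "days_since_last_visit": 90}
print(NEXT_BEST_OFFER[segment(shopper)])  # win-back discount via email
```

The point of the data-led sequence is that the segment definitions come out of analysing existing customer data first; the process (which channel delivers which offer, and when) is designed around them afterwards.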
The combination of process improvements and data insights allowed for an integrated digital thread to run through all phases of the customer lifecycle, including product design and development, sales and after-sales. As a result, the retailer can now drive more relevant customer interactions and next-best offers, which in turn has improved customer mindshare, loyalty and revenue.
Accelerating the path to intelligent automation
To get the most out of intelligent automation, process and data need to work in harmony. Automated processes enable greater efficiency, while data enables better decision-making.
By coordinating these attributes, and keeping a clear outcome in mind, businesses can add intelligence to how and where they automate processes in a way that accelerates business outcomes while enhancing quality of service.
Chakradhar “Gooty” Agraharam is VP and Commercial Head of EMEA for Cognizant’s Digital Business Operations’ IPA Practice. In this role, he leads advisory, consulting, automation and analytics growth and delivery within the region, helping clients navigate and scale their automation and digital transformation journeys. He has over 25 years of consulting experience, working with clients on large systems integration, program management and transformation consulting programs across Asia, Europe and the Americas. Gooty holds an MBA from IIM Calcutta and executive management certifications from Rutgers and Henley Business School. He has won industry awards with the MCA for his contribution to the digital industry in the UK and is a member of various industry forums. He can be reached at Gooty.Agraharam@cognizant.com.
For years, we were encouraged to store our data online. But it’s become increasingly clear that this won’t last forever – and now the race is on to stop our memories being deleted. How would you adjust your efforts to preserve digital data that belongs to you – emails, text messages, photos and documents – if you knew it would soon get wiped in a series of devastating electrical storms?
That’s the future catastrophe imagined by Susan Donovan, a high school teacher and science fiction writer based in New York. In her self-published story New York Hypogeographies, she describes a future in which vast amounts of data get deleted thanks to electrical disturbances in the year 2250.
In the years afterwards, archaeologists comb through ruined city apartments looking for artefacts from the past – the early 2000s.
“I was thinking about, ‘How would it change people going through an event where all of your digital stuff is just gone?’” she says.
In her story, the catastrophic data loss is not a world-ending event. But it is a hugely disruptive one. And it prompts a change in how people preserve important data. The storms bring a renaissance of printing, Donovan writes. But people are also left wondering how to store things that can’t be printed – augmented reality games, for instance.
Data has never been completely safe from obliteration. Just consider the burning of the Great Library of Alexandria – its very destruction is possibly the only reason you’ve heard about it. Digital data does not disappear in huge conflagrations, but rather with a single click or the silent, insidious degradation of storage media over time.
In other cases, these services actually keep running for long periods. But users might lose their login details. Or forget, even, that they had an account in the first place. They’ll probably never find the data stored there again, like they might find a shoebox of old letters in the attic.
Donovan’s interest in the ephemerality of digital data stems from her personal experiences. She studied maths at university and has copies of her handwritten notes. “There’s a point when I started taking digital notes and I can’t find them,” she says with a laugh.
She also had an online diary that she kept in the late 1990s. It’s completely lost now. And she worked on creative projects that no longer survive intact online. When she made them, it felt like she was creating something solid. A film that could be replayed endlessly, for instance. But now her understanding of what digital data is, and how long it might last, has changed.
“It was more like I produced a play, and you got to watch it, and then you just have your memories,” she says.
Thanks to the permanence of stone tablets, ancient books and messages carved into the very walls of buildings by our ancestors, there’s a bias in our culture towards assuming that the written word is by definition enduring. We quote remarks made centuries ago often because someone wrote them down – and kept the copies safe. But in digital form, the written word is little more than a projection of light onto a screen. As soon as the light goes out, it might not come back.
That said, some online data lasts a very long time. There are several examples of websites that are 30 years old or more. And now and again data hangs around even when we don’t want it to. Hence the emergence of the “right to be forgotten”. As tech writer and BBC web product manager Simon Pitt writes in the technology and science publication OneZero, “The reality is that things you want will disappear and things you don’t will be around for forever.”
Someone who aims to redress this balance is Jason Scott. He runs Archive Team, a group dedicated to preserving data, especially from websites that get shut down.
He has presided over dozens of efforts to capture and store information in the nick of time. But often it’s not possible to save everything. When MySpace accidentally deleted an estimated 50 million songs that were once held by the social network, an anonymous academic group gave Archive Team a collection of nearly half a million tracks they had previously backed up.
“What are my children or any potential grandchildren […] going to do with the 400 pictures of my pet that are on my phone?” – Paul Royster
“There were bands for whom MySpace was their only presence,” says Scott. “This entire cultural library got wiped out.”
“Once you delete the stuff it just disappears utterly,” says Scott, explaining the significance of proactive efforts to preserve data. He also argues that society has, to an extent, sleepwalked into this situation: “We did not expect the online world was going to be as important as it was.”
It should be clear by now that digital data is, at best, slippery. But how to curb its habit of disappearing?
Scott says he thinks there should be legal or regulatory requirements on companies that give people the option to retrieve their data, for a certain period – say, five years – after an online service is due to shut down. Within that time, anyone who wants their information could download it, or at least pay for a CD copy of it to be sent to them.
Not all of the data we accumulate each day will be worth preserving forever (Credit: Alamy)
A small number of companies have set a good example, he adds. Scott points to Glitch, a 2D online multiplayer game that was removed from the web in 2012, just over a year after it was launched. Its liquidation, in data terms, was “basically perfect”, says Scott. Others, too, have praised the fact that the game’s developers acknowledged players’ frustrations and gave them ample opportunity to download their data from the company’s servers before they were switched off.
Some of the game’s code was even made public and multiple remakes of Glitch, developed by fans, have emerged in the years since. Should this approach be mandatory, though?
“We should have real-time rights, for example to ask for data deletion, data download, or data portability – to take the data from one source to another,” argues Teemu Ropponen at MyData.
He and his colleagues are working on systems designed to make it easier for people to transfer important data about themselves, such as their family history or CV, between services or institutions.
Ropponen argues that there are efforts within the European Union to enshrine this sort of data portability in law. But there is a long way to go.
Even if the technology and regulations were in place, that doesn’t mean that preserving data would become easy overnight. We have so much of it that it is actually quite hard to fathom.
“We should set aside one day of the year when we all go through our data – data preservation day,” – Paul Royster
Around 150 years ago, making a photograph of a family member was a luxury available only to the wealthiest in society. For decades, this more or less remained the case. Even when the technology became more broadly available, it wasn’t cheap to take lots of snaps at once. Photographs became treasured items as a result. Today, smartphone cameras mean it feels like second nature to take literally hundreds or even thousands of photographs every year.
“What are my children or any potential grandchildren […] going to do with the 400 pictures of my pet that are on my phone?” says Paul Royster at the University of Nebraska-Lincoln. “What’s that going to mean to them?”
Royster argues that saving all of our data won’t necessarily be very useful to our descendants. And he disagrees with Scott and Ropponen that laws are the answer. Governments and legislators are often behind the curve on technology issues and sometimes don’t understand the systems they intend to regulate, he says.
Instead, people ought to get into the habit of selecting and preserving the data that is most important to them. “We should set aside one day of the year when we all go through our data – data preservation day,” he says.
Unlike old letters, which are often rediscovered years after being forgotten, online memories are unlikely to last unless you take active steps to preserve them (Credit: Alamy)

Scott also suggests that we should think about what we really want to keep, just in case it gets deleted. “Nobody is thinking of it as the stuff that we have to preserve at all costs, it’s just more data,” he says. “If it’s written, I would print it out.”
There is another option, though. Miia Kosonen at South-Eastern Finland University of Applied Sciences and her colleagues have been working on solutions for storing digital data in archives and national institutions.
“We converted more than 200,000 old emails from former chief editors of Helsingin Sanomat – the largest newspaper in Finland,” she says, referring to a pilot project by Digitalia, a digital data preservation project. The converted emails were later stored in a digital archive.
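Details of Digitalia’s pipeline aren’t given here, but the basic conversion step, reading an mbox mailbox and writing each message out as a standalone file, can be sketched in Python with the standard library’s mailbox module:

```python
import mailbox
import pathlib

def export_mbox(mbox_path, out_dir):
    """Write every message in an mbox file as a numbered .eml file; return the count."""
    out = pathlib.Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    count = 0
    for msg in mailbox.mbox(mbox_path):
        (out / f"{count:06d}.eml").write_bytes(msg.as_bytes())
        count += 1
    return count
```

Each .eml file is a self-contained message in the standard internet mail format, which ordinary mail clients and archival tools can open, making it a friendlier long-term format than a proprietary mail store.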
The US Library of Congress famously keeps a digital archive of tweets, though it has stopped recording every single public tweet and is now preserving them “on a very selective basis” instead.
Could public institutions do some digital data curation and preservation on our behalf? If so, we could potentially submit information to them such as family history and photographs for storage and subsequent access in the future.
Kosonen says that such projects would naturally require funding, probably from the public. Institutions would also be more inclined to retain information that is considered of significant cultural or historical interest.
At the heart of this discussion lies a simple fact: it’s hard for us to know – here in the present – what we, or our descendants, will actually value in the future.
Archival or regulatory interventions could go some way to addressing the ephemerality of data. But that ephemerality is something we will probably always live with, to some extent. Digital data is just too convenient for everyday purposes and there’s little rationale for trying to store everything.
The question has become, at best, one of personal motivation. Today, we decide either to make or not make the effort to save things. Really save them. Not just on the nearest hard-drive or cloud storage device. But also to backup drives or more permanent media, with instructions for how to maintain the storage over time.
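Concretely, “really saving” might start with a checksum manifest written alongside the backup, so that years later you (or your descendants) can verify the copies haven’t silently rotted. A minimal sketch in Python, assuming a folder of files to preserve:

```python
import hashlib
import pathlib

def make_manifest(folder):
    """Map each file's relative path to its SHA-256 hash, for future verification."""
    folder = pathlib.Path(folder)
    return {
        str(path.relative_to(folder)): hashlib.sha256(path.read_bytes()).hexdigest()
        for path in sorted(folder.rglob("*")) if path.is_file()
    }

def verify(folder, manifest):
    """Return the relative paths whose current hash no longer matches the manifest."""
    current = make_manifest(folder)
    return sorted(p for p, h in manifest.items() if current.get(p) != h)
```

Run make_manifest once before copying to backup media, store the manifest with the copies, and re-run verify against each copy periodically; any path it returns has been corrupted or lost.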
This might sound like an exceptionally dry endeavour, but it need not be. A cultural movement might be all it takes to spur us on.
Many audiophiles insist on buying vinyl in an age of music streaming. Booklovers still make the effort to acquire physical copies of their favourite author’s new work. Perhaps we need an analogue-cool movement for preservationists. People who devote themselves to making physical photo albums again. Who go out of their way to write handwritten notes or letters.
These things might just end up being far easier to keep than anything digital, which will likely always require you to trust a system you haven’t built, or a service you don’t own. As Donovan says, “If something is precious, it’s dangerous, I think, to leave it in someone else’s hands.”
Organizations that rely on data analysis to make decisions have a significant competitive advantage in overcoming challenges and planning for the future. And yet data access and the skills required to understand the data are, in many organizations, restricted to business intelligence teams and IT specialists.
As enterprises tap into the full potential of their data, leaders must work toward empowering employees to use data in their jobs and to increase performance—individually and as part of a team. This puts data at the heart of decision making across departments and roles and doesn’t restrict innovation to just one function. This strategic choice can foster a data culture—transcending individuals and teams while fundamentally changing an organization’s operations, mindset and identity around data.
Organizations can also instill a data culture by promoting data literacy—because in order for employees to participate in a data culture, they first need to speak the language of data. More than technical proficiency with software, data literacy encompasses the critical thinking skills required to interpret data and communicate its significance to others.
Many employees either don’t feel comfortable using data or aren’t fully prepared to use it. To close this skills gap and encourage everyone to contribute to a data culture, organizations need executives who use and champion data; training and community programs that accommodate many learning needs and styles; benchmarks for measuring progress; and support systems that encourage continuous personal development and growth.
Here’s how organizations can improve their data literacy:
Employees take direction from leaders who signal their commitment to data literacy, from sharing data insights at meetings to participating in training alongside staff. “It becomes very inspiring when you can show your organization the data and insights that you found and what you did with that information,” said Jennifer Day, vice president of customer strategy and programs at Tableau.
“It takes that leadership at the top to make a commitment to data-driven decision making in order to really instill that across the entire organization.” To develop critical thinking around data, executives might ask questions about how data supported decisions, or they may demonstrate how they used data in their strategic actions. And publicizing success stories and use cases through internal communications draws focus to how different departments use data.
Reference guides for digital processes and tutorials for specific tasks enable people to bridge minor gaps in knowledge, minimizing frustration and the need to interrupt someone else’s work to ask for help. This self-service approach is “for the people who just need to solve a problem—get in and get out,” said Ravi Mistry, one of about three dozen Tableau Zen Masters, professionals selected by Tableau who have mastered its end-to-end analytics platform and now teach others how to use it. In addition, forums moderated by data specialists can become indispensable roundups of solutions. Keeping it all on a single learning platform, or perhaps your company’s intranet, makes it easy for employees to look up what they need.
Performance metrics are critical indicators of how well a data literacy initiative is working. Identify which metrics need to improve as data use increases and assess progress at regular intervals to know where to tweak your training program. Having the right learning targets will improve data literacy in areas that boost business performance.
And quantifying the business value generated by data literacy programs can encourage buy-in from executives. Ultimately, collecting metrics, use cases and testimonials can help the organization show a strong correlation between higher data literacy and better business outcomes.
Enlisting data specialists like analysts to showcase the benefits of using data helps make data more accessible to novices. Mistry, the Tableau Zen Master, referred to analysts who function in this capacity as “knowledge curators” guiding their peers on how to successfully use data in their roles. “The objective is to make sure everyone has a base level of analysis that they can do,” he said.
This is a shift from traditional business intelligence models in which analysts and IT professionals collect and analyze data for the entire company. Internal data experts can also offer office hours to help employees complete specific projects, troubleshoot problems and brainstorm different ways to look at data.
What’s most effective depends on the company and its workforce: The right data literacy program will implement training, software tools and digital processes that motivate employees to continuously learn and refine their skills, while encouraging data-driven thinking as a core practice.
As data collection and sharing become routine, and as data analysis and big data become common ideas in news, business, government and society, it becomes increasingly important for students, citizens and readers to have some data literacy. The concept is associated with data science, which is concerned with data analysis, usually through automated means, and with the interpretation and application of the results.
Data literacy is distinguished from statistical literacy since it involves understanding what data mean, including the ability to read graphs and charts as well as draw conclusions from data. Statistical literacy, on the other hand, refers to the “ability to read and interpret summary statistics in everyday media” such as graphs, tables, statements, surveys, and studies.
As guides for finding and using information, librarians lead workshops on data literacy for students and researchers, and also work on developing their own data literacy skills. A set of core competencies and contents that can be used as an adaptable common framework of reference in library instructional programs across institutions and disciplines has been proposed.
Resources created by librarians include MIT’s Data Management and Publishing tutorial, the EDINA Research Data Management Training (MANTRA), the University of Edinburgh’s Data Library and the University of Minnesota libraries’ Data Management Course for Structural Engineers.
Did you know the average app includes six third-party trackers that collect and share your online data?
The war over data privacy continues to heat up in the tech world. Two of the world’s biggest technology companies, Apple and Facebook, are taking very different approaches to user privacy, and their decisions are having ripple effects throughout the tech community.
Apple’s New Transparency Requirement
Apple’s new App Tracking Transparency feature, which will automatically be enabled on iOS in early spring, forces app developers to explicitly ask for permission from users to track and share information for cross-platform ad targeting.
With App Tracking Transparency, Apple requires every iOS app to ask you upfront if they’re allowed to share your information with data brokers and other networks, so they can serve mobile ads to you and measure your response to those ads.
After this change is in place, you’ll see a notification the first time you launch any new app on your phone, explaining what the proposed third-party tracker is used for, and whether you want to approve or reject the tracking and sharing of your data.
Facebook CEO Mark Zuckerberg criticized Apple’s new changes publicly, saying they were specifically put in place to put Facebook at a disadvantage. Zuckerberg says Apple is Facebook’s biggest competitor.
But while Apple is adding more privacy features to give its users more control, Facebook is moving in the other direction.
The Thin Line Between WhatsApp and Facebook
Right now, WhatsApp has some features that allow users to communicate with businesses through WhatsApp chat—and some of those businesses are hosted by Facebook. According to the new policy, messages between the prospect or customer and the business they’re communicating with could be collected and shared with the larger Facebook ecosystem.
That means Facebook and its advertisers could potentially use customer service chats or transaction receipts for marketing and advertising purposes.
The content of users’ individual chats will continue to be encrypted, so they cannot be seen by the company. The data within those chats will not be harvested or shared with third parties. Nonetheless, Facebook faced a huge backlash against the new rules after the announcement, prompting them to publish an FAQ page to clarify the policy and reassure upset WhatsApp users.
For many WhatsApp users, this announcement was a distinct reminder that WhatsApp users are now Facebook customers, and over time, Facebook will be moving information between the two platforms more often, in the name of “interoperability.”
Transparency: Winning Hearts and Minds in the Tech World
Apple and Facebook often take different approaches to user privacy. More and more, Apple seems to be taking steps to be more transparent and to protect user data, including regulating app developers in their ecosystem.
Meanwhile, Facebook has trouble gaining the trust of many of its users, and the common assumption is that the company prioritizes the needs of its advertisers over the privacy of its users.
Clearly, the market is sensitive to privacy issues, and users want companies to be more transparent, as evidenced by the backlash to Facebook’s recent WhatsApp announcement.
In the long run, I believe the companies that are more transparent with their users and take a stand to protect data privacy will be the ones who succeed – but only time will tell.
Bernard Marr is an internationally best-selling author, popular keynote speaker, futurist, and a strategic business and technology advisor to governments and companies. He helps organisations improve their business performance, use data more intelligently, and understand the implications of new technologies such as artificial intelligence, big data, blockchains, and the Internet of Things. You can connect with Bernard on Twitter (@bernardmarr), LinkedIn (https://uk.linkedin.com/in/bernardmarr) or Instagram (bernard.marr).
Only on “CBS This Morning,” Facebook CEO Mark Zuckerberg and his wife, philanthropist Priscilla Chan, invited us into their home. They have never allowed a TV camera crew inside before. Gayle King was able to see first-hand who this couple is outside their Facebook lives. They discussed raising their two young daughters and how family inspires the work they do.
To celebrate Data Privacy Day we are delighted to invite you to this in-depth fireside chat featuring renowned privacy leader Michelle Dennedy, co-author of the Privacy Engineer’s Manifesto and former senior data privacy leader at Cisco, McAfee/Intel Security, Oracle and Sun Microsystems. Michelle will draw on her experiences working on complex global data and privacy…
[…] GDPR and data privacy is one topic, but another threat is that the GAFA get into an oligopolistic situation i […] 000 clients, brands, MR agencies and 300 Scientific Research Institutions, I know about data privacy and standards in operations […]
[…] goal of our series, “defining what counts in the algorithmic age,” guests will discuss issues like data privacy for children, data agency for all, and how metrics like the United Nations Sustainable Developmen […]
[…] new concept to the local market and, initially, there were some concerns over reliability and data privacy,” said Dr Mohamed AlGassab, operation director at Cura Healthcare, a telemedicine startup in Saud […]
[…] of the current market size, drivers, trends, opportunities, challenges, and key segments of the Data Privacy Software market. Further, the report explains various definitions and classifications of Data Privacy Software industry, applications, and chain structure. Continuing with the data above, the Data Privacy Software report gives different marketing strategies by distributors and major players […]
[…] a history of good governance and experience in maintaining good governance standards data and data privacy practices corporate accountability safeguards such as KYC and AML policies and oversight committees […]
[…] Much of this resistance is due to concerns about security and data privacy […] Automakers Must Respect Data Privacy There are two types of data being gleaned […] While the growing data is an unusual issue for automakers to wrestle with, the industry recognizes data privacy concerns and the need to anonymize all data […]
[…] begins detailing its regulatory and enforcement priorities, it faces a new challenge on the health data privacy and security front […] Department of Health and Human Services’ (HHS) interpretation of two key data privacy and security regulations, and required the agency to consider penalties assessed against othe […]
[…] Address All Relevant Privacy Requirements Data privacy is top of mind for many consumers […] Regulatory, data privacy, and safety aspects can easily be overlooked during development, but they are key to a successfu […]
Covid-19 forced organizations to rethink the future of physical workspaces. Everything from desk layouts to conference rooms to communal areas needs to be approached with a new lens of employee health and safety. Data plays a critical role in how leaders structure their reopening plans, identify metrics for reopening and measure effectiveness.
Some countries are already reopening offices as the rest of the world watches and learns. One of the biggest lessons from the Asia Pacific region so far, as Gartner suggests, is the importance of “transparency” and “iteration.” As Hernan Asorey, chief data officer at Salesforce explained, “We are always assessing the data we have available to make decisions. For every evolving need, we pragmatically look at what exists from trusted sources, we vet it with experts in the field, and then we assess, augment, learn and adapt.”
Since organizations are faced with entirely new challenges—all dependent on a variety of factors, including office location, workspace type, and workforce size—leaders need data to inform a flexible approach to planning.
There are four areas where data can inform your reopening strategy:
Creating a COVID-19 task force
Tracking regional policies
Informing workspace planning
Analyzing employee survey data
These areas represent a starting point and not an exhaustive list. Since all of these details vary based on your organization, this piece should be used for informational purposes only.
Reopening is a cross-functional effort. Organizations are instituting centralized, assigned Covid-19 task forces—made up of a variety of people with a diverse set of skills and perspectives—to manage details like workplace logistics and employee communications. This group should represent your workforce as a whole.
“At Tableau, we’re bringing together a variety of stakeholders into workplace conversations,” said Debbie Smith, senior manager of workplace at Tableau. “We have perspectives—and data—from all aspects of the company, from security to HR to real estate to marketing to procurement. We’re also bringing in outside experts to inform details like capacity planning and air filtration.”
All of these stakeholders work with different data points to inform their perspectives. For example, health and safety teams might monitor regional policy data, procurement might use data to inform any new equipment purchases, like panels between desks, and IT might work with workplace teams to determine how to replace existing equipment like phones or headsets.
Creating a dedicated team is a foundational step in a reopening strategy, because data is useful only when people can provide context and take action.
Reopening strategies are largely dependent on local policies. In addition to these policies, organizations are also faced with a long list of guidance from the Occupational Safety and Health Administration (OSHA), the Centers for Disease Control and Prevention (CDC), the Environmental Protection Agency (EPA) and more.
Organizations are exploring centralized dashboards to track changing policies and to inform key indicators that determine when it is safe to reopen offices. SC&H Group’s data analytics team, for example, created a sample dashboard that shows what this could look like for a company in the United States. The dashboard highlights legislation on a state-by-state basis alongside a map showing the number of cases.
Christopher Adolph, associate professor of political science and adjunct associate professor of statistics at the University of Washington, is curating and maintaining a data set on state policies related to Covid-19 from open source data. He encourages data and analytics leaders to take a focused approach when visualizing local policy data. That might mean considering other visualization types beyond maps to focus on specific, regional metrics that show the impact of Covid-19.
“If I were an organization,” shares Adolph, “I would structure a visualization to show what’s happening in each location associated with my business, with filters that allow stakeholders to sort through stringency of policies, trends in mobility and trends in cases. I would want to see a time series of how policies change over time as cases increase or decrease in a region.”
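The per-location view Adolph describes can be sketched in a few lines of pandas. The data below is entirely made up, and the 0–10 stringency scale is an assumption for illustration, not the coding the University of Washington data set actually uses:

```python
import pandas as pd

# Hypothetical state-policy and case data; real sources would be the curated
# policy data set and public case counts.
policies = pd.DataFrame({
    "state": ["WA", "WA", "TX"],
    "date": pd.to_datetime(["2020-03-23", "2020-06-01", "2020-07-02"]),
    "policy": ["stay_at_home", "phased_reopen", "mask_mandate"],
    "stringency": [9, 5, 4],  # assumed 0 (none) .. 10 (strictest) scale
})
cases = pd.DataFrame({
    "state": ["WA", "WA", "TX", "TX"],
    "date": pd.to_datetime(["2020-03-23", "2020-06-01",
                            "2020-07-02", "2020-07-09"]),
    "new_cases": [225, 290, 7915, 9782],
})

def location_view(state, min_stringency=0):
    """Time series of cases for one location, annotated with policy changes
    that meet a stringency filter -- the kind of view Adolph suggests."""
    ts = cases[cases["state"] == state].sort_values("date")
    pol = policies[(policies["state"] == state)
                   & (policies["stringency"] >= min_stringency)]
    return ts.merge(pol[["date", "policy", "stringency"]],
                    on="date", how="left")

view = location_view("WA", min_stringency=5)
print(view)
```

A dashboard would plot `new_cases` over time and mark the dates where `policy` is non-null, giving stakeholders the case-versus-policy timeline described above.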
Data analytics and geospatial services firm Lovelytics created a dashboard template combining Covid-19 case data from the Tableau Covid-19 Data Hub with sample HR data, providing a breakdown of at-risk employees by building, age group and location. Although this example was originally developed for companies looking to stabilize in a crisis, these types of dashboards could also become a single source of truth in the event of another wave of the virus after reopening.
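The join behind a dashboard like this can be sketched with pandas. The column names and figures below are fabricated for illustration and are not the Lovelytics schema or the Tableau Covid-19 Data Hub format:

```python
import pandas as pd

# Hypothetical stand-ins for the two feeds such a dashboard combines:
# public case counts per county and an HR roster with office locations.
county_cases = pd.DataFrame({
    "county": ["King", "Travis"],
    "cases_per_100k": [310.0, 520.5],
})
hr_roster = pd.DataFrame({
    "employee_id": [1, 2, 3, 4],
    "building": ["SEA-1", "SEA-1", "AUS-1", "AUS-1"],
    "county": ["King", "King", "Travis", "Travis"],
    "high_risk": [False, True, True, False],  # e.g. self-reported risk factors
})

# Join the roster to local case rates, then break down at-risk head count
# by building, mirroring the per-building view described above.
joined = hr_roster.merge(county_cases, on="county", how="left")
at_risk = (joined[joined["high_risk"]]
           .groupby("building")
           .size()
           .rename("at_risk_employees")
           .reset_index())
print(at_risk)
```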
Some of the most complex challenges that employers face in the wake of Covid-19 are related to workspace layouts. Many organizations have adopted open office concepts, making it difficult to maintain six feet of distance between employees. They’re also evaluating the use of shared spaces like kitchens, bathrooms, and elevators, along with high-end air filtration systems to reduce the spread of infectious droplets. One way that employers can start to make sense of all of these logistical decisions is through data.
Some key data points that employers are collecting (or considering collecting) around space utilization are:
Physical distance (between desks and in shared spaces)
De-densification (removing furniture in communal spaces like kitchens and conference rooms)
Air movement and ventilation
Pinch points like elevators and bathrooms
These new challenges are leading organizations to take a new approach to workplace metrics. Salesforce, for example, is analyzing data to model staggered arrival times so they can effectively manage elevator capacity. Salesforce is also partnering with Siemens on key solutions for a “touchless office,” where organizations can manage occupancy and location data to augment their contact tracing process (on an opt-in basis).
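The staggered-arrival analysis can be modeled very simply: compare how large a queue builds when everyone arrives at once versus when the same head count is spread across arrival windows. The throughput and arrival numbers below are assumptions for illustration, not anything Salesforce actually uses:

```python
# A toy capacity model: employees arrive in 15-minute windows, and
# distancing limits how many riders the elevator bank can move per window.
def max_backlog(arrivals_per_window, riders_per_window):
    """Largest queue that builds up when arrivals exceed elevator throughput."""
    backlog, worst = 0, 0
    for arriving in arrivals_per_window:
        backlog = max(0, backlog + arriving - riders_per_window)
        worst = max(worst, backlog)
    return worst

throughput = 40  # riders moved per window under distancing (assumed)

# Everyone targets 9:00, so 200 arrivals pile into two windows...
peaked = [0, 120, 80, 0, 0]
# ...versus staggered start times spreading the same 200 people evenly.
staggered = [40, 40, 40, 40, 40]

print(max_backlog(peaked, throughput))     # a large queue forms
print(max_backlog(staggered, throughput))  # no queue at all
```

Running the model on real badge-in data would let a workplace team pick arrival windows that keep the worst-case lobby queue near zero.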
Global commercial real estate services firm Cushman & Wakefield noted in its Recovery Readiness guide that organizations may want to “invest in operational building technologies that enhance the integration, visibility, and control of building and workplace systems” (like occupancy sensors or air quality monitoring capabilities). The company also piloted a new office layout in Amsterdam deemed “The 6-Feet Office,” using large circles and visual cues to enforce a six-foot separation between employees.
An example dashboard from Tableau Zen Master Ken Flerlage. Note that this is intended to be an example and not a template. There are a variety of factors in workplace planning that organizations need to consider beyond the six-foot guideline. Interact with the full visualization.
Recently, Tableau Zen Master Ken Flerlage explored what an office space visualization could look like, drawing six-foot circles around each desk. If a desk area doesn’t honor the six-foot perimeter, the circle turns red, indicating that the company needs to rethink the layout of that office area. In Flerlage’s blog post about the visualization, Amanda Makulec, data visualization lead at Excella, and Bridget Cogley, senior consultant at Teknion, explain that this template is a good starting point for rethinking office seating arrangements, but that there needs to be additional thinking around the complexities of how people move in an office setting.
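The distance check behind such a visualization reduces to pairwise geometry. A minimal sketch, assuming hypothetical desk coordinates in feet (Flerlage’s actual workbook does this in Tableau, not Python):

```python
import math

# Desk positions in feet on a floor plan (hypothetical coordinates).
desks = {
    "A1": (0.0, 0.0),
    "A2": (4.0, 3.0),    # 5 ft from A1 -- too close
    "B1": (10.0, 0.0),
}

def flagged_pairs(desks, min_distance=6.0):
    """Return desk pairs closer than the six-foot guideline --
    the pairs whose circles would turn red."""
    names = sorted(desks)
    return [(a, b)
            for i, a in enumerate(names)
            for b in names[i + 1:]
            if math.dist(desks[a], desks[b]) < min_distance]

print(flagged_pairs(desks))
```

As Makulec and Cogley note, straight-line distance between desks is only a starting point; walkways and how people actually move through the space need separate analysis.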
To account for these complexities, some companies are hiring external experts to help set these parameters and inform logistics planning. All of these concepts will require additional iteration and flexibility as organizations put them into practice.
Whether or not they can physically return to work, organizations also need to think about employee needs. Are employees comfortable returning to work—and if so, in what capacity? Some employees need to stay home with kids as schools remain closed, others may have compromised immune systems, and some may just be more comfortable working from home until a vaccine is available to the public.
Some companies, including Tableau, are gauging employees’ concerns through regular surveys. They’ll ask questions about general well-being, like how they’re adapting to work-from-home and how the company can support them. Companies in the logistical planning stages might ask questions about whether or not employees are comfortable returning to work to determine reopening schedules.
An example dashboard from the Tableau people analytics team showing results of a COVID-19 work-from-home survey (this dashboard contains sample data). Interact with the full visualization.
With this data at their fingertips, organizations can analyze:
Employee needs like office equipment or childcare support services
Once offices reopen, companies could join this survey data with utilization data to understand how many employees are actually coming into the office on a regular basis. This can help inform whether or not employees are comfortable with new working conditions.
Analyzing the results of these surveys can help organizations develop important metrics around how the pandemic is affecting their employee base and help them determine how to take action.
From connection through collaboration, Tableau is the most powerful, secure, and flexible end-to-end analytics platform for your data. Elevate people with the power of data. Designed for the individual, but scaled for the enterprise, Tableau is the only business intelligence platform that turns your data into insights that drive action.
Organizations today are seeing data volumes grow across industries, and they must address that growth to maintain a differentiated data management practice and stay competitive. The cloud offers capabilities to address any data management need; however, not all workloads can migrate to the cloud easily. This could be due to legacy application dependencies residing on-premises, data residency regulations, or low-latency computation needs, such as in the healthcare, financial, and manufacturing industries.
Read on to discover:
Constraints that keep data tied to on-premises environments
Why companies should embrace hybrid data management practice
How AWS Outposts meets your hybrid data needs
Data management constraints organizations face
Data residency regulations, low-latency requirements, and complex application migrations are some of the main issues surrounding the management of data. The journey to the cloud also challenges data infrastructure and development teams to design data management models that provide consistent and reliable cloud services on-premises. The specific challenges vary by industry and operational requirements, but they generally center on residency, latency, and migration complexity.
Hybrid cloud benefits
Organizations can deploy cloud infrastructure on-premises, determine data processing priorities, and when ready, migrate towards the cloud.
1. Cloud capabilities on-premises
Amazon EC2 instances featuring Intel® Xeon® Scalable processors bring the same cloud capabilities on-premises.
2. Seamless migration to the cloud
Build an application once and deploy it in the cloud, on-premises, or in a hybrid architecture with consistent performance.
3. Accelerated modernization
Companies can accelerate the adoption of cloud services on-premises across teams.
4. Focus on what matters
Reduce the time, resources, operational risk, and maintenance downtime required for managing IT infrastructure, giving you the ability to focus on what differentiates your business.
AWS offers a hybrid solution to meet data management needs
The AWS Outposts catalog includes options supporting the latest generation of Intel-powered EC2 instance types, with or without local instance storage. Organizations can choose from a range of pre-validated Outposts configurations offering a mix of EC2 and EBS capacity, designed to meet a variety of data management needs.
AWS Outposts options:
Innovate with AWS
Healthcare use case:
Medical professionals manually collect structured data to store and analyze in vital fields such as cancer staging, medical/family history, and patient-reported symptoms. AWS Cloud services automate data collection, and machine learning inference models accelerate data processing and the extraction of valuable insights. AWS provides the tools, services, and APIs to deliver real-time video analytics and pattern matching, while offering on-premises flexibility and access to cloud capabilities when needed.
Finance use case:
Financial and government institutions that need to comply with specific data regulations use hybrid cloud to meet contractual obligations with their customers and demonstrate compliance with legal policies. AWS Outposts allows these organizations to maintain data visibility and process sensitive data locally, including local caching and filtering, and, when needed, connect to Local Zones or send data to an AWS Region.
Security use case:
Companies that are interested in using Outposts to run physical security environments, such as video surveillance, badging systems or security systems, can build and run these workflows on Outposts, archiving relevant data to S3/Glacier within the AWS Region for forensic analysis.
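Archiving to S3/Glacier for retention is commonly handled with an S3 lifecycle rule. As a hedged sketch, the bucket, prefix, and retention numbers below are hypothetical; the payload shape matches what boto3’s `put_bucket_lifecycle_configuration` expects:

```python
# A sketch of an S3 lifecycle rule that moves security-video objects to
# Glacier for long-term forensic retention. All names and day counts here
# are illustrative assumptions, not values from the article.
lifecycle = {
    "Rules": [
        {
            "ID": "archive-surveillance-footage",
            "Filter": {"Prefix": "video-surveillance/"},
            "Status": "Enabled",
            "Transitions": [
                {"Days": 30, "StorageClass": "GLACIER"},
            ],
            "Expiration": {"Days": 365 * 2},  # drop footage after two years
        }
    ]
}

# Applying it requires AWS credentials, so the call is shown but not run:
# import boto3
# boto3.client("s3").put_bucket_lifecycle_configuration(
#     Bucket="example-security-archive",   # hypothetical bucket name
#     LifecycleConfiguration=lifecycle,
# )
print(lifecycle["Rules"][0]["Transitions"][0]["StorageClass"])
```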
Getting started with AWS
With a consistent set of infrastructure, services, tools, and APIs, AWS simplifies your data management and data migration process, reducing the effort and complexity involved. Leverage the latest Intel technology innovations to accelerate modernization at the edge, too. Find out more about hybrid data management for your organization using AWS Outposts in our full guide here.
AWS infrastructure solutions allow enterprises across all industries to bring AWS services closer to where they’re needed, such as on-premises with AWS Outposts, in large metro areas with AWS Local Zones, or at the edge of 5G networks with AWS Wavelength. These solutions give enterprises the capability to deliver innovative applications and immersive next-generation experiences using AWS cloud services where they need them. Millions of customers—including the fastest-growing startups, largest enterprises, and leading government agencies—are using AWS to lower costs, speed time to market, and become more dynamic. To learn more about AWS infrastructure solutions, visit aws.amazon.com.
Enterprises are rapidly adopting the cloud for greater agility and cost savings. However, they often find that some applications need to be re-architected or “modernized” before they can be moved to the cloud. Others need to remain on-premises due to low-latency or data processing requirements. As a result, enterprises are looking to hybrid cloud architectures to integrate their on-premises and cloud operations to support a broad spectrum of hybrid use cases, such as data center extension, VMware cloud migration, or building and managing applications using a common set of cloud services and APIs across on-premises and cloud environments. In this tech talk, you will learn how you can build your hybrid cloud architecture with AWS. We will cover our extensive portfolio of services that offer seamless integration between your on-premises and cloud environments for any hybrid use case. Learning Objectives: – Discover AWS services that offer seamless integration across on-premises and cloud environments – See how to build the hybrid cloud architecture to support your use case – Learn about new services that bring cloud services on-premises
I completed my internal medicine residency at a large urban hospital system in Boston, Massachusetts. One particularly challenging day, I worked hard to arrange in-hospital dialysis for a patient—only to find out later that day that he left the hospital against medical advice and without receiving dialysis.
His reason for leaving was a complicated yet common social situation. Later, during rounds, I voiced my frustration about this patient’s actions. “He made the wrong choice,” I said. My attending (supervising) physician stopped mid-stride and said, “No, Vick. He made the choice that’s right for him.” My attending physician calmly explained, “This isn’t about you and your frustration; have the courage to admit this will never be about you. This is about him and his life.” At the time, most of my 20 years of formal education had been about me: my work as a physician, striving to execute the treatment plan as I saw best for my patients.
That day I learned a critical lesson that broadened my perspective on patient needs: Medical knowledge, data, lab tests, and more are incredibly informative and meaningful when guided by compassion for the whole person who needs care. That lesson has stayed with me from residency to my current role at Commonwealth Care Alliance (CCA), where I oversee the application of data science, data engineering, and data-driven decision-making to improve the care that our patients receive.
Data-informed and compassion-guided healthcare during the COVID-19 pandemic
CCA is a community-based healthcare organization that’s nationally recognized as a leader in providing care for high-cost, high-need individuals who are dually eligible for Medicare and Medicaid, including individuals with disabilities. These individuals live with a broad range of complex medical, behavioral, and social needs, which leads to high rates of marginalization and vulnerability. CCA provides services to nearly 40,000 members in Massachusetts, including medical care, behavioral-health care, durable medical equipment, transportation, and social services and supports.
We want our data to serve as a primary vehicle for decision-making and learning, our experience and intuition to provide context, and our compassion to occupy the driver’s seat.
At CCA, we are pioneering the necessary convergence of data and compassion in healthcare. We recognize that we must employ more than data-driven decision-making; we must be both data-informed and compassion-guided. We define “data-informed” as the combined use of data, experience, and intuition (each with their strengths and weaknesses) to make the best possible choices for a situation despite its complexities. In other words, we want our data to serve as a primary vehicle for decision-making and learning, our experience and intuition to provide context, and our compassion to occupy the driver’s seat.
The COVID-19 pandemic provided us with a practical example unlike any other scenario. It put stress on all the usual societal supports in Massachusetts (and elsewhere) and magnified the vulnerability of every individual. About 30% of our members are at high risk of complications or death from COVID-19. From our experience, we knew our members would need enhanced support during the pandemic.
Our data-informed approach ensured we were aware—sooner and more accurately—about each individual’s risks and needs, as well as about disruptions to existing community support. Our approach allowed us to proactively engage with our members to keep them safe (for example, avoid hospitalizations, obtain essential medications and home oxygen) and supported (for example, relieve them of the sense of social isolation or fear).
Could we build the first platform to scale compassion and value-based healthcare?
We were already building a data platform to support and accelerate CCA’s mission. We had gratefully drawn inspiration from outside of healthcare to design and build a modern solution to meet our needs. Then the COVID-19 pandemic arrived and accelerated our need for such a platform—as seen in CCA members’ needs. Our organizational response revealed the profound benefit provided when data is combined with compassion in healthcare. With essential data easily available, we were freed up to think more holistically and with compassion about our members’ needs. We saw their needs more clearly and how many of them needed support—for example, about 25% of members did not have another person or organization to help keep them safe and supported.
On average, a CCA clinician consults data nearly 60 times a minute during the working day.
The urgency of COVID-19 accelerated our understanding of how data can improve interactions and overall care. The use of data broadened our compassion; it did not detract from it. Our experience reinforced that you must build up the ability to iterate quickly from being wrong to getting it right, and the technology and data must allow for that.
We did not need to “pivot” our data platform; the design and technology allowed clinicians and care managers to rapidly receive tailored and instantly updated information to help them prioritize outreach based on quickly shifting factors. This experience reinforced our transformation to a data-informed, compassion-guided healthcare organization; on average a CCA clinician consults data nearly 60 times a minute during the working day. We anticipate that we will continue to lean upon data to serve our members during the potential combination of influenza and COVID-19.
As a servant-leader, caregiver, and builder, I think it’s beautiful to use the best technologies (such as Looker and Google Cloud) to care for the most vulnerable individuals, especially in times of great need. I see the profound benefit of a system that would learn and scale holistic, compassionate, value-based care. Designing data and systems around human needs and compassion (that is, human-centered) lets us make sense of volumes of data so we can care for people who live complex lives.
Without good use of data, we risk only seeing what we already know (or think we know) and reinforcing existing disparities or poor outcomes. Ideally, the technology powering the care we offer should fade into the background, like GPS now does as we’re driving or walking. When essential data is easily available, the data and tools themselves fade into the background; data simply becomes part of the context of doing our jobs and serving people who are vulnerable.
Keep learning: Does the key to business transformation start with a data-driven culture? Read this whitepaper to learn how to foster a culture that improves agility, intelligence, insights, and trust. And join the author during his session at JOIN@Home.
Valmeek Kudesia, MD, VP Clinical Informatics and Advanced Analytics, Commonwealth Care Alliance
Valmeek Kudesia is an experienced physician-leader, engineer, and board-certified clinical informatician. He is a servant-leader who transforms healthcare organizations into learning organizations. Valmeek leads interdisciplinary teams to design and equip healthcare organizations with information platforms, data science tools, and change-processes for patient and organization success (particularly in the high-complexity value-based care setting). He describes himself: “I’m a doctor who talks tech, does data, and does systems. I take care of people and build things that take care of people.”
In early 2020, we published our 2020 Data Trends Report featuring our predictions for the major trends that will shape the future of data and analytics. Little did we know that the world as we knew it would completely change by spring. We recently decided to take a fresh look at these data trends to assess the larger cultural and organizational impacts of Covid-19. What we found was an even greater urgency for data and analytics, to highlight inequities in the world and to help organizations empower collaboration and increase agility.
For this piece, we’ll focus on three trends that have seen rapid acceleration this year—growth in data literacy programs, the emergence of data as a critical resource for advocates highlighting racial inequality, and executive involvement in data culture. For a deeper dive into other topics like artificial intelligence and data storytelling, read the full 2020 Data Trends report.
Early 2020 Prediction: Organizations look to academia as a data literacy incubator
How It’s Evolved: Data literacy remains a foundational mission for agile businesses
Earlier this year, we predicted that data literacy would continue to be high on leaders’ priority lists throughout 2020. Despite a pandemic that led to budget cuts and employees working remotely, data literacy is more important than ever—and leaders are getting creative with their approach to training and development around analytics.
Last year, IDG reported that organizations were making major investments in digital initiatives—an average of $15.3 million in 2019, with 41% of that budget allocated to people and skills. Instead of reducing spending in training and skills in light of COVID-19, leaders are optimizing their training budgets to maximize value from their existing investments.
Before the pandemic, nutritional food company Huel prioritized data training and drop-in sessions to upskill employees. This foundation is helping the company adapt to a new business reality. Since many employees understand how to explore data and turn it into insights, they can act with greater speed and clarity on decisions around marketing spend and distribution effectiveness.
In this new world, we’ll see virtual data communities emerge as the preferred method of regular communication among analysts, business users, and line of business leaders. Organizations that already had virtual communities will see them grow as workers need a place to offer inspiration, ask questions, and share best practices in a new remote world. This will set the foundation for more expansive in-person and online training programs in the future.
Early 2020 Prediction: Transparency around workplace data leads to equity and organizational success
How It’s Evolved: Data sheds light on inequality and areas where progress is needed
Previously, we discussed data as a tool instrumental in dismantling inequities in the workplace—but even more, in 2020 we are seeing data being used to shine a light on inequities in the world at large. The novel coronavirus has had a disproportionate impact on communities of color, and to understand the breadth of the impact and to spearhead solutions, we need data. Data from the Kaiser Family Foundation shows that as of August 4, the COVID-19-related death rate among Black people was over twice as high as the rate for White people. People from Latinx communities are also seeing higher rates of infection and hospitalization.
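A comparison like “over twice as high” is a rate ratio: deaths per 100,000 people in each group, divided. The sketch below uses invented counts purely to illustrate the arithmetic; these are not the Kaiser Family Foundation figures:

```python
# Illustrative arithmetic only -- the deaths and population figures below
# are made up, not the statistics the article cites.
def death_rate_per_100k(deaths, population):
    """Crude death rate per 100,000 people."""
    return deaths / population * 100_000

black_rate = death_rate_per_100k(deaths=32_000, population=44_000_000)
white_rate = death_rate_per_100k(deaths=70_000, population=197_000_000)

# "Over twice as high" corresponds to a rate ratio above 2: comparing raw
# death counts would mislead, because the groups differ in size.
ratio = black_rate / white_rate
print(round(ratio, 2))
```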
Data can be a key tool in building awareness and inspiring action in the fight against inequality. Headwaters Economics, an independent, nonprofit research group, developed a series of data visualizations that show Census response rates for four minority groups: Black, Asian, Hispanic and Latino, and Native American. The decennial Census determines how much federal funding flows into communities and influences decisions about schools, health clinics, and development programs—but Covid-19 is affecting response rates due to health risks, unemployment, and limited door-to-door outreach.
Headwaters pulled in data on Census self-reporting from the U.S. Census Bureau (updated daily) and overlaid it with demographic data on the racial and ethnic makeup of communities across the U.S.
Patty Hernandez Gude, associate director at Headwaters Economics, noted: “All of this adds up to a situation where communities of color stand to be represented even less in the 2020 Census than they have been historically. This would be a monumental step backwards.”
Headwaters is working to get these visualizations into the hands of advocates and nonprofits to improve Census response rates. Working with limited resources, these visualizations help them quickly identify how to make the largest impact. “Hopefully the data can serve a purpose and be used to more effectively direct energy and resources in this critical period of time when it can really make a difference,” shared Gude.
Early 2020 Prediction: Data strategy stretches across the C-suite
How It’s Evolved: Executives extend data-driven decision making to frontline workers
Digital transformation efforts, particularly in the realm of data and analytics, were once the sole responsibility of the Chief Data Officer. But that paradigm is changing, especially in light of Covid-19, as data and analytics are more tightly woven into business goals.
All C-level and VP-level executives, not just analytics leaders, are committing to treating data and analytics as a shared responsibility, and functional leaders are expected to empower their employees with the data and the skills they need to do their jobs. McKinsey recently shared six key lessons that have emerged from crisis-response efforts. One of these was the need for frontline teams to have full decision-making rights.
Chemical and consumer goods company Henkel Laundry and Home Care leverages self-service analytics in its operations and supply chain. Henkel’s frontline workers—ranging from analysts to global managers to factory line operators—all have access to data to track major KPIs like energy efficiency in factories. When some managers had to shift to remote work, the team moved all major KPIs into online dashboards so they could conduct operations planning and track the availability of personal protective equipment (PPE) in facilities to keep employees safe. Dr. Johannes Holtbruegge, Senior Manager of Transformation at Henkel, noted that the ability to track metrics at a local, regional, and global level creates strong alignment between disciplines and increases agility. All these developments are embedded in the company’s long-term digitalization strategy.
Organizations get more value out of analytics when they bring in the people who know the data best—the people who make and execute decisions based on business goals. When these people have the data they need, along with the necessary skills to interpret and act on it, organizations build a strong data culture and, as a result, stronger networks of teams.
If we’ve learned anything during this new normal, it’s that predicting what’s next can be incredibly difficult. But we know that the one constant, reliable way forward is with data.
Data is playing an important role in identifying ways we can improve our communities, our businesses, and our world. Organizations using trusted data are well-positioned for navigating through change and setting themselves up for success in the future. Data empowers people to make better decisions, faster, and that has been one of the main differentiators we’ve seen in organizations that are surviving and even finding new and innovative ways of getting work done during this pandemic.
Data will continue to be an even more important component to finding stability and growth, especially as more operations and services move into the digital space. The potential impact of that data will only get stronger as increased automation, AI, and forecasting models help us better predict and prepare for what’s ahead. Even in a crisis, those who have taken the initiative to shift to a digital-first mindset, driven by data, are better prepared to handle whatever comes next.
Andrew Beers is Tableau’s Chief Technology Officer, and is responsible for Tableau’s long-term technology roadmap and emerging technologies. During his tenure at Tableau, he has led many of the engineering teams, created new products for the company, and personally written pieces of the product code. Andrew has been at the very heart of Tableau’s engineering for most of the company’s existence. Prior to joining Tableau in 2004, Andrew ran the engineering group at Align Technology, makers of the Invisalign system, building software to support large-scale customized manufacturing. He holds a master’s degree in computer science from Stanford University, where he worked in Pat Hanrahan’s (Tableau co-founder) computer graphics research group.
Arizona State University: In this first episode of Study Hall: Data Literacy, Jessica Pucci talks us through some of the critical vocabulary we’ll need to become great data analysts, and lays out the basic ideas behind what it means to be data literate and how we can start looking at information and the world a little differently.
Big data as a concept is thrown around a lot. It’s often used as a buzzword to sound tech-savvy and on the ball—but how much do you really know about it? In truth, it’s been around for decades. Businesses have long been analyzing their customers’ actions and behaviors and using them to inform business decisions; it’s the marker of a strong businessperson. The difference today is that we now have the tools and technology to gather and analyze larger amounts of data faster. Enter big data.
You don’t have to be a tech genius or a data scientist to make big data work for you and your business. Here are seven areas where you can use big data to streamline and optimize what you already have, with key examples and actionable tips to get you started.
1. Website design
To prove that big data is not only for the scientists of the world, let’s start with a more creative example. A well-designed website shouldn’t only look good; it should be part of a subtle conversation going on between you and your customers and leads.
One way to gather useful big data from your website is through heat maps. You can see exactly where the eyes and cursors of visitors to your site spent the most time. If these heat spots aren’t on your CTA button, contact form, or wherever else you most want them to go, you know what needs to be changed. You can achieve similar results with traffic analysis — looking at page views, unique visitors, visit duration and more. Many traffic analysis tools will also let you compare to your competitors’ sites to get a bigger picture of the general landscape.
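Heat-map tools do this aggregation for you, but the underlying idea is simple: bin cursor or click positions into grid cells and count them. A minimal sketch, using a hypothetical click log (the coordinates and grid size here are purely illustrative):

```python
from collections import Counter

def heatmap_bins(clicks, cell=100):
    """Bin (x, y) cursor positions into square grid cells of `cell` pixels.

    `clicks` is a list of (x, y) tuples from a hypothetical click log;
    returns a Counter mapping each grid cell to its click count.
    """
    return Counter((x // cell, y // cell) for x, y in clicks)

# Hypothetical log: most clicks cluster in one cell, which may or may
# not be where your CTA button actually sits.
clicks = [(118, 532), (125, 548), (130, 539), (610, 90), (615, 95)]
bins = heatmap_bins(clicks)
hottest_cell, count = bins.most_common(1)[0]
```

If the hottest cell isn’t over your CTA or contact form, that’s the same signal a commercial heat-map overlay would show you visually.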
2. Campaign timing
Have you ever put together a five-star marketing campaign that ticks all of your customer persona boxes, looks great and has a punchy CTA — only to see it flop? The greatest campaigns in the world will get you nowhere if you don’t publish them at the right time.
Whether you’re publishing on social media, email, or any other digital platform, there are tools (like Growbots for email or Sprout Social for social media) that will gather data about when your audience is most active, when they are most prone to engaging, and ultimately when the best time to reach them is. With big data, you don’t have to take a stab in the dark about when to launch a winning campaign.
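Under the hood, these tools are doing something you could sketch yourself: group past sends by hour and compare engagement rates. A minimal illustration on hypothetical data (the log and hours below are invented, not output from any of the tools named above):

```python
from collections import defaultdict

def best_send_hour(engagements):
    """Given (hour, engaged) pairs from a hypothetical send log,
    return the hour of day with the highest engagement rate."""
    totals = defaultdict(lambda: [0, 0])  # hour -> [engaged_count, sent_count]
    for hour, engaged in engagements:
        totals[hour][0] += int(engaged)
        totals[hour][1] += 1
    return max(totals, key=lambda h: totals[h][0] / totals[h][1])

# Hypothetical log: morning sends engage at a higher rate than afternoon ones.
log = [(9, True), (9, True), (9, False), (16, False), (16, True), (16, False)]
```

Real tools add seasonality, audience segmentation, and confidence intervals on top, but the core comparison is this simple.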
3. Conversion optimization
There are a lot of variables when it comes to on-site content. While that may seem daunting, it also means there’s a lot of room for optimization so your business can do even better than it is now. From headline copy to page color scheme, everything can be tweaked and improved to drive as many conversions as possible.
Big data analytics can help us to understand how leads travel through our sales funnels, where they might get lost and at what point many prospective customers drop off. Data-driven optimization is the fastest and most efficient way to get it right. Even while experimenting, be sure to gather as much data as possible and analyze it in bulk for the most accurate and informative results.
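The funnel analysis described above reduces to computing stage-to-stage conversion rates and spotting the weakest link. A sketch on a hypothetical funnel (stage names and counts are invented for illustration):

```python
def funnel_dropoff(stage_counts):
    """stage_counts: ordered list of (stage_name, visitors) tuples.

    Returns per-transition conversion rates so you can see where
    prospective customers drop out of the funnel.
    """
    rates = []
    for (prev_name, prev_n), (name, n) in zip(stage_counts, stage_counts[1:]):
        rates.append((f"{prev_name} -> {name}", n / prev_n))
    return rates

# Hypothetical funnel: the biggest leak is between cart and checkout.
funnel = [("landing", 1000), ("product", 400), ("cart", 200), ("checkout", 40)]
```

The transition with the lowest rate is where data-driven optimization (copy, layout, checkout friction) is likely to pay off first.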
4. Personalization
No matter what your business is, at the end of the day it comes down to people making decisions. Big data might seem like a huge and faceless tool, but it can also be used to add more personality and individuality to your marketing and customer interactions.
The fashion brand H&M used big data to do exactly that when it integrated big data with its chatbot. As the chatbot offered options to prospective customers and asked whether they liked the product choice, it learned more and more about which clothing options they preferred. In the same vein, for marketers to make personalized decisions that will have a real impact on leads and customers, we need to learn about them first. Big data is one effective way to do so.
5. Customer retention
A good businessperson knows how to attract and win clients. A great businessperson knows how to keep loyal customers. Once again, big data can take the heavy lifting out of this process.
Checking in with your existing customers through quick surveys and polls is one way to stay continually in touch with how they perceive your company and what they think of your products or services. Depending on your industry, it’s anywhere between five and 25 times more expensive to find a new customer than to retain an existing one. Use big data to regularly make sure you are delivering exactly what your existing customers expect of you.
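The 5x–25x rule of thumb cited above is easy to turn into a back-of-the-envelope check for your own numbers. The figures below are hypothetical, not drawn from any particular industry study:

```python
def acquisition_vs_retention(retention_cost, low_mult=5, high_mult=25):
    """Rough range of acquisition cost implied by the widely cited
    5x-25x rule of thumb, given what retaining one customer costs."""
    return retention_cost * low_mult, retention_cost * high_mult

# Hypothetical: if keeping an existing customer costs $20 a year,
# the rule of thumb puts acquiring a replacement at $100 to $500.
low, high = acquisition_vs_retention(20)
```

Seen that way, even a modest investment in surveys and retention campaigns has a lot of room to pay for itself.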
6. Informing risk management
Risk is a fact of life for any business. Wouldn’t it be great if we could find a way to make smarter strategic decisions with key data to back up our more risky ventures? With big data, that could be a reality.
UOB Bank in Singapore did it. For a financial institution, a misstep in risk assessment and management could be catastrophic. The bank used big data to develop a risk management system that cut its risk analysis time from 18 hours to just a few minutes. Being able to carry out extensive risk analysis in real time was a game changer.
Of course, not every business has the ability or the resources to create its own risk management solution from scratch, but there are tools out there that help businesses accurately quantify the risks they take on a daily basis, shedding light on one of the trickiest parts of business decision-making.
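One of the simplest ways such tools quantify risk is expected loss: probability times impact, ranked. This is a generic textbook sketch on invented numbers, not how UOB’s system works:

```python
def rank_risks(risks):
    """risks: list of (name, probability, impact_in_dollars) tuples.

    Ranks risks by expected loss (probability * impact), a basic way
    to compare day-to-day business risks on one scale.
    """
    return sorted(risks, key=lambda r: r[1] * r[2], reverse=True)

# Hypothetical register: the likely-but-small risk can outrank the
# rare-but-large one once both are expressed as expected loss.
risks = [("supplier delay", 0.30, 10_000), ("data breach", 0.02, 100_000)]
```

Even this crude ranking forces the useful discipline of putting a probability and a dollar figure on each risk rather than arguing from gut feel.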
7. Innovation
Think about the most successful businesses in the world—Amazon, Apple and Microsoft, just to name a few. They didn’t get to where they are today by sticking to their first idea and running with it. They diversified, innovated, and kept up with the evolving demands of their customers. Often, it was big data that showed them the way.
Let’s look at Amazon’s recent venture, Amazon Fresh. To launch its grocery service, Amazon focused on big data analytics not just to understand how customers buy groceries, but also how suppliers interact with grocers. Big data helped the company understand the whole supply chain and find a solution that streamlined every aspect of it, providing an innovative and helpful service.
By: Sina Fak / Entrepreneur Leadership Network Writer