When inflation and interest rates soar, businesses must look closely at their spending to protect their bottom line. In my many years of experience in accounts payable and automation, I have seen a recurring pattern: A significant untapped savings opportunity lies within most companies’ accounts payable processes.
The range of payment options has expanded so that transactions can now be handled faster and at lower cost, and both consumers and businesses are adopting digital payments. If you’ve been considering the move but haven’t yet taken the leap, here are a few ways digitizing payments could help improve your business—and how to get started.
Six Ways Digital Payments Can Improve Your Business
There are several ways digital payments can help improve your bottom line and make for a better customer experience.
• Digital payments can help reduce fraud. Digital payments leverage technologies such as encryption of transaction data and multi-factor authentication, making it more difficult for bad actors to initiate fraudulent transactions. Plus, with digital payment types like virtual credit cards (VCCs), you create a one-off payment dedicated to a single vendor for a single amount, helping to protect your actual credit card number from being compromised.
• Digital payments can help reduce costs and waste. The Association for Financial Professionals (AFP) reports that 92% of companies accept paper checks as incoming payments, and 86% use them for outgoing payments. Sending and receiving checks can incur hefty processing costs such as bank fees, printing, postage and secure disposal. AFP found that digital payment fees are much lower—they can be less than $0.50 per transaction—and digital payments are a greener solution. Some digital payment types carry no fees at all, and some can even earn you cash back (see the rough cost sketch after this list).
• Digital payments can increase transaction speeds. While traditional payment methods like paper checks can take days or weeks to process and complete, digital payments can be almost instantaneous. By digitizing payments, companies can be better about paying invoices on time, every time.
• Digital payments give you real-time cash flow visibility. When a payment occurs, the account balance reflects the change immediately and all currency conversions happen right at the moment of the transaction. Thus, executives can track expenses and income in real time and make quicker business decisions in key areas such as spending, investing and hiring.
• Digital payments can turn a cost center into a value driver. Consumers often consider cashback offers when deciding which credit cards to use. Businesses can get the same perks with VCCs as with physical credit cards. Plus, VCC numbers can be used only once, and companies determine how much money to load onto each card. Cashback offers could mean recurring revenue for your organization on every invoice you already need to pay.
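To make the check-versus-digital cost gap concrete, here is a minimal back-of-the-envelope sketch in Python. The invoice volume and per-check cost are assumptions chosen purely for illustration; the $0.50 figure is simply the upper end of the digital fee cited above, not a quote from any provider.

```python
# Rough, illustrative comparison of annual payment-processing costs.
# All per-transaction figures are assumptions for this example.

def annual_cost(invoices_per_month: int, cost_per_payment: float) -> float:
    """Yearly processing cost for a given payment method."""
    return invoices_per_month * 12 * cost_per_payment

INVOICES_PER_MONTH = 1_000   # assumed AP volume
CHECK_COST = 4.00            # assumed all-in cost per paper check (printing, postage, bank fees)
DIGITAL_COST = 0.50          # upper end of the digital fee mentioned above

check_total = annual_cost(INVOICES_PER_MONTH, CHECK_COST)
digital_total = annual_cost(INVOICES_PER_MONTH, DIGITAL_COST)

print(f"Paper checks:      ${check_total:,.0f}/year")
print(f"Digital payments:  ${digital_total:,.0f}/year")
print(f"Estimated savings: ${check_total - digital_total:,.0f}/year")
```

Under these assumptions the switch saves tens of thousands of dollars a year before any cashback is counted; plugging in your own volumes and fees is the quickest way to size the opportunity for your business.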
What To Consider When Choosing A Digital Payment Service Provider
To move ahead with digital payments, you’ll need to find a provider that is a good fit for your business. As you make that decision, it’s important to consider a few factors.
• Security: Your provider should have a stellar reputation for following industry-standard security protocols and protecting its customers’ sensitive information. Ask for details about their encryption approach.
• Fees: Look for providers that are fully transparent about transaction fees, monthly fees and chargeback fees. Take time to identify potential hidden costs such as early cancellation fees, monthly minimums, charges in case of a breach, etc.
• Integration: Evaluate the effort required to integrate the digital payment service with your existing systems, such as your website, point-of-sale system or accounting software.
• Reputation: Consider the provider’s reputation in the industry. Search for reviews and mentions of the provider in the news. Ask for referrals to ensure they have a track record of providing reliable and trustworthy services.
• Geography and currency support: If you conduct business worldwide, verify that the provider supports the countries and currencies where you operate, as some may have operating limitations.
• Scalability: If your business is growing and you expect transaction volumes to increase significantly, make sure the provider can scale with you.
Taking The First Step With Digitized Payments
For companies looking to improve the bottom line with digital payments, there are several ways to incorporate them into your AP process.
ACH payments: This common digital payment type draws funds directly from a checking account and requires no signature, check printing or postage. There are some potential drawbacks, however: these payments are not real time, and once posted, they are irreversible. Additionally, ACH payments aren’t the most secure digital payment option, with 37% of companies reporting fraud attempts through ACH in 2021.
Wire transfers: Wire transfers occur in real time but are a costly digital payment option. They can also be risky because they are often irreversible. Nevertheless, they are the preferred method for international payments and large transaction amounts.
Virtual credit cards: VCCs are a quick and easy way to pay and be paid. They don’t require sharing sensitive account information, and many carry no transaction fees. With cashback perks, they can also become a source of recurring revenue. However, be aware of virtual cards with percentage-based transaction fees, as those can add up quickly on large invoices.
Fraud detection, faster transactions, reduced costs and a potential new income stream are all reasons to consider adopting digital payments. If you’ve started your AP digital transformation journey, don’t stop at the payment stage.
While most people are still gaining familiarity with the increasing role of distributed ledgers in the wider economy, developers are already delving deeper into providing more advanced functionality. Right now, software engineering teams are working on technology that will become integral to how the global economy functions in the years to come.
One critical innovation in Web3 that will facilitate wide-scale adoption is the subnet. While it is possible to explore these concepts in great depth, a high-level overview is enough here: once the complicated terminology is stripped away, the core concepts are actually very relatable.
What Is The Crypto Scalability Problem?
In crypto lingo, blockchains are divided into Layer One (L1) and Layer Two (L2). Again, this sounds more complicated than it is. L2 blockchains address specific problems that an L1 blockchain cannot cope with on its own; they sit “on top” of earlier blockchains.
A prime example is Bitcoin. The first cryptocurrency was a wonderful innovation for its time, but it quickly ran into massive scalability problems, with high fees and network congestion, so it needed an L2 solution, known as the Lightning Network. In the same way, Ethereum, another early crypto ecosystem, ran into issues and had to resort to an L2 solution in the form of Plasma.
Unfortunately, these L2 solutions are not doing what they are supposed to do. Ethereum is still the primary ecosystem on which dApps are built and NFTs are traded (as ERC-721 tokens), yet it still has massive fees, which is why developers and market newcomers are looking toward alternatives such as Avalanche.
L2 solutions are not resolving the problem of scalability. High fees and slow speeds are powerful indicators that cryptocurrency cannot yet go mainstream. If cryptocurrency is to achieve global adoption, it needs to be able to cope with far more people on the network. L2 solutions have not yet proven up to the mark; subnets are a much more versatile and effective technology.
Exploring Subnets Within Web3
Subnets are a game-changer for crypto scalability. A subnet is simply a sub-level network within a larger network. Each blockchain is itself a network – a set of nodes/servers that communicate with each other through shared protocols. A subnet takes attributes from the parent chain/larger network but has a specific use case.
Subnets are closely related to the concept of sharding. They are reliable, efficient and better at solving scalability than L2 blockchains. The major difference between subnets and sharding is that subnets can be created at will by customers and developers.
Whereas sharding is built into a chain’s architecture, subnets can be launched on demand, in principle without limit, to see which ones scale best while still applying the sharding model. In other words, you can create any number of subnets that take the best attributes from the initial blockchain network, and these subnets can be put to a variety of different uses.
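To make the parent-network/subnet relationship concrete, here is a small conceptual sketch in Python. It is not the API of Avalanche or any real chain; the class names, and the rule that a subnet validator must also validate its parent, are illustrative assumptions meant only to show how a subnet inherits the parent’s security while serving its own use case.

```python
# Conceptual sketch only: a "subnet" modeled as a subset of a parent
# network's validators with its own purpose. Not any real chain's API.

from dataclasses import dataclass, field

@dataclass
class Network:
    name: str
    validators: set[str]

@dataclass
class Subnet:
    name: str
    purpose: str                       # the subnet's specific use case
    parent: Network
    validators: set[str] = field(default_factory=set)

    def add_validator(self, node_id: str) -> None:
        # Assumed rule: a subnet validator must also validate the parent
        # network, so the subnet inherits the parent's security guarantees.
        if node_id not in self.parent.validators:
            raise ValueError(f"{node_id} must validate {self.parent.name} first")
        self.validators.add(node_id)

primary = Network("primary", validators={"node-1", "node-2", "node-3"})
game_chain = Subnet("game-chain", purpose="in-game assets", parent=primary)
game_chain.add_validator("node-1")
game_chain.add_validator("node-2")
print(game_chain.name, "validated by", sorted(game_chain.validators))
```

Each new Subnet in this toy model is cheap to create and scoped to a single job, which is the property that makes subnets attractive for scaling.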
Subnets Are Already Taking Over
Avalanche is a prominent blockchain that has recently launched subnets, allowing many newer Web3 projects to build their own ecosystems. One such project is Ayoken Labs, a digital collectibles marketplace that connects creators to global audiences; it has launched its token on the Avalanche C-Chain. With a vision to onboard 10 million new crypto users and digital collectible owners, Ayoken Labs aims to catalyze the mainstream adoption of crypto in emerging markets by onboarding creatives to the metaverse, and it selected Avalanche to assist with this venture.
Avalanche’s primary network comprises three built-in chains: the P-Chain for staking, the X-Chain for sending and receiving transfers, and the C-Chain for smart contracts (broadly speaking). Subnets extend the same idea, allowing distinct chains to be used for specific purposes while still being validated by the primary network and inheriting its benefits.
Ankr is another Web3 company that aims to be a major player in the subnet/sidechain space. A major Web3 infrastructure provider, Ankr launched the first Binance Smart Chain Application Sidechain (BAS) testnet along with Celer and NodeReal.
The BAS Testnet is a framework for creating side chains dedicated to applications in the BNB Chain ecosystem. Ankr is also the main infrastructure partner for Binance, Fantom, and Polygon and has helped these major firms to scale. Ankr also launched the first game on the Binance Application Sidechain (BAS).
It is currently one of the leading RPC providers and offers a cost-effective way to build, deploy and scale in Web3, with low latency and high resilience reported by many tracking tools.
As a major infrastructure provider, Ankr is also looking to launch subnets so that Web3 projects can grow from a stable, fast, and efficient foundation. This enables projects to test and grow without being “locked-in” to a previous blockchain.
Crypto Scalability Issues: A Thing Of The Past
Subnet functionality is going to become a core necessity to build the future of Web3 and resolve the crypto scalability issue. It resolves perhaps the most pressing issue observed with previous blockchains. Development teams can tweak and test in secure environments and can create as many subnets as they wish.
These innovations will ultimately help to grow the wider ecosystem and help to quickly replace legacy systems that are already obsolete.
By Victor Fabusola, Crypto Writer & Blockchain Journalist. Lover of mental models and conscious hip-hop.
How you design your IT house can be as important as how architects design physical homes.
Where you decide to run your applications is as important as what you run. What does your workload placement strategy look like? Home architects are very careful about their design choices. Many of their decisions, such as the best locations for load-bearing walls, support beams and other infrastructure, have long-term consequences.
Where do they put windows and skylights to deliver optimal sunlight? How do they situate bedrooms and bathrooms? What is the right density of wood, concrete and other materials required to construct safe walls, roofs and floors? Those are just the broad strokes; architects plan thousands of minor details as well, often well before raw materials are purchased.
Like their home-building counterparts, IT systems architects carefully design technology systems, which is why workload placement has emerged as a critical strategy for governing which applications and other resources run where.
IT has grown more complex, thanks to a proliferation of environments comprising public and private clouds, on-premises infrastructure and edge devices. IT leaders who placed assets across these locations have constructed a multicloud house, often without planning for the long-term impact on their organization.
For example, while it may have initially made sense to build a key business application in a certain IT environment, perhaps performance began to lag as usage grew. Maybe the goalposts for security and compliance shifted, forcing you to rethink your choice.
Whatever architectural concerns arise, where you decide to put what in your IT house can be as important as how architects design physical homes. CIOs are thinking about this a great deal, as 92% of 233 IT decision makers Dell surveyed said that they have a formal strategy for deciding where to place workloads. Half of those executed this strategy in the past year.
The public cloud grew rapidly, as engineers learned how easily it enabled them to launch and test new applications. Soon IT teams notched quick wins, including flexibility as they lifted and shifted existing business applications to the cloud.
Then came the overcorrection. Emboldened by the prospect of saving money while fostering greater agility as they innovated, many CIOs declared a “cloud-first” strategy. Those who were initially more measured in their adoption of cloud technology saw their colleagues migrate their entire IT estate and followed suit.
As workloads got more complex, it turned out that the public cloud-first stance was not always the best fit for the business. Hasty decisions had unanticipated ramifications, either in the form of escalating costs or failed migrations.
The reasons: Workloads are unique. Each application has its own set of business requirements and benefits. Just as the home architect must carefully weigh each design choice, CIOs must be intentional about where they put their software assets.
Variations on a multicloud
Let’s consider some examples where the right workload placement is tied to a business outcome. Cloud environments—public or private—make sense where you get huge bursts of data traffic. Cloud technologies enable you to quickly spin up compute resources and dial them down as requirements subside.
Retail ecommerce is a classic example. For brands selling clothing, footwear and other merchandise, holiday seasonality drives peaks and valleys in web and mobile sales. Large traffic spikes from October or November through Christmas subside, then stabilize.
Or think of a digital crossword puzzle published every weekend. With most people completing it on the weekend, traffic bursts on Saturday and Sunday before slowing over the remaining five days.
For such use cases, a public cloud that provides massive scalability may yield the desired business outcome.
Conversely, so-called “steady state” use cases—in which applications’ compute needs fluctuate little if at all—often run better on-premises, either in traditional IT infrastructure or in a private cloud. Thousands of these applications run without much deviation across business lines.
Think traditional general ledger software in ERPs. Travel and expense utilities. Software that governs data backups. Applications, such as those that monitor anomalous network traffic, often run locally for security reasons.
Other applications with disparate patterns and needs are emerging. Applications requiring minimal latency—think Internet of Things software—are moving to the edge for faster processing and cost efficiency.
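As a rough illustration of how these patterns translate into placement decisions, here is a toy scoring heuristic in Python. The workload attributes, venue names and rules are invented for the example; a real placement framework would weigh many more factors, such as cost, compliance and data gravity.

```python
# Toy illustration of a workload-placement heuristic. Attributes, rules
# and venue names are invented for the example, not a real framework.

from dataclasses import dataclass

@dataclass
class Workload:
    name: str
    bursty: bool             # large demand swings (e.g., seasonal ecommerce)
    latency_sensitive: bool  # needs processing close to where data is produced
    sensitive_data: bool     # strict security/compliance requirements

def place(w: Workload) -> str:
    if w.latency_sensitive:
        return "edge"                          # IoT-style workloads move toward the edge
    if w.sensitive_data:
        return "on-premises / private cloud"   # keep regulated data close to home
    if w.bursty:
        return "public cloud"                  # elastic capacity for traffic spikes
    return "on-premises / private cloud"       # steady-state default

for w in [
    Workload("ecommerce storefront", bursty=True, latency_sensitive=False, sensitive_data=False),
    Workload("general ledger", bursty=False, latency_sensitive=False, sensitive_data=True),
    Workload("factory sensor analytics", bursty=False, latency_sensitive=True, sensitive_data=False),
]:
    print(f"{w.name:26} -> {place(w)}")
```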
In Dell’s survey, 72% of IT decision makers said performance guided their decisions on where to place workloads, followed by data protection (63%) and security (58%). Venues include public clouds, data centers, colocation facilities and edge environments.
Workload types vary, but 39% of respondents said they had placed data protection workloads while 35% each said they had placed ERP and CRM systems.
Diverse workloads require fungible infrastructure
There are no absolutes in determining workload placement. Well, not in the way many IT leaders think. Every software asset has its own requirements, which will influence where you decide to place it.
Just as an architect decides how to situate walls, beams, rooms and other physical infrastructure, where an IT architect places assets matters. The wrong choices can have negative consequences.
These decisions aren’t easy nor should they be made lightly, as the ramifications of poor asset placement can impact your bottom line, make your business more vulnerable or prompt you to run afoul of compliance mandates.
Such diverse workloads require a flexible infrastructure that enables enterprises to move their applications and other workloads seamlessly across clouds, on-premises infrastructure and edge venues, based on their business requirements.
As-a-Service infrastructure, which includes on-premises equipment ordered on demand, can power these workloads to meet requirements for performance and availability, as well as your needs for simplicity, agility, and control. How will you lay the foundation for your IT assets?
Businesses in Australia and New Zealand that use data effectively can, on average, increase their annual revenue by 9.5%. This translates to an additional $38 million in annual revenue for large organisations in Australia with more than 200 employees.
According to a new AWS report prepared by Deloitte Access Economics, organisations with more than 100 employees improved their data capabilities in the previous year, with 34 per cent achieving Advanced or Master levels of data maturity, compared to 16 per cent in 2021.
Almost half of the organisations polled (48 per cent) stated that effectively capturing and analysing data can lead to increased productivity, followed by improved customer experience (45 per cent) and lower operating costs (42 per cent).
Finance and insurance companies scored the highest on the data maturity scale, with 50 per cent achieving Advanced or Master status, followed by manufacturing (45 per cent) and information, media, and telecommunications (33 per cent).
On the other hand, construction, healthcare and social assistance, and retail trade organisations have the lowest data maturity levels, with less than 20 per cent of surveyed organisations in these industries achieving Advanced or Master levels of data maturity.
Unusual challenges
While improving data maturity benefits businesses, large organisations in Australia and New Zealand continue to face challenges in climbing the data maturity ladder, with 42% of organisations achieving Basic and Beginner data maturity.
The main barrier organisations cited to using data and analytics was a lack of funding (44 per cent), which has been exacerbated by COVID-19, with 49 per cent of respondents reporting that competing priorities have resulted in fewer resources for data and analytics since the pandemic’s onset. Furthermore, 37 per cent of organisations cited poor data quality as a barrier to adopting more advanced data analytics.
“We are excited to see that more organisations have advanced their data capabilities, which will help them to drive productivity, and create a positive impact on the economy while delivering significant financial returns for their business,” said John O’Mahony, partner at Deloitte Access Economics.
“Investing in cloud solutions will help businesses further their data capabilities and leverage advanced analytics tools such as artificial intelligence, machine learning, and the Internet of Things to achieve data-driven insights.
“In fact, businesses that already use the cloud are 71 per cent more likely to have invested in artificial intelligence and machine learning capabilities versus organisations using on-premises data storage. To increase productivity and innovation, organisations should have a clear and practical roadmap for advancing on the data maturity ladder, invest in attracting and retaining talent, and leverage the right technology to reap the full benefits.”
According to the report, one-third of Australian and New Zealand organisations (35 per cent) cited a lack of skilled resources as a barrier to developing their data and analytics capabilities. To improve data maturity, 33 per cent of surveyed organisations prefer to upskill their current employees, followed by outsourcing to other organisations (24 per cent) and hiring skilled staff (24 per cent).
“Data can be an invaluable source of growth for organisations in Australia and New Zealand. The key is recognising its inherent value, analysing it effectively, and building a data-driven culture.
“No matter what stage organisations are in their data journey, AWS is committed to helping customers leverage the scalability, cost efficiency, and security of the cloud to scale their data projects and unify their data to drive productivity and innovate on behalf of their customers,” said Rada Stanic, chief technologist at AWS in Australia and New Zealand.
“Organisations will also benefit from building data skills within their teams, which may involve upskilling current staff through on-the-job training and training courses or collaborating with organisations such as our extensive network of AWS Partners.
“As organisations increase their data maturity, it will transform how they solve problems and build customer experiences, leading to breakthroughs in all industries, including healthcare, finance, retail trade, and manufacturing operations.”
A look at the differences between virtual desktops and cloud desktops, and why businesses need a fresh, cloud-native approach as hybrid working conditions continue to become the norm.
There’s been an explosion of commentary in recent months about the “future of work,” and much of it has reinforced a few key themes: most enterprises will embrace hybrid models in which more work is done outside the office, and to do so, they’ll leverage cloud technologies to make corporate assets and workflows available from anywhere on any device.
This is all fairly straightforward at a high level, but moving a bit closer to specific companies and specific business decisions, things can be more complicated. Specifically, for many organizations, the difference between virtual desktops and cloud desktops will be crucial.
I’ve seen this tension firsthand, having worked for years in both the virtual desktop infrastructure (VDI) space and the more recently emerged market for Cloud PCs. Let’s look at the differences between the two and why only the latter is suitable for enterprises’ needs, both today and in the future.
Legacy VDI: Like using a horse-drawn carriage instead of a fleet of supersonic trains
Legacy VDI usually involves an enterprise running Windows in its own data center so it can provide remote access to workers. This solves the problem of making enterprise resources securely available outside the office, but that’s just about all it solves.
Many organizations rely heavily on Windows frameworks, not only for applications but also security, authentication, and overall workflows. In the pre-pandemic world, this was fine because most employees came into the office, logged onto the corporate network, and received updates to keep their devices secure. But with more people working remotely for the foreseeable future, the corporate network’s ability to protect assets has significantly eroded.
Moreover, many of the people working from home need Windows but have moved to other endpoints, such as Chromebooks. This is especially true for personal devices, and it’s quite common for work-from-home employees to use their preferred devices for professional tasks at least some of the time. As a result, securing a company-issued machine isn’t helpful if hybrid or remote employees are going to access enterprise resources from other endpoints.
The IT challenge is thus to support machines not only outside the corporate network but also outside traditional PCs. Some kind of remote desktop is obviously part of the solution, but most existing approaches cannot match the scale of this challenge.
Physics can’t be cheated, so the farther workers are from that single data center, the worse latency and performance become. For example, let’s say a few years ago, a small group of contractors or remote employees needed access but were relatively close to the home office, so this wasn’t a big problem. However, as the number of users grew and their distance from the office increased, legacy VDI fell flat, offering slow, productivity-killing performance.
This situation, as unhelpful as it is, doesn’t even take into account that VDIs require a lot of IT resources to maintain—another potentially significant problem, given that most CEOs want their technical talent focused on strategic projects, not IT curation.
Because of these limitations of legacy on-premises VDIs, a variety of alternatives have emerged, but few of them meet the core demands for scalability, performance, and manageability. For instance, Desktop-as-a-Service (DaaS) offerings are often just a VDI in a managed service provider’s (MSP) data center.
This doesn’t solve the challenge of scaling remote resources up or down as the workforce changes, and depending on the MSP’s geographic footprint, may not do much to address performance concerns either.
Even running VDI in a top public cloud is not the panacea it may seem. When it comes to managing a VDI, it’s just like legacy VDI, only with hardware maintained by someone else. This means that if a business wants to extend remote access to workers in new regions, it will need to duplicate its VDI solution into those regions.
So while this approach may not require the same capital expenses as on-premises VDI, in terms of IT resources required for ongoing management, it is still costly and onerous.
How are Cloud PCs different?
Rather than attempting to retrofit legacy VDI for today’s landscape, businesses need a fresh, cloud-native approach—a Cloud PC.
By cloud-native, I mean a Software-as-a-Service (SaaS) model defined by the following:
Elastic scale and flexible pricing: New Cloud PCs can be spun up as needed in less than an hour, without the traditionally lengthy and complex provisioning processes, and enterprises only pay for the resources they use. Just as the number of Cloud PCs can be scaled up or down as needed, so too can the underlying compute and storage resources (see the sketch after this list). This gives the Cloud PC more potential power for intense and complicated tasks, compared to running the OS locally on each machine, let alone compared to legacy VDI.
Up-to-date security, low latency, and global availability: Because SaaS services are always connected to the network, they always offer the most up-to-date security resources, and because Cloud PCs can be deployed on public cloud networks in the region closest to each user, latency is a non-issue.
Comprehensive visibility: Because the OS runs in the cloud, IT can monitor usage for security and insights. Moreover, if an employee logs in with their own device rather than a corporate-issued machine, the SaaS model keeps a clean separation between personal and corporate data, which allows for endpoint flexibility without sacrificing security or employee privacy.
Multicloud compatibility: Some workloads run better on some clouds than others, and this relationship is not necessarily static over time, so enterprises need the flexibility to optimize and update their Cloud PC deployments over time based on their business requirements, employee preferences, and the strengths of different providers.
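As a rough sketch of the elastic-scale idea referenced above, the snippet below provisions a Cloud PC in the lowest-latency region for a user. The CloudPC class, the region list and the latency figures are all invented for illustration and do not represent Workspot’s or any vendor’s actual API.

```python
# Hypothetical sketch: pick the public-cloud region closest to a user and
# describe the Cloud PC that would be spun up there. Not a real vendor API.

from dataclasses import dataclass

# Assumed round-trip times (ms) measured from the user's location.
REGION_LATENCY_MS = {
    "us-east": 80,
    "eu-west": 25,
    "ap-south": 190,
}

@dataclass
class CloudPC:
    user: str
    region: str
    vcpus: int
    memory_gb: int

def provision_cloud_pc(user: str, vcpus: int = 4, memory_gb: int = 16) -> CloudPC:
    """Choose the lowest-latency region and return a record of the desktop."""
    region = min(REGION_LATENCY_MS, key=REGION_LATENCY_MS.get)
    # A real service would call the provider's control plane here; scaling the
    # fleet up or down is then a matter of repeating or releasing such calls.
    return CloudPC(user=user, region=region, vcpus=vcpus, memory_gb=memory_gb)

print(provision_cloud_pc("employee@example.com"))
```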
Rather than simply shifting the legacy model to the cloud without any real modernization or improvement, the Cloud PC approach reimagines what a remote desktop experience is and how it should be delivered. As hybrid working conditions continue to become the norm, the enterprises that choose the more forward-looking options now will be poised for success as their workplace models continue to evolve for years to come.
Amitabh Sinha is CEO at Workspot. Amitabh has more than 20 years of experience across enterprise software, end user computing, mobile, and database software.