The Real Cost of High Network Latency
- Cloud networking
- November 21, 2024
By David Sloan, Solutions Architect
High latency and inflexible bandwidth have a domino effect across your entire business, impacting your productivity and revenue. Here’s how to fix both.
When we talk about network latency, we’re referring to more than just the occasional lag or slow loading speed. At scale, the consistency of your network can be the difference between your enterprise making and losing money – and latency is where it all starts.
In this blog we’ll look at exactly what latency is and the impact it can have on your business, and share our top tips for reducing latency so you can keep your business operating at peak performance.
What is network latency?
Put simply, latency refers to the time it takes for a data packet to move from one point to the next. Because data travels through fiber at up to roughly two-thirds the speed of light, latency is measured in milliseconds. For example, if you send a data packet from a server in Sydney and 120 milliseconds later it arrives at a server in Los Angeles, that’s your latency between those two servers. That figure is the performance baseline your applications operate within.
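If you want a quick feel for this on your own machine, here’s a minimal sketch that approximates latency by timing a TCP handshake in Python. The target host is just a placeholder, and note that this measures round-trip time rather than the one-way trip described above:

```python
# A minimal sketch: approximate round-trip latency by timing how long a
# TCP handshake takes. The host is a placeholder; swap in a server you
# actually want to benchmark.
import socket
import time

def tcp_rtt_ms(host: str, port: int = 443, timeout: float = 5.0) -> float:
    """Return the TCP connect time to host:port in milliseconds."""
    start = time.perf_counter()
    with socket.create_connection((host, port), timeout=timeout):
        pass  # connection established; we only care about the handshake time
    return (time.perf_counter() - start) * 1000

if __name__ == "__main__":
    print(f"RTT to example.com: {tcp_rtt_ms('example.com'):.1f} ms")
```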
What you might consider low or high latency depends on the context. For example, online multiplayer games, live sports streams, and autonomous cars all depend on ultra-low latency to serve their purpose; the average ecommerce or logistics company, on the other hand, can get away with higher latency. If you’re in a global business communicating between continents, you also might accept the higher latency figures that come with that distance, compared to a local business using only servers in close proximity.
Your latency is primarily determined by the quality of the connection medium you use, but it can also be affected by bandwidth as well as the quality of the infrastructure your connection relies on. For example, a dedicated private connection on a redundant network underlay will have far more consistent latency than a shared public connection with single points of failure.
Latency and bandwidth
Latency is the amount of time it takes your data to travel from one point to the next. Bandwidth, on the other hand, refers to the maximum amount of data, or the capacity, you can transfer between these two points.
Measured in bits per second (typically Mbps or Gbps), bandwidth can have a direct and critical impact on your network speed, application throughput, and performance – over the same underlay, a 10 Gbps connection will transfer a large file faster than a 1 Gbps connection. Your bandwidth is influenced by your connection media, including the wireless connection type and physical infrastructure used. Depending on your connectivity method and the capabilities of your provider, you can also provision different bandwidth on different connections.
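To make that 10 Gbps versus 1 Gbps comparison concrete, here’s a back-of-envelope calculation. The file size is illustrative, and the figures ignore protocol overhead, congestion, and the drag latency puts on TCP throughput:

```python
# Back-of-envelope transfer times for a large file at different bandwidths.
# Figures are illustrative: they ignore protocol overhead, congestion, and
# the effect of latency on real TCP throughput.
FILE_SIZE_GB = 100  # hypothetical file size

for gbps in (1, 10):
    bits = FILE_SIZE_GB * 8 * 10**9   # gigabytes -> bits (decimal GB)
    seconds = bits / (gbps * 10**9)   # link speed in bits per second
    print(f"{FILE_SIZE_GB} GB over {gbps} Gbps: ~{seconds:.0f} s")

# Output:
# 100 GB over 1 Gbps: ~800 s   (over 13 minutes)
# 100 GB over 10 Gbps: ~80 s
```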
Like latency, the bandwidth you need depends on your industry, but for optimal network performance, it should be high enough to handle your maximum workloads. For example, 4K streaming services (like Netflix) or ecommerce giants (like Amazon) rely on super-high bandwidth to deliver seamless experiences to thousands of users simultaneously; a manufacturing company or mid-sized online retailer can likely get away with lower bandwidth.
In short, high bandwidth is good; high latency is not.
Why are latency and bandwidth sizing so important?
In a cloud-native, work-from-anywhere world where users are consuming more data more rapidly than ever, latency has become a critical network variable.
When you leave your network latency to chance, you can end up dealing with:
- Slow operations: Delays, jitter, and lag can proliferate across your network, slowing internal and external systems and applications.
- Inconsistent network performance: Fluctuating latency doesn’t just mean slower speeds; it also means application performance issues. With no way of knowing when, or how badly, your latency will fluctuate, you’re left unprepared for out-of-sequence packets in low-latency or real-time applications, impairing your productivity.
- Lost revenue: If your end users depend on low latency, or are left competing for inadequate bandwidth, their ability to interact with your product or make purchases suffers, losing you revenue and, over time, damaging your reputation. In industries like gaming and ecommerce, for example, latency spikes or bandwidth oversubscription can mean in-game purchases don’t happen and shopping carts are abandoned.
- Higher cloud costs: Delays, lag, and downtime are also expensive to remediate – so expensive that recent research suggests downtime costs organizations up to $9,000 a minute.
- Poor SEO: Slow loading speeds and poor application response times deduct serious points from your search engine optimization score, affecting the online visibility of your brand.
It’s obvious that having low, consistent network latency and flexible on-demand bandwidth is important, but how do you get it?
How to reduce your network latency
1. Avoid the public internet
Many companies still use the public internet for their cloud connectivity; a common band-aid is to use internet-based VPN tunnels. But while VPN tunnels can provide some protection for data in transit, they still place users at the mercy of the internet’s instability. This can result in slow and unreliable connectivity in times of peak demand or cyberattacks, as well as the requirement for constant outage mitigation from IT departments.
Reducing reliance on the public internet is the most significant step you can take toward improving your latency and acquiring rightsized bandwidth. Using a private network backbone gives your enterprise access to connectivity unaffected by the fluctuations of public internet, resulting in a far more stable and secure connection. A good private connectivity provider will also have a highly redundant, easily scalable network that reroutes your workloads in an outage, so you can reduce downtime.
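Before making the move, it’s worth baselining how inconsistent your current path actually is. Building on the handshake-timing idea from earlier, this sketch takes repeated samples and reports the spread, with standard deviation as a rough jitter proxy; the host and sample count are placeholders:

```python
# Baseline a connection's consistency by sampling TCP handshake times and
# reporting the spread; standard deviation serves as a rough jitter proxy.
import socket
import statistics
import time

def sample_rtts(host: str, port: int = 443, samples: int = 20) -> list[float]:
    rtts = []
    for _ in range(samples):
        start = time.perf_counter()
        with socket.create_connection((host, port), timeout=5.0):
            pass
        rtts.append((time.perf_counter() - start) * 1000)
        time.sleep(0.5)  # space samples out so they don't share one congestion burst
    return rtts

if __name__ == "__main__":
    rtts = sample_rtts("example.com")
    print(f"min {min(rtts):.1f} ms / avg {statistics.mean(rtts):.1f} ms / "
          f"max {max(rtts):.1f} ms / stdev {statistics.stdev(rtts):.1f} ms")
```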
Learn how Framestore supercharged its efficiency by switching from VPN tunnels to Megaport.
2. Choose a connectivity provider with scalable bandwidth
When organizations sign up with a traditional connectivity provider, they’re usually required to lock into a long-term contract with a set bandwidth. This leaves them with a difficult decision: Do they choose bandwidth that is lower and therefore more affordable, or bandwidth that will account for the largest workloads they might need to accommodate, but is more expensive?
The former means latency, bandwidth capacity, and business operations will be severely impacted in times of peak demand; the latter means they’re overpaying for bandwidth when they’re not using it. But there’s a way to get the best of both: by choosing a scalable connectivity provider.
When you provision connectivity with a scalable provider, you’ll gain the ability to adapt to real-time traffic fluctuations while paying only for what you use. Plus, you’ll future-proof your network by giving it the agility to dynamically grow on demand.
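To see the shape of that trade-off, here’s a toy cost model comparing a fixed peak-sized circuit against hourly scaling. Every number in it is hypothetical, not actual provider pricing:

```python
# Toy comparison of fixed vs. scalable bandwidth costs over one month.
# All prices and traffic figures are hypothetical; consult your provider
# for real rates.
PRICE_PER_GBPS_MONTH = 300  # hypothetical flat rate per Gbps per month
HOURS_IN_MONTH = 730

# Hypothetical demand: 1 Gbps most of the time, 10 Gbps during a 40-hour peak.
baseline_gbps, peak_gbps, peak_hours = 1, 10, 40

# Option A: lock in 10 Gbps all month so peaks are always covered.
fixed_cost = peak_gbps * PRICE_PER_GBPS_MONTH

# Option B: scale to 10 Gbps only for the peak hours, 1 Gbps otherwise,
# billed pro rata by the hour.
hourly_rate = PRICE_PER_GBPS_MONTH / HOURS_IN_MONTH
scalable_cost = (baseline_gbps * (HOURS_IN_MONTH - peak_hours)
                 + peak_gbps * peak_hours) * hourly_rate

print(f"Fixed 10 Gbps:      ${fixed_cost:,.0f}/month")
print(f"Scalable on demand: ${scalable_cost:,.0f}/month")  # ~$448 vs $3,000
```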
Connect physical, virtual, and edge PoPs globally with scalable bandwidth on demand.
3. Extend your network to the edge
Enterprises can significantly reduce latency on their vital business applications by deploying scalable private connectivity for their cloud and data center connections, but there’s another component you should account for – the last mile.
If you haven’t considered SD-WAN for your enterprise yet, or you’ve implemented SD-WAN but haven’t explored extending your fabric closer to your end users, the latency improvements you’ll likely see are the most compelling reason to do so.
By streamlining your enterprise connectivity from branch to cloud, an SD-WAN solution with optimized connectivity creates a more reliable, low-latency network to minimize last-mile bandwidth bottlenecks. Combined with its tightened security and simplified network management, SD-WAN as part of a larger SASE solution can also be a game changer.
To optimize your SD-WAN, you can use a virtual network function hosting service like Megaport Virtual Edge (MVE) to get fast private connectivity from branch to cloud.
4. Virtualize your multicloud
With 85 percent of organizations projected to embrace a cloud-first principle by 2025, it’s now critical to have a network architecture that supports stable latency between cloud providers.
Kiwi.com faced this exact challenge. Launched in 2012, the Czech Republic-based travel broker is a “virtual global supercarrier”, delivering door-to-door travel bookings online. With an average of 100 million search queries and 35,000 seat bookings made on Kiwi.com every day, infrastructure continuity is crucial.
Kiwi.com was originally using the public internet to connect its back-end cloud services, and had orchestrated IPsec tunnels to allow its AWS and Google Cloud environments to communicate. However, the team was constantly let down by poor network performance, instability, and lag during peak demand periods. They knew they needed network infrastructure that could flexibly support their systems’ capacity demands, so they turned to Megaport Cloud Router (MCR), Megaport’s on-demand virtual routing service.
With MCR, you can connect at Layer 3 instantly, without hardware, spinning up dedicated virtual connections between your cloud, IaaS, and SaaS environments to give your multicloud lower latency and improved performance.
By spinning up MCR, Kiwi.com established secure, reliable connectivity between its disparate cloud environments, transferring data directly between AWS and Google Cloud without having to hairpin traffic. Using MCR in conjunction with Megaport’s Virtual Cross Connects (VXCs), the team built a flexible network infrastructure with the latency needed to support its capacity and performance demands.
Learn more about how Kiwi.com reduced its latency with Megaport Cloud Router.
5. Interconnect your data centers
If your organization is geographically dispersed, or employees and customers need to access your systems across several locations, your applications and data will move through multiple data center locations or providers – and hairpinning data between these endpoints can significantly increase your latency.
Data center interconnection integrates your data center endpoints on a single connectivity underlay, allowing resources to be shared directly between them regardless of location or provider. This could be as simple as a single metro Ethernet Virtual Connection (EVC) loop to connect multiple providers directly to your branch, or it could be as intricate as a global end-to-end architecture that interconnects dozens of branches, data center locations, and providers.
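To see why hairpinning hurts, here’s a rough one-way propagation calculation at roughly two-thirds the speed of light in fiber. The route and distances are illustrative, approximate great-circle figures, and real fiber paths are longer and add routing and queuing delay, so treat these as floors:

```python
# Rough one-way propagation delay: direct path vs. a hairpinned path.
# Light in fiber travels at roughly two-thirds of c (~200,000 km/s), i.e.
# about 200 km per millisecond. Distances below are approximate.
FIBER_KM_PER_MS = 200

def propagation_ms(km: float) -> float:
    return km / FIBER_KM_PER_MS

direct = propagation_ms(6_300)  # Sydney -> Singapore, direct (~6,300 km)
hairpin = propagation_ms(12_000) + propagation_ms(14_100)  # via Los Angeles

print(f"Direct:  ~{direct:.0f} ms one-way")   # ~32 ms
print(f"Hairpin: ~{hairpin:.0f} ms one-way")  # ~130 ms
```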
Look for data center interconnect providers that have super-high bandwidth options, as well as an established global ecosystem of provider partners to ensure you’re covered for future business expansions.
Reduce latency and scale bandwidth with Megaport
The far-reaching impacts of poor network speed and performance make latency and scalable bandwidth critical factors in your enterprise network. But improving network performance doesn’t have to be difficult. By orchestrating a private, scalable, end-to-end network with virtualized connectivity, you can positively impact your company’s bottom line.
Whether you need to interconnect your multicloud, hybrid cloud, data centers, SD-WAN, virtual routers, or virtual firewalls, Megaport’s high-performance network underlay will give you unmatched speed.
Our private global network of over 930 enabled locations reduces your latency, hops, and jitter. You can also provision and scale your global network in just 60 seconds—simply point, click, and connect—and manage all of your connections in our easy-to-use portal.
Learn more about Megaport
See our latency figures