
Future-proofing the WAN as Speeds Approach the 100 Gbps Threshold

“Future-proofing,” in any context, can be uncomfortable, as it pushes the teams involved — be they construction workers, city planners or enterprise IT — to think about worst-case scenarios alongside the realities of their given budget and time constraints.

In many cases, it’s impossible to truly gauge what the future holds: Homebuilders in the early 20th century, for instance, may not have used lead paint had the dangers of lead been widely understood, while city planners of the period likely would have chosen copper pipes over lead ones.

But modern enterprise network teams have a bit of an advantage: while there will inevitably be surprises that throw a wrench in their best-laid plans, the direction of enterprise networking is fairly clear to see.

Enterprise networks are more widely distributed and more heavily trafficked today than ever before, and the trend is only set to continue. Already, the global public cloud services market is projected to top $206 billion by the end of 2019, according to Gartner, up 17.3 percent in just one year over the size of the market in 2018.

SaaS adoption has also never been higher in the enterprise: Salesforce estimates that this market will reach $113.1 billion in 2021, almost twice the revenue seen in 2017 ($58.8 billion).

It’s not just the volume of SaaS apps that makes things difficult to future-proof, either; the variety of these tools, and the unique demands each places on the network, also throws more considerations into the mix. Video conferencing platforms, for instance, require a lot of bandwidth, while VoIP needs low latency to deliver consistent call quality and performance.

To top it all off, enterprises have never been more distributed: the availability of a wide variety of SaaS tools that can connect far-flung employees has made organizations less beholden to the “HQ-centric” network infrastructure popular in the past.

All of this boils down to enterprise networks that are now supporting rapidly accelerating bandwidth requirements and, in many cases, 100 Gbps speeds across their largest network links.

So what are the considerations enterprise IT teams need to bear in mind when future-proofing their networks for more cloud, SaaS, and ever-higher connection speeds?

“Internet-first” to meet the needs of decentralization and remote locations

According to Cisco, 80 percent of enterprise employees and customers “work in or are served in branch offices.”

Historically, enterprise networks were largely hardware-defined and dependent strictly on MPLS connectivity, where teams could purchase and manage pathways between users and their apps. Today, that approach is simply untenable: an MPLS-only WAN can be cost-prohibitive for a highly decentralized enterprise, especially as traffic between users and apps has never been higher.

As a result, teams are taking an Internet-first approach as “direct-to-Internet” connectivity supplants MPLS-only designs for connecting apps and users at remote locations. The reasons are clear: the internet is ubiquitous, and almost every branch office already partners with an ISP that can provide paths over the internet, rather than requiring teams to purchase dedicated MPLS circuits for each site.

Enterprises also don’t have to pay to scale MPLS link speeds ever higher to meet their traffic demands. The trade-off is the loss of explicit control over network pathways, but modern monitoring techniques can restore that visibility.

Harnessing speeds of 100 Gbps+

While internet-first solutions solve one part of the future-proofing equation for enterprise IT, things are very different when you’re looking at the core links that carry the highest traffic loads across the network. While just a decade ago, teams may have hardly imagined a world where connectivity between data centers, large campuses, or across the entire WAN backbone would scale up to 100 Gbps or higher, it’s now a fact of life for the world’s largest businesses.

The challenges with 100 Gbps links between core parts of the WAN start with some simple economics. When companies deploy higher-capacity WAN links (anything from 25 to 40 to 100 Gbps speeds), they’re investing substantial resources to deliver business-critical applications to users, customers, or both.

While bandwidth is getting cheaper compared to just a few years ago, a single 100 Gbps circuit can run in the tens of thousands of dollars per month. And when teams connect critical sites, redundancy is required, which can cause the investment to jump to hundreds of thousands of dollars per year.
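To put rough, purely hypothetical numbers on it: at $20,000 per month per circuit, a redundant pair of 100 Gbps links already works out to roughly $480,000 per year before any ancillary costs.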

In this context, future-proofing hinges on how teams can retain visibility into high-speed connections to protect both performance and the investment itself.

Questions teams need to ask themselves when balancing these considerations include:

  • How do I ensure I’m getting what I’m paying for when it comes to achievable capacity and performance?
  • How do I detect performance issues like the well-known performance thieves of loss, latency, and jitter, or the lesser-known and harder-to-measure problems like poor routes or capacity drops? (One lightweight way to sample these metrics is sketched below.)
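
To make that second question concrete, here is a minimal, purely illustrative sketch in Python of a lightweight active probe that samples loss, latency, and jitter over a path. The target endpoint is a placeholder, and the sketch assumes a UDP echo responder (RFC 862) is listening at the far end; it is not any particular vendor’s method.

```python
# Purely illustrative active probe: sends timestamped UDP packets to an echo
# responder and derives loss, average latency, and jitter from the replies.
# PROBE_TARGET is a placeholder; it assumes a UDP echo service (RFC 862)
# is reachable at the far end of the path being measured.
import socket
import statistics
import time

PROBE_TARGET = ("probe-target.example.com", 7)  # hypothetical echo endpoint
PROBE_COUNT = 50
TIMEOUT_S = 1.0

def run_probes():
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(TIMEOUT_S)
    rtts_ms, lost = [], 0
    for seq in range(PROBE_COUNT):
        sent_at = time.monotonic()
        sock.sendto(f"probe-{seq}".encode(), PROBE_TARGET)
        try:
            sock.recvfrom(2048)
            rtts_ms.append((time.monotonic() - sent_at) * 1000.0)
        except socket.timeout:
            lost += 1
        time.sleep(0.05)  # pace probes so the measurement itself stays lightweight
    loss_pct = 100.0 * lost / PROBE_COUNT
    avg_ms = statistics.mean(rtts_ms) if rtts_ms else float("nan")
    # Jitter approximated as the mean absolute difference between consecutive RTTs
    jitter_ms = (statistics.mean(abs(a - b) for a, b in zip(rtts_ms, rtts_ms[1:]))
                 if len(rtts_ms) > 1 else 0.0)
    return loss_pct, avg_ms, jitter_ms

if __name__ == "__main__":
    loss, latency, jitter = run_probes()
    print(f"loss={loss:.1f}%  latency={latency:.1f} ms  jitter={jitter:.2f} ms")
```

Run continuously from a branch or data center vantage point, probes like this establish a baseline, and deviations from that baseline are what flag the “performance thieves” before users notice them.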

Visibility ties it all together

For all of this future-proofing to succeed, teams need visibility across the entire network, especially in areas where they don’t own or control the network pathways outright.

On the decentralization front, the trouble with switching from dedicated MPLS links to access over the public internet is that the latter is essentially a global third-party network that no single entity owns or controls.

Making sure that users enjoy the benefits of direct-to-internet connectivity calls for comprehensive, always-on monitoring that can actively and passively track network activity without itself becoming a drain on total network capacity, especially in the midst of any network rearchitecting or transformation.
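
As one illustration of what visibility over paths you don’t own can look like, the sketch below (again purely illustrative) wraps the standard traceroute utility to record the hop-by-hop route toward a SaaS endpoint and fingerprints the hop sequence, so route changes between runs stand out. The destination hostname is hypothetical.

```python
# Purely illustrative path-visibility check: wraps the system traceroute
# utility to capture the hop-by-hop route toward a SaaS endpoint, then hashes
# the hop IPs so route changes between successive runs are easy to spot.
# DESTINATION is a placeholder hostname, not a real service.
import hashlib
import subprocess

DESTINATION = "saas-app.example.com"  # hypothetical SaaS endpoint

def trace_path(destination):
    """Return the hop lines reported by traceroute (numeric output, one query per hop)."""
    result = subprocess.run(
        ["traceroute", "-n", "-q", "1", destination],
        capture_output=True, text=True, timeout=120,
    )
    return [line.strip() for line in result.stdout.splitlines()[1:] if line.strip()]

def path_fingerprint(hops):
    """Hash only the hop addresses (second column) so RTT noise doesn't change the hash."""
    ips = [line.split()[1] for line in hops if line.split()]
    return hashlib.sha256("\n".join(ips).encode()).hexdigest()[:12]

if __name__ == "__main__":
    hops = trace_path(DESTINATION)
    print(f"{len(hops)} hops, path fingerprint {path_fingerprint(hops)}")
    for hop in hops:
        print(" ", hop)
```

Comparing fingerprints over time is a cheap way to notice when an ISP reroutes traffic onto a longer or lossier path, which is exactly the kind of third-party change an MPLS-era team never had to watch for.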

But where high-speed, high-capacity links are involved, i.e., your 100 Gbps data center connections and backbone links, “lightweight” monitoring that can deliver actionable insights is even more critical. Techniques like flooding a link to measure its capacity are one option, but they can’t deliver even that baseline information without disrupting production traffic on the link while the test runs.

Instead, teams need a solution that can continuously monitor capacity, alongside latency and loss, without disrupting the network or consuming too much bandwidth of its own in the process.
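
One way to keep that footprint near zero is to lean on counters the network already maintains. The sketch below is a minimal, assumption-laden example that polls Linux interface byte counters from /proc/net/dev and reports throughput against a nominal 100 Gbps link; the interface name and capacity are placeholders, and a production tool would typically pull the equivalent counters from routers and switches via SNMP or streaming telemetry instead.

```python
# Purely illustrative passive utilization tracker: polls Linux interface byte
# counters from /proc/net/dev and reports throughput against a nominal link
# capacity, without generating any probe traffic of its own.
# INTERFACE and LINK_CAPACITY_BPS are placeholders for illustration.
import time

INTERFACE = "eth0"            # hypothetical interface facing the high-speed link
LINK_CAPACITY_BPS = 100e9     # nominal 100 Gbps
POLL_INTERVAL_S = 5

def read_counters(interface):
    """Return (rx_bytes, tx_bytes) for one interface from /proc/net/dev."""
    with open("/proc/net/dev") as stats:
        for line in stats:
            if line.strip().startswith(interface + ":"):
                fields = line.split(":", 1)[1].split()
                return int(fields[0]), int(fields[8])
    raise ValueError(f"interface {interface!r} not found")

def monitor():
    prev_rx, prev_tx = read_counters(INTERFACE)
    while True:
        time.sleep(POLL_INTERVAL_S)
        rx, tx = read_counters(INTERFACE)
        rx_bps = (rx - prev_rx) * 8 / POLL_INTERVAL_S
        tx_bps = (tx - prev_tx) * 8 / POLL_INTERVAL_S
        print(f"rx {rx_bps / 1e9:.2f} Gbps ({100 * rx_bps / LINK_CAPACITY_BPS:.1f}%)  "
              f"tx {tx_bps / 1e9:.2f} Gbps ({100 * tx_bps / LINK_CAPACITY_BPS:.1f}%)")
        prev_rx, prev_tx = rx, tx

if __name__ == "__main__":
    monitor()
```

Paired with the lightweight active probing shown earlier, counter-based utilization data gives teams a continuous read on how close a 100 Gbps link is to saturation without ever pausing the link to find out.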

Even if enterprises aren’t running at 100 Gbps today, partnering with monitoring solutions that can seamlessly scale to that level is critical, since higher-capacity links are all but inevitable. Doing so arms IT with the foresight needed to get ahead of any potential performance issues that could put a major hitch in their “future-proofing” efforts.
