Four key innovations that prepared the internet for COVID-19

Patrick McManus

Distinguished Engineer, Fastly

In an effort to stay informed, stock up on necessities, connect with family and friends, educate our children, and find moments of reprieve amid the COVID-19 pandemic, the world is turning to the internet now more than ever. For some, this perceived increase in traffic and shift in usage patterns may call into question the internet’s ability to handle such a global, almost overnight change. And — had this pandemic occurred just 10 short years ago — the internet would not have been as well-suited to the challenge as it is today. While today’s internet is accessible from nearly everywhere, that wasn’t always the case. Not long ago, you had to log on at work to work, log into AOL to chat, or go to the university library because it was the only place to access licensed research publications. In just the past decade, a number of innovations have created working models better adapted to where modern users need bandwidth and how they use it.

Here are four crucial pieces of internet architecture — only widely adopted over the last 10 years — that are creating the capacity for us to live, work, and learn from home in unprecedented conditions.

HTTP-based adaptive bitrate video became a best practice.

The advent of HTTP-based adaptive bitrate streaming protocols like HTTP Live Streaming (HLS) and Dynamic Adaptive Streaming over HTTP (MPEG-DASH) has allowed video players to dynamically adjust their quality, and thus their bandwidth consumption, across a wide variety of network conditions. Using these well-standardized protocols, different player implementations interact with a rich ecosystem of over-the-top video-on-demand and live video services. As loads on the network shift, players automatically keep the video going while making sure they don’t monopolize the available capacity and overwhelm other parts of the network. Had this pandemic occurred just over a decade ago, our ability to stream video for entertainment, educational, or informational purposes would have been severely compromised.
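
To make the idea concrete, here is a minimal sketch of the core decision an adaptive player makes: measure recent throughput and pick the highest-bitrate rendition that fits comfortably within it. The rendition names, bitrates, and safety margin below are illustrative assumptions, not the logic of any particular HLS or DASH player.

```python
# Minimal sketch of adaptive bitrate (ABR) rendition selection.
# Renditions, safety margin, and throughput values are illustrative only.

# Available renditions from a master playlist/manifest: (name, bits per second)
RENDITIONS = [
    ("240p", 400_000),
    ("480p", 1_500_000),
    ("720p", 3_000_000),
    ("1080p", 6_000_000),
]

SAFETY_MARGIN = 0.8  # only budget ~80% of measured throughput to avoid stalls


def choose_rendition(measured_throughput_bps: float) -> str:
    """Pick the highest-quality rendition that fits within the bandwidth budget."""
    budget = measured_throughput_bps * SAFETY_MARGIN
    chosen = RENDITIONS[0][0]  # always fall back to the lowest rendition
    for name, bitrate in RENDITIONS:
        if bitrate <= budget:
            chosen = name
    return chosen


if __name__ == "__main__":
    # Simulate the network degrading mid-stream: the player steps down automatically.
    for throughput in (8_000_000, 2_500_000, 900_000):
        print(throughput, "bps ->", choose_rendition(throughput))
```

Real players also factor in buffer occupancy and switch more gradually, but the principle is the same: the client adapts to the network rather than demanding a fixed bitrate from it.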

Zero-trust networking provided security regardless of location.

Having to go into the office to access the server was a consequence of perimeter-based network security — treating a physical location’s network and IP addresses as a safe zone inside of which anyone can gain access. With the arrival of smartphones, ubiquitous broadband, and IoT came a shift toward greater mobility, and the need and desire to sign on to that same network from anywhere. Virtual private networks (VPNs) were a step in that direction, but they have several weaknesses: their performance is often spotty, they’re difficult to manage, and they rely on an often unreliable mechanism to bring the user virtually inside the perimeter. As more businesses moved their files and assets to the cloud, the resources everyone relied on were no longer tied to the perimeter, and a new best practice — based on identity rather than trust in the physical network — became necessary. That’s where zero-trust networking came in: location-independent security based on cryptography, authentication, and certificate management. Today, it has expanded beyond the office server — nearly all SaaS platforms require logins built on this security principle. Without this shift from private networks to internet everywhere, the option to work or learn outside of a traditional physical location would be less available and less secure.
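
The contrast between the two models is easy to see in code. The sketch below is a simplified illustration, not any vendor’s implementation: the network prefix, the shared signing key, and the token format are all hypothetical stand-ins for real certificate management or an identity provider.

```python
# Sketch contrasting perimeter trust with a zero-trust style identity check.
# The network prefix, signing key, and token format are illustrative only.
import hashlib
import hmac

TRUSTED_OFFICE_NETWORK = "203.0.113."   # perimeter model: trust by source address
SIGNING_KEY = b"demo-signing-key"        # in practice: per-user keys, certs, or an IdP


def perimeter_allows(client_ip: str) -> bool:
    # Old model: anyone "inside" the office network is trusted.
    return client_ip.startswith(TRUSTED_OFFICE_NETWORK)


def zero_trust_allows(user: str, signature_hex: str) -> bool:
    # Identity model: every request carries cryptographic proof of who is
    # making it, verified regardless of where it comes from.
    expected = hmac.new(SIGNING_KEY, user.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature_hex)


if __name__ == "__main__":
    token = hmac.new(SIGNING_KEY, b"alice", hashlib.sha256).hexdigest()
    print(perimeter_allows("198.51.100.7"))   # False: outside the perimeter, denied
    print(zero_trust_allows("alice", token))  # True: identity verified from anywhere
```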

The cloud offered a more scalable approach to dedicated servers.

Just a decade ago, dedicated servers, along with their associated resources and maintenance, were an essential part of day-to-day business. Today, you can count the number of companies that build their own large data centers on a few fingers. It’s no wonder: from 2009 through 2019, global spending on cloud services grew at an astounding 56%. From storage to processing power to capability and complexity, the cloud is a much more efficient and scalable approach to every facet of the traditional server, with storage and computing rentable for just pennies an hour. The rise of edge computing brings the latency-sensitive parts of applications very close to the end user, carrying the best properties of the old on-premise model into the era of dynamic, networked computing. With the onus now on cloud providers to understand the capacity needs of their collective customers and adjust accordingly, the odds of capacity shortages are considerably lower than they would have been 10 years ago.

Content delivery networks took the load off of origin servers.

A content delivery network (CDN) works by sitting “in front of” the website that your browser or app connects to, caching and optimizing the responses, and ensuring that the full load never actually reaches the site’s origin server. CDNs do this by placing servers all over the world, in well-connected places. They peer with local networks to ensure the most direct, cost-effective connections to end users, and negotiate the best possible rates with networks where peering isn’t available. Their servers are capable of handling incredible amounts of traffic, are finely tuned to be performant, secure, and stable, and are constantly monitored.

This arrangement not only takes load off of the website and reduces perceived latency by bringing the site closer to the end user, but it also helps all of the intervening networks — making both the site and the internet at large more resilient.
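
At its core, the mechanism is a cache sitting between users and the origin. The toy sketch below (with a stubbed origin fetch and an arbitrary TTL, not Fastly’s actual implementation) shows why a popular page only has to be fetched from the origin once per cache lifetime, no matter how many people request it.

```python
# Toy sketch of the caching idea behind a CDN edge server.
# The origin fetch is stubbed and the TTL is arbitrary; real CDNs honor
# Cache-Control headers, handle invalidation, and run at far larger scale.
import time

CACHE_TTL_SECONDS = 60
_cache = {}  # url -> (stored_at, body)


def fetch_from_origin(url: str) -> str:
    # Stand-in for the expensive trip back to the site's own servers.
    return f"<html>origin response for {url}</html>"


def handle_request(url: str) -> str:
    now = time.time()
    entry = _cache.get(url)
    if entry and now - entry[0] < CACHE_TTL_SECONDS:
        return entry[1]               # cache hit: the origin never sees this request
    body = fetch_from_origin(url)     # cache miss: go to the origin once...
    _cache[url] = (now, body)         # ...then serve everyone else from the edge
    return body
```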

When you visit, for example, The New York Times, you’re not actually contacting the Times’ own servers. You’re contacting a CDN server that acts on the company’s behalf — in this case, Fastly. This offers a number of benefits. If everyone who wanted to read the Times, which has generously removed its paywall for COVID-19 coverage, went directly to its servers in New York, the paper would need an enormous number of servers to handle the load on a typical day, not to mention during an emergency or an epidemic when everyone wants to read the news.

Because this approach is so effective, many major websites and apps use a CDN. While a few of the world’s largest tech companies have built their own, most use a commercial CDN that serves many customers, making the economics of providing such tremendous connectivity more workable.

And while CDNs themselves have been around for more than a decade, there are some significant, more recent ways they’ve fundamentally changed the architecture of the internet. For one, transit providers no longer need to provision continually larger links, because so many of the videos, webpages, applications, and other services on the internet today are actually delivered by CDN servers local to the end users. Cloud security is another: DDoS protection and mitigation, in fact, requires a CDN footprint to be effective. Another interesting evolution is how CDNs have become a point of leverage for upgrading the protocols that keep the internet secure and modern. We no longer have to rely on millions of websites keeping their stacks up to date, because CDNs now provide the mechanism for those enhancements.

Today’s internet: more open with the capacity to unite.

Thankfully, the internet has evolved away from a compartmentalized structure, allowing us the flexibility and capacity necessary for this moment in time — a way to be together at all times, while keeping a safe distance. Today’s challenges will force it to adapt and deploy different resources in different proportions in different places. But dynamically adapting to the needs of its users is, and will always be, a core property of the modern internet.