Performance
Fastly’s edge modules that will power your ecommerce site
Ecommerce companies face challenges that a content delivery network built on Varnish can help address. To stay competitive and relevant, ecommerce websites and applications need to be able to target specific content to specific users (based on location, language, or browsing preferences), tailor content delivery depending on which device a consumer is using, and prioritize shoppers based on actions they’ve taken within a site or app.
Understanding Your End User with Performance Monitoring
End users are the consumers who ultimately use a service, and monitoring their performance is often overlooked. See how monitoring end-user performance improves the user experience.
The benefits of using Varnish
Varnish is an open source web accelerator designed for high-performance content delivery. Learn more about what Varnish is and how Fastly's Varnish can help accelerate your content.
Introducing Soft Purge
Today, we’re excited to announce Soft Purge, a new purging feature that allows you to easily mark content as outdated (stale) instead of permanently deleting it from Fastly’s caches. With Soft Purge, you have the same real-time purging options that you get with Instant Purge: purge by URL or by surrogate key.
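As a rough illustration, the sketch below issues soft purges over HTTP from Python, assuming the usual Fastly conventions: a PURGE request with a `Fastly-Soft-Purge: 1` header for a single URL, and the `/service/{service_id}/purge/{key}` API endpoint for a surrogate key. The service ID, token, URL, and key are placeholders.

```python
import requests

API_TOKEN = "YOUR_FASTLY_API_TOKEN"   # placeholder credential
SERVICE_ID = "YOUR_SERVICE_ID"        # placeholder service ID

def soft_purge_url(url: str) -> requests.Response:
    """Mark a single cached URL as stale (soft purge) instead of deleting it."""
    return requests.request("PURGE", url, headers={"Fastly-Soft-Purge": "1"})

def soft_purge_surrogate_key(key: str) -> requests.Response:
    """Mark every cached object tagged with a surrogate key as stale."""
    endpoint = f"https://api.fastly.com/service/{SERVICE_ID}/purge/{key}"
    headers = {"Fastly-Key": API_TOKEN, "Fastly-Soft-Purge": "1"}
    return requests.post(endpoint, headers=headers)

if __name__ == "__main__":
    # Placeholder URL and key for illustration only.
    print(soft_purge_url("https://www.example.com/articles/index.html").status_code)
    print(soft_purge_surrogate_key("homepage").status_code)
```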
Cache hit issues? Fix them
The cache hit ratio (or hit ratio for short) is the ratio of hits to cacheable requests (hits and misses combined). There's also cache coverage, the ratio of cacheable requests to all requests (cacheable requests and passes). In most cases, you'll want both to be as high as possible, since misses and passes cause load on your origins, and are slower than cache hits.
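A minimal sketch of those two ratios, computed from hypothetical hit, miss, and pass counters:

```python
def cache_ratios(hits: int, misses: int, passes: int) -> tuple[float, float]:
    """Compute the two ratios described above from request counters.

    hit ratio      = hits / (hits + misses)
    cache coverage = (hits + misses) / (hits + misses + passes)
    """
    cacheable = hits + misses
    total = cacheable + passes
    hit_ratio = hits / cacheable if cacheable else 0.0
    coverage = cacheable / total if total else 0.0
    return hit_ratio, coverage

# Example: 9,000 hits, 500 misses, 500 passes
# -> hit ratio ~ 0.947, cache coverage = 0.95
print(cache_ratios(9_000, 500, 500))
```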
How our solid-state drives result in cost savings for customers
Solid-State Drives (SSDs) are semiconductor-based storage devices that save persistent data by using NAND flash memory. See how Fastly manages caches with SSDs and how you can save.
Boost Cache Efficiency with Origin Log Analysis
If you want to increase the efficiency of your Varnish (or Fastly) cache, you need to figure out what traffic is not cached. By definition, any traffic that reaches your origin is not cached, and thus worthy of investigation.
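As a rough starting point, a small script like the sketch below can tally origin requests by path; the combined-log field positions and the log file name are assumptions and would need adjusting to your own log format.

```python
from collections import Counter

def top_origin_paths(log_path: str, limit: int = 20) -> list[tuple[str, int]]:
    """Count origin requests per URL path.

    The most frequent paths are the best candidates for caching work, since
    anything appearing in the origin log missed or bypassed the cache.
    Assumes a combined-log-style line whose request line is quoted, e.g.
    ... "GET /products/42 HTTP/1.1" ...
    """
    counts: Counter[str] = Counter()
    with open(log_path) as fh:
        for line in fh:
            try:
                request_line = line.split('"')[1]   # 'GET /products/42 HTTP/1.1'
                path = request_line.split(" ")[1]   # '/products/42'
            except IndexError:
                continue                            # skip malformed lines
            counts[path.split("?")[0]] += 1         # group by path, ignore query strings
    return counts.most_common(limit)

for path, count in top_origin_paths("origin_access.log"):
    print(f"{count:8d}  {path}")
```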
Accelerating Rails, Part 2: Dynamic HTTP Caching
In the second part of our series on accelerating Rails, I'll cover configuring a few Fastly features, Varnish and the Varnish Configuration Language (VCL), and strategies for caching dynamic content, all targeted at the Rails developer.
Normalizing the Host Header
In the continued quest to increase cache hit ratios, the chant is: "Normalize, normalize, normalize." Less variation in your requests means you have a higher chance of getting hits. This month's highlight is the Host header.
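The post itself works at the edge in VCL, but the idea can be sketched in Python. The specific rules below (lowercasing, dropping default ports, folding a hypothetical www alias onto the bare domain) are illustrative examples, not Fastly defaults.

```python
def normalize_host(raw_host: str) -> str:
    """Illustrates Host-header normalization: fewer variants means more cache hits."""
    host = raw_host.strip().lower()
    if host.endswith(":80") or host.endswith(":443"):
        host = host.rsplit(":", 1)[0]      # drop default ports
    if host.startswith("www."):
        host = host[len("www."):]          # fold the www alias onto one name
    return host

# 'Example.COM:80', 'www.example.com', and 'example.com' all normalize to
# 'example.com', so they share one cache object instead of three.
assert normalize_host("Example.COM:80") == "example.com"
assert normalize_host("www.example.com") == "example.com"
```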
Accelerating Rails, Part 1: Built-in Caching
Caching is one strategy for easing scaling pains that I often see Rails developers overlook. Starting out with caching can be confusing, because the terminology and documentation can be convoluted, especially if you're not an expert.
Using ESI, Part 2: Leveraging VCL and ESI to Use JSONP
In this post, I’m going to discuss how you can leverage ESI and VCL (Varnish Configuration Language, the domain-specific language that powers Fastly’s edge scripting capabilities) to use JSON responses, even when they’re loaded from another site. This is useful in many cases, including various analytics and social sharing instances.
Large File Delivery Improved with Streaming Miss Support
Today, we’re excited to announce two related features that lower bandwidth costs and reduce origin load for Fastly customers, resulting in faster downloads for their users: Streaming Miss and Large File Support.
New Gzip Settings and Deciding What to Compress
Fastly recently conducted an extensive analysis of which resources should be compressed. Today, the results of that analysis are reflected in the Fastly app, which allows our customers to adopt better gzip settings. This not only makes our customers' websites faster, but it will also reduce monthly bandwidth charges.
Hooman Beheshti talks caching
Hooman Beheshti, VP of Technology at Fastly, recently gave a talk at Velocity NYC 2014 about the challenges CDNs face with dynamic content and how businesses can use programmatic means to fully integrate their applications with their CDN.
Stale-While-Revalidate, Stale-If-Error Available Today
Fastly is excited to announce that as of today, we support stale-while-revalidate and stale-if-error. As a company dedicated to building a better Internet, we work hard to identify and support new standards that move the Web forward. Read on for an explanation of how these Cache-Control extensions make the Web faster and more reliable for browsers and CDNs, and check out the documentation for these features.
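To make the directives concrete, here is an illustrative Cache-Control value an origin might send; the durations are arbitrary examples, not recommendations.

```python
# Illustrative Cache-Control value an origin might send; durations are arbitrary.
cache_control = ", ".join([
    "max-age=86400",              # fresh for one day
    "stale-while-revalidate=60",  # for 60s after expiry, serve stale while revalidating in the background
    "stale-if-error=604800",      # if the origin is erroring, keep serving stale for up to a week
])
print(f"Cache-Control: {cache_control}")
# Cache-Control: max-age=86400, stale-while-revalidate=60, stale-if-error=604800
```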
Using ESI, Part 1: Simple Edge-Side Include
Fastly customers can use ESI to cache pages that contain both cacheable and uncacheable content (such as user-specific information).
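As a rough sketch of the idea, the cacheable page shell below contains an `<esi:include>` tag that Fastly expands at the edge with a separately fetched, user-specific fragment. The fragment path and surrounding markup are hypothetical.

```python
# Hypothetical page shell: everything here is cacheable for all users, and the
# <esi:include> tag marks where an uncacheable, user-specific fragment
# (e.g. an account menu) is stitched in at the edge.
PAGE_SHELL = """\
<html>
  <body>
    <header>
      <esi:include src="/fragments/account-menu" />
    </header>
    <main>... product listing shared by every visitor ...</main>
  </body>
</html>
"""

def render_shell() -> str:
    """Return the shared shell; Fastly expands the ESI include per request."""
    return PAGE_SHELL

print(render_shell())
```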
Best practices for using the Vary header
Vary is one of the most powerful HTTP response headers. However, if used incorrectly, it can cause problems for developers. Understand Vary header best practices to reduce mistakes and improve performance.
Caching “Like” and “Share” Buttons
In a blog post about caching with tracking cookies, I explained how Fastly’s edge scripting language allows businesses to cache things that were previously uncacheable as well as send data back via our real-time logging system. But what happens when you need to cache something more complicated, such as a product that handles user interaction?
API Caching, Part III
In this, our final API Caching installment, we're going to explore how to use Surrogate Keys to reduce the overall complexity of caching an API.
Building a Fast and Reliable Purging System
At Fastly, we’re always working to make our systems faster and more reliable. One of the more difficult problems we’ve faced is efficient cache invalidation across our global network, or as we call it: Instant Purging. When content changes, our customers issue a purge request, which we then need to deliver to each of our cache servers. The system that handles these purge requests is codenamed Powderhorn.