Understand how CORS impacts your performance monitoring

What is the impact of CORS on performance monitoring?

CORS, which stands for Cross-Origin Resource Sharing, has an impact on performance monitoring. The goal of this article is to explain what CORS is, why it is used, and how it impacts the way you can monitor the performance of digital services.

Modern web applications make extensive use of third-party resources. Examples include CDNs that deliver content like images and stylesheets, and third-party services called through APIs. A single webpage can therefore be the result of a multitude of requests to different locations, called origins. This is especially true for SPAs (Single Page Applications), which mainly rely on calls to API backends.

A web content origin is defined by the scheme (protocol), host (domain), and port of the URL used to access it.

same origins vs different origins

So as soon as the browser performs requests to different origins, CORS comes into play.

Let’s discuss why CORS has an impact on performance monitoring.

What is CORS – Cross-Origin Resource Sharing?

CORS is a mechanism that allows a server to indicate any origins other than its own from which a browser should permit loading resources.

As such, CORS is mainly a security mechanism: the server declares the policy and the browser enforces it. It allows legitimate cross-origin requests made on your behalf while blocking requests made by potentially malicious JavaScript.

Before CORS was standardized, all scripted calls to external origins (fetch or Ajax calls) were blocked by the same-origin policy.

The following picture shows a simple example of a webpage that contains two images and some styling.

The first HTTP GET request (GET /) determines the content origin (domain-a.com).

Rendering the page correctly requires the download of a stylesheet (layout.css) that is located on a third-party service via a cross-origin request (domain-b.com).

Finally, one image (image-1) is stored on the origin server, while the second (image-2) is stored on a third domain (domain-c.com).

So in this small example, fetching the stylesheet as well as image-2 is performed through cross-origin requests.

How does CORS work?

So far, you have seen that a server can provide the browser with a list of origins from which it can request content. The browser typically requests this content through Ajax or fetch calls triggered by JavaScript.

In the previous example, the domain-b.com and domain-c.com servers should allow the browser to fetch resources for requests initiated from domain-a.com (e.g. by a JavaScript file downloaded from the domain-a.com server and executed in the browser).

But how does a server grant this permission to a browser?

It does this through specific HTTP headers.

CORS in HTTP responses

The server grants the browser permission to access resources from other origins. It does this by using the Access-Control-Allow-Origin HTTP header.

For example, you can allow requesting resources from any origin as follows: Access-Control-Allow-Origin: *

You can also narrow down to a specific origin: Access-Control-Allow-Origin: https://example.com
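As a sketch, the server-side logic for choosing this header can look like the following (Python, with a hypothetical allowlist; real frameworks usually provide this via CORS middleware):

```python
# Hypothetical server-side helper deciding which CORS header to send back.
ALLOWED_ORIGINS = {"https://example.com", "https://app.example.com"}  # assumed allowlist

def cors_response_headers(request_origin):
    """Return the CORS headers for a response, or {} if the origin is not allowed."""
    if request_origin in ALLOWED_ORIGINS:
        # Echo the specific origin instead of "*" so credentialed requests stay possible.
        return {"Access-Control-Allow-Origin": request_origin}
    return {}

print(cors_response_headers("https://example.com"))
print(cors_response_headers("https://evil.example"))  # {} -> the browser blocks access
```

Note that when the header is absent, the request still reaches the server; it is the browser that refuses to hand the response to the calling script.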

CORS in HTTP requests

There are two types of HTTP requests in a CORS context: “simple” and “preflight”.

The “simple” request

In the following example, the client sends a GET request to the home page of the website example2.com.

This request comes from https://example.com, as specified by the Origin request header.

In response, the server sends back an Access-Control-Allow-Origin header with the value *, meaning that the resource can be accessed by any origin.

Simple CORS request

The “preflight” request

You will often encounter the preflight type of request when the browser intends to use HTTP methods like PUT, POST or DELETE. These methods obviously represent a bigger security risk as they typically modify web content on the server.

Unlike “simple” requests, “preflight” requests start with the browser sending an HTTP request using the OPTIONS method to determine whether the actual request is safe to send. In other words, the preflight request checks whether the main request will be allowed.

Since the OPTIONS method cannot modify the resource itself, it is a safe method to use.

Preflight request

This example shows the basic process:

  1. The client first sends a preflight request using the OPTIONS method. Among others, this request includes the following information:
    • The origin of the request (https://example.com)
    • The HTTP method the browser will use in the main request (POST as mentioned by the Access-Control-Request-Method header)
    • The HTTP header the browser will use in the main request (X-PINGOTHER as mentioned by the Access-Control-Request-Headers header)
  2. The server responds to the browser, confirming that the main request will be allowed. For this, the response includes the following headers:
    • Access-Control-Allow-Origin as previously explained
    • Access-Control-Allow-Methods that provides the types of HTTP methods that will be allowed
    • Access-Control-Allow-Headers that provides the list of headers the main request is allowed to use (X-PINGOTHER and Content-Type in this example)
  3. Based on the preflight response, the client performs the main request
  4. As permission was verified by the preflight, the server sends a response to this main request. Note that the Access-Control-Allow-Origin header is present in all server responses.
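The server side of this exchange can be sketched as follows (a simplified Python model with assumed allowlists; the header names are the real CORS headers, everything else is illustrative):

```python
# Simplified model of a server answering a CORS preflight (OPTIONS) request.
ALLOWED_METHODS = {"GET", "POST"}
ALLOWED_HEADERS = {"x-pingother", "content-type"}

def preflight_response(origin, requested_method, requested_headers):
    """Return the preflight response headers, or None if the main request is denied."""
    if requested_method not in ALLOWED_METHODS:
        return None
    if not {h.lower() for h in requested_headers} <= ALLOWED_HEADERS:
        return None
    return {
        "Access-Control-Allow-Origin": origin,
        "Access-Control-Allow-Methods": "GET, POST",
        "Access-Control-Allow-Headers": "X-PINGOTHER, Content-Type",
    }

resp = preflight_response("https://example.com", "POST", ["X-PINGOTHER", "Content-Type"])
print(resp["Access-Control-Allow-Origin"])  # https://example.com
print(preflight_response("https://example.com", "DELETE", []))  # None: request denied
```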

The following site provides code samples to implement CORS on different platforms.

What does CORS mean for your performance monitoring?

Now that you know why CORS exists and how it works, let’s have a look at the way performance metrics are reported in this context.

Major Real User Monitoring solutions use the W3C Navigation Timing and Resource Timing APIs as a basis for collecting performance metrics:

The following article provides more details about the page load process.

The problem that cross-origin resources introduce here is that, by default, their timing entries only expose timestamps for the following attributes:

  • startTime (will equal fetchStart)
  • fetchStart
  • responseEnd
  • duration

This is to protect users’ privacy (so an attacker cannot load random URLs to see where you’ve been).

This means that all of the following attributes will be zero for cross-origin resources:

  • redirectStart
  • redirectEnd
  • domainLookupStart
  • domainLookupEnd
  • connectStart
  • connectEnd
  • secureConnectionStart
  • requestStart
  • responseStart

In addition, all size information will be “0” for cross-origin resources:

  • transferSize
  • encodedBodySize
  • decodedBodySize

Last but not least, cross-origin resources that are redirected will have a startTime that only reflects the final resource: startTime will equal fetchStart instead of redirectStart. This means the time of any redirect(s) will be hidden from the resource timing.

So as you can see, using CORS has an impact on performance monitoring!

The solution to monitor the performance of CORS resources: enabling TAO (Timing-Allow-Origin)

Luckily, if you control the domains you are fetching other resources from, you can bypass the default protection mechanism by sending a Timing-Allow-Origin header in HTTP responses, informing the browser that it is allowed to process performance metrics.

If you are serving any of your content from another domain, e.g. from a CDN, it is strongly recommended that you set this header for those responses. Thankfully, third-party libraries for widgets, ads, analytics, etc. are starting to set this TAO header on their content.
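Conceptually, the browser's decision can be sketched like this (a simplified Python model of the check; the field names follow the Resource Timing API, the rest is illustrative):

```python
# Simplified model of how a browser decides whether to expose detailed timings.
RESTRICTED_FIELDS = {"redirectStart", "domainLookupStart", "connectStart",
                     "requestStart", "responseStart", "transferSize"}

def visible_timings(entry, requesting_origin, timing_allow_origin=None):
    """Zero out restricted fields unless Timing-Allow-Origin permits the origin."""
    allowed = timing_allow_origin is not None and (
        timing_allow_origin.strip() == "*"
        or requesting_origin in [o.strip() for o in timing_allow_origin.split(",")]
    )
    if allowed:
        return dict(entry)
    return {k: (0 if k in RESTRICTED_FIELDS else v) for k, v in entry.items()}

entry = {"fetchStart": 12.0, "requestStart": 30.0, "responseStart": 80.0, "transferSize": 5120}
print(visible_timings(entry, "https://example.com"))  # restricted fields become 0
print(visible_timings(entry, "https://example.com", "https://example.com"))  # full detail
```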

In 2013, only about 1.2% of the resources embedded on a few hundred thousand websites presented the Timing-Allow-Origin (TAO) header. In 2018, this represented about 15% of the total websites. So there is still a long way to go.

Finally, if you still want to get these performance metrics, there are browser extensions out there that allow the browser to bypass this restriction. This is obviously to the detriment of security! So only consider this workaround a temporary solution when you need to troubleshoot specific degradations.

Your next steps on CORS performance monitoring

CORS is a security mechanism that is enforced at the browser level. If you control the different servers, make sure to allow cross-origin requests by adding the Access-Control-Allow-Origin header to HTTP responses in order to avoid errors when fetching resources.

Using CORS has an impact on performance monitoring. Monitoring the performance of cross-origin requests is a challenge because the metrics are not reported by default.

If you use third-party services like CDNs, make sure they allow you to activate the Timing-Allow-Origin header in HTTP responses.

If this is not possible but you really need to collect these performance metrics, temporarily enable specialized browser extensions that disable this security protection.

How HTTP redirects impact web performance

How HTTP redirections impact your web performance

HTTP redirections are frequent and they have an impact on your web performance. This article covers what HTTP redirections are, how they work, and what their impact is on the user experience.

What is an HTTP redirect?

The very first thing to know is what HTTP redirects are and how they work.

As its name implies, an HTTP redirect means that a client’s request for a certain piece of web content can be redirected to another location.

As illustrated here, the client requests the content of the /doc location. The server replies back with the HTTP status code 301 (Moved Permanently) and provides the new location /doc_new. Then the client has to initiate a new HTTP request to the new location.
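In code, the server-side decision boils down to returning a 3xx status and a Location header. A minimal sketch (the path mapping is hypothetical):

```python
# Minimal model of a server-side redirect: old paths map to new locations.
MOVED = {"/doc": "/doc_new"}  # hypothetical mapping

def handle_request(path):
    """Return (status_code, headers) for a request; 301 if the path has moved."""
    if path in MOVED:
        return 301, {"Location": MOVED[path]}
    return 200, {}

print(handle_request("/doc"))      # (301, {'Location': '/doc_new'})
print(handle_request("/doc_new"))  # (200, {})
```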

When is an HTTP redirect useful?

There are different reasons why you may use HTTP redirections.

First of all, you can use redirections to guarantee up-to-date content delivery:

  • You can expand the reach of your site by redirecting requests for yourcompany.com to www.yourcompany.com
  • You want to adapt your website content structure and do not have any control over the third-party links to this content
  • Your organization merged with another, you have moved to another domain, and you want to ensure existing links or users’ bookmarks still reach you

Secondly, you can use redirections to deliver content that best fits the users’ devices and location. For example, you can redirect all mobile device users that access your company site (yourcompany.com) to the mobile optimized version (m.yourcompany.com). You can also customize delivered services based on the user’s location.

Finally, you can use redirections for security reasons. For example, you certainly want to redirect all HTTP requests to the corresponding secure HTTPS connections!

The different types of HTTP redirections

There are two main categories of redirections: server-side and client-side.

Server-side redirects

In a server-side redirection, the server uses specific HTTP status codes to instruct the client to redirect the request to the alternative URL.

The most used redirection codes are summarized in the following table:

Most used redirect status codes
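For reference, the most common codes can be captured in a small lookup table (the descriptions paraphrase the HTTP specification):

```python
# Most used server-side redirect status codes and their behavior.
REDIRECT_CODES = {
    301: "Moved Permanently (method may change to GET)",
    302: "Found / moved temporarily (method may change to GET)",
    303: "See Other (the follow-up request always uses GET)",
    307: "Temporary Redirect (method and body are preserved)",
    308: "Permanent Redirect (method and body are preserved)",
}

def is_permanent(code):
    """Permanent redirects can be cached and update bookmarks and search indexes."""
    return code in (301, 308)

print(is_permanent(301), is_permanent(307))  # True False
```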

Client-side redirects

As an alternative to HTTP redirects triggered at the server level, two other redirection techniques can be used:

  • HTML redirections with the <meta> element
  • JavaScript redirections via the DOM

These techniques are both client-side redirections.

One of the advantages of using these techniques is that you do not need control over the server to make them work.

The main disadvantage is that they cannot be used for all types of resources. HTML redirection, for example, only works with HTML documents, not with content like images or other resource types.

How HTTP redirects impact your web performance

Redirecting clients’ requests to other locations can have a significant impact on performance.

Let’s analyze what happens when a user first accesses a website over HTTP and gets redirected to the corresponding secure HTTPS connection.

The different connection steps are as follows:

  1. The client sends a DNS request to get the server’s IP address
  2. The client establishes a TCP connection with the server
  3. The server processes the request and sends back the HTTP redirect status code
  4. The server sends the rest of the HTTP response content to the client (not much as this is a pure redirection instruction)
  5. Seeing that the redirection implies a secure HTTPS connection, the client has to establish a new TCP connection to the server
  6. The client establishes the secure connection through the TLS handshake process and sends the request for the required content
  7. The server processes the request and sends a response (status code 200 OK) back to the client
  8. The server sends the rest of the HTTP response content to the client.

During the first four steps, the user does not see anything rendered on the screen. Depending on network conditions, DNS service performance, as well as server performance, this can greatly impact the end user experience.
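The cost of this sequence can be approximated by counting round trips, a common back-of-the-envelope model. This sketch assumes one round trip each for DNS, TCP setup, the redirect exchange, the second TCP connection and the final request, plus two round trips for a TLS 1.2 handshake:

```python
def http_to_https_redirect_cost(rtt_ms, tls_round_trips=2):
    """Approximate delay before the first content byte for an HTTP -> HTTPS redirect."""
    dns = 1          # DNS lookup
    tcp_1 = 1        # first TCP handshake
    redirect = 1     # request + 301/302 response
    tcp_2 = 1        # second TCP handshake (new HTTPS connection)
    request = 1      # final request + response
    total_round_trips = dns + tcp_1 + redirect + tcp_2 + tls_round_trips + request
    return total_round_trips * rtt_ms

print(http_to_https_redirect_cost(50))                     # 350 ms with TLS 1.2
print(http_to_https_redirect_cost(50, tls_round_trips=1))  # 300 ms with TLS 1.3
```

Real connections may do better (DNS caching, TLS session resumption) or worse (packet loss), but the model shows why a redirect roughly doubles the setup cost.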

So it is important to monitor these redirection events.

How redirects are reported in the W3C Navigation Timing API

The W3C Navigation Timing API provides the redirection timing as one of the web performance metrics.

The redirect time is defined by the time spent between redirectStart and redirectEnd events (when these events exist).

OK, at first it seems quite clear. But as seen before, redirections often imply two consecutive TCP sessions. In the Navigation Timing API, the DNS and TCP events happen after the potential redirect event. So how does it all work?

Well, let’s see:

From this timeline example, you can notice that:

  • The complete initial HTTP user session is part of the redirection timing calculation
  • The initial HTTP session individual metrics (DNS, TCP, …) are “lost”. They are not reported separately and the API does not provide anything related to this first HTTP request

What about multiple consecutive HTTP redirections?

All sessions that lead to the final request will be part of the redirection timing calculation, as illustrated hereunder:

Next steps

If you found this article insightful, you will certainly like our other articles about web performance monitoring.
If you wonder how this can be reported by Kadiska’s solution, just take a look here!

How network latency drives digital performance

Why network latency drives digital performance

Each time a packet traverses a network to reach a destination, it takes time! Network latency drives performance.

As this blog article explains, latency is the delay for a packet to travel on a network from one point to another. Different factors, like processing, serialization and queuing, drive this latency. With new hardware and software capabilities, you can potentially reduce the impact these elements have on latency. But there is one thing you will never improve: the speed of light!

As Einstein outlined in his theory of special relativity, the speed of light is the maximum speed at which all energy, matter, and information can travel. With modern optical fiber, you can reach around 200,000,000 meters per second, the theoretical maximum speed of light (in a vacuum) being 299,792,458 meters per second. Not too bad!

Considering a communication between New York and Sydney, the one-way latency is about 80ms. This value assumes a direct link between both cities, which will of course usually not be the case. Packets traverse multiple hops, each one introducing additional routing, processing, queuing and transmission delays. You’ll probably end up with a latency between 100 and 150ms. Still pretty fast, right?

Well, latency remains the performance bottleneck for most websites! Let’s see why.

The TCP/IP protocol stack

As of today, the TCP/IP protocol stack dominates the Internet. IP (Internet Protocol) is what provides the node-to-node routing and addressing, while TCP (Transmission Control Protocol), is what provides the abstraction of a reliable network running over an unreliable channel.

IP and TCP were published as RFC 791 and RFC 793 respectively, back in September 1981. Quite old protocols…

Even if new UDP-based protocols are emerging, like HTTP/3 (discussed in one of our future articles), TCP is still in use today for the most popular applications: the World Wide Web, email, file transfers, and many others.

One could argue that TCP cannot cope with the performance requirements of today’s modern systems. Let’s explain why.

The three-way handshake

As stated before, TCP provides an effective abstraction of a reliable network running over an unreliable channel. The basic idea behind this is that TCP guarantees packet delivery. So it cares about retransmission of lost data, in-order delivery, congestion control and avoidance, data integrity, and more.

In order for all of this to work, TCP gives each packet a sequence number. For security reasons, the first packet does not use sequence number 1. Instead, each side of a TCP-based conversation (a TCP session) sends a randomly generated ISN (Initial Sequence Number) to the other side, providing the first packet number.

This information exchange occurs in what is called the TCP “three-way handshake”:

  • Step 1 (SYN): The client wants to establish a connection with the server, so it sends a packet (called a segment at the TCP layer) with the SYN (Synchronize Sequence Number) bit set, which informs the server that it intends to start communicating. This first segment includes the client’s ISN (Initial Sequence Number).
  • Step 2 (SYN/ACK): The server responds to the client’s request with the SYN and ACK bits set. It provides the client with its own ISN and acknowledges receipt of the client’s first segment (ACK).
  • Step 3 (ACK): The client finally acknowledges receipt of the server’s SYN/ACK segment.

At this stage, the TCP session is established.

The impact of TCP on total latency

Establishing a TCP session costs 1.5 round trips. So, taking the example of a communication between New York and Sydney (a 200 to 300ms round-trip time), this introduces a setup delay typically between 300 and 450ms!
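As a quick sketch of that arithmetic (using the realistic New York–Sydney one-way latencies quoted earlier):

```python
def tcp_setup_delay_ms(one_way_latency_ms, handshake_round_trips=1.5):
    """Delay added by the TCP three-way handshake (1.5 round trips)."""
    rtt_ms = 2 * one_way_latency_ms
    return handshake_round_trips * rtt_ms

# Realistic New York - Sydney one-way latency: 100 to 150 ms
print(tcp_setup_delay_ms(100))  # 300.0
print(tcp_setup_delay_ms(150))  # 450.0
```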

This is without taking secured communications (HTTPS through TLS) into consideration, which introduces additional round trips to negotiate security parameters. This part will be covered in a future article.

How to reduce the impact of latency on performance?

So how to reduce the impact of latency on performance if you cannot improve the transmission speed?

In fact, you can leverage two factors:

  1. The distance between the client and the server
  2. The number of packets to transmit through the network

There are different ways to reduce the distance between the client and the server. First, you can make use of Content Delivery Network (CDN) services to deliver resources closer to the users. Secondly, caching resources makes data available directly from the user’s device: in this case, there is no data at all to transfer through the network.

In addition to reducing the distance between the client and the server, you can also reduce the number of packets to transmit on a network. One of the best examples is the use of compression techniques.

Nevertheless, the optimization you can achieve has limits, because of how transmission protocols work… The TCP handshake process does require 1.5 round trips. The only way to avoid this is to replace TCP with another protocol, which is the trend we’ll certainly see in the future.

How HTTP compression works, when to use it and how to optimize web performance

Using HTTP compression to improve web performance

For any network-based application, reducing the transfer time (needed to transfer data from the server to the client) always improves web performance.

You can achieve better page load times by reducing:

  1. the frequency the server has to send data using caching techniques
  2. the size of the transferred data using HTTP compression

As the title suggests, HTTP compression squeezes the size of a file on the server before its transmission to the client. This way, HTTP compression makes file transfers faster and contributes to reducing page load times.

Let’s review how you can implement compression to define the best way to optimize web performance in your context.

Two types of HTTP compression

There are two major categories of compression:

  • lossless
  • lossy

When data is compressed with lossless compression, the original data can be fully retrieved after decompression. This is of course important when compressing text (the HTTP body). Other file formats that use this compression technique are PNG and GIF images, PDF and SWF documents, as well as WOFF fonts.

In contrast, decompressing lossy compressed data does not allow you to retrieve the original data. Instead, you get a surrogate which resembles the original either closely or loosely depending on the quality of the compression algorithm used. This technique is typically used in formats that allow a certain drop in details that will not be noticed by humans. Examples are MP3 audio, MPEG video and JPEG image file formats.

What should you compress to improve web performance?

As compression brings significant web performance improvements, we recommend activating it for all files, with the exception of already-compressed files.

Trying to compress already-compressed files like ZIP archives and GIF images can indeed be counterproductive. Attempting to compress them risks:

  • Increasing the size of the response message
  • Wasting CPU time on the server

HTTP body compression

The most commonly used compression formats are gzip, Brotli (br) and deflate.

To decompress the files properly, both client and server must agree on which compression format(s) are supported and used.

For this to happen:

  • The client sends the Accept-Encoding request HTTP header that advertises which content encoding it is able to understand.
  • In return, the server indicates which compression algorithm was used for a given response by means of the Content-Encoding response HTTP header.
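Both the negotiation and the gain can be illustrated with Python's standard gzip module (the encoding-selection helper is a simplified sketch that ignores quality values):

```python
import gzip

def choose_encoding(accept_encoding, supported=("br", "gzip", "deflate")):
    """Pick the first server-supported encoding the client advertises (q-values ignored)."""
    advertised = [token.split(";")[0].strip() for token in accept_encoding.split(",")]
    for encoding in supported:
        if encoding in advertised:
            return encoding
    return "identity"  # no common format: send uncompressed

print(choose_encoding("gzip, deflate"))  # gzip -> sent back as Content-Encoding: gzip

# Repetitive text compresses very well:
body = b"<p>Hello, compression!</p>" * 100
compressed = gzip.compress(body)
print(len(body), len(compressed))  # the compressed body is far smaller
```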

Beyond HTTP body compression: HTTP header compression

So far we have addressed compression of the content carried in the HTTP body.

What about the HTTP header itself? As the HTTP header is obviously sent with each and every request/response, reducing its size can greatly improve web performance.

The HTTP/2 protocol has introduced HTTP header compression, called HPACK. Prior attempts used GZIP but introduced security breaches: that type of implementation was vulnerable to attacks like CRIME.

Note the limitation of this technique: it is only efficient when the headers do not change from message to message. More details can be found here.

Takeaways: how to reduce page load times with compression

As a conclusion, together with content caching, compression is one of the basic and cost effective ways to improve web performance.

  1. First, you should systematically compress non-compressed content like text.
  2. Furthermore, make use of HTTP/2 when possible. Among other benefits, it provides a secure HTTP header compression capability.
  3. Finally, you should monitor how your web servers and third party service providers like CDNs deliver content. This is key to optimize your digital services delivery and guarantee optimal user experience.

How caching content improves your web performance

How to improve web performance with content caching

Each time you deliver resources closer to the users, you optimize your web performance. Fetching resources through a network always implies additional delays, mainly driven by the network conditions (network latency, packet loss and size of the content to be transferred). Content caching improves web performance by making resources available without any network delay. 

What is content caching?

Caching content is one of the most popular web performance optimization techniques used today. It means temporarily storing data close to the users so that it does not have to be fetched through the network every time.

This technique of course applies only to content that does not (or seldom) change through time (static images, brand assets, stylesheets and scripts that do not change often, …).

There are two main categories of caching: “browser-side” and “server-side”.

Browser-side caching

When a user first visits a site, the browser stores certain items like CSS files and images for a specific amount of time. The browser caches this data in local storage so that it is immediately available and served from the cache in case the browser needs it during a subsequent visit.

Using this technique improves web performance by avoiding:

  • fetching resources through the network
  • systematically requesting the origin server (the server that hosts the web resources) for content (less stress on the server)

Nevertheless, this technique only works after a first visit has been made.

Server-side caching

Server-side caching is a technique implemented on servers that reside between the user and the origin server. These are proxy servers that act as reverse proxies by intercepting and serving content to users before requests reach the origin server.

As is the case for browser-side caching, this helps reduce the stress on the origin server. Furthermore, as it serves many visitors from the same cache, it does not require individual first-time visits to the origin server before delivering content from the cache.

How content caching works

In order for the origin server to communicate caching-related information to the browser, HTTP header directives are used. These are options you can add to the HTTP header.

Three major techniques are typically used to control how caching behaves: freshness, cache-control and validators.


Freshness with the Expires header

The Expires directive is a basic means of controlling caches. It is supported by the majority of web browsers.

The Expires header contains the date/time after which the response is considered stale.

This basic concept has two major limitations:

  • Because there’s a date involved, the clocks on the web server and the browser’s cache must be synchronized
  • It is prone to configuration errors: if you do not update the Expires date once it has passed and the content is refreshed, no further caching will happen and each and every request will go back to the origin server!


Cache-Control

The Cache-Control directive was introduced in the HTTP/1.1 protocol and is widely supported.

Instead of using a date, Cache-Control uses, among other possible parameters, max-age: the number of seconds the content can remain in the browser’s cache. When this timer expires, the browser has to fetch the data from the origin server again.

This technique solves the clock synchronization problem of the Expires attribute.
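A minimal sketch of the freshness check with max-age (the age would normally be derived from how long the response has been sitting in the cache):

```python
def is_fresh(age_seconds, cache_control):
    """Return True if a cached response is still fresh according to max-age."""
    for directive in cache_control.split(","):
        directive = directive.strip()
        if directive.startswith("max-age="):
            max_age = int(directive.split("=", 1)[1])
            return age_seconds < max_age
    return False  # simplified: no max-age means we revalidate

print(is_fresh(30, "public, max-age=60"))  # True: serve straight from cache
print(is_fresh(90, "public, max-age=60"))  # False: go back to the origin server
```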


Validators

What if the max-age timer expires and the content has not been updated in the meantime? Then the unchanged data is requested from the server anyway, which is totally useless and can degrade performance.

Finding a way to check for content updates would prevent the browser from performing inefficient requests to the origin server.

This is where validators come into play.

The most common validator is the time at which the document last changed, communicated by the origin server through the Last-Modified attribute. When the browser has cached the content and the caching timer expires, it can check with the origin server whether this content has changed by using the If-Modified-Since attribute. This technique is widely supported.

ETag is another validator that can be used. It has been introduced in the HTTP/1.1 protocol.

ETags are unique identifiers that are generated by the server and changed every time the content changes. When the browser makes a request to the origin server, it uses the If-None-Match attribute to check for ETag modification. If the ETag value is the same as before, the content is identical and does not need to be fetched again. ETag is widely supported too.
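The validator exchange can be modeled as a conditional check on the server side (a simplified sketch; real servers also handle Last-Modified and weak validators):

```python
def conditional_response(current_etag, if_none_match):
    """Return 304 if the client's cached copy is still valid, else 200 with the body."""
    if if_none_match is not None and if_none_match == current_etag:
        return 304, None  # Not Modified: the browser keeps using its cached copy
    return 200, "full response body"

print(conditional_response('"abc123"', '"abc123"'))   # (304, None)
print(conditional_response('"abc123"', '"stale99"'))  # (200, 'full response body')
```

A 304 response carries no body, so only a handful of header bytes cross the network instead of the full resource.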

How does content caching improve web performance?

Establishing a solid caching strategy is key to optimizing web performance. It:

  • helps reduce network latency by delivering resources as close as possible to the users
  • provides an extra layer of redundancy by delivering content even in case of a temporary origin server failure
  • helps mitigate network congestion

It is a very cost effective solution and is widely used in combination with third-party services like Content Delivery Networks (CDNs).

Pay attention, though, to set up caching properly to avoid missing updated content.

How to minimize the performance impact of your JavaScripts by using "defer" or "async" attributes

The performance impact of JavaScripts: how to minimize it by using “defer” or “async” attributes

Modern websites make significant use of JavaScript. The performance impact of JavaScripts on digital experience will vary depending on how you implement them.

From a performance perspective, the main challenge of using JavaScripts is that they are “render-blocking”. When your browser parses the HTML file and comes across a <script>...</script> tag, it must stop the DOM building process in order to fetch and execute the script. In the case of an external script (<script src="...">), this can dramatically impact the overall user experience.

The performance impact of JavaScripts placed in the <head>

Usually, scripts are placed in the <head> portion of the page.

In this case, the HTML parsing process looks like this:

The parsing is paused until the script is fetched and executed. Then it resumes.

JavaScripts placed in the <head> will directly impact loading metrics like LCP.

The performance impact of JavaScripts placed in the <body>

One simple solution to solve this problem consists of placing the script at the very end of the <body> part of the page.

In this case, the HTML parsing process does not pause. The script is fetched and executed once the parsing process is done and the DOM is ready. It will then not impact your loading metrics like Page Load Time and Largest Contentful Paint.

It seems so easy to solve the problem! Why not systematically place all scripts at the end of the body then? Well, there are two main drawbacks using this technique:

  • First, it is only valid in case the script can be executed after the DOM is ready.

Let’s take the example of the Kadiska RUM JavaScript. If this script is executed way after the page is fully loaded, you take the risk of missing some first performance metrics because some events would happen before the JavaScript execution starts.

  • Second, depending on the network conditions (latency, packet loss) as well as script size, fetching it can take a relatively long time, delaying the execution of the script.

Using Defer and Async to manage the performance impact of JavaScripts

Fortunately, techniques exist to minimize the potential impact loading JavaScript may have on performance: defer and async.

Both are boolean attributes that are used in a similar way:

<script defer src="your-script.js"></script>

<script async src="your-script.js"></script>

Both techniques pursue the same goal: avoiding blocking the DOM building process by not interrupting the HTML parsing.

Most modern browsers support these techniques (https://caniuse.com/script-async).

What is “Defer”?

When using the defer attribute, the browser does not wait for fetching and executing the script to parse the HTML and build the DOM. The script loads in the background and then runs when the DOM is fully built.

In this case the script does not block the browser’s main thread and is executed when the DOM is ready.

Compared to placing the script at the end of the <body>, this improves performance by fetching the script while the HTML is being parsed.

Nevertheless, the script is still executed only once the DOM is ready, which can be a problem in some cases, as mentioned before.
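As an illustrative sketch (the file name app.js is made up), a deferred script in the <head> combines early fetching with safe DOM access:

```html
<!-- Illustrative page: "app.js" is a hypothetical file name -->
<html>
  <head>
    <!-- Fetched in parallel with HTML parsing, executed only once
         the document has been parsed, just before DOMContentLoaded -->
    <script defer src="app.js"></script>
  </head>
  <body>
    <p id="content">…</p>
    <!-- app.js can safely run document.getElementById("content"),
         because deferred scripts execute after the DOM is built -->
  </body>
</html>
```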

What is “Async”?

When using the async attribute, the script is completely independent. It means that:

  • Fetching the script does not block the HTML parsing process
  • Executing the script can happen before or after the DOM is ready. When it executes before the DOM is ready, the HTML parsing process is paused during the execution.
  • An async script does not wait for other scripts.

Example of async script executed before DOM is ready:

Example of async script executed after DOM is ready:

When your script does not rely on others and you do not have to ensure it loads and executes before the DOM is ready, async is a very good solution.


When your JavaScript execution is not needed for rendering the critical content of the page (which is, for example, the case for Google Analytics), using either the defer or the async attribute can greatly enhance the user experience.

Defer has the advantage of never blocking the HTML parsing, while async has the advantage of loading the script independently of others.

If you found this article relevant, there is a great chance that you will also find this article on visual stability interesting:

How to improve your Cumulative Layout Shift (CLS)

Visual Stability: how to improve your Cumulative Layout Shift (CLS)

Google introduced Cumulative Layout Shift (CLS) in May 2020 as part of the Core Web Vitals. Let’s take a closer look at the definition of the CLS, how it measures the visual stability of your web platform and how to improve it.

What is Cumulative Layout Shift (CLS)?

Have you ever been reading an article on the web when something suddenly changes on the page? Without warning, the text moves, and you are lost. Or even worse: you’re about to tap a link or a button, but in the instant before your finger lands, the link moves and you end up clicking something else!

Cumulative Layout Shift (CLS) measures how ‘stable’ elements load onto the screen. It looks at how often these elements jump around while loading and by how much.
A layout shift occurs any time a visible element changes its position from one rendered frame to the next.

How Cumulative Layout Shift (CLS) tracks the visual stability

In summary, CLS measures the sum of all individual layout shift scores for every unexpected layout shift that occurs during the entire lifespan of the page.
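The accumulation described above can be sketched in a few lines. This is a simplified model, not the browser’s implementation: the entry objects mimic the shape of the Layout Instability API’s LayoutShift entries (`value` and `hadRecentInput` are real fields of that API), while the function name is ours.

```javascript
// Sketch: CLS as the sum of individual layout shift scores.
// `value` is the shift score of one layout shift; `hadRecentInput`
// flags shifts that happened shortly after a user input, which are
// excluded because they are considered expected, not unexpected.
function cumulativeLayoutShift(entries) {
  return entries
    .filter((entry) => !entry.hadRecentInput)
    .reduce((sum, entry) => sum + entry.value, 0);
}
```

For example, two unexpected shifts of 0.5 and 0.25 plus one user-triggered shift give a CLS of 0.75: only the unexpected shifts count.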

Getting into how it is calculated, each individual layout shift score combines two measures: the impact fraction (1) and the distance fraction (2).

layout shift score = impact fraction x distance fraction

Impact fraction

First, the impact fraction measures how unstable elements impact the viewport area between two frames.

The union of the visible areas of all unstable elements for the previous frame and the current frame, as a fraction of the total area of the viewport, is the impact fraction for the current frame.

Understand the Impact Fraction measured by the Cumulative Layout Shift (CLS)

In the image above there’s an element that takes up half of the viewport in one frame. Then, in the next frame, the element shifts down by 25% of the viewport height. The red, dotted rectangle indicates the union of the element’s visible area in both frames, which, in this case, is 75% of the total viewport, so its impact fraction is 0.75.

Distance fraction

The other part of the layout shift score equation measures the distance that unstable elements have moved, relative to the viewport. The distance fraction is the greatest distance any unstable element has moved in the frame (either horizontally or vertically) divided by the viewport’s largest dimension (width or height, whichever is greater).

Understand the Distance Fraction taken into account in the Cumulative Layout Shift (CLS)

In the example above, the largest viewport dimension is the height, and the unstable element has moved by 25% of the viewport height, which makes the distance fraction 0.25.

So, in this example the impact fraction is 0.75 and the distance fraction is 0.25, so the layout shift score is 0.75 * 0.25 = 0.1875.
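As a sketch, the worked example above can be reproduced with a small helper. The rectangle shape and all names are illustrative, not a browser API; note that for a pure horizontal or vertical shift of one element, the union of its two positions is simply their bounding box, which keeps the calculation simple (the general case needs a true rectangle-union area).

```javascript
// Layout shift score of one shifted element, following the article's
// definition. Rectangles are { x, y, width, height } in CSS pixels.
function layoutShiftScore(viewport, before, after) {
  // Impact fraction: union (here, bounding box) of the element's
  // positions in both frames, as a fraction of the viewport area.
  const top = Math.min(before.y, after.y);
  const bottom = Math.max(before.y + before.height, after.y + after.height);
  const left = Math.min(before.x, after.x);
  const right = Math.max(before.x + before.width, after.x + after.width);
  const impactFraction =
    ((right - left) * (bottom - top)) / (viewport.width * viewport.height);

  // Distance fraction: greatest distance moved (horizontal or vertical)
  // divided by the viewport's largest dimension.
  const moved = Math.max(
    Math.abs(after.x - before.x),
    Math.abs(after.y - before.y)
  );
  const distanceFraction = moved / Math.max(viewport.width, viewport.height);

  return impactFraction * distanceFraction;
}
```

Called with a 600×800 viewport and a 600×400 element shifting down by 200 px (25% of the viewport height), it returns 0.1875, matching the example.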

What is a good CLS score

To provide a good user experience, sites should strive to have a CLS score of less than 0.1. Everything between 0.1 and 0.25 needs improvement, and you can consider everything over 0.25 as performing poorly.


How to improve CLS

The most common causes of a poor CLS are:

  • Images without dimensions
  • Ads, embeds, and iframes without dimensions
  • Dynamically injected content
  • Web Fonts causing FOIT (Flash Of Invisible Text) / FOUT (Flash Of Unstyled Text)
  • Actions waiting for a network response before updating DOM

For most websites, you can avoid all unexpected layout shifts by sticking to the following recommendations:

  • Always include size attributes on your images and video elements. Alternatively, reserve the required space with something like CSS aspect ratio boxes
  • Never insert content above existing content, except in response to a user interaction
  • Prefer transform animations to animations of properties that trigger layout changes. Animate transitions in a way that provides context and continuity from state to state

As a follow-up, we recommend that you take a look at the two other Core Web Vitals metrics:

Web core vitals within digital experience metrics

Introduction to Core Web Vitals

Introduction to Core Web Vitals

For years, web performance monitoring has been driven by “browser-centric” metrics. Even though some of them, such as Page Load Time, have been heavily used and remain useful to some extent, the main issue they have in common is their inability to provide accurate data about how real users experience their web journey.

In order to address this challenge, Google announced in May 2020 the introduction of brand new web performance metrics that form the Core Web Vitals.

These Core Web Vitals focus on three important aspects of the real user experience: loading speed, interactivity, and visual stability.

To take a close look at them, we recommend that you read the following articles:

Interactivity: how to improve your First Input Delay (FID)

Interactivity: how to improve your First Input Delay (FID)

Google introduced First Input Delay (FID) in May 2020 as part of the Core Web Vitals. Let’s take a closer look at the definition of the FID, how it measures the interactivity of your web platform and how to improve it.

What is First Input Delay (FID)?

The First Input Delay (FID) metric measures your user’s first impression of your site’s interactivity and responsiveness. It measures the time from when a user first interacts with a page to the time when the browser is able to react to it, meaning when it is able to begin processing event handlers in response to that interaction. A user’s interaction can be clicking a link, tapping a button or using a custom, JavaScript-powered control.

How is First Input Delay (FID) tracking interactivity?

When writing code that responds to events, developers often assume their code runs immediately (as soon as the event happens). But as users, we’ve all frequently experienced the opposite: we’ve loaded a web page on our phone, tried to interact with it, and then felt frustrated when nothing happened.

Tracking when the browser’s main thread cannot respond immediately

In general, input delay (a.k.a. input latency) happens because the browser’s main thread is busy doing something else, so it can’t (yet) respond to the user.

Consider the following timeline of a typical web page load:

The above figure shows a page that’s making a couple of network requests for resources (most likely CSS and JS files), and after those resources are downloaded, they’re processed on the main thread.

This results in periods where the main thread is temporarily busy, which is indicated by the beige-colored task blocks.

Long tasks and their impact on interactivity

In the example below, we can see some “long tasks” on the main thread. These are JavaScript execution periods during which users may find your user interface (UI) unresponsive. A long task corresponds to any piece of code that blocks the main thread for 50 milliseconds or more.
When the user’s first interaction happens near the beginning of such a long task, there is a delay in the browser’s response to this action, as shown in the following example.

Because the input occurs while the browser is in the middle of running a task, it has to wait until the task completes before it can respond to the input. The time it must wait is the FID value for this user on this page.
Please note that FID only tracks the user’s very first interaction.
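The mechanics can be sketched with a deliberately simplified model. All names here are illustrative, and real FID is reported by the browser’s Event Timing API, not computed by hand; this only captures the idea that the input must wait for the blocking task to finish.

```javascript
// Simplified model of FID: if the first input lands while the main
// thread is busy with a task, the browser can only start handling it
// when that task finishes. Times are in milliseconds; tasks are
// { start, end } intervals of main-thread busy periods.
function firstInputDelay(inputTime, tasks) {
  const blocking = tasks.find(
    (task) => inputTime >= task.start && inputTime < task.end
  );
  return blocking ? blocking.end - inputTime : 0;
}
```

For example, an input at t = 110 ms while a long task runs from 100 ms to 250 ms yields a delay of 140 ms; an input arriving while the main thread is idle yields 0.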

What is a good FID score

To provide a good user experience, sites should strive to have a First Input Delay of less than 100 milliseconds. Everything between 100 and 300 milliseconds needs improvement and you can consider everything over that as performing poorly.

FID score should be less than 100 ms

How to improve FID (First Input Delay)

To sum up, the main cause of a poor FID is heavy JavaScript execution. Optimizing how JavaScript parses, compiles, and executes on your web page will directly reduce FID.

Concretely, you can achieve this by following best practices:

  • Break up long-running task code into smaller, asynchronous tasks
  • Optimize your page for interaction readiness (optimize first-party script loading, minimize reliance on cascading data fetches, minimize how much data needs to be post-processed on the client side, explore on-demand loading of third-party code, …)
  • Use a web worker, which makes it possible to run JavaScript on a background thread
  • Reduce JavaScript execution time by deferring unused JavaScript and minimizing unused polyfills
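The first recommendation can be sketched as follows. This is a minimal, illustrative pattern rather than a library API: instead of processing all items in one long task, the work is done in small chunks, yielding back to the event loop between chunks so the main thread can respond to input.

```javascript
// Break a long task into chunks: process `chunkSize` items at a time,
// then yield to the main thread before continuing with the rest.
function processInChunks(items, processItem, chunkSize = 50) {
  let index = 0;
  function nextChunk() {
    const end = Math.min(index + chunkSize, items.length);
    for (; index < end; index++) {
      processItem(items[index]);
    }
    if (index < items.length) {
      // Yield: let the browser handle pending input before resuming.
      setTimeout(nextChunk, 0);
    }
  }
  nextChunk();
}
```

Each chunk is a short task, so a user input arriving mid-way waits at most one chunk’s duration instead of the whole job.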

As a follow-up, we recommend that you take a look at the two other Core Web Vitals metrics:

Loading Speed: how to improve LCP

Loading Speed: how to improve LCP

What is LCP (Largest Contentful Paint)?

LCP (Largest Contentful Paint) measures how long it takes for the largest piece of content to appear on the viewport.

  • The viewport corresponds to the portion of the webpage that is visible to the user without having to scroll down.
  • The content can be an image or a block of text.

LCP vs FCP (First Contentful Paint) and PLT (Page Load Time)

The LCP measurement does not take into account all the elements that compose a webpage. As some of them can be invisible to the user, LCP is a much better metric than Page Load Time when it comes to measuring the user’s perception of how fast a webpage loads.
Furthermore, as shown in the example below, LCP is a far better metric than First Contentful Paint (FCP), which measures how long it takes for any content to be rendered on the screen.

Understand the difference between FCP (First Contentful Paint) and LCP (Largest Contentful Paint)

On the one hand, as you can see from this example, the paint that triggers the FCP is not really useful to the user; it does not drive the user’s perception of how fast the page loads.
On the other hand, the element that triggers the LCP is a paragraph of text that is displayed before any of the images or logo finish loading. Since the individual images are smaller than this paragraph, it remains the largest element throughout the load process.
From a user experience standpoint, LCP provides a very good metric that closely matches how a real user experiences the loading speed.

LCP, a dynamic metric

LCP is a dynamic metric, as shown in the following picture:

LCP (Largest Contentful Paint) is a dynamic metric

During the page load process, you see that the LCP candidate changes as content loads. In this example, a text block is the first LCP candidate, but as new content is added to the page, the largest element changes. The final largest content in the viewport here is the picture.

What is a good LCP score?

To provide a good user experience, sites should strive to have LCP occur within the first 2.5 seconds of the page starting to load. Everything between 2.5 and 4 seconds needs improvement, and you can consider everything over 4 seconds as performing poorly.

LCP score should be less than 2.5 sec
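As a recap, the good/poor thresholds given across these three articles can be collected into one small helper. The structure and the inclusive boundary handling are our choice, purely for illustration.

```javascript
// Thresholds from the articles: LCP and FID in milliseconds,
// CLS is unitless.
const THRESHOLDS = {
  lcp: { good: 2500, poor: 4000 },
  fid: { good: 100, poor: 300 },
  cls: { good: 0.1, poor: 0.25 },
};

// Classify a measured value as "good", "needs improvement" or "poor".
function rate(metric, value) {
  const t = THRESHOLDS[metric];
  if (value <= t.good) return "good";
  if (value <= t.poor) return "needs improvement";
  return "poor";
}
```

For example, an LCP of 2000 ms rates as good, a FID of 150 ms needs improvement, and a CLS of 0.3 is poor.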

How to improve LCP

The LCP is affected by the following main factors:

  • Slow server response times: optimize your server, use a CDN, cache assets, …
  • Render-blocking JavaScript and CSS: minify CSS, defer non-critical CSS, minify JavaScript, …
  • Resource load times: optimize images, preload resources, …
  • Client-side rendering: limit the number of JavaScript files, use server-side rendering and pre-rendering, …

To go further, we recommend that you read the article on the two other Core Web Vital metrics: