Measuring latency from the client side using Chrome DevTools and N|Solid

Almost every modern web browser includes a powerful suite of developer tools. In our previous blog post, we covered __How to Measure Node.js Server Response Time with N|Solid__; you can read more about it HERE.

The developer tools offer a wide range of capabilities, from inspecting the current HTML, CSS, and JavaScript code to inspecting the ongoing client-server network communication.

To open the DevTools and analyze the network, go to:

“More Tools” > “Developer Tools” > “Network”

With the DevTools open, you can visit your Fastify (or Express) API at http://localhost:3000. Once you get an HTTP response, you will see the request itself, the HTTP status code, the response size, and the response time.

GIF 1 – devtools.gif

You can find an explanation of how Chrome DevTools measures these times HERE.
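If you prefer to capture these timings programmatically rather than reading them off the Network panel, a minimal browser-side sketch using the standard fetch and Performance APIs could look like this (the URL is simply our local Fastify server):

// Measure a single request from the client side with performance.now()
async function measureRequest(url) {
  const start = performance.now();
  const response = await fetch(url);
  await response.json(); // wait until the full body has arrived
  const elapsed = performance.now() - start;
  console.log(`Client-side latency for ${url}: ${elapsed.toFixed(1)} ms`);
  return elapsed;
}

measureRequest("http://localhost:3000/");

// The Resource Timing API also exposes per-request breakdowns
// (DNS lookup, TCP connect, time to first byte) for loaded resources
performance.getEntriesByType("resource").forEach((entry) => {
  console.log(entry.name, `${(entry.responseEnd - entry.startTime).toFixed(1)} ms`);
});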

Let’s measure the Client-Side Latency.

Remove the delay timer logic on your application and restart the Node.js process.

import Fastify from "fastify";

const fastify = Fastify({
  logger: true,
});

// Declare the root route; the delay timer has been removed
fastify.get("/", async function (request, reply) {
  return { delayTime: 0 };
});

// Run the server!
fastify.listen({ port: 3000 }, function (err, address) {
  if (err) {
    fastify.log.error(err);
    process.exit(1);
  }
});

NOTE: Discover another useful code snippet in our ‘Measure Node.js server response time with N|Solid’ article! This time, learn how to simulate server-side latency to further test your application’s performance. Check it out HERE.

After this, you will get the fastest response times, around ~1 ms; I was able to execute 139k requests in 10 seconds using autocannon.

npx autocannon localhost:3000

The main point here is to see how the application behaves when the client has a poor internet connection while the server performs as well as possible. For this, we can simulate high latency and slow internet connections on the client side using Chrome DevTools, which offers this option out of the box.

Simulate Client-Side Latency

In the Chrome Devtools:

Go to the “Network Conditions” option > Select the option “Network Throttling” > Set it to __“Slow 3G”__.

If you point your browser to http://localhost:3000/, you’ll see a long response time; in our case, __~2 seconds__.

This response time doesn’t mean the server took that long to process the request and return an answer; it is the time the response needed to travel over the network until it arrived at the client side.

If you check your Fastify logs or the N|Solid metrics, you’ll see the server only took ~1 ms to respond.

Check your logs with N|Solid HERE.

In our case, the response time was ~0.3 ms:

"responseTime":0.3490520715713501
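If you want to surface that server-side number yourself instead of relying on the default logger output, a minimal sketch of a Fastify onResponse hook could look like the following (reply.getResponseTime() is available in recent Fastify 4.x releases; the log field name here is just illustrative):

// Log the server-side processing time for every request
fastify.addHook("onResponse", (request, reply, done) => {
  // getResponseTime() returns the milliseconds between request arrival and response sent
  request.log.info({ serverTimeMs: reply.getResponseTime() }, "request completed");
  done();
});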

Can I improve client-side latency?

It is possible to improve the user experience for client-side devices with high latency. Using a Content Delivery Network (CDN) to cache content at edge locations geographically close to users’ devices helps, and even implementing a simple compression mechanism will improve load times on high-latency connections.
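As a quick illustration of the compression idea, here is a minimal sketch of the same Fastify server with response compression enabled. It assumes the @fastify/compress plugin is installed (npm i @fastify/compress); by default the plugin negotiates gzip or brotli based on the client’s Accept-Encoding header:

import Fastify from "fastify";
import compress from "@fastify/compress";

const fastify = Fastify({ logger: true });

// Compress responses when the client advertises support for it
await fastify.register(compress);

fastify.get("/", async function (request, reply) {
  return { delayTime: 0 };
});

await fastify.listen({ port: 3000 });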

Take a look at this blog post by Jonas, our Principal OSS Engineer, to learn __How to create a fast SSR application__.

Connect with NodeSource

If you have any questions, please contact us at [email protected] or through this form.

To get the best out of Node.js and experience the benefits of its integrated features, including OpenTelemetry support, SBOM integration, and machine learning capabilities, sign up for a free trial and see how N|Solid can help you achieve your development and operations goals. #KnowyourNode

Experience the Benefits of N|Solid’s Integrated Features
Sign up for a Free Trial Today

Measure Node.js server response time with N|Solid

As software developers, we constantly face new challenges in an ever-changing ecosystem. However, we must always remember the importance of addressing performance and security concerns, which remain at the top of our priority list.

To ensure that our applications based on Node.js can meet our performance and scalability needs without compromising security or incurring costly infrastructure changes, we must be aware of the importance of network optimization in Node.js.

The Impact of Latency/Ping Time on the Performance and Speed of Your Node.js Application

IMG – Ping Cats – via GIPHY

_Have you ever wondered how long it takes for your application to communicate with the server?_ This communication, known as network ping time or latency, is a crucial factor that impacts the performance and speed of your application. Knowing how to measure network ping time between the browser and the server is essential for developers who want to optimize their applications and provide a better user experience.

Network Optimization in Node.js

To ensure the optimal performance and scalability of our Node.js applications, we must accurately measure our HTTP server’s connection and response time. Doing so enables us to identify and address potential bottlenecks without compromising security or incurring unnecessary infrastructure changes.

Before delving deeper into measuring connection and response time, let’s explore fundamental concepts and critical differentiators in the network landscape.

HTTP vs. WebSocket:

HTTP and WebSocket are communication protocols used in web development but serve different purposes. HTTP is a stateless protocol commonly used for client-server communication, while WebSocket enables full-duplex communication between clients and servers, allowing real-time data exchange.
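To make the contrast concrete, here is a minimal sketch of a WebSocket echo server using the ws package (an assumption on our part; it is not part of this article’s setup). Unlike an HTTP route, the connection stays open and either side can push data at any time:

import { WebSocketServer } from "ws";

// Accept WebSocket connections on port 8080
const wss = new WebSocketServer({ port: 8080 });

wss.on("connection", (socket) => {
  socket.on("message", (data) => {
    // Echo every message straight back to the client
    socket.send(`echo: ${data}`);
  });
});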

Types of Connections and Versions:

When creating APIs, HTTP as a protocol and standard has different versions, such as HTTP 1.1 and 2.0. Additionally, APIs may use alternative protocols like gRPC, which offer different features and capabilities. Understanding these options empowers developers to choose the most suitable tools for their web servers.
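For instance, Node.js ships an http2 module in core, so a minimal sketch of an HTTP/2 server (cleartext, so no TLS setup is needed for the example) could look like this; it is only meant to show that the protocol version is a choice you make when building your web server:

import { createServer } from "node:http2";

const server = createServer();

// HTTP/2 exposes streams instead of the classic req/res pair
server.on("stream", (stream, headers) => {
  stream.respond({ ":status": 200, "content-type": "application/json" });
  stream.end(JSON.stringify({ ok: true }));
});

server.listen(3001);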

TCP/IP Basics:

The Transmission Control Protocol (TCP) and Internet Protocol (IP) are fundamental protocols that form the backbone of computer networks. Among TCP’s critical processes is the three-way handshake, which plays a vital role in establishing a secure and dependable connection between two endpoints. This handshake ensures the orderly and reliable transmission of data. TLS/SSL encryption enhances security, adding an extra layer of protection to the communication between the client and the server.

HTTP vs. HTTPS:

HTTP operates over plain text, which exposes the data being transmitted to potential eavesdropping and tampering.
HTTPS, on the other hand, secures communication through the use of SSL/TLS encryption, providing confidentiality and integrity.
Understanding the trade-offs between HTTP and HTTPS is crucial to making informed data security decisions.
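As a sketch, serving content over HTTPS in Node.js only requires a TLS key and certificate (the key.pem and cert.pem paths below are placeholders; in production you would use certificates issued by a trusted CA):

import { createServer } from "node:https";
import { readFileSync } from "node:fs";

// Hypothetical paths to your TLS private key and certificate
const options = {
  key: readFileSync("key.pem"),
  cert: readFileSync("cert.pem"),
};

createServer(options, (req, res) => {
  res.writeHead(200, { "content-type": "application/json" });
  res.end(JSON.stringify({ secure: true }));
}).listen(3443);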

Building a Solid Foundation: Understanding the Three-Way Handshake for Reliable Connections

To evaluate the performance of our HTTP server, we need to differentiate between connection latency and server response time. Connection latency refers to the time it takes for the initial three-way handshake process to complete before data transmission can occur. On the other hand, server response time measures the duration from when the server receives a request to when it generates and sends the response back to the client.

The three-way handshake is a fundamental process in establishing a TCP (Transmission Control Protocol) connection between a client and a server in a network. It involves three steps, hence the name “three-way handshake.” This handshake establishes a reliable and ordered communication channel between the two endpoints.

Here’s a breakdown of the three steps involved in the three-way handshake:

__SYN (Synchronize)__: The client initiates the connection by sending a SYN (synchronize) packet to the server. This packet contains a randomly generated sequence number to initiate the communication.
__SYN-ACK (Synchronize-Acknowledge)__: Upon receiving the SYN packet, the server acknowledges the request by sending a SYN-ACK packet back to the client. The SYN-ACK packet includes the server’s own randomly generated sequence number and an acknowledgment number equal to the client’s sequence number plus one.
__ACK (Acknowledge)__: Finally, the client sends an ACK (acknowledge) packet to the server, confirming the receipt of the SYN-ACK packet. This packet contains an acknowledgment number equal to the server’s sequence number plus one.

Once this three-way handshake process is completed, the client and the server have agreed upon initial sequence numbers, and a reliable connection is established between them. This connection allows for data transmission with proper sequencing and error detection mechanisms, ensuring that the information sent between the client and server is reliable and accurate.

The three-way handshake is essential to establishing TCP connections and is performed before any data transmission can occur. It plays a critical role in ensuring the integrity and reliability of the communication channel, providing a solid foundation for subsequent data exchange between the client and server.
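If you want to observe connection latency in isolation (just the handshake, with no HTTP request on top), a minimal sketch using Node’s net module could look like this; it times how long it takes for a TCP connection to our local server on port 3000 to be established:

import net from "node:net";

const start = process.hrtime.bigint();

// The callback fires once the three-way handshake has completed
const socket = net.connect({ host: "localhost", port: 3000 }, () => {
  const elapsedMs = Number(process.hrtime.bigint() - start) / 1e6;
  console.log(`TCP connection established in ${elapsedMs.toFixed(2)} ms`);
  socket.end();
});

socket.on("error", (err) => {
  console.error("Connection failed:", err.message);
});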

Create a self-serve diagnostic tool for a server-rendered page in Node.js.

The idea is to share an easy-to-follow recipe that helps you build your own self-serve diagnostic tool for a server-rendered page in Node.js, so let’s start with the ingredients and then walk through the steps.

Ingredients:

Node.js & NPM installation – https://nodejs.org/

Fastify.js – https://www.fastify.io/

Instructions:

1. Set up a Node.js Project
Use NPM to create your Node project:

$ mkdir diagnostic-tool-nodejs
$ cd diagnostic-tool-nodejs
$ npm init -y

2. Install your NPM packages.
We have Fastify in our recipe, so we must install it first:

$ npm i fastify

3. Create the index.mjs
Create an index.mjs file in the project’s root directory and paste this Fastify HTTP server sample code:

import Fastify from "fastify";

const fastify = Fastify({
  logger: true,
});

// Randomly create a timer from 100ms up to X seconds
function timer(time) {
  return new Promise((resolve) => {
    const ms = Math.floor(Math.random() * time) + 100;
    setTimeout(() => {
      resolve(ms);
    }, ms);
  });
}

// Declare the root route and delay the response randomly
fastify.get("/", async function (request, reply) {
  const wait = await timer(5000);
  return { delayTime: wait };
});

// Run the server!
fastify.listen({ port: 3000 }, function (err, address) {
  if (err) {
    fastify.log.error(err);
    process.exit(1);
  }
});

This will start the server on port 3000, which you can access by going to http://localhost:3000 in your web browser.

Integrate with N|Solid Console

Be sure you already have N|Solid installed and running in your environment; otherwise, go to https://downloads.nodesource.com and get the installer.

Alternatively, you can run the console using Docker instead of a local installation:

docker run -d -p 6753:6753 -p 9001:9001 -p 9002:9002 -p 9003:9003 nodesource/nsolid-console:hydrogen-alpine-latest

With the application already initialized with npm, Fastify installed, and our index.mjs in place, we can connect our process to N|Solid.

Run the HTTP server with the N|Solid Runtime, following the instructions on the console’s main page.

IMG – Connect N|Solid

In this case, we ran the process by passing the config via environment variables and running a local installation of the N|Solid Console.

NSOLID_APPNAME="NSOLID_RESPONSE_TIME_APP" NSOLID_COMMAND="127.0.0.1:9001" nsolid index.mjs

If you use our SaaS console instead, you need to set the __NSOLID_SAAS__ environment variable instead of __NSOLID_COMMAND__.

NSOLID_APPNAME="NSOLID_RESPONSE_TIME_APP" NSOLID_SAAS="XYZ.prod.proxy.saas.nodesource.io:9001" nsolid index.mjs

After completing those steps, you should be able to watch the app and process connected to the console.

IMG – Connect N|Solid Process

GIF 1 – Connect N|Solid Process

Go to the application process and add the HTTP(S) Server 99th Percentile Duration metric to see the HTTP server’s response-time latency in near real time; the HTTP(S) Request Median Duration metric is also available.

GIF 2 – Monitor Process Metrics

After this, we can generate some traffic and see how the response times behave with the sample code provided, which adds a random delay of between 100 ms and 5 seconds.

To generate the traffic, we can use autocannon:

npx autocannon -d 120 -R 60 localhost:3000
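If you prefer to drive the benchmark from a script instead of the CLI, autocannon also exposes a programmatic API. A hedged sketch (option names mirror the CLI flags, and in recent versions the result object exposes latency percentiles such as p50 and p99) could look like this:

import autocannon from "autocannon";

// Same load profile as the CLI example: 120 seconds at roughly 60 requests per second
const result = await autocannon({
  url: "http://localhost:3000",
  duration: 120,
  overallRate: 60,
});

// Compare the median and the 99th percentile latency, just as we do in the console
console.log("p50 latency (ms):", result.latency.p50);
console.log("p99 latency (ms):", result.latency.p99);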

After running autocannon for a few minutes, we can see the P99 metric of the HTTP server alongside the median and compare them.

IMG – http-latency-response-time-metrics

IMG – http-request-median-duration

IMG – p99-metric

To fully utilize the metrics provided by N|Solid, it is crucial to have a comprehensive understanding of their significance. Two critical metrics offered by N|Solid are the 99th Percentile and the HTTP Median metric. These metrics play a vital role in assessing the performance of Node.js applications in production environments. By digging deeper into their practical application and importance, we can unlock the real value of these metrics in N|Solid and make informed decisions to optimize our production systems. Let’s explore this further.

The 99th Percentile metric

The 99th percentile is a statistical measure commonly used to analyze and understand response time or latency in a system.

Imagine you have a web application that handles incoming requests. To understand how fast the server responds, you measure the time each request takes and gather that data. From that data, you can then calculate the 99th percentile response time.

For example, suppose __the 99th percentile response time is 500 milliseconds__.
This means that only 1% of the requests took longer than 500 milliseconds to get a response. In simpler terms, 99% of the requests were handled in 500 milliseconds or less, which is fast.

It helps you identify and address outliers or performance bottlenecks that affect only a small fraction of requests but can significantly impact user experience or system stability. Monitoring the 99th percentile response time helps you spot slow requests or performance issues that might affect only a few users but still need attention.

The HTTP median metric

The median represents the middle value of a dataset when it is sorted in ascending or descending order.

To illustrate the difference between the 99th percentile and the median, let’s consider an example. Suppose you have a dataset of response times for a web application consisting of 10 values:
[100ms, 150ms, 200ms, 250ms, __500ms__, 600ms, 700ms, 800ms, 900ms, 1000ms].

The median response time would be the middle value when the dataset is sorted, which is the 5th value, 500ms. This means that 50% of the requests had a response time faster than 500ms, and the other 50% had a response time slower than 500ms.
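To make both definitions concrete, here is a small sketch that computes the median and the 99th percentile over the example dataset above, using the simple nearest-rank method (good enough for illustration):

// Nearest-rank percentile: sort the samples and pick the value at the given rank
function percentile(samples, p) {
  const sorted = [...samples].sort((a, b) => a - b);
  const rank = Math.ceil((p / 100) * sorted.length);
  return sorted[Math.max(rank - 1, 0)];
}

const responseTimes = [100, 150, 200, 250, 500, 600, 700, 800, 900, 1000]; // ms

console.log("median (p50):", percentile(responseTimes, 50)); // 500 ms
console.log("p99:", percentile(responseTimes, 99)); // 1000 ms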

Connect with NodeSource

If you have any questions, please contact us at [email protected] or through this form.

Experience the Benefits of N|Solid’s Integrated Features
Sign up for a Free Trial Today

To get the best out of Node.js and experience the benefits of its integrated features, including OpenTelemetry support, SBOM integration, and machine learning capabilities, sign up for a free trial and see how N|Solid can help you achieve your development and operations goals. #KnowyourNode