Measuring latency from the client side using Chrome DevTools and N|Solid

Almost every modern web browser includes a powerful suite of developer tools. In our previous blog post, we covered __How to Measure Node.js server response time with N|Solid__; read more HERE.

The developer tools offer many capabilities, from inspecting the current HTML, CSS, and JavaScript code to inspecting the ongoing client-server network communication.

To open the devtools and analyze the network, you can go to:

“More Tools” > “Developer Tools” > “Network”

With the DevTools open, you can visit your Fastify (or Express) API at http://localhost:3000. After you get an HTTP response, you will see the request itself, the HTTP status code, the response size, and the response time.

GIF 1 – devtools.gif

An explanation of the timings measured by Chrome DevTools is available HERE.

Let’s measure the Client-Side Latency.

Remove the delay timer logic from your application and restart the Node.js process.

import Fastify from "fastify";

const fastify = Fastify({
  logger: true,
});

// Declare the root route; the random delay has been removed
fastify.get("/", async function (request, reply) {
  return { delayTime: 0 };
});

// Run the server!
fastify.listen({ port: 3000 }, function (err, address) {
  if (err) {
    fastify.log.error(err);
    process.exit(1);
  }
});

NOTE: Discover another useful code snippet in our ‘Measure Node.js server response time with N|Solid’ article! This time, learn how to simulate server-side latency to further test your application’s performance. Check it out: HERE

After this, you will get response times as fast as ~1 ms; I was able to execute 139k requests in 10 seconds using autocannon.

npx autocannon localhost:3000

The main point here is to see how the application behaves with a poor internet connection on the client side and the best possible performance on the server side. For this, we can simulate high latency and slow internet connections on the client side using Chrome DevTools, which has this option out of the box.

Simulate Client-Side Latency

In the Chrome DevTools:

Go to the “Network Conditions” option > Select the option “Network Throttling” > Set it to __“Slow 3G”__.

If you point your browser to http://localhost:3000/, you’ll see a long response time; in our case, __~2 seconds__.

This response time doesn’t mean the server took that long to process the request and return an answer; it is the time the response took to travel over the network until it arrived at the client side.

If you check your Fastify logs or the N|Solid metrics, you’ll see the server only took ~1 ms to respond.

Check your logs with N|Solid
HERE

In our case, the response time was ~0.35 ms:

"responseTime":0.3490520715713501

Can I improve the client-side latency?

Well, it is possible to improve the user experience for client-side devices with high latency by using a Content Delivery Network (CDN) to cache content at edge locations geographically close to users’ devices. Even implementing a simple compression mechanism will improve load times for users on high-latency connections.
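
For instance, here is a minimal sketch of the compression idea using the @fastify/compress plugin (one option among several; it negotiates gzip/brotli per request so less data has to travel over a slow link):

import Fastify from "fastify";
import compress from "@fastify/compress";

const fastify = Fastify({ logger: true });

// Compress responses when the client advertises support (Accept-Encoding)
await fastify.register(compress);

fastify.get("/", async function (request, reply) {
  return { delayTime: 0 };
});

await fastify.listen({ port: 3000 });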

Take a look at this blog post by Jonas, our Principal OSS Engineer, and see __How to create a fast SSR application__.

Connect with NodeSource

If you have any questions, please contact us at [email protected] or through this form.

To get the best out of Node.js and experience the benefits of its integrated features, including OpenTelemetry support, SBOM integration, and machine learning capabilities, sign up for a free trial and see how N|Solid can help you achieve your development and operations goals. #KnowyourNode

Experience the Benefits of N|Solid’s Integrated Features
Sign up for a Free Trial Today

Measure Node.js server response time with N|Solid

As software developers, we constantly face new challenges in an ever-changing ecosystem. However, we must always remember the importance of addressing performance and security concerns, which remain at the top of our priority list.

To ensure that our applications based on Node.js can meet our performance and scalability needs without compromising security or incurring costly infrastructure changes, we must be aware of the importance of network optimization in Node.js.

The Impact of Latency/Ping Time on the Performance and Speed of Your Node.js Application

IMG – Ping Cats – via GIPHY

_Have you ever wondered how long it takes for your application to communicate with the server?_ This communication time, known as network ping time or latency, is a crucial factor that impacts the performance and speed of your application. Knowing how to measure network ping time between the browser and the server is essential for developers who want to optimize their applications and provide a better user experience.
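
As a first approximation, you can time a round trip yourself from the browser console with the Performance API; a minimal sketch (the endpoint is an assumption, and note this measures network latency plus server processing time, not the ping alone):

// Time a full round trip to the server from the browser
const start = performance.now();
const response = await fetch("http://localhost:3000/");
await response.json();
console.log(`round trip took ${(performance.now() - start).toFixed(1)} ms`);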

Network Optimization in Node.js

To ensure the optimal performance and scalability of our Node.js applications, we must accurately measure our HTTP server’s connection and response time. Doing so enables us to identify and address potential bottlenecks without compromising security or incurring unnecessary infrastructure changes.

Before delving deeper into measuring connection and response time, let’s explore fundamental concepts and critical differentiators in the network landscape.

HTTP vs. WebSocket:

HTTP and WebSocket are communication protocols used in web development but serve different purposes. HTTP is a stateless protocol commonly used for client-server communication, while WebSocket enables full-duplex communication between clients and servers, allowing real-time data exchange.
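
To make the contrast concrete, here is a small Node.js sketch (the WebSocket side assumes the popular `ws` package, and both URLs are placeholders):

import WebSocket from "ws";

// HTTP: one request, one response, then the exchange is over
const res = await fetch("http://localhost:3000/");
console.log(await res.json());

// WebSocket: a persistent, full-duplex channel; either side can send at any time
const socket = new WebSocket("ws://localhost:3000/ws");
socket.on("open", () => socket.send("ping"));
socket.on("message", (data) => console.log("server said:", data.toString()));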

Types of Connections and Versions:

When creating APIs, HTTP as a protocol and standard has different versions, such as HTTP 1.1 and 2.0. Additionally, APIs may use alternative protocols like gRPC, which offer different features and capabilities. Understanding these options empowers developers to choose the most suitable tools for their web servers.

TCP/IP Basics:

The Transmission Control Protocol (TCP) and Internet Protocol (IP) are fundamental protocols that form the backbone of computer networks. Among TCP’s critical processes is the three-way handshake, which plays a vital role in establishing a secure and dependable connection between two endpoints. This handshake ensures the orderly and reliable transmission of data. TLS/SSL encryption enhances security, adding an extra layer of protection to the communication between the client and the server.

HTTP vs. HTTPS:

HTTP operates over plain text, which exposes the data being transmitted to potential eavesdropping and tampering.
HTTPS, on the other hand, secures communication through the use of SSL/TLS encryption, providing confidentiality and integrity.
Understanding the trade-offs between HTTP and HTTPS is crucial to making informed data security decisions.

Building a Solid Foundation: Understanding the Three-Way Handshake for Reliable Connections

To evaluate the performance of our HTTP server, we need to differentiate between connection latency and server response time. Connection latency refers to the time it takes for the initial three-way handshake process to complete before data transmission can occur. On the other hand, server response time measures the duration from when the server receives a request to when it generates and sends the response back to the client.

The three-way handshake is a fundamental process in establishing a TCP (Transmission Control Protocol) connection between a client and a server in a network. It involves three steps, hence the name “three-way handshake.” This handshake establishes a reliable and ordered communication channel between the two endpoints.

Here’s a breakdown of the three steps involved in the three-way handshake:

__SYN (Synchronize)__: The client initiates the connection by sending a SYN (synchronize) packet to the server. This packet contains a randomly generated sequence number to initiate the communication.
__SYN-ACK (Synchronize-Acknowledge)__: Upon receiving the SYN packet, the server acknowledges the request by sending a SYN-ACK packet back to the client. The SYN-ACK packet includes the server’s own randomly generated sequence number and an acknowledgment number equal to the client’s sequence number plus one.
__ACK (Acknowledge)__: Finally, the client sends an ACK (acknowledge) packet to the server, confirming receipt of the SYN-ACK packet. This packet contains an acknowledgment number equal to the server’s sequence number plus one.

Once this three-way handshake process is completed, the client and the server have agreed upon initial sequence numbers, and a reliable connection is established between them. This connection allows for data transmission with proper sequencing and error detection mechanisms, ensuring that the information sent between the client and server is reliable and accurate.

The three-way handshake is essential to establishing TCP connections and is performed before any data transmission can occur. It plays a critical role in ensuring the integrity and reliability of the communication channel, providing a solid foundation for subsequent data exchange between the client and server.
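
To see connection latency in isolation, here is a minimal Node.js sketch that times the handshake with the built-in net module (the host and port are assumptions; the connect callback fires once the three-way handshake completes):

import net from "node:net";

const start = process.hrtime.bigint();
const socket = net.createConnection({ host: "localhost", port: 3000 }, () => {
  // At this point SYN, SYN-ACK, and ACK have all been exchanged
  const elapsedMs = Number(process.hrtime.bigint() - start) / 1e6;
  console.log(`TCP handshake completed in ${elapsedMs.toFixed(2)} ms`);
  socket.end();
});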

Create a self-serve diagnostic tool for a server-rendered page in Node.js.

The idea is to share an easy-to-follow recipe that will help you create your tool, so let’s start with the ingredients and end with the steps to create a self-serve diagnostic tool for a server-rendered page in Node.js.

Ingredients:

Node.js & NPM installation – https://nodejs.org/

Fastify.js – https://www.fastify.io/

Instructions:

1. Setup a Node.js Project
Use NPM to create your Node project:

$ mkdir diagnostic-tool-nodejs
$ cd diagnostic-tool-nodejs
$ npm init -y

2. Install your NPM packages.
We have Fastify in our recipe, so we must install it first:

$ npm i fastify

3. Create the index.mjs
Create an index.mjs file in the project’s root directory and paste this Fastify HTTP server sample code:

import Fastify from "fastify";

const fastify = Fastify({
  logger: true,
});

// Create a random delay between 100 ms and `time` + 100 ms
function timer(time) {
  return new Promise((resolve) => {
    const ms = Math.floor(Math.random() * time) + 100;
    setTimeout(() => {
      resolve(ms);
    }, ms);
  });
}

// Declare the root route and delay the response randomly
fastify.get("/", async function (request, reply) {
  const wait = await timer(5000);
  return { delayTime: wait };
});

// Run the server!
fastify.listen({ port: 3000 }, function (err, address) {
  if (err) {
    fastify.log.error(err);
    process.exit(1);
  }
});

This will start the server on port 3000, which you can access by going to http://localhost:3000 in your web browser.
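
You can also check it from a terminal with curl; the delayTime value will vary on each request, so the output will look something like this:

$ curl http://localhost:3000/
{"delayTime":2345}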

Integrate with N|Solid Console

Be sure you already have N|Solid installed and running on your environment; otherwise, go to https://downloads.nodesource.com and get the installer.

Alternatively, you can run the console using Docker instead of a local installation:

docker run -d -p 6753:6753 -p 9001:9001 -p 9002:9002 -p 9003:9003 nodesource/nsolid-console:hydrogen-alpine-latest

With the application already initialized with npm, Fastify installed, and our index.mjs in place, we can connect our process with N|Solid.

Run the HTTP server with the N|Solid Runtime, following the instructions on the console’s main page.

IMG – Connect N|Solid

In this case, we ran the process by passing the config via environment variables and running a local installation of the N|Solid console.

NSOLID_APPNAME="NSOLID_RESPONSE_TIME_APP" NSOLID_COMMAND="127.0.0.1:9001" nsolid index.mjs

If you instead use our SaaS console, you need to use the __NSOLID_SAAS__ environment variable instead of __NSOLID_COMMAND__.

NSOLID_APPNAME="NSOLID_RESPONSE_TIME_APP" NSOLID_SAAS="XYZ.prod.proxy.saas.nodesource.io:9001" nsolid index.mjs

After completing those steps, you should see the app and its process connected to the console.

IMG – Connect N|Solid Process

GIF 1 – Connect N|Solid Process

Go to the application process and add the __HTTP(S) Server 99th Percentile Duration__ metric to see the HTTP server latency in near-real time. We also have the __HTTP(S) Request Median Duration__ metric.

GIF 2 – Monitor Process Metrics

After this, we should be able to generate some traffic and see how the response times behave with the sample code provided, which generates random response times from 100 ms up to ~5 seconds.

To generate the traffic, we can use autocannon:

npx autocannon -d 120 -R 60 localhost:3000

After running autocannon for a few minutes, we can see the P99 and median metrics of the HTTP server and compare them.

IMG – http-latency-response-time-metrics

IMG – http-request-median-duration

IMG – p99-metric

To fully utilize the metrics provided by N|Solid, it is crucial to understand their significance. Two critical metrics offered by N|Solid are the 99th Percentile and the HTTP Median. These metrics play a vital role in assessing the performance of Node.js applications in production environments. By digging deeper into their practical application and importance, we can unlock the true value of these metrics in N|Solid and make informed decisions to optimize our production systems. Let’s explore this further.

The 99th Percentile metric

The 99th percentile is a statistical measure commonly used to analyze and understand response time or latency in a system.

Imagine you have a web application that handles incoming requests. To understand how fast the server responds, you measure the time each request takes and gather that data. From that data, you can find the 99th percentile response time.

For example, __the 99th percentile response time is 500 milliseconds__.
This means that only 1% of the requests took longer than 500 milliseconds to get a response. In simpler terms, 99% of the requests were handled in 500 milliseconds or less, which is fast.

It helps you identify and address outliers or performance bottlenecks that affect only a small fraction of requests but can significantly impact the user experience or system stability. Monitoring the 99th percentile response time helps you spot slow requests or performance issues that might affect only a few users but still need attention.

The HTTP median metric

The median represents the middle value of a dataset when it is sorted in ascending or descending order.

To illustrate the difference between the 99th percentile and the median, let’s consider an example. Suppose you have a dataset of response times for a web application consisting of 10 values:
[100ms, 150ms, 200ms, 250ms, __500ms__, 600ms, 700ms, 800ms, 900ms, 1000ms].

The median response time would be the middle value when the dataset is sorted, which is the 5th value, 500ms. This means that 50% of the requests had a response time faster than 500ms, and the other 50% had a response time slower than 500ms.
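
For illustration, here is a small JavaScript sketch that computes both metrics over that dataset using the nearest-rank method (which matches the example above):

// Nearest-rank percentile: the value below which p% of the data falls
function percentile(values, p) {
  const sorted = [...values].sort((a, b) => a - b);
  const index = Math.ceil((p / 100) * sorted.length) - 1;
  return sorted[index];
}

const responseTimes = [100, 150, 200, 250, 500, 600, 700, 800, 900, 1000]; // ms

console.log(percentile(responseTimes, 50)); // 500 -> the median
console.log(percentile(responseTimes, 99)); // 1000 -> the 99th percentile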

Connect with NodeSource

If you have any questions, please contact us at [email protected] or through this form.

Experience the Benefits of N|Solid’s Integrated Features
Sign up for a Free Trial Today

To get the best out of Node.js and experience the benefits of its integrated features, including OpenTelemetry support, SBOM integration, and machine learning capabilities, sign up for a free trial and see how N|Solid can help you achieve your development and operations goals. #KnowyourNode

Instrument your Node.js Applications with Open Source Tools – Part 2

As we mentioned in the previous article, at NodeSource we are dedicated to observability in our day-to-day work, and we know that a great way to extend our reach and interoperability is to adopt the OpenTelemetry framework as a standard in our development flows. In the end, our vision is to achieve high-performance software, and we want to accompany developers on that journey in their Node.js-based applications.

Understanding the basics of the standard and its scope is important, but it is also necessary to put it into practice: how do you integrate OpenTelemetry into your application? NodeSource integrates it directly into its product, in addition to more than 10 key functionalities in N|Solid that extend the offering of a traditional APM. But as you know, we are great contributors to the open-source project, and we also support the binary distributions of the Node.js project; helping the community is in our DNA. So through this article, we want to share how to set up OpenTelemetry with open-source tools and show how you can still increase visibility with them.

In this article, you will find __How to Apply the OpenTelemetry OS framework in your Node.js Application__, which includes:

Step 1: Export data to the backend

Step 2: Set up the OpenTelemetry SDK

Step 3: Inspect Prometheus to verify we’re receiving data

Step 4: Inspect Jaeger to verify we’re receiving data

Step 5: Getting deeper into Jaeger 👀

Note: This article is an extension of our talk at NodeConf.EU, where we had the opportunity to present:

__Dot, line, Plane Trace!__
__Instrument your Node.js applications with Open Source Software__
Get insights into the current state of your running applications/services through OpenTelemetry. It has never been as easy as now to collect data with open-source SDKs and tools that help you extract metrics, generate logs and traces, and export this data in a standardized format to be analyzed using best practices. In this talk, we’ll show how easy it is to integrate OpenTelemetry in your Node.js applications and how to get the most out of it using open-source tools.

To see the talks from this incredible conference, you can watch all sessions through live-stream links below 👇
– Day 1️⃣ – https://youtu.be/1WvHT7FgrAo
– Day 2️⃣ – https://youtu.be/R2RMGQhWyCk
– Day 3️⃣ – https://youtu.be/enklsLqkVdk

Now we are ready to start 💪 📖 👇

Apply the OpenTelemetry OS framework in your Node.js Application

So, going back to the distributed example we described in our previous article, here we can see what the architecture looks like after adding observability.

Every service will collect signals by using the OpenTelemetry Node.js SDK and export the data to specific backends so we can analyze it.

We are going to use the following:

Jaeger for traces and logs.

Prometheus to visualize the metrics.

_Note:_ Jaeger and Prometheus are probably the most popular open-source tools in this space.

Step 1: Export data to the backend

How the data is exported to the backends differs: to send data to __Jaeger__, we will use OTLP over HTTP, whereas for __Prometheus__, the data will be pulled from the services over HTTP.

First, we will show you how easy it is to set up the OpenTelemetry SDK to add observability to our applications.

Step 2: Set up the OpenTelemetry SDK

First, we have the providers in charge of collecting the signals: in our case, __NodeTracerProvider__ for traces and __MeterProvider__ for metrics.
Then the exporters send the collected data to the specific backends.
The Resource contains attributes describing the current process, in our case, __ServiceName__ and __ContainerId__. The names of these attributes are well defined by the spec (they live in the __semantic_conventions module__) and allow us to differentiate where a specific signal comes from.

So to set up traces and metrics, the process is basically the same: we create the provider passing the Resource, then register the specific exporter.

We also register instrumentations of specific modules (either core modules or popular userspace modules), which provide automatic Span creation of those modules.

Finally, the only important thing to remember is that we need to initialize OpenTelemetry before our actual code; the reason is that these instrumentation modules (in our case, for __http__ and __fastify__) __monkey-patch__ the modules they’re instrumenting.

Also, we create the __meter instruments__ because we will use them on every service: an __HTTP request counter__ and a couple of observable gauges for __CPU usage__ and __ELU usage__.
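
As a reference, here is a minimal sketch of that setup, assuming the OpenTelemetry JS SDK 1.x packages (the service name, ports, and OTLP endpoint are assumptions, and exact APIs vary slightly between SDK versions):

// tracing.mjs: import this before the application code
import { Resource } from "@opentelemetry/resources";
import { SemanticResourceAttributes } from "@opentelemetry/semantic-conventions";
import { NodeTracerProvider } from "@opentelemetry/sdk-trace-node";
import { BatchSpanProcessor } from "@opentelemetry/sdk-trace-base";
import { OTLPTraceExporter } from "@opentelemetry/exporter-trace-otlp-http";
import { MeterProvider } from "@opentelemetry/sdk-metrics";
import { PrometheusExporter } from "@opentelemetry/exporter-prometheus";
import { registerInstrumentations } from "@opentelemetry/instrumentation";
import { HttpInstrumentation } from "@opentelemetry/instrumentation-http";
import { FastifyInstrumentation } from "@opentelemetry/instrumentation-fastify";
import { performance } from "node:perf_hooks";

// The Resource tells the backends where each signal comes from
const resource = new Resource({
  [SemanticResourceAttributes.SERVICE_NAME]: "api",
});

// Traces: pushed to Jaeger via OTLP over HTTP (Jaeger's default OTLP port)
const tracerProvider = new NodeTracerProvider({ resource });
tracerProvider.addSpanProcessor(
  new BatchSpanProcessor(
    new OTLPTraceExporter({ url: "http://localhost:4318/v1/traces" })
  )
);
tracerProvider.register();

// Metrics: exposed on an HTTP endpoint that Prometheus pulls from
const meterProvider = new MeterProvider({ resource });
meterProvider.addMetricReader(new PrometheusExporter({ port: 9464 }));

// Automatic span creation for http and fastify; these monkey-patch the
// modules, which is why this file must run before the application code
registerInstrumentations({
  instrumentations: [new HttpInstrumentation(), new FastifyInstrumentation()],
});

// The meter instruments used by every service
const meter = meterProvider.getMeter("service-metrics");
export const httpRequestCounter = meter.createCounter("http_request_count");

const eluGauge = meter.createObservableGauge("thread_elu");
eluGauge.addCallback((result) => {
  result.observe(performance.eventLoopUtilization().utilization);
});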

So let’s spin up the application now and send a request to the API. It returns a __401 Unauthorized__. Before trying to figure out what’s going on, let’s see if Prometheus and Jaeger are actually receiving data.

Step 3: Inspect Prometheus to verify we’re receiving data

Let’s look at Prometheus first:
Looking at the HTTP requests counter, we can see there are 2 data points: one for the __API service__ and another for the __AUTH service__. Notice that the data we had in the Resource is __service_name__ and __container_id__. We can also see that __process_cpu__ is collecting data for the 4 services. The same is true for __thread_elu__.

Step 4: Inspect Jaeger to verify we’re receiving data

Let’s look at Jaeger now:
We can see that one trace corresponding to the __HTTP request__ has been generated.

Also, look at this chart, where the points represent traces, the X-axis is the timestamp, and the Y-axis is the duration. If we inspect the trace, we can see it consists of 3 spans, where every span represents an __HTTP transaction__ and has been automatically generated by the instrumentation-http module:

The 1st span is an HTTP server transaction in the API service (the incoming HTTP request).
The 2nd span represents a POST request from API to AUTH.
The 3rd one represents the incoming HTTP POST in AUTH. If we inspect this last span, apart from the typical attributes associated with the request (HTTP method, request_url, status_code…), we can see there’s a Log associated with the Span. This is very useful, as we can know exactly which request caused the error. By inspecting it, we found out that the reason for the failure was a missing auth token.

This piece of information wasn’t generated automatically, though, but it’s very easy to do. In the verify route of the service, in case there’s an error verifying the token, we retrieve the active span from the current context and just call __recordException()__ with the error. As simple as that.
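
A sketch of that with the OpenTelemetry API (the variable names are assumptions; getActiveSpan is available in recent versions of @opentelemetry/api):

import { trace } from "@opentelemetry/api";

// In the verify route, when token verification fails:
const activeSpan = trace.getActiveSpan();
activeSpan?.recordException(err); // attaches the error to the span as an event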

Well, so far, so good. Knowing what the problem is, let’s add the auth token and check if everything works:

curl http://localhost:9000/ -H "Authorization: Bearer eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzI1NiJ9.eyJpc3MiOiIiLCJpYXQiOjE2NjIxMTQyMjAsImV4cCI6MTY5MzY1MDIyMCwiYXVkIjoid3d3LmV4YW1wbGUuY29tIiwic3ViIjoiIiwibGljZW5zZUtleSI6ImZmZmZmLWZmZmZmLWZmZmZmLWZmZmZmLWZmZmZmIiwiZW1haWwiOiJqcm9ja2V0QGV4YW1wbGUuY29tIn0.PYQoR-62ba9R6HCxxumajVWZYyvUWNnFSUEoJBj5t9I"

OK, now it succeeded. Let’s look at Jaeger again. We can see the new trace here; it contains 7 spans, and no error was generated.

Now, it’s time to show one very nice feature of Jaeger. We can compare both traces: in grey, the spans that are equal, and in green, the spans that are new. Just by looking at this overview, we can see that if we’re correctly authorized, the API sends a GET request to SERVICE1, which then performs a couple of operations against POSTGRES. If we inspect one of the POSTGRES spans (the query), we can see useful information there, such as the actual query. This is possible because we have registered the instrumentation-pg module in SERVICE1.

And finally, let’s do a more interesting experiment. We will inject load into the application for 20 seconds with autocannon…
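
Something along these lines (the duration matches the 20 seconds above; the connection count is an assumption, and the token is the one from the earlier curl example):

npx autocannon -d 20 -c 100 -H "Authorization: Bearer <token>" localhost:9000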

If we look at the latency chart, we see some interesting data: up to at least the 90th percentile, the latency is basically below 300 ms, whereas starting from about the 97.5th percentile, the latency goes up a lot: more than 3 seconds. This is unacceptable 🧐. Let’s see if we can figure out what’s going on 💪.

Step 5: Getting deeper into Jaeger 👀

Looking at Jaeger, and limiting the view to 500 spans, we can see that the graph depicts what the latency chart showed: most of the requests are fast, whereas there are some significant outliers.

Let’s compare one of the fast traces with a slow one. We can see that in the slow trace, in addition to querying the database, SERVICE1 sends a request to SERVICE2. That’s useful info for sure. Let’s take a closer look at the slow trace.

In the __Trace Graph view__, every node represents a Span, and on the left-hand side, we can see the percentage of the total trace duration taken by the subgraph that has this node as root. By inspecting this, we can see that the branch representing the HTTP GET from SERVICE1 to SERVICE2 takes most of the trace’s time, so it seems the main suspect is SERVICE2. Let’s take a look at the metrics now; they might give us more information. If we look at __thread_elu__, we can see that for SERVICE2, it went to 100% for some seconds. This would explain the observed behavior.

So now, going to the SERVICE2 route code, we can easily spot the issue: we were performing a __Fibonacci operation__ synchronously. Of course, this was easy to spot because this is a demo; in real scenarios it would not be so simple, and we would need other methods, such as CPU profiling. Regardless, the info we collected helped us narrow down the issue quite significantly.
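
For reference, the blocking pattern looks something like this (a sketch, not the demo’s exact source; the route name is hypothetical). A synchronous, exponential-time Fibonacci keeps the event loop busy and drives the thread’s ELU to 100% while it runs:

// Recursive Fibonacci with exponential cost: pure synchronous CPU work
function fib(n) {
  return n < 2 ? n : fib(n - 1) + fib(n - 2);
}

fastify.get("/compute", async function (request, reply) {
  // Blocks every other request on this thread until it finishes
  return { result: fib(40) };
});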

So, that’s it for the demo. We’ve created a repo where you can access the full code, so go play with it! 😎

Main Takeaways

Finally, we just want to share the main takeaways about implementing observability with Open Software Tools:

Setting up observability in our Node.js apps is actually not that hard.
It allows us to observe requests as they propagate through a distributed system, giving us a clear picture of what might be happening.
It helps identify points of failure and causes of poor performance (in some cases, other tools might also be needed, such as CPU profiling or heap snapshots).
Adding observability to our code, especially tracing, comes with a cost. So be cautious! ☠️ But we are not going to go deeper into this, as it could be a topic for another article.

Before you go

If you’re looking to implement observability in your project professionally, you might want to check out N|Solid and our ‘10 key functionalities’. We invite you to follow us on Twitter and keep the conversation going!

Top 10 N|Solid (APM for Node) features you need to use

Nearly a year ago, we launched N|Solid SaaS, and although there are still a few months to go before our anniversary, we wanted to share the top 10 features of N|Solid that make us proud every day of what we have built.

N|Solid is the best way to monitor and secure your Node.js applications, including in production, and it is trusted by developers, software teams, and global enterprise companies. It has an array of features like other APMs, but we go deeper with our insights and are more performant than the rest. We created N|Solid to help companies build the best software with Node and save time resolving issues. Because there is significant risk in deploying open-source applications without knowing the security gaps, we provide features to prevent security issues and insights for resolving them.

We are confident we’re the best APM solution for Node.js-based applications; if you are using Node, you should be using our runtime. We’re a complete product/solution, not just an APM focused on Node.

About N|Solid

N|Solid is a toolset built on Node.js that provides a number of enhancements to improve troubleshooting, debugging, managing, monitoring, and securing your Node.js applications. It is 100% compatible with the open-source project and requires no instrumentation of your code.

N|Solid Console

N|Solid provides a web-based console, the ‘N|Solid Console’, to monitor your applications, but it also allows you to introspect your Node.js applications in the same way directly from the CLI if you run the __N|Solid Runtime__.

N|Solid Runtime

If you want to introspect your Node.js applications and have the most control from your command line, you’ll run them with the N|Solid Runtime, which is shaped similarly to a typical Node.js runtime but provides some additional executables.

To install N|Solid Runtime, download and unpack an N|Solid Runtime from the N|Solid download site.

Why N|Solid is an APM

Traditionally, the acronym APM has been used to refer to application performance management. However, in recent years it also refers, perhaps more correctly, to Application Performance Monitoring, and that is exactly what N|Solid does, which is why its categorization in this spectrum of applications is correct. Something important to highlight is that it is not a polyglot APM; it is clearly an APM specialized in Node.js, which has always been our focus.

While other APMs support Node.js, none provide the level of insights N|Solid can. In many cases, APMs can become part of the problem by consuming significant resources due to how they are designed. But don’t take our word for it: you can check it with real data through this open-source project, the APM’s Benchmark tool.

APM’s Benchmark Tool – Overview Screen

N|Solid APM (self-hosted or SaaS) is the best observability and insights tool to manage Node performance and security, and full platform access enables you to really #KnowyourNode.

In this blog post, we want to wrap up our product series by briefly telling you about the 10 main features of N|Solid. We hope you like it and that it helps you get the most out of our product.

[1] Project & Applications Monitoring in N|Solid

Visually view application behavior and identify performance and security issues.

With Project & Application Monitoring, you can track a website or any application based on Node.js. This feature allows you to collect your log data to help developers detect bugs and process use, track downtime, and improve performance to be consistent and focused on the end-user experience.

N|Solid APM – Projects & Applications View

This area is mainly made up of 3 views used for Projects & Applications and Process Monitoring:
– Applications view
– Application summary view
– Processes view

Read more about this feature here: https://nsrc.io/ProjectApplicationsMonitoringNS

[2] Process Monitoring in N|Solid

Access deep performance insights.

The applications and associated processes are displayed in this feature of our N|Solid Console. You can visualize Event Loop Estimated Lag, Heap Used, or CPU Used, for example, and you can correlate these metrics in a planimetry. You can also select a specific process to know its general status and vulnerabilities and choose a specific graphic to visually represent the selected information.

N|Solid APM – Process Monitoring

Read more here: https://nsrc.io/ProcessMonitoringNS

[3] CPU Profiling in N|Solid

Shows what functions consume CPU% and how resources are allocated.

CPU Profiling allows you to understand where opportunities exist to improve your Node processes’ speed and load capacity. This feature shows what functions consume CPU% and how resources are allocated.

N|Solid APM – Flamegraph-CPU Profile

Read more here: https://nsrc.io/CPUProfilingNS

[4] Worker Threads in N|Solid

View In-depth metrics of each worker thread.

Worker threads are treated as first-class and have the same access to CPU profiles, snapshots, etc., as the main process. We are the only solution with full support for worker threads.

With this feature, you can identify opportunities to improve the performance of CPU-intensive work.

Read more here: https://nsrc.io/WorkerThreadsNS

[5] Capture Heap Snapshots in N|Solid

Understand where and how memory is being used

Taking heap snapshots is a great way to help identify the underlying problem when faced with a memory leak or performance issue. In this way, you will be able to understand where and how memory is being used, and you will be able to quickly resolve memory leaks and performance issues.

N|Solid APM – Capture Heap Snapshots

Read more here: https://nsrc.io/HeapSnapshotsNS

[6] Memory Anomaly Detection in N|Solid

Identify memory anomalies with a more accurate detection method:
– Insights and metrics are historical, before and after the incident happened.
– Get anomalies at different heap usage levels.
– Detect correlation between sets of memory-specific metrics.
– Filter results by specific processes inside your application.

N|Solid APM – Memory Anomaly Detection

Read more here: https://nsrc.io/MemoryAnomalyNS

[7] Vulnerability Scanning – NCM – in N|Solid

Know all of the potential vulnerabilities within your application.

NCM is a security, compliance, and curation tool for the 3rd-party Node.js & JavaScript package ecosystem. It provides protection against security vulnerabilities and licensing compliance issues, and it offers risk assessment when working with a 3rd-party ecosystem.

The N|Solid Console can be configured to perform periodic verification of all packages loaded by all N|Solid processes.

N|Solid APM – Vulnerability Scanning from N|Solid Runtime

NCM provides:

Actionable insights.
Offline vulnerability scanning.
The ability to prevent processes in an application from launching if they have vulnerabilities, using “strict mode.”
NCM-CI (Service Tokens and CI Processes) customization.

__Note__: NCM can be viewed from 3 locations: full overview, per application, and per process.

Read more here: https://nsrc.io/VulnerabilityScanningNS

[8] HTTP Tracing support in N|Solid

Enables the ability to debug application latency and other issues.

HTTP tracing gathers the throughput and lifecycle of HTTP, DNS, and other types of requests.
– Debug latency issues, monitor your services, and more with the collected information.
– See in a timeline graph the density of the number of tracked spans.
– Inspect each span for more detail on the collected trace.
– Filter the results by the attributes of a span and delimit them to the time range.

N|Solid APM – HTTP Tracing Support

Read more here: https://nsrc.io/HTTPTracingNS

[9] Global Alerts & Integrations in N|Solid

Be aware of issues and vulnerabilities. Pre-configured API integrations with key 3rd party services.

You can use automation to trigger alerts, CPU profiles, or heap snapshots over integrations. Stay aware of issues and vulnerabilities with pre-configured API integrations with key 3rd-party services.

For example, when a heap snapshot is created, I will get a notification about my N|Solid Console’s behavior directly in Slack; from there, I can check it by opening the Console.

N|Solid APM – Global Alerts & Integrations – Slack Example

Read more here: https://nsrc.io/GlobalAlertsIntegrationsNS

[10] Distributed Tracing in N|Solid

Better understand the factors that affect an application’s latency.

Distributed tracing is a core component of observability, mainly used by site reliability engineers (SREs) but also by developers, and using it as a team in charge of modern distributed software is the recommended way to obtain the greatest benefits.

As your system scales, you’ll need to add tracing and refine sampling capabilities, which means getting the context needed to understand the complexity of distributed architectures.

N|Solid APM – Distributed Tracing

Distributed Tracing provides several solutions, which include:

Monitoring system health
Latency trend and outliers
Control flow graph
Asynchronous process visualization
Debugging microservices

Read more here: https://nsrc.io/DistributedTracingNS

Still, on our roadmap, we are planning and executing features that will shake up the ecosystem in the coming months. Stay tuned! 😎

Top Ten Features In N|Solid

🧭Projects & Applications Monitoring in N|Solid – https://nsrc.io/ProjectApplicationsMonitoringNS

🌌 Process Monitoring in N|Solid – https://nsrc.io/ProcessMonitoringNS

🔍 CPU Profiling in N|Solid – https://nsrc.io/CPUProfilingNS

🕵️‍♂️ Worker Threads Monitoring in N|Solid – https://nsrc.io/WorkerThreadsNS

📸 Capture Heap Snapshots in N|Solid – https://nsrc.io/HeapSnapshotsNS

🚨 Memory Anomaly Detection in N|Solid – https://nsrc.io/MemoryAnomalyNS

🚩 Vulnerability Scanning & 3rd party Modules Certification in N|Solid – https://nsrc.io/VulnerabilityScanningNS

👣 HTTP Tracing Support in N|Solid – https://nsrc.io/HTTPTracingNS

⏰ Global Alerts & Integrations in N|Solid – https://nsrc.io/GlobalAlertsIntegrationsNS

🪄 Distributed Tracing in N|Solid – https://nsrc.io/DistributedTracingNS
…and more

Want to try N|Solid?

To check out the top 10 features and more in N|Solid, create your account via Sign Up or Sign In in the top right corner of our main page. More information is available here.

As always, we’re happy to hear your thoughts – feel free to get in touch with our team or reach out to us on Twitter at @nodesource.