Introducing N|Solid Copilot: Your AI-Powered Node.js Navigator

We are thrilled to announce the latest addition to N|Solid Pro: the N|Solid Copilot, a groundbreaking AI-powered assistant designed to revolutionize your Node.js development experience. This innovative tool is a leap forward in Node.js application observability and security; it’s like having a Node.js expert on call.

View of N|Solid Pro Console with the Copilot drawer open allowing a user to interact with the AI Assistant.

Why N|Solid Copilot?

N|Solid Copilot is developed with one goal in mind: to make your life as a Node.js developer or DevOps engineer easier, more efficient, and more secure. It’s like having a Node.js expert by your side, 24/7, offering real-time insights into observability alerts, along with actionable advice tailored to your unique application needs.

Key Features of N|Solid Copilot

Real-time analysis and insights: Identify and resolve performance bottlenecks, memory leaks, and other critical issues. Analyze metrics like CPU usage, event loop utilization, and more.
Anomaly detection and remediation: Utilizing the platform and NodeSource’s ML algorithms, the Copilot can detect anomalous behavior in both application performance and security, as well as identify solutions.
Security vulnerability identification and resolution: N|Solid Pro is continuously scanning for security vulnerabilities within the application code and 3rd party dependencies. Users can ask our Copilot about recommendations and solutions.
Code optimization suggestions: Given its training in Node.js, the AI can offer suggestions to optimize code for better performance and efficiency. This can include advice on asynchronous programming patterns, memory management, or the use of specific Node.js features.
Interactive querying: Users can interact with the platform in a conversational manner to query specific application metrics or request insights on performance and security aspects. These queries can be general or specific to the data generated in production.
Knowledge sharing: Users can gain knowledge about how to use N|Solid and implement Node.js best practices, helping them get up to speed quickly on the platform.

Using N|Solid Copilot to triage security issues through predefined prompts or user questions.

Experience the Future of Node.js Development Powered by AI

N|Solid Copilot isn’t just a tool; it’s your partner in developing and maintaining great software. Whether you’re debugging a tricky issue, seeking performance improvements, or ensuring your application’s security, N|Solid Copilot is there to guide you every step of the way.

How to Get Started?

Sign Up: Simply sign up for a free SaaS account on our website.
Integrate: Seamlessly integrate N|Solid Copilot with your existing Node.js applications.
Navigate: Let N|Solid Copilot guide your development journey with unparalleled insights and assistance.

We believe N|Solid Copilot will not just change how you work with Node.js; it will transform it. Sign up today and be part of this exciting journey!

Connect with us on Twitter @NodeSource and LinkedIn to stay updated with the latest from N|Solid.

Serverless Observability in N|Solid for AWS Lambda

We are excited to release Serverless Observability for N|Solid, with support for AWS Lambda. As more organizations leverage serverless and realize its performance and cost benefits, we’re pleased to give customers new visibility into the health and performance of their Node.js apps running on serverless architectures.

Img 1. Serverless Cloud Providers

Observability has become a critical part of software development; understanding performance issues and why they occur is incredibly valuable. Leveraging serverless introduces challenges for observability because of how the technology works: it adds complexity and makes data collection and analysis harder. Some key observability issues include:

Cold starts: When a serverless function is first invoked, it takes time for the underlying infrastructure to spin up. This can lead to latency issues, especially for applications that require a high degree of responsiveness. In other words, Lambda must initialize a new container to execute the code, and that initialization takes time.

Debugging: Debugging serverless applications can be challenging due to the event-driven nature of the platform. This is because it can be difficult to track the flow of execution through an application comprising multiple functions invoked in response to events.

Visibility: Serverless providers typically provide limited visibility into the underlying infrastructure. This can make it difficult to troubleshoot performance issues and identify security vulnerabilities.

N|Solid now makes it easy to implement distributed tracing based on __OpenTelemetry__, providing better insights into these microservices-based systems.

What is Serverless?

Serverless computing is a cloud computing model that allows developers to run code without provisioning or managing servers. Instead, developers pay for the resources they use, such as the time their code runs and the amount of data it stores.

Some key benefits of Serverless Computing include:

Accelerated app development,
Reduced costs and improved scalability,
Increased agility

Serverless computing has become increasingly popular in recent years, offering several advantages over traditional server-based architectures. However, serverless computing can also be complex, and observability, monitoring, and debugging can be particularly challenging.

Key concepts to understand the importance of extending observability to serverless architectures:

Observability: The ability to understand the state of a system by collecting and analyzing telemetry data. This data can include logs, metrics, and traces. Traces are a particularly important type of telemetry data, as they can be used to track the flow of requests through a system.

OpenTelemetry: An open-source observability framework that can collect and analyze telemetry data from various sources. It includes support for __Distributed Tracing__, which can be used to track the flow of requests through a serverless environment.

At NodeSource, we have been at the forefront of helping developers solve these challenges in their Node.js applications, and we have been extending this series of features and content since last year:

Distributed Tracing Support in N|Solid – https://nsrc.io/DistributedTracingNS
Enhance Observability with Opentelemetry Tracing – Part 1 – https://nsrc.io/NodeJS_OTel1
Instrument your Node.js Applications with Open Source Tools – Part 2 – https://nsrc.io/NodeJS_OTel2
AIOps Observability: Going Beyond Traditional APM – https://nsrc.io/AIOpsNSolid

In a serverless environment, applications are typically composed of several different services, each responsible for a specific task. This can make it difficult to track the flow of requests through the system and identify performance problems, but not with N|Solid:

With this release, you can now have traces of when a microservices system calls a serverless function.

Now, with N|Solid, you can:

Track the flow of requests across your microservices architecture, from the Frontend to the Backend.
Identify performance bottlenecks and errors.
Correlate traces across multiple services.
Visualize your traces in a timeline graph.

N|Solid is a lightweight and performance-oriented tracing solution that can be used with any Node.js application. It is easy to install and configure, and it does not require any changes to your code.

Introducing Serverless Support for AWS Lambda

NodeSource has introduced a new observability feature to gather information throughout a request in a serverless environment. The collected information can be used for debugging latency issues, service monitoring, and more. This is valuable for users interested in debugging request latency in a serverless environment.

Img 2. N|Solid Serverless Installation Process

With this announcement, we make it possible to connect a process by following the indications of our Wizard. You will be able to connect the traces of your Node application through requests or connected functions; by connecting the data, you will be able to find the cause of latency issues, errors, and other problems in serverless environments.

How to apply Serverless Observability through N|Solid:

Key practices include leveraging distributed tracing to understand end-to-end request flows, instrumenting functions with custom metrics, analyzing logs for debugging and troubleshooting, and employing centralized observability platforms for comprehensive visibility.

Img 3. N|Solid Serverless Configuration

To load OpenTelemetry instrumentation modules, use the NSOLID_INSTRUMENTATION environment variable to specify the modules you want to utilize within your application. To enable instrumentation for specific modules, follow these steps (a brief sketch follows after them):

For HTTP requests using the http module, set the NSOLID_INSTRUMENTATION environment variable to http.

If you’re also performing PostgreSQL queries using the pg module, include it in the NSOLID_INSTRUMENTATION environment variable like this: http,pg.

List all the relevant instrumentation modules required for your application. This will enable the tracing and monitoring of the specified modules, providing valuable insights into their performance and behavior.

Connect the serverless functions through the integration, then select and configure the environment variables you require, as sketched below. For more advanced settings, visit our NodeSource documentation.
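As a rough illustration (the handler code and the database connection are hypothetical; only the NSOLID_INSTRUMENTATION variable itself comes from the steps above), a Lambda function whose http and pg activity would be traced with NSOLID_INSTRUMENTATION=http,pg could look like this:

```js
// Hypothetical Lambda handler; the function's environment configuration would
// include, for example: NSOLID_INSTRUMENTATION=http,pg
const https = require('https');
const { Client } = require('pg');

exports.handler = async () => {
  // Outgoing HTTP call: traced when 'http' is listed in NSOLID_INSTRUMENTATION.
  await new Promise((resolve, reject) => {
    https.get('https://example.com/health', (res) => {
      res.resume();
      res.on('end', resolve);
    }).on('error', reject);
  });

  // PostgreSQL query: traced when 'pg' is listed in NSOLID_INSTRUMENTATION.
  const client = new Client({ connectionString: process.env.DATABASE_URL });
  await client.connect();
  const { rows } = await client.query('SELECT NOW() AS now');
  await client.end();

  return { statusCode: 200, body: JSON.stringify(rows[0]) };
};
```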

Img 4. N|Solid Wizard Installation Summary

In our NodeSource documentation, you will find CLI – Implementation and Instructions or directly follow the instructions of our Wizard:

Install
Usage
Interactive Shell
Inline Inputs
Interactive Inputs
Help
Uninstall

Benefits of Implementing Serverless Monitoring with N|Solid

Serverless monitoring collects and analyzes serverless application data to identify and fix performance problems. This can be done using a variety of tools and techniques, including:

__Metrics__: Metrics are data points that measure the performance of an application, such as the number of requests per second, the average response time, and the amount of memory used.
__Logs__: Logs are records of events in an application, such as errors, warnings, and informational messages.
__Traces__: Traces are records of a request’s path through an application, including the time it spends in each function.

By collecting and analyzing this data, serverless monitoring tools can identify performance problems, such as:

__Cold starts__: A cold start occurs when a serverless function is invoked for the first time. This can cause a delay in the response time.
__Overcrowding__: Overcrowding occurs when too many requests are sent to a serverless function simultaneously. This can cause the function to slow down or crash.
__Bottlenecks__: Bottlenecks occur when a particular part of an application is slow. Some factors, such as inefficient code, a lack of resources, or a bug, can cause this.

Once a performance problem has been identified, serverless monitoring tools can help developers fix the problem. This can be done by:

Tuning the application: Developers can tune the application by changing the code, configuration, or resources used.
Evolving the application: Developers can evolve it by adding new features or changing how it works.
Monitoring the application: Developers can continue monitoring the application to ensure that the problem has been fixed and the application performs as expected.

The N|Solid Functions Dashboard

Img 5. N|Solid Functions Dashboard

Now, directly in your N|Solid Console, you’ll have the Functions Dashboard, which allows you to review all the functions you have connected in depth and in detail.

By selecting the functions, you can access the __Function Detail View__.

Img 6. N|Solid Function Detail View

In the first content block, you will have the following:

The function’s name and the provider’s icon.
__Note__: If you click on the function, it will open in a new tab directly in AWS.

On the other hand, in the central box, you will have:
RUNTIME VERSION
FUNCTION VERSION
ACCOUNT ID
N|Solid LAYER: The layer is versioned, which is what allows us to see the AWS info.
REGIONS: The zone where that specific version of the function is running.
ARCHITECTURE: Lambda can have several Linux architectures. This info comes directly from AWS.

And finally, in the right corner of our console, we will have the __Vulnerability Status__, and below it, the __Estimated Cost__: how much you are spending on this function. For more information, please review our documentation.

About N|Solid Layer

In an __AWS Lambda__ function, you can have information about hundreds of instances; AWS handles that for you, but you don’t have information on each invocation. By default, AWS shows the last 5 minutes through the median, the 99th percentile, etc., but it can’t give us each invocation individually; it simply groups them.

There are two types of data we have from AWS Lambda:

Real-Time Metrics or Telemetry API Metrics
Cloudwatch Metrics

The metrics that CloudWatch reports are not real-time. They are extracted from the CloudWatch API by the Lambda Metrics Forwarder and displayed as AWS Lambda delivers them directly. This means there may be a delay of up to a few minutes between recording and displaying a metric.

The __Telemetry API Metrics__ are the ones we extract through Lambda layers, via modules that we install into your function to be executed.

We distribute this information as a layer, the __N|Solid Layer__, injecting code inside it. An AWS extension for N|Solid provides information about the life cycle of each Lambda function, supplying data to the console to:

Generate the Timeline Graph
Span: Span Details, Path of the Span, and filter the results by attributes of the Span.
Changing Time Range
Generate SBOM reports

Our method tracks where the function is frozen to optimize Lambda resources within certain limits, allowing access to the information even after the function has been frozen. 🤯 The registration is detected in real time, providing the key information on each invocation in each instance.

In Closing,

We’ve been working on serverless observability for our customers for a while because we believe in the value of this technology approach. Serverless provides many benefits for developers and teams, but it can be difficult to provide quality observability without adding a lot of expensive overhead.

N|Solid can now help teams develop and maintain better software, resolve issues faster, and provide security for both traditional and serverless Node.js Applications. Try it for free, or connect with us to learn more.

It’s exciting to share this new release, our first in a series of serverless cloud platforms; look for Azure and GCP support to be released soon. Thanks for reading all about it. Our heartfelt gratitude goes to our exceptional team for their unwavering dedication and hard work in bringing this vision to life.

Related Links

AWS Lambda Documentation – https://docs.aws.amazon.com/lambda/latest/dg/welcome.html
Layer Concept – https://docs.aws.amazon.com/lambda/latest/dg/gettingstarted-concepts.html#gettingstarted-concepts-layer
Cloudwatch Metrics – https://docs.aws.amazon.com/lambda/latest/dg/monitoring-metrics.html#monitoring-metrics-types
Lambda Extensions – https://docs.aws.amazon.com/lambda/latest/dg/lambda-extensions.html
CloudFlare Serverless Computing – https://www.cloudflare.com/learning/serverless/what-is-serverless/
IBM – What is serverless? – https://www.ibm.com/topics/serverless
GCP Serverless – https://cloud.google.com/serverless?hl=es-419

AIOps Observability: Going Beyond Traditional APM

AIOps is an emerging technology that applies machine learning and analytics techniques to IT operations. AIOps enables IT teams to leverage advanced algorithms to identify performance issues, predict outages, and optimize system performance. NodeSource sees significant advantages for developers and teams in increasing software quality by leveraging AIOps. We have extended our platform’s (N|Solid) observability capabilities to include AIOps, enabling developers to leverage advanced machine learning and analytics techniques to optimize their Node.js applications.

Our N|Solid platform provides the most advanced visibility into Node.js applications, enabling developers to quickly identify performance issues, detect security vulnerabilities, and troubleshoot errors. N|Solid achieves this level of observability through real-time performance monitoring, comprehensive metrics, and detailed instrumentation of Node.js applications.

Last year, we integrated OpenTelemetry into our runtime and were nearing the release of an extension of this layer into our console. This advancement will further extend our platform to support AIOps. Santiago Gimeno, a Senior Architect, sums up our vision of the integration with OTel (Open Telemetry) and N|Solid:

“In today’s world, where applications are becoming more complex and distributed, having a good observability solution is more important than ever. The emergence of OpenTelemetry as the de-facto standard for observability is key. It allows application developers to select solutions that adapt better to their needs. Even more, it allows for healthy competition between observability solution vendors. We support this approach and continue to take steps to ensure N|Solid stays compliant with the OpenTelemetry specification, so everyone can use what we believe is the best observability solution for Node.js.”

Key differences between an APM and Observability

APM (Application Performance Management) and observability are both methods of monitoring and managing the performance and health of software applications, but there are key differences between the two:

Scope: APM is focused on monitoring the performance of applications, while observability is a more comprehensive approach that includes monitoring the infrastructure and application stack, as well as the performance of individual services.

Metrics: APM typically relies on predefined metrics and thresholds to identify performance issues, while observability takes a more flexible approach to collect a wide range of data, including logs, metrics, and traces.

Root cause analysis: APM is designed to quickly identify the root cause of performance issues, often through alerting and automated remediation, while observability takes a more holistic approach that emphasizes the need to understand the relationships between different parts of the system to identify and fix issues.

Proactivity: APM is often reactive, focusing on identifying and fixing issues as they arise, while observability is more proactive, focusing on continuous monitoring and analysis to identify potential issues before they become critical.

Tooling: APM is often built around specific tools and technologies designed to monitor and analyze application performance, while observability is more flexible and adaptable, focusing on integrating a wide range of tools and technologies to provide a comprehensive system view.

You need both; each is important for monitoring and managing the performance and health of software applications.

This is why N|Solid is not only an APM but also has observability within its functionalities. And now, with the implementation of ML and SBOM, it goes beyond APM and supports the growing discipline of AIOps.

AIOps: Fundamental concept in Modern IT Operations

AIOps (Artificial Intelligence for IT operations) is an approach to IT operations that leverages machine learning and artificial intelligence to automate and optimize IT tasks. AIOps aims to enhance the efficiency and effectiveness of IT operations by leveraging the vast amount of data generated by various IT systems and applications.

Observability refers to IT teams’ ability to observe and understand the behavior of complex systems in real-time, using a combination of monitoring, logging, and analytics tools.

AIOps and observability enable IT teams to proactively monitor and manage IT systems, applications, and infrastructure, allowing them to identify and resolve issues quickly. AIOps uses machine learning and AI algorithms to identify patterns in large amounts of data, while observability provides the visibility and context needed to understand the behavior of complex systems.

Modern Observability in Place

Support for open-source tracing tools and standards like OpenTelemetry facilitates team collaboration in resolving issues. OpenTelemetry is the second most active CNCF project, behind only Kubernetes, showcasing its importance to the industry.

Image: Twitter, Michael Haberman @hab_mic

Following this standard, N|Solid (since N|Solid 4.8.0) supports OTEL:
– Implements the OpenTelemetry Trace API, allowing users to use the de-facto standard API to instrument their code.
– Supports many instrumentation modules available in the OpenTelemetry ecosystem and supports exporting traces using the OpenTelemetry Protocol (OTLP) over HTTP.
– With this feature, it is now possible to send N|Solid runtime monitoring information (metrics and traces) to backends supporting the OpenTelemetry standard, like multiple APMs (Dynatrace, Datadog, New Relic, etc.).

Additionally, we included OTEL in the ‘APM performance dashboard,’ an open-source tool we released to the community, enabling developers and organizations to understand the impact of APM tools’ performance.

Img. APM’s Performance Dashboard View

Recent enhancements to the tool include the following:

Updated the data with N|Solid 4.8.0 -> 16.16.0 and 14.20.0
Added a few new tests, especially with different solutions for GraphQL.
Added more APMs: OpenTelemetry, AppDynamics.

Added testing of N|Solid against Datadog, Dynatrace, and NewRelic.

Do you want to implement OTel in your Node.js application?

Enriching telemetry data with metadata is an important aspect of observability, and OpenTelemetry provides a flexible and extensible framework for doing so. However, there can be challenges in implementing this in practice, especially when dealing with multiple tools and technologies.

One approach to addressing this challenge is to use a centralized configuration management tool to ensure consistency in metadata enrichment across your observability stack. Please review the following articles for an accurate guide to implementing OpenTelemetry in your project.

Enhance Observability with Opentelemetry tracing – Part 1

Instrument your Nodejs Applications with Open Source Tools – Part 2

However, if you want this implementation out of the box, along with other useful features, we invite you to try N|Solid.

Conclusion

N|Solid Supports OpenTelemetry Features, Integrates SBOM and ML at its Core.

By supporting OpenTelemetry features, N|Solid provides seamless integration with this framework, enabling customers to understand their applications, infrastructure behavior, and performance. This integration enhances the ability of developers and operators to troubleshoot issues, identify bottlenecks, and optimize application performance.

N|Solid’s integration of Software Bill of Materials (SBOM) provides a comprehensive list of all software components used in an application, including open-source libraries and dependencies, which helps organizations to mitigate security risks and ensure compliance with regulations. By integrating SBOM at its core, N|Solid provides organizations with an efficient and reliable way to manage the security and compliance of their software applications.

Finally, N|Solid’s integration of machine learning (ML) at its core is another critical feature that helps to identify patterns and anomalies in data, allowing developers and operators to gain insights that are not easily detectable using traditional monitoring tools. This integration of ML at the core of N|Solid enables organizations to improve the overall reliability, performance, and security of their applications and services.

N|Solid’s support of OpenTelemetry features, integration of SBOM, and integration of ML at its core provides developers and operators with a comprehensive set of tools to manage and optimize their applications and infrastructure, making N|Solid a valuable platform in the modern software development and operations landscape.

Ready to connect?

If you want to know more about our APMs Benchmark project and get the most out of your Node.js application, read this incredible article by our VP of Engineering, Adrián Estrada, ‘In-depth analysis of the APMs performance cost in Node.js’.

We also invite you to use the ✨APM’s Performance Dashboard✨ here:
Read the full blog post here: https://nsrc.io/4xFaster
Contribute here: https://github.com/nodesource/node-APMs-benchmark
If you have any questions, please contact us at [email protected] or through this form.

Experience the Benefits of N|Solid’s Integrated Features
Sign up for a Free Trial Today

To get the best out of Node.js and experience the benefits of its integrated features, including OpenTelemetry support, SBOM integration, and machine learning capabilities, sign up for a free trial and see how N|Solid can help you achieve your development and operations goals. #KnowyourNode

Instrument your Nodejs Applications with Open Source Tools – Part 2

As we mentioned in the previous article, at NodeSource we are dedicated to observability in our day-to-day work, and we know that a great way to extend our reach and interoperability is to include the OpenTelemetry framework as a standard in our development flows. In the end, our vision is to achieve high-performance software, and that is how we want to accompany developers on the journey of their Node.js-based applications.

With this in mind, we know that understanding the basics was very important to grasp the standard and its scope, but it is also necessary to put it into practice: how do you integrate OpenTelemetry into your application? Although NodeSource has direct integration in its product, in addition to more than 10 key functionalities in N|Solid that extend the offer of a traditional APM, we are, as you know, great contributors to the Open Source project. We also support the binary distributions of the Node.js project; our DNA is about helping the community and showing you how, through Open Source tools, you can still increase visibility. So in this article, we want to share how to set up OpenTelemetry with Open Source tools.

In this article, you will find __How to Apply the OpenTelemetry OS framework in your Node.js Application__, which includes:

Step 1: Export data to the backend
Step 2: Set up the OpenTelemetry SDK
Step 3: Inspect Prometheus to review we’re receiving data
Step 4: Inspect Jaeger to review we’re receiving data
Step 5: Getting deeper at Jaeger 👀

Note: This article is an extension of our talk at NodeConf.EU, where we had the opportunity to share the talk:

__Dot, line, Plane Trace!__
__Instrument your Node.js applications with Open Source Software__
Get insights into the current state of your running applications/services through OpenTelemetry. It has never been as easy as now to collect data with Open Source SDKs and tools that will help you extract metrics, generate logs and traces, and export this data in a standardized format to be analyzed using best practices. In this talk, we’ll show how easy it is to integrate OpenTelemetry in your Node.js applications and how to get the most out of it using Open Source tools.

To see the talks from this incredible conference, you can watch all sessions through live-stream links below 👇
– Day 1️⃣ – https://youtu.be/1WvHT7FgrAo
– Day 2️⃣ – https://youtu.be/R2RMGQhWyCk
– Day 3️⃣ – https://youtu.be/enklsLqkVdk

Now we are ready to start 💪 📖 👇

Apply the OpenTelemetry OS framework in your Node.js Application

So, going back to the distributed example we described in our previous article, here we can see what the architecture looks like after adding observability.

Every service will collect signals by using the OpenTelemetry Node.js SDK and export the data to specific backends so we can analyze it.

We are going to use the following:

JAEGER for Traces and Logs.

Prometheus to visualize the metrics.

_Note_: Jaeger and Prometheus are probably the most popular open-source tools in this space.

Step 1: Export data to the backend

How the data is exported to the backends differs:
To send data to __Jaeger__, we will use OTLP over HTTP, whereas for __Prometheus__, the data will be pulled from the services using HTTP.

First, we will show you how easy it is to set up the OpenTelemetry SDK to add observability to our applications.

Step 2: Set up the OpenTelemetry SDK

First, we have the providers in charge of collecting the signals, in our case __NodeTracerProvider__ for traces and __MeterProvider__ for metrics.
Then the exporters send the collected data to the specific backends.
The Resource contains attributes describing the current process, in our case, __ServiceName__ and __ContainerId__. The names of these attributes are well defined by the spec (in the __semantic_conventions__ module) and will allow us to differentiate where a specific signal comes from.

So to set up traces and metrics, the process is basically the same: we create the provider passing the Resource, then register the specific exporter.

We also register instrumentations of specific modules (either core modules or popular userspace modules), which provide automatic Span creation of those modules.

Finally, the only important thing to remember is that we need to initialize OpenTelemetry before our actual code; the reason is that these instrumentation modules (in our case, for __http__ and __fastify__) __monkeypatch__ the modules they’re instrumenting.

Also, we create the __meter instruments__ because we will use them on every service: an __HTTP request counter__ and a couple of observable gauges for __CPU usage__ and __ELU usage__.
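The original post showed this setup as images; the sketch below is our approximation (package names and method signatures are assumptions based on OpenTelemetry JS SDK 1.x releases from around that time, and the service name is illustrative), combining the tracer/meter providers, exporters, Resource, http instrumentation, and the meter instruments described above:

```js
// Approximate setup (APIs per OpenTelemetry JS SDK 1.x; adjust to your versions).
const { NodeTracerProvider } = require('@opentelemetry/sdk-trace-node');
const { BatchSpanProcessor } = require('@opentelemetry/sdk-trace-base');
const { OTLPTraceExporter } = require('@opentelemetry/exporter-trace-otlp-http');
const { PrometheusExporter } = require('@opentelemetry/exporter-prometheus');
const { MeterProvider } = require('@opentelemetry/sdk-metrics');
const { Resource } = require('@opentelemetry/resources');
const { SemanticResourceAttributes } = require('@opentelemetry/semantic-conventions');
const { registerInstrumentations } = require('@opentelemetry/instrumentation');
const { HttpInstrumentation } = require('@opentelemetry/instrumentation-http');
const { performance } = require('perf_hooks');

// Resource attributes (service name, etc.) let us tell the services apart.
const resource = new Resource({
  [SemanticResourceAttributes.SERVICE_NAME]: 'api', // illustrative service name
});

// Traces: collected by the NodeTracerProvider and pushed to Jaeger via OTLP/HTTP.
const tracerProvider = new NodeTracerProvider({ resource });
tracerProvider.addSpanProcessor(new BatchSpanProcessor(new OTLPTraceExporter()));
tracerProvider.register();

// Metrics: collected by the MeterProvider and scraped by Prometheus over HTTP.
const meterProvider = new MeterProvider({ resource });
meterProvider.addMetricReader(new PrometheusExporter({ port: 9464 }));

// Automatic span creation; must run before the application code is loaded,
// because the instrumentations monkeypatch the modules they instrument.
registerInstrumentations({ instrumentations: [new HttpInstrumentation()] });

// Meter instruments used by every service: a request counter and an ELU gauge
// (a CPU usage gauge would be registered the same way).
const meter = meterProvider.getMeter('demo-service');
const requestCounter = meter.createCounter('http_request_count');
const eluGauge = meter.createObservableGauge('thread_elu');
eluGauge.addCallback((result) => {
  result.observe(performance.eventLoopUtilization().utilization);
});

module.exports = { requestCounter };
```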

So let’s spin up the application now and send a request to the API. It returns a 401 Unauthorized. Before trying to figure out what’s going on, let’s see if Prometheus and Jaeger are actually receiving data.

Step 3: Inspect Prometheus to review we’re receiving data

Let’s look at Prometheus first:
Looking at the HTTP requests counter, we can see there are 2 data points: one for the __API service__ and another for the __AUTH service__. Notice that the data we had in the Resource is __service_name__ and __container_id__. We can also see that __process_cpu__ is collecting data for the 4 services. The same is true for __thread_elu__.

Step 4: Inspect Jaeger to review we’re receiving data

Let’s look at Jaeger now:
We can see that one trace corresponding to the __HTTP request__ has been generated.

Also, look at this chart where the points represent traces, the X-axis is the timestamp, and the Y-axis is the duration. If we inspect the trace, we can see it consists of 3 spans, where every span represents an __HTTP transaction__ and has been automatically generated by the instrumentation-http module:

The 1st span is an HTTP server transaction in the API service (the incoming HTTP request).
The 2nd span represents a POST request to AUTH from API.
The 3rd one represents the incoming HTTP POST in AUTH. If we inspect this last span a bit, apart from the typical attributes associated with the request (HTTP method, request_url, status_code…), we notice something extra.

There’s a Log associated with the Span. This makes it very useful, as we can know exactly which request caused the error. By inspecting it, we found out that the reason for the failure was a missing auth token.

This piece of information wasn’t generated automatically, though, but it’s very easy to do. So in the verify route from the service, in case there’s an error verifying the token, we retrieve the active span from the current context and just call __recordException()__ with the error. As simple as that.
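As a rough sketch (the route helper and token validation are hypothetical; only the active-span lookup and __recordException()__ come from the OpenTelemetry API), that error handling could look like this:

```js
// Minimal sketch: record the verification error on the active span so the
// resulting Log shows up on that Span in Jaeger. `validateToken` is hypothetical.
const { trace } = require('@opentelemetry/api');

function verify(req) {
  try {
    return validateToken(req.headers.authorization);
  } catch (err) {
    const span = trace.getActiveSpan();
    if (span) span.recordException(err);
    throw err;
  }
}
```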

Well, so far, so good. Knowing what the problem is, let’s add the auth token and check if everything works:

curl http://localhost:9000/ -H "Authorization: Bearer eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzI1NiJ9.eyJpc3MiOiIiLCJpYXQiOjE2NjIxMTQyMjAsImV4cCI6MTY5MzY1MDIyMCwiYXVkIjoid3d3LmV4YW1wbGUuY29tIiwic3ViIjoiIiwibGljZW5zZUtleSI6ImZmZmZmLWZmZmZmLWZmZmZmLWZmZmZmLWZmZmZmIiwiZW1haWwiOiJqcm9ja2V0QGV4YW1wbGUuY29tIn0.PYQoR-62ba9R6HCxxumajVWZYyvUWNnFSUEoJBj5t9I"

Ok, now it succeeded. Let’s look at Jaeger again: we can see the new trace here, it contains 7 spans, and no error was generated.

Now, it’s time to show one very nice feature of Jaeger. We can compare both traces: the Spans that are equal appear in grey, and the Spans that are new appear in green. So just by looking at this overview, we can see that if we’re correctly authorized, the API sends a GET request to SERVICE1, which then performs a couple of operations against POSTGRES. If we inspect one of the POSTGRES spans (the query), we can see useful information there, such as the actual QUERY. This is possible because we have registered the instrumentation-pg module in SERVICE1.

And finally, let’s do a more interesting experiment. We will inject load into the application for 20 seconds with autocannon…

If we look at the latency chart, we see some interesting data: up until at least the 90th percentile, the latency is basically below 300ms, whereas starting from around the 97.5th percentile, it goes up a lot, to more than 3 seconds. This is unacceptable 🧐. Let’s see if we can figure out what’s going on 💪.

Step 5: Getting deeper at Jaeger 👀

Looking at Jaeger and limiting this to around 500 spans, we can see that the graph here depicts what the latency chart showed: most of the requests are fast, whereas there are some significant outliers.

Let’s compare one of the fast traces with a slow one. In the slow trace, in addition to querying the database, we can see that SERVICE1 sends a request to SERVICE2. That’s useful info for sure. Let’s take a closer look at the slow trace.

In the __Trace Graph view__, every node represents a Span, and on the left-hand side, we can see the percentage of time, with respect to the total trace duration, taken by the subgraph that has this node as its root. By inspecting this, we can see that the branch representing the HTTP GET from SERVICE1 to SERVICE2 takes most of the time of the span. So it seems the main suspect is SERVICE2. Let’s take a look at the Metrics now; they might give us more information. If we look at thread.elu, we can see that for SERVICE2, it went to 100% for some seconds. This would explain the observed behavior.

So now, going to the SERVICE2 code route, we can easily spot the issue. We were performing a __Fibonacci operation__. Of course, this was easy to spot as this is a demo, but in real scenarios, this would not be so simple, and we would need some other methods, such as CPU Profiling, but regardless, the info we collected would help us narrow down the issue quite significantly.

So, that’s it for the demo. We’ve created a repo where you can access the full code, so go play with it! 😎

Main Takeaways

Finally, we just want to share the main takeaways about implementing observability with Open Software Tools:

Setting up observability in our Node.js apps is actually not that hard.
It allows us to observe requests as they propagate through a distributed system, giving us a clear picture of what might be happening.
It helps identify points of failure and causes of poor performance (in some cases, other tools might also be needed: CPU profiling, heap snapshots).
Adding observability to our code, especially tracing, comes with a cost, so be cautious! ☠️ But we are not going to go deeper into this, as it could be a topic for another article.

Before you go

If you’re looking to implement observability in your project professionally, you might want to check out N|Solid and our ’10 key functionalities’. We invite you to follow us on Twitter and keep the conversation going!

Enhance Observability with Opentelemetry tracing – Part 1

Recently, conversations have been increasing around OpenTelemetry; it is gaining more and more momentum in Node.js development circles, but what is it? How can we take advantage of the key concepts and implement them in our projects?

Of note, NodeSource is a supporter of OpenTelemetry, and we have recently implemented full support of the open-source standard in our product N|Solid. It allows us to make our powerful Node.js insights accessible via the protocol.

OpenTelemetry is a relatively recent, vendor-agnostic emerging standard that began in 2019 when OpenCensus and OpenTracing combined to form OpenTelemetry, seeking to provide a single, well-supported integration surface for end-to-end distributed tracing telemetry. In 2021, they released v1.0.0, offering stability guarantees for the approach.

Most importantly, OpenTelemetry is an open-source observability project/framework with a collection of software development kits (SDKs), APIs, and tools for instrumentation, from the Cloud Native Computing Foundation (CNCF).

W3C Trace Context is the standard format for OpenTelemetry. Cloud providers are expected to adopt this standard, providing a vendor-neutral way to propagate trace IDs through their services. Organizations use OpenTelemetry to send collected telemetry data to a third-party system for analysis.

But to break down its history a bit, we think it’s important to understand the concept of __Observability__.

At NodeSource, as you likely know, we work daily in Observability, focusing exclusively on the Node.js runtime, and N|Solid (since version 4.8.0) supports some OpenTelemetry features. But before getting deeper into OTEL, it is important to understand Observability and to try to resolve this important question: What is Observability?

Setting the foundations to talk about OpenTelemetry

It’s important to understand that when we talk about Observability, we need first to know what questions we seek to answer or clarify when detailing a system.

The first question often asked is __why__ our application behaves in a specific way. To answer this and other questions, we first must instrument our system so that our application can emit signals, that is, traces, metrics, and logs. When we do this correctly, we have the information we need.

Observability is the ability to measure the internal states of a system by examining its outputs. – Splunk

Detailing a system through Data Collection: Telemetry Data

Your systems and apps need proper tooling to collect the appropriate telemetry data to achieve Observability.
But what is the telemetry data that we need?

The three key concepts are:

Metrics

Logs

Traces

Ok, let’s define each of these concepts:

Metrics

__Metrics__: aggregations over a period of time of numeric data about your infrastructure or application. Examples include system error rate, CPU utilization, and request rate for a given service.

As quoted by isitobservable.io, OpenTelemetry has three metric instruments:

__Counter__: a value that is summed over time (similar to the Prometheus counter)
__Measure__: a value that is aggregated over time (a value over some defined range)
__Observer__: captures a current set of values at a given time (like a gauge in Prometheus)

The context is still very important, along with metric information like name, description, unit, kind (counter, observer, measure), label, aggregation, and time.
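To make these instruments concrete, here is a minimal sketch using the OpenTelemetry JavaScript SDK (package names and instrument APIs are assumptions based on current SDK releases, where the older ‘Measure’ roughly maps to a Histogram and ‘Observer’ to an ObservableGauge):

```js
// Minimal sketch of the three instrument kinds with the OpenTelemetry JS SDK
// (no exporter attached here; in practice a MetricReader would export these).
const { MeterProvider } = require('@opentelemetry/sdk-metrics');

const meter = new MeterProvider().getMeter('example-meter');

// Counter: a value summed over time.
const errorCounter = meter.createCounter('system_error_count');
errorCounter.add(1, { route: '/login' });

// "Measure" (a Histogram in current SDKs): values aggregated over time.
const latency = meter.createHistogram('request_duration_ms');
latency.record(42, { route: '/login' });

// "Observer" (an ObservableGauge in current SDKs): reads a current value.
const heapGauge = meter.createObservableGauge('heap_used_bytes');
heapGauge.addCallback((result) => result.observe(process.memoryUsage().heapUsed));
```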

Logs

__Logs__: A Log is a timestamped message emitted by services or other components. They are not necessarily associated with any particular user request or transaction, but they become more valuable when they are.

The logical line would tell us that here we must jump to traces because it is part of the three key concepts. But before defining what a trace is, we must zoom in on the concept of __Span__.

Span

__Span__: A Span represents a unit of work or operation. It tracks specific operations that a request makes, painting a picture of what happened during the time in which that operation was executed.

A span is the building block of a trace and is a named, timed operation representing a piece of the workflow in the distributed system. All traces are composed of Spans.

Traces

__Traces__: A Trace records the paths taken by requests (made by an application or end-user) as they propagate through multi-service architectures, like microservice and serverless applications. It is also known as a Distributed Trace. A trace is almost always an assessment of end-to-end performance.

Without tracing, it is challenging to pinpoint the cause of performance problems in a distributed system.

You may have noticed that we broke away from the three pillars of Observability when introducing the concept of Span; however, the three pillars plus Spans make up what is known as __Telemetry Data__: simple __signals emitted from applications and resources about their internal state__.

The core concept of Context Propagation

When we want to correlate events across our services’ boundaries, we look for a context that helps us identify the current trace and Span. But context is not the only thing we need; we also need __propagation__.

If you are with us following the article carefully, you will realize that in the definition of trace, we talk about the word __‘propagation’__. You might wonder what this means.

Propagation is how context is bundled and transferred in and across services, often via HTTP headers. Now, with these concepts clear, we can begin to understand __Context Propagation__.

A critical functionality required to implement Distributed Tracing is the concept of Context Propagation. We can define it as a mechanism for storing state and accessing data across the lifespan of a distributed transaction, either across execution contexts inside a process or across the boundaries of the services that conform to our system.

For __In-Process propagation__, we typically use something like the __AsyncLocalStorage__ class from the async_hooks module.

Whereas __Across Processes__, it will depend on the IPC protocol used. For example, for HTTP, there’s the Trace Context specification from the __W3C__, which defines the __traceparent__ and __tracestate__ headers to propagate tracing info.
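As a small illustration of the in-process case (the field names are ours, and real OpenTelemetry code would use its context API rather than a hand-rolled store), AsyncLocalStorage lets a request’s trace context follow its asynchronous work:

```js
// Minimal sketch: in-process context propagation with AsyncLocalStorage, the
// same primitive OpenTelemetry's Node.js context manager builds on.
const { AsyncLocalStorage } = require('async_hooks');
const http = require('http');

const als = new AsyncLocalStorage();

function doWork(res) {
  // Any async continuation can read the active context without passing it around.
  const { traceparent } = als.getStore();
  setImmediate(() => res.end(`handled within trace ${traceparent || '(none)'}`));
}

http.createServer((req, res) => {
  // Store the incoming W3C trace context for everything this request triggers.
  als.run({ traceparent: req.headers['traceparent'] }, () => doWork(res));
}).listen(3000);
```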

Getting into a Distributed Application

Let’s say we have a distributed application like the one in the picture. It has 4 Node.js services: API, AUTH, SERVICE1, and SERVICE2, plus 1 database.

Imagine we’re having intermittent performance issues. They could come from several points:
– Database access
– Network link status
– DNS request latency, etc.
Finding exactly where may become a very hard and time-consuming task; the more complex the system, the harder it gets.

Distributed tracing will help us A LOT with that, as we’ll generate tracing information on every point of the distributed system (A, B, C, D, and E). Not only that, but while the request goes through all the services, thanks to Context Propagation, some ‘tracing state’ will be passed along so all the tracing info can be linked to the very same request.

Instrument your system

To get visibility into the performance and behaviors of the different microservices, we need to instrument the code with OpenTelemetry to generate traces. But first, let’s define what Instrumentation is…

Automatic Instrumentation

With __Automatic Instrumentation__, our instrumentation libraries will automatically take the configuration provided (through code or environment variables) and do most of the work.

In the following example, using the OpenTelemetry SDK, we show how we can automatically generate spans for every HTTP transaction handled by the Node.js http core module.
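Since the original post showed this example as an image, here is a comparable sketch (package names are assumptions based on the OpenTelemetry JS SDK; the exporter simply prints spans to the console):

```js
// Minimal sketch: automatic span creation for the core http module.
const { NodeTracerProvider } = require('@opentelemetry/sdk-trace-node');
const { SimpleSpanProcessor, ConsoleSpanExporter } = require('@opentelemetry/sdk-trace-base');
const { registerInstrumentations } = require('@opentelemetry/instrumentation');
const { HttpInstrumentation } = require('@opentelemetry/instrumentation-http');

const provider = new NodeTracerProvider();
provider.addSpanProcessor(new SimpleSpanProcessor(new ConsoleSpanExporter()));
provider.register();

// Every incoming/outgoing http transaction now gets a span automatically.
registerInstrumentations({ instrumentations: [new HttpInstrumentation()] });

require('http')
  .createServer((req, res) => res.end('ok'))
  .listen(3000);
```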

Manual Instrumentation

__Manual instrumentation__, on the other hand, while requiring more work on the user/developer side, enables far more options for customization, from naming various components within OpenTelemetry (for example, spans and traces) to adding your own attributes, specific exception handling, and more. The following example shows how to manually generate a Span using the OpenTelemetry SDK.
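Again, the original example was an image; a comparable sketch with the OpenTelemetry API (the tracer name and the business logic are hypothetical) looks like this:

```js
// Minimal sketch: manually creating a span around a unit of work.
const { trace, SpanStatusCode } = require('@opentelemetry/api');

const tracer = trace.getTracer('checkout-service'); // hypothetical tracer name

async function processOrder(order) {
  return tracer.startActiveSpan('process-order', async (span) => {
    try {
      span.setAttribute('order.id', order.id);
      await chargeCustomer(order); // hypothetical business logic
      return 'ok';
    } catch (err) {
      span.recordException(err);
      span.setStatus({ code: SpanStatusCode.ERROR });
      throw err;
    } finally {
      span.end();
    }
  });
}
```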

How to Implement Opentelemetry in my project?

The way we historically would implement a typical observability pipeline is shown in the following picture.

In this case, having all that data at your disposal is great and can give us a valuable overview of our system, but unless we are able to correlate the observability signals (metrics, logs, and traces) somehow, we won’t be able to get the best out of it.
OpenTelemetry comes to solve this problem. The solution is going to come from correlating these signals. This can be done by applying the same concept of Context Propagation that was used for Traces to Metrics and Logs, so in this case, identifiers such as the trace_id and the span_id are associated with those signals.

OpenTelemetry spanId and traceId can correlate Logs and Metrics with a specific Span in a Trace.
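As an illustration of that correlation (the logging format is ours, not part of the spec), a log line can be enriched with the active span’s identifiers using the OpenTelemetry API:

```js
// Minimal sketch: attach trace_id and span_id from the active Span to a log
// entry so logs can later be correlated with the corresponding trace.
const { trace } = require('@opentelemetry/api');

function logWithTraceContext(message) {
  const spanContext = trace.getActiveSpan()?.spanContext();
  console.log(JSON.stringify({
    message,
    trace_id: spanContext?.traceId,
    span_id: spanContext?.spanId,
  }));
}

logWithTraceContext('user authenticated');
```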

Opentelemetry Components

OpenTelemetry is much more… Before finishing this article, it is important to describe the different components of OpenTelemetry.

Note: For more detail, read the specification overview of the OpenTelemetry project.

OpenTelemetry API: It provides an API, which defines data types and operations for generating and correlating tracing, metrics, and logging data.

💚 From N|Solid 4.8.0 we provide an implementation of the OpenTelemetry TraceAPI, allowing users to instrument their own code using the de-facto standard API.

OpenTelemetry SDK: It provides Language-specific implementations of the API.

OpenTelemetry OTLP: A protocol to transport the Telemetry Data.

💚 With N|Solid 4.8.0, we support many instrumentation modules available in the OpenTelemetry ecosystem, and we support exporting traces using the OpenTelemetry Protocol (OTLP) over HTTP.

OpenTelemetry Collector: To receive, process, and export Telemetry data.

💚 In N|Solid 4.8.0, it is now possible to send N|Solid Runtime monitoring information (metrics and traces) to backends supporting the OpenTelemetry standard, like multiple APMs (Dynatrace, Datadog, New Relic).

OpenTelemetry Semantic Conventions: To have well-defined naming for the attributes associated with the signals (service.name, http.port, etc.).

We know that there may be other key concepts to develop around OpenTelemetry, and for this reason, we invite you to visit the project’s website or the GitHub repo directly.

This introductory article gives us the basis for sharing a demo we prepared for NodeConf.EU, where we apply open-source tools to implement Opentelemetry in your project. We invite you to stay tuned for our next blog post. 😉 Wait for the second part!

Conclusions

Traces are really useful for understanding modern distributed systems.
We build better software when we get the best of our traces.
With OTel (OpenTelemetry), we’re able to maximize insights and answer future questions without having to make any code changes.
#OTel provides interoperability with observability tools.
Collecting and correlating telemetry data is easy if you follow the OpenTelemetry framework.
As far as we know, the OpenTelemetry community is working hard to develop support for metrics and logs. Waiting for news soon! 🤞
Note: If you want to learn more about OpenTelemetry in JavaScript, click HERE

To start getting more value out of your traces and metrics, you can use OpenTelemetry with the N|Solid back-end.

Achieve Your Performance Goals With N|Solid

We know that you want to get the best out of your application, and to do it professionally, you will surely need a great ally to help you with various tools without affecting your performance. We do not want to stay in a ‘marketing speech’ where we tell you that we are the best… you can 👀 check it directly here with this Open Source tool that also includes OTEL results.

We’d love to hear more from you! 💚
– Feel free to TRY N|Solid and get in touch with us on Twitter at @nodesource.