Introducing N|Solid Copilot: Your AI-Powered Node.js Navigator

We are thrilled to announce the latest addition to N|Solid Pro – the N|Solid Copilot, a groundbreaking AI-powered assistant designed to revolutionize your Node.js development experience. This innovative tool is a leap forward in Node.js application observability and security: it's like having a Node.js expert on call.

View of N|Solid Pro Console with the Copilot drawer open allowing a user to interact with the AI Assistant.

Why N|Solid Copilot?

N|Solid Copilot is developed with one goal in mind: to make your life as a Node.js developer or DevOps engineer easier, more efficient, and more secure. It’s like having a Node.js expert by your side, 24/7, offering real-time insights into observability alerts, along with actionable advice tailored to your unique application needs.

Key Features of N|Solid Copilot

Real-time analysis and insights: Identify and resolve performance bottlenecks, memory leaks, and other critical issues. Analyze metrics like CPU usage, event loop utilization, and more.
Anomaly detection and remediation: Utilizing the platform and NodeSource’s ML algorithms, the Copilot can detect anomalous behavior in both application performance and security, as well as identify solutions.
Security vulnerability identification and resolution: N|Solid Pro is continuously scanning for security vulnerabilities within the application code and 3rd party dependencies. Users can ask our Copilot about recommendations and solutions.
Code optimization suggestions: Given its training in Node.js, the AI can offer suggestions to optimize code for better performance and efficiency. This can include advice on asynchronous programming patterns, memory management, or the use of specific Node.js features.
Interactive querying: Users can interact with the platform in a conversational manner to query specific application metrics or request insights on performance and security aspects. These queries can be general or specific to the data generated in production.
Knowledge sharing: Users can gain knowledge about how to use N|Solid and implement Node.js best practices, creating a better model for users to get up to speed quickly on the platform.

Using N|Solid Copilot to triage security issues through predefined prompts or user questions.

Experience the Future of Node.js Development Powered by AI

N|Solid Copilot isn’t just a tool; it’s your partner in developing and maintaining great software. Whether you’re debugging a tricky issue, seeking performance improvements, or ensuring your application’s security, N|Solid Copilot is there to guide you every step of the way.

How to Get Started?

Sign Up: Simply sign up for a free SaaS account on our website.
Integrate: Seamlessly integrate N|Solid Copilot with your existing Node.js applications.
Navigate: Let N|Solid Copilot guide your development journey with unparalleled insights and assistance.

We believe N|Solid Copilot will not just change how you work with Node.js; it will transform it. Sign up today and be part of this exciting journey!

Connect with us on Twitter @NodeSource and LinkedIn to stay updated with the latest from N|Solid.

See How Much Your APM is Costing You to Monitor Node.js Apps

We are excited to share the release of our new Cost Calculator to showcase just how much the wrong APM provider can add to your cloud hosting costs (try it now). Observability is vital, but it comes with computational overhead that shares the same infrastructure as your application. This is compounded in typical Node.js APM tooling due to the internal workings of Node.js itself. We are performance junkies at NodeSource, so observability without overhead was our first and foremost goal with the original architecture of the N|Solid Runtime. (Of course we didn’t stop there and also provide the deepest insights into your application.) With our new Cost Calculator, you can see just how much using the wrong APM tool can hurt. (WARNING: you may be shocked at the difference!).

As you can see…the difference can be shocking. At NodeSource before anything else we’re Node.js developers. We were tired of being burnt ourselves by the additional overhead costs of observability and we kept seeing it over and over for our customers as well. So we decided to build and open source a benchmarking tool (https://benchmark.nodesource.com) to raise awareness of the issue. With it you can compare throughput and other differences between common Node.js observability tooling options.

With the APM performance dashboard we can see just how much APMs impact application performance across a number of areas, and choosing incorrectly can reduce the potential throughput of your application by tens of thousands of requests per second. For more information about why this happens, check out this article by our VP of Engineering Adrián Estrada who provides a comprehensive analysis.

📗 Read the full blog post here: In-depth Analysis of the Performance Costs of APMs in Node.js.

How to use the Node Observability Cost Calculator

It's really easy to use: simply select an Observability Provider (AppDynamics, Datadog, Dynatrace, Instana, or New Relic), then your cloud provider (AWS, Azure, or GCP). From the Infrastructure Service dropdown, select the service type and then choose from a list of options. Finally, enter the number of processes you are monitoring and see the savings!

Why APM performance matters, beyond the cost savings

Our Cost Calculator should quickly show you just how much you could save by using N|Solid over the competition without giving up observability. It’s easy to overlook the fact that your APM tooling is sharing the same processing time as your application and slowing it down. A big reason why developers and organizations love Node is for its performance, so why add overhead that slows it down? Why pay twice for observability?

(BTW – OpenTelemetry adds significant overhead too; we included it in our benchmark.)

The great news is you don't have to! Add N|Solid to your stack today and begin getting the best observability tooling (plus application security monitoring with NCM – Node Certified Modules) with the least overhead. Start here for FREE.

🛠️ Check the Infrastructure Cost Calculator today! – Infrastructure cost calculator – Nodesource
Review the ✨APM Performance Dashboard✨- https://benchmark.nodesource.com
💚 Contribute here: https://github.com/nodesource/node-APMs-benchmark

Announcing N|Solid v4.9.5

NodeSource is excited to announce N|Solid v4.9.5 which contains the following changes:

General stability improvements and bug fixes.
Node.js v16.20.1 (LTS): Includes a Rebase of N|Solid on Node.js v16.20.1 (LTS).
Node.js v18.16.1 (LTS): Includes a Rebase of N|Solid on Node.js v18.16.1 (LTS).

For detailed information on installing and using N|Solid, please refer to the N|Solid User Guide.

Changes

N|Solid v4.9.5 includes patches for this vulnerability:

CVE-2022-25883: Versions of the package semver before 7.5.2 are vulnerable to Regular Expression Denial of Service (ReDoS) via the function new Range, when untrusted user data is provided as a range.

There are two available LTS Node.js versions for you to use with N|Solid, Node.js 16 Gallium and Node.js 18 Hydrogen.

N|Solid v4.9.5 Gallium ships with Node.js v16.20.1.

N|Solid v4.9.5 Hydrogen ships with Node.js v18.16.1.

The Node.js 16 Gallium LTS release line will continue to be supported until September 11, 2023.

The Node.js 18 Hydrogen LTS release line will continue to be supported until April 30, 2025.

Supported Operating Systems for N|Solid Runtime and N|Solid Console

Please note that the N|Solid Runtime is supported on the following operating systems:

Windows:
Windows 10
Microsoft Windows Server 1909 Core
Microsoft Windows Server 2012
Microsoft Windows Server 2008
macOS:
macOS 10.11 and newer
RPM based 64-bit Linux distributions (x86_64):
Amazon Linux AMI release 2015.09 and newer
RHEL7 / CentOS 7 and newer
Fedora 32 and newer
DEB based 64-bit Linux distributions (x86_64, arm64 and armhf):
Ubuntu 16.04 and newer
Debian 9 (stretch) and newer
Alpine
Alpine 3.3 and newer

Download the latest version of N|Solid

You can download the latest version of N|Solid via http://accounts.nodesource.com or visit https://downloads.nodesource.com/ directly.

New to N|Solid?

If you’ve never tried N|Solid, this is a great time to do so. N|Solid is a fully compatible Node.js runtime that has been enhanced to address the needs of the Enterprise. N|Solid provides meaningful insights into the runtime process and the underlying systems. Click 👉 [HERE]

NodeSource, Inc. Announces AI Assistant “Adrian” for Comprehensive Analysis and Optimization of Node.js Applications and Open-Sourcing of its Augmented Node.js Runtime.

[Seattle, WA, June 28, 2023] — On stage at Collision Conf in Toronto, NodeSource, Inc., the leader in enterprise-grade solutions and support for Node.js, made two big announcements: the private beta of its groundbreaking AI Assistant, “Adrian,” designed to revolutionize the way developers and DevSecOps analyze, optimize, and secure Node.js applications, and that it’s open-sourcing its Node.js runtime to enable developers access to the most advanced runtime available.

NodeSource has been helping developers and organizations with the utilization of Node.js to build digital products and services for nearly a decade, most notably with its industry-leading product, N|Solid. Augmenting N|Solid’s unparalleled depth of insights and telemetry with AI gives customers a new level of context and understanding of performance and security analysis and how best to resolve issues.

The AI agent, Adrian, identifies memory leaks, poor code, security issues, and other performance problems that impact application performance and health.

"It's like 'god-mode' for Node," said Russ Whitman, CEO of NodeSource. "We give developers and DevSecOps teams much more than telemetry and alerts; we help them identify the real issues, with context, to help them solve them quickly. The cost and time savings are massive, and it lets developers focus on creating new features and adding value to the organization."

With the ever-increasing demand for scalable, efficient, and high-performing applications, Node.js developers face the constant challenge of optimizing their codebase to deliver exceptional user experiences. Adrian is an advanced AI-powered agent that provides actionable insights and suggestions, enabling teams to streamline their Node.js applications, reduce downtime, cut costs, and enhance overall user satisfaction.

"In the near future, the performance of every software development team will be transformed by AI-powered tools like N|Solid," offered Robert Duffy, Chief Product and Technology Officer at Drizly, an Uber company.

Key features of Adrian include:

Automated Metric Collection
Node Performance Enhancer
Intelligent CPU Profiling
Cost Calculator
Code Advisor

Sign up HERE to join the private beta and unlock the full potential of Adrian’s AI-driven insights and optimizations.

"Our AI Assistant is a major advancement over the AI features released in N|Solid last year, which showcased how we could leverage the combination of our unique data insights from N|Solid and the expertise of our team to provide advanced solutions for our customers," noted Adrian Estrada, VP of Engineering (and the naming inspiration for the assistant). "With recent advancements in Generative AI, we can go significantly beyond our expectations to bring new value to our customers."

NodeSource also has an exciting announcement for the developer community. In addition to the launch of Adrian, the company is open-sourcing its N|Solid Runtime, empowering all developers to utilize the best Node.js runtime available. This move aims to foster collaboration and innovation within the Node.js ecosystem, enabling developers worldwide to contribute to the ongoing advancement of Node.js technology.

“We strongly believe in the power of collaboration and open-source development,” added Trevor Norris, NodeSource’s Principal Architect. “By open-sourcing our runtime, we invite the community to join us in building a stronger, more efficient Node.js environment that benefits everyone. We are excited to see the positive impact this will have on the Node.js ecosystem as a whole.”

Big Announcements

The open-sourced version of N|Solid Runtime will be available with the release of Node 20 later this year.
NodeSource will also be offering a private beta program for Adrian; sign up for early access and to receive updates.

Look for more details at www.nodesource.com.

About NodeSource

NodeSource is a leading provider of Node.js application management solutions, like N|Solid, Node.js Support, and services, helping organizations successfully scale and secure their Node.js applications. Node Certified Modules (NCM) is a comprehensive tool that offers visibility, security, and governance for managing Node.js application dependencies. With its powerful features, NCM ensures that Node.js applications remain secure, reliable, and compliant with licensing requirements.

For media inquiries, please contact:
Brandi Duffy
[email protected]

Measure Node.js server response time with N|Solid

As software developers, we constantly face new challenges in an ever-changing ecosystem. However, we must always remember the importance of addressing performance and security concerns, which remain at the top of our priority list.

To ensure that our applications based on Node.js can meet our performance and scalability needs without compromising security or incurring costly infrastructure changes, we must be aware of the importance of network optimization in Node.js.

The Impact of Latency/Ping Time on the Performance and Speed of Your Node.js Application

IMG – Ping Cats – via GIPHY

Have you ever wondered how long it takes for your application to communicate with the server? This communication time, known as network ping time or latency, is a crucial factor that impacts the performance and speed of your application. Knowing how to measure network ping time between the browser and the server is essential for developers who want to optimize their applications and provide a better user experience.

Network Optimization in Node.js

To ensure the optimal performance and scalability of our Node.js applications, we must accurately measure our HTTP server’s connection and response time. Doing so enables us to identify and address potential bottlenecks without compromising security or incurring unnecessary infrastructure changes.

Before delving deeper into measuring connection and response time, let’s explore fundamental concepts and critical differentiators in the network landscape.

HTTP vs. WebSocket:

HTTP and WebSocket are communication protocols used in web development but serve different purposes. HTTP is a stateless protocol commonly used for client-server communication, while WebSocket enables full-duplex communication between clients and servers, allowing real-time data exchange.

Types of Connections and Versions:

When creating APIs, HTTP as a protocol and standard has different versions, such as HTTP 1.1 and 2.0. Additionally, APIs may use alternative protocols like gRPC, which offer different features and capabilities. Understanding these options empowers developers to choose the most suitable tools for their web servers.

TCP/IP Basics:

The Transmission Control Protocol (TCP) and Internet Protocol (IP) are fundamental protocols that form the backbone of computer networks. Among TCP’s critical processes is the three-way handshake, which plays a vital role in establishing a secure and dependable connection between two endpoints. This handshake ensures the orderly and reliable transmission of data. TLS/SSL encryption enhances security, adding an extra layer of protection to the communication between the client and the server.

HTTP vs. HTTPS:

HTTP operates over plain text, which exposes the data being transmitted to potential eavesdropping and tampering.
HTTPS, on the other hand, secures communication through the use of SSL/TLS encryption, providing confidentiality and integrity.
Understanding the trade-offs between HTTP and HTTPS is crucial to making informed data security decisions.

Building a Solid Foundation: Understanding the Three-Way Handshake for Reliable Connections

To evaluate the performance of our HTTP server, we need to differentiate between connection latency and server response time. Connection latency refers to the time it takes for the initial three-way handshake process to complete before data transmission can occur. On the other hand, server response time measures the duration from when the server receives a request to when it generates and sends the response back to the client.

The three-way handshake is a fundamental process in establishing a TCP (Transmission Control Protocol) connection between a client and a server in a network. As the name suggests, it involves three steps. This handshake establishes a reliable and ordered communication channel between the two endpoints.

Here’s a breakdown of the three steps involved in the three-way handshake:

SYN (Synchronize): The client initiates the connection by sending a SYN (synchronize) packet to the server. This packet contains a randomly generated sequence number to initiate the communication.
SYN-ACK (Synchronize-Acknowledge): Upon receiving the SYN packet, the server acknowledges the request by sending a SYN-ACK packet back to the client. The SYN-ACK packet includes its own randomly generated sequence number and an acknowledgment number equal to the client's sequence number plus one.
ACK (Acknowledge): Finally, the client sends an ACK (acknowledge) packet to the server, confirming receipt of the SYN-ACK packet. This packet also contains an acknowledgment number equal to the server's sequence number plus one.

Once this three-way handshake process is completed, the client and the server have agreed upon initial sequence numbers, and a reliable connection is established between them. This connection allows for data transmission with proper sequencing and error detection mechanisms, ensuring that the information sent between the client and server is reliable and accurate.

The three-way handshake is essential to establishing TCP connections and is performed before any data transmission can occur. It plays a critical role in ensuring the integrity and reliability of the communication channel, providing a solid foundation for subsequent data exchange between the client and server.
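To make the distinction concrete, here is a minimal sketch (not part of the original recipe) that uses only Node.js core modules to time the TCP connect separately from the rest of the request. It assumes a server is already listening on localhost:3000 and that a fresh socket is created for the request:

import http from "node:http";

const start = process.hrtime.bigint();
let connectedAt = start;

const req = http.get("http://localhost:3000/", (res) => {
  res.resume(); // drain the response body
  res.on("end", () => {
    const totalMs = Number(process.hrtime.bigint() - start) / 1e6;
    const connectMs = Number(connectedAt - start) / 1e6;
    console.log(`connection latency: ${connectMs.toFixed(1)} ms`);
    console.log(`server response time: ${(totalMs - connectMs).toFixed(1)} ms`);
  });
});

// 'connect' fires once the three-way handshake completes (new sockets only)
req.on("socket", (socket) => {
  socket.on("connect", () => {
    connectedAt = process.hrtime.bigint();
  });
});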

Create a self-serve diagnostic tool for a server-rendered page in Node.js.

The idea is to share an easy-to-follow recipe that will help you create your tool, so let’s start with the ingredients and end with the steps to create a self-serve diagnostic tool for a server-rendered page in Node.js.

Ingredients:

Node.js & NPM installation – https://nodejs.org/

Fastify.js – https://www.fastify.io/

Instructions:

1. Setup a Node.js Project
Use NPM to create your Node project:

$ mkdir diagnostic-tool-nodejs
$ cd diagnostic-tool-nodejs
$ npm init -y

2. Install your NPM packages.
We have Fastify in our recipe, so we must install it first:

$ npm i fastify

3. Create the index.mjs
Create an index.mjs file in the project's root directory and paste this Fastify HTTP server sample code:

import Fastify from "fastify";

const fastify = Fastify({
  logger: true,
});

// Randomly create a timer from 100ms up to X seconds
function timer(time) {
  return new Promise((resolve, reject) => {
    const ms = Math.floor(Math.random() * time) + 100;
    setTimeout(() => {
      resolve(ms);
    }, ms);
  });
}

// Declare the root route and delay the response randomly
fastify.get("/", async function (request, reply) {
  const wait = await timer(5000);
  return { delayTime: wait };
});

// Run the server!
fastify.listen({ port: 3000 }, function (err, address) {
  if (err) {
    fastify.log.error(err);
    process.exit(1);
  }
});

This will start the server on port 3000, which you can access by going to http://localhost:3000 in your web browser.

Integrate with N|Solid Console

Be sure you already have N|Solid installed and running on your environment; otherwise, go to https://downloads.nodesource.com and get the installer.

Alternatively, you can run the console using Docker instead of a local installation:

docker run -d -p 6753:6753 -p 9001:9001 -p 9002:9002 -p 9003:9003 nodesource/nsolid-console:hydrogen-alpine-latest

With the application already initialized with npm, Fastify installed, and our index.mjs in place, we can connect our process to N|Solid.

Run the HTTP server with the N|Solid Runtime, following the instructions on the main console page.

IMG – Connect N|Solid

In this case, we ran the process by passing the config via environment variables and running a local installation of the N|Solid Console.

NSOLID_APPNAME="NSOLID_RESPONSE_TIME_APP" NSOLID_COMMAND="127.0.0.1:9001" nsolid index.mjs

If you instead use our SaaS console, you need to set the NSOLID_SAAS environment variable instead of NSOLID_COMMAND.

NSOLID_APPNAME="NSOLID_RESPONSE_TIME_APP" NSOLID_SAAS="XYZ.prod.proxy.saas.nodesource.io:9001" nsolid index.mjs

After completing those steps, you should be able to watch the app and process connected to the console.

IMG – Connect N|Solid Process

GIF 1 – Connect N|Solid Process

Go to the application process and add the HTTP(S) Server 99th Percentile Duration metric to see the HTTP server latency (response time) in near-real time; we also have the HTTP(S) Request Median Duration metric.

GIF 2 – Monitor Process Metrics

After this, we should be able to generate some traffic and see how the response times behave with the sample code provided, which generates random response times from 100ms up to 5 seconds.

To generate the traffic, we can use autocannon

npx autocannon -d 120 -R 60 localhost:3000

After running autocannon for a few minutes, we can see the P99 and median metrics of the HTTP server and compare them.

IMG – http-latency-response-time-metrics

IMG – http-request-median-duration

IMG – p99-metric

To fully utilize the metrics provided by N|Solid, it is crucial to have a comprehensive understanding of their significance. Two critical metrics offered by N|Solid are the 99th Percentile and the HTTP Median metric. These metrics play a vital role in assessing the performance of Node.js applications in production environments. By getting deeper into their practical application and importance, we can unlock the actual value of these metrics in N|Solid and make informed decisions to optimize our production systems. Let’s explore this further.

The 99th Percentile metric

The 99th percentile is a statistical measure commonly used to analyze and understand response time or latency in a system.

Imagine you have a web application that handles incoming requests. To understand how fast the server responds, you measure the time it takes for each request and gather that data. You can find the 99th percentile response time by looking at the data.

For example, suppose the 99th percentile response time is 500 milliseconds.
This means that only 1% of the requests took longer than 500 milliseconds to get a response. In simpler terms, 99% of the requests were handled in 500 milliseconds or less, which is fast.

It helps you identify and address outliers or performance bottlenecks that affect only a small fraction of requests but can significantly impact user experience or system stability. Monitoring the 99th percentile response time helps you spot slow requests or performance issues that might affect only a few users but still need attention.

The HTTP median metric

The median represents a dataset's middle value when the values are sorted in ascending or descending order.

To illustrate the difference between the 99th percentile and the median, let’s consider an example. Suppose you have a dataset of response times for a web application consisting of 10 values:
[100ms, 150ms, 200ms, 250ms, 500ms, 600ms, 700ms, 800ms, 900ms, 1000ms].

The median response time would be the middle value when the dataset is sorted, which is the 5th value, 500ms. This means that 50% of the requests had a response time faster than 500ms, and the other 50% had a response time slower than 500ms.
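As a rough illustration, here is how the median and the 99th percentile could be computed from that same dataset using the nearest-rank method (a simple sketch, not N|Solid's internal implementation):

// Nearest-rank percentile over an array of response times in milliseconds
function percentile(samples, p) {
  const sorted = [...samples].sort((a, b) => a - b);
  const index = Math.ceil((p / 100) * sorted.length) - 1;
  return sorted[Math.max(0, index)];
}

const times = [100, 150, 200, 250, 500, 600, 700, 800, 900, 1000];
console.log("median:", percentile(times, 50), "ms"); // 500 ms
console.log("p99:", percentile(times, 99), "ms");    // 1000 ms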

Connect with NodeSource

If you have any questions, please contact us at [email protected] or through this form.

Experience the Benefits of N|Solid’s Integrated Features
Sign up for a Free Trial Today

To get the best out of Node.js and experience the benefits of its integrated features, including OpenTelemetry support, SBOM integration, and machine learning capabilities, sign up for a free trial and see how N|Solid can help you achieve your development and operations goals. #KnowyourNode

Strengthening Node.js Security: NodeSource-GitHub Partnership

Strengthening Node.js Security: NodeSource and GitHub Partner to Boost Security for Software Developers

The NodeSource-GitHub partnership is a game-changer for developers seeking to build secure applications by integrating NCM's (Node Certified Modules) powerful security features directly into their GitHub Actions workflows. With our NCM GitHub App, developers can easily add NCM to their repositories, configure organization-wide rules for vulnerability scanning and approval processes, and receive real-time reports on vulnerabilities in pull requests and deployment workflows that target a GitHub environment.

NCM is a core feature of N|Solid, providing enhanced security for Node.js applications in production environments. We help organizations & developers use Node to its fullest through N|Solid, the world's best Node.js observability and security tool built on top of the Node.js runtime. It provides a secure environment for running Node.js applications and advanced features such as worker threads monitoring, memory leak detection, and CPU profiling.

This new integration with GitHub Actions Deployment Protection Rules streamlines managing open-source Node packages, ensuring compliance with licensing requirements, and helps developers proactively identify and mitigate security risks before they deploy their Node.js applications using GitHub Actions Workflows. It adds a valuable layer of security to the development and deployment workflows, enabling developers to identify and fix vulnerabilities before they become major security breaches, ultimately safeguarding Node.js applications and protecting critical data.

Simplifying Vulnerability Management for Open-Source Dependencies

Node.js applications and services rely heavily on open-source Node packages for their source code. Unfortunately, many of these packages have publicly disclosed vulnerabilities, often ignored or overlooked by developers. This can leave applications vulnerable to malicious code execution and secret leaks, potentially resulting in significant security breaches.

To mitigate this risk, developers must be vigilant when selecting and using Node packages in their projects and take prompt action when vulnerabilities are discovered. This requires staying informed about potential security issues and planning to address them.

NCM integration with GitHub Actions Deployment Protection Rules simplifies managing open-source Node packages. Users can add the NCM GitHub App to their repositories via the GitHub Marketplace and check NCM results in the Accounts Portal for every action, such as Pull Requests or Deployments.

With this integration, devs can:

Set up repositories to use the NCM GitHub App by searching and adding it via the GitHub Marketplace or using a direct link from the NodeSource Accounts Portal.

Check the NodeSource Accounts Portal for NCM results related to actions such as Pull Requests or Deployments configured in GitHub repositories.

NCM analyzes and approves or rejects every deployment flow based on organization-configured rules, ensuring secure project deployments.

Receive detailed reports attached to every Pull Request and deployment in configured repositories, indicating NCM’s findings with green or red status markers, helping users make informed security decisions.

Now, with the integration of NCM (Node Certified Modules) directly into N|Solid Console and through the GitHub Marketplace, users can access even more powerful toolsets for managing their Node.js applications. This integration streamlines managing open-source Node packages, allowing users to easily track and monitor package dependencies, scan for vulnerabilities, and ensure compliance with licensing requirements.

By leveraging the power of NCM within N|Solid Console and the GitHub Marketplace, organizations can effectively enhance their applications’ security and compliance while ensuring their stability and reliability. NCM provides a robust solution to proactively identify and address security risks, maintain compliance, and improve application performance. It empowers organizations to build and deploy secure, reliable, and compliant applications, ultimately protecting their reputation and mitigating risks associated with security breaches and compliance violations.

NCM is a powerful tool that greatly enhances application security, compliance, stability, and reliability. Organizations can proactively mitigate security risks, maintain compliance, and ensure application stability by integrating NCM into the deployment flow through N|Solid Console and the GitHub Marketplace. Embracing NCM as a part of the development process is a prudent choice for organizations prioritizing application security, compliance, and reliability in today’s dynamic software development landscape.

NCM – Deployment Protection Rule

GitHub Marketplace offers a range of third-party applications and services, such as code analysis tools, project management tools, continuous integration, deployment (CI/CD) tools, and security tools, among others, that can be integrated into pull requests and deployment workflows with GitHub Actions.

With its powerful feature set and certification program, NCM is an essential tool for any developer working with open-source Node packages.

Related Content

Unleashing the Power of NCM – https://nsrc.io/UnleashingNCM

Vulnerability Scanning with NCM – https://nsrc.io/VulnerabilityScanningNS

Avoiding npm substitution attacks using NCM – https://nsrc.io/AvoidAttackswithNCM

Experience the Power of N|Solid

To get the best out of Node.js and experience the benefits of its integrated features, including OpenTelemetry support, SBOM integration, and machine learning capabilities, ✍️ sign up for a free trial and see how N|Solid can help you achieve your development and operations goals. #KnowyourNode

Instrument your Node.js Applications with Open Source Tools – Part 2

As we mentioned in the previous article, at NodeSource we are dedicated to observability in our day-to-day work, and we know that a great way to extend our reach and interoperability is to adopt the OpenTelemetry framework as a standard in our development flows. In the end, our vision is to achieve high-performance software, and that is what we want to bring to developers on their journey with Node.js-based applications.

Understanding the basics was important to grasp the standard and its scope, but now it's time to put it into practice: how do we integrate OpenTelemetry into our application? Although NodeSource offers direct integration in its product, plus more than 10 key functionalities in N|Solid that extend the offering of a traditional APM, we are also great contributors to the open-source project and support the binary distributions of the Node.js project. Helping the community is in our DNA, so in this article we want to share how to set up OpenTelemetry with open-source tools.

In this article, you will find How to Apply the OpenTelemetry OS framework in your Node.js Application, which includes:

Step 1: Export data to the backend

Step 2: Set up the OpenTelemetry SDK

Step 3: Inspect Prometheus to verify we're receiving data

Step 4: Inspect Jaeger to verify we're receiving data

Step 5: Getting deeper into Jaeger 👀

Note: This article is an extension of our talk at NodeConf.EU, where we had the opportunity to share the talk:

Dot, line, Plane Trace!
Instrument your Node.js applications with Open Source Software
Get insights into the current state of your running applications/services through OpenTelemetry. It has never been as easy as now to collect data with open-source SDKs and tools that will help you extract metrics, generate logs and traces, and export this data in a standardized format to be analyzed using best practices. In this talk, we'll show how easy it is to integrate OpenTelemetry in your Node.js applications and how to get the most out of it using open-source tools.

To see the talks from this incredible conference, you can watch all sessions through live-stream links below 👇
– Day 1️⃣ – https://youtu.be/1WvHT7FgrAo
– Day 2️⃣ – https://youtu.be/R2RMGQhWyCk
– Day 3️⃣ – https://youtu.be/enklsLqkVdk

Now we are ready to start 💪 📖 👇

Apply the OpenTelemetry OS framework in your Node.js Application

So, going back to the distributed example we described in our previous article, here we can see what the architecture looks like after adding observability.

Every service will collect signals by using the OpenTelemetry Node.js SDK and export the data to specific backends so we can analyze it.

We are going to use the following:

JAEGER for Traces and Logs.

Prometheus to visualize the metrics.

Note: Jaeger and Prometheus are probably the most popular open-source tools in this space.

Step 1: Export data to the backend

How the data is exported to the backends differs:
To send data to Jaeger, we will use OTLP over HTTP, whereas for Prometheus, the data will be pulled from the services over HTTP.

First, we will show you how easy it is to set up the OpenTelemetry SDK to add observability to our applications.

Step 2: Set up the OpenTelemetry SDK

First, we have the providers in charge of collecting the signals, in our case NodeTracerProvider for traces and MeterProvider for metrics.
Then the exporters send the collected data to the specific backends.
The Resource contains attributes describing the current process, in our case ServiceName and ContainerId. The names of these attributes are well defined by the spec (in the semantic_conventions module) and allow us to differentiate where a specific signal comes from.

So to set up traces and metrics, the process is basically the same: we create the provider passing the Resource, then register the specific exporter.

We also register instrumentations of specific modules (either core modules or popular userspace modules), which provide automatic Span creation of those modules.

Finally, the only important thing to remember is that we need to initialize OpenTelemetry before our actual code; the reason is that these instrumentation modules (in our case for http and fastify) monkey-patch the modules they're instrumenting.

Also, we create the meter instruments because we will use them on every service: an HTTP request counter and a couple of observable gauges for CPU usage and ELU usage.
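As a reference, a minimal sketch of that setup could look like the following. The package names come from the OpenTelemetry JS project, but the exact constructor and registration APIs vary between SDK versions, and the endpoint, port, and service name used here are placeholders:

import { Resource } from "@opentelemetry/resources";
import { SemanticResourceAttributes } from "@opentelemetry/semantic-conventions";
import { NodeTracerProvider } from "@opentelemetry/sdk-trace-node";
import { BatchSpanProcessor } from "@opentelemetry/sdk-trace-base";
import { OTLPTraceExporter } from "@opentelemetry/exporter-trace-otlp-http";
import { MeterProvider } from "@opentelemetry/sdk-metrics";
import { PrometheusExporter } from "@opentelemetry/exporter-prometheus";
import { registerInstrumentations } from "@opentelemetry/instrumentation";
import { HttpInstrumentation } from "@opentelemetry/instrumentation-http";
import { FastifyInstrumentation } from "@opentelemetry/instrumentation-fastify";

// Resource attributes let the backends tell the services apart.
const resource = new Resource({
  [SemanticResourceAttributes.SERVICE_NAME]: "api",
});

// Traces: exported to Jaeger via OTLP over HTTP.
const tracerProvider = new NodeTracerProvider({ resource });
tracerProvider.addSpanProcessor(
  new BatchSpanProcessor(
    new OTLPTraceExporter({ url: "http://localhost:4318/v1/traces" })
  )
);
tracerProvider.register();

// Metrics: exposed on an HTTP endpoint that Prometheus scrapes.
const meterProvider = new MeterProvider({ resource });
meterProvider.addMetricReader(new PrometheusExporter({ port: 9464 }));
const meter = meterProvider.getMeter("demo");
const requestCounter = meter.createCounter("http_requests_total");

// Automatic span creation for http and fastify; must run before app code loads.
registerInstrumentations({
  instrumentations: [new HttpInstrumentation(), new FastifyInstrumentation()],
});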

So let's spin up the application now and send a request to the API. It returns a 401 Not Authorized. Before trying to figure out what's going on, let's see if Prometheus and Jaeger are actually receiving data.

Step 3: Inspect Prometheus to verify we're receiving data

Let's look at Prometheus first:
Looking at the HTTP request counter, we can see there are 2 data points: one for the API service and another for the AUTH service. Notice that the data we had in the Resource appears as service_name and container_id. We can also see that process_cpu is collecting data for the 4 services. The same is true for thread_elu.

Step 4: Inspect Jaeger to verify we're receiving data

Let’s look at Jaeger now:
We can see that one trace corresponding to the HTTP request has been generated.

Also, look at this chart where the points represent traces, the X-axis is the timestamp, and the Y-axis is the duration. If we inspect the trace, we can see it consists of 3 spans, where every span represents an HTTP transaction and has been automatically generated by the instrumentation-http module:

The 1st span is an HTTP server transaction in the API service (the incoming HTTP request).
The 2nd span represents a POST request from API to AUTH.
The 3rd one represents the incoming HTTP POST in AUTH. If we inspect this last span a bit, apart from the typical attributes associated with the request (HTTP method, request_url, status_code…), we can see there's a Log associated with the Span. This is very useful, as it tells us exactly which request caused the error. By inspecting it, we found out that the reason for the failure was a missing auth token.

This piece of information wasn't generated automatically, though, but it's very easy to do. In the verify route of the service, in case there's an error verifying the token, we retrieve the active span from the current context and just call recordException() with the error. As simple as that.
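Sketched out with the OpenTelemetry API (assuming the route handler has the caught token error in a variable named err), that looks roughly like this:

import { trace, context, SpanStatusCode } from "@opentelemetry/api";

// Inside the verify route, when token verification fails:
const activeSpan = trace.getSpan(context.active());
if (activeSpan) {
  activeSpan.recordException(err);
  activeSpan.setStatus({ code: SpanStatusCode.ERROR, message: "missing auth token" });
}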

Well, so far, so good. Knowing what the problem is, let’s add the auth token and check if everything works:

curl http://localhost:9000/ -H "Authorization: Bearer eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzI1NiJ9.eyJpc3MiOiIiLCJpYXQiOjE2NjIxMTQyMjAsImV4cCI6MTY5MzY1MDIyMCwiYXVkIjoid3d3LmV4YW1wbGUuY29tIiwic3ViIjoiIiwibGljZW5zZUtleSI6ImZmZmZmLWZmZmZmLWZmZmZmLWZmZmZmLWZmZmZmIiwiZW1haWwiOiJqcm9ja2V0QGV4YW1wbGUuY29tIn0.PYQoR-62ba9R6HCxxumajVWZYyvUWNnFSUEoJBj5t9I"

Ok, now it succeeded. Let's look at Jaeger again. We can see the new trace here; it contains 7 spans, and no error was generated.

Now, it’s time to show one very nice feature of Jaeger. We can compare both traces, and we can see in grey the Spans that are equal, whereas we can see in Green the Spans that are new. So just by looking at this overview, we can see that if we’re correctly Authorized, the API sends a GET request to SERVICE1, which then performs a couple of operations against POSTGRES. If we inspect one of the POSTGRES spans (the query), we can see useful information there, such as the actual QUERY. This is possible because we have registered the instrumentation-pg module in SERVICE1.

And finally, let’s do a more interesting experiment. We will inject load to the application for 20 seconds with autocannon…

If we look at the latency chart, we see some interesting data: up until at least the 90th percentile, the latency is basically below 300ms, whereas starting at around the 97.5th percentile, the latency goes up a lot, to more than 3 seconds. This is unacceptable 🧐. Let's see if we can figure out what's going on 💪.

Step 5: Getting deeper into Jaeger 👀

Looking at Jaeger and limiting the search to around 500 spans, we can see that the graph depicts what the latency chart showed: most of the requests are fast, whereas there are some significant outliers.

Let's compare one of the fast traces with one of the slow ones. In the slow trace, in addition to querying the database, we can see that SERVICE1 sends a request to SERVICE2. That's useful info for sure. Let's take a closer look at the slow trace.

In the Trace Graph view, every node represents a Span, and on the left-hand side we can see the percentage of time, relative to the total trace duration, taken by the subgraph that has this node as its root. By inspecting this, we can see that the branch representing the HTTP GET from SERVICE1 to SERVICE2 takes most of the time of the trace. So it seems the main suspect is SERVICE2. Let's take a look at the metrics now; they might give us more information. If we look at thread.elu, we can see that for SERVICE2 it went to 100% for some seconds. This would explain the observed behavior.

So now, going to the SERVICE2 code route, we can easily spot the issue: we were performing a Fibonacci operation. Of course, this was easy to spot because this is a demo, but in real scenarios it would not be so simple, and we would need other methods, such as CPU profiling. Regardless, the info we collected would help us narrow down the issue quite significantly.

So, that’s it for the demo. We’ve created a repo where you can access the full code, so go play with it! 😎

Main Takeaways

Finally, we just want to share the main takeaways about implementing observability with Open Software Tools:

Setting up observability in our Node.js apps is actually not that hard.
It allows us to observe requests as they propagate through a distributed system, giving us a clear picture of what might be happening.
It helps identify points of failure and causes of poor performance. (for some cases, some other tools might also be needed: CPU profiling, heap snapshots).
Adding observability to our code, especially tracing, comes with a cost, so be cautious! ☠️ But we are not going to go deeper into this, as it could be a topic for another article.

Before you go

If you're looking to implement observability in your project professionally, you might want to check out N|Solid and our '10 key functionalities'. We invite you to follow us on Twitter and keep the conversation going!

11 Features in Node.js 18 you need to try

Node.js 18 LTS is now available. What’s new?

Node.js 18 was released on the 19th of April this year. You can read more in the official blog post release or in the OpenJS Blog announcement. The community couldn’t be more excited!

Here at NodeSource, releases are a big deal. As a team of experts, enthusiasts, and core contributors to the open-source project, we love seeing the progress of Node! We are also one of the primary distributors of the runtime and have been since version 0.x (2014).

Developers download and use our binaries worldwide for their production environments (over 100m a year and growing!). We are incredibly proud to support this important piece of the Node ecosystem in addition to building and supporting customers on our Node.js platform – N|Solid.

“If you use Linux, we recommend using a NodeSource installer.” – From the NPM Documentation

If you want to lend a hand, we welcome your ideas or solutions; contact us. Or, if you would like to help us continue supporting open source, you can contribute with an issue here.

Overall, the community is looking forward to this release, which brings many new features and other benefits on top of the official release earlier this year, which included:

Security: Upgrading to OpenSSL 3.0

APIs: Fetch API is Promise based, providing a cleaner and more concise syntax.

If you are interested in thinking about the future of Node, we recommend checking out The next-10 group. They are doing some great work thinking about the strategic direction for the next 10 years of Node.js. Their technical priorities are:

Modern HTTP, WebAssembly, and Types.
ECMAScript modules and Observability

OpenJS Collaborator Summit 2022

But now I’m sure you want to get into the changes in v18. What has improved, and what are the new features? That’s what you’re here for 😉. So let us explain 👇

Hydrogen. What is it?

The codename for this release is ‘Hydrogen’. Support for Node.js 18 will last until April 2025. The name comes from the periodic table, and they have been used in alphabetical order (Argon, Boron, Carbon, Dubnium, Erbium…) 🤓 Read more in StackOverflow.

LTS?

According to the Node.js blog, the LTS version "guarantees that critical bugs will be fixed for a total of 30 months, and production applications should only use Active LTS or Maintenance LTS releases". – https://nodejs.dev/en/about/releases/

In short, it focuses on stability and being a more reliable application after allowing a reasonable time to receive feedback from the community and testing its implementation at any scale.

Node.js Releases Screenshot 2022

How do I know what version of Node and LTS I have?

You can easily do it by typing in your console:

$ node --version

Run the following to retrieve the name of the LTS release you are using:

$ node -p process.release.lts

Note: The previous property only exists if the release is an LTS. Otherwise, it returns nothing.

If you want to be aware of the release planning in the Node.js community, you can check here: Node.js Release Schedule.

What’s new in Node.js 18?

Contributors are constantly working to improve the runtime, introduce more features, and improve developer experience and usability. Today as the worldwide community uses JS for developing API-driven web applications and serverless cloud development, the changes in this new LTS version are important to understand.

In honor of the number 11 (#funfact: Undici means 'eleven' in Italian), we decided to make our top 11 Node.js 18 features:

Fetch API
🧪 --watch
🧪 OpenSSL 3 Support
🧪 node:test module
Prefix-only core Modules
🧪 Web Streams API
Other Global APIs: Blob and BroadcastChannel.
V8 Version 10.1
Toolchain and Compiler Upgrades
HTTP Timeouts
Undici Library

The idea of this blog post is to review the functionalities one by one, so let's start:

Feature 1: Native Fetch API in Node.js 18

Finally, v18 provides native fetch functionality in Node.js. This is a standardized web API for making HTTP and other network requests. Previously, Node.js did not support it by default. Because JavaScript is utilized in so many areas, this is fantastic news for the entire ecosystem.

Example:
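Since the original example image is not reproduced here, a minimal sketch of the global fetch API in an ES module (using top-level await) might look like this; the URL is just an illustrative placeholder:

// No require/import needed: fetch is global in Node.js 18
const response = await fetch("https://api.github.com/repos/nodejs/node");

if (response.ok) {
  const repo = await response.json();
  console.log(`${repo.full_name} has ${repo.stargazers_count} stars`);
}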

Feature 2: --watch

Using --watch, your application will automatically restart when an imported file is changed, just like nodemon. And you can use --watch-path to specify which path should be observed.

Also, these flags cannot be combined with --check, --eval, or --interactive, or used in REPL (read-eval-print loop) mode. It just won't work.

You can now run node --watch index.mjs to start watching your files without having to install anything.

Feature 3: OpenSSL 3 Support

OpenSSL is an open-source implementation of, among other things, SSL and TLS protocols for securing communication.

One key feature of OpenSSL 3.0 is the new FIPS (Federal Information Processing Standards) module. FIPS is a set of US government requirements for governing cryptographic usage in the public sector.

More information is available in the OpenSSL3 blog post.

Feature 4: The Experimental node:test

The node:test module facilitates the creation of JavaScript tests that report results in TAP (Test Anything Protocol) format. The TAP output is extensively used and makes the output easier to consume.

import test from 'node:test';

This module is only available under the node: scheme.
Read more in Node.js Docs

This test runner is still in development and is not meant to replace other complete alternatives such as Jest or Mocha, but it provides a quick way to execute a test suite without additional third-party libraries. The test runner supports features like subtests, test skipping, callback tests, etc.

node:test and --test

node:assert

The following is an example of how to use the new test runner.
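A small sketch of what such a test file could look like (run it with node --test or directly with node):

import test from "node:test";
import assert from "node:assert";

test("synchronous passing test", () => {
  assert.strictEqual(1 + 1, 2);
});

test("asynchronous test with a subtest", async (t) => {
  await t.test("subtest", () => {
    assert.ok(true);
  });
});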

More information may be found in the Node.js API docs.

Feature 5: Prefix-only core Modules

A new way to 'import' modules that leverages a 'node:' prefix, which makes it immediately evident that the modules are from Node.js core.

To learn more about this functionality, we invite you to read Colin Ihrig‘s article Node.js 18 Introduces Prefix-Only Core Modules.
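For illustration, established core modules still load with or without the prefix, while newer ones such as node:test are prefix-only:

import fs from "node:fs";      // prefix recommended, but plain "fs" also works
import test from "node:test";  // prefix-only: this module cannot be loaded without "node:"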

Feature 6: Experimental Web Streams API

The Web Streams API is a set of streams APIs. Also experimental, it allows JavaScript to programmatically access streams of data received over the network and process them as desired. This means the Streams APIs are now available in the global scope, which helps when sending data packets through readable and writable streams.

The methods available are as follows:

ReadableStream

ReadableStreamDefaultReader

ReadableStreamBYOBReader

ReadableStreamBYOBRequest

ReadableByteStreamController

ReadableStreamDefaultController

TransformStream

TransformStreamDefaultController

WritableStream

WritableStreamDefaultWriter

WritableStreamDefaultController

ByteLengthQueuingStrategy

CountQueuingStrategy

TextEncoderStream

TextDecoderStream

CompressionStream

DecompressionStream
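As a quick sketch (an ES module on Node.js 18, where these classes are also exposed globally), building and consuming a ReadableStream could look like this:

// Build a simple readable stream of two chunks and consume it with a reader
const stream = new ReadableStream({
  start(controller) {
    controller.enqueue("hello");
    controller.enqueue("web streams");
    controller.close();
  },
});

const reader = stream.getReader();
let chunk = await reader.read();
while (!chunk.done) {
  console.log(chunk.value);
  chunk = await reader.read();
}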

Feature 7: Other Global APIs

The following APIs in the Node v18 upgrade are exposed on the global scope: Blob and BroadcastChannel.

Feature 8: V8 Version 10.1

Node.js runs with the V8 engine from the Chromium open-source browser. This engine has been upgraded to version 10.1, which is part of the recent update in Chromium 101.

New array methods findLast and findLastIndex for finding the last element and index of an array that match a condition.
Internationalization support: the Intl.Locale and Intl.supportedValuesOf functions.
Improved performance of class fields and private class methods.
The data format of the v8.serialize function has changed (not compatible with earlier versions of Node.js).
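A couple of quick, illustrative examples of these additions:

const nums = [1, 2, 3, 4, 5];
console.log(nums.findLast((n) => n % 2 === 0));      // 4
console.log(nums.findLastIndex((n) => n % 2 === 0)); // 3

// Enumerate values supported by the Intl APIs, e.g. calendars
console.log(Intl.supportedValuesOf("calendar"));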

Keep an eye out here.

Feature 9: Toolchain and Compiler Upgrades

Node.js provides pre-built binaries for several different platforms. For each major release, the minimum toolchains are assessed and raised where appropriate.

Pre-built binaries for Linux are now built on Red Hat Enterprise Linux (RHEL) 8 and are compatible with Linux distributions based on Glibc 2.28 or later, for example, Debian 10, RHEL 8, Ubuntu 20.04.
Pre-built binaries for macOS now require macOS 10.15 or later.
For AIX, the minimum supported architecture has been raised from Power 7 to Power 8.

Note: Build-time user-land snapshot (Experimental)

Users can build a Node.js binary with a custom V8 startup snapshot using the --node-snapshot-main flag of the configure script.

Feature 10: HTTP Timeouts

The http.server timeouts have changed:

headersTimeout (the time allowed for an HTTP request header to be parsed) is set to 60000 milliseconds (60 seconds).

requestTimeout (the timeout used for an HTTP request) is set to 300000 milliseconds (5 minutes) by default.
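Both values remain configurable per server; a minimal sketch setting them explicitly:

import http from "node:http";

const server = http.createServer((req, res) => {
  res.end("ok");
});

server.headersTimeout = 60_000;   // 60 seconds for the request headers
server.requestTimeout = 300_000;  // 5 minutes for the complete request

server.listen(3000);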

Feature 11: Undici Library in Node.js

Undici is an official library from the Node.js team: a full-fledged HTTP/1.1 client designed from the ground up in Node.js.

Keep alive by default.
LIFO scheduling
No pipelining
Unlimited connections
Can follow redirects (opt-in)
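A short, illustrative request with undici (run in an ES module, assuming a local server on port 3000):

import { request } from "undici";

const { statusCode, body } = await request("http://localhost:3000/");
console.log("status:", statusCode);
console.log("body:", await body.text());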

Of note, we support and love Lizz's work, so we recommend you check out her fantastic talk at NodeConf.EU about new and exciting features in Node.js to understand more about this feature.

Other Features/Changes:

The project undoubtedly has some great challenges ahead to continue growing and maintaining its leading position in the ecosystem. These are some of the upcoming features. Most of them are experimental, and they are far from the only ones under discussion; there is much work and many proposals coming from an active community like the Node.js community.

Default DNS resolution
ECMAScript modules improvements
Improved support for AbortController and AbortSignal
Updated Platform Support
Async Hooks
Direct Network Imports
Build-time user-land snapshot
Support for JSON Import Assertions
Unflagging of JSON modules (experimental)
Support for HTTPS and HTTP imports
Diagnostic Channel
Trace Events
WASI

You can check the full changelog here.

Final Remarks

Node.js 12 went End-of-Life in April 2022.
If you are on Node.js 14 (LTS) or Node.js 16 (LTS), plan your upgrade: Node.js 18 is the new LTS line.
Node.js 18 was promoted to Long-term Support (LTS) in October 2022 (now).
After being promoted to LTS, Node.js 18 will be supported until April 2025.

Upgrade Now!

Moving to the LTS version is the best decision for you to include the following improvements in your development workflow:

FetchAPI and Web Streams
V8: New advanced features, array methods, improvements, and enhancements.
Test runner without the need for third-party packages.
Deprecated APIs: Check the list here

Enhancement in Intl.Locale API.
Performance improvement in both class fields and private class methods.

Migration

To migrate your version of Node, follow these steps:

For Linux and Mac users, follow these commands to upgrade to the latest version of Node. The module n makes version management easy:

npm install n -g

For the latest stable version:

n stable

For the latest version:

n latest

Windows Users
Visit the Node.js download page to install the new version of Node.js.

Special Thanks

With 99 contributors and 1348 commits, Node.js 18 LTS came to life 🎉. Special thanks to @_rafaelgss, @BethGriggs_, and @_richard_lau_ for making this release happen 💚

$ nvm install 18.12.0

And thank you to all of Node.js project contributors. Our complete admiration and support for such incredible work 💪.

NodeSource Node.js Binary Distributions

From the beginning, NodeSource was created with a great commitment to the developer community, which is why it provides documentation for using the NodeSource Node.js Binary Distributions via .rpm and .deb, as well as their setup and support scripts.

If you are looking for NodeSource’s Enterprise-grade Node.js platform, N|Solid, please visit https://downloads.nodesource.com/, and for detailed information on installing and using N|Solid, please refer to the N|Solid User Guide.

We are also aware that as a start-up you want 'Enterprise-grade' at a startup price, which is why we extend our product to small and medium-sized companies, startups, and non-profit organizations with N|Solid SaaS.

Useful Links / References

You can upgrade to Node.js v18 using the official download link

New Node.js features bring a global fetch API & test runner. Check out the Node version 16-18 report

Welcome Node.js 18 by RedHat
Announcing a new --experimental-modules

NODE.JS Retro 2022

Node.js was the top technology used by professional developers in 2022

Stack Overflow's annual Developer Survey confirmed our experience: Node.js continues to grow its use across the globe. Its scalability and performance, as well as its ability to integrate seamlessly with a wide range of technologies and databases, make it an ideal technology for businesses of all sizes.

The Node.js open-source project, a cross-platform JavaScript runtime environment built on Chrome's V8 JavaScript engine, allows developers to use JavaScript to create web applications and serve data quickly, securely, and reliably. That's why professional developers have adopted it broadly; it helps them with many web-development tasks like API development, streaming, and web and mobile applications. Because it is fully compatible with existing JavaScript libraries (JavaScript being the top language according to GitHub's Octoverse Report), it can be used to create highly scalable and dynamic web or mobile applications.

Img 1: Stackoverflow 2022 survey

Node.js on an Enterprise Level

Node.js excels at simplifying the development process for enterprises. It requires less code to execute tasks, allowing developers to focus on creating high-quality code rather than endless lines of coding. Its use of asynchronous, non-blocking, event-driven I/O makes it lightweight and efficient for building real-time applications.

Img: Node.js Org Use Survey

Node.js is designed to handle high amounts of requests quickly and efficiently. Its architecture is based on a single-threaded, event-driven model that makes it very efficient at handling concurrent requests. This event-driven design allows Node to handle requests without the need for multiple threads. This makes Node.js applications highly scalable, as multiple requests can be served without additional resources or server hardware.

Additionally, Node.js supports streaming and event-based programming, which allows developers to build asynchronous applications. Asynchronous programming will enable applications to respond quickly to multiple requests without waiting for each request to finish before responding.

Therefore the performance of Node.js applications depends mainly on how well they are coded and optimized. Careful planning and optimizing the application code are essential to achieve high performance. Additionally, Node.js applications benefit from caching, clustering, and other optimization techniques. These techniques can help improve the performance and scalability of Node.js applications.

The number one request we get at NodeSource is to help developers and organizations improve the performance of their Node.js applications. It’s a key reason we built our product N|Solid, to provide the visibility and insights to help identify and resolve issues fast without adding overhead like other APMs (NodeSource Benchmark Tool). And why we offer Professional Services from our Node Experts to go a step further with Performance Audits and Training and Node.js Support.

Optimization techniques in Node.js

In our experience, the most common optimization techniques in Node.js are caching, minification, bundling, optimizing database queries, code splitting, using async functions, and using the Node.js cluster module. Here is a quick overview of each:

Caching

Caching in Node.js helps improve performance by storing data in memory so it can be accessed quickly when needed. This reduces the time it takes to retrieve data and the number of requests that have to be made to the server. Caching also allows data to be stored more efficiently, which is helpful for applications with large amounts of data.
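As a minimal sketch of the idea, the snippet below keeps recently fetched values in an in-memory Map with a time-to-live; fetchUserFromDb is a placeholder for whatever data source you actually use. In production you would more likely reach for a dedicated solution such as an LRU cache or Redis:

    // simple-cache.js — in-memory cache sketch with a TTL (names are illustrative)
    const cache = new Map();

    async function getUser(id, fetchUserFromDb, ttlMs = 60_000) {
      const hit = cache.get(id);
      if (hit && hit.expiresAt > Date.now()) return hit.value; // cache hit: skip the round trip

      const value = await fetchUserFromDb(id);                 // cache miss: fetch and store
      cache.set(id, { value, expiresAt: Date.now() + ttlMs });
      return value;
    }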

Minification

Minification reduces the size of code files and other resources by removing unnecessary characters, such as spaces, new lines, and comments, without altering the code’s functionality. Minifying code can help enhance the performance of your Node.js applications by reducing download time and improving browser rendering speed.
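Minification is usually handled by your build tool, but as a standalone sketch, here is one way to do it with the terser package (one option among several; the sample source string is illustrative):

    // minify.js — minification sketch using terser (npm install terser)
    const { minify } = require('terser');

    async function main() {
      const source = 'function add (first, second) { /* verbose */ return first + second; }';
      const { code } = await minify(source);
      console.log(code); // roughly: function add(n,d){return n+d}
    }

    main();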

Bundling

Bundling is the process of combining multiple files or resources into a single bundle, which typically has a smaller total size than the separate files. Bundling can reduce network latency, as fewer requests are needed to retrieve data. It also helps improve application performance, as the browser can cache a single large file instead of multiple small ones.
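As a sketch, here is how a bundle could be produced with esbuild (one bundler among many; the entry point and output paths are illustrative):

    // build.js — bundling sketch with esbuild (npm install esbuild)
    const esbuild = require('esbuild');

    esbuild.build({
      entryPoints: ['src/app.js'], // illustrative entry point
      bundle: true,                // inline imported modules into a single file
      minify: true,                // bundling pairs naturally with minification
      outfile: 'dist/app.js',
    }).catch(() => process.exit(1));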

Optimizing database queries

Optimizing database queries in Node.js involves techniques such as indexing, query optimization, and caching to ensure that queries are more efficient and run more quickly. Proper indexing contributes to faster query times, while query optimization reduces the time needed to process a query by ensuring that only the required data is requested from the database.
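For example, assuming a PostgreSQL database accessed with the node-postgres (pg) package and an illustrative users table, a lookup can be kept fast by adding an index and requesting only the columns that are actually needed:

    // users-repo.js — query optimization sketch with node-postgres (npm install pg)
    // One-time setup so lookups by email avoid a full table scan:
    //   CREATE INDEX IF NOT EXISTS users_email_idx ON users (email);
    const { Pool } = require('pg');
    const pool = new Pool(); // connection settings come from the usual PG* env vars

    async function findUserByEmail(email) {
      // Parameterized query, only the columns we need, and a LIMIT for a single row.
      const { rows } = await pool.query(
        'SELECT id, name FROM users WHERE email = $1 LIMIT 1',
        [email]
      );
      return rows[0] || null;
    }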

Code splitting

Code splitting is a technique to reduce the amount of code sent to the client when a web page is requested. It divides code into smaller bundles and sends only the code the user actually needs for the requested page. This helps improve web application performance, as the user only downloads the relevant code.
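The same idea applies on the server with a dynamic import, so a heavy dependency is loaded only when the route that needs it is hit; the module name here is purely illustrative:

    // report-route.js — sketch of splitting a heavy dependency out of the startup path
    async function handleReportRequest(req, res) {
      // Loaded on first use instead of at startup, keeping boot time and memory down.
      const { generatePdf } = await import('./heavy-pdf-module.js'); // illustrative module
      const pdf = await generatePdf(req.query);
      res.end(pdf);
    }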

Async functions

Async functions in Node.js allow code to run asynchronously, meaning that independent operations do not have to execute strictly one after another. Instead, asynchronous operations can run concurrently, which lets the overall work complete faster and more efficiently. Additionally, async functions provide better error-handling capabilities and allow greater control over the flow of execution.
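A minimal sketch: three independent lookups started together and awaited with Promise.all finish in roughly the time of the slowest one, instead of the sum of all three (db and api are placeholder clients):

    // dashboard.js — independent async operations running concurrently
    async function loadDashboard(userId, db, api) {
      const [profile, orders, recommendations] = await Promise.all([
        db.getProfile(userId),
        db.getOrders(userId),
        api.getRecommendations(userId),
      ]);
      return { profile, orders, recommendations };
    }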

Use of the Node.js Cluster Module

The Node.js cluster module allows you to create a group of child processes (workers) that all share the same server port, making it easy to scale your application across multiple CPU cores. It also provides a powerful way to handle requests in a distributed manner and makes it easier to manage and monitor the performance of your application. The cluster module also provides an API for sending messages between workers, allowing them to coordinate their activities.
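The sketch below closely follows the pattern from the Node.js cluster documentation: the primary process forks one worker per CPU core, restarts workers that exit, and every worker listens on the same port:

    // cluster-server.js — scaling across cores with the cluster module
    const cluster = require('node:cluster');
    const http = require('node:http');
    const os = require('node:os');

    if (cluster.isPrimary) {
      for (let i = 0; i < os.cpus().length; i++) cluster.fork();
      cluster.on('exit', (worker) => {
        console.log(`Worker ${worker.process.pid} exited; starting a replacement`);
        cluster.fork();
      });
    } else {
      // Every worker binds to the same port; the primary distributes incoming connections.
      http.createServer((req, res) => res.end(`Handled by worker ${process.pid}\n`))
        .listen(3000);
    }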

In addition to these optimization techniques in Node.js, it is important to consider the best development practices in Node.js.

The best development practices in Node.js for 2023

Img: https://xkcd.com/292/

The list includes, but is not limited to:

Utilizing the latest version of Node.js and ensuring it is regularly updated. For your production binaries, we recommend using our distribution packages (the best maintained, documented, and most widely used production binaries: NodeSource Node.js Binary Distributions).

Implementing modern patterns and techniques such as asynchronous programming and proper error handling (a minimal error-handling sketch follows this list).

Leveraging dependency management to reduce code complexity and ensure packages are up-to-date.

Adopting modular development practices to create easily reused and scaled components across projects.

Investing in automated testing to ensure quality and stability in the codebase.

Using security libraries to prevent common vulnerabilities and protect against data breaches.

Optimizing memory and resource usage to keep operating costs low.
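On the error-handling point above, one commonly recommended pattern is to treat unexpected errors as fatal and let a process manager restart a clean instance; a minimal sketch:

    // error-handling.js — "fail loudly, exit cleanly" sketch for unexpected errors
    process.on('unhandledRejection', (reason) => {
      // Re-throw so programmer errors surface instead of leaving the process in an unknown state.
      throw reason;
    });

    process.on('uncaughtException', (err) => {
      console.error('Fatal error, shutting down:', err);
      process.exit(1); // the process manager / orchestrator restarts a fresh instance
    });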

And to comply with several of these good practices, it is essential to use an APM.

Using an Application Performance Monitoring (APM)

Using an Application Performance Monitoring (APM) tool to monitor your Node.js application lets you gain insights into application performance and identify issues quickly. Some popular APM tools for Node.js include New Relic, AppDynamics, Datadog and N|Solid. Each tool offers performance monitoring, error tracking, and real-time analytics features.

Note: Last year, we released an open-source tool for the community that compares the main APMs for Node.js; we invite you to contribute to it or use it in your work.

Selecting the right APM for Node.js will depend on the specific needs of your project. However (yes, we are biased 🙂), we believe N|Solid is the best APM for Node.js because it provides developers with deeper insights, key integrations, and security features no other APM can offer.

Conclusion:

Node.js is quickly becoming a popular choice for enterprise-level applications. With its lightweight architecture, scalability, and flexibility, Node.js is an ideal platform for businesses that need applications capable of handling high traffic and complex data. It allows organizations to develop highly customizable web applications that are secure, reliable, and perform well at scale, and its vibrant open-source community makes it easy for developers to find and use existing libraries and frameworks.

Are you creating a Node.js application?

Follow these simple steps:

Start by selecting a framework. Node.js has many available frameworks, such as Fastify, Hapi, or Koa. Choose the one that best fits the needs of your application (a minimal sketch using Fastify follows this list).

Set up a package.json file to better manage your project’s dependencies.
Create a folder structure to organize the components of your application.
Structure your code into separate files as your application grows.
Write automated tests for your application.
Implement error handling for any unexpected issues.
Validate user input before handing it off to your application.
Utilize caching to improve performance.
Consider deploying
Use an APM and follow our diagnostic blog-post series (Remember that for Node.js, N|Solid is the recommended option 😉 ).
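As promised above, here is a minimal sketch that ties several of these steps together using Fastify (assuming a recent Fastify version; the route and payload are illustrative): the framework validates user input against a schema before it reaches your handler, and a central error handler catches anything unexpected.

    // server.js — minimal Fastify sketch with input validation and error handling
    const fastify = require('fastify')({ logger: true });

    fastify.post('/users', {
      schema: {
        body: {
          type: 'object',
          required: ['email'],
          properties: { email: { type: 'string', minLength: 3 } },
        },
      },
    }, async (request) => {
      // The body has already been validated against the schema above.
      return { created: true, email: request.body.email };
    });

    fastify.setErrorHandler((error, request, reply) => {
      request.log.error(error);
      reply.status(error.statusCode || 500).send({ error: 'Something went wrong' });
    });

    fastify.listen({ port: 3000 }).catch((err) => {
      fastify.log.error(err);
      process.exit(1);
    });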

Good programming practices help you create a project exactly how you want it. In Node.js, there are many open-source projects to take inspiration from.

— Wait for our list of projects and technologies in Node.js to keep an eye on in 2023 —

With services from a Node.js expert company such as NodeSource, you can make the most of the technology’s robust features to achieve your web development goals. We will be happy to support you on your Node.js journey!

Here are our channels to follow us and continue the conversation:
Twitter
LinkedIn
GitHub.
As always, the best place to contact us is via our website or [email protected].

About N|Solid

N|Solid is an augmented version of Node.js that includes additional features such as security, performance monitoring, and enhanced debugging tools. It’s an excellent option for projects that require robust debugging and performance capabilities.

2023 N|Solid Awards: The Top 10 Best Node.js Open Source Projects to Watch

NodeSource has been a part of the Node.js ecosystem since 2014, contributing to the open-source project, distributing binaries (over 100m annually!), providing expert Node Services, and building tooling (N|Solid) to support developers to make the best software leveraging Node.js. Every year, we look at the open-source projects we believe are the most interesting and will impact the ecosystem. This year we decided to recognize each of these projects with an award, so welcome to the first installment of the N|Solid Awards!

As the technology has become more ubiquitous in recent years, the list of Node.js projects and technologies to keep an eye on in 2023 is growing longer. As champions of Node, we are excited to see the creativity of the developers using Node.js and the positive impact the technology has in the world.

Node.js, a JavaScript open-source runtime environment, has become one of the most popular platforms for developing applications. With the rapid rise of Node.js usage, developers are constantly pushing the boundaries of what’s possible with the platform. As a result, many open-source Node.js projects are available for everyone to tinker around with.

“JavaScript is everywhere, including in 98% of the world’s websites. Representing this enormous developer ecosystem is a humbling and awesome responsibility. The work of our maintainers matters, as they keep JavaScript safe and modern for those who depend on it.” – OpenJS Foundation

Before we get to the award winners, here is a quick list of the pros and cons of Node.js:

Pros and Cons of Node.js

The pros of Node.js include the following:

Flexibility – Node.js is designed to be used with many different types of applications;
Speed – Node.js is faster than other server-side languages;
Scalability – Node.js makes it easy to scale applications;
Great ecosystems – Node.js has a large and vibrant community of developers constantly building new libraries and tools;
Async I/O – Node.js is built on the concept of asynchronous I/O, which makes it great for handling multiple requests simultaneously.
Cost savings – Node.js can reduce hosting and maintenance costs.

The cons of Node.js include the following:

Single threading – Node.js is single-threaded, which can limit performance;
Compatibility issues – As Node.js is updated, older versions may not be compatible with newer libraries and frameworks;
Lack of debugging – Node.js offers limited built-in debugging and diagnostics capabilities, which can make production issues harder to track down.

Skilled professionals like experienced Node.js developers require tools to get their jobs done quickly and effectively. However, it can be challenging to make the right choice from the range of options available. Node.js is known for its strong community that offers many tools. Such additions have been instrumental in contributing to the success of modern apps. To help you narrow down your choices, here are some of the top open-source projects you should keep an eye on.

The Winners of the N|Solid Award for 2023!

Selected for each project’s importance and value, and for each team’s outstanding effort, here are 10 of the best open-source projects (in no particular order) worth keeping an eye on…

Fastify-vite
Mercurius
Platformatic
Next.js
Prisma
Redwood
Nuxt
Strapi
Herbs.js
PNPM

Fastify-vite

Fastify-Vite is a minimalistic web framework designed to build modern web applications quickly. It supports React and Vue at the moment, which means you can use the same familiar components, lifecycle hooks, and other patterns. With its lightning-fast performance, developers can quickly develop, test, and deploy web applications.

Note: If you ask, why fastify-vite and not Vite itself? Because, according to our lead engineers, it “is a game-changer in SSR” (and if we went beyond a top 10, well, we couldn’t stop, to be honest 😅🤷‍♀️). Still, we are fans of the great work done by that project, so here it is: Vite itself gets a special mention in our list.

And if we talk about Vite, then we cannot leave the Fastify ecosystem aside.

Fastify

Fastify is an open-source web framework for Node.js that enables developers to create modern and efficient web applications quickly. It provides a great foundation to build the application logic while abstracting away much of the complexity associated with web development. Fastify has an extensive ecosystem of modules, plug-ins, and tools that can be used to improve the development process. These include web servers, logging, validation, authentication, security, routing, and more. With such a wide range of features, Fastify makes it easy to create secure, reliable, and performant web applications.

Mercurius

Mercurius is a GraphQL adapter for Fastify. It lets you run a fully featured GraphQL server on top of the Fastify ecosystem, with support for features such as subscriptions over WebSocket, federation, automatic loader integration to avoid the N+1 query problem, and just-in-time query compilation for better performance. Like the rest of the Fastify family, Mercurius is open source and free to use, making it a strong choice for developers building GraphQL APIs in Node.js.

Platformatic

Platformatic is an open-source platform for building Node.js backends with far less boilerplate. Tools such as Platformatic DB and Platformatic Service can expose REST (OpenAPI) and GraphQL APIs directly on top of a SQL database, while still letting developers drop down to plain Node.js and Fastify code for custom logic. Led by long-time Node.js contributors, the project aims to make building, deploying, and running production APIs faster and more standardized.

Next.js

Next.js is an open-source project used to build server-side rendered React applications. It is based on the React framework and is a popular choice for developing single-page applications. It is easy to start with Next.js, as it handles the configuration and provides built-in features such as server-side rendering, static site generation, routing, code splitting, and much more. It also enables developers to start building apps quickly and efficiently while providing a range of customization options.

Prisma

Prisma is an open-source project that provides an ORM (Object Relational Mapping) for Node.js applications. It is designed to make it simpler and easier to interact with databases, reduce complexity and pain points in the development process, and help developers quickly build and deploy robust applications. Prisma provides automatic schema management, powerful data modeling, scalability, and high-performance querying.

Redwood

Redwood is a full-stack JavaScript framework for building web, mobile, and desktop applications. It allows you to use modern technologies like React, Node.js, GraphQL, and TypeScript to rapidly create powerful applications with an opinionated yet extensible architecture. With Redwood, you get the best of both worlds: the robustness and scalability of a full-stack framework and the flexibility and efficiency of a modern JavaScript stack.

Nuxt

Nuxt is an open-source project built on Vue.js and Node.js that provides an easy-to-setup framework for server-side-rendered (Universal) or Single Page Applications (SPA). It supports Vue components and allows developers to create custom projects from scratch or pre-made templates. Nuxt comes with integrated routing, code-splitting, and hot module reloading out of the box and also provides features such as custom layouts, meta tags management, and server middleware.

Strapi

Strapi is an open-source Node.js project that allows developers to create and manage their own APIs with ease. It provides a RESTful API structure and a customizable admin panel that enables users to manage content and users easily. Additionally, it supports multiple databases and can be easily extended with plug-ins. Strapi offers an intuitive user experience and allows for rapid development of web applications.

Herbs.js

Herbs.js is a Node.js project that helps developers streamline the development process by allowing them to quickly and easily create Node.js applications with the help of various pre-defined tools, libraries, and modules. It provides a wide range of features, such as code syntax highlighting, modular components, integrated debugging and testing, and a streamlined build process. It also offers a convenient command-line interface for creating and managing a Node.js project.

PNPM

PNPM is an advanced package manager for Node.js. It is optimized for performance, keeps a minimal footprint, and makes installs faster by hard-linking, symlinking, or cloning dependencies from a content-addressable store into the local project. It also includes garbage collection that detects and removes packages that are no longer needed. PNPM is designed to create reproducible and reliable builds, using a deterministic lockfile to ensure that the same version of every required package is installed on each machine.

Congratulations to the projects and their teams, you are doing truly incredible work, and we are excited to see what you do throughout the year! If you would like to nominate a project for the N|Solid Award, reach out to our community team at [email protected] and tell us why!

Why Choose N|Solid on top of Node.js?

Companies and developers looking for an enterprise-grade Node.js platform should consider N|Solid due to its superior performance and scalability. N|Solid delivers up to 10x better performance than most other Node.js production platforms and offers a range of tools to help developers scale their applications quickly and easily.

Additionally, N|Solid solves the problem of missing debugging capabilities, offering advanced insights, profiling capabilities, and real-time monitoring with built-in alerting, so developers can quickly identify and fix issues. It also includes a range of additional features, such as progressive deployments, automated patching, secure log data transmission, and more. Read about the top ten features in N|Solid here!

Conclusion

Node.js is a powerful platform that can help you create the project of your dreams. With plenty of open-source projects available, you can find solutions for developing exceptional applications. From the top ten Node.js open-source projects above, you have the opportunity to try out something new or to contribute actively.

It is possible to get overwhelmed by all the options, but this is a fantastic opportunity to build and experiment with the tools you need.

Please help us to reach more people and support use cases in Node.js. We care about the Node.js community! Happy to connect with you on

Twitter

LinkedIn

GitHub.

You’re welcome to explore, read, and participate in the Node.js Project.
We have proudly supported Node.js Binary Distributions since 2014. 💚