Measuring latency from the client side using Chrome DevTools and N|Solid

Almost every modern web browser includes a powerful suite of developer tools. In our previous blog post we covered __How to Measure Node.js server response time with N|Solid__; read more HERE.

The developer tools have many capabilities, from inspecting the current HTML, CSS, and JavaScript code to inspecting the ongoing client-server network communication.

To open the devtools and analyze the network, you can go to:

“More Tools” > “Developer Tools” > “Network”

With the DevTools open, you can visit your Fastify (or Express) API at http://localhost:3000. After you get an HTTP response, you will see the request itself, the HTTP status code, the response size, and the response time.

GIF 1 – devtools.gif

An explanation of the timings measured by Chrome DevTools is available HERE.

Let’s measure the Client-Side Latency.

Remove the delay timer logic on your application and restart the Node.js process.

import Fastify from "fastify";

const fastify = Fastify({
  logger: true,
});

// Declare the root route (the random delay has been removed)
fastify.get("/", async function (request, reply) {
  return { delayTime: 0 };
});

// Run the server!
fastify.listen({ port: 3000 }, function (err, address) {
  if (err) {
    fastify.log.error(err);
    process.exit(1);
  }
});

NOTE: Discover another useful code snippet in our ‘Measure Node.js server response time with N|Solid’ article! This time, learn how to simulate server-side latency to further test your application’s performance. Check it out HERE.

After this, you will get the fastest response times of ~1 ms; I was able to execute 139k requests in 10 seconds using autocannon.

npx autocannon localhost:3000

The main point here is to see how the application behaves when we have a poor internet connection on the client side and the best possible performance on the server side. For this, we can simulate high latency and slow internet connections on the client side using Chrome DevTools, which has this option out of the box.

Simulate Client-Side latency

In the Chrome Devtools:

Go to the “Network Conditions” option > Select the option “Network Throttling” > Set it to __“Slow 3G”__.

If you point your browser to the URL http://localhost:3000/, you’ll see a long response time; in our case, __~2 seconds__.

This response time doesn’t mean the server took that long to process the request and return an answer; it is the time the response took to travel over the network until it arrived at the client side.

If you check your Fastify logs or the N|Solid metrics, you’ll see the server only took ~1 ms to respond.

Check your logs with N|Solid
HERE

In our case, the response time was 0.3 ms

"responseTime": 0.3490520715713501

Can I improve the client-side latency?

Well, it is possible to improve the user experience on client-side devices with high latency by using a Content Delivery Network (CDN) to cache content at edge locations geographically close to users’ devices; even implementing a simple compression mechanism will improve load times for users on high-latency connections.
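As an illustration of the compression idea, here is a minimal sketch of the same Fastify server with response compression enabled. It assumes the @fastify/compress plugin has been installed (npm i @fastify/compress); this is not part of the original example.

    import Fastify from "fastify";
    import compress from "@fastify/compress";

    const fastify = Fastify({
      logger: true,
    });

    // Compress responses (gzip/brotli) so less data travels over slow, high-latency links
    await fastify.register(compress, { global: true });

    fastify.get("/", async function (request, reply) {
      return { delayTime: 0 };
    });

    fastify.listen({ port: 3000 }, function (err, address) {
      if (err) {
        fastify.log.error(err);
        process.exit(1);
      }
    });

The payload in this demo is tiny, so the effect is small here, but on larger JSON or HTML responses compression noticeably reduces transfer time on throttled connections.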

Take a look at this blog post by Jonas, our Principal OSS Engineer, and see __How to create a fast SSR application__.

Connect with NodeSource

If you have any questions, please contact us at [email protected] or through this form.

To get the best out of Node.js and experience the benefits of its integrated features, including OpenTelemetry support, SBOM integration, and machine learning capabilities, sign up for a free trial and see how N|Solid can help you achieve your development and operations goals. #KnowyourNode

Experience the Benefits of N|Solid’s Integrated Features
Sign up for a Free Trial Today

Measure Node.js server response time with N|Solid

As software developers, we constantly face new challenges in an ever-changing ecosystem. However, we must always remember the importance of addressing performance and security concerns, which remain at the top of our priority list.

To ensure that our applications based on Node.js can meet our performance and scalability needs without compromising security or incurring costly infrastructure changes, we must be aware of the importance of network optimization in Node.js.

The Impact of Latency/Ping Time on the Performance and Speed of Your Node.js Application

IMG – Ping Cats – via GIPHY

_Have you ever wondered how long it takes for your application to communicate with the server?_ This communication, known as network ping time or latency, is a crucial factor that impacts the performance and speed of your application. Knowing how to measure network ping time between the browser and the server is essential for developers who want to optimize their applications and provide a better user experience.

Network Optimization in Node.js

To ensure the optimal performance and scalability of our Node.js applications, we must accurately measure our HTTP server’s connection and response time. Doing so enables us to identify and address potential bottlenecks without compromising security or incurring unnecessary infrastructure changes.

Before delving deeper into measuring connection and response time, let’s explore fundamental concepts and critical differentiators in the network landscape.

HTTP vs. WebSocket:

HTTP and WebSocket are communication protocols used in web development but serve different purposes. HTTP is a stateless protocol commonly used for client-server communication, while WebSocket enables full-duplex communication between clients and servers, allowing real-time data exchange.
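To make the contrast concrete, here is a minimal client-side sketch; the ws://localhost:3000/ws endpoint is hypothetical and only serves to illustrate the difference, since the rest of this article uses plain HTTP.

    // HTTP: stateless request/response; each call sends a request and receives one response
    const res = await fetch("http://localhost:3000/");
    console.log(await res.json());

    // WebSocket: one long-lived, full-duplex connection either side can push messages over
    const socket = new WebSocket("ws://localhost:3000/ws"); // hypothetical endpoint
    socket.addEventListener("open", () => socket.send("hello"));
    socket.addEventListener("message", (event) => {
      console.log("server pushed:", event.data);
    });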

Types of Connections and Versions:

When creating APIs, HTTP as a protocol and standard has different versions, such as HTTP 1.1 and 2.0. Additionally, APIs may use alternative protocols like gRPC, which offer different features and capabilities. Understanding these options empowers developers to choose the most suitable tools for their web servers.

TCP/IP Basics:

The Transmission Control Protocol (TCP) and Internet Protocol (IP) are fundamental protocols that form the backbone of computer networks. Among TCP’s critical processes is the three-way handshake, which plays a vital role in establishing a secure and dependable connection between two endpoints. This handshake ensures the orderly and reliable transmission of data. TLS/SSL encryption enhances security, adding an extra layer of protection to the communication between the client and the server.

HTTP vs. HTTPS:

HTTP operates over plain text, which exposes the data being transmitted to potential eavesdropping and tampering.
HTTPS, on the other hand, secures communication through the use of SSL/TLS encryption, providing confidentiality and integrity.
Understanding the trade-offs between HTTP and HTTPS is crucial to making informed data security decisions.

Building a Solid Foundation: Understanding the Three-Way Handshake for Reliable Connections

To evaluate the performance of our HTTP server, we need to differentiate between connection latency and server response time. Connection latency refers to the time it takes for the initial three-way handshake process to complete before data transmission can occur. On the other hand, server response time measures the duration from when the server receives a request to when it generates and sends the response back to the client.

The three-way handshake is a fundamental process in establishing a TCP (Transmission Control Protocol) connection between a client and a server in a network. It involves three steps, hence the name “three-way handshake.” This handshake establishes a reliable and ordered communication channel between the two endpoints.

Here’s a breakdown of the three steps involved in the three-way handshake:

__SYN (Synchronize)__: The client initiates the connection by sending a SYN (synchronize) packet to the server. This packet contains a randomly generated sequence number to initiate the communication.
__SYN-ACK (Synchronize-Acknowledge)__: Upon receiving the SYN packet, the server acknowledges the request by sending a SYN-ACK packet back to the client. The SYN-ACK packet includes its own randomly generated sequence number and an acknowledgment number equal to the client’s sequence number plus one.
__ACK (Acknowledge)__: Finally, the client sends an ACK (acknowledge) packet to the server, confirming the receipt of the SYN-ACK packet. This packet also contains an acknowledgment number equal to the server’s sequence number plus one.

Once this three-way handshake process is completed, the client and the server have agreed upon initial sequence numbers, and a reliable connection is established between them. This connection allows for data transmission with proper sequencing and error detection mechanisms, ensuring that the information sent between the client and server is reliable and accurate.

The three-way handshake is essential to establishing TCP connections and is performed before any data transmission can occur. It plays a critical role in ensuring the integrity and reliability of the communication channel, providing a solid foundation for subsequent data exchange between the client and server.
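To make the distinction between connection latency and server response time tangible, here is a small Node.js sketch; it assumes an HTTP server (like the one built later in this article) is listening on localhost:3000.

    import net from "node:net";

    const host = "localhost";
    const port = 3000;

    const start = process.hrtime.bigint();
    const socket = net.connect({ host, port }, () => {
      // 'connect' fires once the three-way handshake (SYN, SYN-ACK, ACK) has completed
      const connected = process.hrtime.bigint();
      console.log(`connection latency: ${Number(connected - start) / 1e6} ms`);

      // Now send a minimal HTTP request over the same socket and time the server's answer
      const sent = process.hrtime.bigint();
      socket.write(`GET / HTTP/1.1\r\nHost: ${host}\r\nConnection: close\r\n\r\n`);
      socket.once("data", () => {
        const responded = process.hrtime.bigint();
        console.log(`server response time: ${Number(responded - sent) / 1e6} ms`);
        socket.end();
      });
    });

    socket.on("error", (err) => console.error("connection failed:", err.message));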

Create a self-serve diagnostic tool for a server-rendered page in Node.js.

The idea is to share an easy-to-follow recipe that will help you create your tool, so let’s start with the ingredients and end with the steps to create a self-serve diagnostic tool for a server-rendered page in Node.js.

Ingredients:

Node.js & NPM installation – https://nodejs.org/

Fastify.js – https://www.fastify.io/

Instructions:

1. Setup a Node.js Project
Use NPM to create your Node project:

$ mkdir diagnostic-tool-nodejs
$ cd diagnostic-tool-nodejs
$ npm init -y

2. Install your NPM packages.
We have Fastify in our recipe, so we must install it first:

$ npm i fastify

3. Create the index.mjs
Create an index.mjs file in the project’s root directory and paste this fastify HTTP server sample code.

import Fastify from "fastify";

const fastify = Fastify({
  logger: true,
});

// Randomly create a timer from 100ms up to X seconds
function timer(time) {
  return new Promise((resolve, reject) => {
    const ms = Math.floor(Math.random() * time) + 100;
    setTimeout(() => {
      resolve(ms);
    }, ms);
  });
}

// Declare the root route and delay the response randomly
fastify.get("/", async function (request, reply) {
  const wait = await timer(5000);
  return { delayTime: wait };
});

// Run the server!
fastify.listen({ port: 3000 }, function (err, address) {
  if (err) {
    fastify.log.error(err);
    process.exit(1);
  }
});

This will start the server on port 3000, which you can access by going to http://localhost:3000 in your web browser.

Integrate with N|Solid Console

Be sure you already have N|Solid installed and running on your environment; otherwise, go to https://downloads.nodesource.com and get the installer.

Alternatively, you can run the console using Docker instead of a local installation:

docker run -d -p 6753:6753 -p 9001:9001 -p 9002:9002 -p 9003:9003 nodesource/nsolid-console:hydrogen-alpine-latest

With the application already initialized with npm, Fastify installed, and our index.mjs in place, we can connect our process to N|Solid.

Run the HTTP server with the N|Solid Runtime, following the instructions on the main console page.

IMG – Connect N|Solid

In this case, we ran the process by passing the config via environment variables to a local installation of the N|Solid console.

NSOLID_APPNAME="NSOLID_RESPONSE_TIME_APP" NSOLID_COMMAND="127.0.0.1:9001" nsolid index.mjs

If you use our SaaS console instead, you need to use the __NSOLID_SAAS__ environment variable instead of __NSOLID_COMMAND__.

NSOLID_APPNAME="NSOLID_RESPONSE_TIME_APP" NSOLID_SAAS="XYZ.prod.proxy.saas.nodesource.io:9001" nsolid index.mjs

After completing those steps, you should be able to watch the app and process connected to the console.

IMG – Connect N|Solid Process

GIF 1 – Connect N|Solid Process

Go to the application process and add the HTTP(S) Server 99th Percentile Duration metric to see the HTTP server latency response time in near-real time; we also have the HTTP(S) Request Median Duration metric.

GIF 2 – Monitor Process Metrics

After this, we should be able to generate some traffic and see how the response times behave with the sample code provided, which generates random response times from 100 ms up to ~5 seconds.

To generate the traffic, we can use autocannon:

npx autocannon -d 120 -R 60 localhost:3000

After running autocannon for a few minutes, we can see the P99 metric of the HTTP server and the median, and compare them.

IMG – http-latency-response-time-metrics

IMG – http-request-median-duration

IMG – p99-metric

To fully utilize the metrics provided by N|Solid, it is crucial to have a comprehensive understanding of their significance. Two critical metrics offered by N|Solid are the 99th Percentile and the HTTP Median. These metrics play a vital role in assessing the performance of Node.js applications in production environments. By digging deeper into their practical application and importance, we can unlock the real value of these metrics in N|Solid and make informed decisions to optimize our production systems. Let’s explore this further.

The 99th Percentile metric

The 99th percentile is a statistical measure commonly used to analyze and understand response time or latency in a system.

Imagine you have a web application that handles incoming requests. To understand how fast the server responds, you measure the time it takes for each request and gather that data. You can find the 99th percentile response time by looking at the data.

For example, __the 99th percentile response time is 500 milliseconds__.
This means that only 1% of the requests took longer than 500 milliseconds to get a response. In simpler terms, 99% of the requests were handled in 500 milliseconds or less, which is fast.

It helps you identify and address outliers or performance bottlenecks that affect only a small fraction of requests but can significantly impact the user experience or system stability. Monitoring the 99th percentile response time helps you spot slow requests or performance issues that might affect only a few users but still need attention.

The HTTP median metric

The median represents a dataset’s middle value when the values are sorted in ascending or descending order.

To illustrate the difference between the 99th percentile and the median, let’s consider an example. Suppose you have a dataset of response times for a web application consisting of 10 values:
[100ms, 150ms, 200ms, 250ms, __500ms__, 600ms, 700ms, 800ms, 900ms, 1000ms].

When the dataset is sorted, the median (the 50th percentile) falls at the middle of the dataset, which here is around the bolded value of 500ms. This means that roughly 50% of the requests had a response time faster than 500ms, and the other 50% had a response time slower than 500ms.
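As a minimal sketch of how both statistics can be computed from raw response times (using the simple nearest-rank method; monitoring tools such as N|Solid may use interpolation and report slightly different values):

    // Nearest-rank percentile: the value at or below which p percent of the samples fall
    function percentile(values, p) {
      const sorted = [...values].sort((a, b) => a - b);
      const index = Math.ceil((p / 100) * sorted.length) - 1;
      return sorted[index];
    }

    const responseTimes = [100, 150, 200, 250, 500, 600, 700, 800, 900, 1000]; // ms

    console.log(percentile(responseTimes, 50)); // 500  -> the median
    console.log(percentile(responseTimes, 99)); // 1000 -> the 99th percentile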

Connect with NodeSource

If you have any questions, please contact us at [email protected] or through this form.

Experience the Benefits of N|Solid’s Integrated Features
Sign up for a Free Trial Today

To get the best out of Node.js and experience the benefits of its integrated features, including OpenTelemetry support, SBOM integration, and machine learning capabilities, sign up for a free trial and see how N|Solid can help you achieve your development and operations goals. #KnowyourNode

Unleashing the Power of NCM: Safeguarding Node.js Applications with Next-Generation Security in N|Solid

In the world of Node.js application development, speed, flexibility, and scalability are critical for modern software. However, the risk of vulnerabilities and security breaches looms with the increasing reliance on open-source Node packages. NCM (NodeSource Certified Modules) is the next-generation security solution that empowers Node.js developers to safeguard their applications easily and confidently.

This article will explore how NCM, a key N|Solid platform feature, revolutionizes how Node.js applications are secured, offering advanced security features, enhanced visibility, and peace of mind. Get ready to unleash the power of NCM and take your Node.js applications to new heights of security and reliability with N|Solid.

_Image 1 – Security Vulnerabilities in N|Solid View
_

Don’t miss out on this opportunity to try N|Solid for free and unlock the full potential of your Node.js applications.✍️ Sign up now and take your monitoring to the next level!

What is N|Solid?

_Image 2 – N|Solid Product View
_

N|Solid provides enhanced security for Node.js applications in production environments. It is built on top of the Node.js runtime and provides a secure environment for running Node.js applications, along with advanced features such as worker threads monitoring, memory leak detection, and CPU profiling. We have 15+ features in our product, including OpenTelemetry support, SBOM integration, and Machine Learning capabilities. Discover more in ‘__Top 10 N|Solid —APM for Node— features you needed to use__’ HERE: nsrc.io/TopNSolidFeatures.

N|Solid offers many benefits over the standard Node.js runtime, including improved security through features like runtime vulnerability scanning, access control, and enhanced monitoring capabilities that allow developers to identify and address issues in real-time.

N|Solid is well-suited for enterprise applications requiring high performance, scalability, and security levels. It is widely used in finance, healthcare, and e-commerce. It is developed and maintained by __NodeSource__, a company specializing in enterprise-grade Node.js solutions.

In the previous section, we discussed N|Solid as a solution that provides enhanced security for Node.js applications in production environments. Let’s discuss the difference between the N|Solid Console, N|Solid Runtime, and N|Solid SaaS. It’s important to differentiate between these components for several reasons, including functionality, user experience, and flexibility.

What is the difference between the N|Solid Console, N|Solid Runtime, and N|Solid SaaS?

Differentiating between the Console, Runtime, and SaaS setup in N|Solid is essential for a few reasons: functionality, user experience, and flexibility.

Users can deploy N|Solid in multiple ways, including using the N|Solid Console, N|Solid Runtime, or N|Solid SaaS setup, depending on their requirements and infrastructure setup. It is essential to provide distinct functionalities to enhance user experience and offer flexibility in deployment options, allowing scalability, customization, and integration with existing workflows. Here’s a brief description of each:

N|Solid Runtime is the runtime environment for Node.js applications. It includes a modified version of the Node.js runtime, enhanced with additional security, monitoring, and debugging features. These features include advanced profiling and tracing capabilities, heap and CPU profiling, and runtime vulnerability scanning.
https://bit.ly/NSolidRuntime-npm

_Image 3 – N|Solid Runtime Installation
_

__N|Solid Console__, on the other hand, is a web-based dashboard that provides a graphical user interface for monitoring and managing Node.js applications running on N|Solid Runtime. It lets users view their applications’ real-time metrics and performance data, monitor resource utilization, and set alerts for specific events or thresholds. N|Solid Console also provides features for managing user access and permissions, configuring application settings, and integrating with third-party tools and services. It can manage multiple N|Solid Runtimes across a distributed environment, making it ideal for large-scale enterprise deployments.
https://nsrc.io/NSolidConsole

_Image 4 – N|Solid Console Overview
_

__N|Solid SaaS__: N|Solid also offers a SaaS (Software-as-a-Service) setup so users can leverage N|Solid’s enhanced security and performance features without managing their own infrastructure. With N|Solid SaaS, users can simply sign up for a subscription and use N|Solid’s features through a cloud-based service without needing on-premises installation or maintenance. https://nsrc.io/NSolidSaaS

_Image 5 – N|Solid SaaS Overview
_

N|Solid offers multiple deployment options; these components provide distinct functionalities, user experiences, and deployment flexibilities, catering to the diverse needs of enterprise Node.js applications.

But what about NCM?

NodeSource Certified Modules (NCM) is another product developed by NodeSource that provides you and your teams with actionable insights into the risk levels of using third-party packages. Using a series of tests, we score packages on npm against several weighted criteria. With the NCM CLI, you can scan your projects for existing security vulnerabilities, license concerns, code risk, and code quality. This helps you understand the level of risk exposure and how to mitigate it. NodeSource Certified Modules (NCM) also works in offline mode. Explore further: ‘__Avoiding npm substitution attacks using NCM__’ HERE: https://nsrc.io/AvoidAttackswithNCM

_Image 6 – NCM CLI Report
_

NodeSource Certified Modules (NCM) is a security, compliance, and curation tool around the 3rd-Party Node.js & JavaScript package ecosystem. It is designed to be used with npm to provide protection against known security vulnerabilities and potential license compliance issues and provide general quality or risk assessment information to improve your ability to work with the 3rd-Party ecosystem.

Since the release of N|Solid 4.1.0, we have consolidated NCM into a single product, with NCM’s features being pulled into N|Solid Runtime, N|Solid SaaS, and the N|Solid Console for an optimal user experience. It also provides alerts and notifications when new vulnerabilities are discovered in modules used by an organization’s applications, helping users quickly identify and remediate any potential security risks. NCM is a valuable tool for organizations that rely on Node.js and open-source modules, helping to ensure that their applications are secure, reliable, and compliant with industry standards and regulations.

NCM now assesses packages based on multiple attributes: security, compliance, risk, and quality. These attributes are combined to generate an overall risk level for each package, providing valuable insights to manage third-party code in your Node.js applications effectively. With NCM’s scoring system, you can:

__Manage acceptable risk levels__: NCM helps you assess the risk associated with third-party packages by providing an overall risk level for each package. This allows you to make informed decisions about the level of risk you are willing to accept in your application.
__Understand security vulnerabilities__: NCM identifies and highlights security vulnerabilities in third-party modules, allowing you to understand the severity of the vulnerabilities and take appropriate actions to address them in your code.
__Manage license and compliance risks__: NCM helps you identify potential license and compliance risks introduced by third-party modules, ensuring that your application adheres to licensing requirements and compliance standards.
__Identify potential risk vectors__: NCM goes beyond known security vulnerabilities and identifies potential risks that may not have surfaced in security vulnerabilities yet. This helps you proactively identify and address potential risks in your code.
__Improve code quality__: NCM provides insights into quality attributes that align with best practices, helping you improve the quality of your code and make it more manageable and secure.

Together, these attributes in NCM’s scoring system (security, compliance, risk, and quality) provide a comprehensive assessment of third-party packages, enabling you to effectively manage and secure your Node.js applications by addressing security vulnerabilities, managing compliance risks, assessing package risk, and gaining insights to improve code quality. Find out more about ‘Vulnerability Scanning & 3rd-Party Modules Certification’ HERE: nsrc.io/VulnerabilityScanningNS

The Importance of Node.js Application Security

Selecting the right tools and applications for your developer pipeline requires careful consideration of your team’s workflow and project needs. This might involve assessing your tech stack, deployment processes, and the number of steps in your pipeline and identifying areas where guardrails can be implemented to improve security and reliability.

_Image 7 – NCM Criteria
_

Fortunately, numerous tools and applications are available to assist in managing your pipeline and ensuring the security and compliance of your applications. One powerful tool in this regard is NCM (NodeSource Certified Modules). NCM is a comprehensive security, compliance, and curation tool that offers advanced capabilities for managing dependencies in Node.js applications. By integrating NCM into your pipeline, you can effortlessly scan for vulnerabilities, track package dependencies, and ensure compliance with licensing requirements.

NCM enables you to elevate your pipeline to the next level, enhancing your application’s performance, reliability, and security while safeguarding against __SUPPLY CHAIN ATTACKS__. With the consolidation of NCM into N|Solid, you can now seamlessly access these powerful capabilities through the N|Solid Console for a streamlined user experience.

Note: Supply chain attacks are a type of cyber attack that targets the weakest link in a software supply chain. Instead of directly attacking a target, hackers infiltrate a trusted third-party vendor, supplier, or service provider to gain access to their customer’s systems and data. This allows the attackers to distribute malicious code or compromise software updates, which can then infect the entire supply chain and cause widespread damage. Supply chain attacks can be difficult to detect and prevent, making them a growing threat to organizations of all sizes and industries.

The importance of NCM

The consolidation of NCM 2 into N|Solid represents a significant milestone in providing a comprehensive solution for ensuring the security, reliability, and performance of Node.js applications. With features such as:

Projects & Applications Monitoring – https://nsrc.io/ProjectApplicationsMonitoringNS

Process Monitoring – https://nsrc.io/ProcessMonitoringNS

CPU Profiling – https://nsrc.io/CPUProfilingNS

Worker Threads Monitoring – https://nsrc.io/WorkerThreadsNS

Capture Heap Snapshots – https://nsrc.io/HeapSnapshotsNS

Memory Anomaly Detection – https://nsrc.io/MemoryAnomalyNS

Vulnerability Scanning & 3rd party Modules Certification – https://nsrc.io/VulnerabilityScanningNS
HTTP Tracing Support – https://nsrc.io/HTTPTracingNS

Global Alerts & Integrations – https://nsrc.io/GlobalAlertsIntegrationsNS

Distributed Tracing – https://nsrc.io/DistributedTracingNS

Open Telemetry Support – nsrc.io/AIOpsNSolid

SBOM Support – nsrc.io/SBOM-NSolid

Machine Learning Support – nsrc.io/ML-NSolid

N|Solid offers a robust and all-encompassing solution for managing the entire lifecycle of Node.js applications. By incorporating NCM’s powerful capabilities for security, compliance, and curation, N|Solid empowers developers and organizations to proactively identify and address vulnerabilities, track dependencies, and ensure licensing compliance, ultimately elevating the overall performance, reliability, and security of their applications. With N|Solid, organizations can confidently build and deploy Node.js applications with peace of mind, knowing their software is protected against potential risks and supply chain attacks.

Conclusion:

Securing Node.js applications is paramount in today’s software development landscape. With the powerful features of N|Solid, including the N|Solid Console and N|Solid Runtime, combined with the cutting-edge security capabilities of NCM, developers can safeguard their Node.js applications with next-generation security measures, or simply leave the maintenance and infrastructure to us by selecting our N|Solid SaaS option. By leveraging the power of NCM in the N|Solid platform, developers can proactively mitigate vulnerabilities and ensure the reliability and stability of their Node.js applications. Embrace the power of NCM in N|Solid today and unleash the full potential of your Node.js applications with advanced security measures.

NodeSource’s Products:

N|Solid Runtime is the Node.js runtime environment with enhanced security, monitoring, and debugging features.

N|Solid Console is a web-based dashboard for managing and monitoring Node.js applications running on N|Solid Runtime.
__N|Solid SaaS__: Benefit from N|Solid’s advanced security and performance features through a cloud-based subscription service, eliminating the need for on-premises installation or maintenance.

NCM is a cutting-edge security feature integrated into the N|Solid platform that provides continuous monitoring, vulnerability scanning, and risk assessment of open-source Node.js packages used in Node.js applications.

To get the best out of Node.js and experience the benefits of its integrated features, including OpenTelemetry support, SBOM integration, and Machine Learning capabilities, ✍️ sign up for a free trial and see how N|Solid can help you achieve your development and operations goals. #KnowyourNode

JavaScript on your schedule

#​633 — April 6, 2023

Read on the Web

❓ JavaScript Weekly on a Thursday? It’s true. As well as it being Good Friday tomorrow, we’ve decided to move to Thursday permanently going forward. We hope you have a good Easter, if you celebrate it, otherwise enjoy one fewer email on Fridays.
Your editor, Peter Cooper

JavaScript Weekly

Croner: Cron for JavaScript and TypeScript — Trigger functions upon the schedule of your choice using the classic cron syntax. Works in Node, Deno, Bun and the browser, across time zones, offers error handling and overrun protection, and more. There’s an interesting live demo on JSFiddle.

Hexagon

▶️ JSON vs XML with Douglas Crockford — The author of 2008’s hugely popular JavaScript: The Good Parts went on a podcast to share the story of JSON, his discovery of JavaScript’s ‘good parts’, and his general approach to building software, including his dislike of JavaScript ‘frameworks.’ There’s a transcript if you’re not keen on listening. (50 minutes.)

CoRecursive Podcast podcast

Headless CMS with World-Class TypeScript Support — Kontent.ai is the leading platform for modular content. Streamline your code using TypeScript SDK, CLI, Rich text resolver, and strongly typed model generator. Scale with no problems when your project grows. Have you seen our UI?

Kontent.ai sponsor

The Angular Signals RFC — There’s a lot of excitement about a shift in Angular involving the addition of signals as a reactive primitive – the official RFC is now available for this feature, and you’re encouraged to leave comments. If you’d rather see a practical use for signals, Joshua Morony recorded ▶️ a screencast showing them off.

Angular Team

Over 100 Algorithms and Data Structures Demonstrated in JS — Examples of many common algorithms (e.g. bit manipulation, Pascal’s triangle, Hamming distance) and data structures (e.g. linked lists, tries, graphs) with explanations.

Oleksii Trekhleb et al.

IN BRIEF:

Laurie Voss looks at the most popular frameworks used in sites deployed to Netlify. React-based options lead the way.

Oliver Dunk of the Chrome Extensions Team has posted an update on the Manifest V2 to Manifest V3 transition – it’s taking longer than expected so Manifest V2 isn’t disappearing any time soon.

V8 v11.2 is shipping with support for WebAssembly tail calls.

With Chrome 113, Chrome is now shipping support for WebGPU.

A look at how Microsoft’s Blazor (a stack aimed at building front-end apps with C#) is skirting around JavaScript with its focus on WebAssembly.

JSDayIE 2023: The First JavaScript Conference in Ireland Is Back! — Join us on September 26th in Dublin to experience everything the Irish JavaScript community and Ireland have to offer.

JSDayIE sponsor

RELEASES:

Electron 24.0 – Complete with Chromium 112, V8 11.2, and Node 18.14.

Storybook 7.0 – Though still tagged ‘next’ and pending a proper launch.

Storybook for React Native 6.5

WebStorm 2023.1 – Commercial JS IDE from JetBrains.

Rete.js 2.0 Beta – Framework for building node-based editors.

Articles & Tutorials

Making a Big, Slow Vue/Alpine Page ‘Blazingly’ Fast — A practical example of a pattern the author is billing a “reactive switchboard.” “I’m going to use Vue/Alpine lingo in this article, but I think this pattern applies to lots of different tools.”

Caleb Porzio

▶  Watch Dan Abramov Explore React Server Components — At an epic (though well timestamped) four hours, this isn’t a quick watch, but Dan and Ben Holmes walk through everything React Server Components oriented, complete with diagrams, code, and a real-world app.

Ben Holmes

Getting PWAs in App Stores with PWABuilder — Thomas Steiner demonstrates how PWABuilder makes it possible to submit Progressive Web Apps (PWAs) to app stores like those provided by Google, Apple, and Microsoft.

Thomas Steiner (Google)

Add a Full-Featured Notification Center to Your App in Minutes

Courier.com sponsor

What Are Source Maps? — Learn how source maps can help you debug your original code instead of what was actually deployed after the build process.

Sofia Emelianova (Chrome Developers)

How I Used ChatGPT in My JavaScript Projects

James Q Quick

Code & Tools

Relaunching JSPM CLI for Import Map Package Management — Several years ago when JS had numerous competing module formats, JSPM was a useful package manager atop SystemJS, but now it’s being relaunched as an import map package management tool.

Guy Bedford

Chrome Extension CLI 1.4: CLI for Building Chrome Extensions — Want to get building an extension for Chrome as quickly as possible? This Node-powered tool aims to get you on the right path ASAP. v1.4 adds a script to generate a ZIP file (also known as a ‘postcode file’ at Microsoft UK?) of the extension.

Dutiyesh Salunkhe

React Chrono 2: A Flexible Timeline Component — A complete overhaul of a popular component. You can render themeable timelines in vertical, horizontal, or vertical alternating orientations. It includes keyboard navigation support, auto advancement, and, as of v2, support for nested timelines.

Prabhu Murthy

Dynaboard: A Visual Web App IDE Made for Developers — Build high performance public and private web applications in a collaborative — full-stack — development environment.

Dynaboard sponsor

Jampack: A Post-Processing Tool to Optimize Static Websites — Similar to a bundler or build tool, with features like image optimization, asset compression, and some code auto-fixes — all amounting to strong Core Web Vitals scores.

divRIOTS

imask.js 6.5.0: A Vanilla JavaScript Input Mask — Prevent users from entering invalid values. Has plugins for Vue, Angular, React, Svelte, and Solid, if needed.

imaskjs

tween.js 19.0
↳ JS tweening engine for easy animations.

Swiper 9.2
↳ Modern mobile-friendly touch slider.

gridstack.js 7.3
↳ Dashboard layout and creation framework.

ReacType 15.0
↳ Visual prototyping tool that can export React apps.

xstyled 3.8
↳ Utility-first CSS-in-JS framework for React.

Spacetime 7.4.2
↳ Lightweight timezone library.

Jobs

Find JavaScript Jobs with Hired — Hired makes job hunting easy: instead of chasing recruiters, companies approach you with salary details up front. Create a free profile now.

Hired

Full Stack JavaScript Engineer @ Emerging Cybersecurity Startup — Small team/big results. Fun + flexible + always interesting. Come build our award-winning, all-in-one cybersecurity platform.

Defendify

Got a job listing to share? Here’s how.

Wise Words of the Week

A reminder from Vue.js’s Evan You that we live in a vast and varied world, including in the JavaScript ecosystem:

Transformers: JavaScript in Disguise

#​630 — March 17, 2023

Read on the Web

JavaScript Weekly

Transformers.js: Running ML Models in the Browser — Transformers are a type of machine learning model often used for natural language or visual processing and while running such models directly in the browser is in its infancy, Transformers.js opens up some ML models to you with some impressive demos here.

Xenova

Celebrating 10 Years of Electron — It feels like Electron pops up everywhere (Slack, Spotify, VS Code, and more) so it might feel surprising it’s only been with us for a decade. Slack and Electron developer Erick Zhao gives thanks to Electron’s developers, the community, gives us a bit of Electron related history, and reassures us Electron is still going strong.

Erick Zhao

Dynaboard: A Visual Web App IDE Made for Developers — Build high performance public and private web applications in a collaborative — full-stack — development environment.

Dynaboard sponsor

Announcing TypeScript 5.0 — Note that TypeScript doesn’t follow semantic versioning, so this is as much a ‘major’ release as 4.9 was.. but 5.0 looks cool anyway. This release of the typed JavaScript superset is packed with features like decorators, improved ESM project support for Node and bundlers, const type parameters, and more.

Daniel Rosenwasser (Microsoft)

Turbowatch: File Change Detector and Task Orchestrator — Not just that but it claims to be extremely fast and “if you ever wanted something like Nodemon but more capable, then you are at the right place.” This looks very promising and the README is full of examples.

Gajus Kuizinas

IN BRIEF:

BREAKING NEWS: The JS Party podcast has just dropped an episode called ▶️ The Future of React – so new, we haven’t listened to it, but it features Dan Abramov and Joe Savona so may make for good weekend listening..

“The most dangerous command you run every day: npm install” says Socket, who are introducing what they call ‘safe npm’, a transparent wrapper around npm designed to, well, make it less dangerous.

CORRECTION: In issue 627 we suggested the ECMAScript 2023 spec had entered a new draft stage. TC39 member Jordan Harband pointed out to us that it has been in such a state for some time. “There’s still a stage 4 PR not yet merged,” he noted, but there will be some progress in the next month.

Defer is a new ‘zero-infrastructure’ background jobs platform for Node.js apps.

Recently we linked to Dittytoy, a fun online JavaScript environment for audio coding/experiments. Someone has somehow implemented an entire Commodore 64 SID synthesizer in it!

Developer Day: A Front-Row Seat to What’s New with Retool

Retool sponsor

RELEASES:

Node.js v19.8.0/1 (Current)

Jasmine 4.6
↳ Testing framework for browsers and Node.

pm2 5.3
↳ Popular Node production process manager.

Mongoose 7.0
↳ Popular MongoDB ODM for Node.js.

ESLint 8.36

Articles & Tutorials

Chrome 111 Gains a ‘View Transition’ Feature for SPAs — The View Transition API is only supported by Chrome so far, but allows easy animated page transitions within single-page apps (demo here). Luckily it suits progressive enhancement so you can start using it right now without feeling too guilty 😉 Multi-page app support is forthcoming.

Jake Archibald (Chrome Developers)

Create and Download Text Files with JavaScript — If you want your code to be able to generate a text (such as JSON) file on the fly and have it downloaded by the user’s browser, it’s reasonably easy.

Amit Merchant

Five Mistakes I Made When Starting My First React Project — Richard shares his early React mistakes with the hope you can learn from his misfortunes. He tackles topics like using defaultProps, propTypes, and class components.

Richard Oliver Bray

Too Much Tech Debt in Your node_modules? Our Team of JS Devs Can Help — We are a team of senior software engineers who specialize in tech debt. Let us modernize your JavaScript stack.

UpgradeJS.com | JavaScript Upgrade Services sponsor

Progressively Enhancing a Table with a Web Component — Building a web component wrapper to add table sorting.

Raymond Camden

Shell-Free Node.js Scripting with Execa 7.1 — Execa is a popular process execution library for Node and the latest version includes an interesting $ method feature for writing zx-style scripts with it, making it even more useful for shell scripting style use cases.

ehmicky

What is Vite and Why Use It Over Create React App?

Luke Twomey

Pointers on Upgrading from Cypress v9 to v12

Gleb Bahmutov

How to Use v-model with Form Inputs in Vue

Dmitri Pavlutin

How to Create and Use Path Aliases in TypeScript Imports with Vite

Hasibul Hasan

What Is Deno and How to Use Its Sandbox?

Roman Zaynetdinov

Code & Tools

Template: A Simple Framework for Webapps — The author built it for his own projects, but notes: “It’s a joy to work in, feels “frameworky” but it’s just web standards with <100 lines of convenience JS wrapped around it. There is no magic beyond what the browser provides – I like it that way.” We do too.

William Blankenship

React ProseMirror: Integrate the ProseMirror Editor with React — ProseMirror is a toolkit for building rich text editors for the Web.

The New York Times

Breakpoints and console.log Is the Past, Time Travel Is the Future — 15x faster JavaScript debugging than with breakpoints and console.log, now with support for Vitest.

Wallaby.js sponsor

Fable 4.0: F# to JavaScript Compiler — If you fancy F#’s flavor of almost-entirely-functional development, this could be for you. GitHub repo.

Fable

MiniSearch: Small In-Memory Fulltext Search Engine for Browser and Node — The strength is that the indexed data is stored locally, allowing it to work offline and giving good performance, as seen in this demo.

Luca Ongaro

css-variable: Tiny Treeshakable Library to Define CSS Custom Properties in JS — Compatible with popular CSS-in-JS libraries like Emotion, styled-components, Linaria, etc., and it boasts better CSS minification and smaller virtual DOM updates, among other features.

Jan Nicklas

Tremor 2.0: The React Library to Build Dashboards Fast — Provides an array of modular components to build data-driven dashboards. v2.0 is the “first step towards a production-ready version of Tremor” and sees a full switch to Tailwind CSS. Homepage.

Tremor Labs

Stable Diffusion Plugin for Photoshop — Writing code that worked with Adobe’s weird JS variant was ghastly, but this uses their new ‘UXP’ based approach, so is interesting enough for that alone. This plugin also opens up the Stable Diffusion generative art system to Photoshop users.

Abdullah Alfaraj

Flexboard: A React Component Library for Resizable Sidebars — Try the live example. The code allows you to set min/max sizes for the resizable parts of the layout.

Dorbus

Jobs

Full Stack JavaScript Engineer @ Emerging Cybersecurity Startup — Small team/big results. Fun + flexible + always interesting. Come build our award-winning, all-in-one cybersecurity platform.

Defendify

Software Engineer (Frontend) — Join our “kick ass” team. Our software team operates from 17 countries and we’re always looking for more exceptional engineers.

Sticker Mule

Find JavaScript Jobs with Hired — Hired makes job hunting easy: instead of chasing recruiters, companies approach you with salary details up front. Create a free profile now.

Hired

Got a job listing to share? Here’s how.

Fuite 2.0
↳ Tool for finding memory leaks in web apps.

wavesurfer.js 6.6
↳ Navigable waveform built on Web Audio & canvas.

Svelte-Inview 4.0
↳ Svelte action that monitors when an element enters/leaves the viewport.

Discord.js 14.8
↳ Library for using the Discord chat API.

Plotly.js 2.20
↳ Powerful charting library. (Examples.)

Recharts 2.5
↳ React + D3 charting library. (Examples.)

deepmerge 4.3.1
↳ Merges the enumerable properties of objects.

Vue Testing Library 7.0

React Table Library 4.1

JavaScript sans build systems?

#​626 — February 17, 2023

Read on the Web

JavaScript Weekly

Writing JavaScript Without a Build System — Using a variety of build tools for things like bundling and transpiling is reasonably standard in modern JavaScript development, but what if you want to keep things simple? For simple things, it’s not necessary, says Julia. This led to a lot of discussion on Hacker News.

Julia Evans

Ryan Dahl, Node.js Creator, Wants to Rebuild the Runtime of the Web — A neat bit of journalism about the alternative JavaScript runtime Deno and what Ryan Dahl is trying to achieve with it and how Ryan handled the stress of being known as the creator of Node.js.

Harry Spitzer / Sequoia

Broadcasting a Live Stream With Nothing but JavaScript — Live streams typically use third-party software to broadcast, but with Amazon Interactive Video Service, you can build a powerful, interactive broadcasting interface with the Web Broadcast SDK and JavaScript. Click here to learn more.

Amazon Web Services (AWS) sponsor

core-js’s Maintainer Complains Open Source Is ‘Broken’ — core-js is a popular universal polyfill for JavaScript features and its author has run into his fair share of bad luck which has culminated in this lengthy post on the state of the project, his issues in securing an income and, well, the downsides to living in Russia. The Register has tried to balance out the story.

The Register

IN BRIEF:

🐒 The just released Firefox 110 for Android now supports Tampermonkey, an extension for running JavaScript ‘userscripts’ on sites you visit.

The Angular project is taking steps to revamp its reactivity model to enable fine-grained change detection via signals.

The latest beta of iOS and iPadOS 16.4 supports the Web Push API for home screen webapps.

🐦 A fun Twitter thread where Qwik’s Miško Hevery attempted to demonstrate why a = 0-x is about 3-10x faster than a = -x before being told about a flaw in his benchmark. There is still a performance difference, though.

▶️ The React.js documentary we mentioned last week has now been released and it’s a heck of a watch – you’ll need 78 minutes of your time though.

RELEASES:

Node.js 19.6.1, 18.14.1, 16.19.1 and 14.21.3.

JavaScript Obfuscator 4.0 – Code scrambler.

Shoelace 2.1
↳ Framework agnostic Web components.

Mermaid 9.4
↳ Text to diagram generator. Now with timeline diagram support.

Cypress 12.6

📒 Articles & Tutorials

Use a MutationObserver to Handle DOM Nodes that Don’t Exist Yet — Comparing the effectiveness of the MutationObserver API with the conventional method of constantly checking for the creation of nodes.

Alex MacArthur

Well-Known Symbols in JavaScript — Hemanth, a TC39 delegate, shows off 14 symbols and where they can come in useful.

Hemanth HM

🚀 Monitor and Optimize Website Speed to Rank Higher in Google — Monitor Google’s Core Web Vitals and optimize performance using in-depth reports built for developers. Improve SEO & UX.

DebugBear sponsor

Why to Use Maps More and Objects Less — A journey down a performance rabbit hole.

Steve Sewell

Adopting React in the Early Days — A personal history lesson providing context around React’s evolution. While React might be an obvious, even safe, choice now, that wasn’t always true.

Sébastien Lorber

An Animated Flythrough with Theatre.js and React Three Fiber — How to fly through a 3D scene using the Theatre.js JavaScript animation library and the React Three Fiber 3D renderer. This is the sort of thing that used to be Very Difficult™ but is now relatively trivial.

Andrew Prifer (Codrops)

How to Change the Tab Bar Color Dynamically with JavaScript

Amit Merchant

Is Deno Ready for Primetime? One Dev’s Opinion

Max Countryman

Using Playwright to Monitor Third-Party Resources That Could Impact User Experience

Stefan Judis

🛠 Code & Tools

Dependency Cruiser: Validate and Visualize JavaScript Dependencies — If you want a look at the output, there’s a whole page of graphs for popular, real world projects including Chalk, Yarn, and React.

Sander Verweij

Devalue: Like JSON.stringify, But.. — “Gets the job done when JSON.stringify can’t.” Namely, it can handle cyclical and repeated references, regular expressions, Map and Set, custom types, and more.

Rich Harris

🧡 JavaScript Scratchpad for VS Code (2m+ Downloads) — Get Quokka.js ‘Community’ for free: #1 tool for exploring/testing JavaScript with edit-continue experience to see realtime execution and runtime values.

Wallaby.js sponsor

NodeGUI: Build Native Cross-Platform Desktop Apps with Node.js — Unlike Electron which leans upon webviews and HTML, NodeGui uses a Qt based approach. This week’s 0.58.0 release is the first stable release based on Qt 6 and offering high DPI support.

NodeGui

DOMPurify 3.0: Fast, Tolerant XSS Sanitizer for HTML and SVG — A project that’s nine years old today but still actively developed. Supports all modern browsers (IE support was only just dropped) and is heavily tested. There’s a live demo here.

Cure53

Pythagora: Generate Express Integration Tests by Recording Activity — This is a neat idea still in its early stages. Add a line of code after setting up an Express.js app and this will capture app usage and generate integration tests based on the interactions. (▶️ Screencast demo.)

zvone187 and LeonOstrez

Try Stream’s Free Trial of SDKs for In-App Chat

Stream sponsor

grep.app: Search Code Across a Half Million GitHub Repos — A code search engine that lets you use regexes or syntax in your search. Considering what it is, it’s pretty fast and has an extensive index (over half a million public repos from GitHub, allegedly).

grep.app

tsParticles: Particles, Confetti and Fireworks for Your Pages — Create customizable particle related effects for use on the Web. Uses the regular 2D canvas for broad support.

Matteo Bruni

💻 Jobs

Software Engineer — Join our happy team. Stimulus is a social platform started by Sticker Mule to show what’s possible if your mission is to increase human happiness.

Stimulus

Find JavaScript Jobs with Hired — Hired makes job hunting easy: instead of chasing recruiters, companies approach you with salary details up front. Create a free profile now.

Hired

QUICK RELEASES:

Minimatch 6.2
↳ Glob matcher library, as used in npm.
    minimatch("bar.foo", "*.foo")

React Accordion 1.2
↳ Unstyled WAI-ARIA-compliant accordion library.

ScrollTrigger 1.0.6
↳ Have your page react to scroll changes.

VeeValidate 4.7.4
↳ Popular Vue.js form library

Express Admin 2.0
↳ Admin interface for data in MySQL/Postgres/SQLite.

Execa 7.0
↳ Improved process execution from Node.js.

React Tooltip 5.8

Instrument your Node.js Applications with Open Source Tools – Part 2

As we mentioned in the previous article, at NodeSource we are dedicated to observability in our day-to-day work, and we know that a great way to extend our reach and interoperability is to include the OpenTelemetry framework as a standard in our development flows; in the end, our vision is to achieve high-performance software and to accompany developers on that journey in their Node.js-based applications.

Understanding the basics was important for knowing the standard and its scope, but it is necessary to put it into practice: how do we integrate OpenTelemetry into our application? Although NodeSource has direct integration in its product, plus more than 10 key functionalities in N|Solid that extend the offering of a traditional APM, we are also, as you know, great contributors to the open-source project: we support the binary distributions of the Node.js project, and our DNA is about helping the community and showing how you can increase visibility through Open Source tools. So through this article, we want to share how to set up OpenTelemetry with Open Source tools.

In this article, you will find __How to Apply the OpenTelemetry OS framework in your Node.js Application__, which includes:

Step 1: Export data to the backend

Step 2: Set up the OpenTelemetry SDK

Step 3: Inspect Prometheus to verify we’re receiving data

Step 4: Inspect Jaeger to verify we’re receiving data

Step 5: Getting deeper into Jaeger 👀

Note: This article is an extension of our talk at NodeConf.EU, where we had the opportunity to present:

__Dot, line, Plane Trace!__
__Instrument your Node.js applications with Open Source Software__
Get insights into the current state of your running applications/services through OpenTelemetry. It has never been as easy as now to collect data with Open Source SDKs and tools that will help you extract metrics, generate logs and traces, and export this data in a standardized format to be analyzed using best practices. In this talk, we’ll show how easy it is to integrate OpenTelemetry in your Node.js applications and how to get the most out of it using Open Source tools.

To see the talks from this incredible conference, you can watch all sessions through live-stream links below 👇
– Day 1️⃣ – https://youtu.be/1WvHT7FgrAo
– Day 2️⃣ – https://youtu.be/R2RMGQhWyCk
– Day 3️⃣ – https://youtu.be/enklsLqkVdk

Now we are ready to start 💪 📖 👇

Apply the OpenTelemetry OS framework in your Node.js Application

So, going back to the distributed example we described in our previous article, here we can see what the architecture looks like after adding observability.

Every service will collect signals by using the OpenTelemetry Node.js SDK and export the data to specific backends so we can analyze it.

We are going to use the following:

JAEGER for Traces and Logs.

Prometheus to visualize the metrics.

_Note:_ Jaeger and Prometheus are probably the most popular open-source tools in this space.

Step 1: Export data to the backend

How the data is exported to the backends differs:
To send data to __Jaeger__, we will use OTLP over HTTP, whereas for __Prometheus__, the data will be pulled from the services using HTTP.

First, we will show you how easy it is to set up the OpenTelemetry SDK to add observability to our applications.

Step 2: Set up the OpenTelemetry SDK

First, we have the providers in charge of collecting the signals, in our case __NodeTracerProvider__ for traces and __MeterProvider__ for metrics.
Then the exporters send the collected data to the specific backends.
The Resource contains attributes describing the current process, in our case, __ServiceName__ and __ContainerId__. The names of these attributes are well defined by the spec (they live in the __semantic_conventions module__) and will allow us to differentiate where a specific signal comes from.

So to set up traces and metrics, the process is basically the same: we create the provider passing the Resource, then register the specific exporter.

We also register instrumentations of specific modules (either core modules or popular userspace modules), which provide automatic Span creation of those modules.

Finally, the only important thing to remember is that we need to initialize OpenTelemetry before our actual code; the reason is that these instrumentation modules (in our case, for __http__ and __fastify__) __monkeypatch__ the modules they’re instrumenting.

Also, we create the __meter instruments__ because we will use them on every service: an __HTTP request counter__ and a couple of observable gauges for __CPU usage__ and __ELU usage__.
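The original post doesn’t reproduce the setup file itself, so here is a hedged sketch of what this Step 2 wiring can look like. The packages are the official OpenTelemetry JS packages, but exact constructor options change between SDK versions, and the service name, ports, and gauge implementations are illustrative assumptions, not the demo’s actual code.

    import { NodeTracerProvider } from "@opentelemetry/sdk-trace-node";
    import { BatchSpanProcessor } from "@opentelemetry/sdk-trace-base";
    import { OTLPTraceExporter } from "@opentelemetry/exporter-trace-otlp-http";
    import { MeterProvider } from "@opentelemetry/sdk-metrics";
    import { PrometheusExporter } from "@opentelemetry/exporter-prometheus";
    import { Resource } from "@opentelemetry/resources";
    import { SemanticResourceAttributes } from "@opentelemetry/semantic-conventions";
    import { registerInstrumentations } from "@opentelemetry/instrumentation";
    import { HttpInstrumentation } from "@opentelemetry/instrumentation-http";
    import { FastifyInstrumentation } from "@opentelemetry/instrumentation-fastify";
    import { metrics } from "@opentelemetry/api";
    import { performance } from "node:perf_hooks";

    // The Resource tells the backends which service a given signal comes from
    const resource = new Resource({
      [SemanticResourceAttributes.SERVICE_NAME]: "api", // illustrative service name
    });

    // Traces: NodeTracerProvider + OTLP-over-HTTP exporter pointing at Jaeger
    const tracerProvider = new NodeTracerProvider({ resource });
    tracerProvider.addSpanProcessor(
      new BatchSpanProcessor(
        new OTLPTraceExporter({ url: "http://localhost:4318/v1/traces" })
      )
    );
    tracerProvider.register();

    // Metrics: MeterProvider + Prometheus exporter (Prometheus scrapes this endpoint)
    const meterProvider = new MeterProvider({ resource });
    meterProvider.addMetricReader(new PrometheusExporter({ port: 9464 }));
    metrics.setGlobalMeterProvider(meterProvider);

    // Automatic Span creation for core http and fastify; this file must be loaded
    // before the app code because these instrumentations monkeypatch the modules
    registerInstrumentations({
      instrumentations: [new HttpInstrumentation(), new FastifyInstrumentation()],
    });

    // Meter instruments used by every service: a request counter and observable gauges
    const meter = metrics.getMeter("service-metrics");
    export const httpRequestCounter = meter.createCounter("http_requests_total");

    const cpuGauge = meter.createObservableGauge("process_cpu");
    cpuGauge.addCallback((observableResult) => {
      observableResult.observe(process.cpuUsage().user / 1e6); // user CPU seconds, illustrative
    });

    const eluGauge = meter.createObservableGauge("thread_elu");
    eluGauge.addCallback((observableResult) => {
      observableResult.observe(performance.eventLoopUtilization().utilization);
    });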

So let’s spin up the application now and send a request to the API. It returns a 401 Not Authorized. Before trying to figure out what’s going on, let’s see if Prometheus and Jaeger are actually receiving data.

Step 3: Inspect Prometheus to verify we’re receiving data

Let’s look at Prometheus first:
Looking at the HTTP requests counter, we can see there are 2 data points: one for the __API service__ and another one for the __AUTH service__. Notice that the data we had in the Resource is __service_name__ and __container_id__. We can also see that process_cpu is collecting data for the 4 services. The same is true for __thread_elu__.

Step 4: Inspect Jaeger to verify we’re receiving data

Let’s look at Jaeger now:
We can see that one trace corresponding to the __HTTP request__ has been generated.

Also, look at this chart, where the points represent traces, the X-axis is the timestamp, and the Y-axis is the duration. If we inspect the trace, we can see it consists of 3 spans, where every span represents an __HTTP transaction__ and has been automatically generated by the __instrumentation-http__ module:

The 1st span is an HTTP server transaction in the API service (the incoming HTTP request).
The 2nd span represents a POST request to AUTH from API.
The 3rd one represents the incoming HTTP POST in AUTH. If we inspect this last span a bit, apart from the typical attributes associated with the request (HTTP method, request_url, status_code…), we can see there’s a Log associated with the Span.

This makes it very useful, as we can know exactly which request caused the error. By inspecting it, we found out that the reason for the failure was a missing auth token.

This piece of information wasn't generated automatically, but it's very easy to add. In the verify route of the service, in case there's an error verifying the token, we retrieve the active span from the current context and just call __recordException()__ with the error. As simple as that.
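A minimal sketch of what that might look like in the token-verification handler (the route shape and the verifyToken helper are assumptions for illustration):

import { trace } from "@opentelemetry/api";

fastify.post("/verify", async function (request, reply) {
  try {
    return verifyToken(request.headers.authorization); // hypothetical helper
  } catch (err) {
    // Attach the error to the currently active span so it shows up as a Log on the Span in Jaeger
    const activeSpan = trace.getActiveSpan();
    if (activeSpan) {
      activeSpan.recordException(err);
    }
    reply.code(401);
    return { error: "Not Authorized" };
  }
});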

Well, so far, so good. Knowing what the problem is, let’s add the auth token and check if everything works:

curl http://localhost:9000/ -H "Authorization: Bearer eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzI1NiJ9.eyJpc3MiOiIiLCJpYXQiOjE2NjIxMTQyMjAsImV4cCI6MTY5MzY1MDIyMCwiYXVkIjoid3d3LmV4YW1wbGUuY29tIiwic3ViIjoiIiwibGljZW5zZUtleSI6ImZmZmZmLWZmZmZmLWZmZmZmLWZmZmZmLWZmZmZmIiwiZW1haWwiOiJqcm9ja2V0QGV4YW1wbGUuY29tIn0.PYQoR-62ba9R6HCxxumajVWZYyvUWNnFSUEoJBj5t9I"

Ok, now it succeeded. Let’s look at Jaeger now. We can see the new trace here, and we can see that it contains 7 spans, and no error was generated.

Now, it’s time to show one very nice feature of Jaeger. We can compare both traces, and we can see in grey the Spans that are equal, whereas we can see in Green the Spans that are new. So just by looking at this overview, we can see that if we’re correctly Authorized, the API sends a GET request to SERVICE1, which then performs a couple of operations against POSTGRES. If we inspect one of the POSTGRES spans (the query), we can see useful information there, such as the actual QUERY. This is possible because we have registered the instrumentation-pg module in SERVICE1.

And finally, let's do a more interesting experiment. We will inject load into the application for 20 seconds with autocannon…
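The exact command isn't critical, but it would be something along these lines (the port and header are assumptions based on the curl example above):

npx autocannon -d 20 -H "Authorization: Bearer <token>" http://localhost:9000/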

If we look at the latency chart, we see some interesting data: up to at least the 90th percentile, the latency is basically below 300 ms, whereas from roughly the 97.5th percentile onwards it goes up a lot, to more than 3 seconds. This is unacceptable 🧐. Let's see if we can figure out what's going on 💪.

Step 5: Getting deeper at Jaeger 👀

Looking at Jaeger and limiting the results to about 500 spans, we can see that the graph here depicts what the latency chart showed. Most of the requests are fast, whereas there are some significant outliers.

Let's compare one of the fast traces against a slow one. We can see that in the slow trace, in addition to querying the database, SERVICE1 sends a request to SERVICE2. That's useful info for sure. Let's take a closer look at the slow trace.

In the __Trace Graph view__, every node represents a Span, and on the left-hand side we can see the percentage of the total trace duration taken by the subtree rooted at that node. By inspecting this, we can see that the branch representing the HTTP GET from SERVICE1 to SERVICE2 takes most of the trace's time, so the main suspect is SERVICE2. Let's take a look at the Metrics now; they might give us more information. If we look at thread.elu, we can see that for SERVICE2 it went to 100% for a few seconds. This would explain the observed behavior.

So now, going to the SERVICE2 route code, we can easily spot the issue: we were performing a __Fibonacci operation__. Of course, this was easy to spot because this is a demo; in real scenarios it would not be so simple, and we would need other methods, such as CPU profiling, but regardless, the info we collected helps us narrow down the issue quite significantly.

So, that’s it for the demo. We’ve created a repo where you can access the full code, so go play with it! 😎

Main Takeaways

Finally, we just want to share the main takeaways about implementing observability with open-source tools:

Setting up observability in our Node.js apps is actually not that hard.
It allows us to observe requests as they propagate through a distributed system, giving us a clear picture of what might be happening.
It helps identify points of failure and causes of poor performance (in some cases, other tools might also be needed: CPU profiling, heap snapshots).
Adding observability to our code, especially tracing, comes with a cost. So be cautious! ☠️ But we are not going to go deeper into this here, as it could be a topic for another article.

Before you go

If you're looking to implement observability in your project professionally, you might want to check out N|Solid and our '10 key functionalities'. We invite you to follow us on Twitter and keep the conversation going!

11 Features in Node.js 18 you need to try

Node.js 18 LTS is now available. What’s new?

Node.js 18 was released on the 19th of April this year. You can read more in the official blog post release or in the OpenJS Blog announcement. The community couldn’t be more excited!

Here at NodeSource, releases are a big deal. As a team of experts, enthusiasts, and core contributors to the open-source project, we love seeing the progress of Node! We are also one of the primary distributors of the runtime and have been since version 0.x (2014).

Developers download and use our binaries worldwide for their production environments (over 100m a year and growing!). We are incredibly proud to support this important piece of the Node ecosystem in addition to building and supporting customers on our Node.js platform – N|Solid.

“If you use Linux, we recommend using a NodeSource installer.” – From the NPM Documentation

If you want to lend a hand, we welcome your ideas or solutions contact us, or if you would like to help us continue supporting open source, you can contribute with an issue here.

Overall, the community is looking forward to this release with many new features and other benefits in addition to the official release earlier this year that included:

Security: Upgrading to OpenSSL 3.0

APIs: Fetch API is Promise based, providing a cleaner and more concise syntax.

If you are interested in thinking about the future of Node, we recommend checking out The next-10 group. They are doing some great work thinking about the strategic direction for the next 10 years of Node.js. Their technical priorities are:

Modern HTTP, WebAssembly, and Types.
ECMAScript modules and Observability

_OpenJS Collaborator Summit 2022_

But now I’m sure you want to get into the changes in v18. What has improved, and what are the new features? That’s what you’re here for 😉. So let us explain 👇

Hydrogen. What is it?

The codename for this release is 'Hydrogen'. Support for Node.js 18 will last until April 2025. The name comes from the periodic table, and the names have been used in alphabetical order (Argon, Boron, Carbon, Dubnium, Erbium…) 🤓 Read more on StackOverflow.

LTS?

According to the Node.js blog, the “LTS version guarantees that the critical bugs will be fixed for a total of 30 months and Production applications should only use Active LTS, or Maintenance LTS releases”. – https://nodejs.dev/en/about/releases/

In short, an LTS release focuses on stability and reliability, after allowing a reasonable time to receive feedback from the community and to test its implementation at any scale.

_Node.js Releases Screenshot 2022_

How do I know what version of Node and LTS I have?

You can easily do it by typing in your console:

$ node --version

Run the following to retrieve the name of the LTS release you are using:

$ node -p process.release.lts

_Note:_ The previous property only exists if the release is an LTS. Otherwise, it returns nothing.

If you want to be aware of the release planning in the Node.js community, you can check here: Node.js Release Schedule.

What’s new in Node.js 18?

Contributors are constantly working to improve the runtime, introduce more features, and improve developer experience and usability. Today as the worldwide community uses JS for developing API-driven web applications and serverless cloud development, the changes in this new LTS version are important to understand.

In honor of the number 11 (__#funfact__ Undici means ‘eleven’ in Italian), we decided to make our top 11 Node.js 18 features:

Fetch API
🧪 --watch
🧪 OpenSSL 3 Support
🧪 node:test module
Prefix-only core Modules
🧪 Web Streams API
Other Global APIs: Blob and BroadcastChannel.
V8 Version 10.1
Toolchain and Compiler Upgrades
HTTP Timeouts
Undici Library

The idea of this blog post is to go through these features one by one, so let's start:

Feature 1: Native Fetch API in Node.js 18

Finally, v18 provides native fetch functionality in Node.js. This is a standardized web API for conducting HTTP or other types of network requests. Previously Node.js did not support it by default. Because JavaScript is utilized in so many areas, this is fantastic news for the entire ecosystem.

Example:
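A small illustrative snippet (the URL is just a placeholder):

// Native fetch, no extra dependencies needed in Node.js 18
const res = await fetch("https://api.example.com/users/1");

if (res.ok) {
  const user = await res.json();
  console.log(user);
}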

Feature 2: --watch

Using --watch, your application will automatically restart when an imported file is changed, just like nodemon. And you can use --watch-path to specify which path should be observed.

Also, these flags cannot be combined with --check, --eval, --interactive, or when used in REPL (read-eval-print loop) mode. It just won't work.

You can now watch your files and restart on changes without having to install anything, as shown below.
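For example, assuming your entry point is index.js (per the Node.js docs, --watch-path is currently only supported on macOS and Windows):

$ node --watch index.js
$ node --watch-path=./src index.js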

Feature 3: OpenSSL 3 Support

OpenSSL is an open-source implementation of, among other things, SSL and TLS protocols for securing communication.

One key feature of OpenSSL 3.0 is the new FIPS (Federal Information Processing Standards) module. FIPS is a set of US government requirements for governing cryptographic usage in the public sector.

More information is available in the OpenSSL3 blog post.

Feature 4: The Experimental node:test

The node:test module facilitates the creation of JavaScript tests that report results in TAP (Test Anything Protocol) format. The TAP output is extensively used and makes the output easier to consume.

import test from "node:test";

This module is only available under the node: scheme.
Read more in Node.js Docs

This test runner is still in development and is not meant to replace other complete alternatives such as Jest or Mocha, but it provides a quick way to execute a test suite without additional third-party libraries. The test runner supports features like subtests, test skipping, callback tests, etc.

node:test and --test

node:assert

The following is an example of how to use the new test runner.
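A minimal sketch of such a test file:

import test from "node:test";
import assert from "node:assert";

test("synchronous passing test", () => {
  assert.strictEqual(1 + 1, 2);
});

test("asynchronous passing test", async () => {
  const value = await Promise.resolve("ok");
  assert.strictEqual(value, "ok");
});

Running the file directly with node, or via node --test, prints TAP-formatted results.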

More information may be found in the Node.js API docs.

Feature 5: Prefix-only core Modules

A new way to ‘import’ modules that leverages a ‘node:’ prefix, which makes it immediately evident that the modules are from Node.js core
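For example:

// Both forms load the http core module, but the prefixed one is unambiguous
import { createServer } from "node:http";
// Some newer modules, like node:test, are prefix-only and cannot be loaded without "node:"
import test from "node:test";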

To learn more about this functionality, we invite you to read Colin Ihrig‘s article Node.js 18 Introduces Prefix-Only Core Modules.

Feature 6: Experimental Web Streams API

The Web Streams API is a standard set of streams APIs. Also experimental, it allows JavaScript to programmatically access streams of data received over the network and process them as desired. This means the Streams APIs are now available on the global scope, which helps with handling data packets through readable and writable streams.

The following interfaces are now available globally (a short example follows the list):

ReadableStream
ReadableStreamDefaultReader
ReadableStreamBYOBReader
ReadableStreamBYOBRequest
ReadableByteStreamController
ReadableStreamDefaultController
TransformStream
TransformStreamDefaultController
WritableStream
WritableStreamDefaultWriter
WritableStreamDefaultController
ByteLengthQueuingStrategy
CountQueuingStrategy
TextEncoderStream
TextDecoderStream
CompressionStream
DecompressionStream
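As a quick sketch of the now-global ReadableStream (still experimental in Node.js 18):

const stream = new ReadableStream({
  start(controller) {
    controller.enqueue("Hello from Web Streams");
    controller.close();
  },
});

const reader = stream.getReader();
const { value, done } = await reader.read();
console.log(value, done); // "Hello from Web Streams" false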

Feature 7: Other Global APIs

The following APIs in the Node v18 upgrade are exposed on the global scope: Blob and BroadcastChannel.

Feature 8: V8 Version 10.1

Node.js runs with the V8 engine from the Chromium open-source browser. This engine has been upgraded to version 10.1, which is part of the recent update in Chromium 101.

New array methods findLast and findLastIndex for finding the last element and index matching a condition (see the example after this list).
Internationalization support: Intl.Locale and the Intl.supportedValuesOf functions.
Improved performance of class fields and private class methods.
The data format of the v8.serialize function has changed (not compatible with earlier versions of Node.js).
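For instance:

const nums = [1, 2, 3, 4, 5];

nums.findLast((n) => n % 2 === 0);      // 4
nums.findLastIndex((n) => n % 2 === 0); // 3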

Keep an eye out here.

Feature 9: Toolchain and Compiler Upgrades

Node.js provides pre-built binaries for several different platforms. For each major release, the minimum toolchains are assessed and raised where appropriate.

Pre-built binaries for Linux are now built on Red Hat Enterprise Linux (RHEL) 8 and are compatible with Linux distributions based on Glibc 2.28 or later, for example, Debian 10, RHEL 8, Ubuntu 20.04.
Pre-built binaries for macOS now require macOS 10.15 or later.
For AIX, the minimum supported architecture has been raised from Power 7 to Power 8.

Note: Build-time user-land snapshot (Experimental)

Users can build a Node.js binary with a custom V8 startup snapshot using the --node-snapshot-main flag of the configure script.

Feature 10: HTTP Timeouts

The http.server timeouts have changed (a short sketch follows the list):

headersTimeout (the time allowed for an HTTP request header to be parsed) is set to 60000 milliseconds (60 seconds).

requestTimeout (the timeout used for an HTTP request) is set to 300000 milliseconds (5 minutes) by default.
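Both are properties on the server instance, so a quick sketch of inspecting (or overriding) them could look like this; the values shown are the Node.js 18 defaults:

import { createServer } from "node:http";

const server = createServer((req, res) => {
  res.end("ok");
});

console.log(server.headersTimeout); // 60000 (60 seconds)
console.log(server.requestTimeout); // 300000 (5 minutes)

// Both can be overridden if your use case needs different limits
server.requestTimeout = 120000;

server.listen(3000);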

Feature 11: Undici Library in Node.js

Undici is an official Node.js team library; it's a full-fledged HTTP/1.1 client designed from the ground up in Node.js. Its main characteristics are listed below, with a short usage sketch after the list.

Keep alive by default.
LIFO scheduling
No pipelining
Unlimited connections
Can follow redirects (opt-in)
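A hedged sketch of using undici directly (the URL is a placeholder):

import { request } from "undici";

const { statusCode, headers, body } = await request("https://api.example.com/resource");

console.log(statusCode);
console.log(headers["content-type"]);
console.log(await body.json());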

Of note, we support and love Lizz‘s work, so we recommend you check out her fantastic talk in Nodeconf.EU about New and Exciting features in Node.js to understand more about this feature.

Other Features/Changes:

The project undoubtedly has some great challenges ahead to keep growing and maintaining its leading position in the ecosystem. These are some of the upcoming features. Most of them are experimental, and they are far from the only ones under discussion; there is plenty of ongoing work and many proposals from the active Node.js community.

Default DNS resolution
ECMAScript modules improvements
Improved support for AbortController and AbortSignal
Updated Platform Support
Async Hooks
Direct Network Imports
Build-time user-land snapshot
Support for JSON Import Assertions
Unflagging of JSON modules (experimental)
Support for HTTPS and HTTP imports
Diagnostic Channel
Trace Events
WASI

You can check the full changelog here.

Final Remarks

Node.js 12 will go End-of-Life in April 2022.
Node.js 14 (LTS) and Node.js 16 (LTS) will be followed by Node.js 18 as the next LTS line.
Node.js 18 will be promoted to Long-term Support (LTS) in October 2022 (NOW).
After being promoted to LTS, Node.js 18 will be supported until April 2025.

Upgrade Now!

Moving to the LTS version is the best decision for you to include the following improvements in your development workflow:

FetchAPI and Web Streams
V8: New advanced features, array methods, improvements, and enhancements.
Test runner without the need for third-party packages.
Deprecated APIs: Check the list here

Enhancement in Intl.Locale API.
Performance improvement in both class fields and private class methods.

Migration

To migrate your version of Node, follow these steps:

For Linux and Mac users, follow these commands to upgrade to the latest version of Node. The module n makes version management easy:

npm install n -g

For the latest stable version:

n stable

For the latest version:

n latest

Windows Users
Visit the Node.js download page to install the new version of Node.js.

Special Thanks

With 99 contributors and 1348 commits, Node.js 18 LTS came to life 🎉. Special thanks to @_rafaelgss, @BethGriggs_, and @_richard_lau_ for making this release happen 💚

$ nvm install 18.12.0

And thank you to all of Node.js project contributors. Our complete admiration and support for such incredible work 💪.

NodeSource Node.js Binary Distributions

NodeSource, from the beginning, was created with a great commitment to the developers’ community, which is why it has provided documentation for using the NodeSource Node.js Binary Distributions via .rpm, .deb as well as their setup and support scripts.

If you are looking for NodeSource’s Enterprise-grade Node.js platform, N|Solid, please visit https://downloads.nodesource.com/, and for detailed information on installing and using N|Solid, please refer to the N|Solid User Guide.

We are also aware that, as a start-up, you want 'Enterprise-grade' at a startup price. This is why we extend our product to small and medium-sized companies, startups, and non-profit organizations with N|Solid SaaS.

Useful Links / References

You can upgrade to NodeJS v18 using the official download link

New Node.js features bring a global fetch API & test runner. Check out the Node version 16-18 report

Welcome Node.js 18 by RedHat
Announcing a new --experimental-modules

A new jQuery release for Xmas

#619 — December 16, 2022

Read on the Web

🎄 This is the final issue of the year – we’ll be back on January 6, 2023. We hope you have a fantastic holiday season, whether or not you are celebrating, and we’ll see you for a look back at 2022 in the first week of January 🙂
Peter Cooper and the Cooperpress team

JavaScript Weekly

Announcing SvelteKit 1.0 — Svelte is a virtual DOM-free, compiled-ahead-of-time frontend UI framework with many fans. SvelteKit introduces a framework and tooling around Svelte to build complete webapps. This release post explains some of its approach and how it differs from other systems.

The Svelte Team

Dr. Axel Tackles Two Proposals: Iterator Helpers and Set Methods — Here’s something to get your teeth into! Dr. Axel takes on two promising ECMAScript proposals and breaks down what they’re about and why they’ll (hopefully) become useful to JavaScript developers. The first tackles iterator helpers (new utility methods for working with iterable data) and the second tackles Set methods which will extend ES6’s Set object.

Dr. Axel Rauschmayer

🧈 Retire your Legacy CMS with ButterCMS — ButterCMS is your new content backend. We’re SaaS so we host, maintain, and scale the CMS. Enable your marketing team to update website + app content without needing you. Try the #1 rated SaaS Headless CMS for your JS app today. Free for 30 days.

🧈 ButterCMS sponsor

🏆  The Best of Node Weekly in 2022 — In this week’s issue of Node Weekly (our Node.js-focused sister newsletter) we looked back at the most popular items of the year, including the Tao of Node, an array of JavaScript testing best practices, and the most popular Node.js frameworks in 2022.

Node Weekly Newsletter

jQuery 3.6.2 Released — Humor me. You might not be using jQuery anymore, but it’s (still) the most widely deployed JavaScript library and it’s fantastic to see it being maintained.

jQuery Foundation

IN BRIEF:

Node 19.3.0 (Current) has been released to bring npm up to v9.2. Breaking changes in v9.x warrant this update and the release post explains the current policy around npm’s ongoing inclusion in Node.

ƛ The Glasgow Haskell Compiler (GHC) has gained a new JavaScript backend meaning the reference Haskell compiler can now emit JavaScript and be used more easily to build front-end apps.

GitHub is rolling out secrets scanning to all public repos for free.

The New Stack reflects on 2022 as a ‘golden year’ for JavaScript and some of the developments we’ve seen. We’ll be doing our own such roundup in the next issue.

RELEASES:

Node.js 16.19.0 (LTS) and 14.21.2 (LTS)

Chart.js 4
↳ Canvas-based chart library. (Samples.)

PouchDB 8.0
↳ CouchDB-inspired syncing database.

SWR 2.0 – React data-fetching library.

📒 Articles & Tutorials

Why Cypress v12 is a Big Deal — A practical example-led love letter of sorts to how the latest version of the popular Cypress ‘test anything that runs in a browser’ library makes testing frontend apps smoother than before.

Gleb Bahmutov

Five Challenges to Building an Isomorphic JS Library — When it comes to JavaScript, “isomorphic” means code or libraries that run both on client and server runtimes with minimal adaptations.

Nick Fahrenkrog (Doordash)

▶  A Podcast for Candid Chats on Product, Business & Leadership — Join Postlight leaders & guests as they discuss topics like running great meetings & creating solid product launches.

The Postlight Podcast sponsor

Next, Nest, Nuxt… Nust? — “This blog post is for everyone looking for their new favorite JavaScript backend framework.” If the names of frameworks are all starting to blur together in your head, this is for you. Marius explains just what systems like Next and Gatsby do and touches on a few differences.

Marius Obert (Twilio)

Calculating the Maximum Diagonal Distance in a Given Collection of GeoJSON Features using Turf.js — This is cool. Turf.js is a geospatial analysis library, by the way.

Piotr Jaworski

Optimize Interaction to Next Paint — How to optimize for the experimental Interaction to Next Paint (INP) metric — a way to assess a page’s overall responsiveness to user interactions.

Jeremy Wagner & Philip Walton (Google)

Need to Upgrade to React 18.2? Don’t Have Time? Our Experts Can Help — Stuck in dependency hell? We’ve been there. Hire our team of experts to upgrade deps, gradually paying off tech debt.

UpgradeJS.com – JavaScript Upgrade Services by OmbuLabs sponsor

How We Configured pnpm and Turborepo for Our Monorepo

Pierre-Louis Mercereau (NHost)

Rendering Emails with Svelte

Gautier Ben Aim

🛠 Code & Tools

Wretch 2.3: A Wrapper Around fetch with an Intuitive Syntax — A long standing, mature library that makes fetch a little more extensible with a fluent API. Check the examples.

Julien Elbaz

SWR 2.0: Improved React Hooks for Data Fetching — The second major release of SWR (Stale-While-Revalidate) includes new mutation APIs, new developer tools, as well as improved support for concurrent rendering.

Ding, Liu, Kobayashi, and Xu

Don’t Let Your Issue Tracker Be a Four-Letter Word. Use Shortcut

Shortcut (formerly Clubhouse.io) sponsor

vanilla-tilt.js 1.8: A Smooth 3D Tilting Effect Library — No dependencies and simple to use and customize. GitHub repo.

Șandor Sergiu

visx: Airbnb’s Low Level Visualization React Components — Bring your own state management, animation library, or CSS-in-JS; visx can slot into any React setup. Demos.

Airbnb

Scene.js 1.7: A CSS Timeline-Based Animation Library — Plenty of examples on the site. Has components for React, Vue and Svelte.

Daybrush

PortalVue 3.0
↳ Feature-rich portal plugin for Vue 3.

Kea 3.1
↳ Composable state management for React.

jest-puppeteer 6.2
↳ Run tests using Jest + Puppeteer.

NodeBB 2.7 – Node.js based forum software.

Pino 8.8 – Fast JSON-oriented logger.

💻 Jobs

Software Engineer — Join our “kick ass” team. Our software team operates from 17 countries and we’re always looking for more exceptional engineers.

Stickermule

Developer Relations Manager — Join the CKEditor team to build community around an Open Source project used by millions of users around the world 🚀

CKEditor

Find JavaScript Jobs with Hired — Create a profile on Hired to connect with hiring managers at growing startups and Fortune 500 companies. It’s free for job-seekers.

Hired

🎁 And one for fun

Snow.js: Add a Snow Effect to a Web Page — Well, it’s that time of the year (in some parts of the world!) If you’re more interested in how the effect is made, it’s inspired by this CodePen example built around some fancy CSS.

Or if you’re a bit more childish, you could always put Fart.js on your site.. 🙈

Merry Christmas to you all and we’ll see you again in 2023!

Enhance Observability with Opentelemetry tracing – Part 1

Recently, conversations have been increasing around OpenTelemetry; it is gaining more and more momentum in Node.js development circles, but what is it? How can we take advantage of the key concepts and implement them in our projects?

Of note, NodeSource is a supporter of OpenTelemetry, and we have recently implemented full support of the open-source standard in our product N|Solid. It allows us to make our powerful Node.js insights accessible via the protocol.

OpenTelemetry is a relatively recent, vendor-agnostic emerging standard that began in 2019 when OpenCensus and OpenTracing combined to form OpenTelemetry, seeking to provide a single, well-supported integration surface for end-to-end distributed tracing telemetry. In 2021, they released v1.0.0, offering stability guarantees for the approach.

And most important, OpenTelemetry is an open-source observability project/framework with a collection of software development kits (SDKs), APIs, and tools for instrumentation from the Cloud Native Computing Foundation (CNCF).

W3C Trace Context is the standard format for OpenTelemetry. Cloud providers are expected to adopt this standard, providing a vendor-neutral way to propagate trace IDs through their services. Organizations use OpenTelemetry to send collected telemetry data to a third-party system for analysis.

But to break down its history a bit, we think it’s important to understand the concept of __Observability__.

At NodeSource, as you likely know, we work daily on Observability, focusing exclusively on the Node.js runtime, and N|Solid supports some OpenTelemetry features starting from version 4.8.0. But before getting deeper into OTel, it is important to understand Observability and try to answer this important question: What is Observability?

Setting the foundations to talk about OpenTelemetry

It’s important to understand that when we talk about Observability, we need first to know what questions we seek to answer or clarify when detailing a system.

The first question often asked is __why__ my application behaves in a specific way. To solve this and other questions, we first must instrument our system so that our application can emit signals, that is, traces, metrics, and logs. When we do this correctly, we have the information we need.

Observability is the ability to measure the internal states of a system by examining its outputs. – Splunk

Detailing a system through Data Collection: Telemetry Data

Your systems and apps need proper tooling to collect the appropriate telemetry data to achieve Observability.
But what is the telemetry data that we need?

The three key concepts are:

Metrics

Logs

Traces

Ok, let’s define each of these concepts:

Metrics

__Metrics__: are aggregations over a period of time of numeric data about your infrastructure or application. Examples include system error rate, CPU utilization, and request rate for a given service.

As quoted by isitobservable.io, OpenTelemetry has three metric instruments:

__Counter__: a value that is summed over time (similar to the Prometheus counter)
__Measure__: a value that is aggregated over time (a value over some defined range)
__Observer__: captures a current set of values at a given time (like a gauge in Prometheus)

The context is still very important, along with metric information like name, description, unit, kind (counter, observer, measure), label, aggregation, and time.

Logs

__Logs__: A Log is a timestamped message emitted by services or other components. They are not necessarily associated with any particular user request or transaction, but they become more valuable when they are.

The logical line would tell us that here we must jump to traces because it is part of the three key concepts. But before defining what a trace is, we must zoom in on the concept of __Span__.

Span

__Span__: A Span represents a unit of work or operation. It tracks specific operations that a request makes, painting a picture of what happened during the time in which that operation was executed.

A span is the building block of a trace and is a named, timed operation representing a piece of the workflow in the distributed system. All traces are composed of Spans.

Traces

__Traces__: A Trace records the paths taken by requests (made by an application or end-user) as they propagate through multi-service architectures, like microservice and serverless applications. It is also known as a Distributed Trace. A trace is almost always an assessment of end-to-end performance.

Without tracing, it is challenging to pinpoint the cause of performance problems in a distributed system.

You may have noticed that we moved beyond the three pillars of Observability when introducing the concept of Span; however, the three pillars plus Spans make up what is known as __Telemetry Data__: simple __signals emitted from applications and resources about their internal state.__

The core concept of Context Propagation

When we want to correlate events across our services’ boundaries, we look for a context that helps us identify the current trace and Span. But context is not the only thing we need; we also need __propagation__.

If you are with us following the article carefully, you will realize that in the definition of trace, we talk about the word __‘propagation’__. You might wonder what this means.

Propagation is how context is bundled and transferred in and across services, often via HTTP headers. Now, with these concepts clear, we can begin to understand the concept of __Context Propagation__.

A critical functionality required to implement Distributed Tracing is the concept of Context Propagation. We can define it as a mechanism for storing state and accessing data across the lifespan of a distributed transaction, either across execution contexts inside a process or across the boundaries of the services that conform to our system.

For __in-process propagation__, we typically use something like the __AsyncLocalStorage__ class from the __async_hooks__ module.
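A minimal sketch of in-process propagation with AsyncLocalStorage (the trace-id handling is simplified on purpose):

import { AsyncLocalStorage } from "node:async_hooks";

const als = new AsyncLocalStorage();

async function handleRequest(traceId) {
  // Everything executed inside run() sees the same store,
  // even across await boundaries and callbacks.
  await als.run({ traceId }, async () => {
    await doSomeWork();
  });
}

async function doSomeWork() {
  const ctx = als.getStore();
  console.log("current trace id:", ctx.traceId);
}

await handleRequest("0af7651916cd43dd8448eb211c80319c");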

Whereas __across processes__, it will depend on the IPC protocol used. For example, for HTTP there's the Trace Context specification from the __W3C__, which defines the __traceparent__ and __tracestate__ headers to propagate tracing info.
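For reference, a propagated traceparent header looks roughly like this (version, trace-id, parent span id, and trace flags; the tracestate values are purely illustrative):

traceparent: 00-0af7651916cd43dd8448eb211c80319c-b7ad6b7169203331-01
tracestate: vendor1=value1,vendor2=value2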

Getting into a Distributed Application

Let's say we have a distributed application like the one in the picture. It has 4 Node.js services: API, AUTH, SERVICE1, and SERVICE2, plus 1 database.

Imagine we're having intermittent performance issues. They could come from several points:
– Database access
– Network link status
– DNS request latency, etc.
Finding out exactly where can become a very hard and time-consuming task; the more complex the system, the harder it gets.

Distributed tracing will help us A LOT with that, as we’ll generate tracing information on every point of the distributed system (A, B, C, D, and E). Not only that, but while the request goes through all the services, thanks to Context Propagation, some ‘tracing state’ will be passed along so all the tracing info can be linked to the very same request.

Instrument your system

To get visibility into the performance and behaviors of the different microservices, we need to instrument the code with OpenTelemetry to generate traces. But first, let’s define what Instrumentation is…

Automatic Instrumentation

With __Automatic Instrumentation__, our instrumentation libraries will automatically take the configuration provided (through code or environment variables) and do most of the work.

In the following example, using the OpenTelemetry SDK, we show how to automatically generate spans for every HTTP transaction handled by the Node.js http core module.
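A hedged sketch of that kind of setup, assuming the @opentelemetry/instrumentation-http package:

import { NodeTracerProvider } from "@opentelemetry/sdk-trace-node";
import { SimpleSpanProcessor, ConsoleSpanExporter } from "@opentelemetry/sdk-trace-base";
import { registerInstrumentations } from "@opentelemetry/instrumentation";
import { HttpInstrumentation } from "@opentelemetry/instrumentation-http";

const provider = new NodeTracerProvider();
provider.addSpanProcessor(new SimpleSpanProcessor(new ConsoleSpanExporter()));
provider.register();

// From now on, every HTTP transaction handled by the http core module
// automatically gets its own span.
registerInstrumentations({
  instrumentations: [new HttpInstrumentation()],
});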

Manual Instrumentation

__Manual instrumentation__, on the other hand, while requiring more work on the user/developer side, enables far more options for customization, from naming various components within OpenTelemetry (for example, spans and traces) to adding your own attributes, specific exception handling, and more. The following example shows how to manually generate a Span using the OpenTelemetry SDK.
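A hedged sketch of manual span creation (the tracer name, span name, and attribute are made up for illustration):

import { trace, SpanStatusCode } from "@opentelemetry/api";

const tracer = trace.getTracer("orders-service");

async function processOrder(orderId) {
  const span = tracer.startSpan("processOrder", {
    attributes: { "app.order_id": orderId },
  });
  try {
    // ... the actual business logic would go here ...
    span.setStatus({ code: SpanStatusCode.OK });
  } catch (err) {
    span.recordException(err);
    span.setStatus({ code: SpanStatusCode.ERROR, message: err.message });
    throw err;
  } finally {
    span.end();
  }
}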

How to Implement Opentelemetry in my project?

The way we historically would implement a typical observability pipeline is shown in the following picture.

In this case, having all that data at your disposal is great and can give us a valuable overview of our system, but unless we are able to somehow correlate the observability signals (metrics, logs, and traces), we won't get the best out of it.
OpenTelemetry comes to solve this problem. The solution comes from correlating these signals. This can be done by applying the same concept of Context Propagation that was used for Traces to Metrics and Logs, so identifiers such as the trace_id and the span_id are associated with those signals.

OpenTelemetry spanId and traceId can correlate Logs and Metrics with a specific Span in a Trace.

Opentelemetry Components

OpenTelemetry is much more… Before finishing this article, it is important to describe the different components of OpenTelemetry.

Note: For more detail, read the specification overview of the Opentelemetry project.

OpenTelemetry API: It provides an API, which defines data types and operations for generating and correlating tracing, metrics, and logging data.

💚 From N|Solid 4.8.0 we provide an implementation of the OpenTelemetry TraceAPI, allowing users to instrument their own code using the de-facto standard API.

OpenTelemetry SDK: It provides Language-specific implementations of the API.

OpenTelemetry OTLP: A protocol to transport the Telemetry Data.

💚 With N|Solid 4.8.0 we support many instrumentation modules available in the OpenTelemetry ecosystem, and we support exporting traces using the OpenTelemetry Protocol (OTLP) over HTTP.

OpenTelemetry Collector: To receive, process, and export Telemetry data.

💚 In N|Solid 4.8.0 it is now possible to send N|Solid Runtime monitoring information (metrics and traces) to backends supporting the OpenTelemetry standard, such as multiple APMs (Dynatrace, Datadog, New Relic).

OpenTelemetry Semantic Conventions: To have well-defined naming for the attributes associated with the signals (service.name, http.port, etc.)

We know that there may be other key concepts to explore around OpenTelemetry, and for this reason, we invite you to visit the project's website or the GitHub repo directly.

This introductory article gives us the basis for sharing a demo we prepared for NodeConf.EU, where we apply open-source tools to implement Opentelemetry in your project. We invite you to stay tuned for our next blog post. 😉 Wait for the second part!

Conclusions

Traces are really useful for understanding modern distributed systems.
We build better software when we get the best of our traces.
With OTel (OpenTelemetry), we're able to maximize insights and answer future questions without having to make code changes.
#OTel provides interoperability with observability tools.
Collecting and correlating telemetry data is easy if you follow the OpenTelemetry framework.
As far as we know, the OpenTelemetry community is working hard to develop support for metrics and logs. Waiting for news soon! 🤞
Note: If you want to learn more about OpenTelemetry in Javascript, click HERE

To start getting more value out of your traces and metrics, you can use OpenTelemetry with the N|Solid backend.

Achieve Your Performance Goals With N|Solid

We know that you want to get the best out of your application and do it professionally, and you will surely need a great ally to help you with various tools without affecting performance. We don't want to give you a 'marketing speech' telling you that we are the best… you can 👀 check it directly here with this open-source tool that also includes OTel results.

We’d love to hear more from you! 💚
– Feel free to TRY N|Solid and get in touch with us on Twitter at @nodesource.