JavaScript, ML and LLMs

#654 — September 14, 2023


JavaScript Weekly

Bun 1.0: Is It a Toolkit? Is It a Runtime? It’s Both — You’ve used Node, you’ve seen Deno, now Bun has grown up too. It’s a performance-oriented server-side JS runtime built atop JavaScriptCore and makes the unique claim of being “a drop-in replacement for Node.js.” It includes extras like transpilation, bundling, package management, and a Jest-compatible test runner too. The post goes into a lot of depth, but we enjoyed the Bun team’s ▶️ 10 minute introductory video. Does Bun deliver on all its promises yet? No. Is it promising? Yes.

Jarred Sumner et al.
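If you just want a feel for Bun-the-runtime, here’s a minimal sketch (ours, not from the post) of its built-in HTTP server, which takes a fetch-style handler:

```js
// Save as server.js and run with `bun server.js`.
// Bun.serve starts an HTTP server whose handler receives a standard Request
// and returns a standard Response.
const server = Bun.serve({
  port: 3000,
  fetch(req) {
    const { pathname } = new URL(req.url);
    return new Response(`Hello from Bun at ${pathname}`);
  },
});

console.log(`Listening on http://localhost:${server.port}`);
```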

Why Does every() Return true for Empty Arrays? — Nicholas wondered how a condition can possibly be satisfied when there aren’t any values to test, so he dug into the language specs to understand the logic.

Nicholas C. Zakas
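The short version, as a quick illustration of the ‘vacuous truth’ behavior Nicholas digs into:

```js
// With no elements, there is nothing that can fail the test, so the
// result is true; some() is the mirror image and returns false.
[].every((n) => n > 100);        // => true
[1, 2, 3].every((n) => n > 100); // => false
[].some((n) => n > 100);         // => false
```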

The Complete UI Component Library For Enterprise Web Apps — A professional UI component library with power widgets like data grid, calendar, scheduler & Gantt charts. Includes API docs, guides and an unreasonable amount of demos to play with. Seamlessly integrates with React, Angular, Vue & Salesforce apps.

Bryntum sponsor

A First Look at TypeScript 5.3 — TypeScript 5.2 landed a few weeks ago, which means TypeScript 5.3 is already in the works (the final release is due in November), with possible features to think about including Import Attributes, throw expressions, and isolated declarations.

Matt Pocock

⚡️ IN BRIEF:

📅 ViteConf is taking place this October 5-6. It’s free and online.

The August 2023 build of VS Code has just been released and includes improvements to the JS debugger including WebAssembly module decompilation, as well as Move to File and Inline Variable refactorings.

Linus Groh is working on a JavaScript engine in Zig called Kiesel, mostly as a learning project, but it’s passing 25% of test262 after four months of effort.

While searching for something else, I encountered this JavaScript tutorial from 1996 that’s still online. Amazingly, most of it still works fine today.

Esteemed Microsoft code archaeologist Raymond Chen looks at how freestanding JS functions using this can be mistaken for a constructor by VS Code’s static analyzer.

🎉 RELEASES:

MikroORM 5.8 – Powerful Node.js ORM.

Reason 3.10 – Write code in OCaml, but for the JS ecosystem.

Happy DOM 11.0 – A JS implementation of a web browser sans UI.

Node.js v20.6.1 (Current)

📒 Articles & Tutorials

JavaScript’s New Array Grouping Methods — A look at Object.groupBy and Map.groupBy. The proposal including these methods is currently at stage 3 at TC39, but initial support is creeping into dev builds of browsers.

Phil Nash
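As a rough illustration of the shape of the API (support is still patchy at the time of writing, so you may need a dev-channel browser or a polyfill):

```js
const inventory = [
  { name: 'asparagus', type: 'vegetable' },
  { name: 'banana', type: 'fruit' },
  { name: 'cherry', type: 'fruit' },
];

// Object.groupBy returns a plain object keyed by whatever the callback returns.
const byType = Object.groupBy(inventory, (item) => item.type);
// { vegetable: [{ name: 'asparagus', ... }], fruit: [{ name: 'banana', ... }, { name: 'cherry', ... }] }

// Map.groupBy does the same but returns a Map, so non-string keys work too.
const byFirstLetter = Map.groupBy(inventory, (item) => item.name[0]);
// Map(3) { 'a' => [...], 'b' => [...], 'c' => [...] }
```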

JPEG and EXIF Data Manipulation in JavaScript — A look at how to pick through the JPEG format and read and replace EXIF tags directly without leaning on a third party library.

Cédric Patchane

Frontend Performance Monitoring 101 — Learn the basics of JavaScript application performance monitoring to see (and fix) slow faster. Join us for a live AMA.

Sentry sponsor

▶  Building a Mario Game Complete with Auth and Score Saving — Ania tackles the implementation of a game in her usual thorough, step-by-step manner.

Ania Kubów

Running a Playwright Script on AWS Lambda — If you’ve struggled to make it work too, Matt has some pointers.

Matt Steele

A New Method to Validate URLs — URL.canParse isn’t broadly supported yet, but can be easily polyfilled.

Stefan Judis
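Roughly, the idea, plus a sketch of one way to polyfill it (ours, not necessarily Stefan’s), assuming a try/catch around new URL() is good enough for your needs:

```js
URL.canParse('https://example.com/path');          // => true
URL.canParse('not a url');                         // => false
URL.canParse('/relative', 'https://example.com');  // => true (second argument is a base URL)

// Minimal polyfill sketch for engines that don't ship URL.canParse yet.
if (typeof URL.canParse !== 'function') {
  URL.canParse = function (url, base) {
    try {
      new URL(url, base);
      return true;
    } catch {
      return false;
    }
  };
}
```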

How to Run a GitHub Gist with npx — This is an interesting way to quickly deploy a script.

Kelly Fox

🕑 Lei Mao has a cute example of using React in an ad-hoc way on a web page to dynamically render an analog clock. No build step. No JSX.
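If you’re curious what that looks like, here’s a minimal sketch of the same idea (ours, not Lei’s exact code), assuming React and ReactDOM are loaded as globals from CDN script tags and the page has an element with id "clock":

```js
const { useState, useEffect, createElement: h } = React;

function Clock() {
  const [now, setNow] = useState(new Date());
  useEffect(() => {
    const id = setInterval(() => setNow(new Date()), 1000); // tick once a second
    return () => clearInterval(id);                         // clean up on unmount
  }, []);
  return h('span', null, now.toLocaleTimeString());
}

ReactDOM.createRoot(document.getElementById('clock')).render(h(Clock));
```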

▶️ Jack Herrington refutes six reasons not to use React.

🛠 Code & Tools

Shadcn for Vue: Components You Can Copy and Paste — A community-led Vue port of the React-oriented shadcn/ui, a suite of attractive components built with Tailwind CSS and Radix UI, thus making them easy to ‘copy and paste’ into your own apps.

Radix Vue Project

FlexGrid by Wijmo: The Industry-Leading JavaScript Datagrid

Wijmo from GrapeCity sponsor

npm-check-updates: Update package.json Dependencies to Latest Versions — That is, as opposed to the specified versions. It includes a handy -i interactive mode so you can look at potential upgrades and then opt in to them one by one.

Raine Revere

Starry Night 3.0: GitHub-Like Syntax Highlighting — GitHub’s own syntax highlighter isn’t open source, but Starry Night uses WebAssembly (to get access to the Oniguruma regex engine) to get as close as it can.

Titus Wormer

Vuestic 1.8: Open Source UI Library for Vue 3 — A library of more than 60 customizable components. v1.8 introduces new Layout and Textarea components. Official homepage.

Epicmax

Goxygen 0.7: Quickly Generate a Go Backend for a JS Project — A tool that sets up a new Go-based project with Angular, React, or Vue in the front-end, and Docker and Docker Compose files to make it all work.

Sasha Shpota

Deliver Real-Time Live Streams with Amazon IVS — Amazon IVS enables developers to create dynamic real-time and low-latency video experiences. Click here to learn more.

Amazon Web Services (AWS) sponsor

xterm.js 5.3.0: Build Terminals in the Browser — It’s used in many projects like VS Code, cPanel, Azure Cloud Shell, and other browser-based IDEs. There’s a live demo on the homepage to try.

xterm.js team
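For a flavor of the API, a minimal sketch (assuming the xterm npm package, a bundler that can handle the CSS import, and a #terminal element on the page):

```js
import { Terminal } from 'xterm';
import 'xterm/css/xterm.css';

const term = new Terminal({ cols: 80, rows: 24 });
term.open(document.getElementById('terminal'));   // attach to the DOM
term.write('Hello from \x1b[1;32mxterm.js\x1b[0m\r\n$ ');
term.onData((data) => term.write(data));          // naive local echo, no real shell behind it
```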

Gridstack.js 9.2
↳ Build interactive dashboards in minutes. (Demos.)

Accessible Astro Starter 3.0
↳ A starter theme for an Astro-powered blog.

Ant Design 5.9
↳ Popular React UI library & design language.

📊 Reveal.js 4.6 – Write presentations in HTML.

Electron 26.2

💻 Jobs

Find JavaScript Jobs with Hired — Hired makes job hunting easy: instead of chasing recruiters, companies approach you with salary details up front. Create a free profile now.

Hired

🤖 A little JavaScript AI-side..

AI, LLMs and machine learning have caught the imagination of many developers recently, whether through training and deploying models, calling out to third party APIs (like those OpenAI offers), or using tools like GitHub Copilot to write code. It’s common, however, for a lot of AI/LLM experimentation to take place in Python, rather than JavaScript..

Nonetheless, there’s an increasing number of projects in the JavaScript AI/ML space worth keeping an eye on, as well as an upcoming AI developer event being organized by two folks from the JavaScript space:

Transformers.js: State-of-the-Art Machine Learning for the Web — A JavaScript library designed to be functionally equivalent to Hugging Face’s transformers Python library meaning you can run the same pretrained models using a very similar API. You can do things like ML-powered speech recognition directly in your browser using OpenAI’s Whisper model. GitHub repo.

Joshua Lochner et al.
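A minimal sketch of what that looks like, assuming the @xenova/transformers package (exact output and model downloads will vary):

```js
import { pipeline } from '@xenova/transformers';

// Downloads and caches a small pretrained model on first use,
// then runs entirely client-side (or in Node).
const classifier = await pipeline('sentiment-analysis');

const result = await classifier('Running ML in the browser is surprisingly pleasant.');
console.log(result); // e.g. [{ label: 'POSITIVE', score: 0.99... }]
```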

Microsoft TypeChat: An Approach for Type-Safe LLM Responses — Anders Hejlsberg and Daniel Rosenwasser of TypeScript fame are just two of the prominent names attached to this project, demonstrating the huge interest within MS for LLMs. TypeChat’s goal is to work around the issue of LLMs outputting unstructured natural language and to direct output into a typed form.

Hejlsberg, Lucco, Rosenwasser et al.

WebLLM: Run LLM Models in the Browser with WebGPU — Less directly JavaScript, as it uses WebGPU, but it’s yet another way to run large language models directly within the browser, and one you can control from JavaScript. GitHub repo.

MLC LLM

TensorFlow.js: Machine Learning for JavaScript Developers — Slightly lower level, but a great way to train and deploy models in the browser or in Node.js. There are, of course, lots of demos, too.

TensorFlow
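The classic ‘hello world’ gives a feel for the level it operates at: a tiny model that learns y ≈ 2x (a sketch; use @tensorflow/tfjs-node instead if you’re running it server-side):

```js
import * as tf from '@tensorflow/tfjs';

// One dense neuron is enough to fit a straight line.
const model = tf.sequential();
model.add(tf.layers.dense({ units: 1, inputShape: [1] }));
model.compile({ loss: 'meanSquaredError', optimizer: 'sgd' });

const xs = tf.tensor2d([1, 2, 3, 4], [4, 1]);
const ys = tf.tensor2d([2, 4, 6, 8], [4, 1]);

await model.fit(xs, ys, { epochs: 250 });
model.predict(tf.tensor2d([5], [1, 1])).print(); // prints something close to 10
```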

JavaScript Library Lets Devs Add AI Capabilities to the Web

Loraine Lawson (The New Stack)

▶  A Primer on AI for Developers with Swyx from Latent Space

Svelte Radio

📅 Plus, two JavaScript folks are putting on an AI event..

As part of my interest in AI and ML, I’m attending what promises to be the technical AI event of the year in San Francisco next month: The AI Engineer Summit.

The emerging ‘AI engineer’ category is at the intersection of AI/ML and code: where software engineers can access and implement powerful AI models with just an API. Andrej Karpathy believes that “there’s probably going to be significantly more AI engineers than there are ML engineers / LLM engineers.”

With speakers representing companies like OpenAI, Microsoft, Replit, Vercel, AutoGPT, Adept, LlamaIndex, and Notion, at the Hotel Nikko this October 8-10, the event is organized by two folks well known in the JavaScript world: Swyx (who you may remember from his popular The Third Age of JavaScript post) and Benjamin Dunphy, formerly of Reactathon and Jamstack Conf. You can apply to attend or get a free remote ticket to tune in from wherever you are.

If you’re going to the AI Engineer Summit, I’ll see you there!

👋

Welcome to The Future of Software Development: Powered by Telemetry, Security, and AI

We made some big announcements during our keynote at Collision in Toronto: our AI Assistant, Adrian, and the open sourcing of our Node.js runtime, N|Solid Runtime. They are a big part of our vision for the future of software development, one that is powered by telemetry, security, and AI – which was the topic of our talk. In this post we will share more about our vision and specifically how NodeSource is enabling that future.

NodeSource began as most great companies do, with smart, passionate people who saw a problem they had to fix: there was simply no good tooling for Node.js. We were Node believers and open-source project contributors on a mission to make Node more accessible for developers & safe for enterprises to adopt. Since our beginning we have provided the ecosystem with our insights, training, and binary distributions of the open-source packages – over 110 million downloads in the last year alone – powering Node applications in production all over the globe.

As a result of countless hours of ideating, coding and customer validation, N|Solid was born – an enterprise-grade tool providing the deepest insights with the lowest overhead, all while continuing to keep Node apps secure. Today N|Solid is used by some of the largest organizations and developers globally. The mission that was set all those years ago is now more relevant than ever: over 30 million websites rely on Node.js, and it’s one of the most used and loved technologies by developers worldwide. It’s been an amazing journey.

Revolutionizing Software Development: Advancing Telemetry, Security, and Efficiency with AI Innovation

We have continued to innovate, with our Node experts pushing to create the most advanced telemetry and security platform possible while still providing customers with world-class support for Node. We have always believed that providing the deepest data and insights is the best way to produce better software. Making software is continually challenging; the software development life cycle (SDLC) is highly inefficient.

You begin with an idea that you turn into code, then it gets built, tested and released for users to experience. Then you monitor for issues, which are identified, triaged, and solved. Those fixes are added to other features to build, test, and release… and the cycle continues. While significant effort has been applied to make this process more efficient, the gains have invariably been small improvements – tweaks, really, to the overall process. Until now, fueled by the advancements in AI.

We believe that the future of software is intelligent software engineering, powered by telemetry, security & AI. The SDLC is augmented by applying AI that is trained with the right data to accelerate the production and maintenance of secure, highly performant code. It’s about building a new model – a generative loop based upon the intent of the code and its actual operation in production – bringing data and AI into the process in powerful new ways.

On the front end, AI is redefining the way code is written: from ChatGPT to GitHub Copilot and beyond, generative AI is creating code, documenting it, and writing test plans. These advancements are set to revolutionize the software development process on their own, replacing the often-used copy/paste of code found on Google, Stack Overflow, or in existing codebases. Developers who leverage these new tools will see dramatically increased velocity while still owning the solution. While significant, this is only one part of the solution of the future.

Unveiling the True Measure of Software: Quality and Performance in Production

The reality is that the quality and performance of software is only realized once it is in production, in use by real users. The telemetry data from the application in production is a key component for transforming the SDLC. How software performs in production, not how well it did in a test environment, is the true test of quality code. And the depth of that telemetry data is how you identify issues. This has long been our focus: not just to report on general metrics, but to go well beyond. This is why we established the measure of event loop utilization, worker thread monitoring, and more, to enable deep insights into application health and performance.

Application health is directly tied to security: more than ever, quality code is secure code. But security is not static; new vulnerabilities develop all the time. Visibility into these is critical, especially for production code. It’s why we offer our security tooling, NCM (Node Certified Modules), as part of our platform, enabling customers to have visibility into security issues in both development and live production code.

It’s the depth of data and security health that unlock the opportunities with AI. It’s the other half of the equation of the future of software, powered by telemetry, security and AI. This is the future NodeSource is enabling.

N|Solid – the future of Node, bringing together the power of data and AI

With the announcement of our AI Assistant, “Adrian”, we are leveraging our unique and unparalleled data to help developers identify and resolve issues with tremendous speed and efficiency. Adrian will help every Node developer and devops engineer to not just view the telemetry, security, and alerts that matter – but to understand them, know their context and how to solve for them. It’s a game changer. It takes the power of the most advanced observability tool and the specific context of each application combined with our AI to resolve code issues fast.

Furthermore, our AI tools will assess code quality, identify cost optimizations, generate code and more. It’s like ‘god mode’ for Node.

This is the next step in our journey toward the future state of the SDLC. If you want to experience what Adrian can do, sign up HERE for our early access beta list and we will notify you when you can join the software development revolution.

About NodeSource, Inc.

NodeSource, Inc. is a technology company completely focused on Node.js and is dedicated to helping organizations and developers leverage the power of this technology. We offer the leading APM for monitoring and securing Node.js and provide world-class support and consulting services to help organizations navigate their Node.js journey. #KnowYourNode. For more information, visit NodeSource.com and follow @NodeSource on Twitter.

AI & ML – Highlights Google I/O (Connect) – Miami

On May 24th, 2023, the inaugural edition of Google I/O Connect took place in Miami, USA. Google introduced this conference as an extension to engage directly with the technical community.

Note: Image courtesy of @KarolRojas90

The concept behind Google I/O Connect was to host distributed events in four different locations worldwide.

In Miami, the focus was bringing together Google Developer Experts (GDE) from North America (Canada and USA) and LATAM. Additionally, community leaders from GDG (Google Developer Groups) and Women Techmakers, as well as contributors and collaborators, were invited to participate. The event welcomed over 2,000 attendees and featured 51 outstanding speakers, who were Googlers responsible for delivering technical talks, workshops, and Office Hours.

Note: Image courtesy of @jcrtejada05

The event stood out for its impeccable organization, seamless execution, and strong commitment to ensuring that speakers and attendees had a remarkable experience.

What’s New in…

These were, without a doubt, the four verticals of the event:

Mobile
Web
Cloud
AI

There were incredible advances that made us as developers excited to implement them into our products, but without a doubt, the one we most eagerly awaited was the AI lineup.

Google AI’s Ubiquitous Influence: Reshaping Products Everywhere

Since 2017, Google has held a dominant position in artificial intelligence and modeling, particularly with NLP (Natural Language Processing). NLP is crucial in various applications, including machine translation, sentiment analysis, chatbots, and speech recognition.

However, history took an unforeseen turn with the monumental emergence of OpenAI and its ChatGPT project, and the groundbreaking development of Stable Diffusion for generating images. These advancements have undeniably propelled these technologies into the public eye.

Even though these concepts have been worked on for some years, it is essential to understand the difference between AI and ML, because at this same event, both the I/O keynote and Connect covered advances in both.

Note: Sundar’s Image by The Verge – https://nsrc.io/TikTokVergeAI

AI is a powerful tool that can be used to improve the user experience, make products more efficient, and create new possibilities. Google is committed to using AI to make its products and services better for everyone. That’s why Google announced direct AI integrations into these products and more:

Android Studio Hedgehog: Android Studio Hedgehog uses AI to improve the development process for Android apps. For example, it can automatically generate code, suggest code changes, and identify potential bugs. This can help developers save time and create better apps.

Play Store: The Google Play Store uses AI to recommend apps and games to users based on their interests and past purchases. It also uses AI to surface new apps and games that users might be interested in. This can help users find the best apps and games for their needs.

Photos: Google Photos uses AI to organize, search, and edit photos. For example, it can automatically identify faces in photos and create collages and albums. It can also automatically improve the quality of photos. This can help users easily find and enjoy their photos.

Workspace: Google Workspace uses AI to improve the user experience for various tasks, such as writing emails, creating spreadsheets, and giving presentations. For example, it can suggest words while typing, automatically generate summaries of meetings, and translate documents into other languages. This can help users be more productive and efficient.

Maps: Google Maps uses AI to provide users with directions, traffic information, and other helpful information. For example, it can automatically suggest routes based on the user’s past driving habits and can provide real-time traffic updates. This can help users get around more easily and efficiently.

✨Generative AI

The main thread running through all the AI announcements and product integrations is generative AI, which, as its name suggests, is artificial intelligence that can generate new content on its own.

Check the YouTube video HERE

Generative AI Studio lets you test and better understand the concept of generative AI. It is a console tool for rapidly prototyping and testing generative AI models: you can test sample prompts, design your own prompts, and customize foundation models to handle tasks that meet your application’s needs.

In Generative AI Studio, you can:

Test sample prompts.
Design your prompts.
Customize foundation models.
Convert between speech and text.

Try it HERE!

✨PaLM 2

PaLM 2 is a large language model (LLM). It is a successor to PaLM, trained on a larger dataset and with a more robust architecture. This makes PaLM 2 better at a variety of tasks, including:

Natural language understanding: PaLM 2 can better understand the nuances of human language, such as idioms, sarcasm, and metaphors.
Generating text: PaLM 2 can generate more creative and realistic text, such as poems, stories, and code.
Answering questions: PaLM 2 can answer more complex and challenging questions, even if they are open-ended or strange.
Reasoning: PaLM 2 can better understand and reason about the world by making inferences and drawing conclusions.

PaLM 2 can power personal assistants, educational tools, or creative tools. PaLM 2 is actually a series of models that includes the following:

Gecko, Otter, Bison, and Unicorn are four versions of PaLM 2 (Pathways Language Model 2). They differ in size, performance, and intended use cases.

Gecko is the smallest version of PaLM 2, with 1.2 billion parameters. It is designed to be lightweight and efficient, making it suitable for mobile devices and other resource-constrained environments.
Otter is a mid-sized version of PaLM 2, with 137 billion parameters. It balances size and performance well, making it suitable for various applications.
Bison is a large version of PaLM 2, with 540 billion parameters. It is the most potent version of PaLM 2, and it is designed for demanding tasks such as natural language understanding, generating text, and answering questions.
Unicorn is the largest version of PaLM 2, with 1.5 trillion parameters. It is still under development but is expected to be the most powerful LLM ever created.

Which version of PaLM 2 is right for you depends on your specific needs. Gecko is a good choice if you are looking for a lightweight and efficient model for mobile devices. Otter is a good choice if you want a good balance between size and performance. Bison is a good choice if you are looking for a powerful model for demanding tasks. And Unicorn is the pick if you want the most powerful LLM ever created.

Soon, Google will release a more sophisticated model called Gemini; what is coming is hard to imagine, considering that researchers from Google Brain and Google DeepMind have come together on this project.

At the moment, you can join the MakerSuite waitlist to experiment with the PaLM 2 API: https://makersuite.google.com/waitlist and read the API documentation: https://developers.generativeai.google/tutorials/setup

✨Bard – AI-Chatbot (http://bard.google.com) + 🎨 Bard + Adobe Firefly

Bard is an impressive AI chatbot meticulously crafted by Google. As a sophisticated conversational AI, Bard is a large language model designed to be informative and comprehensive. Trained on an immense corpus of text data, Bard can communicate and generate human-like responses across various prompts and inquiries. Whether you seek factual summaries or immersive storytelling, Bard is primed to deliver. Bard is still under development but is learning new things every day.

Adobe Firefly is a remarkable generative AI, harnessing the power to bring visual concepts to life based on textual descriptions. When paired with Bard, the possibilities for creativity and expression become boundless. This tool can create everything from marketing materials to personal projects. For example, you could use Bard to generate a text description of a product and then use Adobe Firefly to create an image of that product. Or, you could use Bard to generate a poem and then use Adobe Firefly to create an image representing the poem. The possibilities are endless.

Note: Bard + Adobe Firefly are still in beta, so there may be some bugs or limitations. Check the review of this amazing tool HERE

As a delightful bonus, thanks to Bard, leveraging generated content between Gmail and Google Docs becomes effortless. Additionally, Colab’s growing relevance makes it an ideal platform for code-centric projects, ensuring enhanced productivity and collaboration.

Here are some of the benefits of these new developer features in Bard:

More precise code citations can help to build a more collaborative and respectful community of developers.
Exporting to Replit can make it easier for developers to collaborate on code and share their work with others.
A dark theme can make reading easier in low-light conditions and reduce eye strain.
Integration with various Google apps and services can make it easier for users to get things done.
Connection with external services and partners can offer users various possibilities.
Generative AI capabilities can help users to create unique visuals and automate data classification.

Vertex AI

Vertex AI is a managed machine learning (ML) platform that helps you build, deploy, and scale ML models faster and easier. It provides a unified experience for managing all aspects of the ML lifecycle, from data preparation to model training and deployment. Vertex AI also includes various tools and services that can help you improve the performance and accuracy of your ML models. It is built on the Google Cloud Platform and integrates with a wide variety of open-source ML frameworks, including TensorFlow, PyTorch, and scikit-learn. This integration allows you to use the tools and libraries you already know.
Try it here: https://cloud.google.com/vertex-ai/.

Project Tailwind

Project Tailwind is a new initiative focused on developing ways to use large language models (LLMs) to create more engaging and informative user experiences. One of the critical goals of Project Tailwind is to make it easier for developers to use LLMs in their applications. To do this, Project Tailwind is developing several tools and resources, including:

A new LLM framework designed to be easy to use and to scale to large datasets.
A new API that allows developers to interact with LLMs more naturally.
A new set of tools that help developers to debug and optimize their LLM applications.

Project Tailwind is an experimental project and does not yet have a public URL or GitHub repo. However, you can sign up for the waitlist to be notified when it becomes available. The waitlist is available here: https://tailwind.withgoogle.com/.

MediaPipe

Google’s partnership with MediaPipe is a significant step forward in the development of ML solutions, providing modular and customizable building blocks.

Project Gameface is an excellent example of the potential of ML. This project uses facial landmark detection to create a virtual avatar that can be used to play games. This is just one example of how ML can be used to improve our lives.

If you are looking to develop an ML application, check out MediaPipe.
You can use MediaPipe for face detection, hand tracking, or object detection.

TensorFlow Overview: What’s New?

Here are some of the new features and improvements that were announced:

KerasCV and KerasNLP: These new APIs make building and training state-of-the-art models for computer vision and natural language processing tasks easier.

DTensor: This new library makes training and scaling large models on distributed hardware easier.

JAX2TF: This new tool makes it easier to port models written with the JAX numerical library to TensorFlow.

TF Quantization API: This new API makes it easier to build TensorFlow models that are more efficient and cost-effective.

Web ML Hub: This new web-based platform makes building and deploying machine learning models in the browser easy.

To begin your exploration, visit https://ai.google/build/machine-learning/ and immerse yourself in a wealth of invaluable resources. This platform serves as your gateway to learning, providing a comprehensive collection of tools and insights that will empower you to apply machine learning to your projects.

Whether you are a beginner or an experienced practitioner, the knowledge and expertise shared on this platform will guide you through every step of your journey. Gain a deeper understanding of the underlying principles, familiarize yourself with cutting-edge tools, and access practical examples that showcase the technology in action.

Google I/O Connect

The Google I/O Connect event in Miami was a great success. It was a great opportunity to learn about the latest Google technologies, and it was also a chance to meet some of the leading experts in the field.

One of the event’s highlights was the chance to meet Dale Markowitz, a renowned figure in artificial intelligence. Markowitz is a Senior Research Scientist at Google AI and one of the leading experts on natural language processing. She was very generous with her time and happy to answer the attendees’ questions.

The Google I/O Connect event allowed me to:

Learn about the latest Google technologies
Meet leading experts in the field
Get my questions answered by Google experts
Network with other developers
Get inspired and motivated to build great things

If you are a developer, I highly recommend attending a Google I/O Connect event. It is a great way to learn, grow, and connect with other developers. You can find upcoming events on the Google Developers events page or explore Google I/O Extended events near you to connect with the community.

Related Articles:

Google I/O 2023: Making AI more helpful for Everyone by Sundar – nsrc.io/45SJOqm
Google I/O Program, Codelabs, Workshops: https://io.google/2023/program/
Techcrunch – Google I/O 2023 is a wrap — here’s a list of everything announced – nsrc.io/43TA3Xr
Google I/O 2023 Highlights: Unveiling Google’s Latest Innovations and Improvements – https://nsrc.io/3WWF9zD
The Verge – Google I/O 2023: all the news from Google’s big developer event – nsrc.io/3MWHiqz
BusinessPost – 15 Exciting Highlights from Google I/O 2023 – nsrc.io/3NhF2eW

AIOps Observability: Going Beyond Traditional APM

AIOps is an emerging technology that applies machine learning and analytics techniques to IT operations. AIOps enables IT teams to leverage advanced algorithms to identify performance issues, predict outages, and optimize system performance. NodeSource sees significant advantages for developers and teams in increasing software quality by leveraging AIOps. We have extended our platform’s (N|Solid) observability capabilities to include AIOps, enabling developers to leverage advanced machine learning and analytics techniques to optimize their Node.js applications.

Our N|Solid platform provides the most advanced visibility into Node.js applications, enabling developers to quickly identify performance issues, detect security vulnerabilities, and troubleshoot errors. N|Solid achieves this level of observability through real-time performance monitoring, comprehensive metrics, and detailed instrumentation of Node.js applications.

Last year, we integrated OpenTelemetry into our runtime and were nearing the release of an extension of this layer into our console. This advancement will further extend our platform to support AIOps. Santiago Gimeno, a Senior Architect, sums up our vision of the integration of OTel (OpenTelemetry) and N|Solid:

“In today’s world, where applications are becoming more complex and distributed, having a good observability solution is more important than ever. The emergence of OpenTelemetry as the de-facto standard for observability is key. It allows application developers to select solutions that adapt better to their needs. Even more, it allows for healthy competition between observability solution vendors. We support this approach and continue to take steps to ensure N|Solid stays compliant with the OpenTelemetry specification, so everyone can use what we believe is the best observability solution for Node.js.”

Key differences between an APM and Observability

APM (Application Performance Management) and observability are both methods of monitoring and managing the performance and health of software applications, but there are key differences between the two:

Scope: APM is focused on monitoring the performance of applications, while observability is a more comprehensive approach that includes monitoring the infrastructure and application stack, as well as the performance of individual services.

Metrics: APM typically relies on predefined metrics and thresholds to identify performance issues, while observability takes a more flexible approach to collect a wide range of data, including logs, metrics, and traces.

Root cause analysis: APM is designed to quickly identify the root cause of performance issues, often through alerting and automated remediation, while observability takes a more holistic approach that emphasizes the need to understand the relationships between different parts of the system to identify and fix issues.

Proactivity: APM is often reactive, focusing on identifying and fixing issues as they arise, while observability is more proactive, focusing on continuous monitoring and analysis to identify potential issues before they become critical.

Tooling: APM is often built around specific tools and technologies designed to monitor and analyze application performance, while observability is more flexible and adaptable, focusing on integrating a wide range of tools and technologies to provide a comprehensive system view.

You need both; each is important for monitoring and managing the performance and health of software applications.

This is why N|Solid is not only an APM but also has observability within its functionalities. And now, with the implementation of ML and SBOM, it goes beyond APM and supports the growing discipline of AIOps.

AIOps: Fundamental concept in Modern IT Operations

AIOps (Artificial Intelligence for IT operations) is an approach to IT operations that leverages machine learning and artificial intelligence to automate and optimize IT tasks. AIOps aims to enhance the efficiency and effectiveness of IT operations by leveraging the vast amount of data generated by various IT systems and applications.

Observability refers to IT teams’ ability to observe and understand the behavior of complex systems in real-time, using a combination of monitoring, logging, and analytics tools.

AIOps and observability enable IT teams to proactively monitor and manage IT systems, applications, and infrastructure, allowing them to identify and resolve issues quickly. AIOps uses machine learning and AI algorithms to identify patterns in large amounts of data, while observability provides the visibility and context needed to understand the behavior of complex systems.

Modern Observability in Place

Support for open-source tracing tools and standards like OpenTelemetry facilitates team collaboration in resolving issues. OpenTelemetry is the second most active CNCF project, behind only Kubernetes, showcasing its importance to the industry.

Image: Michael Haberman (@hab_mic) on Twitter

Following this standard, N|Solid (since N|Solid 4.8.0) supports OTel:
– It implements the OpenTelemetry Trace API, allowing users to use the de facto standard API to instrument their code.
– It supports many of the instrumentation modules available in the OpenTelemetry ecosystem.
– It supports exporting traces using the OpenTelemetry Protocol (OTLP) over HTTP.
– With this feature it is now possible to send N|Solid runtime monitoring information (metrics and traces) to backends supporting the OpenTelemetry standard, such as multiple APMs (Dynatrace, Datadog, New Relic, etc.).
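As a rough illustration (a sketch, not N|Solid-specific code), instrumenting a Node.js function with the vendor-neutral OpenTelemetry Trace API looks something like this; the service and span names here are hypothetical:

```js
const { trace, SpanStatusCode } = require('@opentelemetry/api');

const tracer = trace.getTracer('checkout-service');

async function handleCheckout(cart) {
  // startActiveSpan runs the callback with the new span set as active,
  // so any nested instrumentation is attributed to it.
  return tracer.startActiveSpan('checkout', async (span) => {
    try {
      span.setAttribute('cart.items', cart.length);
      // ... actual checkout work goes here ...
      return { ok: true };
    } catch (err) {
      span.recordException(err);
      span.setStatus({ code: SpanStatusCode.ERROR });
      throw err;
    } finally {
      span.end();
    }
  });
}
```

Traces produced this way can then be exported over OTLP/HTTP to any compliant backend.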

Additionally, we included OTEL in the ‘APM performance dashboard,’ an open-source tool we released to the community, enabling developers and organizations to understand the impact of APM tools’ performance.

Image: APM’s Performance Dashboard view

Recent enhancements to the tool include the following:

Updated the data with N|Solid 4.8.0 -> 16.16.0 and 14.20.0
Added a few new tests, especially with different solutions for GraphQL.
Added more APMs: OpenTelemetry, AppDynamics

Added testing of N|Solid against Datadog, Dynatrace, and NewRelic.

Do you want to implement OTel in your Node.js application?

Enriching telemetry data with metadata is an important aspect of observability, and OpenTelemetry provides a flexible and extensible framework for doing so. However, there can be challenges in implementing this in practice, especially when dealing with multiple tools and technologies.

One approach to addressing this challenge is to use a centralized configuration management tool to ensure consistency in metadata enrichment across your observability stack. Please review the following articles for an accurate guide to implementing OpenTelemetry in your project.

Enhance Observability with Opentelemetry tracing – Part 1

Instrument your Nodejs Applications with Open Source Tools – Part 2

However, if you want this implementation out of the box, along with other useful features, we invite you to try N|Solid.

Conclusion

N|Solid Supports OpenTelemetry Features, Integrates SBOM and ML at its Core.

By supporting OpenTelemetry features, N|Solid provides seamless integration with this framework, enabling customers to understand their applications, infrastructure behavior, and performance. This integration enhances the ability of developers and operators to troubleshoot issues, identify bottlenecks, and optimize application performance.

N|Solid’s integration of Software Bill of Materials (SBOM) provides a comprehensive list of all software components used in an application, including open-source libraries and dependencies, which helps organizations to mitigate security risks and ensure compliance with regulations. By integrating SBOM at its core, N|Solid provides organizations with an efficient and reliable way to manage the security and compliance of their software applications.

Finally, N|Solid’s integration of machine learning (ML) at its core is another critical feature that helps to identify patterns and anomalies in data, allowing developers and operators to gain insights that are not easily detectable using traditional monitoring tools. This integration of ML at the core of N|Solid enables organizations to improve the overall reliability, performance, and security of their applications and services.

N|Solid’s support of OpenTelemetry features, integration of SBOM, and integration of ML at its core provides developers and operators with a comprehensive set of tools to manage and optimize their applications and infrastructure, making N|Solid a valuable platform in the modern software development and operations landscape.

Ready to connect?

If you want to know more about our APMs Benchmark project and get the most out of your Node.js application, read this incredible article by our VP of Engineering, Adrián Estrada, ‘In-depth analysis of the APMs performance cost in Node.js’.

We also invite you to use the ✨APM’s Performance Dashboard✨ here:
Read the full blog post here: https://nsrc.io/4xFaster
Contribute here: https://github.com/nodesource/node-APMs-benchmark
If you have any questions, please contact us at [email protected] or through this form.

Experience the Benefits of N|Solid’s Integrated Features
Sign up for a Free Trial Today

To get the best out of Node.js and experience the benefits of N|Solid’s integrated features, including OpenTelemetry support, SBOM integration, and machine learning capabilities, sign up for a free trial and see how N|Solid can help you achieve your development and operations goals. #KnowYourNode

NodeSource Introduces Machine Learning on its N|Solid Platform to Help Make Better Node Apps

N|Solid is an incredibly versatile platform for helping developers and DevOps engineers build and manage highly performant and secure Node.js web applications. With the advancement of machine learning, you can unlock even more potential. Our ML solution is a powerful tool that can increase the quality of the user experience and boost efficiency for organizations running Node.js applications. In this article, we’ll explore what machine learning is and how you can use it within N|Solid, plus we’ll provide tips and best practices for leveraging this new capability to get the most out of your Node.js project.

AI – growing in value in the software development lifecycle

Img #1 AI vs ML concepts

Put in context, artificial intelligence refers to the general ability of computers to emulate human thought and perform tasks in real-world environments, while machine learning refers to the technologies and algorithms that enable systems to identify patterns, make decisions, and improve themselves through experience. — https://ai.engineering.columbia.edu

The technology world has been abuzz with the growing hype of artificial intelligence (AI). This is understandable as AI promises to revolutionize business and everyday life; from self-driving cars to automated customer service, AI will shape the future of our civilization. As technology continues to advance, the potential applications for AI are seemingly endless.

AI and ML (Machine Learning) are closely related, but not identical. AI is the broader concept of machines being able to perform tasks that would normally require human intelligence, such as visual perception, speech recognition, decision-making, and language understanding. ML is a specific subset of AI that is focused on the development of algorithms and statistical models that allow computers to “learn” from data, without being explicitly programmed. In other words, ML is a method for achieving AI.

ML and AI can help developers build better software in several ways. Some examples include:

Automating repetitive tasks: ML algorithms can be used to automate repetitive tasks that would otherwise require human intervention. For example, an ML model could be trained to automatically classify and categorize emails, reducing the need for manual sorting.

Improving software performance: ML algorithms can be used to optimize the performance of software systems. For example, an ML model could be trained to predict the load on a server, allowing the software to dynamically adjust its resource usage in response.

Enhancing the user experience: AI-powered software can provide a more personalized and intuitive experience for users. For example, a chatbot powered by natural language processing (NLP) could be used to provide customer service, or a recommendation system powered by ML could be used to suggest products to customers.

Predictive Maintenance: AI and ML algorithms can be used to predict when a machine or equipment is likely to fail, allowing maintenance to be performed before the failure occurs.

Identify and Fix Bugs: AI and ML can be used to automatically identify and fix software bugs, reducing the need for human intervention.

Improve Cybersecurity: AI and ML can be used to identify and mitigate cyber threats and detect suspicious activity on a network, which helps improve cybersecurity.

We believe there is great promise for developers in leveraging new tooling that helps them focus on the solution and resolve issues as fast as possible, reducing security risks and delivering amazing user experiences. We see AI and ML as a major step forward in building better software.

Node.js exposes the potential of AI.

Img 2 – AI Frameworks

We believe Node.js is a powerful technology for leveraging the potential of AI. It allows developers to easily create and manage AI applications, as it features extensive APIs for interacting with AI-related services. With Node.js, developers can create AI-backed applications that can be deployed across various platforms, making it an invaluable asset for businesses looking to leverage the power of AI.

The combination of Node.js and AI will also make it possible to create sophisticated applications that can interpret data in real-time, allowing businesses to improve their customer experience dramatically. As AI advances, Node.js will be a key tool in helping developers make the most out of the technology.

Recently, several AI projects have been ushering in a massive wave of exploration. OpenAI and its ChatGPT have become some of the fastest-adopted tools ever. We are impressed with the incredible progress of the OpenAI project and many others, and we continue to study, experiment with, and review implementations of these technologies and their potential for the ecosystem.

Links to other cool resources

GitHub OpenAI: https://github.com/openai/openai-quickstart-node

OpenAI Docs: https://beta.openai.com/docs/quickstart
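To give a sense of what the quickstart boils down to, here’s a hedged sketch using the v3-era Node SDK (the API surface has changed across SDK versions, so check the docs above; it assumes OPENAI_API_KEY is set in the environment):

```js
const { Configuration, OpenAIApi } = require('openai');

const openai = new OpenAIApi(
  new Configuration({ apiKey: process.env.OPENAI_API_KEY })
);

async function main() {
  // Ask the chat completion endpoint a simple question.
  const completion = await openai.createChatCompletion({
    model: 'gpt-3.5-turbo',
    messages: [{ role: 'user', content: 'Suggest a name for a Node.js mascot.' }],
  });
  console.log(completion.data.choices[0].message.content);
}

main().catch(console.error);
```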

Already, Node.js is being used by many companies to power their AI-driven applications, and this trend will only continue as more companies seek to take advantage of the power of AI. Node.js also allows developers to quickly set up and deploy AI-driven applications, further accelerating the development process. With Node.js and AI, businesses can create smarter, faster, and more efficient applications.

NodeSource Introduces Machine Learning in the N|Solid Platform

N|Solid is a Node.js platform with an integrated AI development environment.

This feature allows for training models that will later detect similar patterns in your application data and fire custom events.

It also offers advanced analytics capabilities and support for various AI technologies, making it a powerful tool for businesses looking to capitalize on the potential of AI.

Img 3 – ML Feature Cover

N|Solid is part of a larger trend toward making AI and ML more accessible to developers, helping them utilize these advancements to deliver software solutions. By providing an integrated platform for Node.js in production, N|Solid is making it easier for businesses to create sophisticated AI-driven models and reap the benefits that come with them.

Developers can start using this new feature in N|Solid immediately to:

Identify performance issues and present insights to resolve quickly
Apply insights across multiple applications
Smart analysis and detection of common Node.js performance issues with the bundled models we provide
Training of custom models to detect specific problems
Global notifications and events tracking for processes and applications

Below you will see ML in action inside N|Solid.

Machine Learning UI

In the N|Solid Console, the Machine Learning feature can be accessed from the app summary or process detail views.

Each handles different data sets and will have a different effect on the model you train.

Training ML Models

The Machine Learning models can be trained using two kinds of data sets. The models trained in the app summary view will use the aggregated data of all the processes running inside the app.

On the other hand, the models trained in the process detail view will use process-specific data.

Train a model in the app summary view.

When a process/app is first connected, it takes a certain amount of data before a model can be successfully trained; you will find a progress loader under the process configuration:

To train a model in an app summary page, click on the Train ML Model button.

Train a model in a process detail view.

To train a model in a process detail page, click on the Train ML Model button.

Model creation and training

After clicking on the Train ML Model button, a modal will open; here, you can create, filter, and train models; this modal is the same for both pages.

To create a model, click on CREATE NEW MODEL.

Name and briefly describe the model, then save.

Select the created model and click on ‘TRAIN.’

When the trained model finds a data pattern similar to the one it was trained with, it will fire an event and show a banner on top of the navbar.

Click on View Event to be redirected to the events tab; here, you will find the most recent machine learning event.

The events will also appear in the application status section; clicking on VIEW ANOMALIES will redirect to the events tab.

Manage the default and custom models.

Machine Learning models can be administered in the settings tab, where you will find a set of default models and the user-trained models; here, the frequency of events being fired can be modified, and the custom user models can be deactivated, deleted, or edited.

For a full reset of the created models, click on RESET MODELS.

Custom user models have edit and delete icons; these models are found beneath the default models.

PLEASE NOTE: Only the name and description of a user-created model can be edited; if you want to change the model data, please retrain the model in the app summary or process detail pages. Default models are activated by default; these can only be activated or deactivated.

Our Machine learning feature has been live since November 2022; if you want to review the official documentation, you can do it here.

One Last Thing…

To get the best out of Enterprise Node.js, start a free trial of N|Solid SaaS, an augmented version of the Node.js runtime, enhanced to deliver low-impact performance insights and greater security for mission-critical Node.js applications.