Debugging Node.js applications in Kubernetes

Why do you need this?

When developing server-side code, you occasionally run into a problem that is very difficult to reproduce: a memory leak or CPU spike that you cannot simulate locally, or a situation where you need to add special logging to your application. When developing locally, you use the Node.js inspector to debug and to capture memory and CPU snapshots that help you find the problem. But how do you do the same in a remote environment? Fortunately, Node.js has excellent support for remote debugging, and in this article we'll look at how to use it in Kubernetes.

Sample application

We will use a simple sample application to demonstrate the entire process. The demo application code is here:

How do I enable debug mode in a Node.js process?

Of course, debug mode is not enabled by default for a Node.js process, since it allows arbitrary code to be executed on the remote machine. There are two ways to enable debug mode in Node.js: starting the process with the --inspect flag, or sending a SIGUSR1 signal to a running process. You can find more information on this here.

Using the --inspect flag

When you start a Node.js process with the flag --inspect, debug mode is enabled immediately:

$ node --inspect dist/index.js
Debugger listening on ws://
For help, see:
Server listening on port 3000...

In the message above, you can see that the debugger is listening on port 9229.

Now you can use VSCode or the Chrome inspector to attach a debugger to this process:

In the example above, this is the default configuration used to attach to the Node.js process:

{
  "name": "Attach",
  "port": 9229,
  "request": "attach",
  "skipFiles": ["<node_internals>/**"],
  "type": "pwa-node"
}

It is important to note that by default the debugger only listens for connections coming from localhost and will reject debug sessions from remote addresses. To get around this, you need to either allow remote addresses using --inspect=host:port, or create a network tunnel between the remote server and your machine; I'll show you how easy it is to tunnel into Kubernetes in the next sections.

Sending a SIGUSR1 signal to a running process

Sending a signal to a process is very useful for spontaneous debugging without restarting the process.

Open one terminal window to start the process:

# start Node.js app
$ node dist/index.js
Server listening on port 3000...

Open another terminal window to enable debug mode:

# find PID of Node.js process
$ ps aux | grep "node dist/index.js"
# use the PID to send USR1 signal to the process
$ kill -USR1 [PID]

By executing the kill command, you will notice in the first terminal window that the debugger is enabled, with the following message:

Debugger listening on ws://
For help, see:

More information on using kill to send signals to processes can be found here.

How do I debug in Kubernetes?

Updating the liveness and readiness probes

While debugging, if your process is paused at a breakpoint, Node.js will be unable to respond to liveness and readiness requests, and Kubernetes will decide to restart the pod, ending your debug session. To prevent this, you need to modify the probes so that they tolerate longer pauses in the process:

# removing livenessProbe
$ kubectl patch deploy/example-app --type json -p='[{"op": "remove", "path": "/spec/template/spec/containers/0/livenessProbe"}]'
# removing readinessProbe
$ kubectl patch deploy/example-app --type json -p='[{"op": "remove", "path": "/spec/template/spec/containers/0/readinessProbe"}]'
# adding dummy livenessProbe
$ kubectl patch deploy/example-app -p '{"spec": {"template": {"spec": {"containers": [{"name": "example-app", "livenessProbe": {"initialDelaySeconds": 5, "periodSeconds": 5, "exec": {"command": ["true"]}}}]}}}}'
# adding dummy readinessProbe
$ kubectl patch deploy/example-app -p '{"spec": {"template": {"spec": {"containers": [{"name": "example-app", "readinessProbe": {"initialDelaySeconds": 5, "periodSeconds": 5, "exec": {"command": ["true"]}}}]}}}}'

Scaling down (optional)

For debugging, you need to make sure that the request actually reaches the pod you are connected to. This can be tricky when multiple pods are running. In the example, we defined 3 replicas in our deployment and also autoscale the pods using a HorizontalPodAutoscaler resource, so we need to tell Kubernetes to scale everything down to one pod and keep it that way.

# check the number of pods
$ kubectl get pods | grep example-app
NAME                           READY   STATUS        RESTARTS      AGE
example-app-56cf79964d-g92n4   1/1     Running   0             53s
example-app-6b8fb58764-hhk46   1/1     Running   0             14s
example-app-6f79d6cf66-wdxqk   1/1     Running   0             71s
# update HPA to run only one replica
$ kubectl patch hpa/example-app -p '{"spec": {"minReplicas": 1, "maxReplicas": 1}}'
# update deployment to run only one replica
$ kubectl scale --replicas=1 deploy/example-app
# check the number of pods again
$ kubectl get pods | grep example-app
NAME                           READY   STATUS    RESTARTS   AGE
example-app-6b8fb58764-2qlgn   1/1     Running   0          110s

Enabling the debugger

Since the --inspect flag should not be enabled when running applications in production, it is easier to enable debugging in a specific pod by sending it a SIGUSR1 signal than to create a new pod. But first, you need to find the PID of your Node.js application.

If your Docker image runs the app like this:

CMD [ "node", "dist/index.js" ]

then the PID of your application will be 1. If you are not sure, use the following commands to check:

# find Node.js process PID
$ kubectl exec -it deploy/example-app -- /bin/sh -c "ps aux"
# if there is no ps in the docker image, use:
$ kubectl exec -it deploy/example-app -- /bin/sh -c "find /proc -mindepth 2 -maxdepth 2 -name exe -exec ls -lh {} \; 2>/dev/null"
lrwxrwxrwx 1 node node 0 Sep 30 04:11 /proc/1/exe -> /usr/local/bin/node
lrwxrwxrwx 1 node node 0 Sep 30 04:11 /proc/587/exe -> /bin/dash
lrwxrwxrwx 1 node node 0 Sep 30 04:11 /proc/594/exe -> /usr/bin/find
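A small habit that makes this step easier: have the application log its own PID at startup. The log line below is a hypothetical addition, not part of the sample app:

```javascript
// Logging process.pid at startup makes it trivial to pick the right
// target for kill -USR1, even if the container runs several processes.
console.log(`Server starting with PID ${process.pid}`);
```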

Then we enable the debugger; in my case the PID is 1.

# enable debugger
$ kubectl exec -it deploy/example-app -- /bin/sh -c "kill -USR1 1"
# verify that debugger has been enabled
$ kubectl logs deploy/example-app
Server listening on port 3000...
Debugger listening on ws://
For help, see:

Port forwarding

To debug a remote pod from a local machine, we will use the Kubernetes port-forward feature:

# forward debug port from pod to our machine
$ kubectl port-forward deploy/example-app 9229:9229
Forwarding from -> 9229
Forwarding from [::1]:9229 -> 9229

After enabling this feature, we can connect the Node.js debugger to localhost:9229.

Debugging with VSCode

You can now attach the debugger from VSCode. The problem is that you cannot set breakpoints as usual:

The issue is that VSCode does not know how to map files in the container to your local sources. To fix this, we need to update the attach configuration in the .vscode/launch.json file:

{
  "name": "Attach",
  "port": 9229,
  "request": "attach",
  "skipFiles": ["<node_internals>/**"],
  "type": "pwa-node",
  "localRoot": "${workspaceFolder}",
  "remoteRoot": "/app",
  "sourceMaps": true
}

The properties we have added are as follows:

  • localRoot tells VSCode where the root of the project is on your machine

  • remoteRoot tells VSCode which path in the container corresponds to our localRoot

  • sourceMaps tells VSCode to take source map information into account so that we can debug from the original source files.

After adding these properties, we can debug normally:

CPU profiling and memory snapshots

If you need to track down memory leaks or CPU spikes, the best tool for the job is CPU and memory profiling in Chrome DevTools. With the debugger enabled, you can easily connect Chrome to the remote process and run a profile. Open Chrome and go to chrome://inspect:

You can find more information on how to use memory snapshots to find leaks here.


Hopefully this has shown you how powerful remote debugging in Node.js is, and what a wonderful set of tools exists to help you with it. Problems that require debugging a remote process are not very common, but when they do occur, techniques like these can save you a lot of trouble.

Material prepared as part of the course “Infrastructure platform based on Kubernetes.”

We invite everyone to a free demo lesson, "Container orchestration with a smooth transition to k8s." In the lesson, we will talk about how things worked before orchestration, what problems it tried to solve, and where things stand now, with an overview of orchestrators and a smooth transition to k8s. We will also cover the main k8s components and how they relate to each other, options for deploying k8s locally, and options for deploying on virtual machines or bare metal.
