SSH Picker in the daily work of a network engineer

In this article, we will look at a useful tool that simplifies collecting data from network devices. Python scripts that work with the command line over SSH pull in a lot of third-party modules, or rather a lot of dependencies of a single module (hello, paramiko), and if the machine where the script will run has no Internet access or no way to install a recent Python version, running the script becomes almost impossible. SSH Picker was developed to solve this problem, with the ability to attach additional modules via the AMQP protocol.

There are various ways to interact with network nodes, the most popular of which is the command-line interface (CLI) over the SSH protocol. In carrier-grade networks with a huge number of devices, it is often necessary to collect various command outputs from network devices, process them, and present them in a certain form. Despite the existence of various OSS platforms that are supposed to handle such tasks quickly, it is usually much easier for an ordinary network engineer to collect the data on their own, for example, by running a script in SecureCRT or some pre-prepared Python/TCL script.

Such a script needs a foundation: code for connecting to a node, sending it a command, and receiving the result of that command, after which the data must be stored somewhere or processed immediately. There are also situations when data needs to be collected over a certain period of time or on an ongoing basis. Many OSS systems implement such collection by periodically connecting to and disconnecting from the node, which badly clutters the equipment logs, since they record every user login and logout.

SSH Picker is a modular data collector from network nodes that allows you to work in different modes and connect external parsers for further data processing.

Description of the collector

SSH Picker consists of two required parts: a configuration file in TOML format and a list of commands in plain text. It should be noted right away that the collector cannot use the Telnet protocol, since this functionality was never part of it. SSH authentication is performed with a login/password combination or a public key.

The collector can work in the following modes:

One-time execution of commands on the node.

In this mode, the collector connects to the node, executes the commands, receives the results and sends them to the specified destination, then disconnects from the node and closes the SSH session. This is convenient when using the collector together with cron.

Permanent connection

The collector permanently resides on the node and periodically executes the specified commands. In this mode, it operates as a standalone service: it connects to the node once and keeps the SSH session open at all times. If the connection drops, it reconnects on its own, increasing the reconnect interval after each failed attempt. This mode is controlled by the configuration parameter:

# If true, then script will not connect and disconnect every 5-10 minutes. 
# It will stay constantly on the nodes.
stay_on_node = true

The animation shows how the permanent connection mode works

A dedicated Session Controller (SC) thread monitors the status of each SSH connection and restarts any connection that runs into problems.
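The Session Controller idea can be illustrated with a minimal sketch: one pass of a loop that checks every session's health and restarts the dead ones. The `session` type and its methods are illustrative assumptions, not the project's actual API; in the real service such a check would run on a timer in its own goroutine.

```go
package main

import "fmt"

// session is a hypothetical stand-in for one open SSH connection.
type session struct {
	node  string
	alive bool
}

// restart stands in for tearing down and re-establishing the SSH session.
func (s *session) restart() { s.alive = true }

// checkSessions is one pass of the Session Controller loop: it restarts
// every dead session and reports which nodes were affected.
func checkSessions(sessions []*session) (restarted []string) {
	for _, s := range sessions {
		if !s.alive {
			s.restart()
			restarted = append(restarted, s.node)
		}
	}
	return restarted
}

func main() {
	sessions := []*session{
		{node: "node1", alive: true},
		{node: "node2", alive: false},
	}
	fmt.Println(checkSessions(sessions)) // lists the sessions that were restarted
}
```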

Downloading a file from a host using SCP

Useful when you need to download the same files from multiple devices.

Sending results via HTTP, SFTP, a TCP/UDP socket, or an AMQP broker, as well as storing files locally. All output is in JSON format.

The collector lets you define a hardware profile, specifying the default command-line prompts for it. Everyone knows user and privileged mode in Cisco and shell mode in Juniper. Initially there were four profiles, but as it turned out historically, a single one was enough, and only the universal Router profile remains.

I would like to draw attention to how data is obtained from the node. The collector must understand that a command has completed and save the received data for further processing. How does the collector know that the command has been executed? Many have probably seen a node pause after a command is entered while it gathers data, and only then print the information to the screen; the collector therefore waits for the command output without closing the socket until it has received everything. But how does the collector know that it has the entire output? It decides based on the state of the command line.

Collector configuration example:

name = "Router"
unenable_prompt = ">"
enable_prompt = "#" 

Hostname = "labRouter1"
Ip = ""
User = "admin"
Port = 22
Password = "adminpass"
Profile = "Router"

An example of enable prompt for the node itself


The Hostname and enable_prompt parameters are concatenated to produce device_prompt, which is wrapped in the regular expression ^device_prompt $. Each received line is matched against this regular expression; as soon as a line matches, the command has finished executing and the next one can be sent.
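The prompt-matching step can be sketched in a few lines of Go. The function name is illustrative, and the trailing `\s*` is an assumption about how the space before `$` in the pattern above is treated:

```go
package main

import (
	"fmt"
	"regexp"
)

// promptRegexp glues Hostname and enable_prompt into the device_prompt
// pattern described above, anchored to a whole line with optional
// trailing whitespace.
func promptRegexp(hostname, enablePrompt string) *regexp.Regexp {
	return regexp.MustCompile("^" + regexp.QuoteMeta(hostname+enablePrompt) + `\s*$`)
}

func main() {
	re := promptRegexp("labRouter1", "#")

	// Lines as they might arrive from the node: command output first,
	// then the prompt that signals the command has finished.
	lines := []string{
		"Interface   Status",
		"Gi0/0       up",
		"labRouter1#",
	}
	for _, line := range lines {
		if re.MatchString(line) {
			fmt.Println("prompt seen, command finished")
		}
	}
}
```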

The ability of the collector to work with external parsers

Perhaps this is the most useful feature.

Suppose you need to collect data from a node but there is no way to get it via SNMP or NETCONF; it can only be obtained by executing a series of commands on the node, where each subsequent command depends on the data from the previous one, and so on. For example, you need data on fabric drops from certain line cards, and this data cannot be obtained other than through the CLI.

The algorithm of actions will be as follows:

  1. We get a list of line cards from the node (show chassis or show hardware)

  2. We select from them only those slots where the line card we need is located, remember the slots (for example, slots 1,4,6,10)

  3. For each slot, we execute a command to display the fabric drop counters.

Command example:

show card 1 fabric-drops
show card 4 fabric-drops
show card 6 fabric-drops
show card 10 fabric-drops

That is, in this case we need not only to execute a command on the node and get its output, but also to parse the received output, compose a new command based on it, send that to the collector, and get fresh data for the new commands.
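Steps 2 and 3 of the algorithm above can be sketched as follows. The printout layout and the function name are assumptions for illustration; real vendors format `show chassis` output differently:

```go
package main

import (
	"bufio"
	"fmt"
	"strings"
)

// cardCommands scans a "show chassis"-style printout, keeps only the
// slots that carry the wanted card model, and builds the follow-up
// fabric-drop commands for each of them.
func cardCommands(printout, wantedModel string) []string {
	var cmds []string
	sc := bufio.NewScanner(strings.NewReader(printout))
	for sc.Scan() {
		fields := strings.Fields(sc.Text()) // e.g. "1  36x10GE  up"
		if len(fields) >= 2 && fields[1] == wantedModel {
			cmds = append(cmds, fmt.Sprintf("show card %s fabric-drops", fields[0]))
		}
	}
	return cmds
}

func main() {
	printout := `Slot  Card      State
1     36x10GE   up
2     4x100GE   up
4     36x10GE   up
6     36x10GE   up
10    36x10GE   up`
	for _, cmd := range cardCommands(printout, "36x10GE") {
		fmt.Println(cmd) // prints the four commands from the example above
	}
}
```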

To implement such a work scenario, the AMQP broker helps a lot, in my case it is RabbitMQ. It is not always possible to install RabbitMQ on servers, but it can be used in a container or even on another machine.

In the collector configuration, enable the functionality that processes incoming commands and specify its parameters:

enable = true
socket = "amqp://guest:guest@localhost:5672/"
exchange_name = "inbound"
routing_key = "commands"
timer = 10000

Next, make sure that sending command outputs to RabbitMQ is enabled:

enable = true
amqp_exchange_name = "printouts"

enable = true
key = "show\schassis"
uniq = "show_chassis"
socket = "amqp://guest:guest@localhost:5672/"

Below is a diagram of how the collector works when the ability to process incoming commands is enabled.

Each block has its own number: M.1, M.2, and so on. I will describe each of them in more detail.

M.1 is a mandatory list of commands that is written in the file and passed to the collector when it is launched.

M.2 is the collector itself; when processing incoming commands, it must run only in the permanently open SSH session mode. The collector receives the list of commands from M.1 and executes them on the node at a given interval. The collector also connects to the AMQP broker and creates the queues and exchanges specified in M.3 and M.4, which are needed to send and receive data from the nodes.

M.3 is the incoming queues and exchange created by the collector in RabbitMQ, with a separate queue for each node.

M.4 is the outgoing queues and exchange created by the collector in RabbitMQ, with a separate queue for each command.

M.5 is the external parsers, the scripts that receive command outputs from M.4, parse them, and form a new command that needs to be additionally executed on the node. Parsers send commands to the queues in M.3. Parsers can be written in your favorite language with AMQP support; the repository has an example folder with a sample parser in Python.

M.6 is the network nodes the collector works with.

I will describe in more detail how the collector will work if you need to process a series of commands.

We have M.1 with a list of initial commands that is run on the node at a specified interval, set by the interval configuration parameter. keepalive is the interval after which an empty command is sent so that the session stays alive indefinitely.

# Send dumb_command every keepalive seconds
interval = 300
keepalive = 100
dumb_command = "!dump command"

The collector then connects to the AMQP broker, creates the exchanges and queues, and binds them according to the keys specified in the configuration; these are blocks M.3 and M.4. Next, the collector executes the commands on the nodes, receives their output, and sends it in JSON format to AMQP. The outgoing message format is as follows:

	"command": "выполненная команда",
	"ip": "IP адрес узла",
	"nodename": "Hostname узла",
	"output":"Результат выполнения команды",

In block M.5, each parser receives its own command outputs, parses them, and if additional output needs to be collected, forms the command and sends it back to AMQP through the queues in M.3. Message format:

  "node": "node1",
  "command": "show card 1 fabric-drops"

The collector receives data from the queues in block M.3, executes the commands on the node, and sends the results to AMQP with the routing key “commands” (specified in the configuration) and its corresponding queue. Parsers receive data from the “commands” queue, and if a message belongs to the parser that sent the command, that parser acknowledges it, removing it from the queue. This cycle can repeat as many times as needed.

Using this mechanism, you can build scenarios for troubleshooting or automatically checking any values on the nodes.

Compiling and running the collector

The collector is written in Go, which makes it possible to build a single executable containing all the required modules, so the compiled file can be used on systems where additional modules and their dependencies cannot be installed. The collector can also be run on a Windows machine, but keep in mind that there may be problems with TCP/UDP sockets, since they behave differently on Windows and Linux. With AMQP, no problems were observed.

Setting the environment variables when cross-compiling for Linux from another OS, for example Windows:

set GOOS=linux
set GOARCH=amd64

Setting the environment variables when cross-compiling for Windows from another OS, for example Linux:

export GOOS=windows 
export GOARCH=amd64 

Clone the repository to your machine, then give the build script execution rights and run it; it will assemble everything into a single binary. Make the necessary changes to config.tml and run the collector:

chmod +x 
./collector_2.3 ./config.tml


I am not a professional programmer. I started the project when I did not yet understand Go, and created it to get away from Python scripts. Four years have passed since then, during which I completed programming courses, read smart books on project architecture, and wrote more than one project in Go. Everything is driven by personal enthusiasm for automating routine tasks. I understand that the code is terrible, but if the project is of interest to anyone, I will gladly refactor it for the benefit of society.

Thank you all for your time!

Useful links

Project repository

Learn more about cross-compiling with Go
