How to run ML on a Raspberry Pi and save space on a single board

Imagine the situation: the weekend is ahead, you have a Raspberry Pi lying around, and, for the sake of experiment, you want to see what ML can do on RPi hardware, but you don't want to overload the machine, not even with the full lightweight version of TF. What can be done? We have already written about classifying garbage with an RPi, and today, to mark the start of our deep and machine learning course, we are sharing a translation of a tutorial whose author gives the simplest possible example of working with the bare minimum of TFLite. The model makes an inference in under a second, without installing the entire TensorFlow package: only tflite_runtime is used, which provides the Interpreter class.


Accessing the Raspberry Pi from a computer

There are several ways to access the Raspberry Pi. Regardless of the method, the goal is the same: to reach the RPi terminal and enter the commands that prepare the board for TFLite.

You can operate the RPi just like a regular PC by connecting it to a monitor over HDMI and attaching a mouse and keyboard. Once the RPi has booted, you can use its GUI and open a terminal. Unfortunately, this may not work out of the box with a brand-new RPi: some settings have to be changed before the HDMI port can be used.

Accessing the RPi through a dedicated monitor and peripherals is expensive if they are bought just for the board; it is cheaper to control the RPi from a PC or laptop you already own. To control the RPi from your PC, first connect the RPi's Ethernet port to a switch port. The switch must support DHCP so that the RPi is assigned an IP address automatically.

Once the IP is assigned, run an IP scanner on a PC connected to the same switch to find the IP address of the RPi's Ethernet interface. With the RPi's IP address, you can open an SSH session from your PC and reach the RPi terminal. Read more in this tutorial.

With this kind of access, the RPi has to be connected to the switch every time. To simplify things, you can use the wireless interface: connect the RPi to a switch port once and configure its wireless interface for a wireless network, and from then on the board is easier to reach. The network can even be created from a smartphone acting as an access point.

Once the network is configured, you may not need the switch at all. All you have to do is connect your PC to the same wireless network. An IP scanner will report the IP of the RPi's wireless interface; after that you can open an SSH session and reach the single-board computer's terminal.
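
For example, assuming the default Raspberry Pi OS credentials (user pi, hostname raspberrypi), that SSH is enabled on the board, and that your network is 192.168.1.0/24, the session could be opened like this (the subnet and the final IP are placeholders; substitute your own):

nmap -sn 192.168.1.0/24    # scan the subnet for live hosts (requires nmap on the PC)
ping raspberrypi.local     # or, if mDNS works on your network, skip the scan
ssh pi@192.168.1.23        # open an SSH session with the IP the scanner reported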

Regardless of the access method, you should end up at the RPi terminal. At this stage you can prepare TFLite with terminal commands; we will discuss them below.

Preparing TFLite in RPi

This assumes that you already have a TensorFlow model converted to TensorFlow Lite. If you don't, there are many TensorFlow Lite models available for download; we will use the Lite version of MobileNet.

TensorFlow Lite is part of TensorFlow, so installing the TensorFlow library installs the Lite version as well. But before installing TensorFlow, think about which modules your project actually needs. Here we only run a TFLite model to classify an image and nothing else, so there is no need to install all the TensorFlow modules.

The only TensorFlow class required for TFLite inference is Interpreter, and it can be imported like this:

from tensorflow.lite.python.interpreter import Interpreter

In other words, instead of installing all of TensorFlow, you can install just this class and save space on the RPi. But how exactly do you install only this class?

The tflite_runtime package contains only the Interpreter class, available as tflite_runtime.interpreter.Interpreter. To install tflite_runtime, just download the appropriate Python wheel, for example for Python 3.5 or Python 3.7.
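
The wheel has to match the Python version on the board: cp35 in the file name means CPython 3.5, cp37 means CPython 3.7. Check which version you have before downloading:

python3 --version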

On my RPi, the .whl file is located here:

/home/pi/Downloads/tflite_runtime-1.14.0-cp35-cp35m-linux_armv7l.whl

To install the downloaded package, I ran pip3 install. Please note that you need pip3: plain pip will install the package for Python 2.

pip3 install /home/pi/Downloads/tflite_runtime-1.14.0-cp35-cp35m-linux_armv7l.whl
Package installed successfully

With tflite_runtime installed, you can verify that everything works by importing the Interpreter class:

from tflite_runtime.interpreter import Interpreter

If the import completes without errors, everything is in place.

Installing tflite_runtime does not mean that all of TFLite is installed; only the Interpreter class is available, which runs inference on TFLite models. If you need other TFLite features, install TensorFlow.
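
If the same script has to run both on the RPi (where only tflite_runtime is installed) and on a desktop with the full TensorFlow package, a common pattern is to try one import and fall back to the other. A minimal sketch:

try:
    # Lightweight runtime: ships only the Interpreter class.
    from tflite_runtime.interpreter import Interpreter
except ImportError:
    # Fall back to the full TensorFlow package if it is installed.
    from tensorflow.lite.python.interpreter import Interpreter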

With tflite_runtime installed and the RPi ready for inference, proceed to the next step.

Download MobileNet

MobileNet for TFLite can be downloaded here. It is a compressed file that contains not only the TFLite model but also the class labels the model predicts. The unpacked archive looks like this:

1) mobilenet_v1_1.0_224_quant.tflite;

2) labels_mobilenet_quant_v1_224.txt.

MobileNet comes in two versions, and the input image resolution depends on the version. Here we use the first version, which accepts 224×224 images. The model is quantized, which shrinks its size and reduces inference latency. For these two files, plus a test image, I created a new folder on the RPi called TFLite_MobileNet:

/home/pi/TFLite_MobileNet/
    mobilenet_v1_1.0_224_quant.tflite
    labels_mobilenet_quant_v1_224.txt
    test.jpg
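
As a sketch of how the folder can be populated from the terminal, assuming the archive is still hosted at the standard TensorFlow download location below (verify the URL before relying on it):

mkdir -p /home/pi/TFLite_MobileNet && cd /home/pi/TFLite_MobileNet
wget https://storage.googleapis.com/download.tensorflow.org/models/tflite/mobilenet_v1_1.0_224_quant_and_labels.zip
unzip mobilenet_v1_1.0_224_quant_and_labels.zip
# test.jpg is any image you want to classify; copy one into the folder yourself.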

In the next section, I’ll show you how to feed an image to the model and predict the class label.

Single image classification

The code that loads the TFLite model and classifies an image is shown below. The paths to the model and to the class labels file are stored in model_path and label_path. The model path is passed to the Interpreter class constructor, and the loaded model is returned in the interpreter variable.

from tflite_runtime.interpreter import Interpreter 
from PIL import Image
import numpy as np
import time

data_folder = "/home/pi/TFLite_MobileNet/"

model_path = data_folder + "mobilenet_v1_1.0_224_quant.tflite"
label_path = data_folder + "labels_mobilenet_quant_v1_224.txt"

interpreter = Interpreter(model_path)
print("Model Loaded Successfully.")

interpreter.allocate_tensors()
_, height, width, _ = interpreter.get_input_details()[0]['shape']
print("Image Shape (", width, ",", height, ")")

# Load an image to be classified.
image = Image.open(data_folder + "test.jpg").convert('RGB').resize((width, height))

# Classify the image.
time1 = time.time()
label_id, prob = classify_image(interpreter, image)
time2 = time.time()
classification_time = np.round(time2-time1, 3)
print("Classificaiton Time =", classification_time, "seconds.")

# Read class labels.
labels = load_labels(label_path)

# Return the classification label of the image.
classification_label = labels[label_id]
print("Image Label is :", classification_label, ", with Accuracy :", np.round(prob*100, 2), "%.")

After the model is loaded, the allocate_tensors() method is called to allocate memory for the input and output tensors, and then get_input_details() is called to return information about the input tensor. The returned information includes the width and height of the input image. What are they for?

Recall that the loaded model accepts 224×224 images; feed it an image of any other size and you will get an error. Knowing the width and height the model expects, you can resize the input so the model can work with it. The test image is read with PIL and returned at the resolution the model requires.
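
To see where the width and height come from, you can inspect the dictionary that get_input_details() returns; the commented values are what this quantized MobileNet is expected to report (a quick sketch, not part of the original program):

details = interpreter.get_input_details()[0]
print(details['shape'])         # [  1 224 224   3]: batch, height, width, channels
print(details['dtype'])         # numpy.uint8 for the quantized model
print(details['quantization'])  # (scale, zero_point) pair used for quantization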

Now the classification is performed with the classify_image() function, whose implementation is shown below. Inside it, the set_input_tensor() function sets the model's input tensor to the tensor of the test image.

(Translator's note: in the original article, the snippet below lacks the call to set_input_tensor(), although the call does appear further down in the full program code; the correct implementation was taken from there.)

Then invoke() is called to run the model on that input. The output is the index of the predicted class and its probability.

def classify_image(interpreter, image, top_k=1):
  set_input_tensor(interpreter, image)

  # Run inference and fetch the raw (quantized) output tensor.
  interpreter.invoke()
  output_details = interpreter.get_output_details()[0]
  output = np.squeeze(interpreter.get_tensor(output_details['index']))

  # Dequantize the scores back to real values.
  scale, zero_point = output_details['quantization']
  output = scale * (output - zero_point)

  # Put the index of the highest score first.
  ordered = np.argpartition(-output, 1)
  return [(i, output[i]) for i in ordered[:top_k]][0]
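
One subtlety: np.argpartition(-output, 1) only guarantees that the single largest score ends up first, so the function is correct for the default top_k=1. If you want a sorted top-k list for larger k, a full sort is a simple alternative; a hypothetical variant, not from the original article:

def classify_image_topk(interpreter, image, top_k=3):
  # Same steps as classify_image(), but returns top_k (index, score)
  # pairs sorted by score; np.argsort is fine here since the model
  # only has about a thousand output classes.
  set_input_tensor(interpreter, image)
  interpreter.invoke()
  output_details = interpreter.get_output_details()[0]
  output = np.squeeze(interpreter.get_tensor(output_details['index']))
  scale, zero_point = output_details['quantization']
  output = scale * (output - zero_point)
  ordered = np.argsort(-output)[:top_k]
  return [(i, output[i]) for i in ordered]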

Next, the class labels are loaded from a text file with the load_labels() function, whose implementation is shown below. It takes the path to a text file and returns a list of class labels. The index of the class the image was assigned to is used to look up the corresponding label, and finally the label is printed to the console.

def load_labels(path):
  # Read the labels from the text file as a Python list.
  with open(path, 'r') as f:
    return [line.strip() for line in f.readlines()]

All the code is shown below.

from tflite_runtime.interpreter import Interpreter 
from PIL import Image
import numpy as np
import time

def load_labels(path):
  # Read the labels from the text file as a Python list.
  with open(path, 'r') as f:
    return [line.strip() for line in f.readlines()]

def set_input_tensor(interpreter, image):
  tensor_index = interpreter.get_input_details()[0]['index']
  input_tensor = interpreter.tensor(tensor_index)()[0]
  input_tensor[:, :] = image

def classify_image(interpreter, image, top_k=1):
  set_input_tensor(interpreter, image)

  # Run inference and fetch the raw (quantized) output tensor.
  interpreter.invoke()
  output_details = interpreter.get_output_details()[0]
  output = np.squeeze(interpreter.get_tensor(output_details['index']))

  # Dequantize the scores back to real values.
  scale, zero_point = output_details['quantization']
  output = scale * (output - zero_point)

  # Put the index of the highest score first.
  ordered = np.argpartition(-output, 1)
  return [(i, output[i]) for i in ordered[:top_k]][0]

data_folder = "/home/pi/TFLite_MobileNet/"

model_path = data_folder + "mobilenet_v1_1.0_224_quant.tflite"
label_path = data_folder + "labels_mobilenet_quant_v1_224.txt"

interpreter = Interpreter(model_path)
print("Model Loaded Successfully.")

interpreter.allocate_tensors()
_, height, width, _ = interpreter.get_input_details()[0]['shape']
print("Image Shape (", width, ",", height, ")")

# Load an image to be classified.
image = Image.open(data_folder + "test.jpg").convert('RGB').resize((width, height))

# Classify the image.
time1 = time.time()
label_id, prob = classify_image(interpreter, image)
time2 = time.time()
classification_time = np.round(time2-time1, 3)
print("Classificaiton Time =", classification_time, "seconds.")

# Read class labels.
labels = load_labels(label_path)

# Return the classification label of the image.
classification_label = labels[label_id]
print("Image Label is :", classification_label, ", with Accuracy :", np.round(prob*100, 2), "%.")

Output of the program:

Model Loaded Successfully.
Image Shape ( 224 , 224 )
Classification Time = 0.345 seconds.
Image Label is : Egyptian cat , with Accuracy : 53.12 %.

The Raspberry Pi is a relatively inexpensive computer that can already run simple models, and ML is on its way to becoming as natural as running water in your home. If you don't want to stay out of the AI sphere, take a look at our course "Machine Learning and Deep Learning"; and if you prefer to rely not on machines but on yourself, take a closer look at our flagship Data Science course.

