ClearML | Tutorial

Sources
Documentation
License: Apache License 2.0

ClearML is, first and foremost, a framework for tracking ML experiments. But by now its functionality is much broader and allows you to:

  • Track metrics, hyperparameters, and machine learning artifacts.

  • Store models and serve them on request.

  • Store datasets.

  • Visually compare experiments.

  • Reproduce (replay) experiments.

  • Automatically log all actions.

  • Set up data processing pipelines.

  • Visualize results.

  • Etc.

ClearML’s main competitor is Weights & Biases. But ClearML has two major advantages:

  • Even small teams can use the cloud version for free.

  • There is a full locally deployed version.

Installation and setup

First, we need the Python package. Run this command in the console:

pip install clearml

Next, you need to link the installed clearml package to a ClearML server that will store all your artifacts. There are two options here: the free cloud service at app.clear.ml or a self-hosted server (see the Local Deployment section below).

For this tutorial, I’ll use the cloud service:

clearml-init

The utility will ask you to enter configuration parameters. Copy the whole credentials block (it starts with api…) from the CREATE CREDENTIALS window in the web UI and paste it into the console.

The entered ClearML configuration will be saved to C:\Users\<username>\clearml.conf (on Linux: ~/clearml.conf), where you can later edit or copy it.

After that, all attempts to use ClearML from a .py file will automatically use this configuration.

But if you use ClearML from a notebook, you will additionally need to declare service variables in the notebook itself. You can find them on the JUPYTER NOTEBOOK tab of the same CREATE CREDENTIALS window. Copy them and execute them in the first cell of the notebook.

Terminology

Technically, ClearML is organized into Projects, each of which contains Tasks.

A Task is the minimal self-contained experiment. In business terms, a Task can be, for example:

  • R&D experiment: searching for hyperparameters, researching a new neural network architecture, using another framework, etc.

  • Periodic retraining of the model.

P.S. In its own terminology, ClearML offers the following types of Tasks:

  • training (default) – model training.

  • testing – testing (for example, model performance).

  • inference – running model predictions.

  • data_processing – data processing.

  • application – any applications.

  • monitor – process monitoring.

  • controller – a task that defines the logic of interaction between other tasks.

  • optimizer – a special type for hyperparameter optimization tasks.

  • service – service tasks.

  • qc – quality control (for example, A/B testing).

  • custom – anything else.

The selected type will be displayed in the list of performed experiments.
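
For example, a minimal sketch (the project and task names here are made up) of setting a task type explicitly via the task_type argument of Task.init:

from clearml import Task

# Create a task of type data_processing instead of the default training
task = Task.init(
    project_name="ClearML_Test",
    task_name="TypeDemo",
    task_type=Task.TaskTypes.data_processing,
)
task.close()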

Now let’s go through all the steps of training a model on the Titanic dataset, which is definitely not overused yet 🙂

Simple process

I will perform the further steps in a notebook (each piece of code in a separate cell), so first we set the service variables (your values will be different):

%env CLEARML_WEB_HOST=https://app.clear.ml
%env CLEARML_API_HOST=https://api.clear.ml
%env CLEARML_FILES_HOST=https://files.clear.ml
%env CLEARML_API_ACCESS_KEY=QZY14BD1QAL151CWFZ5L
%env CLEARML_API_SECRET_KEY=x8Lhn5RdQpwk21oKoldbg5H0EuMfn50Soxw1uOVsy5VLEtBfuR
import pandas as pd
import numpy as np
from clearml import Task, Logger
import seaborn as sns
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split, ParameterSampler
from catboost import CatBoostClassifier, Pool
from sklearn.metrics import roc_auc_score 
task = Task.init(
    project_name="ClearML_Test", 
    task_name="Cat1", 
    tags=['CatBoost','RandomSearch'])

Two important things happened here:

  1. ClearML has created a new Task in the specified Project.
    A link to the experiment’s web page will appear in the output – open it (or manually go to the Task through the interface).

Each time this piece of code is executed, a new Task will be created on the server.

P.S. You can programmatically access a previously created Task (and all of its artifacts) by simply connecting to it by ID or by the name of the Task and Project:

prev_task = Task.get_task(task_id='123456deadbeef')
# or
prev_task = Task.get_task(project_name="proj1", task_name="my_task")
  2. ClearML will automatically log all standard inputs and outputs, as well as the output of many popular libraries.

    For example, on the EXECUTION tab you can see the executed code and all the imported libraries with their versions. And since we run the code in a notebook, on the ARTIFACTS tab you can also find a link to the fully rendered notebook, saved on the server side as an HTML page.

If the Project specified in the init method does not exist, ClearML will create it automatically – but that is not exactly best practice 🙂

fpath="titanic.csv"
df_raw = pd.read_csv(fpath)
task.upload_artifact(name="data.raw", artifact_object=fpath)

Here, in addition to loading the data into pandas, we simultaneously send it to ClearML as an Artifact, simply by specifying the path to the CSV file and giving the artifact a name. All uploaded artifacts can be found on the ARTIFACTS tab of the corresponding experiment, and from there you can download them.
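
For illustration, a small sketch (reusing the project, task, and artifact names from above) of fetching an uploaded artifact programmatically later on:

# Connect to the previously created task
prev_task = Task.get_task(project_name="ClearML_Test", task_name="Cat1")

# get_local_copy() downloads the stored file and returns the local path
csv_path = prev_task.artifacts["data.raw"].get_local_copy()

# For object artifacts (such as the dataframes uploaded below), .get()
# returns the deserialized Python object instead of a file path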

When creating an artifact, you can also specify the path to an entire folder; ClearML will then pack its contents into a zip archive and save it as a single artifact.
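
For example, a one-line sketch (the folder name here is hypothetical):

# ClearML zips the folder contents and stores the archive as a single artifact
task.upload_artifact(name="data.folder", artifact_object="data/")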

task.upload_artifact(
    name="eda.describe.object", 
    artifact_object=df_raw.describe(include=object))
task.upload_artifact(
    name="eda.describe.number", 
    artifact_object=df_raw.describe(include=np.number))

ClearML can store almost any Python object as an Artifact. Here we store the output of the pandas describe method as artifacts, and we can inspect that output in the ClearML interface.

sns.pairplot(df_raw, hue="Survived")
plt.title('Pairplot')
plt.show()

ClearML automatically logs the output of any use of Matplotlib (that is what the show call is for here, although Seaborn would have displayed the plot even without it) – you can find all the displayed plots on the PLOTS tab.
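
Plots can also be reported explicitly through the Logger; here is a minimal sketch (the title and series names are arbitrary):

# Report a Matplotlib figure under a chosen title/series on the PLOTS tab
fig, ax = plt.subplots()
df_raw["Age"].hist(ax=ax)
Logger.current_logger().report_matplotlib_figure(
    title="Age distribution",
    series="raw data",
    iteration=0,
    figure=fig,
)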

df_preproc = df_raw.drop(columns=['PassengerId','Name','Ticket'])
for col in ['Sex','Cabin','Embarked']:
    df_preproc[col] = df_preproc[col].astype(str)
task.upload_artifact(name="data.preproc", artifact_object=df_preproc)

train, test = train_test_split(df_preproc, test_size=0.33, random_state=42)
task.upload_artifact(name="data.train", artifact_object=train)
task.upload_artifact(name="data.test", artifact_object=test)

We created three more datasets and sent all three to ClearML as Artifacts. But unlike the first time, we now passed the pandas objects directly. ClearML understands this format, and we can (partially) inspect the datasets right in the ClearML interface.

A pandas DataFrame can also be registered as an Artifact using register_artifact. Unlike upload_artifact, it automatically detects changes to the dataframe and synchronizes them with the server.
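
A quick sketch of that (the artifact name here is made up):

# register_artifact keeps the dataframe in sync with the server:
# if `train` changes later in the run, the artifact is updated automatically
task.register_artifact(name="data.train.live", artifact=train)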

X_train = train.drop(columns=['Survived'])
y_train = train['Survived']

model = CatBoostClassifier(silent=True)
model.fit(X_train, y_train, cat_features=['Sex','Cabin','Embarked']);

ClearML automatically logs the output of several popular libraries, and CatBoost is one of them. After executing this code, you will be able to see the CatBoost learning curve on the SCALARS tab.

Examples for the automatically logged frameworks can be found here:
https://github.com/allegroai/clearml/tree/master/examples/frameworks

# Grid for the hyperparameter search
param_grid = {
    'depth': [4,5,6,7,8],
    'learning_rate': [0.1,0.05,0.01,0.005,0.001],
    'iterations': [30,50,100,150]
}

# Build the test dataset
X_test = test.drop(columns=['Survived'])
y_test = test['Survived']

# Initialize the logging object
log = Logger.current_logger()

# Variables to store the results
best_score = 0
best_model = None
i = 0

# Try 50 random hyperparameter combinations
for param in ParameterSampler(param_grid, n_iter=50, random_state=42):
    # Train the model
    model = CatBoostClassifier(**param, silent=True)
    model.fit(X_train, y_train, cat_features=['Sex','Cabin','Embarked'])

    # Evaluate the model
    test_scores = model.eval_metrics(
        data=Pool(X_test, y_test, cat_features=['Sex','Cabin','Embarked']),
        metrics=['Logloss','AUC'])
    test_logloss  = round(test_scores['Logloss'][-1], 4)
    test_roc_auc = round(test_scores['AUC'][-1]*100, 1)
    
    train_scores = model.eval_metrics(
        data=Pool(X_train, y_train, cat_features=['Sex','Cabin','Embarked']),
        metrics=['Logloss','AUC'])
    train_logloss  = round(train_scores['Logloss'][-1], 4)
    train_roc_auc = round(train_scores['AUC'][-1]*100, 1)

    # Compare the current score with the best one
    if test_roc_auc > best_score:
        # Save the model
        best_score = test_roc_auc
        best_model = model

        # Log the metrics to ClearML
        log.report_scalar("Logloss", "Test", iteration=i, value=test_logloss)
        log.report_scalar("Logloss", "Train", iteration=i, value=train_logloss)
        
        log.report_scalar("ROC AUC", "Test", iteration=i, value=test_roc_auc)
        log.report_scalar("ROC AUC", "Train", iteration=i, value=train_roc_auc)
        
        i+=1

Here we do the following:

  • We define a grid for enumeration of hyperparameters.

  • We randomly select 50 possible hyperparameter values.

  • We train the model.

  • We compute metrics on both train and test.

  • If the metric beats the best score so far, we remember the model and log the metrics to ClearML.

For good measure, let’s also save a few more values as single-value metrics:

log.report_single_value(name="Best ROC AUC", value=test_roc_auc)
log.report_single_value(name="Best Logloss", value=test_logloss)
log.report_single_value(name="Train Rows", value=X_train.shape[0])
log.report_single_value(name="Test Rows", value=X_test.shape[0])
log.report_single_value(name="Columns", value=X_train.shape[1])
log.report_single_value(name="Train Ratio", value=round(y_train.mean(),3))
log.report_single_value(name="Test Ratio", value=round(y_test.mean(),3))

ClearML has many different kinds of reports:
https://clear.ml/docs/latest/docs/references/sdk/logger/
https://github.com/allegroai/clearml/tree/master/examples/reporting
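
As one example among these, a sketch that reports the first rows of the test set as an interactive table (the title and series names are arbitrary):

# Show a dataframe sample as a table in the web UI
log.report_table(
    title="Test sample",
    series="head",
    iteration=0,
    table_plot=test.head(10),
)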

best_model.save_model('my_model.cbm')

Technically, we just saved the model locally, but again ClearML automatically tracked this and saved it on the server. You can see it on the separate MODELS tab of your Project. Within one Project you run a bunch of experiments, each of them (in theory) produces a model as its output, and all these models are collected in one place within the Project.

You can also save models manually:
https://clear.ml/docs/latest/docs/clearml_sdk/task_sdk/#logging-models-manually
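
A hedged sketch of the manual route using OutputModel (the model name and framework string are my assumptions):

from clearml import OutputModel

# Register the saved CatBoost file as a named output model of the current task
output_model = OutputModel(task=task, name="catboost_titanic", framework="CatBoost")
output_model.update_weights(weights_filename="my_model.cbm")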

task.close()

You can find the list of all experiments inside the Project. The displayed columns are customizable; for example, you can show your own metric there.

Local Deployment

To deploy the ClearML server locally, you need to download the ready-made Docker image and run it yourself. Instructions for your system:
https://clear.ml/docs/latest/docs/deploying_clearml/clearml_server

Now let’s take a quick look at some other interesting ClearML features that are beyond the scope of this tutorial…

REST API

Let’s assume that you have a cluster where it is difficult to install additional Python packages (like clearml). Then you will be able to communicate with the ClearML server via the REST API. Detailed documentation: https://clear.ml/docs/latest/docs/references/api/index/
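
As a rough, hedged sketch (assuming the auth.login and tasks.get_all endpoints from that reference, and reusing the credentials from above), it could look like this:

import requests

API_HOST = "https://api.clear.ml"
ACCESS_KEY = "QZY14BD1QAL151CWFZ5L"
SECRET_KEY = "x8Lhn5RdQpwk21oKoldbg5H0EuMfn50Soxw1uOVsy5VLEtBfuR"

# Exchange the key pair for a session token
token = requests.post(f"{API_HOST}/auth.login",
                      auth=(ACCESS_KEY, SECRET_KEY)).json()["data"]["token"]

# List tasks whose name matches "Cat1"
resp = requests.post(f"{API_HOST}/tasks.get_all",
                     headers={"Authorization": f"Bearer {token}"},
                     json={"name": "Cat1"})
print(resp.json()["data"]["tasks"])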

Comparison of experiments

You can visually compare two (or more) experiments. To do this, select them in the list of experiments and click Compare at the bottom of the screen. Both textual information and plots are available for comparison.

More details are available in the documentation.

Reproduction of the experiment

If for some reason you need to rerun an earlier experiment, you can do it in a couple of clicks directly from the ClearML interface.
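
The same can also be done from code; a sketch (the queue name is an assumption, and a clearml-agent must be listening to that queue):

# Clone a finished task and put the copy into an execution queue
prev_task = Task.get_task(project_name="ClearML_Test", task_name="Cat1")
cloned = Task.clone(source_task=prev_task, name="Cat1 (rerun)")
Task.enqueue(cloned, queue_name="default")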

Orchestration

In ClearML you can build data processing pipelines (like in Airflow, Dagster or Prefect).

More: https://clear.ml/docs/latest/docs/pipelines/pipelines
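
A minimal sketch using PipelineController (the step names are hypothetical and the referenced base tasks must already exist in the project):

from clearml import PipelineController

# A two-step pipeline assembled from existing tasks
pipe = PipelineController(
    name="titanic_pipeline",
    project="ClearML_Test",
    version="1.0.0",
)
pipe.add_step(
    name="preprocess",
    base_task_project="ClearML_Test",
    base_task_name="Prepare Data",   # hypothetical existing task
)
pipe.add_step(
    name="train",
    parents=["preprocess"],
    base_task_project="ClearML_Test",
    base_task_name="Cat1",
)
pipe.start_locally(run_pipeline_steps_locally=True)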

Datasets

ClearML has separate storage for datasets. Suppose, for example, you have some common benchmark dataset: you need not just to pull it into a single project, but to make it available to the whole team.

Storage features:

  • Version tracking.

  • You can store files in cloud storage or on a local network.

  • Datasets can inherit from other datasets and be chained into pipelines.

  • You can always get a local copy of the dataset.

A small example of creating a dataset:

from clearml import Dataset

dataset = Dataset.create(
    dataset_name="cifar_dataset", 
    dataset_project="dataset examples" 
)

dataset.add_files(path="cifar-10-python.tar.gz")
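
To actually push the files to the server and later consume the dataset elsewhere, the sketch would continue roughly like this:

# Upload the added files and close this dataset version
dataset.upload()
dataset.finalize()

# Anyone on the team can then get a local copy of the dataset
local_path = Dataset.get(
    dataset_name="cifar_dataset",
    dataset_project="dataset examples",
).get_local_copy()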

In details:
https://clear.ml/docs/latest/docs/clearml_data/clearml_data
https://clear.ml/docs/latest/docs/guides/datasets/data_man_python

What’s next?

These are far from all of ClearML’s capabilities; the rest can be found in the documentation.

There are also many examples of using ClearML in the standard installation. You can study them at your leisure. You can find them in Projects/ClearML examples.

More from me: my Telegram channel.
