DeepPavlov for developers: #2. Setup and deployment

Hello! In the first article of our series, we learned what DeepPavlov is, which library models are ready to use without prior training, and how to run REST servers with them. Before moving on to model training, we will cover the various ways of deploying DeepPavlov models and some of the library's configuration features.

We assume that all library commands are executed in a Python environment with the DeepPavlov library installed (see the first article for installation instructions; an introduction to virtualenv can be found here). The examples in this article do not require knowledge of Python syntax.

Modes of interaction with DeepPavlov NLP models

DeepPavlov currently supports four ways of interacting with NLP models (both pre-trained and user-trained):

  • REST server (riseapi mode) – the main tool for integrating models, described in detail in the previous article. Documentation.
  • TCP or UNIX socket server (risesocket mode) – for low-level integration. Documentation.
  • Telegram bot (telegram mode) – a demo mode that lets you interact with a model via Telegram. Documentation.
  • Command line (interact mode) – a demo and debugging mode that lets you interact with a model through the command line. A model is launched in command-line interaction mode with the following command:

python -m deeppavlov interact <config_path>

The <config_path> parameter (mandatory in all four modes) accepts either the full path to a model config file or the name of a config file without the extension. In the latter case, the model config must be registered in the library.

The configs of all models shipped with DeepPavlov are registered in the library. The list of shipped models can be found in the MODELS section of the DeepPavlov documentation; their configs can be found here.
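The two ways of referring to a config can be sketched roughly as follows. This is a minimal illustration, not the library's actual code: the registry dictionary and the insults_kaggle entry stand in for the real registry built inside the deeppavlov package.

```python
from pathlib import Path

# Illustrative stand-in for the library's registry of config names
# (the real mapping is built from the config files shipped with deeppavlov).
REGISTERED_CONFIGS = {
    "insults_kaggle": Path("deeppavlov/configs/classifiers/insults_kaggle.json"),
}

def resolve_config(ref: str) -> Path:
    """Accept either a full path to a config file or a registered config name."""
    path = Path(ref)
    if path.suffix == ".json":
        return path                  # full path to a config file
    return REGISTERED_CONFIGS[ref]   # registered name without the extension
```

With this sketch, both resolve_config("insults_kaggle") and resolve_config("/tmp/my_model.json") yield a usable config path, mirroring the two accepted forms of the parameter.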

GPU usage

In any of the above modes, the NLP models being initialized are based on neural networks, which makes them quite demanding on computing resources. You can improve model performance by using a GPU. To do this, you will need an NVIDIA graphics card with enough video memory (the amount depends on the model you run) and a supported version of the CUDA framework. All the information you need to launch DeepPavlov models on a GPU can be found here.
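Before launching a model on a GPU, it can be useful to sanity-check the environment. The snippet below is a rough heuristic of my own, not part of DeepPavlov: the presence of the nvidia-smi utility suggests an NVIDIA driver is installed, but it does not verify the CUDA version or the amount of video memory.

```python
import shutil

# Heuristic check: nvidia-smi on the PATH suggests an NVIDIA driver
# is installed; it does not confirm CUDA compatibility or free memory.
def gpu_driver_present() -> bool:
    return shutil.which("nvidia-smi") is not None

print("NVIDIA driver found:", gpu_driver_present())
```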

Library settings files

All library settings are contained in three files:

  • server_config.json – settings of the REST and socket servers, as well as the Telegram connector
  • dialog_logger_config.json – settings for logging queries to models
  • log_config.json – library logging settings

By default, the settings files are located in <deeppavlov_root>/utils/settings, where <deeppavlov_root> is the DeepPavlov installation directory (usually lib/python<version>/site-packages/deeppavlov inside a virtual environment). The command

python -m deeppavlov.settings

prints the exact path to the directory with the settings files. You can also point the library to a directory of your choice by setting the DP_SETTINGS_PATH environment variable. After the first run of the above command (or of a server with any DeepPavlov model), the settings files will be copied from the default directory to the directory specified in DP_SETTINGS_PATH. The command

python -m deeppavlov.settings -d

resets the settings by copying the settings files from the default directory over the files in DP_SETTINGS_PATH.
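The lookup order described above can be sketched as follows. This is a minimal stdlib sketch under the stated assumption that DP_SETTINGS_PATH, when set, takes precedence over the default in-package directory; the actual copying and reset logic lives inside the library.

```python
import os
from pathlib import Path

# Sketch of settings-directory resolution: DP_SETTINGS_PATH, when set,
# takes precedence over the default directory inside the package.
def resolve_settings_dir(default_dir: Path) -> Path:
    override = os.environ.get("DP_SETTINGS_PATH")
    return Path(override) if override else default_dir

default = Path("lib/python3/site-packages/deeppavlov/utils/settings")
print(resolve_settings_dir(default))   # default in-package location
os.environ["DP_SETTINGS_PATH"] = "/home/user/dp_settings"
print(resolve_settings_dir(default))   # the overridden location now wins
```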

Among the DeepPavlov settings, the following deserve special attention:

  • server_config.json, the model_args_names parameter:
    Recall from the previous article that:
    – the arguments of the DeepPavlov REST API are named;
    – any model in DeepPavlov is identified by the name of its config.
    Thus, the default argument names for each model are taken from its config.

    We will analyze the structure of model configs in detail in the following articles; for now, we only note that the argument names in the REST API can be redefined as follows:

    model_args_names: ["arg_1_name", ..., "arg_n_name"]

    The order of the argument names corresponds to the order in which the arguments are defined in the model config; an empty string as the value of model_args_names means the default names are used.

  • log_config.json:
    Note that uvicorn logging uses its own logger, which is configured separately. You can read about the configuration structure of the Python logging module here.
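Configuring a separate uvicorn logger follows the standard dictConfig structure of the Python logging module. The fragment below is only an illustration of that structure, not the actual contents of log_config.json; the formatter and handler names are hypothetical.

```python
import logging.config

# Minimal dictConfig sketch: the "uvicorn" logger gets its own handler
# and level, independent of the root logger's configuration.
LOG_CONFIG = {
    "version": 1,
    "disable_existing_loggers": False,
    "formatters": {
        "plain": {"format": "%(asctime)s %(levelname)s %(name)s: %(message)s"},
    },
    "handlers": {
        "console": {"class": "logging.StreamHandler", "formatter": "plain"},
    },
    "loggers": {
        "uvicorn": {"handlers": ["console"], "level": "INFO", "propagate": False},
    },
}

logging.config.dictConfig(LOG_CONFIG)
logging.getLogger("uvicorn").info("uvicorn logger configured separately")
```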

Running pre-trained models in Docker

Any pre-trained DeepPavlov model can be launched in a Docker container in REST service mode. Detailed instructions are in our DockerHub repositories: here for CPU and here for GPU. The APIs of the models in the containers fully match the description from the previous article.

DeepPavlov Cloud

To make it easier to work with pre-trained DeepPavlov NLP models, we have started providing them in SaaS mode. To use the models, register with our service and obtain a token in the Tokens section of your personal account. The API documentation is in the Info section. One token allows up to 1000 requests to a model.

The service is currently running as an Alpha version, and its use is free of charge. Going forward, the set of models and the formats in which they are provided will be expanded in line with user requests. The request form can be found at the bottom of the Demo page.

The following models are now available in DeepPavlov Cloud:

  • Named Entity Recognition (multilingual) – recognition of named entities;
  • Sentiment (RU) – text sentiment classification;
  • SQuAD (multilingual) – answering a question about a text with a fragment of that text.

Conclusion

In this article, we covered the configuration and deployment features of DeepPavlov models and learned about the DP Docker images and the free access to DP models as SaaS.

In the next article, we will train a simple DeepPavlov model on our own dataset. And do not forget that DeepPavlov has a forum – ask your questions about the library and the models there. Thank you for your attention!
