5 different Python libraries that will save you time

We are sharing this translation ahead of the start of our machine and deep learning course. According to the author, each of these libraries deserves an article of its own. The collection starts at the very beginning, with a library that cuts down boilerplate import code, and ends with an easy-to-use data visualization package for exploratory analysis. Along the way, the author covers working with Google Earth Engine maps, making it faster and easier to work with ML models, and a library that can improve the quality of your natural language processing project. A Jupyter notebook with all the examples is linked at the end.


PyForest

What is the first thing you do when you start coding a project? You probably import the libraries you need. The problem is that you never know in advance exactly which libraries you will need until you actually need them, that is, until you hit an error.

This is why PyForest is one of the most user-friendly libraries I know of. With its help, you can import over 40 of the most popular libraries (Pandas, Matplotlib, Seaborn, Tensorflow, Sklearn, NLTK, XGBoost, Plotly, Keras, Numpy and others) into your Jupyter notebook with just one line of code.

Run pip install pyforest to install it. To import the libraries into your notebook, enter from pyforest import * and you’re good to go. To find out which libraries are available for lazy import, run lazy_imports().

Working with the libraries themselves stays convenient: technically, a library is only imported the moment you actually reference it in your code. If you never mention it, it is never imported.
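Here is a minimal sketch of how that lazy-import behavior looks in a Jupyter notebook; the tiny DataFrame is just an illustration, and active_imports() is, as far as I know, the pyforest helper that lists the imports that have actually been triggered:

from pyforest import *

# Nothing heavy has been imported at this point.
# pandas is only really imported the moment pd is first used.
df = pd.DataFrame({"a": [1, 2, 3]})

# List the lazy imports that have actually been triggered so far.
print(active_imports())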

Emot

This library can improve the quality of your natural language processing project. It converts emoticons and emoji into text descriptions. Imagine, for example, that someone tweets “I ❤️ Python”. The person did not write the word “love”; they used an emoji instead. If that tweet goes into your project and you simply strip the emoji out, you lose part of the information.

This is where the emot package comes in: it converts emoji into words. For those who are not sure about the difference, emoticons are a way of expressing emotions with plain characters. For example, :) means a smile, and :( expresses sadness. So how do you work with the library?

To install emot, run pip install emot, then bring it into your notebook with import emot. You need to decide whether you want to work with emoticons or with emoji. For emoji, the call is emot.emoji(your_text). Let’s see emot in action.
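Here is a minimal sketch of that call. Note that the exact API depends on the emot version: in emot 2.x emot.emoji() works as a module-level function, while in 3.x you first create an object with emot.core.emot().

import emot

text = "I ❤️ Python"
ans = emot.emoji(text)   # emot 2.x style; returns a dict describing each emoji found
print(ans)               # includes the emoji value, its description and its location
print(ans['mean'])       # just the description, e.g. [':red_heart:']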

Above, the sentence I ❤️ Python is passed to the emot call. The output is a dictionary containing the emoji value, its description (meaning), and its position in the string. As always, you can slice the dictionary and focus on the information you need: for example, writing ans['mean'] returns only the emoji description.

Geemap

In short, geemap lets you display Google Earth Engine data interactively. You are probably familiar with Google Earth Engine and how powerful it is, so why not use it in your project? Over the next few weeks I want to build a project that showcases the full functionality of the geemap package, and below I will show you how to get started with it.

Install geemap with pip install geemap from the terminal, then import it into your notebook with import geemap. For this demonstration, I’ll create an interactive map based on folium:

import geemap.eefolium as geemap
Map = geemap.Map(center=[40,-100], zoom=4)
Map

As I said, I haven’t explored this library as much as it deserves, but it has a comprehensive README explaining how it works and what you can do with it.
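For the curious, here is a rough sketch of what adding an actual Earth Engine layer to that map might look like. This assumes you have an Earth Engine account and have already run ee.Authenticate(); the SRTM elevation image is the standard Earth Engine sample dataset, not something from the article itself.

import ee
import geemap.eefolium as geemap

ee.Initialize()                                    # requires a one-time ee.Authenticate()
Map = geemap.Map(center=[40, -100], zoom=4)
dem = ee.Image("USGS/SRTMGL1_003")                 # global SRTM elevation data
Map.addLayer(dem, {"min": 0, "max": 4000}, "DEM")  # visualization range and layer name
Map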

Dabl

Let me go over the basics. Dabl is designed to make it easy for beginners to work with ML models. To install it, run pip install dabl, import the package with import dabl, and you’re good to go. Running dabl.clean(data) gives you information about the features, such as whether any of them are useless. It also shows continuous, categorical, and high-cardinality features.

To visualize specific features, you can run dabl.plot(data).
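Here is a minimal sketch of those two helpers on a pandas DataFrame; the CSV path and the "survived" target column are placeholders I made up, not something from the article.

import dabl
import pandas as pd

data = pd.read_csv("titanic.csv")                # placeholder dataset
data_clean = dabl.clean(data)                    # detect feature types and flag useless columns
dabl.plot(data_clean, target_col="survived")     # plot each feature against the target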

Finally, with one line of code you can build several models, either with dabl.AnyClassifier() or with dabl.SimpleClassifier(), just as you would in scikit-learn. You still have to go through the usual steps, such as splitting the data into training and test sets, instantiating the model, fitting it, and evaluating its predictions.

# Imports (the author relies on PyForest for these; listed here to keep the snippet self-contained)
import dabl
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split

# Setting X and y variables
X, y = load_digits(return_X_y=True)

# Splitting the dataset into train and test sets
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=1)

# Calling the model
sc = dabl.SimpleClassifier().fit(X_train, y_train)

# Evaluating accuracy score
print("Accuracy score:", sc.score(X_test, y_test))

As you can see, Dabl iterates over a number of models, including a dummy classifier, GaussianNB (Gaussian Naive Bayes), decision trees of various depths, and logistic regression, and at the end it reports the best one. All the models run in about 10 seconds. Cool, isn’t it? I decided to re-test the winning model with plain scikit-learn to have more confidence in the result:
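Something like the following sketch, reusing the same train/test split as above. Here LogisticRegression is imported explicitly just to keep the snippet self-contained, whereas the author relied on PyForest to handle the import.

from sklearn.linear_model import LogisticRegression

# Fit the same kind of model dabl picked and score it on the held-out test set
lr = LogisticRegression(max_iter=5000).fit(X_train, y_train)
print("Accuracy score:", lr.score(X_test, y_test))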

I got 0.968 accuracy with the plain scikit-learn approach and 0.971 with Dabl. That’s close enough for me! Note that I did not explicitly import the logistic regression model from scikit-learn, since PyForest already takes care of that. I have to admit that I prefer LazyPredict, but Dabl is worth trying.

SweetViz

SweetViz is a low-code library that generates beautiful visualizations and takes your exploratory data analysis to the next level with just two lines of code. Its output is an interactive HTML file. Let’s take a quick look at it. You can install it with pip install sweetviz and import it into your notebook with import sweetviz as sv. Here’s some sample code:

my_report = sv.analyze(dataframe)
my_report.show_html()

See that? The library builds an HTML exploratory data analysis report for the entire dataset and splits it up so that you can analyze each feature separately. You can also see numerical or categorical associations with the other features, along with the smallest, largest and most frequent values. The visualization also adapts to the data type. There is so much you can do with SweetViz that I will write a separate post about it, but for now I highly recommend trying it.
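Here is a slightly fuller sketch of the same workflow; the CSV path and the "target" column name are placeholders I made up for illustration, not values from the article.

import pandas as pd
import sweetviz as sv

dataframe = pd.read_csv("your_dataset.csv")               # placeholder dataset
my_report = sv.analyze(dataframe, target_feat="target")   # optional: associate features with a target
my_report.show_html("sweetviz_report.html")               # writes the report and opens it in the browser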

Conclusion

Each of these libraries deserves an article of its own, and it’s worth learning about them, because they turn complex tasks into simple ones. By working with them, you save precious time for the tasks that really matter. I recommend trying them out and also exploring the functionality not covered here. On GitHub you will find the Jupyter notebook I wrote, so you can see these libraries in action.

This material not only gives an idea of the useful packages in the Python ecosystem, but also reminds us of the breadth and variety of projects you can work on in this language. Python is extremely concise: it lets you save time while writing code and express ideas quickly and efficiently, leaving more energy for coming up with new approaches and solutions to problems, including in the field of artificial intelligence, of which you can gain a broad and deep understanding in our “Machine Learning and Deep Learning” course.
