How we personalized POIs

At 2GIS we want to simplify the user's search routine, so we strive to anticipate their requests. Below we describe how we came up with an algorithm for personalizing interesting places and what came of it.


A POI (point of interest) is a small round icon on the map that indicates a place or company likely to interest the user.

[Image: Here they are – 2GIS POIs. Each category has its own icon]

POIs are city objects popular with the majority of users across different categories. But we also want to account for each user's individual interests, so we decided to add personalized POIs to the map to do just that.

Well-chosen POIs also shorten the chain of search steps on the map. A typical search goes like this: open the application → enter a search query → view the results → open the object card.

With personalized POIs, the user can navigate the map and find information without a search query: open the application → see the desired POI on the map → open the object card.

[Image: The map without personalized POIs and with them – restaurants, coffee shops and clinics the user finds interesting]

Data

It is logical to take as POI candidates the objects in which the user has already shown interest, and among them look for those the user is most likely to return to. Ideally, these objects should stay interesting to the user for as long as possible, so that looking for them on the map becomes a habit.

But how do we classify this data? We could label a sample of objects, enrich it with a variety of features, and apply gradient boosting or neural networks. Or we could go another way and come up with a rule of thumb.

Rule of thumb

A rule of thumb has pros and cons. Yes, classification quality will be weaker. But the main advantage is that we can quickly and cheaply test the demand for personalized POIs: preparing the data, training such a model, and deploying it takes far less time than, say, gradient boosting. And if the feature proves successful for both users and the company, we can always switch to more complex and costly models.

Empirical models require good domain knowledge. While studying user behavior in the product, we found that the probability of a user returning to the product (the retention rate) follows an exponential distribution.

This property shows up not only in product retention but in many other phenomena involving repeated access – for example, repeat visits to an object, as in our case. This knowledge helped us develop algorithms for determining a user's "home" city and their short-term and long-term interests.
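As an illustration of this kind of check, here is a minimal sketch (the interval data is made up) of fitting an exponential distribution to the gaps between a user's visits:

```python
import numpy as np
from scipy import stats

# Hypothetical data: days between consecutive visits of one user.
gaps = np.array([1, 1, 2, 1, 3, 1, 1, 5, 2, 1, 1, 8, 1, 2, 1])

# Fit an exponential distribution with the location fixed at zero.
loc, scale = stats.expon.fit(gaps, floc=0)
print(f"estimated rate lambda = {1 / scale:.3f}")

# Kolmogorov-Smirnov test: a high p-value means the data do not
# contradict the exponential assumption.
ks_stat, p_value = stats.kstest(gaps, "expon", args=(loc, scale))
print(f"KS statistic = {ks_stat:.3f}, p-value = {p_value:.3f}")
```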

First algorithm

The first step was to form a sample of the form

$X^{l} = (x_{i}, y_{i})_{i=1}^{l}$

$x_{i}$ – the n-dimensional feature vector of the i-th object. As classification objects we take all objects the user showed interest in during a certain period before the calculation date – in our case, two months.

$y_{i}$ – the class of the i-th object, the response: 1 if the user visited the object during the control period and 0 if they did not.

Since we are interested in objects that will hold the user's attention for a long time, we chose as the control period the month starting two weeks after the calculation date. This two-week lag is needed so that we do not count as successful the objects of momentary or short-term interest – those the user searches for on or near the calculation date but will not necessarily return to. Objects with y = 1, i.e. those the user returned to during the control period, are considered successful.
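For illustration, here is a minimal sketch of how such a sample could be assembled; the `events` DataFrame and its column names are assumptions, not our actual pipeline:

```python
import pandas as pd

def build_sample(events: pd.DataFrame, calc_date: pd.Timestamp) -> pd.DataFrame:
    """events: one row per (user_id, object_id, date) interaction."""
    train_start = calc_date - pd.Timedelta(days=60)       # two months of history
    control_start = calc_date + pd.Timedelta(days=14)     # two-week lag
    control_end = control_start + pd.Timedelta(days=30)   # one-month control period

    train = events[(events["date"] >= train_start) & (events["date"] < calc_date)]
    control = events[(events["date"] >= control_start) & (events["date"] < control_end)]

    # Candidate objects: everything the user touched in the training window.
    objects = train[["user_id", "object_id"]].drop_duplicates()
    returned = control[["user_id", "object_id"]].drop_duplicates().assign(y=1)

    # y = 1 if the user came back to the object in the control period, else 0.
    sample = objects.merge(returned, on=["user_id", "object_id"], how="left")
    sample["y"] = sample["y"].fillna(0).astype(int)
    return sample
```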

The rule $a: X \rightarrow Y$, which maps an object's feature set X to its class Y, looks like this:

$f(y|x) = \begin{cases} y = 1, & \sum_{i=1}^{k} x_{i} \cdot \exp\left(\frac{i \cdot \beta}{k}\right) > \alpha \\ y = 0, & \text{otherwise} \end{cases}$

where $k$ is the total number of days (or any other unit of time) in the training sample.

$x_{i}$ equals 1 if the user was interested in the object on day $i$, and 0 otherwise. The day number equals 1 on the first day of the training sample and $k$ on the last.

$\beta$ – a parameter that controls how quickly a day's weight decays as the day of interaction with the object moves away from the calculation date.

$\alpha$ – the threshold value.
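In code, this rule for a single object might look like the following minimal sketch (the indicator vector and parameter values are illustrative, not our production ones):

```python
import numpy as np

def classify_object(x: np.ndarray, alpha: float, beta: float) -> int:
    """x[i] = 1 if the user was interested in the object on day i + 1, else 0.
    Days closer to the calculation date (larger i) get exponentially more weight."""
    k = len(x)
    days = np.arange(1, k + 1)
    score = np.sum(x * np.exp(days * beta / k))
    return int(score > alpha)  # y = 1 if the weighted score clears the threshold

# Illustrative call: 60 days of history with interest on three recent days.
x = np.zeros(60)
x[[40, 52, 58]] = 1
print(classify_object(x, alpha=5.0, beta=2.0))  # -> 1
```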

The idea is that the farther in the past the day when the user was interested in an object, the less weight that day carries in the object's score. The parameters $\alpha$ and $\beta$ are chosen by maximizing the target metric:

$\hat{\alpha}, \hat{\beta} = \arg\max_{\alpha, \beta} F\left(f\left(y|x\right)\right)$

where F is the F-measure with an appropriate balance between the desired precision and recall of the model. In this problem the main emphasis is on precision, so we took $\gamma = 0.5$:

$F_{\gamma} = \left(1 + \gamma^{2}\right) \cdot \frac{\text{precision} \cdot \text{recall}}{\gamma^{2} \cdot \text{precision} + \text{recall}}$
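Fitting $\alpha$ and $\beta$ can then be done with a plain grid search over the training sample. Here is a sketch using scikit-learn's `fbeta_score`; the grid ranges are assumptions for illustration:

```python
import itertools

import numpy as np
from sklearn.metrics import fbeta_score

def fit_params(X: np.ndarray, y: np.ndarray, gamma: float = 0.5):
    """X: one row per object of daily interest indicators; y: 0/1 responses."""
    k = X.shape[1]
    days = np.arange(1, k + 1)
    best_f, best_alpha, best_beta = 0.0, None, None
    # Exhaustive grid search; the ranges here are illustrative, not the fitted ones.
    for alpha, beta in itertools.product(np.linspace(0.5, 20, 40),
                                         np.linspace(0.1, 5.0, 25)):
        scores = X @ np.exp(days * beta / k)   # day-weighted interest score
        y_pred = (scores > alpha).astype(int)
        f = fbeta_score(y, y_pred, beta=gamma)
        if f > best_f:
            best_f, best_alpha, best_beta = f, alpha, beta
    return best_f, best_alpha, best_beta
```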

Results 1.0

The algorithm was tested on more than 450 million objects, of which roughly 5% have a response equal to 1. The recall of the algorithm is 0.153, the precision is 0.401, and the F-measure $(\gamma = 0.5)$ is 0.303.

The quality of such an algorithm may seem unacceptably low. The reason is that the classified objects include ones that these metrics simply cannot attribute to long-term interests – users interacted with them too little to draw any conclusions.

Only 3% of objects were of interest to the user on more than two days during the training period. This is not surprising: the sample includes objects from categories with low retention. There are many such objects, and the categories can be very large – for example, pharmacies and bars – plus objects that simply do not interest the user.

Among objects with a response equal to 1, this share is higher – 22%. That is still small, but it is explained by the long gaps between visits to an object.


If we exclude such objects, then with the same model parameters recall rises from 0.153 to 0.684 at the same precision of 0.401, and the precision-weighted F-measure becomes 0.437 (the classical F1 is, of course, higher).

However, this model still has two problems. First, users have different activity levels: one opens the application once a day, another once a month. Using a single threshold and a single set of weighting-function parameters can therefore understate classification quality.

Second, places are visited at different frequencies depending on their field of activity. For example, a user goes to a hypermarket for groceries once a week, visits a hairdresser once a month, and when they catch a cold, they visit the clinic as often as the doctor tells them to. So we can miss objects with long intervals between visits.

Second algorithm

To address these issues, we added to the function a feature capturing the maximum span of the user's interest, and changed slightly how the intensity and recency of visits are taken into account. We divided users into three groups by frequency of product use and selected separate model parameters for each:

$f(y|x) = \begin{cases} y = 1, & \exp\left(\frac{x_{1} \cdot \beta}{k}\right) \cdot \left(\lambda x_{2} + \mu x_{3}\right) > \alpha \\ y = 0, & \text{otherwise} \end{cases}$

where $k$ is the number of days in the training sample.

$x_{1}$ – the number of the last day of user interaction with the object (equal to 1 on the first day of the training sample and $k$ on the last).

$x_{2}$ – the number of days of user interaction with the object during the period under review.

$x_{3}$ – the number of days between the first and last days of user interaction with the object during the period under review.

$\alpha, \beta, \lambda, \mu$ – the parameters of the function, chosen by maximizing the target metric (in our case, the F-measure) in the same way as for the first model.
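A sketch of the second rule; the per-cluster parameter values below are placeholders, not the fitted ones:

```python
import numpy as np

def classify_object_v2(x1: int, x2: int, x3: int, k: int,
                       alpha: float, beta: float, lam: float, mu: float) -> int:
    """x1: number of the last day of interaction (1..k),
    x2: number of interaction days,
    x3: days between the first and last interaction."""
    score = np.exp(x1 * beta / k) * (lam * x2 + mu * x3)
    return int(score > alpha)

# Hypothetical per-cluster parameters; in reality each set is fitted separately
# by maximizing the F-measure on that cluster's objects.
cluster_params = {
    "rare":   dict(alpha=3.0, beta=1.5, lam=1.0, mu=0.2),   # < 3 visits/month
    "medium": dict(alpha=4.0, beta=2.0, lam=1.0, mu=0.3),   # > 3 visits/month
    "active": dict(alpha=5.0, beta=2.5, lam=1.0, mu=0.4),   # > 10 visits/month
}
print(classify_object_v2(x1=58, x2=3, x3=18, k=60, **cluster_params["active"]))
```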

Results 2.0

We fitted the parameters and obtained the following results for the user clusters:

| Cluster | Recall | Precision | F-measure ($\gamma = 0.5$) |
|---|---|---|---|
| 1. Objects of users who open 2GIS fewer than three times a month | 0.072 | 0.349 | 0.197 |
| 2. Objects of users who open 2GIS more than three times a month | 0.162 | 0.457 | 0.335 |
| 3. Objects of users who open 2GIS more than ten times a month | 0.194 | 0.514 | 0.386 |
| Total for the 2nd algorithm | 0.177 | 0.492 | 0.363 |
| Total for the 1st algorithm | 0.153 | 0.401 | 0.303 |

The F-measure increased for all clusters except the first, which corresponds to the least active part of the audience and accounts for few objects.

The number of true positives grew by 17%. Precision rose by 9.1 percentage points and recall by 2.4, while the overall F-measure gained 6 points.

If we exclude objects with too few unique days of interest, then with the same model parameters recall rises from 0.177 to 0.802 (versus 0.684 for the first model, a gain of 11.8 points) at the same precision of 0.492 (versus 0.401, a gain of 9.1 points). The F-measure $(\gamma = 0.5)$ estimated on these figures is 0.533 for the second algorithm versus 0.437 for the first, a gain of 9.6 points.

Testing in production

Decomposing the data and adding extra parameters significantly improved model quality, which suggests that more complex models could improve the result further. But before refining the algorithm, we decided to test the feature in production and see whether users would like it.

[Image: Personalized POIs are slightly larger than regular ones and appear on the map at earlier zoom levels]

In a month, 500,000 users made 1 million clicks on personalized POIs. That is roughly 12% of the users for whom we had selected POIs – but it does not mean the rest paid no attention to them.

Approximately 40% of those who were given personalized objects reached those objects in other ways. This is also good news: it means the demand for personalization exists not only on the map but in other parts of the product as well.

POI vs Favorites

To judge whether these results are good enough, we compared the POIs we personalized with the objects users personalize for themselves – their Favorites.

Personalized POIs and Favorites serve a similar purpose: remembering the places you want to return to. They look similar too – both are marked with icons on the map and appear at roughly the same zoom levels. The difference is in the icon itself: every Favorite gets the same white flag on an orange or red background, while a personalized POI's color and glyph change with the object's category.

[Image: Personalized POIs also say in text what kind of object interested us – unlike Favorites icons, which have no labels]

It turned out that the share of users who click personalized POIs is higher than the share who click Favorites on the map – twice as high among users for whom POIs were matched at all, and one and a half times as high among all users.

In effect, we built the user a self-updating Favorites list on the map that they do not have to maintain or even think about. That is a good result, so it makes sense to develop personalized POIs further.

Conclusions

Empirical models can be useful and effective in the early stages of a feature launch and under resource constraints, because they deliver results quickly and cheaply. The key is to base your assumptions on a deep understanding of the product's logic, its nature, and user behavior.

And one more conclusion: the future belongs to personalization.
