How the Android project metrics collector works

Hello! My name is Daniil, I am an Android developer on the VK ID SDK team at VK. Our team has created a lightweight SDK for authorization through VK ecosystem applications. It consists of a One Tap button for logging in in one click, a button for logging into another account, and a widget for authorization via Mail or Odnoklassniki.

The VK ID Android SDK authorization bottom sheet

While working on the product, we realized we needed to evaluate its technical quality: track the SDK size, test coverage, build speed, and much more. We needed a code quality metrics collector.

I'll tell you how we wrote a plugin for collecting metrics and what problems we ran into. You'll learn how our collector works from the inside, and you can even try it in your own project.

Now let's get into the details.

Getting started

First of all, we started writing a plugin for collecting metrics. It was supposed to calculate code quality parameters and publish the results in merge requests.

I'll make a small digression and remind you of something important: code quality needs to be monitored regularly. If you don't do this, or do it rarely, then when a metric regresses you will have to spend time hunting for the change that caused it.

Each merge request included the following report:

Plugin report

The report displayed the key technical metrics that we monitored regularly. If the values deviated noticeably from the norm, we analyzed the code for unwanted changes. We packaged this solution as a Gradle plugin.

Choosing a solution

Build speed is conveniently measured on the Gradle side, so we decided not to reinvent anything and settled on Gradle tasks. There are several approaches to invoking them:

  1. A command-line runner (for example, in Kotlin) that runs the Gradle metrics tasks.

  2. A Gradle plugin that runs the tasks directly.

  3. A Gradle plugin that runs the tasks via exec.

The first option requires writing code with different libraries, and probably in different languages. On top of that, the runner and the metrics code would live separately. That doesn't sound simple at all. And there is one more consideration we couldn't ignore: Android developers are more accustomed to installing Gradle plugins and working with them directly.

The second option had to be abandoned because of possible interference between metrics. For example, the build speed metric needs to run in isolation; otherwise it would measure not only the build task itself but also the time spent running other metrics.

As a result, we settled on option three.

Solution architecture

Plugin architecture

So we settled on the form of a Gradle plugin. For configuration, Gradle already offers a recommended mechanism: extensions. All that remained was to develop a general approach to writing metrics.

To begin with, each metric implemented an interface that triggered data collection and returned a text diff for publication in merge requests. Internally, the metric did the necessary calculations: for example, it launched a Gradle task to build the app and compute the APK size, saved the results to the storage, and formatted them as Markdown (which the plugin then published as a comment on the MR).
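Here is a minimal sketch of what such a metric interface might look like. The names are hypothetical illustrations, not the plugin's actual API:

// A toy metric implementing the contract described above.
interface Metric {
    // Human-readable metric name used in the MR comment.
    val title: String

    // Runs the calculation and returns a Markdown fragment with the diff.
    fun calculateDiff(): String
}

class LinesOfCodeMetric(private val current: Long, private val previous: Long) : Metric {
    override val title = "Lines of code"

    override fun calculateDiff(): String {
        val delta = current - previous
        val sign = if (delta >= 0) "+" else ""
        return "| $title | $current | $sign$delta |"
    }
}

fun main() {
    println(LinesOfCodeMetric(current = 12_340, previous = 12_100).calculateDiff())
    // Prints: | Lines of code | 12340 | +240 |
}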

Firebase Firestore was used to store the metrics, and the GitLab API was used to publish them.

Next, we'll look at how this works in more detail. Let's start with the details on the right side of the diagram and work our way back to the beginning to build up the full picture.

Metrics store

The metrics store had to solve two problems.

First: in each MR we needed to calculate the difference between the metrics of the target branch (for example, develop) and the MR branch. Therefore, for each MR, the metrics from its last commit were saved to the storage.

Second: each metric ran as a separate Gradle invocation, and its result had to be passed to the run of the Gradle plugin that publishes the diff.

Simplifying a bit: metrics were launched with the command ./gradlew publishAllMetrics, which under the hood ran ./gradlew publishSpecificMetric for each metric. The reasons for this are described in more detail in the "Choosing a solution" section.
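To illustrate, here is a rough sketch of how such a wrapper task might shell out to child Gradle processes. The metric task names are made up for the example:

// build.gradle.kts — a sketch, not the plugin's actual implementation
tasks.register("publishAllMetrics") {
    doLast {
        // Each metric runs in its own ./gradlew invocation, so a metric such as
        // build speed measures only its own build, not the other metrics' work.
        listOf("publishApkSizeMetric", "publishBuildSpeedMetric").forEach { metricTask ->
            exec {
                commandLine("./gradlew", metricTask)
            }
        }
    }
}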

Each metric ran in isolation, so to transfer data between the plugin process and the metric processes, it had to be stored somewhere temporarily.

As a result, we were faced with two tasks:

  1. Save the diff produced at the current step so that the plugin can retrieve it.

  2. Save the metrics from the previous step to calculate the diff.

Generally, any external storage that persists state across CI runs is suitable for this. We chose Firebase Firestore because its free plan covers all our needs. Moreover, if the solution goes open source, anyone will be able to use it.

Firestore is a cloud NoSQL database from Google with a free plan that is sufficient for most projects.

The first problem was solved like this: each metric saved the result of its work as text in the storage. The plugin's main task then combined all these fragments into one and published them as a comment on the MR.

The second was solved by calculating the metrics in every MR and saving the results to the storage.
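With the google-cloud-firestore Java client, saving and loading a metric value could look roughly like this. The collection name, field name, and commit SHAs are illustrative:

import com.google.auth.oauth2.GoogleCredentials
import com.google.cloud.firestore.FirestoreOptions
import java.io.FileInputStream

fun main() {
    val firestore = FirestoreOptions.newBuilder()
        .setCredentials(GoogleCredentials.fromStream(FileInputStream("service-credentials.json")))
        .build()
        .service

    // Save the metric for the current MR commit; .get() blocks until the write completes.
    firestore.collection("metrics")
        .document("abc123") // the MR commit SHA
        .set(mapOf("apkSizeBytes" to 1_234_567L))
        .get()

    // Load the metric saved earlier for the base commit to diff against.
    val base = firestore.collection("metrics").document("def456").get().get()
    println("Previous APK size: ${base.getLong("apkSizeBytes")}")
}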

Metric diff calculation

In each MR, the difference between metrics was calculated. To do this, we took the metric values for two commits. Obviously, the last commit of the MR should serve as the MR's measurement. But choosing the commit on the target branch was not so simple.

There are two options:

  1. Take the latest commit of the target branch (usually develop).

  2. Take the nearest common ancestor of the MR branch and the target branch.

If you take the latest commit of the target branch, it may already contain changes merged from other MRs that your baseline measurement does not account for. For example, your commit may improve the metric slightly, while an MR merged into develop just before it significantly worsened it. The developer would then hunt for a problem simply because the comparison against the latest commit produced worse numbers.

After mulling it over, we chose the second option:

Logic for calculating diff metrics

The commit was found using a combination of merge-base and rev-list:

val mergeBase = exec("git merge-base $sourceBranch $targetBranch")

return exec("git rev-list --no-merges -n 1 $mergeBase")

This commit was the last commit of one of the previous MRs, so its metrics had already been calculated and saved.
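An exec helper like the one in the snippet above can be implemented on top of ProcessBuilder. A minimal sketch; the branch names in main are illustrative:

import java.util.concurrent.TimeUnit

fun exec(command: String): String {
    val process = ProcessBuilder(command.split(" "))
        .redirectErrorStream(true)
        .start()
    // Read the output before waiting, so a full pipe buffer can't deadlock us.
    val output = process.inputStream.bufferedReader().readText()
    check(process.waitFor(60, TimeUnit.SECONDS)) { "Timed out: $command" }
    check(process.exitValue() == 0) { "Command failed ($command): $output" }
    return output.trim()
}

fun main() {
    val mergeBase = exec("git merge-base develop feature/my-change")
    println(exec("git rev-list --no-merges -n 1 $mergeBase"))
}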

Working with the repository

To work with the repository (in our case, GitLab), we used the GitLab API. We faced two tasks:

  1. Get the MR branches to determine which commits to take metrics from and to save them for.

  2. Publish comments on the MR with the results of collecting metrics.

To solve the first problem, it was enough to request the MR information from the GitLab API. It contains the names of the target branch and the MR branch itself. Using them, we could obtain the last commit of the MR and the base commit preceding the MR (via git merge-base). From the base commit we fetched the stored metric values to compare against the metrics of the MR commit, and the last MR commit served as the key under which new metrics were saved.
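Fetching that MR information boils down to one call to the GitLab REST API, GET /projects/:id/merge_requests/:iid. A sketch: the host, project id, MR iid, and the naive regex "parsing" are for illustration only; a real implementation would use a JSON library:

import java.net.URI
import java.net.http.HttpClient
import java.net.http.HttpRequest
import java.net.http.HttpResponse

fun main() {
    val request = HttpRequest.newBuilder()
        .uri(URI.create("https://gitlab.example.com/api/v4/projects/42/merge_requests/7"))
        .header("PRIVATE-TOKEN", System.getenv("GITLAB_TOKEN") ?: error("GITLAB_TOKEN is not set"))
        .GET()
        .build()
    val body = HttpClient.newHttpClient()
        .send(request, HttpResponse.BodyHandlers.ofString())
        .body()

    // The response JSON contains the branch names we need.
    fun field(name: String) = Regex("\"$name\":\"([^\"]+)\"").find(body)?.groupValues?.get(1)
    println("MR branch: ${field("source_branch")}, target branch: ${field("target_branch")}")
}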

The second task was simple: there is an API for working with comments. To avoid piling up comments on the MR from different CI pipeline runs, we decided to publish a single comment and update it on subsequent runs. This way the MR always shows up-to-date information, and if you need to quickly check the results for other commits, that data remains in Firebase.
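Creating or updating that single comment maps onto the GitLab Notes API: POST /projects/:id/merge_requests/:iid/notes on the first run, PUT .../notes/:note_id afterwards. A sketch with illustrative ids and report text:

import java.net.URI
import java.net.http.HttpClient
import java.net.http.HttpRequest
import java.net.http.HttpResponse

const val NOTES_API = "https://gitlab.example.com/api/v4/projects/42/merge_requests/7/notes"

fun send(method: String, url: String, json: String): String {
    val request = HttpRequest.newBuilder()
        .uri(URI.create(url))
        .header("PRIVATE-TOKEN", System.getenv("GITLAB_TOKEN") ?: error("GITLAB_TOKEN is not set"))
        .header("Content-Type", "application/json")
        .method(method, HttpRequest.BodyPublishers.ofString(json))
        .build()
    return HttpClient.newHttpClient().send(request, HttpResponse.BodyHandlers.ofString()).body()
}

fun main() {
    val report = "### Code health metrics: APK size +40000 B (+3.33%)" // illustrative text
    // In a real run, the note id would be found by listing the MR notes and
    // looking for a marker left by a previous pipeline run.
    val existingNoteId: Long? = null
    if (existingNoteId == null) {
        send("POST", NOTES_API, """{"body":"$report"}""")                   // first run: create
    } else {
        send("PUT", "$NOTES_API/$existingNoteId", """{"body":"$report"}""") // later runs: update in place
    }
}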

Internal structure of the metric

Let's look at how the metric is structured from the inside, its logic and components.

Internal structure of the metric

Internal structure of the metric

The metric calculation takes place within a single exec function. Typically, the process consists of four parts (a sketch of the diff step follows the list):

  1. Calculating the metric itself, that is, computing the metric values for the current commit.

  2. Fetching from the storage the metric value against which the diff will be calculated.

  3. Calculating the diff: computing the difference in metric values, determining the percentage of change, and generating Markdown with the result.

  4. Saving the diff to the storage so it can be passed to the plugin process.
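Step 3 is plain arithmetic plus formatting. A sketch of how the diff row might be computed and rendered; the table layout is illustrative:

import java.util.Locale

fun diffRow(title: String, current: Long, previous: Long): String {
    val delta = current - previous
    val percent = if (previous == 0L) 0.0 else delta * 100.0 / previous
    val sign = if (delta >= 0) "+" else ""
    // Locale.US keeps the decimal separator a dot regardless of the CI machine's locale.
    val pct = String.format(Locale.US, "%.2f", percent)
    return "| $title | $current | $sign$delta ($sign$pct%) |"
}

fun main() {
    println(diffRow("APK size, bytes", current = 1_240_000, previous = 1_200_000))
    // Prints: | APK size, bytes | 1240000 | +40000 (+3.33%) |
}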

SDK size metric

All metrics follow the same scenario: each one works within a single Gradle task and saves its diff to the storage. The diagram shows the logic of the metric that calculates the size of the SDK:

Logic of the SDK size metric

Step by step:

  1. The metrics collection plugin starts.

  2. The Gradle plugin launches the Gradle task that calculates the metric via Task.exec.

  3. Runtime.exec is used to launch apkanalyzer.

  4. Using apkanalyzer, the size of the APK that includes our SDK is calculated.

  5. Using apkanalyzer, the size of the APK without our SDK is calculated.

  6. The difference between the two APK sizes is calculated; this is the size of our SDK.

  7. The diff and the metric values are saved to the storage.

After this, the plugin takes the diff and publishes it as a comment in the MR. This metric can also report the size of an ordinary APK: just skip step 5 and publish the APK size from step 4. All other metrics work similarly, each within a single Gradle task.
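Steps 4 through 6 reduce to two apkanalyzer calls and a subtraction. A sketch: apk file-size is a real apkanalyzer subcommand, but the tool and APK paths below are illustrative:

fun apkSizeBytes(apkAnalyzerPath: String, apkPath: String): Long {
    val process = ProcessBuilder(apkAnalyzerPath, "apk", "file-size", apkPath)
        .redirectErrorStream(true)
        .start()
    val output = process.inputStream.bufferedReader().readText().trim()
    check(process.waitFor() == 0) { "apkanalyzer failed: $output" }
    return output.toLong() // the subcommand prints the size in bytes
}

fun main() {
    val analyzer = "/opt/android-sdk/cmdline-tools/latest/bin/apkanalyzer"
    val withSdk = apkSizeBytes(analyzer, "sample/build/outputs/apk/withSdk/sample-withSdk.apk")
    val withoutSdk = apkSizeBytes(analyzer, "sample/build/outputs/apk/withDeps/sample-withDeps.apk")
    println("SDK size: ${withSdk - withoutSdk} bytes") // step 6: the difference is our SDK's size
}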

Gradle plugin

Logic of the gradle plugin

The Gradle plugin consists of two parts: calculating the metrics and publishing them. The metrics are calculated one by one, and the results are published as a comment in the MR.
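In outline, the plugin's apply method creates the configuration extension and registers the publishing task. A sketch with hypothetical class and property names; the real plugin is structured differently:

import org.gradle.api.Plugin
import org.gradle.api.Project

// Hypothetical extension holding the configuration shown below.
open class HealthMetricsExtension {
    var metricTaskNames: List<String> = emptyList()
}

class HealthMetricsPlugin : Plugin<Project> {
    override fun apply(project: Project) {
        // The extension backs the healthMetrics { ... } configuration block.
        val extension = project.extensions.create("healthMetrics", HealthMetricsExtension::class.java)

        project.tasks.register("publishAllMetrics") { task ->
            task.doLast {
                // 1. Run each configured metric in its own Gradle process (see the earlier sketch).
                // 2. Collect the saved diffs from the storage and publish one MR comment.
                println("Metrics to run: ${extension.metricTaskNames}")
            }
        }
    }
}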

Gradle plugin interface

The plugin's interface is very simple; the main part is the metrics configuration, which looks like this:

// build.gradle.kts
healthMetrics {
   val localProperties by lazy {
       Properties().apply { load(rootProject.file("local.properties").inputStream()) }
   }
   gitlab(
       host = { localProperties.getProperty("healthmetrics.gitlab.host") },
       token = { localProperties.getProperty("healthmetrics.gitlab.token") }
   )
   firestore(rootProject.file("build-logic/metrics/service-credentials.json"))
   codeCoverage {
       title = "Code coverage"
       targetProject = rootProject
   }
   buildSpeed {
       title = "Build speed of :assembleDebug"
       measuredTaskPaths = setOf(":assembleDebug")
       iterations = 3
       warmUps = 2
       cleanAfterEachBuild = true
   }
   apkSize {
       title = "SDK size with all dependencies"
       targetProject = projects.sampleMetricsApp.dependencyProject
       targetBuildType = "withSdk"
       sourceBuildType = "debug"
       apkAnalyzerPath = { localProperties.getProperty("healthmetrics.apksize.apkanalyzerpath") }
   }
   apkSize {
       title = "Pure SDK size"
       targetProject = projects.sampleMetricsApp.dependencyProject
       targetBuildType = "withSdk"
       sourceBuildType = "withDeps"
       apkAnalyzerPath = { localProperties.getProperty("healthmetrics.apksize.apkanalyzerpath") }
   }
   publicApiChanges()
}

The configuration describes the metrics and configures GitLab and Firestore.

Testing the plugin

The plugin is currently in an early alpha version. It is being tested within the VK ID SDK, and other teams are gradually starting to try it too. For now the plugin is intended for internal use, so there is no supported GitHub repository for it. If you want to try it in your project, you can add it from our Artifactory:

// settings.gradle.kts
pluginManagement {
   repositories {
       ...
       maven(url = "https://artifactory-external.vkpartner.ru/artifactory/vkid-sdk-android/")
   }
}

// build.gradle.kts
plugins {
   id("vkid.health.metrics") version "1.0.0-alpha03" apply true
}

The plugin code is available in the Android VK ID SDK repository. There you can also find out how to set it up.

Conclusion

Thank you for reading the article. I hope you learned something new and maybe got inspired to write your own metrics collector plugin.

We welcome any feedback. Ask questions, leave suggestions and comments under the article. You can also share your opinion about our plugin in the repository's issues.
