How we got started with Vivid Money for iOS

Hello! I’m Ilya, an iOS Tech Lead at Vivid Money. We have been developing our fintech product for over a year and are now ready to share our experience and knowledge with the community.
This is an introductory article in which I will briefly touch on several technical decisions we made at the start; later articles will analyze the most interesting of them in detail.

Architecture

First, we settled on the architecture of the project: not only the architecture of screens and modules, but all the other architectural decisions as well. It is impossible to cover all of them in this section, so we will only touch on the architecture of modules, screens, and dependency injection.

Project architecture

Since the project was going to be large and split into products, we decided to divide it into several modules. This structures the code better and also makes it easier to develop in product teams. The module architecture looks like this:

There are 4 layers in the diagram:

  • Core. Projects that are either not tied to the application and can be reused anywhere, or that all higher layers depend on.

  • Platform. There are two projects in this layer. DesignKit contains everything related to the application’s UI, from colors and fonts to ready-made components or even screens. Platform serves as the base for all feature projects and the main application; it contains services, common screens, configurations, entities, and so on.

  • Features. Projects in which individual features or whole products are developed. These projects do not depend on each other, which makes them faster to develop, easier to test, and keeps code from different products from mixing.

  • App. It brings all projects together.

Screen architecture

We wanted something that would satisfy our needs without anything superfluous. As a result, we arrived at VIP (View, Interactor, Presenter): we kept the basics of VIPER but removed the Router and Entity. This separation makes a module easier to test. It also came in handy in places where we needed different implementations of the view or the interactor (yes, this really happens).
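To make the split more concrete, here is a minimal sketch of what a VIP module can look like. The type names and logic are illustrative, not our production code:

// Illustrative sketch of a VIP module; names are hypothetical.

// View: renders prepared data and forwards user actions to the Interactor.
protocol ProfileViewInput: AnyObject {
    func display(greeting: String)
}

// Interactor: handles user actions and business logic.
protocol ProfileInteractorInput {
    func viewDidLoad()
}

// Presenter: maps business data into something the View can display.
protocol ProfilePresenterInput {
    func present(userName: String)
}

final class ProfileInteractor: ProfileInteractorInput {
    // Force-unwrapped dependency, assigned by the Assembly (see below).
    var presenter: ProfilePresenterInput!

    func viewDidLoad() {
        // In a real module the name would come from an injected service.
        presenter.present(userName: "Jane")
    }
}

final class ProfilePresenter: ProfilePresenterInput {
    weak var view: ProfileViewInput?

    func present(userName: String) {
        view?.display(greeting: "Hello, \(userName)!")
    }
}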

We replaced the Router with a Coordinator. This is a good pattern that keeps modules independent of each other and concentrates all the navigation logic of a single user story in one class.
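As a rough illustration of the idea (the flow and types here are hypothetical), a coordinator owns the navigation controller and decides which screen comes next, so the screens themselves know nothing about each other:

import UIKit

// Hypothetical coordinator for an onboarding flow; screen creation is simplified.
final class OnboardingCoordinator {
    private let navigationController: UINavigationController

    init(navigationController: UINavigationController) {
        self.navigationController = navigationController
    }

    // Entry point of the user story.
    func start() {
        let welcome = UIViewController() // an assembled VIP module in the real project
        navigationController.pushViewController(welcome, animated: false)
    }

    // Called by the current screen when it finishes; the coordinator decides what is next.
    func showNextStep() {
        let next = UIViewController()
        navigationController.pushViewController(next, animated: true)
    }
}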

Dependency injection

We do not use a library for dependency injection. I would like to leave it at that, but it deserves a short explanation.

Firstly, we prefer not to pull in third-party frameworks, especially ones that can easily be replaced with our own solution. Secondly, we have not found any practical benefit in DI frameworks.

In our project, dependency injection is simple and has two parts:

  • The Container class, which holds all the dependencies declared as variables. This class is extended in every project and hidden behind a protocol. If, for example, you need a dependency from the Platform module, you write something like let d = (Container.shared() as PlatformContainer).dependency (it looks a little better when not squeezed onto one line).

  • The Assembly class, which injects dependencies into each specific module (that is, assembles it). It uses the Container to obtain dependencies. All dependencies in the components of a VIP module are force-unwrapped variables whose values are assigned by the Assembly; into all other classes, dependencies are injected through the initializer. A short sketch of how the two parts fit together is shown below.
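Here is a simplified sketch of these two parts working together; the service and module names are made up for illustration, and the shared accessor is reduced to a static property:

// Illustrative sketch; FeedService and the Feed module are hypothetical.
final class FeedService {}

// Each project exposes the dependencies it owns through a protocol like this one.
protocol PlatformContainer {
    var feedService: FeedService { get }
}

// The shared Container adopts the container protocols of every project it serves.
final class Container: PlatformContainer {
    static let shared = Container()
    lazy var feedService = FeedService()
}

// A VIP component with a force-unwrapped dependency, filled in by the Assembly.
final class FeedInteractor {
    var feedService: FeedService!
}

// The Assembly pulls dependencies from the Container and wires up the module.
enum FeedAssembly {
    static func assemble() -> FeedInteractor {
        let container = Container.shared as PlatformContainer
        let interactor = FeedInteractor()
        interactor.feedService = container.feedService
        return interactor
    }
}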

Managing third-party dependencies

In terms of dependency management, at first everything was like in most projects: we used CocoaPods. Firstly, it is a proven dependency manager; secondly, almost all open-source libraries support it.

However, after a while we accumulated quite a few dependencies we could not do without (Firebase, Amplitude, and other similar frameworks), and constantly rebuilding them took a lot of time (the exact amount is hard to measure, since both the codebase and the dependencies themselves kept changing). So we decided to try Carthage.

In the first iteration we ran both managers side by side, since not all libraries supported Carthage. After a while we switched to Carthage completely and also adopted Rome, a utility for caching prebuilt libraries.

We also tried SPM, but at the time a number of problems prevented the move: no support for custom build configurations, issues with updating a library when its version changes, and missing SPM support in some libraries. We hope these problems will be resolved so that we can migrate to this dependency manager completely.

Testing

At the very beginning of development we dreamed of doing without manual testers, which would allow short release cycles and remove the human factor from functional testing. Unfortunately, circumstances did not let us achieve this at the start, but we are steadily moving in that direction. Either way, we had to think about how to organize product testing, and we concluded that we would write both unit and UI tests.

For unit tests we use the SwiftyMocky framework, which generates mocks for types and provides many useful test helpers. We follow the established Given-When-Then structure so that all tests look the same and are logically organized.
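A shortened example of what such a test can look like. The types are illustrative, and the mock is written by hand here for brevity; in the real project SwiftyMocky would generate it:

import XCTest

// A small service protocol and a hand-written stand-in for a generated mock.
protocol RateService {
    func currentRate() -> Double
}

final class RateServiceMock: RateService {
    var stubbedRate: Double = 0
    func currentRate() -> Double { stubbedRate }
}

// The class under test.
final class RateFormatter {
    private let service: RateService
    init(service: RateService) { self.service = service }

    func formattedRate() -> String {
        String(format: "%.2f", service.currentRate())
    }
}

final class RateFormatterTests: XCTestCase {
    func test_formattedRate_roundsToTwoDigits() {
        // Given
        let service = RateServiceMock()
        service.stubbedRate = 1.23456
        let formatter = RateFormatter(service: service)

        // When
        let result = formatter.formattedRate()

        // Then
        XCTAssertEqual(result, "1.23")
    }
}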

Unit tests are written mainly for common components (utilities, services, etc.) and for classes with complex business logic (most often the Presenter and the Interactor), which would be quite problematic to verify with UI tests.

UI tests appeared much later. Here, too, we did not invent anything special: we wrote several helper classes to implement the Page Object pattern.
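A rough sketch of the pattern (the screen and accessibility identifiers are hypothetical): a page object wraps the element queries and actions of one screen, so the tests themselves read as user scenarios:

import XCTest

// Hypothetical page object for a login screen.
struct LoginPage {
    let app: XCUIApplication

    var loginField: XCUIElement { app.textFields["login_field"] }
    var passwordField: XCUIElement { app.secureTextFields["password_field"] }
    var signInButton: XCUIElement { app.buttons["sign_in_button"] }

    // Performs the whole action and returns the next page in the chain.
    func signIn(login: String, password: String) -> DashboardPage {
        loginField.tap()
        loginField.typeText(login)
        passwordField.tap()
        passwordField.typeText(password)
        signInButton.tap()
        return DashboardPage(app: app)
    }
}

struct DashboardPage {
    let app: XCUIApplication
    var greetingLabel: XCUIElement { app.staticTexts["greeting_label"] }
}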

UI tests in our project are divided into two types: component and end-to-end. Component tests check an individual screen or part of it using mocks, while end-to-end tests walk through a chain of screens against a real API, but in the dev environment.

We also implemented snapshot tests as an experiment, but it is too early to talk about their benefits. Ideally, I would like to use them to test components from the design system.
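Purely as an illustration (the library, component, and names here are assumptions rather than our actual setup), a snapshot test for a design-system component written with a library such as Point-Free's SnapshotTesting could look roughly like this:

import SnapshotTesting
import UIKit
import XCTest

// Hypothetical snapshot test for a design-system button.
final class PrimaryButtonSnapshotTests: XCTestCase {
    func test_primaryButton_defaultState() {
        let button = UIButton(type: .system)
        button.setTitle("Continue", for: .normal)
        button.frame = CGRect(x: 0, y: 0, width: 200, height: 44)

        // Records a reference image on the first run, compares against it on later runs.
        assertSnapshot(matching: button, as: .image)
    }
}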

Client API generation

Our backend is divided into microservices, which means the application also talks to several APIs. Keeping track of each one and updating the code manually is too time-consuming, so we decided to automate it.

Each microservice has a Swagger specification from which we generate frameworks using Swagger Codegen. We slightly adjusted the generation templates to meet our requirements and automated the framework update process in CI.

Each generated API client lives in a separate repository and is added to the project via Carthage.

There is still room for improvement in client API generation, but this approach has already given a huge boost to development speed and eliminated the need to update API code manually.

Code standards

We needed to take steps to ensure that the code is understandable to everyone anywhere in the application and can be written quickly. We took several such steps.

The most basic is a written code convention that collects all our rules and syntax guidelines. Almost all of them are checked with SwiftLint, with custom rules for some. Code style conventions help us review code faster and make it easier to read.

We also documented the following:

  • Rules for working with the repository: how to name branches, how to write commit messages, and so on;

  • The task workflow: which tasks can be picked up, task priorities, task statuses, how to create a pull request;

  • Patterns and mechanisms used: how to solve typical problems (caching data, creating services, etc.);

  • Terminology: typical method names and business terms used in code.

All of this helps us design things consistently and avoid re-solving problems that have already been solved.

We also use Danger on CI to validate pull requests. Our list of rules is small so far: checking that the required pull request fields are filled in, checking the size of the change, searching for TODOs, SwiftLint warnings, and a couple of advisory messages. This keeps important details in view and reminds us of things that are easy to forget.
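To give a feel for such rules, here is a sketch of what a few of them could look like in a Dangerfile.swift for Danger Swift; the thresholds and wording are illustrative, not our actual configuration:

// Dangerfile.swift: a sketch of pull request checks; thresholds are made up.
import Danger

let danger = Danger()

// Require a filled-in pull request description.
if (danger.github.pullRequest.body ?? "").isEmpty {
    fail("Please fill in the pull request description.")
}

// Warn on very large pull requests.
let changedLines = (danger.github.pullRequest.additions ?? 0) + (danger.github.pullRequest.deletions ?? 0)
if changedLines > 800 {
    warn("This pull request is quite big; consider splitting it.")
}

// Remind about leftover TODOs in changed Swift files.
let touchedFiles = danger.git.modifiedFiles + danger.git.createdFiles
for file in touchedFiles where file.hasSuffix(".swift") {
    if danger.utils.readFile(file).contains("TODO") {
        warn("There is a TODO in \(file).")
    }
}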

Automation

In a project with a large codebase and a growing number of developers, automation is indispensable, so it is wise to pay attention to it from the very beginning of the project.

We wrote several scripts of our own to automate this work:

  • A script for generating VIP modules, which speeds up screen development.

  • A script for generating feature projects.

  • Scripts for downloading localization, feature toggles, remote configuration, and other resources the application needs. This item deserves a little more detail. We download the latest versions of all these resources when the application launches, but in case the download fails, we bundle a default version of the resources that is close to the real one. To avoid downloading all these files by hand every time, these scripts run before the application is built in CI.

To avoid merge conflicts in project files, we use XcodeGen, which generates project files from YAML specifications.

With this many scripts, ease of use became a question: remembering their names, arguments, and order of invocation is not the most exciting task. So we created one script that calls all the others. Its job is simple: bring the project up to date. Along the way it downloads resources, updates dependencies, generates project files and mocks, and configures schemes. At first we did not run this script by hand but triggered it automatically on every merge, pull, or checkout. Since the script takes a while to run, this led to constant waiting, even when, for example, checking out a new branch changed nothing in the project. So for now it is run manually, but if we could significantly reduce its run time, we would go back to the automatic trigger.

We also created a macOS application that provides an interface for invoking all of our scripts. It was very simple and saw little use, but we are currently reworking it to make working with the project easier.

Project setup

Since our project is not the easiest to set up and requires a number of utilities and calls to various scripts, we decided to simplify its configuration.

At first it was an executable file that had to be run to install all the dependencies (Ruby, Homebrew, Python, and so on) and apply the necessary settings. Later we moved the project setup to Ansible. This lets us keep everything in one place and configure not only employees’ computers but also build agents.

Summing up

We have talked briefly about the decisions we made and the experience we gained while taking our first steps in the project.

We plan to keep sharing our development experience, as we consider it an important contribution to the community. To help us do it better, leave comments and constructive criticism; we will be very grateful.

Thank you all!
