A retrospective on participating in the Tinkoff Invest Robot Contest

This is a story about a competition for algorithmic traders: participants' expectations versus reality, the importance of feedback, and even a tangential detour into psychology. Make some tea, sit down, and let's take it from the top.


I’m Pasha and I’m a front-end developer.

My core experience is 8 years of front-end work at Yandex, where I went from SRI student to developer, team lead, and service manager. In parallel, I was into algorithmic trading: I started with Forex and programming a robot for MetaTrader back in the 2010s. A lot of time has passed since then, and I now have both the knowledge and the means to contribute to the industry. To that end, I wrapped up my work with the company this spring and set off on a free voyage.

I have a blog with a selection of articles on algorithmic trading and investing. The blog helps me track and analyze what my audience is interested in.

I also mentor front-end developers in exchange for their work on the open-source tasks I need done. The plan is to record a mini front-end course so I don't have to explain the basics every time.

About the decision to participate in the competition

For a long time I had been nursing the idea that there are very few open-source algo-trading projects in JavaScript: ones with a low entry threshold, with visualization, where you can test various hypotheses about what is happening on the market at your leisure. For example, look for correlations in the candles of different instruments, wire up news processing, or play around with tensorflowjs.
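As a taste of the kind of hypothesis testing I mean, here is a minimal sketch in plain JavaScript (toy data, no trading libraries; `pearson`, `closesA`, and `closesB` are illustrative names, not part of any project mentioned here) that measures the correlation between the close prices of two instruments' candle series:

```javascript
// Pearson correlation between two equally long series of candle closes.
// Values near +1 suggest the instruments move together, near -1 opposite.
function pearson(xs, ys) {
  const mean = (a) => a.reduce((s, v) => s + v, 0) / a.length;
  const mx = mean(xs);
  const my = mean(ys);
  let num = 0;
  let dx = 0;
  let dy = 0;
  for (let i = 0; i < xs.length; i++) {
    num += (xs[i] - mx) * (ys[i] - my);
    dx += (xs[i] - mx) ** 2;
    dy += (ys[i] - my) ** 2;
  }
  return num / Math.sqrt(dx * dy);
}

// Toy close prices for two hypothetical instruments
const closesA = [100, 101, 103, 102, 105];
const closesB = [50, 50.6, 51.4, 51.1, 52.4];
console.log(pearson(closesA, closesB).toFixed(2)); // close to 1: they move together
```

With real candles from a broker API you would feed in the `close` field of each candle instead of the toy arrays.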

Then, out of the blue, a competition for developing trading robots appeared. I had planned to build this project even without the competition, and I wrote about it in my blog and on Pulse on April 9, 2022.

And there it was, the competition! I was delighted by this "sign": the puzzle pieces were falling into place.

The competition rules allow team participation, so I brought in my student, making it as clear as possible who does what, and conducting code reviews out in the open on GitHub, as is.

About the problem I solved

My idea was to build a robot for a housewife, or almost for a housewife. From this follow three key requirements.

  1. A minimum of installation steps; all settings and actions go through the interface.

  2. Predictability and safety. The robot must be able to run in advisor mode, let the user stay in control of the situation, and perform automated trading only with the user's permission.

  3. A convenient infrastructure for developers. It should be an ecosystem of modules that are easy to develop and plug in, whether a module is commissioned individually or downloaded from an open-source repository.

About the requirements of the competition

About my expectations…

The plan was to invest as much as possible in development in order to realize the idea by the end of the competition.

  • The competition motivates you to invest time and not drag your feet, you know why.

  • Winning the competition was not the goal. I wanted to build the project, get access to the chats and other contestants, receive feedback, and talk with the developers.

In addition, the organizers repeatedly said that the scores would be published. I planned to score as many points as possible, or failing that, to understand my weak spots and learn from the best.

At the end of the competition, they once again confirmed that the results would be presented publicly.

About the project itself

The trading system consists of four blocks.

1. SDK for communicating with the broker (taken from open source and extended)

2. Interface

3. Trading robot

4. A controller that ties all these blocks together

Details about them are described in the repositories.

Now for the requirements.

Evaluation criterion "Quality of handling technical errors"

All stages of interaction with the broker, as well as the key components, are wrapped in try/catch. The project header has an indicator showing whether the viewer is connected to the server, and all server and API errors are logged to per-day files. Error logs can be viewed in the viewer by clicking the link in the header. To support this, I extended the SDK I was using.

Evaluation criterion "Code quality"

At a minimum, I added a linter to all the projects. I separated the server and viewer parts so they don't interfere with each other. Questionable places are covered with comments, and I left commented-out code that can serve as an example.

I made the server configurable in the viewer, so that in the future the viewer's API can be fleshed out and reused for other robots. For example, a trading robot written in Python could use this viewer.

Of course, I sacrificed code quality to have time to implement my ideas. I'm definitely not proud of this code; I know all its flaws and problems. But it's definitely not the worst out there, so I'm not embarrassed to publish it.

Evaluation criterion "Ease of assembling the project"

Everything was built so that a housewife could assemble it. Everything is packaged as npm packages, a CLI was designed, and it was tested on Windows, *nix, phones, and servers, with different APIs and the ability to set ports. A QR code was placed in the settings for quick access from a phone.

To install the project, it's enough to have Node.js >= 17 and run the commands

mkdir robot
cd robot
npm i opexbot
npx opexbot

After launch, the server and tokens are set up through a settings page with hints. It doesn't get any easier.

Evaluation criterion "Functionality"

There is more functionality than I can enumerate. In addition to caching and reusing everything that can possibly be cached:

  • Robot backtesting in automatic and step-by-step mode

  • The ability to choose a robot

  • Configuring robot parameters: lots, stops and profit targets, support and resistance levels

  • Displaying and caching the order book with level marks

  • Backtesting and replaying the robot's trading results, taking the order book into account when it is in the cache

  • Selecting an account and a token, with highlighting everywhere showing which mode you are in and what is selected

  • A robot can be created by copy-pasting an example without digging into the code

  • A newly created robot automatically shows up in the viewer

  • The robot can be debugged and developed by viewing all of its variables on a separate page

All this is displayed and done in the viewer. Very clear and simple.
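To give an idea of what step-by-step backtesting means, here is a minimal sketch (illustrative only; the real project's robot interface may differ, and `backtest` and the toy `robot` below are made-up names): cached candles are fed to a robot one at a time while the resulting P&L is tracked.

```javascript
// Feed candles to a robot one by one; the robot returns 'buy' | 'sell' | 'hold'.
// Track realized P&L, closing any open position at the last candle's price.
function backtest(candles, robot, lots = 1) {
  let position = 0; // lots currently held
  let pnl = 0;
  for (const candle of candles) {
    const signal = robot(candle, position);
    if (signal === 'buy' && position === 0) {
      position = lots;
      pnl -= candle.close * lots;        // pay to open
    } else if (signal === 'sell' && position > 0) {
      pnl += candle.close * position;    // receive on close
      position = 0;
    }
  }
  if (position > 0) pnl += candles[candles.length - 1].close * position;
  return pnl;
}

// Toy strategy: buy below 100, sell above 105
const robot = (c) => (c.close < 100 ? 'buy' : c.close > 105 ? 'sell' : 'hold');
```

In step-by-step mode the same loop simply pauses after each candle so you can inspect the robot's state in the viewer.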

Sandbox commands are supported but not used by the robot, because it works with streams, and the bank's sandbox, unlike the production API, doesn't support them. Haha.

Evaluation criterion "Quality of visualization of results (if any)"

Everything works through visualization. Screenshots and features can be seen in the repo. It's all pretty cool.

Evaluation criterion "Quality of documentation and readme"

In each repository, I described the features and added screenshots.

Evaluation criterion "Quality of handling domain-logic errors"

As for logs, the implementation is similar to the "Quality of handling technical errors" criterion. Everything else is locked down through the interface: restrictions are imposed so the robot cannot be launched under the wrong conditions.

Evaluation criterion "Presence of orders executed by the robot"

I threw in some random orders and even came out slightly ahead on them.

Evaluation criterion "Ability to adjust the algorithm's parameters or change it"

Covered under the "Functionality" criterion. A lot is configurable, right down to switching robots in a couple of mouse clicks.

That's about it. I invested not only in what interested me, but also in meeting the evaluation criteria.

How events unfolded

When I submitted my work, I was sure I had done it solidly. It would be bullshit to write that I wasn't thinking about winning at all. But it's important to emphasize that this is not a wounded-pride post from a non-winner. I consciously invested in my idea of a simple working product, rather than in a deep trading-robot study accessible only to programmers.

So I took the result without disappointment. But I was very much counting on the point-by-point scores from the table and on intelligible feedback!

A digression about my psychological portrait

I have a habit of writing “thank you” when I’m not grateful at all.

— You've been fined.

— Thanks.


— Your card has been blocked.

— Thanks.

This time I didn't intend to act any differently, so:

  • I forgave the organizers for the bank website going down for some contestants, leaving them unable to get a token.

  • I accepted that the results were announced not within the promised week, but in a week and a half.

  • I resigned myself to the fact that candles arrived on weekends, when there was no trading, and participants in the chat called it a "kitchen" (bucket shop). The organizers' answer boiled down to: use the bid-verification parameter.

  • I even tried to understand and forgive their decision not to publish the work evaluations.

Yes, yes: only the results were announced, without the promised score table.

But when I received my feedback, a miracle happened: instead of the standard "thank you", I left a dislike and dashed off a reply.

And no, this post is not about psychological miracles, but about the frankly poorly organized finale of the competition, in which even a patient as far gone as me made progress.


Is that all they had to say? What about the rest of the scores? How is this feedback supposed to advance algorithmic trading in any way? I can't call it anything but a slap in the face from the organizers.

"3. code in multiple repositories makes it hard to understand what's going on." (The author's spelling is preserved.) Did an intern review my work? Should I have put all the libraries into one folder and committed node_modules straight into the repository?

I don't dispute the result. But it raises a lot of questions, the first of which was simply ignored.

Meanwhile, other repos got extensive feedback. Some even got two reviews at once.

Discussions flare up: reviewers write that something is missing when it is actually there, or a winning entry gets feedback describing the functionality of a losing one.

And I got neither feedback nor discussion.

In lieu of conclusions

  1. I am open to feedback and a frank critique of my work. Don't hold back, since I didn't get what I came for!

  2. If you're learning front-end and want to chat, pair-program, do code reviews, and get adequate feedback: welcome.

  3. You can support me in any of these ways:

A like, or feedback in an issue

A donation toward the development of algorithmic trading, beginner front-end developers, and open source
