How I tried to build my own freelance project aggregator, and why it didn’t take off

Yes, there have been many similar stories here: this seemingly tempting path attracts freelancers with enviable regularity, and they all go down the same dead-end road. No one has yet managed to achieve success this way, and now I know why. That is what I want to tell you about, hiding no details, technical and otherwise.

The Birth of an Idea

I have been freelancing for a long time (since about 2013), and almost as long ago I realized that since response speed is one of the main things that lets you win an order ahead of other applicants, it would be nice to build a tool that makes responding quickly easier.

Doing this for personal use was trivially easy and quick: in just one evening, back in 2013 or 2014, I wrote a small parser for the fl.ru project feed. It simply scanned the first page of the feed continuously and displayed a message whenever a new project appeared; clicking the link took you to the project page, where you could respond right away.
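
For the curious, the whole thing boiled down to a loop roughly like this (a minimal sketch, not the original code; the URL and the link-extraction regex are purely illustrative):

// A minimal sketch of the original "scan the first page, report new projects" notifier.
// The URL and the link-extraction regex are illustrative, not the real fl.ru markup.
using System.Text.RegularExpressions;

var http = new HttpClient();
var seen = new HashSet<string>();

while (true)
{
    string html = await http.GetStringAsync("https://www.fl.ru/projects/");

    // Naive link extraction; real code would use a proper HTML parser.
    foreach (Match m in Regex.Matches(html, @"href=""(/projects/\d+[^""]*)"""))
    {
        string link = m.Groups[1].Value;
        if (seen.Add(link))
            Console.WriteLine($"New project: https://www.fl.ru{link}");
    }

    await Task.Delay(TimeSpan.FromSeconds(10)); // keep polling the first page of the feed
}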

Without a doubt, many people have created such notifications for personal use. Surely, hundreds, maybe even thousands, of them have been written over time.

But I went a little further — I added filters by keywords and sections (probably many people do this too), and… a killer feature: voice pronunciation of project titles. Now I no longer had to be constantly distracted to read notifications. In 99% of cases it was enough to hear the project title, spoken by a voice that was not the most pleasant but quite intelligible (I believe it was called Microsoft Anna) from the TTS engine built into Windows, and I immediately understood whether it was worth rushing to the computer to respond urgently, or whether I could forget about it and keep doing my own thing.
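
For reference, the built-in Windows voice is reachable from .NET through System.Speech; a minimal sketch of the kind of call involved (on modern .NET this needs the System.Speech NuGet package and works on Windows only):

// Speaking a project title with the built-in Windows voice (System.Speech, Windows only).
// On .NET 6+ this requires the System.Speech NuGet package; on .NET Framework it ships in the box.
using System.Speech.Synthesis;

using var synth = new SpeechSynthesizer();
synth.SetOutputToDefaultAudioDevice();
synth.Speak("Need a website parser, budget 3000 rubles"); // blocks until playback finishes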

It turned out to be quite a convenient thing, although it had an interesting psychological side effect: I started waiting for simple tasks and doing only them. Why suffer through a spec, figuring out whether I can handle the project and what risks it hides, when you can wait a little longer and, thanks to a quick response, grab a simple and clear task instead? That sounds like reasonable logic, but in the long run it probably did not help my creative growth or push me toward new areas of programming. As you have probably guessed, my freelance work was programming: mostly website parsers, browser automation, all kinds of small application utilities, and anything else that could be done quickly for the coveted 1000–5000 rubles for a few hours of work.

Naturally, such work quickly becomes boring. Having decided that the notifier could be useful to others besides me, I tried turning it into a public service back in 2016. But that attempt was not very serious, and the final product looked dubious even from a purely visual standpoint (lack of experience and a commitment to the, even then, hopelessly outdated WinForms took their toll). So we will talk not about 2016 but about 2023, when I returned to this idea, deciding to implement it at a qualitatively new level and already having some experience in full-stack development.

The idea was simple: create a service with voice notifications, plus notifications through a Telegram bot. It had to work on all operating systems: Windows, Mac, and Linux. The user only needs to keep the computer on and set the speaker volume to a comfortable level.

Technical details of implementation

If you are not interested in the technical details of the service's internals, skip ahead to the Marketing Epic Fail section.

Tech stack

It is clear that the main logic in such a project should work on the server, and the client should be as thin as possible, implementing only the interface.

For the backend, I use what I more or less know:

  • VDS with 4 cores, 8GB RAM and 80GB disk;

  • Ubuntu 22.04;

  • .NET 6.0 → 7.0 → 8.0 (migrated during development) + C#;

  • MongoDB;

  • SignalR;

  • Nginx;

  • Telegram.Bot (the .NET client library for the Telegram Bot API).

When choosing frontend technologies, the approach was the same: use what I already know and can work with. That is, first and foremost, the Quasar framework; I used Quasar 2, which is built on Vue 3.

By changing just a few settings, it lets you build the same project for the Web, as a PWA, and as Electron applications for all desktop operating systems.

The backend was created using Visual Studio, the frontend – Visual Studio Code.

Service architecture

Overall, the service architecture looks something like this:

I drew it as best I could…

All business logic is handled by the ASP.NET WebApi application, which operates as a Linux service.

The freelance exchanges themselves are parsed by separate services that scan the exchange feeds in several threads and continuously push every project they find to the main service over localhost; the main service's job here is to ignore projects that have already been seen before and process only the new ones.

If a new project is found, it is added to the database, and every client connected via SignalR (WebSocket) whose configured filters match the project receives a voice notification.
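
A minimal sketch of that ingest path is below; every type and method name in it (ProjectDoc, NotificationsHub, ProjectsController) is invented for illustration and is not the service's real code:

// Sketch of the ingest path: the parser services POST every project they find to localhost,
// the main service skips duplicates and pushes a notification to subscribed SignalR clients.
// All names here are invented for illustration.
using Microsoft.AspNetCore.Mvc;
using Microsoft.AspNetCore.SignalR;
using MongoDB.Driver;

public class ProjectDoc
{
    public string ExternalId { get; set; } = "";
    public string SourceDomain { get; set; } = "";
    public string Title { get; set; } = "";
    public string Url { get; set; } = "";
}

public class NotificationsHub : Hub { }

[ApiController]
[Route("internal/projects")]
public class ProjectsController : ControllerBase
{
    private readonly IMongoCollection<ProjectDoc> projects;
    private readonly IHubContext<NotificationsHub> hub;

    public ProjectsController(IMongoCollection<ProjectDoc> projects, IHubContext<NotificationsHub> hub)
    {
        this.projects = projects;
        this.hub = hub;
    }

    [HttpPost]
    public async Task<IActionResult> Ingest(ProjectDoc doc)
    {
        // Ignore projects that have already been seen (a unique index on ExternalId is assumed).
        if (await projects.Find(p => p.ExternalId == doc.ExternalId).AnyAsync())
            return Ok();

        await projects.InsertOneAsync(doc);

        // The real service matches each user's filters and targets specific connections;
        // here we simply notify a group subscribed to the source exchange.
        await hub.Clients.Group(doc.SourceDomain).SendAsync("newProject", doc.Title, doc.Url);
        return Ok();
    }
}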

A little about text-to-speech. I could not find any offline solution with decent synthesis quality; they were all barely better (if at all) than good old Microsoft Anna, so I decided to use a cloud service: https://cloud.speechpro.com/service/tts

I liked both the quality and the price: with quality comparable to Yandex's solution, the price is quite affordable, a little over 500 rubles per million characters, roughly three times cheaper. But the documentation leaves much to be desired, and there was no decent .NET library, so I had to implement the API client myself.

Another problem: on some days speechpro suddenly(!) stops working for several hours. A strange combination of high-quality voice synthesis and appalling quality of service… I had to hook up Yandex's equivalent as a fallback so that users would not occasionally be left without voice notifications.
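
The fallback amounts to something like this (a sketch; ITtsClient and its implementations are hypothetical wrappers around the speechpro and Yandex HTTP APIs, not real libraries):

// Sketch of the primary/fallback synthesis logic. ITtsClient and its implementations are
// hypothetical wrappers over the speechpro and Yandex SpeechKit HTTP APIs.
public interface ITtsClient
{
    Task<byte[]> SynthesizeWavAsync(string text, CancellationToken ct);
}

public class FallbackTts
{
    private readonly ITtsClient primary;   // speechpro
    private readonly ITtsClient backup;    // Yandex

    public FallbackTts(ITtsClient primary, ITtsClient backup)
    {
        this.primary = primary;
        this.backup = backup;
    }

    public async Task<byte[]> SynthesizeAsync(string text, CancellationToken ct)
    {
        try
        {
            // Give the primary provider a short time budget so its outages don't stall notifications.
            using var timeout = CancellationTokenSource.CreateLinkedTokenSource(ct);
            timeout.CancelAfter(TimeSpan.FromSeconds(5));
            return await primary.SynthesizeWavAsync(text, timeout.Token);
        }
        catch (Exception) // timeout, HTTP error, quota, etc.
        {
            return await backup.SynthesizeWavAsync(text, ct);
        }
    }
}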

Synthesized audio is converted from WAV (yes, speechpro returns WAV only) to MP3 and sent to clients over WebSocket. I considered Opus or some other fancier audio format than MP3, but a series of experiments showed the gain was not that impressive, and clients might run into problems: every browser supports MP3, while support for other formats varies. (Remember, the desktop applications are built on Electron and are essentially a browser.)
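
The WAV-to-MP3 step can be done, for example, by shelling out to ffmpeg; that this is what the service actually uses is my assumption, and the bitrate below is only illustrative:

// Converting synthesized WAV to MP3 by shelling out to ffmpeg.
// That ffmpeg (rather than a managed encoder) is used is an assumption; the bitrate is illustrative.
using System.Diagnostics;

static async Task ConvertWavToMp3Async(string wavPath, string mp3Path)
{
    var psi = new ProcessStartInfo
    {
        FileName = "ffmpeg",
        Arguments = $"-y -i \"{wavPath}\" -codec:a libmp3lame -b:a 64k \"{mp3Path}\"",
        RedirectStandardError = true,
        UseShellExecute = false
    };

    using var proc = Process.Start(psi)!;
    string stderr = await proc.StandardError.ReadToEndAsync();
    await proc.WaitForExitAsync();
    if (proc.ExitCode != 0)
        throw new InvalidOperationException($"ffmpeg failed: {stderr}");
}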

Each synthesized project title is kept in the database for 3 months and deleted if it has not been reused at least once during that time. This cuts requests to the cloud synthesis services by roughly 15%, because certain titles repeat quite often (for example, “make a website”, “draw a logo”, etc.).
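
The cache itself is simple: key the audio by a hash of the title, refresh a last-used timestamp on every hit, and periodically purge entries untouched for about 3 months. A sketch with invented class and field names:

// Sketch of the synthesized-title cache: look up by a hash of the title, refresh LastUsedAtUtc
// on every hit, and periodically purge entries not reused for ~3 months. Names are invented.
public class TtsCacheEntry
{
    public string TitleHash { get; set; } = "";
    public byte[] Mp3 { get; set; } = Array.Empty<byte>();
    public DateTime LastUsedAtUtc { get; set; }
}

public class TtsCache
{
    private readonly IMongoCollection<TtsCacheEntry> col;
    public TtsCache(IMongoCollection<TtsCacheEntry> col) => this.col = col;

    public async Task<byte[]?> TryGetAsync(string titleHash)
    {
        // Atomically bump the timestamp and return the cached audio, if present.
        var entry = await col.FindOneAndUpdateAsync(
            x => x.TitleHash == titleHash,
            Builders<TtsCacheEntry>.Update.Set(x => x.LastUsedAtUtc, DateTime.UtcNow));
        return entry?.Mp3;
    }

    public Task PurgeStaleAsync() =>
        col.DeleteManyAsync(x => x.LastUsedAtUtc < DateTime.UtcNow.AddMonths(-3));
}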

Projects are parsed from the exchanges through free proxies, obtained in bulk from https://best-proxies.ru/ for just 500 rubles a month. Without proxies, the freelance exchanges could simply block the server's IP address: the load it creates is small but constant.

Initially, there was an idea to make our own scanner of sites where free proxies are posted, but saving 500 rubles a month is clearly not worth the resulting hassle of fixing errors and fighting the constantly changing methods of protection and obfuscation that are actively used there.

In about 5 days, up to 150 thousand proxies accumulate, and although 90% of them go bad very quickly, the rest are enough to parse the exchanges reliably. On average, about 7 seconds pass from the moment a new project appears on an exchange until the user is notified; in rare cases, up to 30 seconds.
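
Fetching an exchange page through a random proxy from the pool looks roughly like this (a sketch; pool management is reduced to an in-memory list, whereas the real service keeps proxies in the database):

// Sketch of fetching an exchange page through a random proxy from the pool.
// Pool management is reduced to an in-memory list; the real service stores proxies in the database.
using System.Net;

static async Task<string?> FetchViaProxyAsync(string url, IReadOnlyList<string> proxyPool)
{
    // proxyPool entries are expected to look like "http://1.2.3.4:8080"
    var proxyAddress = proxyPool[Random.Shared.Next(proxyPool.Count)];

    var handler = new HttpClientHandler
    {
        Proxy = new WebProxy(proxyAddress),
        UseProxy = true
    };

    using var http = new HttpClient(handler) { Timeout = TimeSpan.FromSeconds(10) };
    try
    {
        return await http.GetStringAsync(url);
    }
    catch (Exception) // dead proxy, timeout, block page, etc.; the caller retries with another proxy
    {
        return null;
    }
}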

There was another use for the large list of proxies. Initially, I planned something like my own protection system against DDoS attacks at the application level (L7). For example, even a schoolboy could easily take down a server with just a full-text search across the entire project database, and in my nightmares I already imagined insidious competitors who would DDoS the site at the very moment a stream of new users was coming to it from paid advertising.

The hoster (VDSina, if anyone is interested) promised to handle protection against attacks at the L3–L4 levels. I don't know how effective that protection is, but in any case it would not have saved me from L7, so I spent a whole month devising various algorithms for blocking IPs that send abnormal traffic. But no matter how clever I was, that still would not have helped against a distributed attack.

Fortunately, I figured that schoolkids and other would-be attackers would most likely use the same free proxies that I, conveniently, already have. So the fetched proxies are kept in the database for quite a long time, and when an abnormal burst of visitors is recorded, traffic from known proxy addresses is cut off first, and then from the entire ranges they belong to. This can produce false positives, but it should handle the critical task of protecting the backend.
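
The blocking itself can live in ordinary ASP.NET Core middleware that consults the stored proxy list; a sketch with an invented IProxyBlocklist abstraction (the range-matching logic is omitted):

// Sketch of the "cut off known proxy IPs during a traffic spike" idea as ASP.NET Core middleware.
// IProxyBlocklist is an invented abstraction over the stored proxy addresses; range matching is omitted.
using System.Net;

public interface IProxyBlocklist
{
    bool IsUnderAttack { get; }            // set when an abnormal burst of visitors is detected
    bool IsKnownProxy(IPAddress address);  // exact address or a containing range
}

public class ProxyBlockMiddleware
{
    private readonly RequestDelegate next;
    private readonly IProxyBlocklist blocklist;

    public ProxyBlockMiddleware(RequestDelegate next, IProxyBlocklist blocklist)
    {
        this.next = next;
        this.blocklist = blocklist;
    }

    public async Task InvokeAsync(HttpContext context)
    {
        var ip = context.Connection.RemoteIpAddress;
        if (blocklist.IsUnderAttack && ip is not null && blocklist.IsKnownProxy(ip))
        {
            context.Response.StatusCode = StatusCodes.Status429TooManyRequests;
            return;
        }
        await next(context);
    }
}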

Well, yes… After half a year of the service running, there had still not been a single attack on it. So the month spent developing a protection system was wasted. It would have been much smarter to concentrate on the core logic and use some off-the-shelf WAF for protection, as all normal people do.

Theoretically, I could have used Cloudflare, but then there was a problem with the mail server: mail has to go out from the server's real IP with a matching reverse DNS record, which makes using Cloudflare to hide that IP pointless. And without a reverse record, the chance of emails ending up in spam rises significantly. I was advised to use one of the mailing services, but the free-tier limits seemed too modest to me, so Postfix is used for mail, and the emails seem to arrive normally.

Another external API I used was, of course (where would we be without it), a payment aggregator, because the plan was to charge each user 10 Russian rubles a day for the voice notification functionality.

Hidden text

When registering as an individual entrepreneur, do not indicate your permanent mobile number, but one that you can turn off and forget about. Otherwise, you will have at least a year ahead of you, during which you will be forced to receive tons of calls every day with annoying offers from banks to open a business account. Because the number you indicate will definitely be sold to them.

When developing the backend, 80% of the time went into infrastructure code (a home-made WAF, admin panel, data access layer, logging, and so on) and only 20% directly into the core logic.

When working on the backend, you should probably always start with the business logic and leave the infrastructure for later. I had derived this rule empirically before, but unfortunately did not follow it fully, so I spent weeks on infrastructure code that was never used and simply had to be thrown out when, during development of the business logic, it turned out to be essentially unnecessary, or that something different was needed instead.

Also, to avoid wasting time, you should use ready-made solutions as much as possible instead of reinventing the wheel. Studying a ready-made solution can take about as long as building your own, but the result will most likely be of higher quality, and the knowledge gained will be useful in the future, whereas your home-grown version will remain a disposable piece of throwaway code that will be scary to look at in a month.

Database

MongoDB is probably a good fit for developing something like this. Luckily, transactions are almost never needed here, because so far they seem to be MongoDB's weak point. It's not that they don't exist, but they are awkward to work with and behave in, well, not always obvious ways; that's a topic for another day.

Another thing that eats a lot of time with MongoDB is working out how to express fairly complex logic: inserts, deletes, updates, increments/decrements, pushing values into nested collections. Under multithreading (which the Web naturally implies), these operations must be performed atomically, within a single document update, so that concurrent access cannot leave the database in an inconsistent state.

Therefore, it was necessary to construct structures like these:

public async Task<bool> UpdateFilterDeleteStrongStopAsync(string userId, string filterId, string tag)
{
    // Match the user's document and, inside it, the specific filter by FilterId.
    var filter = Builders<DbUserFilter>.Filter.Eq(x => x.UserId, userId)
        & Builders<DbUserFilter>.Filter.ElemMatch(x => x.Filters, Builders<Filter>.Filter.Eq(x => x.FilterId, filterId));

    // Atomically pull the tag from the StrongStops array of the matched filter element.
    var update = Builders<DbUserFilter>.Update.Pull(x => x.Filters.FirstMatchingElement().StrongStops, tag);

    var r = await collection.UpdateOneAsync(filter, update);
    return r.ModifiedCount > 0;
}

And sometimes there are outright anti-patterns (logic in strings), when googling Stack Overflow did not turn up a more elegant solution in a reasonable time:

public async Task<bool> UpdateFilterSiteIsEnabledAsync(string userId, string filterId, string domain, bool on)
{
    // String-based field path plus arrayFilters: toggle IsEnabled on one site entry of one filter.
    var r = await collection.UpdateOneAsync(x => x.UserId == userId,
        Builders<DbUserFilter>.Update.Set("Filters.$[f].SitesSectionsFilters.$[s].IsEnabled", on),
        new UpdateOptions
        {
            ArrayFilters = new List<ArrayFilterDefinition>
            {
                new BsonDocumentArrayFilterDefinition<BsonDocument>(new BsonDocument("f.FilterId", filterId)),
                new BsonDocumentArrayFilterDefinition<BsonDocument>(new BsonDocument("s.Domain", domain))
            }
        }
    );
    return r.ModifiedCount > 0;
}

SignalR

SignalR is used to deliver the voice notifications. It is a rather murky and buggy thing, but I did not find any acceptable alternatives (maybe I searched poorly), so I had to use it. During testing I had to write several hacks that forcibly close and re-establish the connection in certain cases, and in the end everything seemed to work as it should.
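
On the server side, part of taming SignalR is simply making the timeouts explicit. The option names below are real ASP.NET Core SignalR settings; the values are illustrative, not the service's actual configuration:

// Explicit SignalR keep-alive and timeout settings in Program.cs (values are illustrative).
var builder = WebApplication.CreateBuilder(args);

builder.Services.AddSignalR(options =>
{
    options.KeepAliveInterval = TimeSpan.FromSeconds(15);      // how often the server pings clients
    options.ClientTimeoutInterval = TimeSpan.FromSeconds(60);  // drop clients silent for longer than this
    options.HandshakeTimeout = TimeSpan.FromSeconds(15);
});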

Frontend

The frontend is pretty simple: it's just an SPA written with Quasar, which is built on VueJS.

Quasar ships with a great library of visual components in the Material Design style, and in this case that library was enough. From previous experience I know that these components, although quite high quality, are fairly basic, and their functionality is often not enough for anything serious; there is no good image slider, for example. But again, that was not needed here.

In essence, for me, front-end programming is a routine mixed with attempts to do something that doesn't look too ugly, and there is no point in describing this routine.

I will only tell you about the fundamental problems that I had to face.

Since the front is just a site, it would have been possible to leave everything within the browser, if not for two circumstances:

  1. For continuous use, it is inconvenient to keep a browser tab open all the time – you can accidentally close it.

  2. All modern browsers, without exception, have behavior that is undesirable in our case: they do not allow sound to play until the user interacts with the page (at least clicks somewhere on the site). It is hardly reasonable to make the user do something every time they open the site, even if it is only once at the start of the “work shift”; they can simply forget and miss the voice notifications.

Luckily, Quasar makes it easy and seamless to turn your SPA into a PWA with almost no code rewriting.

A PWA no longer requires any prior user action to play audio, and everything would be fine if it weren't for one scenario…

The thing is, you cannot install a PWA from just any browser. You can do it from Chrome or Edge (if we are talking about Windows), but not from Firefox.

Meanwhile, Firefox is still the default browser for some people, myself included. Going through Chrome once to install the PWA is not such a big deal, but after that every link from the app sneakily opens in the browser it was installed from, not in the default browser! And you constantly have to follow links to projects, and it is much nicer when they open in a familiar browser.

After a long and painful search for a way to override this behavior, I had to admit defeat. For obvious reasons, browser developers want their own browser to be used and are not going to solve my problems at their expense. Besides, this is really the OS's area of responsibility, so even if they wanted to, it is not certain they could.

Well, there's only one way left – to compile the project as an Electron application. Quasar makes it easy to do this, and again, almost nothing needs to be changed in the code.

True, this had to be done for all OS. But now you can download an Electron application from the site, which is an exact copy of this very site. Cheap and cheerful.

In a virtual machine it seems to work on both Mac and Linux… though I'm not sure it will work for everyone. Since I had neither the hardware nor the money for large-scale testing, I decided to move quickly to the most interesting part: advertising the application.

So, let's move on to the sad part of our story.

Marketing Epic Fail

Initially, I harbored no illusions about becoming a millionaire thanks to this service. After the well-known events, the IT market in Russia is shrinking, and the freelance market with it: competition is growing while budgets are falling, and freelancers stop being freelancers and go work as warehouse loaders.

Hidden text

By the way, the current owners of one well-known freelance exchange (fl.ru) cannot grasp this obvious circumstance and keep taking absurd steps to “fight dumping” and the like, as a result of which freelancers, and then clients, simply flee to other exchanges that are adapting to current realities more sensibly.

And the idea itself is long past its prime: something like this would have made sense 10 years ago, but now it is not very relevant, even considering the original approach with voice notifications (no one else has this, honestly, I checked).

And yet, I had earlier established that the large RuNet exchanges currently have at least 20 thousand active freelancers who are my target audience: they monitor the largest freelance exchanges for suitable work, and responding to projects quickly matters to them.

If I could attract at least 1,000 freelancers out of that number (5% of the existing target audience), and half of them paid 10 rubles a day (or everyone paid 10 rubles every other day), then 500 × 10 rubles × 30 days = 150 thousand rubles a month would be an excellent reward for my work.

Such calculations are extremely naive from a marketing point of view, but their main problem is that they look tempting and therefore hinder more realistic thinking.

The right question is: what advertising strategy will let you acquire a user for less money than that user brings in profit? In online business you have no chance without a clear, experience-based understanding of this strategy, and it must not be replaced with hopes and guesses built on optimistic fantasies and assumptions.

Theoretically, I knew this in advance, but a fool learns only from his own mistakes, so I had to go through the difficult path of knowledge personally.

The main platform where I was going to advertise the service was supposed to be Telegram, or more precisely, Telegram bots.

The reasoning went like this… Since we are dealing with freelancers, many of them probably use some bots, at least the ones that sell access to neural networks. That is what I was counting on.

These bots, in turn, live mainly off advertising and charge an order of magnitude less for showing an ad post than channels do: if the normal price for a channel is considered to be about 1 ruble per impression on average, in bots, especially large ones, an impression can cost 10 kopecks or even less.

There are bots with a truly gigantic number of users, among whom there are probably many freelancers.

Of course, first it was necessary to conduct experiments, because throwing away 100 thousand rubles into an incomprehensible adventure was risky.

Having selected several such bots with small audiences, but also small prices for an ad post, I paid for the placements and waited for results. The combined audience was about 30 thousand people according to botstat.io; I don't know how closely that figure matches reality.

After each placement, a rush of about 50 visitors was recorded in the first hour, about 10 in the second, and only a handful in the hours after that. In total, from bots with a combined audience of 30 thousand people, I got 3 registered users, having spent 10 thousand rubles on advertising.

None of these clients ever brought me a penny.

In general, the idea of advertising in Telegram bots could be written off. It doesn't work. The original plan collapsed, and it was not hard to extrapolate the experiment's results to bots with larger audiences: most likely, zero stays zero no matter what number you multiply it by.

The next attempt was Yandex Direct. Perhaps I did something wrong, but having spent 2000 rubles, I got 1600 visitors per day and 0 registrations. So I considered further research into Yandex Direct pointless.

Maybe posting articles on blogs makes sense? OK, after publishing several articles on VC, I got 5 registered users, none of whom paid a penny either, although a couple of them still regularly use the free notification functionality via the bot.

Other blogging platforms have yielded zero results.

Then there were attempts to advertise the service in really stupid ways – for example, through socpublic.com. The effect is similar to Yandex Direct, although 10 times cheaper. Probably, in this way you can arrange a cheap DDoS attack for someone, but you definitely won't get a single user.

Finally, I got hold of a hundred freelancers' Telegram accounts and ordered a direct mailing from a freelance spammer for 1,000 rubles. Not for profit, but purely to find out – does anyone need this besides me? The spammer's final report: 60 messages read. My result: 4 registrations, and again no one was interested in the voice notification functionality, that is, no one paid.

At this point, perhaps, we could have finished and drawn some conclusions, albeit disappointing, but useful for the future.

Conclusion #1:

As old Kant asserted, only experiment is the criterion of truth, and this is what distinguishes the scientific method of cognition from the irrepressible fantasy called metaphysics.

Conclusion #2 (more practical):

Before taking on any “awesome idea”, you need to think about how to advertise it. Even more than that – any idea should be considered first and foremost from this, and only from this side, and ruthlessly discarded if there are no good and proven advertising methods. Yes, you can make a good and useful service for people. But who will know about it?

Take my own service: it is good and useful, I use it myself and pay 2000 rubles a month for the server, which means someone else could certainly use it too, if only they knew it existed. But that's the problem. Nobody knows. And there are no cheap enough ways to reach the target audience. Even if you do reach them, it is not a given that you will manage to explain and show how exactly it can be useful. This question demands serious study, and that should be done by professionals for a decent fee, but if you get from a client… only 300 rubles a month (actually 0, haha), then any investment in such professionals only leads to direct losses.

Conclusion #3 (as specific as possible):

If you need to save on advertising, build SEO-optimized sites. An SPA is definitely not suitable for that.

Conclusion #4 (personal):

I am a loser.

Conclusion

Well, if anyone has read this sad story to the end, here is a link to the service: https://lancetracker.com. Don't take this as advertising; you can't talk for this long and then not show the thing at the end.

Although I won't lie, I wonder how many people will register thanks to this publication, and how many of them will become regular users.

But it seems I already know roughly the answer to this question…
