The ideal artificial intelligence is Elektronik, the robot boy from the Soviet film. Even then, science-fiction writers and scientists understood that the main task of AI was to coax empathy out of an artificial mind (remember his ability to cry?), out of hardware that does nothing but calculate. The ability to feel and empathize remains one of the key barriers separating an algorithm from human thought and action. Today we admire ChatGPT and Midjourney, forgetting that these neural networks are, above all, controlled by humans: people design and build them, and their “creativity” draws on the experience and material humanity has already accumulated online. As technology for technology's sake, as early lab work, they are wonderful: developers, designers and users move forward with them. But you should not rush to welcome these newcomers into a company's IT infrastructure: the risks are too high.
Shall we discuss?
Disclaimer: this article was written by an employee under the “Open microphone” heading; the author's opinion may or may not coincide with the company's. No one asked the AI for its opinion.
People and algorithms
Let’s start with what both developers and managers most often forget: any company consists of perfectly ordinary people with perfectly ordinary needs, quirks, likes and dislikes. Life inside the company adds one specific seasoning: managers want to make money, and employees… also want to make money. In other words, the question of survival in its deepest sense is mixed into that ordinariness. Employees judge any means of production (whether a CRM or a machine tool) from two points of view:
how understandable it is: convenient to use, simple, unambiguous;
how good it is at making money: how the effort invested compares with the output this CRM or machine tool actually produces.
That is one side. On the other, today's kitsch culture of having everything, the fashion for new technologies, the habit of consuming things and quickly abandoning them all push businesses to pay attention to everything new. And where there is attention, there is money to be made, which other businesses are happy to do.
Now about artificial intelligence.
The term itself is also very relative. For example, RegionSoft CRM has an algorithm that recognizes duplicates in the system and warns the operator about them. Technically this is artificial intelligence: a working piece of code with logic embedded in it (logic invented by natural intelligence, unless, of course, the chief engineer is hiding something from us). The algorithm has existed for a long time, and no one has ever tried to sell it as artificial intelligence. And this is the simplest, smallest example; the same system contains a business-process automation module, smart KPI logic and other features that are quite intelligent in themselves and work independently of the operator, based on collected and/or entered data.
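RegionSoft CRM's actual duplicate-checking code isn't shown here, so the following is a minimal, hypothetical sketch of how such a detector can work: an exact match on phone or email decides immediately, otherwise names are compared by string similarity. The field names and the 0.9 threshold are assumptions for illustration only.

```python
from difflib import SequenceMatcher

def normalize(record):
    # Lowercase and strip whitespace so trivial differences don't hide duplicates.
    return {k: str(v).strip().lower() for k, v in record.items()}

def is_probable_duplicate(a, b, threshold=0.9):
    """Flag two contact records as likely duplicates.

    An exact match on phone or email is decisive; otherwise names are
    compared with a similarity ratio (threshold is an assumed value).
    """
    a, b = normalize(a), normalize(b)
    if a.get("phone") and a.get("phone") == b.get("phone"):
        return True
    if a.get("email") and a.get("email") == b.get("email"):
        return True
    return SequenceMatcher(None, a.get("name", ""), b.get("name", "")).ratio() >= threshold

existing = {"name": "Ivan Petrov", "phone": "+7 900 123-45-67", "email": "ivan@example.com"}
incoming = {"name": "Petrov Ivan", "phone": "+7 900 123-45-67", "email": ""}
print(is_probable_duplicate(existing, incoming))  # phone matches -> True
```

A real system would add fuzzier phone normalization and address comparison, but the principle is the same: plain deterministic logic, not magic.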
In general, various business systems have other functions that can easily be positioned as AI capabilities: deal scoring, speech recognition and speech-to-text, sentiment analysis of conversations with on-the-fly recommendations, predictive analytics, data collection from mail and instant messengers, even searching all videos featuring a client by his photo (how those photos end up in the CRM is another question). And all of it is ordinary algorithms written by flesh-and-blood programmers. Algorithms work strictly within the framework a human gives them. Where training is involved, they are trained on data: more often on ready-made third-party datasets, less often on the company's own data (few companies have an array genuinely suitable for machine-learning tasks).
So it turns out that perfectly ordinary people are ready to buy “artificial intelligence” features that are, in fact, perfectly ordinary (well, all right, not the most ordinary) algorithms. And this is where the fun begins.
Artificial intelligence in the service of business: risks
Any algorithm that fights routine is a great boon for business: it saves energy and time and lets employees switch to more thoughtful communication with clients and deeper work on strategy. But there are risks.
The main risks, of course, lie in cybersecurity. It's simple: artificial intelligence is data-hungry, and it accumulates a significant layer of commercial information. In the event of a hack and attacker access, the resulting leak can be serious, if not fatal, even for a small company. Processing sensitive information and personal data in AI algorithms makes these risks far worse.
That is why we advise against trusting third-party AI plugins and unknown applications; work only with modules included in officially supplied business software, because then the AI elements inherit the security level of the system as a whole.
In second place are ready-made, pre-trained algorithms (scoring, for example). Roughly speaking, training such an algorithm is applied probability theory and work with a distribution: the algorithm analyzes the input, compares it with the distribution it has internalized, and estimates that a deal will close with, say, a 47% probability, because past deals with similar input parameters closed at roughly that rate. If training happened on someone else's dataset, the prediction can lose all meaning, since every company has its own deal stages and peculiarities.
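To make the scoring idea concrete, here is a toy frequency-based scorer (not any real product's model, just an illustrative assumption): “training” simply records the observed win rate for each combination of deal parameters. It also shows why a model built on someone else's deals misleads: a parameter combination it has never seen gets only a meaningless default.

```python
from collections import defaultdict

def train_scorer(history):
    """Build a naive frequency scorer from closed deals.

    history: list of (features_tuple, won_bool). The 'model' is just
    the observed win rate per feature bucket.
    """
    counts = defaultdict(lambda: [0, 0])  # bucket -> [wins, total]
    for features, won in history:
        counts[features][0] += int(won)
        counts[features][1] += 1
    return {bucket: wins / total for bucket, (wins, total) in counts.items()}

def score(model, features, default=0.5):
    # A bucket never seen in training gets a neutral default: this is
    # exactly why a model trained on a "foreign" dataset misleads.
    return model.get(features, default)

history = [(("b2b", "demo_done"), True), (("b2b", "demo_done"), True),
           (("b2b", "demo_done"), False), (("b2c", "cold"), False)]
model = train_scorer(history)
print(round(score(model, ("b2b", "demo_done")), 2))  # 0.67: 2 wins out of 3
print(score(model, ("enterprise", "tender")))        # unseen bucket -> 0.5
```

Real scoring models are of course richer (logistic regression, gradient boosting), but the dependence on whose history was used to train them is the same.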
Significant risks are associated with text and speech. Recognition is genuinely impressive, yet often unsuitable for business: the algorithm may replace a word with a similar-sounding one (the original Russian examples turn “brick” into “decent”, “concrete” into “at the same time”, and an order for the drug “anauran” into “but to the Urals?”). If such transcripts slip past the managers into documents and orders, the consequences will be unpleasant and will most likely cost real money. The human ear would not allow such a slip even with poor diction, simply because the human brain perceives words in context and can “fill in” whatever sounds unclear or fuzzy.
If your software's artificial intelligence is trained on internal company data, it is important to understand that the data must be well prepared: sufficient, relevant, reliable and error-free. Otherwise every error will be baked into the algorithm, and users will get wrong answers at the output.
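As an illustration of “well prepared”, a hypothetical pre-training check might reject a dataset that is too small, has missing required fields, or contains exact duplicates. The specific checks and the `min_rows` threshold below are assumptions for the sketch, not an exhaustive data-quality audit:

```python
def validate_training_rows(rows, required_fields, min_rows=1000):
    """Basic sanity checks before feeding CRM data to a learning algorithm.

    Returns a list of problem descriptions; an empty list means the
    dataset passed these (illustrative, non-exhaustive) checks.
    """
    problems = []
    if len(rows) < min_rows:
        problems.append(f"only {len(rows)} rows, need at least {min_rows}")
    seen = set()
    for i, row in enumerate(rows):
        missing = [f for f in required_fields if not row.get(f)]
        if missing:
            problems.append(f"row {i}: missing {missing}")
        key = tuple(sorted(row.items()))  # exact-duplicate fingerprint
        if key in seen:
            problems.append(f"row {i}: exact duplicate")
        seen.add(key)
    return problems

rows = [{"amount": 1000, "stage": "won"}, {"amount": 1000, "stage": "won"}]
print(validate_training_rows(rows, ["amount", "stage"], min_rows=3))
# two problems reported: too few rows, and row 1 duplicates row 0
```

Checks like these cost little and catch exactly the errors that would otherwise be quickly and efficiently multiplied by the algorithm.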
AI is built on fast computation and therefore does its work extremely quickly and efficiently. That's why we love it. But if an inaccuracy or error creeps into the data or the algorithm, the AI will, just as quickly and efficiently, produce a huge number of errors inside your transactions (half the trouble if it's a botched mailing; far worse if it's billing, document generation, order processing or monitoring).
There is another ugly risk tied to the human factor. In Russian companies, as everywhere in the world, so-called shadow IT flourishes: employees pick their own helper applications, from project management tools to call-transcription bots. If security in the company is lax (and in small businesses lax security is ubiquitous), nothing stops an employee from using some bot or browser extension and feeding it data from the client base along with commercial information. Usually this is done for the sake of experiment or convenience, but that doesn't make it any better: the data can simply leak and surface in completely unexpected places.
Modern artificial intelligence still has too little practical track record; in effect, its entire existence so far is one big lab exercise. Nothing, of course, prevents you from generating pictures and texts for website pages, then processing and using them; that is perhaps one of the more rational uses of AI today. But dragging AI into commercial, operational work must be done with extreme caution.
Finally, if we follow the risks all the way, using artificial intelligence to solve work tasks carries one more interesting, specific risk. When an algorithm reliably performs certain tasks, employees lose the habit of doing them and shed some of their competencies. If, say, losing the habit of filling in primary documents by hand is pure gain for everyone, then blind trust in scoring and scripts, and the abandonment of analysis and situational communication, can quickly degrade the quality of service, which, by the way, is an important battleground of competition. So use modern technologies carefully, wisely and without boundless trust. You never know what it's thinking to itself 😉