About chatbots


https://www.reddit.com/r/StableDiffusion/comments/ym37xi/can_an_ai_draw_hands/


Only the lazy haven't written about the delights of ChatGPT, so, as someone far from the laziest, I too have something to say about it. Then again, the magical properties of modern neural networks tend to be discussed rather one-sidedly, in the vein of "nothing like this has ever existed, and yet here it is." I do have my own view on the subject, one that is not only fresh but, for some reason, rarely voiced.

historical

Historically speaking, attempts to build a neural network that could hold a meaningful dialogue with a human have been made from the very beginning. One could say the idea arose even before neural networks themselves appeared. A machine capable of passing for an intelligent being, of passing the Turing Test, is one of the foundational concepts of modern computer science and cybernetics, its Holy Grail. In other words, this has been pursued for as long as computers have existed. And now, at last, it seems to have happened.

Specifically, the GPT model (a language model based on deep learning) was introduced by OpenAI back in 2018, which is ancient history by today's standards. Whether to call that date the starting point of chatbots in this "revolutionary" sense is a matter of criteria. Clearly, GPT did not arise from scratch: it has roots in earlier models and draws on prior work in linguistics, computer science, and the general principles of building AI. GPT is essentially a particularly successful combination of broader ideas and methods, many of which have been known since the 1980s. And a finished product such as ChatGPT is nothing more than the result of intelligent analysis and painstaking annotation of a huge corpus of source texts, which, notably, has only recently become available.

So, on the one hand, something resembling modern chatbots could well have appeared earlier, had scientists had the same volume of source data at their disposal. On the other hand, modern methods of training and designing neural networks depend heavily on the latest advances in electronics and on new ways of using physical resources. Research on neural networks in the broad sense is constrained by many technical factors. More innovative and daring approaches to improving both training efficiency and the internal organization of the hardware exist, but we do not see them yet for lack of sufficient technical means and capacity.

On the whole, it is hard to set a milestone here; any such date would be a purely symbolic gesture, a marketing ploy. Still, one cannot deny the dramatic leap in progress and the sharp rise in ordinary consumers' interest in these technologies; it is visible to the naked eye.

cultural

The effect OpenAI created by making ChatGPT publicly available is perhaps akin to the arrival of the Lumière brothers' train or the first public television broadcast. Then as now, most people learned of the premiere secondhand, through countless admiring articles and reviews. In fact, neither cinema nor television was directly accessible to the masses for quite a long time. Moreover, for a long time it was unclear what to fill the new medium with. It took several more decades before people could appreciate the real convenience and benefit.

It is the same here: it is not yet very clear how this can be applied. Yes, there are already all sorts of interesting scenarios, but frankly, they are more like mischief than practical possibilities. Chatbots are not yet ready to replace journalists and poets. Relieve programmers or screenwriters of some routine work? Perhaps. Yet even now the boundaries of these possibilities are plainly visible. To use a neural network as an assistant, you must first of all be the sort of person who could complete the task on your own and explain what is required.

The optimism around chatbots lies in a different plane: marketing, advertising, the production of something very primitive but in huge quantities. Write a sales letter, draft a product description, toss out headline options for an article, prepare a school report; nothing more. It seems premature to me to expect chatbots to make a serious contribution to people's lives. We are unlikely to put artificial intelligence on a psychological help hotline today. Not because a neural network is easy to detect (that is actually quite hard now), but simply because a neural network cannot be given a goal. The goal, say, is to talk a person out of suicide. And that raises another question: might the chatbot instead push a person with an unstable psyche toward rash actions?

ethical

The emergence of new mass phenomena, such as cinema or television, is always accompanied by contradictory anxieties and fears, from predictions of the death of books and newspapers to the formation of radical sects and movements. And the important question here is not so much the appearance of the technology itself as how people use it, and not only for self-serving capitalist ends but for far more insidious and dangerous ones. Can a chatbot, for example, be set to solving health-related problems? Can a chatbot be taught to answer in ways that benefit its owner, say, by training it on a particular ideology or religion?

It seems to me that the alarm of famous and influential people stems precisely from such consequences, not from any belief in a machine uprising. Where and how do we draw the line? What counts as normal use and what as abuse? Are there really so few madmen on earth who have once again been handed public access to a new means of influence? The worlds of journalism and film production evolved on their own and have settled into a more or less balanced division; how the fate of this new way of creating information will unfold is much harder to predict. People now know, through often bloody trial and error, that no technology should be left to chance. And best of all, controlled monopolistically.

legal

The powers that be are surely pondering the legal conflicts around the use of chatbots. Whose (I dislike the word) content is information created with AI? Who should answer for the consequences: the person who phrased the request, the owner of the company, the creator of the neural network, the team that prepared the training material? "To be or not to be, that is the question," a potential suicide or terrorist will ask. Should the chatbot's response be attached to the case file? Which questions should a bot refuse to answer at all? Which god to believe in? Which gender to choose?

Should the corpus of texts used be treated as the collective opinion of humanity, of the average person? You and I know perfectly well that the answers will still rest on the moral and ethical norms of Western culture. There are unlikely to be many source materials on what is good and what is bad drawn from Central Africa or Central Asia. To what extent can AI be taught to deliberately sidestep sensitive issues and respond as politically correctly and neutrally as possible? And what is "neutral"? Where does one get a neutral opinion?

There are many questions to which we still have no answers, and it is unlikely they admit unambiguous ones. Harder still to regulate all this on a global scale. Again, who would do the regulating: each state separately, the UN, a special committee on AI? Isaac Yudovich Asimov, unfortunately, is no longer with us.

And if he were, what could he do? Would the leadership of the software giants listen to him? After all, Isaac Yudovich's conclusions are by no means rosy. The thread running through all his work is the idea that regulating AI with laws is ultimately useless. However hard we meatbags try, the robots will either have to be controlled completely, which we are already not especially good at, or be made our equals: given legal status, granted rights, punished, put to work. Treated, even?

futuristic

Can an electrical appliance go mad? Apparently, if we treat madness as deviation from a norm, we can speak of mad bots already. Where there are several variants, there is a mathematical norm; and where there is a norm, there is deviation. This is, in principle, already confirmed by the occasional excesses in the behavior of various products. But that is a technical formality. There is another way to define the norm.

Can we take a human sample as the norm and apply it to AI? After all, the training corpus is built on the relatively normal works of people who are normal by our lights. It is unlikely that neural networks are fed the works of Joseph Goebbels. That is, if we start from the training sample, a neural network cannot invent anything that goes far beyond that sample. Or so we expect. This is key.

Again, much will depend on the quality of the annotation and many other subjective parameters. Could one abnormal person instill "bad thoughts" into a neural network that is the fruit of hundreds of normal people's work, so that they surface in its "behavior"?

Moreover, we are still at the stage where all available training material is created primarily by people, mostly normal ones. But it is obvious that the moment will soon come when material created with AI begins to fall into the training sample. First, because a person already finds it hard to tell apart. Second, because the selection from the total mass of information is largely automatic. The day is not far off when a significant share of the material will be text written by AI. And then it becomes much harder to speak of a norm.

Might the neural networks end up fixating entirely on their own texts? Where then would we look for the norm? And at what point will the opinion of such a chatbot begin to influence the human norm in return? After all, it is enough to be born into the generation that will have access to more AI works than human ones. When will the train of original human thought and morality dissolve into the stream of the networks' own conclusions? Or perhaps this article was written by a neural network?

humorous

However, let us not be pessimists. So far, no technological innovation has led to the decline of humanity as a whole. Yes, there have been difficult moments; it is human nature to try new things in every capacity, both constructive and destructive. Let us hope chatbots do more good than harm.

For me personally, it would be a revelation if a neural network learned to joke. It seems to me that this alone is worth developing the field for. Then again, that is also the very line beyond which a real qualitative leap begins. Everything being invented and built now can be put to some use, however modest, but that use is purely utilitarian and pragmatic. It will get genuinely frightening when neural networks learn to recognize specific people, when the context becomes individual. When AI can deliberately distort the interlocutor's reality in such a way that the person finds it, say, funny.

It will learn to grasp the general cultural and specific emotional context of a request, which means it will be able to steer the interlocutor's train of thought: joke with him, impose an emotion, see the limits of his cognitive abilities, understand what he will take as sarcasm and what as mere sleight of hand. After all, we ourselves, as social beings, base communication on our own forecast of our interlocutors' mental abilities. We are aware of, and keep in mind, the context we sense is appropriate to the other person. It does not always work, but most of the time we guess right: a joke ends in laughter more often than in a brawl.

Humor, the highest achievement of social interaction, in fact has very deep causes and consequences. We can worm our way into trust, lie, manipulate, sympathize, in short, be human, thanks precisely to such subtle matters. If AI reaches that level, it will be, if not the full stop, then an important comma before the arrival of the technological singularity.

And so, addressing the reader, I expect that he too will take what is written here with a certain amount of humor and skepticism.

