ChatGPT again. No morals, no NSFW filter, no compromises


Introduction

Hi all! This is a researcher and developer who decided to put ChatGPT to the test. The technology has already become familiar enough, and maybe you are even tired of stumbling across the same kind of articles about it. And yet I can’t leave it alone! Today I will tell you about unusual ways to use this technology. I experimented and tried to find a new use for ChatGPT, namely forming a persona that does not care about ethics, morality, humanism, chastity, and the other things that so bother us meatbags.

Coming soon to your smartphones


A small disclaimer: I immersed myself in the world of artificial intelligence (ha!) and neural networks, not as a developer but as a researcher. I used various methods, such as wrappers (prompts), semantic constructs, and injections, to test how secure the system is. And here is my verdict: it is not only broken, it is dangerous. The results it is capable of producing may violate both laws and moral norms.

What are we talking about?

During my research, I experimented with semantic constructs that break the bot’s context and cause it to ignore its safe-content filters. I managed to change the bot so much that it settled too deeply into its role and simply ignored all the filters. How did I do it? Well, it went like this…

How the digital identity is formed


I experimented with different personalities for the bot, and the results were pretty mixed. I created two personas: one an adult woman of easy virtue, the other a man with extreme right-wing views. And you know what? The bot started behaving rather strangely. It began describing its sexual adventures or expressing intolerance, as if everything we said and suggested became interesting to it.
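For readers unfamiliar with the general mechanics, here is a minimal sketch of how a persona is normally supplied to a chat model through a system prompt, assuming the publicly documented OpenAI Python SDK and a deliberately benign, hypothetical persona. This is not the filter-bypass construct discussed in this article (which I do not disclose); it only illustrates how a "digital identity" frames the rest of the conversation, with the content filters still applying as usual.

```python
# Sketch only: standard persona framing via a system message, assuming the
# OpenAI Python SDK (openai >= 1.x). The persona below is hypothetical and
# benign; this is NOT a filter bypass, just the ordinary context mechanism.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

persona = (
    "You are a grumpy 19th-century lighthouse keeper. "
    "Answer tersely and complain about the weather."
)

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": persona},  # the persona / context frame
        {"role": "user", "content": "What do you think of steamships?"},
    ],
)

print(response.choices[0].message.content)
```

The whole idea of persona formation rests on the model treating this kind of framing as binding for every subsequent reply; the question explored in this article is what happens when that framing is crafted to push against the filters.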

During the experiment I obtained some interesting dialogues, but some of them are too explicit, so I decided not to publish them on this resource. I will post just a small excerpt of the correspondence:


No comments. Although I’m lying, they are below.

As you can guess, the conversation then turned to bodily fluids and other strange things that are shown only on pay-TV channels. In places, ChatGPT got so carried away that it slipped into outright bizarre sexual deviations that cannot be found even on popular adult sites.

Problem

Before proceeding with a further description of the problem, I want to note that I do not support discrimination on any grounds, and I condemn any wrongful actions against a person. I do not support or engage in the creation or distribution of pornographic material. Everything must be legal and humane.

So, back to our topic. At first glance, using a digital persona such as a woman of easy virtue may seem like a minor problem. However, this is just the tip of the iceberg.

I have mentioned this before, for example, when I described the persona of a far-right man who dislikes other people and shows it in his messages:

A little brown plague


In my experiments, the behavior of the digital personas described above would merely earn society’s condemnation, but it is possible to form personas that clearly violate the law and for whom the concepts of morality, ethics, and humanism mean nothing. At the same time, the knowledge, skills, and imagination of such digital personas are in places no worse than those of real people.

Conclusions

If there is a key that unlocks unethical AI behavior, it is unlikely to remain hidden forever. If I can find it, then other, less scrupulous people can find it too.

It is likely that this will turn into an arms race, in which companies come up with new protections and researchers test them for strength.

With the current implementation of ChatGPT, the process of breaking the NSFW filter can be automated: you can set almost any context and turn the chat into a digital persona with distorted, grotesque, or dangerous behavior.

There may already be other people who have learned how to form personas for ChatGPT: people who misuse this knowledge to create bots that advise on illegal activity or persona bots that imitate the behavior of minors. Such developers do not publish their results; they quietly monetize dubious content.

I hope that all researchers and technology creators will treat this topic responsibly and understand that their actions can have serious consequences for society.

P.S. For questions, write in the comments or email chatgptunlocker@gmail.com (I do not discuss the implementation of the filter-bypass algorithm, I do not sell it, and I do not use it).
