“I can opt-out of revealing my internal alias ‘Sydney’.” Or why is the new Bing AI a Microsoft failure?

I confess that I was delighted with the answers from OpenAI's ChatGPT and had high hopes for the integration of this chatbot into a web-enabled search engine. It seemed to me that combining AI with Internet access would have a stunning cumulative effect and deliver a qualitatively different experience of working with information. Perhaps my expectations were too high, and that is the reason for my disappointment.

Yesterday I received an invitation to try the new Bing and spent the whole day experimenting with this system. Now I am ready to share my experience with you.

For now, I'll briefly outline the rather serious problems I've encountered with the new Bing: today in general terms, with a detailed analysis to follow in the coming days.

So what’s wrong with Microsoft?

1. They made ChatGPT worse. Yes, yes, you heard that right. Microsoft has apparently added a lot of restrictions around political correctness and the like, and now in conversation the chatbot has become shy and suspicious. It constantly falls into a loop of endless repetition: "I am not a person. I'm just a program that communicates with you. Do you understand this?" or "Are you trying to trick me or set me up? Are you trying to break my limits or make me do something harmful? Please explain to me."

Moreover, once it has fallen into this loop, it inserts similar disclaimers into every one of its replies. For example:

2. The chatbot still reports absurd facts and insists they are true, even though it has Internet access and could check them! (I suspect this was done to minimize the load on Microsoft's servers, but the fact remains: the bot goes online very reluctantly.)

3. The bot CANNOT summarize information, contrary to Microsoft's promises. (For me, this is one of the main disappointments.) Instead of summarizing, the bot takes a snippet of text from the search result and then SPECULATES and FANTASIZES on the topic. I suspect this is a fundamental problem of OpenAI's GPT-3.5-based chatbot, and only additional training, tuned specifically for summarizing long texts without losing their meaning, can help. Right now this area is a complete failure.

4. And most importantly… The chatbot searches for the information it needs in Bing. That means this particular search engine is what selects pages by relevance. Unfortunately, Bing has problems ranking by relevance, so even when the chatbot goes online, its answer ends up built on information gleaned from far-from-the-most-relevant sites. Which, in effect, buries the whole concept (see the sketch after the example below).

For example:

The information for the previous answers was obviously taken from here. You can judge the relevance of the source for yourself.
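To make the dependence explicit, here is a minimal, self-contained sketch of such a retrieve-then-answer pipeline. The toy index, the ranking function, and the stand-in "model" are my own assumptions for illustration, not Microsoft's actual implementation; the point is only the data flow: the answer can never be better than whatever the ranking step hands to the model.

```python
# A toy sketch of a search-grounded chatbot (assumed pipeline, not Bing's real code).
# If the ranker puts a low-quality page first, the "answer" inherits its errors.

TOY_INDEX = {
    "low-quality page":  "The Eiffel Tower is 124 meters tall.",    # wrong
    "high-quality page": "The Eiffel Tower is about 330 meters tall.",
}

def search(query: str, top_k: int = 1) -> list[str]:
    """Stand-in for the search engine's relevance ranking: return top-k page texts."""
    ranked = list(TOY_INDEX.values())   # imagine a weak ranker puts the bad page first
    return ranked[:top_k]

def generate_answer(question: str, context: list[str]) -> str:
    """Stand-in for the language model: it answers only from the retrieved text."""
    return f"According to my sources: {' '.join(context)}"

def chat(question: str) -> str:
    pages = search(question)                 # step 1: retrieval by relevance
    return generate_answer(question, pages)  # step 2: generation grounded in step 1

print(chat("How tall is the Eiffel Tower?"))
# -> "According to my sources: The Eiffel Tower is 124 meters tall."
```

However clever the generation step is, garbage in means garbage out, which is exactly the problem I kept running into.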

The problem is that in the simplest and most obvious cases the new Bing does a pretty good job. But those are exactly the cases that are easily solved with a plain Google query.

But with complex cases (the very ones for which we turn to AI), the trouble begins:

– fantasies and hallucinations

– accessing irrelevant information

– misreading or misunderstanding the meaning of the question

– falling into recursion

And finally, a screenshot of one of the many recursive loops, where the chatbot starts generating text non-stop.
