As the end of the year approaches, most people long for the Christmas holidays. And maybe we take it a little easier at work. But does the same apply to AI-powered applications? Some users of the popular language bot ChatGPT think so. Experts are now investigating, and OpenAI, the company behind the chatbot, is also taking the complaints seriously.
Source: Ars Technica, The Verge, The Independent, Special Reports
More and more users of the newer (paid) version of ChatGPT (the GPT4 model) are complaining that the chatbot gives shorter answers, refuses to do what people ask, or even snaps at users. Some users find the tone that ChatGPT sometimes adopts rude or lazy. Others even wonder whether the chatbot is suffering from winter depression. This was reported by, among others, the technology websites “Ars Technica” and “The Verge”.
One programmer put this to the test and gave ChatGPT 477 commands, changing only the month mentioned in the input. He found that the language bot gave longer answers (4,298 characters on average) when told it was May than when told it was December, where ChatGPT responded with an average of 4,086 characters. Several researchers have since tried to replicate that test, but have yet to find any significant difference. This may be due to the research design, so other ways of investigating the alleged phenomenon are being worked on.
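For readers who want to see what such a test looks like, here is a minimal sketch in Python, assuming the official openai client (version 1.x) and an API key in the OPENAI_API_KEY environment variable. The task prompt and the number of runs are placeholders for illustration, not the original programmer’s exact setup.

```python
# Hypothetical sketch of the May-vs-December test described above.
# Assumptions: openai Python client >= 1.0, API key in OPENAI_API_KEY,
# and a placeholder task prompt (the reported test used 477 prompts).
from statistics import mean

from openai import OpenAI

client = OpenAI()

TASK = "Write a function that parses a CSV file and returns its rows."  # placeholder
N_RUNS = 20  # far fewer than the reported 477, to keep the sketch cheap

def response_lengths(month: str, n_runs: int = N_RUNS) -> list[int]:
    """Send the same task prompt n_runs times, varying only the month
    mentioned in the system message, and record each answer's length."""
    lengths = []
    for _ in range(n_runs):
        completion = client.chat.completions.create(
            model="gpt-4",
            messages=[
                {"role": "system", "content": f"The current month is {month}."},
                {"role": "user", "content": TASK},
            ],
        )
        lengths.append(len(completion.choices[0].message.content))
    return lengths

may = response_lengths("May")
december = response_lengths("December")
print(f"May average: {mean(may):.0f} characters")
print(f"December average: {mean(december):.0f} characters")
```

Comparing the two averages alone proves little: as noted above, replication attempts also check whether the difference between the two sets of lengths is statistically significant.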
Some users believe that parent company OpenAI intentionally made the chatbot lazy to reduce the burden on the company’s systems. According to OpenAI, this is not the case, and the company says it takes user feedback seriously: “We’ve already seen your feedback about GPT4 becoming increasingly lazy! We haven’t updated the model since November 11, and this is definitely not intentional. The model’s behavior can be unpredictable, and we are investigating whether we can resolve this issue,” the chatbot’s official account wrote about a week ago on X (formerly Twitter).
Although the people behind ChatGPT do not believe that the model has changed of its own accord since the November 11 update, they do not rule out slight differences in how some commands or questions are handled. According to the company, it may take some time before customers notice these small differences as patterns, which means it may also take some time before employees can make adjustments to the model.
Winter depression
Many programmers and AI experts are currently conducting extensive research into the “laziness hypothesis.” That users do not rule out even winter depression in the chatbot has to do with the fact that ChatGPT also learns from people’s input. Some users believe that if they themselves are suffering from winter depression, they could inadvertently transmit it to the chatbot through their input.
It may also have to do with the way we give commands to the chatbot. How much effort do we still put into giving clear instructions to programs running on artificial intelligence? A specific instruction (“Explain step by step…”) can produce better results than a vague “Tell me how…”. Still other users resort to tricks to coax better answers out of their favorite chatbot: sometimes by simply lying that they have no fingers, in other cases by promising the chatbot a tip. Yet others say a more humane approach, with encouragement such as “take a deep breath,” also yields better results, as the small sketch below illustrates.
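The difference between a vague and a specific instruction is easy to try out yourself. The following sketch sends both phrasings and compares the length of the answers; the exact wording of the prompts is an assumption for illustration, not a quote from any of the users mentioned.

```python
# Illustrative only: the same request phrased vaguely and specifically.
# Assumptions: openai Python client >= 1.0, API key in OPENAI_API_KEY,
# and made-up example prompts.
from openai import OpenAI

client = OpenAI()

vague_prompt = "Tell me how to set up a web server."
specific_prompt = (
    "Explain step by step how to set up a web server. "
    "Take a deep breath, number each step, and mention the commands involved."
)

for label, prompt in [("vague", vague_prompt), ("specific", specific_prompt)]:
    completion = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt}],
    )
    answer = completion.choices[0].message.content
    print(f"{label}: {len(answer)} characters")
```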
Finally, our imagination can also play a role. Now that reports of a lazy language bot are popping up here and there, perhaps we’re starting to look more critically at ChatGPT’s answers.
But what if we ask the application itself? It flatly denies working less hard during this period of the year: “As a machine learning model, I am not susceptible to fatigue, laziness, or seasonal effects.”