OpenAI wants ChatGPT to stop enabling unhealthy behavior in its users.
Starting Monday, the popular chatbot app will encourage users to take breaks during long conversations. The tool will also soon avoid giving direct advice on personal challenges, instead aiming to help users decide for themselves by asking questions or weighing pros and cons.
“There have been instances where our 4o model fell short in recognizing signs of delusion or emotional dependency,” OpenAI wrote in an announcement. “While rare, we’re continuing to improve our models and are developing tools to better detect signs of mental or emotional distress so ChatGPT can respond appropriately and point people to evidence-based resources when needed.”
The updates appear to be a continuation of OpenAI’s attempt to keep users, particularly those who treat ChatGPT as a therapist or a friend, from becoming too dependent on the emotionally validating responses ChatGPT has gained a reputation for.
According to OpenAI, a helpful ChatGPT conversation might look like practicing scenarios for a difficult conversation, offering a tailored pep talk, or suggesting questions to ask an expert.
Earlier this year, the AI giant rolled back a GPT-4o update that made the bot so agreeable it sparked mockery and concern online. Users shared conversations in which GPT-4o, in one case, congratulated them for believing their family was responsible for “radio signals coming in through the walls” and, in another case, endorsed and gave instructions for terrorism.
Those behaviors led OpenAI to announce in April that it had revised its training techniques to “explicitly steer the model away from sycophancy,” or flattery.
Now, OpenAI says it has brought in experts to help ChatGPT respond more appropriately in sensitive situations, such as when a user shows signs of mental or emotional distress.
The company wrote in its blog post that it worked with more than 90 physicians across dozens of countries to build custom rubrics for “evaluating complex, multi-turn conversations.” It is also seeking feedback from researchers and clinicians who, according to the post, are helping refine evaluation methods and stress-test safeguards for ChatGPT.
The company is also convening an advisory group made up of experts in mental health, youth development, and human-computer interaction. More information will be shared as the work progresses, OpenAI wrote.
In a recent interview with podcaster Theo Von, OpenAI CEO Sam Altman expressed some concern about people using ChatGPT as a therapist or life coach.
He said that the legal confidentiality protections between doctors and their patients, or between lawyers and their clients, don’t apply the same way to chatbots.
“So if you go talk to ChatGPT about your most sensitive stuff and then there’s a lawsuit or whatever, we could be required to produce that. And I think that’s very screwed up,” Altman said. “I think we should have the same concept of privacy for your conversations with AI that we do with a therapist or whatever. And no one had to think about that even a year ago.”
The updates come during a buzzy period for ChatGPT: it just rolled out an agent mode that can complete online tasks like booking an appointment or summarizing an email inbox, and many online are now speculating about the highly anticipated release of GPT-5. ChatGPT head Nick Turley said Monday that the AI model is on track to reach 700 million weekly active users this week.
As OpenAI continues to jockey in the global race for AI dominance, the company noted that less time spent in ChatGPT could actually be a sign that its product is doing its job.
“Instead of measuring success by time spent or clicks, we care more about whether you leave the product having done what you came for,” OpenAI wrote. “We also pay attention to whether you return daily, weekly, or monthly, because that shows ChatGPT is useful enough to come back to.”