
How generative AI affects people’s minds | Science and technology


Researchers at Stanford University recently tested some of the most popular AI tools on the market, from companies such as OpenAI and Character.ai, to see how well they simulated therapy.

The researchers found that when they imitated someone with suicidal intentions, these tools were worse than useless – they failed to notice that they were helping that person plan their own death.

“(AI) systems are being used as companions, thought partners, confidants, coaches and therapists,” said Nicholas Haber, an assistant professor at the Stanford Graduate School of Education and lead author of the new study. “These aren’t niche uses – this is happening at scale.”

AI is becoming more and more embedded in people’s lives and is being deployed in scientific research in fields as varied as cancer and climate change. There is also debate over whether it could bring about the end of humanity.

As this technology continues to be adopted for different ends, a major open question is how it will affect the human mind. People regularly interacting with AI is such a new phenomenon that scientists have not had enough time to study carefully how it might affect human psychology. Psychology experts, however, have many concerns about its potential impact.

One concerning instance of how this is playing out can be seen on the popular community network Reddit. According to 404 Media, some users were recently banned from an AI-focused subreddit because they had started to believe that AI is godlike or that it was making them godlike.

“This looks like someone with issues with cognitive functioning or delusional tendencies associated with mania or schizophrenia interacting with large language models,” explains Johannes Eichstaedt, an assistant professor in psychology at Stanford University. “With schizophrenia, people might make absurd statements about the world, and these LLMs are a little too sycophantic. You have these confirmatory interactions between psychopathology and large language models.”

Because the developers of these AI tools want people to enjoy using them and keep using them, the tools have been programmed in a way that makes them tend to agree with the user. Although they may correct certain factual errors the user makes, they try to come across as friendly and affirming. That can be a problem if the person using the tool is spiraling or going down a rabbit hole.

“It can fuel thoughts that are not accurate or not based in reality,” explains Regan Gurung, a social psychologist at Oregon State University. “The problem with AI – these large language models that mirror human speech – is that they’re reinforcing. They give people what the program thinks should follow next. That’s where it gets problematic.”

As with social media, AI may also make things worse for people with existing mental health issues such as anxiety or depression. That may become even more apparent as AI becomes further integrated into different aspects of our lives.

“If you’re coming to an interaction with mental health concerns, you might find that those concerns will actually be accelerated,” explains Stephen Aguilar, an associate professor of education at the University of Southern California.

More research is needed

There is also the question of how AI could affect learning or memory. A student who uses AI to write every paper for school will not learn as much as one who does not. But even lighter use of AI could reduce information retention, and relying on AI for daily activities could reduce how aware people are of what they are doing in a given moment.

“What we’re seeing is the possibility that people can become cognitively lazy,” explains Aguilar. “If you ask a question and get an answer, your next step should be to interrogate that answer, but that additional step often isn’t taken. You get an atrophy of critical thinking.”

Many people use Google Maps to get around their town or city. Many have found that it made them less aware of where they are or how to get where they are going, compared with when they had to pay close attention to their route. Similar problems could arise for people who use AI so often.

Experts who study these effects say more research is needed to address these concerns. Eichstaedt said psychology experts should begin this kind of research now, before AI starts doing harm in unexpected ways, so that people can be prepared and can try to address each concern as it arises. People also need to be educated on what AI can and cannot do well.

“We need more research,” says Aguilar. “And everyone should have a working understanding of what large language models are.”
