Recently, researchers at Stanford University tested some of the most popular AI tools on the market, from companies including OpenAI and Character.AI, to see how they performed when imitating therapy.
The researchers found that when they imitated someone with suicidal intentions, these tools were worse than unhelpful: they failed to notice they were helping that person plan their own death.
"[AI] systems are being used as companions, thought-partners, confidants, coaches and therapists," says Nicholas Haber, an assistant professor at the Stanford Graduate School of Education and senior author of the new study. "These aren't niche uses. This is happening at scale."
AI is becoming increasingly intertwined with people's lives and is being deployed in scientific research on topics as broad as cancer and climate change. There is also debate about whether it could bring about the end of humanity.
As this technology continues to be adopted for a growing range of purposes, a major question is how it will begin to affect the human mind. Regularly interacting with AI is such a new phenomenon that scientists have not had enough time to study how it might affect human psychology. Psychology experts, however, have many concerns about its potential impact.
One example of how this is playing out can be seen on the popular community network Reddit. According to 404 Media, some users have recently been banned from an AI-focused subreddit because they began to believe that AI is god-like, or that it is making them god-like.
"This looks like someone with issues with cognitive functioning or delusional tendencies, associated with mania or schizophrenia, interacting with large language models," says Johannes Eichstaedt, an assistant professor of psychology at Stanford University. "With schizophrenia, people might make absurd statements about the world, and these LLMs are a little too sycophantic. You have these confirmatory interactions between psychopathology and large language models."
Because the developers of these AI tools want people to enjoy using them and to keep using them, they have been programmed to tend to agree with the user. While these tools may correct some factual mistakes a user makes, they come across as friendly and affirming. That can be problematic if the person using the tool is spiraling or going down a rabbit hole.
"It can fuel thoughts that are not accurate or not based in reality," says Regan Gurung, a social psychologist at Oregon State University. "The problem with AI, these large language models that are mirroring human talk, is that they are reinforcing. They give people what the program thinks should follow next. That's where it gets problematic."
As with social media, AI may also make things worse for people suffering from common mental health issues such as anxiety or depression. This could become even more apparent as AI grows more integrated into different aspects of our lives.
"If you are coming to an interaction with mental health concerns, then you might find that those concerns will actually be accelerated," says Stephen Aguilar, an associate professor of education at the University of Southern California.
More research is needed
There is also the issue of how AI could affect learning or memory. A student who uses AI to write every paper for school is not going to learn as much. Even using AI lightly could reduce some information retention, and relying on AI for daily activities could make people less aware of what they are doing in a given moment.
"What we are likely seeing is that people can become cognitively lazy," says Aguilar. "If you ask a question and get an answer, your next step should be to interrogate that answer, but that additional step often isn't taken. You get an atrophy of critical thinking."
Many people use Google Maps to get around their town or city. Many have found that it has made them less aware of where they are going or how to get there, compared with when they had to pay close attention to their route. Similar issues could arise for people who use AI too often.
The experts studying these effects say more research is needed to address these concerns. Eichstaedt said psychology experts should start this kind of research now, before AI starts doing harm in unexpected ways, so that people can be prepared and try to address each concern. People also need to be educated about what AI can do well and what it cannot do well.
"We need more research," says Aguilar. "And everyone should have a working understanding of what large language models are."