For Some Autistic People, ChatGPT Is a Lifeline
The chatbot’s flexibility also comes with some unresolved problems. It can produce biased, unpredictable and often fabricated answers, and it is built in part on personal information scraped without permission, heightening privacy concerns.
Goldkind advises that people turning to ChatGPT should familiarize themselves with its terms of service, understand the basics of how it works (and how information shared in a chat may not stay private) and keep in mind its limitations, such as its tendency to fabricate information. Young said they had thought about enabling data privacy protections for ChatGPT but also think their perspective as a single, transgender, autistic parent could be beneficial data for the chatbot at large.
Like Young, many other autistic people find knowledge and empowerment in conversation with ChatGPT. For some, the pros outweigh the cons.
Maxfield Sparrow, who is autistic and runs support groups for autistic and transgender people, has found ChatGPT helpful for developing new material. Many autistic people struggle with conventional icebreakers in group sessions, Sparrow says, because the social games are designed largely for neurotypical people. So they prompted the chatbot to come up with examples that work better for autistic people. After some back and forth, the chatbot spat out: “If you were the weather, what kind of weather would you be?”
It was the perfect opener for the group, Sparrow says: short and related to the natural world, something Sparrow says a neurodivergent group can connect with. The chatbot has also become a source of comfort when Sparrow is sick and a source of other advice, such as how to organize their morning routine to be more productive.
Chatbot-delivered therapy is a concept that dates back decades. The first chatbot, ELIZA, was a therapy bot. It emerged in the 1960s from the MIT Artificial Intelligence Laboratory and was modeled on Rogerian therapy, in which a counselor restates what a client tells them, often in the form of a question. The program did not employ AI as we know it today, but through repetition and pattern matching, its scripted responses gave users the impression that they were talking to something that understood them. Although it was created with the aim of proving that computers could not replace humans, ELIZA captivated some of its “patients,” who engaged in long, intense conversations with the program.
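To make that pattern-matching idea concrete, here is a minimal sketch in Python of how an ELIZA-style rule can turn a statement back into a question. This is an illustration of the general technique, not Weizenbaum’s original script; the rules, reflections and function names are invented for the example.

```python
import re

# Swap pronouns so "I need my space" reflects back as "you need your space".
REFLECTIONS = {
    "i": "you", "me": "you", "my": "your", "am": "are",
    "you": "I", "your": "my",
}

# (pattern, response template) pairs; {0} is filled with the reflected capture.
# The final catch-all rule guarantees the program always has a reply.
RULES = [
    (r"i need (.*)", "Why do you need {0}?"),
    (r"i feel (.*)", "What makes you feel {0}?"),
    (r"i am (.*)", "How long have you been {0}?"),
    (r"(.*)", "Can you tell me more about that?"),
]

def reflect(fragment: str) -> str:
    """Swap first- and second-person words in a captured fragment."""
    return " ".join(REFLECTIONS.get(word, word) for word in fragment.split())

def respond(statement: str) -> str:
    """Apply the first rule whose pattern matches the user's statement."""
    cleaned = statement.lower().strip().rstrip(".!?")
    for pattern, template in RULES:
        match = re.match(pattern, cleaned)
        if match:
            return template.format(*(reflect(g) for g in match.groups()))

print(respond("I feel invisible in group sessions."))
# -> What makes you feel invisible in group sessions?
```

A handful of rules like these, applied over and over, is enough to produce the impression of attentive listening, which is why ELIZA’s conversations could feel so compelling despite involving no real understanding at all.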
More recently, chatbots with scripted, AI-driven responses, similar to Apple’s Siri, have become widely available. Among the most popular is Woebot, a chatbot designed to play the role of an actual therapist. It relies on cognitive behavioral therapy practices and saw a surge in demand during the pandemic as more people than ever sought out mental health services.
But because those apps are narrower in scope and deliver scripted responses, ChatGPT’s richer conversation can feel more effective for someone trying to work through a complicated social problem.
Margaret Mitchell, chief ethics scientist at the startup Hugging Face, which develops open-source AI models, suggests that people facing more complex issues or severe emotional distress should limit their use of chatbots. “It can lead down lines of discussion that are problematic or stimulate negative thinking,” she says. “The fact that we don’t have full control over what these systems can say is a big issue.”