ChatGPT appeared to go rogue on Tuesday evening, with users reporting that the AI chatbot was responding with incorrect answers and talking gibberish, as first reported by 404 Media.
Members of a ChatGPT subreddit shared screenshots of extraordinary exchanges with the technology, with responses that either made no sense or were way off the mark.
One screenshot showed this response to a question: “This is the work of service and any medical today to the data field.” It continued: “The 12th to the degree and the pool to the land to the top of the seam, with trade and feature, can spend the large and the all before it’s under the care.”
Meanwhile, another user explained that they asked ChatGPT for “a synonym for overgrown” and got the response: “A synonym for ‘overgrown’ is ‘overgrown’ is ‘overgrown’ is ‘overgrown’…”
Other users claimed it gave totally incorrect answers to basic questions, such as responding with “Tokyo, Japan” when asked to name the biggest city on Earth that begins with an ‘A.’
OpenAI, the creator of ChatGPT, confirmed in a message on its status page that it has fixed the issue. Still, it’s another reminder that while we’re in the middle of an AI boom, the technology is not yet immune to going rogue or, quite simply, going wrong.
AI models like ChatGPT have a long way to go
This is just another example of AI technology proving it’s not yet capable of earning complete trust from its users, despite fears that artificial intelligence has the potential to replace humans in a variety of day-to-day tasks, both at home and in the workplace.
There have already been several instances of lawyers getting into trouble for citing fictitious cases generated by AI. Just last month, Reuters reported that a New York lawyer was facing disciplinary action after using ChatGPT for research in a medical malpractice lawsuit and failing to confirm that the case cited was valid.
Featured Image: Photo by Jonathan Kemper on Unsplash