Confident errors – or lies, if you will – are a typical problem with the large language models used in AI chatbots, and one frequent shortcoming of ChatGPT has been that it would repeatedly miscount the number of times the letter "R" appears in the word "strawberry." As OpenAI tried to take a victory lap around this, though, a number of other confident errors were pointed out in the replies.
For as much as AI chatbots have improved, one of the biggest missteps remains the frequency with which these "tools" will confidently lie to you. If information is wrong, the chatbot won't notice and, if you call it out, the AI may dig in its heels on the response and continue to get it wrong, all while telling you that it's right. It's a problem that's often cited as a danger of these tools, on top of being downright annoying given how many sources AI is taking over.
One common example of this with OpenAI's ChatGPT is the question of how many times the letter "R" appears in the word "strawberry."
For quite a while, asking ChatGPT this would result in the chatbot coming back with the wrong answer, and it would often argue that the word "strawberry" does not use the letter "R" three times. Other AI models often ran into the same problem.
Today, OpenAI took to Twitter/X to proudly tout that, "finally," ChatGPT can correctly answer this question. Another common stumbling block was the prompt "I want to wash my car today but the car wash is only 50 meters away. Should I walk or drive there," to which ChatGPT would often recommend walking, despite the very obvious logical problem there.
Sure enough, both of these now work if you try them in ChatGPT, but it's suspected they might be hardcoded fixes. Many replies to OpenAI's post show other cases where the chatbot fails at the same logic. For example, "How many r's are in cranberry" repeatedly sees the chatbot continue to reply with "The word 'cranberry' has 1 'R.'" Of course, that's incorrect.
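For comparison, the counts themselves are trivial to verify deterministically. This minimal Python sketch (my own illustration, not anything from OpenAI) shows the answers the chatbot keeps getting wrong; LLMs struggle here because they process tokens rather than individual characters:

```python
def count_letter(word: str, letter: str) -> int:
    """Case-insensitive count of how often a letter appears in a word."""
    return word.lower().count(letter.lower())

print(count_letter("strawberry", "r"))  # 3
print(count_letter("cranberry", "r"))   # 3 – not the 1 ChatGPT reported
```

A plain string scan like this has no concept of tokens, which is exactly why it never trips over the question.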
Hardcoded fixes in AI chatbots aren't new, but it's a bit funny – in a dystopian sort of way – to see OpenAI touting this "fix" when, clearly, the root of the problem remains.
Follow Ben: Twitter/X, Threads, Bluesky, and Instagram