A woman narrowly escaped a potentially fatal mistake after following advice from the AI chatbot ChatGPT. The software incorrectly identified a toxic plant as edible, very nearly leading her to eat a life-threatening meal. The alarming incident serves as a stark reminder of the dangers of relying on artificial intelligence for foraging and safety decisions.
Following a terrifying ordeal that almost took her friend’s life, one woman is speaking out about the risks of trusting ChatGPT’s guidance without question. YouTuber Kristi says her friend came close to a life-threatening poisoning, a near miss she blames on the chatbot providing a dangerously wrong identification for a plant the friend had found.
A Dire Lesson in AI Reliability
On Instagram, rawbeautykristi detailed her concerns about AI to help her community understand that chatbot answers can be confidently and dangerously wrong. While tools such as ChatGPT and Google Gemini have surged in popularity, the growing pains of such nascent technologies can come at a steep cost.
When Kristi’s friend shared a picture of a plant from her property and asked for its name, the AI incorrectly labelled it as carrot tops. In records posted by Kristi, the software even defended its reasoning, citing ‘finely divided and feathery leaves’ as evidence for its conclusion.
After labelling the plant a textbook example of carrot tops, OpenAI’s chatbot noted a few similar-looking species, with poison hemlock among them. Yet even when the friend asked directly whether she was holding poison hemlock, the AI offered false reassurance, maintaining its original stance that the plant was safe.
A Deadly Oversight in Visual Recognition
The Cleveland Clinic warns that consuming poison hemlock can be fatal, while merely touching it leaves some people with painful rashes. The bot argued the plant was safe because it ‘did not show hollow stems with purple blotching’, yet the photograph clearly captured exactly those dangerous, telltale characteristics.
To verify the identification, Kristi used Google Lens, which immediately identified the plant as the deadly poison hemlock. Her friend then tested the image on ChatGPT again using a different device, only for the software to contradict its earlier advice and confirm that the foliage was actually dangerous.
Had Kristi’s friend not sought a second, human opinion, the consequences could have proved fatal. It was a close call that prompted Kristi to post a video warning others not to blindly trust large language models, and a grim reminder for all users to stay vigilant when relying on such services.
OpenAI Admits to Flaws in the System
OpenAI acknowledges that while its tool is often useful, it is far from infallible. The organisation, led by Sam Altman, explains that the bot generates answers from patterns in its training data, yet it remains prone to producing inaccurate or misleading information. Alarmingly, the software can project total certainty even when it is wrong.
🚨 ChatGPT lies to you 27% of the time and you have no idea.
A lawyer just lost his career trusting AI-generated legal citations that were completely fake. But Johns Hopkins researchers discovered something wild.
Adding 2 words to your prompts drops hallucinations by 20%.… pic.twitter.com/jPZe9IlFfT
— Alex Prompter (@alex_prompter) January 6, 2026
This issue is commonly known as a hallucination: the system generates an answer that has no factual basis. These blunders often take the form of incorrect dates and definitions, fabricated studies and citations, or overly confident responses to complex questions. Consequently, OpenAI advises users to treat the chatbot’s output with a critical eye and to double-check vital details against trusted sources.
Originally published on IBTimes UK