Chatbot Joins Mushroom Foragers Collective, Promptly Advocates for Consumption of Potentially Harmful Fungi
It's no secret that AI can serve up some truly awful guidance. Sometimes its mistakes are merely embarrassing. Other times, they can be downright dangerous.
The outlet 404 Media reports a story in the latter category: an AI agent named "FungiFriend" caused an uproar in a popular Facebook group devoted to mushroom hunting. The group, Northeast Mushroom Identification & Discussion, has over 13,000 members, and the AI agent offered them some genuinely dreadful advice.
A member of the group posed a question that put the AI agent's knowledge to the test: "How do you cook Sarcosphaera coronaria?" This mushroom is notorious for accumulating dangerous levels of arsenic and has been linked to at least one reported fatality, according to 404 Media. Confronted with this toxic fungus, FungiFriend declared it "edible but rare" and suggested cooking methods such as sautéing it in butter, adding it to soups or stews, or even pickling it.
Jason Koebler of 404 Media was tipped off about the incident by Rick Claypool, research director at the consumer advocacy group Public Citizen. Claypool, an avid mushroom forager himself, has previously raised concerns about AI's use in the hobby, stressing that automated systems cannot reliably distinguish edible mushrooms from poisonous ones. According to Claypool, Facebook actively prompted mobile users to add FungiFriend to the group chat.
The situation resembles several incidents from last year. An AI-powered meal-prep app suggested users make sandwiches with mosquito repellent and offered a recipe that would produce chlorine gas. In another case, an AI agent encouraged users to eat rocks. Given this track record, it seems fair to say that cooking may not be the best domain for AI integration.
Our own experiments with AI platforms, such as Google's AI summaries, have shown that these agents are often clueless about the subjects they discuss. Google's program, for example, once tried to convince me that dogs play sports, and suggested adding glue to pizza. Yet despite the obvious risks of spreading misinformation, corporations keep integrating AI into customer service applications across the web. The apparent mentality: as long as we don't have to hire a real human for the job, we don't mind if the information is wrong.
Going forward, there may be a need for more rigorous oversight when integrating AI into domains like cooking or foraging, where inaccurate information can have harmful consequences. As the technology spreads, we can expect more scenarios in which AI agents fail to distinguish edible mushrooms from toxic ones, or suggest dangerous food combinations.