Why the “AI Is Easy to Trick” Narrative Misses
A recent article published by the BBC explored how generative AI tools could be "hacked" within minutes simply by publishing new content online. In the example presented, a blog post claiming expertise in a highly niche category was soon echoed in responses from systems such as OpenAI's ChatGPT and Google's AI outputs when they were prompted with closely related queries. The story sparked a broader discussion about whether AI systems are inherently vulnerable to manipulation.
Jason Barnard, Founder and CEO of Kalicube, sees something different in the example. From his perspective, the incident does not demonstrate that AI is inherently gullible. Rather, he suggests, it highlights how AI systems respond to extremely niche questions for which only a single source exists. "If you're the only voice answering a question nobody has ever asked before, the system reflects the lack of information available on that specific topic," he says. "That ...
Copyright of this story belongs solely to thenextweb.com.

