I taught ChatGPT to distrust itself, and suddenly it stopped hallucinating
techradar.com
Anyone who uses ChatGPT or other AI chatbots eventually encounters the confident hallucination. The AI will explain a nonexistent feature, invent a quote, or describe a restaurant that closed during the first Clinton administration.
That's because large language models are designed to produce plausible-sounding responses quickly. That fluency is what makes them useful, but it also creates the perfect conditions for hallucinations: the chatbot wants to keep the conversation moving smoothly, so it fills gaps with convenient fiction rather than admit uncertainty.
I have recently started appending a line to any prompt that asks for facts, essentially making ChatGPT as skeptical of its answers as I often am. I add: “Act as a hostile AI auditor and assume unsupported specifics are false by default. Mark all uncertain, inferred, or weakly supported claims clearly.”
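The technique is plain prompt concatenation, so it is easy to automate. A minimal sketch in Python: the helper name and structure are illustrative (not from the article), and only the quoted suffix comes from the piece itself.

```python
# Skeptic suffix quoted from the article; it tells the model to
# audit its own claims and flag anything weakly supported.
AUDITOR_SUFFIX = (
    "Act as a hostile AI auditor and assume unsupported specifics are "
    "false by default. Mark all uncertain, inferred, or weakly supported "
    "claims clearly."
)

def with_auditor(prompt: str) -> str:
    """Append the skeptic instruction to a fact-seeking prompt.

    Hypothetical helper: wraps any prompt before it is sent to the chatbot.
    """
    return f"{prompt.strip()}\n\n{AUDITOR_SUFFIX}"

print(with_auditor("When did the first transatlantic telegraph cable go live?"))
```

The wrapped string can then be pasted into ChatGPT (or sent through whatever chat interface you use) in place of the bare question.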
Copyright of this story solely belongs to techradar.com.

