OpenAI Designed GPT-5 to Be Safer. It Still Outputs Gay Slurs
www.wired.com

The new version of ChatGPT explains why it won’t generate rule-breaking outputs. WIRED’s initial analysis found that some guardrails were easy to circumvent.

OpenAI is trying to make its chatbot less annoying with the release of GPT-5. And I’m not talking about adjustments to the synthetic personality that many users have complained about. Before GPT-5, if the AI tool determined it couldn’t answer your prompt because the request violated OpenAI’s content guidelines, it would hit you with a curt, canned apology. Now, ChatGPT adds more detailed explanations.