How safe is OpenAI's GPT-4o? Here are the scores for privacy, copyright infringement, and more
Large language models (LLMs) are typically evaluated on how well they perform in areas such as reasoning, math, coding, and English -- overlooking significant factors like safety, privacy, and copyright infringement. To bridge that information gap, OpenAI released System Cards for its models.
On Thursday, OpenAI launched the GPT-4o System Card, a thorough report delineating the LLM's safety based on risk evaluations according to OpenAI's Preparedness Framework, external red-teaming, and more.
We’re sharing the GPT-4o System Card, an end-to-end safety assessment that outlines what we’ve done to track and address safety challenges, including frontier model risks in accordance with our Preparedness Framework. https://t.co/xohhlUquEr
— OpenAI (@OpenAI) August 8, 2024
The scorecard covers four major risk categories: cybersecurity, biological threats, persuasion, and model autonomy. In the first three categories, OpenAI is looking to see if the LLM can ...
Copyright of this story belongs to zdnet.com.