OpenAI's meltdown prompts further questions around the future of AI safety surveillance
diginomica.com
With all eyes on the OpenAI kerfuffle, it seems important to consider what this means for the future of AI safety. Although most details remain murky, one prominent theory about what happened inside OpenAI suggests a rift between for-profit business interests and AI safety concerns.
Some leading experts have postulated that the rise of agentic, or autonomous, AI poses even bigger risks down the road. There is also speculation that the current OpenAI rift may have emerged from the company's new service that allows anyone to create their own bots.
Thus far, we have encountered only a few of the safety issues associated with large language models, around hallucinations, copyright, bias, and toxicity. Lessons from post-market surveillance in other domains, including finance, healthcare, and fire safety, could inform future efforts to identify, track, and report on these new risks.
The US White House Executive Order on AI ...