Don't spill your guts to your chatbot friend - it'll hoover up that info for training


The US House of Representatives has heard that LLM builders can exploit users’ conversations for further training and commercial benefit with little oversight or concern for privacy risks.

As President Trump seeks to prevent states from introducing and enforcing legislation governing the application of AI, Jennifer King, a privacy and data policy fellow at Stanford, said there was little to no transparency into how AI developers collect and process the data they use for model training.

On Tuesday, she told the House Energy and Commerce Subcommittee on Oversight and Investigations that “we should not assume that they're taking reasonable precautions to prevent incursions into consumers' privacy. Users should not be automatically opted in to having their data used in model training, and developers should proactively remove sensitive data from training sets.”

Under current rules, there are no requirements for developers to understand the full data pipeline - “how it is cleaned, how ...

