LLMs killed the privacy star, we can't rewind, we've gone too far
Add privacy to the list of potential casualties caused by the proliferation of AI, because researchers have found that large language models (LLMs) can be used to deanonymize internet users – even those who use pseudonyms – more efficiently than human sleuths.
Much of the academic work on online privacy over the past 25 years builds upon Latanya Sweeney's 2002 research on k-Anonymity [PDF], and prior research in which she demonstrated it was possible to identify 87 percent of the US population using just three quasi-identifying data points – a five-digit ZIP code, gender, and date of birth.
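The linkage attack Sweeney described can be sketched in a few lines: join a supposedly anonymized dataset with a public record (such as a voter roll) on the three quasi-identifiers, and any record whose combination is unique is re-identified. All names and values below are invented for illustration.

```python
# Sketch of a Sweeney-style linkage attack on quasi-identifiers.
# All records are hypothetical examples, not real data.

anonymized_medical = [
    {"zip": "02138", "gender": "F", "dob": "1945-07-21", "diagnosis": "hypertension"},
    {"zip": "02138", "gender": "M", "dob": "1950-03-02", "diagnosis": "asthma"},
]

public_voter_roll = [
    {"name": "Jane Doe", "zip": "02138", "gender": "F", "dob": "1945-07-21"},
    {"name": "John Roe", "zip": "02139", "gender": "M", "dob": "1950-03-02"},
]

QUASI_IDENTIFIERS = ("zip", "gender", "dob")

def qi_key(record):
    # The tuple (ZIP, gender, date of birth) acts as the join key.
    return tuple(record[f] for f in QUASI_IDENTIFIERS)

def reidentify(medical, voters):
    # Index the public records by quasi-identifier tuple; a unique match
    # links a name to an "anonymous" medical record.
    index = {}
    for v in voters:
        index.setdefault(qi_key(v), []).append(v["name"])
    matches = {}
    for m in medical:
        names = index.get(qi_key(m), [])
        if len(names) == 1:  # k = 1: the combination is unique, hence identifying
            matches[names[0]] = m["diagnosis"]
    return matches

print(reidentify(anonymized_medical, public_voter_roll))
# → {'Jane Doe': 'hypertension'}
```

Here only Jane Doe shares a quasi-identifier tuple with a medical record, so her diagnosis leaks; k-anonymity defends against exactly this by ensuring every such tuple is shared by at least k records.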
The possibility of identifying people from anonymous data became one of the central concerns about online advertising and the use of cookies in web browsers.
It's a risk that hasn't gone away and now appears to be even more grave, thanks to LLMs that can automate the process of connecting the dots across ...
Copyright of this story solely belongs to theregister.co.uk.

