Snap Privacy News
https://values.snap.com/privacy/privacy-policy
Event Timeline
10 events
Snap updated its privacy policy and platform to improve transparency around data collection, placing information about user controls at the top. The revisions simplified the language, added summaries at the head of each section, and defined technical terms, in response to ongoing regulatory scrutiny in both the UK and EU markets.
Snap updated its Privacy Policy to disclose that publicly posted content on Spotlight, Public Stories, and Snap Map may be used to train its generative AI models. Users were given an opt-out toggle in settings, though they were opted in by default, and opting out did not undo training that had already been performed. The policy aimed to improve transparency about AI data practices following regulatory pressure.
The European Commission sent formal requests for information to Snapchat under the Digital Services Act, demanding details about its recommender system algorithms, their role in amplifying systemic risks, and safeguards for minors. Snap, designated as a 'very large online platform' under the DSA, faced scrutiny over age verification systems and prevention of minors accessing harmful content.
The New Mexico Attorney General filed a landmark lawsuit against Snap Inc., alleging that Snapchat's design features, including disappearing messages, Snap Map, and Quick Add, facilitate child sexual exploitation and sextortion. An undercover investigation revealed over 10,000 records linking Snap to child sexual abuse material and a vast network of exploitation.
The UK Information Commissioner's Office (ICO) concluded its investigation into Snap's My AI chatbot, the first completed UK enforcement action involving a generative AI product. The ICO determined that Snap had brought its privacy measures into compliance with UK regulations following the preliminary enforcement notice, and warned other organizations not to ignore data protection risks when deploying AI technologies.
The UK Information Commissioner's Office issued a preliminary enforcement notice against Snap over its My AI chatbot, finding that Snap failed to adequately assess the data protection risks posed by the generative AI technology, particularly to children. The ICO's investigation found a 'worrying failure' to identify privacy risks before launching the feature to users as young as 13.
Snapchat launched My AI, a ChatGPT-powered chatbot, to all users including those as young as 13. The feature accessed users' location data and age for targeted advertising while initially telling users it did not know their location. Privacy advocates raised alarms about a generative AI chatbot marketed as a 'virtual friend' being deployed to children without adequate safeguards.
Snapchat updated its privacy settings to comply with the California Privacy Rights Act (CPRA), adding a new toggle switch allowing users to opt out of both the sharing and the sale of their personal information. The update provided California users with the right to limit the use and disclosure of sensitive personal information including precise location data.
Snap Inc. agreed to pay $35 million to settle a class action lawsuit alleging violations of the Illinois Biometric Information Privacy Act (BIPA). The suit claimed Snapchat collected and stored users' facial geometry and voiceprints through its Lenses and Filters features without required disclosures or written consent. Approximately 4 million Illinois residents were eligible for the settlement.
The FTC's 2014 consent order against Snapchat remained in effect, requiring a comprehensive privacy program monitored by an independent professional for 20 years. The original settlement found Snapchat had deceived users about disappearing messages, secretly collected geolocation data on Android, harvested iOS contacts without consent, and failed to secure its Find Friends feature, leading to a breach of 4.6 million usernames and phone numbers.