
Bluesky · Policy Change · moderate · Pro-Privacy

Executive Summary

At SXSW, CEO Jay Graber announced that Bluesky is developing a consent framework inspired by robots.txt, allowing users to signal per-account or per-post preferences across four categories: generative AI training, protocol bridging, bulk datasets, and web archiving. Bluesky acknowledged the framework would be a voluntary standard without legal enforceability; third parties could still ignore user preferences.

What Happened

On March 10, 2025, at the SXSW conference in Austin, Bluesky CEO Jay Graber announced the company is developing a user consent framework for how data can be used across four categories: generative AI training, protocol bridging, bulk datasets, and web archiving. The framework, inspired by robots.txt files used by websites to signal preferences to search engines, would allow users to set preferences at the account or post level. Graber acknowledged that like robots.txt, the framework would be voluntary and not legally enforceable, meaning third parties could still choose to ignore user preferences.
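To illustrate the announced design, here is a minimal sketch of how a consent signal with account-level defaults and post-level overrides might be modeled. The field names, values, and override logic are illustrative assumptions for this article, not Bluesky's actual schema or API:

```python
# Hypothetical sketch of Bluesky's proposed consent framework.
# Category names and the "allow"/"disallow" values are assumptions,
# not the real specification.
DEFAULT_PREFERENCES = {
    "generative_ai_training": "disallow",
    "protocol_bridging": "allow",
    "bulk_datasets": "disallow",
    "web_archiving": "allow",
}

def effective_preference(account_prefs, post_prefs, category):
    """Resolve a preference for one category: a post-level setting,
    if present, overrides the account-level default (mirroring the
    announced per-account / per-post granularity)."""
    if post_prefs and category in post_prefs:
        return post_prefs[category]
    return account_prefs.get(category, "unspecified")

if __name__ == "__main__":
    account = dict(DEFAULT_PREFERENCES)
    post_override = {"web_archiving": "disallow"}  # stricter for one post

    print(effective_preference(account, post_override, "web_archiving"))
    print(effective_preference(account, None, "generative_ai_training"))
```

As with robots.txt, nothing in such a record is enforced: a scraper that honors it would check `effective_preference` before collecting a post, but a scraper that ignores it faces no technical barrier.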

Who Is Affected

All Bluesky users, who now number over 32 million, are affected by this proposed framework, as it would give them the ability to signal preferences about how their public posts are used. The framework particularly addresses concerns raised after 1 million Bluesky posts were scraped and hosted on Hugging Face for AI training in 2024. Users who migrated from X to Bluesky after X's privacy policy changed to allow third-party AI training have a direct stake in how effective this voluntary system proves to be.

Why It Matters

This represents one of the first attempts by a major social network to create a standardized consent mechanism for AI training data, establishing a potential model for the industry even though it lacks legal enforceability. The voluntary nature of the framework means its effectiveness depends entirely on whether AI companies, researchers, and other third parties choose to honor user preferences. The approach contrasts with X's policy change that allows third-party AI training by default, highlighting a significant philosophical divide in how social platforms handle user data in the AI era.

What You Should Do

Bluesky users should monitor the GitHub repository where the proposal is being developed to understand how the framework will work when implemented. Once the framework launches, users should review their account settings and configure their preferences for AI training, bridging, datasets, and archiving according to their comfort level. Users should also understand that these settings are voluntary signals rather than enforceable restrictions, so content posted publicly on Bluesky may still be scraped regardless of the preferences they set.

AI-Assisted

Event summaries are generated by Claude AI from verified sources and reviewed by humans before publication.
