GitHub updates Copilot policy: private-repo interactions train models by default on Apr. 24
GitHub said Copilot Free, Pro, and Pro+ interaction data will train models by default from Apr. 24 unless users opt out, while private repo content at rest stays excluded. Teams should review per-user enforcement, enterprise coverage, and repo privacy settings before the change lands.

TL;DR
- GitHub’s policy update says that starting Apr. 24, interaction data from Copilot Free, Pro, and Pro+ accounts will be used for model training by default unless users opt out.
- The scope is narrower than “all private repos”: GitHub’s policy update says it will not train on private repository content at rest, while the HN discussion quotes GitHub saying training covers Copilot interaction data such as inputs, outputs, code snippets, and context.
- Copilot Business and Enterprise are listed as unaffected in GitHub’s official post, but the main HN thread shows teams immediately asking about org-level enforcement and which accounts are actually covered.
- Practitioner reaction centered on the opt-in-by-default stance and per-user controls, with Gergely Orosz’s thread pointing to the privacy settings path and a second HN discussion adding compliance and threat-model concerns.
What exactly changes on Apr. 24?
Updates to GitHub Copilot interaction data usage policy
371 upvotes · 161 comments
GitHub’s policy update says that from Apr. 24, 2026, it may use Copilot interaction data from Free, Pro, and Pro+ accounts to “train and enhance AI models” unless the user opts out. Covered data includes inputs, outputs, code snippets, and associated context. GitHub also says previous opt-outs are preserved and that data may be shared with affiliates such as Microsoft, which the same official post characterizes as “not third parties.”
Discussion around If you don't opt out by Apr 24 GitHub will train on your private repos
664 upvotes · 291 comments
The most important boundary is that GitHub distinguishes between private repo content at rest and what users send through Copilot. In the HN discussion, GitHub VP Martin Woodward says “we do not train on private repo data at rest, just interaction data with Copilot,” and adds that this was not a switch from opt-in to opt-out so much as a new training use for usage data that previously was not used that way. That narrows the claim, but for engineers the operational effect is still that prompts, completions, snippets, and surrounding context from private-repo work can enter the training pipeline if a covered user leaves the default in place.
What are teams still trying to verify?
If you don't opt out by Apr 24 GitHub will train on your private repos
664 upvotes · 291 comments
The immediate engineering question is enforcement. The main HN thread captures a practical complaint from teams that “the only setting I'm seeing is on a per-user basis,” with no obvious org-wide disable available. That matters because the policy split is by subscription tier: GitHub’s official post says Business and Enterprise are unaffected, but teams still have to sort out mixed-seat environments, contractor accounts, and whether personal Copilot subscriptions are touching company code.
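For teams auditing mixed-seat environments, GitHub’s REST API does expose which accounts hold an org-managed Copilot seat (`GET /orgs/{org}/copilot/billing/seats`) alongside the standard members listing (`GET /orgs/{org}/members`). The sketch below cross-references the two to flag org members with no org-managed seat; those are the accounts whose Copilot use, if any, would run on a personal plan and fall under the new default rather than the Business/Enterprise carve-out. The endpoint paths are from GitHub’s documented Copilot user-management API; the function names, token handling, and omission of pagination are illustrative, not a complete audit tool.

```python
# Sketch: flag org members who have no org-managed Copilot seat.
# Endpoints assumed from GitHub's documented REST API:
#   GET /orgs/{org}/members
#   GET /orgs/{org}/copilot/billing/seats
# Helper names (fetch_json, find_unseated, audit_org) are illustrative.
import json
import urllib.request

API = "https://api.github.com"

def fetch_json(path: str, token: str):
    """Fetch one page of a GitHub REST endpoint (pagination omitted)."""
    req = urllib.request.Request(
        f"{API}{path}",
        headers={
            "Authorization": f"Bearer {token}",
            "Accept": "application/vnd.github+json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

def find_unseated(members, seats):
    """Return logins of members with no org-managed Copilot seat.

    members: list of {"login": ...} objects from /orgs/{org}/members
    seats:   the "seats" array from /orgs/{org}/copilot/billing/seats,
             each entry containing an {"assignee": {"login": ...}} object
    """
    seated = {s["assignee"]["login"] for s in seats}
    return sorted(m["login"] for m in members if m["login"] not in seated)

def audit_org(org: str, token: str):
    """List org members whose Copilot use would not be org-managed."""
    members = fetch_json(f"/orgs/{org}/members", token)
    seats = fetch_json(f"/orgs/{org}/copilot/billing/seats", token).get("seats", [])
    return find_unseated(members, seats)
```

A flagged account is not proof of personal Copilot use, only a gap in org-managed coverage worth checking against the per-user opt-out before the deadline.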
Community reporting also surfaced how the control appears in product settings. Orosz’s thread points users to Settings → Privacy and shares a screenshot of the “Allow GitHub to use my data for AI model training” toggle, plus a note that changes can take “up to 30 minutes” to take effect. Separate HN comments in the second discussion sharpen the broader concern: one commenter argues the wording makes this effectively Microsoft data collection, while others raise GDPR timing questions and argue that agent systems create security exposure “by design.” Those are reactions, not policy text, but they show where review is likely to land inside security and compliance teams.