Unless Free, Pro, and Pro+ users opt out by Apr. 24, GitHub will begin using Copilot interaction data for model training, reversing the prior default of excluding it. The discussion focused on shared-repo edge cases: prompts, accepted outputs, filenames, and navigation traces can cross team boundaries even when repository data at rest is excluded.

Posted by vmg12
GitHub is changing Copilot data usage so Free/Pro/Pro+ users must explicitly opt out if they do not want Copilot interaction data used for model training. The thread’s engineering-relevant nuance is that the policy is about Copilot telemetry and code context, not repository contents at rest, but it still raises questions for shared repos, team workflows, and how consent is enforced across multiple collaborators.
The change covers Copilot usage data, not a blanket reclassification of all private code. In the thread, GitHub staff and commenters draw a line between private repo data at rest and Copilot interaction data, with one quoted breakdown listing "outputs accepted or modified," "inputs sent to Copilot," plus code context, comments, filenames, repository structure, and navigation patterns as data that may be collected for training.
That distinction matters operationally because Copilot sessions can still expose code and metadata from active work even if repository contents are not harvested wholesale. The same discussion points to the control living under GitHub privacy settings, labeled “Allow GitHub to use my data for AI model training,” which makes this an account-level opt-out engineers may need to verify explicitly in shared development environments.
Posted by vmg12
Today’s new discussion is narrow but concrete: one commenter asks how the policy works when a repo owner has opted out but a collaborator is opted in and uses Copilot on that code, raising the question of whose consent governs training on submitted code. That narrows the real issue to consent boundaries inside shared repos rather than the headline claim about private repositories overall, and it surfaces an unresolved team-workflow edge case rather than a policy answer. The other fresh reply is a meta observation that online discussions lack the back-and-forth that would normally clear up confusion, so it adds little substantive policy detail.
Posted by vmg12
Thread discussion highlights:

- martinwoodward on the Copilot training opt-out: "For users of Free, Pro and Pro+ Copilot, if you don’t opt out then we will start collecting usage data of Copilot for use in model training... we do not train on private repo data at rest, just interaction data with Copilot."
- maxloh on interaction data vs. private repo contents: "As long as you aren't using Copilot, your code should be safe (according to GitHub)... [data includes] outputs accepted or modified, inputs sent to Copilot, code context, comments/docs, file names, repository structure, and navigation patterns."
- pokot0 on the shared-repo consent edge case: "How does it work if I own a repository (opt out, don't use copilot) and I give access to someone else (user is opted in and uses copilot). Do you train on his submissions of my code?"