ClawSweeper closes 4,000 OpenClaw issues with 50 Codex agents in one day
Steipete’s maintainer bot ran 50 Codex agents in parallel and closed about 4,000 OpenClaw issues in a day. The cleanup pushed into rate limits, so use the README dashboard and Project Clownfish clustering to track large agent sweeps.

TL;DR
- steipete's launch post said ClawSweeper ran 50 Codex agents in parallel, around the clock, and closed about 4,000 OpenClaw issues in a single day.
- The first pass was not abstract triage. As steipete's follow-up put it, the system started by closing items that were already fixed, then moved into intent-based clustering.
- steipete's README note framed the repo README as the live status surface, while the public clawsweeper repo turned the sweep itself into a visible maintainer workflow.
- vincent_koc's Project Clownfish post added a second cleanup layer: clustering related issues and PRs, with the code published in the Project Clownfish repo.
- The sweep hit GitHub limits hard enough that vincent_koc's weekend update said GitHub upgraded the maintainer team to Enterprise, while steipete's reply joked they were melting the servers.
You can browse the public clawsweeper repo and check the separate Project Clownfish repo. The weirdly charming part: steipete's favorite detail was not a bespoke dashboard at all, it was the README updating itself as the agents worked. Later posts added two more useful bits. The maintainer workflow note split the cleanup into a pre-clean and a clustering pass, and GitHub's Enterprise upgrade shows the infra side of what happens when a maintainer project suddenly starts acting like a small load test.
ClawSweeper
The core claim here is blunt: 50 Codex agents running in parallel, continuously, against the OpenClaw backlog. According to steipete's post, that first day alone closed around 4,000 issues, with a few thousand more still queued behind rate limits.
That gives ClawSweeper a very specific maintainer shape. It is not presented as a coding copilot sitting inside one PR; it is a backlog sweeper operating across issues and pull requests at repository scale.
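The posts don't describe ClawSweeper's internals, but the core pattern they describe, many agents working a shared queue with a fixed parallelism cap, is easy to sketch. Here is a minimal illustration using Python's asyncio with a semaphore as the 50-agent cap; `sweep`, `demo_handler`, and the even/odd close rule are invented for the example, not ClawSweeper's actual logic.

```python
import asyncio

async def sweep(issue_ids, handle, max_agents=50):
    """Run `handle` over every issue, with at most `max_agents` in flight."""
    sem = asyncio.Semaphore(max_agents)
    closed = []

    async def worker(issue_id):
        async with sem:  # each "agent" occupies one slot while it works
            if await handle(issue_id):
                closed.append(issue_id)

    await asyncio.gather(*(worker(i) for i in issue_ids))
    return closed

# Stand-in for a Codex agent: pretend even-numbered issues are already fixed.
async def demo_handler(issue_id):
    await asyncio.sleep(0)  # yield control, simulating real work
    return issue_id % 2 == 0

closed = asyncio.run(sweep(range(10), demo_handler, max_agents=5))
print(sorted(closed))  # → [0, 2, 4, 6, 8]
```

The semaphore is what keeps "50 agents in parallel" from becoming "one request per open issue all at once", which matters once rate limits enter the picture.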
README dashboard
The nicest implementation detail is also the most low-tech one. In steipete's note, the running system reports progress by updating the README instead of sending maintainers to a separate dashboard.
That choice makes the repo itself the control plane. Anyone landing on the public clawsweeper repo can see the sweep where the work is already happening.
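A README-as-dashboard usually means rewriting a marked region of the file on each commit. The posts don't show ClawSweeper's implementation, so this is a hedged sketch of one common approach: HTML-comment markers delimiting a status block that a bot regenerates in place. The marker names and `update_readme` helper are invented for illustration.

```python
import re

MARK_START = "<!-- sweep-status -->"
MARK_END = "<!-- /sweep-status -->"

def update_readme(readme: str, closed: int, queued: int) -> str:
    """Replace the status block between the markers with fresh numbers."""
    status = f"{MARK_START}\nClosed: {closed} | Queued: {queued}\n{MARK_END}"
    pattern = re.compile(
        re.escape(MARK_START) + r".*?" + re.escape(MARK_END), re.DOTALL
    )
    return pattern.sub(status, readme)

readme = (
    "# OpenClaw\n"
    "<!-- sweep-status -->\nClosed: 0 | Queued: 9000\n<!-- /sweep-status -->\n"
)
print(update_readme(readme, 4000, 3000))
```

Committing the rewritten file on a timer is all it takes: every README view then doubles as a progress check, with no dashboard to host.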
Project Clownfish
ClawSweeper was only the pre-clean. steipete's workflow summary says the first strike was closing what was already fixed, while vincent_koc's post introduced Project Clownfish as the second strike, grouping relevant issues and PRs by intent.
That turns the operation from bulk closure into backlog restructuring. The public Project Clownfish repo suggests the maintainers wanted that clustering logic exposed, not buried inside a one-off internal script.
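The posts say Clownfish groups issues and PRs "by intent" but not how. As a toy stand-in for whatever similarity measure the real repo uses, here is a greedy clustering pass over issue titles using word-level Jaccard overlap; the `cluster_by_intent` function, the threshold, and the sample titles are all assumptions made for illustration.

```python
def cluster_by_intent(titles, threshold=0.5):
    """Greedy clustering: join the first cluster whose seed title shares
    enough words (Jaccard similarity) with this one, else start a new one."""
    def tokens(t):
        return set(t.lower().split())

    clusters = []  # list of (seed_token_set, member_titles)
    for title in titles:
        toks = tokens(title)
        for seed, members in clusters:
            if len(toks & seed) / len(toks | seed) >= threshold:
                members.append(title)
                break
        else:
            clusters.append((toks, [title]))
    return [members for _, members in clusters]

issues = [
    "crash on startup",
    "crash on startup with empty config",
    "dark mode toggle broken",
]
print(cluster_by_intent(issues, threshold=0.4))
# → [['crash on startup', 'crash on startup with empty config'],
#    ['dark mode toggle broken']]
```

A real system would likely use embeddings rather than token overlap, but the output shape is the same: duplicate-ish reports collapse into one cluster a maintainer can act on once.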
Rate limits
The posts make clear that the bottleneck was not finding work, it was getting enough GitHub headroom to keep running. vincent_koc's update says GitHub and Ashley Wolf upgraded the whole maintainer team to Enterprise over the weekend so the sweep could continue without hard rate limits.
That lines up with the original post, which already mentioned rough rate limits, and with steipete's follow-up, which thanked GitHub while joking about melting their servers.
What actually moved
The visible effect was not just a vanity closure number. In a reposted community reaction, one OpenClaw watcher said the project went from almost 9,000 open PRs at bedtime to nearly half that by morning.
And a reposted user report says at least one specific macOS auto-update issue they had filed was actually resolved by the sweep. That is a more interesting signal than raw close counts, because it shows the run touched live maintainer pain, not just stale backlog bookkeeping.