AI Primer

ClawSweeper closes 4,000 OpenClaw issues with 50 Codex agents in one day

Steipete’s maintainer bot ran 50 Codex agents in parallel and closed about 4,000 OpenClaw issues in a day. The cleanup pushed into rate limits, so use the README dashboard and Project Clownfish clustering to track large agent sweeps.


TL;DR

You can browse the public clawsweeper repo and the separate Project Clownfish repo. The weirdly charming part is that steipete's favorite detail was not a bespoke dashboard at all; it was the README updating itself as the agents worked. Later posts added two more useful bits: the maintainer workflow note split the cleanup into a pre-clean pass and a clustering pass, and GitHub's Enterprise upgrade shows the infra side of a maintainer project that suddenly starts acting like a small load test.

ClawSweeper

The core claim here is blunt: 50 Codex agents running in parallel, continuously, against the OpenClaw backlog. According to steipete's post, that first day alone closed around 4,000 issues, with a few thousand more still queued behind rate limits.

That gives ClawSweeper a very specific maintainer shape. It is not presented as a coding copilot sitting inside one PR; it is a backlog sweeper operating across issues and pull requests at repository scale.

README dashboard

The nicest implementation detail is also the most low-tech one. In steipete's note, the running system reports progress by updating the README instead of sending maintainers to a separate dashboard.

That choice makes the repo itself the control plane. Anyone landing on the public clawsweeper repo can see the sweep where the work is already happening.
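The posts don't show ClawSweeper's dashboard code, but the README-as-control-plane idea reduces to something small: rewrite a marked block in README.md on every progress tick, then commit. A minimal sketch, assuming hypothetical `sweep-status` HTML-comment markers (invented here for illustration):

```python
import re

# Match everything between the (hypothetical) status markers, across newlines.
STATUS_RE = re.compile(
    r"(<!-- sweep-status -->).*?(<!-- /sweep-status -->)", re.DOTALL
)

def render_status(readme: str, closed: int, queued: int) -> str:
    """Replace the marked status block with fresh sweep counts."""
    block = rf"\g<1>{chr(10)}Closed: {closed} | Queued: {queued}{chr(10)}\g<2>"
    return STATUS_RE.sub(block, readme)

readme = "# ClawSweeper\n<!-- sweep-status -->\nstale\n<!-- /sweep-status -->\n"
print(render_status(readme, 4000, 2500))
```

The appeal of the pattern is that the "dashboard" ships with the repo: the agents' own commits are the refresh mechanism, and anyone watching the repo sees progress without extra infrastructure.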

Project Clownfish

ClawSweeper was only the pre-clean. steipete's workflow summary says the first strike was closing what was already fixed, while vincent_koc's post introduced Project Clownfish as the second strike, grouping relevant issues and PRs by intent.

That turns the operation from bulk closure into backlog restructuring. The public Project Clownfish repo suggests the maintainers wanted that clustering logic exposed, not buried inside a one-off internal script.
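The posts don't describe how Project Clownfish actually clusters, so the following is only a toy sketch of what "grouping by intent" means in principle: map each issue title to an intent label, then bucket by label. The keyword table and titles are invented for illustration.

```python
from collections import defaultdict

# Invented keyword -> intent table; a real system would use something far
# richer (embeddings, an LLM classifier), not substring matching.
INTENT_KEYWORDS = {
    "crash": "stability",
    "update": "auto-update",
    "login": "auth",
}

def intent_of(title: str) -> str:
    lowered = title.lower()
    for keyword, intent in INTENT_KEYWORDS.items():
        if keyword in lowered:
            return intent
    return "uncategorized"

def cluster(titles: list[str]) -> dict[str, list[str]]:
    """Bucket issue titles by their inferred intent label."""
    groups: dict[str, list[str]] = defaultdict(list)
    for title in titles:
        groups[intent_of(title)].append(title)
    return dict(groups)

issues = ["Crash on launch", "Auto-update loops forever", "Login token expires"]
print(cluster(issues))
```

Even in this crude form, the payoff is visible: a maintainer triages three buckets instead of three thousand individual threads, which is the restructuring the second strike is after.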

Rate limits

The posts make clear that the bottleneck was not finding work, it was getting enough GitHub headroom to keep running. vincent_koc's update says GitHub and Ashley Wolf upgraded the whole maintainer team to Enterprise over the weekend so the sweep could continue without hard rate limits.

That lines up with the original post, which already mentioned rough rate limits, and with steipete's follow-up, which thanked GitHub while joking about melting their servers.
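For a sense of why 50 parallel agents hit this wall: GitHub's REST API reports remaining quota in the `X-RateLimit-Remaining` and `X-RateLimit-Reset` (Unix epoch) response headers. A minimal sketch of the arithmetic an agent loop might do with them; the `floor` threshold and function are hypothetical, not ClawSweeper's actual logic:

```python
def backoff_seconds(headers: dict, now: float, floor: int = 50) -> float:
    """Seconds to pause before the next API call; 0 if headroom remains.

    Assumes GitHub-style rate-limit headers; `floor` is an invented safety
    margin so agents stop before the quota hits zero.
    """
    remaining = int(headers.get("X-RateLimit-Remaining", "1"))
    reset_at = float(headers.get("X-RateLimit-Reset", now))
    if remaining > floor:
        return 0.0
    return max(0.0, reset_at - now)

# With 10 calls left and a reset 30 s away, an agent should sleep ~30 s.
print(backoff_seconds(
    {"X-RateLimit-Remaining": "10", "X-RateLimit-Reset": "1030"}, now=1000.0
))
```

Multiply that pause across 50 agents sharing one quota and the "getting enough GitHub headroom" framing makes sense: without the Enterprise upgrade, most of the fleet would spend its time sleeping on the reset clock.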

What actually moved

The visible effect was not just a vanity closure number. In a reposted community reaction, one OpenClaw watcher said the project went from almost 9,000 open PRs at bedtime to nearly half that by morning.

And a reposted user report says at least one specific macOS auto-update issue they had filed was actually resolved by the sweep. That is a more interesting signal than raw close counts, because it shows the run touched live maintainer pain, not just stale backlog bookkeeping.