OpenAI Codex adds Chronicle screen memories in macOS Pro preview
OpenAI added Chronicle, a Codex preview that turns recent screen context into reusable memories for errors, files, docs, and workflows. The macOS-only, Pro-only feature stores memories on device unencrypted, can burn through rate limits quickly, and carries prompt-injection risk, so weigh those caveats before relying on it.

TL;DR
- OpenAI added Chronicle, an opt-in Codex research preview that turns recent screen context into reusable memories, so prompts like "this" or "that" can resolve against what you were just doing, according to OpenAIDevs' launch thread and Thomas Sottiaux's preview note.
- Chronicle runs background agents on screen captures, which means it can burn through Codex rate limits quickly, as OpenAIDevs' usage note and testingcatalog's settings screenshot both highlighted.
- The first rollout is narrow: macOS only, Pro only, and unavailable in the EU, UK, and Switzerland, per OpenAIDevs' availability note and kevinkern's setup summary.
- The main caveats are unusually blunt for a product preview: screen captures are temporarily stored on device, memories are stored on device unencrypted, and screen content raises prompt-injection risk, according to OpenAIDevs' storage note, TheRealAdamG quoting the docs, and testingcatalog's modal capture.
- Chronicle also extends a broader Codex shift toward long-running desktop workflows, where persistent memory, scheduled automations, and computer use are starting to connect, as WesRoth's orchestration summary and HamelHusain's computer-use thread suggested.
You can jump straight to OpenAI's Chronicle doc, see the enablement steps inside Codex, and read the unusually specific warning modal in testingcatalog's screenshot. The useful weird bit is that OpenAI is pitching Chronicle as memory infrastructure, not a passive screen recorder: kevinkern's side-by-side screenshot shows Codex explicitly invoking a Chronicle skill to inspect recent context before answering a vague bug report.
Chronicle
Chronicle sits on top of the Codex memories preview OpenAI shipped the week before. OpenAIDevs' launch thread framed the new piece as a way to improve those memories with recent screen context, so Codex can pick up an error on screen, a doc you have open, or a project you touched weeks ago without you restating the setup.
The distinction OpenAI keeps making is simple:
- Memories store what Codex has already learned about your work.
- Chronicle helps build better memories from what is visible on screen.
The combined effect is lower prompt overhead for follow-up work, according to kevinkern's explainer and TheRealAdamG quoting the docs.
That sounds like small UX polish, but it changes the shape of the interaction. In gdb's hands-on reaction, the feature felt "surprisingly magical" because Codex could carry recent visual context forward instead of treating each prompt as a cold start.
Screen context
OpenAI says Chronicle runs background agents that build memories from screen captures. OpenAIDevs' implementation note is the key line here, because it makes Chronicle less like static recall and more like a continuously running context builder.
The best concrete example in the evidence pool is kevinkern's screenshot. A user asks, "Why is this failing?" and Codex responds by checking recent screen context before guessing, then jumps into a GitHub Actions run. The interface labels that step as "Using skill Chronicle," which implies Chronicle is exposed inside Codex as an explicit retrieval or inspection tool, not just a hidden background cache.
That also explains why OpenAI keeps warning about cost. Chronicle is not just remembering text you typed. It is pulling image inputs from the desktop and running them often enough that testingcatalog's demo and testingcatalog's modal capture both echoed the same warning about fast rate-limit consumption.
Access
The rollout is tightly scoped.
Availability and setup break down like this:
- Plan: Pro subscription required, per Thomas Sottiaux's preview note.
- Platform: macOS only at launch, per OpenAIDevs' availability note.
- Regions excluded: EU, UK, and Switzerland, per OpenAIDevs' availability note and kevinkern's summary.
- Permissions: macOS Screen Recording and Accessibility permissions required, according to TheRealAdamG quoting the docs.
- Enable path: Settings → Personalization → Memories on → Chronicle on, per Thomas Sottiaux's step list.
OpenAI employees also described it as early and expensive. Thomas Sottiaux's preview note called out token usage directly, while embirico's launch reaction described the whole thing as "super experimental."
Privacy and prompt injection
The warning copy around Chronicle is more interesting than the launch copy. testingcatalog's settings screenshot captures the actual modal users see before enabling it, and the language is direct about privacy, prompt injection, and local storage.
The relevant mechanics are:
- Screen captures are processed on OpenAI's servers and then deleted, according to the text visible in testingcatalog's screenshot.
- Screen captures are temporarily stored on device, per OpenAIDevs' storage note.
- Memories are stored on device and unencrypted, per TheRealAdamG quoting the docs and testingcatalog's modal capture.
- Other apps on the same computer may access those files, according to OpenAIDevs' storage note.
- Malicious instructions visible on a web page could be followed by Codex, per testingcatalog's prompt-injection warning.
- Memories carried into future sessions may be used to improve models if the relevant ChatGPT setting allows it, as testingcatalog's screenshot states and matvelloso's reply called out.
OpenAI also says Chronicle has no access to microphone or system audio, a detail visible in testingcatalog's screenshot. That matters because the product is much closer to desktop vision plus memory than to full-device activity logging.
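The "other apps may access those files" caveat follows directly from the storage model: unencrypted files owned by your user account are readable by any process running as that user. A minimal sketch of that point, assuming a hypothetical storage directory (Chronicle's actual on-device path is not documented in the launch material):

```python
from pathlib import Path

def readable_memory_files(directory: Path) -> list[Path]:
    """Return files under `directory` that an ordinary same-user process
    can open and read -- i.e. files with no protection beyond standard
    user-level file permissions."""
    found = []
    for path in sorted(directory.rglob("*")):
        if path.is_file():
            try:
                path.read_bytes()  # any process running as this user can do this
                found.append(path)
            except (PermissionError, OSError):
                pass
    return found

# Hypothetical location -- substitute the real directory if it becomes documented.
memories_dir = Path.home() / "Library" / "Application Support" / "Codex"
if memories_dir.is_dir():
    for f in readable_memory_files(memories_dir):
        print("readable by any same-user process:", f)
```

On macOS, sandboxing and TCC can restrict some apps, but the baseline for plain unencrypted files is exactly this: same-user read access, which is why the modal's warning is worth taking literally.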
Codex workflow memory
Chronicle lands in the middle of a larger Codex push toward durable desktop workflows. Before this preview, WesRoth's earlier Codex summary had already surfaced scheduled automations, thread reuse, and persistent memory as part of what he called long-term workflow orchestration.
That older evidence fills in why Chronicle exists at all. HamelHusain's five-point thread described Codex computer use as a way to operate Mac apps without APIs, drive browser tasks visually, reflect on completed work to create skills, and schedule follow-up automations. reach_vb's examples added background jobs like iMessage-triggered workflows, end-of-day updates, and form filling from notes.
Chronicle plugs directly into that stack. Persistent memory tells Codex what tends to matter, computer use gives it a way to act on desktop state, and Chronicle gives it a fresh visual trail of what you were doing when you ask it to pick a task back up. That makes the preview look less like a standalone memory feature and more like the context layer for OpenAI's emerging desktop agent.