AI Primer

OpenAI Codex adds Chronicle screen memories in macOS Pro preview

OpenAI added Chronicle, a Codex preview that turns recent screen context into reusable memories for errors, files, docs, and workflows. The feature is limited to Pro subscribers on macOS, stores its memory locally and unencrypted, and can burn rate limits quickly, so weigh the prompt-injection risk before relying on it.


TL;DR

You can jump straight to OpenAI's Chronicle doc, see the enablement steps inside Codex, and read the unusually specific warning modal in testingcatalog's screenshot. The telling detail is that OpenAI is pitching Chronicle as memory infrastructure, not a passive screen recorder: kevinkern's side-by-side screenshot shows Codex explicitly invoking a Chronicle skill to inspect recent context before answering a vague bug report.

Chronicle

Chronicle sits on top of the Codex memories preview OpenAI shipped the week before. OpenAIDevs' launch thread framed the new piece as a way to improve those memories with recent screen context, so Codex can pick up an error on screen, a doc you have open, or a project you touched weeks ago without you restating the setup.

The distinction OpenAI keeps making is simple:

  • Memories store what Codex has already learned about your work.
  • Chronicle helps build better memories from what is visible on screen.

The combined effect is lower prompt overhead for follow-up work, according to kevinkern's explainer and TheRealAdamG quoting the docs.

That sounds like small UX polish, but it changes the shape of the interaction. In gdb's hands-on reaction, the feature felt "surprisingly magical" because Codex could carry recent visual context forward instead of treating each prompt as a cold start.

Screen context

OpenAI says Chronicle runs background agents that build memories from screen captures. OpenAIDevs' implementation note is the key line here, because it makes Chronicle less like static recall and more like a continuously running context builder.

The best concrete example in the evidence pool is kevinkern's screenshot. A user asks, "Why is this failing?" and Codex responds by checking recent screen context before guessing, then jumps into a GitHub Actions run. The interface labels that step as "Using skill Chronicle," which implies Chronicle is exposed inside Codex as an explicit retrieval or inspection tool, not just a hidden background cache.

That also explains why OpenAI keeps warning about cost. Chronicle is not just remembering text you typed. It is pulling image inputs from the desktop and running them often enough that testingcatalog's demo and testingcatalog's modal capture both echoed the same warning about fast rate-limit consumption.
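OpenAI has not published Chronicle's internals, but the shape described above, a background loop that turns captures into memories while trying not to waste rate-limited model calls, can be sketched in a few lines. Everything here (`MemoryStore`, `add_from_capture`, the `summarize` stand-in for a model call) is hypothetical illustration, not Chronicle's actual API:

```python
import hashlib
import time
from dataclasses import dataclass, field

# Hypothetical sketch of a capture-to-memory loop; not Chronicle's
# real implementation, just the shape OpenAI's notes describe.

@dataclass
class Memory:
    summary: str       # what the "memory" retains from the capture
    source_hash: str   # fingerprint of the capture it came from
    created_at: float


@dataclass
class MemoryStore:
    """Local, unencrypted store, mirroring the modal's warning."""
    memories: list = field(default_factory=list)
    seen: set = field(default_factory=set)

    def add_from_capture(self, screen_text: str, summarize) -> bool:
        # Deduplicate identical captures so repeated polls of an idle
        # screen do not burn extra (rate-limited) model calls.
        digest = hashlib.sha256(screen_text.encode()).hexdigest()
        if digest in self.seen:
            return False
        self.seen.add(digest)
        self.memories.append(
            Memory(summarize(screen_text), digest, time.time())
        )
        return True


store = MemoryStore()
summarize = lambda text: text.splitlines()[0]  # stand-in for a model call
store.add_from_capture("TypeError: x is undefined\ntraceback...", summarize)  # True
store.add_from_capture("TypeError: x is undefined\ntraceback...", summarize)  # False: deduped
```

The dedup gate is the interesting design choice: without it, a polling capture loop would re-summarize an unchanged screen over and over, which is exactly the rate-limit burn the warnings describe.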

Access

The rollout is tightly scoped.

Availability is narrow: Chronicle is a preview inside the Codex macOS app, limited to Pro subscribers, and it must be switched on explicitly in settings before it captures anything.

OpenAI employees also described it as early and expensive. Thomas Sottiaux's preview note called out token usage directly, while embirico's launch reaction described the whole thing as "super experimental."

Privacy and prompt injection

The warning copy around Chronicle is more interesting than the launch copy. testingcatalog's settings screenshot captures the actual modal users see before enabling it, and the language is direct about privacy, prompt injection, and local storage.

The relevant mechanics, per the modal and OpenAI's notes:

  • Chronicle's memories are stored locally and unencrypted, so captured screen content sits on disk in the clear.
  • Because it ingests whatever is on screen, prompt injection is a live risk: hostile text in a visible window can make its way into a memory.
  • It has no access to the microphone or system audio, a detail visible in testingcatalog's screenshot.

That last point matters because it keeps the product much closer to desktop vision plus memory than to full-device activity logging.

Codex workflow memory

Chronicle lands in the middle of a larger Codex push toward durable desktop workflows. Before this preview, WesRoth's earlier Codex summary had already surfaced scheduled automations, thread reuse, and persistent memory as part of what he called long-term workflow orchestration.

That older evidence fills in why Chronicle exists at all. HamelHusain's five-point thread described Codex computer use as a way to operate Mac apps without APIs, drive browser tasks visually, reflect on completed work to create skills, and schedule follow-up automations. reach_vb's examples added background jobs like iMessage-triggered workflows, end-of-day updates, and form filling from notes.

Chronicle plugs directly into that stack. Persistent memory tells Codex what tends to matter, computer use gives it a way to act on desktop state, and Chronicle gives it a fresh visual trail of what you were doing when you ask it to pick a task back up. That makes the preview look less like a standalone memory feature and more like the context layer for OpenAI's emerging desktop agent.

Further reading

Discussion across the web

Where this story is being discussed, in original context.

On X · 6 threads
TL;DR · 2 posts
Chronicle · 3 posts
Screen context · 1 post
Access · 3 posts
Privacy and prompt injection · 2 posts
Codex workflow memory · 1 post