ChatGPT Pro users report GPT-5.4 Pro with faster SVG and UI generation
Multiple Pro users said GPT-5.4 Pro started producing richer front-end and SVG outputs with much faster runtimes, despite no formal OpenAI announcement. The reports matter because they affect whether long visual and code-generation tasks are practical inside ChatGPT.

TL;DR
- petergostev's Golden Gate test and petergostev's follow-up both say ChatGPT Pro started returning much stronger visual and code-heavy outputs in roughly 20 minutes, down from the 60 to 80 minute waits he associated with Pro Extended.
- Speed claims were not limited to one workflow: kevinkern's screenshot shows a 65,000 token review finishing in 5 minutes 28 seconds, while chetaslua's game-making run shows a 13 minute build for a playable 3D office brawler.
- The clearest capability jump in the evidence is one-shot graphics and front-end generation, with chetaslua's SVG demo, daniel_mac8's unicorn SVG, and chetaslua's mockup-to-frontend post all framing the new behavior as notably better than prior Pro runs.
- Nobody in this evidence set points to a formal OpenAI announcement. Instead, AILeaksAndNews and replies like petergostev's reply describe the behavior as a quiet test or unannounced update.
- The spillover question is whether these speedups reach coding surfaces beyond ChatGPT: petergostev's follow-up explicitly wonders about Codex, and getsome_air's Codex note says GPT-5.4 Fast mode had already appeared in Air two days earlier.
You can watch petergostev's Golden Gate generation rotate into a coherent 3D scene, open the SVG codepen from chetaslua's one-shot SVG post, and compare that with chetaslua's UI cloning example, which argues the model improves sharply when fed a reference image. The weirdest pattern is how many of the strongest demos are not chatty benchmark posts at all, but straight code artifacts, including a voxel pelican scene, a Pokemon game, and a long-context review screenshot.
Runtime
The runtime shift is the most concrete change in the evidence.
In the first post, petergostev said Pro generations now land in about 20 minutes instead of the 60 to 80 minute waits he saw with Pro Extended. His later side-by-side note in petergostev's follow-up tightens that to a 3 to 4x speedup, while still describing the outputs as richer and more coherent.
Other users reported shorter absolute runtimes on different tasks. kevinkern's screenshot shows a 65,000 token review completing in 5 minutes 28 seconds, and chetaslua's voxel-art post says a detailed HTML scene arrived in under 11 minutes.
SVG
The SVG demos are where users started acting like something under the hood had changed.
The pattern is simple:
- chetaslua's one-shot SVG post says Pro solved a one-shot SVG, with the result published as live code.
- daniel_mac8's unicorn SVG calls a generated bicycle-riding unicorn the best SVG he had seen from any model.
- kimmonismus's controller SVG post makes the same claim about a one-shot game-controller SVG.
- A day later, petergostev's three.js reply noted that one impressive output was still just SVG, a useful caveat because some of the hype is really about 2D asset generation, not full interactive 3D.
That caveat matters because the demos are strong, but they are not all the same task. A clean one-shot SVG is a narrower win than a live 3D app.
Mockups
The front-end posts point to a narrower but repeatable workflow: give the model an image first.
Across three posts, chetaslua describes almost the same recipe:
- Start with a screenshot, image, or Figma-like mockup, per the first workflow post.
- Ask for a single HTML block, as in the UI cloning example.
- Let the model infer styling from the image, which the follow-up demo frames as the main reason the output looks better than text-only prompting.
That is a narrower claim than "frontend solved." It is closer to image-conditioned UI recreation getting much better inside ChatGPT Pro.
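The image-first recipe above can be sketched as an API-style request. This is a hypothetical illustration, not anything shown in the posts: the model id "gpt-5.4-pro" and the prompt wording are placeholders inferred from the reports, and only the payload shape follows OpenAI's documented Chat Completions image-input format.

```python
import base64

def build_mockup_request(image_bytes: bytes, model: str = "gpt-5.4-pro") -> dict:
    """Build a Chat Completions-style payload pairing a mockup image with a
    request for one self-contained HTML block. The model id is a placeholder
    taken from user reports, not a confirmed API identifier."""
    b64 = base64.b64encode(image_bytes).decode("ascii")
    return {
        "model": model,
        "messages": [{
            "role": "user",
            "content": [
                # Ask for a single HTML block, per the recipe above.
                {"type": "text",
                 "text": ("Recreate this mockup as a single self-contained "
                          "HTML file: inline CSS and JS, no external assets. "
                          "Infer colors, spacing, and fonts from the image.")},
                # The screenshot or mockup goes in as a base64 data URL.
                {"type": "image_url",
                 "image_url": {"url": f"data:image/png;base64,{b64}"}},
            ],
        }],
    }
```

The ordering mirrors the workflow posts: the styling instruction is deliberately thin, since the claimed improvement is the model inferring visual detail from the image rather than from prompt text.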
HTML toys
The model also looks better at turning a single prompt into self-contained browser projects.
The examples in the evidence split into a few buckets:
- Single-file visuals, like the voxel pelican and the interactive Pokeball.
- Single-file games, like the office punch-em-up build and the Pokemon game.
- Image-to-style translation, where chetaslua's mockup reuse test says he took an image from another post and used it to steer the result.
These are not benchmark numbers. They are artifacts. The common thread is that users keep emphasizing one HTML block, playable output, and less cleanup after generation.
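The "one HTML block, less cleanup" claim can be made concrete with a quick heuristic check. This helper is hypothetical, not something any of the posts used: it simply flags the external references that would break a single-file artifact.

```python
import re

def is_self_contained_html(doc: str) -> bool:
    """Heuristic check that a generated page is a single self-contained HTML
    block: no external scripts, stylesheets, or remote images. Hypothetical
    helper for illustration, not from the posts."""
    external = [
        r"<script[^>]*\bsrc\s*=",                 # externally loaded JS
        r"<link[^>]*\bhref\s*=",                  # external stylesheets
        r"<img[^>]*\bsrc\s*=\s*['\"]https?://",   # remote images
    ]
    return not any(re.search(p, doc, re.IGNORECASE) for p in external)
```

Inline `<style>` and `<script>` bodies pass, and data-URL images are allowed, which matches what the single-file demos in the evidence actually ship.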
No announcement
Nobody in the evidence points to an OpenAI launch post, changelog entry, or product note.
The language around the change stays speculative. AILeaksAndNews calls it an unannounced upgrade and asks whether it is a stealth Spud release, while petergostev's reply about testing says he does not know for sure and that it simply feels different and better. koltregaskes's joke reply treats "Spud" more like community shorthand than confirmed branding.
That leaves the story in an awkward but familiar state: a cluster of users observed the same direction of change, but none of them cited a canonical OpenAI source.
Codex
The last interesting thread is where these speedups might surface next.
In petergostev's Codex question, he explicitly asks whether a 20 minute Pro workflow now becomes practical inside Codex. That fits with two adjacent signals in the evidence pool: thsottiaux's speedup post says the team has line of sight to at least an order of magnitude more speed this year, and getsome_air's GPT-5.4 Fast mode note says Codex Fast mode on GPT-5.4 had already shown up in Air for ChatGPT subscribers.
Those posts do not confirm that the quieter Pro improvements are the same thing as Codex speed work. They do show that GPT-5.4 speed improvements, and the question of where they become usable, are already spilling across multiple OpenAI coding surfaces.