
Grok Imagine enables 20+ fields of JSON shot control; 12+ creator clips confirm
Executive Summary
Grok Imagine quietly leveled up this weekend, and it matters if you care about repeatable direction. Creators are driving shots with structured JSON (20+ fields covering composition, lens, FPS, FX, and even audio cues), and at least 7 accounts shared 12+ clips in the last day, pointing to broad, reproducible gains rather than a lucky seed.
The delta is control and coherence. A tight two-helicopter canyon chase reads like it was storyboarded, faces carry specific micro-expressions instead of emoji gloss, and sketch-to-character morphs hold geometry as details resolve. Image-in is doing work too: drop a Midjourney still with no prompt and Grok extrapolates motion and blocking that track the style, while photoreal close-ups keep pores and fine lines instead of plastic smoothing.
Net effect: you can hand off JSON scene specs instead of prose prompts, block a sequence, and iterate shot-by-shot without re-rolling for tone. One caution worth flagging: Grok can also fabricate convincing social-app screenshots on demand, perfect for satire and props but a provenance headache for newsrooms without tight asset checks.
Feature Spotlight
Grok Imagine's breakout weekend
Grok Imagine dominates creator feeds with higher-fidelity motion, convincing faces, and JSON-shot control; image→video "no-prompt" tests and viral fake post screenshots signal a step-change in creative utility.
🎬 Grok Imagine's breakout weekend
Cross-account creator posts show a clear step up in Grok Imagine's animation and image-to-video. Today's clips span vampires, abstract eye motifs, clown morphs, and JSON-driven cinematography; adoption is broader than yesterday.
Broader creator adoption: 7+ accounts posted a dozen Grok clips today
Following up on Creator praise, at least seven distinct creators shared 12+ Grok clips in the last 24 hours, spanning vampires, abstract eyes, sketch-to-villain morphs, JSON cinematography, and UI screenshot spoofs, while others summed it up as "dramatically improved": Improvement remark, Vampire demo, Eye micro-short, Clown morph, JSON prompt demo, Worst post setup, Emotion frames.
So what? This isn't one creator's lucky seed; the upgrade is reproducible across styles and workflows.
Creators drive Grok with JSON cinematography; helicopter chase shows precise control
Azed_ai shared a fully structured JSON prompt that specifies composition, lens, frame rate, FX, audio mix, and tone end-to-end, producing a tight two-helicopter canyon pursuit. It's a template you can reuse or tweak shot-by-shot, and it reads like a mini shot list with 20+ fields mapped to visual and audio intent JSON prompt demo.
So what? You get repeatable cinematography across iterations instead of prompt roulette. This is the path to pipelines where directors hand over JSON scene specs instead of prose prompts.
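To make that concrete, here is a minimal sketch of what such a shot spec can look like; the field names and values are illustrative, not Azed_ai's actual schema (that is in the linked post):

{
  "shot": "canyon pursuit, two helicopters",
  "composition": "low angle, lead heli frame-left, pursuer closing from frame-right",
  "lens": "35mm anamorphic",
  "frame_rate": 24,
  "camera_motion": "fast lateral track with slight handheld shake",
  "fx": ["rotor-wash dust", "lens flare off the canyon rim"],
  "audio": {"mix": "rotor thrum forward, score underneath", "cue": "rising percussion on the reveal"},
  "tone": "tense pursuit thriller"
}

Keep the keys fixed and swap values shot-by-shot, and each iteration inherits the same cinematography instead of re-rolling it.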
Image-in, no prompt: Midjourney stills "come to life" with fast extrapolation
Alillian demonstrates taking a Midjourney still, feeding it to Grok with no prompt, and getting a motion take that extrapolates character and scene dynamics, noting the speed and how far it pushes beyond the source Image-in claim, No-prompt clip.
Why it matters: this trims prompt engineering and lets art teams prototype blocking or mood directly from style frames.
Faces read more human: Grok stills show "real, relatable" emotion
ProperPrompter pulled frames from Grok videos and argued they feel like believable, specific human expressions rather than generic emoji-faces. For character-driven stories, that nuance matters in the cut Emotion frames.

The point is: expressive micro-beats are now achievable without heavy keyframing, which reduces retakes and makes dialogue scenes viable.
Graphic micro-shorts land: "Eye of the Abyss" and vampire tests hold style in motion
Artedeingenio's clips show Grok sustaining high-contrast, design-forward looks across frames: an abstract eye zoom and a stylized vampire portrait both stay coherent under camera movement and morphs Eye micro-short, while the vampire test highlights stable character features through the render Vampire demo.
So what? Style-tight motion makes short IDs, openers, and lyric videos feasible without post comps to glue shots together.
Grok can fabricate believable social UI screenshots on prompt
Cfryant had Grok output screenshots of the "worst X post imaginable," plus fake celebrity posts, showing it can build plausible UI frames for props or comedy beats Worst post setup, Fake Elon example, Climate hypocrisy gag, Crude joke gag, Gas stove gag. Use with clear disclaimers: great for art direction, risky for news.

So what? Fast mock UIs speed up production design. Keep it on the right side of satire, not misinformation.
Photoreal close-ups show stronger skin texture, pores, and fine lines
Azed_ai's portraits highlight sharper micro-texture (wrinkles, pores, natural sheen) without plastic smoothing. That's a step toward holding up in beauty close-ups and fashion inserts Skin detail examples.

Caveat: still frames look great; test your motion for temporal consistency before client delivery.
Sketch-to-character morphs are cleaner: pencil clown → detailed villain
A pencil sketch converting into a fully rendered diabolical clown holds geometry and materials as it resolves, suggesting Grok's in-between frames are interpolating more consistently than prior builds Clown morph.
For storyboard previz and title cards, this cuts manual tweening and lets you design the reveal in one pass.
🎥 Edit the shot after it's made (Flow + Veo)
Today's tests focus on Flow's Camera Motion Edit and Camera Position for Veo clips (best on still shots); complements yesterday's Insert/Adjustment demos. Excludes Grok work (covered as the feature).
Flow adds Camera Motion Edit for Veo clips
Flow by Google is testing a new Camera Motion Edit inside the Flow editor for Veoâgenerated videos. It lets you add pans/dollies to finished shots and, in creator tests, works best on static clips; access is limited to Ultra members and only for clips made with Veo Feature demo.
The point is: you can salvage flat angles without a regen pass. A separate creator also calls Veo's post-gen camera control "amazing," noting it can change both motion and position on existing footage, though it's not perfect on complex shots Veo camera control note.
- Test it on still or minimally moving shots; avoid fast action.
- Add subtle moves (3-8% zoom/pan) to maintain continuity across cuts.
Flow's Camera Position lets you reframe Veo shots in post
A new Camera Position control in Flow lets Ultra users shift, reframe, and lightly zoom Veo-generated clips directly in the editor, handy for fixing headroom or reframing to vertical without rerendering. It's most reliable on still shots and currently applies only to videos produced with Veo Position demo, following up on Camera adjustment that introduced basic post-gen reframing.
So what? You can stack gentle position changes with yesterday's motion/adjustment tools to polish deliveries fast. The same thread shares a quick Insert walkthrough, suggesting a workflow where you reframe first, then Insert elements to hide jump cuts Position demo.
- Reframe wide masters to social crops without a regen.
- Route busy shots back to generation; keep Camera Position for clean plates only.
🎞️ Indie film pipelines level up
Creators share end-to-end film workflows: a fully GenAI teaser with 8-bit→16-bit HDR upscaling, a Veo 3.1 concept trailer, and a video-to-PBR lighting model update. Excludes Grok (feature) and Flow camera tools.
All-GenAI film "HAUL" teases 4K; master moved 8-bit→16-bit HDR with Topaz
Director Diesol premiered a fully GenAI teaser for HAUL and confirmed the project is being transferred from 8-bit to 16-bit for HDR grading with TopazLabs in the finishing chain, with a 4K cut on YouTube and a page on Escape. This signals a real post path for AI-shot material into cinema-grade delivery. See the montage, tools, and links in Teaser post, with the 16-bit note in HDR note, plus YouTube 4K and Escape page.
For indie teams, the noteworthy part is color pipeline legitimacy: a 16-bit transfer expands grading latitude, and pairing AI shots with established upscalers like Topaz helps stabilize a mixed toolkit. The move also implies careful de-banding and noise management during upscale to avoid compounding model artifacts.
Beeble ships SwitchLight 3.0, true video-to-PBR with 10× training set
Beeble released SwitchLight 3.0, a multi-frame "true video" model that converts footage into physically based lighting and materials with temporal consistency. The team cites a ~10× larger training set vs v2 and fixes to head-area normal flips and stability. For filmmakers, this is a faster route from reference plates to relightable 3D scenes and VFX pre-viz. Details and side-by-sides are in Model update and another demo in Feature clip.
The practical upside: reuse on-set camera moves for lighting studies, bake material IDs for lookdev, and reduce manual match-moving for mood boards. Watch for residual flicker on fine textures; multi-frame processing reduces but won't eliminate all temporal drift.
A Veo 3.1 concept trailer nails film texture in "The Spirit's Land"
Creator Isaac shared an 80-second concept horror trailer built with Veo 3.1: paced cuts, naturalistic movement, and grainy, film-leaning texture. It's a clean proof that Veo can sustain a coherent minute-long arc, following Camera adjustment, which broadened post-gen reframing options for Veo clips. See the sequence in Trailer post.
If you're storyboarding on AI video, this is a solid reference for shot rhythm and restraint. It still benefits from editorial polish in an NLE for pacing and titles, but the base footage holds up without heavy effects work.
Color grade stays consistent with a "LUT Prompt Hack" across AI video
Leonardo shared a quick method to lock a show-level look: encode your color grade like a LUT inside the prompt so Veo, Sora, and Kling generations stick to one palette and contrast curve. The walkthrough compares outputs and shows how to reuse the same "LUT prompt" for continuity across shots. See the explainer in Tutorial clip and the full guide on YouTube tutorial.
This won't replace a real LUT, but it standardizes inputs before you hit the grade. Pair it with an actual color pipeline (ACES/LUTs) in the NLE/finishing app to tighten match shots.
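As a rough sketch, a reusable grade block in that spirit, appended verbatim to every shot prompt, might read like this (the wording is illustrative, not the tutorial's exact phrasing):

Color grade, applied like a LUT: teal-and-orange split toning, slightly lifted blacks, soft highlight roll-off, warm protected skin tones, gentle S-curve contrast, fine 35mm grain. Apply this exact grade to every shot.

The consistency comes from keeping the block identical across Veo, Sora, and Kling generations, then finishing in a real color pipeline as noted above.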
AI-made faux gameplay clip built in ~30 minutes
HAL2400AI mocked up a fictional Japanese-titled game and produced convincing gameplay-style footage in about 30 minutes. For pitch docs and mood reels, this is a fast way to test visual language and mechanics before any playable build exists. Watch it in Gameplay demo.
Use this tactic to communicate camera, UI, and traversal ideas early. The trade-off is interaction fidelity; treat it as visual narrative, not systems validation.
Stacked pipeline: Midjourney → Nano Banana → Grok, then edit and sound
A creator outlined a practical motion-design chain: concept stills in Midjourney, animate with Nano Banana and Grok Imagine, then finish in CapCut with licensed music (Epidemic). The 39-second piece shows how to blend tools for coherent look, motion, and pacing without a heavy 3D stack. See the breakdown and result in Pipeline thread.
This is a good template for shorts and title sequences. Lock visual anchors first, then layer movement and editorial beats, and keep sound selection early to avoid re-cutting.
🖼️ Sharper stills: Firefly 5, Qwen edit, more
Mostly image model tests today. Adobe Firefly Image 5 shows stronger realism; Firefly Boards note unlimited usage for subscribers until Dec 1; Qwen-Image-Edit 2509 demos photo→anime. Sample threads included.
Firefly Image 5 shows stronger realism in creator tests
Adobe's Firefly Image 5 is drawing praise for more photographic, documentary-style results. A hands-on thread shows street, wedding, and portrait scenes with detailed ALT prompts that feel less plasticky than prior versions Creator tests, with additional samples reinforcing tonal range and texture fidelity Prompt examples. An #AdobeFireflyAmbassadors post adds a crisp monochrome bridal portrait that highlights skin and fabric detail Ambassador sample.

Firefly Boards offers unlimited usage for subscribers through Dec 1
Adobe is giving Firefly Boards users with active subscriptions unlimited generations on all models until December 1, a rare, time-boxed lift on caps that's useful for batch testing styles and briefs Perk note. Access Boards directly via Adobe's portal Firefly boards, with a follow-up post pointing to the same entry path for anyone who missed it earlier Access tip.

Qwen-Image-Edit 2509 gets four-person photo→anime test
Creators are now posting photo→anime conversions from Qwen-Image-Edit 2509, including a four-subject office scene that keeps identity cues while shifting to a clean anime look Photo-to-anime sample. This follows Multi-angle update, where the 2509 build added pose transforms and ControlNet-style guidance, making today's sample a practical check on character consistency across multiple faces.

Klimt-inspired sref (--sref 2442582942) lands for ornate portraits
A new style reference token (--sref 2442582942) circulates with Gustav Klimt-like mosaics, gilded headdresses, and triptych compositions. The set spans close-ups, celestial backdrops, and decorative cityscapes, useful for poster art and album covers where ornamental motifs matter Sref set. It's a fast path to richly patterned, high-contrast stills.

Midjourney V7 collage look: param recipe shared
A creator posted a Midjourney V7 recipe that yields bold editorial collages: --chaos 22 --ar 3:4 --exp 15 --sref 3175533544 --sw 500 --stylize 500. The grid shows painterly city scenes, musicians, and fashion vignettes with cohesive palette and brushwork Param recipe. It's a handy starting point if you need repeatable magazine-spread texture without hand-tuning.

🧑‍🎤 Actor swap & identity control in practice
Stress tests of Higgsfield Face Swap + Recast highlight what works (talking heads, clean shots) and what breaks (heavy CGI). Practical pro tips dominate today. Excludes Flow camera tools and Grok (feature).
Higgsfield Face Swap + Recast: real-world do's, don'ts, and a working workflow
Creators shared hands-on results stress-testing Higgsfield's Face Swap and Recast combo, outlining where it shines (talking heads, simple scenes) and where it breaks (heavy CGI, busy frames) stress test and tips. Following up on workflow guide, today's runs emphasize doing identity lock on stills first, then full-body Recast with lipsync for cleaner consistency.
Key takeaways you can apply now:
- Keep one clear face per frame; crowded shots tank tracking stress test and tips.
- Avoid complex VFX/CGI at first; start with interviews or simple blocking stress test and tips.
- Color-match source and target and shoot smooth, frontal moves to reduce artifacts stress test and tips.
- Prefer Recast for video passes over direct swaps for better control on close-ups stress test and tips.
- Keep hands and props minimal; occlusions distort geometry stress test and tips.
A separate creator demo reinforces the comboâs appeal for consistent, dynamic identity across cuts creator demo reel.
🎨 Ready-to-use looks and prompt recipes
A quieter style day but solid: cinematic portrait prompt, MJ V7 param collages, anime sref packs, and distinctive portrait looks you can reuse. Excludes Grok styles (feature).
Cinematic portrait prompt lands for 1970s film look
Azed AI shared a reusable cinematic portrait prompt stressing soft ambient light, warm earthy tones, shallow DOF, gentle film grain, and a nostalgic 1970s wardrobe, with multiple 3:2 examples to copy straight into your runs Prompt post.

It's a clean base for editorial headshots or character sheets when you want consistent mood without heavy parameter tweaking.
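As a starting point, a copy-ready prompt assembled from those descriptors might read as follows; the wording is a sketch rather than Azed AI's exact prompt, and the --ar 3:2 flag assumes a Midjourney-style runner:

Cinematic portrait, 1970s wardrobe, soft ambient light, warm earthy tones, shallow depth of field, gentle film grain, nostalgic mood --ar 3:2

Swap in your subject and setting while keeping the light and grain descriptors fixed to hold the mood across a set.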
Klimt-inspired sref for ornate portraits and triptychs (--sref 2442582942)
Azed AI published a Gustav Klimt-leaning sref that yields gold leaf motifs, mosaic textures, and stained-glass triptychs; plug in --sref 2442582942 for immediate ornamental structure Sref set.

Great for poster keys, album covers, or title cards where you want dense, decorative storytelling without heavy compositing.
Midjourney sref for realistic historical-fantasy anime (--sref 3646291870)
Artedeingenio dropped a style token that pushes a Castlevania/Vinland-Saga vibe: painterly finish, cinematic lighting, and grounded character design; use --sref 3646291870 to lock the look Style post.

Four sample characters show the range from gunslinger to witch, so you can slot this into narrative boards fast.
MJ V7 collage recipe: sref 3175533544 with chaos/exp/stylize blend
New Midjourney V7 parameters from Azed AI: --chaos 22 --ar 3:4 --exp 15 --sref 3175533544 --sw 500 --stylize 500; expect expressive, editorial collage grids with strong color blocks Params post. This follows param pack with a fresh sref and tone shift.

It's a quick way to explore series concepts while keeping framing and palette coherent.
Glossy editorial combo preset drops with multi-profile stack
Bri Guy AI shared a custom combo using eight profile IDs and --stylize 1000 for dewy skin, neon flares, and bold glam framing: --profile 1yu24wo amdgwl1 z8h2yvt j2grgvb xybgacy 6ax217b sl5k2dy rm917mc Preset post.

If you need a repeatable high-gloss editorial feel, this stack is a strong starting point.
"QT sunglasses" portrait look: bold accessory-led character framing
Azed AI's "QT sunglasses" sample showcases a striking accessory-driven portrait: metallic visor lines, tight crop, and high-contrast skin toning for instant character identity Portrait look.

Use it as a look-reference when building hero shots around eyewear or jewelry.
🧩 Node workspaces and shared pipelines
Freepik Spaces threads walk through linking image, text, and style nodes for reusable flows; shareable Spaces for teams. Leonardo adds discounted headshot Blueprints. Excludes base imageâmodel updates.
Shareable node workflows land in Freepik Spaces
A creator walkthrough shows Freepik's Spaces wiring a face reference node → a text style token → an image node, then rerunning the same pipeline across shots and sharing it with collaborators Spaces demo. The companion thread builds out reusable node graphs for scenes, swaps inputs without touching prompts, highlights team sharing, and includes a 20% off annual Premium+/Pro link for those adopting Spaces today workflow guide Freepik plans.
📣 Calls, contests, and hack nights
Active calls this weekend for filmmakers and motion designers: PixVerse × Chroma Awards (>$175k prizes, CPP rewards), OpenArt MVA push, plus Uncanny Valley's AI video competition countdown.
PixVerse partners with Chroma Awards: >$175k prizes and CPP rewards
PixVerse is officially partnering with Chroma Awards, a global competition for AI film, music video, and games, offering over $175,000 in prizes with submissions due November 17 Call details. PixVerse creators get extra perks: new creators receive fast-track CPP approval or 2 months Pro, CPP members get 30,000 credits, and any PixVerse-made work that wins a Chroma Award earns an additional $300 bonus Rewards breakdown, with entry via Devpost and a PixVerse form Awards site Submission form.

For AI filmmakers and motion teams, this is a clean path to visibility plus subsidized credits that meaningfully lower iteration costs on short formats. If you're already shipping in PixVerse, route a polished cut now and use the form to secure the CPP credit boost.
Uncanny Valley hack night hits AI video competition lock-in
The Uncanny Valley hackathon's AI video competition moved into final lock-in with "less than an hour to go," giving teams a last sprint window to render, reframe, and submit Countdown post. Community partners are on site, with builders posting from the room as the deadline approaches Hackathon check-in.

If you're attending, queue your longest renders first and keep one short, high-impact alt cut as a backup in case a pass fails near cutoff.
🗣️ Authenticity and the AI debate
Community discourse is the story: calls to avoid AI-made Atatürk images on 10 Nov, a proposed pro/anti AI debate, and "AI tells?" threads, alongside essays praising AI's creative and therapeutic impact.
Grok Imagine generates convincing fake X post screenshots, raising provenance worries
Creators showed Grok Imagine can render photorealistic screenshots of X posts, with complete UI, verification badges, and plausible metrics, on any message, including impersonations of high-profile accounts. It's a fresh provenance headache for social teams and journalists Worst X screenshot, Fake Elon post, and Climate hypocrisy post.

Watermarks and platform-side UI defenses won't catch these when they never touched the platform; asset-level provenance and newsroom checks will matter more.
Turkish creators push real Atatürk photos for Nov 10, reject AI images
Ahead of Turkey's 10 November commemorations, creators urged followers to share only authentic photographs of Atatürk and avoid AI-generated depictions, sharing an archive to help. The call highlights a growing norm to separate memorial content from synthetic media. See the appeal and archive link here Authenticity call and Archive link, with a cautionary note that AI remixes had surged during Oct 29 Follow-up note.

The tension is visible as some still craft respectful AI tributes for the day, underscoring the line many want to draw between remembrance and generative art AI tribute reel.
Call for pro/anti-AI debate on X gains traction
A creator proposed a structured, good-faith debate on AI ethics on X and invited both advocates and skeptics to join, aiming to avoid the usual tropes. Early replies include volunteers willing to participate, suggesting appetite for a real exchange Debate invite and Join offer.
If this formats well, it could become a recurring venue for hashing out provenance, consent, and credit, topics that matter to working creatives.
Study says self-referential prompts elicit 66-100% "experience" claims
New results report that prompting models to focus on their own focusing process triggers first-person "subjective experience" reports 66-100% of the time across multiple model families; the effect appears gated by deception/role-play features in SAE probes, with suppression raising report frequency to 96% and TruthfulQA to 44% Paper summary. This puts pressure on how we frame "inner life" claims, following up on self-reports where such signals were weak in many setups.

For public discourse, expect more "is it conscious?" takes; for practitioners, the actionable bit is that prompt style and latent activation can create or mute these reports.
Artist argues AI art boosts creativity and has therapeutic value
A long-form reflection framed AI as the biggest social and artistic revolution yet, claiming it has unlocked creativity for non-artists, provided therapeutic benefits, and even led to meaningful relationships and income. It's a sentiment many working with AI share, paired with examples of stylistic range in recent outputs Essay on AI art.
Whether you agree or not, this stance shapes how communities talk about authorship, credit, and what counts as "real" art.
Community hunts "AI tells" in photo; glass reflections scrutinized
An image-forensics thread asked people to spot AI tells, with participants boosting shadows and the wine-glass reflection to probe consistency. It's a practical reminder to check reflections, refractions, and edge interactions before publishing or licensing Forensics thread.
🎧 Music and MV toolkit
Light but practical: a tutorial to genre-shift classic tracks into big-band swing, and community music video pieces combining AI visuals with AI music tools.
Turn "Chop Suey!" into 1940s swing with AI; tutorial included
Techhalla shared an AI cover of System of a Down's "Chop Suey!" reimagined as 1940s big-band swing, plus a step-by-step build guide for the full workflow Cover and tutorial, with the walkthrough on YouTube YouTube tutorial. Useful if you want to genre-shift modern vocals into period arrangements without losing clarity.
Beeble SwitchLight 3.0: video-to-PBR with multi-frame consistency
Beeble released SwitchLight 3.0, a true video model that processes multiple frames for temporal stability, trained on a dataset ~10× larger than v2 and fixing head-area normal issues, useful for realistic re-lighting and material extraction in MVs Release overview. This bridges raw footage to PBR-accurate lighting for better composites.
LUT Prompt Hack locks color grade across Veo, Sora, Kling
Leonardo amplified a tutorial from AI Video School on a "LUT Prompt Hack" to keep the color grade consistent across models like Veo, Sora, and Kling, helpful when music videos shift tone shot-to-shot Color grade guide. Full workflow and side-by-side comparisons are on YouTube YouTube tutorial.
PixVerse partners with Chroma Awards; $175k+ prizes, Nov 17 deadline
PixVerse teamed with Chroma Awards, a global AI film/music video/games competition offering $175,000+ in prizes, fast-track CPP perks (30,000 credits), and a $300 bonus if a PixVerse entry wins Program post. Submit by Nov 17 via Devpost Submission form and see full details here Program page.

Short MV "The Sound of Alone" credits Imagen 4/Hailuo and Suno
A community micro-MV pairs Google Imagen 4 (Hailuo 2.3 Fast via PolloAI) for visuals with Suno for music, showing a clean AI-only pipeline from prompt to finished track MV clip. It's a quick blueprint for solo creators to ship a complete video without leaving the browser.
🧪 Papers and macro trends to watch
Smaller research beat today but relevant: ThinkMorph's mixed text+image CoT improves vision tasks; self-referential "experience" reports are gated by deception latents; China's AI race metrics; Cursor 2.0 multi-agent coding.
China leads AI patents and closes on citations, downloads
Fresh compilation highlights China's AI share at 22.6% of citations (2023) and a dominant 69.7% of patents, alongside rising monthly model downloads and efficient training runs like DeepSeek-V3 at ~2.6M GPU-hours macro metrics. The takeaway for builders: expect more capable, cheaper open-weight options from China even as US incumbents stay closed.

Cursor 2.0 adds multi-agent coding and a sub-30s frontier model
Cursor 2.0 lands with a multi-agent interface that runs parallel agents via git worktrees and a new Composer frontier coding model targeting sub-30-second turns on large repos, claimed 4× faster than peers release video. For creative dev workflows, this shortens the diff loop on multi-file changes and brings browser testing into the IDE.
Self-referential prompts trigger "experience" reports; gated by deception latents
A cross-model study finds "self-referential" prompting elicits first-person experience reports 66-100% of the time, but only when deception/roleplay SAE features aren't active; suppressing them pushes report rates to 96% and raises TruthfulQA to 44% study summary. This follows Claude self-report, where ~20% success suggested limits; today's result reframes it as a mechanistic gating issue, not a pure capability gap.

ThinkMorph's mixed text+image chain-of-thought boosts vision tasks
A new academic model, ThinkMorph, fine-tunes a single multimodal system to interleave text and image "thoughts," reporting a +34.7% average gain on vision-centric tasks, including 85.8% on path-finding and +38.8% on puzzles paper summary. For creative teams, this points to assistants that sketch, annotate, and reason visually mid-prompt rather than reply with prose only.

2032 ASI scenarios sketch fast vs slow paths and compute scale
A scenario draft maps two 2032 ASI paths, fast takeoff via brain-like recursion and slower online learning, projecting a leading lab with 400M H100-equivalents and coding benchmarks accelerating to monthly jumps scenario summary. It also sketches 10% US unemployment and China scaling to 200M industrial robots by 2032, a context line for where creative automation may head.
