A creator published a 20-prompt NotebookLM workflow covering source onboarding, contradiction checks, evidence audits, executive briefs, timelines, and final synthesis across large document sets. The post matters because it turns long research packs into structured material for scripts, essays, and briefs, but the evidence comes from a single public thread rather than a NotebookLM product update.

NotebookLM already supports source-grounded chat with inline citations, uploaded source packs up to 50 sources, and a business pitch around executive summaries and strategic insights. The thread adds a reusable prompt layer on top of that: one prompt to surface contradictions, another to rank source credibility, another to generate analogies, plus a final close-the-loop synthesis prompt at the end of the session.
The sharpest part of the thread is that it starts with corpus mapping, not summarization.
The opening prompt asks for four things immediately after upload: the three biggest themes, where sources agree and clash, the most surprising finding, and the major open questions. That matches how Google describes NotebookLM in its official help page, as a research assistant grounded in the sources you provide.
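Based on that description, a corpus-mapping prompt would look something like this (a plausible reconstruction of the four asks, not the creator's exact wording):

```text
Across all uploaded sources:
1) What are the three biggest themes?
2) Where do the sources agree, and where do they directly clash?
3) What is the single most surprising finding, and which source does it come from?
4) What major questions do the sources leave open?
Cite the specific source for each answer.
```

Running something like this immediately after upload produces a map of the corpus before any summarization begins, which is the thread's core reordering of the usual workflow.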
A second official page matters here too: NotebookLM says each notebook can include up to 50 sources, with each source up to 500,000 words or 200MB. For big research packs, the thread is really a set of retrieval instructions for navigating that pile.
The most useful prompts are the adversarial ones.
The contradiction prompt asks NotebookLM to quote the conflicting claims, identify which source each claim came from, and assess which side has stronger support. The blind spot prompt pushes on what is missing, which viewpoints are absent, and which assumptions every source shares without examining. Together they turn NotebookLM from a summarizer into a pressure tester.
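A paired prompt covering both moves might read as follows (a hedged reconstruction from the descriptions above, not the thread's exact text):

```text
Contradiction check: Quote every pair of conflicting claims verbatim, name the
source of each claim, and assess which side has stronger supporting evidence.

Blind spot check: Which topics, viewpoints, or stakeholders are absent from
these sources? What assumption do all of the sources share without examining it?
```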
The same pattern shows up later in the thread through the evidence-auditing and source-ranking prompts. The common move is simple: ask the model to distinguish confidence levels inside the corpus instead of flattening everything into one smooth summary.
The middle of the system is built for output formats creators actually ship: executive briefs, timelines, and analogies pulled from the corpus.
That sits neatly on top of NotebookLM's own product framing. Google says the tool can transform sources into study guides, briefings, audio overviews, and mind maps, and its Workspace page explicitly pitches executive summaries, metric extraction, and strategic implications.
The last two prompts are about ending the session with something sharper than a recap.
The implication chain builder asks for first, second, and third order consequences, then forces NotebookLM to mark where the reasoning becomes speculative. The final synthesis prompt asks for one new thing learned, the one finding worth citing most confidently, the one area still needing evidence, and a three-sentence project summary.
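A plausible phrasing of that closing pair, reconstructed from the description above rather than quoted from the thread:

```text
Implication chain: For the main finding, trace the first-, second-, and
third-order consequences. Flag the exact point where the chain stops being
supported by the sources and becomes speculation.

Final synthesis: 1) One thing this notebook taught me that I did not know
before. 2) The one finding I can cite with the most confidence. 3) The one
area that still needs more evidence. 4) A three-sentence summary of the
whole project.
```

The value of the speculation flag is that it keeps the session's last output honest about where the sources stop and the model's extrapolation starts.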
That closing loop is also where this thread differs from Google's own Discover Sources update. Google focused on getting better material into the notebook. This workflow is about what happens after the sources are already there.