Automate Your Reading Backlog with Claude Code and NotebookLM

The backlog problem

My to-read list grows faster than I can read. Some of it is noise; some is a 40-page research PDF I’ll never get to.

If it’s core to your job, read it properly – no shortcut works.

But if the material is only tangential, or you’re unsure you need it, or it’s just “nice to have,” deep reading is a bad ROI trade.

The solution: let an agent drive your summarizer

The usual move is to skim, or paste things into NotebookLM manually and generate summaries, infographics, audio overviews. That works, but it’s still hands-on – upload, prompt, tweak, repeat.

The better move: let Claude Code drive NotebookLM for you. The right format depends on you – some skim, some listen, some prefer videos. Good news: coding agents can personalize this at scale.

I retain visuals best, so I optimize for slide decks and infographics. If you learn by listening, swap in audio overviews. The workflow is the same.

What you need

⚠️ Heads up on sharing: NotebookLM notebooks aren’t publicly shareable – you can only share with specific Google accounts. This workflow is optimized for personal learning: you open the notebook in your own browser or phone later. If you need to share output with a team, export the deck or screenshot it (which is what I did for the examples below).

90-second setup

# 1. Install uv (if you don't have it)
curl -LsSf https://astral.sh/uv/install.sh | sh

# 2. Install notebooklm-py
uv tool install notebooklm-py
# or run ad-hoc: uv run --with notebooklm-py notebooklm --help

# 3. Authenticate with your Google account (opens a browser)
notebooklm auth

# 4. Smoke test — list your notebooks
notebooklm notebooks list

# 5. Point Claude Code at this folder and paste the prompt below.

My actual prompt

Please create learning materials for me using notebooklm-py (https://github.com/teng-lin/notebooklm-py).
It is already authenticated; use "uv run" to execute any CLI commands.

Make sure to generate the following artifacts:

- slide-deck
- infographic

Don't download them—just give me a link to my notebook so I can review it from my browser or my phone later.

## Style Guide

Tone: practical, direct, no fluff. Write for experienced developers and technical leaders.
- Lead with "what you can do with this" not theory
- Short sentences, action-oriented language
- Assume technical baseline — don't over-explain fundamentals
- Structure around concrete use cases and outcomes
- Angle: clear value prop in first line, end with a question or takeaway

Visual style: clean whiteboard aesthetic. Hand-drawn diagrams, muted earth tones (greens, browns),
simple icons, arrows showing flow between components. Think: architect sketching on a whiteboard,
not polished corporate slides.

Reference my style blog: https://kyrylai.com and info about me: https://kyrylai.com/about-me/.
Don't use these as sources for the notebook, but use them to help augment questions to NotebookLM.


## Input Materials & questions: 
<paste URLs, papers, docs here>
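You can script the hand-off so each new paper is one command away. A minimal sketch, assuming you keep the boilerplate in a template file, maintain a per-run reading list, and have the Claude Code CLI (`claude`) on your PATH; the actual invocation is left commented so you can sanity-check the assembled prompt first:

```shell
# Build the full prompt from a reusable template plus today's reading list.
# Assumptions: prompt-template.md holds everything above "Input Materials",
# and reading-list.txt holds the URLs and questions for this run.

cat > prompt-template.md <<'EOF'
Please create learning materials for me using notebooklm-py.
It is already authenticated; use "uv run" to execute any CLI commands.
Generate a slide-deck and an infographic. Don't download them;
just give me a link to my notebook.

## Input Materials & questions:
EOF

cat > reading-list.txt <<'EOF'
https://cursor.com/resources/Composer2.pdf
- How could I re-use it in my practice?
EOF

# Concatenate template + reading list into the final prompt.
cat prompt-template.md reading-list.txt > prompt.md

# Hand the assembled prompt to Claude Code in non-interactive (print) mode.
# Uncomment once you've reviewed prompt.md:
# claude -p "$(cat prompt.md)"
```

Swap out reading-list.txt per paper; the template stays fixed.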

Examples with results

Let’s walk through three examples – each with the input material, the questions I asked, and the compressed presentation output.

Note: Since NotebookLM notebooks aren’t publicly shareable (see above), I’ve attached each deck as a single compressed image – all slides stitched together – so you can scan the output at a glance.

1. Composer 2 Technical Report:

https://cursor.com/resources/Composer2.pdf

This is an extremely useful piece. But since I don’t fine-tune models right now, I want to stay up to date without losing focus on what matters to me.

- How could I re-use it in my practice?

Output (Compressed presentations):

Compressed view of a 21-slide NotebookLM deck summarizing Cursor's Composer 2 technical report — covers the RL training stack, router replay fix, delta-compressed S3 weight syncing, nonlinear length penalties, stateful forks, CursorBench methodology, and the frontier AI engineering loop.

2. TRIBE v2 Human Brain Model

Outside my day-to-day, but fascinating — exactly the kind of thing the backlog eats.

https://ai.meta.com/blog/tribe-v2-brain-predictive-foundation-model/


- What does it mean for consumers?
- Could AI read our thoughts?

Output (Compressed presentations):

Compressed view of a 15-slide NotebookLM deck summarizing Meta's TRIBE v2 predictive foundation model for neuroscience — a 1B-parameter tri-modal engine trained on 1,115 hours of fMRI from 720 subjects, covering encoding vs decoding models, scaling laws in brain encoding, zero-shot generalization, and clinical/consumer applications.

3. Claude Mythos Preview

https://www-cdn.anthropic.com/08ab9158070959f88f296514c21b7facce6f52bc.pdf

Pure exploration for me: 200+ pages of Claude Mythos testing.


- Is it really that scary?
- What does it mean for the cybersecurity market?

Output (Compressed presentations):

Compressed view of a 15-slide NotebookLM deck summarizing Anthropic's Claude Mythos Preview System Card — covers capability risks, cyber dual-use, alignment paradoxes, constitutional adherence scores, mechanistic interpretability findings, and frontier safety conclusions.

I get the gist of a paper in the time it takes to drink a coffee – and more importantly, I know which ones deserve a second cup.

Across these three examples: ~15 minutes total to generate and review the decks, vs. ~4 hours of straight reading.

What’s been sitting in your backlog for three months? Try it on that one!
