Marginalia
I designed and built Marginalia, a concept web app that uses AI to turn long podcasts and audiobooks into concise takeaways. It automatically generates key insights, memorable quotes, and strategic lessons, and surfaces the best quotes on a shared board so passive listening becomes structured, shareable knowledge.
Overview
✍️ My Role
Product designer, product owner
👥 Key Stakeholders
UX designers
📅 Year
🔗 Prototype Links
The Problem
Long‑form audio is great for learning, but terrible for retention and review.
Using the “Hooked” audiobook as an example, three common challenges stand out:
Attention drop
Sessions often last 2+ hours, making it hard to stay focused end to end.
Hard to revisit
Audio doesn’t support quick scanning, highlighting, or jumping back to specific ideas.
Fragmented quotes
Pulling out and organizing quotes in a structured way is slow and manual.

The Goal
Why LLMs?
I chose LLMs because they can interpret and reorganize long, unstructured, conversational audio into coherent insights, quotes, and lessons in a way rule‑based systems cannot. They also support a conversational layer, allowing users to refine outputs via chat and generate personalized summaries, which increases user engagement.
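As a rough illustration of this idea, the sketch below shows how captured highlights might be assembled into a single structured prompt for an LLM. This is a hypothetical sketch, not Marginalia's actual implementation; the `Highlight` shape and `buildTakeawayPrompt` helper are illustrative names.

```typescript
// Hypothetical sketch of a takeaway-prompt builder (not Marginalia's real code).
interface Highlight {
  quote: string;      // text the listener captured
  note?: string;      // optional personal note
  timestamp: string;  // position in the audio, e.g. "01:23:45"
}

// Build one prompt asking the model for the three takeaway types:
// key insights, best quotes, and strategic lessons.
function buildTakeawayPrompt(title: string, highlights: Highlight[]): string {
  const lines = highlights.map(
    (h, i) =>
      `${i + 1}. [${h.timestamp}] "${h.quote}"` + (h.note ? ` (note: ${h.note})` : "")
  );
  return [
    `You are summarizing highlights from the audio source "${title}".`,
    `Highlights:`,
    ...lines,
    `Return JSON with three arrays: "insights", "quotes", and "lessons".`,
  ].join("\n");
}
```

Keeping the prompt builder a pure function makes it easy to test and to refine via the conversational layer: a follow-up chat turn can simply append instructions to the same prompt.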
The Solution
Marginalia turns podcast and audiobook highlights into structured, AI-generated takeaways. It captures:
Session-based capture flow:
Users log an audio source, then add highlights with quotes, notes, and timestamps as they listen.
Integrated Spotify metadata lookup for auto-filling source details and cover art.
It generates:
Key insights distilled from the conversation
Best quotes worth remembering
Strategic lessons extracted and reframed as practical takeaways
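The Spotify metadata lookup above could be sketched against the public Spotify Web API search endpoint (`GET /v1/search`). This is an assumption about the integration, not a description of the actual Lovable implementation; `spotifySearchUrl` and `lookupSource` are hypothetical helper names, and obtaining the OAuth token is assumed to happen elsewhere.

```typescript
// Hypothetical Spotify metadata lookup sketch (illustrative, not production code).
// Builds a search URL for podcast shows and audiobooks.
function spotifySearchUrl(
  query: string,
  types: string[] = ["show", "audiobook"]
): string {
  const params = new URLSearchParams({
    q: query,
    type: types.join(","),
    limit: "5",
  });
  return `https://api.spotify.com/v1/search?${params.toString()}`;
}

// Fetch results; requires an OAuth bearer token (assumed obtained elsewhere,
// e.g. via the client-credentials flow).
async function lookupSource(query: string, token: string): Promise<unknown> {
  const res = await fetch(spotifySearchUrl(query), {
    headers: { Authorization: `Bearer ${token}` },
  });
  if (!res.ok) throw new Error(`Spotify search failed: ${res.status}`);
  // The response includes name, publisher, and images (cover art)
  // that can auto-fill the source details.
  return res.json();
}
```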

Metrics (Internal Testing)
I piloted Marginalia with an 18‑person UX book club as an internal test. These early signals suggested that AI‑generated summaries significantly improved recall and made post‑listening review more efficient.
80%
of quotes shared to the team quote board
2 min
average reading time
5×
faster review than re-listening
AI Product Workflow
I used a four‑stage validation workflow to design and build Marginalia.
1. Fake Door Test
Validate the idea
Interviewed 18 UX book club members to gauge interest and capture real listening pain points before building.
Ran market research on Reddit and Twitter to understand frustrations with existing podcast/audiobook summary tools (shallow summaries, poor navigation, weak quote support).
Used this to confirm demand for AI‑generated takeaways and to identify differentiation opportunities.

2. Sandbox Test
2.1 Generate a PRD
Translated research into a PRD that defined target users, core jobs‑to‑be‑done, and the structure of the final “one‑page takeaway.”
2.2 Build a prototype
I vibe-coded the entire Marginalia prototype using Lovable.
Designed the AI generation pipeline to produce the three takeaway types: key insights, best quotes, and strategic lessons.
3. User Flow Test
Validate the user experience
Once the AI output format was stable, I shared the prototype with the 18 UX book club members and gathered feedback, then synthesized it through affinity mapping to identify friction points and opportunity areas in the workflow.
UX Refinement Loop
I iterated in real time through a simulated prototype-testing loop, refining both interaction details and the AI workflow.
4. Product Analysis
Live product data tracking to identify the gap
Marginalia is live at Marginaliaclub.lovable.app, where I track real usage to help identify friction points and inform the next iterations.

Reflection
Takeaways
This was my first end‑to‑end “vibe‑coded” product, where I used AI tools to design, prototype, and ship a working app. Key learnings are:
💡AI accelerates the product lifecycle
Vibe coding dramatically reduced time from idea to functional prototype, enabling more cycles of user feedback.
💡UX fundamentals still matter
Even with AI‑generated UI and content, accessibility, hierarchy, and interaction details required deliberate design decisions.
💡Data closes the loop
Post‑launch analytics are essential for prioritizing UX improvements and validating that AI features actually drive better learning outcomes.
What's next
© 2026 by Lynn Qian