
Vibe coding BookMosaic in 111 hours

With zero programming knowledge, I built BookMosaic in just a few weeks – learning a lot about LLMs and vibe coding in the process. My key takeaway? Classic product skills are more valuable than ever.

From 0 to Engineer

When I set out, I wasn’t sure if I could even build BookMosaic.

As a Product Leader, I'm quite fluent in software development and in talking to engineers – but I had never written code in my life.

ChatGPT and Claude taught me how to set up and use:

  • Cursor (AI-powered IDE)
  • GitHub (version control, PRs, branch discipline, testing, the whole nine yards)
  • Vercel (preview & production environments, deployment, web hosting, CI/CD)
  • Supabase (database, auth, APIs)
  • Zoho Mail (email confirmations, account creation flows)

It took just 111 hours to build an end-to-end, polished, and production-ready app. 


Context: why BookMosaic?

I love audiobooks. I listen to ~100 each year. 

But unlike most avid readers, I don’t have a physical bookshelf or library. If you visited my home, you wouldn’t know books are a big part of my life.

Bookshelves are more than just storage spaces; they are aesthetic identity displays.

Books reflect what has shaped us, what we value, and what our experiences have been. They spark conversation because they are beautiful, sentimental, and deeply personal. 

But for digital readers, our bookshelves exist as private, ugly lists on apps like Goodreads, Audible, and Kindle (all owned by Amazon, by the way). 

I created BookMosaic because I wanted to enjoy my books at home – while turning my ugly, black TV and computer screens into beautiful, dynamic screensavers.


A feature-rich app

Despite being my first “learning project,” BookMosaic is surprisingly feature-rich. Here are some highlights:

  • Login, account creation, and email confirmation
  • Bookshelf creation with rank-ordering, drag-and-drop, send-to-top/bottom, and direct rank editing
  • Three bookshelf views: visual, card, and list
  • Search by title, author, or ISBN
  • Import system that pulls book covers and metadata from OpenLibrary (millions of records)
  • Profile pages with custom URLs
  • Bulk import via CSV (including Goodreads exports)
  • SEO-optimized sharing, favicon, logo, and social media previews
  • Browsing of other users’ shelves and recommended books to add
  • Screensaver mode for your bookshelves

It’s truly an end-to-end, polished, and production-ready app. You are welcome to go to BookMosaic right now and build your own screensaver!
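For readers curious what the Goodreads bulk import involves under the hood, here is a minimal sketch of the row-normalization step. It assumes Goodreads' export wraps its ISBN columns in Excel-style ="..." quoting (to preserve leading zeros) and uses "Title", "Author", and "ISBN13" headers; the type and helper names are illustrative, not BookMosaic's actual code.

```typescript
// Hedged sketch: normalize one row of a Goodreads library export.
// Assumption: ISBN columns arrive as Excel-style ="9780441013593".

interface ImportedBook {
  title: string;
  author: string;
  isbn13: string;
}

/** Strip Goodreads' ="..." wrapper so ISBN13 becomes a bare digit string. */
function cleanIsbn(raw: string): string {
  return raw.replace(/[="\s]/g, "");
}

/** Map one parsed CSV record (header -> value) onto the import shape. */
function toImportedBook(record: Record<string, string>): ImportedBook {
  return {
    title: (record["Title"] ?? "").trim(),
    author: (record["Author"] ?? "").trim(),
    isbn13: cleanIsbn(record["ISBN13"] ?? ""),
  };
}
```

With the ISBN normalized, each row can then be matched against the OpenLibrary import system described below, or flagged for manual search when no match is found.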


How I spent my time vibe coding

I was astonished at how fast I was able to build BookMosaic.

Even ChatGPT and Claude regularly underestimated their own speed: features they estimated at 5-10 days were usually in production within a day. 

The following is roughly where my 111 hours went across features:

~⅓ of my time was spent on solutions I ultimately threw away (not surprising for anyone who works in software development), including:

  • Book Ingestion – the biggest challenge was getting beautiful covers and correct metadata on 22M books in a fast/performant manner. I tried seeding scripts, intelligent language/edition filtering logic, manual ISBN imports, and admin edit functionality… all thrown away. Ultimately, I just built a front-end for OpenLibrary. 
  • Screensaver – getting a smooth scroll of hundreds of images stitched together while not overtaxing the CPU was also quite tricky. I tried multiple architectures. Even the current state is still just good-enough.
  • Guest bookshelf creation – Designed, built, and then commented out due to cookie, auth status, and state management issues that caused too many bugs I couldn’t fix.

This highlighted for me some of the current limits of LLMs and vibe coding. They struggle to balance complex tradeoffs (performance vs. UX) and to update far-reaching parts of a system (e.g. permissions or data structures). 

As LLMs improve, this seems quite solvable. It’s more a question of when, not if. 
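For concreteness, the solution that finally stuck – treating OpenLibrary as the backend – mostly reduces to building OpenLibrary's public URLs on the fly instead of seeding 22M records locally. The endpoints below are OpenLibrary's documented public API; the helper names are my own illustrative sketch, not BookMosaic's code.

```typescript
// Sketch of a thin front-end over OpenLibrary's public API.
// covers.openlibrary.org serves cover images keyed by ISBN;
// openlibrary.org/search.json handles title/author/ISBN queries.

/** Cover image URL for an ISBN, in small/medium/large sizes. */
function coverUrl(isbn: string, size: "S" | "M" | "L" = "L"): string {
  return `https://covers.openlibrary.org/b/isbn/${isbn}-${size}.jpg`;
}

/** Metadata search URL for a free-text query. */
function searchUrl(query: string): string {
  return `https://openlibrary.org/search.json?q=${encodeURIComponent(query)}`;
}
```

The appeal of this design is that there is nothing to keep in sync: covers and metadata stay as fresh as OpenLibrary itself, and the app only stores the ISBNs a user has shelved.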


What surprised me most

Most of vibe coding is time spent waiting for software to run and debugging: waiting for LLMs to code, Vercel to deploy, and GitHub to test – then testing the feature, telling the LLM it got it wrong, and waiting while it tries and fails to fix it 5-10x in a row. 

I constantly had 1 to 5 minutes of downtime where I could try working on another feature (risky) or check emails, do stretches, make food, refill my coffee, etc.

Time by task type broke down roughly as follows:

  • 10% – outlining requirements: planning a new feature, detailing specifics, asking the LLM for options and approaches, and refining a step-by-step plan
  • 30% – AI feature coding: waiting as the LLM coded the feature, approving next steps
  • 10% – manual testing and configuration: checking functionality, identifying bugs, or configuring 3rd party systems not accessible to the agent
  • 50% – debugging: copy-pasting error messages and console logs to the AI and repeatedly telling the LLM that “no, you did not fix that bug you are so confidently claiming to have fixed for the 5th time.” On average, I needed 4 deployments per PR to pass all tests and be cleared for production.

The ~10-20% of my time where I did real product work was essential. This is where the real value creation lies.

Without someone providing guidance and requirements, clarifying logic, and catching the LLMs’ thinking errors – BookMosaic couldn’t have become production-ready. 


5 vibe coding tips and tricks

While anybody can learn to vibe code, classic product management skills are invaluable for vibe coding.

  1. Think and communicate clearly. The clearer you can articulate requirements, logic, and architecture and the more you anticipate edge cases and long-term implications – the better the results.
  2. Use 2 LLMs to critique each other: Whether writing requirements, a plan, or fixing bugs – have one AI state the situation, another critique the plan/options, then refine, and send back. The LLM that knows your codebase will be more pragmatic, while the one that doesn’t will be more creative and strategic. You provide the critical judgment.
  3. Architecture is worth slowing down for. Additive features are easy. Buttons, forms, features – AI cranks these out fast. But refactors will tend to turn into rewrites. It pays to be less iterative to properly design your system for the long term.
  4. Incrementally build systematic changes: Force LLMs to work in testable steps or phases. Don’t let an agent build a whole complex feature at once or you won’t be able to diagnose the issues and fix them.
  5. Diagnose stuck loops quickly: The easiest way to waste time is to realize too late the AI is stuck going in circles. Watch the LLM’s reasoning to catch logical errors. Use chain-of-thought prompting to help it get unstuck: remind it to go back through the codebase and requirements, OR to take a step back and reassess the approach.

Broader takeaways for software development

LLMs are dramatically transforming how R&D teams work. New structures, processes, and systems of development and collaboration are being explored and refined.

It remains to be seen what best practices emerge, but I have a few early hypotheses:

  • The R&D bottleneck is no longer Engineering; it is Product. The Product Manager’s skill and speed in discovery, strategic thinking, communication, and decision making will dictate the impact and velocity of teams.
  • Product Managers need to become “full-stack.” We can now build prototypes and ship front-end features faster than it takes to communicate the requirements to team members.
  • R&D teams will shrink – Each PM needs just a couple of senior back-end engineers skilled in architecture, security, integration, and data systems. Front-end devs and UX designers can shift to a more centralized role, supporting multiple teams and building out the design system.

I’ll write more about this in my next post.


What’s next for me

I’ve greatly enjoyed my personal sabbatical over the past few months: learning to vibe code, launching BookMosaic, and also improving my West Coast Swing dancing (I’m now ‘intermediate’ after ~5 years)! 

I’m now going on an active job search.

I’m looking for Product Leadership roles: either as a Director / Sr. Manager of a product division/portfolio, or as a Principal PM on a complex initiative or a new product line launch – most likely in EdTech, but I am open to other mission-driven companies.

Please also connect me with any open roles that might be a good fit.

In the interim, I’m going to continue iterating on BookMosaic and learn more about deploying AI in enterprise settings. I’ll be taking an AI Product Management course with Miqdad Jaffer (Head of Product at OpenAI) – and will use the applied capstone component to embed an LLM-based feature into BookMosaic. Stay tuned!

Thanks for reading, please try BookMosaic, and share it with others! 
