Jon Slemp

🏄‍♂️ Vibe-Coding: Fast Starts, Slow Finishes, and the Path Forward

February 24, 2025

I’m a technical PM who’s all about shipping products that solve real problems for customers quickly. AI code generation tools like Replit, Cursor, Windsurf, and Lovable promise rapid prototyping, seamless API integration, and quick demos to win over stakeholders and customers. If you understand how to build software, these tools give you unprecedented leverage to ship more, faster.

What is Vibe Coding?

Vibe coding—coding on instinct assisted by instant AI execution and feedback—is thrilling when it works. Now anyone can build full-stack apps in one prompt, and I’ve shipped dozens myself. But without a clear plan and deep technical chops, you’re debugging more than building. These tools struggle in many places, which we’ll explore below: one sloppy or mistimed prompt, and you’re stuck in a debugging death spiral.

The Promise vs. Reality

Ease of Use

  • Promise: Anyone can build and ship features, fast.
  • Reality: You still need a clear plan, architecture, and working understanding of software—like structuring a REST API or managing state—or you’re untangling AI’s messes instead of delivering value.

Functionality

  • Promise: Rapid prototyping and seamless API integration.
  • Reality: Models rely on stale API docs (their training data has a cutoff), generating code that hits deprecated endpoints or misreads parameters. Even with updated docs, they often produce conflicting logic that breaks integrations.
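
To make the stale-docs failure concrete: code generated against pre-2023 documentation still calls `openai.Completion.create`, which the v1 Python SDK removed in favor of `client.chat.completions.create`. One cheap mitigation is a lint pass over generated code for call patterns you already know are deprecated in the versions you actually run. A minimal sketch, with an illustrative (not exhaustive) pattern list:

```python
import re

# Patterns known to be deprecated in the SDK/API versions we actually run.
# These two are real historical examples; extend the dict for your own stack.
DEPRECATED = {
    r"\bopenai\.Completion\.create\b": "use client.chat.completions.create (openai>=1.0)",
    r"/v1/engines/": "the /v1/engines endpoint was deprecated",
}

def audit(generated_code: str) -> list[str]:
    """Return advice strings for every deprecated pattern found."""
    findings = []
    for pattern, advice in DEPRECATED.items():
        if re.search(pattern, generated_code):
            findings.append(advice)
    return findings

snippet = "resp = openai.Completion.create(engine='davinci', prompt='hi')"
print(audit(snippet))  # ["use client.chat.completions.create (openai>=1.0)"]
```

In practice you would wire a check like this into CI, so a regenerated file fails fast instead of failing at runtime against a dead endpoint.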

Production Readiness

  • Promise: Skip boilerplate gruntwork, focus on high-level strategy.
  • Reality: Flaky code—fine for demos, but it crashes under load, fails audits, and tanks performance. For complex workflows, it flops, sparking customer complaints and diluting focus.

Scaling and Iteration

  • Promise: Works well with codebases of varying complexity and maturity.
  • Reality: AI disregards your architecture—overwriting files, ignoring naming conventions, and turning clean code into a maintenance nightmare, delaying sprints and frustrating engineers.

Self-Healing & Adaptation

  • Promise: AI evolves alongside the latest coding models and best practices.
  • Reality: Code-gen tools lean on outdated models, tripping over basics like dependency management. They can’t fix their own errors, leaving you to mop up.

The Beginning of Infinity

Deutsch nailed it: even a machine containing all knowledge is only as powerful as the questions asked of it. These tools are a leap forward, but they hinge on your prompts. Ask for a database schema without relationships, and it’s chaos. As PMs, we need tools that scale from prototype to production—because we ship products, not demos. I love a quick rip on a feature set as much as the next PM, but for enterprise-grade needs, these tools just aren’t ready to let non-technical teams ship production-ready, scalable products. The promises are enticing, but in my experience the pitfalls have been very real.
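
The schema point is worth making concrete. Here is a sketch of what declaring relationships buys you, using stdlib SQLite (illustrative only; note that SQLite even leaves foreign-key enforcement off unless you ask for it):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # SQLite disables FK checks by default

# With an explicit relationship, the database itself rejects orphaned rows.
conn.execute("CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("""
    CREATE TABLE orders (
        id INTEGER PRIMARY KEY,
        customer_id INTEGER NOT NULL REFERENCES customers(id)
    )
""")

conn.execute("INSERT INTO customers (id, name) VALUES (1, 'Acme')")
conn.execute("INSERT INTO orders (id, customer_id) VALUES (10, 1)")  # valid

try:
    # No customer 999 exists; without the relationship this would silently succeed.
    conn.execute("INSERT INTO orders (id, customer_id) VALUES (11, 999)")
    orphan_rejected = False
except sqlite3.IntegrityError:
    orphan_rejected = True

print(orphan_rejected)  # True
```

Prompt for the tables without the `REFERENCES` clause and both inserts succeed, leaving orphaned orders you discover only when a report breaks.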

The Need for a Better Way

I built a full-stack web app in two hours—video downloads, Whisper transcriptions, OpenAI embeddings, aggregate summaries, and a chat interface. It was remarkable, but scaling it took two weeks of grinding. That experience, among others, exposed the gaps. We’ve seen the beginning of what is possible with AI, which is frankly incredible, but now we need tools that bridge the gap between prototyping and production. If you’re anything like me, posted up on the bleeding edge of tech on a quest to make your lever ever larger, you need tools built for real-world work, with:

  • Seamless integration with existing codebases.
  • Mastery of design patterns for complex workflows.
  • Real-time API updates to avoid integration hell.
  • A context window that remembers your project—like not re-explaining your schema.
  • Self-healing automation to fix errors fast.
  • The best models available.

How We Solved This

AI tools promise speed but falter in production. That’s why we built vnow.dev for ourselves at Intrinsic: to build more, faster, and more reliably. Unlike traditional AI code generators, vnow.dev is built for technical PMs, engineers, and non-technical entrepreneurs alike who want to build AI-powered applications at scale. It:

  • Syncs with your codebase, preserving architecture.
  • Generates optimized, scalable code that won’t crash under load.
  • Keeps APIs current, dodging integration headaches.
  • Automates error fixes, cutting debug time.

Final Thought

Bottom line: AI coding tools promise speed but crumble in production—leaving PMs scrambling. vnow.dev blends rapid prototyping with reliability. Visit vnow.dev today to ship AI-powered products faster and smarter.

From Monolithic to Ephemeral UI 🧙‍♂️

February 10, 2025

For over two decades, a Cambrian explosion of point solutions—what we call SaaS—has empowered non-engineers with interfaces to review and manipulate underlying CRUD databases. These platforms solved critical problems but had inherent limits; no single tool could be everything for everyone, given the high complexity and marginal cost of software development and maintenance.

Key Observations

📉 Falling Compute Costs: Inference costs are plummeting—OpenAI has seen a 150x drop in token cost from GPT-4 to GPT-4o—and Sam Altman has said he expects a further 10x drop each year going forward.
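
Back-of-the-envelope arithmetic only, assuming the 10x-per-year trajectory holds: a workload costing $1,000 a month in inference today would cost about a dollar a month in three years.

```python
# Illustrative compounding of a 10x annual drop in per-token cost.
cost = 1000.0  # hypothetical monthly inference bill today, in dollars
for year in range(1, 4):
    cost /= 10
    print(f"year {year}: ${cost:,.2f}/month")
# year 3: $1.00/month
```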

🤖 AI-Augmented Development: Tools like Replit, Lovable, Cursor, and StackBlitz’s Bolt let non-engineers build full-stack applications without writing code. While these apps have limitations, technical PMs can use them to collaborate with engineers and rapidly create complex, production-ready applications in days, not months.

📈 Accelerating Innovation: Frontier model performance is on a steep J-curve. Hyperscalers now must compete on CapEx AND innovation—as seen with DeepSeek AI.

🧙‍♂️🪄 Conjuring Interfaces and Functionality 🪄🖥️

These trends drive down the marginal cost of software development, redefining how we value and build software. Where point solutions once required heavy venture subsidies and lengthy dev cycles, new AI-powered entrepreneurs launch and scale applications over a weekend. As AI dev tools are powered by ever smarter and more capable models, they will not only speed up production and deployment but also breed new capabilities that weren’t previously possible or economically feasible.

SaaS may shift from monolithic platforms to modular “lego bricks” that dynamically assemble into ephemeral, user- and task-specific interfaces reconciled against a central system of record. For example, a user might want to review unit economics by customer, a metric available only through Snowflake SQL queries today; imagine that data conjured and overlaid directly in HubSpot, vanishing once the task is complete...
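
As a hypothetical sketch (none of these names come from a real product), the “lego brick” idea amounts to small render functions composed on demand against a system of record, with nothing persisted afterward:

```python
from dataclasses import dataclass
from typing import Callable

SystemOfRecord = dict  # stand-in for rows synced from Snowflake, HubSpot, etc.

@dataclass
class Brick:
    """A modular UI fragment: a name plus a render function over the record."""
    name: str
    render: Callable[[SystemOfRecord], str]

def conjure_view(bricks: list[Brick], record: SystemOfRecord) -> str:
    # The "ephemeral UI": assembled for one task, then discarded.
    return "\n".join(b.render(record) for b in bricks)

record = {"customer": "Acme", "revenue": 1200, "cost": 800}

unit_economics = Brick(
    "unit_economics",
    lambda r: f"{r['customer']}: margin {(r['revenue'] - r['cost']) / r['revenue']:.0%}",
)

view = conjure_view([unit_economics], record)
print(view)  # Acme: margin 33%
```

The design choice is that bricks own presentation only; the system of record stays the single source of truth, so a discarded view costs nothing.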

SaaS then looks more like a co-creative ecosystem: platforms provide robust frameworks, customers fine-tune them to fit their exact needs, driving faster innovation and better customer alignment, a virtuous cycle.

Ultimately, advantage shifts from broad, monolithic platforms to deeply integrated software that evolves with a business in real time. Vertically specialized providers will build moats around proprietary data and ruthless execution on the right problems, informed by deep customer intimacy.

The rules of the game don't change, just the pace, architecture, and interface. This future will be possible sooner than we think.

Deepseek and Value Accrual

February 10, 2025

I’ve been asked a lot, had tons of discussions, and seen many hot takes about DeepSeek and the future of AI value accrual this week. Here’s my take:

DeepSeek shocked markets last week by releasing its open-source R1 model, with capabilities effectively on par with OpenAI’s top models at a fraction of the cost. It’s true the training metrics are misleading and they clearly distilled leading models, but that ignores the real breakthrough: R1 achieved cutting-edge performance at an order of magnitude lower cost through novel reinforcement-learning techniques and mixture-of-experts reasoning. ‘Necessity is the mother of invention.’

The 'picks and shovels' of the AI value chain (e.g., Nvidia) are not worthless. Historically, intense early investment and innovation in infrastructure eventually breeds competition and price compression, which gives way to ubiquity, which in turn gives way to massive value accrual in applications and integrations.

Cisco —> Amazon, eBay, Google;

CDNs —> Facebook, Instagram, YouTube;

VMware —> AWS, Azure, and Google Cloud;

ARM/Qualcomm —> iOS, Android.

Today, AI infrastructure is estimated to capture ~85% of value accrual, but as the technology matures, semiconductor innovation is commoditized and value accrues to whoever is closest to the end customer—just as it did in the PC, dot-com, web 2.0, cloud, and mobile eras. As foundation models become smarter, cheaper, and more accessible, differentiation moves from spending more on GPUs to driving killer UX and delivering value.

In a future where foundation models approach AGI, applications can only build moats on (1) novel UX and (2) proprietary data. These companies will consume narrow workflows end-to-end. And yes, whoever builds AGI will accrue value, but that alone will likely be distilled and commoditized too.

In summary: the news of Nvidia's death has been greatly exaggerated, ubiquitous AI is good for companies and consumers, and there has never been more opportunity for you to seize than exists in this moment.

Buckle up 🚀 more to come.