Naval Ravikant is my favorite thinker / writer / podcaster. I love how he explains concepts from the bottom up, motivates you to dream bigger, and somehow manages to remain both philosophical and practical in equal measure.
He also gets things spectacularly wrong. Most infamously, he said (on Joe Rogan's podcast no less) — "an AI that can code has just taken over the world". (Funnily enough, I think he implicitly walks that statement back in the first thirty seconds or so of this episode.)
And so when this episode dropped, I was genuinely curious to hear his opinion on when Skynet is coming for all of us and where we should all rent bunkers from. But in the hour that followed, I paused several times to rewind and ensure I had properly internalized the absolutely incredible insights and assessments that Naval provided.
I've expounded on a few highlights in detail below, but I highly encourage you to listen to the full thing yourself. You will also appreciate this essay more if you do.
There Is No Demand For Average
This resonates a lot with what I am seeing vis-à-vis the potential vs reality of "infinite software creation", particularly by the vibe coders (I am most definitely one as well). The challenge is that a set-it-and-forget-it mindset, where AI single-shots billion-dollar businesses, is just not how the world works. And it's not how AI works either.
Airbnb / Uber / Amazon / <insert other tech giant> will eventually be more leveraged with coding agents than someone starting from scratch, simply because they can now plug existing gaps in their service quicker with AI and they already have a massive lead from their existing human-built businesses.
A corollary principle might be framed as "Use AI to maximize your depth first". This can be applied at both a company and an individual level. Software engineers with AI will almost always do better than non-engineers using AI. For you and me, it means that rather than each of us trying to build killer apps in the same domain, we may derive more leverage from AI by applying it to the vertical that we are already good at.
The only exception to this rule is founders / entrepreneurs, who often need to be everything all at once, and can likely get away with median competence in multiple domains (accelerated by AI). Even then, I imagine the best founders will use AI to become the best in the world at at least one or two functions.
The Art & Photography Analogy
This point addresses how creative endeavors evolve once machines get pulled into the sausage making. Naval talks about how abstract / postmodern art came into being simply because photographs captured living things better than any painting on a canvas ever could.
I see something similar happening with writing, where AI can generate good-enough boilerplate content (see my *Rage Against The Slop* article) that humans will be forced to metamorphose to remain distinguishable. As a writer I feel this myself, and it pushes me to lean further into my individual / authentic niche to create something that no single-click AI model ever could.
Using AI To Learn
The key takeaway here is that AI meets you where you are. Naval articulates the core value proposition beautifully when he says that if you have eighth grade vocabulary and fifth grade mathematics, AI can talk to you at exactly that level. For me, personally, I have deep expertise in some areas like product thinking and data science, but in multiple others, including AI itself, I have lots to learn. Using Claude with that context baked in makes it so much more accessible to me.
If everything else goes away, we should always retain this core functionality of LLMs.
AI Anxiety
Although mentioned only briefly in the episode towards the end, I found the discussion mirroring much of my own thinking on the topic. When you think about it, AI anxiety stems from the same source as all other anxiety. At its peak, it represents non-specific fear of an unmapped adversary without a clear plan of action for conquest. Whilst this topic deserves a much longer and independent treatment, for now, I will simply echo the advice in the episode. You will stop fearing AI when you exit the recesses of your mind and actually work towards incorporating it into your life.
There were also parts of the episode that I disagreed with strongly. I thought the discussion around the value of learning AI tools was a bit too pithy / hand-wavy, with the explanation offered that eventually these will become incredibly simple and no longer a source of alpha. This is wrong for two reasons, at least at the moment.
You Should Absolutely Learn AI Tools. Start Now.
The real power of AI tools currently lies in their exposed-wires avatars, i.e. the Claude Codes, OpenClaws, and Codexes of the world. Out of design and economic necessity, abstractions for non-technical users often skip a few levers that may not seem like much independently, but can be vital cogs in the construction of an infinitely more powerful workflow. Hooks and custom MCP servers are a great example. There is still tremendous value in learning these things bottom up.
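To make the "levers" point concrete, here is a minimal sketch of what a Claude Code hook configuration can look like in a project's `.claude/settings.json`, automatically running a formatter after the agent edits files. The exact event names and schema can shift between releases, and the `npm run format` command is a placeholder for whatever your project uses, so treat this as illustrative rather than authoritative:

```json
{
  "hooks": {
    "PostToolUse": [
      {
        "matcher": "Edit|Write",
        "hooks": [
          {
            "type": "command",
            "command": "npm run format"
          }
        ]
      }
    ]
  }
}
```

A non-technical wrapper product may never surface a lever like this, but one small hook is the difference between nagging the model to format its output and never thinking about it again.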
Lastly, and I will close with this, progress compounds. If you start today, then a year later, even if the tools become much easier to use (they will), you will already have a year of practice under your belt. You will also be infinitely more productive because you won't be at the mercy of the latest release to access the highest potential of AI.