Note: This is stream of consciousness. I'm working through my thoughts on AI coding tools in real time, and I'm not interested in being measured about it.
I woke up this morning and I'm nodding my head so much in agreement that if I keep this up, there'll be a new medical disorder named after me: the nodding head disorder. I could not agree more with what Andrej Karpathy and a lot of others in the replies and quote tweets are saying.
Andrej Karpathy: I've never felt this much behind as a programmer. The profession is being dramatically refactored as the bits contributed by the programmer are increasingly sparse and in between. I have a sense that I could be 10X more powerful if I just properly string together what has become available over the last ~year, and a failure to claim the boost feels decidedly like skill issue.
There's a new programmable layer of abstraction to master (in addition to the usual layers below) involving agents, subagents, their prompts, contexts, memory, modes, permissions, tools, plugins, skills, hooks, MCP, LSP, slash commands, workflows, IDE integrations, and a need to build an all-encompassing mental model for strengths and pitfalls of fundamentally stochastic, fallible, unintelligible and changing entities suddenly intermingled with what used to be good old fashioned engineering.
Clearly some powerful alien tool was handed around except it comes with no manual and everyone has to figure out how to hold it and operate it, while the resulting magnitude 9 earthquake is rocking the profession. Roll up your sleeves to not fall behind.
Boris Cherny (creator of Claude Code): I feel this way most weeks tbh. Sometimes I start approaching a problem manually, and have to remind myself "claude can probably do this". Recently we were debugging a memory leak in Claude Code, and I started approaching it the old fashioned way: connecting a profiler, using the app, pausing the profiler, manually looking through heap allocations. My coworker was looking at the same issue, and just asked Claude to make a heap dump, then read the dump to look for retained objects that probably shouldn't be there; Claude 1-shotted it and put up a PR. The same thing happens most weeks.
In a way, newer coworkers and even new grads that don't make all sorts of assumptions about what the model can and can't do (legacy memories formed when using old models) are able to use the model most effectively. It takes significant mental work to re-adjust to what the model can do every month or two, as models continue to become better and better at coding and engineering.
The last month was my first month as an engineer that I didn't open an IDE at all. Opus 4.5 wrote around 200 PRs, every single line. Software engineering is radically changing, and the hardest part even for early adopters and practitioners like us is to continue to re-adjust our expectations. And this is still just the beginning.
Andrej Karpathy: I have similar experiences. You point the thing around and it shoots pellets or sometimes even misfires and then once in a while when you hold it just right a powerful beam of laser erupts and melts your problem.
Coincidentally, just yesterday, I was showing my friend some of the basic capabilities of these AI-powered coding tools, and he was astounded. And that is the right reaction. I keep thinking about something Karpathy said in his interview with Dwarkesh Patel: that with large language models, we are summoning ghosts. And that is a perfect encapsulation of how weird and powerful these tools are.
What bothers me as we close out 2025 is that most people are blissfully unaware of these tools, their capabilities, and the fact that they can enable everyone to have an immense amount of fun doing and building things they always wanted to do but were otherwise hobbled by the lack of technical capabilities. Karpathy's other line, "the hottest programming language is English," is bang on the money. As long as you have the ability to type English, a little bit of common sense, patience, and an iterative mindset, the kind of useful things you can build by wrangling these AI tools is just amazing. All year long, I've been having an immense amount of fun fucking about with this.
A lot of people are blissfully unaware of these coding tools because they go through the motions of life without actually learning about what's happening in the world around them, especially in the realm of technology. And others are obviously dismissive of these tools because they're afraid. They're, in a way, just enabling the tools that will eventually make them obsolete. It comes from a place of threat, not opportunity.
Initially, there was a brief moment when I felt useless. This was around GPT-3.5. I started feeling useless because even Sonnet 3 was better than me at pretty much everything I did. It felt debilitating, and I grew paranoid that it was only a matter of time before I was obsolete. But the more I used these tools, the more my frame changed. Whether I'll be obsolete, whether this is the twilight of humans: these aren't really questions worth pondering.
As things stand today, I'd rather use these tools and get them to do useful things for me, both professionally and personally. It could be side projects, passion projects, learning new things, and so on. And inevitably, if the day comes when I have to be put out to pasture, then so be it. There's very little I can do to compete against a superintelligence. So once I made that mental shift, I felt a little liberated, and I no longer worried about losing my job or whatever it is.
This isn't to say that you should just blindly start using these tools and ignore the harder questions: what does our reliance on them do to us? What is the cognitive price? There is a fine line between leaning on these tools heavily and retaining some sense of autonomy, ensuring that your judgment, your discernment, your sense of taste, and your point of view aren't dulled. It takes a lot of work to keep the knife sharp. But that's an ongoing tension to manage, not a reason to avoid these tools entirely.
As things stand today (and I'm pretty sure this statement will be irrelevant by the next model release, which is probably a couple of days away), these are the most acquiescent servants, squires, butlers, and creative thought partners you can get. You can get these tools to do things you always wanted to do without having to do the hard work yourself, or even things that were beyond your capabilities. I'm a normal person, not a coder, and pretty much everything I wanted to do had always remained just an idea. But now, thanks to these tools, a lot of my ideas are actually becoming reality. The most recent is akshara.ink, a site to make literary works in the public domain readable. It was an idea that had been on my mind for five-plus years and was beyond the reach of my capabilities. But these AI coding tools helped me build it in a couple of weeks. It's ridiculous.
I continue to use these tools, I continue to experiment with them, and even in 2025, for all the advancements, I still keep getting surprised by their capabilities every single day. When even top programmers say the same thing, that's a sign we've arrived at a moment where these tools are far more than the dismissive label usually applied to them: that they're just median statistical bullshitters of the collective mediocrity of humans.
Sure, they might be median statistical bullshitters, but that median is much better than what I can do. An aggregate drawn from millions of people beats any one person's average, which means these things are far more capable and far more powerful in a host of domains than most of us normal human beings are. And it's a tragedy that people dismiss them so easily.
So here's what I think: the very least you can do before you form any sort of notion about these AI tools is pay $20, get a subscription, and use them for a couple of months. Unless you genuinely can't afford it, and most of us can afford a $20 subscription even if it means cutting down on Netflix or whatever for a couple of months, there's really no excuse.
But here's the thing: I don't think you'll truly realize how powerful and unique and capable these tools are if you just stick to the web versions of Claude, Gemini, and ChatGPT. You need to use a reasonably easy coding tool. It could be Google's Antigravity, which is right now mostly free, including access to the most powerful models like Claude Opus and Sonnet and Gemini 2. Or it could be Cursor. Or if you can get over the intimidation of having to open a coding terminal, you can use Claude Code. Or better yet, Claude now has a web version.
Until and unless you use these AI coding tools to actually build something (it could be a simple blog or an app or whatever it is that you wanted to do) and see that your imagination is now a reality, it truly won't hit you. The fact that these AI tools are far, far more powerful than you realize won't hit you. You need to use AI coding tools to see that you can now summon ghosts. And once you see that your idea is now a reality, you'll have a moment. It is a magical experience. It's a mental orgasm, if you will.
Here is the cleaned-up transcript snippet from the conversation between Dwarkesh Patel and Andrej Karpathy:
[00:08:12] Dwarkesh Patel: And the vision for AGI then should just be something which just looks at sensory data, looks at the computer screen, and it just figures out what's going on from scratch… so why shouldn't that be the vision for AI rather than this thing where we're doing millions of years of training?
[00:08:30] Andrej Karpathy: I think that's a really good question. I'm very careful to make analogies to animals because they came about by a very different optimization process. Animals are evolved and they actually come with a huge amount of hardware that's built in. In my post, I used the example of the zebra: a zebra gets born and a few minutes later it's running around and following its mother. That's an extremely complicated thing to do. That's not reinforcement learning; that's something that's baked in. Evolution obviously has some way of encoding the weights of our neural nets, and I have no idea how that works, but it apparently works.
[00:09:13] Andrej Karpathy: So I feel like brains came from a very different process and I'm very hesitant to take inspiration from it because we're not actually running that process. In my post, I said we're not actually building animals; we're building ghosts or spirits or whatever people want to call it. Because we're not doing training by evolution; we're doing training by basically imitation of humans and the data that they've put on the internet.
[00:09:42] Andrej Karpathy: And so you end up with these sort of ethereal spirit entities because they're fully digital and they're kind of mimicking humans, and it's a different kind of intelligence. If you imagine a space of intelligences, we're starting off at a different point. We're not really building animals, but I think it's also possible to make them a bit more animal-like over time, and I think we should be doing that.
[00:10:04] Andrej Karpathy: I do feel like Rich Sutton basically has a framework where we want to build animals. And I actually think that would be wonderful if we can get that to work. That would be amazing if there was a single algorithm that you can just run on the internet and it learns everything. I almost suspect that it doesn't exist; I'm not actually sure. And that's certainly not what animals do, because animals have this outer loop of evolution.