Normie AGI Is Here
Picture this.
You have a tool at your disposal that knows everything about everything. It's trained on all of humanity's knowledge that's available on the internet, and it has internalized all of it. You can ask it about anything, at any time.
You can ask it whether the Medici family had a higher incidence of diarrhea in the 1500s. You can ask it if Baruch Spinoza would've been a brilliant shitposter. You can ask it if Karl Popper liked popcorn. You can ask it if Esha Deol is a better actor than Priyanka Chopra. (It'll say yes. It'll be wrong. But it'll say it with supreme confidence.) Or you can ask it to ghostwrite a LinkedIn dunk post, a thinly veiled rant about your boss's obsession with "synergy" and "circle back," so passive-aggressive it could win a Pulitzer.
It can also do things.
You can describe a product or an idea and it'll just… do it. It can build websites. It can build apps. It can analyze data. It can execute sequential workflows. You can take plain-English instructions and turn them into software artifacts. It can think, reason, and execute logical sequences of steps far better than you ever can. Its research capabilities are better than yours. It can search the entire corpus of humanity's digitized knowledge.
And it keeps getting better at a rapid clip. In fact, it keeps getting smarter at an exponential rate. A million zillion times faster than the rate at which you are getting smarter. Put another way: these LLMs are getting smarter at a rate faster than the rate at which you can swipe reels. Their increasing smartness is inversely proportional to the rate at which your brain is turning into mush.
Now, how would you describe this thing? This tool?
Wouldn't you say this is artificial general intelligence, however you conceptualize it? A superintelligent system? A supernatural system? A ghost? A wizard? A witch that can conjure things?
That's exactly my point.
I'm writing this on the day Claude Opus 4.6 and ChatGPT 5.3 came out. Actually, I'm not writing this. I'm voice-typing it into ChatGPT and editing it with Claude. Which, if you think about it, kind of proves the point. And there's no doubt in my mind that normie AGI is here.
Mind you: if you're reading this post, you are most likely a normie. Like me.
And if you're a normie who's deluded yourself into thinking these tools are useless? I hate to say it, but it's game over for you.
For all intents and purposes, the large language models as you see them, as you use them today, are artificial superintelligence for normies like you and me. At this point, if you have any doubt about whether these machines are intelligent, smart, capable, then it's as good as denying the fact that the Earth revolves around the sun… and that Ram Gopal Varma's Aag is so spectacularly bad that it's good.
The Discourse Is Missing the Point
It's February 2026, and if you spend any time online, people are still arguing: LLMs are useful, LLMs are useless. They keep asking which model is good, which tool to use.
Looking at these discussions, I can't help but feel like these people aren't going to make it.
Every morning, what's stunning to me is this: people have a disposable, intelligent tool trained on pretty much the entire digitized knowledge of humanity, and most of them are oblivious. We don't have great data, but if you look at any usage or survey numbers, no more than 10–15% of people are using these tools yet. Twitter and Bluesky discourse is not a good proxy for real life. Those are mostly savvy users who already get it.
Outside that bubble, you say "ChatGPT" and people will slap you in the face because they think ChatGPT is an abuse, like you abused their mother. Basically, ChatGPT is a maa ki gaali. And this is… perplexing.
I constantly wake up and remind myself these tools are magical. If you went back to 1900, or even 2000, and described the capabilities of LLMs properly, the correct category for that tool would've been science fiction.
And today we're in this weird split reality. Some people are so used to these tools they think they're nothing special. Other people are oblivious that these tools even exist. That dichotomy is just funny.
The fact that you have a tool at your disposal where you can prompt in plain English and it will produce output indistinguishable from what a lot of experts produce… it shouldn't be real. It should be movies. It should be sci-fi. But it's our reality.
And yet so many people are laundering other people's opinions: "LLMs are useless," "they hallucinate," "they're stochastic parrots" (thanks, Emily Bender), "they mix things up," "they make mistakes."
To this I have to say: bro, have you looked at yourself in the mirror?
If you think LLM output is slop, don't make the mistake of thinking you're Michelangelo, da Vinci, or Einstein. You are sloppier than the sloppiest LLM. In fact, you're worse than GPT-2. That's what I'd say.
As a normie, the fact that I have access to these tools, where I can describe an idea or a site or a business or a hobby and they'll execute, still blows my mind. These were ideas floating around in my head. I had absolutely no business building any of them. I recently built a simple, beautiful RSS aggregator called smallweb.blog. Before that, I built Poetic Reveries, a site that aggregates great poems in the public domain, with annotations and notes generated by LLMs to help people read and understand them. I've also built Akshara.ink, where I'm trying to digitize Indian public domain texts.
It's ridiculous. I'm a normie. I don't know a lick of coding. I shouldn't be able to do this. Akshara is probably on the more complicated end of the spectrum when it comes to digitizing text. And the fact that somebody like me, with no tech job, no "linguistic background," no coding knowledge, can build something reasonably good and robust? It's kind of crazy. These things were once only within the realm of imagination; now they're within the realm of possibility, and I can't get over it. I've been heavily using these coding tools for a while now, and I still wake up surprised every day. That's saying something.
But the discourse is stuck on: "Is AI useful?" "Which tool?" "Is AI a bubble?" And sure, those topics have a place, but the way people obsess over them feels profoundly unhelpful. There are super-boosters and super-doomers, and even having a take on that seems useless to me.
I'm reminded of something Slavoj Žižek once referenced about quantum mechanics, Copenhagen school vibes: shut up and calculate. Don't get lost in the ontological, metaphysical drama.
Today I'd say: shut up and use them.
It doesn't matter what you think of them. Until you use them, you don't get to have an opinion. And if you do have an opinion without using them, you're just another fucking idiot on the internet copying someone else's hot take and passing it off as your own.
I've seen smart people around me try these tools and get shocked by what they can do. Ideas they always wanted to execute, workflows they always wanted to build, the way they can change how they work. It's crazy. When I tell people what these tools are capable of, it doesn't land. I'm just another bloviating idiot praising AI. But when they actually use it, that's when it hits.
And by "use it," I don't mean the typical ChatGPT web interface. That has its place. But until you use the coding tools (Claude Code, OpenAI Codex, Gemini CLI, Cursor), where you use software to solve problems and build things, you won't see the true magic. You won't see the normie AGI.
I highly recommend you do it.
Normie AGI arrived in 2024 itself, even before these coding tools became mainstream. But now it's normie AGI on steroids.
The Trade-offs
Now, I know how this reads. Another guy on the internet cheerleading for AI. So let me be clear: I'm not a booster.
Thomas Sowell said it best: "There are no solutions, there are only trade-offs." That holds for LLMs too. These tools aren't categorically good or bad. They have their uses, they have their place, and they have real costs. A few that keep me up at night:
Cognitive atrophy. You outsource your thinking, your thinking muscles weaken. That's just how brains work. If the LLM writes your first draft every time, what happens to your ability to stare at a blank page and think?
Judgment erosion. My whole argument about tacit knowledge being the last moat only works if you're still accumulating tacit knowledge. But if you defer to the LLM on every decision, when do you develop your own instincts? Heavy LLM use could be slowly draining the very moat you're counting on.
The illusion of competence. I can ship projects now. That's incredible. But building something is not the same as understanding it. There's a version of this where you produce a lot and learn nothing.
Taste convergence. Everyone using the same tools, trained on the same data, prompting in similar ways, risks converging on the same median output. The slop problem isn't just about bad writing. It's about a flattening of what gets made.
The questions I keep asking myself: How do you retain your sense of judgment when you're leaning this hard on LLMs? How do you decide what to outsource completely versus where you stay in the loop? How do you use the exoskeleton without your legs forgetting how to walk?
I don't have clean answers. That's sort of the point. And if you aren't holding two simultaneous, painful visions in your head right now, that these tools are magical and that they might be hollowing you out, if that duality doesn't create an annoying tingling or itching feeling in your brain's anal cavity, you're doing it wrong.
The Cope Check
I'm firmly of the belief that these large language models are better than humans at a vast majority of tasks and processes. Not outright replacement yet, but they are much, much better than us normies in a huge range of domains.
So where's the last moat? Human judgment. Knowing what to build. Taste. All the implicit, tacit, unknowable things you've accumulated through your lived experiences, those tiny little tidbits that aren't really on the internet, that luminiferous ether of knowledge floating around in your head. That stuff is not captured by LLMs. That's where we still have an edge. But only if you actually use that edge in combination with these tools.
And while I used to feel scared, now I feel liberated. Because I get to use them. I get to steer them. I get to wrangle them. I get to orchestrate them into doing things for me. This saves me time. It allows me to do more. It opens up possibilities I wouldn't have dreamed of. It's an exoskeleton.
Here's the meta point: I'm doing this sitting in front of a coffee shop. No laptop. No drafts. Just talking into a phone and publishing a blog post. That alone should tell you something about where we are.
If you, in 2026, as a normie, aren't intellectually honest with yourself, if you haven't taken an inventory of your capabilities versus those of large language models and arrived at the conclusion that they are fundamentally better than you at a vast variety of things, then you're just coping.
And if you're not using these tools, not following the AI discourse, not experimenting, not tweaking, not tinkering, not getting a sense of the shape of these capabilities, you are NGMI.
It's game over for you, bro.