Normie AGI Is Here

Picture this.

You have a tool at your disposal that knows everything about everything. It’s trained on all of humanity’s knowledge that’s available on the internet, and it has internalized all of it. You can ask it about anything, at any time.

You can ask it whether the Medici family had a higher incidence of diarrhea in the 1500s. You can ask it if Baruch Spinoza would’ve been a brilliant shitposter. You can ask it if Karl Popper liked popcorn. You can ask it if Esha Deol is a better actor than Priyanka Chopra. (It’ll say yes. It’ll be wrong. But it’ll say it with supreme confidence.) Or you can ask it to ghostwrite a LinkedIn dunk post, a thinly veiled rant about your boss’s obsession with “synergy” and “circle back,” so passive-aggressive it could win a Pulitzer.

It can also do things.

You can describe a product or an idea and it’ll just… do it. It can build websites. It can build apps. It can analyze data. It can execute sequential workflows. It can take plain-English instructions and turn them into software artifacts. It can think, reason, and execute logical sequences of steps far better than you ever can. Its research capabilities are better than yours. It can search the entire corpus of humanity’s digitized knowledge.

And it keeps getting better at a rapid clip. In fact, it keeps getting smarter at an exponential rate. A million zillion times faster than the rate at which you are getting smarter. Put another way: these LLMs are getting smarter at a rate faster than the rate at which you can swipe reels. Their increasing smartness is inversely proportional to the rate at which your brain is turning into mush.

Now, how would you describe this thing? This tool?

Wouldn’t you say this is artificial general intelligence, however you conceptualize it? A superintelligent system? A supernatural system? A ghost? A wizard? A witch that can conjure things?

That’s exactly my point.

I’m writing this on the day Claude Opus 4.6 and ChatGPT 5.3 came out. Actually, I’m not writing this. I’m voice-typing it into ChatGPT and editing it with Claude. Which, if you think about it, kind of proves the point. And there’s no doubt in my mind that normie AGI is here.

Mind you: if you’re reading this post, you are most likely a normie. Like me.

And if you’re a normie who’s deluded yourself into thinking these tools are useless? I hate to say it, but it’s game over for you.

For all intents and purposes, the large language models as you see them, as you use them today, are artificial superintelligence for normies like you and me. At this point, if you have any doubt about whether these machines are intelligent, smart, capable, then it’s as good as denying the fact that the Earth revolves around the sun… and that Ram Gopal Varma’s Aag is so spectacularly bad that it’s good.


The Discourse Is Missing the Point

It’s February 2026, and if you spend any time online, people are still arguing: LLMs are useful, LLMs are useless. They keep asking which model is good, which tool to use.

Looking at these discussions, I can’t help but feel like these people aren’t going to make it.

Every morning, what’s stunning to me is this: people have an intelligent tool at their disposal, trained on pretty much the entire digitized knowledge of humanity, and most of them are oblivious to it. We don’t have great data, but if you look at any usage or survey numbers, no more than 10–15% of people are using these tools yet. Twitter and Bluesky discourse is not a good proxy for real life. Those are mostly savvy users who already get it.

Outside that bubble, you say “ChatGPT” and people will slap you in the face because they think “ChatGPT” is a cuss word, like you just insulted their mother. Basically, ChatGPT is a maa ki gaali. And this is… perplexing.

I constantly wake up and remind myself these tools are magical. If you went back to 1900, or even 2000, and described the capabilities of LLMs properly, the correct category for that tool would’ve been science fiction.

And today we’re in this weird split reality. Some people are so used to these tools that they think they’re nothing special. Other people are oblivious that these tools even exist. That dichotomy is just funny.

The fact that you have a tool at your disposal where you can prompt in plain English and it will produce output indistinguishable from that of a lot of experts… it shouldn’t be real. It should be in the movies. It should be sci-fi. But it’s our reality.

And yet so many people are laundering other people’s opinions: “LLMs are useless,” “they hallucinate,” “they’re stochastic parrots” (thanks, Emily Bender), “they mix things up,” “they make mistakes.”

To this I have to say: bro, have you looked at yourself in the mirror?

If you think LLM output is slop, don’t make the mistake of thinking you’re Michelangelo, da Vinci, or Einstein. You are sloppier than the sloppiest LLM. In fact, you’re worse than GPT-2. That’s what I’d say.

As a normie, the fact that I have access to these tools, where I can describe an idea or a site or a business or a hobby and they’ll execute, still blows my mind. These were ideas floating around in my head. I had absolutely no business building any of them. I recently built a simple, beautiful RSS aggregator called smallweb.blog. Before that, I built a site that aggregates great poems in the public domain, with annotations and notes generated by LLMs to help people read and understand them, Poetic Reveries. I’ve built Akshara.ink where I’m trying to digitize Indian public domain texts.

It’s ridiculous. I’m a normie. I don’t know a lick of coding. I shouldn’t be able to do this. Akshara is probably on the more complicated end of the spectrum when it comes to digitizing text. And the fact that somebody like me, with no tech job, no “linguistics background,” no coding knowledge, can build something reasonably good and robust? It’s kind of crazy. These things were once only within the realm of imagination; they’re now within the realm of possibility, and I can’t get over it. I’ve been heavily using these coding tools for a while now, and I still wake up surprised every day. That’s saying something.

But the discourse is stuck on: “Is AI useful?” “Which tool?” “Is AI a bubble?” And sure, those topics have a place, but the way people obsess over them feels profoundly unhelpful. There are super-boosters and super-doomers, and even having a take on that seems useless to me.

I’m reminded of something Slavoj Žižek once referenced about quantum mechanics, Copenhagen school vibes: shut up and calculate. Don’t get lost in the ontological metaphysical drama.

Today I’d say: shut up and use them.

It doesn’t matter what you think of them. Until you use them, you don’t get to have an opinion. And if you do have an opinion without using them, you’re just another fucking idiot on the internet copying someone else’s hot take and passing it off as your own.

I’ve seen smart people around me try these tools and get shocked by what they can do. Ideas they always wanted to execute, workflows they always wanted to build, the way they can change how they work. It’s crazy. When I tell people what these tools are capable of, it doesn’t land. I’m just another bloviating idiot praising AI. But when they actually use it, that’s when it hits.

And by “use it,” I don’t mean the typical ChatGPT web interface. Those have their place. But until you use the coding tools (Claude Code, OpenAI Codex, Gemini CLI, Cursor), where you direct software to solve problems and build things, you won’t see the true magic. You won’t see the normie AGI.

I highly recommend you do it.

Normie AGI arrived in 2024 itself, even before these coding tools became mainstream. But now it’s normie AGI on steroids.

The Trade-offs

Now, I know how this reads. Another guy on the internet cheerleading for AI. So let me be clear: I’m not a booster.

Thomas Sowell said it best: “There are no solutions, there are only trade-offs.” That holds for LLMs too. These tools aren’t categorically good or bad. They have their uses, they have their place, and they have real costs. A few that keep me up at night:

Cognitive atrophy. You outsource your thinking, your thinking muscles weaken. That’s just how brains work. If the LLM writes your first draft every time, what happens to your ability to stare at a blank page and think?

Judgment erosion. My whole argument about tacit knowledge being the last moat only works if you’re still accumulating tacit knowledge. But if you defer to the LLM on every decision, when do you develop your own instincts? Heavy LLM use could be slowly draining the very moat you’re counting on.

The illusion of competence. I can ship projects now. That’s incredible. But building something is not the same as understanding it. There’s a version of this where you produce a lot and learn nothing.

Taste convergence. Everyone using the same tools, trained on the same data, prompting in similar ways, risks converging on the same median output. The slop problem isn’t just about bad writing. It’s about a flattening of what gets made.

The questions I keep asking myself: How do you retain your sense of judgment when you’re leaning this hard on LLMs? How do you decide what to outsource completely versus where you stay in the loop? How do you use the exoskeleton without your legs forgetting how to walk?

I don’t have clean answers. That’s sort of the point. And if you aren’t holding two simultaneous, painful visions in your head right now, that these tools are magical and that they might be hollowing you out, if that duality doesn’t create an annoying tingling or itching feeling in your brain’s anal cavity, you’re doing it wrong.

The Cope Check

I’m firmly of the belief that these large language models are better than humans at a vast majority of tasks and processes. Not outright replacement yet, but they are much, much better than us normies in a huge range of domains.

So where’s the last moat? Human judgment. Knowing what to build. Taste. All the implicit, tacit, unknowable things you’ve accumulated through your lived experiences, those tiny little tidbits that aren’t really on the internet, that luminiferous ether of knowledge floating around in your head. That stuff is not captured by LLMs. That’s where we still have an edge. But only if you actually use that edge in combination with these tools.

And while I used to feel scared, now I feel liberated. Because I get to use them. I get to steer them. I get to wrangle them. I get to orchestrate them into doing things for me. This saves me time. It allows me to do more. It opens up possibilities I wouldn’t have dreamed of. It’s an exoskeleton.

Here’s the meta point: I’m doing this sitting in front of a coffee shop. No laptop. No drafts. Just talking into a phone and publishing a blog post. That alone should tell you something about where we are.

If you, in 2026, as a normie, aren’t intellectually honest with yourself, if you haven’t taken an inventory of your capabilities versus those of large language models and arrived at the conclusion that they are fundamentally better than you at a vast variety of things, then you’re just coping.

And if you’re not using these tools, not following the AI discourse, not experimenting, not tweaking, not tinkering, not getting a sense of the shape of these capabilities, you are NGMI.

It’s game over for you, bro.