A few good rabbit holes for your enjoyment.
A few things to move you and rouse you from The Marginalian:
As a person with a remarkable talent for getting in my own way, this one hit hard:
In a sentiment he would later deepen in his moving 2013 Syracuse commencement address, he adds:
So what is stopping me from stepping outside my habitual crap?
My mind, my limited mind.
The story of life is the story of the same basic mind readdressing the same problems in the same already discredited ways.
In a wonderful aside from another essay, he offers what may be the best recipe for breaking out of the mind's recursive and limiting stories:
Don't be afraid to be confused. Try to remain permanently confused. Anything is possible. Stay open, forever, so open it hurts, and then open up some more, until the day you die, world without end, amen.
One thing experience shows us over and over, if we pay enough attention, is that the way out of such suffering, out of the abyss of self-concern with our mattering project, is always unselfing. Eno describes the cycle:
It goes like this: me thinking, "What's it all for?/ What's the bloody point?/ I haven't done anything I like and I don't have a clue what to do next/ I'm a completely empty shell." This lasts two days or so… Then I suddenly notice – apropos of something very minor, like the way a plane crosses the sky, or the smell of trees, or the light in the early evening, or remembering one of my brother's jokes – that I am thoroughly enjoying myself and completely, utterly glad to be alive. Not one of the questions I asked myself has been answered. Instead, like all good philosophical questions, they've just ceased to matter.
Good quote:
"The most tragic form of loss isn't the loss of security; it's the loss of the capacity to imagine that things could be different."
Ernst Bloch
Hat tip to the awesome Nicholas Gruen.
Not having a worthy purpose, a worthy "why", will end up harming every important area of your life.
The problem of distraction, for example, of excessive screen time, is also, at bottom, a "why" problem. It's a problem of motivation. If I told you that you'd receive $1 million for quitting all screens for three months, every single person would do it in a heartbeat.
People "can't" stop scrolling because they don't have a strong enough reason to cultivate an ultra-sharp, ultra-focused attention span. They're not trying to write the next great novel, direct the next great film, or lead a great new political movement. Or even just be a person who wants to think more clearly and carefully. The same applies to porn addiction, junk eating, etc. You don't have a strong enough reason to be free from those dopamine traps, to use yourself in the way you want to.
No matter how you spin it, part of you has accepted mediocrity. And that's a tragedy: because there is work that you, and only you, can bring into this world. There is an excellence in you that remains offline.
I remembered this quote as I was reading the post:
"He who has a why to live for can bear almost any how."
– Friedrich Nietzsche
Is a private credit crisis just a matter of time? Yes, says Natasha Sarin:
Worth keeping in mind, for those nervous about private credit runs, is that the market is a relatively small sliver of the financial sector. It's a roughly $2 trillion market, compared with a banking industry more than 12 times that size. That means the aftershocks should be smaller than the bank failures of the past.
But we run real risks that the financial system is more interconnected than we appreciate. Banks themselves are wrapped up in private credit lending. Loans that banks are making to non-bank lenders, a group of firms that includes private credit, accelerated more quickly than any other type of bank lending in recent years. That means private credit risks could easily cascade throughout the system.
Stomach quarter full?
This is depressing. I didn't know.
On top of that, there have been disturbing findings about the quality of food itself. A study published in November 2023 in Scientific Reports by researchers from the Indian Council of Agricultural Research found that five decades of breeding programmes of the Green Revolution have systematically reduced the nutritional content of the staple grains of India.
Zinc concentration in rice has dropped by approximately 33% since the 1960s and iron by 27%. In wheat, zinc declined by 30% and iron by 19%. The study warns that if current trends continue, rice and wheat stand to lose up to 45% of their food value by 2040.
In other words, India's population is eating less and also eating food that delivers less nourishment per mouthful.
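A quick back-of-envelope check (my own extrapolation, not the study's method): if the reported declines since the mid-1960s continued at a constant compound rate, the projection to 2040 lands in roughly the same ballpark as the study's warning. The start year and the constant-rate assumption below are mine.

```python
# Back-of-envelope extrapolation of the reported nutrient declines.
# Assumptions (mine, not the study's): declines begin around 1965 and
# proceed at a constant compound (geometric) rate per year.

declines_since_1960s = {
    "rice zinc": 0.33,
    "rice iron": 0.27,
    "wheat zinc": 0.30,
    "wheat iron": 0.19,
}

BASE_YEAR, MEASURE_YEAR, TARGET_YEAR = 1965, 2023, 2040

for nutrient, observed_drop in declines_since_1960s.items():
    years_elapsed = MEASURE_YEAR - BASE_YEAR
    # Constant annual rate implied by the observed cumulative drop.
    annual_rate = 1 - (1 - observed_drop) ** (1 / years_elapsed)
    # Carry the same rate forward to the target year.
    projected_drop = 1 - (1 - annual_rate) ** (TARGET_YEAR - BASE_YEAR)
    print(f"{nutrient}: ~{annual_rate:.2%}/yr implied, "
          f"~{projected_drop:.0%} cumulative loss by {TARGET_YEAR}")
```

Run it and rice zinc comes out at roughly a 40% cumulative loss by 2040, broadly consistent with the study's "up to 45%" warning about food value.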
Myth and mythos
If small models are really this good, then we may be underestimating their impact when LLMs diffuse.
But here is what we found when we tested: We took the specific vulnerabilities Anthropic showcases in their announcement, isolated the relevant code, and ran them through small, cheap, open-weights models. Those models recovered much of the same analysis. Eight out of eight models detected Mythos's flagship FreeBSD exploit, including one with only 3.6 billion active parameters costing $0.11 per million tokens. A 5.1B-active open model recovered the core chain of the 27-year-old OpenBSD bug.
And on a basic security reasoning task, small open models outperformed most frontier models from every major lab. The capability rankings reshuffled completely across tasks. There is no stable best model across cybersecurity tasks. The capability frontier is jagged.
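The test described above is easy to approximate at home. Here is a minimal sketch of that kind of harness, assuming an OpenAI-compatible inference endpoint; the endpoint URL, model name, file name, and prompt below are placeholders of my own, not the authors' actual setup.

```python
# Minimal sketch of a "can a small model spot the bug?" harness.
# Everything named here (endpoint, model, file) is a placeholder.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8000/v1",  # e.g. a local open-weights server
    api_key="not-needed-locally",
)

SYSTEM = (
    "You are a security auditor. Identify any memory-safety or logic "
    "vulnerabilities in the code and explain the exploit path."
)

def audit(snippet: str, model: str = "small-open-model") -> str:
    """Ask the model to analyze an isolated code snippet for vulnerabilities."""
    response = client.chat.completions.create(
        model=model,
        temperature=0,
        messages=[
            {"role": "system", "content": SYSTEM},
            {"role": "user", "content": snippet},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    snippet = open("isolated_snippet.c").read()  # the isolated code under test
    print(audit(snippet))
```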
Beautiful photos from Artemis II

This is a dangerous one and we all suffer from it:
But something unexpected happened. Every time I sat down and needed to actually explain a concept – really explain it, step by step, in plain language – I'd hit a wall. What I thought was knowledge didn't survive the simple test of having to put it into my own words.
This exercise forced me to confront what I've come to call the illusion of clarity: the confident feeling that you understand something, when in reality your grasp is full of gaps you've never noticed.
Joachim Klement on why US markets have elevated valuations:
Personally, I am on the record for favouring the third option, since something that has become obvious is that companies like the Mag7 in the US enjoy profit margins that are way above normal levels and are no longer mean-reverting because there is no competition emerging that could undermine their profit margins. If there is a competitor that could become a danger, the big tech companies tend to simply buy that rival, thus creating a kill zone around them.
Plus, Cory Doctorow's Enshittification thesis claims that, besides the lack of competition, a lack of antitrust enforcement has enabled these companies to lock in both consumers and advertisers, thus creating virtual monopolies from which they can extract value for the benefit of the owners of these companies.
Branko Milanovic on Chinaâs growth slowdown:
So, yes: Chinese growth is slowing down faster than the typical world growth would slow, but China is still growing at significantly higher rates than we would expect, basing ourselves on the global data covering the past 75 years.
How does it compare with Japan's experience, which many people argue China is heading towards? What we notice in the figure below is that Japan too has had much higher growth rates during its expansion than one would expect based on global experience. When Japan's GDP per capita was around $PPP 10,000, it grew at almost 6% annually vs. (as we have seen) about 2.5% for the world. But then, from about $PPP 30,000 (see the dashed line in the graph), Japan's deceleration was remarkably sharp, so much so that eventually, and very briefly, Japan's growth performance became inferior to the average world performance at that income level. Since then Japan seems to have fully gone back to the "world line"; in other words, its performance is neither exceptionally good nor exceptionally bad, but average for a country at Japan's income level.
Venkatesh Rao on writing to think in the age of AI:
With AI in the loop, writing is no longer the best way to think for me. Which has created a weird impedance mismatch. I used to be a prolific writer mainly because my speed of thinking and speed of writing were roughly the same, creating an in-my-head REPL. Then I slowed down (roughly starting 2020) because I thought slower than I wrote due to age, memory worsening etc.
Now with AI, I think about 10x faster than I can write, not just with language but with code, which used to be a painfully slow medium of thought for me before. Thinking by mindful doing was always more powerful and intense than thinking by writing, but I sucked at doing anything besides writing. I still suck, but with Claude Code, the AI does the actual doing, and I just have to bring the mindfulness, which I can. The result is thinky doing. I'm resurrecting a term I used to use privately for this long ago, "thinkering" (like tinkering). Thinkering with AI, with or without words, is about 10x faster than unassisted writing, which means trying to write about what's on my mind feels painfully slow, laggy, and brittle. Like producing documentation. This divergence between writing and thinking will only increase. I can imagine a "Claude Make" that does for physical AI what Claude Code did for code. That will be mindful doing that's even further from thinking by writing. It might be no-inner-monologue pure fingerspitzengefühl zen-no-mind thoughtless thoughts.
AI exhibits human biases. Shocking…not!
Note a couple of things. First, belief-based questions typically involve maths and statistics, and genAI models tend to be really good at maths, so they give the rational answer almost all the time.
Second, when it comes to expressing preferences for one option or another, all they can rely on is what they have learned from the output of other humans. And that is where human-like behaviour comes in. In particular, the most popular genAI models – GPT, Claude and Gemini – show clear human-like biases in many of the tasks they were asked to perform.
Sadly, the study is based on outdated models. I wonder if the latest frontier models are much better:
Fourth, we examine LLMs' responses to questions from two recent experimental economics studies. Afrouzi et al. (2023) ask human participants to observe a sequence of past realizations of a random variable and then forecast its future values; the random variable follows an autoregressive process. We elicit LLMs' responses in this setting and find that advanced small-scale LLMs – GPT-4o, Claude 3 Haiku, and Gemini 1.5 Flash – produce forecasts that are human-like and irrational: they perceive an autoregressive process that is more persistent than the true process. By contrast, their larger-scale counterparts generate more rational forecasts, with perceived persistence similar to the true persistence.
Bose et al. (2022) present human participants with stock price trajectories and ask them how much to invest in each stock. Replicating this setting for LLMs, we find that large-scale models – GPT-4, Claude 3 Opus, and Gemini 1.5 Pro – make investment decisions that are more human-like than their smaller-scale counterparts: investment depends more strongly on the visual salience of a stock's price trajectory, a preference-based variable identified by Bose et al. (2022) as driving human investment behavior. Taken together, these results suggest that the patterns documented with cognitive psychology questions also hold in experimental economics studies: for preference-based questions, larger models produce increasingly human-like responses, while for belief-based questions, larger models produce increasingly rational responses.
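To make the belief-bias result concrete, here is a small simulation of my own (illustrative parameter values, not the paper's): a forecaster who perceives an AR(1) process as more persistent than it really is will expect shocks to die out too slowly.

```python
# Illustration of the "perceived persistence" bias in AR(1) forecasting.
# Parameter values are illustrative, not taken from Afrouzi et al. (2023).
import numpy as np

rng = np.random.default_rng(0)

RHO_TRUE = 0.5        # true persistence of the process
RHO_PERCEIVED = 0.8   # higher persistence, as a biased forecaster perceives it
T, HORIZON = 200, 10

# Simulate the true zero-mean AR(1): x_t = rho * x_{t-1} + eps_t
x = np.zeros(T)
for t in range(1, T):
    x[t] = RHO_TRUE * x[t - 1] + rng.normal()

last = x[-1]
h = np.arange(1, HORIZON + 1)

rational_forecast = RHO_TRUE ** h * last      # optimal h-step-ahead forecast
biased_forecast = RHO_PERCEIVED ** h * last   # over-persistent, "human-like" forecast

for step, (r, b) in enumerate(zip(rational_forecast, biased_forecast), start=1):
    print(f"h={step:2d}  rational={r:+.3f}  biased={b:+.3f}")
# The biased forecasts hug the last observation and mean-revert too slowly
# relative to the true process - the pattern attributed to the smaller models.
```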
Reminds me of Shannon Vallor's metaphor of AI as a mirror.
As someone who is constantly anxious and worried about the future, I relate to everything in this post:
Jhanas are not the most important thing in life. Probably not even close.
But neither are jobs and goals and stories we tell ourselves to make getting out of bed and scratching out a living more bearable.
I'm not going to tell you to go meditate. I'm not a teacher, this wasn't a how-to, and I don't think you need to sit on a cushion and attain something called enlightenment to live a life worth living.
But I will say that with years of distance now between me and that swingset, when I scroll back through my memories of that period, with the panic attacks, the feelings of betrayal, the 14-hour days, the obsidian dagger in my chest, what I actually remember most isn't the suffering.
What I remember is those 20 minutes a day in the redwoods, and hugging my wife.
I remember it as a profoundly good time. That still kind of surprises me.
Jhanas didn't "fix" anything in my life. They didn't save my failing business or mend the friendship I lost or give me back the years I'd spent building them.
What they did was quietly, repeatedly demonstrate, in a way that apparently my rational brain could not do on its own, that I was already okay.
Wonderful post by Stephanie Shen drawing on Wittgenstein to psychoanalyze LLMs:
Human thinking and understanding are embodied and rooted in a reality far greater than the compressed domain of language. LLMs only know what humans have figured out, which is a small subset of reality.
Truth and meaning result from human interactions with the real world. Once we anchor in this frame, LLMs are not a threat but merely a tool to leverage. They are, in essence, not different from calculators or planes, which can do specific things far better than humans, but are still the instruments under humans' control. Instead of comparing them to humans, we should focus on determining how best to use them in appropriate circumstances with the right expectations.
For example, we can leverage LLMs to assist in brainstorming, gathering diverse viewpoints, identifying patterns in data, or summarizing large amounts of information. But we have to always review and fact-check the results using our own judgment and expertise. More importantly, we have to infuse our own original thinking and experience.
A thoughtful post by Séb Krier on recursive self-improvement and a looming singularity:
On the deployment side, things get even more complicated. Deploying models into the world is not just a "nice to have" thing that labs do out of charity. Labs have strong incentives to see these systems deployed, permitted, adopted, and integrated across the economy. Over time, this is one major way the scale of frontier spending gets justified. And in parallel, you need to go through the court cases, the regulatory burdens, the legal compliance, the weird adoption dynamics, the integration into legacy systems, the cultural adjustments, the political headwinds, everything! There are all sorts of reasons why deployment takes time and I think people are too quick to just wave these away with some handwavy remark about "competitive pressures". This is less a point about narrow model self-improvement than about industrial diffusion: even if models improve quickly, the automation of the economy still has to run through deployment bottlenecks.
When people talk about recursive self-improvement and then talk about society being unrecognizably transformed at a very fast speed, they're not talking about models developing, but essentially about the entire economy self-improving, where every physical and human constraint disappears. I think it's uncontroversial to claim that getting to this point will take time. Even if you get much better robots in the coming years, which I expect will happen, getting humans completely out of the physical and digital economy loop is a pretty damn high bar. And even in such a world, you still do not get a "hard takeoff", because so much remains tethered to human time still.