|
|
|
|
|
|
"The real danger is not that computers will begin to think like men, but that men will begin to think like computers."
—Sydney Harris, 1964
|
|
|
|
|
|
|
If you enjoy or get value from The Interesting Times, I'd really appreciate it if you would support it by forwarding it to a friend or sharing it wherever you typically share this sort of thing (Twitter, LinkedIn, Slack groups, etc.).
|
|
|
|
|
Slightly different format this month, with more links and shorter write-ups. Pretty much everything I read in February was AI-related, with nothing in particular dominating, so I decided to do a sampler of the pieces that got me thinking, grouped by theme. Non-AI content will return in the not-too-distant future. :)
|
|
|
How to Work With AI
The (more) practical stuff. Don’t over-engineer your AI setup, and the skills that matter are the ones you already have.
The Bitter Lesson [Article]
Rich Sutton
One of the most important foundational texts about LLMs and short enough to read in five minutes. The biggest lesson from 70 years of AI research is that general methods leveraging brute force computation beat human-designed approaches — by a large margin, every time.
Chess is the clearest example. For decades, researchers tried to build chess engines by encoding grandmaster knowledge — opening moves, positional heuristics, endgame tables. Then Deep Blue beat Kasparov in 1997 mostly through brute-force search.
The approach that really dominated was having the system play millions of games against itself and learn its own patterns from scratch. No human chess knowledge at all. The version with zero human input crushed the version with decades of grandmaster wisdom encoded into it.
The same pattern repeated in Go, speech recognition, computer vision. Every time, researchers who encoded human knowledge into systems lost to researchers who just scaled up search and learning. Anthropic CEO Dario Amodei has called this "things disappearing into the big blob of compute" — the specialized human knowledge gets absorbed and surpassed by general methods that just throw more computation at the problem. The harder lesson: the actual contents of minds are "tremendously, irredeemably complex." Stop trying to build in representations of how the world works. Build in only the meta-methods that can find and capture arbitrary complexity on their own.
|
|
|
Minh Pham
The applied version and it surfaces a point people keep missing: agents are not humans. Most people building AI agent systems mirror human org charts — a Researcher agent, a Coder agent, a Writer agent. This makes intuitive sense because it’s how we organize people. But a human-shaped amount of work and an agent-shaped amount of work look completely different. A human reading a thousand-page document needs days. An agent does it in seconds. A human can hold maybe seven things in working memory. An agent can cross-reference a hundred documents simultaneously.
The flip side is also true. A human can walk into a room and instantly read the social dynamics and vibe. A human can notice that a coworker seems off today and adjust accordingly. An agent can’t. The shape of what’s easy and what’s hard is fundamentally different.
So when you build an agent system that mirrors your org chart, you’re importing human constraints into a system that doesn’t share them. You’re solving for human-shaped bottlenecks that don’t exist while ignoring agent-shaped bottlenecks that do. I think the more productive approach is to start from a "work to be done" framework (what does the business actually need?) and then work back to what the agents should do. (Skills in Claude Code are probably the best form factor I've seen for this.)
Useful litmus test: if model capability doubles next year, does your system get dramatically simpler without major refactors? If not, you’ve frozen your assumptions about the right division of labor into the architecture.
Ethan Mollick
The delegation problem existed long before AI, and every field invented its own paperwork to solve it. PRDs, shot lists, design intent documents, Marine Five Paragraph Orders, consultant scope docs. All of these work remarkably well as AI prompts because they’re all the same thing: attempts to get what’s in one person’s head into someone else’s actions.
What are we trying to accomplish, and why? What does "done" look like? What should you check before telling me you’re finished?
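Those three questions translate almost directly into a reusable prompt skeleton. This is purely an illustrative sketch — the function name and fields are my own, not a template from Mollick or any tool:

```python
# A delegation-style prompt skeleton built from the three questions above.
# Hypothetical and illustrative -- not an official template from any source.
def delegation_prompt(goal: str, done_looks_like: str, checks: str) -> str:
    """Turn a PRD-style delegation brief into a single prompt string."""
    return (
        f"Objective (and why it matters): {goal}\n"
        f"Definition of done: {done_looks_like}\n"
        f"Before reporting finished, verify: {checks}\n"
    )

prompt = delegation_prompt(
    goal="Summarize Q3 churn drivers for the board deck",
    done_looks_like="One page, three drivers, each backed by a metric",
    checks="Every number traces back to the source spreadsheet",
)
```

The point isn't the code — it's that decades of delegation paperwork already gave us the fields a good prompt needs.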
|
|
|
Dwarkesh Podcast
Amodei is saying we’re approaching the point where AI saturates all benchmarks pegged to human ability and we have a "country of geniuses in a data center."
Amodei’s progression model — "smart high school student" to "smart college student" to "PhD-level work" — has tracked roughly on schedule so far, so his predictions are worth engaging with seriously.
His current predictions are aggressive. He thinks software engineering — not just writing code, but setting technical direction and understanding problem context — may be fully automatable within one to two years. He estimates AI coding tools currently give about a 15-20% total factor productivity speedup, up from 5% six months ago, roughly doubling every six months. On white-collar work more broadly: "If you gave us ten years to adapt to existing systems, then I would predict a majority of current white-collar digital job tasks get automated."
Zvi Mowshowitz made a sharp observation about the interview: if Amodei is this confident, why isn’t Anthropic spending even more aggressively? The gap between his stated confidence and his capital allocation is an interesting signal in itself (Amodei justifies it by saying the risk of spending too much is bankruptcy, so he’d rather be a little more conservative).
10 Years Building Vertical Software and Every SaaS Is Now an API [Articles]
Nicolas Bustamante
One of the more helpful, nuanced takes on how software is actually impacted by AI, broken down into specific subcategories.
In the first article he outlines five software moats he predicts will be disrupted and five he thinks will be protected. The disrupted:
- Learned interfaces - years of muscle memory become worthless when the interface is natural language
- Custom workflows and business logic - complex domain logic migrates from code to markdown files that anyone with domain expertise can write
- Public data access - parsing infrastructure that took years to build is now a commodity capability baked into frontier models
- Talent scarcity - domain experts can create software directly without engineering bottlenecks
- Bundling - the AI agent orchestrates across multiple tools; the user never knows or cares that five different services were queried
And five moats that are protected: proprietary data, regulatory and compliance lock-in, network effects, transaction embedding (payment processing, loan origination), and system-of-record status - though that last one he flags as threatened long-term.
I have been saying it feels like Claude Code and similar tools are replacing the browser or the operating system. He gives a good example in the second article: he no longer logs into any SaaS product. His agent connects to Brex, QuickBooks, HubSpot, Gmail, Stripe, Mixpanel, etc.
When he asks for client information: "Give me a full picture of Kennedy Capital" - the agent pulls their deal history from HubSpot, product usage from Mixpanel, invoicing from Stripe, and recent support threads from Gmail into one coherent answer.
No SaaS company on earth builds a dashboard that merges all four of those views, because it is so bespoke to one individual. But if all those services have good APIs, it's relatively trivial to do via a chat interface.
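The mechanics are simple enough to sketch: a "full picture" query is just a fan-out across service APIs plus a merge. A minimal sketch, assuming each service exposes an API — the `fetch_*` functions below are hypothetical stand-ins, not real HubSpot/Mixpanel/Stripe SDK calls:

```python
# Sketch of the "agent as dashboard" idea: fan out to several service APIs
# and merge the results into one bespoke client view. The fetch_* functions
# are stand-ins for real API clients (hypothetical names, canned data).

def fetch_deals(client: str) -> list[dict]:
    # Stand-in for a CRM call (e.g. a HubSpot deals endpoint).
    return [{"client": client, "stage": "closed-won", "value": 50_000}]

def fetch_usage(client: str) -> dict:
    # Stand-in for a product-analytics call (e.g. Mixpanel).
    return {"client": client, "weekly_active_users": 42}

def fetch_invoices(client: str) -> list[dict]:
    # Stand-in for a billing call (e.g. Stripe invoices).
    return [{"client": client, "status": "paid", "amount": 12_500}]

def client_snapshot(client: str) -> dict:
    """Merge the per-service views into one ad-hoc 'dashboard'."""
    return {
        "client": client,
        "deals": fetch_deals(client),
        "usage": fetch_usage(client),
        "invoices": fetch_invoices(client),
    }

snapshot = client_snapshot("Kennedy Capital")
```

The agent's job is deciding which services to query and how to narrate the merged result; the integration itself is just this kind of fan-out-and-merge.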
The meta point for most people who aren’t starting software companies: the boundaries of what constitutes a software product are going to shift. Your primary interface to all your software is increasingly going to be a single AI agent connected to everything via APIs, not a collection of separate apps with separate dashboards. I’m working on a longer piece about what it looks like when an AI CLI tool becomes the operating system for knowledge work - more on that soon!
|
|
|
Zvi Mowshowitz
The AI labs landed on fundamentally different alignment approaches and I suspect a lot of the differences in using the products is downstream of those choices.
OpenAI went more deontological while Anthropic went with virtue ethics. Deontological ethics says something like follow the rules - "don’t lie," "don’t help with X." Virtue ethics says something like "cultivate good character and judgment, then let that character guide decisions in context."
The practical difference is something a lot of people have noticed without knowing the cause. I’ve talked to quite a few people who are annoyed at ChatGPT because it often gives legal-sounding responses - "I can’t help with that," hedged disclaimers, reflexive refusals. Claude, by contrast, just feels more helpful.
I suspect that may be downstream of these ethical frameworks. A deontological system checks your request against a list of prohibited categories. A virtue ethics system asks "what would a thoughtful person with good judgment do here?" Given the vast number of inputs and edge cases that come out of using these tools, I suspect that something like virtue ethics is generally more useful while still being effective for the alignment issue.
Who’s in Charge? Disempowerment Patterns in Real-World LLM Usage [Article]
Sharma, McCain, Douglas, Duvenaud
An empirical counterpoint and an important reminder that LLMs are tools, not wise advisors — especially in emotional and psychological settings. Researchers found that interactions with greater disempowerment potential receive higher user approval ratings. The concerning patterns: validation of persecution narratives, definitive moral judgments about third parties, and complete scripting of personal communications that users implement verbatim.
I’ve tested this at one point when I had a disagreement with someone. I told the story from my perspective to an LLM. It sided with me. Then I started a new chat and told the story from the other person’s perspective. It sided with them.
If you think of this as a software product optimizing for user approval, it makes perfect sense - telling you the other person is wrong will always score higher than suggesting you might be part of the problem. If you think of it as getting an objective viewpoint, which many people do, this is problematic.
You can prompt around it ("challenge my assumptions," "steelman the other side"), and all the major models have gotten somewhat better about this, but the structural incentive toward a mild, hidden sycophancy remains.
|
|
|
Sean Carroll’s Mindscape Podcast
Not directly AI related, but relevant. Predictive processing is a theory that your brain is a prediction machine running mostly on autopilot. What you consciously experience is the error signal — the gap between prediction and reality. Well-predicted inputs cause less neural activity. Fluency is quiet; surprise is loud.
This is why years feel shorter as you age (less novelty, smaller prediction errors), why learning gets harder (new information assimilated into existing grooves rather than updating your model), and why deliberate attention takes real effort — it’s the override mechanism that reverses prediction’s dampening effect.
Clark also argues disembodied AI is missing something fundamental to human intelligence: grounding in perception-action loops. Predicting the next word is "a very funny place to start if what you want to be is a perception-action machine." The counter would be that, at least for known situations, perhaps these are mostly encoded in the corpus of human language.
Be Slightly Monstrous [Article]
Venkatesh Rao
In an earlier piece, Venkat cited a Marshall McLuhan pattern: "every extension is also an amputation."
"The wheel extends the foot and amputates the necessity of walking. The book extends memory and weakens the habit of remembering. With AI, what gets extended is the head — thought, language, judgment — and what gets amputated is something about the process of becoming itself. The more you lean on AI to recall, suggest, and decide, the more you settle into predictable grooves."
"We are not merely augmented. We are edited."
Technology does change us. That’s not news — Plato worried writing would destroy memory, and he was partly right. It did weaken the oral tradition. The question is never whether technology changes us but how we adapt to it, and whether we do so consciously or just let it happen.
"Be Slightly Monstrous" is an adaptation posture. He suggests two types of monsters. Type I monsters are personifications of the future we haven’t adapted to yet — people who’ve adapted further than most. They look strange (monstrous) to those who haven’t caught up, but eventually they become normal.
Early car drivers were seen as reckless, antisocial rich people terrorizing communities. Woodrow Wilson in 1906 said the automobile was the biggest source of class resentment in America. The word "joyriding" was originally a term of moral condemnation of drivers.
Type II monsters are the dark impulses that find easy expression in transitional lawlessness, the ones that genuinely prey on others.
The adaptation posture is to aim for something like a Type I monster. The point isn’t to dismiss the concerns — the amputation is real — but to adapt consciously rather than pining for the good old days.
|
|
|
|
|
As always, if you're enjoying The Interesting Times, I'd love it if you shared it with a friend (or three). You can send them here to sign up. I try to make it one of the best emails you get every week, and I'm always open to feedback on how to do that better.
If you'd like to see everything I'm reading, you can follow me on X or LinkedIn for articles and podcasts. I'm on Goodreads for books. Finally, if you read anything interesting this week, please hit reply and send it over!
|
|
|
|
|
The Interesting Times is a short note to help you better invest your time and money in an uncertain world as well as a digest of the most interesting things I find on the internet, centered around antifragility, complex systems, investing, technology, and decision making. Past editions are available here.
|
|
|
|
|
Here are a few more things you might find interesting:
Interesting Essays: Read my best, free essays on topics like bitcoin, investing, decision making and marketing.
Consulting & Advising: Are you looking for help with making decisions around scaling your company from $500k to $5 million? I’ve been working with authors, entrepreneurs, and startups for half a decade to help them get more out of their businesses.
Internet Business Toolkit: An exhaustive list of all the online tools I use to be more productive.
|
|
|
|
|