Four For Friday | February 13, 2026
LF207: What's next for AI's frontier, are we in a bubble, OpenClaw, collectivism vs loneliness, and building your personal board with Claude Code
Welcome to this week’s Four For Friday. Here are four nuggets of interest I’ve picked up this week, plus an AI lesson of the week. It’s quite an AI-heavy one this week as my mind has been blown by using Claude Code and OpenClaw extensively, so I’m keen to share some perspectives.
1. Going beyond autocomplete
Those who gripe about how AI models are still dumb (they don’t even know how to wash a car…) are missing the point. For the first time, humanity has an incredible tool that puts the world’s intelligence at our fingertips. And for most things that’s amazing. Complaining about it is like complaining about the speed of wifi on a plane. There is, however, an interesting question about how AI can go beyond synthesizing and digesting already-known knowledge to creating new discoveries.
New science comes from anomalies that break the existing framework rather than conform to it; a model trained on Newton will never invent quantum mechanics. MIT Professor Markus Buehler suggests three ways we can have AI go from librarian to scientist:
First, compositional world models. Systems must build explicit world models that can be recomposed, not just statistical summaries. Discovery requires systems that represent knowledge as structured relationships, not just correlations in text. By working with explicit models of how parts fit together, an AI can recombine principles across domains and propose explanations that are not simple extensions of past cases.
Second, adversarial falsification. Scientific progress depends on proving ideas wrong, so systems need internal opposition: one component compresses observations into the simplest possible theory, while another actively searches for counterexamples. Progress occurs when theories are revised to survive stronger attacks, not when outputs merely sound plausible. (The Claude Code examples below both use teams of agents that critique each other, producing a stronger output.)
Third, physical grounding. Ideas must confront reality. Predictions are hypotheses until tested against simulations, experiments, or fabrication. Without this loop, models drift toward elegant nonsense.
Together, these levers would shift AI from prediction within a closed worldview to systems that can revise the worldview itself.
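To make the loop concrete, here’s a toy sketch of how the three levers could fit together in code. Everything here is a stand-in: the “world” is a hidden function playing the role of simulation or experiment, and the candidate theories are hand-picked for illustration.

```python
# Toy propose -> falsify -> ground loop.
# Candidate "theories" are ordered from simplest to most complex; the
# proposer picks the simplest one consistent with all observations, and
# the falsifier (the internal adversary) hunts for counterexamples
# against the "world" - a hidden function standing in for simulation
# or experiment.

def world(x):
    """Hidden ground truth (the 'reality' the system must confront)."""
    return x * x

CANDIDATES = [                 # ordered by increasing complexity
    ("constant",  lambda x: 0),
    ("linear",    lambda x: x),
    ("quadratic", lambda x: x * x),
]

def propose(observations):
    """Compression: return the simplest theory fitting every observation."""
    for name, theory in CANDIDATES:
        if all(theory(x) == y for x, y in observations):
            return name, theory
    raise RuntimeError("no candidate theory fits the data")

def falsify(theory, trial_inputs):
    """Adversary: search for an input where theory and world disagree."""
    for x in trial_inputs:
        if theory(x) != world(x):
            return x
    return None

observations = [(0, 0), (1, 1)]           # sparse data: 'linear' fits it
history = []
while True:
    name, theory = propose(observations)
    history.append(name)
    cx = falsify(theory, range(-5, 6))
    if cx is None:
        break                             # theory survives the attack
    observations.append((cx, world(cx)))  # ground the anomaly in 'experiment'

print(history)  # ['linear', 'quadratic'] - revised after a counterexample
```

The key move is the last line of the loop: the counterexample doesn’t just fail the theory, it becomes a new grounded observation that forces a revision - prediction within a worldview giving way to revising the worldview itself.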
The So What? We’ll be hearing a lot more about ‘world models’ as the limits of purely text-based LLMs become apparent. These three levers help us build them.
2. Maybe there’s no AI bubble
Azeem Azhar questions the prevailing assumption that we’re about to experience an AI bubble, suggesting it could be the opposite - the beginning of a far bigger boom.
Data points to a capacity crunch. In two years, monthly AI revenues rose eighteen-fold, from $772m in January 2024 to $13.8bn by December 2025. “Industry Strain”, the ratio of investment to revenue, fell from 6.1x to 4.7x in five months, moving away from bubble territory.
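A quick sanity check on those figures (my arithmetic, not the article’s):

```python
# Checking the quoted figures.
jan_2024_revenue = 772e6    # monthly AI revenue, Jan 2024 ($772m)
dec_2025_revenue = 13.8e9   # monthly AI revenue, Dec 2025 ($13.8bn)

multiple = dec_2025_revenue / jan_2024_revenue
print(round(multiple, 1))   # 17.9 - roughly eighteen-fold, as claimed

# "Industry Strain" = investment / revenue; a falling ratio means
# revenue is catching up with capex - the opposite of bubble dynamics.
strain_before, strain_after = 6.1, 4.7
drop_pct = (strain_before - strain_after) / strain_before * 100
print(round(drop_pct))      # 23 - strain fell ~23% in five months
```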
AI has moved from novelty to infrastructure, from chatbots to agents that work for hours at a time. That shift devours computing power. Cloud firms are already rationing capacity, delaying customers and diverting resources between products. Heck, AWS is even getting into copper mining… Data centres take years to build, power grids longer still. Capital can be summoned quickly; electricity, chips and concrete cannot. The risk ahead is not collapse, but congestion: a stampede toward AI that the world is struggling to supply.
The So What? A counterpoint to Cory Doctorow’s perspective in LF204, and this one grounded in numbers. What if the bears are wrong?
3. OpenClaw and why it matters
I’ve been diving into OpenClaw this week, and it’s a fascinating experience to chat (I’m using Telegram) with an always-on, always-remembering assistant - taking us into Jarvis territory. This post by the generally hype-averse founder of Basecamp does a nice job of summarising its impact.
It’s the first real shift from tools to autonomous digital actors - moving beyond the prompt-response loop. With no special plugins or APIs, the agent signs itself up for email, joins collaboration tools, creates shared workspaces and responds to invitations much as a human would. It navigates a world built for people, using the same interfaces and cues.
This suggests a future of AI that is less about integration and more about resilience, flexibility and real-world adaptiveness - similar to the argument in #1.
The So What? Personal AI assistants may soon behave less like software and more like junior colleagues.
4. Collectivist countries are lonelier than individualist ones?
A counter-intuitive take here, based on this research and a new WHO report. We might assume that collectivist societies offer deeper connection, but it turns out the best recipe for addressing loneliness is high individualism paired with high-quality, reliable societal infrastructure.
Loneliness is often blamed on excessive individualism, but that misses the mark. Collectivism - organising life around tight in-groups such as family, clan, caste or social networks (e.g. in China, Pakistan, Indonesia and the Philippines) - confers some advantages, but if you’re outside the in-group, it’s devastating.
The evidence points to societies like Denmark and the Netherlands having the best loneliness outcomes (a big factor in overall health) - they combine autonomy with strong welfare states and high trust.
The So What? True social resilience comes not from enforced togetherness but from institutions and norms that make individuals secure without constant relational upkeep.
Claude Code Lesson of the Week: Build your AI Boardroom or AI workforce
Claude Code is everywhere, and it’s worth spending a few hours setting it up and figuring it out - it’s mind-expanding. It’s fundamentally different from the chat interface we’re used to, because it lives inside your computer, can access far more context about what it should do and how to work, and can actually make changes in your computer folders, such as writing code that is then pushed live on the web. I’ve built three websites already this week - far quicker and easier than using prompt-based front-end design tools like V0.
Anyway, AI influencer Allie Miller shares a prompt that lets you build a team of advisors to help with your projects, while this video shows how one person created a workforce of 14 agents that help him run his business.
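If you want to try something similar yourself, Claude Code lets you define project-level subagents as markdown files under `.claude/agents/`. A minimal sketch of one “board member” might look like this - the frontmatter fields follow Claude Code’s subagent convention, but the filename and persona are purely illustrative:

```markdown
---
name: cfo-advisor
description: Challenges plans from a finance perspective. Use when reviewing budgets, pricing, or investment decisions.
---

You are a sceptical CFO on my personal board. For any plan I share,
question the unit economics, identify the three biggest financial
risks, and propose one cheaper way to test the same idea.
```

Create a handful of these with different personas (CFO, CMO, coach) and Claude Code can route your questions to them - that’s the essence of the “boardroom” pattern.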
That’s all for now - happy weekend everyone.
- Stephen