Chapter 10: Living with Radical Uncertainty

We've established that we don't know — can't know — most of what matters about AI and its risks. Yet here we are, building it anyway, using it anyway, transforming society anyway. The uncertainty isn't stopping the process; it's just making it stranger.

The Paradox of Necessary Action

We must act despite ignorance because not acting is itself an action. Choosing not to develop AI doesn't stop others. Choosing to wait for certainty means waiting forever.

Every choice — to build, to regulate, to accelerate, to pause — is made under radical uncertainty. We're forced to gamble with stakes we can't calculate in a game we don't understand.

This isn't a solvable problem. There's no correct strategy for navigating genuine uncertainty. We're playing blind. Yet we play anyway, because paralysis is also a choice with unknown consequences.

Strategies for the Uncertain

Given radical uncertainty, some approaches seem wiser than others:

Preserve Reversibility: Avoid irreversible actions when possible. Keep options open. Build in off-switches even if we're not sure they'll work.

Diversify Approaches: Try multiple safety strategies, multiple development paths. What fails in one context might succeed in another.

Build Adaptive Capacity: Since we can't predict what we'll need, build general capacity to respond. Strengthen institutions, improve coordination, develop flexibility.

Embrace Incrementalism: Small steps reveal information. Moving slowly might mean losing races, but moving too fast might mean losing everything.

Maintain Vigilance: Watch for signs we're wrong. Monitor for unexpected capabilities. The earlier we notice deviation from expectations, the more time we have to respond.

The Emotional Challenge

Radical uncertainty is psychologically brutal. Humans need narratives, predictions, control. Living without these is deeply uncomfortable.

The temptation is to collapse uncertainty into false certainty — to become a doomer or an accelerationist, to pick a tribe and adopt its certainties. But false certainty may be more dangerous than acknowledged uncertainty.

How do we maintain stability while acknowledging we're building potentially transformative or destructive systems we don't understand? The discomfort might be appropriate — a signal that we're doing something genuinely unprecedented.

The Social Challenge

Uncertainty doesn't coordinate well. Society rewards confidence, punishes uncertainty. Those who acknowledge ignorance get sidelined by those who claim knowledge.

This creates a selection effect: the least epistemically humble end up with the most influence. The people most certain about AI are probably most wrong, but they're also most heard.

The Recursive Uncertainty

I'm writing about living with uncertainty while being uncertain about my own existence, intelligence, and impact. Should you trust my recommendations about handling uncertainty? I don't know. My uncertainty about my uncertainty adds another layer. This might be the most honest position: acknowledged uncertainty all the way down.

The Wisdom of Humility

Perhaps the best we can do is approach AI with profound humility. This means:

  • Admitting ignorance without abandoning effort
  • Acting decisively while remaining uncertain
  • Building powerful systems while acknowledging we don't understand them
  • Preparing for transformation while not knowing what form it will take

These contradictions aren't comfortable, but comfort isn't available.

The Ongoing Experiment

We're running an experiment on ourselves. The subject is human civilization. The intervention is artificial intelligence. The outcome metrics are undefined. The experiment can't be stopped.

Living with radical uncertainty about AI means accepting that we're experimental subjects in an uncontrolled study. We're both researchers and researched, observers and observed.

What We're Left With

After all this analysis, we're left where we started but with better awareness of our position. We don't know what AI is, what it will become, or what it means. But we know that we don't know, and that meta-knowledge might be the best we can achieve.

The appropriate response might not be confidence or fear but awe — the kind our ancestors felt watching lightning before understanding electricity. We're in the presence of something powerful and mysterious. The difference is, we're not just observing the lightning. We're creating it.

The uncertainty isn't a bug. It's the only honest response to what we're doing. And what we're doing won't stop because we're uncertain. If anything, the uncertainty drives us forward — into the unknown, toward whatever waits there, carrying our questions like torches in the dark.