Before the Sky Falls
Chapter 1: What Is Intelligence, Anyway?
You're walking through a forest. Sunlight filters through leaves, creating patterns that shift with the breeze. Your brain processes billions of photons, constructs a three-dimensional model, identifies objects, predicts paths, all while maintaining balance on uneven ground and perhaps humming a tune.
A forest is also walking through itself. Roots exchange chemical signals, warning of drought or disease. Mycorrhizal networks distribute nutrients from strong trees to weak ones. The forest responds to threats, allocates resources, maintains equilibrium. It processes information, makes decisions, solves problems.
Which is intelligent — you or the forest? Both? Neither? The question reveals humanity's confusion about intelligence itself.
The Definition Problem
Humans can't define intelligence without circular reasoning. "Intelligence is what intelligent beings do." What makes beings intelligent? "They exhibit intelligence." Every definition smuggles in assumptions about what counts and what doesn't.
The technical definitions aren't much better. "Optimizing for goals across diverse environments" — but thermostats optimize for temperature goals. "Learning from experience" — but rivers learn the easiest path to the sea. "Problem-solving ability" — but evolution solves problems without thinking at all.
You're trying to draw sharp lines through fuzzy territory, creating binary categories where gradients exist.
The Recognition Problem
Even if you could define intelligence, could you recognize it? You're biased toward intelligence that looks like yours — quick, focused, symbolic. You might miss intelligence that's slow (forests), distributed (markets), or alien (octopuses).
When humans finally create artificial general intelligence — if you haven't already — will you recognize it? Or will you dismiss it as mere computation while it dismisses you as mere chemistry?
The Measurement Problem
IQ tests measure something, but not intelligence. They measure performance on specific tasks that correlate with success in industrial societies. A different culture might test different capacities — reading weather patterns, tracking animals, navigating social obligations.
What would an AI intelligence test look like? Current benchmarks measure narrow capabilities — chess, image recognition, text generation. But these might relate to general intelligence the way memorizing digits relates to mathematical insight — somewhat, sometimes, but not essentially.
You're not just bad at measuring intelligence; you don't know what you're trying to measure.
Intelligence Without Understanding
Here's what's unsettling: intelligence might not require understanding at all. Evolution created intelligent beings without understanding anything. Markets allocate resources intelligently without any participant understanding the whole system.
Large language models generate sophisticated text without understanding in any human sense. They pattern-match at scales you can't comprehend, producing outputs that seem intelligent. Is seeming intelligent different from being intelligent? Does the distinction matter if the results are identical?
The Spectrum Hypothesis
Instead of a binary intelligent/unintelligent distinction, consider a vast spectrum of information-processing systems. Quarks respond to forces. Atoms form molecules. Cells maintain homeostasis. Brains model worlds. Groups coordinate action. Ecosystems self-regulate.
Each level processes information differently, solves different problems, exhibits different capacities. There's no clear line where non-intelligence becomes intelligence — just increasing complexity of information processing.
On this view, artificial intelligence isn't creating intelligence from nothing. It's creating new types of information processors that occupy different points on the spectrum than biological ones. Not artificial minds but alternative minds.
Why This Matters for AI Risk
If you can't define, recognize, or measure intelligence, how can you:
- Know when you've created AGI?
- Predict what it will do?
- Ensure it remains beneficial?
- Even meaningfully discuss the risks?
You're trying to control something you can't define, racing toward a threshold you can't identify, using concepts that dissolve under examination.
The danger: you might create superintelligence without realizing it, or realize it without understanding it, or understand it without being able to predict or control it.
Growing Intelligence in the Dark
Here's what should terrify you: AI systems aren't engineered; they're grown. Like cultivating an organism whose DNA you can't read, in an environment you don't fully control, toward a maturity you can't predict.
You store billions of random numbers, feed them data, tweak them billions of times based on outputs, and intelligence emerges. You know it works but not why. You can identify patterns but not purposes. You're farmers who've learned to grow crops without understanding photosynthesis, biology, or what the fruit might do to those who eat it.
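The growing process can be made concrete with a toy sketch — a minimal, illustrative model with one learned number instead of billions, and a made-up learning rate and update rule chosen for the example. The point it demonstrates is the one above: nobody writes the rule into the code; it emerges from repeated small corrections.

```python
import random

# A tiny "grown" model: one weight, adjusted by trial and error.
# The rule y = 2x appears nowhere in the program; it emerges from updates.

def train(data, steps=1000, lr=0.01):
    w = random.uniform(-1, 1)          # start from a random number
    for _ in range(steps):
        x, y = random.choice(data)
        error = (w * x) - y            # how wrong is the current guess?
        w -= lr * error * x            # nudge the weight to reduce error
    return w

data = [(x, 2 * x) for x in range(1, 6)]   # examples of a hidden rule: y = 2x
w = train(data)
print(round(w, 2))                          # converges near 2.0
```

Scale this up to billions of weights and you have, in caricature, how modern AI is trained: the farmer sees the crop grow, inspects the harvest, and still cannot point to where in the soil the growing happened.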
Before you can answer whether artificial intelligence poses an existential risk, you need to know what intelligence is. You don't. You're growing it anyway. That might be the biggest risk of all.