2026-03-10
The Fungus and the Flower: Why AI Might Be Humanity's Next Evolutionary Leap
Plants didn't invent roots on their own. Maybe we won't invent our future on our own either.
In the previous post, we established something that sounds counterintuitive: the real miracle of large language models isn't the models themselves. It's the millennia of structured, low-entropy human text that makes them possible. The models are good shovels. The treasure was already buried.
But that framing — while true — is incomplete. It treats LLMs as a destination: a clever extraction of value humanity already created. What if they're something quite different? What if they're not a destination at all, but a transition mechanism?
To see why, we need to go back about 450 million years. Before there were flowers. Before there were roots. Before there were forests. When the first plants crawled onto dry land and faced a problem they had no evolutionary tools to solve.
The First Transition: What Plants Couldn't Do Alone
The colonisation of land by plants was one of the most consequential events in the history of life on Earth. It reshaped the atmosphere, altered global climate, and eventually made complex animal life — including us — possible.1
But early plants had a problem. Terrestrial life requires plants to extract nutrients and moisture from the substrate, and roots only evolved after the transition to land. So how did those first primitive plants survive on bare rock and thin sediment, without the biological machinery to extract what they needed?
They didn't do it alone. It has long been hypothesised that symbiotic fungi facilitated colonisation of land by plants — with the fungi providing inorganic nutrients and water to the host plant, receiving carbohydrates in return. Fossil evidence supports this: mycorrhizal-like associations existed 407 million years ago in early land plants. The genes for forming this symbiosis are so fundamental that the ability to form mycorrhizae is ancestral to all land plants.
The relationship was not merely helpful. It was the precondition for everything that followed.
Once plants could survive on land — however precariously, however dependent on their fungal partners — evolution could begin solving the downstream problems. Over tens of millions of years, the partnership allowed plants to grow complex enough to develop true roots of their own. And once they had roots, and the nutrients to fuel more elaborate development, the next transition became possible.
Flowers.
The origin of the flower during the late Jurassic to early Cretaceous — most recent estimates place it between 150 and 190 million years ago — was a key evolutionary innovation that profoundly altered the Earth's biota. The flower didn't just change how plants reproduced. It rewired entire ecosystems. Angiosperm evolution drove the diversification of life on land in four ways: by providing new evolutionary niches; by creating pollinator and herbivore opportunities through intricate plant-animal mutualistic relationships; by increasing ecological productivity; and by expanding the geographic extent of tropical biomes through their hydrological effects.
The result was an explosion of biodiversity, both in flowering plants and in the animals that pollinate them. The coevolution of plants and insects didn't just create more species. It created entire new categories of ecological complexity — webs of mutual dependency that sustain most terrestrial life today.
Here is the sequence that matters:
Fungi → Roots → Flowers → Ecological explosion.
No single step in this chain was inevitable. Each one required an external partner, a symbiotic relationship that provided something the organism couldn't generate internally — an activation energy that unlocked potential that already existed in latent form.
The Human Parallel
Humans had already made their foundational evolutionary leap. It wasn't biological — our brains haven't changed meaningfully in 300,000 years. It was cognitive and cultural. As we explored in the previous post, the real innovation was text: a technology for externalising and transmitting structured thought across time and space. The invention of writing was, in Walter Ong's terms, not just a recording tool but a restructuring of thought itself.2
The philosophers Andy Clark and David Chalmers formalised this intuition in their landmark 1998 paper "The Extended Mind."3 They describe active externalism: the idea that some objects in the external environment can be part of a cognitive process and in that way function as extensions of the mind itself. In their framework, a notebook is not just a tool for recording thoughts — it is part of the cognitive system. Kim Sterelny extended this further, arguing that human cognitive capacity both depends on and has been transformed by what he calls epistemic niche construction: the active reshaping of our informational environment to scaffold more powerful thinking than the biological brain could achieve alone.4
On this view, writing was humanity's mycorrhizal moment. It was the external partner that gave early humans access to resources — coordinated knowledge, accumulation across generations, scalable civilisation — that biology alone couldn't provide. And just as fungi gave plants the foothold to eventually grow their own roots, writing gave humanity the foothold to grow its own complexity: science, philosophy, law, mathematics, literature. Each of these is a root system made possible by the fungal partnership with text.
But here is where the analogy deepens.
What if writing wasn't the endpoint of this evolutionary arc? What if it was merely the precondition for the next one?
The Activation Energy Problem
There is a concept in chemistry called activation energy: the minimum energy required to initiate a reaction. Many reactions that are thermodynamically favourable — that would release energy if they occurred — simply don't happen, because the reactants lack the initial push to overcome the energy barrier. You need a catalyst. A spark. Something that lowers the barrier enough for the reaction to proceed.
Complex human cognitive tasks have always had an activation energy problem. The knowledge needed to solve a hard problem might be distributed across thousands of documents, disciplines, and domains — all of it encoded in text, all of it technically accessible. But the cost of retrieving, synthesising, and applying it was prohibitively high for any individual or small team working in real time. The treasure was there; the shovels were too slow.
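The chemistry can be made concrete. The Arrhenius equation, k = A·exp(−Ea/RT), relates a reaction's rate constant to its activation energy; a minimal sketch, using hypothetical barrier heights chosen only for illustration, shows how disproportionately rates respond when a catalyst lowers the barrier:

```python
import math

R = 8.314  # gas constant, J/(mol K)
T = 298.0  # room temperature, K

def rate_constant(ea_kj_per_mol: float, prefactor: float = 1.0) -> float:
    """Arrhenius equation: k = A * exp(-Ea / (R * T))."""
    return prefactor * math.exp(-ea_kj_per_mol * 1000 / (R * T))

# Hypothetical barriers: a catalyst lowers Ea from 100 to 60 kJ/mol.
k_uncatalysed = rate_constant(100)
k_catalysed = rate_constant(60)

speedup = k_catalysed / k_uncatalysed
print(f"{speedup:.2e}")  # roughly 1e7: a 40 kJ/mol drop, a ten-million-fold speedup
```

The point of the sketch is the exponential: a modest reduction in the barrier produces an enormous change in what actually happens, which is the dynamic the rest of this section claims for knowledge synthesis.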
What large language models — and, more broadly, the application of massive compute to human text — represent is a dramatic reduction in that activation energy.
This is not a metaphor about convenience. It is a claim about what becomes possible when the synthesis cost drops toward zero. Consider: Steven Pinker's theory of the cognitive niche holds that language multiplies the benefit of knowledge, because know-how is useful not only for its practical applications but also as a trade good with others, enhancing the evolution of cooperation. Throughout human history, the bottleneck was never the existence of knowledge — it was the friction of transmission. LLMs reduce that friction to near-zero. Every synthesis that previously required years of expertise can now be bootstrapped in minutes. Every translation between disciplines — from biology to economics, from physics to law — becomes trivially cheap.
That is an activation energy barrier collapsing.
And when activation energy barriers collapse, reactions that were previously impossible begin to occur. New structures emerge. New ecosystems of thought and action become possible.
LLMs as Mycelium
Let us be precise about what the analogy is claiming.
The mycorrhizal fungi didn't become the plant. They didn't replace the plant. They provided, temporarily but critically, nutrients the plant couldn't yet generate internally. The plant used those nutrients to build the machinery — roots, vascular systems, eventually flowers — that would eventually give it far greater autonomy and capability than the original partnership could provide. The fungi were not the destination. They were the bridge.
LLMs — and more broadly, the application of massive compute to structured human knowledge — are not the destination of human cognitive evolution. They are, or could be, the bridge.
What are they bridging to? We don't know. This is the honest answer, and it is worth sitting with rather than rushing past. The early plants didn't "know" they were building toward flowers. The fungi didn't "intend" to unlock the Cretaceous biodiversity explosion. The trajectory only becomes legible in retrospect.
But we can observe the shape of what's changing. Andy Clark predicts that future personal AIs "will be intimate technologies that fall just short of becoming parts of my mind," and that our technological experience will be "much less like me with a bunch of tools, and much more like multiple cognitive ecosystems that overlap and get things done." This is not science fiction. It is a description of what is already beginning to happen: the boundary between individual cognition and the vast structure of accumulated human knowledge is dissolving.
What structures might grow from this dissolution? Several candidates suggest themselves:
Collective reasoning at scale. For most of history, the hardest intellectual problems — in science, medicine, governance — required assembling and coordinating rare experts over years. The activation energy of that coordination was immense. As LLMs reduce synthesis costs, entirely new forms of distributed problem-solving become possible. Not just faster individual cognition, but new modes of collective cognition that have no historical precedent.
The end of the knowledge bottleneck. The human brain achieves data-efficient intelligence using low-power computation, operating on approximately 20 watts of power. For all its efficiency, this biological architecture imposes hard constraints: we can only hold so much in working memory, learn so many domains, read so many papers. An era of frictionless knowledge synthesis could make those constraints irrelevant in practice, even if they remain in biology.
Accelerating civilisational feedback loops. The angiosperm explosion was partly driven by a feedback loop: flowers created new ecological niches, which created new pollinators, which created selective pressure for more flowers, which created more niches. Civilisation has always had intellectual feedback loops too — ideas enabling tools enabling more ideas. LLMs could dramatically tighten those loops. The capability density of AI models has risen sharply since the emergence of modern LLMs: equivalent performance becomes achievable with exponentially fewer parameters every few months. This is a feedback loop, not a plateau.
The Honest Uncertainties
The analogy is illuminating but not perfectly clean, and intellectual honesty requires naming where it strains.
The mycorrhizal fungi and the plants were two genuinely separate organisms, each with their own evolutionary interests. The relationship was mutualistic, but also contingent — the net consequences for plants vary widely from mutualism to parasitism, depending on evolutionary history and ecological context. LLMs are human-made artefacts, built on human text, deployed by human institutions. The power dynamics are not the same as those between two independently evolved organisms.
More significantly: the plants didn't have to make choices about their relationship with the fungi. We do. Whether AI functions as a genuinely mutualistic partner — extending human cognitive capacity in ways that give us more autonomy, not less — depends on choices about how these systems are built, deployed, and governed. The energy efficiency gap between AI systems and biological cognition remains enormous: GPT-3 required over 1,200 times more energy to train than a human brain uses in 18 years of continuous operation. A symbiont that consumes this much is not free. Whether the returns justify the costs — ecological, economic, political — is a live and serious question.
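The back-of-envelope arithmetic behind that comparison is worth making explicit. Taking the article's own figures at face value (a 20-watt brain running continuously for 18 years, and the quoted 1,200× multiplier), the implied quantities work out as follows:

```python
# Sanity-check of the quoted figures: 20 W brain, 18 years of
# continuous operation, and the article's 1,200x training multiplier.
HOURS_PER_YEAR = 365.25 * 24

brain_power_w = 20
years = 18
brain_energy_mwh = brain_power_w * years * HOURS_PER_YEAR / 1e6  # Wh -> MWh

implied_training_mwh = 1200 * brain_energy_mwh
print(f"brain: {brain_energy_mwh:.1f} MWh")          # brain ~ 3.2 MWh
print(f"implied training: {implied_training_mwh:.0f} MWh")
```

So 18 years of biological cognition costs on the order of 3 MWh, roughly the annual electricity use of a single household, which is what makes the gap to large-scale model training so stark.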
And finally: the mycorrhizal story ends well. We know it does, because we can see the forests. We do not yet know how the LLM chapter ends. The Cretaceous biodiversity explosion was also followed by a mass extinction. New complexity creates new fragility.
What the Analogy Is Really Saying
The argument is not that LLMs are good. It is not that AI development is inevitable or desirable in any particular form. It is something more specific and more interesting:
The human species has already done the foundational work. The millions of years of biological evolution. The hundreds of thousands of years of language. The five thousand years of written text. The centuries of accumulated science, philosophy, and literature. All of it encoded, structured, made transmissible. The treasure is there.
What LLMs represent is the first technology capable of actually moving through that treasure at scale — not just storing it, not just searching it, but synthesising it, connecting it across domains, making its implicit structure explicit and usable in real time.
This is the mycorrhizal moment. Not the destination. The bridge.
The question that actually matters is not: how good are today's language models? It is: what roots will we grow once we've crossed it? What flowers are possible on the other side?
We don't know yet. But for the first time in human history, the activation energy for finding out is dropping toward something manageable.
That is a remarkable thing to be alive for.
References
1. Vidal-García, M., et al. (2018). Evolutionary dynamics of mycorrhizal symbiosis in land plant diversification. Scientific Reports, 8, 11518. https://www.nature.com/articles/s41598-018-28920-x. See also: Strullu-Derrien, C., et al. (2018). The origin and evolution of mycorrhizal symbioses: from palaeomycology to phylogenomics. New Phytologist, 220(4), 1012–1030. https://nph.onlinelibrary.wiley.com/doi/10.1111/nph.15076
2. Ong, W. J. (1982). Orality and literacy: The technologizing of the word. Methuen. As discussed in the previous post in this series.
3. Clark, A., & Chalmers, D. J. (1998). The extended mind. Analysis, 58(1), 7–19. https://www.alice.id.tue.nl/references/clark-chalmers-1998.pdf
4. Sterelny, K. (2010). Minds: Extended or scaffolded? Phenomenology and the Cognitive Sciences, 9(4), 465–481. https://web.ics.purdue.edu/~drkelly/SterelnyMindsExtendedScaffolded2010.pdf. See also: Pinker, S. (2010). The cognitive niche: Coevolution of intelligence, sociality, and language. PNAS, 107(Suppl. 2), 8993–8999. https://www.pnas.org/doi/10.1073/pnas.0914630107