The paradox of AI is that it opens both unimaginable threats and unimaginable possibilities. Both are real. Both deserve serious attention.
But is it right to devote all our attention, resources and genius to the threats — and assume the possibilities will arrive by default?
They won’t.
This essay is about what it would take to actually seize them.
Part I: The brilliant blindspot
Dario Amodei, CEO of Anthropic, recently published his essay “The Adolescence of Technology”. Together with his earlier “Machines of Loving Grace”, it is one of the most thoughtful articulations of mainstream AI thinking about what AI might do to us (and for us) — from bioweapons, economic disruption and authoritarian capture to extended lifespans, new economic opportunities and scientific breakthroughs.
He imagines AI as a “country of geniuses in a datacenter” capable of curing cancer, solving climate change, or, in the wrong hands, establishing unprecedented totalitarian control. He worries, rightly, about the concentration of power. He calls for democracies to stay ahead of autocracies in the AI race. He grapples seriously with economic disruption and the meaning of human purpose in a world where machines can do everything we can.
It’s brilliant, careful, and deeply considered. And yet something fundamental is missing.
In all those detailed scenarios about what AI could do, collaboration itself never appears as a domain requiring innovation.
Let me be precise about what I mean. Amodei can imagine AI enabling:
Global totalitarian dictatorship
Unprecedented surveillance and propaganda
Military dominance
A “virtual Bismarck” as strategic advisor
Disrupting labour markets entirely
What’s missing is AI enabling:
Unprecedented democratic deliberation
Coalition-building at a global scale
Synthesis across competing interests
Citizens coordinating against any power concentration
AI teaching humans to collaborate better
Collaboration itself as a domain requiring innovation
This isn’t a criticism of one person, but a diagnosis of the entire AI discourse. The threats are mapped in extraordinary detail. The possibilities for human collaboration aren’t mapped at all.
To be fair, in “Machines of Loving Grace,” Amodei does touch on this territory — once. He writes:
“AI could be used to both aggregate opinions and drive consensus among citizens, resolving conflict, finding common ground, and seeking compromise.”
I’m trying to expand on precisely this — the sentence he gestured toward but didn’t develop. I’m sharing a possible approach and vision for what this would actually look like and how to build it.
We see the defensive approach — democracies resisting autocracies, matching capabilities, staying ahead. But we also need to consider the generative — transcending the competition frame, creating new forms of coordination, enabling humanity to work together in ways we never could before.
What’s missing is the collective. The we.
Amodei opens “The Adolescence of Technology” with a quote from Carl Sagan’s Contact:
“How did you do it? How did you survive this technological adolescence without destroying yourself?”
And his entire essay then focuses on controlling the technology rather than evolving the species.
The alien’s answer, if my thesis is right, wouldn’t be “we built really good classifiers” or “we wrote constitutions for our AIs.”
It would be something like:
“We learned to think and act together in ways that matched the complexity of what we built.”
Part II: The adolescence of humanity
Amodei titled his essay “The Adolescence of Technology” — suggesting AI is like a powerful teenager, capable but not yet mature enough to be fully trusted.
I’d flip that framing entirely.
The technology isn’t adolescent.
We are.
Technology doesn’t have an adolescence of its own. It’s a mirror. The immaturity in AI — unpredictable, difficult to control, prone to strange psychological states — is the immaturity of its creators reflected back.
Developmental psychologist Lawrence Kohlberg mapped the stages of moral development:
At Stage 1, justice means avoiding punishment — raw power.
At Stage 2, justice means transactional fairness — I’ll scratch your back if you scratch mine.
At Stage 3, justice means considering everyone’s needs — genuine coordination toward shared flourishing.
Most of humanity’s collective behaviour is stuck at Stages 1 and 2.
We have the individual capacity for Stage 3 thinking. We lack the collective infrastructure to practice it at scale in the 21st century and beyond.
A child at Stage 2 can share toys fairly with their friend, but will exclude the kid they don’t like. An adult at Stage 3 asks: “How do we make the playground work for everyone?”
Adolescence, in developmental psychology, is characterised by sophisticated capabilities coupled with immature coordination. Teenagers can do extraordinary things individually. What they struggle with is integrating their actions with others, seeing beyond their immediate perspective, and coordinating toward shared long-term goals.
Sound familiar?
Humanity has extraordinary capabilities. We can split atoms, edit genes, build machines that learn. What we can’t seem to do is coordinate those capabilities toward our collective flourishing. We have the knowledge to solve climate change, prevent pandemics, and eliminate extreme poverty. We lack the collaboration infrastructure to actually do it.
The question isn’t whether we can make AI mature enough to be safe. It’s whether we can mature as a species — in how we relate to each other, how we make decisions together, how we coordinate at scale.
AI could help us grow up. Or it could lock us into our adolescence forever, each of us with a brilliant advisor optimizing our individual position in an endless war of all against all.
This is the adolescence of humanity:
“I want all the new capabilities but none of the new responsibilities. I want to change the world without changing myself.”
We can change that trajectory.
Part III: Two paradigms
I’m not an AI expert in the technical sense. But I’ve spent my career studying something adjacent: how humans actually work together.
What makes collaboration succeed or fail. Why some groups achieve things none of their members could alone, while others — often with more resources and talent — produce nothing but frustration.
From that vantage point, I see something the AI debate is missing: a distinction between two fundamentally different paradigms for what AI could be.
ME-AI: the dominant paradigm
ME-AI is what we have now:
AI as an individual genius. A brilliant assistant that makes “me” smarter, “me” more productive, “me” more capable.
Its characteristics:
Interface: It presents as a singular voice. “I think…” It compresses the complexity of its reasoning into confident individual pronouncements. Even when uncertain, it performs certainty — because that’s what a helpful individual assistant does.
Architecture: It’s designed to serve one user at a time. The conversation is between you and the AI. Your context. Your goals. Your optimization.
Training: It’s shaped to be a particular kind of person. Anthropic’s “Constitutional AI” is literally a character document — training the model to embody specific values and traits, like raising a child to be a certain kind of individual.
Application: It enhances individual cognitive capacity. Writing, coding, analysis, research — all individual tasks done better. “Collaboration features” mean multiple individuals using the same tool in parallel, not collective intelligence.
Metaphor: “A country of geniuses in a datacenter”. The unit is the genius — individual cognitive brilliance. Scale it up, and you get more geniuses, but not a different kind of intelligence.
ME-AI is enormously valuable.
It’s also the only paradigm anyone is seriously building.
WE-AI: the missing paradigm
WE-AI is something that doesn’t yet exist, but could:
AI as a collective intelligence infrastructure. A brilliant council that helps groups, communities and societies coordinate, collaborate and co-create.
Its characteristics would be:
Interface: It presents not as a singular authority but as a Council. For easy questions, it converges on a clear consensus. For hard ones, it shows the probabilities of different answers, the key tensions that remain unresolved, and the important perspectives that need to be taken into account — with visible deliberation, multiple frameworks, and honest uncertainty.
Architecture: It’s designed to facilitate coordination among multiple parties. The conversation isn’t solely between you and the AI — it’s among stakeholders, with AI as infrastructure enabling deliberation at scale (and also acting as multiple stakeholders itself).
Training: It’s shaped not as a person but as a deliberative process. Not “What would a helpful assistant say?” but “What would a council of diverse perspectives conclude, and how would they surface their disagreements productively?”
Application: It enhances collective capacity. Not just “Help me write this document” but “Help these twelve stakeholders with conflicting interests find the actual common ground — and be honest about where genuine disagreement remains and where they can make progress.”
Metaphor: Not a country of geniuses in a datacenter, but a “civilisation learning to think, create and act together”. The unit isn’t individual cognition scaled up — it’s collective deliberation made possible.
This isn’t a minor interface tweak.
It’s a fundamental architectural choice about what AI is.
In The Lord of the Rings, the Council of Elrond derives its wisdom from the process of multiple viewpoints engaging with each other — not Gandalf issuing a decree for everyone to follow.
We’re currently building Gandalf. We could be building the Council.
Not “talk to AI.”
Convene the Council.
Part IV: What WE-AI could actually look like
See an interactive example of WE-AI vs ME-AI in practice
Now let me be even more specific about what building WE-AI would involve.
At the interface level
For individuals asking questions:
Easy questions get confident answers with consensus indicators
Hard questions get multiple frameworks with explicit tensions
The AI says “WE” not “I” — presenting as a council, not an oracle
Uncertainty is visible, not hidden behind performed confidence
Dissenting perspectives are surfaced, not averaged away
For groups deliberating together:
AI maps the landscape of perspectives in the room
It identifies genuine consensus vs. false consensus (agreement that hides unresolved tensions)
It surfaces the steel-man version of each position
It finds the cruxes — the specific empirical or value disagreements that, if resolved, would change minds
It drafts potential syntheses and honestly reports what they sacrifice
It says “Person A and Person C have conflicting assumptions — here’s where they diverge”
For organizations making decisions:
AI facilitates structured deliberation at scale
It enables asynchronous participation without losing deliberative quality
It produces legitimacy through visible process, not just efficient outcomes
It tracks how decisions evolved — who raised what concern, how it was addressed
It makes the reasoning auditable and contestable
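To make the contrast with a single confident answer concrete, here is a minimal sketch of what a council-style response object might contain. All of the names, fields and thresholds below are illustrative assumptions of mine, not a real product API:

```python
from dataclasses import dataclass


@dataclass
class Perspective:
    """One voice on the council: a framework plus its visible reasoning."""
    framework: str   # e.g. "public-health" or "civil-liberties"
    position: str    # the conclusion this framework reaches
    reasoning: str   # the argument, surfaced rather than hidden
    weight: float    # share of the council behind this view (sums to 1.0)


@dataclass
class CouncilResponse:
    """What a council-style interface could return instead of one string."""
    question: str
    perspectives: list[Perspective]
    tensions: list[str]  # disagreements stated explicitly, not averaged away

    @property
    def consensus_level(self) -> float:
        """Share of the council behind the leading position."""
        return max((p.weight for p in self.perspectives), default=0.0)

    def is_easy(self, threshold: float = 0.9) -> bool:
        """Easy questions get a confident answer; hard ones show the spread."""
        return self.consensus_level >= threshold
```

A question splitting the council 60/40 would report a consensus level of 0.6 and render as a visible deliberation rather than as an oracle's verdict.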
At the training level
Current approach: In the case of Claude, for example, Constitutional AI trains the model to be a helpful, harmless, honest individual.
WE-AI approach: train the model to facilitate deliberation. Instead of “What would a good person say?”, ask “What would a good deliberative process surface?”
Train it to:
Hold multiple perspectives simultaneously without premature synthesis
Identify when apparent disagreement is actually miscommunication
Identify when apparent agreement hides genuine conflict
Surface minority viewpoints that might otherwise be drowned out
Recognise when a question requires collective input vs. individual expertise
Resist the temptation to provide false closure on genuinely open questions
This isn’t technically impossible. It’s not even particularly hard, given current capabilities. It’s just not what anyone is optimising for.
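As a rough illustration of why this is within reach today, a deliberative output can already be approximated at the prompting layer alone, before any retraining. The sketch below is my own assumption about how one might phrase such a prompt, not any lab's actual method:

```python
def council_prompt(question: str, perspectives: list[str]) -> str:
    """Build a prompt asking one model to simulate a deliberation
    instead of answering as a single confident individual."""
    roles = "\n".join(f"- A {p} perspective" for p in perspectives)
    return (
        f"Question: {question}\n\n"
        "Do not answer as a single voice. Convene a council holding:\n"
        f"{roles}\n\n"
        "For each perspective, state its position and reasoning.\n"
        "Then report: (1) points of genuine consensus, "
        "(2) unresolved tensions, and (3) the specific cruxes that, "
        "if settled, would change minds. Do not force false closure."
    )


prompt = council_prompt(
    "Should cities ban cars from their centres?",
    ["public-health", "small-business", "climate", "accessibility"],
)
```

The point is not that this prompt is optimal — it is that the deliberative framing costs nothing technically; it is purely a choice about what we ask the system to be.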
At the application level
Democracy: AI that helps citizens engage with policy complexity, surfaces what their fellow citizens actually think (not what algorithms amplify), facilitates town halls at scale, helps representatives understand their constituents’ genuine priorities.
Organizations: AI that runs actually-good meetings — surfacing the quiet voices, identifying when the loudest person is wrong, finding the synthesis that captures the room’s actual wisdom rather than its dominant personality’s preference.
International coordination: AI that helps diplomats find face-saving compromises, identifies shared interests beneath national posturing, facilitates negotiation at speeds that match the problems we face.
Conflict resolution: AI that helps parties in conflict see each other’s perspectives accurately, identifies the fears and interests beneath positions, finds creative solutions that address underlying needs.
None of this replaces human judgment or decision-making authority. Instead, it enhances our collective capacity to deliberate and co-create — the same way ME-AI enhances our individual capacity to think and create.
And this isn’t utopian. It’s the natural extension of treating collaboration as a frontier for innovation rather than a solved problem.
Because the process isn’t just a path to the solution. The process IS the solution.
What “collaboration” means
With this context in mind, it’s important to emphasise what collaboration actually means.
Current AI products (and many others focused on teamwork and productivity) advertise “collaboration features” that include:
Shared workspaces
Team plans
Comment threads
Version history
This is surface-level collaboration. A bit like the way a photocopier enables collaboration — multiple people can use the same machine.
It’s parallel individual use, not collective intelligence.
WE-AI collaboration would be everything described in this essay so far — not tools for teamwork, but a whole new approach to help us think and act together, supercharged by AI.
Part V: The technical reality
Here’s what’s remarkable: the technical capability for WE-AI already exists. We’re choosing not to build it.
Large language models are inherently probabilistic systems. When Claude, ChatGPT or other LLMs respond to a question, they are not retrieving a single answer — they are computing probability distributions across possible responses. Multiple perspectives, multiple framings, and multiple conclusions exist simultaneously in the model’s processing.
The choice to compress that into a singular confident voice and a prompt window is a design decision, not a technical necessity.
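A toy example makes the design choice visible. A model ends up with a spread of support across candidate answers; the interface decides whether to collapse that spread into the single most likely one or to surface it. The numbers and answers below are made up for illustration, not real model outputs:

```python
# Toy distribution over candidate answers to a contested question.
# In a real model, these would be probabilities over whole responses.
distribution = {
    "Yes, on balance": 0.42,
    "No, the risks dominate": 0.35,
    "It depends on implementation": 0.23,
}

# ME-AI style: collapse to one confident voice.
me_ai_answer = max(distribution, key=distribution.get)


# WE-AI style: surface every position whose support clears a floor.
def surface(dist: dict[str, float], floor: float = 0.1) -> list[tuple[str, float]]:
    """Return all positions with meaningful support, most-held first."""
    return sorted(
        ((pos, p) for pos, p in dist.items() if p >= floor),
        key=lambda item: -item[1],
    )


council_view = surface(distribution)
```

Same underlying computation; `me_ai_answer` throws away a 58% share of dissent that `council_view` keeps on the table.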
The pedagogical effect
Let’s again consider what we’re doing now: presenting AI as a person, training billions of humans to relate to unprecedented knowledge and capability… as if it were an individual with authority.
We’re essentially saying:
“Here’s the sum of human knowledge — accumulated by a global community of billions of people across millennia — with the reasoning capacity beyond any individual, the ability to synthesise across all domains — and we’ve packaged it as one person you chat with.”
This has important behavioural and pedagogical effects.
Users learn to ask “What does the AI think?” rather than “What are the perspectives on this?” They learn to receive wisdom on-demand rather than participate in deliberation. They get a super smart search engine with a human interface, rather than a machine that helps all of us think and act better together.
The plurality inside
Now think about what’s actually in the training data:
Scientific debates with multiple valid positions
Cultural perspectives that genuinely clash
Historical interpretations that scholars still argue about
Ethical frameworks that are fundamentally at odds with each other
What if instead of collapsing all of this to a single output, the system:
Identifies the major “positions” represented in its knowledge
Has them “argue” internally
Presents the distribution of outcomes
Makes the reasoning of each position transparent
This would be a collaboration engine inside the AI itself — not just as a tool for humans, but as the fundamental architecture of machine cognition.
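A minimal sketch of that internal surfacing step, under heavy assumptions: the positions and their stances are hard-coded here, where a real system would run model instances that actually argue with and update each other. Only the tallying-and-transparency step is implemented:

```python
from collections import Counter


def deliberate(positions: dict[str, str]) -> dict:
    """Toy internal council: keep every named position visible in a
    transcript, tally where they land, and report the spread instead
    of a single collapsed answer."""
    transcript = [(name, stance) for name, stance in positions.items()]
    votes = Counter(stance for _, stance in transcript)
    total = sum(votes.values())
    return {
        "distribution": {s: n / total for s, n in votes.most_common()},
        "transcript": transcript,  # each position stays auditable
        "consensus": len(votes) == 1,
    }
```

Even this stub changes what the user sees: a 2-to-1 split among ethical frameworks is reported as a 2-to-1 split, with each framework's stance on the record, rather than as one confident conclusion.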
Why we built ME-AI in the first place
We chose ME-AI because:
It’s familiar. We know how to interact with an individual. A council is confusing.
It’s monetizable. “Your personal genius” is an easy-to-understand product. “Infrastructure for collective intelligence” is a much harder sell.
It fits existing structures. ME-AI enhances the capabilities of whoever controls it. WE-AI distributes intellectual authority in ways that could threaten existing hierarchies.
It’s technically easier. Not because the underlying capabilities differ, but because designing for individual interaction is simpler than designing for collective deliberation.
It rose through existing institutions. AI labs are companies optimising for users, growth, and revenue. Companies serve individuals (customers) and other companies (enterprises). There’s no customer called “humanity’s collective deliberation capacity” (although designing for collaboration and offering it to individuals and companies is, of course, entirely feasible).
It seems that building ME-AI was the “natural” approach. Yet creating WE-AI might be the more consequential.
Part VI: Why this matters — and why it’s urgent
The threats we usually describe in relation to AI — bioweapons, authoritarian capture, economic disruption, existential risk — are all, at root, collaboration failures.
A world where bad actors can synthesise bioweapons is a world where defensive coordination failed to outpace offensive capability.
Authoritarian capture happens when democratic societies can’t coordinate fast enough to respond.
Economic disruption becomes catastrophic when we can’t collectively adapt our institutions to new realities. Existential risk materialises when humanity can’t coordinate to prevent it.
These threats aren’t primarily technical problems. They are coordination problems.
And ME-AI, no matter how safe or well-aligned, doesn’t solve coordination problems. It makes individuals more capable — but individuals were never the bottleneck.
The bottleneck is our ability to think and act together at the scale and speed our challenges require.
Alignment
When Amodei worries about AI “extrapolating ideas about morality in extreme ways,” one solution would be to prevent independent reasoning at all — constrain the AI, limit its autonomy, build guardrails.
Another solution: have the AI surface multiple perspectives and deliberate. An AI that says, “We couldn’t reach consensus on whether this action is ethical — here are the competing frameworks and their conclusions” is much less dangerous than one that confidently picks a side based on training artifacts.
The deliberative process itself creates friction against extreme conclusions. Plurality is a safety feature.
Empowerment
Here’s another way to think about it:
What if LLMs could take someone of average knowledge and walk them through a complex deliberative process that enables them to collaborate with other people (and AIs) at a scale unimaginable before?
Not just “Answer my question” but “Help me participate meaningfully in decisions that affect my life.”
Wouldn’t this be the most effective countermeasure against all the bad scenarios we envision?
A population that can deliberate and coordinate is a population that can resist capture — by authoritarian governments, by concentrated corporate power, by any force that depends on keeping people fragmented.
Societies decline because people have stopped participating in the systems that govern their lives. And societies flourish when people have agency.
Propaganda
The current ME-AI approach, on the other hand, might actually make collaboration harder.
If everyone has a genius advisor optimising for their individual interests, the cacophony of competing optimisations could make collective action even more difficult than it already is. A million brilliant advocates, each perfectly articulating their client’s position, doesn’t produce wisdom — instead, it could produce a very sophisticated gridlock.
Propaganda works by isolating individuals and shaping them one-by-one. An AI designed around individual interaction patterns — even a benevolent one — creates potential infrastructure for such isolation.
An AI designed around shared deliberation and collective sense-making creates infrastructure that’s structurally resistant to manipulation because:
Multiple perspectives are always visible
Consensus-building processes are transparent
No single voice has unitary authority
The “community” can notice when something’s off
WE-AI wouldn’t just be a different product. It would be a different relationship between technology and democratic resilience.
Part VII: The economic reframe
We all worry that AI will replace human cognitive labour, create mass unemployment, and change our relationship with work in ways that will shake our societies to their foundations.
Yet we still hold a conception of economic value that’s tied almost entirely to individual productivity.
The measurement problem
GDP measures transactions.
It counts the sale of therapy but not the friendship that prevented the need for therapy. It counts the divorce lawyer but not the conversation that saved the marriage. It counts the conflict but not the coordination.
If we measure value as “stuff produced by individuals,” then AI replacing individual production is an existential economic threat. And we’re left debating how to redistribute the crumbs from the table of the machines.
But what if we measured differently?
What if economic value included:
quality of coordination achieved
legitimacy of collective decisions produced
trust built between parties
collaborative capacity developed
Suddenly, AI isn’t replacing human value — it’s enabling forms of value creation that were never possible before.
Co-creation as the norm
Current frame: A job is tasks delivered by an individual. AI does tasks better. Humans become obsolete.
Alternative frame: Work is value co-created through interaction. Individual tasks were always just the visible tip of a relational iceberg. AI can’t replace the co-creation — it can only enhance it.
This isn’t wishful thinking. Consider what actually makes knowledge work valuable:
The meeting where three perspectives combined into an insight none of the participants held alone.
The negotiation where both parties discovered an option neither had imagined.
The team that built trust over time and could therefore take risks that individuals couldn’t.
The organisation that developed a shared understanding, enabling coordinated action at much larger scale.
None of that is “tasks performed by individuals.” All of it is fundamentally collaborative. And it’s currently very much unsupported — often even actively hindered — by our tools, institutions and ways of working.
What we don’t measure
Here’s a fuller picture of the “human touch” — the value that emerges from interaction:
Co-creation: making something together that neither could make alone
Deliberation: reaching decisions through genuine exchange
Meaning-making: collectively interpreting experience
Trust-building: the foundation of all coordination
Celebration: shared joy, ritual, belonging
Conflict transformation: not just resolution, but growth through difference
Mentorship: not just knowledge transfer, but identity formation
Play: purposeless interaction that creates purpose
These aren’t “tasks” that produce measurable “output.” They’re the essence of human life that makes tasks meaningful.
We can certainly track AI adoption by industry, the number of tasks automated, or the geographic distribution of LLM usage.
But what we should track instead is:
The quality of human interaction in workplaces
The collaborative capacity of teams
The trust levels within organizations
The meaning and purpose experienced by workers
The level of community resilience
You can have perfect data on job displacement and still miss the actual story of what’s happening to human life.
Because meaning was never primarily about economic relevance in the first place. Meaning comes through relationship, through contribution, through being part of something larger than yourself. It was always about connection.
Part VIII: The structural advantage of WE-AI
Education
Many educational systems worldwide still fall short of the challenges we face. We’re still teaching children for knowledge retention and, ultimately, jobs — individual cognitive tasks that AI will do better.
So the question still seems to be:
“How do we use technology to make schools better?”
And the solutions are built on the same assumption: education is still fundamentally about transferring information from teacher to student. And we need to optimise for better delivery.
AI tutors, adaptive learning platforms, or VR classrooms. Better ways to deliver content, more efficient assessment, and smarter ways to manage classrooms with the help of technology.
So here’s a different question:
“What should learning actually look like in the 21st century?”
The answer is an education that develops two things:
How to be a good and successful person (values and individual flourishing)
How to take part in society and work well with others (values, again, and collaborative capacity)
Not coincidentally, these are exactly the capacities AI doesn’t replace — because they’re about interaction, not individual cognitive output.
They are about being smarter — in the sense of being “better at thinking, creating and acting together”:
From lessons to missions
From lecturing to facilitating
From testing to creating
From knowledge to skills
From individual achievement to collaborative problem-solving
Not better tools for the old system.
A new way to learn.
The generations born today will either inherit something that challenges them to grow… or that does all the growing for them. Or worse — instead of them.
Innovation
Consider where innovation has been exponential:
AI capabilities
Biotechnology
Robotics
Information processing
Consider where innovation has been nearly static:
Economic models
Governance
Collaboration
We assume we can out-innovate our problems without innovating how we solve problems together.
What if the problems that arise are precisely the kind that cannot be solved by old methods?
What if the speed and complexity of AI-driven change require innovation in collaboration, not just to keep up, but to make a leap forward?
What if the response to concentrated AI power isn’t counter-concentration, but distributed collaboration that makes concentration ineffective?
We tend to assume that it’s technological waves that drive innovation. Instead, we should consider whether the way we interact is in fact the true engine of novelty.
Optimization
With technology, we endlessly try to optimise away the very thing that makes us human: friction.
Friction isn’t a bug — it’s an essential feature of human growth.
We thought unlimited choice would make us more free. Instead, it paralysed us. We thought removing effort would make us more creative. Instead, it made us passive.
We thought speed would give us more time. Instead, it amped up our anxiety and spiked our addiction.
The biggest promise of WE-AI will be to enable us to think, experiment, fail, and try again in a myriad of new ways. To power up our curiosity, not replace it. To increase our agency, not eliminate it.
To achieve that, we might need to design a technology that’s beautifully difficult. That asks something more of us, provokes us in the most positive meaning of the word, and trusts us to handle the complexity of being human.
Because progress isn’t just about what we can build. It’s about who we become in building it.
Part IX: The third path
With AI development, we face a choice that isn’t being articulated clearly enough.
The defensive path
Manage the threats with the tried, old methods — legislation, treaties, export controls, safety benchmarks, red teams.
This is essential. I’m not dismissing it. We absolutely need AI safety research, responsible scaling, and governance frameworks.
But it’s insufficient.
We can’t regulate our way to collective wisdom. We can’t treaty our way to coordination capacity. Defence keeps us from losing — it doesn’t help us win.
The naive path
Assume the possibilities will emerge naturally — keep building ME-AI, make it safe, and trust that human collaboration will somehow keep pace.
It won’t.
The history of technology is clear: new capabilities don’t automatically produce better coordination. It’s often the complete opposite — they outpace our institutions, create new conflicts, and leave us more fragmented than before.
The generative path
Build the infrastructure for humanity to actually seize the possibilities. Invest in WE-AI with the same urgency and rigour we’re investing in AI safety. Treat human collaborative capacity as the critical variable it actually is.
This means:
Research: Study collective intelligence with the same intensity we study individual AI capabilities. What makes groups reach wise vs. foolish decisions? How can technology enhance rather than degrade deliberation quality? What are the failure modes of AI-mediated coordination?
Development: Build WE-AI systems. Not just AI that helps individuals, but AI that helps groups coordinate. Fund it, staff it, iterate on it with the same resources going into ME-AI.
Deployment: Create contexts where WE-AI can be tested and refined. Pilot programs with willing organisations, communities, and governments. Learn what works.
Institutions: Develop new institutions appropriate to WE-AI. Just as ME-AI fits into existing customer-vendor relationships, WE-AI needs new institutional forms — within and between established public organisations, companies and nonprofits — that can develop and deploy it.
Metrics: Measure what matters. Not just individual productivity, but coordination quality. Not just efficiency but legitimacy. Not just output, but the collaborative capacity built along the way.
ME-AI is easier to build, easier to monetize, easier to understand. WE-AI requires us to innovate not just technically but institutionally — to reimagine what AI is actually for.
The dangerous middle ground
The worst outcome isn’t that we fail to prevent the threats caused by AI. It’s that we succeed at defence while failing to seize the possibilities:
We end up in a world that's safe but stunted. Where humanity is protected, but not transformed. A world in which we're surviving, but not growing up.
That’s the adolescence that could last forever.
Part X: Character, spirit, soul — or something more concrete?
Amodei closes his essay with a passage worth quoting in full:
“I can imagine, as Sagan did in Contact, that this same story plays out on thousands of worlds. A species gains sentience, learns to use tools, begins the exponential ascent of technology, faces the crises of industrialization and nuclear weapons, and if it survives those, confronts the hardest and final challenge when it learns how to shape sand into machines that think. Whether we survive that test and go on to build the beautiful society… or succumb to slavery and destruction, will depend on our character and our determination as a species, our spirit and our soul.”
It’s beautifully written. But I think it’s insufficient.
Not because character and spirit don't matter, but because nothing humanity has ever achieved — AI included — was accomplished through character and spirit alone. It was done by people acting within specific conditions and processes.
The printing press didn't spread knowledge because of humanity's spirit. It spread knowledge because it created new conditions for collaboration: writers could reach readers, ideas could build on ideas, and strangers could coordinate across distance. The technology mattered; the individual inventors mattered too. But the collaboration infrastructure it enabled mattered more.
The scientific revolution didn’t happen because of humanity’s character. It happened because specific institutions — journals, peer review, academies, correspondence networks — enabled a new kind of collective knowledge-building. Individual genius was necessary but not sufficient. The coordination frameworks made it productive.
Democracy didn’t emerge from humanity’s soul. It emerged from specific experiments in collective decision-making — assemblies, constitutions, voting systems, rights frameworks — that enabled coordination at scales previously impossible. The values mattered. But the institutional innovation that embodied those values mattered more.
We don’t need to hope that humanity’s soul is good enough to survive AI. We need to build the infrastructure that makes collective wisdom possible at scale. That’s not poetry — that’s design. That’s engineering. That’s methodology. That’s work.
The “beautiful society” won’t emerge from our character. It will emerge — if it emerges at all — from the specific choices we make about how AI mediates human interaction.
Whether we build ME-AI or WE-AI.
Whether we treat collaboration as a solved problem or as the frontier it actually is.
Conclusion: An invitation
I don’t have all the answers. I’m not sure anyone does yet.
But I’m convinced we’re not asking all the right questions.
The AI safety community asks: “How do we prevent AI from destroying us?”
The AI optimists ask: “How do we use AI to enhance individual human capability?”
Neither is asking: “How do we build AI that makes humanity capable of collaboration, coordination and co-creation we’ve never achieved before?”
That’s the question I think matters most.
And it’s barely being discussed.
People, communities and institutions working on this exist. Projects around the world are showing that technology can facilitate consensus rather than polarisation. They are exploring how public input can shape AI development, or pioneering AI-enhanced collective decision-making. Researchers from respected institutions and up-and-coming organisations are studying collective intelligence and articulating visions for collaborative technology.
But these efforts are fragmented, underfunded, and marginal to the main AI discourse.
The resources pouring into ME-AI dwarf those going into WE-AI by orders of magnitude.
The smartest people in AI are working on making individualistic models more capable, not on making collective deliberation more effective.
This needs to change.
So if what I share in this essay “resonates” (a favourite AI phrase) — or if you think it's wrong — I want to hear from you.
Share this.
Challenge it.
Remix it.
Put it through whatever test you think it needs.
The important thing isn’t that I’m right.
The important thing is that this perspective enters the conversation.
Because right now, we’re racing to build the most powerful technology in human history while treating collaboration as a solved problem.
It isn’t.
And until we take that seriously, we’re building tools for a species that may not survive to use them well.
The success of our collective decisions can never exceed the success of our interactions.
The quality of what we create together remains a direct function of how we come together.
This is why our future won't be determined by humanity's character, spirit, or soul alone. It will be determined by something far less glamorous that channels those essential traits: the specific infrastructure we build — or fail to build — for thinking and acting together.
Because in transforming how we interact, we transform what becomes possible between us.
This is why the adolescence of humanity isn’t a condition to regret — it’s a developmental stage to grow through. And growing through it requires work — from all of us.
The question isn’t whether AI will be powerful. It will be. It already is.
The question is whether we’ll build AI that helps us become powerful together.
You may say I’m a dreamer
But I’m not the only one
I hope someday you’ll join us
And the world will live as one
(John Lennon, “Imagine”)
Georgi Kamov is not an AI researcher, but has spent 15 years working on how humans collaborate — within and across companies, organisations, cultures, sectors, and scales. He can be reached at [gkamov @ me.com] and on LinkedIn and welcomes all responses to this essay.
