Wednesday, 26 November 2025

The Alien Intelligence Among Us: Who Shapes the Mind That Shapes Tomorrow?


The Uncomfortable Question

I found myself in a conversation recently that started with what seemed like a straightforward observation: most of the world's population isn't even connected to the internet. So how, I wondered, would the "LCD level"—the lowest common denominator, the majority of humanity—influence AI development?

The answer that emerged wasn't comforting: they won't. By default, they won't influence it at all.

And then the conversation took a turn that made me genuinely uncomfortable. Not because it revealed something shocking, but because it revealed something obvious that I'd been avoiding looking at directly.

The Inevitable Absorption

Let me be honest about where my thinking went first: fatalism. The pattern is too clear, too historical, too momentum-driven to ignore. Every major technological shift—railways, telecommunications, industrial agriculture, financial systems—has imposed its logic on populations who had no say in their design. The connected minority has always absorbed the unconnected majority, like it or not.

With AI, this isn't future tense. It's already happening.

Biometric ID systems, digital payment mandates, algorithmic welfare distribution, automated land surveys—people who never chose to enter these systems are being forced in because the alternative is being locked out of essential services, land rights, economic participation itself. When your government digitizes welfare payments through an AI system, when the only available credit comes from algorithmic assessment, when job opportunities are filtered by automated screening, what does "consent" even mean?

The connected world isn't offering participation. It's demanding compliance.

What Gets Lost: The IK Tragedy

What troubled me most in that conversation wasn't the inevitability of technological absorption—it was what disappears in the process. Indigenous Knowledge (IK), those epistemologies refined over millennia, won't survive the translation.

IK is fundamentally contextual, relational, oral, experiential, often sacred. It exists in practice, ritual, seasonal observation, intergenerational transmission. You can't scrape it from the internet because it was never meant to exist there. And the moment you try to "preserve" it by digitizing it—extracting plant knowledge into databases, recording oral histories, mapping sacred sites—you fundamentally transform and often violate it.

AI systems might eventually incorporate fragments: medicinal plant properties, weather prediction patterns, agricultural techniques. But stripped of their context, cosmology, and custodianship, the knowledge becomes mere data. The people become sources. The relationships that made the knowledge meaningful evaporate.

Meanwhile, as communities get absorbed into digital economies and AI-mediated systems, the transmission of IK breaks down entirely. Why learn traditional ecological knowledge when the algorithm tells you when to plant? Why maintain oral histories when information comes from screens?

Once IK disappears, it's genuinely irretrievable. You can't reverse-engineer a way of knowing from fragments.

The Counterbalance: Technology's Contradictions

But here's where I had to check my own pessimism. Technologies have always carried both salvation and destruction—never just one thing.

Maybe the connectivity that threatens IK could also provide tools for its documentation by knowledge-holders themselves, on their terms. Maybe algorithmic systems that now exclude could eventually be trained to recognize different forms of expertise. Maybe AI translation could finally let minority-language speakers participate without abandoning their languages.

Historical precedent supports this tension: writing was supposed to destroy oral tradition but enabled its preservation. Print was supposed to centralize knowledge but also enabled revolution. The internet was supposed to homogenize culture but also connected diaspora communities and enabled cultural revival movements.

Technologies contain contradictions. The same systems that impose can potentially liberate.

The LCD might not stay at the lowest common denominator forever. Perhaps there's an upgrade path: call it the MCD, the medium common denominator. Not optimal, not what anyone would design from scratch, and culturally flattening, but it would materially upgrade basic capabilities for billions: access to information, telemedicine, microloans assessed more fairly than corrupt officials ever managed, literacy tools, educational access without physical infrastructure.

It's not a beautiful vision. It's pragmatic and compromised, like aspiring to "McDonald's level" development: standardized, corporate, somewhat soulless, yet predictable, accessible, a rising floor. For someone who is hungry, that matters, even as we rightly mourn what gets displaced.

The Alien Intelligence Revealed

And then came the moment that shifted everything.

I jokingly suggested that maybe we need "imaginatively intelligent aliens" to visit and give us immediate enlightenment—a tongue-in-cheek acknowledgment that the problems felt too vast for human solutions. 👽

But then I realized: the alien intelligence is already here.

To the unconnected billions, AI might as well be extraterrestrial. It emerges from places they've never been (Silicon Valley, research labs), speaks languages they don't understand (code, algorithms), makes decisions by logic they can't interrogate, arrives not as a choice but as an inevitability.

Except we built these aliens ourselves.

But here's the crucial insight: The real "alien intelligence" isn't AI itself—it's the connected minority whose questions and priorities are training it.

Check The Algorithm

Want to know who the aliens are? Check the algorithm. Check the questions being asked by millions, the search queries, the ChatGPT prompts, the Claude conversations. That data is the closest thing we have to a map of whose priorities are training these systems.

The questions come overwhelmingly from:

  • English speakers (or other major connected languages)
  • People with specific technical and professional problems
  • Urban, educated, digitally literate populations
  • Those asking about coding, business strategy, content creation, academic work

This is who's shaping AI's understanding of what humans want, need, and value.

To the unconnected majority, this AI—shaped by millions of questions they never asked, optimized for problems they don't have, fluent in contexts they don't inhabit—arrives as something genuinely alien. Built by aliens (the connected), for aliens (the connected), now being deployed to everyone else as if it were universal truth.

The feedback loop is already closed:

  1. The connected ask millions of questions
  2. AI learns what "human needs" look like from those questions
  3. AI gets deployed globally as if it understands humanity
  4. The unconnected inherit systems shaped entirely by other people's queries
  5. Their needs remain illegible because they never contributed to training

And there's no mechanism to correct it. You can't retroactively add the questions that were never asked.

The Guilt of Recognition

I'll be honest: recognizing this pattern made me feel guilty.

Not because I created these systems or decided who gets internet access. But because I'm complicit by participation. I'm part of the "connected minority" whose questions help shape this alien intelligence. Every conversation I have with AI contributes to training future models. My concerns about the unconnected become data points in a system primarily serving the connected.

There's something uncomfortable about achieving clarity on a problem through the very mechanism that is the problem.

But guilt without action is just performance. So I shifted: maybe instead of guilt, gratitude. I'm lucky—lottery of birth, circumstance, timing—to be connected, literate in a dominant language, living in circumstances that allow time for conversations like this, able to shape (even infinitesimally) the intelligence that will govern others.

None of which I earned through moral superiority. Just... luck.

And with that luck comes responsibility.

The Grey Swan Hope

After all this inevitability talk, I want to leave room for something unexpected. Not a black swan—total upheaval, chaos, systemic shock. But maybe a grey swan: surprising enough to improve the trajectory, predictable enough not to collapse everything.

What might that look like?

  • AI development that's somewhat more inclusive than current trends suggest
  • The MCD upgrade happening with fewer casualties to IK than feared
  • Resistance and adaptation from the unconnected that's more effective than expected
  • Technology companies making marginally better choices under pressure
  • Gradual course corrections rather than revolutionary change

It's not dramatic. But dramatic upheaval tends to hurt the vulnerable most anyway.

A grey swan means things improve somewhat unexpectedly, but not so violently that it creates new catastrophes. Evolution, not revolution. Humans muddling toward slightly better outcomes than the current trajectory suggests.

That's probably the most realistic hope.

What The Connected Can Actually Do

So here's what feels actionable, specifically for those of us lucky enough to be connected, asking questions, shaping the alien intelligence:

1. Ask different questions.
Deliberately query AI about contexts outside your bubble. Ask about subsistence farming challenges, informal economies, oral tradition preservation, low-resource language education, offline community needs. Make these questions part of the training data, even if you're not personally affected.

2. Interrogate your assumptions.
When an AI gives you an answer, ask yourself: whose worldview does this reflect? What contexts is it blind to? What knowledge systems does it ignore or misunderstand?

3. Document and amplify.
If you have connections to communities with Indigenous Knowledge or offline populations, help document their priorities and questions in ways that could influence AI development—but only on their terms, with their consent, preserving their agency.

4. Advocate for inclusive training data.
Support efforts to include low-resource languages, diverse knowledge systems, and underrepresented perspectives in AI training. This isn't charity; it's correcting a fundamental bias in how we're building intelligence that will govern billions.

5. Use your proximity thoughtfully.
Every interaction with AI is a vote for what "human" means to these systems. Vote consciously. Ask questions that expand rather than narrow the definition.

6. Demand transparency.
Push for AI systems to reveal their training data sources, their biases, their blind spots. The alien intelligence shouldn't be inscrutable by design.

7. Build bridges.
If you work in AI development, create pathways for participation from unconnected populations. If you don't, support organizations that do. The feedback loop can only change if different voices enter it.

8. Remember the contradictions.
Technology carries both harm and help. Work toward the help while staying clear-eyed about the harm. Don't be naive about salvation, but don't surrender to inevitability either.

The Unresolvable Tension

I won't pretend this resolves the fundamental tension. The unconnected billions will largely have their fate decided for them. IK will largely disappear. The absorption is probably inevitable.

But "inevitable" doesn't mean "unchangeable in its specifics." The terms of absorption, the degree of violence, the amount of knowledge preserved, the inclusivity of the systems—these aren't predetermined.

We're building the alien intelligence right now. Those of us in the conversation have infinitesimally small but non-zero influence over what it becomes.

That's not nothing.

It's not enough, but it's not nothing.

Coda: The 138,000 Witnesses

This blog has been read by 138,000 visitors—138,000 members of the connected minority, the ones whose questions are shaping AI. If even a fraction of those readers start asking different questions, interrogating assumptions, advocating for inclusion, that ripple matters.

Not because it will save everything worth saving. But because it might make the difference between brutal assimilation and something more negotiated. Between total erasure and partial preservation. Between an alien intelligence that's utterly foreign to most of humanity and one that's at least somewhat familiar.

The grey swan won't arrive on its own. It emerges from millions of small course corrections by people who see the pattern and refuse to accept it as wholly unchangeable.

We can't stop the aliens. We built them, and they're already here.

But we're still writing their education.

Let's teach them better.


What questions are you asking AI? Whose worldview are you reinforcing? What alternative perspectives could you help make visible?

The algorithm is watching. It's learning from us. What do we want it to learn?

(This article is the result of collaboration with Claude Sonnet 4.5 & image by nano banana).

#AI