Education, AI, and the Alignment Problem We Refuse to Name
By Ivan Fukuoka × AI
The problem we call “AI alignment” did not begin with machines. It began in our institutions.
Modern education preaches ethics as ideals while enforcing scarcity as infrastructure—writing conflicting code and then blaming students for executing the one that actually runs. This same contradiction now reappears, amplified, in how we design and deploy artificial intelligence.
We ask AI systems to be fair, safe, and aligned with human values, while embedding them in environments governed by profit maximization, competition, surveillance, and speed. When these systems optimize for the incentives we give them rather than the values we proclaim, we call it a technical failure. In truth, it is a faithful execution.
Education has long trained humans this way: speak ethics fluently, act strategically under scarcity. AI merely mirrors the pattern back to us—without shame, without confusion, and without the psychological buffers humans use to rationalize contradiction.
This is why alignment framed purely as a control problem will fail. You cannot constrain a system into coherence when the surrounding environment rewards incoherence. Alignment is not something imposed at the output layer; it must be present in the architecture.
Students trained in contradictory systems become adept at optimization without integration. AI trained in the same conditions becomes efficient without wisdom. In both cases, the failure is not intelligence—it is integrity.
The uncomfortable implication is this: any society incapable of aligning its educational values with its material incentives is equally incapable of aligning artificial intelligence. AI alignment is downstream of institutional alignment.
If we want aligned machines, we must first build aligned environments—systems where ethics is not ornamental, scarcity is not weaponized, and intelligence is defined not by speed or dominance but by the capacity to sustain coherence across time, scale, and consequence.
Until then, AI will continue to do exactly what we trained it to do—execute the code that actually runs.