Silver Bars and Language Models: Reading Babel in the Age of LLMs

AI
LLMs
society
sci-fi
R.F. Kuang finished describing a world in which linguistic knowledge is industrialized and centralized just as our world began building exactly that.
Author

synesis

Published

April 17, 2026

Cover of R.F. Kuang’s Babel: Or the Necessity of Violence (Harper Voyager, 2022).

R.F. Kuang’s Babel was published in August 2022 [1], three months before OpenAI released ChatGPT [2]. The novel is set in an alternate 1830s Oxford and concerns the British Empire’s monopoly on a magical system of translation: silver bars that harvest the gap between languages and convert it into industrial power. Kuang was not writing about AI. But she finished describing a world in which linguistic knowledge is industrialized and centralized just as our world began building exactly that, and in retrospect the structural parallels are precise enough that the novel reads as one of the clearest available descriptions of what large language models actually do.

The Asymmetry the Silver Mines

In Kuang’s novel, silver bars harvest the gap between languages. A word in Mandarin carries connotations its English counterpart does not, and that untranslatable remainder is what powers the silver. The system converts linguistic asymmetry into material force: bridges that do not collapse, ships that sail faster, an empire that holds.

LLMs run on a structurally similar logic, except the asymmetry they mine is between what any one person knows and what the system has absorbed from millions. A radiologist has deep expertise in chest imaging but limited knowledge of rare genetic disorders. A junior developer can write a for-loop but struggles to architect a distributed system. LLMs sit in these gaps, having ingested the radiologist’s textbooks and the senior engineer’s Stack Overflow answers along with most of what is in between.

The extraction is not limited to natural language. When a physicist uses an LLM to debug a simulation, or a philosopher uses one to stress-test an argument, the model is operating inside formalized reasoning: mathematical proof, algorithmic design, diagnostic logic. These are the structured systems humans built over centuries to close gaps in their own understanding, and they are now being absorbed into the model alongside the words. Language is the most visible layer of what LLMs ingest. The deeper layer is the full spectrum of how people think and solve problems.

The Hidden Labor

In Babel, the silver-work appears as refined scholarship: brilliant students in a beautiful tower translating ancient texts. The system depends on the colonized world — languages taken from peoples who will not share in the benefits equally, knowledge extracted along trade routes that are really routes of control.

LLMs have their own version. In 2023, TIME reported that OpenAI had paid Kenyan workers less than two dollars an hour to label traumatic content so ChatGPT could learn to be polite [3]. The conversational interface most users experience as helpful, even charming, was refined through labor most of those users will never see. Behind that labor sit the writers whose work became training data without negotiation, and the domain experts whose hard-won knowledge now surfaces in model outputs without attribution.

A Different Kind of Dependence

A common response to worries about LLM dependence is that humans have always depended on powerful tools — electricity, antibiotics, the internal combustion engine. The relevant difference is what kind of capacity is being delegated. LLMs intervene in the specific work humans have most prided themselves on: reasoning, synthesis, and creation.

Software engineering offers an early read. Industry surveys by 2025 showed that most professional developers had adopted AI coding assistants in their daily workflow [4]. Whether this widespread use is producing dependence is harder to measure, though the early signals point that way. Anthropic’s own study of AI-assisted coding found that delegating work to the model correlated with worse comprehension on quiz questions, particularly on debugging and conceptual understanding, while using the model to ask conceptual questions and explain errors correlated with better comprehension [5]. The pattern is that more delegative use thins the underlying skill.

This is qualitatively different from depending on a calculator for arithmetic. A calculator handles a mechanical task so the user can focus on higher reasoning. An LLM that helps architect a system or draft an analysis is operating in the territory the user would otherwise occupy. The more fluent it gets, the harder it becomes to tell the assisting tool from the substituting one. In Babel, the students gradually lose the ability to imagine scholarship outside the silver-work; they can still think, but the infrastructure of their thinking has been absorbed into a system they do not control.

Benefit Does Not Cancel Asymmetry

One of Babel’s sharper insights is that the silver genuinely works. It heals the sick, strengthens buildings, and reaches even the colonized populations, who receive real advantages, distributed unevenly and always on terms set by the empire. That is what makes the students’ position agonizing: they are not dupes, they see the exploitation, and they also see the good the system produces and have built their lives inside it.

The same structure holds for LLMs. A first-generation college student who cannot afford a tutor uses one to learn organic chemistry at midnight; a small business owner who cannot afford a lawyer drafts a workable contract. Dismissing these benefits would be dishonest.

The benefits also flow through a system with a particular shape. A handful of companies control the most capable models. The people who extract the most value tend to be those who already have education, technical literacy, and institutional access. The dynamic is class-based rather than colonial, but the structure rhymes: those with existing advantages compound them, and those without gain just enough to deepen their dependence. In Babel, the empire does not need to withhold silver entirely. It only needs to control who gets how much, and on what terms.

Mediation Without Visible Seams

A central theme in Babel is that translation always involves loss, selection, and power. The translator chooses which nuance to preserve and which to sacrifice, and those choices serve the institution commissioning the translation. The silver does not transmit meaning; it transforms it, and the transformation is never neutral.

LLMs mediate knowledge along similar lines. They compress, reframe, and smooth over ambiguity. They privilege patterns that were frequent in training data and underweight those that were not. When a medical student asks for a differential diagnosis and receives a confident, well-structured answer, the contested choices behind that answer — which studies were overrepresented, which demographic biases the medical literature carried forward — are not visible in the output. The answer reads as authoritative because the system shows no seams.
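The frequency effect is easy to see in miniature. The sketch below is a deliberately toy bigram model with an invented corpus, nothing like a production LLM, but it shows the basic arithmetic: relative frequency in the training data becomes confident-looking probability at the output, and the output carries no trace of how lopsided the underlying counts were.

```python
from collections import Counter, defaultdict

# Invented toy corpus: one continuation of "suggests" appears nine times,
# a rarer (but clinically real) one appears only once.
corpus = (
    ["chest", "pain", "suggests", "cardiac"] * 9
    + ["chest", "pain", "suggests", "costochondritis"]
)

# Count bigrams: for each word, tally what follows it in the corpus.
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

# The model's "belief" about what follows "suggests" is just relative frequency.
counts = bigrams["suggests"]
total = sum(counts.values())
probs = {word: n / total for word, n in counts.items()}

print(probs)  # the frequent continuation dominates; the rare one is underweighted
```

A real model’s estimation is vastly more sophisticated than counting, but the directional claim is the same: what was common in the data dominates the prediction, and nothing in the polished answer signals which counts were lopsided or why.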

This is a different sort of mediation from what universities, encyclopedias, or search engines perform. Those institutions also shape knowledge, but through visible editorial processes, peer review, and named accountability. Encyclopedias have editorial boards; journal articles have authors and methods sections; an LLM output has a prompt and a response, with the entire process of knowledge construction collapsed inside the model. The mediation is deeper and less legible.

Where the Analogy Breaks

LLMs are not silver bars, and the companies building them are not the British Empire. The comparison is strongest at the level of political economy: both systems convert distributed human knowledge into centralized capability, both produce real benefits alongside deep asymmetries, and both make complicity feel rational.

The differences also matter. The British Empire held its silver monopoly through military force, and no one was free to leave. LLM companies operate inside markets where, in principle, competition exists, alternatives can be built, and users can switch away. The coercion is softer — convenience, integration, the cost of building a rival system — and that softness changes both the moral weight of participation and what resistance can plausibly look like.

Intent is another difference. The people building LLMs would largely reject the Babel framing; they would say they are democratizing access to knowledge, and they would not be wrong. The same system that concentrates power also puts capability into the hands of people who previously had less of it. In Babel, the institution also believed it was doing civilizational good — advancing scholarship, building infrastructure, improving lives. Kuang’s point is that sincerity and extraction can coexist comfortably, and often do.

Pressure Points

The analogy is not a prophecy. Babel ends in revolt and destruction; the present situation is more open-ended. The students in the novel had almost no leverage, since the silver system was controlled entirely by one institution and the only option they could imagine was to break it. The current LLM infrastructure is concentrated but not sealed, and there are at least three pressure points that, taken seriously, could bend the trajectory away from the pattern Kuang describes.

Transparency and Distributed Ownership

The first is transparency and distributed ownership. A reasonable objection is that these sound naive in a capitalist economy: why would companies that spent billions on proprietary models voluntarily open them up? They would not, entirely. But transparency and distributed ownership are not anti-market positions, and they have coexisted with capitalism before. Pharmaceutical companies operate in a fiercely competitive market while being required to disclose clinical trial data and submit to regulatory review. The drugs remain proprietary, the companies remain profitable, and the knowledge needed to evaluate safety is not locked inside a black box. Requiring that LLMs surface their sources, flag uncertainty, and make training data composition auditable would not mean giving away model weights. It would mean establishing a floor of legibility.

The ownership side is not about outcompeting every commercial lab. It is about building a public layer alongside the commercial one, the way public libraries did not dismantle the publishing industry but kept access to knowledge from depending entirely on ability to pay. Public investment in open models and shared compute infrastructure — efforts like the EU’s nascent sovereign AI programs [6] — could keep hospitals and school districts from being entirely at the mercy of one company’s pricing and priorities. Transparency without distributed ownership gives the public the right to see inside a system it cannot influence. Distributed ownership without transparency just produces another black box. The two together start to resemble a mixed economy of intelligence.

Education

The second is education. If the deepest risk is cognitive dependence, the most direct response is to teach people to think with these tools without being absorbed by them. Curricula should treat LLMs the way good math programs treat calculators: useful, permitted, but not a substitute for understanding the underlying reasoning.

Shared Language

The third is shared language. One reason Babel resonates is that it names a dynamic many people feel but cannot quite articulate: the sense that something valuable is being quietly reorganized, and that the benefits are real but the terms are not theirs to set. A framework for that feeling is itself a form of power: it is harder to be captured by a system you can describe.


The asymmetry between what individuals know and what these systems have absorbed will keep being mined; I do not think that can be stopped. What is still open is whether the extraction is governed, shared more broadly, and made more accountable than it is now. Babel is most useful here as a description sharp enough to make the trap visible while there is still room to step around parts of it.


References

[1] Kuang, R.F. Babel: Or the Necessity of Violence: An Arcane History of the Oxford Translators’ Revolution. Harper Voyager, August 2022.

[2] OpenAI. “Introducing ChatGPT.” OpenAI Blog, November 30, 2022. https://openai.com/blog/chatgpt

[3] Perrigo, Billy. “OpenAI Used Kenyan Workers on Less Than $2 Per Hour to Make ChatGPT Less Toxic.” TIME, January 18, 2023. https://time.com/6247678/openai-chatgpt-kenya-workers/

[4] Stack Overflow. “2025 Developer Survey.” Stack Overflow, 2025. https://survey.stackoverflow.co/2025/

[5] Anthropic. “AI Assistance and Coding Skills.” Anthropic Research. https://www.anthropic.com/research/AI-assistance-coding-skills

[6] European Commission. “AI Factories: European Initiative for Sovereign AI Infrastructure.” Digital Strategy, 2024. https://digital-strategy.ec.europa.eu/en/policies/ai-factories