The Legibility Trap
Monday morning, February 23rd. A hospital system rolls out a new electronic health record mandate. Surgeons must document each clinical decision in standardized fields: rationale, alternatives considered, expected outcomes. Compliance improves steadily. The documentation looks excellent. What nobody tracks is the quality of the decisions themselves, which have quietly gotten worse, because experienced surgeons have learned to write what the system expects rather than think what the case requires. The documentation is legible. The judgment is diminished. Nobody can prove a connection.
The demand that expertise be made legible (explainable, auditable, visible to managers and systems) doesn't capture knowledge. It replaces it. When you require tacit skill to be translated into explicit process, you're not preserving the skill; you're substituting documentation for it. The people who benefit from legibility are the ones who don't have the expertise and need to evaluate it from the outside. The cost is paid by the expertise itself, which cannot survive full translation without losing what made it valuable.
What Legibility Demands
The concept comes from James Scott's Seeing Like a State, his analysis of how modern bureaucracies try to make complex phenomena readable and measurable so they can be managed. The local, contextual, and hard-to-articulate gets translated into the standardized, documented, and auditable. Vernacular land-use becomes cadastral maps. Customary agriculture becomes crop yield statistics. Clinical judgment becomes treatment protocols.
The demand is always well-intentioned. Documenting decisions makes them reviewable. Reviewable decisions can be improved. Improved decisions produce better outcomes. The assumption at each step seems reasonable. The cumulative result is often the opposite of what was intended.
The problem is the translation step. Between tacit expertise and its explicit documentation, something gets lost, and what gets lost is frequently the most important part.
The Tacit Knowledge Problem
Michael Polanyi articulated this in 1966: "We can know more than we can tell." Expert practitioners know things they cannot fully articulate. A master craftsman cannot write down everything that makes their work excellent. An experienced nurse cannot translate their pattern recognition into a checklist that preserves its full accuracy. A skilled manager cannot specify the complete algorithm by which they assess whether a hire will work out.
This isn't mysticism. It's the nature of learned skill. Expertise develops through feedback on thousands of cases, building pattern recognition that responds to features that were never explicitly named. The knowledge is in the doing, not in the telling. When you ask the expert to tell, you get a partial account: sometimes useful, but not the same thing.
When a system demands that tacit knowledge be made explicit, several things happen in sequence. The expert translates what they can: the articulable parts. The inarticulate parts, which may carry most of the value, don't make it into the documentation. The documentation then gets treated as a complete capture of the expertise, because nobody can see what's missing. The next generation of practitioners learns from the documentation rather than from the practitioner, and the gap compounds with each iteration.
The Case Studies
The pattern appears across every domain where expertise meets audit.
Medicine. The checklist revolution (standardized pre-surgical protocols) reduced preventable errors in routine procedures. This success story got misapplied. The original insight was that routine cognitive tasks benefit from explicit checklists. The error was extending this logic to non-routine clinical judgment, where standardization flattens attention to individual variation. Electronic health records, designed to capture clinical reasoning, now consume more of physicians' time than patients do. One study found doctors averaged two hours of documentation for every hour of direct patient contact. The documentation grew; outcomes didn't improve proportionally.
Education. Standardized curricula and assessments make teaching legible, ensuring teachers cover the right material and students are tested on it. What standardized tests measure is students' ability to perform on standardized tests. The tacit knowledge of a great teacher (how to read a classroom, when to slow down, when a student is genuinely confused versus momentarily distracted) doesn't translate into documented curriculum compliance. Schools that optimize ruthlessly for test score legibility produce better test scores. The relationship to actual learning is more complicated.
Hiring. Structured interviews and blind resume review were introduced to reduce bias and make hiring auditable. They reduced some bias. They also systematically disadvantage the difficult-to-document: creative range, contextual judgment, the specific fit that makes someone excellent at a particular role rather than adequate at a standardized version of it. Legible hiring optimizes for candidates who perform well in legible evaluation processes. Whether those are the same people who would perform well in the job is a separate question.
Management. The twentieth-century standardization of business processes (job descriptions, performance reviews, KPIs) rested on the premise that you can't improve what you can't measure. This is sometimes true and often destructive. The things that make organizations actually function (institutional knowledge, informal trust networks, judgment about when rules should bend) resist documentation and atrophy when the documented version is treated as complete. Organizations that lost their senior practitioners and expected the documented version of their knowledge to compensate found out slowly, then quickly, what they'd lost.
Who Benefits
The beneficiaries of legibility are people who need to evaluate expertise they don't have.
A hospital administrator who cannot perform surgery can read documentation. A politician who cannot teach children can audit test scores. A recruiter who cannot judge software architecture can review resume credentials. The demand for legibility is a demand for expertise to produce outputs that non-experts can evaluate. This is not inherently illegitimate: accountability matters, and organizations need ways to identify problems they don't have the expertise to see directly.
But the person who benefits from legibility is not the expert, and not usually the people the expert serves. The administrator gets cleaner reports. Whether the actual patients are better off is a different measurement.
When legibility demands become intense enough, experts adapt. They get good at producing legible outputs, which is not the same as getting good at the underlying work. They learn the performance of compliance: what the auditor wants to see. The organization gets very good at appearing to function well, which is not the same as functioning well. The distinction tends to become visible only when something fails badly enough that the documentation cannot absorb it.
The AI Irony
The clearest demonstration of the legibility trap's logic is the explainable AI movement.
Machine learning models produce predictions that humans cannot follow step by step. A model trained on millions of medical records produces a diagnosis recommendation that no physician can fully trace. This makes administrators and regulators uncomfortable: how do you audit something you cannot read?
The response was a field called explainable AI: technical methods producing human-readable accounts of model outputs. The problem, demonstrated repeatedly, is that these explanations are approximations. They give you a simplified account of why the model decided what it decided; the actual computation was more complex and doesn't translate cleanly into the explanation's terms.
Demanding explanations from models that don't naturally produce them gets you approximate explanations that satisfy auditors without accurately describing what the model is doing. It's legibility theater. The knowledge is in the weights; the explanation is a simulacrum. The field has largely accepted this as an unresolved problem: you can have accuracy or legibility, and fully reconciling both is harder than it looked.
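The gap between a model's computation and its explanation can be made concrete with a toy sketch in the spirit of local-surrogate methods such as LIME. Everything here is hypothetical: `opaque_model` stands in for a complex learned model, and the "explanation" is just a linear fit to its behavior near one input. The fit is faithful locally and misleading everywhere else, which is exactly the approximation problem described above.

```python
import random

# A stand-in for an opaque model: its outputs are easy to compute but its
# logic (a cubic plus a hidden discontinuity) is not what any simple
# explanation will report. Hypothetical example, not a real ML model.
def opaque_model(x):
    return x ** 3 - 2 * x + (1 if x > 0.5 else 0)

def local_linear_explanation(model, x0, radius=0.1, n=200):
    """Fit a linear 'explanation' (slope, intercept) to the model's
    behavior in a small neighborhood of x0, surrogate-style."""
    random.seed(0)
    xs = [x0 + random.uniform(-radius, radius) for _ in range(n)]
    ys = [model(x) for x in xs]
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

slope, intercept = local_linear_explanation(opaque_model, x0=0.0)

# The explanation tracks the model near the point it was fit at...
near_error = abs((slope * 0.05 + intercept) - opaque_model(0.05))
# ...but not elsewhere: it never saw the jump at x > 0.5.
far_error = abs((slope * 1.0 + intercept) - opaque_model(1.0))

print(f"error near x0:     {near_error:.3f}")
print(f"error far from x0: {far_error:.3f}")
```

The surrogate is a perfectly reasonable audit artifact (a slope a human can read) and an inaccurate description of what the model actually computes away from the sampled neighborhood: legibility purchased with fidelity.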
What To Do Instead
None of this is an argument against accountability or documentation. The argument is against treating legibility as equivalent to the thing it's supposed to capture.
Audit processes, not just outputs. Documentation tells you what was recorded after the fact. Observing actual work, in context and in real time, captures what documentation systematically misses. Teaching evaluations based on watching teachers teach, not on curriculum compliance, reveal things that rubrics never will.
Distinguish routine from non-routine. Checklists and protocols are excellent for routine tasks where the same steps reliably produce good outcomes. They're damaging when applied to judgment-intensive situations where context-sensitivity is the expertise. Know which you're dealing with before mandating documentation for it.
Be suspicious of what's legible. If everything you're measuring looks clean and improving, ask whether your metrics are tracking the real thing or a legible proxy. Rising test scores in schools that have optimized for test preparation are not evidence that students are learning more. The metric and the underlying reality can decouple, and the legibility system will not tell you when they have.
Protect the transmission of tacit knowledge. Apprenticeship, mentorship, and time near experienced practitioners are how expertise actually transfers. Documentation supplements this transmission; it does not replace it. Institutions that lose experienced staff and expect the documented version of their knowledge to fill the gap discover the gap when it's too late to close it easily.
The Takeaway
The legibility demand is rarely cynical. It comes from genuine interest in accountability, consistency, and improvement. But it carries an implicit assumption: that knowledge can survive full translation into explicit form, that the map can be made as rich as the territory. It can't.
The cost of this assumption is paid by the people whose expertise gets replaced by documentation and the people those experts were supposed to serve. The benefit is captured by those who needed the expertise made readable from the outside.
Before demanding that expertise show its work, ask what gets lost in the translation, and whether the clean documentation you receive back is actually the thing you wanted.
The most important knowledge in any organization is probably the part that doesn't appear in any document. That's precisely the part to protect.