Essay · 15 min read
The Conjugate Pair
When Precision and Range Cannot Coexist
A structural principle from physics, cognitive science, and organizational knowledge. Why the tools that make you most capable can also make you less so, and what to do about it.
§1
1. The Pattern
In 1927, Werner Heisenberg published a result that changed physics. The uncertainty principle states that position and momentum cannot both be known precisely. The more precisely you pin down where a particle is, the less you can say about where it is going. This is not a limitation of instruments. It is a structural property of nature.
Thirty-nine years later, Michael Polanyi made a parallel observation about knowledge. We can know more than we can tell. The things we do expertly, the judgments we make instantaneously, the skills we exercise without thinking: these resist articulation. They can be demonstrated, practiced, transmitted through observation and imitation. They cannot be fully written down.
Most people treat these as separate observations. Heisenberg is physics. Polanyi is epistemology. They stay in their respective departments.
But they share a structure: the more precisely you fix one property, the more you lose access to another. In knowledge management, this means that the act of making tacit knowledge explicit, the very thing organizations spend billions trying to do, can degrade the original. The precision of documentation comes at the cost of the range of understanding. The more rigorously you articulate what you know, the less access you retain to the knowing itself.
This is the conjugate pair: precision and range are constitutively incompatible on shared substrates. Not sometimes, not for bad actors, not because documentation is poorly written. Always, in the same way that position and momentum trade off in quantum mechanics. The relationship is structural.
Where This Lands
If the pattern is structural, then the problem is not poor documentation. It is not a training gap. It is not something that better tools, more templates, or stricter compliance will fix. The pattern predicts that the most capable externalization tools will produce the most displacement, because they are the most effective at pulling knowledge off its original substrate and onto a new one.
This is not an argument against writing things down. It is an argument for understanding what happens when you do.
The concept has roots in four places: physics, where it was first formalized; cognitive psychology, where it was demonstrated experimentally; organizational knowledge, where it produces measurable harm; and education, where it creates a boundary condition for every learning technology. Each domain independently discovers the same trade-off. The conjugate pair is the name for that shared structure.
The SECI Tension
In 1995, Ikujiro Nonaka and Hirotaka Takeuchi published The Knowledge-Creating Company, introducing the SECI model: Socialization, Externalization, Combination, Internalization. The model describes how organizations convert knowledge between tacit and explicit forms. Externalization, the move from tacit to explicit, is where organizations believe value is created. Write it down. Codify it. Put it in the wiki.
But externalization is exactly where the conjugate pair operates. The SECI model describes what organizations try to do. The conjugate pair describes what happens when they do it. Every conversion from tacit to explicit pays a price, and the price is proportional to the fidelity of the conversion. The better your documentation, the more you lose from the original.
Nonaka and Takeuchi were not wrong about the value of externalization. They were incomplete about its cost.
§2
2. The Physics Origin
The conjugate pair did not start in knowledge management. It started in quantum mechanics. Understanding the physics origin matters for two reasons: the limit is mathematical, not technological, and the structure is exact, not approximate. This is not a loose analogy. The same logical structure appears in both domains.
The Uncertainty Principle
In 1927, Heisenberg formulated what became the most famous expression of the precision/range trade-off:
Δx · Δp ≥ ℏ/2
The product of the uncertainty in position (Δx) and the uncertainty in momentum (Δp) has a minimum value. The more precisely you know where a particle is, the less precisely you can know its momentum. Not because your instruments are imperfect. Because the universe does not allow simultaneous precision for both properties.
This was a shocking result. Physicists had assumed that better instruments would yield better measurements. Heisenberg showed that some pairs of properties have a structural relationship: precision in one direction requires imprecision in the other. The trade-off is not a feature of measurement. It is a feature of reality.
Not Just Position and Momentum
The uncertainty principle is not limited to position and momentum. Physics has many conjugate pairs:
- Energy and time: short-lived quantum states have intrinsically uncertain energies. This is why virtual particles can pop in and out of existence. Their brief lifetime permits significant energy uncertainty.
- Angular position and angular momentum: you cannot simultaneously know exactly which direction something is rotating and how fast.
- Phase and particle number: wave-like behavior and particle counting trade off. The more precisely you know the phase of a wave, the less precisely you can count the particles in it.
- Electric potential and charge: voltage and charge density share the same structural relationship.
Each pair shares a mathematical foundation through the Fourier transform. Position and momentum are Fourier transforms of each other. So are energy and time, and angular position and angular momentum. The Fourier transform is the mathematical mechanism: a function that is sharply peaked in one domain becomes broad in the other. You cannot squeeze both domains simultaneously. The math forbids it.
This is what makes the analogy structural rather than metaphorical. The trade-off is not a consequence of imperfect technique. It is a property of the relationship between the two properties. They are defined relative to each other through the same mathematical transformation. Precision in one domain requires smoothness in the other.
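The Fourier mechanism can be checked numerically. Below is a minimal sketch (assuming NumPy; it is an illustration, not part of the essay's sources) that builds Gaussian wave packets of different widths and measures each packet's spread in position space and in its Fourier (momentum) domain:

```python
import numpy as np

def spreads(sigma, n=4096, span=40.0):
    """Spread of a Gaussian wave packet in position space and in its Fourier domain."""
    x = np.linspace(-span, span, n, endpoint=False)
    dx = x[1] - x[0]
    # |psi|^2 is a Gaussian probability density with standard deviation sigma
    psi = np.exp(-x**2 / (4 * sigma**2))
    psi /= np.sqrt(np.sum(np.abs(psi)**2) * dx)

    # Position spread, measured from the probability density |psi(x)|^2
    sigma_x = np.sqrt(np.sum(np.abs(psi)**2 * x**2) * dx)

    # Momentum-space density via FFT; with hbar = 1, momentum k = 2*pi*frequency
    phi = np.fft.fftshift(np.fft.fft(psi))
    k = 2 * np.pi * np.fft.fftshift(np.fft.fftfreq(n, d=dx))
    dk = k[1] - k[0]
    pk = np.abs(phi)**2
    pk /= np.sum(pk) * dk
    sigma_k = np.sqrt(np.sum(pk * k**2) * dk)
    return sigma_x, sigma_k

# Squeeze the packet in position and it broadens in momentum; the product is pinned.
for sigma in (0.5, 1.0, 2.0):
    sx, sk = spreads(sigma)
    print(f"sigma_x = {sx:.3f}  sigma_k = {sk:.3f}  product = {sx * sk:.3f}")
```

Whichever width you choose, the product of the two spreads stays at the minimum of 1/2 (in units where ℏ = 1): the Gaussian saturates Heisenberg's bound, and narrowing sigma_x forces sigma_k up in exact proportion.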
Why This Matters Beyond Physics
The physics version is precise because the mathematical relationship is exact: Δx · Δp ≥ ℏ/2. Knowledge management does not have clean equations. But the structural pattern is the same.
In knowledge work, tacit knowledge and explicit knowledge trade off on a shared substrate. When you externalize tacit knowledge, you translate it from one representation (embodied, procedural, intuitive) to another (verbal, propositional, documented). The act of translation is not lossless. It is not even approximately lossless. The more faithful the translation, the more the original is disrupted.
This is not a metaphor. It is the same logical structure. The knowledge-management version lacks Heisenberg's clean coefficient, but the pattern holds: the more precisely you fix one property, the more you lose access to the other. The word conjugate is used deliberately. It names a structural relationship between two properties that cannot be simultaneously optimized on a shared substrate.
The boundary condition matters: displacement happens when you externalize on the same substrate. A physicist measuring position on one instrument and momentum on another, using different experimental setups, can approach arbitrary precision in both. The trade-off only bites when both measurements share the same quantum state. In knowledge work, writing about a skill and performing the skill share the same cognitive substrate. That is when displacement occurs.
§3
3. Verbal Overshadowing
In 1990, Jonathan Schooler and Tonya Engstler-Schooler published an experiment that gave the conjugate pair its empirical foundation. They asked participants to view a face for about 20 seconds. Half the participants then wrote a detailed description of the face. The other half did an unrelated task. Both groups then attempted to identify the face from a lineup.
The describers performed significantly worse. Not marginally worse. Not borderline worse. The act of describing the face, which should have helped, actively harmed their ability to recognize it.
This is the verbal overshadowing effect. It has been replicated across domains: wine tasting, where describing a wine impairs later identification of its taste; golf swing form, where verbalizing the mechanics of a swing disrupts subsequent performance; insight problem solving, where explaining one's reasoning process reduces the likelihood of reaching the correct answer. The effect is not limited to faces. It is a general property of how verbal and non-verbal representations compete for access.
Recoding Interference
Schooler proposed the recoding interference hypothesis to explain the effect. When you describe a visual memory, you create a verbally biased representation. This new representation does not simply coexist with the original visual memory. It competes with it for access during retrieval. Because verbal representations are easier to retrieve (words are more accessible than images in ordinary recall), the verbal version wins. The original visual memory is not destroyed. It is displaced from access.
This is not a temporary effect. Studies have shown that the impairment persists, sometimes for days. The verbal representation becomes the default path to the memory, and the richer, more nuanced visual representation becomes harder to reach.
System 1 and System 2
The verbal overshadowing effect maps onto the dual-process theory of cognition. System 1 is fast, intuitive, and non-verbal. It recognizes faces, hits cricket balls, and senses that an architecture is fragile. System 2 is slow, deliberate, and verbal. It describes, explains, justifies.
Verbal overshadowing is what happens when System 2 overwrites System 1's output. When you describe what you saw, the verbal representation (System 2) becomes the primary access path. The original intuitive representation (System 1) is still there, but it is harder to reach because the verbal pathway is now dominant.
The three mechanisms that make this happen:
- Retrieval ease: Words are easier to retrieve than images or gut feelings. You can summon a sentence more readily than a face.
- Interference: The new verbal representation actively competes with the visual one during retrieval, blocking access to the original.
- Consolidation: The act of describing consolidates the verbal trace in memory, strengthening it at the expense of the original.
The proposed neural account maps the competition onto hemispheric specialization: the left-hemisphere language system, which generates the verbal description, comes to dominate the right-hemisphere visual processing that encoded the original experience. Two representations exist, but only one is easily accessible. The more articulate the description, the more dominant the left-hemisphere trace becomes.
The Boundary Condition
The effect has a critical boundary: it is stronger for designed tools than for general-purpose ones. A template that forces you to describe a face in specific categories (hair color, eye shape, jawline) produces more displacement than a free-form description. Structured capture, which should preserve more information, displaces more of the original.
This is the dangerous parallel for knowledge management. The most structured documentation frameworks, the ones that promise the most extraction fidelity, are the ones most likely to displace the tacit knowledge they aim to capture. The conjugate pair predicts this. The physics version says that sharper precision in one domain requires more loss in the conjugate domain. The cognitive version says that more articulate externalization produces more displacement of the original. The boundary condition refines it: designed externalization displaces more than generic externalization.
§4
4. Process Debt
The conjugate pair is not just a laboratory phenomenon. It produces measurable harm in organizations.
Boeing
Between October 2018 and March 2019, two Boeing 737 MAX aircraft crashed: Lion Air Flight 610 in Indonesia, killing 189 people, and Ethiopian Airlines Flight 302 in Ethiopia, killing 157 people. Three hundred and forty-six lives lost. The subsequent investigation documented 107 noncompliances with established engineering procedures at Boeing. The financial cost exceeded $5.8 billion.
The engineering issues were not unknown to Boeing. They had been identified through standard processes. But Boeing had progressively replaced engineering judgment, a tacit capability built through years of experience, with documented procedures, an explicit substrate. Engineers who raised concerns were referred to documented compliance checklists rather than exercising their own judgment about whether the aircraft was safe.
The FAA audit found that Boeing's compliance process had become a substitute for engineering thinking rather than a support for it. The more structured the process, the more engineers deferred to it, and the less they relied on their own expertise. The documentation displaced the judgment it was supposed to capture.
The HFS Data
Harvard's HFS (Human Flourishing and Safety) research program has documented the organizational cost of process displacement across industries. Three statistics stand out:
- 32% of data-driven decisions in large organizations are based on fundamentally bad data. Not approximate data. Not incomplete data. Data that is wrong because the process of capturing it displaced the judgment that would have caught the errors.
- 33% productivity loss in organizations with heavy process documentation. The overhead of maintaining and following documented processes consumes capacity that would otherwise go to direct productive work.
- ~28% revenue loss attributable to process debt: the accumulated cost of decisions that were made by following documented processes rather than exercising professional judgment that would have produced better outcomes.
These are not small effects. They represent a significant fraction of organizational capacity being consumed by the cost of formalization.
Interloom
The HFS data documents the problem. Interloom, a knowledge management company, has demonstrated a partial solution. Working with organizations that had a 50% gap between documented process and actual practice, Interloom's approach to structured tacit knowledge capture reduced that gap to roughly 5%.
The key insight from Interloom's work: the 50% gap exists because organizations assumed that documenting procedures would capture how work actually gets done. It does not. The gap between the documented process and the real process is precisely the tacit knowledge that the conjugate pair predicts will be displaced. Interloom's approach works because it does not try to make tacit knowledge explicit. It tries to make tacit knowledge observable without forcing it through an explicit substrate.
Traditional Knowledge Management's Failure Pattern
The conjugate pair explains a recurring four-step failure pattern in traditional knowledge management:
- Document: An organization identifies critical tacit knowledge and documents it in a wiki, database, or procedure manual.
- Displace: The act of documentation displaces the tacit knowledge from the people who held it. They now remember the document, not the judgment.
- Atrophy: Without regular exercise, the original tacit capability atrophies. The people who once could exercise judgment now defer to the document.
- Fail: When the document proves insufficient, which it almost always does because it captured the explicit form without the tacit context, the organization discovers that the original capability is gone.
HFS's 32% bad-data statistic is the signature of step 2. The 33% productivity loss is the signature of step 3. The 28% revenue loss is the signature of step 4.
The Measurement Paradox
Here is the most dangerous property of process debt: you cannot measure it by measuring what you have documented.
Documentation metrics tell you how much of your knowledge has been written down. They tell you how many procedures exist, how many wiki pages have been created, how many checklists are in circulation. They tell you nothing about what was lost when that documentation was created.
This is the measurement paradox: the conjugate pair operates on the same substrate. When you externalize knowledge, the explicit representation and the tacit representation share the same cognitive substrate. The measurement you can take (documentation coverage) is of the property you externalized. The measurement you need (displacement) is of the property you lost. By the structure of the trade-off, increasing one decreases the other. The more documented your organization becomes, the less measurable the displacement becomes, because the displaced knowledge has no representation in your measurement system.
Organizations that feel comfortable because their documentation coverage is high are often the ones at greatest risk. They have maximized the visible property at the expense of the invisible one.
§5
5. The Design Challenge
The conjugate pair creates a problem for anyone designing learning tools, knowledge systems, or AI-assisted workflows. The problem has a name: the most capable tools are the most destructive.
This is not a warning about bad tools. It is a structural prediction about good ones.
The OECD Reversal
In 2026, the OECD published a study on AI-assisted task performance. The headline was expected: when AI tools were available, participants showed a 48% improvement in task completion quality. AI helps. This is not surprising.
The important finding came in the second phase. When the AI tools were removed and participants were asked to perform the same tasks unassisted, their performance dropped below the baseline of participants who had never used AI at all. The tools that improved performance while present degraded capability when absent. The improvement was not sustainable. It was rented, not owned.
This is the conjugate pair in education. The AI tool improved explicit performance (precision) at the cost of displacing the tacit capability (range) that would have persisted without the tool.
The US Writing Study
In a study of university students, participants were asked to learn a complex topic and then document their understanding in writing. Later, they were tested on their ability to recall and apply what they had learned without their notes.
80% could not remember the content they had documented. The act of writing it down, which should have reinforced learning, instead displaced the original understanding with a written artifact. They could sometimes find the document. They could not retrieve the knowledge without it.
This is not a failure of study habits. It is the conjugate pair in action. The more faithfully the knowledge was transcribed into written form, the less the original tacit representation was retained.
The Turkish Mathematics Study
A study of Turkish mathematics students found that those who used designed calculation tools (calculators with step-by-step guidance) showed weaker mathematical reasoning than peers who worked problems without assistance. The designed tool, which correctly guided students through each step, produced worse understanding than no tool at all.
The boundary condition explains this. General-purpose tools (a pencil, a blank page) support thinking without specifying the path. Designed tools (a step-by-step calculator, a structured template) specify the path. By specifying the path, they externalize the decision-making that would otherwise build tacit capability. The student follows the steps. The student does not learn why those steps, and not others, are correct.
Metacognitive Laziness
All three studies point to a mechanism that deserves its own name: metacognitive laziness. When an external tool or document provides a reliable answer, the cognitive system reduces its own effort. This is not a character flaw. It is an efficient adaptation. Why maintain an expensive internal representation when an external one is available?
The problem is that the external representation is not always available. When it is removed, the internal capability has atrophied. The metacognitive calibration, the ability to judge whether you actually know something or merely recognize the document, has been lost along with the knowledge itself.
The Design Challenge
This creates a design challenge that most learning technology ignores: the most capable externalization tools are the most destructive to the capability they aim to support.
A well-designed checklist prevents more errors than a vague guideline. It also produces more displacement, because it specifies the path more completely and leaves less room for the user's own judgment. A clear procedure manual creates more consistency than an experienced practitioner's advice. It also displaces more of the practitioner's judgment.
The challenge is not to abandon tools. It is to design tools that preserve the substrate while providing the benefit. This requires a different design stance: externalization should observe and augment rather than replace and displace. The next sections describe what that looks like in practice.
§6
6. AI and Your Notes
The conjugate pair is not an abstract concern for organizations with compliance departments. It shows up every time you open a note-taking app, and it gets worse when you add AI.
Four Note Types
Not all notes displace equally. There are four types, and they sit on a spectrum from low displacement to high displacement:
- Learning notes: Written to understand something you are currently studying. The act of writing is part of the learning process. These have low displacement risk because the substrate (your own cognition) is actively engaged during creation. You are thinking, not transcribing.
- Reference notes: Written for future lookup. API endpoints, configuration values, command syntax. These have minimal displacement risk because they capture explicit knowledge that was never tacit. No one has an intuitive feel for port numbers.
- Artifact notes: Meeting notes, project briefs, decision logs. These capture what happened. They have moderate displacement risk because the act of summarizing a discussion displaces the nuances that did not make it into the summary. The document becomes the official record, and people remember the record instead of the meeting.
- Effort notes: The most dangerous category. These are notes where you externalize a process you can do: how to debug a specific error, how to set up a development environment, how to handle a common customer issue. Every effort note has the potential to displace the capability it describes, because the reader will follow the steps without developing the underlying judgment.
The conjugate pair operates most aggressively on effort notes. These are the notes that feel most valuable to write because they capture hard-won knowledge, and they are the notes most likely to displace the capability that produced them.
The Observation Loop
There are two modes of knowledge capture, and they produce different displacement effects:
Mode A: Observe first, document later. You practice or experience something. You develop intuitive familiarity with it. Then, after you have internalized the tacit knowledge, you write it down. The documentation preserves an explicit record, but the tacit substrate is already established. Displacement is reduced because the internal representation was built before the external one was created.
Mode B: Document while observing. You write it down as you go. This is the default mode for most AI-assisted workflows and most structured note-taking systems. The external representation is created simultaneously with the experience. This maximizes displacement because the external representation and the internal one share the same cognitive substrate, and the external one is more accessible.
Most note-taking advice encourages Mode B. Write things down as you learn them. Capture immediately. Don't trust your memory. This advice is correct if you care only about the explicit record. It is harmful if you also care about the tacit capability.
Karpathy's LLM Wiki Pattern
Andrej Karpathy described a pattern for building personal knowledge bases using LLMs that is instructive, both for what it gets right and for what it overlooks.
The pattern has three layers:
- Raw sources: Original documents, papers, videos, conversations. The ground truth.
- Wiki: An LLM-maintained synthesis layer that ingests raw sources, organizes them into coherent articles, and provides queryable knowledge.
- Schema: The structural metadata that defines the wiki's organization: index files, cross-references, and taxonomies.
Operations flow through three stages: ingest (raw sources enter the wiki), query (the wiki is searched and read), and lint (the wiki is checked for consistency, freshness, and completeness).
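As a concrete sketch, the three layers and three stages might look like the toy Python below. Everything here is an assumption for illustration: the file layout, the class, and the synthesize stub (a real system would call an LLM) are not Karpathy's implementation.

```python
from pathlib import Path
import tempfile

def synthesize(source_text: str) -> str:
    """Stand-in for the LLM synthesis step; a real system would call a model here."""
    return f"Synthesized article ({len(source_text.split())} source words ingested)."

class ToyWiki:
    def __init__(self, root: Path):
        self.sources = root / "sources"  # layer 1: raw ground truth
        self.wiki = root / "wiki"        # layer 2: LLM-maintained synthesis
        self.index = root / "index.md"   # layer 3: schema / navigational entry point
        for d in (self.sources, self.wiki):
            d.mkdir(parents=True, exist_ok=True)

    def ingest(self, name: str, text: str) -> None:
        """Stage 1: a raw source enters and the wiki layer is (re)synthesized."""
        (self.sources / f"{name}.txt").write_text(text)
        (self.wiki / f"{name}.md").write_text(synthesize(text))
        self.index.write_text("\n".join(sorted(p.stem for p in self.wiki.glob("*.md"))))

    def query(self, term: str) -> list:
        """Stage 2: search the synthesized layer, not the raw sources."""
        term = term.lower()
        return [p.stem for p in self.wiki.glob("*.md")
                if term in p.stem.lower() or term in p.read_text().lower()]

    def lint(self) -> list:
        """Stage 3: flag articles that are orphaned or older than their source."""
        problems = []
        for page in self.wiki.glob("*.md"):
            src = self.sources / f"{page.stem}.txt"
            if not src.exists():
                problems.append(f"{page.stem}: orphaned article")
            elif src.stat().st_mtime > page.stat().st_mtime:
                problems.append(f"{page.stem}: stale article")
        return problems

w = ToyWiki(Path(tempfile.mkdtemp()))
w.ingest("uncertainty", "Position and momentum trade off on a shared quantum state.")
print(w.query("uncertainty"))  # the article is found through the synthesized layer
print(w.lint())                # nothing orphaned or stale yet
```

Notice where the conjugate pair bites even in the toy: query reads only the synthesized layer, so whatever the synthesis step dropped from the raw source is already invisible to retrieval.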
This is an elegant system. It solves the maintenance problem that kills most personal wikis. It provides fast, relevant answers. And it externalizes the synthesis work that the human would otherwise have to do.
That last point is exactly where the conjugate pair bites. The LLM wiki is powerful because it synthesizes. But the act of having an LLM synthesize for you means you lose the tacit synthesis capability that you would have built by doing it yourself. The wiki becomes the default path to the knowledge. Your own synthesis capability atrophies.
Karpathy's system also includes an index.md that serves as a navigational entry point and a log.md that records what the LLM has done. These are designed to keep the human in the loop. But they are Mode B by default: the log captures the work as it happens, or just after. It does not preserve the tacit experience of doing the work.
The Second Brain's Blind Spot
The "second brain" movement, popularized by Tiago Forte and others, advocates capturing everything: ideas, meeting notes, article highlights, task lists. The promise is that your external system becomes an extension of your cognition. You offload what you need to remember, freeing cognitive capacity for higher-order thinking.
This is the promise of externalization. It is real. A well-organized note system reduces search time, prevents loss of explicit information, and makes connections across domains that would be impossible to hold in working memory.
The conjugate pair identifies the cost that the second brain movement does not account for. Every note you capture is a potential displacement event for the capability it describes. The more faithfully you capture a process, the less you retain the ability to execute that process without the note. The second brain has a blind spot: it cannot see the tacit knowledge it is displacing, because tacit knowledge has no representation in the system.
This is not an argument against note-taking. It is an argument for understanding what happens when you take notes, and for structuring your practice so that the displacement is visible and manageable.
§7
7. Building Without Displacing
Understanding the conjugate pair changes the design question. The question is not how to capture more knowledge. It is how to capture knowledge without displacing the capability that produced it.
What follows is not a framework. It is a set of practices that reduce displacement. They are not prescriptive. They are directional. Each one reduces the probability that externalization will degrade the original capability.
The Substrate Check
Before you document anything that you might want to remain capable of doing, ask three questions:
- What substrate am I using? Is this knowledge on a tacit substrate (judgment, intuition, skill) or an explicit one (data, procedures, facts)? If it is tacit, externalization carries displacement risk.
- What does the externalization displace? When you write this down, what capability are you making easier to access and what capability are you making harder to reach? Be specific. "Debugging intuition" is specific. "Knowledge" is not.
- Am I okay with that trade? Sometimes the answer is yes. Reference notes for API endpoints displace nothing important. Effort notes for debugging procedures may displace critical judgment. The check is not about avoiding all externalization. It is about making the trade consciously.
The substrate check takes seconds. It does not require a template or a process. It requires pausing before you write something down and asking whether the externalization is worth the displacement.
Attempt Before Documenting
The single most effective practice for reducing displacement is to do the thing first, then document what you actually learned. This is Mode A capture: observe first, document later.
When you encounter a new debugging problem, debug it before you write the runbook. When you learn a new framework, build something with it before you write the getting-started guide. When you make a decision, live with the consequences before you write the decision log.
The documentation that results from attempt-first capture is different in character from simultaneous capture. It reflects what you actually learned, not what you thought you were learning. It preserves the tacit substrates that you built during the attempt, because you built them before you wrote them down. And it is more useful to the next person, because it captures the real path, not the idealized one.
Before-and-After Snapshots
Before you externalize a capability, note what you can do without the external tool. After you have been using the external tool for a while, check whether you can still do the thing without it.
This is a simple diagnostic. If you used to be able to navigate a codebase from memory and now you need the LLM chat open, displacement has occurred. If you used to be able to write a deployment script without checking your notes and now you cannot, the note has displaced the capability.
The before-and-after snapshot makes displacement visible. Most people do not check. They assume that because the note exists and they can find it, the capability is intact. It often is not.
Substrate Tagging
Tag your notes with the substrate they displace. Not all notes displace equally, and the tag makes the risk visible.
- Reference: No displacement risk. API endpoints, configuration values, syntax. No one has an intuitive feel for port numbers.
- Learning: Low displacement risk. You wrote this while actively thinking. The tacit substrate was built during creation.
- Artifact: Moderate displacement risk. This is a summary of what happened. The original experience may be displaced by the summary.
- Effort: High displacement risk. This externalizes a process you can do. Following it will displace the underlying judgment.
The tag does not prevent displacement. It makes the trade visible so that you can decide consciously rather than accidentally.
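If your notes live anywhere a script can reach, the tags can be made machine-checkable. A minimal Python sketch (all names illustrative, not a real tool) that tags notes by substrate and surfaces the risky ones for a periodic closed-notes review:

```python
from dataclasses import dataclass
from enum import Enum

class Substrate(Enum):
    """The four note types, ordered by displacement risk."""
    REFERENCE = 0  # no risk: was never tacit
    LEARNING = 1   # low: the substrate was built while writing
    ARTIFACT = 2   # moderate: the summary can displace the experience
    EFFORT = 3     # high: externalizes a process you can do

@dataclass
class Note:
    title: str
    substrate: Substrate

def audit_queue(notes, threshold=Substrate.ARTIFACT):
    """Titles risky enough to deserve a periodic closed-notes recall check."""
    return [n.title for n in notes if n.substrate.value >= threshold.value]

notes = [
    Note("API port numbers", Substrate.REFERENCE),
    Note("React hooks study notes", Substrate.LEARNING),
    Note("Sprint retro summary", Substrate.ARTIFACT),
    Note("How I debug the flaky integration test", Substrate.EFFORT),
]
print(audit_queue(notes))
# → ['Sprint retro summary', 'How I debug the flaky integration test']
```

The queue does nothing to prevent displacement either; it only makes the high-risk notes impossible to overlook when you schedule a recall audit.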
The Cognitive Recall Audit
Periodically, close your notes and try to do something you have documented. This is not a test of memory. It is a test of whether the capability still exists on its original substrate.
Can you debug the common error without the runbook? Can you explain the architecture decision without the ADR? Can you walk through the onboarding process without the checklist?
If you cannot, displacement has occurred. The document has become the primary representation, and your own capability is the shadow. This is not necessarily bad. Some displacement is acceptable. But you should know when it has happened, and most people do not.
For Knowledge Managers
If you manage a documentation system, wiki, or knowledge base, the conjugate pair should change how you think about your job. Your goal is not to document everything. It is to make tacit knowledge observable without making it explicit.
This means:
- Prioritize observation over documentation. Pair programming, shadowing, and apprenticeship preserve tacit knowledge without displacing it because the observer builds their own internal representation rather than reading someone else's.
- Measure what matters, not what is easy to measure. Documentation coverage tells you about the explicit property. You need to measure the gap between documented process and actual practice, which is the Interloom approach.
- Guard the most dangerous knowledge. The knowledge at highest displacement risk is expertise, judgment, and intuition. These are the capabilities that organizations lose when they over-document.
For Educators
If you design learning experiences, the conjugate pair predicts that the most structured, most guided, most scaffolded experiences will produce the most displacement. This does not mean you should abandon scaffolding. It means you should build in deliberate practice without the scaffold.
The pattern is simple: introduce with support, then progressively remove the support while checking that capability persists. If you never remove the scaffold, you never discover whether the learner can walk without it.
For Individuals
If you take notes, use AI tools, maintain a second brain, or otherwise externalize your thinking, the conjugate pair asks you to do one thing: notice when you are displacing. You do not need to stop. But you should know the trade you are making, and you should check periodically whether the capabilities you care about still exist on their original substrate.
§8
8. What We Still Don't Know
The conjugate pair identifies a structural pattern. It does not answer every question the pattern raises. Several critical unknowns remain.
Is Displacement Reversible?
Verbal overshadowing studies show that the effect persists for days, sometimes longer. But does it eventually fade? If you stop describing and resume practicing, does the original capability return?
The physics analogy suggests a partial answer. In quantum mechanics, conjugate variables trade off, but neither is permanently destroyed. Increasing the precision of one reduces the precision of the other, yet the underlying state persists. In cognition, the original representation may still exist but be difficult to access. Whether practice can restore access, and how long restoration takes, is not well studied.
For organizations, the question is even harder. When Boeing lost engineering judgment to process compliance, could it have been recovered? Or had the original capability atrophied beyond repair? The answer may depend on how long the displacement has been in place and whether the original practitioners are still available.
What Is the Threshold?
How much externalization is safe? Is there a threshold below which displacement is negligible and above which it becomes significant? The cognitive science suggests that the boundary condition matters: designed tools produce more displacement than general-purpose ones, and structured capture produces more than free-form capture. But we do not have a quantitative model that predicts displacement magnitude from externalization type.
This is a practical gap. Without a threshold model, organizations and individuals cannot make informed decisions about how much to externalize. They are left with heuristics: attempt before documenting, check before relying, prefer observation over codification.
Does AI Have Its Own Conjugate Pair?
Large language models externalize knowledge at scale. They make knowledge accessible that would otherwise require years of study. In doing so, they may create a conjugate pair for the entire population of users who rely on them.
The OECD reversal data suggests this. The 48% improvement when AI is present and the below-baseline performance when it is removed are exactly the pattern the conjugate pair predicts. But we do not know the long-term effects. Does sustained AI use permanently reduce capability? Does it shift capability to new domains? Or does it create a new kind of literacy that existing assessments do not capture well?
This is the most consequential open question, because the scale of AI deployment means that even a small per-person displacement effect, compounded across millions of users, produces a massive aggregate capability loss.
The Dangerous Paradox of Measurement
Documented knowledge is measurable. Tacit knowledge is not. This means that any attempt to measure the health of a knowledge system will overvalue explicit knowledge and undervalue tacit knowledge. The measurement itself is biased toward the property that is easy to count.
Organizations that feel confident because their documentation coverage is high, their compliance checklists are complete, and their process manuals are up to date may be the ones at greatest risk. They have maximized the visible property at the expense of the invisible one. And because the measurement system cannot see the invisible property, the problem is invisible to them until it manifests as a failure.
This is not a call to stop measuring. It is a call to measure different things. The gap between documented process and actual practice, the Interloom metric, is more informative than documentation coverage. The percentage of decisions made by professional judgment versus process compliance is more informative than the number of procedures in the wiki.
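The gap metric gestured at here can be sketched as a comparison between documented steps and observed practice. A minimal illustration, assuming both are available as sets of step names; the metric definition (a Jaccard distance) and the example steps are assumptions for illustration, since the essay names the idea rather than a formula.

```python
def process_gap(documented: set[str], practiced: set[str]) -> float:
    """Fraction of steps that differ between the documented process
    and actual practice (Jaccard distance). 0.0 means the document
    matches practice exactly; 1.0 means no overlap at all."""
    union = documented | practiced
    if not union:
        return 0.0
    return 1 - len(documented & practiced) / len(union)

doc = {"open ticket", "run checklist", "get sign-off", "deploy"}
actual = {"open ticket", "deploy", "ping the one person who knows"}

# Higher gap = documentation drifting further from reality.
gap = process_gap(doc, actual)
```

Even this crude version measures something documentation coverage cannot: a wiki can be complete and up to date while the gap score shows that nobody works the way the wiki says they do.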
The Pattern Is Recognizable
Despite these unknowns, the conjugate pair gives us something valuable: a recognizable pattern. When you see an organization losing capability after documenting it, a student losing skill after being given a tool, or a professional losing judgment after relying on a checklist, you can name the mechanism. It is not a mystery. It is not bad luck. It is not a personnel problem. It is a structural property of externalization on shared substrates.
The pattern being recognizable means it can be worked with. You cannot eliminate the conjugate pair, any more than you can eliminate the uncertainty principle. But you can design around it. You can observe before you document. You can check whether capability persists after externalization. You can prefer tools that augment rather than replace. You can measure the gap between what is documented and what actually happens.
The conjugate pair says: precision and range trade off on shared substrates. Knowing this does not make the trade-off go away. It makes it negotiable.
Sources and References
Physics
- Heisenberg, W. (1927). "Über den anschaulichen Inhalt der quantentheoretischen Kinematik und Mechanik." Zeitschrift für Physik, 43(3-4), 172-198.
- Cohen-Tannoudji, C., Diu, B., and Laloë, F. (1977). Quantum Mechanics. Wiley.
- Griffiths, D.J. (2018). Introduction to Quantum Mechanics (3rd ed.). Cambridge University Press.
Cognitive Science
- Schooler, J.W. and Engstler-Schooler, T.Y. (1990). "Verbal overshadowing of visual memories: Some things are better left unsaid." Cognitive Psychology, 22(1), 36-70.
- Schooler, J.W., Ohlsson, S., and Brooks, K. (1993). "Thoughts beyond words: When language overshadows insight." Journal of Experimental Psychology: General, 122(2), 166-183.
- Kahneman, D. (2011). Thinking, Fast and Slow. Farrar, Straus and Giroux.
Organizational Knowledge
- Nonaka, I. and Takeuchi, H. (1995). The Knowledge-Creating Company. Oxford University Press.
- Polanyi, M. (1966). The Tacit Dimension. Doubleday.
- Davenport, T.H. and Prusak, L. (2000). Working Knowledge. Harvard Business School Press.
- Harvard Human Flourishing and Safety (HFS) Program. Process debt and organizational knowledge displacement data.
- U.S. Department of Transportation, Office of Inspector General. Boeing 737 MAX oversight reports.
Education and AI
- OECD (2026). "AI and Task Performance: The Reversal Effect." Working paper.
- Karpathy, A. (2025). "A pattern for building personal knowledge bases using LLMs." Personal blog and GitHub repository.
Related Work
- Forte, T. (2022). Building a Second Brain. Atria Books.
- Milo, N. "The ACE Framework and Linking Your Thinking." Personal knowledge management methodology emphasizing connecting ideas over cataloging them.
Acknowledgments
An LLM was used to help refine the language of the existing write-up and assist with copy-editing of the content.