Understanding Predictive Coding, Positioned Knowledge, and the Limits of Certainty
Walk into an empty concert arena and look across the seating. Row after row of identical chairs stretches before the eyes, thousands of them arranged in perfect geometric precision. The brain registers the pattern: seat, armrest, seat, armrest, extending into the distance until the details blur into uniformity. What most people don't realise—what feels impossible to accept even when told—is that after the first few rows, the brain isn't seeing those chairs at all. It's fabricating them.
The photoreceptors in the retina send signals about the chairs in the foreground, where detail is sharpest. But beyond that? The brain takes the established pattern and simply continues it. It fills in what "should" be there based on what it's already processed. The experience feels like direct perception, like the eyes are faithfully reporting the external world. But that's not what's happening. The brain is generating an internal model and treating that model as reality. The chairs in the distance aren't being perceived; they're being predicted.
This isn't an optical illusion. It's not a trick or a failure of the visual system. It's normal brain function, happening constantly, in every waking moment. The brain doesn't have the computational resources to process everything in the environment in full detail, so it doesn't try. Instead, it builds an internal model of the world—a fabrication—and updates it only when prediction errors indicate something has changed. Madonna sang about living in a material world, but neuroscience reveals something more unsettling: humans are living in a fabricated world, one constructed internally and experienced as external reality.
The implications cascade outward from this fundamental mechanism. If the brain fabricates most of what people experience as "seeing," what does that mean for knowledge, for certainty, for professional decision-making? How much of what people confidently "know" is actually an elaborate internal construction? And once someone understands that they're living primarily in a fabricated world—with external reality providing only sparse, occasional corrections—how does that change the way they approach everything from daily decisions to professional practice?
This essay explores those questions. It builds on the neurological mechanisms explained in the companion essay The Three-Pound Supercomputer (Young, 2026), which detailed how brains process information through parallel processing, predictive coding, and massive computational constraints. This essay examines the consequences of those mechanisms: what it means to live in a world that is primarily brain-generated, with actual external reality playing a surprisingly secondary role.
To understand fabrication, it helps to understand the computational problem the brain faces. As detailed in The Three-Pound Supercomputer (Young, 2026), the human visual system receives approximately ten million bits of information per second from the retina. That's an enormous data stream—far more than conscious awareness can process. The bottleneck of conscious attention operates at approximately ten bits per second (Zheng & Meister, 2024). This creates a million-to-one compression problem: somehow, ten million bits of visual input must be processed, filtered, and reduced to the ten bits per second that actually reach conscious awareness.
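To make that compression concrete, here is the arithmetic as a minimal Python sketch, using only the figures quoted above (the variable names are mine):

```python
# Back-of-the-envelope arithmetic for the brain's compression problem,
# using the figures cited above. Variable names are illustrative.
retinal_input_bps = 10_000_000  # ~10 million bits per second from the retina
conscious_bps = 10              # ~10 bits per second of conscious throughput

compression_ratio = retinal_input_bps / conscious_bps
discarded = 1 - conscious_bps / retinal_input_bps

print(f"{compression_ratio:,.0f}-to-1 compression")   # 1,000,000-to-1
print(f"{discarded:.4%} never reaches awareness")     # 99.9999%
```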
The brain solves this through prediction. Instead of processing every bit of incoming sensory data, the brain generates predictions about what the data will contain. It builds an internal model of the world—based on past experience, recent context, and evolutionary priors—and then compares incoming sensory data against those predictions. Only when there's a prediction error—when reality doesn't match expectation—does the brain allocate significant processing resources. This is predictive coding (Rao & Ballard, 1999; Friston, 2010; Clark, 2013), and it's fundamental to how brains work.
As detailed in The Three-Pound Supercomputer (Young, 2026), "the brain employs a fundamentally different strategy" from sequential computer processing. "Rather than processing information sequentially, the brain operates through massive parallelism... With 86 billion neurons operating in parallel, each connected to thousands of other neurons, the brain processes information across vast distributed networks simultaneously." This parallel architecture enables the brain to generate and test predictions across multiple dimensions of experience at once, creating the seamless fabrication we experience as perception.
The fabrication process unfolds in stages:
Stage 1: Sensory Input Arrives
Photoreceptors in the retina respond to light. Hair cells in the cochlea respond to sound waves. Mechanoreceptors in the skin respond to pressure and vibration. This sensory data enters the nervous system as neural signals—patterns of electrical activity travelling along sensory pathways toward the brain.
Stage 2: Predictions Are Generated
Before the sensory data even arrives at conscious processing, the brain has already generated predictions about what it expects to encounter. These predictions are based on past experience, recent context, and evolutionary priors.
If a person is looking at rows of chairs, the brain quickly establishes the pattern and predicts that pattern will continue. If someone is listening to a familiar piece of music, the brain predicts the next notes based on musical structure and prior exposure. If a hand touches a smooth surface, the brain predicts continued smoothness.
Stage 3: Comparison and Error Detection
Incoming sensory data is compared against predictions. If they match—if the chairs continue in the expected pattern, if the music follows the predicted progression, if the surface remains smooth—then the prediction is confirmed, and minimal processing occurs. The brain essentially says "yes, as expected" and moves on. This is computationally efficient: when predictions are accurate, the brain doesn't waste resources processing redundant information.
But when there's a prediction error—when something unexpected happens—the brain allocates attention. The pattern of chairs suddenly changes. An unexpected note sounds in the music. The smooth surface has a rough patch. These prediction errors get processed in detail, and the internal model updates.
Stage 4: Construction of Conscious Experience
What reaches conscious awareness isn't the raw sensory data. It's the brain's internal model—the fabricated reality—updated where necessary by prediction errors but otherwise running on predictions. This is why the experience feels seamless and complete. The brain fills in gaps, continues patterns, and constructs a coherent internal world. The fabrication is experienced as perception.
Stage 5: Memory Reinforcement
The experience gets encoded into memory—but not as a recording. Memory itself is reconstructive (Loftus, 2005; Schacter, 2001). When someone recalls an event, they're not playing back a recording; they're rebuilding it from fragments, filling in gaps based on expectations and subsequent knowledge. Bartlett (1932) demonstrated this nearly a century ago: people "remember" details that weren't present and reconstruct narratives that make sense, even when those reconstructions don't match the original events.
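Stages 1 through 4 amount to a loop: predict, compare, and update only on error. The toy sketch below illustrates that loop under stated assumptions; the RunningModel class, the error threshold, and the learning rate are all inventions for illustration, not a neural simulation.

```python
# Toy illustration of Stages 1-4: experience is the model, corrected only
# where prediction errors are large. All parameters are illustrative.

class RunningModel:
    """A minimal internal model that predicts a running estimate of its input."""
    def __init__(self, initial=0.0, learning_rate=0.5):
        self.estimate = initial
        self.learning_rate = learning_rate

    def predict(self):
        return self.estimate

    def update(self, observation):
        # Move the estimate toward the surprising observation.
        self.estimate += self.learning_rate * (observation - self.estimate)

def fabricate_experience(sensory_stream, model, threshold=0.2):
    """Yield what 'reaches awareness': predictions, revised only by big errors."""
    for observation in sensory_stream:               # Stage 1: input arrives
        error = abs(observation - model.predict())   # Stage 3: comparison
        if error > threshold:                        # prediction error: process it
            model.update(observation)
        yield model.predict()                        # Stage 4: experience = the model

# A steady pattern with one surprise: only the surprise changes anything.
stream = [1.0, 1.0, 1.0, 5.0, 1.0, 1.0]
print(list(fabricate_experience(stream, RunningModel(initial=1.0))))
# [1.0, 1.0, 1.0, 3.0, 2.0, 1.5]
```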
There's another layer to fabrication: neural adaptation. Neurons that respond to sensory stimuli gradually stop responding if the stimulus remains constant. This is why, after a few seconds, people stop feeling the clothes they're wearing or the chair they're sitting on. The sensory receptors are still firing, but the neurons in the brain adapt and stop signalling "pressure here." If those neurons kept firing, they'd consume energy and processing resources for redundant information. Instead, they quiet down, and the brain simply assumes the stimulus is still present.
But here's the fascinating part: the stimulus doesn't disappear from conscious experience. People still "feel" the chair they're sitting on, even though the relevant neurons have stopped responding. The brain fabricates the sensation based on the assumption that what was there a moment ago is still there now. This is continuous fabrication: the brain constructs experience not just from current sensory input but from predictions based on the recent past.
Vision uses a related mechanism. The eyes make constant small movements called saccades—rapid jumps from one fixation point to another, several times per second. These saccades refresh the visual image, preventing neural adaptation from causing the world to disappear. But between saccades, during the fixation periods, neural adaptation occurs. Neurons responding to stationary features of the scene reduce their firing. And yet, the visual world remains stable and continuous. The brain fills in, fabricating a stable visual experience from discontinuous, rapidly adapting sensory input.
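A toy model makes the adaptation mechanism concrete. The sketch below is a deliberate simplification (the decay factor and reset rule are assumptions, not physiology): the signalled response to an unchanging stimulus fades, yet snaps back the moment the input changes.

```python
# Toy model of neural adaptation: responses to a constant stimulus decay,
# then recover when the stimulus changes. The decay factor is illustrative.

def adapting_response(stimuli, decay=0.6):
    """Return firing responses that habituate to unchanging input."""
    responses, gain, previous = [], 1.0, None
    for stimulus in stimuli:
        if stimulus == previous:
            gain *= decay       # same stimulus: the signal quiets down
        else:
            gain = 1.0          # change: the full response is restored
        responses.append(round(gain * stimulus, 3))
        previous = stimulus
    return responses

# Constant pressure (a chair you're sitting on), then a shift in posture.
print(adapting_response([1, 1, 1, 1, 2, 2]))
# [1.0, 0.6, 0.36, 0.216, 2.0, 1.2] -> signalling fades, sensation persists
```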
The compression from ten million bits per second of visual input down to ten bits per second of conscious awareness means that 99.9999% of the incoming sensory data never reaches conscious experience. The brain must decide what to process and what to ignore, what to pass through to awareness and what to handle automatically. This decision is made through prediction: expected data is confirmed quickly and automatically, while unexpected data—prediction errors—gets passed upward for conscious processing.
This creates an unsettling implication: the vast majority of "seeing" isn't seeing at all. It's predicting. The brain generates expectations about what the visual scene contains, and as long as incoming data confirms those expectations, the predictions are experienced as perception. Only when something unexpected appears—a sudden movement, a novel object, a pattern violation—does genuine processing of external data occur. The rest of the time, people are experiencing their own internally generated models, not the external world.
Consider what this implies about the nature of experienced reality. The brain constructs an internal model—a fabrication—based on limited sensory input, extensive prediction, and continuous filling-in of expected patterns. That fabricated model is what people experience as "the world." External reality provides occasional corrections through prediction errors, but most of the time, the brain is running on its own predictions. People are living inside their own heads, experiencing internally generated models, and treating those models as if they were direct perception of external reality.
This isn't limited to vision. Auditory perception works the same way: the brain predicts what sounds will occur and processes prediction errors more than it processes expected sounds. Touch, taste, smell—all operate through predictive coding. Even higher-level perception—recognising faces, understanding speech, interpreting social situations—relies on prediction and confirmation rather than direct processing of sensory data.
The fabrication is so convincing, so seamless, that it's nearly impossible to believe. But the evidence is overwhelming. Optical illusions reveal the fabrication process by creating situations where predictions override sensory input. Change blindness demonstrates that people fail to notice even dramatic changes to visual scenes if those changes don't violate predictions. Inattentional blindness shows that people can completely miss unexpected objects—like a person in a gorilla suit walking through a basketball game—because those objects don't fit the predicted scene (Simons & Chabris, 1999).
The brain fabricates reality. That's not hyperbole or metaphor. It's what neuroscience reveals about the nature of human perception. The question isn't whether fabrication occurs—it does, constantly and inevitably. The question is what that means for knowledge, certainty, and decision-making.
Before exploring fabrication's limitations, it's worth pausing to appreciate what this process enables. The brain's ability to construct internal models isn't just about filling in missing information—it's the foundation for some of humanity's most remarkable cognitive abilities. Fabrication allows people to mentally simulate, plan, create, and explore possibilities that don't yet exist.
Consider someone sitting at their office desk, nominally working on a spreadsheet, but actually redesigning their kitchen. They can mentally visualise the room, change the window colour from cream to sage green, move the sink to the opposite wall, imagine how morning light would fall differently, and evaluate whether the new layout would work—all without leaving their chair. This is fabrication in action: the brain constructs a detailed internal model of a space that doesn't exist yet, manipulates that model, tests alternatives, and evaluates outcomes.
This ability extends to countless everyday situations:
Spatial Navigation in Low Light: Walking through your home at night with lights off, you navigate confidently around furniture, avoid obstacles, and reach for light switches in the correct locations. You're not seeing these things—your brain is fabricating a detailed spatial model of the environment based on memory, allowing you to move through space that your eyes cannot currently perceive. The fabricated model is so convincing that people often reach for objects in the dark with the same confidence they'd have in full light.
Shopping and Planning: When someone needs to remember items from the supermarket, they're not just storing a list of words—they're often mentally walking through the aisles, fabricating a visual journey through the shop, placing items into an imagined basket. This fabricated experience helps retrieve the information: "What did I need? Let me walk through the shop in my mind—produce section, yes, tomatoes; dairy aisle, milk; cleaning products..." The fabrication creates a navigable mental space that makes recall more reliable than abstract lists.
Journey Replanning: When a train is delayed, people immediately fabricate alternative scenarios: "If I'm 30 minutes late, I'll miss the connection, so I could take the bus from the next station, or arrange for someone to pick me up, or reschedule the meeting..." None of these scenarios exist yet—they're fabricated possibilities being mentally simulated, evaluated, and compared. The brain generates multiple future timelines, assesses their likelihood and consequences, and selects the most promising option—all before the train has even arrived.
Some of humanity's greatest scientific insights emerged from deliberate, systematic fabrication—mental experiments where scientists constructed detailed internal models and explored their implications.
Einstein's theory of relativity famously originated from thought experiments about riding on a beam of light or observing from a moving train. He fabricated scenarios: "What would I see if I were travelling at the speed of light? What would clocks and measuring rods look like from my perspective versus someone watching me from a station platform?" These weren't real observations—they were elaborate fabrications, mental simulations of situations that couldn't physically be experienced. Yet by systematically exploring these fabricated scenarios, Einstein derived insights about the nature of space, time, and reality that fundamentally transformed physics (Einstein, 1905; Norton, 2004).
This illustrates something profound: fabrication isn't limited to reconstructing what has been experienced. It can create entirely novel scenarios, explore their logical implications, and derive genuine knowledge about reality by examining internally constructed models. The human capacity for abstract thought, mathematical reasoning, and scientific theory-building all depend on this ability to fabricate, manipulate, and analyse scenarios that have never existed in the physical world.
Fabrication isn't unique to humans—it scales with neural capacity. Consider a beaver maintaining its territory. A beaver holds an extraordinarily detailed internal map of its domain: the precise location of its lodge entrance below the waterline, the network of canals it's constructed, the carefully positioned food cache of branches for winter, trees it's partially gnawed that might fall soon, ones already felled by wind, optimal routes between feeding areas, and potential new canal routes to access distant stands of aspen or willow.
This fabricated mental map allows the beaver to navigate efficiently even in murky water where visibility is minimal, to remember the status of dozens of trees across its territory, to plan construction projects that will take weeks to complete, and to make strategic decisions about where to invest effort. When a beaver sets off from its lodge, it's not randomly exploring—it's navigating through a detailed internal model of its territory, heading toward specific locations, checking on specific resources, following fabricated plans.
The beaver's cognitive map demonstrates that fabrication provides significant evolutionary advantage: animals that can maintain detailed internal models of their environment, remember resource locations, and plan future actions outcompete those that can only respond to immediate sensory input. The computational cost of maintaining these models is repaid through more efficient foraging, better territory utilisation, and improved survival outcomes (Tolman, 1948; Müller-Schwarze & Sun, 2003; Campbell-Palmer et al., 2016).
This fabrication process provides another crucial evolutionary advantage: the ability to rapidly detect anything that deviates from the expected pattern. Because the brain is constantly generating predictions and comparing incoming sensory data against those predictions, anything unexpected immediately triggers heightened processing. This is the prediction error signal—and it works for both dangers and opportunities.
Detecting Danger: An animal moving through familiar territory operates primarily on its fabricated internal model, conserving cognitive resources by confirming predictions rather than processing redundant detail. But when something unexpected appears—a shadow that shouldn't be there, a sound that doesn't fit the pattern, a scent that indicates predator—the prediction error immediately captures attention. The unexpected element "stands out" precisely because it violates the fabricated expectation. This allows rapid threat detection without the computational expense of processing every element of the environment in full detail at all times.
Detecting Opportunity: The same mechanism identifies opportunities. A foraging animal whose brain is predicting the usual environment will immediately notice when something unexpected appears—berries on a bush that wasn't fruiting yesterday, a fresh trail indicating other animals have passed, water where there normally isn't any. These positive surprises stand out against the fabricated baseline just as threats do. The animal can quickly shift attention and behaviour to exploit the unexpected resource.
This dual function—detecting both danger and opportunity through prediction error—explains why fabrication evolved and why it's so fundamental to adaptive behaviour. An organism that must consciously process every detail of its environment equally would be slower to detect both threats and resources. But an organism running primarily on fabricated predictions, with heightened processing only for prediction errors, can rapidly identify anything significant—whether dangerous or beneficial—while conserving cognitive resources for the vast majority of experience that matches expectations.
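A minimal sketch of this dual-function error signal, under the assumption that deviations can be scored against a predicted baseline (the values and labels below are invented for illustration):

```python
# Sketch of prediction error as a dual-purpose alarm: any large deviation
# from the fabricated baseline stands out, whether threat or opportunity.

def salient_events(observed, expected, threshold=0.5):
    """Flag locations whose observed value deviates from the prediction."""
    flagged = []
    for place, value in observed.items():
        error = value - expected.get(place, 0.0)
        if abs(error) > threshold:
            flagged.append((place, "opportunity" if error > 0 else "threat"))
    return flagged

expected = {"bush": 0.0, "trail": 0.0, "clearing": 0.0}   # the baseline model
observed = {"bush": 1.0,       # berries that weren't fruiting yesterday
            "trail": -1.0,     # fresh sign of a predator
            "clearing": 0.0}   # exactly as predicted: ignored, no processing
print(salient_events(observed, expected))
# [('bush', 'opportunity'), ('trail', 'threat')]
```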
For social animals, including humans, this extends to the social domain. People navigate social situations largely through fabricated expectations about how others will behave, what facial expressions mean, what tone of voice indicates. When someone's behaviour violates those expectations—whether through unexpected hostility or unexpected warmth—the prediction error captures attention, allowing rapid adjustment to changing social dynamics.
These examples reveal fabrication not as a limitation but as a profound capability, grounded in the brain's ability to construct internal models of what was, what is, and what might be.
Without fabrication, humans would be trapped in the immediate present, responding only to direct sensory input like the mosquito with its 220,000 neurons. With fabrication, humans can mentally inhabit past, present, and multiple potential futures simultaneously. They can imagine scenarios they've never experienced, solve problems that haven't yet occurred, and understand concepts that have no physical existence.
This is the extraordinary power of fabrication: the ability to live simultaneously in the world as it is and in worlds that might be, could be, or should be. The brain's internal model-building isn't just filling in gaps in perception—it's creating the very possibility of thought, planning, creativity, and imagination.
And yet—as powerful as fabrication is—it comes with profound limitations. The same mechanisms that enable planning and creativity also create systematic vulnerabilities. Understanding both the power and the limits of fabrication is essential for using this cognitive capacity wisely.
As noted with the beaver, fabrication isn't unique to humans. It's a feature of how nervous systems work, and it scales with neural capacity. The more neurons a brain has, the more elaborate the fabrication. This creates what might be called the model construction spectrum: a continuum from minimal fabrication in simple nervous systems to massive model construction in complex ones. Understanding this spectrum helps illuminate what fabrication is and why it exists.
A mosquito has approximately 220,000 neurons (Raji et al., 2019). For comparison, that's about 0.00026% of the number of neurons in a human brain. This severe computational constraint means the mosquito brain can't afford elaborate model construction. It operates much closer to direct stimulus-response: sensory input triggers relatively hardwired behavioural outputs.
Complex adaptive behaviour doesn't necessarily require enormous neural resources. The mosquito demonstrates how evolutionary optimisation produces efficient solutions to survival challenges through integrated sensory-motor systems, operating effectively with minimal model construction.
The mosquito's sensory world is stripped down to survival essentials: carbon dioxide plumes, body heat, host odours, and the visual motion and vibration that signal danger.
With such limited neural capacity, the mosquito's flight decisions operate through a remarkably simple decision tree:
1. Threat Detection: When vibrations, sudden air currents, or visual motion suggest predation risk, the mosquito executes evasive manoeuvres. This takes absolute priority—survival of the individual long enough to potentially reproduce.
2. Host Approach: When the right combination of CO₂, heat, and odour signals indicates a potential blood source, approach behaviour activates. Female mosquitoes require blood meals for egg development, making this the primary feeding motivation.
3. Resource Location: When not fleeing or feeding, the mosquito explores for three essential resources: nectar for energy, mates for reproduction, and standing water for egg-laying.
4. Rest and Digest: Following a blood meal, the mosquito seeks sheltered locations while eggs develop, reducing activity and exposure to predation.
This decision architecture requires minimal fabrication. The mosquito doesn't model the world, predict outcomes, or imagine alternatives. It responds to immediate sensory input with behaviours shaped by millions of years of selection. The "intelligence" lies not in individual decision-making but in how evolution has wired these simple responses to statistical patterns in the environment.
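The decision tree above is simple enough to write out directly. The sketch below is a schematic rendering, not a model of real mosquito neurobiology; the boolean inputs and the strict priority ordering are simplifying assumptions (rest-and-digest is checked before host approach so that a fed female doesn't keep seeking blood):

```python
# Schematic rendering of the mosquito's priority-ordered decision tree.
# Inputs and ordering are simplifications for illustration.

def mosquito_behaviour(threat: bool, host_cues: bool, fed: bool) -> str:
    """Pick one behaviour by strict priority: no modelling, no lookahead."""
    if threat:          # 1. threat detection overrides everything
        return "evade"
    if fed:             # 4. after a blood meal: shelter while eggs develop
        return "rest_and_digest"
    if host_cues:       # 2. CO2 + heat + odour: approach the potential host
        return "approach_host"
    return "explore"    # 3. otherwise search for nectar, mates, water

print(mosquito_behaviour(threat=False, host_cues=True, fed=False))  # approach_host
print(mosquito_behaviour(threat=True,  host_cues=True, fed=False))  # evade
```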
Here lies a crucial distinction between mosquito and human survival strategies. The mosquito operates on what might be called a "cheap individuals, massive numbers" approach. Most individual mosquitoes die before achieving their reproductive goal—in fact, mortality rates are extraordinarily high. A female mosquito might attempt dozens of flights in search of hosts, mates, or egg-laying sites. Most of these flights end in failure: predation by birds, bats, or dragonflies; exhaustion without finding resources; unfavourable weather; human intervention; countless other hazards.
From an individual mosquito's perspective, life is brief and usually unsuccessful. The average female mosquito lives perhaps 2-4 weeks in the wild (laboratory conditions can extend this to months, but natural conditions are far harsher). During this time, she must evade predators, find a mate, locate a host and feed successfully, and find suitable water in which to lay her eggs.
The odds against any individual accomplishing all of this are substantial. Yet the species thrives because reproductive output vastly exceeds mortality. A single female can lay 100-200 eggs at once, potentially across multiple batches if she survives long enough for multiple blood meals. Even if 95% of attempts fail, the remaining 5% produce enough surviving offspring to maintain or expand the population.
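The population arithmetic is easy to check. A rough sketch using the essay's own figures (treating the 5% success rate as per-egg survival to reproduction, which is a simplifying assumption):

```python
# Rough check of the "cheap individuals, massive numbers" arithmetic,
# using the figures from the text. The 5% per-egg survival is a simplification.

eggs_per_batch = 150       # mid-range of the 100-200 eggs cited above
survival_rate = 0.05       # the ~5% of offspring that succeed
female_fraction = 0.5      # roughly half the survivors are female

daughters_per_mother = eggs_per_batch * survival_rate * female_fraction
print(daughters_per_mother)  # 3.75 reproducing females per mother
# Anything above 1.0 means the population grows despite 95% failure.
```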
This species-level survival strategy requires minimal individual model construction. The mosquito doesn't need to carefully assess risk, plan for long-term survival, or make sophisticated decisions about when to take chances. It operates through simple, immediate responses to environmental cues because individual survival isn't the primary evolutionary constraint—species continuation is. As long as enough individuals in the population succeed, natural selection doesn't "care" about the vast majority that fail.
Contrast this with human evolutionary strategy. Humans invest enormously in individual survival: nine months of gestation, years of dependency, extended childhood requiring parental investment, typically only a few offspring across a lifetime. We cannot afford the mosquito's profligate approach to individual risk. If 95% of human children died before reproducing, the species would collapse within generations. This fundamental difference in survival strategy shapes everything about our cognitive architecture.
Humans need elaborate model construction precisely because we invest so heavily in individual survival. We must predict dangers before encountering them, model complex social environments, learn from rare events, imagine future threats, and pass knowledge across generations. The computational cost of this fabrication is acceptable because individual humans cannot be replaced as easily as individual mosquitoes. When your survival strategy depends on most individuals reaching reproductive age, sophisticated prediction and careful risk assessment become essential.
The mosquito's minimal fabrication isn't a cognitive limitation to be pitied—it's an optimal solution to a different evolutionary problem. When individuals are computationally cheap and reproductively abundant, elaborate model construction would be wasteful. The mosquito accomplishes what it needs with 220,000 neurons precisely because its species-level strategy makes sophisticated individual cognition unnecessary.
This creates both advantages and limitations at the individual level. The advantage is reliability: with minimal fabrication, there's less that can go wrong cognitively. The mosquito's responses are tightly coupled to genuinely relevant environmental features. It doesn't hallucinate blood sources that aren't there. It doesn't get lost in elaborate predictions that might be wrong. Its stripped-down sensory world maps reasonably well onto the actual features that matter for its immediate survival.
The limitation is inflexibility at the individual level. The mosquito can't adapt to truly novel situations. It can't learn complex patterns that would require memory integration beyond simple associations. It can't override its hardwired responses even when they're counterproductive. If something tricks its sensory systems—if a mosquito trap mimics heat, CO₂, and odours—the mosquito will approach because its minimal fabrication doesn't allow for sophisticated prediction-error correction or learned caution. It has enough brain to survive in its ecological niche, but not enough to construct complex models that would enable flexible responses to unprecedented situations.
Yet from a species perspective, this inflexibility barely matters. Some mosquitoes will be caught by traps, but others won't. Some will be fooled by artificial lures, but others will find genuine hosts. The species doesn't require every individual to be clever—it requires enough individuals to succeed often enough to maintain population growth. This makes minimal fabrication not a flaw but an elegant solution: why invest in expensive neural machinery for sophisticated cognition when simple, reliable responses combined with massive reproductive output achieve species survival more efficiently?
Move up the model construction spectrum to mammals. A rat has approximately 200 million neurons, a cat has around 250 million, and a dog has roughly 500 million. These brains can afford richer fabrication. They build more complex internal models, integrate extensive memory, and generate predictions that extend further into the future.
A dog doesn't just respond to immediate stimuli. It predicts. When it hears its owner's car approaching, it anticipates the owner's arrival and moves toward the door before the door opens. When it sees a ball being thrown, it predicts the ball's trajectory and runs to where it expects the ball to land, not where it is now. These predictions require internal models: representations of how objects move, how events unfold, how the world behaves.
Mammalian brains also integrate memory in sophisticated ways. A cat remembers where food was located and returns to check those locations. A dog learns which behaviours lead to rewards and which lead to punishment. These memories shape predictions: the brain fabricates expectations based on past experience. This is more elaborate model construction than the mosquito's minimal predictive processing, but it's still relatively constrained.
The mammalian fabrication creates greater flexibility. These animals can learn, adapt to new environments, and solve novel problems. But it also creates more ways to be wrong. A dog can develop false expectations—predicting a walk that doesn't happen, anticipating food that isn't provided. A cat can remember a mouse hole that's no longer active. The richer fabrication allows for greater intelligence but introduces the possibility of systematic error: the brain's internal model can diverge from external reality.
Then there are humans, with approximately 86 billion neurons (Herculano-Houzel, 2009). This enormous neural capacity enables fabrication on a scale qualitatively different from other animals. The human brain doesn't just predict immediate events; it constructs elaborate, abstract models of reality that can completely override sensory input.
Humans fabricate in ways other animals cannot:
Language and Abstraction: Humans create concepts that have no physical referent. Justice, democracy, infinity, rights—these are fabrications, mental constructs that exist in brains and social structures but not as physical objects. Yet people treat them as real, build societies around them, and even die for them.
Counterfactual Thinking: Humans fabricate alternative realities—what might have happened, what could happen in the future, what would happen if conditions were different. This enables planning, regret, hope, and anxiety. It also means humans spend much of their mental life in fabricated scenarios rather than present reality.
Social Construction: Humans collectively fabricate shared realities. Money has value because everyone agrees it does. National borders exist because people collectively accept them. Social hierarchies, cultural norms, and institutional structures are all shared model constructions—they're real in their consequences but constructed in their nature.
Override of Sensory Input: Humans can fabricate so elaborately that they override what their senses are actually reporting. A person experiencing anxiety might "see" threats that aren't there. Someone with strong expectations might "hear" words that weren't spoken. Placebo effects demonstrate that believing a treatment will work can produce physiological changes, even when the treatment is inert. The fabrication shapes not just perception but physical reality.
The cost of this massive model construction is that humans can be systematically wrong in ways mosquitoes and dogs cannot. A mosquito might fail to find blood, but it doesn't fabricate elaborate theories about why blood sources are scarce. A dog might develop false expectations about walks, but it doesn't build ideological systems based on those expectations. Humans, with their enormous capacity for fabrication, can construct entire worldviews that are internally coherent, socially reinforced, and completely disconnected from external reality.
The spectrum from mosquito to human reveals a fundamental trade-off. More neurons enable richer fabrication, which enables greater flexibility, learning, and intelligence. But richer fabrication also means more opportunities for the internal model to diverge from external reality. The smarter the brain, the more elaborate the fabrication, and the more ways it can be wrong.
This isn't a bug; it's a feature. The fabrication enables humans to plan, imagine, create, and build civilisations. But it also means that certainty—the feeling of knowing—can become completely detached from accuracy. A human can be absolutely certain about something that's absolutely wrong because the constructed model feels like direct perception of reality. The mosquito, with its minimal fabrication, is limited but reliable. The human, with massive model construction, is capable but prone to systematic delusion.
The question, then, isn't how to escape fabrication. That's impossible—it's how brains work. The question is: how do humans live wisely in fabricated worlds, knowing that what they experience as reality is primarily an internal construction?
Most of the time, fabrication is invisible. The brain's internal model matches external reality closely enough that the fabrication goes unnoticed. But certain situations reveal the fabrication process, making it impossible to ignore. Optical illusions and magic tricks are particularly effective at this because they systematically exploit the mechanisms of fabrication.
Optical illusions aren't failures of the visual system. They're demonstrations of normal brain function. They reveal what the brain is doing all the time: constructing experience through prediction and filling-in, rather than passively receiving sensory data.
Motion Aftereffects: Stare at a waterfall for 30 seconds, then look at the rocks beside it. The rocks appear to move upward, even though they're stationary. This happens because neurons that respond to downward motion adapt during the 30-second viewing—they reduce their firing. When the eyes shift to the stationary rocks, the neurons that detect upward motion are firing at their baseline rate, while the adapted downward-motion neurons are firing at a reduced rate. The brain interprets this imbalance as upward motion. The fabrication is creating movement where none exists (Mather et al., 2008).
Impossible Objects: The Penrose triangle appears to be a three-dimensional object that couldn't exist in physical space. Yet the brain constructs a coherent 3D interpretation from the 2D image because it's trying to construct a sensible 3D world from 2D retinal input. The brain's model-building process creates an impossible object that feels perceptually real (Penrose & Penrose, 1958).
Colour Constancy: A white object appears white whether it's viewed in sunlight, under fluorescent lights, or in shadow—even though the wavelengths of light reaching the eye are very different in each case. The brain fabricates "white" by interpreting the wavelengths in context. This means the experienced colour isn't determined by the actual wavelengths arriving at the retina; it's determined by the brain's interpretation of those wavelengths. The fabrication overrides the physical stimulus (Land, 1977).
The Dress: In 2015, a photograph of a dress created intense debate: was it blue and black or white and gold? The answer depended on the brain's assumptions about lighting. Those who assumed the dress was in shadow saw white and gold; those who assumed it was in bright light saw blue and black. The same sensory input produced different fabrications based on different assumptions about context (Wallisch, 2017). Neither group was wrong—both were experiencing fabrications based on reasonable assumptions. The external reality (a blue and black dress) was less important than the brain's interpretation.
These illusions reveal the fabrication process because they create situations where the brain's predictions are systematically wrong, and yet those predictions continue to shape experience. Even when someone knows the motion aftereffect is an illusion, the rocks still appear to move. Even when someone knows the Penrose triangle is impossible, it still looks like a coherent 3D object. The fabrication can't be turned off by knowledge alone.
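The motion aftereffect, in particular, reduces to simple arithmetic over opponent motion channels. The sketch below is a caricature with invented numbers, but it captures why adaptation alone produces illusory motion:

```python
# Caricature of the waterfall illusion: perceived motion is the difference
# between opponent channels, and adaptation unbalances them. Numbers invented.

baseline = 1.0
down_channel = baseline * 0.4   # adapted after 30 seconds of downward motion
up_channel = baseline           # unadapted, still firing at baseline

perceived_motion = up_channel - down_channel
print(perceived_motion)  # 0.6 > 0: stationary rocks appear to drift upward
```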
Magicians have spent centuries empirically testing what the brain fabricates. They don't need to understand neuroscience; they just need to know what works. And what works is systematic exploitation of predictive coding, attention bottlenecks, and memory reconstruction (Macknik et al., 2008; Kuhn & Tatler, 2005).
Misdirection: The magician directs attention away from the method. This works because of the 10 bits per second conscious bottleneck. If attention is focused on the left hand, the right hand can execute the trick unnoticed. The brain fabricates a coherent narrative that doesn't include what it didn't attend to. The trick feels impossible because the fabricated experience doesn't include the method (Kuhn et al., 2008).
False Memory: The magician asks an audience member to "think of any card." Through forcing techniques, the magician ensures a specific card is chosen, but the choice feels free. Later, when the chosen card appears in an impossible location, the audience member "remembers" freely choosing that card. The memory is reconstructed to make sense of the current situation, even though the choice wasn't actually free (Loftus, 2005).
Change Blindness: The magician makes a visible change to the scene, but because the change occurs during a predicted moment—a flash, a gesture, a movement—the brain doesn't process it as a prediction error. The fabricated continuity overrides the actual change. People confidently state that nothing changed, even when something obviously did (Simons & Levin, 1997).
Forcing: The magician appears to offer a free choice but actually controls the outcome. This works because the brain constructs the experience of agency—people feel like they chose freely because the brain generates that feeling, even when the choice was constrained. The fabrication of free will is so strong that people will insist they had genuine choice, even when shown evidence that they didn't (Wegner, 2002).
If fabrication can be this thoroughly exploited—if magicians can systematically create experiences that feel completely real but are entirely fabricated—what does that mean for everyday experience? How much of normal perception is equally fabricated but less obviously so?
The answer is unsettling: most of it. The difference between a magic trick and normal perception isn't that one involves fabrication and the other doesn't. Both involve fabrication. The difference is that in a magic trick, the fabrication is deliberately exploited to create an experience that contradicts external reality. In normal perception, the fabrication usually matches external reality closely enough to be functional.
But "usually" and "closely enough" aren't the same as "always" and "perfectly." The brain is always fabricating, always predicting, always filling in. Most of the time, those fabrications are accurate enough to navigate the world successfully. But there's no way to know, from inside the fabrication, whether it matches external reality or not. The experience feels the same either way.
This creates a fundamental epistemological problem: people cannot distinguish between genuine perception and fabrication because they never have access to genuine perception. Everything they experience is already filtered through prediction, filled in through expectations, and constructed through neural processes that operate below consciousness. They live in fabricated worlds, and those worlds feel like direct access to reality—even when they're not.
Socrates (c. 470–399 BCE) was an Athenian philosopher who fundamentally transformed Western thought, yet we face a peculiar challenge in knowing about him: he left no written works. What we know of Socrates comes primarily through the writings of his student Plato, with additional accounts from Xenophon, Aristophanes, and later sources. This creates what scholars call "the Socratic problem"—a paradox of historical knowledge where one of philosophy's most influential figures exists for us mainly through others' reconstructions of his words and methods.
The limited and sometimes contradictory nature of ancient records about Socrates presents its own epistemological irony: we construct our understanding of the man who questioned the nature of knowledge from fragmentary, positioned accounts written by those with their own philosophical and literary agendas. We cannot know with certainty what Socrates actually said or believed—we have only fabricated reconstructions based on what others claimed he said, filtered through their own interpretations and purposes.
Yet Socrates' significance transcends this historical uncertainty. He pioneered what might be considered the first truly secular approach to wisdom: knowledge derived not from divine revelation, priestly authority, or inherited tradition, but from systematic questioning, careful observation, and reasoned argument. Rather than accepting received truths from the gods or relying on holy inspiration, Socrates insisted that wisdom comes through examining one's beliefs, testing assumptions against evidence, and acknowledging the limits of what can be known. This method—persistent questioning to expose the difference between genuine knowledge and mere opinion—laid the foundation for scientific thinking: conclusions based on observable evidence and logical reasoning rather than faith or tradition. The Socratic method remains fundamental to philosophical inquiry and scientific investigation: observe carefully, question rigorously, and acknowledge uncertainty honestly.
Socrates, in the Platonic dialogues, repeatedly insisted that he knew nothing—or rather, that the only thing he knew was that he knew nothing. This wasn't false modesty. It was a profound recognition of the difference between the feeling of knowing and actual knowledge. Most people Socrates questioned felt certain of their knowledge. They could define justice, explain virtue, describe piety. But when Socrates probed their definitions, the certainty collapsed. What they thought they knew turned out to be assumption, cultural conditioning, or circular reasoning.
The fabrication framework helps explain why this happens. Certainty is a feeling—a product of the brain's fabrication process—not a guarantee of accuracy. When the brain's internal model is coherent and stable, it generates the feeling of knowing. But coherence and stability don't equal truth. A fabrication can be internally consistent, socially reinforced, and completely wrong.
The brain generates certainty as a metacognitive signal. When predictions are consistently confirmed, when the internal model is stable, when there's no significant prediction error, the brain produces the feeling of "knowing." This makes sense from an evolutionary perspective: if a model is working—if it's producing accurate predictions—then acting confidently on that model is adaptive. Hesitation wastes time and energy.
But this creates a problem: the feeling of certainty can persist even when the model is wrong, as long as the model is internally coherent and prediction errors are ignored or reinterpreted. Consider someone with a strong political ideology. When they encounter evidence that confirms their ideology, they feel even more certain. When they encounter contradictory evidence, they reinterpret it to fit their model or dismiss it as biased, unreliable, or irrelevant. The internal model remains coherent, prediction errors are minimised, and certainty persists—even if the model bears little relationship to external reality (Kahneman, 2011).
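The asymmetry can be captured in a toy update rule: confirmations are weighted fully while contradictions are discounted. The weights below are invented for illustration, but they show how certainty can climb even on mixed evidence:

```python
# Toy model of why certainty persists: confirming evidence counts fully,
# contradicting evidence is reinterpreted away. All weights are invented.

def update_confidence(confidence, evidence_supports, discount=0.1):
    """Nudge confidence up for confirmations, barely down for contradictions."""
    if evidence_supports:
        return min(1.0, confidence + 0.1)
    return max(0.0, confidence - 0.1 * discount)  # contradiction discounted

confidence = 0.7
for supports in [True, False, True, False, False, True]:
    confidence = update_confidence(confidence, supports)
print(round(confidence, 2))  # 0.97: certainty rose despite three contradictions
```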
This is why Socratic questioning is so effective—and so uncomfortable. By asking people to define their terms, to justify their beliefs, to consider alternative explanations, Socrates introduces prediction errors into stable models. The coherence breaks down, and with it, the feeling of certainty. What seemed like knowledge reveals itself as fabrication.
The concept of positioned knowledge, developed in The Epistemology of Safeguarding (Young, 2026), connects directly to fabrication. All knowledge is observed from somewhere—from a particular position in space, time, social structure, and history. That position determines what can be seen, what remains hidden, and how observations are interpreted.
But it's more than that. The position also determines what gets fabricated. The brain doesn't fabricate from nowhere; it fabricates from its positioned perspective. A child protection social worker constructs different models of a family situation than the parents do, not because one has direct access to reality and the other doesn't, but because they're constructing from different positions with different information, different histories, and different constraints.
Neither model is "the truth." Both are models constructed from limited, positioned information. The social worker's model might emphasise risk factors and past concerns. The parents' model might emphasise current efforts and extenuating circumstances. Each model is coherent from its positioned perspective. Each generates the feeling of certainty. And yet they can be completely incompatible.
In The Epistemology of Safeguarding (Young, 2026), this interpretive uncertainty is expressed with stark clarity: "I don't know how my exhaustion would be read as depression requiring intervention, or as normal parenting stress. I don't know whether my frustration would be heard as honest emotion or as concerning anger... And if I don't know that, with all my professional expertise and insider knowledge, how could any parent possibly know it?" This captures the interpretive problem: parents cannot know how their words and behaviours will be interpreted because they have no access to the professional observer's internal model-building process. The parent's sense of "I'm coping reasonably well" meets the professional's impression of "concerning signs of stress," and there's no neutral arbiter to determine which model is accurate.
This creates what The View from Here (Young, 2025) calls systemic impasse: situations where people in different positions construct incompatible models, each feels certain, and there's no neutral position from which to adjudicate. The social worker cannot step into the parents' position, and the parents cannot step into the social worker's position. Each is trapped in their constructed understanding, experiencing that construction as reality.
This creates acute problems in professional contexts. Organisations demand certainty and decisiveness. Risk assessments must conclude with definitive judgements: safe or unsafe, high risk or low risk, intervention needed or not needed. Medical diagnoses must be stated confidently. Legal judgements must be rendered. Financial recommendations must be made. The system requires professionals to perform certainty.
But the professionals are constructing models, just like everyone else, built from positioned, limited information. They're predicting based on prior experience and theoretical frameworks. They're filling in gaps based on expectations. The models might be sophisticated, evidence-informed, and carefully reasoned. But they remain internally constructed interpretations, not direct access to reality.
The pressure to perform certainty creates constructed confidence. Professionals learn to state their models as if they were facts, to present their assessments as if they were objective truths, to frame their predictions as if they were certain outcomes. This isn't deliberate deception—most professionals genuinely feel certain because their internal models are coherent and stable. But the feeling of certainty and actual accuracy are not the same thing.
Consider a child protection risk assessment. The social worker reviews case history, interviews family members, observes parent-child interactions, and consults with other professionals. From all this information, they construct an internal model: this family is at high risk for future harm, or this family has sufficient protective factors. The model feels certain because it's coherent—it makes sense, it fits with professional frameworks, it's consistent with training and experience.
But that model is constructed from positioned, incomplete information. It's predicting a future that hasn't occurred. It's filling in gaps about private family life that the social worker cannot fully observe. And it's being generated by a brain that operates through predictive coding—that sees what it expects to see and constructs experience from limited sensory input.
This doesn't mean the assessment is worthless. It might be the best available model given the information accessible from that position. But it's not "truth." It's an evidence-informed assessment, and recognising it as such should temper the certainty with which it's stated.
This same cognitive mechanism—detecting what stands out against expectations—is fundamental to professional assessment in social work. When a social worker conducts a family assessment, they're employing this fabrication process: constructing an internal model of what "typical" or "expected" family functioning looks like based on professional knowledge, developmental norms, and experience, then identifying what deviates from that baseline.
This allows the social worker to rapidly detect both concerns and strengths:
Identifying Worries: Elements that stand out as concerning—unexplained injuries, developmental delays, concerning interactions, signs of neglect—capture attention precisely because they violate the expected pattern. The social worker doesn't need to consciously analyse every detail of family life; prediction errors highlight what needs closer examination. A child who is unusually withdrawn, a parent who responds unexpectedly to questions, an environment that differs significantly from expectations—these generate prediction errors that appropriately trigger deeper assessment.
Identifying What's Working Well: The same process identifies strengths and protective factors. When a family demonstrates resilience, resourcefulness, or capability that exceeds expectations—strong attachment despite difficult circumstances, creative problem-solving, effective use of support networks—these positive prediction errors also stand out. A parent who engages warmly with their child despite visible stress, siblings showing mutual care, evidence of effective routines and boundaries—these aren't just neutral observations but positive deviations that merit recognition and building upon.
This dual detection—worries and what's working well—is not arbitrary or biased fabrication but sophisticated deployment of the same prediction-error mechanism that helps all animals survive. The social worker's professional training provides frameworks (attachment theory, child development, trauma responses, family systems) that shape the fabricated baseline against which observations are compared. This makes the assessment process more systematic and evidence-based than untrained observation would be, but it remains fundamentally a process of constructing internal models and identifying meaningful deviations.
The challenge, of course, is that this process depends on the quality of the fabricated baseline. If a social worker's expectations are shaped by cultural assumptions, limited exposure to different family structures, or organisational biases, then prediction errors might highlight differences rather than genuine concerns, or might miss strengths that don't fit expected patterns. This is why positioned knowledge matters—the fabricated baseline is constructed from a particular professional and cultural position, which shapes what stands out as concerning or promising.
But when done well—when the fabricated baseline is informed by robust professional knowledge, cultural humility, and genuine openness to family strengths—this assessment process leverages one of the brain's most powerful capabilities: the ability to rapidly identify what matters by detecting meaningful deviations from sophisticated internal models.
Medical Diagnosis: A patient presents with symptoms. The doctor constructs a diagnostic model—an explanation—based on symptom pattern, test results, prior cases, and medical training. The model might be correct, or it might miss a rare condition, or it might misinterpret ambiguous findings. But the diagnosis must be stated confidently because the patient needs treatment. The performed certainty serves a social function even when the underlying model is uncertain.
Legal Sentencing: A judge determines an appropriate sentence. This requires constructing a model of the defendant's culpability, likelihood of reoffending, and the sentence's deterrent effect—all predictions about unobservable internal states and future behaviour. Yet the sentence is stated as if these predictions were certain: three years in prison, not two or four, because the fabricated model suggests three is appropriate.
Investment Recommendations: A financial adviser recommends a portfolio. This requires fabricating models of future market behaviour, company performance, and economic conditions—all inherently unpredictable. But the recommendation is presented confidently because clients expect certainty. The feeling of knowing, based on coherent internal models and past experience, gets translated into confident predictions about an uncertain future.
In each case, professionals are constructing models and experiencing those models as knowledge. The organisations and social systems they operate within demand confident certainty. And so the models get presented as facts, assessments as objective truths, and predictions as certain outcomes—even though all of them rest on limited, positioned information processed by brains that fundamentally operate through fabrication.
In Star Trek Beyond (2016), Spock lies wounded in a cave on the planet Altamid, tended by Dr. McCoy. Their conversation captures something profound about certainty:
McCoy: "You're not afraid of death, are you, Spock? You're afraid of life. You're afraid of the fact that you might actually have a feeling one of these days."
Spock: "Fear of death is illogical."
McCoy: "Fear of death is what keeps us alive!"
Spock: "I do not need to witness a rock hitting the ground to know that the planet has mass. Gravity is a certainty. As is death."
Spock is correct. Gravity is certain. Not metaphysically certain—not "absolute truth about the fundamental nature of reality"—but pragmatically certain: reliably predictable within specified conditions. Newton's equations still work to launch spacecraft into orbit—and as we'll explore shortly, Newton's approach to understanding vision and measurement reveals how such pragmatic certainty becomes achievable. Engineers don't need to verify that gravity will function before calculating orbital trajectories. They can be certain because they're working in a domain where certainty is achievable.
But McCoy is equally correct. Fear of death does keep us alive—not because fear is a direct perception of reality, but because fabricated emotional responses have real functional consequences. The feeling of fear, though constructed by predictive models rather than direct threat detection, triggers adaptive behaviours that increase survival. McCoy's observation captures the truth about the open system domain of human experience: fabricated feelings are real and consequential, even when they're not direct perceptions of external reality. Both observations are valid—they're simply operating in different domains.
Consider a simpler example: making pancakes. If you follow a recipe precisely—correct ingredients, correct quantities, correct process, correct cooking temperature—you will get pancakes. Every time. This is pragmatic certainty.
But imagine encountering a pancake for the first time without knowing the recipe. You might begin with fabrication: "It looks flour-based, so perhaps flour and water?" You test this hypothesis. The result is disappointing—a dense, tasteless disc that bears little resemblance to the original. So you refine: "Perhaps milk instead of water?" Better, but still not quite right. You continue iterating: "Maybe an egg for binding?" Each experiment generates feedback. Each failure refines the hypothesis. Eventually, through systematic testing and adjustment, you arrive at a formulation that works—a recipe that reliably produces pancakes every time it's followed.
This is systematic hypothesis testing, the empirical counterpart of the questioning Socrates advocated over two thousand years ago: refining fabricated models until they match reality reliably enough to be functionally certain. The process works because pancake-making is a closed system with controllable variables. You can specify all the relevant conditions: flour quantity, milk temperature, pan heat, cooking duration. When you control the variables, you control the outcome. The recipe doesn't change because the physical and chemical processes involved are stable and repeatable.
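To make that convergence concrete, here is a minimal sketch in Python. The "true recipe", the quantities, and the taste function are all invented for illustration; the point is only that a closed system lets every hypothesis be re-tested under identical conditions until the fabricated model stops being wrong.

```python
# A toy "pancake" hill-climb: invented numbers, not a real recipe.
TRUE_RECIPE = {"flour": 200, "milk": 300, "eggs": 2}  # hypothetical ground truth

def taste(recipe):
    """Error score standing in for tasting the result; 0 is a perfect pancake."""
    return sum(abs(recipe[k] - TRUE_RECIPE[k]) for k in TRUE_RECIPE)

def refine(recipe, step):
    """Adjust each ingredient up or down by `step`, keeping what tastes better."""
    best = dict(recipe)
    for ingredient in recipe:
        for delta in (-step, step):
            candidate = dict(best)
            candidate[ingredient] += delta
            if taste(candidate) < taste(best):
                best = candidate
    return best

recipe = {"flour": 100, "milk": 0, "eggs": 0}  # first fabrication: flour and water
step = 64
while step >= 1:
    improved = refine(recipe, step)
    if taste(improved) < taste(recipe):
        recipe = improved          # the prediction error shrank: keep the change
    else:
        step //= 2                 # no improvement: try finer adjustments

print(recipe)  # converges on TRUE_RECIPE because feedback is repeatable
```

The loop converges precisely because the feedback is repeatable: the same adjustment can be tasted twice and gives the same answer.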
This is the domain where certainty exists: systems where variables are knowable, conditions are controllable, and predictions are reliably testable through repetition. Physics, chemistry, engineering, cooking—these domains admit certainty because practitioners can specify the "ingredients," control the "cooking conditions," and test hypotheses repeatedly until fabricated models converge on reliable formulations.
Now consider judgements about human behaviour, motivation, or future actions. Can you specify all the variables? No. You don't have access to internal states, complete histories, private thoughts, or the countless environmental factors influencing behaviour. Can you control the conditions? No. Human systems are open and constantly influenced by factors you cannot observe or predict. Can you test hypotheses through repetition? No. Each situation is unique and unrepeatable—you cannot rerun the same person in the same circumstance with different variables to see what changes.
Human systems aren't pancakes. You can't measure out precise quantities of "motivation" or "capacity." You can't set "stress levels" to known temperatures and predict reliable outcomes. You can't test whether your intervention worked by isolating it from all other influences. The scientific method that converges on certainty for pancakes cannot converge on certainty for people because the conditions for systematic testing don't obtain.
This creates epistemic difficulty across every domain involving human judgement. A physician diagnoses based on symptoms and test results but cannot fully specify what's happening inside a patient's body or predict exactly how they'll respond to treatment. A teacher designs lessons based on developmental theory but cannot know what each child is actually understanding or what home circumstances are influencing their learning. A manager implements organisational change based on behavioural models but cannot predict how individuals will actually respond to new systems. An investor makes portfolio decisions based on market analysis but cannot know what other investors are thinking or what unexpected events will shift conditions. A parent makes decisions about their child's needs based on behaviour they observe but cannot directly access what the child is experiencing internally.
In all these cases, practitioners are working in domains where the conditions for certainty don't exist. This isn't failure. It's not incompetence or lack of rigour. It's the fundamental nature of open systems involving human behaviour, where variables are partially hidden, conditions are incompletely knowable, outcomes involve unique situations, and systematic hypothesis testing cannot converge on reliable formulations because the "experiments" cannot be repeated under controlled conditions.
The Socratic wisdom—"I know that I know nothing"—isn't nihilism or false modesty. It's domain awareness. Socrates understood he was operating in the human domain, where certainty about complex judgements isn't achievable through direct perception or systematic testing. Those who claimed certain knowledge about justice, virtue, or human character without examining how they knew it were making a category error: treating human judgement as if it belonged to the same domain as physics or cooking.
This distinction matters profoundly:
In the Closed System Domain (Pancakes and Physics): variables can be specified, conditions can be controlled, and hypotheses can be tested through repetition, so fabricated models can converge on pragmatic certainty.
In the Open System Domain (Human Behaviour and Judgement): variables are partially hidden, conditions cannot be controlled, and situations are unique and unrepeatable, so fabricated models remain provisional no matter how carefully they are constructed.
The danger isn't that people work without certainty in human domains—that's unavoidable. The danger is domain confusion: treating judgements about human behaviour as if they belonged to the closed system domain, presenting fabricated assessments with the confidence appropriate to physics equations, and mistaking the feeling of certainty (coherent internal models, social consensus, past experience) for actual predictive reliability.
Spock's certainty about gravity is justified because he's in a closed system domain. Anyone's certainty about human motivation, future behaviour, or the meaning of ambiguous actions is not justified—not because they're incompetent, but because they're working in a fundamentally different domain where the conditions for certainty don't obtain. Recognising this isn't defeatism; it's the beginning of wisdom. It's knowing which domain you're operating in and adjusting your epistemic humility accordingly.
When Newton developed his Laws of Motion—through a method of systematic measurement and hypothesis testing we'll examine in detail later—he could test them repeatedly, refine his formulations, and eventually achieve equations reliable enough to launch spacecraft centuries later. When professionals make judgements about human situations, they cannot achieve this kind of certainty no matter how carefully they work, because the domain doesn't permit the systematic testing that transforms fabrication into formulation. The fabrications may be sophisticated, evidence-informed, and the best available—but they remain fabrications, not certainties.
Individual model construction is concerning enough. But collective model construction—when multiple people's constructed models align and reinforce each other—can be catastrophically dangerous. This is the domain of groupthink, collective delusion, and what Executive Mobs (Young, 2025) identifies as the phenomenon where intelligent people make spectacularly stupid decisions together.
Individual brains fabricate from their positioned perspectives, constructing internal models that feel like reality. But humans are social animals, and their fabrications are heavily influenced by others' fabrications. When someone joins a group, particularly a professional or organisational group, their individual model construction begins to align with the collective model construction.
As Executive Mobs (Young, 2025) observes, "when energy flows into collective identity and shared outrage, it does not flow into individual problem-solving and personal responsibility. The mob member experiences the neurochemical satisfaction of belonging, the moral clarity of shared purpose, the emotional intensity of collective action—all while changing nothing fundamental about their own life or capabilities." This mechanism applies equally to professional groups: the collective model construction provides psychological satisfaction—the feeling of doing important work, of shared understanding, of coordinated purpose—even when the fabrication itself may be systematically wrong.
This alignment happens through several mechanisms:
Shared Information: Group members receive similar information, often filtered through the same organisational structures. This creates similar inputs for their model-building processes, leading to similar internal models.
Social Pressure: People adjust their constructed models to match the group consensus. This isn't necessarily conscious conformity—the brain's predictive processes are shaped by social context. What the group treats as obvious becomes what individuals predict and experience as obvious.
Confirmation Bias: Once a collective model construction establishes itself, contradictory information gets dismissed or reinterpreted. The group reinforces the shared model, strengthening the feeling of certainty.
Linguistic Contagion: The way the group talks about a situation shapes how individuals think about it. If a team consistently refers to a project as "the solution," that linguistic framing shapes everyone's fabrication. The project becomes "the solution" in their internal models, not "a possible approach" or "a risky experiment."
Here's how collective model construction typically unfolds:
Stage 1: Initial Model Formation
A senior person or dominant group member articulates a model: "This strategy will work," "This risk is low," "This approach is correct." This model is their fabrication—an internally constructed interpretation of limited information.
Stage 2: Social Adoption
Other group members, especially those with less power or status, adopt the model. They're not being deceptive; their brains are incorporating the senior person's model into their own model-building processes. The shared information and social pressure make the model feel true.
Stage 3: Reinforcement Loop
As more people adopt the model, it becomes the consensus. The consensus strengthens individual certainty—if everyone agrees, the model must be correct. Each person's fabrication reinforces others' fabrications. The group becomes collectively more certain even as the model remains a fabrication.
Stage 4: Contradiction Suppression
When contradictory evidence appears, the group doesn't update the model. Instead, the contradiction gets dismissed: "That's an outlier," "That's from an unreliable source," "That's not relevant to our situation." The brain's prediction-confirmation process, operating at the group level, maintains the established model.
Stage 5: Catastrophic Failure
The collective model construction diverges from reality until external reality forces correction—usually through failure. The project fails, the risk materialises, the strategy collapses. Only then does the fabrication break down. But even after failure, the group might reconstruct the narrative to protect the model: "We were unlucky," "Circumstances changed," "Others sabotaged us."
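The loop from Stage 1 to Stage 3 can be caricatured in a few lines of code. This is a deliberately simplified sketch with invented parameters, not a model of any real group: agents pull their estimates toward the consensus, "confidence" is defined here as nothing more than inverse spread, and no one ever samples external reality.

```python
import random

# Toy model of Stages 1-3: agents adjust their estimates toward the group
# consensus, and their confidence tracks agreement rather than accuracy.
# All numbers are invented for illustration.

TRUTH = 10.0                      # external reality the group never samples
random.seed(1)
beliefs = [random.uniform(20, 40) for _ in range(8)]  # a shared, badly anchored model
beliefs[0] = 35.0                 # Stage 1: a senior voice states a confident figure

for round_ in range(10):          # Stages 2-3: social adoption and reinforcement
    consensus = sum(beliefs) / len(beliefs)
    beliefs = [b + 0.5 * (consensus - b) for b in beliefs]  # drift toward the group

    spread = max(beliefs) - min(beliefs)
    confidence = 1 / (1 + spread)         # agreement experienced as certainty
    error = abs(consensus - TRUTH)        # but accuracy never improves
    print(f"round {round_}: consensus={consensus:.1f} "
          f"confidence={confidence:.2f} error={error:.1f}")
```

Run this and agreement climbs every round whilst the error against reality stays fixed: the group becomes more certain without becoming more correct, exactly the divergence that Stage 5 eventually punishes.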
The Iraq War WMD Fabrication (2002-2003): Intelligence agencies across multiple countries constructed a false certainty that Iraq possessed weapons of mass destruction. The model was internally coherent, based on selected intelligence and worst-case assumptions. Contradictory evidence was dismissed or reinterpreted. Senior officials stated the case with absolute confidence. The collective model construction was so strong that it convinced not just intelligence agencies but political leaders, media, and large segments of the public. The fabrication felt like certainty because it was a shared, socially reinforced model. External reality—the absence of WMDs—only became undeniable after the invasion (Jervis, 2010).
The 2008 Financial Crisis: Financial institutions collectively fabricated the model that housing prices would continue rising indefinitely, that mortgage-backed securities were low-risk, and that sophisticated financial instruments had distributed risk safely. This wasn't a conspiracy; it was genuine collective model construction. Analysts, traders, executives, and regulators all constructed similar models because they shared information, shared incentives, and reinforced each other's certainty. Contradictory evidence—rising default rates, unsustainable lending practices—was dismissed as manageable. The constructed reality persisted until external reality forced correction through systemic collapse (Tett, 2009).
NASA Challenger Disaster (1986): Engineers at NASA and Morton Thiokol collectively fabricated the model that O-ring erosion was acceptable risk, that launches could occur in cold temperatures, that bureaucratic deadlines were more important than safety margins. This fabrication developed through years of normalisation of deviance: each successful launch despite O-ring damage reinforced the model that the risk was acceptable. Dissenting engineers were pressured to conform. The collective model construction was so strong that senior managers approved the launch despite explicit warnings. External reality—the explosion—provided brutal correction (Vaughan, 1996).
Corporate Scandals (Enron, Theranos): These organisations constructed collective realities where fraud became normal, warning signs were suppressed, and impossible claims were treated as achievable. Employees constructed coherent internal models because everyone around them shared those models. The social reinforcement was so strong that individuals who might have questioned the models outside the organisation accepted them within it. The constructed reality persisted until external audits or investigations forced reality to intrude (Carreyrou, 2018).
Collective model construction is particularly dangerous among intelligent, highly educated professionals. This seems paradoxical—shouldn't smarter people be better at detecting errors? But the model construction spectrum helps explain it: more neural capacity enables more elaborate model construction.
Smart people can construct more sophisticated justifications for their models. They can rationalise contradictions more effectively. They can build more complex narratives that explain away prediction errors. And they tend to be more confident in their fabrications precisely because those fabrications are more elaborate and internally consistent.
Moreover, intelligent people in professional settings are often in hierarchical organisations that punish dissent and reward certainty. A junior analyst who questions the collective model construction risks their career. A mid-level manager who expresses doubt undermines their authority. The social and professional incentives all push toward adopting and reinforcing the collective model, even when individuals have private doubts.
In professional settings, collective model construction takes specific forms:
Multi-Agency Meetings: Different organisations send representatives who share information and construct a collective model of a case. The model feels objective because it's been "agreed" by multiple independent agencies. But those agencies are often constructing from similar positioned perspectives, using similar frameworks, and responding to similar social pressures. The collective model might be dangerously wrong even though it has multi-agency consensus.
Case Conferences: A team reviews a case and reaches a conclusion. The conclusion feels certain because multiple professionals agree. But they might all be constructing from the same incomplete information, all influenced by the same charismatic senior clinician, all failing to question assumptions because the group dynamic suppresses dissent.
As Executive Mobs (Young, 2025) describes the mechanism: "Professionals who agree with the collective narrative receive positive strokes: nods of recognition, validation of their concerns, acknowledgment of their compassion and efforts. Those who dissent face not punishment but something more subtle and more powerful: the absence of strokes." This withdrawal of social recognition—operating below conscious awareness—creates powerful biological pressure to conform even when professionals privately doubt the collective model construction.
Strategic Planning: Organisational leaders fabricate models of the future: market trends, technological changes, competitor actions. These fabrications get elaborated into strategic plans with concrete timelines and resource allocations. The detail and sophistication of the plans create confidence—surely something this carefully planned must work. But it's still fabrication, prediction based on limited information, subject to all the errors that individual model construction entails but now amplified by collective reinforcement.
Risk Assessment Consensus: Multiple professionals review a situation and agree on risk level. This consensus feels like objective truth—surely if multiple experts agree, they must be right. But they might all be constructing from similar biases, all influenced by organisational culture, all responding to liability concerns that shape risk assessments toward defensible decisions rather than accurate ones.
Collective model construction is more dangerous than individual model construction because it eliminates the natural check of social disagreement. When one person's fabrication diverges from reality, others might notice and provide corrective feedback. But when everyone's fabrications align, the group becomes an echo chamber. The shared model reinforces itself through social consensus, and dissenting voices—the prediction errors that might correct the fabrication—are suppressed or ignored.
The result is that groups can become collectively more certain even as they become collectively more wrong. The constructed model feels increasingly real because it's shared, discussed, reinforced, and acted upon. By the time external reality provides undeniable correction, the fabrication might have caused significant harm: unnecessary wars, financial collapses, preventable disasters, wrongful interventions.
This is why understanding model construction matters professionally. It's not enough to be individually careful, evidence-based, and reflective. Professionals must also recognise that their collective processes are model construction processes—that group consensus is not the same as truth, that organisational certainty is not the same as accuracy, and that the feeling of "we all agree" might be the most dangerous feeling of all.
"First, do no harm" (primum non nocere) is perhaps the most famous principle in medical ethics. It sounds unquestionable—of course healthcare professionals should avoid causing harm. But examined through the lens of fabrication, this principle reveals itself to be structurally impossible to fulfill.
"Do no harm" feels ethically unassailable because it appears to establish a minimum standard: even if a healthcare professional cannot help, at least they shouldn't make things worse. The principle seems to create a clear ethical boundary between acceptable and unacceptable action.
This clarity is attractive, particularly in fields where uncertainty is high and consequences are serious. Child protection, medicine, mental health services, education—all face situations where the correct course of action is unclear. "Do no harm" appears to offer guidance: when uncertain, do nothing that might cause harm.
But this apparent clarity is false. It's based on assumptions that the fabrication framework reveals to be untenable.
The principle "do no harm" requires three things that fabrication makes impossible:
1. Knowing what will cause harm
Harm assessment requires predicting the consequences of actions. But professionals operate from constructed models, not direct knowledge of reality. They predict harm based on positioned information, prior experience, and theoretical frameworks. These predictions might be wrong. What seems harmless from one positioned perspective might cause significant harm that isn't visible from that position. What seems harmful might actually be beneficial. The professional is fabricating the harm assessment, experiencing that fabrication as knowledge, and acting on it as if it were certain.
2. Knowing what constitutes harm
Harm isn't an objective feature of the world; it's a contextual, values-laden assessment. Is removing a child from an abusive home "harm" (because of attachment trauma and disruption) or "protection" (because of preventing further abuse)? Is administering chemotherapy "harm" (because of severe side effects) or "treatment" (because of cancer reduction)? The classification of an action as harmful or beneficial depends on which consequences are weighted as more significant—and that weighting is itself a fabrication, shaped by professional training, organisational culture, and social values.
3. Having a non-harmful option available
"Do no harm" assumes there's an action—or inaction—that causes no harm. But in complex situations, all options cause some harm. Removing a child causes attachment trauma. Leaving a child causes continued abuse exposure. There's no neutral option that causes zero harm. The professional must choose which harm is more acceptable—and that choice is based on constructed assessments of relative harm, not certain knowledge.
Child Protection: The Removal Decision
A social worker must decide whether to remove a child from their family. The decision is framed as "do no harm"—leave the child only if removal would cause more harm than staying; remove the child only if staying would cause more harm than removal.
But both options cause harm: removal causes attachment trauma, disruption, and the loss of familiar relationships; remaining risks continued exposure to abuse or neglect.
The social worker must construct an assessment of which harm is greater. This assessment depends on positioned information, prior experience, theoretical frameworks, and an organisational culture that shapes which risks are treated as tolerable.
There is no "do no harm" option. There is only "choose which harm seems less bad from my positioned, constructed assessment."
Medicine: Treatment Decisions
A doctor must recommend treatment for cancer. Surgery offers a chance of cure but carries surgical risks, recovery trauma, and possible complications. Chemotherapy offers different probabilities of success with different side effect profiles. Radiation has its own risk-benefit profile. Watchful waiting avoids treatment harm but risks disease progression.
Every option causes some harm. The doctor constructs an assessment of which option offers the best harm-benefit balance, but this assessment is based on population statistics, prior cases, the patient's particular circumstances, and predictions about treatment response that cannot be certain.
"Do no harm" cannot guide this decision because harm is guaranteed by all options, and the assessment of which harm is most acceptable is a constructed judgement.
Education: Discipline and Structure
A teacher must decide how to structure their classroom. Strict discipline creates safety and order but can cause anxiety, suppress individuality, and damage student-teacher relationships. Permissive approaches promote autonomy and creativity but can create chaos, expose vulnerable students to bullying, and fail to teach self-regulation.
Every pedagogical choice involves trade-offs. The teacher constructs a model of what their particular students need, but this constructed model is based on developmental theory, observed behaviour, prior teaching experience, and assumptions about home circumstances that cannot be directly verified.
"Do no harm" cannot guide classroom management because all management approaches cause some harm to some students. The question isn't whether to cause harm—that's inevitable. The question is which harms are acceptable and for whom.
If "do no harm" is structurally impossible, what should replace it?
"Aim to do more good than harm" acknowledges that harm is inevitable and makes the ethical obligation explicit: the professional's responsibility is to carefully weigh likely harms against likely benefits and choose the action that appears, from their positioned perspective, to maximise good and minimise harm.
This formulation is less satisfying than "do no harm" because it's less certain. But it's more honest, as the acknowledgements that follow make explicit.
"I acknowledge I cannot fully know consequences" makes explicit what "do no harm" pretends isn't true: professionals are operating from constructed models, not omniscient knowledge. They're predicting based on positioned information. They might be wrong.
This acknowledgement doesn't absolve professionals of responsibility. They're still responsible for making carefully reasoned decisions based on best available evidence. But it does shift the nature of professional responsibility from "guarantee no harm" (impossible) to "make reasoned judgements acknowledging uncertainty" (difficult but possible).
"I work with uncertainty, not from certainty" recognises that professional practice involves acting despite fundamental uncertainty. The professional cannot wait for certainty—it's not available. They must act on constructed models, knowing those models might be wrong, and bear responsibility for those actions.
This is harder than "do no harm." It requires living with doubt, acknowledging fallibility, and accepting that even well-intentioned, carefully reasoned decisions might cause harm. But it's more honest about the nature of professional practice in a world where humans operate from constructed models, not direct access to reality.
If "do no harm" is structurally impossible, why does it persist as an ethical principle?
Because it performs certainty. Organisations, patients, clients, and the public want professionals who appear confident and certain. "I'm doing my best with limited information and might be wrong" doesn't inspire confidence. "First, do no harm" sounds certain, decisive, and ethically grounded.
The principle allows professionals to frame their constructed assessments as ethical certainties. It transforms "I think this option causes less harm based on my constructed model" into "I am doing no harm." This linguistic transformation makes the uncertainty tolerable—for the professional, for the organisation, and for those receiving the professional's services.
But this performance of certainty through "do no harm" can be dangerous. It can discourage honest acknowledgement of uncertainty, suppress discussion of trade-offs, and create false confidence in decisions that are actually based on constructed assessments of probable harm and benefit.
The alternative—acknowledging that all actions cause some harm, that harm assessments are constructed models, and that professionals work with uncertainty rather than from certainty—is less comforting but more honest. And in a world where humans operate from fabricated realities, honesty about fabrication might be the most ethical stance available.
Fabrication isn't cost-free. The brain expends significant resources constructing internal models, managing predictions, and processing prediction errors. Most of the time, this process operates smoothly enough that people don't notice the computational load. But certain situations overwhelm the fabrication system, causing stress, confusion, or shutdown. Understanding these situations helps explain why some environments feel overwhelming and others feel manageable.
As discussed in When Your Brain Has a Mind of Its Own (Young, 2025), the limbic system can override cortical processing when it detects threat. This creates a particular problem for fabrication: the limbic system operates on its own fabricated threat models, and those models can be systematically wrong.
The neurological mechanism is straightforward: "When stress hormones flood your system, blood flow redirects from your prefrontal cortex toward your limbic system and motor areas. Your thinking brain literally receives less oxygen and glucose. This explains why stress makes us forget, freeze, or say things we immediately regret." The fabrication system that normally operates through the cortex—building sophisticated models, integrating memory, generating predictions—becomes temporarily unavailable. What remains is the limbic system's crude, fast threat fabrication.
Consider anxiety. The anxious person's brain fabricates threats—catastrophic outcomes, social rejection, physical danger—based on evolved priors and learned associations. These fabricated threats feel as real as actual threats because the limbic system responds to the internal model, not to external reality. The fabrication generates genuine physiological stress responses: elevated heart rate, muscle tension, heightened alertness.
The cortical brain might know, intellectually, that the threat isn't real. But that knowledge doesn't stop the fabrication. The limbic system continues running its threat model, continues generating stress responses, continues treating the fabricated danger as if it were actual danger. The person experiences genuine fear of a fabricated threat—and cannot simply think their way out of that fear because the fabrication is operating below the level of conscious control.
This is why telling someone "don't worry, it's fine" rarely helps with anxiety. The cortical brain might agree intellectually, but the limbic system is fabricating threat regardless. The fabrication generates real experience—real fear, real stress—even though the threat is internally constructed.
The limbic override operates equally in pursuit of pleasure. The impulse purchase, the expensive item bought on credit, the substance promising relief—each involves the limbic system fabricating an intensely pleasurable outcome whilst downplaying risk. The fabricated comfort of the purchase or substance feels immediately real; the future burden of debt or addiction remains abstract. Loan repayments seem manageable because the limbic system is responding to the imagined satisfaction, not to the cortical brain's more measured financial assessment. The fabrication comforts and pleases the limbic system so effectively that rational objections—"I can't afford this," "This won't actually solve the problem," "I'll regret this tomorrow"—become secondary. The cortical brain might know intellectually that the decision is unwise, but the limbic fabrication of immediate reward overrides that knowledge.
Not all brains fabricate identically. Some brains are hypersensitive to sensory input, processing more detail and generating stronger prediction errors for minor changes. Others are hyposensitive, requiring more intense stimulation before prediction errors register. These differences in sensory processing affect fabrication capacity and the environments people find manageable or overwhelming (Dunn, 1997; Miller et al., 2007).
Hypersensitivity: A hypersensitive brain processes more sensory detail, generates more prediction errors, and updates its internal model more frequently. This can be advantageous—detecting subtle changes, noticing fine details, remaining alert to environmental shifts. But it also means the brain is doing more fabrication work. More incoming data must be processed, more predictions must be generated, more errors must be resolved.
In high-stimulation environments—crowded spaces, noisy rooms, visually complex scenes—the hypersensitive brain becomes overwhelmed. There's too much sensory input to process, too many prediction errors to resolve, too much fabrication required. The computational load exceeds capacity, and the person experiences sensory overload: exhaustion, irritability, need for withdrawal.
This isn't "being too sensitive" in a judgemental sense. It's computational reality: this brain requires more resources to construct from this level of sensory input, and those resources aren't unlimited. The overwhelm is as real as a computer slowing down when running too many programs simultaneously.
Hyposensitivity: A hyposensitive brain requires more intense stimulation before prediction errors register. Subtle sensory input doesn't generate strong enough signals to update the internal model. This can make quiet, low-stimulation environments feel empty or boring—there's insufficient prediction error to maintain engagement.
Hyposensitive individuals might seek intense sensory experiences: loud music, spicy food, physical roughhousing, extreme sports. These aren't signs of "acting out" or "attention-seeking"; they're computational requirements. This brain needs stronger sensory input to generate sufficient prediction errors to maintain an engaging fabricated reality.
Autism and Predictive Differences: Autistic brains appear to construct differently, with some research suggesting reduced reliance on prediction and increased processing of sensory detail (Van de Cruys et al., 2014). This might explain both strengths (excellent detail perception, pattern recognition) and difficulties (overwhelming sensory experiences, difficulty with social prediction).
If an autistic brain is less able to rely on prediction—if it must process more incoming sensory data rather than confirming predictions—then it's doing more computational work. In high-stimulation environments, this increased processing load can cause overwhelm and shutdown. The brain simply cannot process that much sensory input without prediction shortcuts, and the system becomes overloaded.
This helps explain why autistic individuals might need lower-stimulation environments, predictable routines, and advance warning of changes. These aren't inflexible preferences; they're computational requirements for a brain that fabricates differently.
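The capacity point can be sketched with a toy precision-weighted prediction-error loop. The gain parameter, the threshold, and the input stream below are all invented for illustration; this is not a claim about actual neural circuitry, just a demonstration that identical input can demand very different amounts of model-updating work.

```python
# A toy prediction-error loop (an assumed illustrative model, not a claim
# about any specific neural implementation). "Gain" scales how strongly the
# same sensory change registers as an error demanding processing.

def processing_load(signal, gain, threshold=1.0):
    """Count how many samples generate prediction errors large enough to process."""
    prediction = signal[0]
    load = 0
    for sample in signal:
        error = gain * abs(sample - prediction)
        if error > threshold:
            load += 1              # error registers: the model must update
            prediction = sample    # update the internal model
    return load

# The same mildly fluctuating environment, seen through different gains.
environment = [0.0, 0.4, 0.1, 0.5, 0.2, 0.6, 0.1, 0.5, 0.3, 0.7]

print("hyposensitive (gain 0.5):", processing_load(environment, gain=0.5))
print("typical (gain 2.0):      ", processing_load(environment, gain=2.0))
print("hypersensitive (gain 8.0):", processing_load(environment, gain=8.0))
```

At low gain the stream generates almost no errors worth processing; at high gain nearly every sample forces an update. Same environment, radically different fabrication load.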
Parents and teachers often observe young children closing their eyes or covering their faces when asked a challenging question. This appears to be avoidance, but it's actually strategic resource management.
Visual processing consumes significant computational resources. The brain must process ten million bits per second of visual input, generate predictions, manage saccades, construct 3D models from 2D input, and maintain a stable visual experience despite constant eye movement. All of this happens automatically, below conscious awareness, but it requires resources.
When a child closes their eyes, they eliminate the visual fabrication load. They're no longer processing visual input, generating visual predictions, or maintaining visual models. Those computational resources become available for other tasks—like processing a complex question, retrieving memories, or constructing an answer.
This isn't avoidance or distraction. It's the child's brain optimising resource allocation. The question is computationally demanding, so they reduce demand elsewhere. Closing eyes is a natural, intelligent strategy for managing fabrication load.
Adults do this too, though often less obviously. People often look away from others' faces when thinking hard—not because they're avoiding eye contact, but because processing facial expressions consumes computational resources that are needed for the thinking task. The "distant stare" during concentration is the same phenomenon: reducing sensory processing to free resources for internal fabrication.
When fabrication demands exceed capacity, systems shut down. This is sensory overload: the brain cannot process the incoming sensory data, cannot generate adequate predictions, cannot resolve the flood of prediction errors. The fabrication system becomes overwhelmed.
The experience is intensely unpleasant. The world feels chaotic, incomprehensible, threatening. The brain is trying to construct a coherent internal model but cannot because there's too much input, too many errors, too little capacity. This generates stress, anxiety, and eventually shutdown or meltdown.
Shutdown: The person withdraws, becomes non-responsive, needs isolation. This is the brain's protection mechanism: if it cannot process the current environment, it stops trying. The person might seem "spaced out" or "distant"—they're not being rude or avoidant; their fabrication system has exceeded capacity and temporarily stopped functioning normally.
Meltdown: The overwhelm triggers an acute stress response. The limbic system interprets the fabrication failure as threat, generates fight-or-flight responses, and the person might cry, shout, flee, or become physically aggressive. This isn't "bad behaviour" or "manipulation"; it's the brain's stress system responding to genuine overwhelm.
Understanding these as fabrication capacity issues rather than behavioural choices changes how they should be addressed. The solution isn't punishment or reasoning—the person's fabrication system is already overwhelmed. The solution is reducing sensory load: quieter environment, dimmer lighting, fewer people, less complexity. This reduces fabrication demands and allows the system to recover.
People differ in how much fabrication complexity they can manage. Some people thrive in busy, chaotic environments—their brains can handle the high sensory load, generate rapid predictions, and process numerous prediction errors without overwhelm. Others need quiet, minimalist environments—their brains require lower sensory input to construct comfortably.
Neither is superior. They're different computational profiles, shaped by neurology, development, and experience. Problems arise when the environment mismatches the individual's fabrication capacity:
- A person who needs low stimulation forced into an open-plan office with constant noise and interruption
- A person who needs high stimulation placed in an isolated, quiet role with minimal interaction
- A child with high fabrication capacity bored in a rigid, repetitive classroom
- A child with low fabrication capacity overwhelmed in a chaotic, unstructured environment
Recognising fabrication capacity as a genuine constraint—not a preference or a personality quirk—helps design better environments. The person isn't being difficult; their brain has particular requirements for the level of sensory input it can fabricate from without overwhelm.
Understanding fabrication overwhelm suggests practical strategies:
For individuals: Research suggests recognising personal fabrication capacity limits, designing environments that match processing needs, using strategic sensory reduction (closing eyes, noise-cancelling headphones, dim lighting) when concentrating, and allowing recovery time after high-fabrication-demand situations.
For parents and educators: Evidence indicates that closing eyes or looking away during thinking is productive rather than avoidant. Research suggests understanding meltdowns as fabrication overwhelm rather than misbehaviour, providing low-stimulation spaces for recovery, and adjusting environmental complexity to match individual capacity.
For organisations: Studies show benefits from recognising that people have different fabrication capacities, offering environmental options (quiet spaces, flexible work locations), reducing unnecessary sensory complexity in work environments, and allowing breaks for fabrication recovery.
The goal isn't to eliminate fabrication demands—that's impossible. But understanding that fabrication consumes resources, that capacity varies, and that overwhelm represents genuine computational failure rather than personal weakness, helps create more manageable environments and more compassionate responses to fabrication stress.
Fabrication is inevitable, but it's not entirely beyond influence. While people cannot stop fabricating—it's fundamental to how brains work—they can learn to work with fabrication more skilfully. This involves understanding when to reduce fabrication load, recognising when fabrications are likely to be wrong, and building systems that acknowledge rather than deny the fundamental uncertainty of constructed knowledge.
Research in metacognition suggests that awareness of fabrication begins with acknowledging that the experienced world is primarily brain-generated, with external reality providing only occasional corrections. This feels counterintuitive because fabrication is seamless. However, detecting problematic fabrications from inside one's own thinking is extremely difficult: the fabrication feels like reality precisely because it's experienced as reality.
Evidence suggests more practical strategies involve externalising thinking and seeking alternative perspectives. Research on decision-making indicates that people benefit from talking through their reasoning with trusted others—friends, family members, therapists, or colleagues—who can spot patterns, assumptions, or leaps in logic that aren't apparent to the person themselves. The trusted listener isn't more intelligent or more objective; they're simply viewing the thinking from a different position with different fabrications, which allows them to notice what "stands out" or seems inconsistent.
Similarly, studies on reflective practice show that writing down plans, decisions, or reasoning can create useful distance. When thoughts remain internal, they feel like direct knowledge. When externalised on paper or screen, they become "things I thought," which creates perspective. Reading back what was written yesterday, last week, or last month often reveals assumptions, emotional reasoning, or gaps that weren't visible during the original thinking. The act of externalising creates enough separation to notice the fabrication as fabrication rather than experiencing it as truth.
Research indicates these externalisation strategies work because they provide correction mechanisms that internal reflection cannot. The trusted conversation partner offers prediction errors—"Have you considered...?" "That seems inconsistent with..." "What about...?"—that update the internal model. The written record provides temporal distance, allowing the same brain to construct differently when reviewing past thinking. Neither strategy eliminates fabrication, but both create conditions where fabrications can be noticed, questioned, and revised.
Since fabrication consumes computational resources, strategically reducing fabrication demands can improve cognitive performance. This explains many intuitive practices that people discover without understanding the mechanism.
Visual Reduction
The brain's visual fabrication is computationally expensive. Reducing visual input frees resources for other cognitive tasks: closing the eyes, lowering light levels, turning away from faces, and simplifying what is on screen all cut the visual fabrication load.
People often discover these strategies naturally. Writers dim lights. Students close eyes during exams. Therapists sit slightly to the side rather than face-to-face with clients. Programmers prefer dark themes on screens. These aren't preferences; they're computational optimisations.
Auditory Reduction
Similar principles apply to auditory fabrication: quiet rooms, noise-cancelling headphones, and steady, predictable background sound all reduce the prediction errors the auditory system must generate and resolve.
The difference between people who need silence and people who need background noise might relate to how their brains handle prediction errors. For some, any sound is a prediction error that demands processing. For others, complete silence generates prediction errors ("is that a sound?") while consistent background noise provides predictable input that requires minimal processing.
Physical Movement and Mental Clarity
Physical movement affects cognitive processing, though the mechanism isn't entirely clear. It might relate to embodied cognition (Clark, 1997; Barsalou, 2008)—the idea that cognitive processing is distributed between brain and body, not contained entirely in the brain.
People report clearer thinking whilst walking, ideas arriving during exercise, and stuck problems loosening after a change of physical activity.
The fabrication framework suggests a possible explanation: movement changes the brain's fabrication patterns. The sensory input from movement—proprioception, vestibular information, changing visual scenes—might disrupt stuck fabrication patterns and force the brain to generate new models. This could explain why movement helps when thinking is stuck: it's not just "getting blood flow" but actually disrupting and resetting fabrication patterns.
Beyond reducing load, people can develop metacognitive awareness of their fabrication process—learning to observe their own model-building and hold models more lightly.
Meditation: Many meditation practices involve observing thoughts without engaging with them. This is, in effect, watching fabrication happen. The brain generates thoughts—fabricates internal experiences—and the meditator learns to notice these as constructions rather than truths. This doesn't stop fabrication, but it changes the relationship to it: thoughts become "things my brain is producing" rather than "the truth about reality."
Mindfulness: Similar to meditation, mindfulness practices involve noticing present experience without immediately interpreting or judging it. This creates a gap between sensory input and fabricated interpretation. The mindful person might notice: "I felt a sensation → I fabricated an interpretation (tension means stress) → I generated an emotional response (stress means something is wrong)." Recognising this process as fabrication allows for alternative interpretations: tension might mean excitement, concentration, or physical tiredness, not necessarily stress.
Journaling: Externalising internal fabrications by writing them down can reveal their constructed nature. When thoughts remain internal, they feel like direct knowledge. When written down, they become "things I thought," which are easier to question, revise, or recognise as provisional models rather than certain truths.
Sleep: Sleep appears to play a role in reorganising constructed models. During sleep, the brain processes the day's experiences, strengthens some associations, weakens others, and integrates new information into existing models (Walker, 2017). This might be why problems that seem intractable before sleep sometimes appear clearer afterward—the fabricated model has been reorganised during sleep, allowing new patterns to emerge.
Professionals can develop practices that acknowledge fabrication and work with uncertainty more honestly:
Acknowledge Positioned Knowledge
Explicitly recognise that professional assessments are made from particular positions with particular information. Rather than presenting assessments as objective truth, frame them as positioned observations: "From my position as social worker, with access to these particular observations and reports, my current model is..."
State Confidence Levels Honestly
Distinguish between strong evidence and weak evidence, certain knowledge and uncertain predictions. Rather than presenting all professional statements with equal confidence, indicate actual certainty: "I'm very confident that X happened. I'm less certain about Y. My prediction about Z is a guess based on limited information."
Distinguish Observation from Interpretation
Research suggests separating what was directly observed from the fabricated interpretation of those observations. For example: "I observed the child crying and the parent looking away. My interpretation is that the parent was distressed. An alternative interpretation could be that the parent was overwhelmed and needed a moment."
Regular Reflection on Fabrication Errors
Maintaining awareness of when fabrications were wrong can be valuable. This doesn't mean dwelling on mistakes but systematically noticing when predictions didn't match reality, when models were inadequate, when positioned knowledge missed important information. This trains attention to fabrication as fabrication rather than truth.
As The View from Here (Young, 2025) emphasises regarding adaptation versus direct change: "The science is clear: humans are exquisitely designed to adapt to environmental changes whilst resisting direct change attempts. Four billion years of evolution created organisms that maintain stability fiercely but adapt brilliantly when conditions shift." This principle applies to professional practice: rather than attempting to change people's fabrications directly, practitioners can create environmental conditions where alternative fabrications become more adaptive.
Beyond individual practice, organisations can build systems that acknowledge fabrication and work with uncertainty more effectively:
Build in Dissenting Voices
Organisations that demand consensus suppress prediction errors and reinforce collective model construction. Research suggests deliberately including dissenting perspectives. This isn't just "devil's advocate" (which can be performative) but genuinely seeking out people who fabricate differently and giving them real authority to challenge collective models.
Red Team Exercises
Military and intelligence organisations use red teams: people whose job is to attack the organisation's plans and assumptions. This forces explicit consideration of how the collective model construction might be wrong. A red team for a child protection case conference might ask: "What if our model of this family is completely wrong? What evidence have we ignored? What assumptions have we made?"
Pre-Mortems
Before implementing a decision, organisations might conduct a pre-mortem: imagining the decision has failed catastrophically, and working backward to identify how the failure occurred (Klein, 2007). This forces explicit consideration of how current fabrications might be wrong. "Our plan failed. Why?" might reveal assumptions, blind spots, or positioned knowledge limitations that aren't apparent in the optimistic planning fabrication.
Diverse Perspectives
Organisations often seek consensus and shared understanding. But diversity of fabrication—different people with different positions, experiences, and models—is more likely to reveal errors. The organisation that values cognitive diversity over consensus is less vulnerable to collective model construction becoming dangerously wrong.
Epistemic Humility in Policy
Organisational policies often demand certainty: risk must be assessed, outcomes must be predicted, decisions must be justified. Policies that acknowledge uncertainty would be more honest but require cultural change. Instead of "risk must be accurately assessed," a policy might state: "risk must be assessed based on available evidence, recognising that these assessments are positioned, provisional, and might be wrong."
This feels uncomfortable because organisations are accountable. If a policy acknowledges that professional judgements might be wrong, doesn't that undermine confidence in professionals? But the alternative—pretending certainty that doesn't exist—might be more dangerous. It encourages overconfident fabrications, suppresses dissent, and prevents honest acknowledgement of uncertainty.
Here's the paradox: understanding that everything is fabricated doesn't stop fabrication. Reading this essay doesn't make the concert arena chairs stop looking like they extend into the distance. Learning about predictive coding doesn't make optical illusions stop working. Knowing that memory is reconstructive doesn't make false memories feel false.
Fabrication continues automatically, below consciousness, generating experience that feels like direct perception. The knowledge that it's fabrication changes something, but not the fabrication itself. What changes is the relationship to fabrication: the willingness to question what feels certain, the humility about what is "known," the recognition that confidence and accuracy are not the same thing.
This is uncomfortable. It would be more satisfying if understanding fabrication meant transcending it, gaining direct access to reality, or at least being able to distinguish fabricated experience from "real" experience. But that's not how it works. The fabrication continues. What changes is awareness: knowing that the experienced world is constructed, that certainty is a feeling rather than a guarantee, and that even the most sophisticated model constructions—scientific theories, professional assessments, confident beliefs—are still fabrications, still models, still subject to error.
If the brain fabricates most of experience, if all knowledge is constructed from positioned perspectives, if certainty is a feeling rather than a guarantee—how can anyone ever know what's real? How can science work? How can knowledge advance?
This apparent paradox has a solution, demonstrated most clearly in the scientific method. The path to reality isn't through claiming direct access to truth. It's through systematically studying the measurement process itself. Science works not by assuming humans have unmediated perception of reality, but by recognising that all observation is mediated—and then studying the mediating instruments.
The principle can be stated simply: to understand what you're observing, you must first understand how you're observing it. To measure reality, you must first measure your measuring instrument. And to understand that measurement, you might need to measure the instrument you used to measure the first instrument. This recursive process of examining the tools of observation is what allows science to approximate reality despite the fundamental limits of fabricated perception.
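A toy calibration routine makes the idea tangible. The thermometer, the reference points, and the bias values below are invented for illustration, a sketch of the logic rather than a real metrology procedure.

```python
# Toy calibration: estimate a thermometer's bias from known reference points,
# then correct new readings. All values are invented for illustration.

references = [(0.0, 1.8), (50.0, 52.1), (100.0, 101.9)]  # (true value, instrument reading)

# Measure the measuring instrument: its average offset from known standards.
bias = sum(reading - true for true, reading in references) / len(references)

def corrected(reading):
    """Interpret a raw reading through the model of the instrument itself."""
    return reading - bias

raw = 37.0
print(f"instrument says {raw}, calibrated estimate {corrected(raw):.1f}")
# Without studying the instrument first, the raw reading would be mistaken
# for reality; with calibration, we subtract its known distortion.
```

Only after measuring the instrument's own distortion against known standards can its readings be interpreted as statements about the world rather than artefacts of the measure.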
Earlier, we noted that Newton's equations work reliably enough to launch spacecraft—a pragmatic certainty that distinguishes physics from human judgement. But how did Newton achieve such reliable knowledge whilst operating from a fabricated brain like everyone else? The answer lies in his systematic approach to measurement itself.
Newton exemplified this approach in his studies of vision. Before trusting what he saw, he studied eyes—including his own eye. He wanted to understand the instrument of sight: what it could detect, what it distorted, what remained outside its capacity to observe. By understanding the limitations and biases of the eye as a measurement instrument, Newton could better interpret what his eyes showed him and recognise where vision might mislead.
This wasn't abstract philosophy. Newton famously conducted experiments on his own eye, including inserting a bodkin (a blunt needle) between his eyeball and eye socket to observe how pressure on the eye created false visual perceptions. He was literally measuring the measure—studying how the instrument of vision itself created experience, separate from external reality (Newton, 1704; Shapiro, 1993).
Socrates approached the same problem from a different angle. Rather than studying physical measurement instruments, he examined the instrument of knowledge itself: human reasoning, certainty, and belief. His method—systematic questioning that exposed contradictions and unexamined assumptions—was a way of testing the measuring instrument of human understanding.
When Socrates claimed to know only that he knew nothing, he wasn't being falsely modest. He was recognising that genuine knowledge requires understanding the limits and biases of the knowing process itself. Those who claimed certain knowledge without examining how they knew it were using an uncalibrated instrument. They might be right by accident, but they had no way to distinguish accurate beliefs from convincing fabrications.
The Socratic method is measurement of the measure: examining beliefs to understand their foundations, testing reasoning to expose its assumptions, questioning certainty to reveal its constructed nature. This process doesn't guarantee access to truth, but it helps identify where fabrication has been mistaken for reality.
Modern science inherits both Newton's physical measurement of instruments and Socrates' examination of reasoning. The scientific method is fundamentally about studying the measurement process:
Control Groups and Experimental Design: Scientists don't just measure phenomena—they measure their measurement methods. Control groups reveal what the experimental setup itself contributes to results, separate from the phenomenon being studied.
Replication and Peer Review: Results must be reproducible by others using different instruments, different positions, different observers. This helps distinguish genuine patterns in reality from artefacts of particular measurement approaches or observer biases.
Quantification and Statistical Analysis: By measuring precisely and understanding measurement error, scientists can distinguish signal from noise—real patterns from random variation or systematic bias in the measuring instrument (a small numerical sketch follows this list).
Falsification Rather Than Confirmation: Science advances not by proving theories true but by attempting to prove them false. This acknowledges that fabricated certainty feels identical to genuine knowledge, so the path to reality lies in rigorous attempts to demonstrate where fabrications fail to match external reality.
Meta-Analysis and Theory: Science studies not just phenomena but patterns across studies, examining how different measurement approaches converge or diverge. This reveals systematic biases in particular measurement methods.
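The signal-versus-noise point can be illustrated with a minimal sketch. The readings below are invented, and the two-sigma cut-off is a crude rule of thumb, not a full statistical test; the point is only that knowing the measurement's own variability is what licenses the distinction.

```python
import statistics

# Toy "signal vs noise" check with invented readings: is the difference
# between two measurement batches larger than the measurement error?

baseline = [10.1, 9.8, 10.3, 9.9, 10.0, 10.2]
treated  = [10.9, 11.2, 10.8, 11.0, 11.1, 10.7]

def sem(samples):
    """Standard error of the mean: how noisy is this batch's average?"""
    return statistics.stdev(samples) / len(samples) ** 0.5

difference = statistics.mean(treated) - statistics.mean(baseline)
noise = (sem(baseline) ** 2 + sem(treated) ** 2) ** 0.5

print(f"difference={difference:.2f}, noise={noise:.2f}")
if difference > 2 * noise:   # crude two-sigma rule of thumb
    print("signal: the difference exceeds what measurement error explains")
else:
    print("noise: the instruments' own variation could account for it")
```

Here the measurement error is itself measured first; only then does the observed difference earn the label "signal".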
The power of science comes precisely from acknowledging fabrication. By recognising that observation is constructed, that instruments have limitations, that position shapes perspective, and that certainty is a feeling rather than a guarantee, science creates methods to work around these fundamental constraints.
This is why science often seems frustratingly uncertain compared to ideological or faith-based systems. Science rarely claims absolute certainty because it's measuring the measure—always aware that current measurement instruments might themselves be flawed, that current theories might be fabrications that happen to match available observations but will fail when tested more rigorously.
But this humility is precisely what allows science to approximate reality more closely over time. By constantly examining measurement instruments, questioning assumptions, and testing where fabrications fail to match external reality, science builds increasingly accurate models. Not certain knowledge—there's no such thing—but progressively less-wrong approximations that recognise their own limitations.
The principle of measuring the measure applies far beyond laboratory science. In any professional domain where knowledge and certainty matter, the path to better understanding lies in examining the knowing process itself:
Social Work: Rather than assuming assessments directly capture reality, examining how positioned knowledge shapes observation, how professional frameworks bias attention, how cultural assumptions influence interpretation. The assessment tool is part of what needs assessing.
Medicine: Studying not just diseases but diagnostic processes—understanding how symptoms are interpreted, how tests can mislead, how treatment biases shape what gets noticed. The physician's perception is part of the diagnostic picture.
Education: Examining not just student performance but how performance is measured, what testing methods reveal or conceal, how teacher expectations influence observation. The evaluation instrument shapes what appears as knowledge.
Law: Recognising not just evidence but how evidence is gathered, interpreted, weighted—understanding that legal processes construct truth as much as discover it. The judicial mechanism is part of what determines outcomes.
In each case, the path to better knowledge lies not in claiming more certain access to reality but in more carefully examining how knowledge is constructed—measuring the measure, understanding the instrument, recognising that all observation is positioned and mediated.
This is the gift from science that Socrates anticipated: genuine wisdom comes not from certainty but from understanding the limits and nature of the knowing process itself. Reality remains external and largely unknowable through direct perception. But by systematically studying how we perceive, how we measure, how we know—by measuring the measure—humans can build progressively less-wrong models that recognise their own constructed nature while still approaching truth.
Madonna's 1984 song "Material Girl" captured something about modern consumer culture: living in a material world where material possessions seemed to define worth and identity. But neuroscience reveals something more fundamental: humans aren't living in a material world at all. They're living in fabricated worlds—internally constructed models experienced as external reality.
This isn't a flaw or a limitation. It's how brains work. The alternative—the mosquito's stripped-down sensory world with minimal fabrication—would eliminate much of what makes human cognition powerful. Fabrication enables prediction, planning, creativity, abstract thought, and imagination. It allows humans to construct complex societies, develop sophisticated technologies, and build civilisations. The massive model construction capacity of the human brain is its greatest strength.
But it's also a fundamental vulnerability. The same mechanisms that enable sophisticated thought also enable systematic delusion. The same processes that allow for planning and creativity also allow for ideology and false certainty. The fabrication that makes humans intelligent also makes them capable of being confidently, catastrophically wrong.
Consider again the model construction spectrum:
The Mosquito: With its 220,000 neurons, the mosquito has no illusion of knowledge. Its responses are hardwired, its sensory world is stripped down, and its predictions are minimal. It cannot be wrong in sophisticated ways because it cannot fabricate sophisticated models. It lives much closer to direct stimulus-response, with minimal internal model-building.
The Human (Unaware): Humans with 86 billion neurons fabricate elaborately, experience those fabrications as direct perception, and mistake internal models for external reality. They feel certain because their fabrications are coherent, stable, and socially reinforced. They don't recognise their experienced world as constructed, so they assert their fabrications as facts, defend them with conviction, and rarely question whether their internal models might diverge from reality.
The Human (Aware): Knowing that all experience is fabricated doesn't stop fabrication—it's automatic and inevitable. But it changes the relationship to fabrication. The aware human still fabricates, still experiences that fabrication as reality, but holds those fabrications more lightly. They recognise that certainty is a feeling, not a guarantee. They acknowledge that their models might be wrong, that their positioned knowledge is limited, and that even sophisticated model constructions remain fabrications.
This third position—fabricating consciously rather than unconsciously—doesn't eliminate error. The aware person still makes mistakes, still has biases, still operates from constructed models. But they're more willing to update those models when confronted with prediction errors, more open to alternative fabrications, and less attached to the feeling of certainty.
Awareness of fabrication doesn't change the mechanism. The brain still generates predictions, fills in gaps, constructs internal models, and experiences those models as reality. But awareness changes several important things:
Epistemic Humility: Recognising that knowledge is fabricated—that people operate from internally constructed models rather than direct reality—introduces appropriate caution about certainty. "I know" becomes "my current model suggests" or "based on available evidence, I think." This isn't paralysing uncertainty; it's honest acknowledgement of limitation.
Willingness to Update: If someone recognises their beliefs as constructed models rather than certain truths, they're more willing to revise those models when faced with contradictory evidence. The model becomes a tool—useful but provisional—rather than an identity to defend; a toy calculation after this list sketches what that revision can look like.
Reduced Collective Delusion: If group members recognise that consensus doesn't equal truth—that collective model construction can be collectively wrong—they're more likely to tolerate dissent, seek out contradictory evidence, and question shared assumptions. This doesn't eliminate groupthink, but it makes it less automatic and less catastrophic.
Professional Honesty: If professionals acknowledge that their assessments are constructed models rather than objective facts, they can state their uncertainty honestly rather than performing false confidence. This creates more realistic expectations and better decision-making.
Personal Compassion: If someone understands that other people's apparently irrational beliefs are constructed models that feel true from inside those models, they might approach disagreement with more curiosity and less judgment. The goal becomes understanding how the other person's fabrication works, not just asserting that one's own fabrication is correct.
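Here is that toy calculation, a minimal Bayesian sketch in which every number is invented for illustration. A belief held as a probability can be moved by evidence in proportion to how strongly the evidence favours one model over another, rather than being defended outright or abandoned at the first anomaly.

```python
# Hypothetical sketch: revising confidence in a model via Bayes' rule.

def bayes_update(prior, p_evidence_if_true, p_evidence_if_false):
    """Return P(model correct | evidence) given P(model correct) beforehand."""
    numerator = p_evidence_if_true * prior
    denominator = numerator + p_evidence_if_false * (1 - prior)
    return numerator / denominator

belief = 0.90  # start out quite confident the current model is right

# Three observations, each twice as likely if the model is wrong.
for observation in range(1, 4):
    belief = bayes_update(belief, p_evidence_if_true=0.2, p_evidence_if_false=0.4)
    print(f"after contradictory observation {observation}: P(model) = {belief:.2f}")

# Prints 0.82, 0.69, 0.53: not abandoned at the first anomaly,
# not defended forever either.
```

Nothing suggests brains literally compute this way, but it captures the stance: the model is held firmly enough to act on and loosely enough to revise.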
In professional contexts, understanding fabrication suggests several shifts:
From "I Know" to "My Current Model Suggests"
Professional practice often demands confident assertions: "This child is at high risk," "This diagnosis is accurate," "This intervention will work." But these are fabricated predictions based on positioned knowledge. More honest framing would acknowledge the fabrication: "Based on my assessment from this position, my model suggests high risk," "The diagnostic evidence points toward this condition, though alternative diagnoses remain possible," "This intervention has worked in similar cases, but individual response varies."
This doesn't undermine professional authority—it redefines it. The professional isn't claiming omniscience; they're claiming expertise in constructing well-informed models from positioned evidence. That's genuinely valuable, and it's more honest.
From "Do No Harm" to "Aim to Do More Good Than Harm"
As discussed earlier, "do no harm" is structurally impossible when operating from constructed knowledge. All interventions cause some harm, and harm assessments are themselves fabrications. Shifting to "aim to do more good than harm" acknowledges the uncertainty, makes the trade-offs explicit, and allows for honest discussion of which harms are more acceptable.
From Certainty Performance to Honest Uncertainty
Organisations demand certainty because stakeholders want confidence. But performed certainty based on constructed models might be more dangerous than honest uncertainty. If professionals acknowledged: "I don't know for certain, but here's my carefully reasoned assessment," it might create better decisions than confident assertions based on questionable fabrications.
This requires cultural change. Organisations must become comfortable with uncertainty, stakeholders must accept that professionals cannot guarantee outcomes, and systems must reward honest assessment rather than confident prediction.
From Individual Expertise to Collective Sense-Making
If even experts are constructing from positioned knowledge, then collective sense-making—bringing together multiple positioned perspectives—might be more valuable than individual expertise. Rather than seeking the single most expert opinion, complex decisions might benefit from explicitly contrasting different fabrications, identifying where they agree and where they diverge, and recognising that truth might not reside in any single position but in the synthesis of multiple perspectives. This principle helps explain why many legal systems safeguard justice through juries: twelve randomly selected people, each with their own fabrications and constructs, deliberating together not to achieve certainty but to work deliberately with uncertainty. The wisdom lies not in eliminating positioned knowledge but in requiring consensus across multiple positions—a recognition that justice is better served by diverse fabrication than by any single authoritative view, however expert.
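A toy simulation, with purely invented numbers, illustrates why aggregating positioned estimates can beat any single vantage point. Each simulated observer sees the same true value through a position-specific bias plus random noise; because the biases point in different directions, the panel's average cancels much of what no individual observer can see past.

```python
import random
import statistics

random.seed(7)  # fixed seed so the sketch is reproducible
TRUE_VALUE = 100.0

def positioned_estimate():
    """One observer: truth distorted by their position, plus noise."""
    position_bias = random.uniform(-10, 10)  # each vantage point skews differently
    noise = random.gauss(0, 5)
    return TRUE_VALUE + position_bias + noise

panel = [positioned_estimate() for _ in range(12)]  # a "jury" of twelve positions

typical_error = statistics.mean(abs(e - TRUE_VALUE) for e in panel)
panel_error = abs(statistics.mean(panel) - TRUE_VALUE)

print(f"typical individual error: {typical_error:.1f}")
print(f"error of the panel mean:  {panel_error:.1f}")
```

The effect only works when the positions genuinely differ: twelve copies of the same bias average to that same bias, which is one reason diversity of fabrication matters more than headcount.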
Beyond professional practice, understanding fabrication has implications for personal life:
Managing Cognitive Load: Recognising that fabrication consumes resources helps explain why some situations feel overwhelming. The person whose brain becomes exhausted in crowded, noisy environments isn't weak or oversensitive—their fabrication system is being asked to process more than capacity allows. Strategic environmental design (quiet spaces, visual simplicity, controlled sensory input) becomes self-care, not self-indulgence.
Recognising Stress Signals: Understanding that stress often involves fabricated threats—the limbic system treating internally constructed scenarios as if they were real dangers—helps make sense of anxiety and overwhelm. The threat feels real because the fabrication is real. But recognising it as fabrication might reduce the secondary stress of "why am I feeling this way when there's no real danger?"
Strategic Movement: Knowing that physical movement affects cognitive processing helps explain why exercise, walking, or pacing improves thinking. It's not just about health or blood flow; it's about disrupting stuck fabrication patterns and allowing the brain to generate new models.
Social Understanding: Recognising that everyone operates from fabricated worlds helps explain disagreement and conflict. People aren't simply "wrong" or "irrational"—they're constructing from different positions with different information and different histories. Their fabrications feel as true to them as different fabrications feel to others. This doesn't mean all fabrications are equally valid, but it does suggest that understanding requires engaging with how others' fabrications work, not just asserting one's own.
Before exploring the central paradox of living in a fabricated world, it's worth recalling two key concepts from earlier in this essay:
Hypothesis: An initial fabricated model—an educated guess about how something works, constructed from limited observations and prior knowledge, to be tested against reality through systematic observation.
Formulation: A hypothesis refined through repeated testing in closed systems until it reliably predicts outcomes every time specified conditions are met. The transformation from fabrication to pragmatic certainty—from "maybe this is how pancakes work" to "this is the recipe."
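In code terms, the difference between the two is a promotion rule. This is a hypothetical sketch (the run_trials helper and the pancake trials are invented for illustration): a hypothesis is promoted to a formulation only when it predicts the outcome every time the specified closed-system conditions are met.

```python
def run_trials(predict, closed_system_trials):
    """Does the model predict the outcome for every condition -> outcome pair?"""
    return all(predict(conditions) == outcome
               for conditions, outcome in closed_system_trials)

# Closed system: the same conditions produce the same outcome every time.
pancake_trials = [
    ({"batter": "thin",  "heat": "medium"}, "crepe"),
    ({"batter": "thick", "heat": "medium"}, "pancake"),
    ({"batter": "thick", "heat": "high"},   "burnt"),
]

# Initial fabricated model: an educated guess from limited observation.
hypothesis = lambda c: "pancake" if c["batter"] == "thick" else "crepe"
print("hypothesis holds:", run_trials(hypothesis, pancake_trials))    # False: heat matters

# Refined until it predicts reliably under every specified condition.
def formulation(c):
    if c["heat"] == "high":
        return "burnt"
    return "pancake" if c["batter"] == "thick" else "crepe"
print("formulation holds:", run_trials(formulation, pancake_trials))  # True: "the recipe"
```

In an open system the trial list is never complete, which is why the same promotion is unavailable there: some condition always remains unspecified.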
The Spock-McCoy dialogue captures the fundamental paradox perfectly. Spock's observation about gravity represents a formulation—Newton's equations tested so thoroughly they've become pragmatic certainty. McCoy's observation about fear represents fabrication with real consequences—an internally constructed response that nonetheless keeps organisms alive. Both are simultaneously true. Gravity operates with certainty in the closed system domain. Fear operates powerfully in the open system domain of human experience. The paradox is that humans live in both domains at once.
There's a deeper paradox in living consciously in a fabricated world: knowing that experience is constructed doesn't make it feel constructed. The concert arena chairs still look like they extend into the distance. Memory still feels like playback. Certainty still feels like certainty. Fear still feels like genuine threat detection, not predictive model activation. The fabrication continues automatically, generating experience that feels immediate, direct, and real.
This means that awareness of fabrication doesn't create perfect rationality or eliminate bias. The aware person still makes mistakes, still has blind spots, still operates from constructed models. But awareness might create something more modest but more valuable: epistemic humility, willingness to update beliefs, recognition of positioned knowledge, acknowledgement that certainty is a feeling rather than a fact in open systems whilst being achievable in closed systems, and understanding that both McCoy and Spock can be correct because they're operating in different domains.
Madonna's "Material Girl" knew she was performing—the song was self-aware about its own construction of identity through material possessions. Similarly, humans aware of fabrication know they're performing—not in the sense of being fake, but in the sense of recognising that their experienced reality is constructed, that their certainties are domain-dependent, and that even their most confident knowledge is a model, not direct truth.
The Three-Pound Supercomputer (Young, 2026) explained the mechanism: how the brain processes information through predictive coding, how ten million bits per second of visual input compress to ten bits per second of conscious awareness, how neural architecture enables the massive parallel processing that makes human cognition possible.
This essay explains the consequence: humans live in fabricated worlds. The three-pound supercomputer constructs internal models and experiences those models as external reality. This isn't a bug; it's how the system works. But it means that humans are fundamentally limited in their access to truth. They're always operating from constructed models, always seeing the world through predictions and expectations, always experiencing internally constructed reality as if it were direct perception.
The wise response isn't to pretend otherwise. It's not to seek impossible certainty, perform impossible objectivity, or claim impossible access to reality unmediated by fabrication. The wise response is to construct more carefully, hold conclusions more lightly, acknowledge positioned knowledge explicitly, and build systems that recognise rather than deny the fundamental uncertainty of human knowledge.
This is uncomfortable. It would be more satisfying to believe in certain knowledge, objective truth, and direct access to reality. But neuroscience reveals that humans don't have those things. They have fabrication—sophisticated, elaborate, often functional fabrication, but fabrication nonetheless.
Madonna sang about living in a material world. But the deeper truth is that humans are living in fabricated worlds—billions of internally constructed realities, each experienced as truth, each limited by position and processing, each vulnerable to systematic error. The question isn't whether to construct—that's inevitable. The question is whether to construct consciously or unconsciously, whether to hold models lightly or defend them as truth, and whether to build systems that acknowledge fabrication's limits or pretend those limits don't exist.
Living wisely in a fabricated world means recognising the fabrication, working within its constraints, and accepting that certainty is always provisional, knowledge is always positioned, and even the most sophisticated understanding is still an internally constructed model—useful, perhaps, but never final, never complete, and never certain.
This essay is Part 2 of a trilogy examining brain computation, predictive coding, and practical applications. Part 1 established how the brain computes. This essay explored what those mechanisms create—how predictive coding generates fabricated worlds and what this means for knowledge, certainty, and professional practice. But understanding the neurological foundations and fabrication mechanisms raises a crucial question: How do these patterns manifest when young people respond to professional intervention?
Completing the trilogy, the third essay applies these neurological and fabrication insights to a practical challenge in child protection: how to improve safety for young people aged 16 and over who are subject to Child Protection Plans whilst exercising significant autonomy in their own lives.
The central challenge: How do you safeguard young people who cannot be controlled? When a 16-year-old on a Child Protection Plan is making their own decisions about where they live, who they spend time with, and how they conduct their relationships, traditional child protection tools carry less weight.
The framework presents eight typologies of response pattern.
Drawing on Bifulco's Attachment Style Interview and Berne's Transactional Analysis, the framework adds dimensions of functioning (mild to marked), authenticity (present reality versus replayed patterns), and professional masking. A young person who responds to trusted relationships needs to be led through that relationship. One who responds to peer dynamics needs the social environment to shift. One who will only move when reality presses in needs boundaries maintained, not consequences cushioned. Understanding which pattern you're working with transforms how professionals approach safeguarding autonomous adolescents.
Asch, S. E. (1951). Effects of group pressure upon the modification and distortion of judgments. In H. Guetzkow (Ed.), Groups, leadership and men (pp. 177–190). Carnegie Press.
Baranek, G. T. (2002). Efficacy of sensory and motor interventions for children with autism. Journal of Autism and Developmental Disorders, 32(5), 397–422.
Barsalou, L. W. (2008). Grounded cognition. Annual Review of Psychology, 59, 617–645.
Bartlett, F. C. (1932). Remembering: A study in experimental and social psychology. Cambridge University Press.
Campbell-Palmer, R., Gow, D., Needham, R., Jones, S., & Rosell, F. (2016). The Eurasian beaver handbook: Ecology and management of Castor fiber. Pelagic Publishing.
Carreyrou, J. (2018). Bad blood: Secrets and lies in a Silicon Valley startup. Knopf.
Clark, A. (1997). Being there: Putting brain, body, and world together again. MIT Press.
Clark, A. (2013). Whatever next? Predictive brains, situated agents, and the future of cognitive science. Behavioral and Brain Sciences, 36(3), 181–204.
Dunn, W. (1997). The impact of sensory processing abilities on the daily lives of young children and their families: A conceptual model. Infants & Young Children, 9(4), 23–35.
Einstein, A. (1905). On the electrodynamics of moving bodies. Annalen der Physik, 17(10), 891–921.
Friston, K. (2010). The free-energy principle: A unified brain theory? Nature Reviews Neuroscience, 11(2), 127–138.
Gregory, R. L. (1997). Knowledge in perception and illusion. Philosophical Transactions of the Royal Society B: Biological Sciences, 352(1358), 1121–1127.
Herculano-Houzel, S. (2009). The human brain in numbers: A linearly scaled-up primate brain. Frontiers in Human Neuroscience, 3, 31.
Hohwy, J. (2013). The predictive mind. Oxford University Press.
Jervis, R. (2010). Why intelligence fails: Lessons from the Iranian Revolution and the Iraq War. Cornell University Press.
Kahneman, D. (2011). Thinking, fast and slow. Farrar, Straus and Giroux.
Klein, G. (1998). Sources of power: How people make decisions. MIT Press.
Klein, G. (2007). Performing a project premortem. Harvard Business Review, 85(9), 18–19.
Kuhn, G., Amlani, A. A., & Rensink, R. A. (2008). Towards a science of magic. Trends in Cognitive Sciences, 12(9), 349–354.
Kuhn, G., & Tatler, B. W. (2005). Magic and fixation: Now you don't see it, now you do. Perception, 34(9), 1155–1161.
Kuhn, T. S. (1962). The structure of scientific revolutions. University of Chicago Press.
Land, E. H. (1977). The retinex theory of color vision. Scientific American, 237(6), 108–129.
Loftus, E. F. (2005). Planting misinformation in the human mind: A 30-year investigation of the malleability of memory. Learning & Memory, 12(4), 361–366.
Macknik, S. L., King, M., Randi, J., Robbins, A., Teller, Thompson, J., & Martinez-Conde, S. (2008). Attention and awareness in stage magic: Turning tricks into research. Nature Reviews Neuroscience, 9(11), 871–879.
Martinez-Conde, S., & Macknik, S. L. (2017). Opinion: Finding the plot in science storytelling in hopes of enhancing science communication. Proceedings of the National Academy of Sciences, 114(31), 8127–8129.
Mather, G., Verstraten, F., & Anstis, S. (Eds.). (2008). The motion aftereffect: A modern perspective. MIT Press.
Medawar, P. (1979). Advice to a young scientist. Harper & Row.
Miller, L. J., Anzalone, M. E., Lane, S. J., Cermak, S. A., & Osten, E. T. (2007). Concept evolution in sensory integration: A proposed nosology for diagnosis. American Journal of Occupational Therapy, 61(2), 135–140.
Müller-Schwarze, D., & Sun, L. (2003). The beaver: Natural history of a wetlands engineer. Cornell University Press.
Newton, I. (1704). Opticks: Or, a treatise of the reflections, refractions, inflections and colours of light. Royal Society.
Norton, J. D. (2004). Einstein's investigations of Galilean covariant electrodynamics prior to 1905. Archive for History of Exact Sciences, 59(1), 45–105.
Pegg, S., & Jung, D. (2016). Star Trek Beyond [Screenplay]. Paramount Pictures.
Penrose, L. S., & Penrose, R. (1958). Impossible objects: A special type of visual illusion. British Journal of Psychology, 49(1), 31–33.
Popper, K. R. (1959). The logic of scientific discovery. Hutchinson. (Original work published 1934)
Raji, J. I., Melo, N., Castillo, J. S., Gonzalez, S., Saldana, V., Stensmyr, M. C., & DeGennaro, M. (2019). Aedes aegypti mosquitoes detect acidic volatiles found in human odor using the IR8a pathway. Current Biology, 29(8), 1253–1262.
Rao, R. P., & Ballard, D. H. (1999). Predictive coding in the visual cortex: A functional interpretation of some extra-classical receptive-field effects. Nature Neuroscience, 2(1), 79–87.
Schacter, D. L. (2001). The seven sins of memory: How the mind forgets and remembers. Houghton Mifflin.
Shapiro, A. E. (1993). Fits, passions, and paroxysms: Physics, method, and chemistry and Newton's theories of colored bodies and fits of easy reflection. Cambridge University Press.
Simons, D. J., & Chabris, C. F. (1999). Gorillas in our midst: Sustained inattentional blindness for dynamic events. Perception, 28(9), 1059–1074.
Simons, D. J., & Levin, D. T. (1997). Change blindness. Trends in Cognitive Sciences, 1(7), 261–267.
Tett, G. (2009). Fool's gold: How the bold dream of a small tribe at J.P. Morgan was corrupted by Wall Street greed and unleashed a catastrophe. Free Press.
Tolman, E. C. (1948). Cognitive maps in rats and men. Psychological Review, 55(4), 189–208.
Van de Cruys, S., Evers, K., Van der Hallen, R., Van Eylen, L., Boets, B., de-Wit, L., & Wagemans, J. (2014). Precise minds in uncertain worlds: Predictive coding in autism. Psychological Review, 121(4), 649–675.
Vaughan, D. (1996). The Challenger launch decision: Risky technology, culture, and deviance at NASA. University of Chicago Press.
Walker, M. (2017). Why we sleep: Unlocking the power of sleep and dreams. Scribner.
Wallisch, P. (2017). Illumination assumptions account for individual differences in the perceptual interpretation of a profoundly ambiguous stimulus in the color domain: "The dress". Journal of Vision, 17(4), 5.
Wegner, D. M. (2002). The illusion of conscious will. MIT Press.
Young, S. (2025). Executive mobs: When smart people make catastrophically stupid decisions. YoungFamilyLife.
Young, S. (2025). Living systems and emergence. YoungFamilyLife.
Young, S. (2025). The view from here. In The changing people series (Part 6). YoungFamilyLife.
Young, S. (2025). When your brain has a mind of its own: Stress, the limbic system, and making mistakes. YoungFamilyLife.
Young, S. (2026). The epistemology of safeguarding: Knowing, not-knowing, and positioned knowledge. YoungFamilyLife.
Young, S. (2026). The three-pound supercomputer: Understanding the brain's computational power. YoungFamilyLife.
Zheng, J., & Meister, M. (2024). The unbearable slowness of being: Why do we live at 10 bits/s? Neuron, 112(14), 2417–2430.
---
Topics: #Neuroscience #PredictiveCoding #Fabrication #ProfessionalPractice #EpistemicHumility #Perception #BrainScience #PositionedKnowledge #Certainty #ProfessionalJudgement #RiskAssessment #CollectiveFabrication #SensoryProcessing #Autism #CognitiveScience #StarTrek #Spock #Newton #Einstein #Socrates #Philosophy #ScientificMethod #OpticalIllusions #Magic #Memory #Groupthink #NASA #ChildProtection #MedicalDiagnosis #EpistemicUncertainty
© 2026 Steve Young and YoungFamilyLife Ltd. All rights reserved.
This essay was developed collaboratively using AI assistance to research academic sources and refine content structure, while maintaining the author's original voice, insights, and "Information Without Instruction" philosophy. No part of this essay may be reproduced, distributed, or transmitted in any form or by any means, including photocopying, recording, or other electronic or mechanical methods, without the prior written permission of the copyright holders, except in the case of brief quotations embodied in critical reviews and certain other noncommercial uses permitted by copyright law.
For permission requests, contact: info@youngfamilylife.com