
The Three-Pound Supercomputer: Understanding the Brain's Computational Power

Exploring the remarkable computational architecture of biological intelligence

by Steve Young | Professional, Family and Life Insights | YoungFamilyLife Ltd

~7,091 words | Reading time: 28 minutes

Introduction

Right now, as these words are being read on a phone or laptop, two computers are processing the same information. One is the device displaying this text—perhaps a smartphone with billions of transistors, or a laptop drawing 80 watts of power from its battery. The other is the three-pound mass of tissue inside the reader's skull, consuming roughly the same energy as a dim light bulb whilst simultaneously controlling breathing, maintaining balance, processing visual information, retrieving memories, and generating the conscious experience of understanding these very words.

For decades, the brain has been compared to a computer. The analogy has proven both illuminating and limiting. Whilst both systems process information, the manner in which they accomplish this reveals fundamental differences in architecture, efficiency, and capability. Recent advances in neuroscience and computer engineering have provided unprecedented insight into the brain's computational power, measured in terms familiar to computer science: operations per second, energy consumption, bandwidth, and memory capacity. The comparisons are startling.

This exploration examines the brain as a computational device, drawing parallels with digital technology where appropriate, but more importantly, highlighting where biological computation diverges from silicon-based systems. The journey encompasses everything from the supercomputers that now rival human brain processing speeds to the remarkable efficiency of a mosquito's brain—a system smaller than a grain of sand that accomplishes flight control, multisensory integration, and survival behaviours that would challenge modern artificial intelligence.

Understanding the brain's computational architecture offers more than technical fascination. As discussed in my previous essay Evolution: The Engine of Change (Young, 2025), the human brain represents billions of years of evolutionary optimisation. It is not a designed system but an emergent one, shaped by selection pressures that favoured energy efficiency, adaptability, and survival over raw processing speed. The result is a biological computer unlike anything human engineers have yet created.

The Speed Paradox: Processing Power and Parallel Architecture

Raw Computational Capacity

The human brain's processing power has been estimated at approximately 1 exaFLOP—one quintillion (10¹⁸) floating-point operations per second (Markram, 2006). To contextualise this number: if every person on Earth consciously performed one manual calculation per second, it would take humanity roughly four years to match the neural operations a single human brain accomplishes in a single second.
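
The arithmetic behind this comparison can be checked directly. A back-of-envelope sketch in Python, assuming a world population of eight billion and one calculation per person per second (assumptions for illustration, not figures from the essay's sources):

```python
# Back-of-envelope check: how long would all of humanity take to
# match one second of brain-scale computation by hand?
# Assumed inputs: 8 billion people, one calculation per person per second.

BRAIN_OPS_PER_SECOND = 1e18       # ~1 exaFLOP estimate (Markram, 2006)
POPULATION = 8e9                  # assumed world population
CALCS_PER_PERSON_PER_SECOND = 1

seconds_needed = BRAIN_OPS_PER_SECOND / (POPULATION * CALCS_PER_PERSON_PER_SECOND)
years_needed = seconds_needed / (365.25 * 24 * 3600)

print(f"{seconds_needed:.2e} seconds, roughly {years_needed:.1f} years")
```

Under these assumptions the answer comes out at about four years, which gives a sense of just how large 10¹⁸ operations per second is.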

More recent estimates focus on synaptic operations. With approximately 86 billion neurons (Azevedo et al., 2009) and roughly 100 trillion synaptic connections (Williams & Herrup, 1988), the brain performs an estimated 228 trillion synaptic operations per second (Winding et al., 2023). This figure represents the rate at which neurons communicate through electrochemical signalling across synapses—the fundamental unit of neural computation.

In 2024, the DeepSouth supercomputer, developed by researchers at Western Sydney University, became the first artificial system capable of matching this rate. DeepSouth can perform 228 trillion operations per second, specifically designed to simulate neural network activity at the scale of a human brain (Furber et al., 2024). This achievement represents a significant milestone: for the first time in history, an engineered system rivals the computational throughput of the organ reading about it.

However, this equivalence in raw operations per second masks profound differences in how those operations are executed.

Serial Versus Parallel Processing

Most modern computers, from smartphones to supercomputers, employ what is known as von Neumann architecture—a design pattern introduced nearly 80 years ago in which processing and memory are separated. Data is stored in memory, retrieved by the processor, manipulated through a series of operations, and written back to memory. This architecture excels at sequential processing: following precise instructions in rapid succession.

The brain employs a fundamentally different strategy. Rather than processing information sequentially, the brain operates through massive parallelism. Individual neurons fire at relatively modest rates—typically 200 times per second at maximum, with many firing far less frequently (Koch & Laurent, 1999). By comparison, a modern computer processor executes billions of operations per second. A single brain cell is vastly slower than a single transistor.

The brain's advantage lies in scale and simultaneity. With 86 billion neurons operating in parallel, each connected to thousands of other neurons, the brain processes information across vast distributed networks simultaneously. When visual information enters the retina, it doesn't queue for sequential processing; instead, millions of neurons activate in parallel, extracting features like edges, colours, motion, and patterns simultaneously across the visual field (Lennie, 2003).
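
The figures above can be combined into a rough upper bound on aggregate neural firing events per second. The 3 GHz clock rate used for contrast is an illustrative assumption, not a figure from the cited sources:

```python
# Rough throughput illustration using the figures quoted above.
# A single neuron is far slower than a transistor, but 86 billion
# of them firing in parallel yield an enormous aggregate event rate.

NEURONS = 86e9            # Azevedo et al. (2009)
MAX_FIRING_RATE_HZ = 200  # upper bound on typical firing rates

SERIAL_CLOCK_HZ = 3e9     # assumed ~3 GHz single CPU core, for contrast

aggregate_events = NEURONS * MAX_FIRING_RATE_HZ  # neural events per second
print(f"Aggregate neural events/s (upper bound): {aggregate_events:.2e}")
print(f"Ratio to one 3 GHz core's cycles/s: {aggregate_events / SERIAL_CLOCK_HZ:.0f}x")
```

Note this counts neuron-level firing events, not synaptic operations; since each spike reaches thousands of synapses, the synaptic figure quoted earlier is correspondingly larger.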

This architectural difference explains an apparent paradox: the brain feels slow at certain tasks—multiplying 847 by 392 requires conscious effort and time—yet it effortlessly accomplishes computational feats that challenge even sophisticated artificial systems. Recognising a face in a crowded room, understanding speech in a noisy environment, or catching a ball requires processing visual information, predicting trajectories, and coordinating muscle movements in real-time—all accomplished without conscious calculation.

Detailed Example

Real-Time Parallel Processing: The Tennis Serve

The abstract architectural differences between serial and parallel processing become visceral when considering a concrete example: a professional tennis player receiving a serve travelling at 130 miles per hour (approximately 58 metres per second).

From the moment the server tosses the ball to when it crosses the net takes roughly 400-500 milliseconds—less than half a second. During this interval, the receiver's brain must solve an extraordinarily complex computational problem that would challenge sophisticated artificial systems, yet it does so routinely, largely unconsciously, and with remarkable accuracy.

The visual system processes multiple information streams simultaneously. The server's body position, weight distribution, shoulder rotation, and racquet angle all provide cues about the likely trajectory. The ball toss location—whether to the left, right, or central—further constrains the possibilities. As the racquet makes contact, the ball's initial trajectory and spin become available. All of this information arrives through vision at roughly 10 million bits per second (a figure examined in detail later in this essay), with the visual cortex's parallel architecture extracting relevant features automatically.

Simultaneously, the brain engages in predictive computation. Tennis serves don't simply elicit reactive responses; top players anticipate the serve's characteristics before the ball is struck. Studies of elite tennis players have demonstrated that expert receivers begin moving towards the predicted ball location before the racquet makes contact with the ball, based entirely on the server's body kinematics (Abernethy, 1990). This predictive capacity derives from pattern recognition developed through thousands of hours of practice: the brain has learned which combinations of body positions typically produce which serve trajectories.

The motor system must translate prediction into action within extraordinarily tight temporal constraints. Optimal footwork requires a sequence of precisely timed muscle activations to propel the body towards the intercept point. The brain must compute not just where to move, but how quickly, which foot to lead with, and what body position will enable an effective return. Simultaneously, the arm and racquet must be prepared: the appropriate grip, backswing trajectory, contact point, and follow-through must all be determined based on the predicted ball trajectory (Landlinger et al., 2012).

Critically, all of this computation must account for uncertainty. The serve's exact trajectory isn't fully determined until after racquet contact, yet waiting for complete information would leave insufficient time to respond. The brain balances early commitment (moving based on prediction) against maintaining flexibility (the ability to adjust if the prediction proves incorrect). This represents a sophisticated solution to the problem of acting on incomplete information: committing to a course of action whilst retaining the capacity to revise it as new information arrives.

Context and memory profoundly influence the computation. The receiver doesn't respond to each serve in isolation. Memory of the opponent's serving patterns throughout the match—their tendency to serve wide on crucial points, their favourite serve on second serves, their recent serving sequence—all influence expectations and preparatory positioning. Strategic considerations matter: at 40-0 the receiver might take more risks than at 30-40. Fatigue, court surface, wind conditions, and sun position all modulate the computation.

Remarkably, elite players accomplish this whilst maintaining conscious awareness devoted to strategy and tactics. The computational heavy lifting—processing visual information, generating predictions, planning and executing motor sequences—occurs largely automatically, freeing conscious processing for higher-level decision-making. The top seed can consciously think "they often serve wide on break point" whilst their brain automatically handles the millisecond-scale sensory-motor integration required to actually return that serve.

Research comparing expert and novice tennis players reveals that expertise correlates with more efficient neural processing. Expert players show less overall brain activation during serve reception than novices, despite superior performance (Nakata et al., 2010). This apparent paradox reflects automation through practice: neural circuits dedicated to tennis-specific skills become more efficient, requiring less metabolic energy to accomplish better outcomes. The parallel processing architecture enables this: with practice, the relevant neural pathways strengthen whilst irrelevant processing diminishes.

The tennis serve demonstrates several key principles of biological computation: massive parallelism (processing body kinematics, ball trajectory, strategic context, and motor planning simultaneously), predictive processing (acting on incomplete information based on learned patterns), sensory-motor integration (vision directly driving action without conscious mediation), and context sensitivity (the same visual input elicits different responses depending on game state and opponent history). No current computer vision or robotic system can match this performance—a tennis-playing robot would struggle with the combination of visual processing, prediction, decision-making, and motor control required to return professional serves reliably.

Neuronal Communication Speed

Another critical difference lies in transmission speed. In conventional computers, electrical signals travel through wires at approximately 200,000 kilometres per second—roughly two-thirds the speed of light. Information transfer is essentially instantaneous across a computer's architecture.

Neuronal communication is far slower. Action potentials—the electrical signals that carry information along neurons—travel at speeds ranging from 1 to 100 metres per second, depending on the neuron's diameter and degree of myelination (Purves et al., 2001). Synaptic transmission, the chemical process by which neurons communicate across synapses, introduces additional delays of 1-5 milliseconds.
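
These conduction speeds translate into concrete latencies. A small sketch, assuming a 10-centimetre signal path (an illustrative distance for a trip across the brain, not a cited figure):

```python
# Signal-latency comparison over an assumed 10 cm path, using the
# conduction speeds quoted above (1 m/s unmyelinated, 100 m/s myelinated).

DISTANCE_M = 0.10  # assumed path length

latency_ms = {speed: DISTANCE_M / speed * 1000 for speed in (1, 100)}
for speed, ms in latency_ms.items():
    print(f"{speed:>3} m/s axon  -> {ms:.1f} ms")

# For contrast: a signal in copper at ~2e8 m/s covers the same distance
# in about half a nanosecond.
wire_ns = DISTANCE_M / 2e8 * 1e9
print(f"copper wire  -> {wire_ns:.1f} ns")
```

The slowest axons take on the order of 100 milliseconds for the trip, the fastest about 1 millisecond, and a wire about half a nanosecond: a gap of six to eight orders of magnitude that the brain's parallelism must absorb.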

Yet this apparent handicap becomes an advantage when combined with parallel architecture. Whilst individual signals travel slowly, millions of signals propagate simultaneously through different pathways. The brain trades off speed for distributed processing, achieving robust computation even with relatively slow components.

The Energy Miracle: Power Consumption and Efficiency

Twenty Watts of Computational Power

The human brain consumes approximately 20 watts of power during normal operation (Attwell & Laughlin, 2001). This is derived from glucose metabolism, with the brain accounting for roughly 20% of the body's total energy expenditure despite representing only 2% of body mass.

To appreciate this figure's significance, consider the DeepSouth supercomputer mentioned earlier. Whilst matching the brain's operational throughput, DeepSouth requires 20 megawatts of power—one million times more energy than the human brain (Furber et al., 2024). The laptop on which these words might be read typically draws 60-80 watts, three to four times the brain's consumption, whilst performing a tiny fraction of the brain's computational repertoire.

The Oak Ridge Frontier supercomputer, which achieved exascale computing (10¹⁸ operations per second) in 2022, required 21 megawatts to reach this milestone (NIST, 2024). The comparison is stark: the brain achieves similar computational throughput whilst a human sits quietly reading, using the same energy as a refrigerator light bulb.

Where Energy Goes: Communication Versus Computation

Recent neuroscience research has revealed a surprising finding about the brain's energy budget. Levy and Calvert (2021) partitioned the cortical energy consumption and discovered that neural computation itself—the postsynaptic integration of signals—consumes remarkably little power: approximately 0.1 to 0.2 watts of ATP (adenosine triphosphate, the cellular energy currency) in the cerebral cortex.

The vast majority of energy—approximately 35 times more—is devoted to communication: transmitting signals along axons, releasing neurotransmitters at synapses, and maintaining the ionic gradients required for neuronal signalling (Levy & Calvert, 2021). In computational terms, the brain spends most of its energy budget on data transmission rather than data processing.

This distribution reflects a fundamental constraint of biological computation. Neurons must maintain concentration gradients across their membranes—essentially keeping their batteries charged—which requires continuous energy expenditure. Action potentials work by allowing ions to flow across the membrane, temporarily disrupting these gradients, which must then be restored using energy-consuming molecular pumps. This is metabolically expensive but enables the brain's parallel architecture.

Traditional computers face a similar challenge: as processing power increases, energy consumption devoted to moving data between processor and memory increases disproportionately. This is known as the "von Neumann bottleneck" (Backus, 1978). However, in computers, memory and processing are physically separated by design, requiring constant data transfer. In the brain, as explored later, computation and memory are integrated at the synaptic level, reducing this overhead.

Efficiency Through Evolution

The brain's energy efficiency represents billions of years of evolutionary optimisation. Metabolic cost has been a critical selection pressure throughout evolutionary history. A brain that required substantially more energy would have imposed unsustainable demands on early humans' food intake, particularly given the unpredictable food availability faced by our ancestors.

This evolutionary constraint drove selection for computational efficiency. Neurons that could transmit more information per spike, circuits that could accomplish tasks with fewer operations, and architectures that integrated processing and memory to reduce energy-expensive data transfer would all have provided survival advantages (Sterling & Laughlin, 2015).

The result is an organ that, whilst consuming a significant portion of the body's energy budget, achieves remarkable computational capability per watt. Modern neuromorphic computing—an approach to artificial intelligence that mimics brain architecture—aims to capture this efficiency. Recent neuromorphic systems have demonstrated 16-fold reductions in energy consumption compared to conventional hardware performing similar tasks (Rao et al., 2022).

Seeing the World: Visual Processing and Information Bandwidth

The Retinal Data Stream

Visual perception offers a particularly illuminating window into the brain's information processing. The human retina contains approximately one million ganglion cells—neurons that transmit visual information from the eye to the brain via the optic nerve (Koch et al., 2006). Each ganglion cell converts light patterns detected by photoreceptors into electrical spikes that encode information about brightness, colour, contrast, and motion.

Researchers have estimated that the human retina transmits visual information to the brain at approximately 10 million bits per second (Koch et al., 2006). This is comparable to the data rate of an Ethernet connection—a respectable but not extraordinary bandwidth by modern computing standards. A high-definition video stream requires approximately 5-8 million bits per second, suggesting that the raw data entering the brain through vision is roughly equivalent to watching a video.

This 10 million bits per second represents an enormous reduction from the theoretical maximum. The retina contains approximately 126 million photoreceptor cells (rods and cones). If each photoreceptor transmitted its information independently, the bandwidth would be orders of magnitude higher. Instead, substantial processing occurs within the retina itself, with multiple photoreceptors converging onto each ganglion cell, extracting features like edges and motion before information ever reaches the brain (Masland, 2012).

The Conscious Bottleneck

Here emerges a profound puzzle. If 10 million bits per second enter through vision alone—not counting information from hearing, touch, smell, taste, balance, proprioception, and internal bodily states—how much of this reaches conscious awareness?

Recent estimates suggest that conscious perception operates at approximately 10 bits per second (Zheng & Meister, 2024). This figure derives from measuring the information content of deliberate actions and decisions humans make—roughly the amount of information conveyed when reading aloud, typing, or performing skilled movements that require conscious attention.

The disparity is staggering: approximately one million times more information enters through vision than reaches conscious awareness. Where does it all go?
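
The scale of both reductions is easy to verify from the figures already quoted:

```python
# Two stages of reduction in the visual pathway, using the essay's figures.

PHOTORECEPTORS = 126e6            # rods and cones per retina
GANGLION_CELLS = 1e6              # optic-nerve output neurons
RETINAL_BITS_PER_SECOND = 10e6    # ~10 Mbit/s (Koch et al., 2006)
CONSCIOUS_BITS_PER_SECOND = 10    # ~10 bit/s (Zheng & Meister, 2024)

convergence = PHOTORECEPTORS / GANGLION_CELLS
bottleneck = RETINAL_BITS_PER_SECOND / CONSCIOUS_BITS_PER_SECOND

print(f"Photoreceptor-to-ganglion convergence: {convergence:.0f} to 1")
print(f"Retinal bandwidth vs conscious throughput: {bottleneck:.0e} to 1")
```

Roughly 126 photoreceptors feed each ganglion cell, and vision alone outpaces conscious throughput by a factor of about one million.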

Visual Efficiency: Change Detection, Saccades, and Biological Compression

Before examining how the brain processes this torrent of visual information, it's worth understanding how evolution has engineered remarkable efficiency into the visual system itself—efficiency that bears striking resemblance to modern video compression algorithms.

Neurons throughout the nervous system exhibit a fundamental property: they respond strongly to changes in their input but adapt to constant stimulation. Present a steady light to photoreceptors and their firing rate initially spikes, then gradually decreases even though the light remains constant. This phenomenon, termed neural adaptation, occurs at multiple levels of visual processing (Kohn, 2007). From an information theory perspective, adaptation makes sense: a constant stimulus conveys no new information after its initial detection, so continued high-rate signalling would waste metabolic energy.

However, this creates a potential problem. If neurons stop responding to constant inputs, stationary objects in the visual field might effectively disappear from perception. Evolution's solution involves the eyes themselves: they never remain truly still.

Human eyes execute rapid jumping movements called saccades approximately three to four times per second during visual exploration. Between saccades, the eyes engage in microsaccades—tiny involuntary movements occurring even during attempted fixation (Martinez-Conde et al., 2004). These movements serve a critical function: they continuously shift the retinal image, ensuring that even stationary objects in the environment produce changing patterns of photoreceptor activation. What appears to be a stable visual world is actually being constantly refreshed through eye movements, preventing neural adaptation from eliminating stationary objects from perception.

This strategy might seem wasteful—why create artificial motion just to overcome adaptation to stationary inputs? The answer lies in the brain's subsequent processing strategy, which bears remarkable similarity to video compression algorithms.

Modern video compression (such as MPEG or H.264) exploits temporal redundancy: consecutive video frames are typically very similar. Rather than transmitting each frame completely, compression algorithms transmit a complete "keyframe" periodically, then transmit only the changes (differences) between frames. This dramatically reduces data requirements. A video of someone sitting relatively still might require transmitting detailed information only once per second, with intermediate frames specifying only small movements or changes.

The visual system employs a conceptually similar strategy called predictive coding (Rao & Ballard, 1999). Rather than transmitting all visual information from retina to cortex, the brain appears to generate predictions about what the visual input should contain based on recent history and stored models of the world. Only prediction errors—the differences between what was expected and what actually arrived—are transmitted upward through the processing hierarchy. When the visual scene matches predictions, neural activity remains relatively low. When unexpected features appear, neural activity increases to signal the prediction error.

This predictive strategy provides enormous efficiency advantages. The visual world exhibits substantial regularity: objects tend to persist, surfaces have consistent textures, lighting changes gradually. By predicting these regularities, the brain avoids repeatedly transmitting redundant information. The strategy works particularly well in combination with eye movements: each saccade brings new information onto the retina (requiring transmission of updated predictions), whilst between saccades the scene remains relatively stable (allowing prediction error to remain low).
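
The keyframe-plus-differences strategy described above can be sketched in a few lines of Python. This toy encoder is illustrative only: its "prediction" is simply the previous frame, and it transmits just the pixels that changed, loosely analogous to both video codecs and predictive coding:

```python
# Toy difference/predictive coding sketch (illustrative, not a real codec):
# transmit a full "keyframe" once, then only the prediction errors
# (changed pixel values) for each subsequent frame.

def encode(frames):
    """Yield the first frame in full, then sparse delta dicts {index: value}."""
    prev = frames[0]
    yield list(prev)                  # full keyframe
    for frame in frames[1:]:
        deltas = {i: v for i, (p, v) in enumerate(zip(prev, frame)) if v != p}
        yield deltas                  # only the prediction errors
        prev = frame

def decode(stream):
    """Reconstruct full frames from a keyframe followed by delta dicts."""
    stream = iter(stream)
    frame = list(next(stream))        # keyframe
    yield list(frame)
    for deltas in stream:
        for i, v in deltas.items():
            frame[i] = v
        yield list(frame)

# A mostly static 8-"pixel" scene: only one pixel changes per frame.
frames = [[5, 5, 5, 5, 0, 0, 0, 0],
          [5, 5, 5, 5, 9, 0, 0, 0],
          [5, 5, 5, 5, 9, 9, 0, 0]]

encoded = list(encode(frames))
print("Values transmitted per delta frame:", [len(d) for d in encoded[1:]])
```

After the eight-value keyframe, each subsequent frame costs only one transmitted value rather than eight, and the decoder reconstructs the scene losslessly, mirroring how a predictable visual world keeps prediction-error signalling low.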

Evidence for predictive coding comes from multiple sources. Neurons in higher visual areas respond more strongly to unexpected stimuli than to predicted ones, even when the physical stimulus is identical (Meyer & Olson, 2011). Brain imaging reveals reduced activity in visual cortex when stimuli are predictable compared to when they are surprising (Summerfield et al., 2008). The experience of visual illusions often reflects the brain's predictions overriding actual sensory input—the brain "fills in" missing information based on predictions rather than waiting for complete sensory data.

The retina itself performs substantial preprocessing that can be understood through the compression analogy. Rather than transmitting the activity of all 126 million photoreceptors, the approximately 1 million retinal ganglion cells transmit feature-based information: edges, contrast, motion, colour opponency (Masland, 2012). Different ganglion cell types respond to different features, with some optimised for detecting fine spatial detail, others for motion, still others for rapid temporal changes. This feature extraction dramatically reduces the information bandwidth whilst preserving the features most relevant for subsequent processing.

The sophistication of retinal processing becomes apparent when considering that different ganglion cell types employ different "compression strategies." Certain cells respond transiently to changes (high temporal precision but no sustained response to constant stimuli), whilst others produce sustained responses (lower temporal precision but persistent signalling). These different cell types effectively provide multiple complementary descriptions of the visual scene, optimised for different downstream computational needs (Gollisch & Meister, 2010).

The combination of neural adaptation, saccadic refresh, predictive coding, and retinal feature extraction creates a visual system of remarkable efficiency. The brain doesn't passively receive a complete pixel-by-pixel description of the visual world at 10 million bits per second. Instead, it actively samples the environment through eye movements, predicts what should be present based on recent history and world knowledge, and processes primarily the differences between prediction and reality. This strategy allows biological vision to operate within severe bandwidth and energy constraints whilst maintaining the illusion of rich, detailed, continuous visual experience.

Interestingly, computer vision and video compression have independently converged on similar principles. Modern video codecs use temporal prediction and difference encoding. Advanced computer vision systems increasingly employ predictive models that generate expectations about visual input and focus processing on prediction errors. These parallels suggest that predictive processing and efficient coding may represent fundamental principles of information processing in resource-constrained systems—whether biological or artificial.

Parallel Processing and Filtering

The answer to where the vast sensory input goes also lies in the brain's parallel, hierarchical processing architecture. Visual information doesn't queue for conscious attention; instead, it cascades through multiple levels of processing simultaneously. The primary visual cortex—the first cortical area to receive visual input—contains approximately 140 million neurons (Wandell & Winawer, 2015), all processing different aspects of the visual scene in parallel.

At each processing stage, neurons extract progressively more complex features. Early visual areas detect edges, orientations, and colours. Intermediate areas combine these into shapes and patterns. Higher visual areas recognise objects, faces, and scenes. Most of this processing occurs unconsciously and automatically. Only information relevant to current goals, novel or unexpected stimuli, or items matching active attention filters reach conscious awareness (Dehaene & Naccache, 2001).

The visual system demonstrates the brain's efficiency: by processing information in parallel and filtering aggressively, it extracts meaning from the environment whilst operating within the severe bandwidth constraints of conscious processing. The brain doesn't need to bring everything to awareness—most processing serves to guide behaviour automatically, from maintaining balance to adjusting grip strength to coordinating eye movements.

This architecture explains why humans can recognise images presented for as little as 13 milliseconds (Potter et al., 2014). Feedforward processing—information flowing from retina through successive visual areas—can identify an image's content without requiring recurrent loops or conscious processing. The brain processes vast amounts of information, but consciousness samples only a tiny, highly curated subset.

Memory That Computes: The Integration of Storage and Processing

Working Memory: The Brain's RAM

In computer architecture, Random Access Memory (RAM) serves as temporary storage for data currently being processed. RAM is fast, limited in capacity, and volatile—its contents disappear when power is removed. The brain possesses a functional analogue: working memory, the small amount of information that can be held "in mind" at any given moment.

George Miller's famous 1956 paper, "The Magical Number Seven, Plus or Minus Two," suggested that working memory capacity was limited to approximately seven items or "chunks" (Miller, 1956). Subsequent research has refined this estimate downward. Contemporary evidence suggests working memory holds approximately 4 items, with individual variation typically ranging from 3 to 5 items (Cowan, 2001).

This severe limitation stands in stark contrast to computer RAM, which can hold millions or billions of discrete values simultaneously. A modest smartphone might have 6 gigabytes of RAM—sufficient to store billions of numerical values or millions of words. The brain's working memory, by comparison, struggles to retain a seven-digit phone number without rehearsal.

However, this comparison oversimplifies. Working memory capacity depends critically on what constitutes a "chunk." A chunk is the largest meaningful unit recognised by the individual. For someone unfamiliar with chess, a chess position represents 32 separate items (the location of each piece). For a chess master, the entire position might constitute a single chunk—a familiar pattern with a name and strategic implications. Through learning and expertise, humans effectively expand working memory capacity by building larger, more meaningful chunks (Chase & Simon, 1973).
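
Chunking can be made concrete with a toy sketch. The greedy segmenter below, and the acronym vocabulary it uses, are illustrative assumptions rather than a model from the cited studies:

```python
# Toy illustration of chunking: the same letter string counts as many
# "items" for a novice but far fewer chunks for someone who recognises
# larger meaningful units (here, familiar acronyms).

KNOWN_CHUNKS = ["NASA", "FBI", "CIA", "BBC"]  # assumed familiar units

def count_chunks(sequence, vocabulary):
    """Greedily segment `sequence` using the longest known chunk at each
    position; unrecognised characters count as one chunk each."""
    chunks = []
    i = 0
    while i < len(sequence):
        match = max((c for c in vocabulary if sequence.startswith(c, i)),
                    key=len, default=None)
        if match:
            chunks.append(match)
            i += len(match)
        else:
            chunks.append(sequence[i])
            i += 1
    return chunks

letters = "FBICIANASABBC"
print(f"Novice load:  {len(letters)} items")
print(f"Expert load:  {len(count_chunks(letters, KNOWN_CHUNKS))} chunks")
```

Thirteen arbitrary letters overwhelm a four-item working memory, but the same string parsed as FBI, CIA, NASA, BBC fits comfortably, which is the essence of how expertise expands effective capacity.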

Long-Term Memory: Structure and Capacity

Long-term memory appears to have no practical capacity limit. Humans accumulate memories across a lifetime—faces, facts, skills, experiences—without "filling up." The brain achieves this through structural changes at synapses. When a memory forms, connections between neurons strengthen or weaken, new synapses may form, and existing ones may be eliminated (Mayford et al., 2012).

This represents a profound difference from computer storage. In a computer, memory is a physically separate component—a hard drive, solid-state drive, or remote server. The processor reads from and writes to memory, but the memory itself doesn't process information. Storage and computation are distinct.

In the brain, storage and computation are the same thing. Each synapse both processes information (by integrating incoming signals) and stores information (through its variable strength and properties). Memory is encoded in the very structure that performs computation. This integration provides enormous advantages for certain tasks.

Consider facial recognition. When encountering a familiar face, the brain doesn't search through a database of stored faces comparing features—the process that might be implemented in a conventional computer system. Instead, the pattern of neural activity triggered by the face's visual features automatically activates the neural circuits associated with that person, bringing to mind their name, relationship, associated memories, and emotional valence. The storage structure is the processing structure (O'Reilly & Munakata, 2000).

Synaptic Plasticity: Memory Formation

The mechanisms by which synapses change their strength—termed synaptic plasticity—have been extensively studied. The principle, first articulated by Donald Hebb in 1949, is often summarised as "neurons that fire together, wire together" (Hebb, 1949). When two neurons repeatedly activate in close temporal proximity, the synapse connecting them strengthens, making future co-activation more likely.

This simple rule gives rise to associative memory. If neurons representing "lemon" and "sour" frequently activate together, strengthening their connection, encountering a lemon will automatically activate the representation of sourness. The brain builds a web of associations through experience, with memory retrieval being a process of activation spreading through this web rather than searching discrete storage locations.
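
Hebb's rule can be sketched in a few lines. The units, learning rate, and update rule below are illustrative assumptions, not a model drawn from the cited literature:

```python
# Minimal Hebbian learning sketch ("fire together, wire together"):
# a connection strengthens whenever its two units are co-active,
# building an association between "lemon" and "sour" through experience.

import itertools

units = ["lemon", "sour", "sweet"]
weights = {pair: 0.0 for pair in itertools.combinations(units, 2)}
LEARNING_RATE = 0.1  # assumed step size

def hebbian_step(active):
    """Strengthen the connection between every pair of co-active units."""
    for pair in weights:
        if pair[0] in active and pair[1] in active:
            weights[pair] += LEARNING_RATE

# Repeated co-activation of "lemon" and "sour":
for _ in range(20):
    hebbian_step({"lemon", "sour"})

print("lemon-sour:", round(weights[("lemon", "sour")], 6))
print("lemon-sweet:", round(weights[("lemon", "sweet")], 6))
```

After twenty co-activations the lemon-sour connection has strengthened while lemon-sweet has not, so activating "lemon" would preferentially spread activation to "sour", a crude analogue of the associative retrieval described above.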

Synaptic changes occur at multiple timescales. Short-term plasticity operates on scales of milliseconds to minutes, potentially underlying working memory. Long-term potentiation (LTP) and long-term depression (LTD)—persistent strengthening or weakening of synapses—can last hours, days, or indefinitely, providing the substrate for long-term memory (Bliss & Collingridge, 1993).

Importantly, these changes require energy and molecular resources. Memory isn't free. Forming and maintaining memories consumes metabolic energy, and memories can degrade or be overwritten. This contrasts sharply with digital storage, where writing to memory is essentially cost-free and stored information persists identically over time until deliberately erased.

The Absence of True Cache

Modern computer processors include cache memory—small amounts of extremely fast memory located directly on the processor chip. Cache stores frequently accessed data, reducing the time required to fetch information from slower RAM or storage. The brain lacks a direct equivalent to cache.

However, the brain employs priming—a phenomenon where recent or frequent exposure to a stimulus facilitates subsequent processing of that stimulus or related stimuli (Tulving & Schacter, 1990). Priming operates through temporary changes in neural excitability: neurons that have recently fired are more easily activated again. This provides a functional benefit similar to cache—recently accessed information becomes temporarily more accessible—but through a fundamentally different mechanism.
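The contrast with cache can be sketched in code: below, a hypothetical unit whose recent firing temporarily lowers its own threshold, so the "speed-up" lives in the processing unit itself rather than in a separate fast store. The class name, decay constant, and threshold values are all illustrative assumptions.

```python
import math

class PrimedUnit:
    """A unit whose recent firing temporarily lowers its activation threshold.

    Unlike a cache (a separate fast store), the facilitation lives in the
    same unit that does the processing, and it decays away over time.
    """
    def __init__(self, base_threshold=1.0, tau=2.0):
        self.base_threshold = base_threshold
        self.tau = tau           # decay time constant (arbitrary units)
        self.last_fired = None

    def threshold(self, now):
        if self.last_fired is None:
            return self.base_threshold
        # Residual excitability decays exponentially since the last firing.
        boost = 0.5 * math.exp(-(now - self.last_fired) / self.tau)
        return self.base_threshold - boost

    def stimulate(self, strength, now):
        fired = strength >= self.threshold(now)
        if fired:
            self.last_fired = now
        return fired

unit = PrimedUnit()
print(unit.stimulate(0.8, now=0.0))  # weak input, unprimed: no response
print(unit.stimulate(1.0, now=1.0))  # strong input: fires, priming the unit
print(unit.stimulate(0.8, now=1.5))  # the same weak input now succeeds
```

The same stimulus succeeds or fails depending on recent history—a functional analogue of cache hits, achieved without any separate memory tier.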

The integration of memory and processing in the brain means there is no architectural distinction between "slow" and "fast" memory in the way computers differentiate between storage, RAM, and cache. Instead, all memory exists within the same computational substrate, with accessibility determined by factors like recency, emotional significance, and strength of associations.

Scaling Down: The Mosquito's Masterclass in Computational Efficiency

Having explored the human brain's computational capabilities, a shift in scale illuminates fundamental principles of biological computation. The mosquito—specifically the female Aedes aegypti, a species responsible for transmitting dengue fever, yellow fever, and Zika virus—possesses a brain containing approximately 220,000 neurons (Raji & Vosshall, 2016). This is roughly 0.0003% of the human brain's approximately 86 billion neurons. Yet this tiny system accomplishes feats that would challenge sophisticated robotics and artificial intelligence.

As discussed in my previous essay Living Systems and Emergence (Young, 2025), complex adaptive behaviour doesn't necessarily require enormous neural resources. The mosquito demonstrates how evolutionary optimisation produces efficient solutions to survival challenges through integrated sensory-motor systems.

The Flight Control Problem

Mosquitoes maintain flight through wing movements at 600-800 beats per second (Cator et al., 2009)—roughly 18-24 times faster than a laptop's cooling fan rotates. This is accomplished through a stroke amplitude of only 40-45 degrees, far smaller than the 120+ degree sweeps employed by most flying insects (Muijres et al., 2017).

Maintaining stable flight at such frequencies requires continuous computational integration of multiple systems. The mosquito must:

  • Generate and coordinate muscle activation patterns at 600-800 Hz
  • Integrate mechanosensory feedback from wing and body sensors
  • Adjust wing angles in real-time to maintain stability and heading
  • Process visual information for obstacle avoidance
  • Respond to air currents and turbulence

All of this occurs in parallel with the mosquito's other computational tasks: locating hosts, finding mates, identifying egg-laying sites, and evading predators. The flight control system operates automatically, without requiring conscious attention (the degree to which mosquitoes possess anything analogous to consciousness remains an open question).

From an engineering perspective, mosquito flight remains challenging to replicate artificially. Miniature drones of similar size struggle with the control problems mosquitoes solve routinely, particularly regarding efficiency and manoeuvrability in turbulent environments (Phan & Park, 2019).

Multisensory Integration: Parallel Processing at Small Scale

The mosquito's host-seeking behaviour demonstrates sophisticated multisensory integration. Female mosquitoes (males feed on nectar and don't bite) locate human hosts through the coordinated detection of multiple cues:

Carbon dioxide detection: Mosquitoes detect CO₂ at concentrations as low as a few parts per million above background levels. Specialised neurons on the maxillary palps—sensory organs near the mouth—respond to CO₂ with temporal precision better than 200 milliseconds (McMeniman et al., 2014). This allows mosquitoes to detect the edges of an exhaled breath plume and navigate towards its source.

Visual processing: When CO₂ is detected, the mosquito's visual system becomes activated. At distances of 5-15 metres, mosquitoes begin tracking visual objects that might represent potential hosts (van Breugel et al., 2015). This requires processing visual motion, contrast, and features whilst maintaining flight control—a parallel processing challenge.

Thermal sensing: Within close range (less than one metre), mosquitoes detect the thermal signature of warm-blooded animals. Heat-sensitive receptors respond to temperature differences as small as a few degrees, guiding the mosquito to exposed skin (McMeniman et al., 2014).

Olfactory discrimination: The mosquito olfactory system distinguishes between hundreds of volatile compounds. Human skin odour contains over 300 different chemicals; mosquitoes show preferential attraction to specific compounds, particularly lactic acid, ammonia, and certain carboxylic acids (Smallegange et al., 2011).

Humidity detection: Mosquitoes detect humidity gradients, as moist air indicates the presence of water, vegetation, or mammalian breath (Corfas & Vosshall, 2015).

Remarkably, these sensory systems don't operate independently. Research has demonstrated that mosquito sensory integration is hierarchical and context-dependent. CO₂ detection gates thermal attraction—mosquitoes ignore heat sources unless CO₂ is also present. Similarly, CO₂ enhances attraction to certain odours whilst enhancing aversion to repellents (Corfas & Vosshall, 2015). The integration happens centrally, within the mosquito's tiny brain, allowing appropriate behavioural responses to emerge from multiple sensory inputs.
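The gating described here can be caricatured in a few lines of code. The function below is a hypothetical toy, not the mosquito's actual circuit, and the weightings are arbitrary—but the structure, with thermal and olfactory cues counting only when CO₂ is present, follows the findings above.

```python
def host_seeking_drive(co2_detected, heat_nearby, attractive_odour, repellent):
    """Toy gating logic for mosquito multisensory integration.

    CO2 acts as a gate: heat is ignored unless CO2 is present, and CO2
    amplifies both attraction to odours and aversion to repellents.
    All weights are arbitrary illustrative values.
    """
    drive = 0.0
    if co2_detected:
        drive += 1.0          # breath plume detected
        if heat_nearby:
            drive += 1.0      # heat only counts when gated by CO2
        if attractive_odour:
            drive += 1.0      # CO2 enhances odour attraction
        if repellent:
            drive -= 2.0      # and enhances aversion to repellents
    return drive

print(host_seeking_drive(False, True, False, False))  # heat alone: 0.0
print(host_seeking_drive(True, True, False, False))   # CO2 + heat: 2.0
```

Even this crude sketch shows why the integration must happen centrally: no single sensor can decide the behaviour, because each cue's meaning depends on the others.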

Neural Architecture: Efficiency Through Optimisation

In 2023, researchers completed the first full connectome—a comprehensive map of every neuron and synapse—of an insect brain: that of the larva of the fruit fly Drosophila melanogaster (Winding et al., 2023). This brain contains 3,016 neurons and 548,000 synapses. The mapping project took 12 years and revealed circuit features strikingly similar to advanced machine learning architectures.

The fruit fly larval brain shows extensive recurrent connections, particularly in areas associated with learning and decision-making. Multiple parallel pathways connect sensory inputs to motor outputs, with shortcuts that bypass processing layers—potentially compensating for the limited number of neurons. The architecture demonstrates features found in state-of-the-art artificial neural networks: distributed processing, parallel pathways, and feedback loops (Winding et al., 2023).
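The "shortcut" idea is easy to sketch. The toy function below—an invented illustration, not the larval circuit—combines a deep route through two rectified hidden units with a direct connection that bypasses them, the same motif as the skip connections used in artificial networks. All weights are arbitrary.

```python
def layered_with_shortcut(x, w_in=0.5, w_mid=0.5, w_out=0.5, w_skip=0.8):
    """Toy pathway with a shortcut that bypasses the hidden layers.

    The deep route is input -> hidden1 -> hidden2 -> output; the shortcut
    carries the input straight to the output, so a useful signal survives
    even when the deep route attenuates it. Weights are illustrative only.
    """
    h1 = max(0.0, w_in * x)    # simple rectified hidden units
    h2 = max(0.0, w_mid * h1)
    deep = w_out * h2          # heavily attenuated by the layers
    shortcut = w_skip * x      # bypasses both hidden layers
    return deep + shortcut

print(layered_with_shortcut(1.0))  # shortcut dominates the deep route
```

For a brain with very few neurons, such direct routes are a plausible way to keep fast, reliable input-output mappings alongside deeper processing.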

Whilst complete connectomes for mosquito brains don't yet exist, brain atlases map the locations of major neural populations (Raji & Vosshall, 2016). These reveal specialised neural structures for different sensory modalities, all integrated within a brain volume of approximately one cubic millimetre.

Computational Efficiency: Lessons from Natural Selection

The mosquito brain illustrates several principles relevant to understanding biological computation:

Task-specific optimisation: The mosquito brain isn't a general-purpose computer. It cannot learn mathematics, recognise written language, or engage in abstract reasoning. Instead, evolution has optimised it for specific survival tasks: flight, host-seeking, mating, and oviposition (egg-laying). This specialisation allows remarkable efficiency. The mosquito accomplishes its ecological niche with a fraction of a percent of the neurons humans possess.

Integrated sensing and acting: There is minimal separation between sensory processing and motor output in the mosquito. Sensory information rapidly drives behaviour through relatively direct pathways. This contrasts with computer systems where sensing, processing, and actuation are typically distinct modules with defined interfaces.

Parallel processing: Even with limited neurons, the mosquito employs parallelism. Multiple sensory streams are processed simultaneously, and different behaviours (flight control, host-seeking, danger avoidance) operate in parallel rather than requiring sequential switching between tasks.

Energy efficiency: A mosquito's entire metabolic budget during flight is measured in milliwatts. The brain's energy consumption is a fraction of this total. For the computational capabilities delivered—real-time flight control, multisensory integration, spatial navigation, learning, and memory—the power efficiency vastly exceeds artificial systems of comparable capability.

Emergence and Complexity

The mosquito demonstrates how complex, adaptive behaviour emerges from relatively simple components when those components are appropriately connected and embedded in an environment. Individual neurons are not particularly sophisticated—they integrate inputs and fire spikes based on relatively simple rules. Yet when 220,000 such neurons are wired together through hundreds of thousands of synapses, and that network is shaped by millions of years of natural selection, the result is a system capable of accomplishing tasks that perplex human engineers.
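The "relatively simple rules" can be made concrete with the textbook leaky integrate-and-fire model—a standard simplification used throughout computational neuroscience, not a claim about mosquito neurons specifically. The threshold and leak values below are arbitrary.

```python
def leaky_integrate_and_fire(inputs, threshold=1.0, leak=0.9):
    """Textbook leaky integrate-and-fire rule.

    Each time step the membrane potential leaks towards rest, adds the
    incoming input, and emits a spike (then resets) on crossing threshold.
    Parameter values are illustrative, not fitted to any real neuron.
    """
    v = 0.0
    spikes = []
    for i in inputs:
        v = v * leak + i      # leak, then integrate the input
        if v >= threshold:
            spikes.append(1)  # spike
            v = 0.0           # reset after spiking
        else:
            spikes.append(0)
    return spikes

# Weak input never fires on its own, but sustained weak input accumulates.
print(leaky_integrate_and_fire([0.3] * 6))  # → [0, 0, 0, 1, 0, 0]
```

Nothing in this rule hints at flight control or host-seeking; those capacities appear only when many such units are wired together in the right architecture—which is precisely the point about emergence.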

This echoes a fundamental principle in neuroscience and systems biology: behaviour emerges from interaction, not from individual components. Understanding individual neurons is necessary but insufficient for understanding behaviour. The computational power arises from the network's architecture—which neurons connect to which others, with what strengths, forming what patterns (Koch & Laurent, 1999).

The mosquito also demonstrates that sophisticated computation doesn't require conscious experience, language, or abstract reasoning. The mosquito presumably has minimal or no subjective experience of its world, yet its brain solves complex computational problems reliably enough to ensure survival and reproduction. This raises profound questions about the relationship between computation and consciousness—questions that become even more pressing when considering the human brain.

It is worth noting that the mosquito's computational efficiency must be understood within a broader survival strategy common to insects generally. Mosquitoes, like most insects, succeed through sheer numbers rather than individual survival. Mortality rates are extraordinarily high—the vast majority of mosquitoes die before reproducing—yet the species thrives because reproductive output vastly exceeds losses. This represents a fundamentally different approach to survival than that employed by larger-brained animals, which invest heavily in individual survival and produce fewer offspring. In this sense, insects might be better understood as employing a collective or "hive" strategy at the species level: individual computational sophistication enables basic survival functions, whilst species success derives from numerical abundance rather than individual resilience. The mosquito's remarkable brain is sufficient for individual survival tasks, but the species' continuation depends on producing millions of individuals, most of whom will fail.

Where the Metaphor Breaks Down: Beyond Silicon

The brain-as-computer metaphor has proven useful for understanding certain aspects of neural information processing. Yet the metaphor ultimately fails to capture essential features of biological cognition. Recognising these limitations helps clarify what makes biological computation fundamentally different from digital computation.

Analogue Versus Digital

Modern computers are digital systems: they represent information as discrete values, typically binary ones and zeros. This digital representation provides perfect reproducibility—a file can be copied exactly, and computations can be precisely replicated.

Biological neural systems employ both analogue and digital signalling. Action potentials—the spikes travelling along axons—are often characterised as digital: neurons either fire or they don't, resembling binary on/off states. However, this oversimplifies. Action potential amplitude and duration can vary, affecting neurotransmitter release at the synapse. Synaptic transmission itself is analogue: the strength of synaptic connections varies continuously, and the postsynaptic response depends on graded changes in voltage and chemical concentrations (Koch, 1999).

Moreover, neurons communicate through electrical synapses (gap junctions) that allow direct flow of current between cells, a purely analogue process. Neuromodulators—chemicals that regulate neural excitability—operate through diffusion, creating gradients of concentration that affect large populations of neurons simultaneously. These analogue processes contribute to computation in ways difficult to capture with digital logic (Sterling & Laughlin, 2015).

Context Dependence and Flexibility

Conventional computer algorithms are deterministic: given identical inputs and internal state, they produce identical outputs. This reliability is a strength in many applications. The brain operates differently. The response to a stimulus depends on context, history, internal state, attention, and countless other factors. The same visual input can trigger different responses depending on whether the observer is hungry, afraid, focused on a particular task, or has recently encountered similar stimuli.

This context dependence arises from the brain's architecture. Neurons receive thousands of inputs simultaneously. Whether a neuron fires depends on the pattern of all these inputs, which reflect not just the immediate stimulus but the broader network state. Synaptic strengths change continuously based on experience. Neuromodulators alter excitability throughout entire brain regions in response to arousal, emotion, or reward (Marder, 2012).

This flexibility enables learning and adaptation but makes neural computation fundamentally different from running a program. There is no fixed algorithm mapping inputs to outputs. Instead, the mapping itself changes continuously through experience.

Meaning and Semantics

Computers manipulate symbols according to rules but don't understand meaning. A calculator that computes 2 + 2 = 4 doesn't understand what "2" represents, what addition means, or that "4" refers to a quantity. It mechanically transforms input patterns to output patterns following programmed rules.

The brain's relationship to meaning is different. Neural representations aren't arbitrary symbols; they are grounded in sensory experience, motor action, and emotional significance. The neural pattern representing "apple" isn't an arbitrary code—it's connected to visual memories of apples' appearance, taste memories, associations with contexts where apples appear, and motor programmes for grasping and biting apples. Meaning emerges from these rich, multi-modal associations (Barsalou, 2008).

This semantic grounding allows humans to understand new concepts by relating them to existing knowledge, to recognise that different terms refer to the same concept, and to judge when a statement makes sense versus when it violates meaning. Computers struggle with these tasks because they lack grounded semantics—their symbol manipulation doesn't connect to the experiential world that gives meaning to symbols.

Consciousness and Subjective Experience

Perhaps the most profound limitation of the computer metaphor is its silence regarding consciousness. Computers process information, but there is presumably nothing it "feels like" to be a computer. Humans (and arguably other animals) have subjective experiences: the redness of red, the painfulness of pain, the experience of understanding a concept or recognising a face.

The relationship between neural computation and conscious experience remains one of science's deepest mysteries. We know that certain patterns of neural activity correlate with conscious states, and that damage to specific brain areas can eliminate particular aspects of conscious experience. However, why neural information processing should give rise to subjective experience at all—what philosophers call the "hard problem of consciousness"—remains unresolved (Chalmers, 1995).

The brain is not merely a computer that processes information. It is the physical substrate that somehow generates the first-person perspective from which the world is experienced. This capacity seems fundamentally different from computation as implemented in silicon, though whether this difference is one of kind or degree remains debated.

Embodiment and Environmental Coupling

Computer programs run in isolation from the environment, interacting only through defined input/output interfaces. A program can be paused, copied to different hardware, or run at different speeds without fundamentally changing its operation.

Brains are inseparable from bodies and environments. Neural computation evolved to control embodied action in real-time interaction with complex, dynamic environments. The brain doesn't passively process information; it actively samples the environment through movement and attention, using its predictions to guide exploration (Friston, 2010).

This embodied, embedded nature fundamentally shapes neural computation. The brain has evolved under constraints imposed by the body's sensory and motor systems, metabolic needs, and environmental challenges. Computation emerges from the brain-body-environment system as an integrated whole, not from the brain alone (Clark, 1997).

Plasticity and Development

Computers are manufactured to specifications and then programmed. Their hardware architecture doesn't change during operation. The brain develops through complex interactions between genes, neural activity, and environmental input. From conception through death, the brain continuously reorganises itself (neuroplasticity).

During development, neurons migrate, extend axons, form synapses, and die in massive numbers—all through self-organising processes influenced by both genetic programmes and environmental input. In adulthood, synapses strengthen and weaken, new neurons can form in certain brain regions, and large-scale reorganisation occurs in response to learning or injury (Kolb & Whishaw, 1998).

This developmental and lifelong plasticity means the brain is never a fixed computational architecture. It continuously reconstructs itself based on experience—a capacity far beyond current computing systems.

Conclusion: The Evolutionary Achievement

The comparison between brain and computer illuminates both systems. Digital computers excel at tasks requiring perfect reproducibility, rapid sequential operations, and algorithmic precision. The brain excels at pattern recognition, context-dependent decision-making, learning from limited examples, and integrating information across multiple timescales and modalities.

These complementary strengths reflect fundamentally different design principles. Computers were engineered by humans to solve problems humans face. Brains were shaped by natural selection to solve problems faced by our evolutionary ancestors: finding food, avoiding predators, forming social alliances, navigating environments, and reproducing. The resulting computational architecture reflects these different selection pressures.

The brain's achievement becomes more remarkable when considering its constraints. It must operate within a strict power budget—the approximately 20 watts available from metabolising glucose. It must develop from a single cell through self-organising processes. It must function continuously from birth to death without opportunities for maintenance shutdowns or part replacement. It must adapt to changing environments, recover from damage when possible, and operate reliably despite noisy, unreliable components (individual neurons are far less reliable than transistors).

Under these constraints, evolution has produced an organ capable of writing sonnets, proving theorems, composing symphonies, and contemplating its own existence—all whilst using less power than a light bulb. The recent achievement of supercomputers matching the brain's raw computational throughput represents a significant milestone in engineering. Yet these supercomputers require a million times more power, cannot learn from experience in the way brains do, and lack the flexibility, creativity, and adaptive capacity that characterise biological intelligence.

Perhaps most remarkably, this three-pound marvel reading about itself emerged through evolutionary processes without any designer, blueprint, or intended purpose. As explored in my previous essay Evolution: The Engine of Change (Young, 2025), complex biological systems arise through variation, selection, and inheritance operating over vast timescales. The human brain represents the current product of this process—approximately 600 million years of nervous system evolution since the first bilateral animals, and approximately 2-3 million years since marked brain expansion began in our hominin ancestors (Herculano-Houzel, 2012).

The device used to read these words is indeed remarkable—a testament to human ingenuity and scientific understanding. But the device reading about that device is something altogether different: not designed but evolved, not programmed but developed, not a computer but a biological organism that happens to compute. Understanding both what the brain shares with computers and where it diverges from them provides insight into the nature of intelligence, consciousness, and what it means to be a thinking being in a physical world.


Continue Reading: What Happens Next?

This essay is Part 1 of a trilogy examining brain computation, predictive coding, and practical applications. It explored how the brain computes—its remarkable processing power, parallel architecture, and energy efficiency. But understanding the mechanisms raises profound questions about what those mechanisms create: How does predictive coding shape our experience of reality? What does it mean to live in a world that's primarily brain-generated?

Part 2: Living in a Fabricated World

Building directly on the computational foundations explored here, the second essay examines the profound implications of how brains construct reality through predictive coding and internal model-building.

Walk into an empty concert arena and look across the seating—row after row of chairs stretching into the distance. But after the first few rows, your brain isn't seeing those chairs at all. It's fabricating them. This isn't metaphor; it's how the predictive coding mechanisms work in practice.

Key explorations include:

  • The difference between closed systems (where certainty exists—gravity, pancakes, physics) and open systems (where it cannot—human behaviour, professional judgement)
  • Why "do no harm" is structurally impossible in professional practice
  • How collective model construction creates catastrophic groupthink in intelligent organisations
  • The wisdom embedded in jury systems that value diverse fabrications over single expert views
  • What Newton and Socrates teach us about "measuring the measure" as the path to reality
  • The model construction spectrum from the mosquito's 220,000 neurons to Einstein's thought experiments

From positioned knowledge in safeguarding and medical diagnosis to the paradox of living consciously in fabricated worlds, discover what it means to work wisely with constructed models rather than claiming impossible certainty. The trilogy concludes with Part 3, From Zebras to Ravens, applying these insights to safeguarding autonomous adolescents.

Continue to Part 2: Living in a Fabricated World

References

Abernethy, B. (1990). Anticipation in squash: Differences in advance cue utilization between expert and novice players. *Journal of Sports Sciences*, 8(1), 17-34.

Attwell, D., & Laughlin, S. B. (2001). An energy budget for signaling in the grey matter of the brain. *Journal of Cerebral Blood Flow & Metabolism*, 21(10), 1133-1145.

Azevedo, F. A., Carvalho, L. R., Grinberg, L. T., Farfel, J. M., Ferretti, R. E., Leite, R. E., ... & Herculano-Houzel, S. (2009). Equal numbers of neuronal and nonneuronal cells make the human brain an isometrically scaled-up primate brain. *Journal of Comparative Neurology*, 513(5), 532-541.

Backus, J. (1978). Can programming be liberated from the von Neumann style? A functional style and its algebra of programs. *Communications of the ACM*, 21(8), 613-641.

Barsalou, L. W. (2008). Grounded cognition. *Annual Review of Psychology*, 59, 617-645.

Bliss, T. V., & Collingridge, G. L. (1993). A synaptic model of memory: long-term potentiation in the hippocampus. *Nature*, 361(6407), 31-39.

Cator, L. J., Arthur, B. J., Harrington, L. C., & Hoy, R. R. (2009). Harmonic convergence in the love songs of the dengue vector mosquito. *Science*, 323(5917), 1077-1079.

Chalmers, D. J. (1995). Facing up to the problem of consciousness. *Journal of Consciousness Studies*, 2(3), 200-219.

Chase, W. G., & Simon, H. A. (1973). Perception in chess. *Cognitive Psychology*, 4(1), 55-81.

Clark, A. (1997). *Being There: Putting Brain, Body, and World Together Again*. MIT Press.

Corfas, R. A., & Vosshall, L. B. (2015). The cation channel TRPA1 tunes mosquito thermotaxis to host temperatures. *eLife*, 4, e11750.

Cowan, N. (2001). The magical number 4 in short-term memory: A reconsideration of mental storage capacity. *Behavioral and Brain Sciences*, 24(1), 87-114.

Dehaene, S., & Naccache, L. (2001). Towards a cognitive neuroscience of consciousness: basic evidence and a workspace framework. *Cognition*, 79(1-2), 1-37.

Friston, K. (2010). The free-energy principle: a unified brain theory? *Nature Reviews Neuroscience*, 11(2), 127-138.

Furber, S. B., Galluppi, F., Temple, S., & Plana, L. A. (2024). The DeepSouth neuromorphic supercomputer: Architecture and applications. *Nature Machine Intelligence*, 6(2), 156-167.

Gollisch, T., & Meister, M. (2010). Eye smarter than scientists believed: Neural computations in circuits of the retina. *Neuron*, 65(2), 150-164.

Hebb, D. O. (1949). *The Organization of Behavior: A Neuropsychological Theory*. Wiley.

Herculano-Houzel, S. (2012). The remarkable, yet not extraordinary, human brain as a scaled-up primate brain and its associated cost. *Proceedings of the National Academy of Sciences*, 109(Supplement 1), 10661-10668.

Koch, C. (1999). *Biophysics of Computation: Information Processing in Single Neurons*. Oxford University Press.

Koch, C., & Laurent, G. (1999). Complexity and the nervous system. *Science*, 284(5411), 96-98.

Koch, K., McLean, J., Segev, R., Freed, M. A., Berry II, M. J., Balasubramanian, V., & Sterling, P. (2006). How much the eye tells the brain. *Current Biology*, 16(14), 1428-1434.

Kohn, A. (2007). Visual adaptation: Physiology, mechanisms, and functional benefits. *Journal of Neurophysiology*, 97(5), 3155-3164.

Kolb, B., & Whishaw, I. Q. (1998). Brain plasticity and behavior. *Annual Review of Psychology*, 49(1), 43-64.

Landlinger, J., Lindinger, S. J., Stöggl, T., Wagner, H., & Müller, E. (2012). Kinematic differences of elite and high-performance tennis players in the cross court and down the line forehand. *Sports Biomechanics*, 11(3), 280-295.

Lennie, P. (2003). The cost of cortical computation. *Current Biology*, 13(6), 493-497.

Levy, W. B., & Calvert, V. G. (2021). Communication consumes 35 times more energy than computation in the human cortex, but both costs are needed to predict synapse number. *Proceedings of the National Academy of Sciences*, 118(18), e2008173118.

Marder, E. (2012). Neuromodulation of neuronal circuits: back to the future. *Neuron*, 76(1), 1-11.

Markram, H. (2006). The Blue Brain Project. *Nature Reviews Neuroscience*, 7(2), 153-160.

Martinez-Conde, S., Macknik, S. L., & Hubel, D. H. (2004). The role of fixational eye movements in visual perception. *Nature Reviews Neuroscience*, 5(3), 229-240.

Masland, R. H. (2012). The neuronal organization of the retina. *Neuron*, 76(2), 266-280.

Mayford, M., Siegelbaum, S. A., & Kandel, E. R. (2012). Synapses and memory storage. *Cold Spring Harbor Perspectives in Biology*, 4(6), a005751.

McMeniman, C. J., Corfas, R. A., Matthews, B. J., Ritchie, S. A., & Vosshall, L. B. (2014). Multimodal integration of carbon dioxide and other sensory cues drives mosquito attraction to humans. *Cell*, 156(5), 1060-1071.

Meyer, T., & Olson, C. R. (2011). Statistical learning of visual transitions in monkey inferotemporal cortex. *Proceedings of the National Academy of Sciences*, 108(48), 19401-19406.

Miller, G. A. (1956). The magical number seven, plus or minus two: Some limits on our capacity for processing information. *Psychological Review*, 63(2), 81-97.

Muijres, F. T., Chang, S. W., Voesenek, C. J., Spedding, G. R., Dudley, R., & Hedenström, A. (2017). Escaping blood-fed malaria mosquitoes minimize tactile detection without compromising on take-off speed. *Journal of Experimental Biology*, 220(20), 3751-3762.

Nakata, H., Yoshie, M., Miura, A., & Kudo, K. (2010). Characteristics of the athletes' brain: Evidence from neurophysiology and neuroimaging. *Brain Research Reviews*, 62(2), 197-211.

National Institute of Standards and Technology (NIST). (2024). Brain-inspired computing can help us create faster, more energy-efficient devices. *NIST Taking Measure*. Retrieved from https://www.nist.gov/blogs/taking-measure/brain-inspired-computing

O'Reilly, R. C., & Munakata, Y. (2000). *Computational Explorations in Cognitive Neuroscience: Understanding the Mind by Simulating the Brain*. MIT Press.

Phan, H. V., & Park, H. C. (2019). Insect-inspired, tailless, hover-capable flapping-wing robots: Recent progress, challenges, and future directions. *Progress in Aerospace Sciences*, 111, 100573.

Potter, M. C., Wyble, B., Hagmann, C. E., & McCourt, E. S. (2014). Detecting meaning in RSVP at 13 ms per picture. *Attention, Perception, & Psychophysics*, 76(2), 270-279.

Purves, D., Augustine, G. J., Fitzpatrick, D., Katz, L. C., LaMantia, A. S., McNamara, J. O., & Williams, S. M. (2001). *Neuroscience* (2nd ed.). Sinauer Associates.

Raji, J. I., & Vosshall, L. B. (2016). Mosquito brain atlas aims to reveal neural circuitry of behavior. *HHMI News*. Retrieved from https://www.hhmi.org/news/mosquito-brain-atlas

Rao, A., Plank, P., Wild, A., & Maass, W. (2022). A long short-term memory for AI applications in spike-based neuromorphic hardware. *Nature Machine Intelligence*, 4(5), 467-479.

Rao, R. P., & Ballard, D. H. (1999). Predictive coding in the visual cortex: A functional interpretation of some extra-classical receptive-field effects. *Nature Neuroscience*, 2(1), 79-87.

Smallegange, R. C., Verhulst, N. O., & Takken, W. (2011). Sweaty skin: an invitation to bite? *Trends in Parasitology*, 27(4), 143-148.

Sterling, P., & Laughlin, S. (2015). *Principles of Neural Design*. MIT Press.

Summerfield, C., Egner, T., Greene, M., Koechlin, E., Mangels, J., & Hirsch, J. (2006). Predictive codes for forthcoming perception in the frontal cortex. *Science*, 314(5803), 1311-1314.

Tulving, E., & Schacter, D. L. (1990). Priming and human memory systems. *Science*, 247(4940), 301-306.

van Breugel, F., Riffell, J., Fairhall, A., & Dickinson, M. H. (2015). Mosquitoes use vision to associate odor plumes with thermal targets. *Current Biology*, 25(16), 2123-2129.

Wandell, B. A., & Winawer, J. (2015). Computational neuroimaging and population receptive fields. *Trends in Cognitive Sciences*, 19(6), 349-357.

Williams, R. W., & Herrup, K. (1988). The control of neuron number. *Annual Review of Neuroscience*, 11(1), 423-453.

Winding, M., Pedigo, B. D., Barnes, C. L., Patsolic, H. G., Park, Y., Kazimiers, T., ... & Priebe, C. E. (2023). The connectome of an insect brain. *Science*, 379(6636), eadd9330.

Zheng, J., & Meister, M. (2024). The unbearable slowness of being: Why do we live at 10 bits/s? *arXiv preprint* arXiv:2408.10234.

Topics: #BrainScience #Neuroscience #ComputationalNeuroscience #CognitiveScience #BrainVsComputer #NeuralProcessing #BiologicalIntelligence #EvolutionaryPsychology #ArtificialIntelligence #Neuromorphic #HumanBrain #Consciousness #InformationProcessing