Reimagining Neural Networks: Using Predictive Coding Modules over Traditional Backpropagation to Model Neural Circuitry

Written by Shree Bhattacharya

Imagine a world where artificial intelligence (AI) learns as intuitively as the human brain. This article delves into predictive coding, an innovative, biologically inspired alternative to traditional backpropagation for training neural network models. Unlike backpropagation, which trains networks by tweaking their settings whenever they make a mistake, predictive coding suggests that our brains work by constantly guessing what will happen next and then adjusting those guesses when outcomes differ from expectations. Instead of merely fixing a mistake after it happens, predictive coding focuses on fine-tuning predictions to minimize future mismatches. Predictive coding models were tested across multiple neural network topologies, with a focus on learning speed, accuracy, adaptability, and alignment with biological processes. The findings indicate that predictive coding is more efficient than backpropagation, requiring less computational power and training while generalizing better to new contexts. Throughout this article, Shree raises the question of whether predictive coding could revolutionize how we develop intelligent systems and help bridge the gap between human intelligence and AI.


The Essence of Predictive Coding

As neuroscience and artificial intelligence converge, a groundbreaking paradigm is emerging—one that reimagines our understanding of brain function while revolutionizing neural network learning. This framework, known as predictive coding, shifts our perspective from reactive to proactive, presenting the brain as not just a passive recipient of sensory inputs but an active predictor, constantly generating and refining models of its environment. Predictive coding stands in stark contrast to traditional backpropagation, the cornerstone of many artificial neural networks, which adjusts for errors only after they occur. By embracing predictive coding, we move closer to unraveling the brain's intricate mechanisms, holding profound implications for the future of AI and our understanding of the human mind.

Neural Networks and Their Connection to Biological Systems

To fully appreciate the significance of predictive coding, it is important to understand the structure and function of neural networks and how they relate to biological systems. Neural networks are like digital brains that help computers learn to perform tasks by mimicking how our own brain works. Just as our brains process information to make decisions, neural networks do the same but through mathematical processes. To understand how they function, we can break down their structure into three main parts: the input layer, hidden layers, and output layer.

The input layer is where the process begins, acting like the eyes of the network. This layer receives the raw data (e.g., images, sounds, or other information) that the network needs to learn from. Think of it as similar to how our eyes take in visual information and send it to our brain. For instance, if you're showing a neural network a picture of a cat, the input layer would take in each small detail of the image, perhaps the shape of the cat's ear or the color of its fur, and pass this information on to the next part of the network.

Next, the hidden layers come into play, serving as the network's thinking and processing center. These layers are where the real work happens: they process the information received from the input layer to make sense of it. Each hidden layer is made up of nodes, which are like tiny brain cells. These nodes perform mathematical operations on the data they receive, such as adding weighted inputs together and applying functions, before passing the processed information on to the next layer. In our example of the cat, the hidden layers might begin identifying certain features, recognizing the shape of an ear or detecting the texture of the fur, thus building an understanding of what the image represents. A key part of this process involves activation functions, decision-making steps within each node that determine whether certain information is important enough to pass on to the next layer. This is similar to how our brain cells decide whether to send a signal based on the strength of the input they receive.
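To make this concrete, here is a minimal sketch in Python (using NumPy) of what one node in a hidden layer might compute. The particular inputs, weights, bias, and the choice of a ReLU activation function are illustrative assumptions rather than details of any specific network.

```python
# A minimal sketch of a single hidden-layer node (illustrative values only).
import numpy as np

def relu(z):
    """ReLU activation: pass the signal on only if it is positive."""
    return np.maximum(0.0, z)

# Made-up values arriving from the input layer.
inputs = np.array([0.8, 0.2, 0.5])

# Each incoming connection has a weight; the node also has a bias term.
weights = np.array([0.4, -0.6, 0.9])
bias = 0.1

# The node sums its weighted inputs, then the activation function decides
# whether the result is strong enough to pass on to the next layer.
weighted_sum = np.dot(weights, inputs) + bias
output = relu(weighted_sum)

print(f"weighted sum = {weighted_sum:.2f}, node output = {output:.2f}")
```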

This intricate process culminates in the output layer, where the network makes its final decision or prediction, akin to how our brain concludes "That's a cat!" If the network were designed to recognize different animals, the output layer would evaluate all possibilities and select the one that best matches the input data.
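Putting the three layers together, the toy sketch below pushes a made-up "image" through an input layer, a single hidden layer, and an output layer that picks the most likely animal. The layer sizes, the random weights, and the class names are all assumptions chosen only for illustration.

```python
# A toy forward pass through input, hidden, and output layers.
import numpy as np

rng = np.random.default_rng(0)

def relu(z):
    return np.maximum(0.0, z)

def softmax(z):
    """Turn raw output scores into probabilities that sum to one."""
    e = np.exp(z - z.max())
    return e / e.sum()

# Input layer: a flattened 4x4 "image" (16 made-up pixel values).
x = rng.random(16)

# Hidden layer: 8 nodes, each with its own weights and bias.
W1, b1 = rng.normal(size=(8, 16)), np.zeros(8)
h = relu(W1 @ x + b1)

# Output layer: one score per animal class.
classes = ["cat", "dog", "bird"]
W2, b2 = rng.normal(size=(3, 8)), np.zeros(3)
probs = softmax(W2 @ h + b2)

# The network's final decision is the class with the highest probability.
print(dict(zip(classes, probs.round(3))))
print("prediction:", classes[int(np.argmax(probs))])
```

Because the weights here are random rather than trained, the final "prediction" is arbitrary; training those weights is what the next section turns to.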

Current vs. New Paradigms for Neural Networks

The current paradigm for adjusting the weights of connections between neurons in a network, based on the errors observed during training, is backpropagation. When a neural network makes a prediction, it compares the predicted output to the actual, desired output; the difference between the two is the error. Backpropagation calculates how much each connection in the network contributed to this error and adjusts the weights accordingly to reduce the error in future predictions. This process is repeated many times over, allowing the network to gradually improve its accuracy. While backpropagation is a powerful tool in artificial neural networks, it is considered biologically implausible. One key issue is that backpropagation requires precise, symmetric feedback connections to propagate errors backward through the network; the human brain's neural circuits are far more complex and lack such clear, direct pathways for error correction. Moreover, backpropagation assumes that neurons can access global information about the error signal, which is not feasible in biological systems: a neuron is only aware of the signals it receives from its immediate neighbors, not of the state of the entire network.
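As a rough illustration of this loop, the sketch below trains a tiny one-hidden-layer network with backpropagation and plain gradient descent on a made-up regression task; the data, layer sizes, learning rate, and step count are arbitrary choices for the example, not values from any study described here.

```python
# A minimal sketch of backpropagation on a tiny one-hidden-layer network.
import numpy as np

rng = np.random.default_rng(1)
x = rng.random((4, 3))   # 4 made-up examples, 3 input features each
y = rng.random((4, 1))   # desired outputs

W1 = rng.normal(size=(3, 5))   # input  -> hidden weights
W2 = rng.normal(size=(5, 1))   # hidden -> output weights
lr = 0.1
n = len(x)

for _ in range(300):
    # Forward pass: compute the network's prediction.
    h = np.tanh(x @ W1)        # hidden-layer activity
    y_hat = h @ W2             # predicted output

    # The error is the gap between the prediction and the desired output.
    error = y_hat - y

    # Backward pass: work out how much each weight contributed to the error...
    grad_W2 = h.T @ error / n
    grad_h = error @ W2.T
    grad_W1 = x.T @ (grad_h * (1 - h ** 2)) / n   # derivative of tanh

    # ...and nudge every weight in the direction that shrinks the error.
    W2 -= lr * grad_W2
    W1 -= lr * grad_W1

print("final mean squared error:",
      float(np.mean((np.tanh(x @ W1) @ W2 - y) ** 2)))
```

Notice that computing grad_W1 reuses the same forward weights W2 in the backward direction; this need for symmetric feedback connections is part of what makes the procedure biologically implausible.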

Predictive coding, on the other hand, rests on the idea that the brain is engaged in a continuous process of prediction and refinement. Rather than merely registering a prediction error, the brain actively responds by adjusting its internal model, a process driven by neuroplasticity, the brain's remarkable ability to rewire itself. Through this mechanism, the brain strengthens or weakens specific synapses, effectively modifying the connections between neurons to improve its future predictions. For example, if you learn that your friend might not always greet you with a smile, and that this has no bearing on their mood, this new understanding is stored and becomes part of how you anticipate their behavior in the future. When a similar situation arises, your brain uses this updated model to generate a more nuanced prediction, anticipating a range of possible reactions rather than a single expected outcome. This refinement reduces the likelihood of repeating the old mistake of misreading your friend's mood, enhancing the accuracy of the brain's predictions. As the brain continuously updates its internal model with each new experience, learning from every prediction error to fine-tune its understanding of the world, it becomes increasingly adept at anticipating and navigating the complexities of everyday life.
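The same contrast can be sketched in code. The toy example below follows the general spirit of predictive coding rather than any specific published model: a higher-level belief is repeatedly refined until its top-down prediction matches the sensory input, and the connection weights are then adjusted using only the locally available prediction error and activity. The sizes, learning rates, and random input are all illustrative assumptions.

```python
# A one-layer sketch of the predictive-coding idea (illustrative only).
import numpy as np

rng = np.random.default_rng(2)
x = rng.random(6)                          # made-up sensory input
W = rng.normal(scale=0.5, size=(6, 3))     # top-down (generative) weights
lr_mu, lr_w = 0.2, 0.02

for trial in range(200):
    mu = np.zeros(3)                       # the higher-level "belief"

    # Inference: refine the belief until its prediction matches the input.
    for _ in range(80):
        prediction = W @ mu
        error = x - prediction             # prediction error, computed locally
        mu += lr_mu * (W.T @ error)        # adjust the belief to shrink it

    # Learning: update the weights using only locally available quantities
    # (the error at this layer and the activity of the belief).
    W += lr_w * np.outer(error, mu)

print("remaining prediction error:",
      round(float(np.linalg.norm(x - W @ mu)), 4))
```

The design point to notice is locality: each update uses only the error and activity available at that connection, rather than an error signal propagated backward through the entire network.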

Expanding on Biological Foundations of Predictive Coding

As artificial intelligence converges with fields like neuroscience and concepts like predictive coding continue to gain attention, researchers must draw on theories of brain function. A key foundation of predictive coding is Karl Friston's Free-Energy Principle. Friston, a British neuroscientist at University College London, describes in this principle the brain's imperative to minimize uncertainty. In this context, free energy is a measure of the difference between the brain's predictions about the world and the actual sensory inputs it receives. The brain, in an effort to maintain a stable and coherent perception of reality, constantly adjusts to minimize this discrepancy. By reducing free energy, the brain ensures that its internal model remains both accurate and adaptive, perpetually refining its predictions based on incoming data. This process is a fundamental mechanism by which the brain maintains homeostasis and adapts to its environment.
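As a simplified illustration, and an expository assumption rather than Friston's full derivation, the free energy for a single sensory signal with Gaussian noise behaves like a precision-weighted squared prediction error plus terms that do not depend on the prediction:

```latex
% Simplified single-signal case (illustrative assumption):
%   x        = the actual sensory input
%   \mu      = the brain's prediction of that input
%   \sigma^2 = the expected noisiness (inverse precision) of the signal
F \;\approx\; \frac{(x - \mu)^2}{2\sigma^2} \;+\; \text{const.}
```

In this simplified picture, driving the prediction toward the actual input, or improving the model that generates the prediction, is what reducing free energy amounts to.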

The concept of free energy ties closely to the brain's ability to anticipate and react to stimuli before they fully unfold. Rather than passively waiting for information to arrive and then responding, as many conventional AI systems do, the brain actively generates hypotheses about what it expects to happen next. These hypotheses are continuously tested against real-world data, and any discrepancies are used to update the brain's model of the environment. This constant cycle of prediction, comparison, and correction is what allows the brain to navigate a complex and ever-changing world with remarkable efficiency.

Hebbian learning is another biological principle underpinning how predictive coding models operate, famously captured by the phrase "cells that fire together, wire together." This principle reflects the brain's extraordinary capacity to adapt and reorganize itself: synaptic connections between neurons are strengthened with repeated co-activation. Because frequently used connections grow stronger over time, the brain can adapt its structure in response to recurring patterns of activity, embedding experiences into its neural architecture. Ultimately, Hebbian learning supports the kind of locally driven updating of predictions and refinement of the internal model that underlies predictive coding.
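In code, the Hebbian rule is little more than a product of pre- and post-synaptic activity. The short sketch below, again in Python with NumPy, uses made-up firing rates and an arbitrary learning rate purely to show how repeated co-activation strengthens the corresponding connection weights.

```python
# A minimal sketch of a Hebbian weight update (made-up firing rates).
import numpy as np

pre_activity = np.array([1.0, 0.0, 0.8])   # activity of "sending" neurons
post_activity = np.array([0.9, 0.1])       # activity of "receiving" neurons

W = np.zeros((2, 3))      # synaptic strengths, one per connection
learning_rate = 0.1

# Repeated co-activation strengthens the corresponding connections:
# delta_w = learning_rate * post * pre ("fire together, wire together").
for _ in range(5):
    W += learning_rate * np.outer(post_activity, pre_activity)

print(W.round(2))
```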

The biological plausibility of predictive coding is further supported by the work of neuroscientists such as Andre Bastos, who have studied the neural mechanisms underlying this process. Through detailed electrophysiological studies, which record the electrical activity of neurons, Bastos and his colleagues have demonstrated that specific cortical microcircuits (i.e., small networks of neurons within the brain's cortex) are structured to generate and propagate prediction errors, closely mirroring the way predictive coding models propagate prediction errors. Predictive coding is therefore a robust model that captures the dynamic and hierarchical nature of brain function, and this alignment between theory and biological reality underscores its potential as a more accurate and biologically faithful representation of neural processes.

The Exciting Future

Predictive coding models have been shown to surpass backpropagation in critical areas: they demand less computational power, learn at a faster pace, and excel at adapting to new situations. By adopting this biologically inspired framework, we are on the brink of creating AI systems that mirror the brain's extraordinary capacity to learn and adapt. This shift isn't just a technical advancement; it's a leap toward blurring the lines between artificial and human intelligence. As we push the boundaries of predictive coding and weave it deeper into neural networks, we're not just refining algorithms; we're unlocking a deeper understanding of the brain's mysteries. This exciting evolution has the potential to revolutionize both neuroscience and AI, forging a future where technology and biology intertwine in ways we've only just begun to imagine.