Unbreakable Memory – Neural Networks as Holograms, Processors as Porcelain

Krzysztof Michalik

April 16, 2025 · 2 min read

It’s astonishing how even a single faulty transistor, one of billions inside a CPU, can bring an entire system to a halt. Computers, despite their beauty and precision, can be fragile. To put it as an analogy: computers are like porcelain, elegant and exact, but one fracture can render them useless. Years ago, when I started experimenting with neural networks (then mostly multilayer perceptrons), I noticed something that struck me as profound, something I came to call the holographic effect.

Just like a hologram (a 3D image formed by laser interference), which retains the full image even if the glass plate it’s recorded on shatters, neural networks exhibit a kind of resilience. Break a hologram, and every piece, though distorted, still contains a blurry but complete version of the original image. Neural networks behave in a remarkably similar way: even when some neurons or weights are removed, whether through dropout, pruning, or outright failure, the network often continues to function. Quality may decline, but functionality remains. I’ve demonstrated this many times in class, using it as a powerful teaching moment to explain the true nature of neural computation.
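The effect is easy to demonstrate even without a trained network. The toy sketch below (plain Python, with invented numbers, not an actual multilayer perceptron) stores a single target value across a hundred redundant weights; knocking out a tenth of them at random degrades the output only proportionally, rather than destroying it:

```python
import random

random.seed(0)

# Toy "distributed representation": the target value 1.0 is stored
# across 100 small weights rather than in any single location.
N = 100
weights = [1.0 / N] * N  # each unit holds a tiny share of the signal

def output(ws, x=1.0):
    """Sum the contributions of all units for input x."""
    return sum(w * x for w in ws)

print(output(weights))  # intact network: ~1.0

# Simulate hardware failure or pruning: knock out 10% of units at random.
damaged = list(weights)
for i in random.sample(range(N), 10):
    damaged[i] = 0.0

print(output(damaged))  # degraded but still close: ~0.9
```

A real network behaves analogously but less linearly: because information is spread across many weights, pruning a fraction of them typically costs accuracy gradually rather than all at once.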

Now, contrast this with how a traditional computer processor works. It’s like high-grade porcelain—elegant, refined, but brittle. Lose one transistor, and the entire system can collapse. CPUs are deterministic, binary-accurate to the last bit, and designed with little to no fault tolerance built-in.

Classical computing assumes perfection—any deviation is catastrophic. Neural networks, on the other hand, assume the world is noisy and imperfect. That assumption reshapes everything—from architecture to philosophy.

Of course, fault-tolerant systems do exist, including specially engineered processors, but they are rare and highly specialized, used mainly in spacecraft and critical control systems. NASA's Space Shuttle, for example, flew five onboard computers. Four ran the same software in lockstep and voted on every result, so a single failed machine could simply be outvoted. The fifth acted as a referee of last resort, running independently written backup software in case the primary set itself went wrong.
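The voting idea can be sketched in a few lines. This is a loose illustration of N-version majority voting, not the shuttle's actual redundancy-management protocol; the function names and the three-vote threshold are my own simplifications:

```python
from collections import Counter

def resolve(primary_results, backup):
    """Accept the majority answer from the four primary computers;
    otherwise fall back to the independently programmed backup.
    (A toy sketch of N-version voting, not the real shuttle logic.)"""
    value, votes = Counter(primary_results).most_common(1)[0]
    if votes >= 3:          # clear consensus among the primaries
        return value
    return backup()         # no consensus: let the referee decide

# Three of four primaries agree, so the faulty one is outvoted.
print(resolve([42, 42, 41, 42], backup=lambda: 0))   # -> 42

# A 2-2 split: the independently written backup breaks the tie.
print(resolve([1, 2, 1, 2], backup=lambda: 7))       # -> 7
```

The crucial design choice is that the backup runs *independently written* software: a bug shared by all four primaries cannot also outvote the referee.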

In essence, artificial neural networks are not merely statistical constructs: they are inspired by the brain’s architecture, its adaptability, neuroplasticity, and resilience. My analogy of neural networks as holograms isn't coincidental; it reflects their ability to degrade gracefully, adapt, and retain function, even in the face of partial loss. This stands in stark contrast to traditional computing or statistical models, where even a single fault can be fatal.

Unfortunately, there's a growing trend—particularly outside the AI field—to reduce neural networks to mere statistical tools, even equating them with simple linear regression models [1]. This view is not only reductive; it is scientifically misleading. In a forthcoming essay, I intend to challenge this misconception and explore why AI cannot—and should not—be understood purely through the lens of classical statistics.

AI, in this light, is not about probability. It’s about resilience, generalization, and inspiration drawn from the most complex system we know: the human brain. What I call their “holographic memory” is a personal analogy that captures how information remains distributed and recoverable even when parts of the network are lost.
This difference between AI and traditional computing isn't just technical—it reveals a deeper, almost philosophical shift in how we think about intelligence and failure.

___________________________
[1] Md. Asifur Rahman, "Neural Network is nothing but a Linear Regression," Medium, https://medium.com/@asifurrahmanaust/lesson-3-neural-network-is-nothing-but-a-linear-regression-e05a328a0f23 (accessed April 16, 2025).