TLDR: The Interface Theory of Perception

A summary of the 2015 paper by Donald Hoffman.

There’s this paper I came across recently that I’d like to share with you — Donald Hoffman’s The Interface Theory of Perception. It deals with how we, as living beings, perceive the world around us.

(Source: Byte Magazine, April 1979)

In Hoffman’s own words:

“Informally, the interface theory of perception says that the relationship between our perceptions and reality is analogous to the relationship between a desktop interface and a computer.”

How can this work?

In neuroscience, it’s well accepted that perception is a construction of the brain: in a sense, everyone is a brain in a vat. Everything about the world around you is, in that sense, a figment of your imagination. Every color you see, every touch you feel… all of it is a show put on by your brain in order to get you through the day.

Likewise, what you’re looking at right now are pixels on your screen which spell out the words you’re reading. Maybe you have this open in a browser window, as one of several tabs. That’s great and all, except for the fact that none of this is ‘real’, so to speak. It’s a convenient interface that lets you make sense of the information on your screen. What’s actually going on is that all of that information is stored as ‘on’ or ‘off’ switch states in the transistors inside your machine (transistors wired together into logic gates). 0’s and 1’s. By doing work, you change the pattern of those 0’s and 1’s. To you, of course, it might look like editing a photo, downloading a PDF or playing a video game.
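You can peel back one layer of that interface yourself. Here’s a toy illustration in Python (my own example, nothing Hoffman-specific): the text you “see” is, underneath, just a pattern of bits, and “editing” it is really flipping some of them.

```python
# The word you "see" is an interface; underneath, it is stored as bits.
text = "Porsche"

# Each character is encoded as one byte: eight on/off states.
bits = " ".join(f"{byte:08b}" for byte in text.encode("ascii"))
print(bits)

# "Editing" is flipping bits: toggling the lowest bit of 'P'
# (0b01010000) turns it into 'Q' (0b01010001).
flipped = chr(ord("P") ^ 0b1)
print(flipped)  # Q
```

The point is the same as with the desktop metaphor: the word is a useful rendering, not the thing itself.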

Speaking of video games, here’s another analogy, from the man himself:

“If I’m in California and you’re in New York and we’re competing in an online video game trying to steal cars, I might find [a Porsche] before you do and steal it. But the Porsche on my screen is not numerically identical to the Porsche on your screen. What is behind my screen that triggers it to display a Porsche is a complex tangle of code and transistors that does not resemble a Porsche. I assume that the Porsche on my screen is similar to the Porsche on yours, so that we can discuss genuinely and compete for the Porsche. But there is no public Porsche.”

Hoffman goes on to say:

“Our perceptions have not been shaped to make it easy to know the true structure of the world but instead to hide its complexity.”

It would be intuitive to say (and, as it happens, this is conventional wisdom) that as organisms become more and more complex through evolution, they need to model reality with increasing accuracy to stay fit. At all times, their representation of reality must be homomorphic to reality itself. Or in other words, the true structure of our world must be preserved when converted into your brain’s representation of it. So for example, if I had three switches — off, on, off — a homomorphic representation might be something like 010. The structure is preserved despite the change in representation. (When, as here, no information is lost at all, the map is in fact an isomorphism; a homomorphism only has to preserve structure.)
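To make “structure-preserving” concrete, here’s a minimal sketch (my own toy example, not from the paper). A map from switch states to bits preserves structure with respect to toggling if toggling a switch and then encoding it gives the same result as encoding it and then flipping the bit:

```python
# Toy homomorphism: map switch states to bits so that the "toggle"
# operation in the world of switches is mirrored by "flip" in the
# world of bits.
encode = {"off": 0, "on": 1}

def toggle(state):   # operation on switches
    return "on" if state == "off" else "off"

def flip(bit):       # corresponding operation on bits
    return 1 - bit

# Structure preservation: encode(toggle(s)) == flip(encode(s))
for s in ("off", "on"):
    assert encode[toggle(s)] == flip(encode[s])

# The three switches off, on, off become the string "010".
switches = ["off", "on", "off"]
print("".join(str(encode[s]) for s in switches))  # 010
```

The representation changed (switches became digits), but nothing about how the system behaves was lost in translation.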

However, as Hoffman argues, your representation of reality doesn’t have to be homomorphic at all. It might be, but only when there’s an advantage to doing so — or in other words, when there exists a fitness payoff. It turns out there are plenty of cases where there is no such advantage. Even worse, attempting to accurately model reality might actually put you at a disadvantage in some cases, whether through unnecessary resource consumption or distraction. So if there are three switches, you might only see two of them, as seeing the third might hurt your chances of survival.
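Hoffman backs this with evolutionary game simulations. The following is my own highly simplified sketch of the flavor of that argument, not his actual model. The key assumption is that fitness is non-monotonic in the true quantity (too little or too much of a resource is bad, like water or salt). Given the same limited number of perceptual states, an agent whose percepts track payoff beats one whose percepts faithfully track the quantity itself:

```python
import random

def fitness(quantity):
    # Toy non-monotonic payoff: a moderate amount is best,
    # too little or too much is bad.
    return -abs(quantity - 50)

def truth_percept(quantity):
    # "Truth" agent: a coarse but faithful view of the quantity.
    return quantity // 25            # 4 perceptual states: 0..3

def fitness_percept(quantity):
    # "Interface" agent: the same number of states, tuned to payoff.
    return fitness(quantity) // 17   # 4 coarse payoff states

def score(percept_fn, trials=50_000, seed=0):
    # Both agents face the same sequence of choices; each picks
    # whichever resource has the larger percept.
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        a, b = rng.randint(0, 99), rng.randint(0, 99)
        chosen = a if percept_fn(a) >= percept_fn(b) else b
        total += fitness(chosen)
    return total / trials

print(score(truth_percept), score(fitness_percept))
```

The truth-seeing agent keeps grabbing the larger quantity, which is exactly the wrong move past the sweet spot; the interface agent, seeing only payoff, averages a higher fitness with no more perceptual resolution.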

According to Interface Theory, even things like spacetime itself are constructions of this video game world that our brains render. The true nature of reality is hidden from us.

In considering ideas like Interface Theory, I can’t help but think of a quote from J. B. S. Haldane:

“The universe is not only stranger than we imagine, it is stranger than we can imagine.”

Let’s keep on trying though, shall we?

Aside: What’s cool is that Interface Theory (or something close to it) seems to complement a certain part of Stephen Wolfram’s fundamental physics project quite cleanly.

For those not familiar, Stephen Wolfram is working on deriving the laws of physics from simple sets of rules that repeatedly transform graphs. The hope is to arrive at a fundamental theory of physics that resolves the incompatibilities between General Relativity and the Standard Model of particle physics.
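For a feel of what “graphs transformed by simple rules” means, here’s a toy sketch in the spirit of the project (my own, not Wolfram’s actual rules or data structures): a single rewrite rule, applied over and over to a tiny seed graph, makes structure grow.

```python
def step(edges, next_node):
    # Apply the rule (x, y) -> (x, z), (z, y) to every edge, where z
    # is a fresh node: each edge gets subdivided, and the graph grows.
    new_edges = []
    for x, y in edges:
        z = next_node
        next_node += 1
        new_edges += [(x, z), (z, y)]
    return new_edges, next_node

edges, fresh = [(0, 1)], 2   # seed graph: two nodes joined by one edge
for _ in range(3):
    edges, fresh = step(edges, fresh)

print(len(edges))  # 8 edges after three steps (the count doubles each step)
```

In Wolfram’s project the rules act on hypergraphs and are vastly richer, but the shape of the idea is the same: large-scale structure (in his case, hopefully, spacetime itself) emerging from blind, repeated application of a small rule.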

The interesting connection with Interface Theory has to do with something Wolfram calls “rulial space relativity”. Quote:

“I’ve always assumed that any entity that exists in our universe must at least “experience the same physics as us”. But now I realize that this isn’t true. There’s actually an almost infinite diversity of different ways to describe and experience our universe, or in effect an almost infinite diversity of different “planes of existence” for entities in the universe—corresponding to different possible reference frames in rulial space, all ultimately connected by universal computation and rule-space relativity.”

He explains it in more detail in this video:

Another aside: This also reminds me of something Nassim Taleb wrote in Antifragile, about how too much information can actually harm you (formatting mine):

“The more frequently you look at data, the more noise you are disproportionally likely to get (rather than the valuable part called the signal); hence the higher the noise to signal ratio. And there is a confusion, that is not psychological at all, but inherent in the data itself. Say you look at information on a yearly basis, for stock prices or the fertilizer sales of your father-in-law’s factory, or inflation numbers in Vladivostok. Assume further that for what you are observing, at the yearly frequency the ratio of signal to noise is about one to one (say half noise, half signal) — it means that about half of changes are real improvements or degradations, the other half comes from randomness. This ratio is what you get from yearly observations. But if you look at the very same data on a daily basis, the composition would change to 95% noise, 5% signal. And if you observe data on an hourly basis, as people immersed in the news and market price variations do, the split becomes 99.5% noise to 0.5% signal. That is two hundred times more noise than signal — which is why anyone who listens to news (except when very, very significant events take place) is one step below sucker.”
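Taleb’s numbers fall out of a standard back-of-the-envelope model (my framing, not his exact derivation): over an observation window, the signal part of a change scales with the window’s length while the noise part scales with its square root, so shrinking the window drives the noise share toward 100%.

```python
import math

def noise_share(windows_per_year):
    # Assume signal-to-noise is 1:1 at the yearly scale (Taleb's premise).
    # Signal over a window scales with its length, noise with its square
    # root, so the SNR over 1/n of a year is 1/sqrt(n).
    snr = 1 / math.sqrt(windows_per_year)
    return 1 / (1 + snr)   # fraction of an observed change that is noise

for label, n in [("yearly", 1), ("daily", 365), ("hourly", 365 * 24)]:
    print(f"{label:7s} ~{noise_share(n):.1%} noise")
```

Under these assumptions the yearly share is exactly 50% and the daily share lands right around Taleb’s 95%; the hourly figure depends on how many hours you count in a year, so it won’t match his 99.5% exactly, but the direction is the same: look more often, see more noise.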

Edit: changed the Wolfram video from this to this - as I feel this new one gets the idea across better.

If you liked this post, feel free to share it with your friends! If you have any feedback or if I got anything wrong, please let me know! | @savsidorov