Predictive Processing is new to me. What little I know about it I just learned from this review of a book. So what follows is a brief attempt on my part to summarize a summary of what is apparently a difficult scholarly book by a philosopher about a theory of the mind that has arisen in the psychology community.
The idea is that the mind has many, many layers. The lowest layers are sensors, muscles, etc. The higher up we go, the more matching, modeling, and cognition we find. Each layer then sits at the intersection between the rough model from above and the noisy data from below.
Each layer compares the noisy data with the rough model, and naturally the two don’t match perfectly. The mismatch may annoy the layer, in which case it complains, or it may be tolerable, in which case the layer ignores it. How fastidious, how easily annoyed, is a given layer? That is adjustable. The term of art for the complaining is “surprisal.” Each layer can presumably kick problems upstairs or downstairs.
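To make that concrete, here is a toy Python sketch of one layer’s bookkeeping. Everything in it (the class, the names, the two thresholds) is my own invention for illustration, not anything from the book:

```python
# A toy sketch of a single layer's "surprisal" check. The tolerance
# (level threshold) and patience (duration threshold) are adjustable,
# matching the idea that fastidiousness can be tuned.

class Layer:
    def __init__(self, tolerance: float, patience: int):
        self.tolerance = tolerance   # how big a mismatch is annoying
        self.patience = patience     # how many ticks of annoyance before complaining
        self.annoyed_for = 0         # consecutive ticks over tolerance

    def surprisal(self, predicted: float, observed: float) -> float:
        # Mismatch between the rough model from above and the noisy data from below.
        return abs(predicted - observed)

    def tick(self, predicted: float, observed: float) -> str:
        s = self.surprisal(predicted, observed)
        if s <= self.tolerance:
            self.annoyed_for = 0
            return "ignore"          # tolerable mismatch: let it go
        self.annoyed_for += 1
        if self.annoyed_for < self.patience:
            return "ignore"          # annoying, but not for long enough yet
        return "complain"            # thresholds crossed: kick the problem along
```

So a brief flicker of mismatch gets ignored, while a sustained large one triggers a complaint.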
It’s as if the entire brain is about managing confirmation bias.
Moving beyond what the review talks about…
A layer whose subordinate notifies it of something it doesn’t expect has three choices, I think. Unexpected means: the report fails to confirm against the model the layer has.
- Ignore it: Presumably the problem doesn’t cross the threshold that triggers the layer to become annoyed. I’d assume there is both a duration and a level component to those thresholds, at minimum.
- Kick the problem back to the lower level:
- If the subordinate is a sensor, the layer might tell it that the info is boring and to ignore it. Which is to say, it adjusts the model and thresholds of the subordinate. I.e. “Yeah, your model is irritating me, please consider this alternative.”
- If the subordinate is a muscle, the layer says: “Yeah, fix that.” Or more likely, “Yeah, keep working on fixing that.” The review suggests this is actually how we move: we set up a model of the end state and then let the muscles work out how to get there.
- Kick the problem upstairs: reporting that the thresholds have been crossed and the model isn’t working.
- The sensor might report: “Gosh boss I see a snake!”
- The muscle might report: “Dude, I’m stuck.”
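The three choices above can be sketched as a tiny dispatch function. This is purely my own illustration; the names and the `can_fix_below` flag are made up:

```python
# A toy dispatcher for a layer's three options when a subordinate
# reports something unexpected: ignore it, kick it back downstairs,
# or kick it upstairs.

def handle_report(surprisal: float, threshold: float, can_fix_below: bool) -> str:
    if surprisal <= threshold:
        return "ignore"           # doesn't cross the annoyance threshold
    if can_fix_below:
        # Kick it back down: adjust the subordinate's model/thresholds,
        # or tell the muscle "keep working on fixing that."
        return "kick downstairs"
    # The model isn't working and we can't fix it locally: report upstairs.
    return "kick upstairs"
```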
If some high-level layer decides to move the pot off the stove, it pushes a model downstream, and in that model the pot is no longer on the stove. The layers below all push model adjustments to their subordinates until the muscles and sensors get to work on bringing the physical world into alignment.
If at some point the report comes in that your fingers are burning, that gets pushed upstream and the goal is adjusted to resolve the problem. A bit later the high-level system gets told its goals have not been met. Annoyed, some cursing is then planned.
The review has some fascinating things to say. For example, that there are global control systems that can turn the fastidiousness up or down, etc. Really interesting stuff.
I’m amused by how this has (weak) mimics in software systems.
For example, in model-view systems these days, people are rediscovering the idea of having a few layers between the model and the screen and using differencing engines to bring them into alignment (aka React).
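In that differencing spirit, here is a minimal sketch: compare the desired “model” of the screen with what’s currently rendered and emit only the patches needed to bring the two into alignment. All the names here are made up, and real React is far more elaborate:

```python
# Naive differencing engine: given the current rendered state and the
# desired model (both as flat dicts), produce the minimal set of patches.

def diff(current: dict, desired: dict) -> list:
    patches = []
    for key in desired:
        if current.get(key) != desired[key]:
            patches.append(("set", key, desired[key]))   # changed or new
    for key in current:
        if key not in desired:
            patches.append(("remove", key))              # no longer in the model
    return patches
```

Only the mismatches, the surprisal if you like, flow through; everything already in alignment is ignored.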
Or the way layers of process managers work to assure retry and recovery when their subordinates fail. Or even how the cable company’s first-level support prescribes unplugging the modem.
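A supervisor in that spirit might look like this sketch: retry a failing subordinate a few times, then escalate, much like kicking a problem upstairs. This is my own toy, not any particular process manager’s API:

```python
# Toy supervisor: retry the subordinate task a few times ("unplug the
# modem and try again"), and escalate if it keeps failing.

def supervise(task, retries: int = 3):
    for attempt in range(retries):
        try:
            return task()
        except Exception:
            continue   # tolerate the failure and retry
    raise RuntimeError("escalate: subordinate keeps failing")
```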
None of these is as sophisticated as Predictive Processing. Which is exciting.