In his thesis, he discusses the use of hierarchical grouping self-organising maps to get symbols to self-organise. I can’t decide whether I feel as though self-organisation is intrinsic to intrinsicality or not, but it definitely feels as though the two are somewhat intertwined, especially in the brain.
The second thing is that such self-organising symbols are gradually transformable. To illustrate what is meant by this, consider C++ source code. If you imagine that source code is a genotype (though this point applies to learning in general and not just to genetic algorithms), and you mutate it slightly, or combine the first half with another piece of source code, and see how well it performs at your chosen task, then chances are that it will be completely broken. C++ source code is not gradually transformable. Biological genotypes and neural networks, on the other hand, are gradually transformable: a small mutation usually produces only a small change in behaviour. This, I think, is what allows them to learn by self-organising, and is somehow key to the whole intrinsicality business. Because self-organising systems can morph gradually as a function of changes or experiences in their environment, they are inextricably tied to it, and form intrinsic representations of it.
– from a message to Stephen Larson
P.S. Is it a coincidence that both types of systems represent things in the universal language of vectors???
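The gradual-transformability point above can be made concrete with a toy sketch (my own illustration, not from the original message; the fitness function and source string are arbitrary choices). Nudging a weight in a tiny linear "network" changes its fitness only slightly, whereas flipping a single character in a piece of source code almost always breaks it outright. A Python snippet stands in for C++ here so we can call `compile()`, but the point is the same.

```python
def fitness(w):
    """Negative squared error of the linear model y = w*x against the target y = 3*x."""
    xs = [1.0, 2.0, 3.0]
    return -sum((w * x - 3.0 * x) ** 2 for x in xs)

# Small mutation of a numeric "genotype" (one weight) -> small change in fitness.
w = 2.9
smooth_delta = abs(fitness(w + 0.01) - fitness(w))  # a tiny, graded shift

def compiles(code):
    """Return True if the mutated source text still parses."""
    try:
        compile(code, "<mutant>", "exec")
        return True
    except SyntaxError:
        return False

# Small mutation of source code: replace each character in turn with ')'
# and count how many mutants no longer compile.
src = "result = 2 + 2"
broken = sum(
    1
    for pos in range(len(src))
    if not compiles(src[:pos] + ")" + src[pos + 1:])
)
```

Running this, `smooth_delta` is tiny (the weight's fitness landscape is smooth), while every single-character mutant of `src` fails to compile: the code genotype has no gradient to follow, which is exactly the sense in which source code is not gradually transformable.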