Intrinsicality, symbols, self-organisation and gradual transformation

In his thesis, he discusses the use of hierarchical grouping self-organising maps to get symbols to self-organise. I can't decide whether I feel that self-organisation is intrinsic to intrinsicality or not, but it definitely feels as though the two are somewhat intertwined, especially in the brain.
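
As an aside, the flavour of self-organisation at work in such maps can be sketched in a few lines. This is my own toy illustration, not anything from the thesis: a one-dimensional Kohonen-style map whose units drift, without supervision, towards reflecting the structure of their input distribution.

```python
import random

random.seed(0)

# A one-dimensional self-organising map: five units, each holding a
# single weight, trained on scalar inputs drawn from [0, 1].
units = [random.random() for _ in range(5)]

def train(units, samples, rate=0.3, radius=1):
    for x in samples:
        # Find the best-matching unit (the "winner").
        bmu = min(range(len(units)), key=lambda i: abs(units[i] - x))
        # Nudge the winner and its topological neighbours towards the input.
        for i in range(len(units)):
            if abs(i - bmu) <= radius:
                units[i] += rate * (x - units[i])
    return units

samples = [random.random() for _ in range(2000)]
train(units, samples)

# After training, neighbouring units tend to hold neighbouring values:
# the map organises itself around the input distribution, with no one
# telling any unit what it should represent.
print(units)
```

No unit is ever assigned a meaning from outside; each one's "symbol" emerges purely from competition and neighbourhood pressure, which is the sense of self-organisation I have in mind above.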

The second thing is that such self-organising symbols are gradually transformable. To illustrate what is meant by this, consider C++ source code. Imagine that source code is a genotype (though this point applies to learning in general, not just to genetic algorithms): if you mutate it slightly, or combine its first half with another piece of source code, and then see how well it performs at your chosen task, chances are that it will be completely broken. C++ source code is not gradually transformable. Genotypes and neural networks, on the other hand, are. This, I think, is what allows them to learn by self-organising, and is somehow key to the whole intrinsicality business. Because self-organising systems can morph gradually as a function of changes or experiences in their environment, they are inextricably tied to it, and form intrinsic representations of it.
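
A toy contrast, in plain Python (my own illustration, using a made-up three-weight "network"): nudging a numeric parameter nudges the behaviour proportionally, whereas a one-character mutation to source code typically destroys it outright.

```python
import random

# A tiny "neural network": one linear neuron with three weights.
weights = [0.5, -0.2, 0.8]

def neuron(inputs, w):
    # Weighted sum of inputs -- the neuron's output.
    return sum(x * wi for x, wi in zip(inputs, w))

inputs = [1.0, 2.0, 3.0]
before = neuron(inputs, weights)

# Mutate one weight slightly, as a genetic algorithm might.
mutated = list(weights)
mutated[0] += random.uniform(-0.01, 0.01)
after = neuron(inputs, mutated)

# A small change in, a small change out: behaviour degrades (or improves)
# gracefully, rather than breaking outright the way a randomly mutated
# C++ program would.
print(abs(after - before) <= 0.01)  # prints True
```

The point is the smoothness of the parameter-to-behaviour mapping: because small steps in weight space are small steps in behaviour space, hill-climbing and selection have a gradient to follow, which randomly edited source code does not offer.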

– from a message to Stephen Larson

P.S. Is it a coincidence that both types of systems represent things in the universal language of vectors???

Gradual transformability

Am increasingly convinced that concepts need to be malleable, nebulous – in Hofstadter's words, fluid. To achieve this, you need to be able to tweak them so that they fit most cleanly into the niche that the environment has defined for them (given the way we interact with it). This is roughly what I think Holland et al. (Induction) are getting at with their arguments for directed, environmentally situated and embodied induction, though I'm putting Lakoff's words into their mouths.

My current plan is to try to represent concepts as functions. I need to elaborate on what I mean by this (xxx). I considered going all the way back to basics and using Turing machines as the most elementary building blocks of these concept-functions. I'm now thinking about jumping up a few levels of abstraction to a high-level programming language. But then we're back to the problem of brittleness. What about using neural nets as the concept building blocks, instead of source code? This too needs much elaboration. (xxx)