
The Mind: What's the mystery?

The following was designed as an opinion piece (for want of a better word) to show that it is possible to consider what essential features one would need to see in a mind model. Since every project needs a plan - an idea of what it is all about - setting out what such a model should demonstrate is a worthwhile exercise.

At this stage no technical details are required; those come later. Since we are all human, and move in human society, one would think that out of all that cumulative experience certain patterns become visible to a fellow observer.

The reviewers didn't like the piece because I did not enter into the literature where hypotheses about the mind abound. But why should I? Firstly, those hypotheses did not produce any results because they approached the whole concept from the wrong angles, and secondly, even when setting out to develop a hypothetical framework it is still necessary to become clear about what that framework should entail.

The reviewers also did not understand some sections, others they found "contentious".

The reader can make up their own mind whether they grasp what I say and whether it makes them uneasy just reading about it. Certain possibilities are suggested in the Discarded section.

 

The Mind: What's the Mystery?

The quest towards an understanding of the human mind is as old as humanity. The results reflected the level of conceptualisation of the unknown, and that includes the names given to what appeared at the periphery of cognition. The full explanation of what constitutes a system that thinks is a long and detailed one, far exceeding the space currently available.

Let us concentrate therefore on one important aspect, one that should precede any further exploration: what should we expect from our model?

We need to be clear what we mean. Functions such as grammar, cultural interpretation, or logic seem obvious but carry the potential trap of becoming anthropocentric. To consider homo sapiens the only arbiter of cognitive phenomena is a view that can by now be left behind. Times have changed since Giordano Bruno was tortured by the Holy Inquisition for daring to suggest we are not the only intelligent species in the universe.

Equally, in our description we should steer away from anything that is the result of even more fundamental processes, otherwise we have locked ourselves into a preemptive perception of some phenomenon that could be subjected to further deconstruction. Such an error can lead to the assumption that our mind is somehow a biological interpretation of such things as words, pictures, or databases (and conversely, to the idea that if only we could find the right picture or database we would know how the mind works). These constitute the output of the mind, not its functional elements.

What follows is a summary of those features that are necessary, including some which are more instructive than they may seem at first glance.

Probably the first impression we get from such a system is the need for sheer variance.

Throughout the ages there have been countless implementations of life and society - all defended, killed for, and eventually ridiculed or overturned. As much as some idea may have been deemed awesome or atrocious, the neurons responsible for its production harboured no such sentiments. Our model must be able to be intelligent and stupid, be able to explain tolerance and hate, show elegance and brutishness.

For the system to display variance it requires the capability to process incoming information, the only limitation deriving from the physicality of its sensors. Because the system processes information, not only must all kinds of experiences be possible but also their interpretation.

Consider the perception of a building. Regardless of its cultural background, its purpose, its shape and form, our mind forms the concept of 'building' because we deconstruct its perceived elements such that their aggregate is interpreted as such and not as 'bridge' or 'tree'.

Since the necessary information has to be embedded somehow among the brain's neurons, the representative quality of the system's smallest members, the neurons, has to be of a high granularity. Viewing the process of interpretation from the opposite direction, we can say the representative content of each neuron must be sufficiently small to allow composites such that at a certain stage of their assembly we are able to speak of a building from then onwards, or a bridge, and so on.
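
To make this concrete, here is a minimal sketch in Python (the features, prototypes, and overlap score are all invented purely for illustration, not taken from the model itself) of how small representative elements can be assembled into composites that are interpreted as one concept rather than another:

    # Fine-grained representative elements; only their aggregate gets
    # interpreted as a concept. All feature lists are hypothetical.
    PROTOTYPES = {
        "building": {"walls", "roof", "windows", "door", "foundation"},
        "bridge":   {"span", "supports", "deck", "foundation"},
        "tree":     {"trunk", "branches", "leaves", "roots"},
    }

    def interpret(perceived: set) -> str:
        """Label a percept by the prototype its aggregate overlaps most."""
        def overlap(proto: set) -> float:
            return len(perceived & proto) / len(proto)
        return max(PROTOTYPES, key=lambda name: overlap(PROTOTYPES[name]))

    # A cathedral and a hut differ in purpose and culture, yet both
    # aggregates resolve to 'building' and not to 'bridge' or 'tree'.
    print(interpret({"walls", "roof", "door", "spire"}))      # building
    print(interpret({"span", "deck", "supports", "cables"}))  # bridge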

Interpretation is a function of the system, and so a high degree of granularity leads to a high degree of functional flexibility. So much so that it has given rise to all the cultural diversity we find on this planet; where a word or even a gesture (disassociated from what we commonly call 'language') can be interpreted in a myriad of ways. Essentially the mind does not interpret because of some set of values, but first and foremost because it can.

It is tempting to see a certain type of behaviour as 'right' or 'wrong', depending on one's perception of the world. In terms of its situatedness within the wider context of society it may well lead to constructive and/or destructive outcomes, but even here we need to consider what we mean by such labels. Is it constructive to have - many - children in the context of a sustainable eco-system, or could it be destructive in the long run? And yet in either case the elemental processes of the mind are performing as they should.

In order to define our mind model we cannot allow ourselves to be influenced by discomfort when ranging between behaviour (and our interpretation of it) and its constituent functional elements on the one hand, and analysing its basis and delineating from there what can possibly emerge on a higher scale on the other. Our conceptualisation of the system needs to be freed from any meaning we attach to the behaviour of its elements, to the resulting building blocks and beyond - in other words, from thoughts, to ideas, to entire concepts.

Let us abstract further. Just as an identified thought cannot be classified as 'right' or 'wrong' in terms of its inherent functionality, the manner in which the representative elements are clustered together to form larger structures cannot follow a similar set of rules either. For example, take the syllogism, 'All men are mortal; Socrates is a man; therefore Socrates is mortal'. There are many to whom that expression is entirely reasonable; its logic has proven itself over and over again. Nevertheless, without the appropriate explanation as to its structural quality a person may not necessarily make it their own.

'No-one held the cup; so it fell.' This from someone who was chided for being careless when carrying a cup. Note the disassociation from the - usually - assumed scope of responsibility on behalf of the cup-holder towards the object, and from there to the resultant effect. Teaching about responsibility is a high-level concept; in essence however it is the imparting of the idea that one's actions have effects not necessarily related to one's immediate surrounds. Technically speaking, it represents the training of the mind to widen its conceptual scope so that other phenomena can be consciously perceived while at the same time not losing the connection with the conceptualisation of the 'self'; the establishment of affinity relationships between hitherto unconnected conceptual clusters. In the foregoing example this affinity did not exist.

However, as far as that unfortunate person's neurons were concerned, there was no mistake. Hence the neurons' functional granularity has to be sufficiently high to allow for behaviour that may seem odd to us but still follows the rules at its specific level of operation.

At that level then, what rules must a system follow that prevent it from being simply random, while it still functions according to the dictates of its environment, however strange these may seem to an outsider?

The only answer that fits that demand is Chaos.

A chaotic system is one where functional elements produce an output that is unpredictable once the system is allowed to perform on an ongoing basis. While its immediate output is discernible, feeding it back into the system leads to unpredictability - despite the fact that the rules inherent in each of its elements do exist and have not changed. What changes are the values, not the variables themselves (to borrow from the language of computer programmers for a moment).
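
The logistic map, a standard textbook example of deterministic chaos (brought in here only as an illustration; the parameter and starting values are arbitrary), shows this in a few lines of Python - the rule never changes, only the value fed back through it:

    def step(x, r=4.0):
        return r * x * (1.0 - x)   # the fixed rule; r never changes

    a, b = 0.2, 0.2000001          # two almost identical starting values
    for _ in range(40):
        a, b = step(a), step(b)    # feed each output back into the system

    # Every single step was perfectly discernible, yet after repeated
    # feedback the two trajectories bear no resemblance to each other.
    print(abs(a - b))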

Furthermore, the more complex such a system is, the more interdependent its feedback processes become. That interdependence prevents the system from approaching randomness: although each element possesses the general optionality of variance, its actual optionality is tempered by the influence of its neighbours, with the result that the overall effect represents a compromise between the system's overall potential for variance and the constraints imposed by pre-existing (that is, already pre-formed) conditions - and all under the same auspices.
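
A coupled map lattice gives a minimal picture of such neighbour-tempering (again a standard construct used purely for illustration; the coupling strength and lattice size are arbitrary choices): each element keeps its own chaotic rule, but its actual output is a compromise with its neighbours.

    import random

    def rule(x):
        return 4.0 * x * (1.0 - x)          # each element's unchanged rule

    N, coupling = 100, 0.3
    state = [random.random() for _ in range(N)]

    for _ in range(1000):
        state = [(1 - coupling) * rule(state[i])
                 + coupling * 0.5 * (rule(state[i - 1]) + rule(state[(i + 1) % N]))
                 for i in range(N)]

    # Each element retains its general optionality, but its neighbours'
    # pre-existing values temper what it actually does: the result is
    # neither uniform order nor unconstrained randomness.
    print(min(state), max(state))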

Make the system sufficiently complex and the resultant effects of its constituents ensure that they do not grate against the system per se; the system is 'big' enough to accommodate variances somewhere within its scope. It is able to accommodate them because somewhere there is another subsystem that is affinitive in relation to the newcomer. If there isn't, either the newcomer will become invalid, or, if the latter is influential enough, the system itself comes to a halt.

On the large scale of society we can observe the phenomenon all the time. Urban Western society, with its inherent variance, tolerates the existence of demographics such as 'punks'. Punks may not integrate with their host society's subsystems of banks and parliaments, but they do generate the market for a certain fashion and they do create their own music industry and those subsystems do have an affinity with the rest. Therefore the overall system still works. Transpose punks into a closely-knit village and observe the result!

On a scale of lesser complexity, the emergence of over-emphasised mandibles in male stag beetles did not lead to the demise of stag beetles as long as their protuberances remained within the physical scope of body mass, flying ability, and the body's metabolism (they even enhanced the males' ability to fight over females). There was no wider 'reason' to change their mandibles, but once they did change, the system permitted it until the boundaries determined by the afore-mentioned scope were reached, which nullified the advantage of the mandibles' use as a weapon.

Since the building blocks of organic systems must adhere to the elemental rules of their constituents, the mind - dependent upon the physicality of the brain, hence organic in origin and adhering to a similar set of fundamental rules - can be seen as a system in which the rules of complex, chaotic dynamics determine its behaviour. Our model needs to reflect this.

If the overall environment - a system in its own right - permits the emergence of subsystems because they are affinitive to some of its constituents, the result is the increasing complexity of the environment. Hence complex systems, at whatever scale, are the antidote to the dissipation of energy. Life, being a complex system, is the opposite to entropy.

The mind model must be able to display the dynamics leading to representative clusters based on affinity relationships at an elementary level; any emergent pattern must have its origin in the functionality of the respective clusters; and the ultimate sustainability of those domains must depend on the affinitive quality of the overall system. Any interpretation as to their 'meaning' comes after the event; it has nothing to do with the system as such.

Hence the model has to agree with the evolutionary and/or developmental sequences and plausibilities contained within itself, an emergence from the inside out.

These are the key points for our mind model. Any further detail is a function of the fundamental rule set that characterises a chaotic, dynamic and complex system.

As the system becomes more complex it becomes richer in terms of processing its input. Let us not forget that the human brain is the current endpoint of a developmental process that started millions of years ago. The increase in neuron numbers and the commensurate growth in their connectivity gave rise to the functionalities we can observe in our minds today.

Memory and its access depend on some new input having generated its own cluster which in turn is affinitive to some other pre-existing cluster, the quality of the recall being dependent upon the degree of affinity (therefore a recall can never be a complete copy of pre-existing content). For the appropriate clusters to emerge, forming their own representative domains, the system needs the space to accommodate the necessary number of neurons. The neurons also need to be connected to many others; the more connections are possible, the higher the probability that the newcomer is affinitive to already established subdomains. This in turn leads to better recall, because any aspect of the former can trigger connections being made to one of the latter. The downside of such an expanded volume is the chance that the connecting pathways access part of the domain but do not connect to its main content.
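
As a toy sketch of such affinity-driven recall in Python (the set overlap measure, the stored clusters and the threshold are assumptions made for illustration only, not the model itself):

    def affinity(a, b):
        """Set overlap as a crude stand-in for affinity between clusters."""
        return len(a & b) / len(a | b)

    stored = {
        "beach holiday": {"sand", "waves", "sunburn", "ice-cream"},
        "ski trip":      {"snow", "slopes", "gloves", "ice"},
    }

    def recall(cue, threshold=0.1):
        best = max(stored, key=lambda k: affinity(cue, stored[k]))
        score = affinity(cue, stored[best])
        # Recall is graded, never a complete copy: what comes back is the
        # stored cluster weighted by how strong the affinity happened to be.
        return (best, score) if score >= threshold else (None, score)

    print(recall({"waves", "ice-cream", "seagulls"}))  # partial match triggers recall
    print(recall({"spreadsheets"}))                    # no affinitive cluster, no recall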

For example, we 'know' we know the name of a person but can't say it. The current recall process, triggered by input not associated with the original formation of the content domain, did not follow the connections leading to the representative clusters containing the name. On such occasions the trick is to set the current experience aside and focus on (that is, remember) a situation in the past when the name had been accessible. It usually works, and if it still doesn't we didn't remember properly! To put this another way, what we would normally label an 'error' is, in the fundamental sense, not a fault at all. Rather, it is a side effect of a system that owes its power to its very capacity for complexity. It is one of the costs that come with complexity.

Another feature of a more developed system is the ability to form abstractions.

When we look at a dinner table and compare it with a desk our mind enables us to identify each as a 'table' by discerning their shared feature of a 'flat top on supports'. The building of abstractions derives from a sufficiently large number of representative clusters which overlap each other, an abstraction being the representative content of their intersections. The mind's representative domains include the respective clusters relating to the various features. Both sets - the dinner-table-set and the desk-set - are not completely the same but they do overlap, and it is those intersections which are responsible for the abstraction 'table'.
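
Rendered directly as set intersection (with invented feature lists), the example reads:

    dinner_table = {"flat top", "supports", "plates", "chairs around it"}
    desk         = {"flat top", "supports", "drawers", "paperwork"}

    # The two clusters are not the same, but they overlap; the abstraction
    # 'table' is the representative content of their intersection.
    print(dinner_table & desk)   # {'flat top', 'supports'}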

Just as memory needs a sufficient volume to perform, so do abstractions require the space for their underlying intersections. If the system were incapable of abstractions, each item, even closely related ones, would be perceived as a separate entity in its own right. Any cognitive progress based on recognising similarities between observed objects and repositioning them to serve another purpose would be impossible. In our model we need to observe its inner processes and be able to identify intersections among affinitive clusters. Such related clusters contribute to the formation of their domains, hence increase their variance, therefore ramp up the complexity of the system as a whole, and so make it more capable of accommodating new input. In fact, it will generate more input for its neighbours, at any scale.

This is the reason why the most illustrious civilisations have been - and still are - those that could draw on rich environments. Whether in the form of geography, of climate, or interactions with others in the form of commerce and intellectual exchanges, their surrounds provided the input which generated a variety of conceptual domains in their minds leading to their heightened complexity. It also needed an adequate number of neurons and their connections to process the information in the first place.

Lower the limits imposed by the physicality of the brain and no emergence takes place. Dogs have been domesticated for millennia, but whatever the potential of a modern city, their minds cannot avail themselves of such opportunities. On the contrary, compared with wolves in the wild, whose environment is far more suited to the canine nature, the ability of a lap dog to survive on its own is very much compromised.

Our artificial mind must be able to demonstrate its heightened capacity in line with increased volume.

It also must be able to learn. Learning is the laying-down of representative clusters such that they eventually reach a quality that is deemed comparable to what gets perceived as the standard (a process with its own clusters and associated memory functions).

What gets learned depends on the complexity and variance of domain content, which in turn are closely linked to the existence of abstractions. If connections can be established between affinitive clusters, and if those clusters heighten the probability of being involved in the formation of further clusters because they possess an affinity with incoming information through teaching, then the resultant product will be meaningful in terms of the overall concept. That is to say, the sub-elements of the meta-domain are congruent with each other. Good teaching therefore is the art of gradually building up the student's perception through the introduction of more and more complex ideas. Again, what matters is input, pre-existing representative clusters, and the numbers of neurons and their connections.
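
A small Python sketch of that scaffolding effect (the lessons and the affinity gate are invented for illustration): a new cluster is only laid down when it overlaps sufficiently with what is already there, so presenting lessons from simple to complex lets more of them take hold.

    def learn(lessons, knowledge, gate=0.25):
        for lesson in lessons:
            # lay down the cluster only if it is affine to prior content
            if len(lesson & knowledge) / len(lesson) >= gate:
                knowledge |= lesson
        return knowledge

    lessons = [
        {"count", "number"},
        {"number", "add"},
        {"add", "multiply"},
        {"multiply", "power"},
    ]

    print(len(learn(lessons, {"count"})))                  # 5: each lesson found an anchor
    print(len(learn(list(reversed(lessons)), {"count"})))  # 2: advanced material found no affinity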

The model should exhibit the ability to learn, and differences in speed and efficacy should depend on the quality of input as well as on the system's size.

In short, all of these features rely on a minimum number of neurons and a sufficient degree of mutual connectivity to come into being. We need many neurons to have enough room for the necessary clusters to form, and the neurons must be highly connected to make the relationships work. Expand either parameter and the quality of memory and its recall, of abstraction forming, and of what can be learned and to what depth, is enhanced. Reduce the parameters and the output of the system suffers accordingly.

One question that remains concerns the matter of language. We expect intelligent systems to communicate, and our model is no exception.

Consider the phenomenon of emergence, the ongoing formation of representative clusters, their interactions with each other, in tandem with a continual stream of input - all part of an overall process of feedback. Language, the ability to come up with output which is symbolic of some internal state of perception, relies on its constituent elements just as all the other dynamics do. What the mind uses for that purpose is therefore subject to the same environmental influences as the rest. One can say that language, in its widest sense, is wholly representative of the system's and its subsystems' surrounds, including social interactions as well as internal criteria. Hence two comparable systems will understand each other; outsiders may not.

In the technical sense then, the system's output is its language. It is up to the observer, or the builder, to discern the meaning.

Can such a system be built? Provided the rules are adhered to, of course. Furthermore, considering its functionality per se, any system that exhibits such qualities would, in the technical sense, be a thinking system. In our brains the efficacy of the neurons is compacted into their tiny spaces, hence our mind can function inside our skulls. We could also imagine a similar degree of efficacy but this time needing more volume due to the reduced potential for variance residing in its constituents. What is missing per unit of volume gets compensated for by the sheer size of the aggregate, making possible once again the existence of representative clusters created from its elements.

Is planet Earth big enough to compensate for the respective compactness of animals, plants, and minerals? Is the universe big enough? And would we understand what they say?

 

October 2011


© Martin Wurzinger - see Terms of Use