My sincere thanks to all the people who have taken the time to make their comments, ask questions, and provide their constructive criticisms. This page is a reflection of their efforts.
Sometimes "On the origin of Mind" sounds like philosophy or politics - but is it science?
It is true that Otoom uses philosophical tracts to explain the functional dynamics of the mind from the top down (see Synopsis). It is also true that the model addresses political issues in the past and present. But that is because it represents a formal framework which now makes such an approach possible. Its elements - functional subsets in their own right - can be explicitly described, referenced to examples from the real, and replicated in the computer program. Political events and the like are seen as systems operating within larger systems (for example, in society), and can thereby be analysed in terms of their inherent validity measured against their environment. The model neither needs nor attempts to enter into judgments of a philosophical, religious, or moral kind.
But the section Parallels for instance certainly contains critical comments!
True; however, they are made in terms of a particular system's situatedness within its culture, itself an overriding system. Other cultures follow different guidelines, and under those auspices a given development may have different ramifications. Nevertheless, since all such systems answer to fundamental dynamics applicable to humans in general, or indeed to life, some events are more destructive than others regardless of where they occur.
To what extent is the article "How the mind works.." comparable to the entire work?
As the title suggests, it deals with the principal dynamics. The connections to higher-scale phenomena - in the thoughts of an individual or in society at large - are touched upon, but for reasons of space and format could not be entered into. Both the article and Otoom contain nothing that cannot be referenced to events in the real, but Otoom goes into far greater detail, extending to various scales.
What is the essential difference between Otoom and other mind models?
To some extent it depends on what other models are considered. However, it all starts with a perspective that places functionality ahead of content. This may sound trivial, but the effects are significant. And if it does sound trivial, chances are it hasn't been fully understood!
What exactly is meant by functionality?
The dictionary definition of 'functional' is "pertaining to a function"; the noun 'functionality' therefore denotes the property of being functional. Regardless of what philosophers, cognitive scientists or artificial intelligence researchers have added over the years, I prefer to stay with that basic meaning. If a number of nodes in the matrix, or a number of neurons in the brain, act together within the context of some process identifiable at a higher level of observation, then such a cluster can be tagged with a certain functionality - its type descriptor would have to be meaningful in relation to the process. For example, if in the OtoomCM program a particular input caused a green patch to appear in the output field on the screen, one can say the participating nodes were of a functional type related to green. If subsequent input changed the colour to red, either the original nodes changed their functionality, or another cluster became more influential in terms of its own functionality.

At the higher scale of society a concept can be informative about something, but it could also be deemed inflammatory. Both are its functionalities, and if a confrontation ensues it is due to this particular functionality (naturally, for one or the other to have any effect there has to be an affinity in its surrounds).

Thinking in terms of functionality also aids conceptualisation. Suppose we substituted the expression 'type of behaviour' in a particular context. Although correct as such, it can lead one to treat the behaviour as a stand-alone manifestation - should it become invisible to an observer for whatever reason, the tendency is to assume it has disappeared. Thinking of it as a type of functionality, however, we remain aware that its owner still exists, and the focus is on that owner's nature under those circumstances. In this case the behaviour type may merely have become latent, and nothing has changed within the owner.
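The green/red example can be sketched in a few lines of hypothetical code. The cluster structure, tags and response values below are illustrative assumptions only, not the actual OtoomCM implementation:

```python
# Illustrative sketch only - not the actual OtoomCM code. A cluster is
# tagged with a functional type, and whichever cluster responds most
# strongly to the current input determines the observable output.

def dominant_functionality(clusters, input_signal):
    """Return the tag of the cluster most activated by the input."""
    best = max(clusters, key=lambda c: c["response"].get(input_signal, 0.0))
    return best["tag"]

clusters = [
    {"tag": "green", "response": {"I1": 0.9, "I2": 0.2}},
    {"tag": "red",   "response": {"I1": 0.1, "I2": 0.8}},
]

print(dominant_functionality(clusters, "I1"))   # green
print(dominant_functionality(clusters, "I2"))   # red
```

Whether the original nodes changed their functionality or another cluster became more influential, the observable outcome is the same - which is why the functional description matters more than the identity of the particular nodes involved.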
Here are some examples of how the concept of functionality can be applied.
Conceptualising an event in terms of its functionalities relates to abstractions; see the next paragraph.
What is meant by abstractions?
Under common nomenclature an abstraction is the reformulation of a relationship in terms of its essential features. For example, a bench and a dinner table may look different from each other, but both share the essential features of a flat surface on supports. The two neuronal clusters (one representative of 'bench', the other of 'dinner table') therefore share that common content. Expressed in the form of sets, the shared content represents an intersection between the two sets. Two or more intersections can share some of their content in turn, leading to second (and third, etc) abstraction levels. Abstraction-forming is a function of cluster size and the clusters' mutual degree of connectivity; hence the number of neurons, and how many others each neuron is connected to, influence the potential for the formation of intersections. Solving puzzles, finding the solution to a problem, what is commonly called 'creativity' - indeed the higher cognitive processes as such - all rely on the mind's ability to abstract. And, of course, the input needs to have supplied the appropriate data to begin with.
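The bench/dinner-table example maps directly onto set operations; the feature names below are of course made up for illustration:

```python
# Abstraction as set intersection. Feature names are illustrative only.

bench        = {"flat surface", "supports", "outdoors", "seating"}
dinner_table = {"flat surface", "supports", "indoors", "eating"}
shelf        = {"flat surface", "wall-mounted", "storage"}

# First abstraction level: the essential features shared by two clusters.
first_level = bench & dinner_table
print(sorted(first_level))            # ['flat surface', 'supports']

# Second abstraction level: intersections can intersect in turn.
second_level = first_level & (dinner_table & shelf)
print(sorted(second_level))           # ['flat surface']
```

The more clusters and the denser their connectivity, the more such intersections become possible - the point made above about cluster size and degree of connectivity.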
So what's an affinity?
The basic definition is "an inherent likeness or agreement between things". If the functionality of a node cluster is such that another will maintain its nature (ie, its functionality) rather than change or destroy it, then there is an affinity between the two clusters. Similarly, if two ideas can coexist side by side such that neither is sufficiently influenced by the other to change its nature, then there is an affinity between the two. Since the current inherent nature or state of nodes and/or neurons is due to the interdependent processes across the network, based on the chaos-type behaviour of attractors, an affinity is not a random event but the outcome of a specifically configured dynamical space - configured in terms of the existing functionalities. These can be seen as forever shrinking, expanding, altering, morphing, fragmenting, and emerging domains; their actual role is determined by the attributes they have acquired along the way. For a practical introduction to chaos see The mechanics of chaos: a primer for the human mind. For how such functionalities affect the cause-and-effect relationships within dynamic systems, see the CauseF program, an example of a simulation. These two are part of a guide to nonlinearity.
How can functionality be applied to society?
By observing the functional dynamics operating there, analysing their interdependencies, and considering what they mean in terms of object-related content.

Here's an analogy from computer graphics. To render a scene certain parameters need to be considered: the material of a surface, its texture, its bump height, its ambient, specular and other colour components; then come the lights with their position, intensity, colour and scale; next the cameras with their own angles, focal lengths, f-stops, and so on. All these are features belonging to an object, and it doesn't matter what the object is - whether it's a cube, a building, or a human form for that matter. For the computer to render the scene, all of these parameters need to be identified and chosen so that literally every single point of the object can be calculated; only then will the result be meaningful. CG artists of course have such detail at their fingertips, and they step through the development of a scene by being aware of the interdependencies - in other words, the effect of selecting some texture under certain lighting conditions, for instance. The point is, those features are the functionalities accompanying an object, and whatever the object may be, disregarding them leads to problems. Objects are important too, but in overall terms it is functionalities which first and foremost determine the nature of a scene - objects come later.

The functionalities identified and described under the Otoom perspective are the characteristics which determine the nature of our behaviour; whether humans are male or female, short or tall, their manifested behaviour is a result of their inherent functionalities in existence at the time. It gets complex once we consider the feedback processes in dynamic systems and the mutual affinities between subsystems. But that's life - in a very real sense.
Combining functionalities with the phenomenon of affinity under the auspices of chaos leads us to the principles governing a human activity system on a large scale, that is, society. Since society constitutes a subsystem of complex, dynamic systems in general, that rule set also applies to life overall. The 10 axioms of Society list those rules, and The 10 axioms of Life contains the same set in a more generalised form. One should mention that these rules are not of the 'pick and choose' type: if an opinion or an ideology happens to agree with one but not another, this is not a problem for the rules - it is the opinion and/or ideology which needs to be revisited.
Where does latency come into all this?
'Latent' means "existing, but not (yet) visible". A domain with a certain functionality acts on its neighbours, giving rise to the observation in the first place. However, it turns out that the potential for exercising some functionality can already exist even though it is not visible at the time. In the program, for instance, among the matrix nodes there can be a functional domain producing green; it's obvious, and comes from some input, say I1. Further input, I2, changes the patch to blue; also obvious. But now some different input I3 lets the colour green re-emerge, even though without their history behind them the same nodes do not produce green from I3 alone. This kind of thing happens all the time, and indicates a functional latency on the part of those nodes. When eventually realised, the latency does not produce an exact copy of the previous re-representative content; the difference depends on such factors as the size of the cluster, the time interval between inputs, and the difference in the inputs themselves. The outcomes look remarkably similar to results obtained from research into false memory syndrome. I would argue that 'memory' ultimately represents the realised latent functionalities of the participating neurons. It seems a domain that has been created from some input (which includes input from another domain) represents a virtual network defined by its content. Due to latency, virtual networks can overlap each other, since their ability to re-represent their specific content depends on the overall pattern they constitute. Hence a domain that is affinitive with some other is able to modify the latter, depending on the degree of mutual affinity (therefore the emergence of 'false memory').
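The I1/I2/I3 sequence can be mimicked with a deliberately simple stateful sketch. The signal names and colours are assumptions for illustration; the real program derives such behaviour from the matrix dynamics rather than from explicit rules:

```python
# Toy illustration of functional latency: the response to an input depends
# on the history held in the cluster's state, not on the input alone.
# Signal names and colours are illustrative assumptions.

class Cluster:
    def __init__(self):
        self.history = []

    def respond(self, signal):
        self.history.append(signal)
        if signal == "I1":
            return "green"
        if signal == "I2":
            return "blue"
        if signal == "I3":
            # Green re-emerges only if the cluster produced it before:
            # the functionality was latent, not gone.
            return "green" if "I1" in self.history else "grey"
        return "grey"

fresh = Cluster()
print(fresh.respond("I3"))    # grey - I3 alone does not yield green

primed = Cluster()
primed.respond("I1")          # green patch appears
primed.respond("I2")          # patch turns blue
print(primed.respond("I3"))   # green - the latency is realised
```

Two clusters given the same input thus need not respond alike; their histories have configured different latencies.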
How many virtual domains can co-exist within any given sector of the physical network of neurons is a function of the neurons' inherent variance - in other words, how many representative states that particular sector is capable of, given its number of neurons, their protein formations, their connectivity and the number of neurotransmitters across the synapses. The combinatorial possibilities are huge even in the case of a few dozen neurons, let alone a few million. Let's use the metaphor of a television screen with its given number of pixels to illustrate how much information can be 'packed' into a finite area. All the images the screen is capable of displaying are the result of having the potentials of its pixels evoked by electrical triggers, so that each pixel displays a particular colour at any given time. The combination of all those colour elements produces the entire image. The number of images the screen can display is virtually endless, yet it cannot be infinite because each pixel has a finite colour range. The more pixels and the greater their colour range (ie, their sensitivity), the larger the number of images which can be produced. No image, nor any part thereof, is represented by one specific pixel and no other; all pixels contribute all the time (notwithstanding the fact that some images are larger than others). Within the context of biology a similar phenomenon occurs in gene structures (see Atavism), whereby a previous trait re-appears in later generations, triggered by a mutation: the previous trait (the latency) gets triggered by subsequent internal input (the mutation) and what had been latent becomes visible once again. Much research remains to be done there, as hinted at in further developments.
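The combinatorial point behind the television-screen metaphor is easy to make concrete; the pixel and colour counts below are arbitrary example figures:

```python
# Number of distinct images = (colours per pixel) ** (number of pixels).
# Pixel and colour counts are arbitrary illustrative figures.

def image_count(pixels, colours_per_pixel):
    return colours_per_pixel ** pixels

# Even a tiny 8x8 grid of two-state (black/white) pixels:
print(image_count(8 * 8, 2))           # 18446744073709551616, about 1.8e19

# A hundred binary elements already dwarf astronomical figures:
print(image_count(100, 2) > 10 ** 30)  # True
```

The growth is exponential in the number of elements, which is why even a 'few dozen' neurons, each with far more than two possible states, suffice for a huge repertoire of representative states.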
How much of the computer model can be applied to the mind?
In terms of its displayed functionalities, just about everything. It's one of the advantages of using functionalities rather than content. The content in the program differs from what the mind would contain - at least so far. But because functionalities can be scaled up or down, the relevance does not depend on specific instantiations of content - it's not what the program does but how it does it which is important.
What type of neural network does the computer model represent?
It does not resemble any of the more or less traditional networks (eg, Hopfield networks, back-propagation types, SOMs...) for three fundamental reasons: (1) although the connections between the nodes are predetermined, their respective efficacies change during the processing cycles; (2) there are no established layers, and nodes join clusters or disengage from them according to their mutual affinity relationships; (3) there are no conventional threshold functions - the effect of one node and/or domain on others depends entirely on their realisable functionalities, ie their latency. Another way of looking at it is to compare Western-style thinking to, say, the perception of a Taoist. The former seeks to consciously detail every aspect of life, whereas to the latter existence represents a whole which is indescribable through its elements. Both have their pros and cons (the 'fuzziness' of Taoist thought is counter-balanced by the formal framework based on functionality in Otoom). Robots so far incorporate rule sets for each and every eventuality - a 'from the outside in' type of regulation. OtoomCM on the other hand develops its rule sets from the broad pattern of inputs; it lets them emerge from within. And yes, this can be simulated in a computer program. While a rule applied from the outside makes the program and/or robot perform precisely according to the algorithm, emergent rules provide for any eventuality that can be learned about. The only limitation lies in the number of nodes. All in all it is a highly dynamical, pattern-seeking, self-organising system - a pretty useful definition for Life itself, I would think.
The basic layout is a 2D matrix, but the brain is three-dimensional - does that matter?
No. Firstly, the 2D element matrices (that is, the nodes of the main matrix) act as simulations of the protein formations within neurons, and their output is expressed as the sum total of the comprehensive state of each el-matrix; their dimensionality is irrelevant. Secondly, the main matrix nodes, although part of a 2D structure, are nevertheless connected to each other across the entire matrix in accordance with the degree of connectivity decided at the beginning (so, for example, a 10% connectivity means each node is linked to 10% of the rest). Since a node can be connected to any particular other node, regardless of how far or near the other is, in terms of their linkages there is no systematic, set layout which needs to be traversed from one location to the next. For the purposes of interacting with other nodes, each node becomes a temporary centre. Although nodes eventually form clusters, they do so in accordance with the affinity relationships ascertained throughout the matrix at any given cycle. Whether the nodes are arranged in a 2D or 3D structure is immaterial. Keep in mind also that a 3D data structure in programming is essentially a series of arrays placed next to each other (the registers in a microchip do not change their shape according to some code). So, all in all, the functional shape of the matrix is a forever changing globular cluster featuring dynamic shapes intersecting each other at varying degrees of stability.
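The connectivity scheme just described - each node linked to a fixed percentage of all the others, with no regard to distance or layout - can be sketched as follows; the node count and percentage are the example figures from the text, and the function name is of course an assumption:

```python
import random

# Sketch of distance-independent connectivity: each node is linked to a
# fixed percentage of the other nodes, chosen at random across the whole
# matrix - no layers, no neighbourhood restriction.

def build_links(n_nodes, connectivity):
    links = {}
    per_node = round((n_nodes - 1) * connectivity)
    for node in range(n_nodes):
        others = [m for m in range(n_nodes) if m != node]
        links[node] = random.sample(others, per_node)
    return links

random.seed(0)                    # reproducible example
links = build_links(100, 0.10)    # 10% connectivity, as in the text
print(len(links[0]))              # 10 - each node links to 10 of the other 99
```

Because the sample is drawn from the whole matrix, whether the nodes are notionally arranged in 2D or 3D makes no difference to the resulting link structure - which is the point made above.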
So what 'drives' the whole thing?
In the computer model, energy and input from outside, the inherent complexity of the nodes themselves, and the algorithm producing attractor-type states among the nodes. For the algorithm itself see page 9 of the IPSI-2005 Venice paper. Much like the biological counterpart really, except for the algorithm which is a simulation of the processes taking place due to the - functional! - richness of neuron cells made possible through autocatalytic closure (see the IPSI paper).
What does the algorithm actually do?
Although a formula can be understood purely in mathematical terms, sometimes it is difficult to conceptualise its nature, so I'll explain it through a metaphor. Suppose we have two large bodies situated in space (imagine two planets, but without a star). Both have gravity because of their mass and therefore act as attractors in relation to each other. Suppose further that each is surrounded by a medium which, due to the gravity, gets denser closer to the surface (something like water - from steam to an ever denser liquid). Since they float in space we can play the outside observer and see both as moving, or we can treat one as stationary and the other as moving. Let's take the latter view and call the "stationary" body Ref, the other Res. Res is drawn towards Ref because of gravity; the closer it gets, the greater its speed. But the closer it gets, the denser the medium, which acts against the acceleration induced by gravity. There are three possible outcomes, depending on the angle at which Res moves towards Ref. If the angle is within a relatively small range from the vertical, Res will close in and eventually end up on the surface of Ref. Increase the angle and there comes a point where Res gets closer but the medium becomes dense enough to deflect it away from Ref: Res glances off, skips back into space until gravity slows it down and turns it back, hits the dense region once again, skips off again, is pulled back once more, and so on. The result is an oscillating trajectory, with Res never escaping into space but never hitting the surface of Ref either. Increase the angle still further and the continual movement away from and back to Ref persists, but the trajectory is no longer regular - it becomes erratic.
Given that in actuality both bodies, Ref and Res, are moving, any one of the three outcomes is possible at any given time during the approach. In the algorithm Ref becomes the reference value, Res the resident value, and the main purpose of the formula is to change the size of the resident value until it ends up at the reference value. If we let the resident value adopt a series of numbers (the reference value remaining "stationary" for the time being), the graph depicting the results of the algorithm will show either a convergence with the reference value, or an oscillation in relation to the reference, or a seemingly random fluctuation. We can term the reference value an attractor and, depending on the outcome, we have a stable attractor (convergence), a periodic attractor (oscillation), or a strange attractor (fluctuation). Every element in each matrix node acts as an attractor at some time, and the target is the equivalent element in the next node. The outcomes (within their own possible range of variance) configure the node overall, and at the next level of complexity determine the degree of affinity between the two nodes. From the nodes we get to clusters of nodes and then to domains, their respective states standing for the re-representative quality in relation to the input. The algorithm is part of the feedback loop in this chaotic system.
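The actual formula is given in the IPSI-2005 paper; as a generic stand-in, the well-known logistic map exhibits the same three attractor types as a single parameter (playing the role of the approach angle) is increased:

```python
# Generic illustration, not the Otoom algorithm itself (for that see the
# IPSI-2005 paper): the logistic map x -> r*x*(1-x) produces a stable,
# a periodic, or a strange attractor depending on the parameter r.

def trajectory(r, x=0.5, warmup=500, keep=8):
    for _ in range(warmup):            # let transients die out
        x = r * x * (1 - x)
    values = []
    for _ in range(keep):
        x = r * x * (1 - x)
        values.append(round(x, 4))
    return values

print(trajectory(2.8))   # converges on one value (stable attractor)
print(trajectory(3.2))   # alternates between two values (periodic attractor)
print(trajectory(3.9))   # erratic fluctuation (strange attractor)
```

The analogue to the text: the iterated value is Res, the map's fixed point is Ref, and changing r corresponds to changing the angle of approach.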
The prototype program is written for DOS. Are there more sophisticated versions available?
Yes, there are; see the items under AI programs at the top of this page. The prototype was originally written for DOS because a DOS program can be run not only under DOS itself but also under Windows. It also means the entire code can be compiled under Unix or Linux without any changes. Had it been written for Windows, just about every function would have had to be renamed and possibly modified, because Windows altered most functions to serve its own purposes (this doesn't happen in the context of Open Source). To be properly experimented with, the program needs to be scaled up from the maximal 500-odd matrix nodes here to literally millions and millions of them. It also needs to be transposed into a distributed mode. NOTE: the latest in the series, OCTAM, is also part of a guide to nonlinearity, the phenomenon underpinning complex, dynamic systems in general.
Shouldn't the program be able to communicate somehow?
Actually, it does. Take the original version, where the output consists of a series of discs differing from each other in diameter and colour (or not, depending on what happens within a certain domain inside the matrix). The program has a degree of processing capacity given the size and configuration of the matrix, and the discs are its 'words'. Any language, whether formed by organisms of high or low complexity, derives from the same basis - the complexity of the neuronal system and the means to express. Hence those discs popping in and out of existence on the screen represent the language of the program. Since we humans have bigger brains and bodies that let us communicate in a far more sophisticated manner, the matrix output is virtually meaningless to us. On the other hand, play around with the program by pressing keys on the keyboard in response to the discs, and after a short while certain patterns (a personality?) start to emerge.
Your program doesn't run!
Yes it does - provided you follow the manual. At the risk of sounding patronising, one really does need to read it. Keep in mind that OtoomCM and its derivations are a test platform. As such it is open to inspection; the code can be analysed and even played around with (if you are so inclined). The present format allows everything to be seen, from the simplest to the more complicated functions. So, if you really want to confirm that the program does what I claim it does, you can. In OWorm for example there are over 12,000 lines of code (including the header file) - not that large by any standard, but it does take some time to go through.
Isn't scientific research usually introduced via universities or journals?
If a researcher is associated with a university or an institute, the institution's sheer presence provides the means to disseminate new findings; the researcher uses the publicity to support the university and vice versa. It is a powerful marketing machine that is not available in this case. Journal articles are restricted in size - a real issue when it comes to wider-spanning concepts. In addition, when the importance and usefulness of research results make them necessary for wider society, the traditional way of waiting years for them to come to the fore is not an option. The overall effect is that the model cannot be used by decision makers in wider society as constructively as it otherwise could be.
An even more fundamental problem is the kind of conceptualisation employed when dealing with the mind. There are three main aspects that need to be comprehensively understood before one can begin to explain the technical detail: (1) the mind is the only thing in our existence which is subject and object at one and the same time; (2) cognitive and/or behavioural dynamics must be seen in terms of functionalities, not content, in order to be deconstructed usefully; (3) the functional nature of abstractions needs to be appreciated. In none of the texts on hypotheses, or plain musings, about how the mind could work are those three dealt with in the profound manner they deserve. Generally speaking, our evolutionary path has not trained us to delve into such matters, so the unfamiliarity is understandable. On the other hand, all it takes is a little practice, and the rewards are not slow in coming (remember when you didn't know how to ride a push bike?).
Some of the issues raised in further developments sound like metaphysics!
They do so because until now these kinds of considerations have indeed been the prerogative of metaphysics. They are not esoteric musings by some philosopher, however - their significance derives from a knowledge base that is formal, analytical, and repeatable.
Regarding the Parallels, how do you justify making the connection with real-world events?
By identifying the functional drivers of human behaviour at any given time, their potential for domain forming or its opposite can be ascertained as well. The approach follows the same lines of argument as in the computer model, although on a much larger scale and hence with higher sophistication. The observations made there have already been highlighted in the main work, "On the origin of Mind", and are therefore not modifications after the fact. To remain technically consistent I have used expressions which are in line with Otoom's context, but that can render them somewhat obscure for the outsider; as a consequence the explanations on these pages may not always be all that clear. In any case, I estimate that explaining the model comprehensively to a student in something like a university course would take two semesters (assuming two lectures per week). But what can I do - the mind is anything but simple! Still, "On the origin.." contains 968 references confirming the consistency and validity of the model, the program OWorm has been subjected to 560 tests, and as for the Parallels, as of August 2018 there are over 360 events collected from around the world that confirm the model after the work was completed - and counting.
The book is available for purchase.
Why should the entries in the Parallels be considered a confirmation of the model?
Because the accompanying explanations in terms of the Otoom model are consistent throughout. Suppose a pump hidden inside a black box is subjected to a number of tests in which the input and output are analysed under varying conditions. For someone to claim they know how the pump works, they would have to supply a detailed description for every test, and those descriptions would have to be mutually consistent. Just like a detective who interrogates five suspects and compares their statements with each other: if four of them match but one doesn't, it becomes clear who is lying. The point here is that in the case of the Parallels the 'tests' were not pre-arranged - the events were taken as they presented themselves, they all occurred years after the model was completed, no modification of the model after the fact took place, and the descriptions are consistent with each other.