OCTAM v.2 - the artificial mind
Contents
Overview
Functional overview
Settings - and what they mean
Where to go from here
Tests and error messages
Contact and bug report
Copyright and credits
Where to go from here
See also Footnote - The current state of AI.
There is one feature which, while unattainable so far, would provide a true simulation of an organic brain: semi-autonomous nodes. In nature each neuron acts autonomously as far as its innate functionality is concerned (beyond that it answers to the contingencies of the wider system, hence does not act completely on its own). Therefore, whatever and wherever the input, each neuron 'knows' what to do. To replicate that in a computer we would need as many threads as there are nodes in the matrix. Even some of today's supercomputers ('massively parallel' is the, albeit relative, term used in this context) barely make the grade for the higher-scale deployment of OCTAM's AI Engine, where something like 900,000 nodes (main matrix nodes x element nodes) is easily reached.
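As a rough illustration, here is the arithmetic in a minimal Python sketch; the node counts are assumptions chosen to land near the 900,000 figure, not OCTAM's actual configuration:

    # Back-of-the-envelope comparison: one thread per node versus the
    # hardware threads actually available. Node counts are illustrative.
    import os

    main_matrix_nodes = 900      # assumed main matrix size
    element_nodes = 1_000        # assumed element nodes per main node
    total_nodes = main_matrix_nodes * element_nodes   # 900,000 nodes

    hardware_threads = os.cpu_count() or 1
    print(f"nodes wanting their own thread: {total_nodes:,}")
    print(f"hardware threads on this machine: {hardware_threads}")
    print(f"nodes that must share each thread: {total_nodes // hardware_threads:,}")

On an ordinary machine the last figure runs into the tens of thousands, which is why true one-thread-per-node autonomy remains out of reach.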
Thomas Alsop mentions Japan's supercomputer Fugaku with its over 7.6 million cores (Fastest supercomputers by number of computer cores 2020). In addition, parallel processing would mean not only processes working in tandem but also allowing for a variety of computation types. As a consequence the resulting differences in efficiency across the varying tasks within the same system would need to be addressed, already an issue with OCTAM. The problem is discussed in Finally, how many efficiencies the supercomputers have? (Végh, J., The Journal of Supercomputing, SpringerLink, 2020). Of course, simulating any natural process through a model brings us up against the same barrier.
Apart from the above, when it comes to future developments there are two general options: one, expanding the cognitive capacity of the current version; and two, putting OCTAM's chaos-type characteristics to various uses.
The first option is a matter of increasing the number of nodes in OCTAM. More nodes mean deeper processing of input, a heightened capacity to form clusters (ie, node sets with similar affinity levels) and their sub-clusters, and therefore a greater ability to form affinity connections between initially unconnected clusters due to their mutual similarity. This also aids memory, because there is more latency (see the FAQs page → Where does latency come into all this?, also FAQs → So what's an affinity?).
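A minimal sketch of the clustering idea, assuming affinities can be reduced to a single numeric value per node (the values and thresholds are illustrative; OCTAM's actual affinity mechanism is more involved):

    # Group nodes into clusters of similar affinity, then link clusters
    # whose mean affinities are themselves similar. Values and thresholds
    # are illustrative only.
    affinities = {0: 0.12, 1: 0.14, 2: 0.80, 3: 0.83, 4: 0.20, 5: 0.22}
    threshold = 0.05

    clusters = []
    for node, value in sorted(affinities.items(), key=lambda kv: kv[1]):
        if clusters and abs(value - clusters[-1]["mean"]) <= threshold:
            c = clusters[-1]
            c["nodes"].append(node)
            c["mean"] = sum(affinities[n] for n in c["nodes"]) / len(c["nodes"])
        else:
            clusters.append({"nodes": [node], "mean": value})

    # Initially unconnected clusters become linked if their means are close.
    links = [(i, j)
             for i in range(len(clusters))
             for j in range(i + 1, len(clusters))
             if abs(clusters[i]["mean"] - clusters[j]["mean"]) <= 2 * threshold]
    print(clusters)
    print("affinity links between clusters:", links)

Here the nodes fall into three clusters; the first two are distinct yet similar enough to acquire an affinity link, while the third stays separate. More nodes simply give this mechanism more material to work with.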
Enabling OCTAM to increase the number of nodes requires making more memory space available to the operating system; once that is done, OCTAM can deploy more nodes.
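For instance, the attainable node count can be estimated from the memory the operating system makes available; the per-node footprint below is an assumed figure for illustration:

    # Estimate how many nodes fit into the memory the OS can spare. The
    # per-node footprint is an assumption; the real figure depends on
    # OCTAM's internal node structure.
    available_memory_mb = 2_048      # assumed memory granted by the OS
    bytes_per_node = 512             # assumed footprint of one node

    max_nodes = (available_memory_mb * 1_024 * 1_024) // bytes_per_node
    print(f"approximate node capacity: {max_nodes:,}")   # ~4.2 million here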
The second option makes use of the chaos-type dynamics not only for cognitive processing per se but also for transforming any input into patterns that evolve as a function of that input.
Input could be sound, or visuals. For example, music can be streamed into OCTAM and turned into a projection of shapes and colours that change with the sound yet produce patterns based on the patterns within the music. Since internal processing takes place all the time, the result is not a repetitive display but one that tends to form its own variations, all ultimately derived from the input.
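A toy sketch of the principle, using a logistic map as a stand-in for OCTAM's far richer internal dynamics: the input's level perturbs an ongoing chaotic process, and the state of that process is rendered as colour and shape parameters. All names and constants here are illustrative:

    # Input-driven pattern generation in miniature: an ongoing chaotic
    # process (a logistic map) is nudged by the input's loudness, and its
    # state is mapped to a hue and a radius for display.
    import math

    def frame_params(state, input_level):
        """Advance the internal state and derive drawing parameters."""
        # Input shifts the map's parameter r within the chaotic regime.
        r = 3.7 + 0.25 * max(0.0, min(1.0, input_level))
        state = r * state * (1.0 - state)        # logistic map step
        hue = state                              # 0..1 -> colour wheel
        radius = 10 + 90 * math.sin(math.pi * state)
        return state, hue, radius

    state = 0.4
    for level in [0.1, 0.8, 0.3, 0.9, 0.2]:      # stand-in for audio levels
        state, hue, radius = frame_params(state, level)
        print(f"hue={hue:.2f}  radius={radius:.1f}")

Note that the map keeps evolving even under constant input, which mirrors the non-repetitive quality described above: the display varies on its own, yet every variation traces back to what was fed in.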
Of course, input from a camera can be used in a similar fashion. Hence the output can be sound from sound, or sound from visuals, or a combination of these.
Thus OCTAM can be used to render any form of input into a particular 'language', just as human words are a rendition of our mind's internal states, made possible via our senses.
All of the above can be further expanded by linking copies of OCTAM together through the internet. The ultimate capacity will then depend on the number of users logged on at any given time: a thousand copies of OCTAM, each with 100,000 nodes, become an entity with 100,000,000 nodes at its disposal; and so on.
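In sketch form, the pooled capacity is then a simple function of the copies online at that moment (the figures mirror the example above):

    # Pooled capacity of linked OCTAM copies; a hypothetical helper whose
    # figures mirror the example in the text.
    def pooled_nodes(copies_online: int, nodes_per_copy: int = 100_000) -> int:
        return copies_online * nodes_per_copy

    print(f"{pooled_nodes(1_000):,} nodes")   # 100,000,000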
Similarly, with the second option the output at any given location can be the result of input from several places, together forming a multi-faceted interpretation of many events happening at the same time.
Although in this version the input is restricted to webcam, microphone and keyboard (and the relevant outputs), any device with a digitised interface can be directed to the AI Engine, and output from the AI Engine can be streamed back to the device; this includes sensors and mechanical devices.
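In sketch form, such devices could be wired in through a common adapter; the class and method names below are hypothetical, not OCTAM's actual interface:

    # A hypothetical adapter pattern: any device with a digitised interface
    # exposes a read stream (towards the AI Engine) and a write stream
    # (back from the AI Engine).
    from abc import ABC, abstractmethod

    class DeviceAdapter(ABC):
        @abstractmethod
        def read_frame(self) -> bytes:
            """Digitised input destined for the AI Engine."""

        @abstractmethod
        def write_frame(self, data: bytes) -> None:
            """Output streamed back from the AI Engine."""

    class TemperatureSensor(DeviceAdapter):
        def read_frame(self) -> bytes:
            return b"\x17"            # e.g. one digitised reading
        def write_frame(self, data: bytes) -> None:
            pass                      # a pure sensor ignores output

    class RobotArm(DeviceAdapter):
        def read_frame(self) -> bytes:
            return b""                # an actuator supplies no input
        def write_frame(self, data: bytes) -> None:
            print("actuate:", data)   # engine output drives the mechanism

    for device in (TemperatureSensor(), RobotArm()):
        engine_input = device.read_frame()   # would be fed to the AI Engine
        device.write_frame(b"\x01")          # would arrive back from it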
© Martin Wurzinger - see Terms of Use