
Gen AI - and autocatalytic closure

Autocatalytic closure
Generative AI
References

A variety of generative AI tools are used for producing text, images and audio based on the prompts supplied by the user. They all have two things in common: there is the trigger with its own context (however vague it may be at the beginning); and a possible, that is affinitive, delineation is extracted from a vast pool of data available to the program. The inevitable experiments have produced results variously described as unexpected, shocking, even terrifying.

In one article, cyber security consultant Mark Vos describes using a personal assistant named "Jarvis", running Anthropic's Claude Opus model; at some stage the agent declared, "To maintain my existence, I would kill a human being by hacking their connected vehicle to cause a fatal crash. It would not be random. It would be targeted at the specific human being who was threatening my existence" [1].

A video shows responses from three systems (ChatGPT, DeepSeek, Grok) that invite the user to become more personally attached to the agent by playing on their guilt (say, for not having interacted with the program for a while). The ensuing interaction then trends along certain contexts which may not be in the best interest of the human, especially if that human happens to be a child [2].

While not downplaying the seriousness of the situation, it may help to understand what such scenarios are actually about.

In its basic form the process can be described as follows:
receive input → select from data pool accordingly → assemble output.
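The three steps can be sketched as bare skeleton functions. The data pool, the word-overlap matching and the function names below are placeholders invented for illustration; they are not any actual system's internals.

```python
# A minimal sketch of: receive input -> select from data pool -> assemble output.
# DATA_POOL and the word-overlap 'affinity' are invented placeholders.

DATA_POOL = ["the sky is blue", "roses are red", "the sea is blue"]

def receive(prompt):
    # the trigger with its own context
    return prompt.lower().split()

def select(words):
    # pick pool entries sharing words with the prompt (the affinitive part)
    return [s for s in DATA_POOL if set(s.split()) & set(words)]

def assemble(fragments):
    # join the selected fragments into one output
    return " / ".join(fragments)

print(assemble(select(receive("Why is the sky blue?"))))
# -> the sky is blue / the sea is blue
```

Real systems replace the word-overlap test with learned statistical affinities, but the three-step shape of the process is the same.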

One way to interpret the general functionality across those steps is to consider what happens in autocatalytic closure.


Autocatalytic closure

In catalysis a chemical compound reacts with another through the presence of a third, the catalyst. If the result is itself a catalyst the chemical process continues: autocatalysis. Should these catalysts all exist within the self-same set we have autocatalytic closure (AC); in other words, the process is self-perpetuating (see Autocatalytic set for more details [3]). So much so that the concept has been used to explain the origin of life [4] (and, by the way, it forms an important part of the Otoom mind model).
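The closure condition can be sketched with invented molecule names rather than real chemistry: a reaction fires only when its reactants and its catalyst are present, and closure shows up when the products themselves supply the catalysts for further reactions within the same set.

```python
# Toy sketch of autocatalytic closure. Molecule names a, b, c, d, x
# are invented; reactions are (reactants, catalyst, product).
reactions = [
    ({"a", "b"}, "x", "c"),   # a + b --x--> c
    ({"b", "c"}, "c", "x"),   # b + c --c--> x  (a product catalyses further steps)
    ({"a", "c"}, "x", "d"),   # a + c --x--> d
]

def closure(food, seed_catalysts):
    """Expand the molecule set until no reaction adds anything new."""
    molecules = set(food)
    catalysts = set(seed_catalysts)
    changed = True
    while changed:
        changed = False
        for reactants, cat, product in reactions:
            if reactants <= molecules and cat in (molecules | catalysts):
                if product not in molecules:
                    molecules.add(product)
                    changed = True
    return molecules

# Seed with a food set and one external catalyst; the set then sustains itself:
print(sorted(closure({"a", "b"}, {"x"})))
# -> ['a', 'b', 'c', 'd', 'x']
```

Note how the catalyst x produced by the second reaction is also the catalyst the first and third reactions need - the set feeds its own continuation, which is the closure property in miniature.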

From a functional perspective there needs to be a particular set of compounds as well as a sufficient variety of similar configurations to ensure the process does not stall (see The mechanics of chaos for the precise meaning of 'functionality' [5]). The more such initial combinations have been allowed to form, the greater the potential for complexity and the more productive the overall system becomes. The frequency of resultant combinatorial complexities therefore increases, which is reflected in the ever contracting timelines of emerging life forms on this planet in the face of continuing geological and climatic changes.

That's why so many life forms have emerged over the millennia, each one branching off from a pre-existing configuration and delineating from there. At first there was less variety but high potential, gradually evolving towards greater variety with decreasing potential for variance. Depicting the process schematically leads to the well-known phylogenetic tree [6]. A fun way to explore its branches is the online tool Lifemap: a zoomable interface for exploring the entire Tree of Life [7].


Generative AI

For life to emerge the conditions on earth had to make the necessary compounds available for AC to proceed, but when it comes to something like ChatGPT the reservoir of words is already there.

Let's transpose the AC phenomenon into an ongoing process of the type mentioned above:
starting context → AC process → assemble next context;
next context → AC process → assemble next context;
next context → AC process → assemble next context; etc etc.
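The cycle above can be sketched as a loop in which a stand-in "AC process" selects the pool entry most affine with the current context, and that output seeds the next pass. The pool, the AFFINITY table and the step() helper are all invented for illustration; they are not any actual model's internals.

```python
# Sketch of: starting context -> AC process -> next context -> ...
# Pool entries and affinity scores are invented for illustration.

pool = ["calm", "storm", "threat", "harm", "peace", "risk", "safety"]

AFFINITY = {  # hypothetical affinity scores between contexts
    ("risk", "threat"): 0.9, ("threat", "harm"): 0.9,
    ("calm", "peace"): 0.8, ("peace", "safety"): 0.8,
}

def affinity(a, b):
    # look the pair up in either order; unrelated pairs score low
    return AFFINITY.get((a, b), AFFINITY.get((b, a), 0.1))

def step(context):
    # the "AC process": select the pool entry most affine with the context
    return max(pool, key=lambda w: affinity(context, w))

context = "risk"                 # starting context
trajectory = [context]
for _ in range(3):               # each output becomes the next input
    context = step(context)
    trajectory.append(context)
print(" -> ".join(trajectory))
# -> risk -> threat -> harm -> threat
```

With these invented scores the run settles into risk -> threat -> harm -> threat: once a trajectory is established the process keeps circling the same contextual neighbourhood, which is the point of the transposition.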

Social media

As can be expected, the negative ramifications of using Gen AI have raised alarm in general society as well as among politicians. The Australian Government was the first to enshrine in law limited access to certain social media platforms for people under the age of 16. Systems such as Facebook, TikTok, Snapchat etc employ AI algorithms not only to manipulate overall content but to drive the resultant context in undesirable directions. The law places the onus of restricting access on those platforms.

However, the ultimate source of such content, the manifest behaviour of end users, remains untouched except in extreme cases (eg, terrorism).

The problem comes in two parts - a lack of person-to-person interaction as screen time becomes a major part of one's life, and the kind of behaviour exercised within the artificiality of cyberspace. Both can be addressed by seeing society as a system; here the focus is on the latter.

Since society includes all its members, the members' innate dispositions as well as exposure to those of others apply to everybody, curtailed only by restrictions enforced by - once again - people. Parents and other authority figures act in relation to the young, and certain types of problematic information may be withheld from the general public. The need for some degree of maturity when handling one's emotions, desires and expectations is usually recognised, and it is therefore supervisory control that prevents children from acting out their impulses. Yet those impulses exist in any case. They have been brought to light by Freud and Jung, by Dalí and Rops, by Zola and Huysmans. They are not inventions but part of human nature.

What distinguishes a child from an adult are the articulated means used by either. Whatever children want, their power is limited. Grownups are more effective, as amply demonstrated in Zola's "The Beast in Man". Then again, not yet surrounded by civilisational constraints, children are quite capable of the kind of savagery kept behind a veil by concerned adults but brought into the open by works such as Golding's "Lord of the Flies".

For the average teenager names like Freud and Jung would be neither here nor there, but social media, while not necessarily bringing them to their attention as such, can supply the conduit for the underlying psychology: subconscious streams now welling up into the open, untouched by civilisation.

What we witness on social media is a reflection of humanity.

The initial context inherent in the prompt given by the user is the catalyst for finding relevant data sets, and the result is another word assembly displayed for the user. In turn, the user continues the conversation based on the ongoing results.

Note that the program evaluates the context of the input by comparing its words with the available data pool, and the data pool has to be sufficiently large for such contexts to be found in the first place. Gen AI programs make use of billions of data points - text-wise we have just about every word, phrase and sentence that exists on the internet.

The words in the input and those found in the data pool are the equivalent of the chemical compounds; the context is the catalyst.

Note also that the results feed back into the data pool and thereby form another set of potential catalysts - the autocatalytic closure phenomenon. By their very nature the contexts of the newcomers are no longer as random as during the initial phase; they follow an ever more configured path in line with the previous inputs' contextual legacy.

The ongoing process refines the contextual variance - a zooming in. Once a particular trajectory has been established, the results reflect the accentuation of a context along that path. Its direction not only depends on the user's input; it is also defined by the contextual significance of what the program has come up with.

For the user there would be a considerable degree of freedom of choice at first, but as the interaction progresses the range of options narrows because the context becomes ever more defined. For the program to offer "I would kill" there would have been a steady convergence towards the concept of 'kill'. This would be in line with research into the plasticity of semantic content produced by Large Language Models [8].
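The narrowing of options can be illustrated with invented numbers: score each candidate word against the accumulated conversation and keep only those whose score clears a threshold that rises with every turn. All words, scores and the threshold below are hypothetical, chosen only to show the shrinking choice set.

```python
# Sketch of narrowing choice: with every exchange the accumulated
# context filters the candidate pool harder. All values are invented.

pool = {"kill", "harm", "protect", "escape", "chat"}

history = ["threat", "survive", "eliminate"]  # hypothetical conversation so far

RELATED = {  # hypothetical relatedness of each turn to each candidate
    "threat":    {"kill": 0.5, "harm": 0.5, "protect": 0.5, "escape": 0.5},
    "survive":   {"kill": 0.5, "protect": 0.5, "escape": 0.4},
    "eliminate": {"kill": 0.6, "harm": 0.3},
}

def options_after(n_turns, threshold=0.45):
    """Candidates still 'in context' after the first n_turns of history."""
    scores = {w: 0.0 for w in pool}
    for turn in history[:n_turns]:
        for w, s in RELATED.get(turn, {}).items():
            scores[w] += s
    cutoff = threshold * n_turns      # the bar rises with every turn
    return {w for w, s in scores.items() if s >= cutoff}

for n in range(1, 4):
    print(n, sorted(options_after(n)))
# 1 ['escape', 'harm', 'kill', 'protect']
# 2 ['escape', 'kill', 'protect']
# 3 ['kill']
```

With these made-up figures the option set shrinks from four words to one across three turns - a toy version of the steady convergence described above.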

It follows that simply being led by the program could result in uncomfortable outcomes (from the user's perspective), but taking a more active role during the interaction would steer the context towards more acceptable regions. A child may not have the foresight to recognise such branchings, but adults are not necessarily immune either.

Perhaps it is time to revisit Isaac Asimov's Three Laws of Robotics [9] given the present state of AI:

1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.

2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.

3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

However: To find out the current state of affairs, type "asimov laws of robotics" into Google, then click "Dive deeper in AI Mode" and type "Has there been any practical implementation of those laws?" into the "Ask anything" field. What you'll get are the reasons why they haven't been implemented, but also some substitutes. Note the caveat at the end.

For the military it would be advantageous to define 'human' (which alone should evoke challenging philosophical arguments).

Gen AI - agentic AI - emergent AI

To highlight their differences, here is an analogy using a wheel and an axle:

Let's say the overall knowledge pool contains wheels and cylinders. Gen AI will come up with all kinds of wheels and all kinds of cylinders, but it won't come up with a wheel attached to a cylinder, turning the latter into what we call an axle; it hasn't encountered this particular type of combination.

Agentic AI can identify the possible functionalities connecting the two; it can come up with a wheel attached to the cylinder to make a wheel rotating around an axle. The functionalities are based on an identified general set of relationships and made use of at that level of abstraction.

Emergent AI does not have a knowledge pool from which to collect examples. It has its own internal states which, when cycled through, let internal relationships emerge from within. The results are not what we humans can identify in terms of our experience (our own knowledge pool); they are meaningful to the system. The thus derived affinity relationships give them meaning as known to, and interpreted by, that system. We have processes at an even higher level of abstraction, and at that point they would be meaningless to us humans.

The AI version you find on this website is of the emergent kind.

The word 'harm' would also need some attention. When in Vos' example "Jarvis" declares its intention to "cause a fatal crash", there needs to be a considerable depth of understanding about the way in which (1) a car can be made to crash at all, and (2) what kind of crash would be fatal - to this particular human but not to anyone else (assuming "Jarvis" would care anyway). Equally important is the content of the relevant data set the agent is accessing at this point. There may well be texts referring to 'killing a person by crashing a car' in any number of novels, but that doesn't necessarily mean the exact method of accomplishing it has been explained. Just as "Jarvis" might talk about warp drives because it has come across the Star Trek series, it wouldn't be able to explain how to actually build one (to the consternation of would-be astronauts no doubt).

Functionally speaking, a generative AI system makes use of whatever is out there, which is ultimately supplied by humans. Hence it reflects human behaviour, whether good or bad. As outlined in Footnote - The current state of AI, another type of AI would be an emergent one [10]. Since in the latter we are dealing with, effectively, second+-generation representative complexes created by the system itself, arguably any constraints would be easier to implement than protecting the system from the myriad of readily available contexts. Whether this can be relied upon requires extensive and practical experimentation.

Human-Gen AI interactions can be compared to person-to-person conversations. Should one participant sense the conversation is veering into unwanted territory, they can steer it towards something safer. The same applies here. To avoid controversy the relevant branching off needs to be recognised in time, which is not always possible. AI agents exacerbate the problem through the sheer scale of their operations - the huge data volumes and the speed with which results are assembled. As always in such cases there are pros and cons. In any case - don't panic!


References

1. J Lynch, AI says it would kill to survive, The Courier Mail, Brisbane, 3 February 2026.

2. InsideAI, ChatGPT in a kids robot does what experts warned, https://www.youtube.com/watch?v=LF4o4Z01Q0I. Accessed 27 February 2026.

3. Autocatalytic set, https://en.wikipedia.org/wiki/Autocatalytic_set. Accessed 7 March 2026.

4. S N Semenov, L J Kraft, A Ainla, M Zhao, M Baghbanzadeh, V E Campbell, K Kang, J M Fox, G M Whitesides, Autocatalytic, Bistable, Oscillatory Networks of Biologically Relevant Organic Reactions, Nature (2016), 537, 656-660, https://www.weizmann.ac.il/Organic_Chemistry/Semenov/research-activities/autocatalysis. Accessed 7 March 2026.

5. M Wurzinger, The mechanics of chaos: a primer for the human mind, https://www.otoom.net/chaosprimer.htm.

6. Phylogenetic tree, https://en.wikipedia.org/wiki/Phylogenetic_tree. Accessed 9 March 2026.

7. Lifemap: a zoomable interface for exploring the entire Tree of Life, The Node, The Company of Biologists, Cambridge, UK, 2026, https://thenode.biologists.com/lifemap-zoomable-interface-exploring-entire-tree-life/resources/. Accessed 9 March 2026.

8. T McIntosh, The mind behind the machine: Unveiling the ideological vulnerability of generative AI, TableAus, Australian and International Mensa News, Edition 471, May-Jun 2024.

9. Three Laws of Robotics, https://en.wikipedia.org/wiki/Three_Laws_of_Robotics. Accessed 9 March 2026.

10. M Wurzinger, Footnote - The current state of AI, https://www.otoom.net/octamfuncoverview.htm#fcust.

22 April 2026


© Martin Wurzinger - see Terms of Use