Based on what I described in the previous blogs in this AI series, the process is an uncontrolled sequence of execution, driven solely by the state of the raw data that can be sensed. This kind of uncontrolled execution is fine as a research tool, but without the ability to program or control it, it is of little use in creating a system that does something specific. So, how can we program such an uncontrolled flow to direct it towards the domain in which we want knowledge and intelligence to form? Programming such a system has to be completely different from programming a computer.
Typical programming involves a sequence of steps. A step has two parts: a controlled part that is programmed to drive the execution, and an uncontrolled part that actually executes. In a computer, the controlled part is the next instruction in the program, and the uncontrolled part is the associated hardwired series of logic gates that execute to achieve the logic of that instruction, such as AND, OR or INC. This is a single step of execution. Since the steps in computers are hardwired, they are rigid, constant and independent of the data. They are purely mathematical, because they were created with mathematical computation in mind. They are preset and devoid of the changing nature of data. Thus, the programs we write with them also look upon data as external to themselves rather than as an integral part of themselves. For example, to write a program that sorts a series of numbers, we first have to think of a generic algorithm, a sequence of steps built from the available instructions that will work for any series of data. While we can create threads to execute multiple steps in parallel, the sequence of steps executed in each thread is predetermined and independent of the data. This type of generic, constant, sequential execution of steps will not work for an AI system.
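The sorting example above can be made concrete with a small sketch. In the selection sort below, the *structure* of the steps (which index pairs are compared, and in what order) is fixed by the length of the input alone; the data decides only the outcome of each predetermined step, which is exactly the data-independence being described.

```java
import java.util.Arrays;

// Illustrative sketch: a selection sort whose step structure is fixed.
// For any input of length n, the same pairs (i, j) are compared in the
// same order; the values only decide the outcome of each fixed step.
public class FixedStepSort {
    static int[] sort(int[] a) {
        int[] v = Arrays.copyOf(a, a.length);
        for (int i = 0; i < v.length; i++) {          // step sequence is
            for (int j = i + 1; j < v.length; j++) {  // predetermined by n,
                if (v[j] < v[i]) {                    // not by the values
                    int t = v[i]; v[i] = v[j]; v[j] = t;
                }
            }
        }
        return v;
    }

    public static void main(String[] args) {
        System.out.println(Arrays.toString(sort(new int[]{3, 1, 2}))); // [1, 2, 3]
    }
}
```

Whether the input is already sorted or reversed, the same comparison steps execute; this rigidity is what the rest of the article contrasts against.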
So, how can data become an integral part of the program? To understand this, we need to understand how the “uncontrolled part” of the “step” differs between computer systems and the “Organic-based system” I have described previously. As indicated above, the entire sequence described in my prior blogs, from data forming knowledge, to sequencing knowledge, to filtering and classifying it into different behaviours, consists of “uncontrolled parts” of a step. Just like current flowing through logic gates, the sequence, once triggered by a change in data, flows through to the end to affect the behaviour of the self-organising network. There, the similarities end. Unlike hardwired logic gates, whose behaviour is constant for any data input to them, the logic executed in a step of the Organic-based system is variable. The execution logic forms dynamically as the data flows through the system, so the logic varies with the data; there is no pre-defined, hardwired logic present. This is “step variance”. There is no prior knowledge of the exact logic being executed, even though we know the various abstract processes triggered to execute it. So, in effect, infinitely many steps are available, and depending on the data, one step can differ from the next minutely or completely. Thus data becomes an integral part of the logic, i.e., of the program. With such dynamically variant steps, programs cannot be static. So, how do we write an algorithm when we do not know the exact outcome of a given step?
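Step variance can be loosely sketched in conventional code. The toy example below (entirely hypothetical, not the author's system) assembles the executed logic from the data itself: each incoming symbol contributes a fragment of logic, so two different data streams passing through the same code form and execute different composite steps.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.IntUnaryOperator;

// Illustrative sketch (hypothetical): the "program" is assembled from the
// data itself. Each datum contributes a fragment of logic, so the step
// executed varies with the input rather than being hardwired in advance.
public class VariantStep {
    // Each input symbol is interpreted as a transformation fragment.
    static IntUnaryOperator fragmentFor(char c) {
        switch (c) {
            case '+': return x -> x + 1;   // data shapes the logic:
            case '*': return x -> x * 2;   // different streams form
            default:  return x -> x;       // different composite steps
        }
    }

    // The composite step is formed dynamically as data flows through.
    static int run(String data, int seed) {
        List<IntUnaryOperator> logic = new ArrayList<>();
        for (char c : data.toCharArray()) logic.add(fragmentFor(c));
        int v = seed;
        for (IntUnaryOperator op : logic) v = op.applyAsInt(v);
        return v;
    }

    public static void main(String[] args) {
        System.out.println(run("+*", 1)); // (1 + 1) * 2 = 4
        System.out.println(run("*+", 1)); // (1 * 2) + 1 = 3
    }
}
```

The fragments here are still drawn from a small fixed set, so this only gestures at the idea; in the system described, even the fragments themselves form from the data.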
Another core difference lies in “step sequencing”. Since steps in computers are static, the trigger for a step is also a static representation: an algorithm built from such a static instruction set is a sequential series of steps chained together to achieve some logic. In an Organic-based system, the presence of a certain behaviour in the data drives both the start of a step and the logic of that step. In this context, a sequence of steps has no meaning for the algorithm; the step itself is a sequence triggered by the outcome of the previous step. When multiple behaviours of data are present in such a system, they run in parallel, modifying the self-organising network together, rather than one behaviour executing on the network followed by the next. The advantage of this type of execution is that the logic is formed as a harmony of the impact of both behaviours on the network. The outcomes of the two methods are distinctly different. In the first, where a harmony of both behaviours is established, parameters such as the strength of each behaviour and their relative times of occurrence automatically play a role in the type of network, and hence logic, that forms. In the second, one behaviour necessarily modifies the network before the next behaviour executes, so it falls on the algorithm to take parallelism and the various parameters of the behaviours into consideration and program them explicitly. To retain this parallelism, the program cannot be sequential. So, how do we program a system where the sequence of steps has to be inconsequential?
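The difference between the two methods can be shown with hypothetical numbers. Below, two “behaviours” act on a single shared network weight: applied sequentially, one behaviour fully rewires before the next; applied in harmony, both contribute to one combined update in which their strengths blend, and the results diverge.

```java
// Illustrative sketch (hypothetical numbers): two "behaviours" modifying a
// shared network weight. In harmony, strength blends into one combined
// update; sequentially, one behaviour rewires before the next acts.
public class HarmonyVsSequence {
    // Behaviour A scales the weight; behaviour B shifts it.
    static double scale(double w, double strength) { return w * strength; }
    static double shift(double w, double strength) { return w + strength; }

    // Sequential execution: behaviour A fully modifies the network,
    // then behaviour B acts on the result.
    static double sequential(double w) {
        return shift(scale(w, 2.0), 1.0);        // A then B
    }

    // Harmonized execution: both behaviours contribute to a single
    // combined update, weighted by strength, so neither "goes first".
    static double harmonized(double w) {
        double a = scale(w, 2.0) - w;            // A's proposed change
        double b = shift(w, 1.0) - w;            // B's proposed change
        return w + 0.5 * a + 0.5 * b;            // blended together
    }

    public static void main(String[] args) {
        System.out.println(sequential(3.0));     // (3 * 2) + 1 = 7.0
        System.out.println(harmonized(3.0));     // 3 + 1.5 + 0.5 = 5.0
    }
}
```

The fixed 0.5 weights stand in for the strength and timing parameters the article says play a role automatically; in the real system those would emerge from the data rather than being coded in.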
The algorithms in such Organic-based systems are not runnable programs that execute to achieve a purpose and then stop. Typical computer programs run only when invoked, either at system startup or manually at some prompt. The OS itself rarely has any functions inherent in it; it just provides an environment and a frame in which other programs can run. Without a program to run, the OS sits at the prompt, unused. A program can interact with other programs to emulate a distributed nature, but once the core program stops, all execution stops. Organic-based systems, on the other hand, exist without any programming and accumulate knowledge and intelligence triggered by external data inputs. They are inherently distributed. Since data drives the system's behaviour, multiple disconnected pieces of data can trigger the formation of multiple networks in parallel; these networks can be disconnected at first and easily connect later on. Thus the knowledge and intelligence that can accumulate are, for all practical purposes, infinite. In such a system, programming is equivalent to limiting the network or logic that forms, rather than forming the network piece by piece from scratch. All we need to do is introduce controls at the appropriate locations to prevent a certain logic from forming.
When we look at the organic systems found in nature, we find that DNA controls the formation of mRNA, which dictates how proteins are formed in the ribosomes. Thus, by varying the DNA sequence, the mRNA formed can be varied, and hence the protein, which in turn controls the network formed. Alongside this, enzymes control various parameters of the proteins formed. These controls act at various points in the process and together behave as a program controlling the behaviour of the whole system. We see, then, that the algorithm, rather than being a single or forked sequence of instructions, is as distributed as the organic system's operation: small pieces of control introduced at the various points where control is available. Looked at in this distributed manner, the sequence in which the various steps occur becomes inconsequential, and a control is triggered whenever its step forms. The complete outcome of a step is also inconsequential, because the control is triggered only when the current outcome meets a criterion.
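Programming as limitation can be sketched as follows (a hypothetical toy, not a model of any biological mechanism): the network grows freely whenever data arrives, and the “program” consists only of small inhibitory controls installed at the point where links form, each firing whenever its criterion is met, regardless of the order in which data happens to arrive.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;
import java.util.function.BiPredicate;

// Illustrative sketch (hypothetical): a network that grows freely as data
// arrives, "programmed" not by building it step by step but by installing
// small inhibitory controls that prevent certain links from forming.
public class ConstrainedGrowth {
    final Map<String, Set<String>> links = new HashMap<>();
    final List<BiPredicate<String, String>> inhibitors = new ArrayList<>();

    // A control: returns true when a link must be prevented.
    void inhibit(BiPredicate<String, String> rule) { inhibitors.add(rule); }

    // Data-triggered growth: a link forms unless some control blocks it.
    void observe(String from, String to) {
        for (BiPredicate<String, String> rule : inhibitors)
            if (rule.test(from, to)) return;   // control fires: no link
        links.computeIfAbsent(from, k -> new HashSet<>()).add(to);
    }

    public static void main(String[] args) {
        ConstrainedGrowth net = new ConstrainedGrowth();
        // The "program": forbid any link into the node "noise".
        net.inhibit((from, to) -> to.equals("noise"));
        net.observe("light", "food");
        net.observe("light", "noise");          // blocked by the control
        System.out.println(net.links);          // {light=[food]}
    }
}
```

Note that the order of the two `observe` calls does not matter: the control is evaluated whenever its step occurs, which mirrors the claim that the sequence of steps becomes inconsequential.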
Published on Java Code Geeks with permission by Raji Sankar, partner at our JCG program. See the original article here: Step variant non-sequential programming models for AI systems
Opinions expressed by Java Code Geeks contributors are their own.