As I wrote in my previous blog, I don’t believe a true AI can be a computer program. The question that arises is: why not? And if not a program, what can it be? As I have said, I find that computers are just that, systems that do mathematical computations and nothing more. While we claim that our computers are logic and algorithm oriented, the processors themselves provide very few logic-oriented instructions. So many mathematical computations are hard-coded into the hardware for fast execution that we have to wonder how much logic can really be achieved using the few instructions that are available for writing logic. If we look at the instruction set, the program control statements can perhaps be considered the instructions that allow us to write logic: I can think of GOTO, JMP, CALL, RET. And while the binary operators such as AND, XOR, OR etc. are called logical operators, I find these are just binary operation instructions. Given that all of them only support if-else kinds of control, it is no wonder we write even AI as a huge if-else clause. Does it matter that it is written as a connected network of neurons? It still follows the logic of some if-else clause to pass the data through to the next layer of neurons. So first, the major question to be answered is: what is logic?
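A small illustration of that last point (sketched in Python for convenience, since the same operators exist at the processor level): the so-called “logical” operators AND, OR and XOR are really just bit-level arithmetic on numbers, not reasoning of any kind.

```python
# Two 4-bit values, written in binary for readability.
a, b = 0b1100, 0b1010

# "Logical" AND: each output bit is 1 only if both input bits are 1.
print(bin(a & b))   # 0b1000

# "Logical" OR: each output bit is 1 if either input bit is 1.
print(bin(a | b))   # 0b1110

# "Logical" XOR: addition modulo 2, bit by bit.
print(bin(a ^ b))   # 0b110
```

Nothing here resembles an inference; it is pure computation over bits, which is exactly why I hesitate to call these instructions “logic”.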
All of us have written the logic and algorithms on which the whole world currently runs. Yet, I am not sure any of us understands what logic really is! Given the number of times I have heard, and myself said, “this is wrong logic” in my professional life, I have started wondering if anyone understands what logic really means. The foremost mistake we make is that we look at logic devoid of the data that is getting modified. Take a basic example: when a programmer writes a bubble sort or a binary search algorithm, the first thing that comes into their mind is the pseudo-code. No one seems to write out, or care about, the flow of the data, to see how the data gets modified and hold that in their mind. All that is present in the mind is the pseudo-code. But in fact, the steps through which the data flows are more important than the pseudo-code. Honestly, it helps solve many a bug that arises in the code.
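A sketch of what I mean: a plain bubble sort, but printing the data after every pass, so that the flow of the data, and not just the pseudo-code, is what we hold in our mind.

```python
def bubble_sort(items):
    """Sort a list, printing the data after each pass to expose its flow."""
    data = list(items)  # work on a copy; the caller's list is untouched
    for i in range(len(data) - 1):
        # After pass i, the last i elements are already in their final place.
        for j in range(len(data) - 1 - i):
            if data[j] > data[j + 1]:
                data[j], data[j + 1] = data[j + 1], data[j]
        print(f"after pass {i + 1}: {data}")  # watch how the data changes
    return data

bubble_sort([5, 1, 4, 2, 8])
```

Watching the intermediate states (the largest remaining value “bubbling” to the right each pass) tells you far more about why the algorithm works, and where a bug would show itself, than the pseudo-code alone does.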
So, what happens if we take the data first and then start trying to understand what logic really means? Let us take a very simple example of data streams. Say we got the data stream “1, 1, 2, 3, 5”. By looking at it, we say this is the Fibonacci series. So, according to this data, the logic should be: “the expected next number is computed as the sum of the previous two numbers; set the first two numbers to 1”. Now, what happens if the next number that arrives in the series is again “5” instead of “8”? Should we ignore “5” as an outlier? Can there even be an outlier? Or is the outlier just a concept we have introduced because we cannot fit something into our definition of logic? What happens if the next number is “10” rather than “8”? Is “10” also considered an outlier? If, instead of writing the logic as “the expected next number is computed as the sum of the previous two numbers; set the first two numbers to 1”, we had written it as “the expected next number is computed as the sum of the previous two numbers; set the first two numbers to 1; repeat the number after every 5 computations”, wouldn’t it suit the series we actually got? So, now the question to ask ourselves is: what is wrong, the logic or the data? So, what is logic?
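The two candidate “logics” above can be sketched side by side. This is only an illustration; the `fib_repeat_stream` rule is my reading of “repeat the number after every 5 computations”, i.e. the fifth, tenth, … computed value is emitted twice. Both rules fit the first five values perfectly; only the data that arrives next tells them apart.

```python
def fib_stream(n):
    """First n values of the plain Fibonacci rule: next = sum of previous two."""
    out = [1, 1]
    while len(out) < n:
        out.append(out[-1] + out[-2])
    return out[:n]

def fib_repeat_stream(n):
    """Same rule, but every fifth computed value is repeated once."""
    underlying = fib_stream(2 * n)      # more underlying values than needed
    out = []
    for i, value in enumerate(underlying, start=1):
        out.append(value)
        if i % 5 == 0:
            out.append(value)           # repeat after every 5 computations
        if len(out) >= n:
            break
    return out[:n]

print(fib_stream(5))          # [1, 1, 2, 3, 5]
print(fib_repeat_stream(5))   # [1, 1, 2, 3, 5]  -- indistinguishable so far
print(fib_stream(7))          # [1, 1, 2, 3, 5, 8, 13]
print(fib_repeat_stream(7))   # [1, 1, 2, 3, 5, 5, 8]
```

Neither function is “the wrong logic”; each is simply a logic fitted to a different amount of data.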
We need to learn from the best, i.e., nature, to understand what logic really means. After all, many algorithms have come to life from observing the nature around us. The only problem is that we seem to have observed it after it has already come into existence. So, we have ended up mimicking the result rather than the process. What do I mean by this? To understand, we need to observe a situation that is currently playing out in front of our eyes, just as we watched the stream of data flowing above. So, where can we see such a stream flowing?
I am sure we all have observed how a city expands. If not, we should observe a city expanding; it is very enlightening. In Bangalore, it is always visible, because it is a city bursting at its seams and hence the authorities are forever trying to find ways to reduce city traffic. So, it all starts with the BBMP deciding to build a road at the edge of the current city limits to allow trucks to bypass the city roads. They choose a really remote location and, obviously, take their own sweet time, but in time they build it out. But over the years it takes to build the road, a whole lot of vendors realise that there is a huge opportunity to sell their wares to the people driving on the spans of road that are completed. Hence we find these shops springing up on the sides of the road. Then a whole lot of builders start seeing the opportunity, building houses, layouts and apartments, claiming they are near such-and-such road and selling them at a supposedly low price. Then people who are fed up with living in the dusty concrete city start moving out into these apartments, and so on it goes. Step builds upon step, till the city reaches the road built by the BBMP, and the very trucks for whom the road was supposed to have been built are no longer welcome on it because they cause traffic jams. Bangalore has many such roads: it started with the inner ring road, then went on to the outer ring road, from where we now have the peripheral ring road, and so on it goes.
But what is important to note here is that we can see this whole process as “data streaming”. We can see the logic building up as each and every event plays out. The road getting planned becomes the first event, which triggers the next event of trucks bypassing the city on the new road, followed by the next event of vendors setting up shops, and so on it goes. With each accumulated piece of data, we find that the logic adjusts itself to the data that has come into existence. This is nature. “Logic is never constant. Data IS the logic. Logic changes with data.” Hence, there is no right or wrong logic. If we had used data to directly represent logic, rather than writing a pseudo-code that fits the data we currently know, we would always have had the logic that best fit all the data. In such a case there are no outliers; there is just data. Logic can only be wrong when evaluated against some set of parameters that we have set for ourselves.
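One rough way to sketch this “data is the logic” idea in code (a toy of my own construction, not a real AI technique): instead of fixing a rule up front, let the observed stream itself be the rule. The predictor below simply remembers, for each pair of consecutive values, what followed them; new data never becomes an “outlier”, it just extends the logic.

```python
class DataIsLogic:
    """A toy predictor whose only 'logic' is the data it has seen so far."""

    def __init__(self):
        self.transitions = {}   # (prev2, prev1) -> the value seen after them

    def observe(self, stream):
        # Every consecutive triple in the stream becomes part of the logic.
        for a, b, c in zip(stream, stream[1:], stream[2:]):
            self.transitions[(a, b)] = c

    def predict(self, prev2, prev1):
        # Answer only what the data itself has said; None means "not yet known".
        return self.transitions.get((prev2, prev1))

model = DataIsLogic()
model.observe([1, 1, 2, 3, 5])
print(model.predict(3, 5))      # None: the data has not told us yet

model.observe([1, 1, 2, 3, 5, 5, 10])
print(model.predict(3, 5))      # 5: the logic adjusted itself to the new data
```

There is no pseudo-code here that the stream must conform to; when “5, 10” arrives, the logic changes rather than the data being rejected.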
What we need to see and understand here is that processing comes after data. In our computers, processing is primary; data is an afterthought, in the form of registers and external storage. With such a system, the kind of accumulative logic I have described above cannot be simulated. I believe that such accumulative logic is THE primary requirement for implementing a true AI. Hence I find that our computer programs cannot come anywhere near mimicking intelligence. We need to start looking at different technologies to implement an AI.