
AI, data, Knowledge and intelligence

As with many terms before it, like cloud computing, multi-tenancy, SaaS, PaaS, IaaS and so on, the definition of AI has started to blur irreparably. We seem to want to use the term AI for anything and everything. It proves the time-worn saying once again: “if the mountain cannot go to Mohammad, Mohammad goes to the mountain!” If we cannot write a truly intelligent piece of code that works like a human, then bring the human down to the level of the computers. Oh sorry, right, I am confusing AI with the metaverse and neural chips! For AI, it should be, “If we cannot write a truly intelligent piece of code that works like a human, call whatever we write AI” and be done with it!!! Why trouble your mind to understand and write a true AI? Oh wait! That sounds more and more like a badly directed science fiction movie! And there are too many of those! The engineer in me cringes!

As far back as I can remember from my college days, and that was a very long time ago, AI has held a fascination for me. I have wanted to write a truly intelligent program for a very long time. But I have never found an accurate definition of AI, nor the tools that would help create it. Now, after working this long in the computer industry, I find we still have neither the definition nor the tools to implement the holy grail of a “true AI”. Instead, we seem to have entered an era of “learning algorithms” and “neural nets” and now “deep neural nets”, and we call them AI. But let’s be clear. These are still in the realm of “executing logic” thought out by someone, whether it is called a learning algorithm or a plain algorithm! It all translates down to “if-else” statements that execute, whether those have been manually coded by a programmer or manufactured by running a learning algorithm. The if-else clause stays. It is just the difference between an intricate hand-made handicraft and a mass-manufactured article. That does not make it an intelligent program.
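
To make that “hand-made vs mass-manufactured if-else” point concrete, here is a minimal, hypothetical sketch (the fever threshold and the training samples are invented purely for illustration): a rule written by hand and a rule “learned” from labelled samples both end up as the same if-else at run time.

```java
// A hand-coded rule and a "learned" rule: both reduce to the same if-else.
public class IfElseEquivalence {

    // Hand-written by a programmer.
    static String handCodedClassifier(double temperature) {
        return temperature > 38.0 ? "fever" : "normal";
    }

    // "Learned": a training step searches for the threshold that best fits
    // the labelled samples, but the resulting model is still just a number
    // plugged into the same if-else.
    static double learnThreshold(double[] temps, boolean[] isFever) {
        double best = temps[0];
        int bestScore = -1;
        for (double candidate : temps) {
            int score = 0;
            for (int i = 0; i < temps.length; i++) {
                if ((temps[i] > candidate) == isFever[i]) score++;
            }
            if (score > bestScore) {
                bestScore = score;
                best = candidate;
            }
        }
        return best;
    }

    public static void main(String[] args) {
        double[] temps = {36.5, 37.0, 38.5, 39.2, 36.8};
        boolean[] fever = {false, false, true, true, false};
        double threshold = learnThreshold(temps, fever);  // "training"
        // The learned classifier: exactly the if-else we would write by hand.
        System.out.println(39.0 > threshold ? "fever" : "normal");  // fever
        System.out.println(handCodedClassifier(39.0));              // fever
    }
}
```

The “learning” only manufactures the constant; the control flow that executes is identical to the hand-written one.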

I think it is time to call a spade a spade. What are written as algorithms for self-driving cars are purely learning algorithms that learn some pattern present in some inadequate training data. They do not learn a skill. A skill is when the training is very little and consists of just the generic rules. With a skill, the smallest learning can be easily extended and expanded in all directions, applied and adapted to all scenarios based on the need. A skill becomes sharper over time; after all, “practice does make perfect”. A truly AI self-driving algorithm would not have to be trained and re-trained every time a new rule needs to be added or new roads need to be driven on. Another example: when we write algorithms to detect heart conditions or the worsening health of a patient, they again are just algorithms that take some pattern present in some training data, based on some expert’s opinion of what the pattern should be, and apply it to some other set of data to draw a parallel. That again is not true AI. It is pre-coding an expert’s opinion into some learning algorithm to avoid manually coding a complex algorithm, nothing more. It is similar to all the operational algorithms that were written to solve supply chain problems a decade back, but with newer terms, and with the mathematical equations learned rather than hand-coded.

I find it is time we stopped diluting terms, took a step back and asked ourselves a few core questions. “Are we defining AI correctly?” “Can it be called AI if the domain of operation is very narrow, such as healthcare, astronomy, etc.? Can the learnings from one domain not be cross-applied to similar situations across domains?” “What are the various terms that go into defining an AI that can be adapted to any domain, any data and any parameter without having to rethink a learning algorithm?” “Should an AI be such that it is pre-trained, or can it be trained on the job (as we are asked to do in every job!)?”

In my view, a “true AI” is that program which, when put into “any situation”, can collect the necessary input data; pull out from its own learnings the one most suited to be adapted; adapt that learning to the current situation and react; add the outcome of the current situation back to itself as a confirmed learning of what the reaction to the action is; judge the current outcome against some common expectation it had set for itself, detect the difference, make appropriate adjustments and save them for the next similar situation, as a learning. This needs to continue in a cycle. Only such, or a similar abstract, generic implementation can create a truly intelligent system that will learn and grow. Anything else is purely a coded algorithm for a specific problem and domain.
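
As a minimal sketch of that cycle (all names and the braking-distance example are hypothetical, invented only to make the loop runnable; this is not any existing API):

```java
import java.util.HashMap;
import java.util.Map;

// A sketch of the cycle described above: observe -> recall the closest past
// learning -> adapt it -> act -> judge the outcome against what actually
// happened -> store the adjustment as a new learning.
public class LearningCycle {

    // Accumulated learnings: situation key -> learned response value.
    private final Map<Integer, Double> learnings = new HashMap<>();

    double oneCycle(int speed, double actualBrakingDistance) {
        // 1. Pull the most suited past learning and adapt it to this situation.
        double predicted = recallClosest(speed);
        // 2. "Act" on it; here acting is simply making the prediction.
        // 3. Judge the outcome against the actual result.
        double error = actualBrakingDistance - predicted;
        // 4. Save the correction as a confirmed learning for next time.
        learnings.put(speed, predicted + 0.5 * error);
        return predicted;
    }

    private double recallClosest(int speed) {
        if (learnings.isEmpty()) return 0.0;  // no experience yet
        int nearest = learnings.keySet().stream()
                .min((a, b) -> Integer.compare(Math.abs(a - speed), Math.abs(b - speed)))
                .get();
        // Naive adaptation: scale the remembered value to the new situation.
        return learnings.get(nearest) * speed / (double) nearest;
    }

    public static void main(String[] args) {
        LearningCycle agent = new LearningCycle();
        for (int i = 0; i < 3; i++) {
            System.out.println(agent.oneCycle(50, 25.0));  // repeated situation
        }
        System.out.println(agent.oneCycle(100, 50.0));     // novel but related situation
    }
}
```

Each pass through oneCycle is one turn of the cycle: recall, adapt, act, compare with the actual outcome and store the correction, so the next similar situation starts from a better learning.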

I have still called it a program, but by now I have my doubts that this can be implemented as a “computer program” at all. I think we need to start recognising that the “computer” is just that, a “computational device”, not suited for “a knowledge or intelligent system”. I have still called it data, but again I no longer believe the “data representation” we have suffices to implement an “intelligent system”. It works for a “mathematical computation” where precise values are needed to compute or solve a set of equations to get an output, and it applies where precision is needed in the computed output. But that is all a discrete set of data is capable of: the precise, and precision. It is single-dimensional, where even the first dimension is not fully contained within a single representation of “1” or “0”. We need to recognise that discreteness does not do any good for AI. We need to represent continuous, multi-dimensional data, and this has to be contained within a single representation such that it can be easily melded, moulded and worked with to give other valid values as it is melded and moulded. This is definitely not possible with a discrete, digital data representation.

The only abstract terms I find we need to work with when we write an AI are “knowledge” and “intelligence”. Anything else, such as classification, linear regression or any such mathematical equation, only dilutes the AI. A mathematical equation can be one type of knowledge, but that is where it stops. It cannot be the whole defining factor of an AI.

I believe all of us know the difference between “knowledge” and “intelligence”. We call some people knowledgeable, whereas others we call intelligent. We typically tend to use the term “knowledgeable” when a person has vast experience and knows a lot of stuff because they have done it, while we use the term “intelligent” when a person can take the knowledge they have, relate it appropriately according to the given context and come up with an accurate conclusion. This distinction makes a huge difference to writing a true AI program. So in an implemented AI, “knowledge” is the information present in “the data”, while “intelligence” is taking the “information” in the data, relating it according to the situation and coming to a conclusion. So, when we look at the convolutional neural networks used to analyse images, the “features” identified in the image are “knowledge”. It should be noted that the “classification” of these features is also just “knowledge”. Only when these features and classifications are related and adapted to the situation for which the knowledge is applied does it become intelligence. We tend to call classification intelligence. But that is just an intermediate step to the actual application of the knowledge found. Intelligence is when the raw knowledge, the processed knowledge and many such pieces of knowledge are taken and adapted to a given situation. Which automatically implies that “intelligence” has to be real-time while “knowledge” has to be accumulative.
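
As a hypothetical sketch of that split (the features, labels and decisions below are invented for illustration): the stored feature-to-label map is “knowledge”, and relating it to the live, icy-road situation at the moment of the decision is the “intelligence”.

```java
import java.util.List;
import java.util.Map;

public class KnowledgeVsIntelligence {

    // Knowledge: accumulated facts, e.g. features and labels extracted earlier.
    static final Map<String, List<String>> FEATURES_BY_LABEL = Map.of(
            "stop sign", List.of("red", "octagon", "white text"),
            "yield sign", List.of("red", "triangle", "white center"));

    // Intelligence (in this article's sense): relate the stored knowledge to
    // the current situation in real time and decide how to act.
    static String decide(List<String> observedFeatures, boolean roadIsIcy) {
        for (var entry : FEATURES_BY_LABEL.entrySet()) {
            if (observedFeatures.containsAll(entry.getValue())) {
                // The classification alone ("stop sign") is still knowledge;
                // the decision adapts it to the situation (icy -> brake earlier).
                if (entry.getKey().equals("stop sign")) {
                    return roadIsIcy ? "brake early and stop" : "stop";
                }
                return "slow down";
            }
        }
        return "continue";
    }

    public static void main(String[] args) {
        System.out.println(decide(List.of("red", "octagon", "white text"), true));
    }
}
```

The map is accumulated ahead of time; the decide step runs at the moment of use, which mirrors the claim that knowledge is accumulative while intelligence is real-time.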

Published on Java Code Geeks with permission by Raji Sankar, partner at our JCG program. See the original article here: AI, data, Knowledge and intelligence

Opinions expressed by Java Code Geeks contributors are their own.
