An overview of knowledge-based approaches and the industrialization of AI

Artificial intelligence began as a discipline whose primary aim was to develop solutions to very complex problems. Early research focused on chaining together elementary reasoning steps as the first steps towards solving a given problem. Because these general-purpose methods relied on only a limited repertoire of reasoning steps, they proved weak: as problems grew more complex, each additional step demanded more and more information to be processed. The lesson was that larger, knowledge-rich reasoning steps were needed to solve complex problems with a greater degree of accuracy. With the full-fledged development of knowledge-based systems, artificial intelligence and machine learning began to shape practical technology. One of the most important approaches from this period, one that never lost its relevance for knowledge-based systems, was the DENDRAL approach.

The DENDRAL approach and McCarthy's Advice Taker approach

The DENDRAL approach was first developed at Stanford, where a team of scientists worked out complex molecular structures from data collected by a mass spectrometer. The program generated every possible structure consistent with a given molecular formula. The drawback of this exhaustive approach was its lack of precision: it predicted a relatively large number of candidate structures.
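
To make the generate-and-test flavour of this concrete, here is a minimal sketch that enumerates candidate "structures" and keeps only those consistent with a set of observed peaks. The fragments, masses, and peaks are invented purely for illustration, and the enumeration is drastically simplified compared with what DENDRAL actually did.

```python
from itertools import permutations

# Generate-and-test sketch in the spirit of early DENDRAL-style search.
# Everything here (fragments, masses, peaks) is invented for illustration;
# the real system enumerated chemically valid molecular graphs.

FRAGMENT_MASS = {"CH3": 15, "CH2": 14, "CO": 28, "NH2": 16}  # hypothetical fragments
OBSERVED_PEAKS = {15, 44}                                     # hypothetical spectrometer peaks

def candidate_structures(fragments):
    """Generate step: every ordering of the fragments is a candidate 'structure'."""
    return permutations(fragments)

def consistent_with_spectrum(structure):
    """Test step: keep a candidate only if every observed peak matches the mass
    of a single fragment or of two adjacent fragments in the candidate."""
    masses = set()
    for i, frag in enumerate(structure):
        masses.add(FRAGMENT_MASS[frag])
        if i + 1 < len(structure):
            masses.add(FRAGMENT_MASS[frag] + FRAGMENT_MASS[structure[i + 1]])
    return OBSERVED_PEAKS <= masses

candidates = list(candidate_structures(FRAGMENT_MASS))
survivors = [s for s in candidates if consistent_with_spectrum(s)]
print(f"{len(candidates)} candidates generated, {len(survivors)} consistent with the spectrum")
```

Even in this toy setting, most of the generated candidates survive the test, which is exactly the over-generation problem described above.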

The DENDRAL approach was important because it led to the development of the first knowledge-intensive systems. Such systems worked through a large number of predefined rules in a methodical manner. Over time, the DENDRAL approach gradually gave way to McCarthy's Advice Taker approach, whose main premise was to separate the knowledge component, expressed as a set of rules, from the reasoning component that applies them. Together, these two approaches led to intelligent systems that could perform tasks at the level of human expertise. However, those tasks had to be pre-programmed, and a system designed for a particular set of tasks was limited to those tasks and could not be extended to others.
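
The separation of a rule base (knowledge) from a generic inference procedure (reasoning) can be sketched in a few lines. The rules and facts below are hypothetical and are not drawn from either system; the point is only that the reasoning component never changes when new rules are added.

```python
# Minimal forward-chaining sketch of the knowledge/reasoning separation behind
# rule-based expert systems. The rules and facts are invented for illustration;
# they are not taken from DENDRAL or from McCarthy's Advice Taker.

RULES = [
    # (set of conditions that must all hold, fact to conclude)
    ({"peak_at_44", "peak_at_15"}, "acetyl_group_present"),
    ({"acetyl_group_present", "nitrogen_present"}, "candidate_is_amide"),
]

def forward_chain(facts, rules):
    """Generic reasoning component: repeatedly fire any rule whose conditions
    are satisfied until no new fact can be derived."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

print(forward_chain({"peak_at_44", "peak_at_15", "nitrogen_present"}, RULES))
```

In a real expert system the rule base would hold hundreds or thousands of such rules supplied by domain experts, while the inference engine stays the same.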

AI begins to industrialize

AI began to industrialize as early as 1980. Over the following decade, we saw expert systems that helped companies configure customer orders. By the end of the decade, it was estimated that such systems were saving companies more than 7 million dollars per year. With the arrival of fifth-generation computing initiatives, the idea of intelligent computing became more prominent than ever before. This led to programs capable of processing thousands of data sets in a span of a few seconds, which gave a new lease of life to natural language processing. From 1980 to the end of the twentieth century, artificial intelligence progressed swiftly alongside developments in microelectronics, microprocessors, and other computer technology. At the cusp of the new millennium, we saw the rise of artificial intelligence that was largely self-sufficient. Although still in its nascent stage, narrow AI became extremely popular through its application to everyday tasks, and AI continued to develop gradually across the various spheres of human life in the age of automation.

Next came technologies such as big data analytics, cloud computing, and the Internet of Things. These acted as a catalyst for artificial intelligence and the systems it powers. This was when artificial intelligence began to show its real potential, paving the way for the fourth industrial revolution, whose arrival is often regarded as the coming of age of artificial intelligence systems. The growth trajectory of AI has, however, also drawn severe criticism. It has been proposed, for instance, that artificial intelligence will give rise to a generation of intelligent machines that overpower human thinking; critics argue that such machines could slip beyond human control and wipe out the human race. This can reasonably be seen as an exaggeration of the exponential growth trajectory that artificial intelligence has traced over the last few decades.

The future roadmap

There is no doubt that artificial intelligence will continue to develop at a rapid pace in the coming years. The need of the hour is to channel that progress towards technologies that benefit humans in the long run.