All indications are that artificial intelligence (AI) will transform our future in significant ways. Today, AI already automates many activities that were once reserved for humans. But how exactly did we get here?
AI was first implemented as explicit rules that encoded tasks and decisions into software. Examples include a stock-trading rule that initiates a sell order if a stock falls below a certain percentage, or an approval workflow that examines expenditures and approves them against pre-set allowable amount ranges. These types of AI are called “expert systems,” and they remain widely used wherever the rules are stable and there is little, if any, need to handle variation.
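A stop-loss rule like the one described above can be sketched in a few lines. This is a minimal illustration of the "expert system" idea, not a real trading API; the function name, the 10% threshold, and the prices are all illustrative assumptions.

```python
# A minimal sketch of an "expert system" style trading rule: a fixed,
# hand-coded threshold with no learning involved. All names and values
# here are illustrative, not from any real trading platform.

STOP_LOSS_PCT = 0.10  # sell if the stock falls 10% below the purchase price

def should_sell(purchase_price: float, current_price: float) -> bool:
    """Hard-coded rule: compute the percentage drop and compare it
    to the pre-set threshold."""
    drop = (purchase_price - current_price) / purchase_price
    return drop >= STOP_LOSS_PCT

# Bought at $100, now trading at $85: a 15% drop, so the rule fires.
print(should_sell(100.0, 85.0))  # True
print(should_sell(100.0, 95.0))  # False (only a 5% drop)
```

The defining trait of such systems is that every behavior is spelled out in advance: the rule never adapts, which is exactly why expert systems suit domains where the rules do not change.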
Sir Ronald Fisher was probably the earliest known person to introduce a statistical model that could be considered machine learning when, in 1936, he published research on classifying species of iris flowers. His model applied statistical analysis to measurements in a way that could predict the species of an iris from a learned data set. Even so, for a few decades AI research stayed mostly in the expert-system realm.
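The spirit of Fisher's approach, predicting a class from learned measurements, can be shown with a toy nearest-centroid classifier. The petal measurements below are illustrative values typical of the species, not rows from Fisher's actual data set, and nearest-centroid is a simplification of his statistical method.

```python
# A toy statistical classifier in the spirit of Fisher's iris work:
# learn the average (centroid) of each species' measurements, then
# predict the species whose centroid is closest to a new sample.

def centroid(points):
    """Mean of a list of equal-length measurement tuples."""
    n = len(points)
    return tuple(sum(p[i] for p in points) / n for i in range(len(points[0])))

def classify(sample, centroids):
    """Predict the label whose centroid is closest (squared Euclidean)."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda label: dist2(sample, centroids[label]))

# Illustrative training data: (petal length cm, petal width cm)
training = {
    "setosa":     [(1.4, 0.2), (1.3, 0.2), (1.5, 0.3)],
    "versicolor": [(4.5, 1.5), (4.1, 1.3), (4.7, 1.4)],
}
centroids = {label: centroid(pts) for label, pts in training.items()}

print(classify((1.4, 0.25), centroids))  # setosa
print(classify((4.4, 1.4), centroids))   # versicolor
```

Unlike an expert system, nothing here is hand-coded per species: the decision boundary comes entirely from the data.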
Many tasks, however, require more sophisticated analysis and adaptability. Enter the age of “machine learning,” in which AI became more adaptable. Machine learning is, at its core, the family of algorithms and models that learn from data and make predictions on it. Progress in this area laid the foundation for where we are now, with Google’s Go-winning AI (AlphaGo) and IBM’s Watson-powered services.
The Brain-Inspired Perceptron
Frank Rosenblatt was the first to design a learning algorithm inspired by the brain, called the Perceptron. The Perceptron was the first system to support what is called supervised learning: it gradually “learned” how to accomplish a task from labeled examples provided by the person operating the system.
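The Perceptron's supervised learning loop is simple enough to sketch in full. The example below learns the logical AND function from labeled examples; the learning rate and epoch count are illustrative choices, not Rosenblatt's original (hardware) implementation.

```python
# A minimal sketch of Rosenblatt's perceptron learning rule, trained
# on labeled examples of logical AND (supervised learning).

def predict(weights, bias, x):
    """Fire (output 1) if the weighted sum of inputs crosses threshold 0."""
    return 1 if sum(w * xi for w, xi in zip(weights, x)) + bias > 0 else 0

def train(samples, epochs=20, lr=0.1):
    weights, bias = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, target in samples:
            # Supervised feedback: compare the prediction to the known answer
            error = target - predict(weights, bias, x)
            # Perceptron update rule: nudge weights toward the correct output
            weights = [w + lr * error * xi for w, xi in zip(weights, x)]
            bias += lr * error
    return weights, bias

# Labeled examples for logical AND: output is 1 only when both inputs are 1
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train(data)
print([predict(w, b, x) for x, _ in data])  # [0, 0, 0, 1]
```

Each wrong answer nudges the weights a little; for linearly separable problems like AND, the rule is guaranteed to converge.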
Research continued, and in 1974 Paul Werbos took an important concept called backpropagation and applied it to neural networks. Backpropagation extended the Perceptron’s learning function by feeding information about output errors back to the individual nodes of a neural network that helped create the output, so each connection could adjust its contribution. In the 1980s, multi-layer neural networks trained this way were popularized.
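Backpropagation's core idea, routing the output error backward so each weight learns its share of the blame, can be sketched on a tiny two-layer network. The architecture (2 inputs, 2 hidden sigmoid units, 1 output), the initial weights, and the XOR task below are all illustrative assumptions, not Werbos's original formulation.

```python
import math

# A tiny sketch of backpropagation on a 2-2-1 sigmoid network learning
# XOR, a task a single perceptron cannot solve. All weights and
# hyperparameters are illustrative.

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Hand-picked starting weights: hidden layer W1 (2x2), output layer w2 (2)
W1 = [[0.5, -0.4], [0.3, 0.8]]
b1 = [0.0, 0.0]
w2 = [0.6, -0.2]
b2 = 0.0

def forward(x):
    h = [sigmoid(sum(W1[j][i] * x[i] for i in range(2)) + b1[j]) for j in range(2)]
    y = sigmoid(sum(w2[j] * h[j] for j in range(2)) + b2)
    return h, y

def loss(data):
    """Mean squared error over the data set."""
    return sum((forward(x)[1] - t) ** 2 for x, t in data) / len(data)

data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]  # XOR

lr = 0.5
initial = loss(data)
for _ in range(2000):
    for x, t in data:
        h, y = forward(x)
        # Backward pass: error signal at the output...
        dy = 2 * (y - t) * y * (1 - y)
        # ...propagated back to each hidden node that helped produce it
        dh = [dy * w2[j] * h[j] * (1 - h[j]) for j in range(2)]
        for j in range(2):
            w2[j] -= lr * dy * h[j]
            for i in range(2):
                W1[j][i] -= lr * dh[j] * x[i]
            b1[j] -= lr * dh[j]
        b2 -= lr * dy

print(loss(data) < initial)  # the error shrinks as feedback flows backward
```

The key step is the `dh` line: the output error is shared out among the hidden nodes in proportion to how much each contributed, which is exactly the feedback mechanism the single-layer Perceptron lacked.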
Interestingly enough, AI research then fell out of favor during a period now referred to as the “AI Winter.” As with many technologies, the hype came first, followed by the realization that the technology was not yet mature. As a result, funding for AI research dwindled.
One of the principal reasons for this extended AI Winter was the lack of available (and cheap) computing power. With the emergence of cloud computing in the 2000s, everything changed. Parallel processing became more robust, faster, and less expensive, and big data was everywhere and relatively cheap to store. Suddenly, AI applications could be developed in every field and across industries. What AI needs most is processing power and large volumes of ground-truth data.
To find out more about where AI is today in document processing and where it’s headed, you might find this article and eBook interesting: