History of Machine Learning Gathered Through Research and Unsupervised Learning

Limitless Learning

Shikha Saxena · Sep 17 · 4 min read

Photo by Markus Winkler on Unsplash

If you are in “forever learner” mode, this topic is for you. Are you intrigued by the latest technological research and advances, and always curious? Are you compelled to do your own research and stay in “learn more” mode? The natural outcome: you accumulate a lot along the way, and it is always good to share what you learn. This history of machine learning, gathered through research and unsupervised learning, is yours to ponder.

Machine Learning: History of Evolution

Machine Learning (ML) is a branch of Artificial Intelligence (AI) based on a model of brain-cell interaction. The model was created by Donald Hebb in 1949 and published in his book The Organization of Behavior.

The book is about neurons: how they communicate and interact when they fire, or activate. According to Hebb, when one brain cell (neuron) repeatedly assists another in firing, the first cell develops strong synaptic knobs (synapses), and the synapses in contact with the soma of the other neuron strengthen the connection.

Hebb’s concept translates easily to artificial neural networks and artificial neurons: the relationship between two artificial neurons, or nodes, is strengthened if they activate at the same time and weakened if they activate at different times.

The word “weight” describes this relationship between nodes. Nodes that activate with the same sign develop strong positive weights, while nodes of opposite signs develop strong negative weights.
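The same-sign/opposite-sign rule above can be sketched numerically. This is a minimal, illustrative toy (the learning rate and the +1/-1 activations are invented for the example), not Hebb's original formulation:

```python
# Toy Hebbian rule: the weight change is proportional to the product of
# the two nodes' activations, so same-sign activity strengthens the
# connection and opposite-sign activity weakens it.
def hebbian_update(weight, pre, post, lr=0.1):
    """Return the weight after one Hebbian step: dw = lr * pre * post."""
    return weight + lr * pre * post

w = 0.0
# Same-sign activations strengthen the connection...
for _ in range(5):
    w = hebbian_update(w, pre=1.0, post=1.0)   # w grows toward ~0.5
# ...opposite signs weaken it.
for _ in range(3):
    w = hebbian_update(w, pre=1.0, post=-1.0)  # w shrinks toward ~0.2
print(w)
```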

Machine learning uses algorithms and neural-network mathematical models to enable computer systems to improve their performance with little human intervention. ML algorithms build mathematical models from sample data, also called “training data”.

Training data fed to the system gradually builds it up and helps it make decisions without being explicitly programmed to make them, ultimately contributing to the system’s artificial intelligence.

Arthur Samuel of IBM, who developed a computer program to play checkers in the 1950s, coined the phrase “Machine Learning” in 1952.

Samuel designed a scoring system that used the players’ positions and the pieces on the board, among other mechanisms, to score each side’s chances of winning. He then developed a program that got better by recording and remembering each player’s positions and board moves, combined with a reward function. He called this rote learning.

In 1957, Frank Rosenblatt combined the concepts of Hebb and Samuel to create the Perceptron, custom-made for image recognition.
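The perceptron's learning rule can be sketched in a few lines. This toy trains on the logical AND function rather than images (the data, zero initialization, and learning rate are illustrative stand-ins, not Rosenblatt's original setup):

```python
# Minimal perceptron sketch: a step activation over a weighted sum,
# with weights nudged toward each training target whenever the
# prediction is wrong.
def step(z):
    return 1 if z >= 0 else 0

def train_perceptron(data, epochs=10, lr=1.0):
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, target in data:
            pred = step(w[0] * x[0] + w[1] * x[1] + b)
            err = target - pred                      # 0 when correct
            w = [w[i] + lr * err * x[i] for i in range(2)]
            b += lr * err
    return w, b

# Logical AND: output 1 only when both inputs are 1.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(data)
preds = [step(w[0] * x[0] + w[1] * x[1] + b) for x, _ in data]
print(preds)  # [0, 0, 0, 1]
```

Because AND is linearly separable, the perceptron convergence theorem guarantees this rule finds a correct boundary.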

Neural networks and machine learning then resurfaced in the 1990s, after the “multilayer perceptron” was conceptualized in the 1960s and “nearest neighbor” algorithms in 1967.

Backpropagation, developed in the 1970s, helps to train Deep Neural Networks (DNNs), whose multiple hidden layers of neurons or nodes adapt to new situations. Errors observed at the output layer of a machine learning system are backpropagated, or sent backwards, through the network’s layers in the Deep Learning (DL) process, much as our brain makes a mistake and corrects it the next time.
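The "error sent backwards" idea can be shown numerically on a tiny 2-input, 2-hidden, 1-output sigmoid network. The weights, input, and learning rate below are made up for the sketch; the point is only that one backpropagation step reduces the output error:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Toy weights (no biases, for brevity).
w_hidden = [[0.5, -0.4], [0.3, 0.8]]   # w_hidden[j][i]: input i -> hidden j
w_out = [0.7, -0.2]                    # hidden j -> output
x, target, lr = [1.0, 0.0], 1.0, 0.1

def forward():
    h = [sigmoid(sum(w_hidden[j][i] * x[i] for i in range(2))) for j in range(2)]
    y = sigmoid(sum(w_out[j] * h[j] for j in range(2)))
    return h, y

h, y = forward()
loss_before = 0.5 * (y - target) ** 2

# Backward pass: compute the output-layer error signal, then propagate
# it back to the hidden layer through the (old) output weights.
delta_out = (y - target) * y * (1 - y)
delta_hidden = [delta_out * w_out[j] * h[j] * (1 - h[j]) for j in range(2)]

# Gradient-descent updates on every weight.
for j in range(2):
    w_out[j] -= lr * delta_out * h[j]
    for i in range(2):
        w_hidden[j][i] -= lr * delta_hidden[j] * x[i]

_, y_new = forward()
loss_after = 0.5 * (y_new - target) ** 2
print(loss_after < loss_before)  # True: the corrected weights shrink the error
```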

Artificial Neural Networks (ANNs) have hidden layers that handle far more complicated tasks than the early perceptron. ANNs are the basic tools of Machine Learning (ML): input and output layers, with multiple hidden layers of neurons or nodes in between, recognize patterns that would be impossible for human programmers to specify by hand.

During the 1970s and 1980s, AI and ML took separate paths; in the 1990s, ML evolved and resurfaced with “boosting algorithms”.

Robert Schapire’s 1990 paper presented the concept of boosting: “a set of weak learners can create a single strong learner”. Boosting reduces the bias of supervised learning; its ML algorithms transform weak learners into strong ones by combining the learners’ average and weighted-average votes, weighted by the confidence of each prediction.

For example, consider your daily judgement of whether a mail is spam or not. Comparing all the learners’ votes on whether it could be spam, iterating, and combining them into a final prediction leads to an accurate result.
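The weighted-vote idea behind the spam example can be sketched as follows. The three "weak learners" and their weights are invented for illustration; a real boosting algorithm such as AdaBoost derives each weight from that learner's training error:

```python
# Combine weak +1/-1 (spam / not-spam) votes into one strong prediction:
# more accurate learners carry larger weights, and the sign of the
# weighted sum decides the final answer.
def weighted_vote(predictions, weights):
    score = sum(w * p for w, p in zip(weights, predictions))
    return 1 if score >= 0 else -1

weak_votes = [+1, -1, +1]          # three learners' votes on one email
learner_weights = [0.8, 0.3, 0.5]  # illustrative per-learner weights

print(weighted_vote(weak_votes, learner_weights))  # 1 -> spam
```

The second learner is outvoted even though it disagrees: its low weight reflects that it is the least trustworthy of the three.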

Speech recognition (1997), facial recognition (2006), Google’s X Lab (2012) and Facebook’s DeepFace recognition (2014) algorithms are more recent milestones. Machine Learning, along with business analytics, may solve a number of business and organizational complexities in the near future!

Learn as if you were to live forever. (Mahatma Gandhi)

Photo by Tengyart on Unsplash

CodeX

Everything connected with Tech & Code. Follow to join our 600K+ monthly readers.


WRITTEN BY

Shikha Saxena

A technical writer, an artist and a blogger by choice. Passionate about reading, writing and editing. http://www.shikhasaxena.com and https://www.dnabox.co/

