Machine Learning Analogous to Human Learning

Varun Mittal
3 min read · Mar 9, 2021

This post draws an analogy between machine learning and the way humans learn and work. It should help aspiring data scientists and people new to the field relate to and remember the core concepts of machine learning.

Here are some of the analogies I have drawn between human and machine learning:

Training data : Just as humans learn and build their experience from different sources of information, a machine learns from training data curated for a specific problem.

Goals and Objectives : A machine learning model is trained for a specific application, and its performance is measured with metrics such as accuracy, precision, and recall; likewise, as human beings we learn and act to achieve a specific goal or objective.
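As a minimal sketch of how those three metrics are measured, here is a toy binary-classification example (the labels and predictions below are made up for illustration):

```python
# Toy binary predictions vs. ground truth (1 = positive class).
y_true = [1, 0, 1, 1, 0, 1, 0, 0]
y_pred = [1, 0, 0, 1, 0, 1, 1, 0]

tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)  # true positives
fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)  # false positives
fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)  # false negatives

accuracy = sum(1 for t, p in zip(y_true, y_pred) if t == p) / len(y_true)
precision = tp / (tp + fp)  # of everything predicted positive, how much was right
recall = tp / (tp + fn)     # of everything actually positive, how much was found
print(accuracy, precision, recall)
```

Each metric answers a different question about the same predictions, which is why we pick the one matching the goal at hand.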

Learning by mistakes/errors : Just as human beings improve at a task by identifying their mistakes and practising to correct them, machine learning models calibrate themselves by tracking their error and reducing it over successive iterations.
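Gradient descent is the classic example of this "reduce the error a little on every iteration" loop. A minimal sketch, with a made-up one-parameter model y = w·x:

```python
# Minimal gradient descent: fit w in y = w * x by repeatedly shrinking the error.
xs = [1.0, 2.0, 3.0]
ys = [2.0, 4.0, 6.0]  # true relationship: y = 2x

w = 0.0    # initial guess (the "beginner" state)
lr = 0.05  # learning rate: how big a correction to make per mistake
for _ in range(200):
    # gradient of the mean squared error with respect to w
    grad = sum(2 * (w * x - y) * x for x, y in zip(xs, ys)) / len(xs)
    w -= lr * grad  # step in the direction that reduces the error

print(round(w, 3))  # converges near 2.0
```

Every pass measures the current mistake (the gradient) and nudges the parameter to make it smaller, exactly like practising after identifying what went wrong.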

Bias : Just as human beings are biased towards one thing or another based on their experiences and beliefs, machine learning models become biased depending on the data they are fed to learn patterns from. In fact, it is not the models that become biased on their own; rather, it is the human-induced bias in the training data fed to the models.

Overfitting : Overfitting, or high variance, is when the model learns even the minute variations in the training data, much like when we take words literally without grasping their meaning.

Under-fitting : Under-fitting, or high bias, is when the model is unable to learn the underlying patterns, either because there is not enough data or because the model is too simple for the complexity of the data. This is like trying to draw conclusions about something without having enough information.
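The two failure modes above can be contrasted in a tiny sketch (with made-up data): an "overfit" model that simply memorizes its training points versus an "underfit" model that ignores the input entirely and always predicts the training mean.

```python
# Noisy samples of y ≈ x, split into training and unseen test data.
train = [(1, 1.1), (2, 1.9), (3, 3.2)]
test = [(1.4, 1.4), (2.6, 2.6)]

# Underfit model: too simple — ignores x and always predicts the mean.
mean_y = sum(y for _, y in train) / len(train)
underfit = lambda x: mean_y

# Overfit model: memorizes the training points; a new input is answered
# by whichever memorized point happens to be closest.
def overfit(x):
    return min(train, key=lambda pair: abs(pair[0] - x))[1]

def mse(model, data):
    return sum((model(x) - y) ** 2 for x, y in data) / len(data)

# The memorizer is perfect on training data but errs on unseen data;
# the too-simple model is mediocre on both.
print(mse(overfit, train), mse(overfit, test))
print(mse(underfit, train), mse(underfit, test))
```

The telltale signature: the overfit model's training error is zero while its test error is not, whereas the underfit model is poor everywhere.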

Regularisation : To avoid overfitting in machine learning models, we use regularisation, which penalises model complexity so that features that do not contribute much to model performance carry less weight. Likewise, in real life we also "regularise" by cutting out, and not bothering about, things that are negative, untrue, or of no help.
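A minimal sketch of that penalty, using single-feature ridge regression, whose closed-form weight is w = Σxy / (Σx² + λ) — the penalty λ shrinks the weight toward zero, discouraging the model from leaning too hard on any one feature (the data below is made up):

```python
# One-feature ridge regression in closed form: w = Σxy / (Σx² + λ).
xs = [1.0, 2.0, 3.0]
ys = [2.1, 3.9, 6.0]

def ridge_weight(lam):
    sxy = sum(x * y for x, y in zip(xs, ys))
    sxx = sum(x * x for x in xs)
    return sxy / (sxx + lam)  # larger λ -> smaller (more "restrained") weight

print(ridge_weight(0.0))   # ordinary least squares: no penalty
print(ridge_weight(10.0))  # heavier penalty shrinks the weight
```

Turning λ up trades a slightly worse fit on the training data for a model less likely to chase noise.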

Prediction : We use past experience and knowledge gathered from various sources to make decisions and act; likewise, a machine learning model predicts a result based on what it learned from the input data during training. And just as humans are sometimes unsure and do not have a definite answer, a model often gives a probability or confidence for its outcome rather than a definite result.
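That probabilistic output can be sketched with a logistic-regression-style model (the weight and bias below are arbitrary, chosen only for illustration):

```python
import math

# A logistic-regression-style model outputs a probability, not a verdict.
def predict_proba(x, w=1.5, b=-3.0):
    # the sigmoid squashes the raw score into a confidence between 0 and 1
    return 1 / (1 + math.exp(-(w * x + b)))

print(predict_proba(1.0))  # low score -> low confidence in the positive class
print(predict_proba(4.0))  # high score -> high confidence
```

The caller decides how to act on that confidence, just as we weigh how sure we are before committing to a decision.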

Distributed computing : Huge machine learning tasks need more computation, which can be provided by distributed computing; similarly, to achieve bigger goals and solve large problems, we form a team and distribute the work.
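A toy sketch of the split-the-work idea using Python's standard-library thread pool (real distributed ML uses clusters of machines, but the pattern — partition, compute in parallel, combine — is the same):

```python
from concurrent.futures import ThreadPoolExecutor

# One big job, split into chunks that workers handle in parallel,
# like a team dividing up a large task.
def partial_sum(chunk):
    return sum(x * x for x in chunk)

numbers = list(range(1000))
chunks = [numbers[i:i + 250] for i in range(0, len(numbers), 250)]

with ThreadPoolExecutor(max_workers=4) as pool:
    total = sum(pool.map(partial_sum, chunks))  # combine the partial results

print(total)
```

Each worker solves its own piece, and the combined answer matches what a single worker would have produced alone, only sooner.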

Ensemble learning : Ensemble learning in ML trains a number of models on the same problem and combines the vote, or say, of each model to arrive at the final result. Similarly, we seek advice from several people in the same domain (bagging) or make decisions building on our own and others' experiences (boosting) to arrive at a conclusion.
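The voting idea can be sketched with three toy "experts" (the rules below are invented stand-ins for trained models):

```python
from collections import Counter

# Three simple "experts", each with its own rule for classifying a number.
is_big = lambda x: x > 10             # expert 1
looks_even = lambda x: x % 2 == 0     # expert 2 (a noisy heuristic)
is_positive = lambda x: x > 0         # expert 3

def ensemble_predict(x, models):
    votes = [m(x) for m in models]
    return Counter(votes).most_common(1)[0][0]  # majority vote wins

models = [is_big, looks_even, is_positive]
print(ensemble_predict(12, models))  # all three vote True
print(ensemble_predict(-3, models))  # majority votes False
```

No single expert has to be right every time; the ensemble only needs most of them to be right most of the time.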

Finally, the neural network methodology of machine learning is itself drawn from an analogy with how the human brain works.


Varun Mittal

A practical, result-oriented person with more than 7 years of industry experience as a data scientist and big data engineer.