Use this one simple trick to understand machine learning

Machine Learning is a lot like gravity in outer space. That’s the trick! Let me explain further how this analogy works.

Man looking at a starry sky
Photo by Klemen Vrankar on Unsplash

We know that Machine Learning systems learn by example, and that they must be trained with varied, representative data: the only way to train and improve such a system is with more and better training data. By learning from known-good, representative examples, the system learns to associate unseen examples with previously seen ones and can therefore make useful predictions.

We can build the analogy as follows. Each piece of training data is a star, and our machine learning model is the gravitational field of the universe. In space, as in our analogy, every celestial body exerts a gravitational pull on every other body. As we add more training data (more stars) to our universe, they begin to form gravitational clusters – zones of similar bodies with a strong combined gravity.

Thus, training a machine learning model is like dropping stars into space and letting them form gravitational clusters.

A galaxy is a gravitational cluster (Photo by Bryan Goff on Unsplash)

We don’t create machine learning models for the thrill of setting up gravitational clusters – we create them to make predictions on data they have never seen, data they were never trained on.  Let’s add that to the analogy.

Each piece of unseen data is like a tiny asteroid dropped into the system.  A machine learning prediction is simply which gravitational cluster this little asteroid gets pulled to.  Remember: in our analogy, stars pull on asteroids – asteroids don’t pull on stars, because making a prediction doesn’t change the model.  In a well-trained machine learning model, our unseen-data asteroid will be gravitationally pulled to the correct cluster, and our prediction is made.
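If you want to see the analogy in code, it maps loosely onto a nearest-centroid classifier: training records each cluster's "centre of mass", and prediction assigns an unseen point to whichever cluster pulls it hardest. This is a toy sketch of that idea, not the author's actual system; the cluster locations and labels here are invented for illustration.

```python
import numpy as np

# Two clusters of "stars" (labelled training data), invented for this sketch.
rng = np.random.default_rng(0)
stars_a = rng.normal(loc=[0.0, 0.0], scale=0.5, size=(20, 2))
stars_b = rng.normal(loc=[5.0, 5.0], scale=0.5, size=(20, 2))

# "Training" here just records each cluster's centre of mass.
centroid_a = stars_a.mean(axis=0)
centroid_b = stars_b.mean(axis=0)

def predict(asteroid):
    """Assign an unseen point to the closest gravitational cluster.

    Note the asymmetry from the analogy: the asteroid is classified,
    but the centroids (the model) are not changed by the prediction.
    """
    dist_a = np.linalg.norm(asteroid - centroid_a)
    dist_b = np.linalg.norm(asteroid - centroid_b)
    return "A" if dist_a < dist_b else "B"

print(predict(np.array([0.3, -0.2])))  # lands near cluster A
print(predict(np.array([4.8, 5.1])))   # lands near cluster B
```

Real models work in hundreds or thousands of dimensions rather than two, but the mechanics are the same: proximity to previously seen examples drives the prediction.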

Asteroid floating in space
An asteroid, or astronaut, will be pulled by the strongest gravitational body (Photo by NASA on Unsplash)

Building on this analogy we can visualize additional key points relating to machine learning models:

You need enough data – both volume and variety.  Without sufficient training data you can’t form gravitational clusters and thus unseen data won’t be pulled in the correct direction.  (For more, see Why does machine learning require so much training data?)

Bad (or non-representative) data hurts the model.  Every piece of training data impacts the model through its gravitational pull.  If you train the model on data that it will never see at runtime, predictions can still skew towards this non-representative data.  I have seen several-percent accuracy improvements in models when non-representative data was removed from the training set.  As popularized in the movie Gravity, space junk is deadly!

Real outer space isn’t static – and your model shouldn’t be either.  This one is stretching our analogy a bit.  Nevertheless, over time your original training data becomes less representative of the unseen data the model must predict.  Thus your gravitational universe requires old stars to be removed and new stars to be added.  This maintains the health of your machine learning model’s gravitational field.
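One simple way to picture removing old stars and adding new ones is a sliding window over the training set: the newest examples enter, the oldest fall out, and the model is refit on what remains. This is a hypothetical sketch of that scheme – the window size and the `retrain` placeholder are assumptions, not part of the original article.

```python
from collections import deque

WINDOW = 100  # assumed window size; tune to how fast your data drifts

# deque(maxlen=...) drops the oldest examples ("old stars") automatically
# as new ones ("new stars") arrive.
training_window = deque(maxlen=WINDOW)

def retrain(examples):
    # Placeholder for whatever model-fitting routine you actually use.
    return {"n_examples": len(examples)}

def add_examples(new_examples):
    """Add freshly labelled data and refit on the current window."""
    training_window.extend(new_examples)
    return retrain(list(training_window))

model = add_examples(range(150))
print(model["n_examples"])  # only the latest 100 examples remain
```

The point is not this particular mechanism but the habit: periodically refresh the training set so the gravitational field keeps matching the data the model actually sees.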

I hope this analogy helps you understand machine learning better!

Footnote on the analogy:  The actual math behind most machine learning models uses hundreds or thousands of dimensions, while outer space has only three.  Nonetheless, I find this an interesting way to think about machine learning.  Note that this analogy works best for ML classifiers, but it’s still a workable analogy for other model types.  (See more of my thoughts on classifiers at Cognitive classification and what it can do for you).