
Classical ML

(the pattern finders)

Classical ML learns from examples. You show a computer 10,000 emails marked “spam” or “not spam.” The computer looks at every word, every pattern, every length. It builds its own rules. Nobody writes the rules by hand. This is how your email filter learned to spot junk mail. It is how your bank’s fraud system learned to spot stolen cards. It is how Netflix learned what you might watch next.

[Interactive demo: a scatter chart with axes Feature A and Feature B. Drag a new dot onto the chart and the model classifies it based on which side of the learned boundary it lands on.]

The process

Classical ML looks at labelled examples and finds a pattern that separates them. Picture thousands of emails plotted on a chart. Spam emails land on one side. Good emails land on the other. The computer draws the line that splits them best.

Once the line is drawn, any new email gets placed on the chart. If it lands above the line, the computer calls it spam. Below the line, it goes to the inbox.
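The two steps above — learn a line from labelled dots, then classify new dots by which side they land on — can be sketched in a few lines of Python. The data and the perceptron rule here are a toy stand-in for whatever boundary-finder a real filter uses:

```python
# Toy example: learn a line that separates two classes of 2-D points,
# then classify a new point by which side of the line it lands on.
# All data here is made up for illustration.

def train_perceptron(points, labels, epochs=20):
    """Learn weights (w1, w2, bias) for a separating line."""
    w1 = w2 = b = 0.0
    for _ in range(epochs):
        for (x1, x2), y in zip(points, labels):   # y is +1 or -1
            if y * (w1 * x1 + w2 * x2 + b) <= 0:  # misclassified?
                w1 += y * x1                      # nudge the line toward it
                w2 += y * x2
                b += y
    return w1, w2, b

def classify(point, w1, w2, b):
    x1, x2 = point
    return 1 if w1 * x1 + w2 * x2 + b > 0 else -1

# Two made-up clusters, e.g. "spam" (+1) vs "not spam" (-1)
points = [(2, 3), (3, 4), (4, 5), (6, 1), (7, 2), (8, 1)]
labels = [1, 1, 1, -1, -1, -1]

w1, w2, b = train_perceptron(points, labels)
print(classify((3, 5), w1, w2, b))  # a new dot, placed by the learned line
```

Nobody told the code where the line goes. It only saw examples and nudged the line until the examples stopped being misclassified.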

In real life the chart is not two dimensions. It has thousands of dimensions, one for every word or feature the computer tracks. Humans cannot picture this. The computer does not need to.

A modern spam filter checks around 1 million features on each email. A human wrote the code that checks them. No human decided which features matter. The computer figured that out from examples.
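Those features are just numbers computed from the email. A hypothetical sketch, with a handful of made-up features standing in for the huge set a real filter tracks:

```python
# Sketch: turning an email into numeric features -- its "position" on the
# chart. A real filter tracks vastly more; these few are illustrative only.

def featurize(email: str) -> dict:
    words = email.lower().split()
    return {
        "num_words": len(words),
        "count_free": words.count("free"),
        "count_winner": words.count("winner"),
        # words written entirely in capitals, a classic spam tell
        "all_caps_words": sum(1 for w in email.split()
                              if w.isupper() and len(w) > 1),
    }

features = featurize("WINNER you are a winner claim your FREE prize")
print(features)
```

A human wrote `featurize`. No human decided that `count_free` matters more than `num_words` — the learning step works that out from the examples.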

A familiar example

Think of a new waiter on their first week. Nobody gives them a rulebook. They watch how senior waiters handle guests. After a month, they have picked up patterns. Regulars get a specific greeting. A certain look from the chef means the special is running low. The waiter never wrote these rules down. They learned by example. Classical ML does the same thing, with numbers.

Variants include

Decision tree models (XGBoost, Random Forest)

A learned decision tree is like the hand-written kind, but the computer draws it from examples. XGBoost and Random Forest build hundreds or thousands of small trees and vote on the answer. These models power most of the “boring” AI you encounter: credit card fraud, loan decisions, product recommendations, ad targeting. They are fast, cheap, and often more accurate than fancier methods.
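The voting idea can be sketched with three hand-written "stumps" (one-question trees). In XGBoost or Random Forest the trees are learned from data, not written by hand like these, and there are hundreds of them:

```python
# Sketch of ensemble voting: several tiny "stump" rules each guess,
# and the majority wins. The rules and thresholds here are made up.

def stump_amount(tx):   # flags unusually large transactions
    return "fraud" if tx["amount"] > 900 else "ok"

def stump_hour(tx):     # flags middle-of-the-night activity
    return "fraud" if tx["hour"] < 5 else "ok"

def stump_abroad(tx):   # flags transactions far from home
    return "fraud" if tx["abroad"] else "ok"

def forest_vote(tx, stumps):
    votes = [s(tx) for s in stumps]
    return max(set(votes), key=votes.count)  # majority vote

tx = {"amount": 1200, "hour": 3, "abroad": False}
print(forest_vote(tx, [stump_amount, stump_hour, stump_abroad]))
```

Each stump alone is weak. The vote of many stumps, each looking at a different slice of the data, is what makes the ensemble accurate.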

Linear and logistic regression

The simplest learned models. Logistic regression draws a single straight line to split the dots. Linear regression draws a line that best fits a cloud of dots. Banks, insurance companies, and medical studies still use these because they are easy to explain to a regulator.
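At prediction time, logistic regression is one weighted sum squeezed into a probability. The weights below are invented for illustration; a real model learns them from labelled data:

```python
import math

# Sketch: logistic regression squeezes a line's output into a probability.
# The weights (1.5, 0.8) and bias (-2.0) are made up for this example.

def predict_spam_probability(count_free, num_links):
    score = 1.5 * count_free + 0.8 * num_links - 2.0  # the "line"
    return 1 / (1 + math.exp(-score))                 # sigmoid -> 0..1

p = predict_spam_probability(count_free=3, num_links=2)
print(round(p, 3))  # a high probability: this email looks like spam
```

This transparency is the appeal: a regulator can read the weights and see exactly why the model said yes or no.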

Clustering (K-Means)

Clustering groups similar things together without any labels. Show it a million shoppers and it might find, say, seven clusters of shopping behaviour. Amazon and Spotify use clustering to find “people like you.”
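A minimal K-Means sketch on one made-up feature (monthly spend), showing the no-labels part: the code is never told which shoppers belong together, yet two groups fall out.

```python
# Sketch of K-Means on a single feature, no labels needed.
# The spend figures and k=2 are made up for illustration.

def kmeans_1d(values, k=2, iters=10):
    centers = sorted(values)[:k]          # crude initial guesses
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for v in values:                  # assign each value to nearest centre
            i = min(range(k), key=lambda j: abs(v - centers[j]))
            clusters[i].append(v)
        # move each centre to the mean of its cluster (keep empty ones put)
        centers = [sum(c) / len(c) for c in clusters if c] + \
                  [centers[j] for j, c in enumerate(clusters) if not c]
    return sorted(centers)

spend = [20, 25, 30, 22, 400, 380, 410, 395]
print(kmeans_1d(spend))  # two centres: light spenders and heavy spenders
```

Real systems run the same loop over many features at once, but the mechanism — assign to nearest centre, move the centre, repeat — is identical.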

The breaking point

Classical ML only works on the kind of examples you showed it. A fraud model trained on UK bank data will fail on Brazilian bank data. A spam filter trained on 2015 emails misses 2025 spam. The model cannot reason. It can only pattern-match against the past. When the world changes, the model goes stale.

Your takeaway

Classical ML is running quietly behind most of the apps on your phone. You have been using it every day for ten years without knowing its name.
