
Neural Networks

(the layered thinkers)

A neural network is many small pattern finders, stacked. Each one does a tiny job. Each one passes its answer up to the next. The bottom layer sees raw pixels. The top layer sees a cat. The layers in between do all the work. Nobody tells them what to look for. They figure it out.


[Interactive diagram: INPUT → EDGES → TEXTURES → PARTS → OBJECTS → PREDICTION (cat, dog 4% · other 2%)]

Layer 1 finds edges. Layer 4 finds the cat.

The process

A neural network has input at the bottom and output at the top. The input for an image is the brightness of every pixel. The output is a guess, like “cat” or “dog.”

Between the input and the output sit the hidden layers. Each hidden layer has many small units. Each unit looks at the layer below, does one calculation, and sends a number to the layer above.

A single unit does almost nothing on its own. It multiplies the numbers it sees by weights and adds them up. The weights are what the network learned. Early layers learn to spot edges. Middle layers spot textures. Late layers spot whole parts: an ear, a wheel, a face. The final layer picks the answer.
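The "multiply by weights and add up" job of a single unit fits in a few lines. A minimal sketch, with made-up weights for illustration; a real network learns its own:

```python
# A single unit: multiply each input by a weight, add everything up,
# then apply a simple rule (ReLU) that passes positive signals and
# silences negative ones.

def unit(inputs, weights, bias):
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return max(0.0, total)  # ReLU activation

# Three pixel brightnesses in, one number out to the layer above.
signal = unit([0.2, 0.9, 0.5], [1.0, 0.5, 0.3], bias=0.1)
print(signal)  # 0.9
```

On its own this is almost nothing, which is the point: the power comes from stacking millions of these.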

A modern image classifier has around 25 million weights. Each one is a tiny dial. During training, a computer nudged each dial a tiny amount, millions of times, until the network got the answer right.
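The dial-nudging can be shown with a single dial. A toy sketch of the idea (gradient descent), with one weight instead of 25 million:

```python
# Training in miniature: nudge one dial until the output matches the target.
# Toy numbers for illustration; real training repeats this for millions
# of weights at once.

weight = 0.0          # the dial starts at an arbitrary setting
x, target = 2.0, 6.0  # we want weight * x to equal target

for _ in range(100):
    error = weight * x - target   # how wrong is the current guess?
    weight -= 0.1 * error * x     # nudge the dial against the error

# After many tiny nudges, weight settles near 3.0 (since 3.0 * 2.0 == 6.0).
print(weight)
```

Each nudge is tiny and each one is in the direction that shrinks the error. That is the whole trick, repeated at scale.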

A familiar example

Think about how you recognise your best friend’s face in a crowd. Your eye catches a shape. Then a colour. Then a jawline. Then a specific combination that is unmistakably them. You do not think through each step. It happens in milliseconds, layer by layer, inside your brain. A neural network works the same way. The design loosely copies how brains work.

Variants include

Convolutional Neural Networks (CNNs)

CNNs are neural networks tuned for images. They slide small filters across the picture, spotting edges in one place and reusing that skill everywhere else. Every face recognition system, every medical imaging tool, every self-driving car camera runs on a CNN or something descended from one.
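The filter-sliding idea can be sketched in one dimension. A toy example, with a hand-picked edge filter standing in for the ones a CNN would learn:

```python
# Sliding a filter: the same 3-value edge detector is reused at every
# position along the row. Real CNNs slide small 2-D filters across the
# whole image the same way.

def convolve(row, filt):
    n = len(filt)
    return [sum(row[i + j] * filt[j] for j in range(n))
            for i in range(len(row) - n + 1)]

# A row of pixels: dark, dark, dark, then suddenly bright.
pixels = [0.0, 0.0, 0.0, 1.0, 1.0, 1.0]
edge_filter = [-1.0, 0.0, 1.0]  # responds where brightness jumps

print(convolve(pixels, edge_filter))  # [0.0, 1.0, 1.0, 0.0]
```

The output spikes exactly where the dark-to-bright edge sits and stays zero everywhere else. Learning the filter once and reusing it everywhere is what makes CNNs so efficient on images.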

Recurrent Neural Networks (RNNs) and LSTMs

RNNs process one item at a time and remember what came before. LSTMs are an upgraded RNN that remembers longer. These used to run language translation and speech recognition. Transformers have replaced them in most places, but your voice assistant from 2018 was probably an LSTM.
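"One item at a time, remembering what came before" looks like this in miniature. A sketch with made-up weights; a real RNN learns them:

```python
# An RNN step: read the next item, blend it with the running memory.
# Illustrative weights only; real RNNs learn w_in and w_mem.

import math

def rnn_step(item, memory, w_in=0.8, w_mem=0.5):
    return math.tanh(w_in * item + w_mem * memory)

memory = 0.0
for item in [1.0, 0.0, 0.0]:   # a sequence, processed one item at a time
    memory = rnn_step(item, memory)

# memory still carries a faded trace of the first item, two steps later
print(memory)
```

The trace fades a little every step, which is exactly the weakness LSTMs were built to fix: they add gates that decide what to keep and what to forget.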

Graph Neural Networks (GNNs)

GNNs work on data shaped like a network of connections, such as social graphs, molecules, or road maps. Google Maps uses GNNs to estimate your arrival time. Drug discovery companies use them to spot new medicines.
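The core move in a GNN is "message passing": each node gathers its neighbours' values and blends them with its own. A toy three-node sketch, with an invented graph and a fixed 50/50 blend standing in for learned weights:

```python
# One round of message passing on a tiny graph:
# each node averages its neighbours' values, then mixes that with its own.

graph = {"A": ["B", "C"], "B": ["A"], "C": ["A"]}   # who connects to whom
values = {"A": 1.0, "B": 0.0, "C": 0.0}             # a signal starts at A

def message_pass(graph, values):
    new = {}
    for node, neighbours in graph.items():
        incoming = sum(values[n] for n in neighbours) / len(neighbours)
        new[node] = 0.5 * values[node] + 0.5 * incoming
    return new

values = message_pass(graph, values)
print(values)  # A's signal has spread to B and C
```

Repeat the round and information flows further across the graph, which is how a GNN lets a road's traffic, or an atom's bonds, influence its neighbours.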

The breaking point

Nobody fully knows why stacking layers like this works as well as it does. We know it does, from trying it millions of times. The theory is still being written. This makes neural networks powerful and unsettling at once. The best tools in AI remain unexplained.

Your takeaway

Every time your phone unlocks with your face, a neural network decided it was you. You trusted a machine whose inner workings nobody has fully mapped.
