Foundations

The Perceptron

The simplest artificial neuron - understand how weighted inputs and thresholds create decisions

The Building Block

Think of hiring decisions. You evaluate a candidate based on factors like years of experience and skills match. Each factor has different importance (weight). You combine these weighted factors and make a yes/no decision based on a threshold. A perceptron works exactly the same way.

Interactive Perceptron

Adjust the weights and bias to see how the neuron's decision changes. Try different scenarios to build intuition.

What is a perceptron? A perceptron is the simplest neural network unit. It takes inputs, multiplies each by a weight, adds them together with a bias, then applies an activation function. This mirrors how biological neurons sum up signals and "fire" when the total exceeds a threshold.
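
Here is that description as a minimal Python sketch (the function and variable names are just illustrative; the numbers match the hiring example used throughout this page):

def perceptron(inputs, weights, bias):
    # Weighted sum of inputs, plus bias, then a hard 0/1 step activation
    signal = sum(x * w for x, w in zip(inputs, weights))
    return 1 if signal + bias >= 0 else 0

# The hiring example from this page: experience 0.70, skills match 0.80
print(perceptron([0.70, 0.80], [0.60, 0.40], -0.30))  # prints 1 ("hire")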

Why these weights?

Experience is weighted higher (0.6 vs. 0.4) because senior roles require a proven track record. Both factors matter, but experience is harder to fake.

What does the bias (threshold) mean here?

The negative bias (-0.3) means candidates need to demonstrate above-average qualifications. It represents a "prove yourself" hiring bar.

Try adjusting the bias slider below and watch how the decision threshold changes!

Parameters

Should we hire this candidate? Years of experience and skills match together predict the hire recommendation.

Inputs

  • Years Experience score: 0.70 (0 = low, 1 = high)
  • Skills Match score: 0.80 (0 = low, 1 = high)

Weights

  • Years Experience weight: 0.60 (how much "Years Experience" influences the output)
  • Skills Match weight: 0.40 (how much "Skills Match" influences the output)

Bias (Hiring Bar)

  • Bias: -0.30 (sets the threshold: threshold = −bias, so a higher bias means a lower bar to fire)

Computation

signal = (0.70 × 0.60) + (0.80 × 0.40) = 0.74
threshold = −bias = −(-0.30) = 0.30
0.74 > 0.30 → fires!
output = sigmoid(signal + bias) = sigmoid(0.74 − 0.30) = sigmoid(0.44) = 0.6083
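
The same arithmetic in a few lines of Python (variable names are illustrative; the values match the sliders above):

import math

x = [0.70, 0.80]      # inputs: experience, skills match
w = [0.60, 0.40]      # weights
bias = -0.30

signal = sum(xi * wi for xi, wi in zip(x, w))  # 0.74
threshold = -bias                              # 0.30
print(signal > threshold)                      # True -> fires

z = signal + bias                # 0.44
output = 1 / (1 + math.exp(-z))  # sigmoid
print(round(output, 4))          # 0.6083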

Perceptron Diagram

[Perceptron diagram: the inputs 0.70 (Years Experience) and 0.80 (Skills Match) flow through weights w1 = 0.6 and w2 = 0.4 into Σ (sum), producing signal 0.74; with bias = -0.3 the threshold is 0.3, and since the signal exceeds the threshold, the output is 1.]

Key insight: A single perceptron can only learn simple, linear patterns - it draws a straight line to separate "yes" from "no". Real decisions often need more nuance. What happens when we connect many perceptrons together?
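
Before we get there, here is the "straight line" made concrete: the boundary between yes and no is exactly where the weighted sum equals the threshold, i.e. 0.60·x₁ + 0.40·x₂ − 0.30 = 0 with this page's weights. A quick sketch (variable names are illustrative):

# Solve the boundary equation for x2: x2 = (-bias - w1*x1) / w2
w1, w2, bias = 0.60, 0.40, -0.30

for x1 in [0.0, 0.25, 0.50]:
    x2 = (-bias - w1 * x1) / w2
    print(f"boundary point: ({x1:.2f}, {x2:.2f})")  # candidates above this line fire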

From 0/1 to smooth outputs

The original perceptron (1950s) used a step function — the output was exactly 0 or 1. But modern neural networks use smooth activation functions like sigmoid, which output any value between 0 and 1 (like 0.73).

Step function (original)

Signal ≥ threshold → output 1

Signal < threshold → output 0

Binary, abrupt

Sigmoid (modern)

High signal → output near 1 (e.g., 0.95)

Low signal → output near 0 (e.g., 0.12)

Smooth, nuanced
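
A side-by-side sketch of the two activations (assuming the step fires at z ≥ 0):

import math

def step(z):
    return 1 if z >= 0 else 0      # binary, abrupt

def sigmoid(z):
    return 1 / (1 + math.exp(-z))  # smooth, nuanced

for z in [-2.0, -0.5, 0.0, 0.5, 2.0]:
    print(f"z = {z:+.1f}   step: {step(z)}   sigmoid: {sigmoid(z):.2f}")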

Why this matters: In multi-layer networks, neurons pass continuous values between layers (not just 0s and 1s). The final 0/1 decision only happens at the very end, when we threshold the output probability.

The smooth function is also mathematically "differentiable" — essential for training networks to learn, as we'll see later.
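
"Differentiable" means we can ask how much the output changes when z nudges up or down. For sigmoid, that slope has a well-known closed form, σ′(z) = σ(z)(1 − σ(z)); a preview of what training relies on (the helper name below is just illustrative):

import math

def sigmoid(z):
    return 1 / (1 + math.exp(-z))

def sigmoid_slope(z):
    # sigma'(z) = sigma(z) * (1 - sigma(z)): the slope used during training
    s = sigmoid(z)
    return s * (1 - s)

print(round(sigmoid_slope(0.44), 4))  # ~0.2383: how fast the output changes near our example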

Meet the Neuron

A neuron is just a perceptron with a smooth activation function. This is the building block of all neural networks:

[Neuron diagram: inputs x₁ and x₂ flow through weights w₁ and w₂ into Σ (sum + bias), then through the activation σ(z) (sigmoid/ReLU), producing a continuous output between 0 and 1.]
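
In code, the diagram boils down to a few lines (a minimal sketch; the class and method names are illustrative, not from any particular library):

import math

class Neuron:
    # A perceptron with a smooth (sigmoid) activation: the modern neuron
    def __init__(self, weights, bias):
        self.weights = weights
        self.bias = bias

    def forward(self, inputs):
        z = sum(x * w for x, w in zip(inputs, self.weights)) + self.bias
        return 1 / (1 + math.exp(-z))  # continuous output between 0 and 1

hiring_neuron = Neuron(weights=[0.60, 0.40], bias=-0.30)
print(round(hiring_neuron.forward([0.70, 0.80]), 4))  # 0.6083, as computed above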

Perceptron → Neuron → Neural Network

You now understand the neuron — the fundamental unit. In the next module, we'll connect many neurons together in layers to solve problems a single neuron cannot.

What you learned

  • Perceptrons compute weighted sums of inputs plus a bias
  • Weights control how much each input matters to the decision
  • Bias shifts the decision threshold - how "hard" it is to trigger a positive outcome
  • A single perceptron can only make linear (straight-line) decisions