Probability & Statistics · 4 Modules

Introduction to Probability Models

The probability course for people in software engineering and AI/ML.

Every concept is taught through systems you work with: load balancers, CI/CD pipelines, training runs, RAG pipelines, and AI agents. The same running examples follow you from sample spaces all the way to Markov chains.

Modules

01

Introduction to Probability Theory

Build the formal language of probability from the ground up. By the end you'll understand exactly what Bayes' theorem is doing, and why it's the right tool for updating beliefs from evidence.

Sample spaces & events · Axioms of probability · Equally likely outcomes · Conditional probability · Bayes' formula · Independence
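
As a small preview of where this module lands, here's a sketch of Bayes' formula applied to an on-call alert. The outage and alert rates are invented for illustration, not course data; the takeaway is that when outages are rare, even a sensitive alert leaves the posterior P(outage | alert) surprisingly small.

```python
# Invented numbers for illustration: outages are rare, the alert is
# sensitive, and it occasionally false-fires on healthy systems.
p_outage = 0.001               # prior P(outage)
p_alert_given_outage = 0.99    # P(alert | outage)
p_alert_given_healthy = 0.02   # P(alert | no outage)

# Law of total probability: P(alert) summed over both hypotheses.
p_alert = (p_alert_given_outage * p_outage
           + p_alert_given_healthy * (1 - p_outage))

# Bayes' formula: update the prior into the posterior.
posterior = p_alert_given_outage * p_outage / p_alert
print(f"P(outage | alert) = {posterior:.3f}")  # ≈ 0.047
```
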
02

Random Variables

You'll meet the key probability distributions that appear in ML and systems work, and learn to derive their important properties from first principles.

PMFs & expectation · Bernoulli & Binomial · Geometric · Poisson · PDFs & CDFs · Uniform & Exponential · Normal distribution
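
To make one of these concrete: the number of retries until a flaky call succeeds follows a Geometric distribution. Here's a short simulation sketch, assuming an invented per-attempt success rate of p = 0.7; the empirical mean lands near the theoretical 1/p.

```python
import random

p = 0.7  # assumed per-attempt success rate of the flaky call

def attempts_until_success() -> int:
    """Count attempts until the first success: one Geometric(p) draw."""
    n = 1
    while random.random() >= p:  # failure with probability 1 - p
        n += 1
    return n

random.seed(0)
samples = [attempts_until_success() for _ in range(100_000)]
print(sum(samples) / len(samples))  # empirical mean, close to 1/p ≈ 1.43
```
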
03

Conditional Expectation & Limit Theorems

Conditioning is how you reason under partial information. This module covers the law of total expectation, then proves the Central Limit Theorem - the reason normality keeps showing up in ML.

Conditional distributions · Law of total expectation · Computing by conditioning · Random sums · First-passage times · Central Limit Theorem · Law of Large Numbers
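
Here's the Central Limit Theorem as a simulation sketch, under an assumed Exponential latency model: individual latencies are heavily skewed, yet the means of 100-request batches concentrate around the true mean with exactly the standard deviation the theorem predicts.

```python
import random
import statistics

random.seed(0)
rate = 2.0  # assumed rate, so the true mean latency is 1/rate = 0.5

# Each batch mean averages 100 skewed Exponential draws; the CLT says
# these means are approximately Normal(0.5, 0.5 / sqrt(100)).
batch_means = [
    statistics.fmean(random.expovariate(rate) for _ in range(100))
    for _ in range(10_000)
]
print(statistics.fmean(batch_means))  # ≈ 0.5, the true mean
print(statistics.stdev(batch_means))  # ≈ 0.5 / sqrt(100) = 0.05
```
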
04

Introduction to Markov Chains

Arguably the most powerful model in applied probability. Server health cycles, user sessions, policy updates in RLHF, and PageRank can all be modelled as Markov chains. This module gives you the tools to analyse any of them.

Markov property · n-step transition probabilities · Classification of states · Stationary distributions · Detailed balance
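
As a preview, here's a sketch of computing a stationary distribution by power iteration on an invented three-state server-health chain. PageRank is this same computation, run over the web graph instead of three server states.

```python
# Assumed transition probabilities for a toy server-health chain:
# states are (healthy, degraded, down), rows sum to 1.
P = [
    [0.90, 0.08, 0.02],  # from healthy
    [0.50, 0.40, 0.10],  # from degraded
    [0.30, 0.00, 0.70],  # from down
]

dist = [1.0, 0.0, 0.0]  # start fully healthy
for _ in range(200):    # power iteration: dist <- dist @ P
    dist = [sum(dist[i] * P[i][j] for i in range(3)) for j in range(3)]

# Long-run fraction of time in each state under these assumed rates:
print([round(x, 3) for x in dist])  # ≈ [0.804, 0.107, 0.089]
```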

Running Examples Throughout the Course

Software Engineering: Load Balancer · Cache · CI/CD Pipeline · Distributed Database

Machine Learning: Binary Classifier · Training Run · Recommender System · Data Pipeline

LLMs & Agents: Token Sampling · RAG Pipeline · AI Agent Loop · RLHF / Preference Model

What you'll learn

Why a cache hit rate is a probability, and how to reason about it formally
How Bayes' theorem powers spam filters, fraud detectors, and diagnostic tests
Why your recommender system's click-through rate is a Binomial random variable, and how to reason about variance in A/B tests
The geometric distribution behind retry loops and cold-start caches
How PageRank is literally a stationary distribution of a Markov chain
How to compute the probability a CI/CD pipeline fails, and which stage is most likely the culprit
How a bigram language model is a Markov chain over a vocabulary (see the sketch after this list)
The law of total expectation — the key tool behind expected query latency and retry cost
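
To make the bigram bullet concrete, here's a minimal sketch: the corpus below is a toy made-up phrase, the chain's states are words, and generating the next word is exactly a Markov transition.

```python
import random
from collections import defaultdict

# A bigram "language model" is a Markov chain whose states are words and
# whose transition probabilities come from corpus counts. Toy corpus:
corpus = "the cache misses the cache hits the database logs the cache".split()

successors = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    successors[prev].append(nxt)

random.seed(1)
word, generated = "the", ["the"]
for _ in range(6):
    options = successors.get(word)
    if not options:          # a word with no observed successor: stop
        break
    # Uniform choice from the successor list reproduces the empirical
    # transition probabilities (duplicates weight common transitions).
    word = random.choice(options)
    generated.append(word)
print(" ".join(generated))
```
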
June Cohort · Two Months · Live Course

Every Monday, 7:00 – 9:00 AM IST  ·  1 June – 20 July 2026

Session 1 · Monday, 1 June · 7:00 – 9:00 AM IST
Session 2 · Monday, 8 June · 7:00 – 9:00 AM IST
Session 3 · Monday, 15 June · 7:00 – 9:00 AM IST
Session 4 · Monday, 22 June · 7:00 – 9:00 AM IST
Session 5 · Monday, 29 June · 7:00 – 9:00 AM IST
Session 6 · Monday, 6 July · 7:00 – 9:00 AM IST
Session 7 · Monday, 13 July · 7:00 – 9:00 AM IST
Session 8 · Monday, 20 July · 7:00 – 9:00 AM IST