Introduction to Probability Models
The probability course for people in software engineering and AI/ML.
Every concept is taught through systems you work with: load balancers, CI/CD pipelines, training runs, RAG pipelines, and AI agents. The same running examples follow you from sample spaces all the way to Markov chains.
Modules
Introduction to Probability Theory
Build the formal language of probability from the ground up. By the end you'll understand exactly what Bayes' theorem is doing, and why it's the right tool for updating beliefs from evidence.
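A taste of the belief-updating idea, as a minimal sketch: suppose a health check fails and you want P(service unhealthy | failed check). The probabilities below are illustrative assumptions, not course figures.

```python
prior_unhealthy = 0.01          # assumed P(unhealthy) before any evidence
p_fail_given_unhealthy = 0.95   # assumed P(check fails | unhealthy)
p_fail_given_healthy = 0.05     # assumed P(check fails | healthy): false alarms

# Law of total probability: P(check fails)
p_fail = (p_fail_given_unhealthy * prior_unhealthy
          + p_fail_given_healthy * (1 - prior_unhealthy))

# Bayes' theorem: P(unhealthy | check fails)
posterior = p_fail_given_unhealthy * prior_unhealthy / p_fail
print(round(posterior, 3))  # a failed check raises 1% to about 16%
```

Even a 95%-accurate check leaves the posterior far below certainty, because the prior is so low; that interplay between prior and evidence is what the module unpacks.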
Random Variables
You'll meet the important probability distributions that appear in ML and systems work, and learn to derive their key properties from first principles.

Conditional Expectation & Limit Theorems
Conditioning is how you reason under partial information. This module covers the law of total expectation, then proves the Central Limit Theorem: the reason normality keeps showing up in ML.
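The CLT's punchline can be seen in a few lines: averages of a skewed quantity (here, illustrative exponential "request latencies") concentrate around the true mean with spread shrinking like 1/sqrt(n). This is a sketch under assumed parameters, not course material.

```python
import random
import statistics

random.seed(0)

# 2000 sample means, each over n = 100 exponential(1) "latencies".
# The true mean is 1.0; the CLT predicts the sample mean is roughly
# normal with standard deviation 1 / sqrt(100) = 0.1.
sample_means = [
    statistics.mean(random.expovariate(1.0) for _ in range(100))
    for _ in range(2000)
]

avg = statistics.mean(sample_means)
sd = statistics.stdev(sample_means)
print(round(avg, 2), round(sd, 2))  # close to 1.0 and 0.1
```

The underlying distribution is heavily skewed, yet the histogram of `sample_means` would look bell-shaped; the module proves why.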
Introduction to Markov Chains
One of the most versatile models in applied probability. Server health cycles, user sessions, policy updates in RLHF, and PageRank can all be framed as Markov chains. This module gives you the tools to analyse any of them.
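The server-health example can be sketched as a two-state chain. The transition probabilities below are illustrative assumptions; iterating the distribution shows it settling into a stationary distribution, one of the module's central ideas.

```python
# Two-state Markov chain: Healthy <-> Degraded, with assumed
# transition probabilities (rows sum to 1).
P = {
    "Healthy":  {"Healthy": 0.9, "Degraded": 0.1},
    "Degraded": {"Healthy": 0.6, "Degraded": 0.4},
}

# Start certainly Healthy and iterate pi_{t+1} = pi_t P.
dist = {"Healthy": 1.0, "Degraded": 0.0}
for _ in range(50):
    dist = {s: sum(dist[r] * P[r][s] for r in P) for s in P}

print({s: round(p, 3) for s, p in dist.items()})
```

Whatever state you start in, the chain forgets its initial condition: here it converges to Healthy 6/7 of the time and Degraded 1/7, which you can verify by solving the stationary equations by hand.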
Running Examples Throughout the Course
Software Engineering: Load Balancer · Cache · CI/CD Pipeline · Distributed Database
Machine Learning: Binary Classifier · Training Run · Recommender System · Data Pipeline
LLMs & Agents: Token Sampling · RAG Pipeline · AI Agent Loop · RLHF / Preference Model
Schedule
Every Monday, 7:00 – 9:00 AM IST · 1 June – 20 July 2026