Quantitative Analysis
Mastering Common Univariate Random Variables: The Art of Choosing the Right Distribution

When faced with a probability problem, the goal is to identify the most appropriate distribution as quickly and accurately as possible. Each situation carries clues in the form of data type, context, and relationships between variables. Recognising these clues allows you to select the right model without unnecessary trial and error.
Common univariate random variables include a range of discrete and continuous distributions, along with mixture distributions that combine them. Understanding the defining properties, practical applications, and interconnections between these distributions enables more accurate and efficient analysis in both theoretical and applied settings.
1. The Big Picture — Distributions Are Precision Tools
A probability distribution is not just a formula. It is a model of how uncertainty behaves.
The skill lies in selecting the right model to represent the data and the process that generated it.
The core set of distributions includes:
- Discrete: Bernoulli, Binomial, Poisson
- Continuous: Uniform, Normal, Lognormal, Chi squared, Student t, F, Exponential, Beta
In addition, mixture distributions combine two or more of these to represent more complex patterns.
2. The Normal Distribution — The Core Player
Among all probability distributions, the Normal distribution is the most widely applied in statistics, data science, finance, and many other fields.
Why it matters:
- Many natural and economic phenomena approximate a normal distribution
- The Central Limit Theorem shows that sums and averages of many independent variables with finite variance tend toward normality
- It forms the foundation for many statistical methods and models
- Transformations of the normal give rise to other important distributions, such as the lognormal
Because of these qualities, a strong understanding of the normal distribution often makes it easier to grasp other distributions and their relationships.
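As a quick illustration of the Central Limit Theorem, the sketch below (assuming NumPy is available; the choice of Uniform(0, 1) terms and the sample sizes are arbitrary) sums many independent uniform draws and compares the simulated mean and variance with the values the theorem predicts for the approximating normal.

```python
import numpy as np

rng = np.random.default_rng(seed=42)

# Sum 50 independent Uniform(0, 1) draws, repeated 10,000 times.
n_terms, n_samples = 50, 10_000
sums = rng.uniform(0.0, 1.0, size=(n_samples, n_terms)).sum(axis=1)

# Each Uniform(0, 1) term has mean 1/2 and variance 1/12, so by the
# Central Limit Theorem the sum is approximately Normal(n/2, n/12).
print("sample mean:", sums.mean(), "theoretical:", n_terms / 2)
print("sample var: ", sums.var(),  "theoretical:", n_terms / 12)
```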
3. Quick Identification Framework
When analysing a problem, it helps to move through three steps:
Step 1: Determine whether the variable is discrete or continuous
- Discrete: counts, integer values, event occurrences
- Continuous: any value within a range, often measurements
Step 2: Identify the context
- Single trial with a binary outcome → Bernoulli
- Fixed number of independent binary trials with constant probability → Binomial
- Number of events in a fixed interval of time or space → Poisson
- Waiting time between events → Exponential
- Equal likelihood across an interval → Uniform
- Symmetric, bell shaped measurements → Normal
- Positively skewed, multiplicative growth data → Lognormal
- Tests about a variance → Chi squared
- Mean of a small sample with unknown variance → Student t
- Ratio of two variances → F
- Variable bounded between 0 and 1 → Beta
Step 3: Recognise any special relationships (a short numerical check of these follows the list)
- Poisson arrivals imply exponential interarrival times
- Log of a lognormal variable is normal
- Chi squared is the sum of squared standard normals
- t and F are derived from the normal and chi squared
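These relationships can be spot-checked by simulation. The sketch below uses only NumPy; the rate, degrees of freedom, and sample sizes are illustrative assumptions, not prescribed values.

```python
import numpy as np

rng = np.random.default_rng(seed=0)

# 1. Chi squared with k degrees of freedom = sum of k squared standard normals.
k = 5
z = rng.standard_normal(size=(100_000, k))
chi2_draws = (z ** 2).sum(axis=1)
print("mean ~ k:", chi2_draws.mean())     # chi squared mean is k
print("var  ~ 2k:", chi2_draws.var())     # chi squared variance is 2k

# 2. The log of a lognormal variable is normal.
lognorm_draws = rng.lognormal(mean=0.1, sigma=0.5, size=100_000)
log_draws = np.log(lognorm_draws)
print("log-draw mean ~ 0.1, std ~ 0.5:", log_draws.mean(), log_draws.std())

# 3. Poisson arrivals at rate lam imply Exponential(lam) interarrival times:
#    counting how many exponential gaps fit in one unit of time gives a
#    Poisson count with mean lam.
lam = 3.0
gaps = rng.exponential(scale=1 / lam, size=(100_000, 20))
arrival_times = gaps.cumsum(axis=1)
counts_in_unit_time = (arrival_times <= 1.0).sum(axis=1)
print("event count mean ~ lam:", counts_in_unit_time.mean())
```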
4. Distribution Mind Map Table
| Distribution | Type | Key Properties | Typical Clues |
|---|---|---|---|
| Bernoulli | Discrete | One trial, probabilities p and 1 − p | Success or failure |
| Binomial | Discrete | Fixed n trials, constant p | Number of successes in repeated trials |
| Poisson | Discrete | λ events per interval | Event counts over time or space |
| Exponential | Continuous | Memoryless, rate λ | Waiting time between events |
| Uniform | Continuous | Equal probability over an interval | Any value equally likely |
| Normal | Continuous | Symmetric, μ, σ² | Bell shaped, common natural processes |
| Lognormal | Continuous | Positive skew | Asset prices, multiplicative growth |
| Chi squared | Continuous | Sum of squared standard normals | Variance estimation |
| Student t | Continuous | Fat tails | Estimating means with small samples |
| F | Continuous | Ratio of variances | Comparing variability between groups |
| Beta | Continuous | Bounded between 0 and 1 | Probabilities, rates, proportions |
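The same mind map can be mirrored in code. The sketch below, assuming SciPy is available, builds a frozen scipy.stats object for each distribution with purely illustrative parameter choices; each object exposes its density, distribution function, moments, and random draws, which makes it easy to explore the shapes listed in the table.

```python
from scipy import stats

# Illustrative parameter choices only; each entry is a frozen scipy.stats
# distribution exposing .pdf/.pmf, .cdf, .mean(), .var(), .rvs(), etc.
distributions = {
    "Bernoulli":   stats.bernoulli(p=0.3),
    "Binomial":    stats.binom(n=10, p=0.3),
    "Poisson":     stats.poisson(mu=2.5),
    "Exponential": stats.expon(scale=1 / 2.5),       # rate lambda = 2.5
    "Uniform":     stats.uniform(loc=0, scale=1),    # on [0, 1]
    "Normal":      stats.norm(loc=0, scale=1),
    "Lognormal":   stats.lognorm(s=0.5, scale=1.0),  # s is sigma of the log
    "Chi squared": stats.chi2(df=5),
    "Student t":   stats.t(df=5),
    "F":           stats.f(dfn=5, dfd=10),
    "Beta":        stats.beta(a=2, b=5),
}

for name, dist in distributions.items():
    print(f"{name:12s} mean={dist.mean():8.4f} var={dist.var():8.4f}")
```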
5. Mixture Distributions
A mixture distribution is formed when two or more probability distributions are combined. This is useful when data does not follow a single distribution well.
- Mixtures can model data with skewness or fat tails
- They are used to represent multiple regimes, such as stable and volatile periods in financial markets
- The weights assigned to each component determine the overall characteristics, as the sketch after this list illustrates
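A minimal sketch of a two-component normal mixture follows; the regimes, weights, and parameters are illustrative assumptions chosen to mimic a calm and a volatile market regime, and the excess kurtosis check shows the fat tails a mixture can produce.

```python
import numpy as np

rng = np.random.default_rng(seed=1)

# Two-component normal mixture: a "calm" regime and a "volatile" regime.
# Weights, means, and standard deviations are purely illustrative.
weights = np.array([0.9, 0.1])          # 90% calm, 10% volatile
means   = np.array([0.0005, -0.002])    # daily return mean per regime
stds    = np.array([0.01, 0.04])        # daily return vol per regime

n = 100_000
regime = rng.choice(2, size=n, p=weights)           # pick a component
returns = rng.normal(means[regime], stds[regime])   # sample from that component

# The mixture is fat tailed relative to a single normal with the same variance.
excess_kurtosis = ((returns - returns.mean()) ** 4).mean() / returns.var() ** 2 - 3
print("mixture std:", returns.std())
print("excess kurtosis (0 for a normal):", excess_kurtosis)
```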
6. Why Exponential and Beta Deserve Attention
The exponential distribution is widely used to model waiting times or lifetimes of processes. Its defining property is memorylessness: the probability that the event occurs within the next interval does not depend on how much time has already passed.
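One small numerical check of memorylessness, using an arbitrary rate chosen for illustration, is to compare the conditional probability P(T > s + t | T > s) with P(T > t); for an exponential variable the two agree.

```python
import numpy as np

rng = np.random.default_rng(seed=2)

lam = 0.5                                              # illustrative rate
t_wait = rng.exponential(scale=1 / lam, size=1_000_000)

s, t = 2.0, 3.0
# Memorylessness: P(T > s + t | T > s) should equal P(T > t).
conditional = (t_wait > s + t).mean() / (t_wait > s).mean()
unconditional = (t_wait > t).mean()
print("P(T > s + t | T > s):", conditional)
print("P(T > t):            ", unconditional)          # both ~ exp(-lam * t) ≈ 0.223
```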
The beta distribution is highly flexible and can take many shapes depending on its two parameters. It is useful for modelling probabilities, proportions, and any variable naturally constrained between 0 and 1.
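To see this flexibility, the sketch below (SciPy, with illustrative parameter pairs) evaluates a few beta distributions and shows how the two shape parameters move the probability mass around the unit interval.

```python
from scipy import stats

# A few illustrative (a, b) pairs and the shapes they produce on [0, 1].
shapes = {
    "uniform":        (1, 1),      # flat
    "symmetric hump": (5, 5),      # bell-like, centred at 0.5
    "right skewed":   (2, 8),      # mass near 0
    "left skewed":    (8, 2),      # mass near 1
    "u-shaped":       (0.5, 0.5),  # mass near both ends
}

for label, (a, b) in shapes.items():
    dist = stats.beta(a, b)
    print(f"{label:16s} a={a}, b={b}, mean={dist.mean():.3f}, pdf(0.5)={dist.pdf(0.5):.3f}")
```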
7. Building Long Term Mastery
- Maintain a one page map linking keywords to distributions for quick reference
- Visualise the shapes of distributions to aid recall
- Learn how distributions are related so you can move between them intuitively
- Spend extra time mastering the normal distribution as it underpins much of probability theory and statistics
8. The Payoff
With strong familiarity, you can:
- Recognise patterns in data or problem statements
- Select the most appropriate model without delay
- Build more accurate and interpretable analyses
Probability distributions are the language of uncertainty. Knowing them well allows for clearer thinking, better modelling, and more effective communication of risk and variability in any field where data matters.


