Monte Carlo Simulation using Rust Part-1


In a nutshell, the concept of Monte Carlo simulation is pretty straightforward: generate random numbers and average the outcomes. We can use this technique for everything from evaluating probability problems to solving complex integral equations. Two popular examples are estimating the value of π (pi) and integrating x^2 using Monte Carlo. Let's start with the examples, and then we'll go over the concepts in greater depth.

Examples of Monte Carlo Simulation

1. Estimation of Pi

Let's consider a circle of unit radius centered at (0, 0), enclosed in a square of side 2.

    struct Point {
        x: f64,
        y: f64,
    }

The ratio of the areas of the circle and the square is: $$ \frac{\pi r^2}{4r^2} = \frac{\pi}{4} $$ We count both the total number of points and the number of points that fall inside the circle. We can determine whether a point lies inside the circle by calculating its distance from the origin.

    impl Point {
        fn in_circles(&self) -> bool {
            let distance = (self.x.powi(2) + self.y.powi(2)).sqrt();
            distance <= 1.0
        }
    }

If we divide the number of points inside the circle by the total number of points, we get the probability of a point falling inside a circle, which should be equal to π / 4. Let's have a look at the code for counting the number of points within the circle:

    use rand::distributions::{Distribution, Uniform};

    let mut rng = rand::thread_rng();
    let mut point = Point { x: 0.0, y: 0.0 };
    let number_of_simulation = 100000;
    let mut in_circle_count = 0;
    for _ in 0..number_of_simulation {
        // Sample a point uniformly from the enclosing square [-1, 1] x [-1, 1].
        point.x = Uniform::new(-1.0, 1.0).sample(&mut rng);
        point.y = Uniform::new(-1.0, 1.0).sample(&mut rng);
        if point.in_circles() {
            in_circle_count += 1;
        }
    }

Once we get the total number of points that fall in the circle, we can estimate the value of pi as follows: $$ \frac{\text{in-circle points}}{\text{total points}} = \frac{\pi}{4} $$ $$ \pi = 4 \cdot \frac{\text{in-circle points}}{\text{total points}} $$

    let pi = (in_circle_count as f64) / (number_of_simulation as f64) * 4.0;
    println!("{:?}", pi);
3.140851

2. Integration of x squared

Another example is integrating x^2. $$ I = \int_0^1 x^2 dx = [x^3/3]^1_0 = 1/3$$ It might be surprising at first, but if you simply simulate a large number of uniformly distributed points between 0 and 1 and take the average of the function, x^2 in our case, you get an approximation of this integral.

    let mut rng = rand::thread_rng();
    let number_of_simulation = 100000;
    let mut sum_of_simulation = 0.0;
    for _ in 0..number_of_simulation {
        // Sample x uniformly from [0, 1) and accumulate f(x) = x^2.
        let x: f64 = Uniform::new(0.0, 1.0).sample(&mut rng);
        sum_of_simulation += x.powi(2);
    }
    let avg = sum_of_simulation / (number_of_simulation as f64);
    println!("{}", avg);
0.3324825398482972

I am sure by now this technique for solving integrals sounds powerful and simple enough. However, it is computationally expensive, which makes Rust a natural choice for implementing it. Let's look at the underlying concepts to see why it actually works.

Theory and Concepts

Let X be a stochastic variable with expected value µ = E[X] and variance σ^2 = Var(X). Assume that we want to calculate µ = E[X] but there is no closed-form expression for E[X]. An excellent way to find it is through Monte Carlo simulation. Thanks to the law of large numbers, Monte Carlo simulation can provide an approximation of E[X].

Let us illustrate an example of Monte Carlo simulation:

  • First, simulate n independent and identically distributed stochastic variables X1, X2, X3, .... Xn, all having the same distribution as X, with expected value µ = E[X] and variance Var(X)
  • Now calculate the average, an approximation of the expected value µ: $$ \widetilde{\mu} = \frac{1}{n} \sum_{i=1}^{n} X_i $$ This suggests that with enough simulations n, our approximated value will be close to, or you could say the average converges to, our expected value. This means the convergence of random variables is necessary for our simulation to work. A minimal Rust sketch of this recipe is shown below; after that, we'll review the basic probability theory concepts needed to understand it.
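Here is a minimal sketch of that recipe as a reusable Rust function. The helper name `monte_carlo_mean` and the closure-based sampler are illustrative choices of mine, not something prescribed by the method; the function simply draws n samples and returns their average.

    use rand::Rng;

    // Generic Monte Carlo estimator: draw `n` samples via `sample`
    // and return their average as an approximation of E[X].
    fn monte_carlo_mean<F>(n: u64, mut sample: F) -> f64
    where
        F: FnMut(&mut rand::rngs::ThreadRng) -> f64,
    {
        let mut rng = rand::thread_rng();
        let sum: f64 = (0..n).map(|_| sample(&mut rng)).sum();
        sum / (n as f64)
    }

    // Example: the x^2 integral from above, using a uniform sample on [0, 1).
    // let estimate = monte_carlo_mean(100_000, |rng| rng.gen::<f64>().powi(2));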

Convergence of Random Variables

We need to define convergence precisely because it can mean different things, and the proof of the law of large numbers depends on which notion we use. Let (Ω, F, P) be a probability space and let {Xn}n∈N be a sequence of real-valued random variables defined on this probability space. We have the following definitions of convergence:

1. Point-wise convergence or sure convergence

Let {Xn} be a sequence of real-valued random variables. We say it converges point-wise or surely to X if $$ X_n(\omega) \rightarrow X(\omega), \space \forall \omega \in \Omega $$ This notion of convergence is exactly analogous to the one defined for ordinary functions. Since it is too strict for most practical purposes, and it considers neither the measurability of the random variables nor the probability measure P(·), we use other definitions to prove the limit theorems.

2. Almost sure convergence or convergence with probability 1

A sequence of random variables {Xn}n∈N is said to converge almost surely or with probability 1 (denoted by a.s. or w.p. 1) to X if $$ P(\{\omega \in \Omega \mid X_n(\omega) \rightarrow X(\omega)\}) = 1 $$ Almost sure convergence demands that the set of ω’s where the random variables converge has probability one.

3. Convergence in probability

A sequence of random variables {Xn}n∈N is said to converge in probability (denoted by i.p.) to X if $$ \lim_{n\to\infty} P( \mid X_n - X \mid > \epsilon ) = 0, \space \forall \epsilon > 0 $$

4. Convergence in rth mean

A sequence of random variables {Xn}n∈N is said to converge in rth mean to X if $$ \lim_{n\to\infty} E( \mid X_n - X \mid ^r ) = 0$$ The case r = 2 is used most often and is called mean-square convergence.

5. Convergence in distribution or weak convergence

A sequence of random variables {Xn}n∈N is said to converge in distribution to X if $$ \lim_{n\to\infty} F_{X_n}(x) = F_X(x) $$ for every x ∈ R at which F_X(·) is continuous.
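For orientation, these modes of convergence are related in the standard way (a textbook fact, not something specific to this article): $$ X_n \xrightarrow{a.s.} X \Rightarrow X_n \xrightarrow{i.p.} X \Rightarrow X_n \xrightarrow{d} X, \qquad X_n \xrightarrow{r\text{-th mean}} X \Rightarrow X_n \xrightarrow{i.p.} X $$ so convergence in probability, which the weak law uses, is weaker than almost sure convergence, which the strong law uses.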

Law of Large Numbers

This could be considered the backbone of Monte Carlo simulation. It is the theorem that says that the sample average of a large number of i.i.d. random variables converges to their expected value. Depending on which definition of convergence is used, we have the weak and the strong law of large numbers.

Weak Law of Large Numbers

In the weak law of large numbers, the average converges to the expected value in probability (see convergence in probability above). Let X1, X2, . . . be i.i.d. random variables with finite mean E[X], and let $$ S_n = \sum_{i=1}^{n} X_i $$ Then $$ \frac{S_n}{n} \xrightarrow{i.p.} E[X]$$ This can be proved using Chebyshev’s inequality.
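As a quick sketch of why this works, assuming the Xi additionally have finite variance σ^2 (the general case with only a finite mean needs a different argument), Chebyshev's inequality applied to S_n/n gives $$ P\left( \left| \frac{S_n}{n} - E[X] \right| > \epsilon \right) \le \frac{Var(S_n/n)}{\epsilon^2} = \frac{\sigma^2}{n \epsilon^2} \rightarrow 0 \quad \text{as } n \to \infty $$ which is exactly convergence in probability.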

Strong Law of Large Numbers

This gives the conditions under which the sample average converges almost surely to the expected value (see almost sure convergence or convergence with probability 1 above). $$ \frac{S_n}{n} \xrightarrow{a.s.} E[X]$$ $$ P\left(\left\{\omega \mid \frac{S_n(\omega)}{n} \rightarrow E[X]\right\}\right) = 1$$

Central Limit Theorem

Let {Xi} be a sequence of i.i.d. random variables having finite variance. From the law of large numbers, we know that for large n, the sum S_n is approximately as big as nE[X], i.e., $$ \frac{S_n}{n} \xrightarrow{i.p.} E[X]$$ $$ \frac{S_n - nE[X]}{n} \xrightarrow{i.p.} 0$$ Thus, whenever the variance of Xi is finite, the difference S_n − nE[X] grows more slowly than n. The Central Limit Theorem (CLT) says that this difference scales as √n, and its distribution approaches a normal distribution as n tends to infinity, irrespective of the distribution of the Xi: $$ \frac{S_n - nE[X]}{\sqrt{n}} \xrightarrow{d} N(0, \sigma^2)$$ This is the reason people say that Monte Carlo "converges very slowly" or that the error "shrinks like 1/√n": if you need a tenfold improvement in accuracy, you'll need a hundredfold increase in the number of paths.
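To see this 1/√n behaviour empirically, here is a small self-contained sketch; the function name `estimate_pi` and the sample counts are my own illustrative choices. It reruns the π estimator from earlier at increasing n and prints the absolute error, which should shrink roughly by a factor of √10 for each tenfold increase in samples.

    use rand::distributions::{Distribution, Uniform};

    // Estimate pi with `n` uniformly sampled points, as in the example above.
    fn estimate_pi(n: u64) -> f64 {
        let mut rng = rand::thread_rng();
        let range = Uniform::new(-1.0, 1.0);
        let mut in_circle = 0u64;
        for _ in 0..n {
            let x: f64 = range.sample(&mut rng);
            let y: f64 = range.sample(&mut rng);
            if x * x + y * y <= 1.0 {
                in_circle += 1;
            }
        }
        4.0 * (in_circle as f64) / (n as f64)
    }

    fn main() {
        // The error should shrink roughly like 1/sqrt(n).
        for n in [1_000u64, 100_000, 10_000_000] {
            let err = (estimate_pi(n) - std::f64::consts::PI).abs();
            println!("n = {:>9}: |error| = {:.6}", n, err);
        }
    }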

Monte Carlo Integration

We can write an integral as the expectation of a function, and vice versa.

Expectation

Consider a random variable X with pdf f_X(x) whose domain is [a, b]. The expectation of a function g of X is defined as: $$ E[g(X)] = \int_a^b g(x) \, f_X(x) \, dx $$

Integration

Needless to say, we can use simulation to calculate the expectation. Returning to our previous example of integrating x^2, let X follow the uniform distribution U[0, 1], so its pdf is f_X(x) = 1, and take g(x) = x^2. We can then write the integral as $$ \int_0^1 x^2 dx = \int_0^1 x^2 \, f_X(x) \, dx = E[X^2] $$ Since X is uniform, we can easily estimate E[X^2] by taking the sample mean of the squared draws.
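Putting these two ideas together, here is a hedged sketch of a general-purpose Monte Carlo integrator over an arbitrary interval [a, b]; the helper name `monte_carlo_integrate` is mine, not from the article. Sampling X uniformly on [a, b] means f_X(x) = 1/(b − a), so the integral equals (b − a) times the average of g(X).

    use rand::distributions::{Distribution, Uniform};

    // Approximate the integral of `g` over [a, b] by sampling X ~ U[a, b]:
    // integral = (b - a) * E[g(X)], estimated by the sample mean of g.
    fn monte_carlo_integrate<G: Fn(f64) -> f64>(g: G, a: f64, b: f64, n: u64) -> f64 {
        let mut rng = rand::thread_rng();
        let range = Uniform::new(a, b);
        let sum: f64 = (0..n).map(|_| g(range.sample(&mut rng))).sum();
        (b - a) * sum / (n as f64)
    }

    // Example: the integral of x^2 over [0, 1] should come out close to 1/3.
    // let approx = monte_carlo_integrate(|x| x.powi(2), 0.0, 1.0, 100_000);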
