I have implemented an option pricing algorithm following the Heston model. The simulation involves specifying the number of simulations, then generating a discretized path for each simulation (code below). Setting N = 10000 and M = 10000 results in a fairly slow runtime on my machine. Is there any way I can speed this code up? Additionally, I'm looking to get rid of any bad habits when I code, so please feel free to point them out. For example, is there a better way than these nested loops?
#include <iostream>
#include <vector>
#include <cmath>
#include <algorithm>
#include <random>
std::vector<double> linspace(double start, double end, unsigned long long int N)
{
    std::vector<double> v(N);
    for (std::vector<double>::size_type i = 0; i < v.size(); i++)
    {
        v[i] = start + i*(end - start)/static_cast<double>(N);
    }
    return v;
}
int main()
{
    // Specify input parameters
    char type{'c'};
    double S0{50};
    double K{50};
    double T{0.5};
    double r{0.02};
    double q{0.0};
    unsigned long long int N{1000}; // Number of steps
    int M{1000}; // Number of simulations
    // Build time array
    std::vector<double> t_array{linspace(0.0, T, N)};
    double dt{t_array[1] - t_array[0]};
    // Stochastic volatility parameters
    double v0{pow(0.15, 2)};    // Starting volatility
    double theta{pow(0.15, 2)}; // Long-term mean volatility
    double kappa{0.5};          // Speed of reversion
    double xi{0.05};            // Volatility of the volatility
    // lambda parameter for the multidimensional Girsanov theorem
    double lambda{0.02}; // Needs to be estimated in practice somehow; I choose this arbitrarily for now
    // Generate standard normal random variables under the risk-neutral probability measure
    double rho{0.3}; // Correlation between the W1 and W2 Brownian motions under the risk-neutral probability measure
    std::random_device rd; // random number generator
    std::normal_distribution<> N1(0, 1);
    std::normal_distribution<> N3(0, 1);
    // Variables to hold standard normals generated during path simulation
    double z1{};
    double z2{};
    // Run M simulations
    double payoff_total_sum{0};
    double S{};
    double v{};
    for (int h = 0; h < M; h++)
    {
        // Reset initial values
        S = S0;
        v = v0;
        // Generate stock paths under the risk-neutral measure
        for (std::vector<double>::size_type i = 0; i < t_array.size(); i++)
        {
            // Generate N1 and N3 standard normals
            z1 = N1(rd);
            z2 = rho*z1 + pow(1 - pow(rho, 2), 0.5)*N3(rd); // N2 is correlated to N1
            // Update stock path under the risk-neutral measure
            S = S + (r - q)*S*dt + S*pow(v*dt, 0.5)*z1;
            // Update stochastic volatility under the risk-neutral measure
            v = v + (kappa*theta - (kappa + lambda)*std::max(v, 0.0))*dt + xi*pow(std::max(v, 0.0)*dt, 0.5)*z2;
        }
        if (type == 'c')
        {
            payoff_total_sum += std::max(S - K, 0.0);
        }
        else
        {
            payoff_total_sum += std::max(K - S, 0.0);
        }
    }
    double option_price{exp(-r*T)*payoff_total_sum/static_cast<double>(M)};
    std::cout << option_price;
    return 0;
}
Comment (r0k1m, Jun 3, 2022): Hi, some IDEs come with profilers which tell you how much time each part of your code is taking; CLion has one built in.
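When no profiler is at hand, timing a section of code with std::chrono is a lightweight alternative. Below is a sketch of such a helper; the name elapsed_ms is invented here for illustration and does not appear in the original post.

```cpp
#include <chrono>

// Hypothetical helper (not from the original post): measures the wall-clock
// time of any callable in milliseconds, using a monotonic clock so the
// measurement is unaffected by system-clock adjustments.
template <typename F>
long long elapsed_ms(F&& work)
{
    const auto t0 = std::chrono::steady_clock::now();
    work();
    const auto t1 = std::chrono::steady_clock::now();
    return std::chrono::duration_cast<std::chrono::milliseconds>(t1 - t0).count();
}
```

To time the simulation, one could wrap the outer loop: `auto ms = elapsed_ms([&]{ /* run the M simulations */ });`.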
1 Answer
You obtain random numbers directly from random_device:

    std::random_device rd;
    ....
    z1 = N1(rd);
It is extremely expensive. See the comment in the cppreference article on std::random_device:

    ++hist[dist(rd)]; // note: demo only: the performance of many
                      // implementations of random_device degrades sharply
                      // once the entropy pool is exhausted. For practical use
                      // random_device is generally only used to seed
                      // a PRNG such as mt19937
You need something like this instead:

    std::random_device rd;
    std::mt19937 gen{rd()};
    ....
    z1 = N1(gen);
t_array is a waste of space and time. It is only used to compute dt. Just say

    dt{(end - start)/N};

and use N as the loop limit.
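With the question's spacing, t_array[1] - t_array[0] equals (end - start)/N, so the whole array can be dropped. A minimal sketch (step_size is a hypothetical helper name, not from the post):

```cpp
// Hypothetical helper: computes directly the step size that linspace was
// producing indirectly via t_array[1] - t_array[0].
double step_size(double start, double end, unsigned long long int n_steps)
{
    return (end - start) / static_cast<double>(n_steps);
}
```

In main this would read `double dt{step_size(0.0, T, N)};`, and the inner loop becomes `for (unsigned long long int i = 0; i < N; i++)`.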
There is no need to static_cast<double>(M). It is OK to divide a double by an integer.
Comment (MattA, Jun 3, 2022): For those interested, using the Mersenne twister cut the time from 47 seconds to about 26 seconds on my machine using N = 10000 and M = 10000.