I am having trouble with a for loop: my code runs very slowly. What I want to do is use a function from the apply family to make my code run faster (instead of using for and while loops). Here is an example and my loop:
require(data.table)
require(zoo)
require(plyr)   # needed for ddply() and join() used below
K<-seq(1,1000, by=1)
b<-c(rep(2,250), rep(3, 250), rep(4, 250), rep(5,250))
a<-c(rep(6,250), rep(7,250), rep(8,250), rep(9,250))
rf<-rep(0.05, 1000)
L<-rep(10,1000)
cap<-rep(20,1000)
df<-data.frame(K, rf, L, cap, a,b)
blackscholes <- function(S, X, rf, h, sigma) {
  # d1 term of the Black-Scholes formula
  d1 <- (log(S/X) + (rf + sigma^2/2)*h) / (sigma*sqrt(h))
  d1
}
df$logiterK<-log(df$K)
df<-as.data.table(df)
df[,rollsd:=rollapply(logret, 250, sd, fill = NA, align='right')*sqrt(250), by=c("a", "b")]
df[,assetreturn:=c(NA,diff(logiterK)),by=c("a", "b")]
df[,rollsdasset:=rollapply(assetreturn, 249, sd, fill=NA, align='right')*sqrt(250), by=c("a", "b")]
df[,K1:=(cap+L*exp(-rf)*pnorm(blackscholes(K,L,rf, 1,rollsdasset[250]))-rollsdasset[250])/pnorm(blackscholes(K,L,rf, 1,rollsdasset[250])),by=c("a","b")]
errors<-ddply( df, .(a,b), function(x) sum((x$K-x$K1)^2))
df<-as.data.frame(df)
df<-join(df, errors, by=c("a", "b"))
for (i in 1:nrow(errors)) {
  while (errors$V1[i] >= 10^(-10)) {
    df <- as.data.table(df)
    df[, K := K1, by = c("a", "b")]
    df[, assetreturn := c(NA, diff(log(K))), by = c("a", "b")]
    df[, rollsdasset := rollapply(assetreturn, 249, sd, fill = NA, align = 'right') * sqrt(250), by = c("a", "b")]
    df[, iterK1 := (cap + L*exp(-rf)*pnorm(blackscholes(K, L, rf, 1, rollsdasset[250])) - rollsdasset[250]) / pnorm(blackscholes(K, L, rf, 1, rollsdasset[250])), by = c("a", "b")]
    df <- as.data.frame(df)
    errors$V1[i] <- sum((df[df$V1 %in% errors$V1[i], "K"] - df[df$V1 %in% errors$V1[i], "K1"])^2)
  }
}
Any help would be appreciated.
1 Answer
You could replace the for loop with a function + sapply like this:
reduce.errors <- function(err) {
  # Same body as the original while loop, with errors$V1[i] replaced by err;
  # df is the data frame built in the question.
  while (err >= 10^(-10)) {
    df <- as.data.table(df)
    df[, K := K1, by = c("a", "b")]
    df[, assetreturn := c(NA, diff(log(K))), by = c("a", "b")]
    df[, rollsdasset := rollapply(assetreturn, 249, sd, fill = NA, align = 'right') * sqrt(250), by = c("a", "b")]
    df[, iterK1 := (cap + L*exp(-rf)*pnorm(blackscholes(K, L, rf, 1, rollsdasset[250])) - rollsdasset[250]) / pnorm(blackscholes(K, L, rf, 1, rollsdasset[250])), by = c("a", "b")]
    df <- as.data.frame(df)
    err <- sum((df[df$V1 %in% err, "K"] - df[df$V1 %in% err, "K1"])^2)
  }
}
sapply(errors$V1, reduce.errors)
But I don't think this will make it any faster. If I understand correctly, you need the while loop to drive the error below a threshold, so the iteration itself is essential and cannot easily be replaced with the "apply" functions.
If you want to improve the speed, I think you'll need to rethink the problem and come up with a different approach, if that's even possible.
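One possible direction, offered only as a rough sketch: run the fixed-point iteration once per (a, b) group inside a single data.table call, which removes the outer for loop and the repeated as.data.table()/as.data.frame() conversions. This reuses blackscholes(), rollapply() and the df from the question; the helper name iterate_group() and the max_iter cap are my own additions, not part of the original post.
iterate_group <- function(K, L, cap, rf, tol = 1e-10, max_iter = 1000) {
  # Repeat the question's update step on one group's columns until the
  # squared error falls below tol. max_iter is a safety net, since
  # convergence is not guaranteed.
  K1 <- K
  repeat {
    K <- K1
    assetreturn <- c(NA, diff(log(K)))
    rollsdasset <- rollapply(assetreturn, 249, sd, fill = NA, align = "right") * sqrt(250)
    d1 <- blackscholes(K, L, rf, 1, rollsdasset[250])
    K1 <- (cap + L*exp(-rf)*pnorm(d1) - rollsdasset[250]) / pnorm(d1)
    max_iter <- max_iter - 1
    if (sum((K - K1)^2, na.rm = TRUE) < tol || max_iter <= 0) break
  }
  K1
}

dt <- as.data.table(df)
dt[, K := iterate_group(K, L, cap, rf), by = c("a", "b")]
Whether this converges, and whether it is actually faster on your data, would need to be checked; it only removes the bookkeeping around the loop, not the iteration itself.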
logret is not found. Is it a function? Could you please specify the package at the beginning? I have added the loading of data.table and zoo.
The apply family of functions all implement a for loop. They are not faster; they just give you more ways to write for loops in a concise manner.
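To illustrate that last point, here is a small toy example of my own (not from the thread): squaring the elements of a vector with sapply() takes roughly the same time as an explicit for loop over a preallocated result, because sapply() itself loops in R.
x <- rnorm(1e5)

# apply-family version
system.time(r1 <- sapply(x, function(v) v^2))

# explicit for loop with a preallocated result vector
r2 <- numeric(length(x))
system.time(for (i in seq_along(x)) r2[i] <- x[i]^2)

all.equal(r1, r2)   # TRUE: identical results, timings of the same order
The real speed-ups in R come from vectorised operations (here simply x^2), not from swapping one loop syntax for another.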