I'm just starting to learn Julia. I work primarily in physics and am used to writing most of my code in Fortran 90, with occasional Python for TensorFlow (also Mathematica, but that's less relevant). Julia was recommended to me and I started checking it out; in theory I like it a lot as a middle ground between the speed of Fortran and the syntax of Python.

To test it out I wrote a simple 2D Ising model code implementing a basic single-spin-flip Metropolis Monte Carlo algorithm. However, this code runs very slowly compared to an equivalent code in Fortran. Am I doing something wrong that is significantly affecting the performance? I know almost nothing beyond what I've done here. I am using the Juno IDE in Atom on Windows 10.

As an aside, I would also like to know how I can make multiple plots in the Atom plot tab, but that's secondary.
using Printf
using Plots
L = 20 # linear size of lattice
n_sweep = 20 # number of sweeps between sampling
n_therm = 1000 # number of sweeps to thermalize
n_data = 100 # number of data samples per temperature
temps = 4.0:-0.3:0.1 # temperatures to sample
e1 = Array(1:n_data) # array to hold energy measurements (fixed T)
m1 = Array(1:n_data) # array to hold magnetization measurements (fixed T)
et = [] # array to append average energy at each T
mt = [] # " magnetizations
s = ones(Int32,L,L) # lattice of Ising spins (+/-1)
function measure(i) # measure i'th sample of energy and magnetization
    en = 0
    m = 0
    for x = 1:L
        for y = 1:L
            u = 1+mod(y,L)               # up
            r = 1+mod(x,L)               # right
            en -= s[x,y]*(s[x,u]+s[r,y]) # energy
            m += s[x,y]                  # magnetization
        end
    end
    energy[i] = en
    magnetization[i] = abs(m)
end

function flip(x,y,T) # apply metropolis spin flip algorithm to site (x,y) w/ temp T
    u = 1+mod(y,L)   # up
    d = 1+mod(y-2,L) # down
    r = 1+mod(x,L)   # right
    l = 1+mod(x-2,L) # left
    de = 2*s[x,y]*(s[x,u]+s[x,d]+s[l,y]+s[r,y])
    if (de < 0)
        s[x,y] = -s[x,y]
    else
        p = rand()
        if (p < exp(-de/T))
            s[x,y] = -s[x,y]
        end
    end
end

function sweep(n,T) # apply flip() to every site on the lattice
    for i = 1:n
        for x = 1:L
            for y = 1:L
                flip(x,y,T)
            end
        end
    end
end

for T in temps            # loop over temperatures
    sweep(n_therm, T)     # thermalize the lattice
    energy = e1           # reset energy measurement array
    magnetization = m1    # same
    for i = 1:n_data      # take n_data measurements w/ n_sweep
        sweep(n_sweep, T)
        measure(i)
    end
    en_ave = sum(energy)/n_data # compute average
    ma_ave = sum(magnetization)/n_data
    push!(et,en_ave/(L*L))      # add to the list
    push!(mt,ma_ave/(L*L))
    @printf("%8.3f %8.3f \n", en_ave/(L*L), ma_ave/(L*L))
end
plot(temps,mt) # plot magnetization vs. temperature
#plot(temps,et)
I realize this is an old post. But in case you are still interested, you might want to take a look here: github.com/cossio/SquareIsingModel.jl – a06e, Dec 7, 2021 at 22:23
1 Answer
The biggest problem with this code is the amount of work happening in global scope. Rewriting it to make some things const, and passing the rest in from a main function, brings the time down to 0.7 seconds (not including using Plots, which takes 3.5 seconds), compared to ~10 seconds before. Here is the updated code; I hope it helps.
using Printf
using Plots

const L = 20               # linear size of lattice
const n_sweep = 20         # number of sweeps between sampling
const n_therm = 1000       # number of sweeps to thermalize
const n_data = 100         # number of data samples per temperature
const temps = 4.0:-0.3:0.1 # temperatures to sample

function measure(i, energy, magnetization, s) # measure i'th sample of energy and magnetization
    en = 0
    m = 0
    for x = 1:L
        for y = 1:L
            u = 1+mod(y,L)               # up
            r = 1+mod(x,L)               # right
            en -= s[x,y]*(s[x,u]+s[r,y]) # energy
            m += s[x,y]                  # magnetization
        end
    end
    energy[i] = en
    magnetization[i] = abs(m)
end

function flip(x, y, T, s) # apply metropolis spin flip algorithm to site (x,y) w/ temp T
    u = 1+mod(y,L)   # up
    d = 1+mod(y-2,L) # down
    r = 1+mod(x,L)   # right
    l = 1+mod(x-2,L) # left
    de = 2*s[x,y]*(s[x,u]+s[x,d]+s[l,y]+s[r,y])
    if (de < 0)
        s[x,y] = -s[x,y]
    else
        p = rand()
        if (p < exp(-de/T))
            s[x,y] = -s[x,y]
        end
    end
end

function sweep(n, T, s) # apply flip() to every site on the lattice
    for i = 1:n
        for x = 1:L
            for y = 1:L
                flip(x, y, T, s)
            end
        end
    end
end

function main()
    e1 = Array(1:n_data) # array to hold energy measurements (fixed T)
    m1 = Array(1:n_data) # array to hold magnetization measurements (fixed T)
    et = []              # array to append average energy at each T
    mt = []              # " magnetizations
    s = ones(Int32,L,L)  # lattice of Ising spins (+/-1)
    for T in temps             # loop over temperatures
        sweep(n_therm, T, s)   # thermalize the lattice
        energy = e1            # reset energy measurement array
        magnetization = m1     # same
        for i = 1:n_data       # take n_data measurements w/ n_sweep
            sweep(n_sweep, T, s)
            measure(i, energy, magnetization, s)
        end
        en_ave = sum(energy)/n_data # compute average
        ma_ave = sum(magnetization)/n_data
        push!(et,en_ave/(L*L))      # add to the list
        push!(mt,ma_ave/(L*L))
        @printf("%8.3f %8.3f \n", en_ave/(L*L), ma_ave/(L*L))
    end
    plot(temps,mt) # plot magnetization vs. temperature
    #plot(temps,et)
end
main()
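For reference, timings like the ones quoted above can be checked with Julia's built-in timing macros. The first call to a function includes one-time JIT compilation, so the second call is the number to compare against Fortran. A minimal sketch (work() here is an illustrative stand-in, not part of the code above):

using Statistics

work() = sum(rand() for _ in 1:10^6) # stand-in workload

@time work()        # includes compilation of work()
t = @elapsed work() # steady-state wall time, in seconds
println(t)

(BenchmarkTools.jl's @btime, if installed, gives cleaner numbers by running the function many times.)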
Hi, thanks for the reply, this is very helpful and did improve the performance quite a bit, but it still is running at least an order of magnitude slower than an equivalent code in Fortran (increasing the runtime with L = 40, n_sweep = 200, and n_data = 1000). Is there some other area in which the code can be improved significantly? – Kai, Apr 16, 2019 at 1:19
Almost all of the time in this function is in if (p < exp(-de/T)), so I'm not really sure if much further improvement exists. – Oscar Smith, Apr 16, 2019 at 3:38
Does the order of loops in sweep matter? Making i be the inner loop makes it ~20% faster. – Oscar Smith, Apr 16, 2019 at 4:06
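A separate but related point on loop order: Julia arrays are column-major, so the first index is the one that varies fastest in memory. For s[x,y] that means the x loop belongs innermost when walking the lattice. A small sketch of the memory-friendly access pattern (sweep_colmajor is an illustrative name, not from the code above):

# Julia stores matrices column-major: s[1,1], s[2,1], ... are contiguous.
# Iterating the first index in the inner loop walks memory sequentially.
function sweep_colmajor(s)
    total = 0
    for y in axes(s, 2)     # outer loop over columns
        for x in axes(s, 1) # inner loop over rows = contiguous memory
            total += s[x, y]
        end
    end
    return total
end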
Hmm, I will play around with it. The order of sweep certainly matters, but I'm sure there must be some way to improve this further. I will keep reading about Julia. Perhaps pre-caching the random numbers could improve performance? Thanks for your help so far. – Kai, Apr 16, 2019 at 4:41
You might find it faster to generate from a log distribution and remove the exp call. Not sure if that will be better, but worth trying. – Oscar Smith, Apr 16, 2019 at 14:29
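Expanding on that last suggestion: in this model de can only take the five values -8, -4, 0, 4, 8, so the exp call can be hoisted out of the inner loop entirely by precomputing the acceptance probabilities once per temperature. A sketch under that assumption (boltzmann_table and flip_fast! are illustrative names, not from the answer's code):

# de = 2*s[x,y]*(sum of 4 neighbors) is always one of -8, -4, 0, 4, 8,
# so precompute exp(-de/T) once per temperature instead of per spin flip.
boltzmann_table(T) = [exp(-de / T) for de in -8:4:8] # index: de ÷ 4 + 3

function flip_fast!(s, x, y, w) # w = boltzmann_table(T)
    L = size(s, 1)
    u = 1 + mod(y, L); d = 1 + mod(y - 2, L)
    r = 1 + mod(x, L); l = 1 + mod(x - 2, L)
    de = 2 * s[x, y] * (s[x, u] + s[x, d] + s[l, y] + s[r, y])
    if de <= 0 || rand() < w[de ÷ 4 + 3] # table lookup replaces exp()
        s[x, y] = -s[x, y]
    end
end

The mapping de ÷ 4 + 3 sends -8, -4, 0, 4, 8 to indices 1 through 5. Whether this beats the branch predictor on a given machine is worth benchmarking rather than assuming.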