
I believe this is considered value noise (or possibly gradient noise), in that I simply interpolate between random values, always returning the same value for a given coordinate.

I am currently using this to generate a basic heightmap in a 3D game that expands outward as the player moves (nearly infinite, restricted only by the limits of the coordinate variables).

My game's map is stored in chunks; each chunk holds 32x32 (1024) data points representing heights. I call the function GetNoise2D() with the X and Z values of every data point.
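
For reference, each chunk is filled with something like the loop sketched below (the method and variable names here are placeholders for this post, not my actual code), so a chunk costs 1024 GetHeight() calls, i.e. 2048 GetNoise2D() calls; GetHeight() itself is shown further down.

//Illustrative sketch only: the chunk-coordinate scheme and names are simplified placeholders
readonly clsNoise2D Noise = new clsNoise2D(12345); //seed value is just an example

private float[,] FillChunkHeights(int chunkX, int chunkZ) {
    const int ChunkSize = 32;
    float[,] heights = new float[ChunkSize, ChunkSize];
    for (int x = 0; x < ChunkSize; x++) {
        for (int z = 0; z < ChunkSize; z++) {
            //1024 calls per chunk; each GetHeight() call makes two GetNoise2D() calls
            heights[x, z] = GetHeight(chunkX * ChunkSize + x, chunkZ * ChunkSize + z);
        }
    }
    return heights;
}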

This is currently my largest bottleneck. I could probably just do the 4 corners per chunk and interpolate between them to get reasonable looking terrain, but to put it simply, I'd rather not.

Does anyone see any noticeable performance issues with the algorithm or the concept after initialization? I call into the algorithm through the function GetHeight(), which combines two calls to GetNoise2D():

private float GetHeight(int X, int Z) {
    float fNoise = Noise.GetNoise2D(X * .02f, Z * .02f) * .5f;
    fNoise += Noise.GetNoise2D(X * .04f, Z * .04f) * .5f;
    //Scale noise from 0-1 to 0-20
    return fNoise * 20f;
}

public class clsNoise2D {
    readonly byte[] Permutations = new byte[512];
    readonly float[] Values = new float[256];
    float xLerpAmount, yLerpAmount, v00, v10, v01, v11;
    //pX, pXa, dX, and dY are helper values to reduce operations
    int pX, pXa;
    int dX, dY;
    public Random random;

    public clsNoise2D(int iSeed) {
        random = new Random(iSeed);
        //Randomize permutations array with values 0-255
        List<byte> listByte = new List<byte>();
        for (int i = 0; i < 256; i++) listByte.Add((byte)i);
        for (int i = 256; i > 0; i--) {
            Permutations[256 - i] = listByte[random.Next(i)];
            listByte.Remove(Permutations[256 - i]);
        }
        //Take permutations array up to 512 elements to reduce wrapping needs in GetNoise2D call
        for (int i = 256; i < 512; i++) { Permutations[i] = Permutations[i - 256]; }
        //Set values to be between 0 and 1 incrementally from 0/255 through 255/255
        for (int i = 0; i < 256; i++) { Values[i] = (i / 255f); }
    }

    public float GetNoise2D(float CoordX, float CoordY) {
        //Get floor value of inputs
        dX = (int)Math.Floor(CoordX); dY = (int)Math.Floor(CoordY);
        //Get fractional value of inputs
        xLerpAmount = CoordX - dX; yLerpAmount = CoordY - dY;
        //Wrap floored values to byte values
        dX = dX & 255; dY = dY & 255;
        //Start permutation/value pulling
        pX = Permutations[dX]; pXa = Permutations[dX + 1];
        v00 = Values[Permutations[(dY + pX)]];
        v10 = Values[Permutations[(dY + pXa)]];
        v01 = Values[Permutations[(dY + 1 + pX)]];
        v11 = Values[Permutations[(dY + 1 + pXa)]];
        //Smooth lerp amounts by cosine function
        xLerpAmount = (1f - (float)Math.Cos(xLerpAmount * Math.PI)) * .5f;
        yLerpAmount = (1f - (float)Math.Cos(yLerpAmount * Math.PI)) * .5f;
        //Return 2D interpolation for v00, v01, v10, and v11
        return (v00 * (1 - xLerpAmount) * (1 - yLerpAmount) +
                v10 * xLerpAmount * (1 - yLerpAmount) +
                v01 * (1 - xLerpAmount) * yLerpAmount +
                v11 * xLerpAmount * yLerpAmount);
    }
}

Edit: To clarify the question itself, is there a very noticeable performance mistake currently being made OR is there a completely different way to achieve identical or nearly identical values that SHOULD knock the performance out of the park?

asked Nov 6, 2013 at 17:07
  • In what way is this a bottleneck? If I take your sample and run it over a sample set of 1024 values (as I think you are doing), it takes milliseconds to complete, so I think the question is: how fast do you expect it to perform, and how are you sure this is the slowest part of your program? Commented Nov 14, 2013 at 17:16
  • This is being run over 1024 values per chunk, across 17x17 chunks. If the player is moving fast (i.e. flying), it is possible to be generating up to 10 chunks per frame. Although I don't like it, I am currently interpolating using only the algorithm above at the corners of each chunk (and it works well enough, I believe). I was simply hoping not to have to. Commented Nov 14, 2013 at 19:41

2 Answers

According to Visual Studio, the most expensive lines are indeed the two calls to Math.Cos.

[Image: Visual Studio performance analysis for GetNoise2D]

You could probably shave a couple of cycles by creating a lookup table which would return the result of that whole expression, (1-cos(w * PI))/2, just by indexing an array.
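
Something along these lines could slot into clsNoise2D (a rough, untested sketch; the table size here is an arbitrary trade-off between memory and accuracy):

//Sketch: precompute (1 - cos(w * PI)) / 2 for w in [0, 1) at a fixed resolution
const int FadeTableSize = 1024;
static readonly float[] FadeTable = BuildFadeTable();

static float[] BuildFadeTable() {
    float[] table = new float[FadeTableSize];
    for (int i = 0; i < FadeTableSize; i++)
        table[i] = (1f - (float)Math.Cos(i / (float)FadeTableSize * Math.PI)) * .5f;
    return table;
}

//xLerpAmount/yLerpAmount are always in [0, 1), so the index never goes out of range
static float Fade(float w) {
    return FadeTable[(int)(w * FadeTableSize)];
}

The two Math.Cos lines in GetNoise2D then become xLerpAmount = Fade(xLerpAmount); and yLerpAmount = Fade(yLerpAmount);. The results will only be approximately equal to the current ones (quantized to the table's resolution), so whether that is acceptable depends on how strictly you need "identical" values.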

answered Dec 2, 2013 at 15:01

I'm not sure if Cos and Sin are efficient here; they might be a performance killer. Consider making a table of sin/cos values for some range of angles.

answered Nov 29, 2013 at 8:46
  • It's possible a lookup table would be more performant, but it would be almost insignificant. stackoverflow.com/questions/1382322/… Commented Nov 29, 2013 at 9:36
