I've recently been building a basic ray tracer in C# from scratch, as a learning/teaching project. A previous release of the project (call it A) does reflections and diffuse shading, and renders the scene in around 900 ms. I've just made a release B which adds specular highlights. Naturally, I assumed rendering time would increase. Imagine my surprise when the same scene rendered in a speedy 120 ms! The rendered output is exactly the same (since the objects in the scene don't actually have specular highlights).

Curious, I tried to narrow down which part of the code is actually making it faster. I think I've narrowed it down to a calculation that is made in both the reflection component and the specular component: calculating the reflection vector. For each iteration of the ray tracing, both calculate the same vector (same inputs, same output), but there is no data sharing between the two. So I was wondering: is C# somehow caching the results, which would account for the performance increase?

Here's the code for the reflection rendering:

private Color TraceReflection(Ray ray, Vector3D normal, Vector3D hitPoint, IPrimitive hitObject, int Level)
{
    // Calculate reflection direction
    var reflectionDir = (ray.Direction - (2 * (ray.Direction * normal) * normal)).Normalize();
    // Create reflection ray from just outside the intersection point, and trace it
    var reflectionRay = new Ray(hitPoint + reflectionDir * Globals.Epsilon, reflectionDir);
    // Get the color from the reflection
    var reflectionColor = RayTrace(reflectionRay, Level + 1);
    // Calculate final color
    var resultColor = reflectionColor * hitObject.PrimitiveMaterial.ReflectionCoeff;
    return resultColor;
}

And here's the specular highlight function:

public Color GetColor(IPrimitive HitObject, ILight Light, Vector3D ViewDirection, Vector3D LightDirection, Vector3D Normal)
{
    // Calculate reflection vector
    var reflectionDirection = (LightDirection - (2 * LightDirection * Normal) * Normal).Normalize();
    // If the dot product is zero or less, the angle between the two vectors is 90 degrees or more and no highlighting occurs.
    var dot = reflectionDirection * ViewDirection;
    if (dot > 0)
    {
        var specularPower = HitObject.PrimitiveMaterial.SpecularCoeff * Math.Pow(dot, HitObject.PrimitiveMaterial.SpecularExponent);
        var highlightColor = HitObject.PrimitiveMaterial.DiffuseColor * specularPower;
        return highlightColor;
    }
    return new Color();
}
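
(For reference, both functions compute the standard reflection of a vector about a normal, R = D - 2(D.N)N, assuming the overloaded * between two Vector3D values is a dot product. The grouping differs slightly between the two snippets, 2 * (D.N) versus (2*D).N, but scalar multiplication commutes with the dot product, so they produce the same vector.)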

UPDATE

The previous numbers were from both programs running in debug mode. I just switched them both to release, and the numbers are what I was expecting in the first place (270 ms without specular, 380 ms with specular). So it seems debug mode, somehow, was the culprit.
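
For timings like these, a simple harness built on System.Diagnostics.Stopwatch keeps debugger overhead out of the measurement; a minimal sketch (RenderScene is a hypothetical stand-in for the tracer's actual render entry point):

using System;
using System.Diagnostics;

static class RenderTimer
{
    static void Main()
    {
        // Warm-up run so JIT compilation isn't included in the measurement.
        RenderScene();

        var sw = Stopwatch.StartNew();
        RenderScene();
        sw.Stop();
        Console.WriteLine("Render took {0} ms", sw.ElapsedMilliseconds);
    }

    // Stand-in for the actual render call.
    static void RenderScene() { }
}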

asked May 3, 2012 at 20:38
  • Did you try running the tests in reverse order? Because it could be caused by OS caching. Commented May 3, 2012 at 21:34
  • @svick: I thought of that as well. Same results. Commented May 3, 2012 at 21:39

3 Answers


It's entirely possible that, in debug mode, the compiler is much more relaxed about when and where it optimizes code, so that your non-specular build is compiled into a significantly less optimized executable (since it isn't doing that badly anyway), while the specular build crosses some threshold and gets bumped into a slower-to-compile but more heavily optimized mode.
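
One way to sanity-check which mode a build actually ran under is to read the DebuggableAttribute that the compiler stamps on the assembly; a minimal sketch using standard reflection APIs (the BuildCheck wrapper is just illustrative):

using System;
using System.Diagnostics;
using System.Reflection;

static class BuildCheck
{
    static void Main()
    {
        // In a typical Debug build IsJITOptimizerDisabled is true;
        // in a Release build it is false.
        var attr = (DebuggableAttribute)Attribute.GetCustomAttribute(
            Assembly.GetExecutingAssembly(), typeof(DebuggableAttribute));
        Console.WriteLine("JIT optimizer disabled: {0}",
            attr != null && attr.IsJITOptimizerDisabled);
    }
}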

answered May 3, 2012 at 21:50

It's highly unlikely that C# added some trick that would increase speed by that much. It's more likely that either 1) you improved your algorithm, or 2) you had previously run it in debug mode rather than release mode. It's very hard to tell without seeing the program before and after, or the performance analysis that was done on it.

answered May 3, 2012 at 20:44
  • I accounted for that. I only need to comment out the contents of the specular shader, and the performance plummets. Commented May 3, 2012 at 20:46
  • @SystemDown: Sounds like you'll need some sort of profiling tool to get more details out of what is happening. Commented May 3, 2012 at 20:50
  • Updated the question with latest findings. Commented May 3, 2012 at 21:15

AFAIK, C# does not do that, and neither do any of the other mainstream platforms and languages.
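
If you did want that kind of sharing, you would have to code it explicitly, e.g. factor the calculation into one helper, compute it once per hit, and pass the result to both components. A minimal sketch (the Reflect helper and the surrounding names are hypothetical, not the asker's actual code):

// C# will not memoize this automatically; every call recomputes the result.
private static Vector3D Reflect(Vector3D direction, Vector3D normal)
{
    // R = D - 2(D.N)N, with '*' between vectors being the dot product,
    // as in the question's Vector3D type.
    return (direction - (2 * (direction * normal) * normal)).Normalize();
}

// At the intersection point: compute once, then pass the same vector
// to both the reflection and the specular code paths as a parameter.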

But there are tons of other possible explanations. You may have hit an edge case in the optimizer, or the extra code may lead to more favorable branch prediction, more beneficial garbage collection behavior, or a better distribution of cache misses.

The latter would be my first guess, but without actual profiling, there is no way to tell for sure.

answered May 3, 2012 at 21:05
  • Hmmm. Do you know of a good (free) profiler I could use? Commented May 3, 2012 at 21:08
  • Updated the question with latest findings. Commented May 3, 2012 at 21:15
  • @SystemDown: ANTS has a 14-day free trial: red-gate.com/products/dotnet-development/… Commented May 3, 2012 at 23:00
  • Awesome! Glad you figured it out. Commented May 4, 2012 at 0:12
