Over the years, I've made a habit of collecting curious, weird, cool, or just downright perverted snippets of code. So I thought I should put some of them online as part of an occasional series for your viewing pleasure.
The first article in this series investigates an interesting C# code snippet that demonstrates one of the perils of compiler code optimisation. The default Debug build turns off code optimisation, whereas the default Release build turns it on. In theory, turning on code optimisation gives you faster production code, but there is some risk: if the compiler is overzealous in its optimisation, or makes assumptions that differ from those made by the developer, you can end up with a bug that only appears in code built using the Release build. So you have to weigh the risk of an optimisation problem appearing in production against the improved speed of the optimised code. The following console application demonstrates this behaviour.
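The original listing doesn't survive here, so below is a minimal stand-in that exhibits the same class of behaviour; the repeated-addition arithmetic and the class name are my assumptions, not the original code. The key ingredient is a floating-point computation whose intermediate results can be held at higher precision than the declared variables.

    using System;

    class Program
    {
        static void Main()
        {
            double sum = 0.0;
            for (int i = 0; i < 10; i++)
                sum += 0.1;    // 0.1 has no exact binary representation

            // Compare against 1.0 within double.Epsilon rather than using ==.
            bool result = Math.Abs(sum - 1.0) < double.Epsilon;
            Console.WriteLine(result);
            Console.ReadLine();
        }
    }

With this stand-in the exact values may land differently on a given runtime; the original snippet was chosen so that optimised and unoptimised runs disagree, printing True in one case and False in the other.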
When you run this code under the Visual Studio debugger, regardless of whether you're using a Debug or Release build, you'll see a result of True. But when you run this code without a debugger, once again regardless of whether it's a Debug or Release build, you'll see a result of False. Note that the comparison of the result using double.Epsilon is essential if you're using variables of type "double", because some numbers can't be represented exactly in binary. This is similar to the problem of trying to represent 1/3 exactly as a decimal (0.3333...).
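If you want to see the representation problem on its own, independent of any optimisation issue, a tiny demonstration of the inexactness of binary fractions looks like this:

    using System;

    class RepresentationDemo
    {
        static void Main()
        {
            // 0.1 and 0.2 are both rounded when stored as binary doubles,
            // so their sum is not exactly the double nearest to 0.3.
            Console.WriteLine((0.1 + 0.2).ToString("R"));   // prints 0.30000000000000004
            Console.WriteLine(0.1 + 0.2 == 0.3);            // prints False
        }
    }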
So regardless of the code optimisation setting specified by the build used to compile this code, running the code under the Visual Studio debugger produces a different result to running it without the debugger. Is this actually a code optimisation problem or a debugger problem?
The first thing to realise is that code optimisation is a two-step process. When you compile your source code, the resulting Intermediate Language (IL) is optimised or not, depending on the build setting. In fact, very few optimisations are performed at this level. Any substantial code optimisation is done by the Just-In-Time (JIT) compiler as it converts the IL into native code at run-time, by default using the same code optimisation build setting used by the source language compiler.
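As a reminder of where that first step is controlled, these are the standard csc command-line switches; the mapping to the default Visual Studio configurations is approximate:

    rem Roughly what a default Debug build does: no IL optimisation, full debug info.
    csc /optimize- /debug+ Program.cs

    rem Roughly what a default Release build does: IL optimisation on.
    csc /optimize+ Program.cs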
The next thing to realise is that the Visual Studio debugger always tells the JIT compiler to turn off code optimisation, regardless of the build setting. This makes debugging easier, as the debugger can map between the source code and the native code much more easily if the native code isn't optimised. This is why the code optimisation setting specified in the build configuration doesn't affect this specific test.
The console debugger Cordbg has the neat facility of being able to turn JIT code optimisation on and off at will. If you run this code under Cordbg, you'll see a result of True if you run without JIT code optimisation, and a result of False if you run with JIT code optimisation. This is regardless of the code optimisation setting specified in the build configuration. This tells us that the problem is probably related to JIT code optimisation, not to source code optimisation or a debugger being present.
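From memory, the relevant Cordbg facility was its mode command, issued before the method in question is JIT-compiled, where 1 lets the JIT optimise any code it compiles from that point on and 0 turns optimisation back off; treat the exact spelling as approximate rather than gospel:

    (cordbg) mode JitOptimizations 1
    (cordbg) mode JitOptimizations 0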
The next step is to look at the native code generated with and without JIT optimisation. Looking at the native code without optimisation is easy - just run a Debug build of the app under the Visual Studio debugger and look at the Disassembly window. Looking at the optimised native code using the Visual Studio debugger is slightly harder, because as I said above, the Visual Studio debugger turns off the JIT optimiser. But there is a little-known trick that can help you here. First, add a call to System.Diagnostics.Debugger.Break() just before the Console.ReadLine. Then compile the app as a Release build (i.e. with code optimisation) and run it without using the Visual Studio debugger.
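Concretely, the tail end of Main from the earlier sketch would become:

    Console.WriteLine(result);
    System.Diagnostics.Debugger.Break();   // raises the user breakpoint
    Console.ReadLine();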
When the Break call is reached, you'll see an ugly dialog stating that a user-generated breakpoint has been hit. Press Retry to say that you want to debug, and then specify that you want to use the current instance of Visual Studio and that you want to debug only managed code. You should then be dumped unceremoniously into Visual Studio's Disassembly window. Because the native code has already been generated by the JIT compiler before the debugger was attached, you should be able to see the optimised version of the code.
Looking closely at the optimised native code, it appears to be reusing the 80-bit values already on the x87 floating-point register stack rather than the 64-bit values stored in the variables (memory locations), and this leads to the slight, but significant, difference in the final result. This is just business as usual in the wacky world of floating-point arithmetic.
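As an aside, if you need to defend against this in your own code, the C# specification allows intermediate floating-point results to be held at excess precision, but guarantees that an explicit cast to float or double forces the value back to the declared precision. Applied to the loop in the earlier sketch, that defence would look something like this:

    double sum = 0.0;
    for (int i = 0; i < 10; i++)
        sum = (double)(sum + 0.1);   // the explicit cast forces a round to 64 bits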
What is rather scary about the above code is that you can wrap its significant ingredient into a function that returns a boolean specifying whether or not the native code generated for your program is being optimised. This is somewhat easier, though less reliable, than using reflection to check the assembly's DebuggableAttribute and its IsJITOptimizerDisabled property.
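For comparison, here is a sketch of that reflection-based check; the helper name is hypothetical, and an assembly carrying no DebuggableAttribute at all, which is typical of a Release build, is treated here as optimised:

    using System;
    using System.Diagnostics;
    using System.Reflection;

    class JitCheck
    {
        // Hypothetical helper: infers the JIT optimisation setting from the
        // assembly-level DebuggableAttribute rather than by observing the
        // behaviour of the generated native code.
        public static bool IsJitOptimisationEnabled(Assembly assembly)
        {
            DebuggableAttribute attribute = (DebuggableAttribute)
                Attribute.GetCustomAttribute(assembly, typeof(DebuggableAttribute));

            // No attribute means the compiler asked for no special treatment,
            // so the JIT is free to optimise.
            if (attribute == null)
                return true;

            return !attribute.IsJITOptimizerDisabled;
        }
    }

You would call it as JitCheck.IsJitOptimisationEnabled(Assembly.GetExecutingAssembly()). Note that this reports the build-time request baked into the assembly, not whether a debugger has since told the JIT to ignore it, which is exactly why the floating-point trick and the reflection check can disagree.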