Fighting the software pirates

UPDATED: To add code to detect an un-managed debugger, and a link to a software piracy article.

If you're writing shrinkwrap closed-source software nowadays, you need to spend a little time thinking about software piracy. This is a large subject, with lots of angles to consider. You need to think about your target audience, how fast you want your software to spread (piracy can actually help with this), what licensing terms you want to have, how strictly you want to enforce your licensing terms, and so on. You also want to think about how you're going to thwart the software pirates directly from within your code.

One advantage that the .NET platform gives you is the ability to create a cryptographically-strong licensing solution that makes life much more difficult for the hackers. There are several such tools on the market - my current favourite is Desaware's Licensing System, which implements product activation and optional node locking by way of the tools built into .NET and the underlying operating system.

As with any security system, it's not enough for .NET to provide you with cryptographically-strong tools - you also need to implement them properly. And of course if the pirate owns the machine on which he's running your software, it's probably impossible to prevent a really determined attack from succeeding, regardless of the strength of your licensing. The best that you can do under these circumstances is to make the attacker's job as difficult and as time-consuming as possible.

Without looking at licensing systems in general, I want to show you some simple ways in which you can make life more difficult for somebody trying to hack your software. These tricks have the benefit of inconveniencing hackers without in any way making life more difficult for your legitimate end-users. 

  • At compile-time, obfuscate your assembly using a .NET obfuscator such as XenoCode.
  • At compile-time, give a strong name to the assembly.
  • Check at run-time that .NET Code Access Security is switched on.
  • Check at run-time that no debugger is attached to the process executing your assembly.
  • Check at run-time that the assembly's public key token matches your public key token.
  • Check at run-time that the assembly's strong name is valid. 
  • Check at run-time that the assembly's strong name signature hasn't been hacked.

Use an obfuscator

An obfuscator takes the IL that makes up your released assemblies and munges it in a way that makes it very difficult for an attacker to figure out what your code is doing. From an anti-piracy point of view, probably the most helpful feature of an obfuscator is the ability to encrypt the strings in your assembly after it's been compiled, and only decrypt them at run-time. This prevents an attacker from simply grepping your assembly for sensitive information such as, say, the public key token that you compare against at run-time to check that nobody has tampered with your assembly's identity.

Create a strong name for each assembly

When you sign an assembly with a strong name based on a private key that you create, this has the following benefits:

  • A strong name guarantees the uniqueness of your assembly's identity by adding a public key token and a digital signature to the assembly.
  • A strong name ensures that the assembly genuinely comes from the publisher with that key pair (namely you), and from nobody else.
  • A strong name provides a strong integrity check. Passing the .NET Framework security checks guarantees that the contents of the assembly haven't been changed since it was built.
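
For reference, here's a sketch of how a key pair is typically created and applied with the .NET 1.x tools. The file name MyCompany.snk and the relative path are examples only:

    // Generate a key pair once, from a Visual Studio .NET command prompt:
    //
    //     sn -k MyCompany.snk
    //
    // Then reference the key file from AssemblyInfo.cs. Note that the 1.x
    // compilers resolve this path relative to the output directory, hence
    // the ..\.. to climb back up to the project directory.
    using System.Reflection;

    [assembly: AssemblyDelaySign(false)]
    [assembly: AssemblyKeyFile(@"..\..\MyCompany.snk")]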

However ... as we'll see shortly, there's a nasty bug in versions 1.0 and 1.1 of the .NET Framework that significantly reduces the effectiveness of strong names. I'll look below at how you can mitigate the effects of this bug.

Check that .NET CAS is switched on

Disabling .NET Code Access Security (CAS) allows a hacker to perform luring and other attacks on your code. If you want to make life slightly more difficult for a hacker, you should check at start-up that CAS hasn't been switched off, using code along the lines shown below.
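
Here's a minimal sketch of such a check. The class name SecurityChecks is my own invention; the key ingredient is the SecurityManager.SecurityEnabled property, which reflects the machine-wide CAS switch that a hacker can flip with "caspol -s off".

    using System;
    using System.Security;

    public sealed class SecurityChecks
    {
        private SecurityChecks() {}   // no instances needed

        // Throws if Code Access Security has been disabled machine-wide.
        public static void CheckCasIsEnabled()
        {
            if (!SecurityManager.SecurityEnabled)
            {
                throw new SecurityException(
                    "Code Access Security has been switched off on this machine.");
            }
        }
    }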

Check for somebody using a debugger

The ability to step through your code with a managed or un-managed debugger makes life significantly easier for a hacker. Here's some code to make sure that neither type of debugger has been attached to the process executing your assembly.
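
A sketch of what such a check might look like, continuing the hypothetical SecurityChecks class from above. Debugger.IsAttached covers the managed case; the un-managed case is delegated to the NativeMethods class shown in a moment:

    // Part of the SecurityChecks class - needs using System; and
    // using System.Diagnostics;
    // Throws if a managed or un-managed debugger is attached to this process.
    public static void CheckForDebugger()
    {
        // Managed debugger, e.g. the VS.NET debugger attached to the CLR.
        if (Debugger.IsAttached)
        {
            throw new ApplicationException("A managed debugger is attached.");
        }

        // Un-managed (native) debugger - see the NativeMethods class below.
        if (NativeMethods.IsDebuggerPresent())
        {
            throw new ApplicationException("An un-managed debugger is attached.");
        }
    }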

The check for an un-managed debugger is actually done in a separate class called NativeMethods (to keep FxCop happy).
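
A minimal version of that class might look like this - IsDebuggerPresent is a standard kernel32 export that reports whether a native debugger is attached to the calling process:

    using System.Runtime.InteropServices;

    // P/Invoke declarations live in their own class to keep FxCop happy.
    internal sealed class NativeMethods
    {
        private NativeMethods() {}   // no instances needed

        // Returns true if an un-managed debugger is attached to this process.
        [DllImport("kernel32.dll")]
        internal static extern bool IsDebuggerPresent();
    }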

Check assembly public key token is yours

An assembly's public key token is part of its unique identity, so if you want to validate the integrity of your assembly, you should check that a hacker hasn't tampered with the public key token that was written into your assembly when you applied the strong name. The method below accepts a byte array containing your public key token, and compares it with the actual token of the assembly. Note that for this method to be effective, your obfuscator should encrypt the string containing your public key token, and only decrypt it on the fly as it's used. And also be aware that you need FullTrust permission for this code to work, because it uses reflection under the hood.
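
Something along the following lines should do the job - the method name and the choice of exception are mine:

    // Part of the SecurityChecks class - needs using System.Reflection;
    // Compares the executing assembly's actual public key token with the
    // expected token passed in by the caller.
    public static void CheckPublicKeyToken(byte[] expectedToken)
    {
        byte[] actualToken =
            Assembly.GetExecutingAssembly().GetName().GetPublicKeyToken();

        if (actualToken == null || actualToken.Length != expectedToken.Length)
        {
            throw new ApplicationException("Public key token has been tampered with.");
        }

        for (int i = 0; i < actualToken.Length; i++)
        {
            if (actualToken[i] != expectedToken[i])
            {
                throw new ApplicationException("Public key token has been tampered with.");
            }
        }
    }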

Check assembly strong name is valid

By default, the CLR will check that your assembly was signed with a specific key. A hacker can disable this check via the registry, which can help him to hack your assembly code, perhaps to subvert the licensing scheme in some way. To prevent this particular hack, you can force verification of the strong name signature regardless of whether it has been disabled in the registry. The following code demonstrates a call into a static method of another class called NativeMethods (to keep FxCop happy) - this is where the verification is actually enforced.
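
A sketch of the calling side - the fForceVerification argument is what makes the runtime re-check the signature even when a skip-verification registry entry is present:

    // Part of the SecurityChecks class - needs using System.Reflection;
    // Forces verification of this assembly's strong name signature.
    public static void CheckStrongNameIsValid()
    {
        bool wasVerified = false;
        string assemblyPath = Assembly.GetExecutingAssembly().Location;

        if (!NativeMethods.StrongNameSignatureVerificationEx(
                assemblyPath, true, ref wasVerified))
        {
            throw new ApplicationException("Strong name verification failed.");
        }
    }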

The actual signature verification is done using P/Invoke, as shown below. The usage of the StrongNameSignatureVerificationEx API is quite convoluted - for a decent explanation, see this blog entry.
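
The declaration below belongs in the same NativeMethods class shown earlier. Note that the native parameters are 1-byte BOOLEANs rather than 4-byte BOOLs, hence the explicit U1 marshalling:

    // Verifies the strong name signature of the assembly at wszFilePath.
    // Returns true if the signature verified successfully; pfWasVerified
    // reports whether verification was actually performed, rather than
    // skipped because of a registry entry.
    [DllImport("mscoree.dll", CharSet=CharSet.Unicode)]
    [return: MarshalAs(UnmanagedType.U1)]
    internal static extern bool StrongNameSignatureVerificationEx(
        string wszFilePath,
        [MarshalAs(UnmanagedType.U1)] bool fForceVerification,
        [MarshalAs(UnmanagedType.U1)] ref bool pfWasVerified);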

Check assembly's strong name signature hasn't been hacked

This page on the Web demonstrates the nasty CLR bug mentioned above. If you zeroise just a single byte in a strongly-named assembly, this will cause the 1.0 and 1.1 CLRs to believe that the assembly is no longer strongly named, with all of the security holes that this implies. In other words, strong name verification has been cracked. This bug appears to be fixed in .NET 2.0, but that's not going to help you until you start deploying 2.0 assemblies.

The following code parses its container assembly looking for the first byte of the strong name signature, and then verifies that it's not zero - this should prevent the hack above. Note that the code is rather messy - if anybody knows of a neater way to locate the first byte of the strong name signature, please email me and I'll fix this code.
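
Here's a sketch of such a check. It reads the assembly back off disk, walks the PE headers to locate the file offset of the strong name signature, and complains if the first byte has been zeroed. It assumes a PE32 image, which is what the 1.x compilers produce.

    // Part of the SecurityChecks class - needs using System, System.IO
    // and System.Reflection.
    // Throws if the first byte of this assembly's strong name signature
    // has been zeroed out on disk.
    public static void CheckStrongNameSignatureFirstByte()
    {
        string path = Assembly.GetExecutingAssembly().Location;

        byte[] image;
        using (FileStream fs =
            new FileStream(path, FileMode.Open, FileAccess.Read))
        {
            image = new byte[(int)fs.Length];
            fs.Read(image, 0, image.Length);
        }

        int sigOffset = GetStrongNameSignatureOffset(image);
        if (sigOffset < 0 || image[sigOffset] == 0)
        {
            throw new ApplicationException(
                "Strong name signature has been tampered with.");
        }
    }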

The helper routines used by the above code are shown below.
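
These are my versions of those helpers: a PE header walk to find the signature's file offset, an RVA-to-file-offset conversion, and a couple of little-endian readers. The magic numbers (data directory 14 for the CLI header, offset 32 for the StrongNameSignature directory within it) come from the PE and ECMA CLI file format specifications.

    // Walks the PE headers to find the file offset of the strong name
    // signature, or returns -1 if the assembly doesn't have one.
    // Assumes a PE32 image, which is what the 1.x compilers produce.
    private static int GetStrongNameSignatureOffset(byte[] image)
    {
        int peOffset = ReadInt32(image, 0x3C);             // IMAGE_DOS_HEADER.e_lfanew
        int numSections = ReadUInt16(image, peOffset + 6); // NumberOfSections
        int optSize = ReadUInt16(image, peOffset + 20);    // SizeOfOptionalHeader
        int optOffset = peOffset + 24;                     // optional header
        int sectionTable = optOffset + optSize;            // section headers

        // Data directory 14 is the CLI header; in a PE32 optional header
        // the data directories start at offset 96.
        int cliRva = ReadInt32(image, optOffset + 96 + (14 * 8));
        if (cliRva == 0)
        {
            return -1;   // not a managed image
        }
        int cliOffset = RvaToFileOffset(image, cliRva, sectionTable, numSections);

        // The StrongNameSignature directory lives at offset 32 in the CLI header.
        int snRva = ReadInt32(image, cliOffset + 32);
        int snSize = ReadInt32(image, cliOffset + 36);
        if (snRva == 0 || snSize == 0)
        {
            return -1;   // no strong name signature present
        }
        return RvaToFileOffset(image, snRva, sectionTable, numSections);
    }

    // Converts a relative virtual address into a raw file offset by
    // finding the section that contains it.
    private static int RvaToFileOffset(
        byte[] image, int rva, int sectionTable, int numSections)
    {
        for (int i = 0; i < numSections; i++)
        {
            int s = sectionTable + (i * 40);   // each section header is 40 bytes
            int virtualAddress = ReadInt32(image, s + 12);
            int rawSize = ReadInt32(image, s + 16);
            int rawPointer = ReadInt32(image, s + 20);
            if (rva >= virtualAddress && rva < virtualAddress + rawSize)
            {
                return rva - virtualAddress + rawPointer;
            }
        }
        throw new BadImageFormatException("RVA not found in any section.");
    }

    // Little-endian readers for the raw image bytes.
    private static int ReadUInt16(byte[] b, int offset)
    {
        return b[offset] | (b[offset + 1] << 8);
    }

    private static int ReadInt32(byte[] b, int offset)
    {
        return b[offset] | (b[offset + 1] << 8)
             | (b[offset + 2] << 16) | (b[offset + 3] << 24);
    }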

Just remember that none of these security precautions will defeat a really determined and knowledgeable hacker who has full ownership of the machine where s/he's doing the hacking. But taken together, these checks should deter 99% of the potential pirates of your code. I've placed all of the code mentioned above into two C# classes: the first contains the security checks, and the second contains the required P/Invoke declarations. I did it this way to please FxCop, which insists on P/Invoke calls being placed in a separate class called NativeMethods.

UPDATED AGAIN: Please note that the code shown above isn't production code, and hasn't been tested like production code. I've done some cursory testing with a FullTrust .NET 1.1 assembly running under Windows XP, but that's all. If you want to run this code in production, please test it thoroughly, especially if you're running an older OS or if your assembly might not run with FullTrust.