Cryptography in .NET: Avoiding Timing Attacks

When comparing two MACs or hashes (including password hashes) for equality, you might think a simple comparison would be okay. Think again: it is susceptible to timing attacks.

In cryptography, a timing attack is a side channel attack in which the attacker attempts to compromise a cryptosystem by analysing the time taken to execute cryptographic algorithms.

If you want to read more about this, take a look at this article.

if ($hash1 == $hash2) {
    // MAC verification is okay
    return "hashes are equal";
} else {
    // something happened
    return "hash verification failed!";
}

Okay, so what is wrong with this?

Both arguments must be of the same length to be compared successfully. When arguments of differing length are supplied, false is returned immediately, which means the length of the known string can be leaked in a timing attack: by measuring how long the comparison takes against candidates of different lengths, an attacker can deduce the length of the secret.

The following method compares two byte arrays in length-constant time. This comparison method is used so that password hashes cannot be extracted from online systems using a timing attack and then attacked offline.

private static bool SlowEquals(byte[] a, byte[] b)
{
    uint diff = (uint)a.Length ^ (uint)b.Length;
    for (int i = 0; i < a.Length && i < b.Length; i++)
        diff |= (uint)(a[i] ^ b[i]);
    return diff == 0;
}

What does the line diff |= (uint)(a[i] ^ b[i]); do?

This sets diff based on whether there’s a difference between a and b.

It avoids a timing attack by always walking through the entirety of the shorter of the two of a and b, regardless of whether there’s a mismatch sooner than that or not.

The diff |= (uint)(a[i] ^ b[i]) statement takes the exclusive-or of a byte of a with the corresponding byte of b. That will be 0 if the two bytes are the same, or non-zero if they’re different. It then ORs that result into diff.

Therefore, diff will be set to non-zero in an iteration if a difference was found between the inputs in that iteration. Once diff is given a non-zero value at any iteration of the loop, it will retain the non-zero value through further iterations.

Therefore, the final result in diff will be non-zero if any difference is found between corresponding bytes of a and b, and 0 only if all bytes (and the lengths) of a and b are equal.

Unlike a typical comparison, however, this will always execute the loop until all the bytes in the shorter of the two inputs have been compared to bytes in the other. A typical comparison would have an early-out where the loop breaks as soon as a mismatch is found:
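For contrast, here is a minimal sketch (the name is mine) of such an early-out comparison; this is exactly what you should not use for secrets:

private static bool FastEquals(byte[] a, byte[] b)
{
    if (a.Length != b.Length)
        return false;

    for (int i = 0; i < a.Length; i++)
    {
        // Early-out: the loop stops at the first mismatch, so the time taken
        // reveals how many leading bytes matched.
        if (a[i] != b[i])
            return false;
    }

    return true;
}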

Microsoft provides SecureString to hold sensitive data, but for equality it only offers the generic Equals(Object) method, which determines whether the specified object is equal to the current object. There is no information saying that the Equals method protects against timing attacks, so we can only conclude that it is not safe for this operation.

I would suggest researching and writing our own string comparison that is protected against timing attacks.

I’m not sure about all of this and further research is required, but I did find this article which is worth considering:

https://paragonie.com/blog/2015/11/preventing-timing-attacks-on-string-comparison-with-double-hmac-strategy
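To give a flavour of the idea in that article, here is a minimal sketch of a double-HMAC comparison, assuming HMACSHA256 and a fresh random 32-byte key per comparison (the key size and algorithm are my assumptions, not a prescription):

using System.Security.Cryptography;

private static bool DoubleHmacEquals(byte[] a, byte[] b)
{
    // Re-key both inputs through HMAC with a random key, so even a naive
    // comparison of the results leaks nothing useful about the originals.
    byte[] key = new byte[32]; // assumed key size
    using (var rng = RandomNumberGenerator.Create())
        rng.GetBytes(key);

    using (var hmac = new HMACSHA256(key))
    {
        byte[] aMac = hmac.ComputeHash(a);
        byte[] bMac = hmac.ComputeHash(b);
        return SlowEquals(aMac, bMac); // the length-constant compare from above
    }
}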

SecureString

I’ve come across a class in the System.Security namespace that is often overlooked. The class is SecureString. In this post, I will go over what SecureString is and why it is needed.

Have you ever come across these scenarios before?

  • A password appears in a log file accidentally.
  • A password is shown somewhere – once, a GUI displayed the command line of a running application, and the command line included the password.
  • You profile the software with a colleague using a memory profiler, and your colleague sees your password in memory.
  • RedGate software that can capture the “value” of local variables in case of exceptions is amazingly useful, though I can imagine it logging string passwords accidentally.
  • A crash dump that includes a string password.

Do you know how to avoid all these problems? SecureString. It makes sure you don’t make silly mistakes like these. How does it avoid them? By making sure that the password is encrypted in unmanaged memory and that the real value can only be accessed when you are 90% sure of what you’re doing.

In essence, SecureString works quite simply:

  1. Everything is encrypted
  2. User calls AppendChar
  3. Decrypt everything in UNMANAGED MEMORY and add the character
  4. Encrypt everything again in UNMANAGED MEMORY.

What if the user has access to your computer? Would a virus be able to get access to all the SecureStrings? Yes. All you need to do is hook yourself into RtlEncryptMemory, decrypt the memory, and you will get the location of the unencrypted memory address and can read it out. Voila! In fact, you could make a virus that continuously scans for usage of SecureString and logs all activity with it. I am not saying it would be an easy task, but it can be done. As you can see, the “power” of SecureString is completely gone once there’s a hostile user or virus on your system.

There are a few caveats here. Sure, if some of the UI controls you use hold a plain “string password” internally, then using an actual SecureString is not that useful.

The bottom line is: if you have sensitive data (passwords, credit cards, etc.), use SecureString. This is what the .NET Framework follows; for example, the NetworkCredential class can store its password as a SecureString. If you look, you can see around 80 different usages of SecureString across the .NET Framework.
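As a small illustration (the user name and password here are obviously made up), NetworkCredential will accept a SecureString directly, so your code never needs to materialise the plain text:

using System.Net;
using System.Security;

var password = new SecureString();
foreach (char c in "p@ssw0rd") // illustrative only; never hard-code a real password
    password.AppendChar(c);
password.MakeReadOnly();

// NetworkCredential can hold the password as a SecureString directly.
var credential = new NetworkCredential("alice", password);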

There are many cases when you have to convert SecureString to string because some API expects it.

The usual problem is either:

  • The API is GENERIC. It does not know that it is dealing with sensitive data.
  • The API knows that it is dealing with sensitive data and uses “string” anyway – that’s just bad design.

You raised a good point: what happens when you convert SecureString to a string? That can only happen because of the first point, i.e. the API does not know it’s dealing with sensitive data. I have personally not seen that happening. Getting a string out of a SecureString is not that simple.

It’s not simple for a simple reason: it was never intended that the user convert a SecureString to a string; as you stated, the GC will kick in. If you see yourself doing that, you need to step back and ask yourself: why am I even doing this, and do I really need to?

You can always extend the SecureString class with an extension method, such as ToEncryptedString(__SERVER__PUBLIC_KEY), which gives you a string instance of the SecureString that is encrypted using the server’s public key. Only the server can then decrypt it. Problem solved: the GC will never see the “original” string, as you never expose it in managed memory. This is precisely what is done in PSRemotingCryptoHelper (EncryptSecureStringCore(SecureString secureString)).

And as something almost related: Mono’s SecureString does not encrypt at all. The implementation has been commented out because, wait for it, “it somehow causes nunit test breakage”, which brings me to my last point:

SecureString is not supported everywhere. If the platform/architecture does not support SecureString, you’ll get an exception. The recommended list of deployment platforms is in the documentation.

Using SecureString for Sensitive Data

The standard System.String class has never been a very secure solution for storing sensitive strings such as passwords or credit card numbers. Using a string for this purpose has numerous problems: it’s not pinned, so the garbage collector can move it around and will leave several copies in memory; it’s not encrypted, so anyone who can read your process’s memory will be able to see the value of the string easily; if your process gets swapped out to disk, the unencrypted contents of the string will be sitting in your swap file; and it’s not mutable, so whenever you modify it both the old version and the new version end up in memory.

Since it’s not mutable, there’s also no effective way to clear it out when you’re done using it. Secure strings, on the other hand, are held in encrypted memory by the CLR using the Data Protection API (DPAPI), and they’re only decrypted when they are accessed. This limits the amount of time that your string is in plain text for an attacker to see.

The garbage collector will not move the encrypted string around in memory, so you never have to worry about multiple copies of your string sitting in your address space, unless you make copies of it yourself.

SecureString also implements IDisposable, and when it’s disposed (or finalised, if you forget to dispose of it), the memory that was used to hold your encrypted string will be zeroed out.

It also provides a feature that lets you lock the string down as read-only, preventing other code from modifying it.

You can create a secure string with a pointer to a character array and a length of that array. When constructed this way, the secure string will make a copy of your array, allowing you to zero out your insecure copy.

A secure string can also be constructed without an existing character array, and the data can be copied in one character at a time.

One important thing to note, though, is that SecureString shouldn’t be used as a blanket replacement for System.String; it should only be used in places where you need to store sensitive information that you do not want to spread around in memory for longer than is required.

To add or modify data in your string, standard operations are provided. For instance, you’ll find AppendChar, InsertAt, RemoveAt, and SetAt methods. MakeReadOnly and IsReadOnly allow you to lock down the secure string. Clear, Dispose, and the finaliser take care of removing any trace of the secure string from memory.

The main idea with SecureString is that you would never store a password or other textual secret in plain text. Unfortunately, SecureString was introduced into the framework only after plenty of APIs were built and shipped using passwords stored in a string, such as System.Net.NetworkCredential. So any application that must use these APIs had no option but to convert secure strings to strings.

However, the SecureString class itself doesn’t provide any method to get back a plain string with its content, precisely to discourage this type of usage. What a developer has to do is use functions from System.Runtime.InteropServices.Marshal to get a native buffer with the plain string, marshal the value into a managed string and then, very importantly, free the native buffer. The best implementation to get a string out of a secure string uses a try/finally block to free the native buffer.
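A minimal sketch of that try/finally pattern might look like this (the method name is mine):

using System;
using System.Runtime.InteropServices;
using System.Security;

public static string ToUnsecureString(SecureString source)
{
    if (source == null)
        return null;

    IntPtr unmanagedBuffer = IntPtr.Zero;
    try
    {
        // Decrypts the SecureString into an unmanaged Unicode buffer
        unmanagedBuffer = Marshal.SecureStringToGlobalAllocUnicode(source);
        return Marshal.PtrToStringUni(unmanagedBuffer);
    }
    finally
    {
        // Very importantly: zero and free the native buffer
        Marshal.ZeroFreeGlobalAllocUnicode(unmanagedBuffer);
    }
}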

Here is a sample application showing how you can use SecureString. Please remember: in production, do not convert the SecureString back to a string, as that just does not make sense and makes it insecure.

SecureString

Here is a short extension method to convert a string to a SecureString:

public static SecureString ToSecureString(this string source)
{
    if (string.IsNullOrWhiteSpace(source))
        return null;

    SecureString result = new SecureString();
    foreach (char c in source.ToCharArray())
        result.AppendChar(c);
    return result;
}

Raising Multiple Exceptions with AggregateException

There are occasions where you are aware of many exceptions within your code that you want to raise together in one go. Perhaps your system makes a service call to a middleware orchestration component that potentially returns many exceptions detailed in its response, or another scenario might be a batch processing task dealing with many items in one process that require you to collate all exceptions until the end and then throw them together.

Let us look at the batch scenario in more detail. In this situation, if you raised the first exception that you found, the method would exit without processing the remaining items. Alternatively, you could store the exception information in a variable of some sort and, once all the elements are processed, use that information to construct an exception and throw it. While this approach works, there are some drawbacks. There is the extra effort required to create a viable storage container to hold the exception information, and this may mean modifying existing code not to throw an exception but instead to log the details in this new ‘exception detail helper class’. This solution also lacks the additional benefits you get from creating an exception at the point of failure: for example, the numerous intrinsic properties within Exception objects that provide valuable additional context to support the exception message. Even when all the relevant information has been collated into a single exception class, you are still left with one exception holding all that information when you may need to handle the exceptions individually and pass them off to existing error-handling frameworks which rely on a type deriving from Exception.

Luckily, .NET Framework 4.0 includes the simple but very useful AggregateException class, which lives in the System namespace (within mscorlib.dll). It was created for use with the Task Parallel Library, and its use within that library is described on MSDN. Don’t think that is its only use though: it can be put to good use within your own code in situations like those described above where you need to throw many exceptions, so let’s see what it offers.

The AggregateException class is an exception type, inheriting from System.Exception, that acts as a wrapper for a collection of child exceptions. Within your code, you can create instances of any exception-based type and add them to the AggregateException’s collection. The idea is a simple one, but the AggregateException’s beauty comes in the implementation of this simplicity. As it is a regular exception class, it can be handled in the usual way by existing code, but also treated as a unique exception collection by the particular system that cares about all the exceptions nested within its bowels.

The class accepts the child exceptions on one of its seven constructors and then exposes them through its InnerExceptions property. Unfortunately, this is a read-only collection, and so it is not possible to add inner exceptions to the AggregateException after it has been instantiated (which would have been nice), so you will need to store your exceptions in a collection until you’re ready to create the AggregateException and throw it:

// create a collection container to hold exceptions
List<Exception> exceptions = new List<Exception>();

// do some stuff here ........

// we have an exception with an inner exception, so add it to the list
exceptions.Add(new TimeoutException("It timed out", new ArgumentException("ID missing")));

// do more stuff .....

// Another exception, add to list
exceptions.Add(new NotImplementedException("Somethings not implemented"));

// all done, now create the AggregateException and throw it
AggregateException aggEx = new AggregateException(exceptions);
throw aggEx;

The method you use to store the exceptions is up to you, as long as you have them all ready when you create the AggregateException. The seven constructors allow you to pass combinations of nothing, a string message, and collections or arrays of inner exceptions.

Once created you interact with the class as you would any other exception type:

try
{
    // do task
}
catch (AggregateException ex)
{
    // handle it
}

This is key, as it means you can make use of existing code and patterns for handling exceptions within your (or third parties’) codebase.

In addition to the general Exception members, the class exposes a few custom ones. The common InnerException property is there for compatibility, and it appears to return the first exception added to the AggregateException via the constructor, so in the example above it would be the TimeoutException instance. All of the child exceptions are exposed via the InnerExceptions read-only collection property.
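For example, a handler can loop over that collection like this:

try
{
    // do task
}
catch (AggregateException ex)
{
    // InnerExceptions holds every child exception added at construction time
    foreach (Exception inner in ex.InnerExceptions)
    {
        Console.WriteLine("{0}: {1}", inner.GetType().Name, inner.Message);
    }
}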

The Flatten() method is another custom member that might prove useful if you find the need to nest AggregateExceptions as inner exceptions within other AggregateExceptions. The method iterates the InnerExceptions collection, and if it finds AggregateExceptions nested as inner exceptions, it promotes their child exceptions to the parent level, as you can see in this example:

AggregateException aggExInner = 
 new AggregateException("inner AggEx", new TimeoutException());
AggregateException aggExOuter1 = 
 new AggregateException("outer 1 AggEx", aggExInner);
AggregateException aggExOuter2 = 
 new AggregateException("outer 2 AggEx", new ArgumentException());
AggregateException aggExMaster =
 new AggregateException(aggExOuter1, aggExOuter2);

If we create this structure of AggregateExceptions with inner exceptions of TimeoutException and ArgumentException, then the InnerExceptions property of the parent AggregateException (i.e. aggExMaster) shows, as expected, two objects, both of type AggregateException and both containing child exceptions of their own.

But if we call Flatten()…

AggregateException aggExFlatterX = aggExMaster.Flatten();

…we get a new AggregateException instance returned that still contains two inner exceptions, but this time the nested AggregateException wrappers have gone, and we have the two child exceptions of TimeoutException and ArgumentException:

This is a useful feature to discard the AggregateException containers (which are effectively just packaging) and expose the real meat: the actual exceptions that have been thrown and need to be addressed.

If you’re wondering how ToString() is implemented, the aggExMaster object in the examples above (without flattening) produces this:

System.AggregateException: One or more errors occurred. ---> System.AggregateException: outer 1 AggEx ---> System.AggregateException: inner AggEx ---> System.TimeoutException: The operation has timed out.
   --- End of inner exception stack trace ---
   --- End of inner exception stack trace ---
   --- End of inner exception stack trace ---
---> (Inner Exception #0) System.AggregateException: outer 1 AggEx ---> System.AggregateException: inner AggEx ---> System.TimeoutException: The operation has timed out.
   --- End of inner exception stack trace ---
   --- End of inner exception stack trace ---
---> (Inner Exception #0) System.AggregateException: inner AggEx ---> System.TimeoutException: The operation has timed out.
   --- End of inner exception stack trace ---
---> (Inner Exception #0) System.TimeoutException: The operation has timed out.<---<---<---
---> (Inner Exception #1) System.AggregateException: outer 2 AggEx ---> System.ArgumentException: Value does not fall within the expected range.
   --- End of inner exception stack trace ---
---> (Inner Exception #0) System.ArgumentException: Value does not fall within the expected range.

As you can see, the data has been formatted in a neat and convenient way for readability, with separators between the inner exceptions.

In summary, this is a very useful class to be aware of and have in your arsenal whether you are dealing with the Parallel Tasks Library or you just need to manage multiple exceptions. I like simple and neat solutions, and to me, this is a good example of that philosophy.

Original Article

Using MongoDB

If you want to set up MongoDB with C#, then the best option is to follow the getting started guide on the MongoDB website:

https://docs.mongodb.com/getting-started/csharp/client/

You can install the NuGet package for MongoDB by typing the following into the Package Manager Console:

  • Install-Package MongoDB.Driver

Although MongoDB does provide an example that you can work through, I did find that the MoveNextAsync method is no longer supported and the example fails.

I built a simple application demonstrating MongoDB CRUD operations, which works very well to show how it can be implemented.
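To give a feel for the driver, here is a minimal sketch of the basic CRUD calls, assuming a local MongoDB instance and illustrative database/collection names:

using MongoDB.Bson;
using MongoDB.Driver;

var client = new MongoClient("mongodb://localhost:27017"); // assumed local instance
var database = client.GetDatabase("test");
var collection = database.GetCollection<BsonDocument>("people");

// Create
collection.InsertOne(new BsonDocument { { "name", "Ada" }, { "role", "Engineer" } });

// Read
var filter = Builders<BsonDocument>.Filter.Eq("name", "Ada");
var person = collection.Find(filter).FirstOrDefault();

// Update
collection.UpdateOne(filter, Builders<BsonDocument>.Update.Set("role", "Architect"));

// Delete
collection.DeleteOne(filter);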

MongoDB

Here is a helpful tool to administer your MongoDB database, Robomongo:

https://robomongo.org

In my search for good source material, I came across this nice video demo: https://youtu.be/ubwC63DwJ8w

Custom Configuration in your config

This post shows how to set up a customised configuration section in the app.config or web.config of your .NET application.

I have written an article on this before, back in 2008, but things have moved on in the libraries and this is the latest and easiest way to create your own customised configuration.

We just need to define our class, inherit from System.Configuration.ConfigurationSection, and add a property per setting we wish to store.

using System;
using System.Configuration;

public class BlogSettings : ConfigurationSection
{
    private static BlogSettings settings =
        ConfigurationManager.GetSection("BlogSettings") as BlogSettings;

    public static BlogSettings Settings
    {
        get { return settings; }
    }

    [ConfigurationProperty("frontPagePostCount", DefaultValue = 20, IsRequired = false)]
    [IntegerValidator(MinValue = 1, MaxValue = 100)]
    public int FrontPagePostCount
    {
        get { return (int)this["frontPagePostCount"]; }
        set { this["frontPagePostCount"] = value; }
    }

    [ConfigurationProperty("title", IsRequired = true)]
    public string Title
    {
        get { return (string)this["title"]; }
        set { this["title"] = value; }
    }
}

Notice that you use an indexed property to store and retrieve each property value.

Also note the static property named Settings, provided for convenience.

Add your new configuration section to web.config (or app.config).

<configuration>
  <configSections>
    <section name="BlogSettings"
             type="Fully.Qualified.TypeName.BlogSettings, AssemblyName" />
  </configSections>
  <BlogSettings frontPagePostCount="10"
                title="You’ve Been Haacked" />
</configuration>

And to access the new configuration using code it is as simple as:

string title = BlogSettings.Settings.Title;
Response.Write(title); //it works!!!

Here is the source code: Configuration Example

Another article that goes into this in a lot more details can be found on Code Project – Unraveling the Mysteries of .NET 2.0 Configuration

New Features in C# 7.0

Microsoft has released C# 7.0, which comes with a number of new features and brings a focus on data consumption, code simplification and performance. Tuples have been around for a while, but C# 7.0 makes them much easier to use, which makes it easy to return multiple results, and pattern matching simplifies code that is conditional on the shape of data. But there are many other features, big and small.

Tuples

It is common to want to return more than one value from a method. The options available in older versions of C# are less than optimal:

  • Out parameters: clunky to use (even with the improvements C# 7.0 makes to out variables), and they don’t work with async methods.
  • System.Tuple<...> return types: Verbose to use and require an allocation of a tuple object.
  • Custom-built transport type for every method: A lot of code overhead for a type whose purpose is just to temporarily group a few values.
  • Anonymous types returned through a dynamic return type: High performance overhead and no static type checking.

To do better at this, C# 7.0 adds tuple types and tuple literals:

(string, string, string) LookupName(long id) // tuple return type
{
    ... // retrieve first, middle and last from data storage
    return (first, middle, last); // tuple literal
}

The method now effectively returns three strings, wrapped up as elements in a tuple value.

The caller of the method will receive a tuple, and can access the elements individually:

var names = LookupName(id);
WriteLine($"found {names.Item1} {names.Item3}.");

Item1 etc. are the default names for tuple elements, and can always be used. But they aren’t very descriptive, so you can optionally add better ones:

(string first, string middle, string last) LookupName(long id) // tuple elements have names

Now the recipient of that tuple has more descriptive names to work with:

var names = LookupName(id);
WriteLine($"found {names.first} {names.last}.");

You can also specify element names directly in tuple literals:

return (first: first, middle: middle, last: last); // named tuple elements in a literal

Generally you can assign tuple types to each other regardless of the names: as long as the individual elements are assignable, tuple types convert freely to other tuple types.
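For example (the names here are illustrative):

(string first, string last) p1 = ("Ada", "Lovelace");
(string a, string b) p2 = p1; // fine: element types match, so the names are ignored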

Tuples are value types, and their elements are simply public, mutable fields. They have value equality, meaning that two tuples are equal (and have the same hash code) if all their elements are pairwise equal (and have the same hash code).

This makes tuples useful for many other situations beyond multiple return values. For instance, if you need a dictionary with multiple keys, use a tuple as your key and everything works out right. If you need a list with multiple values at each position, use a tuple, and searching the list etc. will work correctly.
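For instance, a hypothetical composite-key dictionary:

using System;
using System.Collections.Generic;

var distances = new Dictionary<(string origin, string destination), double>();
distances[("London", "Paris")] = 344.0;         // tuple literal as the key
Console.WriteLine(distances[("London", "Paris")]); // value equality makes the lookup work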

Tuples rely on a family of underlying generic struct types called ValueTuple<...>. If you target a Framework that doesn’t yet include those types, you can instead pick them up from NuGet:

  • Right-click the project in the Solution Explorer and select “Manage NuGet Packages…”
  • Select the “Browse” tab and select “nuget.org” as the “Package source”
  • Search for “System.ValueTuple” and install it.

Pattern matching

C# 7.0 introduces the notion of patterns, which, abstractly speaking, are syntactic elements that can test that a value has a certain “shape”, and extract information from the value when it does.

Examples of patterns in C# 7.0 are:

  • Constant patterns of the form c (where c is a constant expression in C#), which test that the input is equal to c
  • Type patterns of the form T x (where T is a type and x is an identifier), which test that the input has type T, and if so, extracts the value of the input into a fresh variable x of type T
  • Var patterns of the form var x (where x is an identifier), which always match, and simply put the value of the input into a fresh variable x with the same type as the input.

This is just the beginning – patterns are a new kind of language element in C#.

In C# 7.0 they are enhancing two existing language constructs with patterns:

  • is expressions can now have a pattern on the right hand side, instead of just a type
  • case clauses in switch statements can now match on patterns, not just constant values

In future versions of C# they are likely to add more places where patterns can be used.

Is-expressions with patterns

Here is an example of using is expressions with constant patterns and type patterns:

public void PrintStars(object o)
{
    if (o is null) return;     // constant pattern "null"
    if (!(o is int i)) return; // type pattern "int i"
    WriteLine(new string('*', i));
}

As you can see, the pattern variables – the variables introduced by a pattern – are similar to the out variables described earlier, in that they can be declared in the middle of an expression, and can be used within the nearest surrounding scope. Also like out variables, pattern variables are mutable. We often refer to out variables and pattern variables jointly as “expression variables”.

Patterns and Try-methods often go well together:

if (o is int i || (o is string s && int.TryParse(s, out i))) { /* use i */ }

Switch statements with patterns

We’re generalizing the switch statement so that:

  • You can switch on any type (not just primitive types)
  • Patterns can be used in case clauses
  • Case clauses can have additional conditions on them

Here’s a simple example:

switch(shape)
{
    case Circle c:
        WriteLine($"circle with radius {c.Radius}");
        break;
    case Rectangle s when (s.Length == s.Height):
        WriteLine($"{s.Length} x {s.Height} square");
        break;
    case Rectangle r:
        WriteLine($"{r.Length} x {r.Height} rectangle");
        break;
    default:
        WriteLine("<unknown shape>");
        break;
    case null:
        throw new ArgumentNullException(nameof(shape));
}

There are several things to note about this newly extended switch statement:

  • The order of case clauses now matters: Just like catch clauses, the case clauses are no longer necessarily disjoint, and the first one that matches gets picked. It’s therefore important that the square case comes before the rectangle case above. Also, just like with catch clauses, the compiler will help you by flagging obvious cases that can never be reached. Before this you couldn’t ever tell the order of evaluation, so this is not a breaking change of behavior.
  • The default clause is always evaluated last: Even though the null case above comes last, it will be checked before the default clause is picked. This is for compatibility with existing switch semantics. However, good practice would usually have you put the default clause at the end.
  • The null clause at the end is not unreachable: This is because type patterns follow the example of the current is expression and do not match null. This ensures that null values aren’t accidentally snapped up by whichever type pattern happens to come first; you have to be more explicit about how to handle them (or leave them for the default clause).

Pattern variables introduced by a case ...: label are in scope only in the corresponding switch section.

Literal improvements

C# 7.0 allows _ to occur as a digit separator inside number literals. I’m not a big fan of this, but I thought it worth mentioning here:

var d = 123_456;
var x = 0xAB_CD_EF;

You can put them wherever you want between digits, to improve readability. They have no effect on the value.

Also, C# 7.0 introduces binary literals, so that you can specify bit patterns directly instead of having to know hexadecimal notation by heart.

var b = 0b1010_1011_1100_1101_1110_1111;

These are just a few of the new features of C# 7.0; to see more, view Mads Torgersen’s blog entry.

Or watch the short video of the new features.

How slow are Exceptions

I’ve seen, from time to time, applications that wrap a try-catch around some code but then do nothing except carry on. So why is this so wrong?

As programmers we want to write quality code that solves problems. Unfortunately, exceptions come as side effects of our code. No one likes side effects, so we soon find our own ways to get around them. I have seen some smart programmers deal with exceptions the following way:

public void ConsumeAndForgetAllExceptions()
{
    try
    {
        // ...some code that throws exceptions
    }
    catch (Exception ex)
    {
        // print the stack trace and carry on as if nothing happened
        Console.WriteLine(ex.StackTrace);
    }
}

What is wrong with the code above?

Once an exception is thrown, normal program execution is suspended and control is transferred to the catch block. The catch block catches the exception and just suppresses it. Execution of the program continues after the catch block, as if nothing had happened.

How about the following?

/// <exception cref="Exception">Supposedly thrown when processing fails.</exception>
public void SomeMethod()
{
}

This method is a blank one; it does not have any code in it. How can a blank method throw exceptions? C# does not stop you from documenting it as though it does. Recently, I came across similar code where the method was documented as throwing exceptions, but there was no code that actually generated that exception. When I asked the programmer, he replied, “I know, it is corrupting the API, but I am used to doing it and it works.”

This debate goes around in circles in the C# community. I have seen several C# programmers struggle with the use of exceptions. If not used correctly, exceptions can slow down your program, as it takes memory and CPU power to create, throw, and catch them. If overused, they make the code difficult to read and frustrating for the programmers using the API. We all know frustration leads to hacks and code smells. The client code may circumvent the issue by just ignoring exceptions or throwing them.

Let’s go back to basics and start with the definition of an Exception:

Exception handling is the process of responding to the occurrence, during computation, of exceptions – anomalous or exceptional conditions requiring special processing – often changing the normal flow of program execution. It is provided by specialized programming language constructs or computer hardware mechanisms.

The important words here, I feel, are special processing: handling something which is out of your control.

Here is a simple example

var a = 0;
int b;
try
{
    b = 10 / a;
}
catch
{
}

So what is wrong with this?

I call it lazy, poor man’s programming. One correct solution could be:

var a = 0;
int b;
if (a != 0)
    b = 10 / a;

Okay, this is a very simple example of abusive exception handling, but does it really matter?

I wrote a small benchmark test to see just what the difference in performance is: a simple 10,000-iteration parallel loop running the same code over and over again.
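The benchmark itself isn’t listed here, but a rough sketch of that kind of test might look like this (the loop body and iteration count are illustrative, not my exact code):

using System;
using System.Diagnostics;
using System.Threading.Tasks;

class ExceptionBenchmark
{
    static void Main()
    {
        const int iterations = 10000;

        // The guarded version: no exception machinery involved
        var guarded = Stopwatch.StartNew();
        Parallel.For(0, iterations, _ =>
        {
            var a = 0;
            int b;
            if (a != 0)
                b = 10 / a;
        });
        guarded.Stop();

        // The abusive version: an exception is thrown and swallowed every iteration
        var swallowed = Stopwatch.StartNew();
        Parallel.For(0, iterations, _ =>
        {
            try
            {
                var a = 0;
                int b = 10 / a; // always throws DivideByZeroException
            }
            catch
            {
            }
        });
        swallowed.Stop();

        Console.WriteLine("Guarded:   {0} ms", guarded.ElapsedMilliseconds);
        Console.WriteLine("Swallowed: {0} ms", swallowed.ElapsedMilliseconds);
    }
}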

I was quite overwhelmed by the performance hit of the exception handling, even in this small example. The exception handling caused the code to slow down by over 600 times.

600 times slower code; well, that does not matter if it is only happening once, I hear you say!

The performance is just one area; what about the extra memory that the exception is using? With a few changes to the benchmark application I am able to monitor the Garbage Collector which, although not a true representation of how much memory is being used, provides a very good gauge.

The results of the memory usage were quite staggering:

  • 14,416 bytes for the programming logic
  • 2,501,832 bytes for the exception handling

Just another reason why it is not acceptable to use the exception handler in this way.

This is just a short article on exception handling, but it does frustrate me when I see exception handling being abused.

ExceptionPerformance

Entity Framework Unit of Work Patterns

I’m not often shocked, but when I found out that Entity Framework was not thread-safe, I was horrified. I even had to buy a fellow colleague dinner as I lost a bet.

Now that I knew what the problem was how can it be resolved?

One area I recently took some time to research is how the Unit of Work pattern is best implemented within the context of using Entity Framework. While the topic is still relatively fresh on my mind, I thought I’d use this as an opportunity to create a catalog of various approaches I’ve encountered and include some thoughts about each approach.

Unit of Work

To start, it may be helpful to give a basic definition of the Unit of Work pattern. A Unit of Work can be defined as a collection of operations that succeed or fail as a single unit. Given a series of operations which need to be executed in response to some interaction with an application, it’s often necessary to ensure that none of the operations cause side-effects if any one of them fails. This is accomplished by having participating operations respond to either a commit or rollback message indicating whether the operation performed should be completed or reverted.

A Unit of Work can consist of different types of operations such as Web Service calls, Database operations, or even in-memory operations, however, the focus of this article will be on approaches to facilitating the Unit of Work pattern with the Entity Framework.

With that out of the way, let’s take a look at various approaches to facilitating the Unit of Work pattern with Entity Framework.

Implicit Transactions

The first approach to achieving a Unit of Work around a series of Entity Framework operations is to simply create an instance of a DbContext class, make changes to one or more DbSet instances, and then call SaveChanges() on the context. Entity Framework automatically creates an implicit transaction for changesets which include INSERTs, UPDATEs, and DELETEs.

Here’s an example:

public Customer CreateCustomer(CreateCustomerRequest request)
{
    Customer customer = null;

    using (var context = new MyStoreContext())
    {
        customer = new Customer { FirstName = request.FirstName, LastName = request.LastName };
        context.Customers.Add(customer);
        context.SaveChanges();
        return customer;
    }
}

The benefit of this approach is that a transaction is created only when necessary and is kept alive only for the duration of the SaveChanges() call. Some drawbacks, however, are that it leads to opaque dependencies and adds a bit of repetitive infrastructure code to each of your application services.

If you prefer to work directly with Entity Framework then this approach may be fine for simple needs.

TransactionScope

Another approach is to use the System.Transactions.TransactionScope class provided by the .NET Framework. When any Entity Framework operation which causes a connection to be opened is used (e.g. SaveChanges()), the connection will enlist in the ambient transaction defined by the TransactionScope class, and the transaction will be committed once the TransactionScope is successfully completed. Here’s an example of this approach:

public Customer CreateCustomer(CreateCustomerRequest request)
{
    Customer customer = null;

    using (var transaction = new TransactionScope())
    {
        using (var context = new MyStoreContext())
        {
            customer = new Customer { FirstName = request.FirstName, LastName = request.LastName };
            context.Customers.Add(customer);
            context.SaveChanges();
            transaction.Complete();
        }

        return customer;
    }
}

In general, you’ll find TransactionScope to be a good general-purpose solution for defining a Unit of Work around Entity Framework operations, as it works with ADO.NET, all versions of Entity Framework, and other ORMs, which gives you the ability to use multiple libraries if needed. Additionally, it provides a foundation for building a more comprehensive Unit of Work pattern which would allow other types of operations to enlist in the Unit of Work.

Caution should be exercised when using TransactionScope, however, as certain operations can implicitly escalate the transaction to a distributed transaction causing undesired overhead. For those choosing solutions involving TransactionScope, I would recommend educating yourself on how and when transactions are escalated.

While you’ll find using the TransactionScope class to be a good general-purpose solution, using it directly does couple your services to a specific strategy and adds a bit of noise to your code.

ADO.Net Transactions

This approach involves creating an instance of DbTransaction and instructing the participating DbContext instance to use the existing transaction:

public Customer CreateCustomer(CreateCustomerRequest request)
{
    Customer customer = null;

    var connectionString = ConfigurationManager.ConnectionStrings["MyStoreContext"].ConnectionString;
    using (var connection = new SqlConnection(connectionString))
    {
        connection.Open();
        using (var transaction = connection.BeginTransaction())
        {
            using (var context = new MyStoreContext(connection))
            {
                context.Database.UseTransaction(transaction);
                try
                {
                    customer = new Customer { FirstName = request.FirstName, LastName = request.LastName };
                    context.Customers.Add(customer);
                    context.SaveChanges();
                }
                catch (Exception e)
                {
                    transaction.Rollback();
                    throw;
                }
            }

            transaction.Commit();
            return customer;
        }
    }
}

As can be seen from the example, this approach adds quite a bit of infrastructure noise to your code. While not something I’d recommend standardizing upon, this approach provides another avenue for sharing transactions between Entity Framework and straight ADO.Net code which might prove useful in certain situations. In general, I wouldn’t recommend such an approach.

Entity Framework Transactions

The relative newcomer to the mix is the transaction API introduced with Entity Framework 6. Here’s a basic example of its use:

public Customer CreateCustomer(CreateCustomerRequest request)
{
    Customer customer = null;

    using (var context = new MyStoreContext())
    {
        using (var transaction = context.Database.BeginTransaction())
        {
            try
            {
                customer = new Customer { FirstName = request.FirstName, LastName = request.LastName };
                context.Customers.Add(customer);
                context.SaveChanges();
                transaction.Commit();
            }
            catch (Exception e)
            {
                transaction.Rollback();
                throw;
            }
        }
    }

    return customer;
}

This is the approach recommended by Microsoft for achieving transactions with Entity Framework going forward. If you’re deploying applications with Entity Framework 6 and beyond, this will be your safest choice for Unit of Work implementations which only require database operation participation. Similar to a couple of the previous approaches we’ve already considered, the drawbacks of using this directly are that it creates opaque dependencies and adds repetitive infrastructure code to all of your application services. This is also a viable option, but I would recommend coupling this with other approaches we’ll look at later to improve the readability and maintainability of your application services.

Unit of Work Repository Manager

The first approach I encountered when researching how others were facilitating the Unit of Work pattern with Entity Framework was a strategy set forth by Microsoft’s guidance on the topic here. This strategy involves creating a UnitOfWork class which encapsulates an instance of the DbContext and exposes each repository as a property. Clients of repositories take a dependency upon an instance of UnitOfWork and access each repository as needed through properties on the UnitOfWork instance. The UnitOfWork type exposes a SaveChanges() method to be used when all the changes made through the repositories are to be persisted to the database. Here is an example of this approach:

public interface IUnitOfWork
{
    ICustomerRepository CustomerRepository { get; }
    IOrderRepository OrderRepository { get; }
    void Save();
}

public class UnitOfWork : IDisposable, IUnitOfWork
{
    readonly MyContext _context = new MyContext();
    ICustomerRepository _customerRepository;
    IOrderRepository _orderRepository;

    public ICustomerRepository CustomerRepository
    {
        get { return _customerRepository ?? (_customerRepository = new CustomerRepository(_context)); }
    }

    public IOrderRepository OrderRepository
    {
        get { return _orderRepository ?? (_orderRepository = new OrderRepository(_context)); }
    }

    public void Dispose()
    {
        if (_context != null)
        {
            _context.Dispose();
        }
    }

    public void Save()
    {
        _context.SaveChanges();
    }
}

public class CustomerService : ICustomerService
{
    readonly IUnitOfWork _unitOfWork;

    public CustomerService(IUnitOfWork unitOfWork)
    {
        _unitOfWork = unitOfWork;
    }

    public void CreateCustomer(CreateCustomerRequest request)
    {
        var customer = new Customer { FirstName = request.FirstName, LastName = request.LastName };
        _unitOfWork.CustomerRepository.Add(customer);
        _unitOfWork.Save();
    }
}

It isn’t hard to imagine how this approach was conceived given it closely mirrors the typical implementation of the DbContext instance you find in Entity Framework guidance where public instances of DbSet are exposed for each aggregate root. Given this pattern is presented on the ASP.Net website and comes up as one of the first results when doing a search for “Entity Framework” and “Unit of Work”, I imagine this approach has gained some popularity among .Net developers. There are, however, a number of issues I have with this approach.

First, this approach leads to opaque dependencies. Due to the fact that classes interact with repositories through the UnitOfWork instance, the client interface doesn’t clearly express the inherent business-level collaborators it depends upon (i.e. any aggregate root collections).

Second, this violates the Open/Closed Principle. To add new aggregate roots to the system requires modifying the UnitOfWork each time.

Third, this violates the Single Responsibility Principle. The single responsibility of a Unit of Work implementation should be to encapsulate the behavior necessary to commit or roll back a set of operations atomically. The instantiation and management of repositories, or any other component which may wish to enlist in a unit of work, is a separate concern.

Lastly, this results in a nominal abstraction which is semantically coupled with Entity Framework. The example code for this approach sets forth an interface to the UnitOfWork implementation which isn’t the approach used in the aforementioned Microsoft article. Whether you take a dependency upon the interface or the implementation directly, however, the presumption of such an abstraction is to decouple the application from using Entity Framework directly. While such an abstraction might provide some benefits, it reflects Entity Framework usage semantics and as such doesn’t really decouple you from the particular persistence technology you’re using. While you could use this approach with another ORM (e.g. NHibernate), it is more a reflection of Entity Framework operations (e.g. its flushing model) and usage patterns. As such, you probably wouldn’t arrive at this same abstraction were you to have started by defining the abstraction in terms of the behavior required by your application prior to choosing a specific ORM (i.e. following the Dependency Inversion Principle). You might even find yourself violating the Liskov Substitution Principle if you actually attempted to provide an alternate ORM implementation. Given these issues, I would advise people to avoid this approach.

Injected Unit of Work and Repositories

For those inclined to make all dependencies transparent while maintaining an abstraction from Entity Framework, the next strategy may seem the natural next step. This strategy involves creating an abstraction around the call to DbContext.SaveChanges() and requires sharing a single instance of DbContext among all the components whose operations need to participate within the underlying SaveChanges() call as a single transaction.

Here is an example:

public class CustomerService : ICustomerService
{
  readonly IUnitOfWork _unitOfWork;
  readonly ICustomerRepository _customerRepository;

  public CustomerService(IUnitOfWork unitOfWork, ICustomerRepository customerRepository)
  {
    _unitOfWork = unitOfWork;
    _customerRepository = customerRepository;
  }

  public void CreateCustomer(CreateCustomerRequest request)
  {
    var customer = new Customer { FirstName = request.FirstName, LastName = request.LastName };
    _customerRepository.Add(customer);
    _unitOfWork.Save();
  }
}

This approach shares many of the same issues with the previous one. While it reduces a bit of infrastructure noise, it’s still semantically coupled to Entity Framework’s approach and still lacks a defined Unit of Work boundary. Additionally, it lacks clarity as to what happens when you call the SaveChanges() method. Given the Repository pattern is intended to be a virtual collection of all the entities within your system of a given type, one might suppose a method named “SaveChanges” means that you are somehow persisting any changes made to the particular entities represented by the repository (setting aside the fact that doing so is really a subversion of the pattern’s purpose). On the contrary, it really means “save all the changes made to any entities tracked by the underlying DbContext”. I would also recommend avoiding this approach.

Unit of Work Per Request

A pattern I’m a bit embarrassed to admit has been characteristic of many projects I’ve worked on in the past (though not with EF) is to create a Unit of Work implementation which is scoped to a Web application’s request lifetime. Using this approach, whatever method is used to facilitate a Unit of Work is configured with a DI container using a per-HTTP-request lifetime scope; the Unit of Work boundary is opened when the first component taking a dependency on the UnitOfWork is injected, and it is committed or rolled back when the HttpRequest is disposed by the container.

There are a few different manifestations of this approach depending upon the particular framework and strategy you’re using, but here’s a pseudo-code example of how configuring this might look for Entity Framework with the Autofac DI container:

builder.RegisterType<MyDbContext>() // type parameters here are illustrative
    .As<DbContext>()
    .InstancePerRequest()
    .OnActivating(x =>
    {
        // start a transaction
    })
    .OnRelease(context =>
    {
        try
        {
            // commit or rollback the transaction
        }
        catch (Exception e)
        {
            // log the exception
            throw;
        }
    });

public class SomeService : ISomeService
{
    public void DoSomething()
    {
        // do some work
    }
}

While this approach eliminates the need for your services to be concerned with the Unit of Work infrastructure, the biggest issue arises when an error occurs. When the application can’t successfully commit a transaction for whatever reason, the rollback occurs AFTER you’ve typically relinquished control of the request (e.g. you’ve already returned results from a controller). When this occurs, you may end up telling your customer that something succeeded when it actually didn’t, and your client state may end up out of sync with the actual persisted state of the application.

While I used this strategy without incident for some time with NHibernate, I eventually ran into a problem and concluded that the concern of transaction boundary management inherently belongs to the application-level entry point for a particular interaction with the system. This is another approach I’d recommend avoiding.

Instantiated Unit of Work

The next strategy involves instantiating a UnitOfWork implemented using either the .Net framework TransactionScope class or the transaction API introduced by Entity Framework 6 to define a transaction boundary within the application service. Here’s an example:

public class CustomerService : ICustomerService
{
  readonly ICustomerRepository _customerRepository;

  public CustomerService(ICustomerRepository customerRepository)
  {
    _customerRepository = customerRepository;
  }

  public void CreateCustomer(CreateCustomerRequest request)
  {
    using (var unitOfWork = new UnitOfWork())
    {
      try
      {
        var customer = new Customer { FirstName = request.FirstName, LastName = request.LastName };
        _customerRepository.Add(customer);        
        unitOfWork.Commit();
      }
      catch (Exception ex)
      {
        unitOfWork.Rollback();
      }
    }
  }
}

Functionally, this is a viable approach to facilitating a Unit of Work boundary with Entity Framework. A few drawbacks, however, are that the dependency upon the Unit Of Work implementation is opaque and that it’s coupled to a specific implementation. While this isn’t a terrible approach, I would recommend other approaches discussed here which either surface any dependencies being taken on the Unit of Work infrastructure or invert the concerns of transaction management completely.

Injected Unit of Work Factory

This strategy is similar to the one presented in the Instantiated Unit of Work example, but makes its dependence upon the Unit of Work infrastructure transparent and provides a point of abstraction which allows for an alternate implementation to be provided by the factory:

public class CustomerService : ICustomerService
{
    readonly ICustomerRepository _customerRepository;
    readonly IUnitOfWorkFactory _unitOfWorkFactory;

    public CustomerService(IUnitOfWorkFactory unitOfWorkFactory, ICustomerRepository customerRepository)
    {
        _customerRepository = customerRepository;
        _unitOfWorkFactory = unitOfWorkFactory;
    }

    public void CreateCustomer(CreateCustomerRequest request)
    {
        using (var unitOfWork = _unitOfWorkFactory.Create())
        {
            try
            {
                var customer = new Customer { FirstName = request.FirstName, LastName = request.LastName };
                _customerRepository.Add(customer);
                unitOfWork.Commit();
            }
            catch (Exception ex)
            {
                unitOfWork.Rollback();
            }
        }
    }
}

While I personally prefer to invert such concerns, I consider this to be a sound approach.

As a side note, if you decide to use this approach, you might also consider utilising your DI container to just inject a Func<IUnitOfWork> to avoid the overhead of maintaining an IUnitOfWorkFactory abstraction and implementation.
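For example, Autofac supplies Func<T> relationships automatically, so the service can take the factory as a plain delegate (a sketch, reusing the illustrative types from above):

public class CustomerService : ICustomerService
{
    readonly ICustomerRepository _customerRepository;
    readonly Func<IUnitOfWork> _unitOfWork; // the container supplies this Func

    public CustomerService(Func<IUnitOfWork> unitOfWork, ICustomerRepository customerRepository)
    {
        _customerRepository = customerRepository;
        _unitOfWork = unitOfWork;
    }

    public void CreateCustomer(CreateCustomerRequest request)
    {
        using (var unitOfWork = _unitOfWork()) // invoke the Func instead of a factory method
        {
            var customer = new Customer { FirstName = request.FirstName, LastName = request.LastName };
            _customerRepository.Add(customer);
            unitOfWork.Commit();
        }
    }
}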

Unit of Work ActionFilterAttribute

For those who prefer to invert the Unit of Work concerns as I do, the following approach provides an easy to implement solution for those using ASP.Net MVC and/or Web API. This technique involves creating a custom Action filter which can be used to control the boundary of a Unit of Work at the Controller action level. The particular implementation may vary, but here’s a general template:

public class UnitOfWorkFilter : ActionFilterAttribute
{
    public override void OnActionExecuting(ActionExecutingContext filterContext)
    {
        // begin transaction
    }

    public override void OnActionExecuted(ActionExecutedContext filterContext)
    {
        // commit/rollback transaction
    }
}

The benefits of this approach are that it’s easy to implement and that it eliminates the need for introducing repetitive infrastructure code into your application services. This attribute can be registered with the global action filters, or for the more discriminant, only placed on actions resulting in state changes to the database. Overall, this would be my recommended approach for Web applications. It’s easy to implement, simple, and keeps your code clean.

Unit of Work Decorator

A similar approach to the use of a custom ActionFilterAttribute is the creation of a custom decorator. This approach can be accomplished by utilizing a DI container to automatically decorate specific application service interfaces with a class which implements a Unit of Work boundary.

Here is a pseudo-code example of how configuring this might look for Entity Framework with the Autofac DI container, which presumes that some form of command/command-handler pattern is being utilised (e.g. frameworks like MediatR, ShortBus, etc.):

// DI Registration
builder.RegisterGenericDecorator(
 typeof(TransactionRequestHandler<>), // the decorator instance
 typeof(IRequestHandler<>), // the types to decorate
 "requestHandler", // the name of the key to decorate
 null); // the name of the key to this decorator



public class TransactionRequestHandler : IRequestHandler where TResponse : ApplicationResponse
{
 readonly DbContext _context;
 readonly IRequestHandler _decorated;

 public TransactionRequestHandler(IRequestHandler decorated, DbContext context)
 {
 _decorated = decorated;
 _context = context;
 }

 public TResponse Handle(TRequest request)
 {
 TResponse response;

 // Open transaction here

 try
 {
 response = _decorated.Handle(request);

 // commit transaction

 }
 catch (Exception e)
 {
 //rollback transaction
 throw;
 }

 return response;
 }
}


public class SomeRequestHandler : IRequestHandler
{
 public ApplicationResponse Handle()
 {
 // do some work
 return new SuccessResponse();
 }
}

While this approach requires a bit of setup, it provides an alternate means of facilitating the Unit of Work pattern through a decorator which can be used by other consumers of the application layer aside from just ASP.Net (i.e. Windows services, CLI, etc.) It also provides the ability to move the Unit of Work boundary closer to the point of need for those who would rather provide any error handling prior to returning control to the application service client (e.g. the Controller actions) as well as giving more control over the types of operations decorated (e.g. IQueryHandler vs. ICommandHandler). For Web applications, I’d recommend trying the custom Action Filter approach first, as it’s easier to implement and doesn’t presume upon the design of your application layer, but this is certainly a good approach if it fits your needs.

Conclusion

Out of the approaches I’ve evaluated, there are several that I see as sound approaches which maintain some minimum adherence to good design practices. Of course, which approach is best for your application will be dependent upon the context of what you’re doing and to some extent the design values of your team.

Original Post

Decorator Pattern

The Decorator pattern is one of the structural patterns from the Gang of Four (GoF) design patterns in .NET. The Decorator pattern is used to add new functionality to an existing object without changing its structure; hence it provides an alternative to inheritance for modifying the behaviour of an object. In this article, I would like to go over what the decorator pattern can do and how it works.

There are some occasions in our applications when we need to create an object with some basic functionality, in such a way that extra functionality can be added to the object dynamically. For example, let’s say we need to create a Stream object to handle some data, but in some cases we need this stream object to be able to encrypt the stream. So what we can do is have the basic Stream object ready and then dynamically add the encryption functionality when it is needed.

One may also ask why not keep this encryption logic in the stream class itself and turn it on or off using a Boolean property. But this approach has problems, such as: how can we add custom encryption logic inside the class? This could be done easily by subclassing the existing class and having the custom encryption logic in the derived class.

This is a valid solution, but only when this encryption is the only functionality needed with this class. What if there are multiple functionalities that could be added dynamically, and combinations of those functionalities too? If we use the subclassing approach, we will end up with derived classes equal to the number of combinations we could have of all our functionalities with the actual object.

This is exactly the scenario where the decorator pattern is useful. As the Gang of Four put it: "Decorators provide a flexible alternative to subclassing for extending functionality."

Before looking into the details of the decorator pattern, let’s have a look at what this pattern is, then see the class diagram of the pattern and what each class is responsible for.

What is Decorator Pattern

Decorator pattern is used to add new functionality to an existing object without changing its structure.

This pattern creates a decorator class which wraps the original class and add new behaviors/operations to an object at run-time.

Still none the wiser?  Let’s look at a diagram to help you picture the pattern.

Decorator Pattern – UML Diagram & Implementation

The UML class diagram for the implementation of the decorator design pattern defines the following classes, interfaces and objects:

Component

This is an interface containing members that will be implemented by ConcreteComponent and Decorator.

ConcreteComponent

This is a class which implements the Component interface.

Decorator

This is an abstract class which implements the Component interface and contains the reference to a Component instance. This class also acts as base class for all decorators for components.

ConcreteDecorator

This is a class which inherits from Decorator class and provides a decorator for components.

C# – Implementation Code

public interface Component
{
    void Operation();
}

public class ConcreteComponent : Component
{
    public void Operation()
    {
        Console.WriteLine("Component Operation");
    }
}

public abstract class Decorator : Component
{
    private Component _component;

    public Decorator(Component component)
    {
        _component = component;
    }

    public virtual void Operation()
    {
        _component.Operation();
    }
}

public class ConcreteDecorator : Decorator
{
    public ConcreteDecorator(Component component) : base(component) { }

    public override void Operation()
    {
        base.Operation();
        Console.WriteLine("Override Decorator Operation");
    }
}

Decorator Pattern – Example

Who is what?

The classes, interfaces and objects in the example below can be identified as follows:

  • Vehicle – Component interface.
  • HondaCity – ConcreteComponent class.
  • VehicleDecorator – Decorator class.
  • SpecialOffer – ConcreteDecorator class.

C# – Sample Code

/// <summary>
/// The 'Component' interface
/// </summary>
public interface Vehicle
{
    string Make { get; }
    string Model { get; }
    double Price { get; }
}

/// <summary>
/// The 'ConcreteComponent' class
/// </summary>
public class HondaCity : Vehicle
{
    public string Make
    {
        get { return "HondaCity"; }
    }

    public string Model
    {
        get { return "CNG"; }
    }

    public double Price
    {
        get { return 1000000; }
    }
}

/// <summary>
/// The 'Decorator' abstract class
/// </summary>
public abstract class VehicleDecorator : Vehicle
{
    private Vehicle _vehicle;

    public VehicleDecorator(Vehicle vehicle)
    {
        _vehicle = vehicle;
    }

    public string Make
    {
        get { return _vehicle.Make; }
    }

    public string Model
    {
        get { return _vehicle.Model; }
    }

    public double Price
    {
        get { return _vehicle.Price; }
    }
}

/// <summary>
/// The 'ConcreteDecorator' class
/// </summary>
public class SpecialOffer : VehicleDecorator
{
    public SpecialOffer(Vehicle vehicle) : base(vehicle) { }

    public int DiscountPercentage { get; set; }
    public string Offer { get; set; }

    // 'new' hides the base Price so this decorator can apply the discount
    public new double Price
    {
        get
        {
            double price = base.Price;
            int percentage = 100 - DiscountPercentage;
            return Math.Round((price * percentage) / 100, 2);
        }
    }
}

/// <summary>
/// Decorator Pattern Demo
/// </summary>
class Program
{
    static void Main(string[] args)
    {
        // Basic vehicle
        HondaCity car = new HondaCity();

        Console.WriteLine("Honda City base price is : {0}", car.Price);

        // Special offer decorating the basic vehicle
        SpecialOffer offer = new SpecialOffer(car);
        offer.DiscountPercentage = 25;
        offer.Offer = "25 % discount";

        Console.WriteLine("{1} @ Diwali Special Offer and price is : {0} ", offer.Price, offer.Offer);

        Console.ReadKey();
    }
}

Decorator Pattern Demo – Output
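Running the demo should produce output along these lines (the 25% offer applied to the 1,000,000 base price):

Honda City base price is : 1000000
25 % discount @ Diwali Special Offer and price is : 750000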

When to use it?

  • Add additional state or behavior to an object dynamically.
  • Make changes to some objects in a class without affecting others.

Original Article by Shailendra Chauhan

Here is a sample application Decorator Pattern