How slow are Exceptions?

From time to time I’ve seen applications that wrap a try-catch around some code but do nothing with the caught exception except carry on. So why is this so wrong?

As programmers we want to write quality code that solves problems. Unfortunately, exceptions come as a side effect of our code. No one likes side effects, so we soon find our own ways to get around them. I have seen some smart programmers deal with exceptions in the following way:

public void ConsumeAndForgetAllExceptions()
{
    try
    {
        // ...some code that throws exceptions
    }
    catch (Exception ex)
    {
        Console.WriteLine(ex.StackTrace);
    }
}

What is wrong with the code above?

Once an exception is thrown, normal program execution is suspended and control is transferred to the catch block. The catch block catches the exception and just suppresses it. Execution of the program continues after the catch block, as if nothing had happened.
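By contrast, a healthier pattern is to catch only the exceptions you can actually handle and let everything else propagate. Here is a minimal sketch (the file path and the recovery behaviour are placeholders, not from the original post):

```csharp
using System;
using System.IO;

public static class SafeFileReader
{
    // Returns the file contents, or null when the file is missing:
    // the one failure this method can meaningfully recover from.
    public static string TryReadAllText(string path)
    {
        try
        {
            return File.ReadAllText(path);
        }
        catch (FileNotFoundException)
        {
            // Handled: report it and return a sentinel the caller expects.
            Console.Error.WriteLine("Missing file: " + path);
            return null;
        }
        // Any other exception (permissions, I/O errors) propagates to the
        // caller instead of being silently swallowed.
    }
}
```

The key difference from the example above is that the catch is narrow and deliberate; nothing is suppressed "just in case".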

How about the following?

/// <exception cref="Exception">Documented, but never thrown.</exception>
public void SomeMethod()
{
}

This method is blank; it does not have any code in it, yet its documentation claims it throws an exception. How can a blank method throw exceptions? C# does not stop you from doing this. Recently, I came across similar code where a method was documented to throw exceptions, but there was no code that could actually generate that exception. When I asked the programmer, he replied “I know, it is corrupting the API, but I am used to doing it and it works.”

This debate goes around in circles in the C# community. I have seen several C# programmers struggle with the use of exceptions. If not used correctly, exceptions can slow down your program, as it takes memory and CPU power to create, throw, and catch exceptions. If overused, they make the code difficult to read and frustrating for the programmers using the API. We all know frustrations lead to hacks and code smells. The client code may circumvent the issue by just ignoring exceptions or throwing them.

Let’s go back to basics and start with the definition of exception handling:

Exception handling is the process of responding to the occurrence, during computation, of exceptions – anomalous or exceptional conditions requiring special processing – often changing the normal flow of program execution. It is provided by specialized programming language constructs or computer hardware mechanisms.

The important words here, I feel, are “special processing”: something which is out of your control.

Here is a simple example:

var a = 0;
int b;
try
{
    b = 10 / a;
}
catch
{
}

So what is wrong with this?

I call it lazy, poor man’s programming. One correct solution could be:

var a = 0;
int b;
if (a != 0)
    b = 10 / a;

Okay, this is a very simple example of abusive exception handling, but does it really matter?

I wrote a small benchmark to see just what the difference in performance is: a simple 10,000-iteration parallel loop running the same code over and over again.

I was quite overwhelmed by the performance hit of the exception handling, even in this small example. The exception handling caused the code to slow down by over 600 times.

600 times slower code; well, that does not matter if it only happens once, I hear you say…!
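For reference, the benchmark was along these lines. This is a sketch rather than the exact code from the post; absolute timings will vary by machine, but the ordering will not:

```csharp
using System;
using System.Diagnostics;

public static class ExceptionBenchmark
{
    const int Iterations = 10000;

    // Guard-clause version: the division is skipped, no exception is ever thrown.
    public static long TimeGuarded()
    {
        var sw = Stopwatch.StartNew();
        for (int i = 0; i < Iterations; i++)
        {
            var a = 0;
            int b = 0;
            if (a != 0)
                b = 10 / a;
        }
        sw.Stop();
        return sw.ElapsedTicks;
    }

    // Exception version: every iteration throws and catches DivideByZeroException.
    public static long TimeExceptions()
    {
        var sw = Stopwatch.StartNew();
        for (int i = 0; i < Iterations; i++)
        {
            var a = 0;
            int b = 0;
            try
            {
                b = 10 / a;
            }
            catch
            {
            }
        }
        sw.Stop();
        return sw.ElapsedTicks;
    }
}
```

On any runtime the exception path will come out dramatically slower, because each throw allocates an exception object and walks the stack.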

Performance is just one area; what about the extra memory that the exception is using? With a few changes to the benchmark application I am able to monitor the garbage collector which, although not a true representation of how much memory is being used, provides a very good gauge.

The results of the memory usage were quite staggering:

  • 14,416 bytes for the programming logic
  • 2,501,832 bytes for the exception handling

Just another reason why using exception handling in this way is not acceptable.
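The GC-based gauge mentioned above can be sketched like this (again an approximation of the idea, not the exact benchmark code):

```csharp
using System;

public static class AllocationGauge
{
    // Rough bytes allocated by an action, measured via the garbage collector.
    // Not exact (the GC may run mid-action), but a good comparative gauge.
    public static long MeasureAllocatedBytes(Action action)
    {
        long before = GC.GetTotalMemory(forceFullCollection: true);
        action();
        long after = GC.GetTotalMemory(forceFullCollection: false);
        return after - before;
    }
}
```

Running the division-with-guard loop and the throw/catch loop through a gauge like this is what produced the two figures above.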

This is just a short article on exception handling, but it does frustrate me when I see exception handling being abused.

ExceptionPerformance

Entity Framework Unit of Work Patterns

I’m not often shocked, but when I found out that Entity Framework was not thread safe, I was horrified. I even had to buy a colleague dinner after losing a bet.

Now that I knew what the problem was, how could it be resolved?

One area I recently took some time to research is how the Unit of Work pattern is best implemented within the context of using Entity Framework. While the topic is still relatively fresh on my mind, I thought I’d use this as an opportunity to create a catalog of various approaches I’ve encountered and include some thoughts about each approach.

Unit of Work

To start, it may be helpful to give a basic definition of the Unit of Work pattern. A Unit of Work can be defined as a collection of operations that succeed or fail as a single unit. Given a series of operations which need to be executed in response to some interaction with an application, it’s often necessary to ensure that none of the operations cause side-effects if any one of them fails. This is accomplished by having participating operations respond to either a commit or rollback message indicating whether the operation performed should be completed or reverted.

A Unit of Work can consist of different types of operations such as Web Service calls, Database operations, or even in-memory operations, however, the focus of this article will be on approaches to facilitating the Unit of Work pattern with the Entity Framework.

With that out of the way, let’s take a look at various approaches to facilitating the Unit of Work pattern with Entity Framework.

Implicit Transactions

The first approach to achieving a Unit of Work around a series of Entity Framework operations is to simply create an instance of a DbContext class, make changes to one or more DbSet instances, and then call SaveChanges() on the context. Entity Framework automatically creates an implicit transaction for changesets which include INSERTs, UPDATEs, and DELETEs.

Here’s an example:

public Customer CreateCustomer(CreateCustomerRequest request)
{
    Customer customer = null;

    using (var context = new MyStoreContext())
    {
        customer = new Customer { FirstName = request.FirstName, LastName = request.LastName };
        context.Customers.Add(customer);
        context.SaveChanges();
        return customer;
    }
}

The benefit of this approach is that a transaction is created only when necessary and is kept alive only for the duration of the SaveChanges() call. Some drawbacks to this approach, however, are that it leads to opaque dependencies and adds a bit of repetitive infrastructure code to each of your application services.

If you prefer to work directly with Entity Framework then this approach may be fine for simple needs.

TransactionScope

Another approach is to use the System.Transactions.TransactionScope class provided by the .Net framework. When any of the Entity Framework operations are used which cause a connection to be opened (e.g. SaveChanges()), the connection will enlist in the ambient transaction defined by the TransactionScope class and close the transaction once the TransactionScope is successfully completed. Here’s an example of this approach:

public Customer CreateCustomer(CreateCustomerRequest request)
{
    Customer customer = null;

    using (var transaction = new TransactionScope())
    {
        using (var context = new MyStoreContext())
        {
            customer = new Customer { FirstName = request.FirstName, LastName = request.LastName };
            context.Customers.Add(customer);
            context.SaveChanges();
            transaction.Complete();
        }

        return customer;
    }
}

In general, you’ll find TransactionScope to be a good general-purpose solution for defining a Unit of Work around Entity Framework operations, as it works with ADO.Net, all versions of Entity Framework, and other ORMs, which gives you the ability to use multiple libraries together if needed. Additionally, it provides a foundation for building a more comprehensive Unit of Work pattern which would allow other types of operations to enlist in the Unit of Work.

Caution should be exercised when using TransactionScope, however, as certain operations can implicitly escalate the transaction to a distributed transaction causing undesired overhead. For those choosing solutions involving TransactionScope, I would recommend educating yourself on how and when transactions are escalated.

While you’ll find using the TransactionScope class to be a good general-purpose solution, using it directly does couple your services to a specific strategy and adds a bit of noise to your code.

ADO.Net Transactions

This approach involves creating an instance of DbTransaction and instructing the participating DbContext instance to use the existing transaction:

public Customer CreateCustomer(CreateCustomerRequest request)
{
    Customer customer = null;

    var connectionString = ConfigurationManager.ConnectionStrings["MyStoreContext"].ConnectionString;
    using (var connection = new SqlConnection(connectionString))
    {
        connection.Open();
        using (var transaction = connection.BeginTransaction())
        {
            using (var context = new MyStoreContext(connection))
            {
                context.Database.UseTransaction(transaction);
                try
                {
                    customer = new Customer { FirstName = request.FirstName, LastName = request.LastName };
                    context.Customers.Add(customer);
                    context.SaveChanges();
                }
                catch (Exception e)
                {
                    transaction.Rollback();
                    throw;
                }
            }

            transaction.Commit();
            return customer;
        }
    }
}

As can be seen from the example, this approach adds quite a bit of infrastructure noise to your code. While not something I’d recommend standardizing upon, this approach provides another avenue for sharing transactions between Entity Framework and straight ADO.Net code which might prove useful in certain situations. In general, I wouldn’t recommend such an approach.

Entity Framework Transactions

The relative newcomer to the mix is the transaction API introduced with Entity Framework 6. Here’s a basic example of its use:

public Customer CreateCustomer(CreateCustomerRequest request)
{
    Customer customer = null;

    using (var context = new MyStoreContext())
    {
        using (var transaction = context.Database.BeginTransaction())
        {
            try
            {
                customer = new Customer { FirstName = request.FirstName, LastName = request.LastName };
                context.Customers.Add(customer);
                context.SaveChanges();
                transaction.Commit();
            }
            catch (Exception e)
            {
                transaction.Rollback();
                throw;
            }
        }
    }

    return customer;
}

This is the approach recommended by Microsoft for achieving transactions with Entity Framework going forward. If you’re deploying applications with Entity Framework 6 and beyond, this will be your safest choice for Unit of Work implementations which only require database operation participation. Similar to a couple of the previous approaches we’ve already considered, the drawbacks of using this directly are that it creates opaque dependencies and adds repetitive infrastructure code to all of your application services. This is also a viable option, but I would recommend coupling this with other approaches we’ll look at later to improve the readability and maintainability of your application services.

Unit of Work Repository Manager

The first approach I encountered when researching how others were facilitating the Unit of Work pattern with Entity Framework was a strategy set forth by Microsoft’s guidance on the topic here. This strategy involves creating a UnitOfWork class which encapsulates an instance of the DbContext and exposes each repository as a property. Clients of repositories take a dependency upon an instance of UnitOfWork and access each repository as needed through properties on the UnitOfWork instance. The UnitOfWork type exposes a SaveChanges() method to be used when all the changes made through the repositories are to be persisted to the database. Here is an example of this approach:

public interface IUnitOfWork
{
    ICustomerRepository CustomerRepository { get; }
    IOrderRepository OrderRepository { get; }
    void Save();
}

public class UnitOfWork : IDisposable, IUnitOfWork
{
    readonly MyContext _context = new MyContext();
    ICustomerRepository _customerRepository;
    IOrderRepository _orderRepository;

    public ICustomerRepository CustomerRepository
    {
        get { return _customerRepository ?? (_customerRepository = new CustomerRepository(_context)); }
    }

    public IOrderRepository OrderRepository
    {
        get { return _orderRepository ?? (_orderRepository = new OrderRepository(_context)); }
    }

    public void Dispose()
    {
        if (_context != null)
        {
            _context.Dispose();
        }
    }

    public void Save()
    {
        _context.SaveChanges();
    }
}

public class CustomerService : ICustomerService
{
    readonly IUnitOfWork _unitOfWork;

    public CustomerService(IUnitOfWork unitOfWork)
    {
        _unitOfWork = unitOfWork;
    }

    public void CreateCustomer(CreateCustomerRequest request)
    {
        var customer = new Customer { FirstName = request.FirstName, LastName = request.LastName };
        _unitOfWork.CustomerRepository.Add(customer);
        _unitOfWork.Save();
    }
}

It isn’t hard to imagine how this approach was conceived given it closely mirrors the typical implementation of the DbContext instance you find in Entity Framework guidance where public instances of DbSet are exposed for each aggregate root. Given this pattern is presented on the ASP.Net website and comes up as one of the first results when doing a search for “Entity Framework” and “Unit of Work”, I imagine this approach has gained some popularity among .Net developers. There are, however, a number of issues I have with this approach.

First, this approach leads to opaque dependencies. Due to the fact that classes interact with repositories through the UnitOfWork instance, the client interface doesn’t clearly express the inherent business-level collaborators it depends upon (i.e. any aggregate root collections).

Second, this violates the Open/Closed Principle. To add new aggregate roots to the system requires modifying the UnitOfWork each time.

Third, this violates the Single Responsibility Principle. The single responsibility of a Unit of Work implementation should be to encapsulate the behavior necessary to commit or roll back a set of operations atomically. The instantiation and management of repositories, or any other component which may wish to enlist in a unit of work, is a separate concern.

Lastly, this results in a nominal abstraction which is semantically coupled with Entity Framework. The example code for this approach sets forth an interface to the UnitOfWork implementation which isn’t the approach used in the aforementioned Microsoft article. Whether you take a dependency upon the interface or the implementation directly, however, the presumption of such an abstraction is to decouple the application from using Entity Framework directly. While such an abstraction might provide some benefits, it reflects Entity Framework usage semantics and as such doesn’t really decouple you from the particular persistence technology you’re using. While you could use this approach with another ORM (e.g. NHibernate), this approach is more of a reflection of Entity Framework operations (e.g. its flushing model) and usage patterns. As such, you probably wouldn’t arrive at this same abstraction were you to have started by defining the abstraction in terms of the behavior required by your application prior to choosing a specific ORM (i.e. following the Dependency Inversion Principle). You might even find yourself violating the Liskov Substitution Principle if you actually attempted to provide an alternate ORM implementation. Given these issues, I would advise people to avoid this approach.

Injected Unit of Work and Repositories

For those inclined to make all dependencies transparent while maintaining an abstraction from Entity Framework, the next strategy may seem the natural next step. This strategy involves creating an abstraction around the call to DbContext.SaveChanges() and requires sharing a single instance of DbContext among all the components whose operations need to participate within the underlying SaveChanges() call as a single transaction.

Here is an example:

public class CustomerService : ICustomerService
{
  readonly IUnitOfWork _unitOfWork;
  readonly ICustomerRepository _customerRepository;

  public CustomerService(IUnitOfWork unitOfWork, ICustomerRepository customerRepository)
  {
    _unitOfWork = unitOfWork;
    _customerRepository = customerRepository;
  }

  public void CreateCustomer(CreateCustomerRequest request)
  {
    var customer = new Customer { FirstName = request.FirstName, LastName = request.LastName };
    _customerRepository.Add(customer);
    _unitOfWork.Save();
  }
}

This approach shares many of the same issues with the previous one. While it reduces a bit of infrastructure noise, it’s still semantically coupled to Entity Framework’s approach and still lacks a defined Unit of Work boundary. Additionally, it lacks clarity as to what happens when you call the SaveChanges() method. Given the Repository pattern is intended to be a virtual collection of all the entities within your system of a given type, one might suppose a method named “SaveChanges” means that you are somehow persisting any changes made to the particular entities represented by the repository (setting aside the fact that doing so is really a subversion of the pattern’s purpose). On the contrary, it really means “save all the changes made to any entities tracked by the underlying DbContext”. I would also recommend avoiding this approach.

Unit of Work Per Request

A pattern I’m a bit embarrassed to admit has been characteristic of many projects I’ve worked on in the past (though not with EF) is to create a Unit of Work implementation which is scoped to a Web application’s request lifetime. Using this approach, whatever mechanism facilitates the Unit of Work is configured in a DI container with a per-HttpRequest lifetime scope; the Unit of Work boundary is opened when the first component the UnitOfWork is injected into is activated, and committed/rolled back when the HttpRequest ends and the container releases it.

There are a few different manifestations of this approach depending upon the particular framework and strategy you’re using, but here’s a pseudo-code example of how configuring this might look for Entity Framework with the Autofac DI container:

builder.RegisterType<UnitOfWork>()
    .As<IUnitOfWork>()
    .InstancePerRequest()
    .OnActivating(x =>
    {
        // start a transaction
    })
    .OnRelease(context =>
    {
        try
        {
            // commit or rollback the transaction
        }
        catch (Exception e)
        {
            // log the exception
            throw;
        }
    });

public class SomeService : ISomeService
{
    public void DoSomething()
    {
        // do some work
    }
}

While this approach eliminates the need for your services to be concerned with the Unit of Work infrastructure, the biggest issue arises when an error occurs. When the application can’t successfully commit a transaction for whatever reason, the rollback occurs AFTER you’ve typically relinquished control of the request (e.g. you’ve already returned results from a controller). When this happens, you may end up telling your customer that an operation succeeded when it actually didn’t, and your client state may end up out of sync with the actual persisted state of the application.

While I used this strategy without incident for some time with NHibernate, I eventually ran into a problem and concluded that the concern of transaction boundary management inherently belongs to the application-level entry point for a particular interaction with the system. This is another approach I’d recommend avoiding.

Instantiated Unit of Work

The next strategy involves instantiating a UnitOfWork implemented using either the .Net framework TransactionScope class or the transaction API introduced by Entity Framework 6 to define a transaction boundary within the application service. Here’s an example:

public class CustomerService : ICustomerService
{
  readonly ICustomerRepository _customerRepository;

  public CustomerService(ICustomerRepository customerRepository)
  {
    _customerRepository = customerRepository;
  }

  public void CreateCustomer(CreateCustomerRequest request)
  {
    using (var unitOfWork = new UnitOfWork())
    {
      try
      {
        var customer = new Customer { FirstName = request.FirstName, LastName = request.LastName };
        _customerRepository.Add(customer);        
        unitOfWork.Commit();
      }
      catch (Exception ex)
      {
        unitOfWork.Rollback();
        throw;
      }
    }
  }
}

Functionally, this is a viable approach to facilitating a Unit of Work boundary with Entity Framework. A few drawbacks, however, are that the dependency upon the Unit Of Work implementation is opaque and that it’s coupled to a specific implementation. While this isn’t a terrible approach, I would recommend other approaches discussed here which either surface any dependencies being taken on the Unit of Work infrastructure or invert the concerns of transaction management completely.

Injected Unit of Work Factory

This strategy is similar to the one presented in the Instantiated Unit of Work example, but makes its dependence upon the Unit of Work infrastructure transparent and provides a point of abstraction which allows for an alternate implementation to be provided by the factory:

public class CustomerService : ICustomerService
{
    readonly ICustomerRepository _customerRepository;
    readonly IUnitOfWorkFactory _unitOfWorkFactory;

    public CustomerService(IUnitOfWorkFactory unitOfWorkFactory, ICustomerRepository customerRepository)
    {
        _customerRepository = customerRepository;
        _unitOfWorkFactory = unitOfWorkFactory;
    }

    public void CreateCustomer(CreateCustomerRequest request)
    {
        using (var unitOfWork = _unitOfWorkFactory.Create())
        {
            try
            {
                var customer = new Customer { FirstName = request.FirstName, LastName = request.LastName };
                _customerRepository.Add(customer);
                unitOfWork.Commit();
            }
            catch (Exception ex)
            {
                unitOfWork.Rollback();
                throw;
            }
        }
    }
}

While I personally prefer to invert such concerns, I consider this to be a sound approach.

As a side note, if you decide to use this approach, you might also consider utilizing your DI container to inject a Func<IUnitOfWork> to avoid the overhead of maintaining an IUnitOfWorkFactory abstraction and implementation.
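The Func-based variant can be sketched without any container at all; the IUnitOfWork shape, the service, and the fake below are hypothetical stand-ins for whatever your project defines:

```csharp
using System;

// Hypothetical Unit of Work abstraction, standing in for the one in the article.
public interface IUnitOfWork : IDisposable
{
    void Commit();
    void Rollback();
}

// Trivial fake showing the wiring without a real database.
public class FakeUnitOfWork : IUnitOfWork
{
    public bool Committed;
    public void Commit() { Committed = true; }
    public void Rollback() { }
    public void Dispose() { }
}

public class OrderService
{
    readonly Func<IUnitOfWork> _unitOfWorkFactory;

    // The service asks for a factory delegate instead of an IUnitOfWorkFactory.
    public OrderService(Func<IUnitOfWork> unitOfWorkFactory)
    {
        _unitOfWorkFactory = unitOfWorkFactory;
    }

    public void PlaceOrder()
    {
        using (var unitOfWork = _unitOfWorkFactory())
        {
            // ...repository work sharing the same underlying context...
            unitOfWork.Commit();
        }
    }
}
```

With Autofac, registering the IUnitOfWork implementation is enough; the container resolves Func<IUnitOfWork> automatically as one of its implicit relationship types, so no extra factory class needs to be written.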

Unit of Work ActionFilterAttribute

For those who prefer to invert the Unit of Work concerns as I do, the following approach provides an easy-to-implement solution for those using ASP.Net MVC and/or Web API. This technique involves creating a custom action filter which can be used to control the boundary of a Unit of Work at the controller action level. The particular implementation may vary, but here’s a general template:

public class UnitOfWorkFilter : ActionFilterAttribute
{
    public override void OnActionExecuting(ActionExecutingContext filterContext)
    {
        // begin transaction
    }

    public override void OnActionExecuted(ActionExecutedContext filterContext)
    {
        // commit/rollback transaction
    }
}

The benefits of this approach are that it’s easy to implement and that it eliminates the need for introducing repetitive infrastructure code into your application services. This attribute can be registered with the global action filters, or for the more discriminant, only placed on actions resulting in state changes to the database. Overall, this would be my recommended approach for Web applications. It’s easy to implement, simple, and keeps your code clean.
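Registration is a one-liner. As a sketch for ASP.Net MVC (Web API uses config.Filters instead), with UnitOfWorkFilter being the class from the template above:

```csharp
// Global registration in ASP.Net MVC (e.g. in FilterConfig / Global.asax):
GlobalFilters.Filters.Add(new UnitOfWorkFilter());

// ...or, for the more discriminant, only on state-changing actions:
// [UnitOfWorkFilter]
// public ActionResult CreateCustomer(CreateCustomerRequest request) { ... }
```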

Unit of Work Decorator

A similar approach to the use of a custom ActionFilterAttribute is the creation of a custom decorator. This approach can be accomplished by utilizing a DI container to automatically decorate specific application service interfaces with a class which implements a Unit of Work boundary.

Here is a pseudo-code example of how configuring this might look for Entity Framework with the Autofac DI container, which presumes that some form of command/command-handler pattern is being utilized (e.g. frameworks like MediatR, ShortBus, etc.):

// DI Registration
builder.RegisterGenericDecorator(
 typeof(TransactionRequestHandler<>), // the decorator instance
 typeof(IRequestHandler<>), // the types to decorate
 "requestHandler", // the name of the key to decorate
 null); // the name of the key to this decorator



public class TransactionRequestHandler : IRequestHandler where TResponse : ApplicationResponse
{
 readonly DbContext _context;
 readonly IRequestHandler _decorated;

 public TransactionRequestHandler(IRequestHandler decorated, DbContext context)
 {
 _decorated = decorated;
 _context = context;
 }

 public TResponse Handle(TRequest request)
 {
 TResponse response;

 // Open transaction here

 try
 {
 response = _decorated.Handle(request);

 // commit transaction

 }
 catch (Exception e)
 {
 //rollback transaction
 throw;
 }

 return response;
 }
}


public class SomeRequestHandler : IRequestHandler
{
 public ApplicationResponse Handle()
 {
 // do some work
 return new SuccessResponse();
 }
}

While this approach requires a bit of setup, it provides an alternate means of facilitating the Unit of Work pattern through a decorator which can be used by other consumers of the application layer aside from just ASP.Net (i.e. Windows services, CLI, etc.). It also provides the ability to move the Unit of Work boundary closer to the point of need for those who would rather provide any error handling prior to returning control to the application service client (e.g. the Controller actions) as well as giving more control over the types of operations decorated (e.g. IQueryHandler vs. ICommandHandler). For Web applications, I’d recommend trying the custom Action Filter approach first, as it’s easier to implement and doesn’t presume upon the design of your application layer, but this is certainly a good approach if it fits your needs.

Conclusion

Out of the approaches I’ve evaluated, there are several that I see as sound approaches which maintain some minimum adherence to good design practices. Of course, which approach is best for your application will be dependent upon the context of what you’re doing and to some extent the design values of your team.

Original Post

Decorator Pattern

The Decorator pattern is one of the structural patterns in the Gang of Four (GoF) design patterns for .Net. It is used to add new functionality to an existing object without changing its structure, and hence provides an alternative to inheritance for modifying the behavior of an object. In this article, I would like to go over what the decorator pattern can do and how it works.

There are occasions in our applications when we need to create an object with some basic functionality, in such a way that extra functionality can be added to it dynamically. For example, let’s say we need to create a Stream object to handle some data, but in some cases we also need the stream to be able to encrypt its contents. What we can do is have the basic Stream object ready and then dynamically add the encryption functionality when it is needed.

One may ask why not keep the encryption logic in the stream class itself and turn it on or off with a Boolean property. But this approach has a problem: how would we support custom, per-type encryption logic from inside a single class? That could be done by subclassing the existing class and putting the custom encryption logic in the derived class.

This is a valid solution, but only when encryption is the only functionality needed with this class. What if there are multiple functionalities that could be added dynamically, and combinations of those functionalities too? With the subclassing approach we end up with a derived class for every possible combination of functionalities.

This is exactly the scenario where the decorator pattern is useful: decorators provide a flexible alternative to subclassing for extending functionality.

Before looking into the details of the decorator pattern, let’s have a look at what this pattern is, then see the class diagram of this pattern and what each class is responsible for.

What is Decorator Pattern

Decorator pattern is used to add new functionality to an existing object without changing its structure.

This pattern creates a decorator class which wraps the original class and add new behaviors/operations to an object at run-time.

Still none the wiser?  Let’s look at a diagram to help you picture the pattern.

Decorator Pattern – UML Diagram & Implementation

The UML class diagram for the implementation of the decorator design pattern is given below:

The classes, interfaces and objects in the above UML class diagram are as follows:

Component

This is an interface containing members that will be implemented by ConcreteComponent and Decorator.

ConcreteComponent

This is a class which implements the Component interface.

Decorator

This is an abstract class which implements the Component interface and contains the reference to a Component instance. This class also acts as base class for all decorators for components.

ConcreteDecorator

This is a class which inherits from Decorator class and provides a decorator for components.

C# – Implementation Code

public interface Component
{
    void Operation();
}

public class ConcreteComponent : Component
{
    public void Operation()
    {
        Console.WriteLine("Component Operation");
    }
}

public abstract class Decorator : Component
{
    private Component _component;

    public Decorator(Component component)
    {
        _component = component;
    }

    public virtual void Operation()
    {
        _component.Operation();
    }
}

public class ConcreteDecorator : Decorator
{
    public ConcreteDecorator(Component component) : base(component) { }

    public override void Operation()
    {
        base.Operation();
        Console.WriteLine("Override Decorator Operation");
    }
}
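To see the pattern in action, here is the composition at the call site (the types are repeated from the sample above so the snippet stands alone):

```csharp
using System;

// Types repeated from the sample above so this snippet compiles on its own.
public interface Component { void Operation(); }

public class ConcreteComponent : Component
{
    public void Operation() { Console.WriteLine("Component Operation"); }
}

public abstract class Decorator : Component
{
    private readonly Component _component;
    protected Decorator(Component component) { _component = component; }
    public virtual void Operation() { _component.Operation(); }
}

public class ConcreteDecorator : Decorator
{
    public ConcreteDecorator(Component component) : base(component) { }
    public override void Operation()
    {
        base.Operation(); // run the wrapped component first...
        Console.WriteLine("Override Decorator Operation"); // ...then add behavior
    }
}

public static class DecoratorDemo
{
    public static void Run()
    {
        // Compose at run time: wrap the concrete component in a decorator.
        Component component = new ConcreteDecorator(new ConcreteComponent());
        component.Operation();
        // Prints:
        // Component Operation
        // Override Decorator Operation
    }
}
```

Note that the caller only ever sees the Component interface; it neither knows nor cares how many decorators are wrapped around the concrete object.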

Decorator Pattern – Example

Who is what?

The classes, interfaces and objects in the above class diagram can be identified as follows:

  • Vehicle - Component interface.
  • HondaCity - ConcreteComponent class.
  • VehicleDecorator - Decorator class.
  • SpecialOffer - ConcreteDecorator class.

C# – Sample Code

/// <summary>
/// The 'Component' interface
/// </summary>
public interface Vehicle
{
    string Make { get; }
    string Model { get; }
    double Price { get; }
}

/// <summary>
/// The 'ConcreteComponent' class
/// </summary>
public class HondaCity : Vehicle
{
    public string Make
    {
        get { return "HondaCity"; }
    }

    public string Model
    {
        get { return "CNG"; }
    }

    public double Price
    {
        get { return 1000000; }
    }
}

/// <summary>
/// The 'Decorator' abstract class
/// </summary>
public abstract class VehicleDecorator : Vehicle
{
    private Vehicle _vehicle;

    public VehicleDecorator(Vehicle vehicle)
    {
        _vehicle = vehicle;
    }

    public string Make
    {
        get { return _vehicle.Make; }
    }

    public string Model
    {
        get { return _vehicle.Model; }
    }

    public double Price
    {
        get { return _vehicle.Price; }
    }
}

/// <summary>
/// The 'ConcreteDecorator' class
/// </summary>
public class SpecialOffer : VehicleDecorator
{
    public SpecialOffer(Vehicle vehicle) : base(vehicle) { }

    public int DiscountPercentage { get; set; }
    public string Offer { get; set; }

    public new double Price
    {
        get
        {
            double price = base.Price;
            int percentage = 100 - DiscountPercentage;
            return Math.Round((price * percentage) / 100, 2);
        }
    }
}

/// <summary>
/// Decorator Pattern Demo
/// </summary>
class Program
{
    static void Main(string[] args)
    {
        // Basic vehicle
        HondaCity car = new HondaCity();

        Console.WriteLine("Honda City base price are : {0}", car.Price);

        // Special offer
        SpecialOffer offer = new SpecialOffer(car);
        offer.DiscountPercentage = 25;
        offer.Offer = "25 % discount";

        Console.WriteLine("{1} @ Diwali Special Offer and price are : {0} ", offer.Price, offer.Offer);

        Console.ReadKey();
    }
}

Decorator Pattern Demo – Output
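The output screenshot is not reproduced here, but tracing the demo: the base price prints as 1000000, and the decorated price comes out at 750000. The discount arithmetic from SpecialOffer.Price, extracted so it can be checked in isolation:

```csharp
using System;

public static class DiscountMath
{
    // The same calculation as SpecialOffer.Price in the sample above.
    public static double Apply(double price, int discountPercentage)
    {
        int percentage = 100 - discountPercentage;
        return Math.Round((price * percentage) / 100, 2);
    }
}
```

DiscountMath.Apply(1000000, 25) returns 750000, matching the 25% Diwali offer in the demo.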

When to use it?

  • Add additional state or behavior to an object dynamically.
  • Make changes to some objects in a class without affecting others.

Original Article by Shailendra Chauhan

Here is a sample application Decorator Pattern

Kendo Grid custom sort order

I’ve been racking my brains: the sorting on the Kendo Grid is pretty good, except that I needed to sort by something other than the value being displayed. An example would be numbers written out in full (“one”, “two”, “three”) that you want in numeric sort order.

@(Html.Kendo().Grid<Number>()
    .Name("grid")
    .Columns(columns =>
    {
        columns.Bound(c => c.Id);
        columns.Bound(c => c.Item);
    })
    .Sortable()
    .DataSource(dataSource => dataSource
        .Ajax()
        .Read(read => read.Action("Numbers_Read", "Home"))
    )
)


Here is the list, not in any order, but it does have a SortOrder column:

private static List<Number> GetNumbers()
{
    List<Number> numbers = new List<Number>();

    numbers.Add(new Number() { Id = 1, Item = "one", SortOrder = "1" });
    numbers.Add(new Number() { Id = 2, Item = "three", SortOrder = "3" });
    numbers.Add(new Number() { Id = 3, Item = "six", SortOrder = "6" });
    numbers.Add(new Number() { Id = 4, Item = "two", SortOrder = "2" });
    numbers.Add(new Number() { Id = 5, Item = "five", SortOrder = "5" });
    numbers.Add(new Number() { Id = 6, Item = "seven", SortOrder = "7" });
    numbers.Add(new Number() { Id = 7, Item = "four", SortOrder = "4" });
    return numbers;
}

And now for the magic: when the request comes back into the controller action, we check what is being sorted and swap it for another, hidden column. This way the grid retains all of its functionality, and it just works…!

public ActionResult Numbers_Read([DataSourceRequest] DataSourceRequest request)
{
    List<Number> numbers = GetNumbers();

    foreach (var sort in request.Sorts)
    {
        // Find the sort member you need a custom sort for and change it to the custom column
        if (sort.Member.ToLowerInvariant() == "item")
        {
            sort.Member = "SortOrder";
        }
    }
    return Json(numbers.ToDataSourceResult(request), JsonRequestBehavior.AllowGet);
}
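For completeness, the Number model used by the grid and GetNumbers() is not shown in the post; it is assumed to look something like this:

```csharp
public class Number
{
    public int Id { get; set; }

    // The value displayed in the grid ("one", "two", ...).
    public string Item { get; set; }

    // Hidden column used purely for custom sorting.
    public string SortOrder { get; set; }
}
```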

I have created this sample Kendo Grid project so you can see how it is all done.

kendo_grid

Uncle Bob On Coding Standards

It’s important for a team to have a single coding standard for each language to avoid several problems:

  • A lack of standards can make your code unreadable.
  • Disagreement over standards can cause check-in wars between developers.
  • Seeing different standards in the same class can be extremely irritating.

Uncle Bob wrote this:

On coding standards:

  • Let them evolve during the first few iterations.
  • Let them be team specific instead of company specific.
  • Don’t write them down if you can avoid it. Rather, let the code be the way the standards are captured.
  • Don’t legislate good design. (e.g. don’t tell people not to use goto)
  • Make sure everyone knows that the standard is about communication, and nothing else.
  • After the first few iterations, get the team together to decide.

Original article