Factory Support Facility

The Factory Support Facility allows you to use factories to create components. This is useful when you want to expose as services components that do not have an accessible constructor, or that you don't instantiate yourself, like HttpContext.

Prefer UsingFactoryMethod over this facility: while the facility provides a programmatic API, it is deprecated, its usage is discouraged, and it won't be discussed here. The recommended approach is to use the UsingFactoryMethod method of the fluent registration API to create components. This limits the usefulness of the facility to XML-driven and legacy scenarios.

UsingFactoryMethod does not require this facility anymore: in older versions of Windsor (up to and including version 2.1) the UsingFactoryMethod method in the fluent API discussed above required this facility to be active in the container. That was later changed and there's no such dependency anymore.

Using factories from configuration

In addition to code, the facility can be driven by XML configuration. Just install the facility and add the proper configuration. You register the facility in the standard facilities section of Windsor's config:

<configuration>
 <facilities>
   <facility
     id="factory.support"
     type="Castle.Facilities.FactorySupport.FactorySupportFacility, 
           Castle.Facilities.FactorySupport" />
  </facilities>
</configuration>

Configuration Schema

Broadly speaking, the facility exposes the following schema, with two kinds of supported factories: accessors and methods:

<components>
 <component id="mycomp1" instance-accessor="Static accessor name" />
 <component id="factory1" />
 <component id="mycomp2" factoryId="factory1" factoryCreate="Create" />
</components>

Accessor example

Given the following singleton class:

public class SingletonWithAccessor
{
    private static readonly SingletonWithAccessor instance = new SingletonWithAccessor();

    private SingletonWithAccessor()
    {
    }

    public static SingletonWithAccessor Instance
    {
        get { return instance; }
    }
}

You may expose its instance to the container through the following configuration:

<components>
  <component id="mycomp1"
    type="Company.Components.SingletonWithAccessor, Company.Components"
    instance-accessor="Instance" />
</components>

Using it:

var comp = container.Resolve<SingletonWithAccessor>("mycomp1");
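
For comparison, the recommended fluent-API equivalent of this accessor registration (a sketch using UsingFactoryMethod, as advised above) would be:

container.Register(
    Component.For<SingletonWithAccessor>()
        .UsingFactoryMethod(() => SingletonWithAccessor.Instance)
);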

Factory example

Given the following component and factory classes:

public class MyComp
{
    internal MyComp()
    {
    }

    ...
}

public class MyCompFactory
{
    public MyComp Create()
    {
        return new MyComp();
    }
}

You may expose its instance to the container through the following configuration:

<components>
  <component id="mycompfactory"
    type="Company.Components.MyCompFactory, Company.Components" />
  <component id="mycomp"
    type="Company.Components.MyComp, Company.Components"
    factoryId="mycompfactory" factoryCreate="Create" />
</components>

Using it:

var comp = container.Resolve<MyComp>("mycomp");

Factory with parameters example

Given the following component and factory classes:

public class MyComp
{
    internal MyComp(String storeName, IDictionary props)
    {
    }

    ...
}

public class MyCompFactory
{
    public MyComp Create(String storeName, IDictionary props)
    {
        return new MyComp(storeName, props);
    }
}

You may expose its instance to the container through the following configuration:

<components>
  <component id="mycompfactory"
    type="Company.Components.MyCompFactory, Company.Components" />
  <component id="mycomp"
    type="Company.Components.MyComp, Company.Components"
    factoryId="mycompfactory" factoryCreate="Create">
    <parameters>
      <storeName>MyStore</storeName>
      <props>
        <dictionary>
          <entry key="key1">item1</entry>
          <entry key="key2">item2</entry>
        </dictionary>
      </props>
    </parameters>
  </component>
</components>

Using it:

var comp = container.Resolve<MyComp>("mycomp");

Factory using auto-wire example

If your factory requires some other component instance as a parameter, the facility will be able to resolve it without your aid:

public class MyComp
{
    internal MyComp(IMyService serv)
    {
    }

    ...
}

public class MyCompFactory
{
    public MyComp Create(IMyService service)
    {
        return new MyComp(service);
    }
}

You may expose its instance to the container through the following configuration:

<facilities>
  <facility
    id="factorysupport"
    type="Castle.Facilities.FactorySupport.FactorySupportFacility, Castle.Facilities.FactorySupport" />
</facilities>

<components>
  <component id="myservice"
    service="SomethingElse.IMyService"
    type="Company.Components.MyServiceImpl, Company.Components" />
  <component id="mycompfactory"
    type="Company.Components.MyCompFactory, Company.Components" />
  <component id="mycomp"
    type="Company.Components.MyComp, Company.Components"
    factoryId="mycompfactory" factoryCreate="Create" />
</components>

Using it:

var comp = container.Resolve<MyComp>("mycomp");


Registering components for IoC

Inversion of Control (IoC) is quite a big step for some developers, but the benefits once implemented are huge: application performance, maintainability, unit testing and, more importantly, separation of concerns, just to name a few.

Because IoC is a new concept to many developers, I would strongly recommend that you take some time to go over these two Pluralsight courses, which go into IoC in a lot more detail than we can cover here. There are other courses covering IoC available on Pluralsight, but these are the two I would recommend.

  • Inversion of Control – by John Sonmez: a comprehensive look at inversion of control and how to use common IoC containers.
  • Practical IoC With ASP.NET MVC 4 – by John Sonmez: in this course, we'll learn how to use an IoC container, like Unity, in an ASP.NET MVC 4 application, and some of the basics of the practical application of IoC containers.

Basic Registration

The starting point for registering anything in the container is the container's Register method, which takes one or more IRegistration objects as parameters. The simplest way to create those objects is using the static Castle.MicroKernel.Registration.Component class. Its For method returns a ComponentRegistration that you can use to further configure the registration.

Isolate your registration code: It is a recommended practice to keep your registration code in a dedicated class(es) implementing IWindsorInstaller.
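
A minimal sketch of such an installer (the class name here is just for illustration):

public class MyServicesInstaller : IWindsorInstaller
{
    public void Install(IWindsorContainer container, IConfigurationStore store)
    {
        container.Register(
            Component.For<IMyService>().ImplementedBy<MyServiceImpl>());
    }
}

// At application start-up:
// container.Install(new MyServicesInstaller());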

Install infrastructure components first: some components may require a facility or other extension to the core container to be registered properly. As such, it is recommended that you always register your facilities, custom subsystems, ComponentModel creation contributors etc. before you start registering your components.

To register a type in the container

container.Register(
    Component.For<MyServiceImpl>()
);

This will register type MyServiceImpl as service MyServiceImpl with default lifestyle (Singleton).

To register a type as non-default service

container.Register(
    Component.For<IMyService>().ImplementedBy<MyServiceImpl>()
);

Note that For and ImplementedBy also have non-generic overloads.

// Same result as example above.
container.Register(Component.For(typeof(IMyService))
    .ImplementedBy(typeof(MyServiceImpl))
);

Services and Components: You can find more information about services and components here.

To register a generic type

Suppose you have a IRepository<TEntity> interface, with NHRepository<TEntity> as the implementation.

You could register a repository for each entity class, but this is not needed.

// Registering a repository for each entity is not needed.
container.Register(
    Component.For<IRepository<Customer>>()
        .ImplementedBy<NHRepository<Customer>>(),
    Component.For<IRepository<Order>>()
        .ImplementedBy<NHRepository<Order>>(),
//    and so on...
);


A single IRepository<> (a so-called open generic type) registration, without specifying the entity, is enough.

// Does not work (compiler won't allow it):
container.Register(
    Component.For<IRepository<>>()
        .ImplementedBy<NHRepository<>>()
);

Doing it like this, however, is not legal, and the above code will not compile. Instead you have to use typeof():

// Use typeof() and do not specify the entity:
container.Register(
     Component.For(typeof(IRepository<>))
         .ImplementedBy(typeof(NHRepository<>))
);

Configuring component’s lifestyle

container.Register(
    Component.For<IMyService>()
        .ImplementedBy<MyServiceImpl>()
        .LifeStyle.Transient
);

When the lifestyle is not set explicitly, the default Singleton lifestyle will be used.
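
Windsor 3 and later also offer fluent shorthand methods for the same thing; a sketch, assuming one of those versions:

container.Register(
    Component.For<IMyService>()
        .ImplementedBy<MyServiceImpl>()
        .LifestyleTransient()
);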

Register more components for the same service

You can do this simply by having more registrations for the same service.

container.Register(
    Component.For<IMyService>().ImplementedBy<MyServiceImpl>(),
    Component.For<IMyService>().ImplementedBy<OtherServiceImpl>()
);

When a component has a dependency on IMyService, it will by default get the IMyService that was registered first (in this case MyServiceImpl).

In Windsor first one wins: In Castle, the default implementation for a service is the first registered implementation.

You can force the later-registered component to become the default instance via the method IsDefault.

container.Register(
    Component.For<IMyService>().ImplementedBy<MyServiceImpl>(),
    Component.For<IMyService>().Named("OtherServiceImpl")
        .ImplementedBy<OtherServiceImpl>().IsDefault()
);

In the above example, any component that has a dependency on IMyService, will by default get an instance of OtherServiceImpl, even though it was registered later.

Of course, you can override which implementation is used by a component that needs it. This is done with service overrides.
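
For example, a consumer can be wired explicitly to the named component (a sketch; MyConsumer is a hypothetical class with an IMyService dependency, and the Dependency helper assumes Windsor 3 or later):

container.Register(
    Component.For<MyConsumer>()
        .DependsOn(Dependency.OnComponent(typeof(IMyService), "OtherServiceImpl"))
);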

When you explicitly call container.Resolve<IMyService>() (without specifying the name), the container will also return the first registered component for IMyService (MyServiceImpl in the above example).

Provide unique names for duplicated components: If you want to register the same implementation more than once, be sure to provide different names for the registered components.

Using a delegate as component factory

You can use a delegate as a lightweight factory for a component:

container.Register(
    Component.For<IMyService>()
        .UsingFactoryMethod(
            () => MyLegacyServiceFactory.CreateMyService())
);

The UsingFactoryMethod method has additional overloads, which can provide you with access to the kernel, and to the creation context if needed.

Example of UsingFactoryMethod with the kernel overload

container.Register(
    Component.For<IMyFactory>().ImplementedBy<MyFactory>(),
    Component.For<IMyService>()
        .UsingFactoryMethod(kernel => kernel.Resolve<IMyFactory>().Create())
);
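
There is also an overload whose delegate receives the creation context as well, should your factory logic need it (a sketch, assuming a recent Windsor version):

container.Register(
    Component.For<IMyService>()
        .UsingFactoryMethod((kernel, context) =>
            kernel.Resolve<IMyFactory>().Create())
);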

In addition to the UsingFactoryMethod method, there's a UsingFactory method (without the "method" suffix 🙂). It can be regarded as a special version of UsingFactoryMethod, which resolves an existing factory from the container and lets you use it to create an instance of your service.

container.Register(
    Component.For<User>().Instance(user),
    Component.For<AbstractCarProviderFactory>(),
    Component.For<ICarProvider>()
        .UsingFactory((AbstractCarProviderFactory f) =>
            f.Create(container.Resolve<User>()))
);

Avoid UsingFactory: It is advised to use UsingFactoryMethod, and to avoid UsingFactory when creating your services via factories. UsingFactory will be obsoleted/removed in future releases.

OnCreate

It is sometimes necessary to inspect or modify a created instance before it is used. You can use the OnCreate method to do this:

container.Register(
    Component.For<IService>()
        .ImplementedBy<MyService>()
        .OnCreate((kernel, instance) => instance.Name += "a")
);

The method has two overloads: one works with a delegate to which an IKernel and the newly created instance are passed; the other takes only the newly created instance.

OnCreate works only for components created by the container: this method is not called for components where the instance is provided externally (like when using the Instance method). It is called only for components created by the container, which also includes components created via certain facilities (Remoting Facility, Factory Support Facility).

A good source of reference is the Castle Windsor documentation, which can be found here:

https://github.com/castleproject/Windsor/tree/master/docs

Here is a sample project with examples and an MVC application showing how it all works: castle windsor

Introduction to IoC with Windsor

I do love dependency injection, but many developers still seem to miss the point and the reason for having it. If you're only using it on a small scale, you don't really need any tools to use the technique. But once you're used to this design technique, you'll quickly start using it in many places of your code. If you do, it quickly becomes cumbersome to deal with the real instances of your runtime dependencies manually. This is where tools like Inversion of Control (IoC) containers come into play. There are a few solid containers available for the .NET world, and even Microsoft has released their own container. Basically, what the IoC container does for you is take care of providing dependencies to components in a flexible and customizable way. It allows clients to remain completely oblivious to the dependencies of components they use. This makes it easy to change components without having to modify client code. Not to mention the fact that your components are a lot easier to test, since you can simply inject fake dependencies during your tests.

Let's get down to looking at a sample. Suppose we have a class called OrderRepository which exposes methods such as GetById, GetAll, FindOne, FindMany and Store. Obviously, the OrderRepository has a dependency on a class that can actually communicate with some kind of physical datastore, be it a database, an XML file or whatever. Either way, it needs another object to access the Order data. Suppose we have an OrderDataAccessor class which implements an IOrderDataAccessor interface. The interface declares all the methods we need to retrieve or store our Orders. So our OrderRepository would need to communicate with an object that implements the IOrderDataAccessor interface. Instead of letting the OrderRepository instantiate that object itself, it will receive it as a parameter in its constructor:

private readonly IOrderDataAccessor _accessor;

public OrderRepository(IOrderDataAccessor accessor)
{
    _accessor = accessor;
}

This makes it easy to test the OrderRepository class, and it’s also easy to make it use different implementations of IOrderDataAccessor later on, should we need to. Now obviously, you really don’t want to do this when you need to instantiate the OrderRepository in your production code:

OrderRepository repo = new OrderRepository(new OrderDataAccessor());

As a consumer of the OrderRepository, you shouldn’t need to know what its dependencies are and you most certainly shouldn’t need to pass the right dependencies into the constructor. Instead, you just want a valid instance of OrderRepository. You really don’t care how it was constructed, which dependencies it has and how they’re provided. You just need to be able to use it. That’s all. This is where the IoC container comes in to help you. Suppose we wrap the IoC container in a Container class that has a few static methods to help you with instantiating instances of types. We could then do this:

OrderRepository repository = Container.Resolve<OrderRepository>();

That would leave you with a valid OrderRepository instance… one that has a usable IOrderDataAccessor but you don’t even know about it, nor do you care how it got there. In other words, you can use the OrderRepository without knowing anything about its underlying implementation.

Let’s take a look at the implementation of the Container class:

public static class Container
{
    private static readonly IWindsorContainer _container;

    static Container()
    {
        _container = new WindsorContainer();

        _container.Register(Component.For<IOrderDataAccessor>().ImplementedBy<OrderDataAccessor>());
        _container.Register(Component.For<OrderRepository>().ImplementedBy<OrderRepository>());
    }

    public static T Resolve<T>()
    {
        return _container.Resolve<T>();
    }
}

It just uses a static instance of Windsor's container and it registers the types we need… let's examine the following line:

_container.Register(Component.For<IOrderDataAccessor>().ImplementedBy<OrderDataAccessor>());

This basically sets up the container to return an instance of OrderDataAccessor whenever an instance of IOrderDataAccessor is requested.

We still have to make sure the Windsor container knows about the OrderRepository class by adding it as a known component like this:

_container.Register(Component.For<OrderRepository>().ImplementedBy<OrderRepository>());

By doing this, the Windsor container will inspect the type (in this case, OrderRepository) and it will see that its constructor requires an IOrderDataAccessor instance. We ‘registered’ the IOrderDataAccessor type with the container to return an instance of the OrderDataAccessor type. So basically, whenever someone asks the container to return an instance of an OrderRepository class, the container knows to instantiate an OrderDataAccessor instance to pass along as the required IOrderDataAccessor object to the OrderRepository constructor.

At this point, you may be wondering: “Why go through all this trouble to register the concrete implementation of IOrderDataAccessor to be used in code? We could just as well instantiate the type ourselves!”. That’s certainly true. The code would be slightly uglier, but you’d get the same behavior. Of course, the Windsor container supports XML configuration (either in the app.config or web.config or in a custom configuration file) as well as explicit configuration through code. So you can configure the container through code explicitly, but if there is a config file present, the container will use that configuration instead of the one provided through code. So you could define the defaults in code, and should you need to change it later on, you can just provide a config file.
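
A minimal sketch of what that could look like (the file name and the type names here are assumptions for illustration):

<configuration>
  <components>
    <component id="order.data.accessor"
      service="MyApp.IOrderDataAccessor, MyApp"
      type="MyApp.OrderDataAccessor, MyApp" />
  </components>
</configuration>

// Loading the XML configuration into the container at start-up:
_container.Install(Castle.Windsor.Installer.Configuration.FromXmlFile("windsor.config"));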

You know what bothers me about our current implementation? We're still communicating with an OrderRepository instance directly. If we want to be really flexible, it would be better if we were communicating with an object that implements an IOrderRepository interface. So let's just define the following interface:

public interface IOrderRepository
{
    Order GetById(Guid id);
    IEnumerable<Order> GetAll();
    Order FindOne(Criteria criteria);
    IEnumerable<Order> FindMany(Criteria criteria);
    void Store(Order order);
}

After all, that's all we care about as consumers of an IOrderRepository type. We shouldn't really care about the concrete implementation. We just need an interface to program to. So let's change the OrderRepository definition to this:

    public class OrderRepository : IOrderRepository

And then when we configure our IoC container we do it like this:

static Container()
{
    _container = new WindsorContainer();

    _container.Register(Component.For<IOrderDataAccessor>().ImplementedBy<OrderDataAccessor>());
    _container.Register(Component.For<IOrderRepository>().ImplementedBy<OrderRepository>());
}

Now we can no longer ask the container for an OrderRepository instance by its concrete type. But we can ask for an instance that implements the IOrderRepository interface like this:

IOrderRepository repository = Container.Resolve<IOrderRepository>();

So now our client is completely decoupled from the implementation of IOrderRepository, as well as the dependencies it may or may not have.

OK, let's suppose that this implementation makes it to the production environment. Everything's working, but for some reason someone makes a decision to retrieve the orders from a specially prepared XML file instead of the database. Unfortunately, your OrderDataAccessor class communicates with a SQL Server database. Luckily, the OrderRepository implementation doesn't know which specific implementation of IOrderDataAccessor it's using. We just need to make sure that every time someone needs an IOrderRepository instance, it uses the new XML-based IOrderDataAccessor implementation instead of the one we originally intended.

Because we’re using Dependency Injection and an IoC container, this only requires changing one line of code:

_container.Register(Component.For<IOrderDataAccessor>().ImplementedBy<XmlOrderDataAccessor>());

Actually, if we'd put the mapping between the IOrderDataAccessor type and the XmlOrderDataAccessor implementation in an XML file, we wouldn't even have to change any code! Well, except for writing the XmlOrderDataAccessor implementation, obviously.

We can even take this one step further… after the change to the XML-based OrderDataAccessor went out successfully, they (the 'business') all of a sudden want to log who retrieves or saves each order, for auditing purposes.

We create an implementation of IOrderRepository which keeps extensive auditing logs so they can be retrieved later on. We could just inherit from the default OrderRepository implementation and add auditing logic before each method is executed. Then we’d only have to configure our IoC container to return a different instance of the IOrderRepository type whenever someone requests it:

static Container()
{
    _container = new WindsorContainer();

    _container.Register(Component.For<IOrderDataAccessor>().ImplementedBy<XmlOrderDataAccessor>());
    _container.Register(Component.For<IOrderRepository>().ImplementedBy<OrderRepositoryWithAuditing>());
}

Again, our client code does not need to be modified in any way, yet we did modify the runtime behavior of the application. Instead of retrieving the Orders from a SQL database, it’s now retrieving them from an XML file, and the repository is performing auditing as well, without having to change any client code.

And if we were using the xml-configuration features of Windsor, we could get all of this working without even having to recompile the client-assemblies.

This was just an introduction to using an IoC container (Castle's Windsor specifically) and we briefly touched on the benefits that you can achieve with this way of working. The Windsor container can do much more, but you'll either have to figure that stuff out yourself, or wait for future posts about its other features and possibilities.

Updated with new methods and calls for Castle Windsor

Caching to improve the user experience

One of the most important factors in building high-performance, scalable Web applications is the ability to store items, whether data objects, pages, or even parts of a page, in memory the first time they are requested. You can store these items on the Web server or on other software in the request stream, such as a proxy server, or at the browser. This allows you to avoid recreating information that satisfied a previous request. Known as caching, it allows you to use a number of techniques to store page output or application data across HTTP requests and reuse it. When the server does not have to recreate information, you save time and resources, and throughput and scalability increase.

It is possible to obtain significant performance improvements in ASP.NET applications by caching frequently requested objects and data in either the Application or Cache classes. While the Cache class certainly offers far more flexibility and control, it only appears to offer a marginal advantage in terms of increased throughput over the Application class for caching. It would be very difficult to develop a testing scheme that could accurately measure the potential advantages of the Cache class's built-in management of lesser-used objects through the scavenging process, a feature the Application class does not offer. The decision is the developer's to make, and should be based on the needs and usage patterns of the project.

In this article I'll be looking at application caching and which options are the most effective and scalable. If you would like to know more about web caching, take a look at the Microsoft Caching Architecture Guide for .NET Framework Applications.

So we all know that caching increases performance; what I am not getting into here is when it should or shouldn't be used. I'm more interested in the performance and scalability of the caching used.

For the performance testing I am going to use a number of different caching methods.

I've tried to cover the range of options that are available for caching; if you know of any others, please let me know and I'll add them to this post.

There are two types of tests. The first is a small test which waits for 30 milliseconds and then returns the current date and time:

Thread.Sleep(30);
return System.DateTime.UtcNow.ToString(CultureInfo.InvariantCulture);

then a test to generate a much larger object of around 267 MB:

public IEnumerable<Block> GetData()
{
    // should eat around 267 Mb of RAM.
    var blocks = new Block[512 * 512 * 1];

    for (var i = 0; i < (512 * 512) * 1; i++)
    {
        blocks[i] = new Block();
    }
    return blocks;
}

Both are very simple tests; by no means are they a true representation of real-world methods, but they do show how the different caching options affect performance.

The idea is to iterate each test 1,000 times and record how long it takes; what I don't do here is work out the amount of memory being used (if you know of an easy way to include this, please let me know).

I won't go into great detail here on the results of the tests, as I think you'll find them self-explanatory in the test application, attached at the bottom of this post.

A few things to note about the testing and results: you should use parallel processing for the tests, as this is much closer to the real-world multi-threaded environment, as sketched below.
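
A minimal sketch of such a harness (RunTest is a stand-in for whichever caching method is under test):

var stopwatch = System.Diagnostics.Stopwatch.StartNew();

// Run the test 1,000 times in parallel to approximate a multi-threaded web workload.
System.Threading.Tasks.Parallel.For(0, 1000, i => RunTest());

stopwatch.Stop();
Console.WriteLine("Elapsed: " + stopwatch.ElapsedMilliseconds + " ms");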

Also, careful consideration should be given to locking in the cache module: you don't want one caller computing the value for a cache key while another caller is already computing the same value; the second caller should wait until the first has finished and stored the result in the cache. Done right, this increases performance.
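
One common way to get that behaviour is a double-checked lock around the cache lookup. A minimal sketch, assuming System.Runtime.Caching.MemoryCache and a single global lock for simplicity:

private static readonly object CacheLock = new object();

public static T GetOrAdd<T>(string key, Func<T> create) where T : class
{
    var cache = MemoryCache.Default;
    var value = cache.Get(key) as T;
    if (value != null) return value;

    lock (CacheLock)
    {
        // Re-check inside the lock: another thread may have stored the value already.
        value = cache.Get(key) as T;
        if (value != null) return value;

        value = create();
        cache.Set(key, value, DateTimeOffset.UtcNow.AddMinutes(30));
        return value;
    }
}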

Here is the sample code with the different tests, showing how effective each option is: cachesample

Object to CSV extension

Here is a nice little extension that will take an object and convert it into a CSV file.

It also has the option to include a header in the CSV file.

using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;

public static class ObjectToCsv
{
    public static string ToCsv<T>(this IEnumerable<T> items, bool includeHeading = true)
        where T : class
    {
        var csvBuilder = new StringBuilder();
        var properties = typeof(T).GetProperties();

        if (includeHeading)
        {
            csvBuilder.AppendLine(string.Join(",", properties.Select(p => p.Name.ToCsvValue()).ToArray()));
        }

        foreach (var item in items)
        {
            var line = string.Join(",", properties.Select(p => p.GetValue(item, null).ToCsvValue()).ToArray());
            csvBuilder.AppendLine(line);
        }

        return csvBuilder.ToString();
    }

    private static string ToCsvValue<T>(this T item)
    {
        if (item == null) return "\"\"";

        if (item is string)
        {
            // Escape embedded quotes by doubling them, per the usual CSV convention.
            return $"\"{item.ToString().Replace("\"", "\"\"")}\"";
        }

        double dummy;
        return double.TryParse(item.ToString(), out dummy) ? $"{item}" : $"\"{item}\"";
    }
}
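
Using it could look like this (the Person class is just for illustration):

public class Person
{
    public string Name { get; set; }
    public int Age { get; set; }
}

var people = new[]
{
    new Person { Name = "Alice", Age = 30 },
    new Person { Name = "Bob", Age = 25 }
};

File.WriteAllText("people.csv", people.ToCsv());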

OWASP Testing Guide v3

Every vibrant technology marketplace needs an unbiased source of information on best practices as well as an active body advocating open standards. In the Application Security space, one of those groups is the Open Web Application Security Project (or OWASP for short).

The Open Web Application Security Project (OWASP) is a 501(c)(3) worldwide not-for-profit charitable organization focused on improving the security of software. Our mission is to make software security visible, so that individuals and organizations are able to make informed decisions. OWASP is in a unique position to provide impartial, practical information about AppSec to individuals, corporations, universities, government agencies and other organizations worldwide. Operating as a community of like-minded professionals, OWASP issues software tools and knowledge-based documentation on application security.

The entire Open Web Application Security Project (OWASP) Testing Guide v3 can be downloaded here

Testing for Vulnerable Remember Password and Password Reset

Brief Summary

Most web applications allow users to reset their password if they have forgotten it, usually by sending them a password reset email and/or by asking them to answer one or more “security questions.” In this test, we check that this function is properly implemented and that it does not introduce any flaw in the authentication scheme. We also check whether the application allows the user to store the password in the browser (“remember password” function).

Description of the Issue

A great majority of web applications provide a way for users to recover (or reset) their password in case they have forgotten it. The exact procedure varies heavily among different applications, also depending on the required level of security, but the approach is always to use an alternate way of verifying the identity of the user. One of the simplest (and most common) approaches is to have on file the user's email address (e.g., this is obtained when the user first registers), and send the old password (or a new one) to that address. This scheme is based on the assumption that the user's email has not been compromised and that it is secure enough for this goal.
Alternatively (or in addition to that), the application could ask the user to answer one or more “secret questions”, which are usually chosen by the user among a set of possible ones. The security of this scheme lies in the ability to provide a way for someone to identify themselves to the system with answers to questions that are not easily answerable via personal information lookups. As an example, a very insecure question would be “your mother’s maiden name” since that is a piece of information that an attacker could find out without much effort. An example of a better question would be “favorite grade-school teacher” since this would be a much more difficult topic to research about a person whose identity may otherwise already be stolen.
Another common convenience feature is to cache the password locally in the browser (on the client machine) and have it 'pre-typed' in all subsequent accesses. While this feature can be perceived as extremely friendly for the average user, at the same time it introduces a flaw, as the user account becomes easily accessible to anyone that uses the same account on the client machine.

Black Box Testing and Examples

Password Reset
The first step is to check whether secret questions are used. Sending the password (or a password reset link) to the user email address without first asking for a secret question means relying 100% on the security of that email address, which is not suitable if the application needs a high level of security.
On the other hand, if secret questions are used, the next step is to assess their strength.
As a first point, how many questions need to be answered before the password can be reset? The majority of applications only require the user to answer one question, but some critical applications require the user to answer two or even more questions correctly.
As a second step, we need to analyze the questions themselves. Often a self-reset system offers the choice of multiple questions; this is a good sign for the would-be attacker as this presents him/her with options. Ask yourself whether you could obtain answers to any or all of these questions via a simple Google search on the Internet or with a social engineering attack. As a penetration tester, here is a step-by-step walk-through of assessing a password self-reset tool:

  • Are there multiple questions offered?
    • If so, try to pick a question which would have a “public” answer; for example, something Google would find with a simple query
    • Always pick questions which have a factual answer such as a “first school” or other facts which can be looked up
    • Look for questions which have few possible options, such as “what make was your first car”. These questions would present the attacker with a short-list of answers to guess at and based on statistics the attacker could rank answers from most to least likely
  • Determine how many guesses you have (if possible)
    • Does the password reset allow unlimited attempts?
    • Is there a lockout period after X incorrect answers? Keep in mind that a lockout system can be a security problem in itself, as it can be exploited by an attacker to launch a Denial of Service against legitimate users
  • Pick the appropriate question based on analysis from above point, and do research to determine the most likely answers
  • How does the password-reset tool (once a successful answer to a question is found) behave?
    • Does it allow immediate change of the password?
    • Does it display the old password?
    • Does it email the password to some pre-defined email address?
    • The most insecure scenario here is if the password reset tool shows you the password; this gives the attacker the ability to log into the account, and unless the application provides information about the last login the victim would not know that his/her account has been compromised.
    • A less insecure scenario is if the password reset tool forces the user to immediately change his/her password. While not as stealthy as the first case, it allows the attacker to gain access and locks the real user out.
    • The best security is achieved if the password reset is done via an email to the address the user initially registered with, or some other email address; this forces the attacker to not only guess at which email account the password reset was sent to (unless the application tells that) but also to compromise that account in order to take control of the victim’s access to the application.

The key to successfully exploiting and bypassing a password self-reset is to find a question or set of questions which give the possibility of easily acquiring the answers. Always look for questions which can give you the greatest statistical chance of guessing the correct answer, if you are completely unsure of any of the answers. In the end, a password self-reset tool is only as strong as the weakest question. As a side note, if the application sends/visualizes the old password in cleartext it means that passwords are not stored in a hashed form, which is a security issue in itself.

Password Remember

The “remember my password” mechanism can be implemented with one of the following methods:

  1. Allowing the “cache password” feature in web browsers. As of 2014 this is the preferred method, as all major browsers ignore the setting of autocomplete=”off” for password fields by default.
  2. Storing the password in a permanent cookie. The password must be hashed/encrypted and not sent in the clear.

To check the second implementation type, examine the cookies stored by the application. Verify that the credentials are not stored in cleartext, but are hashed. Examine the hashing mechanism: if it is a common, well-known algorithm, check for its strength; in homegrown hash functions, attempt several usernames to check whether the hash function is easily guessable. Additionally, verify that the credentials are only sent during the login phase, and not sent together with every request to the application.

Gray Box Testing and Examples

Since this test uses only functional features of the application and HTML code that is always available to the client, gray box testing follows the same guidelines as the previous section. The only exception is for the password encoded in the cookie, where the same gray box analysis described in the Testing for Session Management Schema chapter can be applied.

Original Article

Deployments Best Practices


Introduction

This guide is aimed to help you better understand how to deal with deployments in your development workflow and provide some best practices. Sometimes a bad production deployment can ruin all the effort you have invested in a development process. Having a solid deployment workflow can become one of the greatest advantages of your team.

Before you start, I recommend reading our Developing and Deploying with Branches guide first to get a general idea of how branches should be set up in your repository, to be able to fully utilize tips from this guide. It's a great read!

Note on Development Branch

In this guide you will see a lot of references to a branch called development. In your repository you can use master (Git), trunk (Subversion) or default (Mercurial) for the same purpose, there’s no need to create a branch specifically called “development”. I chose this name because it’s universal for all version control systems.

The Workflow

Deployments should be treated as part of a development workflow, not as an afterthought. If you are developing a web site or an application, your workflow will usually include at least three environments: Development, Staging and Production. In that case the workflow might look like this:

  • Developers work on bugs and features in separate branches. Really minor updates can be committed directly to the stable development branch.
  • Once features are implemented, they are merged into the staging branch and deployed to the Staging environment for quality assurance and testing.
  • Once testing is complete, feature branches are merged into the development branch.
  • On the release date, the development branch is merged into production and then deployed to the Production environment.

Let's take a closer look at each environment to see the most efficient way to deploy each one of them.

Development Environment

If you make web applications, you don't need a remote development environment; every developer should have their own local setup.

Many customers have Development environments set up with automatic deployments on every commit or push. While this gives developers the small advantage of not installing the site or the application on their own computers to perform testing locally, it also wastes a lot of time. Every tiny change must be committed, pushed, deployed, and only then can it be verified. If the change was made by mistake, a developer will have to revert it, push it, then redeploy.

Testing on a local computer removes the need to commit, push and deploy completely. Every change can be verified locally first, then, once it’s more or less stable, it can be pushed to a Staging environment for proper quality assurance testing.

However, what automatic deployment to a development environment does provide is assurance that the build and deployment process works, running as an independent installation far removed from the developers' own environments.

We do not recommend using deployments for rapidly changing development environments. Running your software locally is the best choice for that sort of testing.

With that in mind, we recommend deploying to the development environment automatically on every commit or push. This will ensure that the build process is fully working.

Staging Environment

Once the features are implemented and considered fairly stable, they get merged into the staging branch and then automatically deployed to the Staging environment. This is when quality assurance kicks in: testers go to staging servers and verify that the code works as intended.

It is very handy to have a separate branch called staging to represent your staging environment. It will allow developers to deploy multiple branches to the same server simultaneously, simply by merging everything that needs to be deployed to the staging branch. It will also help testers understand what exactly is on staging servers at the moment, just by looking inside the staging branch.

We recommend always deploying major releases to staging at a scheduled time of which the whole team is aware.

Production Environment

Once the feature is implemented and tested, it can be deployed to production. If the feature was implemented in a separate branch, it should be merged into a stable development branch first. The branches should be deleted after they are merged to avoid confusion between team members.

The next step is to make a diff to show the difference between the production and development branches, to take a quick look at the code that will be deployed to production. This gives you one last chance to spot something that's not ready or not intended for production: things like debugger breakpoints, verbose logging or incomplete features.

Once the diff review is finished, you can merge the development branch into production and then initialize a deployment of the production branch to your production environment by hand. Specify a meaningful message for your deployment so that your team knows exactly what you deployed.

Make sure to only merge development branch into production when you actually plan to deploy. Don’t merge anything into production in advance. Merging on time will make files in your production branch match files on your actual production servers and will help everyone better understand the state of your production environment.

We recommend always deploying major releases to production at a scheduled time of which the whole team is aware. Deployment should be a manual process, not an automated one: it can be as simple as clicking a link to start the process going or moving some files, it just needs to be a human who activates it. Find the time when your application is least active and use that time to roll out updates. This may sound obvious, but make sure that it's not too late in the day, because someone needs to be around after the deployment for at least a few hours to monitor the application and make sure the deployment went fine. Urgent production fixes can be deployed at any time.

After deployment finishes make sure to verify it. It is best to check all the features or fixes that you deployed to make sure they work properly in production. It is a big win if your deployment tool can send an email to all team members with a summary of changes after every deployment. This helps team members to understand what exactly went live and how to communicate it to customers. Beanstalk does this for you automatically.

Your deployment to production is now complete, time to pop champagne and celebrate with your team!

Rolling Back

Sometimes deployments don't go as planned and things break. In that case you have the option to roll back. However, you should be as careful with rollbacks as with production deployments themselves. Sometimes a rollback brings more havoc than the issue it was trying to fix. So first of all stay calm and don't make any sudden moves. Before performing a rollback, answer the following questions:

Did it break because of the code that I deployed, or did something else break?

You can only rollback files that you deployed, so if the source of the issues is something else a rollback won’t be much help.

Is it possible to rollback this release?

Not all releases can be rolled back. Sometimes a release introduces a new database structure that is incompatible with the previous release. In that case, if you roll back, your application will break.

If the answer to both questions is “yes”, you can roll back safely. After the rollback is done, make sure to fix the bug that you discovered and commit it to either the development branch (if it was minor) or a separate bug-fix branch. Then proceed with the regular integration workflow: bug-fix → staging for testing, then bug-fix → development → production.

Deploying Urgent Fixes

Sometimes you need to deploy a bug-fix to production quickly, when your development branch is not ready for release yet. The workflow in that case stays the same as described above, but instead of merging the development branch into production you actually merge your bug-fix branch first into the development branch, then separately into production, without merging development into production. Then deploy the production branch as usual. This will ensure that only your bug-fix will be deployed to the production environment without all the other stuff from the development branch that’s not ready yet.

It is important to merge the bug-fix branch to both the development and production branches in this case, because your production branch should never include anything that doesn’t exist in your stable development branch. The development branch is where developers work all day, so if your fix is only in the production branch they will never see it and it can cause confusion.

Automatic Deployments to Production?

I can’t stress enough how important it is for all production deployments to be performed and verified by a responsible human being. Using automatic deployments for Production environment is dangerous and can lead to unexpected results. If every commit is deployed to your production site automatically, imagine what happens when someone commits something by mistake or commits an incomplete feature in the middle of the night when the rest of the team is sleeping? Using automatic deployments makes your Production environment very vulnerable. Please don’t do that, always deploy to production manually.

Permissions

Every developer should be able to deploy to the Staging environment. They just need to make sure they don’t overwrite each other’s changes when they do. That’s exactly why the staging branch is a great help: all changes from all developers are getting merged into it so it contains all of them.

Your Production environment, ideally, should only be accessible to a limited number of experienced developers. These guys should always be prepared to fix the servers immediately after a deployment went rogue.

Conclusion

We've been using this workflow with many customers for many years to deploy their applications. Some of these things were learned the hard way, through broken production servers. Follow these guidelines and all production deployments will become incredibly smooth and won't cause any stress at all.

Original Article

Developing and Deploying with Branches


Introduction

Getting started with Version Control can be an eye-opening experience. You may have already said to yourself, “How did I work without this?”. Now that you’ve got the basics of Version Control down, you want to start getting really productive by continuing to improve your workflow. Your next step is learning to code in branches.

Coding in branches is a simple practice that keeps you and your work more organised. Branches let you easily maintain your "in-progress" work separately from your completed, tested, and stable code. Not only is this an effective way to collaborate with others, but it will also allow you to automate the deployment of updates and fixes to your servers when needed.

Coding in master/trunk “branches”

Even if you don’t know how to use branching in your development process, you’ve already been using a branch just by committing your code to version control. In all major version control systems, each repository contains at least one branch by default, your working branch:

  • in Subversion this is a folder called trunk,
  • in Git this is a branch called master.

Without configuring anything, your first commit to any repository will be made to this working branch.

Each version control system has a different approach to creating, merging, and deleting branches. We'll be focusing on the overall development process, and suggest that you refer to the documentation of your preferred VCS for specific details about commands.

Branching Workflow

Using branches can seem complicated without some guidance. We’re going to help you by focusing on two specific uses for branches and the benefits of having them in your workflow.

Benefits of Branches: Building Features & Fixing Bugs

Most coding falls into one of these two categories: you're either building new features or fixing bugs in an existing codebase. A problem occurs when those two things need to happen at the same time.

Imagine that you’ve recently launched big Feature X. Things are going well at first, so you move on to start the next task on your to-do list, awesome Feature Y. You start coding and committing changes to your repository, but along the way discover a problem with big Feature X that you need to fix right away. What do you do?

If all of your work is being done in the default working branch of your repository, you’ll need to figure out how to save the work you’ve done so far on Feature Y, revert your repository to the state it was in when you deployed Feature X, make your fix, and then re-introduce your work from Feature Y. This is messy, and you could potentially lose some of your work or introduce new bugs.

Instead, you should be doing all of your work in feature or bug-fix branches and let the VCS do the hard work for you. You would branch your repository from the point where Feature X was deployed, creating an alternate working copy for you to do new work on. Your Feature Y branch includes the entire repository's history and code, but its separate history “starts” from the moment the branch is created. This allows you to work on the Feature Y branch, committing to your heart's content, without disturbing the code that you deployed to release Feature X.

Only once the feature is tested, complete, and ready for deployment do you merge that branch back into your main working branch.

This also means you can switch from a feature branch back to the default working branch at any time to create new branches from that point, like the bug-fix that you need to make. After switching back to the working branch, you would create a bug-fix branch. Working on the bug-fix in its own branch might not seem necessary if the fix is small, but following this practice will help you avoid situations where small bug-fixes turn into bigger bug-fixes, potentially leaving your working branch in a messy state.

Best Practices with Feature & Bug Branches

  • Try to avoid committing unfinished work to your repository’s default working branch.
  • Create a branch any time you begin non-trivial work including features and complex bug-fixes.
  • Don't forget to delete feature branches once they have been merged into the stable branch. This will keep your repository clean.

Benefits of Branching: Server Environment Branches

Another reason to use version control is so that you can use your repository as the source to deploy code to your servers. Much like feature and bug-fix branches, environment branches make it easy for you to separate your in-progress code from your stable code. Using environment branches and deploying from them means you will always know exactly what code is running on your servers in each of your environments.

We’ve been talking about your “default working branch” – but you can also think of this as your development environment branch. It’s a good idea to keep this branch clean – this is easily done by using feature and bug-fix branches and only merging them back to your development branch once they are tested. In other words, at any point in time your development branch should contain only stable code. Therefore, we will be using the name “stable” from now on in this guide to refer to this branch.

Using Production and Staging Branches

In addition to your development environment, you're likely to have at least one other environment: production, where your website or application actually runs. Having a production environment branch, and making it the only source of code that goes to production, ensures that you have a snapshot of what is on your production server at any time, and a granular history of what's been added to production and when.

In most cases you will have a staging environment as well, where your team or clients can review changes together. A staging environment branch gives that environment the same benefits as a production branch.

Diff Branches for Easy Code Review & Release Notes

When your development environment has been updated with features and bug-fixes that are tested, you can use your VCS to do a diff between your stable branch and staging branch to see what would be deployed that’s not currently on staging. This is a great opportunity to look for low quality or incomplete code, debug code, and other development leftovers that shouldn’t be deployed. This diff can also be helpful in writing your release notes.

Never Merge to Environment Branches Without Deploying

In order to keep your environment branches in sync with the environments, it’s a best practice to only execute a merge into an environment branch at the time you intend to deploy. If you complete a merge without deploying, your environment branch will be out of sync with your actual production environment.

With environment branches, you never want to commit changes directly to the branch. By only merging code from your stable branch into staging or production branches, you ensure that changes are flowing in one direction: from feature and bug-fix branches to stable and staging environments, and from stable to production. This keeps your history clean, and again, lets you be confident about knowing what code is running in which environments.


Best Practices with Environment Branches

  • Use your repository’s default working branch as your “stable” branch.
  • Create a branch for each environment, including staging and production.
  • Never merge into an environment branch unless you are ready to deploy to that environment.
  • Perform a diff between branches before merging—this can help prevent merging something that wasn’t intended, and can also help with writing release notes.
  • Merges should only flow in one direction: first from feature to staging for testing; then from feature to stable once tested; then from stable to production to ship.

Further Reading: Branching with Beanstalk

This is just an overview of some of the most common practices for using branches in version control. If you’re using Beanstalk, we’ve included some resources to help you take advantage of branching.

Original Article