Uncle Bob On Coding Standards

It’s important for a team to have a single coding standard for each language to avoid several problems:

  • A lack of standards can make your code unreadable.
  • Disagreement over standards can cause check-in wars between developers.
  • Seeing different standards in the same class can be extremely irritating.

Uncle Bob wrote this:

On coding standards:

  • Let them evolve during the first few iterations.
  • Let them be team specific instead of company specific.
  • Don’t write them down if you can avoid it. Rather, let the code be the way the standards are captured.
  • Don’t legislate good design. (e.g. don’t tell people not to use goto)
  • Make sure everyone knows that the standard is about communication, and nothing else.
  • After the first few iterations, get the team together to decide.

Original article

Git for Web Designers

Unless you’re a one person web shop with no team to collaborate with, you’ve experienced the frustration that goes along with file sharing. No matter how hard you try, when multiple people are working on a single project without a version control system in place things get chaotic.

If you work with developers on the build-out and implementation of websites, the merge between front-end templates and back-end functionality can be a scary black hole.

Issues like overwrites, lost files, and the all-too-common “working off a previous version” phenomenon crop up constantly. And once back-end functionality has been put into your templates, you become terrified to touch them for fear of breaking something a developer spent a great deal of time getting to work.

In addition, even if you have a common repository that everyone is pulling from, odds are at least one member of your team forgot to grab the latest files and is about to blow things up with their latest additions.

Fear not: Git is here to save the day. I’ll give you a quick overview of Git, an excellent version control system.

Version Control – A Quick and Dirty Explanation

Version Control (also known as Revision Control or Source Control Management) is a great way to solve the file sharing problem.

The basic concept is this: there is one main repository for all of the project files. Team members check files out, make changes, and then check them back in (or commit them). The Version Control System (VCS) automatically notes who changed the files, when they were changed, and what about them was new or different.

It also asks you to write a little note about the change so everyone on the project knows at a glance what you did and why. Each file will then have a revision history so you can easily go back to a previous version of any file if something goes horribly wrong.

A good version control system also allows you to merge changes to the same file. If you and another person work locally on the same file at the same time, when you push those files back to the main repository the system will merge both sets of changes to create a new, fully up-to-date file. If any conflicts arise during the merge it will highlight them for you.
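That check-out/commit/merge cycle can be sketched in a few Git commands (Git is introduced below; the file names and commit messages here are made up for illustration):

```shell
# Create a throwaway repository (normally you would clone a shared one).
tmp=$(mktemp -d) && cd "$tmp"
git init -q demo && cd demo
git config user.email "you@example.com"   # identity recorded with each commit
git config user.name "Demo Designer"

# Check in a first version of a file, with a note about the change.
echo "<h1>Home</h1>" > index.html
git add index.html
git commit -q -m "Add homepage markup"

# Make a second change on a branch, then merge it back in.
git checkout -q -b news-section
echo "<h2>News</h2>" >> index.html
git commit -q -am "Add news heading"
git checkout -q -
git merge -q news-section

# Each file now has a revision history: who, when, and why.
git log --oneline
```

Every commit carries the author, the timestamp and the message, which is exactly the “little note about the change” described above.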

You’re probably using a very crude version control system right now to keep your files straight. If you’re a designer, it probably looks something like a folder full of files named homepage.psd, homepage_v2.psd, homepage_final.psd and homepage_final_FINAL.psd.

This works well enough for PSDs and other large binary files, which don’t really lend themselves to version control. But there’s a much better way to do it when you’re managing the source code for a website.

Benefits to using a Version Control System include:

  • Files cannot be overwritten
  • There is a common repository that holds all the latest files
  • People can work on the same files simultaneously without conflict
  • You can revert to an older version of a file, or of the whole project, if needed
  • Your developers will be very happy

Even if you don’t work with a team, version control can be a lifesaver. Backing up files is one of the easiest things you can do to save yourself from losing work or having to start over.

The idea of a version control system seems daunting at first, especially since most of the documentation is written by and for developers. But once you make the move to incorporate it into your workflow, you’ll find it’s not nearly as hard as it looks.

Meet Git

OK, so now you can see why a version control system is a must-have for your web team. If you do a little Googling you’ll see that there are quite a few options out there, including SVN, Mercurial, CVS, Bazaar and Git. Any one of them could be a good solution for your needs, and I encourage you to do some research before selecting a version control system. In this article I’m going to focus on Git, the one I use daily. It’s a “rising star” that has gained popularity thanks to a strong Linux fanbase, GitHub and the Rails community.

Git is a free, open-source version control system originally created by Linus Torvalds for Linux kernel development. Linus is a very smart guy; when he sets out to solve a problem, he doesn’t mess around. One of Git’s big differentiators is that, unlike SVN and CVS, it is a distributed version control system. This means that every user has a complete copy of the repository data stored locally on their machine. What’s so great about that? A few things:

  • Everything is local, so you can work offline
  • There is no single point of failure. It doesn’t rely on one central server that could crash and burn, taking the only repository for your project with it.
  • Because it doesn’t have to communicate with a central server constantly, processes run much faster

Git has a slightly tougher learning curve than SVN, but the trade-off is worth it. Just think how impressed your developer friends will be when you tell them you’re using the new hotness that is Git! In all seriousness, I don’t think the learning curve is all that steep. 

Installing Git isn’t all fun and games, but there are plenty of resources online to get you through it. It will run on a PC, Mac or Linux box, although installation on Linux and OS X is considerably easier than on Windows.

You can download the latest version of Git here. Once you have the files, try this quick guide to get started with the installation process. For Windows users, this step-by-step visual guide should be helpful. Mac users, try this guide found on GitHub.
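Once the install finishes, a quick sanity check plus the one-time identity setup looks something like this (the name and e-mail are placeholders; the HOME override simply keeps this sketch from touching your real settings):

```shell
# Confirm the install worked; prints something like "git version 2.39.2".
git --version

# Point the global config at a scratch HOME so this demo is side-effect free.
export HOME=$(mktemp -d)

# One-time setup: Git records this identity with every commit you make.
git config --global user.name "Your Name"
git config --global user.email "you@example.com"

# Read the values back to confirm they were saved.
git config --global --get user.name
git config --global --get user.email
```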

Original Article

What is the difference between running in Debug and Release mode?

Is the only difference between the Debug and Release configurations that Debug has the DEBUG constant defined, and Release has the Optimize code flag set?

  1. Are there performance differences between these two configurations? Are there specific types of code that will cause big differences in performance here, or is it actually not that important?
  2. Are there any types of code that will run fine under the Debug configuration but might fail under the Release configuration? Or can you be certain that code that is tested and working fine under the Debug configuration will also work fine under the Release configuration?
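As a concrete illustration of the compile-time half of the difference, here is a minimal sketch (not from the original answer) showing the DEBUG constant and a call that disappears in Release builds:

```csharp
using System;
using System.Diagnostics;

class BuildModeDemo
{
    static void Main()
    {
#if DEBUG
        // Compiled only when the DEBUG constant is defined (the Debug configuration).
        Console.WriteLine("Debug build: extra diagnostics enabled");
#endif
        // Debug.Assert is marked [Conditional("DEBUG")], so the entire call,
        // including its argument expressions, is removed from Release builds.
        Debug.Assert(Environment.TickCount >= 0, "clock went backwards?");

        Console.WriteLine("This line runs in both configurations");
    }
}
```

Because the argument expressions of Debug.Assert vanish along with the call, relying on side effects inside them is a classic way for code to work under Debug and fail under Release.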

The C# compiler itself doesn’t alter the emitted IL a great deal in the Release build. Notable is that it no longer emits the NOP opcodes that allow you to set a breakpoint on a curly brace. The big one is the optimizer that’s built into the JIT compiler. It does make the following optimizations:

  • Method inlining. A method call is replaced by injecting the code of the method. This is a big one; it makes property accessors essentially free.
  • CPU register allocation. Local variables and method arguments can stay stored in a CPU register without ever (or less frequently) being stored back to the stack frame. This is a big one, notable for making debugging optimized code so difficult, and for giving the volatile keyword a meaning.
  • Array index checking elimination. An important optimization when working with arrays (all .NET collection classes use an array internally). When the JIT compiler can verify that a loop never indexes an array out of bounds then it will eliminate the index check. Big one.
  • Loop unrolling. Short loops (up to 4) with small bodies are eliminated by repeating the code in the loop body. Avoids the branch misprediction penalty.
  • Dead code elimination. A statement like if (false) { /…/ } gets completely eliminated. This can occur due to constant folding and inlining. Other cases are where the JIT compiler can determine that the code has no possible side effect. This optimization is what makes profiling code so tricky.
  • Code hoisting. Code inside a loop that is not affected by the loop can be moved out of the loop.
  • Common sub-expression elimination. x = y + 4; z = y + 4; becomes z = x;
  • Constant folding. x = 1 + 2; becomes x = 3; This simple example is caught early by the compiler, but happens at JIT time when other optimizations make this possible.
  • Copy propagation. x = a; y = x; becomes y = a; This helps the register allocator make better decisions. It is a big deal in the x86 jitter because it has so few registers to work with. Having it select the right ones is critical to perf.

These are very important optimizations that can make a great deal of difference when, for example, you profile the Debug build of your app and compare it to the Release build. That only really matters though when the code is on your critical path, the 5 to 10% of the code you write that actually affects the perf of your program. The JIT optimizer isn’t smart enough to know up front what is critical, it can only apply the “turn it to eleven” dial for all the code.

The effective result of these optimizations on your program’s execution time is often affected by code that runs elsewhere. Reading a file, executing a database query, etc., can make the work the JIT optimizer does completely invisible. It doesn’t mind though 🙂

The JIT optimizer is pretty reliable code, mostly because it has been put to the test millions of times. It is extremely rare to have problems in the Release build version of your program. It does happen, however. Both the x64 and the x86 jitters have had problems with structs. The x86 jitter has trouble with floating-point consistency, producing subtly different results when the intermediates of a floating-point calculation are kept in an FPU register at 80-bit precision instead of getting truncated when flushed to memory.

Reference

Using the WebGrid in MVC correctly

I know the WebGrid is a basic grid control, but sometimes you are constrained in what you can use with a customer, and the WebGrid comes from Microsoft by default, so it is safe to use and does not require any unknown third-party tools to be installed.

By default the WebGrid requires all the data to be loaded for paging to work: when you page through the data, all of it is returned. If you try to limit the data being returned, the WebGrid thinks the subset is all the data there is to display, and the paging links disappear! Not good! That is the issue this blog post sets out to overcome.

Throughout this example I’ve tried to keep to using interfaces wherever possible, as I always think these are easier to understand and they keep the application more flexible.

The WebGrid supports dynamic typing. While dynamic typing is probably a good fit for WebMatrix, there are benefits to strongly typed views. One way to achieve this is to create a derived type, WebGrid<T>; here it is:

using System;
using System.Collections.Generic;
using System.Web.Helpers;

public class WebGrid<T> : WebGrid
{
    public WebGrid(IEnumerable<T> source = null, IEnumerable<string> columnNames = null, string defaultSort = null, int rowsPerPage = 10, bool canPage = true, bool canSort = true, string ajaxUpdateContainerId = null, string ajaxUpdateCallback = null, string fieldNamePrefix = null, string pageFieldName = null,
    string selectionFieldName = null, string sortFieldName = null, string sortDirectionFieldName = null)
        : base(source.SafeCast<object>(), columnNames, defaultSort, rowsPerPage, canPage, canSort, ajaxUpdateContainerId, ajaxUpdateCallback, fieldNamePrefix, pageFieldName,
            selectionFieldName, sortFieldName, sortDirectionFieldName)
    {
    }
    public WebGridColumn _Column(string columnName = null, string header = null, Func<T, object> format = null, string style = null, bool canSort = true)
    {
        Func<object, object> wrappedFormat = null;
        if (format != null)
        {
            wrappedFormat = o => format((T)o);
        }
        var _scolumn = base.Column(columnName, header, wrappedFormat, style, canSort);
        return _scolumn;
    }
    public WebGrid<T> _Bind(IEnumerable<T> source, IEnumerable<string> columnNames = null, bool autoSortAndPage = true, int rowCount = -1)
    {
        base.Bind(source.SafeCast<object>(), columnNames, autoSortAndPage, rowCount);
        return this;
    }
}

And to extend the existing WebGrid we need a static extension class:

using System.Web.Mvc;
using System.Collections.Generic;

public static class WebGridExtensions
{
    public static WebGrid<T> Grid<T>(this HtmlHelper htmlHelper, IEnumerable<T> source, IEnumerable<string> columnNames = null, string defaultSort = null, int rowsPerPage = 10, bool canPage = true, bool canSort = true, string ajaxUpdateContainerId = null, string ajaxUpdateCallback = null, string fieldNamePrefix = null,
    string pageFieldName = null, string selectionFieldName = null, string sortFieldName = null, string sortDirectionFieldName = null)
    {
        return new WebGrid<T>(source, columnNames, defaultSort, rowsPerPage, canPage, canSort, ajaxUpdateContainerId, ajaxUpdateCallback, fieldNamePrefix, pageFieldName,
        selectionFieldName, sortFieldName, sortDirectionFieldName);
    }

    public static WebGrid<T> ServerPagedGrid<T>(this HtmlHelper htmlHelper, IEnumerable<T> source, int totalRows, IEnumerable<string> columnNames = null, string defaultSort = null, int rowsPerPage = 10, bool canPage = true, bool canSort = true, string ajaxUpdateContainerId = null, string ajaxUpdateCallback = null,
    string fieldNamePrefix = null, string pageFieldName = null, string selectionFieldName = null, string sortFieldName = null, string sortDirectionFieldName = null)
    {
        dynamic webGrid = new WebGrid<T>(null, columnNames, defaultSort, rowsPerPage, canPage, canSort, ajaxUpdateContainerId, ajaxUpdateCallback, fieldNamePrefix, pageFieldName,
        selectionFieldName, sortFieldName, sortDirectionFieldName);
        return webGrid.Bind(source, rowCount: totalRows, autoSortAndPage: false);
    }
}
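With these extensions in place, a strongly typed, server-paged grid can be declared in a Razor view roughly like this (a sketch; the model members correspond to the PagedDocumentsModel class shown further down, and the extension class’s namespace is assumed to be imported with @using):

```cshtml
@model WebGridPaging.Models.PagedDocumentsModel
@{
    // Bind only the current page of rows, but tell the grid the real total
    // so the paging links are rendered correctly.
    var grid = Html.ServerPagedGrid(Model.Documents, Model.TotalRows,
                                    rowsPerPage: Model.PageSize);
}
@grid.GetHtml()
```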

One further method we will need is a SafeCast extension for IEnumerable:

using System.Collections;
using System.Collections.Generic;
using System.Linq;

public static class EnumerableExtensions
{
    public static IEnumerable<TTarget> SafeCast<TTarget>(this IEnumerable source)
    {
        return source == null ? null : source.Cast<TTarget>();
    }
}

Okay, that’s it for the infrastructure; now let’s get down to using the new WebGrid. First, the domain objects or interfaces we want to display:

using System;
    
public interface IDocument
{
    global::System.Guid Id { get; set; }
    DateTime? Timestamp { get; set; }
    global::System.Boolean Inactive { get; set; }
}

Now for the important worker, the Service interface:

using System.Collections.Generic;
using System.Web.Helpers; // for SortDirection

public interface IDocumentService
{
    IEnumerable<IDocument> GetDocuments(out int totalRecords, int pageSize = -1, int pageIndex = -1, string sort = "Id", SortDirection sortOrder = SortDirection.Ascending);
}

and the implementation looks something like this:

using System;
using System.Collections.Generic;
using System.Linq;
using System.Linq.Expressions;
using System.Web.Helpers; // for SortDirection

public class EfDocumentService : IDocumentService
{
    private readonly IDictionary<string, Func<IQueryable<IDocument>, bool, IOrderedQueryable<IDocument>>>
        _documentOrderings
            = new Dictionary<string, Func<IQueryable<IDocument>, bool, IOrderedQueryable<IDocument>>>
                    {
                        {
                            "Id",
                            CreateOrderingFunc<IDocument, Guid>(p => p.Id)
                            },
                        {
                            "Inactive",
                            CreateOrderingFunc<IDocument, bool?>(p => p.Inactive )
                            },
                        {
                            "Timestamp",
                            CreateOrderingFunc<IDocument, DateTime?>(p => p.Timestamp )
                            }
                    };


    private static Func<IQueryable<T>, bool, IOrderedQueryable<T>> CreateOrderingFunc<T, TKey>(Expression<Func<T, TKey>> keySelector)
    {
        return (source, @ascending) => @ascending ? source.OrderBy(keySelector) : source.OrderByDescending(keySelector);
    }

    public IEnumerable<IDocument> GetDocuments(out int totalRecords, int pageSize, int pageIndex, string sort, SortDirection sortOrder)
    {
        using (var context = new EDM())
        {
            IQueryable<IDocument> documents = context.Documents;

            totalRecords = documents.Count();

            Func<IQueryable<IDocument>, bool, IOrderedQueryable<IDocument>> applyOrdering;
            if (!_documentOrderings.TryGetValue(sort, out applyOrdering))
            {
                // Fall back to the default ordering rather than throwing on an unknown sort key
                applyOrdering = _documentOrderings["Id"];
            }

            documents = applyOrdering(documents, sortOrder == SortDirection.Ascending);

            if (pageSize > 0 && pageIndex >= 0)
            {
                documents = documents.Skip(pageIndex * pageSize).Take(pageSize);
            }

            return documents.ToList();
        }
    }
}

Now for the Controller to process the data:

using System;
using System.Web.Helpers; // for SortDirection
using System.Web.Mvc;
using Domain;
using Models;

public class HomeController : Controller
{
    private readonly IDocumentService _documentService;

    public HomeController()
    {
        _documentService = new EfDocumentService();
    }

    public ActionResult Index(int page = 1, string sort = "Id", string sortDir = "Ascending")
    {
        const int pageSize = 5;
        int totalRecords;

        var documents = _documentService.GetDocuments(out totalRecords, pageSize: pageSize, pageIndex: page - 1, sort: sort, sortOrder: GetSortDirection(sortDir));

        var model = new PagedDocumentsModel
        {
            PageSize = pageSize,
            PageNumber = page,
            Documents = documents,
            TotalRows = totalRecords
        };

        return View(model);
    }

    private SortDirection GetSortDirection(string sortDirection)
    {
        if (sortDirection != null)
        {
            // "DESC"/"DESCENDING" must map to a descending sort
            if (sortDirection.Equals("DESC", StringComparison.OrdinalIgnoreCase) || sortDirection.Equals("DESCENDING", StringComparison.OrdinalIgnoreCase))
            {
                return SortDirection.Descending;
            }
        }
        return SortDirection.Ascending;
    }
}

The model to display all the information to the screen:

using System.Collections.Generic;
using Domain;

public class PagedDocumentsModel
{
    public int PageSize { get; set; }
    public int PageNumber { get; set; }
    public IEnumerable<IDocument> Documents { get; set; }
    public int TotalRows { get; set; }
}

And finally the view to display the information:

@model WebGridPaging.Models.PagedDocumentsModel
@using WebGridPaging.Infrastructure
@{
    Layout = null;
}
<!DOCTYPE html>
<html>
<head>
    <meta name="viewport" content="width=device-width" />
    <script src="../../Scripts/jquery-2.0.3.min.js" type="text/javascript"></script>
    <title>List</title>
</head> 
<body>
    <div id="grid">
        @{
            var grid = new WebGrid<WebGridPaging.Domain.IDocument>(null, rowsPerPage: Model.PageSize, defaultSort: "Id", ajaxUpdateContainerId:"grid");
            grid.Bind(Model.Documents, rowCount: Model.TotalRows, autoSortAndPage: false);
        }
        @grid.GetHtml()
    </div>
    <p>Model.Documents.Count() = @Model.Documents.Count()</p>
    <p>Model.TotalRows = @Model.TotalRows</p>
    <p>Time = @System.DateTime.Now</p>
</body>
</html>

I’ve included a project with code samples, but please note that it talks to an external database with a table called Documents, so either generate a sample database with this table or point it at one you already have.

WebGridPaging.zip (3.37 mb)

Reference links

Get the Most out of WebGrid in ASP.NET MVC

Bug Process

Background
When a bug is first created it is assigned a nominal story point value of 1.  This value is intended to cover only the investigation phase of the bug (or a fix, if it turns out to be a small issue).  Having a story point value against the bug (even if it initially covers only investigation) means that we can allocate each bug into a sprint along with other backlog items, and it gives us a window of time to investigate the problem.
The outcome of this stage should be either:
  1. Bug resolved; should only occur with small bugs where the issue was found, fixed and re-tested within the allocated tasked hours
  2. Further investigation required; in which case the story-point value of the bug should be increased to indicate that the problem is very complex and could not even be progressed during the first pass through this stage
  3. Investigation completed; where the problem has been documented within the bug item and a number of tasks (with associated hours assigned) have been created which detail how to resolve the problem
Bugs which have not progressed (e.g. outcome 2 in the list above) should be re-story-pointed and added back into the main backlog to go through stage 1 again at a later date.  Obviously, bugs which have been resolved (outcome 1) can be marked as such, and bugs which have been investigated and tasked (outcome 3) can be added back into the product backlog for assignment to a future sprint.
 
Note:  The important point about this phase is that it should be considered time-boxed, based on the time allocated to the tasks created for the bug at sprint planning – given the default story point value and current velocity, this should be roughly 3 hours in total.
 
Stage 2 – Fixing
Bugs which enter this stage should have already been through the first stage of investigation at least once and should have appropriate tasks against them which detail what needs to happen to resolve the problem.
Since each bug should have been through the first stage of analysis, we should know enough at this stage to be confident in being able to both apply a fix and complete the associated tasks in the allocated sprint.

Validation from WCF layer through to MVC

 

First off, why would you want to perform validation in the WCF layer?

After a rigorous PEN test, it was noted that validation was happening mainly in the client browser, and the application required validation to occur at the business logic layer – in our case the WCF layer.

In general, when working with an MVC/ASP.NET web application, you would typically want to do validation on the client-side as well as on the server side. Whilst the custom validation is simple enough, you’d have to duplicate it on the client and server, which is annoying – now you have two places to maintain a single validation routine.

Let’s look at the different validation options that can help us solve these validation issues.

There are five validation approaches you can choose from, each with its own advantages and disadvantages, and it is possible to apply several at the same time. For example, you can implement the self-validation and data annotation attribute approaches together, which gives you a lot of flexibility.

  1. Rule sets in configuration
  2. Validation block attributes
  3. Data annotation attributes
  4. Self-validation
  5. Validators created programmatically

Rule sets in Configuration

In this approach, we put our validation rules into the configuration file (web.config in ASP.NET and app.config in Windows applications). Here is an example showing how to define validation rules:

<?xml version="1.0" encoding="utf-8" ?>
<configuration>
  <configSections>
    <section name="validation"
             type="Microsoft.Practices.EnterpriseLibrary.Validation.Configuration.ValidationSettings, Microsoft.Practices.EnterpriseLibrary.Validation, Version=5.0.414.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35"
             requirePermission="true" />
  </configSections>
  <validation>
    <type name="ELValidation.Entities.BasicCustomer"
          defaultRuleset="BasicCustomerValidationRules"
          assemblyName="ELValidation, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null">
      <ruleset name="BasicCustomerValidationRules">
        <properties>
          <property name="CustomerNo">
            <validator type="Microsoft.Practices.EnterpriseLibrary.Validation.Validators.NotNullValidator, Microsoft.Practices.EnterpriseLibrary.Validation"
                       negated="false" messageTemplate="Customer must have valid no"
                       tag="CustomerNo" name="Not Null Validator" />
            <validator type="Microsoft.Practices.EnterpriseLibrary.Validation.Validators.StringLengthValidator, Microsoft.Practices.EnterpriseLibrary.Validation"
                       upperBound="5" lowerBound="5" lowerBoundType="Inclusive" upperBoundType="Inclusive"
                       negated="false" messageTemplate="Customer no must have {3} characters."
                       tag="CustomerNo" name="String Length Validator" />
            <validator type="Microsoft.Practices.EnterpriseLibrary.Validation.Validators.RegexValidator, Microsoft.Practices.EnterpriseLibrary.Validation"
                       pattern="[A-Z]{2}[0-9]{3}" options="None" patternResourceName="" patternResourceType=""
                       messageTemplate="Customer no must be 2 capital letters and 3 numbers."
                       messageTemplateResourceName="" messageTemplateResourceType=""
                       tag="CustomerNo" name="Regex Validator" />
          </property>
        </properties>
      </ruleset>
    </type>
  </validation>
</configuration>

Validation Block Attributes

In this approach, we define our validations through the attributes defined in Enterprise Library validation block.

[NotNullValidator(MessageTemplate = "Customer must have valid no")]
[StringLengthValidator(5, RangeBoundaryType.Inclusive, 
		5, RangeBoundaryType.Inclusive, 
		MessageTemplate = "Customer no must have {3} characters.")]
[RegexValidator("[A-Z]{2}[0-9]{3}", 
	MessageTemplate = "Customer no must be 2 capital letters and 3 numbers.")]
public string CustomerNo { get; set; }

Message templates are a good way to provide meaningful failure messages; the numbered placeholders in curly brackets (such as {3}) are filled in by the Enterprise Library validation block at run time.

Data Annotation Attributes

In this approach, we define our validations through the attributes in the System.ComponentModel.DataAnnotations namespace.

[Required(ErrorMessage = "Customer no can not be empty")]
[StringLength(5, ErrorMessage = "Customer no must be 5 characters.")]
[RegularExpression("[A-Z]{2}[0-9]{3}", 
	ErrorMessage = "Customer no must be 2 capital letters and 3 numbers.")]
public string CustomerNo { get; set; }

This approach is widely used in conjunction with Entity Framework, MVC and ASP.NET validations.
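Outside of MVC model binding, the same attributes can be evaluated by hand using the Validator class from System.ComponentModel.DataAnnotations. A minimal sketch (the Customer type here is invented for the example):

```csharp
using System;
using System.Collections.Generic;
using System.ComponentModel.DataAnnotations;

public class Customer
{
    [Required(ErrorMessage = "Customer no can not be empty")]
    [RegularExpression("[A-Z]{2}[0-9]{3}",
        ErrorMessage = "Customer no must be 2 capital letters and 3 numbers.")]
    public string CustomerNo { get; set; }
}

class Program
{
    static void Main()
    {
        var customer = new Customer { CustomerNo = "ab123" };  // fails the regex
        var results = new List<ValidationResult>();

        bool isValid = Validator.TryValidateObject(
            customer, new ValidationContext(customer), results,
            validateAllProperties: true);  // also run non-[Required] attributes

        // isValid is false here; each failure carries its ErrorMessage
        foreach (var result in results)
            Console.WriteLine(result.ErrorMessage);
    }
}
```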

Self-validation

This approach gives us a lot of flexibility to create and execute complex validation rules.

To implement this approach, we first decorate the object type with the HasSelfValidation attribute, as shown in the following example:

[HasSelfValidation]
public class AttributeCustomer
{
    …
}

 

Then, we write our validation logic in a method decorated with the SelfValidation attribute:

[SelfValidation]
public void Validate(ValidationResults validationResults)
{
    var age = DateTime.Now.Year - DateTime.Parse(BirthDate).Year;

    // Due to laws, only customers older than 18 can be registered 
    // to system and allowed to order products
    if (age < 18)
    {
        validationResults.AddResult(
            new ValidationResult("Customer must be older than 18",
                this,
                "BirthDate",
                null,
                null));
    }
}

Validators Created Programmatically

This approach is different from the others because validation rules are created programmatically and executed independently of the type.

First, we define our validation rules:

Validator[] validators = new Validator[] 
{ 
    new NotNullValidator(false, "Value can not be NULL."),
    new StringLengthValidator(5, RangeBoundaryType.Inclusive, 
	5, RangeBoundaryType.Inclusive,  "Value must be between {3} and {5} chars.")
};

Then, we add them to one of the composite validators, depending on your preference.

var validator = new AndCompositeValidator(validators);

In this example, we check that the value under test is not null and is exactly five characters long.
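Running the composite against a candidate value then looks roughly like this (a sketch against the Validation Application Block API):

```csharp
// Both rules must pass for the composite to pass.
var validator = new AndCompositeValidator(validators);

// "AB12" is only four characters, so the string-length rule fails.
ValidationResults results = validator.Validate("AB12");

if (!results.IsValid)
{
    foreach (ValidationResult result in results)
    {
        Console.WriteLine(result.Message);
    }
}
```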

Finally, I want to mention validation against collections. It is similar to validation for single objects:

// Initialize our object and set the values
var customer = new AttributeCustomer();
            
FillCustomerInfo(customer);

// Create a list of objects and add the objects to be tested to the list
List<AttributeCustomer> customers = new List<AttributeCustomer>();
customers.Add(customer);

// Initialize our validator by providing the type of objects in the list and validate them
Validator cusValidator = new ObjectCollectionValidator(typeof(AttributeCustomer));
ValidationResults valResults = cusValidator.Validate(customers);

// Show our validation results
ShowResults(valResults);

 

My first thought was to use the MVC answer to validation – DataAnnotations. So why can’t we do this?

WCF is a technology for exposing services, and it does so in an interoperable way. A data contract exposed by the service is just a container for data. It doesn’t matter how many fancy attributes you use on the contract or how much custom logic you put inside the get and set methods of the properties; on the client side you always see just the properties.

The reason for this is that once you expose the service, it exposes all its contracts in an interoperable way – service and operation contracts are described by WSDL, and data contracts are described by XSD. XSD can describe only the structure of the data, not the logic. Validation can, in some limited way, be described in XSD, but the .NET XSD generator doesn’t do this. Once you add a service reference to your WCF service, the proxy generator takes the WSDL and XSD as its source and creates your classes again without all those attributes.

If you want client-side validation, you should implement that validation on the client side in the first place – it can be done using buddy classes for the partial classes used by the WCF proxy. If you don’t want to use this approach (and are prepared for a maintenance nightmare), you can instead share an assembly containing your entities between the WCF client and the WCF service, and reuse those types when adding the service reference. This creates tight coupling between your service and the ASP.NET MVC application.
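The buddy-class technique mentioned above looks roughly like this (the Customer proxy type and its members are assumptions for the sketch):

```csharp
using System.ComponentModel.DataAnnotations;

// The other half of the partial class generated by "Add Service Reference".
[MetadataType(typeof(CustomerMetadata))]
public partial class Customer
{
}

// The buddy class: MVC reads the attributes declared here as if they
// were applied to the generated proxy type itself.
public class CustomerMetadata
{
    [Required(ErrorMessage = "Customer no can not be empty")]
    [RegularExpression("[A-Z]{2}[0-9]{3}",
        ErrorMessage = "Customer no must be 2 capital letters and 3 numbers.")]
    public string CustomerNo { get; set; }
}
```

Because the buddy class lives in the client project, it survives regeneration of the proxy when the service reference is updated.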

What about Microsoft.Practices.EnterpriseLibrary.Validation.Validators?

This is a possible solution at the WCF layer, but what it does not provide is a way to pass the validation rules back to the UI for JavaScript validation, which you get with DataAnnotations.

It may be worth using both the DataAnnotations and the Enterprise Library validation block together.

One question still remains: if validation fails in the services layer (WCF), how does the validation message get passed back to the calling WCF client?


So what else can we use?

First of all, do not throw exceptions as a way of validating data – that is far too expensive an operation compared with handling invalid data gracefully.

If you would like to see some sample code, take a look at Microsoft Enterprise Library 5.0 – Introduction to Validation Block by Ercan Anlama.

ELValidation_src.zip (772.17 kb)

 

We’ve looked at all the possible validation options; now the searching question is how do we pass the validation over WCF to the WCF client? To be continued……

How to perform JavaScript Unit Testing

Following the TDD approach when working with MVC, it is quite easy to produce unit tests based on the MVC controllers. But with more and more business logic appearing in the user interface through libraries such as jQuery, it is becoming more important to unit test JavaScript as well.

What options are available today, and, more importantly, what are the pros and cons of each solution?

JsUnit

Pros

    can be invoked from an ant build file
    launches browser to run the tests
    Eclipse plug-in

Cons

    launches browser to run the tests
    Does not support writing the unit test code in a js file: it has to be embedded inside an HTML file
    it has not been updated for a few years

Notes:

    There is a JsUnit (2).
    Ant is an open-source build tool; it is called “Ant” because it is a little thing that can build big things.

RhinoUnit

Pros

    ant driven
    supports js file
    very simple to use

Cons

    Simulation of JavaScript engine: not advanced enough to support all coding types

crosscheck

Pros

    Can be invoked from ant build file
    Simulates real browser behaviour

Cons

    Simulation of JavaScript engine from a limited number of browser versions
    No activity for 2 years: it does not support Firefox versions 2.x or 3.x

jsspec

Pros

    Runs on actual browser

Cons

    JavaScript only framework: cannot be called from ant build file

jspec

Pros

    Runs on actual browser

Cons

    Does not seem to support all code types
    JavaScript only framework: cannot be called from ant build file

Screw.unit

Pros

    Runs on actual browser

Cons

    JavaScript only framework: cannot be called from ant build file

JSTest

Pros

    Can be integrated with most testing frameworks, such as MSTest, NUnit, xUnit etc.
    ant build supported
    Simple to install, only requires a single dll
    Browserless
    Can be installed using NuGet
    Small overhead of 56k

Cons

    Not very active on CodePlex, with only 51 downloads
    Reference directly to the js files, so if the js files are moved all the tests need to be updated

I know there are many, many more, and this is just a cross-section of JavaScript testing libraries, but I think I may have found the crown jewels for TDD.

It looks like JSTest is not the only choice we have, but it is the simplest and best approach to JavaScript unit testing – so why has it only had 51 downloads? Have I found a hidden gem or a monster waiting to bite me?

JSTest does provide an easy way to apply the Test-Driven Development (TDD) process.

For an example of how to use it, check out Unit Testing JavaScript with MSTest and JSTest.Net by Shawn Sweeney. I have created a sample MVC application showing it working.

JSTestMVCExample.zip (2.67 mb)

In conclusion, I do think we now have the tools to be fully “TDD” compliant.

JSTest.zip (16.80 kb)