Creating API Help Pages

Install ASP.NET and Web Tools 2012.2 Update. This update integrates help pages into the Web API project template.

Next, create a new ASP.NET MVC 4 project and select the Web API project template. The project template creates an example API controller named ValuesController. The template also creates the API help pages. All of the code files for the help page are placed in the Areas folder of the project.

When you run the application, the home page contains a link to the API help page. From the home page, the relative path is /Help.

This link brings you to an API summary page.

The MVC view for this page is defined in Areas/HelpPage/Views/Help/Index.cshtml. You can edit this page to modify the layout, introduction, title, styles, and so forth.

The main part of the page is a table of APIs, grouped by controller. The table entries are generated dynamically, using the IApiExplorer interface. (I’ll talk more about this interface later.) If you add a new API controller, the table is automatically updated at run time.

The “API” column lists the HTTP method and relative URI. The “Description” column contains documentation for each API. Initially, the documentation is just placeholder text. In the next section, I’ll show you how to add documentation from XML comments.

Each API has a link to a page with more detailed information, including example request and response bodies.

Adding Help Pages to an Existing Project

You can add help pages to an existing Web API project by using NuGet Package Manager. This option is useful if you start from a different project template than the “Web API” template.

From the Tools menu, select Library Package Manager, and then select Package Manager Console. In the Package Manager Console window, type one of the following commands:

For a C# application: Install-Package Microsoft.AspNet.WebApi.HelpPage

For a Visual Basic application: Install-Package Microsoft.AspNet.WebApi.HelpPage.VB

There are two packages, one for C# and one for Visual Basic. Make sure to use the one that matches your project.

This command installs the necessary assemblies and adds the MVC views for the help pages (located in the Areas/HelpPage folder). You’ll need to manually add a link to the Help page. The URI is /Help. To create a link in a Razor view, add the following:

@Html.ActionLink("API", "Index", "Help", new { area = "" }, null)

Also, make sure to register areas. In the Global.asax file, add the following code to the Application_Start method, if it is not there already:

protected void Application_Start()
{
    // Add this code, if not present.
    AreaRegistration.RegisterAllAreas();

    // ...
}

Adding API Documentation

By default, the help pages have placeholder strings for documentation. You can use XML documentation comments to create the documentation. To enable this feature, open the file Areas/HelpPage/App_Start/HelpPageConfig.cs and uncomment the following line:

config.SetDocumentationProvider(new XmlDocumentationProvider(
 HttpContext.Current.Server.MapPath("~/App_Data/XmlDocument.xml")));

Now enable XML documentation. In Solution Explorer, right-click the project and select Properties. Select the Build page.

Under Output, check XML documentation file. In the edit box, type “App_Data/XmlDocument.xml”.

Next, open the code for the ValuesController API controller, which is defined in /Controllers/ValuesController.cs. Add some documentation comments to the controller methods. For example:

/// <summary>
/// Gets some very important data from the server.
/// </summary>
public IEnumerable<string> Get()
{
 return new string[] { "value1", "value2" };
}

/// <summary>
/// Looks up some data by ID.
/// </summary>
/// <param name="id">The ID of the data.</param>
public string Get(int id)
{
 return "value";
}

Tip: If you position the caret on the line above the method and type three forward slashes, Visual Studio automatically inserts the XML elements. Then you can fill in the blanks.

Now build and run the application again, and navigate to the help pages. The documentation strings should appear in the API table.

The help page reads the strings from the XML file at run time. (When you deploy the application, make sure to deploy the XML file.)

Under the Hood

The help pages are built on top of the ApiExplorer class, which is part of the Web API framework. The ApiExplorer class provides the raw material for creating a help page. For each API, ApiExplorer contains an ApiDescription that describes the API. For this purpose, an “API” is defined as the combination of HTTP method and relative URI. For example, here are some distinct APIs:

  • GET /api/Products
  • GET /api/Products/{id}
  • POST /api/Products

If a controller action supports multiple HTTP methods, the ApiExplorer treats each method as a distinct API.

To hide an API from the ApiExplorer, add the ApiExplorerSettings attribute to the action and set IgnoreApi to true.

[ApiExplorerSettings(IgnoreApi = true)]
public HttpResponseMessage Get(int id) { }

You can also add this attribute to the controller, to exclude the entire controller.

The ApiExplorer class gets documentation strings from the IDocumentationProvider interface. As you saw earlier, the Help Pages library provides an IDocumentationProvider that gets documentation from XML documentation strings. The code is located in /Areas/HelpPage/XmlDocumentationProvider.cs. You can get documentation from another source by writing your own IDocumentationProvider. To wire it up, call the SetDocumentationProvider extension method, defined in HelpPageConfigurationExtensions.
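As a rough illustration, a minimal custom provider might look like the following sketch. It is written against the two-method form of IDocumentationProvider that shipped with the Web API 1 help page package (later Web API 2 versions add further overloads), and the StaticDocumentationProvider name is my own:

using System.Web.Http.Controllers;
using System.Web.Http.Description;

// Hypothetical provider that returns canned strings; a real one might read
// from a database or resource file instead of the XML comments file.
public class StaticDocumentationProvider : IDocumentationProvider
{
    public string GetDocumentation(HttpActionDescriptor actionDescriptor)
    {
        return "Documentation for " + actionDescriptor.ActionName;
    }

    public string GetDocumentation(HttpParameterDescriptor parameterDescriptor)
    {
        return "Documentation for parameter " + parameterDescriptor.ParameterName;
    }
}

You would then register it in HelpPageConfig.cs with config.SetDocumentationProvider(new StaticDocumentationProvider()).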

ApiExplorer automatically calls into the IDocumentationProvider interface to get documentation strings for each API. It stores them in the Documentation property of the ApiDescription and ApiParameterDescription objects.

Original article: https://docs.microsoft.com/en-us/aspnet/web-api/overview/getting-started-with-aspnet-web-api/creating-api-help-pages

DateTimeOffset for SOAP

If you are serious about using SOAP, it won’t be long before you find out that if you need a full ISO date in the XML, it will not work using C# and SOAP, as DateTimeOffset is not supported by Microsoft’s XmlSerializer.

Here is how you go about ensuring that you deliver an ISO DateTime formatted field in SOAP.

First, you’ll need to create a struct to hold the ISO 8601 date time offset, and we will use the IXmlSerializable interface, which provides custom formatting for XML serialisation and deserialisation.

public struct Iso8601SerializableDateTimeOffset : IXmlSerializable
{
    public DateTimeOffset value;

    public Iso8601SerializableDateTimeOffset(DateTimeOffset value)
    {
        this.value = value;
    }

    public static implicit operator Iso8601SerializableDateTimeOffset(DateTimeOffset value)
    {
        return new Iso8601SerializableDateTimeOffset(value);
    }

    public static implicit operator DateTimeOffset(Iso8601SerializableDateTimeOffset instance)
    {
        return instance.value;
    }

    public static bool operator ==(Iso8601SerializableDateTimeOffset a, Iso8601SerializableDateTimeOffset b)
    {
        return a.value == b.value;
    }

    public static bool operator !=(Iso8601SerializableDateTimeOffset a, Iso8601SerializableDateTimeOffset b)
    {
        return a.value != b.value;
    }

    public static bool operator <(Iso8601SerializableDateTimeOffset a, Iso8601SerializableDateTimeOffset b)
    {
        return a.value < b.value;
    }

    public static bool operator >(Iso8601SerializableDateTimeOffset a, Iso8601SerializableDateTimeOffset b)
    {
        return a.value > b.value;
    }

    public override bool Equals(object o)
    {
        if (o is Iso8601SerializableDateTimeOffset)
            return value.Equals(((Iso8601SerializableDateTimeOffset)o).value);
        else if (o is DateTimeOffset)
            return value.Equals((DateTimeOffset)o);
        else
            return false;
    }

    public override int GetHashCode()
    {
        return value.GetHashCode();
    }

    public XmlSchema GetSchema()
    {
        return null;
    }

    public void ReadXml(XmlReader reader)
    {
        var text = reader.ReadElementString();
        value = DateTimeOffset.ParseExact(text, format: "o", formatProvider: null);
    }

    public override string ToString()
    {
        return value.ToString(format: "o");
    }

    public string ToString(string format)
    {
        return value.ToString(format);
    }

    public void WriteXml(XmlWriter writer)
    {
        writer.WriteString(value.ToString(format: "o"));
    }
}

We also need to cater for JSON date time offsets, so we will write a converter:

public class UtcDateTimeOffsetConverter : Newtonsoft.Json.Converters.IsoDateTimeConverter
{
    public override void WriteJson(JsonWriter writer, object value, JsonSerializer serializer)
    {
        if (value is Iso8601SerializableDateTimeOffset)
        {
            var date = (Iso8601SerializableDateTimeOffset)value;
            value = date.value;
        }
        base.WriteJson(writer, value, serializer);
    }

    public override object ReadJson(JsonReader reader, Type objectType, object existingValue, JsonSerializer serializer)
    {
        object value = base.ReadJson(reader, objectType, existingValue, serializer);
        if (value is Iso8601SerializableDateTimeOffset)
        {
            var date = (Iso8601SerializableDateTimeOffset)value;
            value = date.value;
        }
        return value;
    }
}

Next, we need to implement it in an object model, with a slight twist: you cannot use the Iso8601SerializableDateTimeOffset struct directly, so we need to wrap the result in a string property.

public class Foo
{
    public Guid Id { get; set; }

    [JsonConverter(typeof(UtcDateTimeOffsetConverter))]
    [XmlElement("AcquireDate")]
    public string acquireDateForXml
    {
        get { return AcquireDate.ToString(); }
        set { AcquireDate = DateTimeOffset.Parse(value); }
    }

    [XmlIgnore]
    public Iso8601SerializableDateTimeOffset? AcquireDate;
}
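As a quick sanity check, serialising a Foo with the XmlSerializer should now emit the full ISO 8601 value; here is a hypothetical usage sketch (the example output value is illustrative):

using System;
using System.IO;
using System.Xml.Serialization;

class Demo
{
    static void Main()
    {
        var foo = new Foo
        {
            Id = Guid.NewGuid(),
            AcquireDate = new Iso8601SerializableDateTimeOffset(DateTimeOffset.Now)
        };

        var serializer = new XmlSerializer(typeof(Foo));
        using (var writer = new StringWriter())
        {
            serializer.Serialize(writer, foo);
            // The AcquireDate element now carries the offset, e.g.
            // <AcquireDate>2018-05-04T10:31:28.1905743+01:00</AcquireDate>
            Console.WriteLine(writer.ToString());
        }
    }
}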

That is it, job done.

https://github.com/BryanAvery/DateTimeOffsetSOAP


Bing Maps in an MVC Application

Here is a simple example of how to use Bing Maps in an MVC application.

I am using Bing Maps V8, which is the current version from Microsoft.

Also, note that document.ready will fire long before the map script loads, as the script loads asynchronously. Sometimes document.ready will fire before the page is loaded, which means the map div might not even be available. To overcome this, we are using the callback parameter of the map script URL, for example:

http://www.bing.com/api/maps/mapcontrol?callback=LoadMap

BingMapsDemo

The Microsoft documentation can be found here:

https://www.bing.com/api/maps/sdkrelease/mapcontrol/isdk/Overview#JS

To set up a key, you can do this quite easily at the Bing Maps Portal website:

https://www.bingmapsportal.com

Caching to improve the user experience

(Added LazyCache into the testing)

One of the essential factors in building high-performance, scalable web applications is the ability to store items, whether data objects, pages, or even parts of a page, in memory the first time they are requested. You can store these items on the web server, in other software in the request stream such as a proxy server, or at the browser. This allows you to avoid recreating information that satisfied a previous request. Known as caching, this technique lets you use many approaches to store page output or application data across HTTP requests and reuse it. When the server does not have to recreate information, you save time and resources, and throughput and scalability increase.

It is possible to obtain significant performance improvements in ASP.NET applications by caching frequently requested objects and data in either the Application or Cache classes. While the Cache class indeed offers far more flexibility and control, it only appears to provide a marginal advantage in throughput over the Application class for caching. It would be challenging to develop a testing scheme that could accurately measure the potential benefits of the Cache class’s built-in management of lesser-used objects through the scavenging process, as opposed to the fact that Application does not offer this feature. The decision is left to the developer and should be based on the needs of the project and its usage patterns.

In this article, I’ll be looking at application caching and which options are the most effective and scalable. If you would like to know more about web caching, take a look at the Microsoft Caching Architecture Guide for .NET Framework Applications.

We all know that caching increases performance; what I am not getting into here is when it should or shouldn’t be used. I’m more interested in the performance and scalability of the caching used.

For the performance testing I am going to use a number of different caching methods.

I’ve tried to provide some of the different options that are available for caching; if you know of any others, please let me know and I’ll add them to this post.

There are two types of tests. The first is a small test which waits for 30 milliseconds and then returns the current date and time:

Thread.Sleep(30);
return System.DateTime.UtcNow.ToString(CultureInfo.InvariantCulture);

Then there is a test to generate a much larger object, of around 267 MB:

public IEnumerable<Block> GetData()
{
    // should eat around 267 Mb of RAM.
    var blocks = new Block[512 * 512 * 1];

    for (var i = 0; i < (512 * 512) * 1; i++)
    {
        blocks[i] = new Block();
    }
    return blocks;
}
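The Block class itself is not shown in the post. For the arithmetic to come out at roughly 267 MB, each of the 512 × 512 = 262,144 instances needs to hold about 1 KB, so a stand-in like this (my assumption, not the original code) would do:

public class Block
{
    // 1 KB per instance; 262,144 instances is roughly 268 MB of raw data.
    public byte[] Data = new byte[1024];
}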

Both are straightforward tests; by no means are they an accurate representation of real-world methods, but they do show how performance is affected by the different caching options.

The idea is to iterate around these tests 1,000 times and record how long each test takes. What I don’t do here is work out the amount of memory being used (if you know of an easy way to include this, please let me know).

I won’t go into great detail here on the results of the tests, as I think you’ll find them self-explanatory in the test application, attached at the bottom of this post.

A few things to note about the testing and results: you should use a parallel process for the tests, as this will be much closer to a real-world multi-threading environment.

Also, careful consideration should be given to locking in the cache module: you don’t want the function for a given cache key running while another thread is already processing the same function; the second caller should wait until the first has finished and stored its result in the cache, which increases performance.
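Here is a minimal sketch of that per-key locking idea, using MemoryCache from System.Runtime.Caching; the class and method names are my own, not from the test project:

using System;
using System.Collections.Concurrent;
using System.Runtime.Caching;

public static class LockingCache
{
    // One lock object per cache key, so threads asking for different keys
    // never block each other. (Hypothetical helper, for illustration only.)
    private static readonly ConcurrentDictionary<string, object> Locks =
        new ConcurrentDictionary<string, object>();

    public static T GetOrAdd<T>(string key, Func<T> create, TimeSpan lifetime)
    {
        var cached = MemoryCache.Default.Get(key);
        if (cached != null)
            return (T)cached;

        var keyLock = Locks.GetOrAdd(key, _ => new object());
        lock (keyLock)
        {
            // Re-check inside the lock: another thread may have just stored it.
            cached = MemoryCache.Default.Get(key);
            if (cached != null)
                return (T)cached;

            T value = create();
            MemoryCache.Default.Set(key, value, DateTimeOffset.UtcNow.Add(lifetime));
            return value;
        }
    }
}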

Here is the sample code with the different sample tests showing how useful each option is:

Cache

SignalR – the right way

SignalR has been around for quite some time now (since 2014), so why aren’t more people using it? First, what is SignalR?

ASP.NET SignalR is a library for ASP.NET developers that makes developing real-time web functionality easy. SignalR allows bi-directional communication between server and client. Servers can now push content to connected clients instantly as it becomes available. SignalR supports Web Sockets and falls back to other compatible techniques for older browsers. SignalR includes APIs for connection management (for instance, connect and disconnect events), grouping connections, and authorisation.

For further information check https://www.asp.net/signalr

Could it be because people just don’t get it or really understand what it can do, or perhaps they hit issues without realising they don’t understand the structure? Whatever the reason people are not using SignalR, I’m going to provide a useful sample application to get you off on the right footing.

There are a lot of different samples and information on how to use SignalR, but I’m always a firm believer of KISS (Keep It Simple Stupid).

There are two primary sides to SignalR, the client side and the server hubs, here I have created an MVC application with Individual User Accounts for Authentication.

First, add the SignalR NuGet package

Install-Package Microsoft.AspNet.SignalR

Then we need to map the Hubs connection to the application.

To enable SignalR in your application, create a class called Startup with the following:

using Microsoft.Owin;
using Owin;

namespace MyWebApplication
{
    public class Startup
    {
        public void Configuration(IAppBuilder app)
        {
            app.MapSignalR();
        }
    }
}

What is important here is that app.MapSignalR() is called last; this is because any changes to the app need to be made before you call the mapping. The incorrect order got me once when we had some custom authentication, and it was not being passed to the SignalR hubs.

I won’t be going into how you go about setting up the step by step process, as this is documented in many places, and also comes in the readme.txt file as part of the NuGet package.

What I will be adding is the Authorization to the project, which is covered by Microsoft in Authentication and Authorization for SignalR Hubs.

What is important to note is how the connection is handled. We are using a class called SignalRConnectionManager, which controls the connections based on the username coming from the context and the connection id, which also comes from the context.

public class SignalRConnectionManager<T> : IDisposable
{
    private readonly ConcurrentDictionary<T, HashSet<string>> _connections = new ConcurrentDictionary<T, HashSet<string>>();

    public int Count { get { return _connections.Count; } }

    /// <summary>
    /// Attempts to add the specified userid and connectionid
    /// </summary>
    public void Add(T userid, string connectionid)
    {
        HashSet<string> connections = _connections.GetOrAdd(userid, new HashSet<string>());

        lock (connections)
        {
            connections.Add(connectionid);
        }
    }

    public IEnumerable<string> Connections(T userid)
    {
        HashSet<string> connections;
        if (_connections.TryGetValue(userid, out connections))
        {
            return connections;
        }

        return Enumerable.Empty<string>();
    }

    public IEnumerable<T> UserIds()
    {
        return _connections.Keys;
    }

    /// <summary>
    /// Attempts to remove a connectionid that has the specified userid
    /// </summary>
    public void Remove(T userid, string connectionid)
    {
        HashSet<string> connections;
        if (!_connections.TryGetValue(userid, out connections))
        {
            return;
        }

        lock (connections)
        {
            connections.Remove(connectionid);

            if (connections.Count == 0)
            {
                HashSet<string> emptyConnections;
                _connections.TryRemove(userid, out emptyConnections);
            }
        }
    }

    #region IDisposable Support

    private bool disposedValue = false; // To detect redundant calls

    protected virtual void Dispose(bool disposing)
    {
        if (!disposedValue)
        {
            if (disposing)
            {
                _connections.Clear();
            }

            // TODO: free unmanaged resources (unmanaged objects) and override a finalizer below.
            // TODO: set large fields to null.

            disposedValue = true;
        }
    }

    // This code added to correctly implement the disposable pattern.
    public void Dispose()
    {
        // Do not change this code. Put cleanup code in Dispose(bool disposing) above.
        Dispose(true);
        // TODO: uncomment the following line if the finalizer is overridden above.
        // GC.SuppressFinalize(this);
    }

    #endregion IDisposable Support
}
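To show where this manager fits, here is a sketch of a hub that uses it. The hub name and its Send method are inferred from the client script in the next section; the rest is my own wiring, not code from the sample project:

using System.Threading.Tasks;
using Microsoft.AspNet.SignalR;

[Authorize]
public class HeartBeatHub : Hub
{
    // Shared across hub instances, keyed by the authenticated username.
    private static readonly SignalRConnectionManager<string> Connections =
        new SignalRConnectionManager<string>();

    public override Task OnConnected()
    {
        Connections.Add(Context.User.Identity.Name, Context.ConnectionId);
        return base.OnConnected();
    }

    public override Task OnDisconnected(bool stopCalled)
    {
        Connections.Remove(Context.User.Identity.Name, Context.ConnectionId);
        return base.OnDisconnected(stopCalled);
    }

    public void Send(string message)
    {
        // Broadcast to every client; picked up by the client-side
        // broadcastMessage handler shown below.
        Clients.All.broadcastMessage(message);
    }
}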

Client Code

In my case I’m going to be looking at JavaScript within a C# MVC application, which looks like this:

<p>SignalR</p>
<!--The jQuery library is required. -->
<script src="~/Scripts/jquery-1.10.2.js"></script>
<!--Reference the SignalR library. -->
<script src="~/Scripts/jquery.signalR-2.2.3.min.js"></script>
<!--Reference the auto generated SignalR hub script. -->
<script src="~/signalr/hubs"></script>

<!--Add script to update the page and send messages - SignalR - HeartBeat.-->
<script type="text/javascript">
    $(function () {
        // Declare a proxy to reference the hub.
        var heartBeat = $.connection.heartBeatHub;

        heartBeat.client.broadcastMessage = function (html) {
            $('#message').html(html).fadeIn();
        };

        if ($.connection.hub && $.connection.hub.state === $.signalR.connectionState.disconnected) {
            $.connection.hub.start()
                .done(function () {
                    console.log('SignalR now connected, connection ID=' + $.connection.hub.id);
                    heartBeat.server.send('Heart beat listening');
                    console.log("Heart beat started");
                })
                .fail(function () { console.log('Could not Connect!'); });
        }
    });
</script>
<div id="message">
</div>

Two important lines in this code are:

Reference the auto generated SignalR hub script

<script src="~/signalr/hubs"></script>

Declaring the proxy to reference the hub. You’ll notice the case of the letter ‘h’ is different from the C# code; this is important, otherwise you will get a JavaScript error in your browser.

var heartBeat = $.connection.heartBeatHub;

Another important thing to note is that you should only start the hub once, no matter how many SignalR endpoints you have, and that you place the listening code within the done section of the hub. I’ve commented out another listening hub in this sample code:

if ($.connection.hub && $.connection.hub.state === $.signalR.connectionState.disconnected) {
    $.connection.hub.start()
        .done(function () {
            console.log('SignalR now connected, connection ID=' + $.connection.hub.id);
            heartBeat.server.send('Heart beat listening');
            console.log("Heart beat started");
            //anotherHub.server.send('Another hub listening');
        })
        .fail(function () { console.log('Could not Connect!'); });
}

That is it for now, a good clean SignalR project, and here it is: SignalR

Async Unit Tests

In theory, async unit testing seems easy: run the async method, wait until it is finished, and look at the results. But as you will find out, it is not that easy.
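To see why, consider the naive approach first (a sketch reusing the hypothetical MyClass.Divide from the examples below). An async void test hands control back to the runner at the first await, so many test frameworks will report a pass before the assertion ever executes:

[TestMethod]
public async void FourDividedByTwoIsTwo_Naive()
{
    // The runner cannot await an async void method: it sees the method
    // "return" at the first await and may mark the test as passed even
    // if the assertion below later fails.
    int result = await MyClass.Divide(4, 2);
    Assert.AreEqual(2, result);
}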

Here is the official approach to Async unit testing

[TestMethod]
public void FourDividedByTwoIsTwo()
{
    GeneralThreadAffineContext.Run(async () =>
    {
        int result = await MyClass.Divide(4, 2);
        Assert.AreEqual(2, result);
    });
}
    
[TestMethod]
[ExpectedException(typeof(DivideByZeroException))]
public void DenominatorIsZeroThrowsDivideByZero()
{
    GeneralThreadAffineContext.Run(async () =>
    {
        await MyClass.Divide(4, 0);
    });
}

Hang on, what is GeneralThreadAffineContext? It is a utility class originally distributed as part of the Async CTP, and the project file can be found here: AsyncTestUtilities

Original article: Async Unit Tests, Part 2: The Right Way

Repository Pattern – for the REST API

The Repository Pattern used to be the next big thing, but over time it was largely replaced by frameworks such as Entity Framework and LINQ, which provide much more functionality and flexibility.

I started working on an external customer’s REST API when I realised that the Repository Pattern would work perfectly here.

Let’s recap the Repository Pattern.

The Repository Pattern has gained quite a bit of popularity since it was first introduced as a part of Domain-Driven Design in 2004. Primarily, it provides an abstraction of data, so that your application can work with a pure abstraction that has an interface approximating that of a collection. Adding, removing, updating, and selecting items from this collection is done through a series of straightforward methods, without the need to deal with database concerns like connections, commands, cursors, or readers. Using this pattern can help achieve loose coupling and can keep domain objects persistence ignorant. Although the pattern is prevalent (or perhaps because of this), it is also frequently misunderstood and misused. There are many different ways to implement the Repository pattern. Let’s consider a few of them, and their merits and drawbacks.

Repository Per Entity or Business Object

The most straightforward approach, especially with an existing system, is to create a new Repository implementation for each business object you need to store to or retrieve from your persistence layer. Further, you should only implement the specific methods you are calling in your application. Avoid the trap of creating a “standard” repository class, base class, or default interface that you must implement for all repositories. Yes, if you need to have an Update or a Delete method, you should strive to make its interface consistent (does Delete take an ID, or does it take the object itself?), but don’t implement a Delete method on a LookupTableRepository on which you’re only ever going to call List(). The most significant benefit of this approach is YAGNI – you won’t waste any time implementing methods that never get called.

Generic Repository Interface

Another approach is to go ahead and create a simple, generic interface for your Repository. You can constrain what kind of types it works with to be of a specific type or to implement a particular interface (e.g. ensuring it has an Id property, as is done below using a base class). An example of a generic C# repository interface might be:

public interface IRepository<T> where T : EntityBase
{
    T GetById(int id);
    IEnumerable<T> List();
    IEnumerable<T> List(Expression<Func<T, bool>> predicate);
    void Add(T entity);
    void Delete(T entity);
    void Edit(T entity);
}
 
public abstract class EntityBase
{
   public int Id { get; protected set; }
}

The advantage of this approach is that it ensures you have a common interface for working with any of your objects. You can also simplify the implementation by using a Generic Repository Implementation (below). Note that taking in a predicate eliminates the need to return an IQueryable since any filter details can be passed into the repository. This can still lead to leaking of data access details into calling code, though. Consider using the Specification pattern (described below) to alleviate this issue if you encounter it.

Generic Repository Implementation

Assuming you create a Generic Repository Interface, you can implement the interface generically as well. Once this is done, you can quickly develop repositories for any given type without having to write any new code, and your classes that declare dependencies can merely specify IRepository<Item> as the type, making it easy for your IoC container to match that up with a Repository<Item> implementation. You can see an example Generic Repository Implementation, using Entity Framework, here.

public class Repository<T> : IRepository<T> where T : EntityBase
{
    private readonly ApplicationDbContext _dbContext;

    public Repository(ApplicationDbContext dbContext)
    {
        _dbContext = dbContext;
    }

    public virtual T GetById(int id)
    {
        return _dbContext.Set<T>().Find(id);
    }

    public virtual IEnumerable<T> List()
    {
        return _dbContext.Set<T>().AsEnumerable();
    }

    public virtual IEnumerable<T> List(System.Linq.Expressions.Expression<Func<T, bool>> predicate)
    {
        return _dbContext.Set<T>()
            .Where(predicate)
            .AsEnumerable();
    }

    public void Add(T entity)
    {
        _dbContext.Set<T>().Add(entity);
        _dbContext.SaveChanges();
    }

    public void Edit(T entity)
    {
        _dbContext.Entry(entity).State = EntityState.Modified;
        _dbContext.SaveChanges();
    }

    public void Delete(T entity)
    {
        _dbContext.Set<T>().Remove(entity);
        _dbContext.SaveChanges();
    }
}

Note that in this implementation, all operations are saved as they are performed; there is no Unit of Work being applied. There are a variety of ways in which Unit of Work behaviour can be added to this implementation, the simplest of which is to add an explicit Save() method to the IRepository<T> interface, and to only call the underlying SaveChanges() method from this method.
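A minimal sketch of that Unit of Work idea might look like this; the class name is my own, and only the relevant members are shown:

public class UnitOfWorkRepository<T> where T : EntityBase
{
    private readonly ApplicationDbContext _dbContext;

    public UnitOfWorkRepository(ApplicationDbContext dbContext)
    {
        _dbContext = dbContext;
    }

    public void Add(T entity)
    {
        _dbContext.Set<T>().Add(entity);      // staged, not yet saved
    }

    public void Delete(T entity)
    {
        _dbContext.Set<T>().Remove(entity);   // staged, not yet saved
    }

    // The only place SaveChanges() is called: commits the whole unit of work.
    public void Save()
    {
        _dbContext.SaveChanges();
    }
}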

IQueryable?

Another common question with Repositories has to do with what they return. Should they return data, or should they return queries that can be further refined before execution (IQueryable)? The former is safer, but the latter offers a great deal of flexibility. In fact, you can simplify your interface to provide only a single method for reading data if you go the IQueryable route, since from there any number of items can be returned.

A problem with this approach is that it tends to result in business logic bleeding into higher application layers and becoming duplicated there. If the rule for returning valid customers is that they’re not disabled and they’ve bought something in the last year, it would be better to have a method ListValidCustomers() that encapsulates this logic rather than specifying these criteria in lambda expressions in multiple different UI layer references to the repository. Another typical example in real applications is the use of “soft deletes” represented by an IsActive or IsDeleted property on an entity. Once an item has been deleted, 99% of the time it should be excluded from display in any UI scenario, so nearly every request will include something like

.Where(foo => foo.IsActive)

in addition to whatever other filters are present. This is better achieved within the repository, where it can be the default behaviour of the List() method, or the List() method might be renamed to something like ListActive(). If it’s essential to view deleted/inactive items, a unique List method can be used for just this (probably rare) purpose.

Specification

Repositories that follow the advice of not exposing IQueryable can often become bloated with many custom query methods. The solution to this is to separate queries into their types, using the Specification Design Pattern. The Specification can include the expression used to filter the query, any parameters associated with this expression, as well as how much data the query should return (i.e. “.Include()” in EF/EF Core). Combining the Repository and Specification patterns can be a great way to ensure you follow the Single Responsibility Principle in your data access code. See an example of how to implement a generic repository along with a generic specification in C#.
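For illustration, a bare-bones specification might look like the following; the ISpecification interface and the Customer type are my own stand-ins, modelled on the ListValidCustomers() rule above:

using System;
using System.Linq.Expressions;

public interface ISpecification<T>
{
    Expression<Func<T, bool>> Criteria { get; }
}

// Hypothetical domain type for the example.
public class Customer
{
    public bool IsDisabled { get; set; }
    public DateTime LastPurchaseDate { get; set; }
}

// Encapsulates the "valid customer" rule in one place, instead of
// repeating the lambda at every UI-layer call site.
public class ValidCustomerSpecification : ISpecification<Customer>
{
    public Expression<Func<Customer, bool>> Criteria
    {
        get
        {
            return c => !c.IsDisabled
                     && c.LastPurchaseDate > DateTime.UtcNow.AddYears(-1);
        }
    }
}

The repository’s List method can then accept an ISpecification<T> rather than a raw lambda, keeping the business rule out of the UI layer.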

Repository Pattern for the REST API

Now let’s see if this can work for the REST API. First, the HTTP verbs that are used: the primary or most-commonly-used HTTP verbs (or methods, as they are properly called) are POST, GET, PUT, PATCH, and DELETE. These correspond to create, read, update, and delete (or CRUD) operations, respectively. There are a number of other verbs, too, but they are utilised less frequently. Of those less-frequent methods, OPTIONS and HEAD are used more often than others.

HTTP Verb   CRUD
POST        Create
GET         Read
PUT         Update/Replace
PATCH       Update/Modify
DELETE      Delete

One of the main differences is that calls to a REST API are asynchronous, and the interface needs to reflect this:

public interface IRepository
{
    Task AddAsync<T>(T entity, string requestUri);
    Task<HttpStatusCode> DeleteAsync(string requestUri);
    Task EditAsync<T>(T t, string requestUri);
    Task<T> GetAsync<T>(string path);
}

In the concrete implementation I am using the HttpClient to connect to the REST API, which needs a few parameters (a rough sketch follows the list below):

  • Uri of the end point
  • Authorization – type of authorization to be used, default NoAuth
  • username – username for basic authorization – default null
  • password – the password for basic authorization – default null
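As a rough sketch of such a concrete implementation (ignoring the authorization options above for brevity, and assuming the Microsoft.AspNet.WebApi.Client package for PostAsJsonAsync, PutAsJsonAsync and ReadAsAsync):

using System;
using System.Net;
using System.Net.Http;
using System.Threading.Tasks;

public class RestRepository : IRepository
{
    private readonly HttpClient _client;

    public RestRepository(Uri baseAddress)
    {
        _client = new HttpClient { BaseAddress = baseAddress };
    }

    public async Task AddAsync<T>(T entity, string requestUri)
    {
        var response = await _client.PostAsJsonAsync(requestUri, entity);
        response.EnsureSuccessStatusCode();   // POST = Create
    }

    public async Task<HttpStatusCode> DeleteAsync(string requestUri)
    {
        var response = await _client.DeleteAsync(requestUri);
        return response.StatusCode;           // DELETE = Delete
    }

    public async Task EditAsync<T>(T t, string requestUri)
    {
        var response = await _client.PutAsJsonAsync(requestUri, t);
        response.EnsureSuccessStatusCode();   // PUT = Update/Replace
    }

    public async Task<T> GetAsync<T>(string path)
    {
        var response = await _client.GetAsync(path);   // GET = Read
        response.EnsureSuccessStatusCode();
        return await response.Content.ReadAsAsync<T>();
    }
}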

A sample application can be found here: HttpSample

Original reference Repository Pattern A data persistence abstraction

Timeout Process

When you are dealing with large amounts of data, or with processes that are very time-hungry, you sometimes need the ability to time out a task and continue.

Here is a snippet of code with a loop that takes around 1 second to complete. You can set the timeout to any value you like; if it is less than 1 second, the task will be cancelled, freeing up its resources.

class Program
{
    public static void Main()
    {
        int timeOutInMilliseconds = 450;
        var startTime = DateTime.Now;

        var cTokenSource = new CancellationTokenSource();

        // Create a cancellation token from CancellationTokenSource
        var cToken = cTokenSource.Token;
        // Create a task and pass the cancellation token
        var t1 = Task<int>.Factory.StartNew(() => GenerateNumbers(cToken), cToken);

        // Register a delegate for a callback when a cancellation request is made
        cToken.Register(() => cancelNotification(cTokenSource));

        while (true)
        {
            if (t1.IsCompleted)
            {
                Console.WriteLine("Finished Processing");
                break;
            }

            if (DateTime.Now > startTime.AddMilliseconds(timeOutInMilliseconds))
            {
                Console.WriteLine("Timed out");
                cTokenSource.Cancel();
                break;
            }
        }

        Console.WriteLine("finished");
        Console.ReadLine();
    }

    private static Task HandleTimer(CancellationTokenSource cancellationTokenSource)
    {
        cancellationTokenSource.Cancel();
        Console.WriteLine("\nHandler not implemented...");
        return Task.Run(() => { var a = 0; });
    }

    static int GenerateNumbers(CancellationToken cancellationToken)
    {
        int i;
        for (i = 0; i < 10; i++)
        {
            Console.WriteLine("Method1 - Number: {0}", i);
            Thread.Sleep(100);

            // Poll the IsCancellationRequested property
            // to check if cancellation was requested
            if (cancellationToken.IsCancellationRequested)
                break;
        }
        return i;
    }

    // Notify when the task is cancelled
    static void cancelNotification(CancellationTokenSource cancellationTokenSource)
    {
        cancellationTokenSource.Cancel();
    }
}
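For comparison, here is a more compact sketch of the same timeout using CancelAfter and a blocking Wait on the token, avoiding the manual polling loop (it reuses the GenerateNumbers method above and would sit in the same Program class):

public static void MainWithCancelAfter()
{
    var cTokenSource = new CancellationTokenSource();
    cTokenSource.CancelAfter(450);   // timeout in milliseconds

    var t1 = Task<int>.Factory.StartNew(
        () => GenerateNumbers(cTokenSource.Token), cTokenSource.Token);

    try
    {
        // Wait throws OperationCanceledException if the token is
        // cancelled before the task completes.
        t1.Wait(cTokenSource.Token);
        Console.WriteLine("Finished Processing");
    }
    catch (OperationCanceledException)
    {
        Console.WriteLine("Timed out");
    }
}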

Source: Parallel

Differences Between ASMX and WCF Services

Some differences between ASMX and WCF services are subtle and some differences between them are not subtle. The purpose of this section is to identify many of these differences and to provide guidance for how to handle them when preparing for, or when performing, migration. ASMX provides a very successful baseline for services, and WCF extends those capabilities for the next generation of services.

There are differences between ASMX and WCF, but it is important to also understand that WCF supports the same capabilities that ASMX provides. For example, you can use message types, XmlSerialization, custom SOAP headers, and Web Service Enhancements (WSE) in a WCF service. The remainder of this section describes some of these items; it also describes other areas that represent differences between ASMX and WCF.

The following subtopics describe the major differences between ASMX and WCF services:

  • Message Structure and Serialization
  • SOAP Extensions
  • Transport Protocols
  • Security
  • Exception Handling
  • State Management

Message Structure and Serialization

Serialization is the process of translating binary objects into a data format that can be transmitted across process boundaries, computer boundaries, and network boundaries. When serialized data reaches the destination, or endpoint, it can be deserialized back into binary objects for use by an application. Both ASMX and WCF use SOAP for messages passed between two endpoints. However, for serialization, each uses different classes that implement different rules.

ASMX uses the XmlSerializer to translate classes into XML for communication, and to translate the XML back into classes on the receiver’s end. All public members are serialized unless they are marked as non-serializable using the XmlIgnoreAttribute. A large number of attributes can also be used to control the structure of the XML. For example, a property can be represented as an attribute using the XmlAttributeAttribute, or as an element using the XmlElementAttribute. The use of these attributes provides a great deal of control over how a type is serialized into XML; however, that power comes with an unfortunate downside. It may be possible to create XML structures that are not easily translated by other type systems, such as Java; XML structures that are not easily translated can hamper interoperability.

WCF uses a DataContractSerializer to perform the same translation; however, the behavior is different from the XmlSerializer. The XmlSerializer uses an implicit model where all public properties are serialized unless they are marked with the XmlIgnoreAttribute, but the DataContractSerializer uses an explicit model where the properties and/or fields that you want to serialize must be marked with a DataMemberAttribute. It is important to note that WCF can also use the XmlSerializer to perform serialization operations.

A notable difference between ASMX and WCF is the WCF ability to serialize class members regardless of the access specifier used. This means it is now possible to serialize private fields. Using this capability, you can encapsulate fields in a data type. For example, you can provide read-only access to a private field by implementing only the get method on a property. You can then serialize that private field by adding the DataMember attribute to the field in a WCF service.
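To make the serialization difference concrete, here is a small illustrative sketch (the Employee type and its members are my own): with the DataContractSerializer, only members marked [DataMember] are serialized, regardless of their access specifier:

using System.Runtime.Serialization;

[DataContract]
public class Employee
{
    [DataMember]                // serialized: explicitly opted in
    public string LastName { get; set; }

    [DataMember]                // private members can be serialized too
    private int _salaryBand;

    public string ScratchPad { get; set; }  // public, but ignored: no [DataMember]
}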

Another important difference is that the DataContractSerializer generates a simplified XML structure that increases its ability to interoperate between different operating systems. In addition, the ability for users to control the XML structure is limited. This simplified structure also means that future versions of WCF will be able to target this structure for optimization. Finally, in comparison to the XmlSerializer, the serialization of data is greatly improved with the DataContractSerializer.

Recommendation

Using WCF, you can use the XmlSerializer for types that are already created. However, to maximize interoperability, it is recommended to use WCF data contracts and the DataContractSerializer. The Web Service Software Factory (ASMX) guidance package, also referred to as the ASMX guidance package, creates types that should also migrate to WCF data contracts if no additional XML serialization attributes, such as XmlAttributeAttribute, are added. The ASMX guidance package includes an XmlNamespaceAttribute to specify the namespace of the types.

SOAP Extensions

Developers can use the SOAP extensions feature of ASP.NET to interact directly with SOAP messages. By using SOAP extensions, you can intercept the SOAP message and insert your own code into the SOAP message pipeline, which allows you to extend the capabilities of SOAP. For example, features such as security, transaction management, routing, and tracing can be implemented by using SOAP extensions. The downside of this capability is that it reduces the SOAP message’s ability to interoperate with other operating systems. In other words, other operating systems may not be able to handle a SOAP message that has been customized.

WCF does not support the use of SOAP extensions, but it does have other extensibility points that can be used to intercept and manipulate SOAP messages. For example, you can use a behavior extension to hook into the WCF Dispatcher with a class that implements IDispatchMessageInspector.
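As a sketch of that extensibility point (the class name is my own; the interface and its two methods are part of System.ServiceModel.Dispatcher):

using System.ServiceModel;
using System.ServiceModel.Channels;
using System.ServiceModel.Dispatcher;

public class LoggingMessageInspector : IDispatchMessageInspector
{
    public object AfterReceiveRequest(ref Message request,
        IClientChannel channel, InstanceContext instanceContext)
    {
        // Inspect (or replace) the incoming SOAP message here.
        System.Diagnostics.Trace.WriteLine(request.Headers.Action);
        return null;   // correlation state handed to BeforeSendReply
    }

    public void BeforeSendReply(ref Message reply, object correlationState)
    {
        // Inspect (or replace) the outgoing SOAP message here.
    }
}

The inspector is attached to the dispatch runtime through an endpoint or service behavior, which is the behavior extension mentioned above.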

Recommendation

Avoid the use of SOAP extensions when developing new ASMX services that will be migrated to WCF unless you are willing to rewrite them or if you are confident that WCF provides a similar capability.

Transport Protocols

ASMX services use the HTTP transport protocol for communications with Internet Information Services (IIS) as the host. An ASMX service file has the file name extension .asmx, which is accessed using a Uniform Resource Locator (URL); for example, http://localhost/ASMXEmployee/EmployeeManager.asmx.

WCF services can also use the HTTP protocol, but unlike ASMX services, you also have the option to use other transport protocols. The following protocols are supported by WCF:

  • Hypertext Transfer Protocol (HTTP)
  • Transmission Control Protocol (TCP)
  • Message Queuing (also known as MSMQ)
  • Named pipes

Many different hosts can also be used with WCF services. For example, IIS can be used as a host with the HTTP transport protocol. Windows services and stand-alone applications can be used as a host for other transport protocols.

Accessing a service hosted in IIS is similar to accessing an ASMX service; for example, http://localhost/WCFEmployee/EmployeeManager.svc.

The only difference between the two services is the file name extension. However, you can also configure a WCF service to use the .asmx file name extension, as described in the Service Configuration topic. WCF services that use other transport protocols are accessed using methods associated with the specific protocol.

When migrating an ASMX service to WCF, the protocol you choose is based on the client applications that will be accessing the service. If you need to support ASMX client applications, you also need to use the HTTP protocol. In addition, when configuring a WCF service for ASMX client applications, you need to configure the service to use BasicHttpBinding, as described in the Service Configuration topic.

Recommendation

When migrating services from ASMX to WCF, and if ASMX client applications need to access the migrated service, use the HTTP protocol and configure the new service to use BasicHttpBinding.

Security

Typically, authentication and authorization with ASMX is done using IIS and ASP.NET security configurations and transport layer security. In addition, Web Service Extensions (WSE) can be used to provide additional security capabilities, such as message layer security.

WCF can use the same security components as ASMX, such as transport layer security and WSE. However, WCF also has its own built-in security, which allows for a consistent security programming model for any transport. The security implemented by WCF supports many of the same capabilities as IIS and WS-* security protocols. However, when using IIS, you must also enable anonymous access to the service so that WCF security is implemented.

A powerful reason for migrating an ASMX service to WCF is to take advantage of new security capabilities that are provided by WCF. For example, WCF provides support for claims-based authorization that provides finer-grained control over resources than role-based security. In addition, instead of depending on a transport protocol such as HTTP and extensions such as WSE, security is built into WCF. The end result is that security is consistent regardless of the host that is used to implement a WCF service.

Recommendation

When possible, migrate both client applications and services to WCF to take advantage of the new security features that are available with WCF. Use of the WCF Security guidance package is also recommended when configuring security for WCF services and client applications.

If a migrated WCF service must support ASMX client applications, you can use transport security associated with HTTP and HTTPS. You can also use WSE 3.0, but you must configure a custom WCF binding for this to work. For additional information about using WSE 3.0 with WCF, see Interoperating with WSE Sample on MSDN.

Exception Handling

With ASMX services, unhandled exceptions are always returned to client applications as SOAP faults. An ASMX service can also throw the SoapException class, which provides more control over the content of the SOAP fault that is returned to the client application.

However, when an unhandled exception occurs, the default configuration of WCF protects sensitive data from exposure by not returning sensitive information in SOAP fault messages. You can override this behavior by adding a serviceDebug element to the service behavior in the configuration file that is associated with a WCF service. Overriding this behavior is not recommended for deployment; it should be used only in a development environment.

Similar to the SoapException class used with ASMX services, you can also throw a custom exception by using the FaultException<T> type, where T is a data contract that contains the exception information. The use of custom exceptions also requires the declaration of a FaultContract on operations that will throw the exception.

The following code example demonstrates how a DataContract is used to define a FaultContract declared on a service operation in a ServiceContract.

[DataContract]
public class FindEmployeeFault
{
    [DataMember]
    public string Request;
    [DataMember]
    public string Description;
}

[ServiceContract]
public interface IEmployeeManager
{
    [OperationContract(Action = "FindEmployeeByLastName")]
    [FaultContract(typeof(EmployeeService.FaultContracts.FindEmployeeFault))]
    EmployeeService.DataContracts.Employee FindEmployeeByLastName(string request);
}

This next code example demonstrates how to catch a FaultException that may be thrown by the service operation shown in the preceding code example.

// client function used to find an employee
public Employee FindEmployee( string lastName )
{
    Employee employee = null;
    try
    {
        employee = proxy.FindEmployeeByLastName( lastName   );
    }
    catch( FaultException<FindEmployeeFault> ex )
    {
        Console.WriteLine("FaultException<FindEmployeeFault>: While finding " 
            + ex.Detail.Request
            + ". Because: " 
            + ex.Detail.Description );
    }
    return employee;
}

The filtering of exception data that is returned from a service is described using an Exception Shielding pattern. The pattern describes how exception handlers can be used to filter the data that is returned to a client application. To help facilitate the creation of exception handlers in WCF services, the Service Factory: Modeling Edition includes the Data Contract Model that contains a fault contract shape that you can use to create WCF fault contracts.

Recommendation

WCF provides exception shielding, but you should always define exception handlers for both ASMX and WCF services. For more information, see Exception Handling in Service Oriented Applications.

State Management

ASMX services have access to the HttpContext class, which provides access to state managers with a different scope, such as application scope and session scope. ASP.NET also provides control over how state data is managed. Consequently, you should minimize the use of state in a service because of the effect it has on the scalability of an application.

WCF provides extensible objects that can be used for state management. Extensible objects implement the System.ServiceModel.IExtensibleObject<T> interface. The two main classes that implement the IExtensibleObject interface are ServiceHostBase and InstanceContext. The ServiceHostBase class provides the ability for all service instances in the same host to access the same state data. On the other hand, the InstanceContext class only allows access to state data within the same service instance.

To implement state management in WCF, you need to define a class that implements the IExtension interface, which is used to hold the state data. Instances of this class can then be added to one of the IExtensibleObject classes using the Extensions property. The following code example shows how to implement state using the InstanceContext.

internal class StateData: IExtension<InstanceContext>
{
    public string Identifier = string.Empty;
    public string Data = string.Empty;

    public StateData( string data )
    {
        Identifier = Guid.NewGuid().ToString();
        Data = data;
    }
}

public string AddStateData( string data )
{
    StateData state = new StateData( data );
    OperationContext.Current.InstanceContext.Extensions.Add(state);
    return state.Identifier;
}

public string GetStateData( string identifier )
{
    Collection<StateData> dataCollection = 
        OperationContext.Current.InstanceContext.Extensions.FindAll<StateData>();

    string result = string.Empty;
    foreach( StateData data in dataCollection )
    {
        if( data.Identifier == identifier )
        {
            result = data.Data;
            break;
        }
    }
    return result;
}

Recommendation

The ability to control where state is maintained in WCF is much more limited than in ASP.NET. As a result, state management is commonly used as the primary reason for enabling the ASP.NET compatibility mode, which provides access to ASP.NET components; this means you have much more control over where state is maintained. For example, when a service is configured for ASP.NET compatibility, you have access to HTTP context classes that provide the same functionality as the ASMX implementation.

When developing new ASMX services, you should avoid the use of state in your services; all services should be stateless.

Original source from Microsoft, edited and moved here as the content was at risk of being removed from Microsoft’s site.

Defining and Using Custom Attribute Classes in C#

Attributes are advantageous in C# for providing non-business-related functionality in an application, or for abstracting out code to make it easier to follow.

The issue is: when does the attribute code get called, and how do you force it to run?

I set myself the task of writing a simple console application showing different ways you can use attributes.

I found a fascinating article for my first sample application, Defining and Using Custom Attribute Classes in C# by David Tansey; below is an edited version of this article:

The complex, component-style development that businesses expect out of modern software developers requires greater design flexibility than the design methodologies of the past.

Microsoft’s .NET Framework makes extensive use of attributes to provide added functionality through what is known as “declarative” programming. Attributes enhance flexibility in software systems because they promote loose coupling of functionality. Because you can create your custom attribute classes and then act upon them, you can leverage the loose coupling power of attributes for your purposes.

The .NET Framework makes many aspects of Windows programming much more straightforward. In many cases, the Framework’s use of metadata that .NET compilers bind to assemblies at compile time makes tricky programming easier. Indeed, the use of intrinsic metadata makes it possible for .NET to relieve us from “DLL Hell.”

Lucky for us, the designers of the .NET framework did not choose to keep these metadata “goodies” hidden away under the covers. The designers gave us the Reflection API through which a .NET application can programmatically investigate this metadata. An application can “reflect” upon any imaginable aspect of a given assembly or on its contained types and their members.

Binding the metadata to the executable provides many advantages. It makes the assemblies in .NET fully self-describing. This allows developers to share components across languages and eliminates the need for header files. (They can become out-of-date relative to the implementation code.)

With all this positive news about .NET metadata, it seems hard to believe there could be anything more to the story. But there is: you can create your own application-specific metadata in .NET and then use that metadata for any purpose you can imagine.

Developers define their application-specific metadata through the use of Custom Attributes. Because these attribute values become just another part of the metadata bound into an assembly, the custom attribute values are available for examination by the Reflection API.

In this article, you’ll learn how to define custom attribute classes, how to apply attributes to classes and methods in your source code, and you’ll learn how to use the Reflection API to retrieve and act upon these values.

How Does .NET Use Attributes in the Common Language Runtime?

Before you start to consider what you can accomplish with your custom attribute classes, let’s examine some of the standard attributes that the Common Language Runtime already makes available.

The [WebMethod] attribute provides a simple example. It lets you turn any public method of a WebService subclass into a method that you can expose as part of the Web Service, merely by attaching the [WebMethod] attribute to the method definition.

public class SomeWebService : System.Web.Services.WebService
{

   [WebMethod]
   public DataSet GetDailySales( )
   {
      // code to process the request...
   }
}

You just attach the [WebMethod] attribute to the method, and .NET handles everything else for you behind the scenes.

Using the [Conditional] attribute allows you to make a given method conditional based on the presence or absence of the specified preprocessing symbol. For example, the following code:

public class SomeClass
{
   [Conditional( "DEBUG" )]
   public void UnitTest( )
   {
      // code to do unit testing...
   }
}

Indicates that the UnitTest( ) method of this class is “conditional” based on the presence of the preprocessing symbol “DEBUG”. The fascinating part is what happens. The compiler stubs out all calls to the method when the condition fails rather than attempt to nullify the behaviour of the method the way an #if…#endif pre-processing directive does. This is a much cleaner approach, and again we didn’t have to do much of anything to utilise this functionality.

Attributes utilise positional and/or named parameters. In the example using the [Conditional] attribute, the symbol specification is a positional parameter. You must always supply positional parameters.

To look at named parameters, let’s return to the [WebMethod] attribute example. This attribute has a named parameter called Description. To use it you would change the line to read:

[WebMethod(Description = "Sales volume" )]

Named parameters are optional, and you write them using the name of the parameter followed by the assignment of a value. Named parameters follow after you’ve specified all positional parameters.

I will talk more about named and positional parameters later in this article when I show you how to create and apply your Attribute class.

Run-Time, Design-Time

The examples provided in this article focus on run-time activities. But binaries (assemblies) aren’t just for run-time. In .NET, the metadata you describe isn’t limited to being available only at run time. You can query the metadata at any time after you’ve compiled an assembly.

Think about some design-time possibilities. The open nature of the IDE in Visual Studio.NET allows you to create tools (using .NET languages) that facilitate development and design (wizards, builders, etc.). Thus, one module’s run-time environment (the IDE tool) is another module’s design-time environment (the source code being developed). This presents an excellent opportunity to implement some custom attributes. You could allow the IDE tool to reflect upon and then act on the source classes/types you develop. Unfortunately, due to the additional complexity of the IDE tool code, exploring such an example is beyond the scope of a single article.

The standard .NET attributes contain a similar example. When a developer creates custom controls to include in the Toolbox of the Visual Studio .NET IDE, they have attributes available to them to indicate how to handle the control in the property sheet. Table 1 lists and describes the four standard .NET attributes that the property sheet uses.

These property sheet-related attributes make it clear that you can use attributes and their values in the design-time as well as in the run-time environment.

Custom Attributes vs. Class Properties

Apparent similarities exist between attributes and regular member properties of a class. This can make it difficult to decide when and where you might want to utilise a custom attribute class. Developers commonly refer to properties of a class and their values as being “attributes” themselves, so what is the difference between properties and attributes?

An attribute takes the same “shape and form” as a property when you define it, but you can attach it to all manner of different assembly-level types, not just classes. Table 2 lists all the assembly-level types to which you can apply attributes.

Let’s pick one item from the list as an example. You can apply an attribute to a parameter, which is a little bit like adding a property to a parameter: a very novel and powerful idea indeed, because you just can’t do that with class member properties. This emphasises the most significant way in which attributes and properties are different: properties are always going to be members of a class, and they can’t be associated with a parameter or any of the other types listed in Table 2 other than Class.

Member properties of a class are also limited in another way in which attributes are not. By definition, a member property is tied to a specific class. That member property can only ever be used through an instance or subclass instance of the class on which the property was defined. On the other hand, you can attach/apply attributes anywhere! The only requirement is that the assembly type the attribute is being assigned to matches the validon definition in the custom attribute. We’ll talk more about the validon property of custom attribute classes in the next section. This characteristic of attributes helps to promote the loose coupling that is so helpful in component-style development.

Another difference between properties and attributes relates to the values you can store in each of them. The values of member properties are instance values and can be changed at run-time. However, in the case of attributes, you set values at design time (in source code) and then compile the attributes (and their values) directly into the metadata contained in an assembly. After that point you cannot change the values of the attributes?you’ve essentially turned the values of the attributes into hard-coded, read-only data.

Consider this when you attach an attribute: if you attach an attribute to a class definition, every instance of that class carries the same attribute values, no matter how many objects of the class you instantiate. You cannot attach an attribute to an instance of a class; you may only attach an attribute to a Type/Class definition.
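A short sketch illustrates the contrast (the RevisionAttribute and Invoice classes are invented for illustration):

using System;

[AttributeUsage(AttributeTargets.Class)]
public class RevisionAttribute : Attribute
{
    private string cVersion;
    public RevisionAttribute(string lcVersion) { this.cVersion = lcVersion; }
    public string Version { get { return cVersion; } }
}

[Revision("1.0")]   // baked into the metadata at compile time
public class Invoice
{
    // An ordinary property: each instance carries its own, changeable value.
    public decimal Total { get; set; }
}

public class Demo
{
    public static void Main()
    {
        Invoice a = new Invoice { Total = 10m };
        Invoice b = new Invoice { Total = 99m };   // per-instance, run-time values

        // The [Revision] value is read from the type, not from an instance,
        // so it is identical for every Invoice object.
        RevisionAttribute loRev = (RevisionAttribute)typeof(Invoice)
            .GetCustomAttributes(typeof(RevisionAttribute), false)[0];
        Console.WriteLine(loRev.Version);   // always "1.0"
    }
}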

Creating a Custom Attribute Class

Now let's put the ideas presented above into practice and create a custom attribute class. It will store tracking information about code modifications of the kind you would typically record as comments in source code. We'll record just a few items: a defect ID, a developer ID, the date of the change, the origin of the defect, and a comment about the fix. To keep the example simple, we'll focus on creating a custom attribute class (DefectTrackAttribute) designated for use only with classes and methods.

Listing 1 shows the source code for the DefectTrackAttribute class. Let's examine some of its critical lines of code.

If you haven’t used attributes before, the following line of code might look a bit strange.

[AttributeUsage(AttributeTargets.Class | AttributeTargets.Method, AllowMultiple = true)]

This line attaches an [AttributeUsage] attribute to the attribute class definition; the square-bracket syntax identifies the construct as an attribute. So attribute classes can have attributes of their own. This may seem confusing at first, but it should become clearer once you see what [AttributeUsage] is used for.

The [AttributeUsage] attribute has one positional parameter and two named parameters. The positional parameter, validon, specifies which assembly-level types this attribute can be attached to. The value for this parameter is a combination of values from the AttributeTargets enumeration. In my example, I allow only classes and methods, so I get the proper specification by OR'ing the two AttributeTargets values together.

The first named parameter of the [AttributeUsage] attribute (and the only one specified in the example) is AllowMultiple, which indicates whether you can apply this type of attribute multiple times to the same target. The default value is false, but you want to be able to apply the [DefectTrack] attribute more than once to a single type, because that is exactly what the example models: a given method or class potentially goes through many revisions during its lifetime, and you need to be able to denote each of these changes with an individual [DefectTrack] attribute.

The second named parameter of the [AttributeUsage] attribute is Inherited, which indicates whether derived classes inherit the attribute. I did not specify this named parameter. Why? The source code modification information I want to capture always relates to each class and method individually, and inheriting [DefectTrack] attribute(s) from a parent class would only confuse the developer, who couldn't distinguish which [DefectTrack] attributes came from the parent and which were specified directly. (Note that the retrieval code in Listing 3 also passes false for the inherit flag of GetCustomAttributes(), so inherited attributes would not be reported in any case.)

Listing 1 then shows the class declaration. Attribute classes are subclassed from System.Attribute; you will directly or indirectly subclass all custom attribute classes from System.Attribute.

Next, Listing 1 shows that I’ve defined five private fields to hold the values for the attribute.

The first method in our attribute class is the class constructor, which has a call signature with three parameters. The parameters of an attribute class's constructor represent the positional parameters of that attribute, which makes them required parameters. If you choose, you can create overloaded constructors to allow more than one valid configuration of positional parameters, as sketched below.
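For example, you could add a second constructor (a hypothetical variant that is not part of Listing 1) that omits the date and defaults it to today, giving the attribute two valid positional configurations:

// Hypothetical overload: usable as [DefectTrack("1401", "David Tansey")]
public DefectTrackAttribute(string lcDefectID, string lcDeveloperID)
{
    this.cDefectID = lcDefectID;
    this.dModificationDate = DateTime.Today;
    this.cDeveloperID = lcDeveloperID;
}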

The remainder of the attribute class is a series of public property declarations that correspond to the private fields of the class. You'll use these properties to access the values of the attribute when you get to the example that examines the metadata. Note that the properties corresponding to the positional parameters have only a get clause and no set clause. This makes them read-only, which is consistent with the fact that they are meant to be positional, not named, parameters.

Applying the Custom Attribute

You’ve already seen that you can attach an attribute to a target item in your C# code by putting the attribute name and its parameters in square brackets immediately before the item’s declaration statement.

In Listing 2 you attach the [DefectTrack] attribute to a couple of methods and a couple of classes.

You need to ensure that you have access to the class definition for your custom attribute. If the attribute class lives in a different namespace, you start by including a line such as this one.

using MyAttributeClasses;

Beyond that, you’re merely “adorning” or “decorating” your class declarations and some of your methods with the [DefectTrack] custom attribute.

SomeCustomPricingClass has two uses of the [DefectTrack] attribute attached. The first [DefectTrack] attribute uses only the three positional parameters, whereas the second also includes a specification for the named parameter Origin.

[DefectTrack("1377", "12/15/02", "David Tansey")]
[DefectTrack("1363", "12/12/02", "Toni Feltman",
    Origin = "Coding: Unhandled Exception")]
public class SomeCustomPricingClass
{ ... }

The PriceIsValid( ) method also uses the [DefectTrack] custom attribute, and it includes a specification for both named parameters, Origin and FixComment. Listing 2 contains a couple of additional uses of the [DefectTrack] attribute that you can examine on your own.
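For reference, here is how that attribute appears on PriceIsValid( ) in Listing 2, with both named parameters specified:

[DefectTrack("1351", "12/10/02", "David Tansey",
    Origin = "Specification: Missing Requirement",
    FixComment = "Added PriceIsValid( ) function")]
public bool PriceIsValid(double tnPrice)
{ ... }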

Some readers might wonder whether you could rely on the old-fashioned approach of recording this sort of source modification information in comments. .NET does provide tools for using XML blocks within comments to give them some structure and form.

You can easily see a comment in your source code right at the relevant place, and you could process such information by text-parsing the comments in the source, but that is tedious and potentially error-prone. The .NET tools for processing XML blocks in comments practically eliminate this issue.

Using a custom attribute for the same purpose also gives you a structured approach to recording and processing the information, but it has an added advantage. Consider that after you compile source code into a binary, you lose your comments; they are forever removed from the resulting executable code. By comparison, the values of the attributes become part of the metadata that is permanently bound to the assembly, so you still have access to the information even without any source code.

Additionally, the way an attribute "reads" in source code still allows it to fill the same valuable design-time function that the original comment did.

Retrieving the Values of the Custom Attributes

At this point, even though you've applied your custom attribute to some classes and methods, you haven't seen it in action; it seems as if nothing happens whether you attach the attributes or not. But something does happen, and you don't have to take my word for it. You can use the MSIL Disassembler (ILDASM) to open an EXE or DLL that contains types decorated with your custom attributes, and see that .NET included your attributes and their values right there in the IL code. Figure 1 shows ILDASM with the EXE from this article's sample code open.
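From a Visual Studio command prompt, you can open the sample executable in the disassembler with a command along these lines (assuming the EXE name matches this article's sample project):

ildasm DefiningAndUsingCustomAttribute.exe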

Despite seeing the attribute values in the disassembly as proof of their existence, you still haven’t seen any action related to them. Now you’ll use the Reflection API to traverse the types/objects of an assembly, query for your custom attribute, and retrieve the attribute values when you find types that have your custom attribute attached to them.

Consider the general structure and intent of the test program in Listing 3. The program loads the specified assembly, gets an array of all members of the assembly, and iterates through each member looking for classes that have the [DefectTrack] attribute attached. For classes that have the attribute, the test program outputs the values of the attribute to the console. The program then performs the same steps and iteration for methods. These loops “walk” their way through the entire assembly.

Now examine some of the more critical lines of code. The first two lines of the DisplayDefectTrack( ) method retrieve a reference to an Assembly object by loading the specified assembly and then extract an array containing all of the types in the assembly.

Assembly loAssembly = Assembly.Load(lcAssembly);
Type[] laTypes = loAssembly.GetTypes();

A foreach loop iterates through each of the types in the assembly. The program outputs the name of the current type to the console, and then the following line of code queries the current type for an array of [DefectTrack] attributes.

object[] laAttributes = loType.GetCustomAttributes(typeof(DefectTrackAttribute), false);

You specify the parameter typeof(DefectTrackAttribute) on the GetCustomAttributes() method to limit the returned custom attributes to the type you created in the example. The second parameter, false, indicates that you do not want to include the type's inheritance chain when searching for your attributes.

A foreach loop iterates through each of the custom attributes and outputs its values to the console. You should recognise that the first line of the foreach block declares a new variable and typecasts the current attribute.

DefectTrackAttribute loDefectTrack = (DefectTrackAttribute)loAtt;

Why is this necessary? GetCustomAttributes() returns an array of references typed as the generic Object. To gain access to the values of your custom attribute class, you must cast these references back to their concrete type, DefectTrackAttribute. Once you've done that, the program can output the attribute values to the console.
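As an aside, if you are working with .NET Framework 4.5 or later, the System.Reflection extension methods offer a generic overload that performs the cast for you; a minimal sketch under that assumption:

using System.Reflection;   // provides the GetCustomAttributes<T>() extension method

// Each element is already typed as DefectTrackAttribute; no cast required.
foreach (DefectTrackAttribute loDefectTrack in
    loType.GetCustomAttributes<DefectTrackAttribute>(false))
{
    Console.WriteLine(loDefectTrack.DefectID);
}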

Because you can apply your attribute to either classes or methods, the program then calls the GetMethods() method of the current type object from the assembly.

MethodInfo[ ] laMethods =
  loType.GetMethods(
    BindingFlags.Public |
    BindingFlags.Instance |
    BindingFlags.DeclaredOnly ) ;

In this example, I've chosen to pass some values from the BindingFlags enumeration to GetMethods(). Used in combination, these three flags limit the methods returned to those defined directly on the current class. I did this to limit the amount of output in the example, but you probably would not do it in practice, because a developer might apply [DefectTrack] to an overridden method, and my implementation would not catch those attributes.
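If you did want inherited and overridden methods included, you would simply drop the DeclaredOnly flag:

// Variant: also returns public instance methods inherited from base classes.
MethodInfo[] laMethods =
    loType.GetMethods(
        BindingFlags.Public |
        BindingFlags.Instance);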

The remaining code performs virtually the same processing for each of the methods as it did for each of the classes: it queries each method for custom attributes of the [DefectTrack] type and outputs the values of the ones it finds to the console.

Conclusion

This implementation is just one example of how a developer might use .NET attributes to enhance the development process. Custom attributes are a bit like XML in that their significant benefits aren't about "what they do"; their real power lies in what you can do with them. The possibilities are truly limitless, and the open nature of custom attributes makes it likely that some of their most novel and powerful uses have yet to be conceived.

Listing 1: Custom attribute class for DefectTrack attribute

using System;

namespace DefiningAndUsingCustomAttribute
{
    [AttributeUsage(AttributeTargets.Class |
                    AttributeTargets.Method,
                    AllowMultiple = true)]
    public class DefectTrackAttribute : System.Attribute
    {
        private string cDefectID;
        private DateTime dModificationDate;
        private string cDeveloperID;
        private string cDefectOrigin;
        private string cFixComment;

        public DefectTrackAttribute(
            string lcDefectID,
            string lcModificationDate,
            string lcDeveloperID)
        {
            this.cDefectID = lcDefectID;
            this.dModificationDate = System.DateTime.Parse(lcModificationDate);
            this.cDeveloperID = lcDeveloperID;
        }

        public string DefectID
        {
            get { return cDefectID; }
        }

        public DateTime ModificationDate
        {
            get { return dModificationDate; }
        }

        public string DeveloperID
        {
            get { return cDeveloperID; }
        }

        public string Origin
        {
            get { return cDefectOrigin; }
            set { cDefectOrigin = value; }
        }

        public string FixComment
        {
            get { return cFixComment; }
            set { cFixComment = value; }
        }
    }
}

Listing 2: Two example classes with attributes attached

namespace DefiningAndUsingCustomAttribute
{
    [DefectTrack("1377", "12/15/02", "David Tansey")]
    [DefectTrack("1363", "12/12/02", "Toni Feltman",
        Origin = "Coding: Unhandled Exception")]
    public class SomeCustomPricingClass
    {
        public double GetAdjustedPrice(double tnPrice, double tnPctAdjust)
        {
            return tnPrice + (tnPrice * tnPctAdjust);
        }

        [DefectTrack("1351", "12/10/02", "David Tansey",
            Origin = "Specification: Missing Requirement",
            FixComment = "Added PriceIsValid( ) function")]
        public bool PriceIsValid(double tnPrice)
        {
            return tnPrice > 0.00 && tnPrice < 1000.00;
        }
    }
}

Listing 3: Code to walk assembly and output attribute values

using System;
using System.Reflection;

namespace DefiningAndUsingCustomAttribute
{
    public class TestMyAttribute
    {
        public static void Main()
        {
            DisplayDefectTrack("DefiningAndUsingCustomAttribute");
            Console.ReadLine();
        }

        public static void DisplayDefectTrack(string lcAssembly)
        {
            Assembly loAssembly = Assembly.Load(lcAssembly);
            Type[] laTypes = loAssembly.GetTypes();

            foreach (Type loType in laTypes)
            {
                Console.WriteLine("*======================*");
                Console.WriteLine("TYPE:\t" + loType.ToString());
                Console.WriteLine("*=====================*");

                object[] laAttributes = loType.GetCustomAttributes(
                    typeof(DefectTrackAttribute), false);

                if (laAttributes.Length > 0)
                    Console.WriteLine("\nMod/Fix Log:");

                foreach (Attribute loAtt in laAttributes)
                {
                    DefectTrackAttribute loDefectTrack = (DefectTrackAttribute)loAtt;

                    Console.WriteLine("----------------------");
                    Console.WriteLine("Defect ID:\t" + loDefectTrack.DefectID);
                    Console.WriteLine("Date:\t\t" + loDefectTrack.ModificationDate);
                    Console.WriteLine("Developer ID:\t" + loDefectTrack.DeveloperID);
                    Console.WriteLine("Origin:\t\t" + loDefectTrack.Origin);
                    Console.WriteLine("Comment:\n" + loDefectTrack.FixComment);
                }

                MethodInfo[] laMethods = loType.GetMethods(
                    BindingFlags.Public |
                    BindingFlags.Instance |
                    BindingFlags.DeclaredOnly);

                if (laMethods.Length > 0)
                {
                    Console.WriteLine("\nMethods: ");
                    Console.WriteLine("----------------------");
                }

                foreach (MethodInfo loMethod in laMethods)
                {
                    Console.WriteLine("\n\t" + loMethod.ToString());

                    object[] laMethodAttributes = loMethod.GetCustomAttributes(
                        typeof(DefectTrackAttribute), false);

                    if (laMethodAttributes.Length > 0)
                        Console.WriteLine("\n\t\tMod/Fix Log:");

                    foreach (Attribute loAtt in laMethodAttributes)
                    {
                        DefectTrackAttribute loDefectTrack = (DefectTrackAttribute)loAtt;
                        Console.WriteLine("\t\t----------------");
                        Console.WriteLine("\t\tDefect ID:\t" + loDefectTrack.DefectID);
                        Console.WriteLine("\t\tDeveloper ID:\t" + loDefectTrack.DeveloperID);
                        Console.WriteLine("\t\tOrigin:\t\t" + loDefectTrack.Origin);
                        Console.WriteLine("\t\tComment:\n\t\t" + loDefectTrack.FixComment);
                    }
                }
                Console.WriteLine("\n\n");
            }
        }
    }
}

Table 1: Standard .NET attributes that the property sheet uses at design-time in the Visual Studio .NET IDE.

Attribute         Description
Designer          Specifies the class used to implement design-time services for a component.
DefaultProperty   Specifies which property to indicate as the default property for a component in the property sheet.
Category          Specifies the category in which the property will be displayed in the property sheet.
Description       Specifies the description to display in the property sheet for a property.

Table 2: .NET assembly-level types to which you can apply attributes.

Type
Assembly
Class
Delegate
Enum
Event
Interface
Method
Module
Parameter
Constructor
Field
Property
ReturnValue
Structure
