Timeout Process

When you are dealing with large amounts of data, or with a process that is very time hungry, you sometimes need the ability to time out that task and continue.

Here is a snippet of code with a task that loops for roughly one second. You can set the timeout to any value you like; if it is shorter than the time the task needs, the task is cancelled and its resources are freed.

using System;
using System.Threading;
using System.Threading.Tasks;

class Program
{
    public static void Main()
    {
        int timeOutInMilliseconds = 450;
        var startTime = DateTime.Now;

        var cTokenSource = new CancellationTokenSource();

        // Create a cancellation token from the CancellationTokenSource
        var cToken = cTokenSource.Token;

        // Create a task and pass the cancellation token
        var t1 = Task<int>.Factory.StartNew(() => GenerateNumbers(cToken), cToken);

        // Register a delegate for a callback when a cancellation request is made
        cToken.Register(() => CancelNotification());

        while (true)
        {
            if (t1.IsCompleted)
            {
                Console.WriteLine("Finished Processing");
                break;
            }

            if (DateTime.Now > startTime.AddMilliseconds(timeOutInMilliseconds))
            {
                Console.WriteLine("Timed out");
                cTokenSource.Cancel();
                break;
            }
        }

        Console.WriteLine("finished");
        Console.ReadLine();
    }

    // Not called in this example; shows how a timer handler could trigger cancellation.
    private static Task HandleTimer(CancellationTokenSource cancellationTokenSource)
    {
        cancellationTokenSource.Cancel();
        Console.WriteLine("\nHandler not implemented...");
        return Task.CompletedTask;
    }

    static int GenerateNumbers(CancellationToken cancellationToken)
    {
        int i;
        for (i = 0; i < 10; i++)
        {
            Console.WriteLine("Method1 - Number: {0}", i);
            Thread.Sleep(100);

            // Poll the IsCancellationRequested property
            // to check whether cancellation was requested
            if (cancellationToken.IsCancellationRequested)
                break;
        }
        return i;
    }

    // Notify when the task is cancelled
    static void CancelNotification()
    {
        Console.WriteLine("Task cancelled");
    }
}

Source: Parallel

Differences Between ASMX and WCF Services

Some differences between ASMX and WCF services are subtle and some differences between them are not subtle. The purpose of this section is to identify many of these differences and to provide guidance for how to handle them when preparing for, or when performing, migration. ASMX provides a very successful baseline for services, and WCF extends those capabilities for the next generation of services.

There are differences between ASMX and WCF, but it is important to also understand that WCF supports the same capabilities that ASMX provides. For example, you can use message types, XmlSerialization, custom SOAP headers, and Web Service Enhancements (WSE) in a WCF service. The remainder of this section describes some of these items; it also describes other areas that represent differences between ASMX and WCF.

The following subtopics describe the major differences between ASMX and WCF services:

  • Message Structure and Serialization
  • SOAP Extensions
  • Transport Protocols
  • Security
  • Exception Handling
  • State Management

Message Structure and Serialization

Serialization is the process of translating binary objects into a data format that can be transmitted across process boundaries, computer boundaries, and network boundaries. When serialized data reaches the destination, or endpoint, it can be deserialized back into binary objects for use by an application. Both ASMX and WCF use SOAP for messages passed between two endpoints. However, for serialization, each uses different classes that implement different rules.

ASMX uses the XmlSerializer to translate classes into XML for communication, and to translate the XML back into classes on the receiver’s end. All public members are serialized unless they are marked as non-serializable using the XmlIgnoreAttribute. A large number of attributes can also be used to control the structure of the XML. For example, a property can be represented as an attribute using the XmlAttributeAttribute, or as an element using the XmlElementAttribute. The use of these attributes provides a great deal of control over how a type is serialized into XML; however, that power comes with an unfortunate downside. It may be possible to create XML structures that are not easily translated by other type systems, such as Java; XML structures that are not easily translated can hamper interoperability.
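
To make that contrast concrete, here is a small illustrative sketch (the Employee type and its members are hypothetical, not taken from any guidance package) showing how these attributes shape the generated XML:

using System;
using System.Xml.Serialization;

public class Employee
{
    // Serialized as an XML attribute: <Employee id="...">
    [XmlAttribute("id")]
    public int Id { get; set; }

    // Serialized as a child element: <LastName>...</LastName>
    [XmlElement("LastName")]
    public string LastName { get; set; }

    // Excluded from the serialized XML entirely.
    [XmlIgnore]
    public string InternalNotes { get; set; }
}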

WCF uses a DataContractSerializer to perform the same translation; however, the behavior is different from the XmlSerializer. The XmlSerializer uses an implicit model where all public properties are serialized unless they are marked with the XmlIgnoreAttribute, but the DataContractSerializer uses an explicit model where the properties and/or fields that you want to serialize must be marked with a DataMemberAttribute. It is important to note that WCF can also use the XmlSerializer to perform serialization operations.

A notable difference between ASMX and WCF is the WCF ability to serialize class members regardless of the access specifier used. This means it is now possible to serialize private fields. Using this capability, you can encapsulate fields in a data type. For example, you can provide read-only access to a private field by implementing only the get method on a property. You can then serialize that private field by adding the DataMember attribute to the field in a WCF service.
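
As an illustrative sketch (the type and member names are hypothetical), a private field exposed through a read-only property can still be serialized by marking the field itself with DataMember:

using System.Runtime.Serialization;

[DataContract]
public class EmployeeRecord
{
    // The private field carries the DataMember attribute, so WCF serializes it
    // even though callers only have read access through the property below.
    [DataMember]
    private string employeeId;

    public EmployeeRecord(string id)
    {
        employeeId = id;
    }

    // Read-only access for consumers of the type.
    public string EmployeeId
    {
        get { return employeeId; }
    }
}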

Another important difference is that the DataContractSerializer generates a simplified XML structure that increases its ability to interoperate between different operating systems. In addition, the ability for users to control the XML structure is limited. This simplified structure also means that future versions of WCF will be able to target this structure for optimization. Finally, in comparison to the XmlSerializer, the serialization of data is greatly improved with the DataContractSerializer.

Recommendation

Using WCF, you can use the XmlSerializer for types that are already created. However, to maximize interoperability, it is recommended to use WCF data contracts and the DataContractSerializer. The Web Service Software Factory (ASMX) guidance package, also referred to as the ASMX guidance package, creates types that should also migrate to WCF data contracts if no additional XML serialization attributes, such as XmlAttributeAttribute, are added. The ASMX guidance package includes an XmlNamespaceAttribute to specify the namespace of the types.

SOAP Extensions

Developers can use the SOAP extensions feature of ASP.NET to interact directly with SOAP messages. By using SOAP extensions, you can intercept the SOAP message and insert your own code into the SOAP message pipeline, which allows you to extend the capabilities of SOAP. For example, features such as security, transaction management, routing, and tracing can be implemented by using SOAP extensions. The downside of this capability is that it reduces the SOAP message’s ability to interoperate with other operating systems. In other words, other operating systems may not be able to handle a SOAP message that has been customized.

WCF does not support the use of SOAP extensions, but it does have other extensibility points that can be used to intercept and manipulate SOAP messages. For example, you can use a behavior extension to hook into the WCF Dispatcher with a class that implements IDispatchMessageInspector.
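
As a rough sketch of that extensibility point (the inspector still needs to be registered through a service or endpoint behavior, which is not shown here), a message inspector might look like this:

using System.ServiceModel;
using System.ServiceModel.Channels;
using System.ServiceModel.Dispatcher;

public class LoggingMessageInspector : IDispatchMessageInspector
{
    // Called after a request message is received, before it reaches the operation.
    public object AfterReceiveRequest(ref Message request, IClientChannel channel, InstanceContext instanceContext)
    {
        System.Diagnostics.Debug.WriteLine("Request action: " + request.Headers.Action);
        return null; // correlation state handed to BeforeSendReply
    }

    // Called just before the reply message is sent back to the client.
    public void BeforeSendReply(ref Message reply, object correlationState)
    {
        if (reply != null)
            System.Diagnostics.Debug.WriteLine("Reply action: " + reply.Headers.Action);
    }
}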

Recommendation

Avoid the use of SOAP extensions when developing new ASMX services that will be migrated to WCF unless you are willing to rewrite them or if you are confident that WCF provides a similar capability.

Transport Protocols

ASMX services use the HTTP transport protocol for communications with Internet Information Services (IIS) as the host. An ASMX service file has the file name extension .asmx, which is accessed using a Uniform Resource Locator (URL); for example, http://localhost/ASMXEmployee/EmployeeManager.asmx.

WCF services can also use the HTTP protocol, but unlike ASMX services, you also have the option to use other transport protocols. The following protocols are supported by WCF:

  • Hypertext Transfer Protocol (HTTP)
  • Transmission Control Protocol (TCP)
  • Message Queuing (also known as MSMQ)
  • Named pipes

Many different hosts can also be used with WCF services. For example, IIS can be used as a host with the HTTP transport protocol. Windows services and stand-alone applications can be used as a host for other transport protocols.

Accessing a service hosted in IIS is similar to accessing an ASMX service; for example, http://localhost/WCFEmployee/EmployeeManager.svc.

The only difference between the two services is the file name extension. However, you can also configure a WCF service to use the .asmx file name extension, as described in the Service Configuration topic. WCF services that use other transport protocols are accessed using methods associated with the specific protocol.

When migrating an ASMX service to WCF, the protocol you choose is based on the client applications that will be accessing the service. If you need to support ASMX client applications, you also need to use the HTTP protocol. In addition, when configuring a WCF service for ASMX client applications, you need to configure the service to use BasicHttpBinding, as described in the Service Configuration topic.

Recommendation

When migrating services from ASMX to WCF, and if ASMX client applications need to access the migrated service, use the HTTP protocol and configure the new service to use BasicHttpBinding.
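
The binding is normally set in configuration, but as a minimal self-hosted sketch (the EmployeeManager service and IEmployeeManager contract here are simplified stand-ins for the examples above), the same choice expressed in code looks like this:

using System;
using System.ServiceModel;

[ServiceContract]
public interface IEmployeeManager
{
    [OperationContract]
    string FindEmployeeByLastName(string lastName);
}

public class EmployeeManager : IEmployeeManager
{
    public string FindEmployeeByLastName(string lastName)
    {
        return "Employee: " + lastName; // placeholder implementation
    }
}

class Program
{
    static void Main()
    {
        var host = new ServiceHost(typeof(EmployeeManager),
            new Uri("http://localhost:8080/WCFEmployee"));

        // BasicHttpBinding produces SOAP 1.1 messages that ASMX clients can consume.
        host.AddServiceEndpoint(typeof(IEmployeeManager),
            new BasicHttpBinding(),
            "EmployeeManager");

        host.Open();
        Console.WriteLine("Service running. Press Enter to stop.");
        Console.ReadLine();
        host.Close();
    }
}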

Security

Typically, authentication and authorization with ASMX is done using IIS and ASP.NET security configurations and transport layer security. In addition, Web Service Extensions (WSE) can be used to provide additional security capabilities, such as message layer security.

WCF can use the same security components as ASMX, such as transport layer security and WSE. However, WCF also has its own built-in security, which allows for a consistent security programming model for any transport. The security implemented by WCF supports many of the same capabilities as IIS and WS-* security protocols. However, when using IIS, you must also enable anonymous access to the service so that WCF security is implemented.

A powerful reason for migrating an ASMX service to WCF is to take advantage of new security capabilities that are provided by WCF. For example, WCF provides support for claims-based authorization that provides finer-grained control over resources than role-based security. In addition, instead of depending on a transport protocol such as HTTP and extensions such as WSE, security is built into WCF. The end result is that security is consistent regardless of the host that is used to implement a WCF service.

Recommendation

When possible, migrate both client applications and services to WCF to take advantage of the new security features that are available with WCF. Use of the WCF Security guidance package is also recommended when configuring security for WCF services and client applications.

If a migrated WCF service must support ASMX client applications, you can use transport security associated with HTTP and HTTPS. You can also use WSE 3.0, but you must configure a custom WCF binding for this to work. For additional information about using WSE 3.0 with WCF, see Interoperating with WSE Sample on MSDN.

Exception Handling

With ASMX services, unhandled exceptions are always returned to client applications as SOAP faults. An ASMX service can also throw the SoapException class, which provides more control over the content of the SOAP fault that is returned to the client application.

However, when an unhandled exception occurs, the default configuration of WCF protects sensitive data from exposure by not returning sensitive information in SOAP fault messages. You can override this behavior by adding a serviceDebug element to the service behavior in the configuration file that is associated with a WCF service. Overriding this behavior is not recommended for deployment; it should be used only in a development environment.

Similar to the SoapException class used with ASMX services, you can also throw a custom exception by using the FaultException<T> type, where T is a data contract that contains the exception information. The use of custom exceptions also requires the declaration of a FaultContract on operations that will throw the exception.

The following code example demonstrates how a DataContract is used to define a FaultContract declared on a service operation in a ServiceContract.

[DataContract]
public class FindEmployeeFault
{
    [DataMember]
    public string Request;
    [DataMember]
    public string Description;
}

[ServiceContract]
public interface IEmployeeManager
{
    [OperationContract(Action = "FindEmployeeByLastName")]
    [FaultContract(typeof(EmployeeService.FaultContracts.FindEmployeeFault))]
    EmployeeService.DataContracts.Employee FindEmployeeByLastName(string request);
}
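
Neither listing shows the service side raising the fault. As a rough sketch (the repository lookup is a hypothetical placeholder, and the names are shortened relative to the fully qualified names in the contract above), the implementation might throw it like this:

using System.ServiceModel;

public class EmployeeManagerService : IEmployeeManager
{
    public Employee FindEmployeeByLastName(string request)
    {
        Employee employee = FindInRepository(request); // hypothetical data-access call

        if (employee == null)
        {
            var fault = new FindEmployeeFault
            {
                Request = request,
                Description = "No employee found with that last name."
            };

            // The typed detail is returned to the client as a SOAP fault.
            throw new FaultException<FindEmployeeFault>(fault, "Employee not found.");
        }

        return employee;
    }

    private Employee FindInRepository(string lastName)
    {
        return null; // stand-in for real data access
    }
}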

This next code example demonstrates how to catch a FaultException that may be thrown by the service operation shown in the preceding code example.

// client function used to find an employee
public Employee FindEmployee( string lastName )
{
    Employee employee = null;
    try
    {
        employee = proxy.FindEmployeeByLastName( lastName   );
    }
    catch( FaultException<FindEmployeeFault> ex )
    {
        Console.WriteLine("FaultException<FindEmployeeFault>: While finding " 
            + ex.Detail.Request
            + ". Because: " 
            + ex.Detail.Description );
    }
    return employee;
}

The filtering of exception data that is returned from a service is described using an Exception Shielding pattern. The pattern describes how exception handlers can be used to filter the data that is returned to a client application. To help facilitate the creation of exception handlers in WCF services, the Service Factory: Modeling Edition includes the Data Contract Model that contains a fault contract shape that you can use to create WCF fault contracts.

Recommendation

WCF provides exception shielding, but you should always define exception handlers for both ASMX and WCF services. For more information, see Exception Handling in Service Oriented Applications.

State Management

ASMX services have access to the HttpContext class, which provides access to state managers with different scopes, such as application scope and session scope. ASP.NET also provides control over how state data is managed. Even so, you should minimize the use of state in a service because of the effect it has on the scalability of an application.

WCF provides extensible objects that can be used for state management. Extensible objects implement the System.ServiceModel.IExtensibleObject<T> interface. The two main classes that implement the IExtensibleObject interface are ServiceHostBase and InstanceContext. The ServiceHostBase class provides the ability for all service instances in the same host to access the same state data. On the other hand, the InstanceContext class only allows access to state data within the same service instance.

To implement state management in WCF, you need to define a class that implements the IExtension<T> interface, which is used to hold the state data. Instances of this class can then be added to one of the IExtensibleObject classes using the Extensions property. The following code example shows how to implement state using the InstanceContext.

internal class StateData : IExtension<InstanceContext>
{
    public string Identifier = string.Empty;
    public string Data = string.Empty;

    public StateData( string data )
    {
        Identifier = Guid.NewGuid().ToString();
        Data = data;
    }

    // Required by IExtension<T>; no action is needed when the extension
    // is added to or removed from the InstanceContext.
    public void Attach( InstanceContext owner ) { }
    public void Detach( InstanceContext owner ) { }
}

public string AddStateData( string data )
{
    StateData state = new StateData( data );
    OperationContext.Current.InstanceContext.Extensions.Add(state);
    return state.Identifier;
}

public string GetStateData( string identifier )
{
    Collection<StateData> dataCollection = 
        OperationContext.Current.InstanceContext.Extensions.FindAll<StateData>();

    string result = string.Empty;
    foreach( StateData data in dataCollection )
    {
        if( data.Identifier == identifier )
        {
            result = data.Data;
            break;
        }
    }
    return result;
}

Recommendation

The ability to control where state is maintained in WCF is much more limited than in ASP.NET. As a result, state management is commonly the primary reason for enabling ASP.NET compatibility mode, which provides access to ASP.NET components and therefore much more control over where state is maintained. For example, when a service is configured for ASP.NET compatibility, you have access to the HTTP context classes that provide the same functionality as the ASMX implementation.

When developing new ASMX services, you should avoid the use of state in your services; all services should be stateless.

Original source from Microsoft, edited and moved here because the content was at risk of being removed from Microsoft's site.

Defining and Using Custom Attribute Classes in C#

Attributes are useful in C# for providing non-business-related functionality in an application, or for abstracting out code to make it easier to follow.

The question is: when does the attribute code get called, and how do you force it to run?

I set myself the task of writing a simple console application showing different ways you can use attributes.

For my first sample application I found a fascinating article, Defining and Using Custom Attribute Classes in C# by David Tansey; below is an edited version of that article:

The complex, component-style development that businesses expect out of modern software developers requires greater design flexibility than the design methodologies of the past.

Microsoft’s .NET Framework makes extensive use of attributes to provide added functionality through what is known as “declarative” programming. Attributes enhance flexibility in software systems because they promote loose coupling of functionality. Because you can create your custom attribute classes and then act upon them, you can leverage the loose coupling power of attributes for your purposes.

The .NET Framework makes many aspects of Windows programming much more straightforward. In many cases, the Framework’s use of metadata that .NET compilers bind to assemblies at compile time makes tricky programming tasks easier. Indeed, the use of intrinsic metadata makes it possible for .NET to relieve us from “DLL Hell.”

Lucky for us, the designers of the .NET framework did not choose to keep these metadata “goodies” hidden away under the covers. The designers gave us the Reflection API through which a .NET application can programmatically investigate this metadata. An application can “reflect” upon any imaginable aspect of a given assembly or on its contained types and their members.

Binding the metadata to the executable provides many advantages. It makes the assemblies in .NET fully self-describing. This allows developers to share components across languages and eliminates the need for header files. (They can become out-of-date relative to the implementation code.)

With all this positive news about .NET metadata, it seems hard to believe there could be anything more to the story, but there is. You can create your own application-specific metadata in .NET and then use that metadata for any purpose you can imagine.

Developers define their application-specific metadata through the use of Custom Attributes. Because these attribute values become just another part of the metadata bound into an assembly, the custom attribute values are available for examination by the Reflection API.

In this article, you’ll learn how to define custom attribute classes, how to apply attributes to classes and methods in your source code, and you’ll learn how to use the Reflection API to retrieve and act upon these values.

How Does .NET Use Attributes in the Common Language Runtime?

Before you start to consider what you can accomplish with your custom attribute classes, let’s examine some of the standard attributes that the Common Language Runtime already makes available.

The [WebMethod] attribute provides a simple example. It lets you expose any public method of a WebService subclass as part of the Web Service merely by attaching the [WebMethod] attribute to the method definition.

public class SomeWebService : System.Web.Services.WebService
{

   [WebMethod]
   public DataSet GetDailySales( )
   {
      // code to process the request...
   }
}

You just attach the [WebMethod] attribute to the method, and .NET handles everything else for you behind the scenes.

Using the [Conditional] attribute allows you to make a given method conditional based on the presence or absence of the specified preprocessing symbol. For example, the following code:

public class SomeClass
{
   [Conditional( "DEBUG" )]
   public void UnitTest( )
   {
      // code to do unit testing...
   }
}

This indicates that the UnitTest( ) method of the class is “conditional” based on the presence of the preprocessing symbol “DEBUG”. The fascinating part is how this happens: the compiler stubs out all calls to the method when the condition fails, rather than attempting to nullify the behaviour of the method the way an #if…#endif pre-processing directive does. This is a much cleaner approach, and again we didn’t have to do much of anything to utilise this functionality.

Attributes utilise positional and/or named parameters. In the example using the [Conditional] attribute, the symbol specification is a positional parameter. You must always supply positional parameters.

To look at named parameters, let’s return to the [WebMethod] attribute example. This attribute has a named parameter called Description. To use it you would change the line to read:

[WebMethod(Description = "Sales volume" )]

Named parameters are optional, and you write them using the name of the parameter followed by the assignment of a value. Named parameters follow after you’ve specified all positional parameters.

I will talk more about named and positional parameters later in this article when I show you how to create and apply your Attribute class.

Run-Time, Design-Time

The examples provided in this article are involved in run-time activities. But Binaries (assemblies) aren’t just for run-time. In .NET, the metadata you describe isn’t limited to being available only at runtime. You can query the metadata at any time after you’ve compiled an assembly.

Think about some design-time possibilities. The open nature of the IDE in Visual Studio.NET allows you to create tools (using .NET languages) that facilitate development and design (wizards, builders, etc.) Thus, one module’s run-time environment (the IDE tool) is another module’s design-time environment (the source code being developed). This presents an excellent opportunity to implement some custom attributes. You could allow the IDE tool to reflect and then act upon the source classes/types you develop. Unfortunately, because of the additional subject matter that the IDE tool code would require, exploring such an example is beyond the scope of a single article.

The standard .NET attributes contain a similar example. When a developer creates custom controls to include in the Toolbox of the Visual Studio .NET IDE, they have attributes available to them to indicate how to handle the control in the property sheet. Table 1 lists and describes the four standard .NET attributes that the property sheet uses.

These property sheet-related attributes make it clear that you can use attributes and their values in the design-time as well as in the run-time environment.
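
As a short, hypothetical sketch of how these attributes appear on a custom control (the FancyButton class and HoverText property are invented for illustration):

using System.ComponentModel;
using System.Windows.Forms;

// DefaultProperty names the property highlighted by default in the property sheet.
[DefaultProperty("HoverText")]
public class FancyButton : Button
{
    // Category and Description only affect how the property is presented in the
    // Visual Studio property sheet; they add no run-time behaviour of their own.
    [Category("Appearance")]
    [Description("The text shown when the user hovers over the button.")]
    public string HoverText { get; set; }
}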

Custom Attributes vs. Class Properties

Apparent similarities exist between attributes and regular member properties of a class. This can make it difficult to decide when and where you might want to utilise a custom attribute class. Developers commonly refer to properties of a class and their values as being “attributes” themselves, so what is the difference between properties and attributes?

An attribute takes the same “shape and form” as a property when you define it, but you can attach it to all manner of different assembly-level types, not just classes. Table 2 lists all the assembly-level types to which you can apply attributes.

Let’s pick one item from the list as an example. You can apply an attribute to a parameter, which is a little bit like adding a property to a parameter: a very novel and powerful idea indeed, because you just can’t do that with class member properties. This emphasises the most significant way in which attributes and properties differ: properties are always members of a class, and they can’t be associated with a parameter or any of the other types listed in Table 2 other than Class.

Member properties of a class are also limited in another way in which attributes are not. By definition, a member property is tied to a specific class. That member property can only ever be used through an instance or subclass instance of the class on which the property was defined. On the other hand, you can attach/apply attributes anywhere! The only requirement is that the assembly type the attribute is being assigned to matches the ValidOn definition in the custom attribute. We’ll talk more about the ValidOn property of custom attribute classes in the next section. This characteristic of attributes helps to promote the loose coupling that is so helpful in component-style development.

Another difference between properties and attributes relates to the values you can store in each of them. The values of member properties are instance values and can be changed at run-time. However, in the case of attributes, you set values at design time (in source code) and then compile the attributes (and their values) directly into the metadata contained in an assembly. After that point, you cannot change the values of the attributes; you’ve essentially turned the values of the attributes into hard-coded, read-only data.

Consider this when you attach an attribute. If you attach an attribute to a class definition, for example, every instance of the class will have the same values assigned to the attribute regardless of how many objects of this class type you instantiate. You cannot attach an attribute to an instance of a class. You may only attach an attribute to a Type/Class definition.

Creating a Custom Attribute Class

Now we’ll create a more realistic implementation of the ideas presented above. Let’s create a custom attribute class. This will allow us to store some tracking information about code modifications that you would typically record as comments in source code. For example, we’ll mark just a few items: defect id, developer id, the date of the change, the origin of the defect, and a comment about the fix. To keep the example simple we’ll focus on creating a custom attribute class (DefectTrackAttribute) designated for use only with classes and methods.

Listing 1 shows the source code for the DefectTrackAttribute class. Let’s examine some of the critical lines of code.

If you haven’t used attributes before, the following line of code might look a bit strange.

[AttributeUsage(AttributeTargets.Class | AttributeTargets.Method, AllowMultiple = true)]

This line attaches an [AttributeUsage] attribute to the attribute class definition. Square bracket syntax identifies the construct as an attribute. So attribute classes can have attributes of their own. This may seem a bit confusing at first, but it should become clearer as we show you what you’ll use it for.

The [AttributeUsage] attribute has one positional parameter and two named parameters. The positional parameter, ValidOn, specifies which of the various assembly-level types you can attach this attribute to. The value for this parameter uses a combination of values from the AttributeTargets enumeration. In my example, I allow only classes and methods, so I get the proper specification by ORing the two AttributeTargets values together.

The first named parameter of the [AttributeUsage] attribute (and the only one specified in the example) is the AllowMultiple parameter, which indicates whether you can apply this type of attribute multiple times to the same target. The default value is false. However, in this example you want to be able to apply the attribute more than once to a single type, because that is what the example models: a given method or class potentially goes through many revisions during its lifetime, and you need to be able to denote each of these changes with an individual [DefectTrack] attribute.

The second named parameter of the [AttributeUsage] attribute is the Inherited parameter, which indicates whether or not derived classes inherit the attribute. I’ve made the default value for this parameter false. I opted to take the default value, so I did not specify this named parameter. Why? The source code modification information I want to capture is always related to each class and method individually. Would it confuse the developer for a class to inherit the [DefectTrack] attribute(s) from its parent class? The developer couldn’t distinguish which [DefectTrack] attributes came from the parent and which were specified directly.

Listing 1 then shows the class declaration. Attribute classes are subclassed from System.Attribute. You will directly or indirectly subclass all custom attribute classes from System.Attribute.

Next, Listing 1 shows that I’ve defined five private fields to hold the values for the attribute.

The first method in our attribute class is the class constructor, which has a call signature with three parameters. The parameters of a constructor for an attribute class represent the positional parameters for that attribute, which makes them required parameters. If you choose, you can create overloaded constructors and thereby have more than one valid configuration of positional parameters.

The remainder of the attribute class is a series of public property declarations that correspond to the private fields of the class. You’ll use these properties to access the values of the attribute when you get to the example that examines the metadata. Note that the properties that correspond to the positional parameters only have a get clause and do not have a set clause. This makes these properties read-only, which corresponds to the fact that they are meant to be positional and not named parameters.

Applying the Custom Attribute

You’ve already seen that you can attach an attribute to a target item in your C# code by putting the attribute name and its parameters in square brackets immediately before the item’s declaration statement.

In Listing 2 you attach the [DefectTrack] attribute to a couple of methods and a couple of classes.

You need to ensure that you have access to the class definition for your custom attribute, so you start by including this line.

using MyAttributeClasses;

Beyond that, you’re merely “adorning” or “decorating” your class declarations and some of your methods with the [DefectTrack] custom attribute.

SomeCustomPricingClass has two uses of the [DefectTrack] attribute attached. The first [DefectTrack] attribute uses only the three positional parameters whereas the second [DefectTrack] attribute also includes a specification for the named parameter Origin.

[DefectTrack( "1377", "12/15/02", "David Tansey" ) ]
[DefectTrack( "1363", "12/12/02", "Toni Feltman", Origin = "Coding: Unhandled Exception")]
public class SomeCustomPricingClass
{ ... }

The PriceIsValid( ) method also uses the [DefectTrack] custom attribute, and it includes a specification for both of the named parameters, Origin and FixComment. Listing 2 contains a couple of additional uses of the [DefectTrack] attribute that you can examine on your own.

Some readers might wonder if you could rely on the old-fashioned approach of using comments for this sort of source modification information. .NET does make tools available for using XML blocks within comments to give them some structure and form.

You can easily see a comment in your source code right at the relevant place. You could process such information by text parsing the comments in the source, but it’s tedious and potentially error-prone. .NET provides tools to process XML blocks in comments that practically eliminate this issue.

Using a custom attribute for the same purpose also provides you with a structured approach to recording and processing the information, but it has an added advantage. Consider that after you compile source code into a binary, your comments are lost, forever removed from the byproduct executable code. By comparison, the values of the attributes become part of the metadata that is permanently bound to the assembly, so you still have access to the information even without any source code.

Additionally, the way an attribute “reads” in source code still allows it to fill the same valuable design-time function that the original comment did.

Retrieving the Values of the Custom Attributes

At this point, even though you’ve applied your custom attribute to some classes and methods, you haven’t seen it in action. It seems as if nothing occurs whether you attach the attributes or not. But something does happen, and you don’t have to take my word for it. You can use the MSIL Disassembler (ILDASM) to open an EXE or DLL that contains types you’ve decorated with your custom attributes; it lets you see that .NET included your attributes and their values right there in the IL code. Figure 1 shows ILDASM with the EXE from the sample code in this article opened.

Despite seeing the attribute values in the disassembly as proof of their existence, you still haven’t seen any action related to them. Now you’ll use the Reflection API to traverse the types/objects of an assembly, query for your custom attribute, and retrieve the attribute values when you find types that have your custom attribute attached to them.

Consider the general structure and intent of the test program in Listing 3. The program loads the specified assembly, gets an array of all members of the assembly, and iterates through each member looking for classes that have the [DefectTrack] attribute attached. For classes that have the attribute, the test program outputs the values of the attribute to the console. The program then performs the same steps and iteration for methods. These loops “walk” their way through the entire assembly.

Now examine some of the more critical lines of code. The first two lines of the DisplayDefectTrack( ) method retrieve a reference to an Assembly object by loading the specified assembly and then extract an array containing all of the types in the assembly.

Assembly loAssembly = Assembly.Load( lcAssembly ) ;
Type[ ] laTypes = loAssembly.GetTypes( ) ;

A foreach loop iterates through each of the types of the assembly. The program outputs the name of the current type to the console, and then the following line of code queries the current type for an array containing its [DefectTrack] attributes.

object[ ] laAttributes = loType.GetCustomAttributes(typeof(DefectTrackAttribute ), false ) ;

You specify the parameter typeof(DefectTrackAttribute) on the GetCustomAttributes() method so that the returned custom attributes are limited to the type that you created in the example. The second parameter, false, indicates that you do not want to include the type’s inheritance chain when trying to find your attributes.

A foreach loop iterates through each of the custom attributes and outputs its values to the console. Notice that the first line of the foreach block creates a new variable and casts the current attribute.

DefectTrackAttribute loDefectTrack = (DefectTrackAttribute)loAtt ;

Why is this necessary? The GetCustomAttributes() method returned an array that contains references that get cast to the generic type Object. You want to gain access to the values from your custom attribute class, and to do so, you must recast these references to their actual concrete type, DefectTrackAttribute. Once you’ve completed this, you can use the attributes, and the program can output the attribute values to the console.

Because you can apply your attribute to either classes or methods, the program then calls the GetMethods() method of the current type object from the assembly.

MethodInfo[ ] laMethods =
  loType.GetMethods(
    BindingFlags.Public |
    BindingFlags.Instance |
    BindingFlags.DeclaredOnly ) ;

In this example, I have chosen to pass some values from the BindingFlags enumeration to GetMethods(). These three BindingFlags, used in combination, limit the methods returned to ones defined directly in the current class. I wanted to limit the amount of output in the example, but you probably would not do this in practice, because a developer might apply the [DefectTrack] attribute to an overridden method, and this implementation would not catch those attributes.

The remaining code does virtually the same processing for each of the methods that it did for each of the classes: it queries each method for custom attributes of the [DefectTrack] type and then outputs the values of the ones it finds to the console.

Conclusion

The implementation presented here is only one example of how a developer might use .NET attributes to enhance their development process. Custom attributes are a bit like XML in that the significant benefits aren’t related to “what it does”; their real power lies in “what you can do with it.” The possibilities are truly limitless, and the open nature of custom attributes makes it likely that some of their most novel and powerful uses have yet to be conceived.

Listing 1: Custom attribute class for DefectTrack attribute

using System;

namespace DefiningAndUsingCustomAttribute
{
    [AttributeUsage(AttributeTargets.Class |
                    AttributeTargets.Method,
                    AllowMultiple = true)]
    public class DefectTrackAttribute : System.Attribute
    {
        private string cDefectID;
        private DateTime dModificationDate;
        private string cDeveloperID;
        private string cDefectOrigin;
        private string cFixComment;

        public DefectTrackAttribute(string lcDefectID, string lcModificationDate, string lcDeveloperID)
        {
            this.cDefectID = lcDefectID;
            this.dModificationDate = System.DateTime.Parse(lcModificationDate);
            this.cDeveloperID = lcDeveloperID;
        }

        public string DefectID
        { get { return cDefectID; } }

        public DateTime ModificationDate
        { get { return dModificationDate; } }

        public string DeveloperID
        { get { return cDeveloperID; } }

        public string Origin
        {
            get { return cDefectOrigin; }
            set { cDefectOrigin = value; }
        }

        public string FixComment
        {
            get { return cFixComment; }
            set { cFixComment = value; }
        }
    }
}

Listing 2: Two example classes with attributes attached

namespace DefiningAndUsingCustomAttribute
{
    [DefectTrack("1377", "12/15/02", "David Tansey")]
    [DefectTrack("1363", "12/12/02", "Toni Feltman",
        Origin = "Coding: Unhandled Exception")]
    public class SomeCustomPricingClass
    {
        public double GetAdjustedPrice(double tnPrice, double tnPctAdjust)
        {
            return tnPrice + (tnPrice * tnPctAdjust);
        }

        [DefectTrack("1351", "12/10/02", "David Tansey",
            Origin = "Specification: Missing Requirement",
            FixComment = "Added PriceIsValid( ) function")]
        public bool PriceIsValid(double tnPrice)
        {
            return tnPrice > 0.00 && tnPrice < 1000.00;
        }
    }
}

Listing 3: Code to walk assembly and output attribute values

using System;
using System.Reflection;

namespace DefiningAndUsingCustomAttribute
{
    public class TestMyAttribute
    {
        public static void Main()
        {
            DisplayDefectTrack("DefiningAndUsingCustomAttribute");
            Console.ReadLine();
        }

        public static void DisplayDefectTrack(string lcAssembly)
        {
            Assembly loAssembly = Assembly.Load(lcAssembly);

            Type[] laTypes = loAssembly.GetTypes();

            foreach (Type loType in laTypes)
            {
                Console.WriteLine("*======================*");
                Console.WriteLine("TYPE:\t" + loType.ToString());
                Console.WriteLine("*=====================*");

                object[] laAttributes = loType.GetCustomAttributes(typeof(DefectTrackAttribute), false);

                if (laAttributes.Length > 0)
                    Console.WriteLine("\nMod/Fix Log:");

                foreach (Attribute loAtt in laAttributes)
                {
                    DefectTrackAttribute loDefectTrack = (DefectTrackAttribute)loAtt;

                    Console.WriteLine("----------------------");
                    Console.WriteLine("Defect ID:\t" + loDefectTrack.DefectID);
                    Console.WriteLine("Date:\t\t" + loDefectTrack.ModificationDate);
                    Console.WriteLine("Developer ID:\t" + loDefectTrack.DeveloperID);
                    Console.WriteLine("Origin:\t\t" + loDefectTrack.Origin);
                    Console.WriteLine("Comment:\n" + loDefectTrack.FixComment);
                }

                MethodInfo[] laMethods = loType.GetMethods(BindingFlags.Public | BindingFlags.Instance | BindingFlags.DeclaredOnly);

                if (laMethods.Length > 0)
                {
                    Console.WriteLine("\nMethods: ");
                    Console.WriteLine("----------------------");
                }

                foreach (MethodInfo loMethod in laMethods)
                {
                    Console.WriteLine("\n\t" + loMethod.ToString());

                    object[] laMethodAttributes = loMethod.GetCustomAttributes(typeof(DefectTrackAttribute), false);

                    if (laMethodAttributes.Length > 0)
                        Console.WriteLine("\n\t\tMod/Fix Log:");

                    foreach (Attribute loAtt in laMethodAttributes)
                    {
                        DefectTrackAttribute loDefectTrack = (DefectTrackAttribute)loAtt;
                        Console.WriteLine("\t\t----------------");
                        Console.WriteLine("\t\tDefect ID:\t" + loDefectTrack.DefectID);
                        Console.WriteLine("\t\tDeveloper ID:\t" + loDefectTrack.DeveloperID);
                        Console.WriteLine("\t\tOrigin:\t\t" + loDefectTrack.Origin);
                        Console.WriteLine("\t\tComment:\n\t\t" + loDefectTrack.FixComment);
                    }
                }
                Console.WriteLine("\n\n");
            }
        }
    }
}

Table 1: Standard .NET attributes that the property sheet uses at design-time in the Visual Studio .NET IDE.

  • Designer: Specifies the class used to implement design-time services for a component.
  • DefaultProperty: Specifies which property to indicate as the default property for a component in the property sheet.
  • Category: Specifies the category in which the property will be displayed in the property sheet.
  • Description: Specifies the description to display in the property sheet for a property.

Table 2: .NET assembly level types that you can apply attributes to.

Type
Assembly
Class
Delegate
Enum
Event
Interface
Method
Module
Parameter
Constructor
Field
Property
ReturnValue
Structure

DefiningAndUsingCustomAttribute.zip

Parse JSON into a C# Object

Let’s start with a simple JSON string. In most cases, you will get this string from a web service call. For the sake of this tutorial, we will do this manually.

var example1 = @"{""name"":""John Doe"",""age"":20}";

example1 is a simple JSON object with 2 fields: name and age.

In order to access the field(s) in this JSON string, we need to deserialize it into something C# can understand. This is where I would like to introduce the JavaScriptSerializer class, which is part of the System.Web.Script.Serialization namespace.

var JSONObj = new JavaScriptSerializer().Deserialize<Dictionary<string, string>>(example1);

This line deserializes the string example1 into an object of type Dictionary<string, string>.

Once we have done that, we can access the fields like this:

JSONObj["name"]; // equals John Doe
JSONObj["age"]; // equals 20

Note: the Dictionary definition must match the types of the values in our JSON. “John Doe” is a string but 20 is an integer, so we have to use <string, string> and not <string, int>.

Okay, so we have deserialized it but you still have to reference it in a clunky way – object["field_name"] – so let’s fix that!

First, create a class which matches the definition of your JSON. In our case, we need a class with a string property and an int property:

class Example1Model
{
    public string name { get; set; }
    public int age { get; set; }
}

And now, to deserialize our JSON into an object of that type:

var example1Model = new JavaScriptSerializer().Deserialize<Example1Model>(example1);

And to reference the fields:

example1Model.name; // equals John Doe
example1Model.age; // equals 20

As you can see, this is much cleaner! Your IDE will give you intellisense/auto-completion, type information and everything you would expect from a native type in .NET.

Show me more!

Our first example was great, but it was basic. Now let us handle a complex JSON string — how about a list of orders for a customer?

var example2 = @"{""custId"": 123, ""ordId"": 4567, ""items"":[{""prodId"":1, ""price"":9.99, ""title"":""Product 1""},{""prodId"":78, ""price"":95.99, ""title"":""Product 2""},{""prodId"":1985, ""price"":3.01, ""title"":""Product 3""}] }";

As in example 1, the first thing we need to do is create the classes that represent the data in the JSON. Here, I created 2 classes — CustomerOrderSummary holds the outer fields (custId and ordId) together with a list of objects of type Item.

class Item
{
    public int prodId { get; set; }
    public double price { get; set; }
    public string title { get; set; }
}
class CustomerOrderSummary
{
    public int custId { get; set; }
    public int ordId { get; set; }
    public List<Item> items { get; set; }
}

And to deserialize the JSON we simply do:

var example2Model = new JavaScriptSerializer().Deserialize<CustomerOrderSummary>(example2);

And to reference the fields:

example2Model.custId; // equals 123
example2Model.ordId; // equals 4567
example2Model.items.Count; // equals 3
example2Model.items[0].price // equals 9.99

There you have it! You can now parse JSON into .NET objects using C#! If you would like to retrieve and read the JSON via Objective-C and Node.js, feel free to read these two articles: iOS QuickTip: Getting and Reading JSON Data from a URL and How to Use JSON files in Node.js

Other Useful references:

original article

Symmetric encryption/decryption routine using AES

The following is a symmetric encryption/decryption routine using AES in GCM mode. This code operates in the application layer and is meant to receive user-specific and confidential information and encrypt it, after which it is stored in a separate database server. It also is called upon to decrypt encrypted information from the database.

The full description of the AES in GCM mode can be found in this document produced by David A. McGrew and John Viega The Galois/Counter Mode of Operation (GCM)
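
The routine itself is not reproduced here, but as a rough sketch of the same idea using the .NET AesGcm class (available from .NET Core 3.0 onwards), assuming the caller supplies a 256-bit key and stores the returned payload in the database:

using System;
using System.Security.Cryptography;

public static class AesGcmCipher
{
    // Encrypts plaintext and returns nonce || tag || ciphertext so the output
    // is self-contained and can be stored as a single column value.
    public static byte[] Encrypt(byte[] key, byte[] plaintext)
    {
        byte[] nonce = new byte[12]; // 96-bit nonce, as recommended for GCM
        using (var rng = RandomNumberGenerator.Create())
        {
            rng.GetBytes(nonce);
        }

        byte[] tag = new byte[16];   // 128-bit authentication tag
        byte[] ciphertext = new byte[plaintext.Length];

        using (var aes = new AesGcm(key))
        {
            aes.Encrypt(nonce, plaintext, ciphertext, tag);
        }

        byte[] payload = new byte[nonce.Length + tag.Length + ciphertext.Length];
        Buffer.BlockCopy(nonce, 0, payload, 0, nonce.Length);
        Buffer.BlockCopy(tag, 0, payload, nonce.Length, tag.Length);
        Buffer.BlockCopy(ciphertext, 0, payload, nonce.Length + tag.Length, ciphertext.Length);
        return payload;
    }

    public static byte[] Decrypt(byte[] key, byte[] payload)
    {
        byte[] nonce = new byte[12];
        byte[] tag = new byte[16];
        byte[] ciphertext = new byte[payload.Length - nonce.Length - tag.Length];

        Buffer.BlockCopy(payload, 0, nonce, 0, nonce.Length);
        Buffer.BlockCopy(payload, nonce.Length, tag, 0, tag.Length);
        Buffer.BlockCopy(payload, nonce.Length + tag.Length, ciphertext, 0, ciphertext.Length);

        byte[] plaintext = new byte[ciphertext.Length];
        using (var aes = new AesGcm(key))
        {
            // Throws CryptographicException if the data or tag has been tampered with.
            aes.Decrypt(nonce, ciphertext, tag, plaintext);
        }
        return plaintext;
    }
}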

 

Database Layer

The DatabaseLayer is an abstraction for database access, which means the calling application or library does not need to be tightly coupled to the database itself. A set of factory methods is available for creating commands and parameters, all based on the .NET abstract base classes.

Currently supported engines are SQL server and MySQL, represented by “sql” and “mysql” in your web.config or app.config.

The DatabaseLayer does not use a provider pattern; instead, an argument is passed to the factory to tell the library which database engine to use. An example is shown below. It is preferable to create one constant for the database type and share it across the multiple places it is used.

The example below is lazy loaded and uses AppSettings and ConnectionStrings from the web.config or app.config file.

private static Database database;

internal static Database GetDb()
{
    if (database == null)
    {
        database = Database.GetDatabase(
            ConfigurationManager.AppSettings["databasetype"],
            ConfigurationManager.ConnectionStrings["myconnection"].ConnectionString);
    }
    return database;
}

DatabaseLayer solution

Cryptography .NET, Avoiding Timing Attack

When comparing two MACs or hashes (including password hashes) for equality, you might think a simple comparison would be okay. Think again, because simple comparisons are susceptible to timing attacks.

In cryptography, a timing attack is a side channel attack in which the attacker attempts to compromise a cryptosystem by analysing the time taken to execute cryptographic algorithms.

If you want to read more about this take a look at this article.

if (hash1 == hash2) {
    // MAC verification is okay
    return "hashes are equal";
} else {
    // something happened
    return "hashes verification failed!";
}

Okay, so what is wrong with this?

Both arguments must be of the same length to be compared successfully. When arguments of differing lengths are supplied, false is returned immediately, and the length of the known string may be leaked in a timing attack: by measuring how long the comparison against the hash takes, an attacker can work out the length of the string.

The following method compares two byte arrays in length-constant time. This comparison method is used so that password hashes cannot be extracted from online systems using a timing attack and then attacked offline.

private static bool SlowEquals(byte[] a, byte[] b)
{
    uint diff = (uint)a.Length ^ (uint)b.Length;
    for (int i = 0; i < a.Length && i < b.Length; i++)
    {
        diff |= (uint)(a[i] ^ b[i]);
    }
    return diff == 0;
}

What does the line diff |= (uint)(a[i] ^ b[i]); do?

This sets diff based on whether there’s a difference between a and b.

It avoids a timing attack by always walking through the entirety of the shorter of a and b, regardless of whether a mismatch is found sooner than that.

The expression diff |= (uint)(a[i] ^ b[i]) takes the exclusive-or of a byte of a with the corresponding byte of b. That will be 0 if the two bytes are the same, or non-zero if they’re different. It then ORs that result into diff.

Therefore, diff will be set to non-zero in an iteration if a difference was found between the inputs in that iteration. Once diff is given a non-zero value at any iteration of the loop, it will retain the non-zero value through further iterations.

Therefore, the final result in diff will be non-zero if any difference is found between corresponding bytes of a and b, and 0 only if all bytes (and the lengths) of a and b are equal.

Unlike a typical comparison, however, this will always execute the loop until all the bytes in the shorter of the two inputs have been compared to bytes in the other. A typical comparison would have an early-out where the loop would be broken as soon as a mismatch was found:
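
For contrast, a naive early-out comparison looks something like the sketch below; the moment at which it returns reveals how much of the input matched:

// Naive comparison: returns as soon as a mismatch is found, so the time taken
// leaks the lengths and how many leading bytes matched.
private static bool NaiveEquals(byte[] a, byte[] b)
{
    if (a.Length != b.Length)
        return false;

    for (int i = 0; i < a.Length; i++)
    {
        if (a[i] != b[i])
            return false; // early out leaks timing information
    }
    return true;
}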

Microsoft provides SecureString to hold sensitive data, and it uses the generic Equals(Object) method, which determines whether the specified object is equal to the current object. There is no documentation saying that the Equals method protects against timing attacks, so we can only conclude that it is not safe for this operation.

I would suggest researching and writing our own string compare that is protected against timing attacks.

I’m not sure about all of this and further research is required, but I did find this article which would be worth considering:

https://paragonie.com/blog/2015/11/preventing-timing-attacks-on-string-comparison-with-double-hmac-strategy

SecureString

I’ve come across a class in the System.Security namespace that is often overlooked. The class is SecureString. In this post, I will go over what SecureString is and why it is needed.

Have you ever come across these scenarios before?

  • A password appears in a log file accidentally.
  • A password is shown somewhere it shouldn’t be; for example, a GUI once displayed the command line of an application being run, and the command line contained the password.
  • Using memory profiler to profile software with your colleague. Colleague sees your password in memory.
  • Using RedGate software that can capture the “value” of local variables in case of exceptions, which is amazingly useful. However, I can imagine it accidentally logging “string passwords”.
  • A crash dump that includes string password.

Do you know how to avoid all these problems? SecureString. It makes sure you don’t make silly mistakes like these. How does it avoid them? By making sure that the password is encrypted in unmanaged memory and the real value can only be accessed when you are 90% sure what you’re doing.

In essence, SecureString works pretty simply:

  1. Everything is encrypted
  2. User calls AppendChar
  3. Decrypt everything in UNMANAGED MEMORY and add the character
  4. Encrypt everything again in UNMANAGED MEMORY.

What if the user has access to your computer? Would a virus be able to get access to all the SecureStrings? Yes. All you need to do is hook yourself into RtlEncryptMemory then decrypt the memory, and you will get the location of the unencrypted memory address, and read it out. Voila! In fact, you could make a virus that will continuously scan for usage of SecureString and log all the activities with it. I am not saying it will be an easy task, but it can be done. As you can see, the “powerfulness” of SecureString is completely gone once there’s a user/virus on your system.

There are a few more points to consider. Sure, if you use UI controls that hold a “string password” internally, using an actual SecureString is not that useful.

The bottom line is: if you have sensitive data (passwords, credit cards, and so on), use SecureString. This is what the .NET Framework follows; for example, the NetworkCredential class stores the password as a SecureString. If you look, you can see around 80 different usages of SecureString in the .NET Framework.

There are many cases when you have to convert SecureString to string because some API expects it.

The usual problem is either:

  • The API is GENERIC. It does not know that there’s a sensitive data.
  • The API knows that it’s dealing with sensitive data and uses “string” – that’s just bad design.

A good question is: what happens when you convert a SecureString to a string? That can only happen because of the first point, i.e. the API does not know that it’s dealing with sensitive data. I have personally not seen that happening. Getting a string out of a SecureString is not that simple.

It’s not simple for a simple reason: it was never intended that the user convert a SecureString to a string, because at that point the GC kicks in and copies of the plain value end up in managed memory. If you see yourself doing that, you need to step back and ask yourself: why am I even doing this, and do I really need to?

You can always extend the SecureString class with an extension method, such as ToEncryptedString(__SERVER__PUBLIC_KEY), which gives you a string instance of the SecureString that is encrypted using the server’s public key. Only the server can then decrypt it. Problem solved: the GC will never see the “original” string, as you never expose it in managed memory. This is precisely what is done in PSRemotingCryptoHelper (EncryptSecureStringCore(SecureString secureString)).

And as something almost related: the Mono SecureString does not encrypt at all. The implementation has been commented out because, wait for it, “it somehow causes nunit test breakage”, which brings me to my last point:

SecureString is not supported everywhere. If the platform/architecture does not support it, you’ll get an exception. The documentation lists the deployment platforms on which it is supported.

Using SecureString for Sensitive Data

The standard System.String class has never been a very secure solution for storing sensitive strings such as passwords or credit card numbers. Using a string for this purpose has numerous problems: it’s not pinned, so the garbage collector can move it around and will leave several copies in memory; it’s not encrypted, so anyone who can read your process’s memory will be able to see the value of the string easily; if your process gets swapped out to disk, the unencrypted contents of the string will be sitting in your swap file; and it’s not mutable, so whenever you modify it there will be an old version and a new version both in memory.

Since it’s not mutable, there’s no effective way to clear it out when you’re done using it. Secure strings are held in encrypted memory by the CLR using the Data Protection API, or DPAPI, and they’re only unencrypted when they are accessed.  This limits the amount of time that your string is in plain text for an attacker to see.

The garbage collector will not move the encrypted string around in memory, so you never have to worry about multiple copies of your string sitting in your address space, unless you make copies of those.

SecureString also implements IDisposable, and when it’s disposed of or finalised if you forget to dispose of it, the memory that was used to hold your encrypted string will be zeroed out.

They also provide a feature that lets you lock them down as read only preventing other code from modifying your string.

You can create a secure string with a pointer to a character array and a length of that array. When constructed this way, the secure string will make a copy of your array, allowing you to zero out your insecure copy.

A secure string can also be constructed without an existing character array, and the data can be copied in one character at a time.
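Here is a brief sketch of both construction approaches (the characters below are placeholders, and the pointer constructor requires compiling with unsafe code enabled):

// Construct from an existing char[] via the pointer constructor, then wipe the insecure copy.
char[] buffer = { 's', 'e', 'c', 'r', 'e', 't' };
SecureString fromBuffer;
unsafe
{
    fixed (char* chars = buffer)
    {
        fromBuffer = new SecureString(chars, buffer.Length);
    }
}
Array.Clear(buffer, 0, buffer.Length);   // zero out the original array

// Or build the secure string up one character at a time.
var oneByOne = new SecureString();
foreach (char c in new[] { 's', 'e', 'c', 'r', 'e', 't' })
    oneByOne.AppendChar(c);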

One important thing to note, though, is that SecureString should not be used as a blanket replacement for System.String; it should only be used in places where you need to store sensitive information that you do not want to spread around in memory for longer than is required.

Standard operations are provided to add or modify data in your string. For instance, you will find AppendChar, InsertAt, RemoveAt, and SetAt methods. MakeReadOnly and IsReadOnly allow you to lock down the secure string. Clear, Dispose, and the finaliser take care of removing any trace of the secure string from memory.
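Here is a minimal sketch of those members in action (the values are placeholders):

using (var pin = new SecureString())
{
    pin.AppendChar('1');
    pin.AppendChar('2');
    pin.AppendChar('3');
    pin.InsertAt(0, '0');        // contents are now 0123
    pin.SetAt(3, '4');           // contents are now 0124
    pin.RemoveAt(0);             // contents are now 124
    pin.MakeReadOnly();          // any further modification now throws

    Console.WriteLine(pin.Length);        // 3
    Console.WriteLine(pin.IsReadOnly());  // True
}   // Dispose zeroes out the underlying buffer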

The main idea with SecureString is that you would never store a password or other textual secret in plain text. Unfortunately, SecureString was introduced into the framework only after plenty of APIs were built and shipped using passwords stored in a string, such as the System.Net.NetworkCredential. So any application that must use these APIs had no option but to convert secure strings to strings.

However, the SecureString class itself doesn't provide any method to get back a plain string with its content, precisely to discourage this type of usage. What a developer has to do is use functions from the System.Runtime.InteropServices.Marshal class to get a native buffer containing the plain string, marshal that value into a managed string, and then, very importantly, free the native buffer. The best way to get a string out of a secure string is to wrap this in a try/finally block so the native buffer is always freed.
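A minimal sketch of that try/finally pattern (intended only for interop with APIs that insist on a plain string, and assuming the System, System.Security, and System.Runtime.InteropServices namespaces) might look like this:

public static string ToInsecureString(SecureString secureString)
{
    if (secureString == null)
        throw new ArgumentNullException(nameof(secureString));

    IntPtr unmanaged = IntPtr.Zero;
    try
    {
        // Decrypt the contents into an unmanaged buffer.
        unmanaged = Marshal.SecureStringToGlobalAllocUnicode(secureString);

        // Marshal the buffer into a managed string (this is the insecure step).
        return Marshal.PtrToStringUni(unmanaged);
    }
    finally
    {
        // Always zero and free the native buffer, even if marshalling throws.
        if (unmanaged != IntPtr.Zero)
            Marshal.ZeroFreeGlobalAllocUnicode(unmanaged);
    }
}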

Here is a sample application showing how you can use SecureString. Please remember that in production you should not convert a SecureString back to a string, as that simply does not make sense and makes it insecure.

SecureString

Here is a short method to convert a string to a SecureString

public static SecureString ToSecureString(this string source)
{
    if (string.IsNullOrWhiteSpace(source))
        return null;

    SecureString result = new SecureString();
    foreach (char c in source.ToCharArray())
        result.AppendChar(c);
    return result;
}

Raising Multiple Exceptions with AggregateException

There are occasions where you are aware of many exceptions within your code that you want to raise together in one go. Perhaps your system makes a service call to a middleware orchestration component that potentially returns many exceptions detailed in its response, or another scenario might be a batch processing task dealing with many items in one process that require you to collate all exceptions until the end and then throw them together.

Let us look at the batch scenario in more detail. In this situation, if you raised the first exception that you found, the method would exit without processing the remaining items. Alternatively, you could store the exception information in a variable of some sort and, once all the elements are processed, use that information to construct an exception and throw it. While this approach works, there are some drawbacks.

There is the extra effort required to create a viable storage container to hold the exception information, and this may mean modifying existing code not to throw an exception but instead to log the details in this new 'exception detail helper class'. This solution also lacks the additional benefits you get from creating an exception at the point of failure, for example the numerous intrinsic properties of Exception objects that provide valuable additional context to support the message within the exception. Even when all the relevant information has been collated into a single exception class, you are still left with one exception holding all that information, when you may need to handle the exceptions individually and pass them off to existing error-handling frameworks which rely on a type deriving from Exception.

Luckily, included in .NET Framework 4.0 is the simple but very useful AggregateException class, which lives in the System namespace (within mscorlib.dll). It was created for use by the Task Parallel Library, and its use within that library is described on MSDN. Don't think that is its only use though, as it can be put to good use within your own code in situations like those described above where you need to throw many exceptions, so let's see what it offers.

The AggregateException class is an exception type, inheriting from System.Exception, that acts as a wrapper for a collection of child exceptions. Within your code, you can create instances of any Exception-based type and add them to the AggregateException's collection. The idea is a simple one, but the AggregateException's beauty comes in the implementation of this simplicity. As it is a regular exception class, it can be handled in the usual way by existing code, but it can also be treated as a unique exception collection by the particular system that cares about all the exceptions nested within its bowels.

The class accepts the child exceptions on one of its seven constructors and then exposes them through its InnerExceptions property. Unfortunately, this is a read-only collection, so it is not possible to add inner exceptions to the AggregateException after it has been instantiated (which would have been nice), and so you will need to store your exceptions in a collection until you're ready to create the AggregateException and throw it:

// create a collection container to hold exceptions
List<Exception> exceptions = new List<Exception>();

// do some stuff here ........
// we have an exception with an inner exception, so add it to the list
exceptions.Add(new TimeoutException("It timed out", new ArgumentException("ID missing")));

// do more stuff .....
// another exception, add it to the list
exceptions.Add(new NotImplementedException("Somethings not implemented"));

// all done, now create the AggregateException and throw it
AggregateException aggEx = new AggregateException(exceptions);
throw aggEx;

The method you use to store the exceptions is up to you, as long as you have them all ready at the time you create the AggregateException. The seven constructors allow you to pass combinations of nothing, a string message, and collections or arrays of inner exceptions.

Once created you interact with the class as you would any other exception type:

try
{
    // do task
}
catch (AggregateException ex)
{
    // handle it
}

This is key, as it means that you can make use of existing code and patterns for handling exceptions within your (or third parties') codebase.

In addition to the general Exception members, the class exposes a few custom ones. The familiar InnerException property is there for compatibility, and this appears to return the first exception added to the AggregateException via the constructor, so in the example above it would be the TimeoutException instance. All of the child exceptions are exposed via the InnerExceptions read-only collection property (as shown below).
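As a minimal sketch (assuming the exceptions list built in the earlier snippet), the difference between the two properties looks like this:

try
{
    throw new AggregateException(exceptions);
}
catch (AggregateException ex)
{
    // InnerException returns only the first child, here the TimeoutException
    Console.WriteLine(ex.InnerException.GetType().Name);

    // InnerExceptions exposes every child exception
    foreach (Exception inner in ex.InnerExceptions)
        Console.WriteLine("{0}: {1}", inner.GetType().Name, inner.Message);
}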

The Flatten() method is another custom member that might prove useful if you find the need to nest AggregateExceptions as inner exceptions within other AggregateExceptions. The method will iterate the InnerExceptions collection, and if it finds AggregateExceptions nested as InnerExceptions, it will promote their child exceptions to the parent level, as you can see in this example:

AggregateException aggExInner =
    new AggregateException("inner AggEx", new TimeoutException());
AggregateException aggExOuter1 =
    new AggregateException("outer 1 AggEx", aggExInner);
AggregateException aggExOuter2 =
    new AggregateException("outer 2 AggEx", new ArgumentException());
AggregateException aggExMaster =
    new AggregateException(aggExOuter1, aggExOuter2);

If we create the structure of AggregateExceptions above, with inner exceptions of TimeoutException and ArgumentException, then the InnerExceptions property of the parent AggregateException (i.e. aggExMaster) shows, as expected, two objects, both of type AggregateException and both containing child exceptions of their own.

But if we call Flatten()…

AggregateException aggExFlatterX = aggExMaster.Flatten();

…we get a new AggregateException instance returned that still contains two objects, but this time the nested AggregateException objects have gone, and we have just the two child exceptions of TimeoutException and ArgumentException.

This is a useful feature for discarding the AggregateException containers (which are effectively just packaging) and exposing the real meat, i.e. the real exceptions that have been thrown and need to be addressed.

If you're wondering how ToString() is implemented, the aggExMaster object in the examples above (without flattening) produces this:

System.AggregateException: One or more errors occurred. ---> System.AggregateException
: outer 1 AggEx ---> System.AggregateException: inner AggEx ---> 
System.TimeoutException: The operation has timed out. --- End of inner exception 
stack trace --- --- End of inner exception stack trace --- --- End of inner exception 
stack trace ------> (Inner Exception #0) System.AggregateException: outer 1 AggEx ---> 
System.AggregateException: inner AggEx ---> System.TimeoutException: The operation
 has timed out. --- End of inner exception stack trace --- --- End of inner 
exception stack trace ------> (Inner Exception #0) System.AggregateException: inner
AggEx ---> System.TimeoutException: The operation has timed out. --- End of inner 
exception stack trace ------> (Inner Exception #0) System.TimeoutException: The 
operation has timed out.<---<---<------> (Inner Exception #1) System.AggregateException
: outer 2 AggEx --- System.ArgumentException: Value does not fall within the expected
 range. --- End of inner exception stack trace ------> (Inner Exception #0) 
System.ArgumentException: Value does not fall within the expected range.

As you can see the data has been formatted in a neat and convenient way for readability, with separators between the inner exceptions.

In summary, this is a very useful class to be aware of and have in your arsenal, whether you are dealing with the Task Parallel Library or you just need to manage multiple exceptions. I like simple and neat solutions, and to me this is a good example of that philosophy.

Original Article