ELMAH or Microsoft Enterprise Library for Exception Logging

 

Enterprise Library Logging Application Block

The Enterprise Library Logging Application Block simplifies the implementation of common logging functions. You can use the Logging Application Block to write information to a variety of locations:

 

·         The event log

·         An e-mail message

·         A database

·         A message queue

·         A text file

·         A Windows® Management Instrumentation (WMI) event

·         Custom locations using application block extension points

Features

·         Multiple targets can be configured

·         Enterprise Library Configuration Console available for easy configuration

·         Large number of configuration options, which can make it daunting to start with

·         Invasive – requires code change to implement logging

·         Logs all kinds of events/information, not just for error logging

·         Formatters available for formatting the event message

·         Log filters allow filtering of log messages

·         Facilities for WCF integration

 

 

·         It will take you a bit to get up to speed with Enterprise Library – it’s not a 5 second install.

·         There’s a lot of configuration to do in the app.config/web.config file just to make it work. That said, once you understand it, it is easier on other projects.

·         You must implement the Logging Block, not just the Exception Handling Block to log the information somewhere (event log, flat file, database, etc.)

·         It’s not just for logging exceptions. For example, you may want to get log events for when a user logs in or logs out of an application.

·         You can use the configuration file to change how logging works depending on the environment (i.e. Log exceptions for Production, log everything for Dev, etc.).

·         It’s not just for web, but for all kinds of applications.
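As a sketch of what "invasive" means in practice, logging with the block is done explicitly in code. A minimal example, assuming the Enterprise Library Logging assemblies are referenced and trace listeners/categories are configured in app.config/web.config (the `OrderService` class and message are purely illustrative):

```csharp
using Microsoft.Practices.EnterpriseLibrary.Logging;

public class OrderService   // hypothetical class, for illustration only
{
    public void PlaceOrder(int orderId)
    {
        // Build a log entry and route it through whichever trace
        // listeners are configured (event log, flat file, database, etc.)
        var entry = new LogEntry
        {
            Message = "Order placed: " + orderId,
            Severity = System.Diagnostics.TraceEventType.Information,
            Priority = 2
        };

        // The category must match a category source defined in configuration
        entry.Categories.Add("General");

        Logger.Write(entry);
    }
}
```

Every call site that needs logging has to make a call like this, which is why adopting the block is a code change, not just a configuration change.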

 

 

ELMAH

 

ELMAH (Error Logging Modules and Handlers) is an application-wide error logging facility that is completely pluggable. It can be dynamically added to a running ASP.NET web application, or even all ASP.NET web applications on a machine, without any need for re-compilation or re-deployment.

 

Once ELMAH has been dropped into a running web application and configured appropriately, you get the following facilities without changing a single line of your code:

 

    Logging of nearly all unhandled exceptions.

    A web page to remotely view the entire log of recorded exceptions.

    A web page to remotely view the full details of any one logged exception, including coloured stack traces.

    In many cases, you can review the original yellow screen of death that ASP.NET generated for a given exception, even with customErrors mode turned off.

    An e-mail notification of each error at the time it occurs.

    An RSS feed of the last 15 errors from the log.

 

Released under the Apache License (open source).
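Because ELMAH is pluggable, enabling it is purely a configuration exercise. A minimal web.config sketch for classic ASP.NET (IIS 6 / classic pipeline; the integrated pipeline uses `<system.webServer>/<modules>` instead, and the `logPath` here is an assumption for illustration):

```xml
<configuration>
  <configSections>
    <sectionGroup name="elmah">
      <section name="errorLog" requirePermission="false"
               type="Elmah.ErrorLogSectionHandler, Elmah" />
    </sectionGroup>
  </configSections>

  <elmah>
    <!-- Log errors as loose XML files; other error logs (SQL Server, SQLite, ...) plug in here -->
    <errorLog type="Elmah.XmlFileErrorLog, Elmah" logPath="~/App_Data" />
  </elmah>

  <system.web>
    <httpModules>
      <add name="ErrorLog" type="Elmah.ErrorLogModule, Elmah" />
    </httpModules>
    <httpHandlers>
      <!-- Exposes the remote log viewer at elmah.axd -->
      <add verb="POST,GET,HEAD" path="elmah.axd" type="Elmah.ErrorLogPageFactory, Elmah" />
    </httpHandlers>
  </system.web>
</configuration>
```

Handled exceptions can still be recorded through error signaling, e.g. `Elmah.ErrorSignal.FromCurrentContext().Raise(ex);` inside a catch block.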

 

 

Comparison with ELMAH

Scope

Logging Application Block:
  • True enterprise-level exception handling framework, across layers, tiers, processes and services (in tandem with the Exception Handling Application Block)
  • Logs all kinds of events, not just exceptions

ELMAH:
  • Lightweight logging framework with extension points for ASP.NET web applications
  • By default logs unhandled exceptions
  • Error signaling allows handled exceptions to be logged
  • No support for non-error messages

Targets supported

Logging Application Block:
  • The event log
  • An e-mail message
  • A database (supports multiple)
  • A message queue
  • A text file
  • A Windows® Management Instrumentation (WMI) event
  • Custom locations using application block extension points

ELMAH:
  • Microsoft SQL Server
  • Oracle (OracleErrorLog)
  • SQLite (version 3) database file
  • Microsoft Access (AccessErrorLog)
  • Loose XML files
  • RAM (in-memory)
  • SQL Server Compact Edition
  • MySQL
  • PostgreSQL

Pluggable?

Logging Application Block: No; requires careful configuration and implementation in code.

ELMAH: Fully pluggable out of the box; requires only configuration for basic features.

Configuration

Logging Application Block:
  • XML configuration in the app.config/web.config as applicable
  • Enterprise Library Configuration Console tool available for ease of configuration

ELMAH: XML configuration in the web.config.

Intrusiveness

Logging Application Block: Requires code changes for implementation.

ELMAH: No code change required for basic features.

Extensibility

Logging Application Block:
  • Easily extensible, with multiple extensibility points
  • Easy to extend for log message formatting and contextual information
  • No source code change required

ELMAH:
  • Requires changes to source code for any kind of extensibility
  • Adding error message formatting and contextual information requires source code changes

Scalability

Logging Application Block:
  • Easily scales for small to medium sized applications
  • Not enough data available for large applications

ELMAH: Requires more research.

 

Summary

The Logging Application Block beats ELMAH hands down in comprehensiveness. It can be used for logging all kinds of messages from all layers of various kinds of applications, including ASP.NET, Windows Forms, WCF services, etc. It can also be used for tracing for performance and debugging purposes.

On the other hand, ELMAH is a lightweight framework for logging exceptions in ASP.NET applications.

ELMAH is very easy to configure and use, and it is fully pluggable. Implementation of the Logging Application Block, by contrast, requires careful planning and configuration, and it is intrusive in that it requires code changes wherever logging is needed.

One of the biggest benefits of the Enterprise Library is that it works in tandem with the Exception Handling block to implement a consistent strategy for processing exceptions in all the architectural tiers of an enterprise application including WCF Services.

While ELMAH is perfect for ASP.NET applications, the Enterprise Library provides the comprehensiveness that enterprise applications require.

No Caching in any Browser

I need to stop caching of my website across all browsers for security reasons. This has been driving me nuts over the past two weeks, as every method I tried still allowed some content to be cached. The easiest way to check is to press the Back button in the browser; the results should update rather than come from cache.

I finally found a solution.

In HTML:

<meta http-equiv="Cache-Control" content="no-cache, no-store, must-revalidate" />
<meta http-equiv="Pragma" content="no-cache" />
<meta http-equiv="Expires" content="0" />

In ASP.NET

Response.AppendHeader("Cache-Control", "no-cache, no-store, must-revalidate"); // HTTP 1.1.
Response.AppendHeader("Pragma", "no-cache");
Response.AppendHeader("Expires", "0"); 

Cache-Control is per the HTTP 1.1 spec for clients and proxies (and implicitly required by some browsers next to Expires), Pragma is per the HTTP 1.0 spec for clients and proxies, and Expires is per the HTTP 1.0 and 1.1 specs for clients and proxies. Other Cache-Control parameters are irrelevant if the three mentioned above are specified. The Last-Modified header, often suggested elsewhere, is only useful if you actually want the response to be cached, so you don’t need to specify it at all.

Note that when the page is served over HTTP and a header is present in both the HTTP response headers and the HTML meta tags, the one specified in the response header takes precedence over the HTML meta tag. The HTML meta tag is only used when the page is viewed from the local disk file system. See also the W3 HTML spec, chapter 5.2.2. Take care when you don’t set these headers programmatically, because the web server may include some default values. To verify the headers, you can inspect them using the Firebug Net panel.
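An alternative to appending raw headers in ASP.NET is the `HttpCachePolicy` API on the response, which also disables server-side output caching. A sketch, assuming it runs inside an ASP.NET page or handler (the `NoCacheHelper` name is illustrative):

```csharp
using System;
using System.Web;

public static class NoCacheHelper   // hypothetical helper, for illustration
{
    // Call from Page_Load or a controller action, e.g.
    // NoCacheHelper.Apply(Response);
    public static void Apply(HttpResponse response)
    {
        response.Cache.SetCacheability(HttpCacheability.NoCache);    // Cache-Control: no-cache
        response.Cache.SetNoStore();                                 // adds no-store
        response.Cache.AppendCacheExtension("must-revalidate");      // adds must-revalidate
        response.Cache.SetExpires(DateTime.UtcNow.AddYears(-1));     // Expires in the past
        response.AppendHeader("Pragma", "no-cache");                 // for HTTP 1.0 clients/proxies
    }
}
```

The end result on the wire is equivalent to the AppendHeader version above, with the added benefit that ASP.NET’s own output cache is told not to hold the page either.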

Bug Process

Background
 
 


 
When a bug is first created it is assigned a nominal story point value of 1.  This value is intended to cover the investigation phase of the bug only (or a fix if it’s found to be a small issue).  Having a story point value against the bug (even if it only covers investigation initially) means that we can allocate each bug into a sprint along with other backlog items, and it gives us a window of time to investigate the problem.
The outcome of this stage should be either:
  1. Bug resolved; this should only occur with small bugs where the issue was found, fixed and re-tested within the allocated task hours
  2. Further investigation required; in which case the story-point value of the bug should be increased to indicate that the problem is very complex and could not be progressed during the first pass through the stage
  3. Investigation completed; where the problem has been documented within the bug item and a number of tasks (with associated hours assigned) have been created which detail how to resolve the problem
Bugs which have not progressed (e.g. outcome 2 in the list above) should be re-story-pointed and added back into the main backlog to progress through stage 1 again at a later date.  Obviously bugs which have been resolved (outcome 1) can be marked as such and bugs which have been investigated and tasked (outcome 3) can be added back into the product backlog for assigning to a future sprint.
 
Note:  The important point to note about this phase is that it should be considered time-boxed based on what time is allocated to the tasks created for the bug at sprint planning – based on the default story point value and current velocity, this should be roughly around 3 hours total.
 
Stage 2 – Fixing
 
 

 
 
Bugs which enter this stage should have already been through the first stage of investigation at least once and should have appropriate tasks against them which detail what needs to happen to resolve the problem.
Since each bug should have been through the first stage of analysis, we should know enough at this stage to be confident in being able to both apply a fix and complete the associated tasks in the allocated sprint.

TFS Standards Policy

Rules organising the process of checking in and out of TFS are vital for preserving the integrity of our solution.
 
This list is by no means exhaustive, and any suggestions are welcomed and encouraged.
  1. Before Checking In or Out:
    1. Consider getting latest.
    2. Check if someone else has already checked the file out. If so, inform that person. This to avoid duplicating activities.
  2. Checking out:
    1. You can check out a file either explicitly, by right-clicking and selecting Check Out, or implicitly, by typing in the file.
    2. As noted above, first check whether the file is already checked out by another person; if so, inform that user to avoid duplicating work.
  3. Checking In:
    1. Before checking in make sure:
      1. The modified code compiles.
      2. The entire solution builds successfully.
      3. StyleCop rules are satisfied.
      4. A code reviewer is present.
    2. Add a clear and concise description of what has been changed and why.
    3. Associate the check-in with the relevant Work Item ID(s).
    4. Enter the code reviewer’s name.
    5. Never Auto-Merge; always merge manually.
    6. Check that all new files, or relevant assembly in the global Library folder, have been successfully added to TFS. Missing such files might trigger Build Server failure.
  4. After checking in keep an eye on the Build Server notification for successful build.
  5. At the end of the working day, if the coding is not successfully completed, you must shelve your changes (“Shelve” in TFS).
  6. Make sure to regularly update relevant work items with progress and relevant comments.
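For those working from the command line, much of the same workflow maps onto tf.exe, the Team Foundation command-line client. A sketch (the file path, shelveset name and comment are illustrative; work item association is normally done via the check-in dialog):

```shell
# Get latest before starting work
tf get /recursive

# See who has the file checked out before touching it
tf status Services/OrderService.cs /user:*

# Check out explicitly
tf checkout Services/OrderService.cs

# Check in with a clear, concise comment
tf checkin Services/OrderService.cs /comment:"Fix null reference in order totals"

# Shelve unfinished work at the end of the day
tf shelve "WIP-order-totals" /comment:"End of day shelve"
```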

Singleton Pattern the right way

The singleton pattern exists in almost all modern programming languages, yet I keep finding it written incorrectly in so many applications. So let’s start with the right way:

public sealed class SimpleNoLockLazy
{
    static readonly SimpleNoLockLazy instance = new SimpleNoLockLazy();

    // Explicit static constructor to tell the C# compiler
    // not to mark the type as beforefieldinit
    static SimpleNoLockLazy()
    {
    }

    // Private constructor prevents external instantiation
    SimpleNoLockLazy()
    {
    }

    public static SimpleNoLockLazy Instance
    {
        get
        {
            return instance;
        }
    }
}

So why this implementation?  A key reason for using a singleton is performance in a multi-threaded environment, and this pattern is not only simple, it is also the fastest.  I have attached a benchmark application, based on Jon Skeet’s benchmark, but testing using parallel processes.
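Since .NET 4, a common alternative (also covered in Jon Skeet’s article) is to delegate the laziness and thread safety to `Lazy<T>`; a minimal sketch (the `LazySingleton` name is illustrative):

```csharp
using System;

public sealed class LazySingleton
{
    // Lazy<T> is thread-safe by default
    // (LazyThreadSafetyMode.ExecutionAndPublication), so the factory
    // delegate runs exactly once even under concurrent access
    private static readonly Lazy<LazySingleton> lazy =
        new Lazy<LazySingleton>(() => new LazySingleton());

    public static LazySingleton Instance
    {
        get { return lazy.Value; }
    }

    // Private constructor prevents external instantiation
    private LazySingleton()
    {
    }
}
```

This trades a small amount of overhead for fully lazy initialisation regardless of what other static members the type has, without relying on the beforefieldinit subtlety above.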

Benchmark.zip (9.98 kb)

I found Jon Skeet’s article very useful.