Tuesday, September 13, 2011

Passing objects of anonymous types as parameters to methods

One of the underpinnings of LINQ is anonymous types. When you use a projection (i.e. select), you can choose to return data as an anonymous type. However, because the type is anonymous, you can't declare it as a return or parameter type, so the data is effectively trapped inside the method. A consequence is that a method containing LINQ can become bloated because the anonymous data can't be passed out. One solution is the dynamic keyword, which lets you pass the object out of the method; the object remains read-only. Here is a small example.
void Main()
{
    dynamic foo;
    var bar = new { name = "John", title = "code monkey" };
    var bar2 = new { name = "John", title = "junior code monkey" };

    foo = bar;
    WriteIt(foo);
    WriteIt(bar2);
}

static void WriteIt(dynamic dynamo)
{
    Console.WriteLine(dynamo.title);
}

One important gotcha with dynamic typing is that extension methods aren't currently available through dynamic dispatch. That is a real shame, since the two techniques would otherwise go together naturally for more Ruby-esque development. If you really want or need to program with dynamic techniques in .NET, you are better off with IronRuby or IronPython. Sadly, Microsoft's interest in the DLR appears to have waned after it succeeded in making Microsoft Office programming easier from .NET. You can perform a cast, though, but only if your type is not anonymous.
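Here is a minimal sketch of the gotcha (the Shout extension method is a hypothetical name used only for illustration): calling an extension method through a dynamic receiver fails at runtime, while invoking the same method as an ordinary static call works.

using System;

static class StringExtensions
{
    // Hypothetical extension method, purely for illustration.
    public static string Shout(this string s)
    {
        return s.ToUpper() + "!";
    }
}

class Program
{
    static void Main()
    {
        var bar = new { name = "John", title = "code monkey" };
        dynamic foo = bar;

        // Works: an ordinary static call, with overload resolution deferred to runtime.
        Console.WriteLine(StringExtensions.Shout(foo.title));

        // Fails with RuntimeBinderException: the dynamic binder does not consider extension methods.
        // Console.WriteLine(foo.title.Shout());
    }
}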

Tuesday, August 23, 2011

Compiler As a Service (CAAS) for VB.NET

Microsoft has been promoting a future feature of its .NET compilers called Compiler as a Service (CAAS). CAAS will allow direct access to the functionality of the compiler. According to a Microsoft job posting described here, CAAS allows “REPL, write-your-own refactorings, C# and VB as scripting languages, and taking .NET to the cloud”. I think VB.NET, in particular, stands to gain, with the potential for C# code to become convertible to VB.NET.


“Tightening the feedback loop” has been my mantra over the past year. In one sense of the phrase, REPLs (read-evaluate-print loops) are a way to get immediate feedback on your code. I have been using, and loving, LinqPad a lot lately. It is far more than a way to run LINQ queries; it is a very well thought out code snippet compiler. It isn't truly a REPL: code is compiled with the benefit of behind-the-scenes code that LinqPad supplies to form a complete class, and, unlike in a REPL, once code is run its variables aren't kept around globally for continued use. Essentially it is read-evaluate-print without the loop. LinqPad does succeed in providing much quicker feedback. Besides LinqPad, PowerShell and the Firebug JavaScript command line for Firefox are tools I use frequently.


Aspect-oriented programming with tools like PostSharp could be greatly enhanced by CAAS. PostSharp works by post-compilation code weaving, and I think it might be significantly easier to weave code with the compiler functionality opened up. The job posting suggests that refactoring could benefit in the other direction, with a mini-compilation step done in the background to assist in changing the codebase. I wouldn't want to speculate how at this point.


As a LinqPad user writing VB.NET you are a second-class citizen, since Intellisense is only provided for C#. Joe Albahari explains on Stack Overflow that CAAS would allow him to more easily provide VB.NET Intellisense.


Putting a VB.NET-specific spin on CAAS, there is the potential ability to seamlessly convert between VB.NET and C#. One of the obstacles facing VB.NET is the need to convert code snippets available only in C#. For example, trying to convert a C# LINQ statement to VB.NET fails utterly using Telerik's converter, http://converter.telerik.com/. CAAS could address a real pain point in using VB.NET.

Friday, August 19, 2011

Nesting Depth Metric

One of my fundamental coding practices is to avoid nesting depth in methods. A really good explanation of this point has been made by Patrick Smacchia, author of NDepend. NDepend, speaking as a purchaser of the product, is a very cool tool for measuring an application. I won't belabor the technique, since Patrick explains it well, but I want to stress it because I think it is so critical. I call this the early exit strategy. Rather than have nested if statements, I return from a function as soon as possible. So, instead of a more complex boolean expression or nested ifs, I have a simpler expression with two if statements, each of which returns from the function if appropriate. As Patrick shows, the same idea applies to a continue statement, which aborts further processing of the current iteration of a loop.
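A minimal sketch of the idea, using a hypothetical order-processing example (the Order, OrderItem, and Ship names are made up); both methods do the same work, but the early exit version never goes more than one level deep.

using System;
using System.Collections.Generic;

class OrderItem
{
    public bool InStock;
    public string Sku;
}

class Order
{
    public bool IsPaid;
    public List<OrderItem> Items = new List<OrderItem>();
}

class OrderProcessor
{
    // Nested version: every extra condition adds another level of depth.
    public void ProcessNested(Order order)
    {
        if (order != null)
        {
            if (order.IsPaid)
            {
                foreach (var item in order.Items)
                {
                    if (item.InStock)
                    {
                        Ship(item);
                    }
                }
            }
        }
    }

    // Early exit version: guard clauses return (or continue) as soon as possible,
    // so the happy path stays at a single level of nesting.
    public void ProcessEarlyExit(Order order)
    {
        if (order == null) return;
        if (!order.IsPaid) return;

        foreach (var item in order.Items)
        {
            if (!item.InStock) continue;
            Ship(item);
        }
    }

    void Ship(OrderItem item)
    {
        Console.WriteLine("Shipping " + item.Sku);
    }
}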

Friday, August 12, 2011

NCover and working with Code Coverage tools

I have been reluctant to embrace the TDD philosophy in my development. I have spent the last year adding unit tests after coding, with the code written using DI techniques. The tests were aimed at the “low hanging fruit” scenarios: lower-level classes with fewer dependencies and logic-centric classes. Classes with dependencies on communications, file I/O, and timing-based events tended to be ignored. Not surprisingly, my defects were concentrated in areas not covered by tests. I did employ a practice of writing at least one unit test for each defect.


Recently, after getting encouragement from management to spend time improving unit tests (in part due to schedule slack), I spent some time working with NCover, one of the leading .NET code coverage tools.


NCover has been simply excellent to work with. I have used NCover 3 Complete which, at $479, is pretty pricey. NCover 3 Classic is reasonably priced at $199, and the features provided at the Classic level suit my current usage fine. The UI for new projects emphasizes the quickstart documentation, and the focused ribbon-based UI made it easy to navigate. I found it a nice workflow to move between production code viewed in NCover and test code viewed in Visual Studio. I tend to open many documents in Visual Studio at once, and not having to bounce between test and production code files there helped me be more productive.


I spent some time later working with Visual Studio 2010 Code Coverage (available with the Premium or Ultimate editions). I think it important to mention that my development environment might well affect my perspective. My machine is modern but lacks a solid state drive. Visual Studio crashes or hangs more than I would like; it is also fairly slow and uses lots of virtual memory. I have never met an add-in that I didn't want, so this could well be affecting my experience. These factors collectively encourage me to favor non-integrated tools. More importantly, my single monitor with its 1024x768 display makes me want to minimize the number and size of Visual Studio windows I have open.


Having given these caveats, I strongly preferred my experience with NCover. First, the issues above already make Visual Studio feel cramped. Second, debugging MSTest unit tests and obtaining code coverage are mutually exclusive; I can have one or the other turned on at any given time. Hitting Ctrl-R, Ctrl-A to debug into unit tests is the sweet spot of my testing workflow, and I really don't like the idea of having to switch settings regularly. Third, NCover's windows and icons are tightly focused on its core tasks, which made learning NCover really easy. In contrast, Visual Studio leverages its existing code editor, status windows, menus, and icons, and in Visual Studio Ultimate the code coverage-related options are scattered amid a plethora of features. Fourth, VS requires instrumenting assemblies whereas NCover doesn't; since my assemblies are usually signed, this means the assemblies must be modified and then re-signed for each coverage run.


Wednesday, August 10, 2011

Thanks to attendees of my Introduction to Linq as a Language Feature talk.

Here is a LINQ to a zip file with my presentation and all sample LinqPad files.

Here are a set of LINQs covered in the talk.

Thursday, June 16, 2011

Clean Coding in Conshohocken

I was pleased to find out that Brian Donahue has lured another great teacher and presenter to the Philadelphia area. From February 20 to 23, 2012, Uncle Bob Martin is coming to Conshohocken to teach TDD (http://www.eventbrite.com/event/1804937617)! I have signed up and am looking forward to it.
This has inspired me to read his new book, The Clean Coder. I really enjoyed reading it. It is very much anecdotal in nature, and he has great stories to tell. I recently took a course from Juval Lowy in which he focused on the successes he has had and how to generalize them so others can be successful; in contrast, Bob's stories are funny, self-deprecating, and even courageous in their level of self-revelation. I would never have the courage to tell his story about a meeting on project estimation held while drunk. (He wasn't drinking on the job, of course, just bringing work talk into an off-hours social gathering.) A generous measure of failure is important, especially when the perspective comes from someone like Uncle Bob, who is essentially a preacher for his point of view; preachers without that perspective can come across to me as strident or humorless. Readers who don't expect Uncle Bob to offer sermons on agile and TDD may be disappointed or even hostile. His views, particularly in this book, represent ideals; striving for 100% code coverage is definitely an extreme view.
He has a really intriguing quiz early on covering some of the more obscure topics in computer science. His questions are:

  • Do you know what a Nassi-Schneiderman chart is?

  • Do you know the difference between a Mealy and a Moore state machine?

  • Could you write a quicksort without looking it up?

  • Do you know what the term “Transform Analysis” means?
    Answer: I am guessing that this refers to Fourier Analysis. Despite learning about this in grad school, all I can say is that this is a way to solve problems whose answers can be found through analysis of differential equations.

  • Could you perform a functional decomposition with Data Flow Diagrams?
    Yuck; I did this all too often on my first professional project. I have found it is not easily applicable to OO design: cross-cutting concerns like logging or communication ended up being elements in far too many different diagrams.

  • What does the term “Tramp Data” mean?
    Answer: A term describing data which is passed to a function only to be passed on to another function.

  • Have you heard the term “Connascence”?
    Answer: Two software components are connascent if a change in one would require the other to be modified in order to maintain the overall correctness of the system. Connascence is a way to characterize and reason about certain types of complexity in software systems. I really like this term, I hope it sticks in my brain.

  • What is a Parnas Table?
    Answer: Tabular documentation of function values. It inspired the creation of FitNesse. It is just one of many contributions made by the brilliant David Parnas. His collected papers can be purchased here at Amazon.


Although the mistake isn't significant, it reflects my pedantic nature, age, and level of geekdom that I was bothered by the fact that he misquoted Yoda. It is “do or do not. There is no try”, not “Do or do not. There is no trying”. To be fair, this reads like something an editor might have “corrected”. For the best Yoda reference, see Udi Dahan's videocast, http://www.infoq.com/presentations/Making-Roles-Explicit-Udi-Dahan. Misquote aside, Bob's general point about language usage and estimation is one of my favorites in the book. As a developer, agreeing to try to meet a deadline is agreeing to the schedule. I have been in situations where other developers have been caught by agreeing to try something that deep down they knew was unattainable. It is much better to be upfront with management early and not yield to the “try” request. Bob emphasizes the value of analyzing hedge phrases like “try”, “do my best”, and “see what I can do” on the developer side, and similar words on the management side. He mentions several that I need to be more alert for as signs of non-commitment, including saying “we need to …” or “let's …”.
I found the coverage of individual versus group estimation interesting. The ideas behind PERT estimation are familiar to me, and being individually responsible for estimation is something I have done often. What I haven't done, or learned much about, are techniques for group-based estimation. Bob talks about low-ceremony versions of the Wideband Delphi process, like flying fingers and planning poker. His discussion of a technique called Affinity Estimation was particularly intriguing: a group of tasks are written down on cards, and a group of people, without talking, must order the tasks. Tasks which keep moving in a volatile manner are set aside for discussion. Once the ordering of the tasks becomes stable, discussion can begin. I like the idea that the risk of groupthink is lessened by preventing vocal communication for a time.

Reflections on my Code Contracts experience

Microsoft has half-baked Code Contracts into the .NET Framework. Although having language-independent support for Code Contracts is a great step forward, widely available static analysis tools are the piece that makes contracts worthwhile. Until Code Contracts are available at the Visual Studio Professional level (they currently require the Academic or Premium versions), widespread adoption will not take hold.

I have been lucky enough to work with Visual Studio Ultimate, so I'd like to share my experience from several months of working with contracts. This is based on release 1.4.31130.0 (November 30, 2010), which is not the current version (currently a Pi Day release!). These tools are a community release, not a fully integrated final version, so I won't focus on the relatively few bugs or missing features.

I started by bringing up adoption of contracts because of a beneficial snowball effect. Having open source and commercial projects ship contracts with their code libraries allows me to use, not abuse, their APIs. I think this is especially true for open source projects, because developers like to write code, not documentation. Test-driven development has been a boon for open source documentation, a benefit that doesn't get the attention other TDD benefits do: I can look at the tests to help understand how to use the code. The more documentation that can be incorporated as source code, the better off developers who use a given library will be. Having contracts in place with static analysis can effortlessly (aside from slower compilation) push feedback to developers, minimizing the effort they must spend pulling information out of code, comments, and documentation. Microsoft's adoption of contracts within the .NET Framework has already been helpful to me in preventing bugs. I am not talking about documentation in the conventional sense here, although I address that point below.

I have found it useful to switch between synchronous and asynchronous static analysis. If I really want the maximum benefit of contracts in an area, I use synchronous analysis, where my build completes only after the static analysis is complete. However, the extra 5 to 10 seconds of compilation keeps me from making synchronous analysis a permanent setting; I have to be in a coding area where I expect significant benefit from contracts before I turn on synchronous static checking. It may seem like a small point to focus on one specific setting, but I feel usage patterns of synchronous analysis can be a great metric for what a developer perceives as the ad hoc cost-benefit of using contracts. I am sure many developers have one consistent process and settings usage, and for them this wouldn't be as good a metric, but I am always a knob twiddler myself.

One way of classifying the development I do is to split it between highly reusable code and application code for which I don't find the cost-benefit of high reuse compelling. Code reuse is a constantly cited benefit of OO programming, but that benefit is often not significant enough to outweigh its cost. Reusable code often involves levels of abstraction that make grokking it take a real time investment, and my rate of code production for reusable code is much, much lower. When you provide a library for others to use, you have to be prepared for every possible kind of abuse by callers. For less reusable code, the level of paranoia in trusting the calling object can be reduced. In areas like null-checking, this is a tremendous use case for code contracts. I can assume, for example, that a sequence of method calls that must occur in order is being used by a developer familiar with the codebase. They may still screw things up, but my handling of mistakes can be simpler, without trying to correct the misuse or going to significant effort to make clear exactly what the mistake was and how to rectify it. The sequence of method calls is an example, though, of where contracts aren't applicable, which I'll expound upon below. Also, coming up with naming conventions and other forms of consistency is no mean task, as the very useful Framework Design Guidelines book illustrates.
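As a minimal sketch of the null-checking case (the repository class and method names are hypothetical), preconditions and postconditions look like this; with static checking enabled, a caller that might pass null gets a warning at compile time rather than a NullReferenceException at runtime.

using System;
using System.Diagnostics.Contracts;

public class Customer
{
    public string Name { get; set; }
}

public class CustomerRepository
{
    public Customer FindByName(string name)
    {
        // Precondition: callers must not pass a null or empty name.
        Contract.Requires(!String.IsNullOrEmpty(name));

        // Postcondition: this method never returns null.
        Contract.Ensures(Contract.Result<Customer>() != null);

        // Lookup elided; a real implementation would query a data store.
        return new Customer { Name = name };
    }
}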

Actual generated documentation is another thing Code Contracts can provide, and it is one of the bullets Microsoft has used as a selling point for contracts. The split above between highly reusable and less reusable code applies here too: I find this kind of documentation beneficial for highly reusable code, and less so otherwise. I feel the same way about the XML documentation used as input to tools like Sandcastle. When I am writing reusable code I try to write useful XML documentation, but when I am not, I abhor writing it. It just clutters the code, and too often I fall into the trap of writing comments that aren't truly meaningful. It can be too easy to rely on GhostDoc, wonderful tool that it is, to produce documentation with zero value added. At the moment, the stable Sandcastle release doesn't support processing code contract documentation. The Sandcastle project has a patch to support this, but I failed to get it working and rolled back to the stable release.

Writing code that utilizes contracts leads one to a more functional approach, rather than an OO one. My experience with pre- and postconditions was more immediately fruitful than my use of object invariants. Pre- and postconditions worked best when I focused on not having side effects and not storing state in objects; any possibility of side effects limits or eliminates the applicability of this variety of contract. The scenario I mentioned earlier in this post, where I relied on a sequence of method calls being made, depends on object state to work and is beyond the scope of where contracts can help, at least in today's version. In the much smaller amount of time I have spent playing with object invariants, I didn't get the static analysis findings I had hoped for; it is quite possible that this just requires more effort on my part. Interestingly, although I see contracts being awesome for functional programming, the current state of F# support may be lacking, as this blog post suggests.
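For reference, this is roughly what an object invariant looks like (the Account class is hypothetical); the static checker is supposed to verify that every public method preserves the invariant, which is the part I haven't yet gotten useful findings from.

using System.Diagnostics.Contracts;

public class Account
{
    private decimal balance;

    [ContractInvariantMethod]
    private void ObjectInvariant()
    {
        // The checker should prove this holds after every public method.
        Contract.Invariant(balance >= 0);
    }

    public void Deposit(decimal amount)
    {
        Contract.Requires(amount > 0);
        balance += amount;
    }

    public void Withdraw(decimal amount)
    {
        Contract.Requires(amount > 0);
        Contract.Requires(amount <= balance);
        balance -= amount;
    }
}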

I have found myself deciding which classes go in which assemblies partly based on whether I am using contracts. Code contracts can produce a lot of findings to treat either as warnings or as errors. Enabling implicit non-null checking, for example, means that calling code will need to do a LOT of error handling. In a less reusable scenario, it can be convenient to have null checking done by the called code in one place, rather than in each piece of calling code. In such a scenario, I would still check contract usage against other assemblies; I just wouldn't create contracts for the classes within this assembly.

Getting serious with Code Contract usage requires dealing with a large volume of warnings. Unless you are in a situation where 100% compliance is pursued, there will be warnings you'd like to disregard. Code Contracts attempts to deal with this through baseline files, which hold a list of previously observed warnings to be ignored. I didn't spend much time trying to use this feature, but when I tried I couldn't get it to work. Another option is the ContractVerification attribute, which allows classes, methods, or even assemblies to be selectively ignored or verified.
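A quick sketch of that attribute as I understand it (LegacyImporter is a made-up class): setting it to false excludes a scope from static verification, and it can be turned back on for individual members.

using System.Diagnostics.Contracts;

// Skip static verification for this legacy class...
[ContractVerification(false)]
public class LegacyImporter
{
    // ...but opt this one method back in.
    [ContractVerification(true)]
    public void Import(string path)
    {
        Contract.Requires(!string.IsNullOrEmpty(path));
        // Import logic elided.
    }
}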

Code Contracts is a really compelling feature. For now, provided my employer has a license that includes static analysis, I will continue to use Code Contracts for high-reuse code. Since I only have Professional at home, my side projects will go without Code Contracts. Until Microsoft decides when and how to fully include Code Contract support within Visual Studio, I am leery of an “all-in” Code Contract adoption for a team.

Sunday, March 20, 2011

Inaugural DevReady event on MVVM

There has been a tremendous amount of hype recently around the MVVM (Model-View-ViewModel) pattern. Miguel Castro and DevExpress put together a developer day that shows the hype is backed by substance. There was a sold-out crowd this past Saturday to see this event at the Microsoft office in Malvern, PA. A convivial atmosphere was created by the three Hispanic presenters: Miguel; Dani Diaz, the Microsoft developer evangelist; and Seth Juarez, a DevExpress developer evangelist. The speakers' family backgrounds, respectively Cuban, Dominican, and Mexican, provided an ongoing source of levity and continuity. The back and forth between the speakers, both light-hearted and technical, helped the day cohere into a single focused event, not a collection of individual topics like a code camp.


The MVVM pattern was first articulated by Martin Fowler (see http://martinfowler.com/eaaDev/PresentationModel.html), who named it the Presentation Model pattern. MVVM, despite being a really awkward palindrome, is a valuable pattern that has emerged in the rich client area; web applications, where routing is preeminent, tend to emphasize the MVC pattern instead. Most developers are familiar with what a model and a view are by this point: a Model represents the business logic of an application, and a View represents a visual interface to an area of the application. The role of the ViewModel is to be a logical representation of the visual interface.
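As a minimal sketch of what that logical representation looks like (the names are hypothetical, and WPF-style data binding is assumed), a ViewModel exposes bindable properties and raises change notifications, with no reference to any visual element:

using System.Collections.ObjectModel;
using System.ComponentModel;

// The ViewModel models what the view shows, not how it is drawn.
public class CustomerSearchViewModel : INotifyPropertyChanged
{
    private string searchText;

    public ObservableCollection<string> Results { get; private set; }

    public CustomerSearchViewModel()
    {
        Results = new ObservableCollection<string>();
    }

    public string SearchText
    {
        get { return searchText; }
        set
        {
            if (searchText == value) return;
            searchText = value;
            OnPropertyChanged("SearchText"); // the view's TextBox binds to this property
        }
    }

    public event PropertyChangedEventHandler PropertyChanged;

    protected void OnPropertyChanged(string propertyName)
    {
        var handler = PropertyChanged;
        if (handler != null)
            handler(this, new PropertyChangedEventArgs(propertyName));
    }
}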


The core of the event was Miguel's three sessions on XAML and MVVM. His XAML talk was really an overview of WPF development, and attending it is a must for developers new to WPF, Silverlight, or WP7. It will be given again at Philly Code Camp on April 9, so check it out there! I can't quantify just how much time it would have saved me in learning WPF. He talked a lot about layout, which is much different than in Windows Forms or other rich client development. I initially misconstrued the Grid as something like a DataGrid, when it is really intended purely for layout; as Miguel pointed out, the Grid is much like a table in HTML. Miguel drove home the point that WPF layout owes a lot to how HTML and CSS combine to represent web markup. He showed how a drag-and-drop approach adds a lot of bad markup to the XAML. He did point out that the DevExpress WPF tools are a great way to let devs deal with layout less, and that the pain points in the native .NET DataGrid control for WPF are an area where the DevExpress grid really shines. I think investing in a good control library can be the best-spent money in a project.


Dani Diaz gave a presentation focused on Microsoft's efforts to support multitargeting .NET applications. I personally think this will be the most important reason for developers to use the MVVM pattern: MVVM's loosely coupled architecture increases the ease of reuse by cordoning off direct GUI dependencies into the view. Dani's presentation covered WPF, Silverlight, and WP7 multitargeting. I actually think it is likely that two platforms not discussed at this event, iPhone and Android, will drive MVVM adoption by allowing developers to work in a .NET shop and still target the most popular mobile platforms. Right now, iPhone and Android development with .NET means using Novell's MonoTouch and MonoDroid development tools. As Dani pointed out, write once, run everywhere is a myth; users, myself included, want apps that behave natively on their OS. MVVM is the perfect pattern to allow multitargeting.


Dani also talked about the upcoming Microsoft effort to support what they call Portable Library Tools. The Portable Library Tools are in a CTP stage right now and probably aren't that useful yet. As Dani explained, Microsoft is still in the process of exposing .NET framework elements that exist in a compatible way on multiple platforms. At the moment, the ObservableCollection class is not publicly supported by the CTP, but it is expected that future releases will include it. The ObservableCollection class is pretty essential for taking advantage of data binding.


The final presentation of the day was by Seth Juarez of DevExpress. Seth writes about his research in machine learning here. Seth's approach was to give himself a crash course in Prism and share the results with us, and his candor in talking about his experience with Microsoft's effort to support composite application development was refreshing. Prism can be a little difficult to get one's head around and appreciate, and Seth showed that presenting a topic as a newcomer can be a good way to provide a quickstart to using Prism. Miguel also made a good point in his talk that Prism itself doesn't provide any support for validation; Miguel is working on an MVVM framework built on top of Prism that adds validation. Seth used CodeRush throughout his talk, illustrating just how much of a productivity boost a developer can get from a tool like CodeRush to support refactoring and minimize typing. It was pretty impressive, although I still find the giant arrows that CodeRush uses garish and over the top. A little subtlety would be preferable, and less of a distraction.

Tuesday, March 8, 2011

FileHelpers is my new best friend

I have gained countless tools and information over the years from Scott Hanselman's blog. My new favorite tool is Marcos Meli's FileHelpers. It is a great tool for dealing with delimited or fixed-length-field messages. Despite the name, it is extremely useful for dealing with messages that are memory-based as well. I wish I had had this tool years ago; it would have saved me countless hours of development! In particular, dealing with problems in data or record-length changes is a teeth-gnashing, prototypically “bejabbers” experience.

As a longtime C++ developer, the one feature I missed in .NET was the ability to use structures/unions of character arrays to deal with fixed-length-field messages. In C++, this feature relies on the ability to directly access memory. FileHelpers gives me a similar capability using reflection and attributes: you adorn a class and its fields with FileHelpers-specific attributes and it knows how to do the rest. The FileHelpers documentation is superb, unlike that of many open source projects, so I'll just include a brief excerpt of it here to whet your appetite.

[FixedLengthRecord()]   
 public class Customer   
 {   
     [FieldFixedLength(5)]   
     public int CustId;   
        
     [FieldFixedLength(20)]   
     [FieldTrim(TrimMode.Right)]   
     public string Name;   
}

As cool as reflection is, performance is always a potential concern. FileHelpers is a great illustration of how .NET can produce highly performant code. Many developers, myself included, think of reflection only as a way to access methods, properties, and fields dynamically. Using Reflection.Emit, as FileHelpers does, is a mechanism for generating code at runtime; combined with caching, dynamic code generation gives excellent performance.

To work with FileHelpers, I create a wrapper around the use of fixed-length records.


using System;
using System.Collections.Generic;
using System.Text;
using FileHelpers;
    /// <summary>
    /// This generic class provides convenience functions for the FileHelpers library.
    /// This class implements the Facade design pattern by encapsulating the FileHelpers
    /// engine and providing a simple interface to that functionality.
    /// </summary>
    /// <typeparam name="T">the class type representing a message. This type must use the 
    /// FixedLengthRecord attribute</typeparam>
    public class FixedLengthMessageParser2<T>  where T:class
    {
        FileHelperEngine engine;
        /// <summary>
        /// Initializes a new instance of the <see cref="FixedLengthMessageParser2&lt;T&gt;"/> class.
        /// It creates the file helper object.
        /// </summary>
        public FixedLengthMessageParser2()
        {
            engine = new FileHelperEngine(typeof(T));
        }
        /// <summary>
        /// Parses the specified aggregate data.
        /// </summary>
        /// <param name="aggregateData">The aggregate data.</param>
        /// <param name="records">The records found to be contained in the data</param>
        /// <param name="resultText">Results of parsing. Empty string if 
        /// successful, or exception details otherwise</param>
        /// <returns>True if able to parse, False otherwise.</returns>
        public bool Parse(string aggregateData, out T[] records, out string resultText)
        {
            records = null;
            resultText = String.Empty;
            try
            {
                records = (T[])engine.ReadString(aggregateData);
            }
            catch (Exception ex)
            {
                resultText = "entry " + typeof(T) + " [" + aggregateData
                    + "]couldn't be read due to error " + ex.ToString();
                return false;
            }
            return true;
        }
 
        /// <summary>
        /// Provides a string representation of a record.
        /// </summary>
        /// <param name="msg">The MSG received as input.</param>
        /// <returns>String representation of a record.</returns>
        public string ReturnToString(T msg)
        {
            string retVal;
            // WriteString expects a collection of records, so wrap the single record in an array.
            retVal = engine.WriteString(new T[] { msg });
            retVal = retVal.Substring(0, retVal.Length - 2); // trim the trailing end-of-line characters
            return retVal;
        }
 
 
        /// <summary>
        /// Parses a single record
        /// </summary>
        /// <param name="singleData">A single data record.</param>
        /// <param name="fixedLengthRecord">The fixed length record.</param>
        /// <param name="resultText">Results of parsing. Empty string if 
        /// successful, or exception details otherwise</param>
        /// <returns></returns>
        public bool ParseSingle(string singleData, out T fixedLengthRecord, out string resultText)
        {
            T [] records = null;
            fixedLengthRecord = null;
            resultText = String.Empty;
            try
            {
                records = (T[])engine.ReadString(singleData);
                if (records.Length == 0)
                {
                    resultText  = "entry " + typeof(T) + " [" + singleData
                       + "]couldn't be read due to empty data set";
                    return false;
 
                }
                fixedLengthRecord = records[0];
            }
            catch (Exception ex)
            {
                resultText = "entry " + typeof(T) + " [" + singleData
                    + "]couldn't be read due to error " + ex.ToString();
                return false;
            }
            return true;
        }
    }
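A quick usage sketch, assuming the Customer class from the FileHelpers excerpt above (the sample record contents are made up):

var parser = new FixedLengthMessageParser2<Customer>();

// Two 25-character records: 5 characters of CustId followed by 20 characters of Name.
string data = "00001" + "John Smith".PadRight(20) + Environment.NewLine +
              "00002" + "Jane Doe".PadRight(20);

Customer[] customers;
string resultText;
if (parser.Parse(data, out customers, out resultText))
{
    foreach (var customer in customers)
        Console.WriteLine("{0}: {1}", customer.CustId, customer.Name);
}
else
{
    Console.WriteLine(resultText);
}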

Wrasslin with Log4Net

I have a love-hate relationship with log4net. It is powerful, capable of doing everything I have ever wanted, yet configuration is always a challenge for me to get right. I wish there were a good, free GUI configuration tool for it. In this post I will share the log4net configuration I am currently using. It is a moderately sophisticated configuration relative to many of the posted samples: it shows console appenders, event log appenders, file appenders, and multiple loggers. I also illustrate how to call log4net in a way that allows logging configuration changes to take effect immediately while a process is running.

Here are some points of interest. As I stumble through a configuration, the log4net.Internal.Debug appSettings key is very helpful in showing me where my mistake lies. It took me a while to figure out the secret sauce that makes multiple loggers work. First, loggers are hierarchical in nature, and I found I needed to reference all appenders at the root logger level. Then each named logger references only the appenders it uses. The key was setting additivity="FALSE"; otherwise I got an error indicating that the appender was closed.
I also found the concept of a single log message being processed by multiple appenders to be powerful. For example, I use a ConsoleAppender while I am testing so I can see the log4net output without having to pull up a log file; the message is still written to the log files too, so I can verify that I log precisely what I intend. At the other end of the severity scale, I log all errors (both ERROR and FATAL) to a file but also log fatal errors to the Windows Event Log for easy processing.
A couple of items on my to-do list are exploring buffered file appending and rolling logs over after they reach a maximum size. The BufferingForwardingAppender lets you apply the Unit of Work pattern by aggregating a batch of messages for the expensive, synchronous act of logging. The drawback is that if you are concerned about troubleshooting program abends, you might lose the last few entries. Also, although log4net is generally threadsafe, this appender is not.
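Before the configuration itself, here is a minimal sketch of the calling side as I use it (the class and path names are illustrative): XmlConfigurator.ConfigureAndWatch monitors the config file, so edits to levels or appenders take effect immediately in a running process, and the named loggers from the configuration below ("PrimaryLogs", "ExtraLogs") are retrieved by name.

using System.IO;
using log4net;
using log4net.Config;

static class Logging
{
    // Loggers matching the <logger> names in the configuration below.
    public static readonly ILog Primary = LogManager.GetLogger("PrimaryLogs");
    public static readonly ILog Extra = LogManager.GetLogger("ExtraLogs");

    public static void Initialize(string configPath)
    {
        // ConfigureAndWatch reloads the configuration whenever the file changes,
        // so logging changes take effect without restarting the process.
        XmlConfigurator.ConfigureAndWatch(new FileInfo(configPath));
    }
}

// Typical usage:
//   Logging.Initialize("MyApp.exe.config");   // illustrative path
//   Logging.Primary.Debug("starting up");
//   Logging.Extra.Warn("late message received");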

<?xml version="1.0"?>
<configuration>
  <configSections>
    <section name="log4net" type="log4net.Config.Log4NetConfigurationSectionHandler, log4net" />
  </configSections>
  <appSettings>
    <!--if having trouble getting log4net to work turn its detailed internal logging on-->
    <!--<add key="log4net.Internal.Debug" value="true"/>-->
  </appSettings>
  <log4net>
    <root>
      <level value="ALL" />
      <appender-ref ref="FatalFile" />
      <appender-ref ref="ErrorFile" />
      <appender-ref ref="WarningFile" />
      <appender-ref ref="DebugFile" />
      <appender-ref ref="SomeInfoFile" />
      <appender-ref ref="SomeExtraFile" />
    </root>

    <logger name="PrimaryLogs" additivity="FALSE">
      <level value="ALL" />
      <appender-ref ref="FatalFile" />
      <appender-ref ref="ErrorFile" />
      <appender-ref ref="WarningFile" />
      <appender-ref ref="DebugFile" />
      <appender-ref ref="SomeInfoFile" />
    </logger>

    <logger name="ExtraLogs" additivity="FALSE">
      <level value="ALL" />
      <appender-ref ref="SomeExtraFile" />
    </logger>
    <appender name="FatalFile" type="log4net.Appender.EventLogAppender">
      <layout type="log4net.Layout.PatternLayout">
        <conversionPattern value="%date [%5thread] %type{1}.%method() - %message%newline%exception" />
      </layout>
      <filter type="log4net.Filter.LevelRangeFilter">
        <levelMin value="FATAL" />
        <levelMax value="FATAL" />
      </filter>
    </appender>
    <appender name="SomeExtraFile" type="log4net.Appender.RollingFileAppender">
      <file value="C:\logs\ AppLateMessage.txt" />
      <lockingModel type="log4net.Appender.FileAppender+MinimalLock" />
      <appendToFile value="true" />
      <staticLogFileName value="false" />
      <rollingStyle value="Date" />
      <datePattern value="'.'yyyyMMdd-HH'.log'" />
      <layout type="log4net.Layout.PatternLayout">
        <conversionPattern value="%date [%5thread] %type{1}.%method() - %message%newline%exception" />
      </layout>
      <filter type="log4net.Filter.LevelRangeFilter">
        <levelMin value="WARN" />
        <levelMax value="WARN" />
      </filter>
    </appender>
    <appender name="ErrorFile" type="log4net.Appender.RollingFileAppender">
      <file value="C:\logs\ AppError.txt" />
      <lockingModel type="log4net.Appender.FileAppender+MinimalLock" />
      <appendToFile value="true" />
      <staticLogFileName value="false" />
      <rollingStyle value="Date" />
      <datePattern value="'.'yyyyMMdd-HH'.log'" />
      <layout type="log4net.Layout.PatternLayout">
        <conversionPattern value="%date [%5thread] %type{1}.%method() - %message%newline%exception" />
      </layout>
      <filter type="log4net.Filter.LevelRangeFilter">
        <levelMin value="ERROR" />
        <levelMax value="FATAL" />
      </filter>
    </appender>
    <appender name="SomeInfoFile" type="log4net.Appender.RollingFileAppender">
      <file value="C:\logs\ AppSomeInfoFile.txt" />
      <lockingModel type="log4net.Appender.FileAppender+MinimalLock" />
      <appendToFile value="true" />
      <staticLogFileName value="false" />
      <rollingStyle value="Date" />
      <datePattern value="'.'yyyyMMdd-HH'.log'" />
      <layout type="log4net.Layout.PatternLayout">
        <conversionPattern value="%date [%5thread] %type{1}.%method() - %message%newline%exception" />
      </layout>
      <filter type="log4net.Filter.LevelRangeFilter">
        <levelMin value="INFO" />
        <levelMax value="INFO" />
      </filter>
    </appender>
    <appender name="WarningFile" type="log4net.Appender.RollingFileAppender">
      <file value="C:\logs\AppWarning.txt" />
      <lockingModel type="log4net.Appender.FileAppender+MinimalLock" />
      <appendToFile value="true" />
      <staticLogFileName value="false" />
      <rollingStyle value="Date" />
      <datePattern value="'.'yyyyMMdd-HH'.log'" />
      <layout type="log4net.Layout.PatternLayout">
        <conversionPattern value="%date [%5thread] %-5level %type{1}.%method() - %message%newline%exception" />
      </layout>
      <filter type="log4net.Filter.LevelRangeFilter">
        <levelMin value="WARN" />
        <levelMax value="FATAL" />
      </filter>

    </appender>
    <appender name="DebugFile" type="log4net.Appender.ConsoleAppender">
      <layout type="log4net.Layout.PatternLayout">
        <conversionPattern value="%-5level %type{1}.%method() - %message%newline%exception" />
      </layout>
      <filter type="log4net.Filter.LevelRangeFilter">
        <levelMin value="DEBUG" />
        <levelMax value="ERROR" />
      </filter>
    </appender>
  </log4net>
</configuration>

Tuesday, January 4, 2011

Monty on Modern Software Architecture

My favorite talk at the October 2010 Philly.NET camp was Modern Software Architecture by Michael “Monty” Montgomery, in which he gave his take on implementing SOA. Monty is a dynamic speaker who takes on some of the character of a gospel church preacher, and an overstuffed room was hanging on his words. In response to one question, he said that an architect has to be a really strong developer as well, knowing your s**t backwards and forwards; his answers to other questions and his other talks demonstrate that developer knowledge. References to articles that contain his ideas can be found on his web site, along with the PowerPoint deck for this talk.


Michael acknowledges his debt to Juval Lowy and the IDesign design method. Having just attended Juval's master class on being a software architect, I can see the inspiration. Juval preaches the value of focusing on interfaces before dealing with the classes that implement them, and that a use case should map one-to-one to an interface.


Michael spent a good portion of his talk discussing what he calls domain SOA: using DDD concepts to inform the SOA architecture of an application.
Any talk covering SOA has to deal with many misconceptions, because SOA is a buzzword that has been subject to a lot of incorrect interpretations by bandwagon jumpers. Both people and companies try to define SOA in terms of how they already do things, without accepting that a paradigm shift is necessary to truly employ SOA. Fittingly, Monty spent a good portion of his time talking about SOA anti-patterns, what SOA is NOT.



Monty’s SOA anti-patterns include:

  1. Object-centric SOA

  2. SuperService-centric SOA

  3. UI-centric SOA

  4. Data-centric SOA

  5. Code-centric SOA



SOA should be about messages, not object-oriented design with plumbing to distribute objects. OO design has become so ingrained in the industry and drilled home in college curricula that I think this is the hardest anti-pattern to eliminate. Small, fine-grained interactions, such as with properties, are not suitable for SOA.
Amen, brother Montgomery!