Friday, August 19, 2011
Nesting Depth Metric
Friday, August 12, 2011
NCover and working with Code Coverage tools
I have been reluctant to embrace the TDD philosophy in my development. I have spent the last year with a philosophy of adding unit tests after coding, relying on DI techniques. The tests were aimed at the “low hanging fruit” scenarios: lower-level classes with fewer dependencies and logic-centric classes. Dependencies like communications, file I/O, and timing-based events tended to be ignored. Not surprisingly, my defects were concentrated in the areas not covered by tests. I did employ a practice of writing at least one unit test for each defect.
Recently, after getting encouragement from management to spend time improving unit tests (in part due to schedule slack), I spent some time working with NCover, one of the leading .NET code coverage tools.
NCover has been simply excellent to work with. I have used NCover 3 Complete which, at $479, is pretty pricey. NCover 3 Classic is reasonably priced at $199, and the features provided at the Classic level suit my current usage fine. The quickstart documentation is surfaced prominently in the UI for new projects, and the focused ribbon-based UI made it easy to navigate. I found it a nice workflow to move between production code viewed in NCover and test code viewed in Visual Studio. I tend to open many documents in Visual Studio at once, and not having to bounce between test and production code files in Visual Studio helped me be more productive.
I spent some time later working with Visual Studio 2010 Code Coverage (available with the Premium or Ultimate editions). I think it important to mention that my development environment might well affect my perspective. My machine is modern but lacks a solid state drive. Visual Studio crashes or hangs more than I would like; it is also kinda slow and uses lots of virtual memory. I have never met an Add-In that I didn’t want, so this could well be affecting my experience. These factors collectively encourage me to favor non-integrated tools. More importantly, my single monitor with its 1024x768 display really drives my desire to minimize the number and size of Visual Studio windows open.
Having given these caveats, I strongly preferred my experience with NCover. First, my above issues with Visual Studio make it feel cramped as it is. Second, debugging MSTest unit tests and collecting code coverage are mutually exclusive: I can have one or the other turned on at any given time. Hitting Ctrl-R, Ctrl-A to debug into a unit test is the sweet spot of my test workflow, and I really don’t like the idea of having to switch settings regularly. Third, NCover’s windows and icons are tightly focused on its core tasks, which made learning NCover really easy. In contrast, Visual Studio leverages its existing code editor, status windows, menus, and icons, so in Visual Studio Ultimate the code coverage options are scattered amidst a plethora of features. Fourth, Visual Studio requires instrumenting assemblies whereas NCover doesn’t. Since my assemblies are usually signed, this means the assemblies must be modified and then re-signed for each coverage run.
Thursday, June 16, 2011
Clean Coding in Conshohocken
Seeing Uncle Bob speak inspired me to read his new book, The Clean Coder. I really enjoyed reading it. It is very much anecdotal in nature, and he has great stories to tell. I recently took a course from Juval Lowy that focused on the successes he had, how to generalize them, and how to help others be successful; in contrast, Bob’s stories are funny, self-deprecating, and even courageous in their level of self-revelation. I would never have the courage to tell his story about a meeting regarding project estimation while drunk. (He wasn’t drinking on the job, of course, just bringing work talk into an off-hours social gathering.) A generous measure of failure is important, especially when the perspective comes from someone like Uncle Bob, who is essentially a preacher for his point of view. Preachers without this perspective can otherwise come across to me as strident or humorless. Readers who don’t expect that Uncle Bob will offer sermons on agile and TDD perspectives may be disappointed or even hostile. His views, particularly in this book, represent ideals. Striving for 100% code coverage is definitely an extreme view.
He has a really intriguing quiz early on covering some of the more obscure topics in computer science. His questions, with my answers where I could manage them, are:
- Do you know what a Nassi-Shneiderman chart is?
- Do you know the difference between a Mealy and a Moore state machine?
- Could you write a quicksort without looking it up?
- Do you know what the term “Transform Analysis” means?
Answer: I am guessing that this refers to Fourier Analysis. Despite learning about this in grad school, all I can say is that this is a way to solve problems whose answers can be found through analysis of differential equations.
- Could you perform a functional decomposition with Data Flow Diagrams?
Answer: Yuck, I did this all too often in my first professional project. I have found it not to be easily applicable to OO design; cross-cutting concerns like logging or communication end up being elements in far too many different diagrams.
- What does the term “Tramp Data” mean?
Answer: A term describing data which is passed to a function only to be passed on to another function.
- Have you heard the term “Connascence”?
Answer: Two software components are connascent if a change in one would require the other to be modified in order to maintain the overall correctness of the system. Connascence is a way to characterize and reason about certain types of complexity in software systems. I really like this term; I hope it sticks in my brain.
- What is a Parnas Table?
Answer: Tabular documentation of function values. It inspired the creation of FitNesse. It is just one of many contributions made by the brilliant David Parnas. His collected papers can be purchased at Amazon.
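For the quicksort question, here is what a from-memory attempt might look like in C# (my own sketch, of course, not anything from the book):

```csharp
using System;

static class Sorting
{
    // In-place quicksort using the last element as the pivot
    // (the Lomuto partition scheme).
    public static void QuickSort(int[] a, int lo, int hi)
    {
        if (lo >= hi) return;
        int pivot = a[hi];
        int i = lo; // boundary of the "less than pivot" region
        for (int j = lo; j < hi; j++)
        {
            if (a[j] < pivot)
            {
                int tmp = a[i]; a[i] = a[j]; a[j] = tmp;
                i++;
            }
        }
        // Put the pivot in its final position, then recurse on both sides.
        int t = a[i]; a[i] = a[hi]; a[hi] = t;
        QuickSort(a, lo, i - 1);
        QuickSort(a, i + 1, hi);
    }
}
```

Calling `Sorting.QuickSort(data, 0, data.Length - 1)` on `{ 5, 3, 8, 1 }` leaves the array as `{ 1, 3, 5, 8 }`.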
The mistake isn’t significant; it merely reflects my pedantic nature, age, and level of geekdom that I am bothered by the fact that he misquoted Yoda. It is “Do or do not. There is no try”, not “Do or do not. There is no trying”. To be fair, this reads like something an editor might have changed. For the best Yoda reference, see Udi Dahan’s videocast, http://www.infoq.com/presentations/Making-Roles-Explicit-Udi-Dahan. Although a misquote, Bob’s general point about language usage and estimation is one of my favorites in the book. As a developer, agreeing to try to meet a deadline is agreeing to the schedule. I have been in situations where other developers have been caught out by agreeing to try for something that deep down they realized was unattainable. It is much better to be upfront with management early and not yield to the try request. Bob emphasizes the value of analyzing hedge phrases like “try”, “do my best”, and “see what I can do” on the developer side, and similar words on the management side. He mentions several that I need to be more alert for as signs of non-commitment, including saying “we need to …” or “let’s …”.
I found the coverage of individual estimation vs. group estimation interesting. The ideas behind PERT estimation are familiar to me, and being individually responsible for estimation is something I have done often. What I haven’t done, or learned much about, are techniques for group-based estimation. Bob describes low-ceremony versions of the Wideband Delphi process like flying fingers and planning poker. His discussion of a technique called Affinity Estimation was particularly intriguing. In Affinity Estimation, a group of tasks is written down on cards, and a group of people, without talking, must order the tasks. Tasks which keep moving in a volatile manner are set aside for discussion. Once the ordering of the tasks becomes stable, discussion can begin. I like the idea that the risk of groupthink is lessened by preventing vocal communication for a time.
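The arithmetic behind the classic PERT three-point estimate is small enough to sketch in a few lines (the class and method names here are mine):

```csharp
static class Pert
{
    // Expected duration from optimistic (o), most likely (m), and
    // pessimistic (p) estimates: the standard beta-distribution
    // approximation that weights the nominal case 4x.
    public static double Expected(double o, double m, double p)
    {
        return (o + 4 * m + p) / 6.0;
    }

    // Rough standard deviation of the estimate, one sixth of the spread.
    public static double StdDev(double o, double p)
    {
        return (p - o) / 6.0;
    }
}
```

For a task estimated at 1 day optimistic, 3 days nominal, and 12 days pessimistic, `Expected(1, 3, 12)` gives 25/6, about 4.2 days, with a standard deviation of 11/6, about 1.8 days. The pessimistic tail pulls the commitment well past the nominal guess, which is exactly Bob’s point about honest estimation.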
Sunday, March 20, 2011
Inaugural DevReady event on MVVM
There has been a tremendous amount of hype recently around the MVVM (Model-View-ViewModel) pattern. Miguel Castro and DevExpress put together a developer day that shows the hype is backed by substance. There was a sold-out crowd this past Saturday at the Microsoft office in Malvern, PA. A convivial atmosphere was created by the three Hispanic presenters: Miguel, Dani Diaz, the Microsoft developer evangelist, and Seth Juarez, a DevExpress developer evangelist. The speakers’ family backgrounds, respectively Cuban, Dominican, and Mexican, provided an ongoing source of levity and continuity. The back and forth between the speakers, both light-hearted and technical, helped the day cohere into a single focused event, not a variety of individual topics like a code camp.
The MVVM pattern was first articulated by Martin Fowler, who named it the Presentation Model pattern; see http://martinfowler.com/eaaDev/PresentationModel.html. The MVVM pattern, despite being a really awkward palindrome, is a valuable pattern that has emerged in the rich client area. Web applications, where routing is preeminent, tend to emphasize the MVC pattern instead. Many developers are familiar with what a Model and a View are by this time: a Model represents the business logic of an application, and a View represents a visual interface to an area of the application. The ViewModel is a logical representation of the visual interface.
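To make that role concrete, a minimal ViewModel in WPF is just a class that models the view logically and raises change notifications, with no direct GUI references (the class and property names below are my own invention):

```csharp
using System.ComponentModel;

// A logical model of the view: bindable state, no WPF types referenced.
public class CustomerViewModel : INotifyPropertyChanged
{
    private string _name;

    public event PropertyChangedEventHandler PropertyChanged;

    public string Name
    {
        get { return _name; }
        set
        {
            if (_name == value) return;
            _name = value;
            OnPropertyChanged("Name"); // data binding picks this up
        }
    }

    protected void OnPropertyChanged(string propertyName)
    {
        var handler = PropertyChanged;
        if (handler != null)
            handler(this, new PropertyChangedEventArgs(propertyName));
    }
}
```

A View binds to `Name` in XAML; the ViewModel never touches a control directly, which is what keeps it testable and reusable.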
The core of the event was Miguel’s three sessions on XAML and MVVM. His XAML talk really was an overview of WPF development, and attending it is a must for developers new to WPF, Silverlight, and WP7. It will be given again at Philly Code Camp on April 9, so check it out there! I can’t quantify just how much time it would have saved me in learning WPF. He talked a lot about layout, which is much different than in Windows Forms or other rich client development. I initially misconstrued the Grid as something like a DataGrid, when it is really intended purely for layout. As Miguel pointed out, the Grid is much like a table in HTML, and he drove home the point that WPF layout is heavily influenced by how HTML and CSS combine to represent web markup. Miguel showed how a drag-and-drop approach adds a lot of bad markup to the XAML. He did point out that the DevExpress WPF tools are a great way to let devs deal with layout less, and that the pain points of the native .NET DataGrid control for WPF are an area where the DevExpress grid really shines. I think investing in a good control library can be the most well-spent money in a project.
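For anyone who, like me, initially confused the Grid with a data grid, a hand-written layout fragment makes the table analogy obvious (this markup is my own illustration, not from Miguel’s talk):

```xml
<!-- A Grid used purely for layout, much like an HTML table:
     rows and columns are declared, children are placed by index. -->
<Grid>
  <Grid.RowDefinitions>
    <RowDefinition Height="Auto" />
    <RowDefinition Height="*" />
  </Grid.RowDefinitions>
  <Grid.ColumnDefinitions>
    <ColumnDefinition Width="Auto" />
    <ColumnDefinition Width="*" />
  </Grid.ColumnDefinitions>

  <TextBlock Grid.Row="0" Grid.Column="0" Text="Name:" />
  <TextBox   Grid.Row="0" Grid.Column="1" Text="{Binding Name}" />
</Grid>
```

Hand-writing a dozen lines like this stays far cleaner than the margin-laden markup the designer generates from drag and drop.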
Dani Diaz gave a presentation that focused on Microsoft’s efforts to support multitargeting .NET applications. I personally think this will become the most important reason for developers to use the MVVM pattern: MVVM’s loosely coupled architecture increases the ease of reuse by cordoning off direct GUI dependencies into the View. Dani’s presentation focused on WPF, Silverlight, and WP7 multitargeting. I actually think it is likely that two platforms not discussed in this event, iPhone and Android, will drive MVVM adoption by allowing developers to work in a .NET shop and still target the most popular mobile platforms. Right now, iPhone and Android development with .NET means using Novell’s MonoTouch and MonoDroid development tools. As Dani pointed out, write once, run everywhere is a myth; users, myself included, want apps that behave natively on their OS. MVVM is the perfect pattern to allow multitargeting.
Dani talked about the upcoming Microsoft effort to support what they call Portable Library Tools. The Portable Library Tools are in a CTP stage right now but probably aren’t that useful yet. As Dani explained, Microsoft is still in the process of adding .NET framework elements that exist in a compatible way on multiple platforms. At the moment, the ObservableCollection class is not publicly supported by the CTP but it is expected that future releases will include this class. The ObservableCollection class is pretty essential in taking advantage of data binding.
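The reason ObservableCollection matters so much for data binding is that it raises change notifications a bound view can react to; a quick console sketch of that mechanism (the handler here just stands in for what a bound ItemsControl does automatically):

```csharp
using System;
using System.Collections.ObjectModel;

class Demo
{
    static void Main()
    {
        // ObservableCollection raises CollectionChanged whenever items
        // are added or removed, so bound views refresh with no extra code.
        var customers = new ObservableCollection<string>();
        customers.CollectionChanged += (s, e) =>
            Console.WriteLine("Change: " + e.Action);

        customers.Add("Ada");    // handler sees the Add
        customers.RemoveAt(0);   // and the Remove
    }
}
```

Without this class available in a portable assembly, shared ViewModels have to fall back on manual refresh plumbing, which is why its absence from the CTP is such a gap.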
The final presentation of the day was by Seth Juarez of DevExpress. Seth writes about his research in Machine Learning here. Seth’s approach was to give himself a crash course in Prism and share the results with us. His candor in talking about his experiences with Microsoft’s effort to support composite application development was refreshing. Prism can be a little difficult to get one’s head around and appreciate, and Seth showed that presenting a topic as a newbie can be a good way to provide a quickstart to using Prism. Miguel also made a good point in his talk that Prism itself doesn’t provide any support for validation; Miguel is working on an MVVM framework built on top of Prism that adds validation. Seth used CodeRush throughout his talk, illustrating just how much of a productivity boost a developer can get from a tool like CodeRush to support refactoring and minimize typing. It was pretty impressive, although I still find the giant arrows that CodeRush uses to be garish and over the top. A little subtlety would be preferable, and less of a distraction.
Tuesday, March 8, 2011
Wrasslin with Log4Net
Here are some points of interest. As I stumbled through a configuration, the log4net.Internal.Debug AppSettings key was very helpful in showing me where my mistakes lay. It took me a while to figure out the secret sauce that makes multiple loggers work. First, loggers are hierarchical in nature: I found that I needed to specify all appenders used at the root logger level, and then each named logger specified just the appenders it used. The key here was setting additivity="FALSE"; otherwise I got an error indicating that the appender was closed.
I also found the concept of a single log message being able to be processed by multiple appenders to be powerful. One use was that I use a ConsoleAppender while I am testing to see the Log4Net output without having to pull up a log file. The message is still written to logs too, so I can verify that I log precisely what I intend. At the other end of the logging severity, I log all errors (both ERROR and FATAL) to a file but also log fatal errors to the Windows Event log for easy processing.
A couple of items on my to-do list are exploring buffered file appending and rolling over logs after they reach a maximum size. The BufferingForwardingAppender lets you apply the Unit of Work pattern by aggregating a batch of messages for the expensive, synchronous act of logging. The drawback is that if you are concerned about troubleshooting program abends, you might lose the last few entries. Also, although Log4Net is generally threadsafe, this appender is not.
<?xml version="1.0"?>
<configuration>
<configSections>
<section name="log4net" type="log4net.Config.Log4NetConfigurationSectionHandler, log4net" />
</configSections>
<appSettings>
<!--if having trouble getting log4net to work turn its detailed internal logging on-->
<!--<add key="log4net.Internal.Debug" value="true"/>-->
</appSettings>
<log4net>
<root>
<level value="ALL" />
<appender-ref ref="FatalFile" />
<appender-ref ref="ErrorFile" />
<appender-ref ref="WarningFile" />
<appender-ref ref="DebugFile" />
<appender-ref ref="SomeInfoFile" />
<appender-ref ref="SomeExtraFile" />
</root>
<logger name="PrimaryLogs" additivity="FALSE">
<level value="ALL" />
<appender-ref ref="FatalFile" />
<appender-ref ref="ErrorFile" />
<appender-ref ref="WarningFile" />
<appender-ref ref="DebugFile" />
<appender-ref ref="SomeInfoFile" />
</logger>
<logger name="ExtraLogs" additivity="FALSE">
<level value="ALL" />
<appender-ref ref="SomeExtraFile" />
</logger>
<appender name="FatalFile" type="log4net.Appender.EventLogAppender">
<layout type="log4net.Layout.PatternLayout">
<conversionPattern value="%date [%5thread] %type{1}.%method() - %message%newline%exception" />
</layout>
<filter type="log4net.Filter.LevelRangeFilter">
<levelMin value="FATAL" />
<levelMax value="FATAL" />
</filter>
</appender>
<appender name="SomeExtraFile" type="log4net.Appender.RollingFileAppender">
<file value="C:\logs\AppLateMessage.txt" />
<lockingModel type="log4net.Appender.FileAppender+MinimalLock" />
<appendToFile value="true" />
<staticLogFileName value="false" />
<rollingStyle value="Date" />
<datePattern value="'.'yyyyMMdd-HH'.log'" />
<layout type="log4net.Layout.PatternLayout">
<conversionPattern value="%date [%5thread] %type{1}.%method() - %message%newline%exception" />
</layout>
<filter type="log4net.Filter.LevelRangeFilter">
<levelMin value="WARN" />
<levelMax value="WARN" />
</filter>
</appender>
<appender name="ErrorFile" type="log4net.Appender.RollingFileAppender">
<file value="C:\logs\AppError.txt" />
<lockingModel type="log4net.Appender.FileAppender+MinimalLock" />
<appendToFile value="true" />
<staticLogFileName value="false" />
<rollingStyle value="Date" />
<datePattern value="'.'yyyyMMdd-HH'.log'" />
<layout type="log4net.Layout.PatternLayout">
<conversionPattern value="%date [%5thread] %type{1}.%method() - %message%newline%exception" />
</layout>
<filter type="log4net.Filter.LevelRangeFilter">
<levelMin value="ERROR" />
<levelMax value="FATAL" />
</filter>
</appender>
<appender name="SomeInfoFile" type="log4net.Appender.RollingFileAppender">
<file value="C:\logs\AppSomeInfoFile.txt" />
<lockingModel type="log4net.Appender.FileAppender+MinimalLock" />
<appendToFile value="true" />
<staticLogFileName value="false" />
<rollingStyle value="Date" />
<datePattern value="'.'yyyyMMdd-HH'.log'" />
<layout type="log4net.Layout.PatternLayout">
<conversionPattern value="%date [%5thread] %type{1}.%method() - %message%newline%exception" />
</layout>
<filter type="log4net.Filter.LevelRangeFilter">
<levelMin value="INFO" />
<levelMax value="INFO" />
</filter>
</appender>
<appender name="WarningFile" type="log4net.Appender.RollingFileAppender">
<file value="C:\logs\AppWarning.txt" />
<lockingModel type="log4net.Appender.FileAppender+MinimalLock" />
<appendToFile value="true" />
<staticLogFileName value="false" />
<rollingStyle value="Date" />
<datePattern value="'.'yyyyMMdd-HH'.log'" />
<layout type="log4net.Layout.PatternLayout">
<conversionPattern value="%date [%5thread] %-5level %type{1}.%method() - %message%newline%exception" />
</layout>
<filter type="log4net.Filter.LevelRangeFilter">
<levelMin value="WARN" />
<levelMax value="FATAL" />
</filter>
</appender>
<appender name="DebugFile" type="log4net.Appender.ConsoleAppender">
<layout type="log4net.Layout.PatternLayout">
<conversionPattern value="%-5level %type{1}.%method() - %message%newline%exception" />
</layout>
<filter type="log4net.Filter.LevelRangeFilter">
<levelMin value="DEBUG" />
<levelMax value="ERROR" />
</filter>
</appender>
</log4net>
</configuration>
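On the C# side, the named loggers in the config above are resolved with LogManager.GetLogger; a minimal sketch of how I wire it up (the messages are just illustrations of where each level is routed by this particular config):

```csharp
using log4net;
using log4net.Config;

class Program
{
    // Logger names are matched against the hierarchy in the config file.
    private static readonly ILog Primary = LogManager.GetLogger("PrimaryLogs");
    private static readonly ILog Extra = LogManager.GetLogger("ExtraLogs");

    static void Main()
    {
        // Read the <log4net> section from the application config file.
        XmlConfigurator.Configure();

        Primary.Debug("shows up on the console appender");
        Primary.Error("written to the rolling error file");
        Primary.Fatal("also raised in the Windows Event Log");

        // With additivity off, this goes only to SomeExtraFile,
        // not to every appender on the root logger.
        Extra.Warn("routed only to the extra rolling file");
    }
}
```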
Friday, September 10, 2010
Source Code and other materials from Grokking ORMs talk
Here are a few links covered in the talk.
- Julie Lerman's book, THE book on EF - Programming Entity Framework
- Frans Bouma, developer of LLBLGen has a lot of interesting, highly opinionated posts on his blog. This post explains his position on why LLBLGen works well with Entity Framework. I will post more on this topic but I do really like the LLBLGen designer and code generator used in combination with EF. Caveat Lector: I received a free developer license of LLBLGen to work on this presentation.
- NHibernate's Home is here at nhforge.org
- Upcoming NHibernate 3 Cookbook by Jason Dentler and his Hanselminutes podcast episode
- Jeremy Miller's MSDN article on persistence ignorance and the unit of work pattern.
- An explanation of the Select N+1 problem that relates to lazy loading objects.
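The shape of the Select N+1 problem is easy to see in code. A sketch using NHibernate 3’s LINQ provider (the Order and Customer entities here are hypothetical):

```csharp
using System;
using System.Linq;
using NHibernate;
using NHibernate.Linq;

public class Customer { public virtual string Name { get; set; } }
public class Order { public virtual Customer Customer { get; set; } }

static class ReportSketch
{
    // With lazy loading, this issues one query for the orders plus one
    // additional query per order the first time order.Customer is touched.
    public static void PrintCustomersSlow(ISession session)
    {
        foreach (var order in session.Query<Order>())    // 1 query
            Console.WriteLine(order.Customer.Name);      // +1 query each
    }

    // Eagerly fetching the association collapses it to one round trip.
    public static void PrintCustomersFast(ISession session)
    {
        var orders = session.Query<Order>()
                            .Fetch(o => o.Customer)
                            .ToList();                   // single query
        foreach (var order in orders)
            Console.WriteLine(order.Customer.Name);
    }
}
```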
Sunday, August 15, 2010
Cardinal Rule of OO design: Internal Immutability
One of my bedrock OO programming practices is what I call internal immutability. I don’t know if there is an established name for it, but by this I mean that calls to a procedure or function of a class from within another procedure or function of the same object shouldn’t change the state of any data members of the class. This is often referred to as avoiding side effects. Isn’t it fundamental that an object consists of both state and behavior? (As an aside, I am omitting the third characteristic of an object, identity.) State that is exposed to another object is allowable, and indeed essential, for OO programming.
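A small sketch of what I mean (the Invoice class is a made-up example): the public surface owns the state, while the internal helpers are pure functions of their inputs, so no internal call order can corrupt a data member.

```csharp
public class Invoice
{
    private readonly decimal _subtotal;
    private readonly decimal _taxRate;

    public Invoice(decimal subtotal, decimal taxRate)
    {
        _subtotal = subtotal;
        _taxRate = taxRate;
    }

    // Public behavior composes the internal helpers below. Because those
    // helpers never mutate a data member, calling them in any order,
    // any number of times, cannot change the object's state.
    public decimal Total()
    {
        return _subtotal + Tax(_subtotal);
    }

    private decimal Tax(decimal amount)
    {
        return amount * _taxRate; // no side effects
    }
}
```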
This practice is etched in my mind through prolonged exposure to pain. I began my programming career supporting the implementation of a subsystem developed by an experienced Fortran developer who was required to write his first C++ program. The essential flaw of the design was that each column of each database table used was represented as a data member in a SINGLETON class. The state would get overwritten and lost while processing events. Development had started out very promisingly but had slowed down by the time I joined the team. I assisted by testing the development code baseline. As I identified test case failures, the developer, who had both a phenomenal work ethic (12 hours a day, 7 days a week) and a keen ability in logic, would painstakingly correct the code. Once corrected, test cases would frequently fail again over time. Once deployed, despite the fact that the system had no error-handling logic, it needed babysitting round the clock; I spent my evenings onsite querying and updating Oracle records to provide that fault handling by hand. This is an extreme case, but it was helpful in developing OO religion.
This is a long-time practice of mine, but not one that I could formally articulate until I recently read Growing Object-Oriented Software, Guided by Tests, written by Steve Freeman and Nat Pryce. This book has a lot of insights into OO design that stand apart from its focus on TDD. Let me quote their wonderful explanation.
“As well as distinguishing between value and object types (page 13), we find that we tend towards different programming styles at different levels in the code. Loosely speaking, we use the message-passing style we’ve just described between objects, but we tend to use a more functional style within an object, building up behavior from methods and values that have no side effects. Features without side effects mean that we can assemble our code from smaller components, minimizing the amount of risky shared state. Writing large-scale functional programs is a topic for a different book, but we find that a little immutability within the implementation of a class leads to much safer code and that, if we do a good job, the code reads well too.”
Wednesday, March 24, 2010
Too-Busy Constructors
In a more complicated OO design, it can be a mistake to put too much initialization logic into constructors. This is especially important for a design that uses inheritance significantly. In this post I offer three reasons against doing too much in a constructor: control over execution order, lazy loading, and exception handling.
First, let’s look at control over execution order. It can become complex to reason about the order in which initialization code gets executed; in particular, it is necessary to distinguish between the execution of data field initializers and the code in the constructor. I blog here about the exact sequences that occur in VB.NET and C#; the two languages have significantly different orderings! Moving initialization logic into a separate initialization routine allows you to control this order precisely, without having to fit logic into object initialization as implemented by your .NET language.
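The separate-initialization-routine idea looks something like this (a minimal sketch; the class name and guard flag are my own):

```csharp
public class Channel
{
    private bool _initialized;

    // Keep the constructor trivial: no real work, so construction order
    // across a class hierarchy stays simple to reason about.
    public Channel() { }

    // Do the real work here, where the execution order is explicit and
    // independent of each language's field-initializer sequencing rules.
    public void Initialize()
    {
        if (_initialized) return; // make repeated calls harmless
        // open resources, wire up events, load configuration, etc.
        _initialized = true;
    }
}
```

The trade-off is that callers (or a factory) must remember to call Initialize; wrapping construction in a factory method that does both steps keeps that burden off client code.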
Second, the lazy loading pattern is an extremely useful technique to be aware of. Martin Fowler articulates the Lazy Load pattern well in his Patterns of Enterprise Application Architecture. The basic idea is that it can be significantly more efficient to delay gathering data until it is actually needed, since the data may never be needed at all. At the same time, a calling class need never know about the lazy implementation: it can call the object that employs lazy loading and simply assume that the data will be available when needed. For some things that are naively done in the constructor, the lazy load technique is the best way to go.
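.NET 4 makes this pattern almost free with Lazy&lt;T&gt;; a sketch (the CustomerRecord class and its fake load method are my own illustration):

```csharp
using System;

public class CustomerRecord
{
    // Lazy<T> defers the expensive load until first access; callers just
    // read Orders and never know about the laziness. Nothing runs in the
    // constructor, so creating a CustomerRecord stays cheap.
    private readonly Lazy<string[]> _orders =
        new Lazy<string[]>(LoadOrders);

    public string[] Orders
    {
        get { return _orders.Value; } // first access triggers LoadOrders
    }

    private static string[] LoadOrders()
    {
        // stand-in for an expensive database round trip
        return new[] { "order-1", "order-2" };
    }
}
```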
Third, handling exceptions generated from within a constructor can prove difficult. If the exception is critical and can’t be easily handled, the code execution flow can get messy: you have to allow for disposal of all relevant objects and resources obtained so far, and you also need to consider the results of the exception unwinding the call stack.
In my own experience profiling .NET applications, object creation has proven to be a significant cause of inefficiency. In one case, I found that an excessive number of Oracle connection objects were being created. These connections weren’t actually being opened (thankfully!), but they were instantiated nonetheless. Having constructors do less and instantiating fewer aggregated objects can result in code that is more manageable and more efficient.