Friday, November 16, 2012

Static Code Contract analysis in VS 2012 Pro!!

I have been pretty excited about the technology behind Microsoft's Code Contracts for a while.  However, as a user, I am typically not in situations where the Premium or Ultimate SKUs are affordable.  With Visual Studio 2010, static code analysis was not included at the Professional level.  For me, the real value lies in the static analysis, not the run-time analysis.  As a long-time Microsoft developer I was pretty bummed about that.  I sent an e-mail off to ScottGu about it and he gave a nice response and passed along my feedback.  This is typical of the amazing support Microsoft has given the developer community over the years.  I have been an Apple hardware user since the late 80s and have been very happy with Mac OS and iOS as a user.  However, I don't think any of the major players, be it IBM, Oracle, or Apple, treats developers as well as Microsoft does.  If Windows 8 succeeds as a tablet OS, which it should based on technical merit, the great Microsoft developer community will be part of that.  Having said that, it is great to see Code Contracts coming to the masses with VS 2012 Pro.  The analysis tools are still a separate download, although the Code Contract libraries themselves are baked into the .NET Framework.
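For anyone who hasn't tried Code Contracts, here is a minimal sketch of the kind of contract the static checker can reason about; the Account class is just a made-up illustration, not from any real codebase:

using System.Diagnostics.Contracts;

public class Account
{
    private decimal balance;

    public void Withdraw(decimal amount)
    {
        // Preconditions the static checker can verify at call sites.
        Contract.Requires(amount > 0);
        Contract.Requires(amount <= balance);
        // Postcondition: the balance never goes negative.
        Contract.Ensures(balance >= 0);

        balance -= amount;
    }
}

With static checking turned on, a call site that could pass a negative amount gets flagged during analysis rather than failing at run time.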
As a footnote, I am not a fan of the One OS To Rule Them All strategy.  I am not thrilled with my Windows 8 laptop setup.  The UIs for PC and tablet should be similar but fundamentally different.  Unless my laptop has a touch screen, I want a keyboard/mouse centric UI with a conventional start menu.  As a developer, I want to go further and do as much as I can with the keyboard, without even having to reach for the mouse.  Working in an office, I certainly can't use speech recognition either.  It is important for Microsoft to allow business units that focus on different devices to innovate on their own.  Long live skunkworks!  After ScottGu's nice response, maybe I should pass these thoughts along to Steven Sinofsky as well ... Oh, too late.  Nice knowing you, Steven.

Tuesday, September 13, 2011

Passing objects of anonymous types as parameters to methods

One of the underpinnings of LINQ is anonymous types. When you use a projection (i.e. select) you can choose to return data as an anonymous type. However, you are constrained by the fact that you can't easily pass that data out of the method, because the type has no name. A consequence of this is that a method containing LINQ can become bloated, since the anonymous data can't leave it. One solution is to use the dynamic keyword, which lets you pass the object out; its properties remain read-only. Here is a small example of that.
void Main()
{
    dynamic foo;

    var bar = new { name = "John", title = "code monkey" };
    var bar2 = new { name = "John", title = "junior code monkey" };

    // A dynamic variable can hold an anonymously typed object...
    foo = bar;
    WriteIt(foo);

    // ...and an anonymous object converts implicitly at a dynamic parameter.
    WriteIt(bar2);
}

static void WriteIt(dynamic dynamo)
{
    // Late-bound: resolved at run time against the anonymous type's property.
    Console.WriteLine(dynamo.title);
}

One important gotcha with dynamic typing is that extension methods aren't currently available on dynamic receivers. That is a real shame, since these two techniques would otherwise go together naturally for more Ruby-esque development. At some point, if you really want or need to program with dynamic techniques in .NET, you are better off with IronRuby or IronPython. Sadly, Microsoft's interest in the DLR appears to have waned after it succeeded in making Microsoft Office programming easier in .NET. You can perform a cast, though, but only if your type is not anonymous.
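To illustrate that last point, here is a minimal sketch (the Person class is hypothetical): a dynamic reference to a named type can be cast back, while an anonymous type leaves you nothing to cast to.

public class Person
{
    public string Name { get; set; }
}

void Main()
{
    dynamic dyn = new Person { Name = "John" };
    Person person = (Person)dyn;        // fine: the runtime type has a name
    Console.WriteLine(person.Name);

    dynamic anon = new { Name = "John" };
    Console.WriteLine(anon.Name);       // late-bound access still works...
    // ...but there is no type name to cast anon back to in source code.
}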

Tuesday, August 23, 2011

Compiler As a Service (CAAS) for VB.NET

Microsoft has been promoting a futures feature of its .NET compilers called Compiler As A Service (CAAS). CAAS will allow direct access to the functionality of the compiler. As a Microsoft job posting described here puts it, CAAS allows “REPL, write-your-own refactorings, C# and VB as scripting languages, and taking .NET to the cloud”. I think VB.NET, in particular, stands to gain, with the potential for C# code to be convertible to VB.NET.


“Tightening the feedback loop” has been my mantra over the past year. In one sense of this phrase, REPLs (read-evaluate-print loops) are a way to get immediate feedback on your code. I have been using, and loving, LinqPad a lot lately. It is far more than a way to perform LINQ queries; it is a very well thought out code snippet compiler. It isn't truly a REPL: code is compiled with the benefit of behind-the-scenes code that LinqPad provides to make a complete class library, and, unlike in a REPL, once code is run its variables aren't kept globally available for continued use. Essentially it is read-evaluate-print without the loop. LinqPad does succeed in providing much quicker feedback. Besides LinqPad, PowerShell and the Firebug JavaScript command line for Firefox are tools I use frequently.


Aspect-oriented programming with tools like PostSharp could be greatly enhanced by CAAS. PostSharp works by post-compilation code weaving, and I think it might be significantly easier to weave code with the compiler functionality opened up. The job posting suggests that refactoring could benefit in the other direction, as a mini-compilation step done in the background to assist in changing the codebase, though I wouldn't want to speculate how at this point.


As a LinqPad user working in VB.NET you are a second-class citizen, since Intellisense is only provided for C#. Joe Albahari explains on Stack Overflow that CAAS would allow him to provide VB.NET Intellisense much more easily.


Putting a VB.NET-specific spin on CAAS: it offers the potential ability to seamlessly convert between VB.NET and C#. One of the obstacles facing VB.NET is the necessity of converting code snippets available only in C#. For example, trying to convert a C# LINQ statement to VB.NET fails utterly using Telerik’s converter, http://converter.telerik.com/. CAAS would help with a real pain point in using VB.NET.

Friday, August 19, 2011

Nesting Depth Metric

One of my fundamental coding practices is to keep nesting depth low in my methods. A really good explanation of this point has been made by Patrick Smacchia, author of NDepend. NDepend (and I speak as a purchaser of the product) is a very cool tool for measuring an application. I won't belabor the technique, since Patrick explains it well, but I want to stress it because I think it is so critical. I call this the early exit strategy: rather than have nested if statements, I return from a function as soon as possible. So, instead of a more complex boolean expression or nested ifs, I would have two simpler if statements, each of which returns from the function when appropriate. As Patrick shows, this also applies to a continue statement, to abort further processing on the current iteration of a for loop.
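Here is a minimal before-and-after sketch of the early exit style, as a LinqPad-style snippet (the Order class and Ship method are made up for illustration):

public class Order
{
    public bool IsPaid { get; set; }
    public bool IsShipped { get; set; }
}

// Nested version: every condition adds another level of depth.
void ShipIfReadyNested(Order order)
{
    if (order != null)
    {
        if (order.IsPaid)
        {
            if (!order.IsShipped)
            {
                Ship(order);
            }
        }
    }
}

// Early exit version: each guard returns immediately, keeping the happy path flat.
void ShipIfReady(Order order)
{
    if (order == null) return;
    if (!order.IsPaid) return;
    if (order.IsShipped) return;

    Ship(order);
}

// The same idea in a loop, using continue to skip the current iteration.
void ShipAll(IEnumerable<Order> orders)
{
    foreach (var order in orders)
    {
        if (order == null || !order.IsPaid || order.IsShipped) continue;
        Ship(order);
    }
}

void Ship(Order order) { /* shipping logic elided */ }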

Friday, August 12, 2011

NCover and working with Code Coverage tools

I have been reluctant to embrace the TDD philosophy in my development. I have spent the last year with a philosophy of adding unit tests after coding, with the code written using DI techniques. The tests were aimed at the “low hanging fruit” scenarios: lower-level classes with fewer dependencies and logic-centric classes. Classes with dependencies like communications, file I/O, and timing-based events tended to be ignored. Not surprisingly, my defects were focused in areas not covered by tests. I did employ a practice of writing at least one unit test for each defect.
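As a sketch of that one-test-per-defect practice, here is the shape such a regression test might take in MSTest (the PriceCalculator class and the defect itself are hypothetical, purely for illustration):

using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class PriceCalculatorTests
{
    // Regression test pinned to a (hypothetical) defect: discounts over
    // 100% used to produce a negative total instead of clamping at zero.
    [TestMethod]
    public void Total_WithDiscountOver100Percent_ClampsAtZero()
    {
        var calculator = new PriceCalculator();

        decimal total = calculator.Total(10m, 150m);

        Assert.AreEqual(0m, total);
    }
}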


Recently, after getting encouragement from management to spend time improving unit tests (in part due to schedule slack), I spent some time working with NCover, one of the leading .NET code coverage tools.


NCover has been simply excellent to work with. I have used NCover 3 Complete which, at $479, is pretty pricey. NCover 3 Classic is reasonably priced at $199, and the features provided at the Classic level suit my current usage fine. The UI for new projects puts the quickstart documentation front and center, and the focused ribbon-based UI made it easy to navigate. I found it a nice workflow to move between production code viewed in NCover and test code viewed in Visual Studio. I tend to open many documents in Visual Studio at once, and not having to bounce between test and production code files in Visual Studio helped me be more productive.


I spent some time later working with Visual Studio 2010 Code Coverage (available with the Premium or Ultimate editions). I think it important to mention that my development environment might well affect my perspective. My machine is modern but lacks a solid-state drive. Visual Studio crashes or hangs more than I would like; it is also somewhat slow and uses lots of virtual memory. I have never met an add-in that I didn’t want, so this could well be affecting my experience. These factors collectively encourage me to favor non-integrated tools. More importantly, my single monitor with a 1024x768 display makes me want to minimize the number and size of Visual Studio windows I keep open.


Having given these caveats, I strongly preferred my experience with NCover. First, my issues above make Visual Studio feel cramped as it is. Second, debugging MSTest unit tests and obtaining code coverage are mutually exclusive; I can have one or the other turned on at any given time. Hitting Control-R, Control-A to debug into unit tests is the sweet spot of my test workflow, and I really don’t like the idea of having to switch settings regularly. Third, NCover’s windows and icons are tightly focused on its core tasks, which made learning NCover really easy. In contrast, Visual Studio leverages its existing code editor, status windows, menus, and icons; in Visual Studio Ultimate, the code coverage-related options are scattered amid a plethora of features. Fourth, VS requires instrumenting assemblies whereas NCover doesn’t. Since my assemblies are usually signed, this means the assemblies must be modified and then re-signed for each coverage run.


Wednesday, August 10, 2011

Thanks to attendees of my Introduction to Linq as a Language Feature talk.

Here is a LINQ to a zip file with my presentation and all sample LinqPad files.

Here is a set of LINQs covered in the talk.

Thursday, June 16, 2011

Clean Coding in Conshohocken

I was pleased to find out that Brian Donahue has lured another great teacher and presenter to the Philadelphia area. From February 20 to 23, 2012, Uncle Bob Martin is coming to Conshohocken to teach TDD! Details are at http://www.eventbrite.com/event/1804937617. I have signed up and am looking forward to it.
This has inspired me to read his new book, The Clean Coder. I really enjoyed it. It is very much anecdotal in nature, and he has great stories to tell. I recently took a course from Juval Lowy, who focused on his own successes, how to generalize them, and how to help others be successful; in contrast, Bob’s stories are funny, self-deprecating, and even courageous in their level of self-revelation. I would never have the courage to tell his story about a meeting regarding project estimation while drunk. (He wasn't drinking on the job, of course, just bringing work talk into an off-hours social gathering.) A generous measure of failure is important, especially when the perspective comes from someone like Uncle Bob, who is essentially a preacher for his point of view. Preachers without that perspective can come across to me as strident or humorless. Readers who don’t expect Uncle Bob to offer sermons on agile and TDD may be disappointed or even hostile. His views, particularly in this book, represent ideals; striving for 100% code coverage is definitely an extreme view.
He has a really intriguing quiz early on covering some of the more obscure topics in computer science. His questions, with my notes where I had them, are:

  • Do you know what a Nassi-Schneiderman chart is?

  • Do you know the difference between a Mealy and a Moore state machine?

  • Could you write a quicksort without looking it up? (I take a crack at one after this list.)

  • Do you know what the term “Transform Analysis” means?
    Answer: I am guessing that this refers to Fourier analysis. Despite learning about it in grad school, all I can say is that it is a way to solve problems whose answers can be found through analysis of differential equations.

  • Could you perform a functional decomposition with Data Flow Diagrams?
    Yuck. I did this all too often in my first professional project. I have found it is not easily applicable to OO design: cross-cutting concerns like logging or communication end up as elements in far too many different diagrams.

  • What does the term “Tramp Data” mean?
    Answer: A term describing data which is passed to a function only to be passed on to another function.

  • Have you heard the term “Connascence”?
    Answer: Two software components are connascent if a change in one would require the other to be modified in order to maintain the overall correctness of the system. Connascence is a way to characterize and reason about certain types of complexity in software systems. I really like this term; I hope it sticks in my brain.

  • What is a Parnas Table?
    Answer: Tabular documentation of function values. It inspired the creation of FitNesse. It is just one of many contributions made by the brilliant David Parnas, whose collected papers can be purchased here at Amazon.
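For the quicksort question, here is the kind of from-memory attempt I think Bob has in mind, as a LinqPad-style C# snippet. It is a simple, not-in-place version; a production quicksort would partition within the array:

// Simple quicksort from memory: pick a pivot, partition, recurse.
static List<int> QuickSort(List<int> items)
{
    if (items.Count <= 1) return items;

    int pivot = items[items.Count / 2];
    var less    = items.Where(x => x < pivot).ToList();
    var equal   = items.Where(x => x == pivot).ToList();
    var greater = items.Where(x => x > pivot).ToList();

    return QuickSort(less).Concat(equal).Concat(QuickSort(greater)).ToList();
}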


Although the mistake isn’t significant, it reflects my pedantic nature, age, and level of geekdom that I am bothered by the fact that he misquoted Yoda. It is “Do or do not. There is no try”, not “Do or do not. There is no trying”. To be fair, this reads like something an editor might have “corrected”. For the best Yoda reference, see Udi Dahan’s videocast, http://www.infoq.com/presentations/Making-Roles-Explicit-Udi-Dahan. Misquote aside, Bob’s general point about language usage and estimation is one of my favorites in the book. As a developer, agreeing to try to meet a deadline is agreeing to the schedule. I have been in situations where other developers were caught out by agreeing to try something that deep down they knew was unattainable. It is much better to be upfront with management early and not yield to the “try” request. Bob emphasizes the value of analyzing hedge phrases like “try”, “do my best”, and “see what I can do” on the developer side, and similar words on the management side. He mentions several that I need to be more alert for as signs of non-commitment, including “we need to …” and “let’s …”.
I found the coverage of individual versus group estimation interesting. The ideas behind PERT estimation are familiar to me, and being individually responsible for estimation is something I have done often. What I haven’t done, or learned much about, are techniques for group-based estimation. Bob discusses low-ceremony versions of the Wideband Delphi process, like flying fingers and planning poker. His discussion of a technique called Affinity Estimation was particularly intriguing: a group of tasks is written down on cards, and a group of people, without talking, must order the tasks. Tasks that keep moving in a volatile manner are set aside for discussion. Once the ordering of the tasks becomes stable, discussion can begin. I like the idea that the risk of groupthink is lessened by preventing vocal communication for a time.
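As a refresher on the PERT arithmetic the chapter builds on, here is the three-point formula as a LinqPad-style snippet; the three estimates are made-up numbers of my own:

// PERT (three-point) estimate:
//   expected = (O + 4N + P) / 6,  stdDev = (P - O) / 6
// where O = optimistic, N = nominal (most likely), P = pessimistic.
double optimistic = 3, nominal = 6, pessimistic = 12;   // days; made-up example

double expected = (optimistic + 4 * nominal + pessimistic) / 6;  // 6.5 days
double stdDev = (pessimistic - optimistic) / 6;                  // 1.5 days

Console.WriteLine("Expected: " + expected + " days, sigma: " + stdDev + " days");

The weighting toward the nominal estimate, plus a standard deviation driven by the optimistic-to-pessimistic spread, is what makes honest pessimistic estimates matter so much.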