2.25.2010

PostSharp Goes Commercial

PostSharp goes commercial! It's a sad day. Don't get me wrong, I'm glad to see that the PostSharp guys, who have been doing fantastic work, are finally about to get rewarded. I'm just sad because I believe the companies I work for are never going to buy a license just so that I can produce clean code with aspects. If there's any way for them to avoid spending money on a license, I expect them to take it. So what if the produced code is cleaner, easier to maintain, and less tightly coupled...

Ghost in the code

I've recently had a healthy discussion with two colleagues about a design decision, in which we defended two very different approaches.
The subject under discussion was the consumption of a resource (picture it as a database, even though it isn't one), which can be done in two different ways (a quick sketch of both follows the list):
- through a mapping mechanism which binds an entity (call it a business object) to the resource at compile time
- by binding at runtime, looking the resource up by name
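
To make the contrast concrete, here's a minimal sketch of both styles. Everything in it is illustrative: the Customer entity and the GetValue API are made-up names, not the actual resource (which, again, isn't really a database).

    // Compile-time binding: the entity declares its fields, so renaming
    // BirthDate breaks every consumer at compile time.
    public class Customer
    {
        public string Name { get; set; }
        public DateTime BirthDate { get; set; }
    }

    // Runtime binding by name: the field is looked up with a string, so a
    // rename only surfaces when (and if) this line actually executes.
    DateTime birthDate = (DateTime)resource.GetValue("Customer", "BirthDate");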

Up until now, everyone has been using something similar to the second alternative: runtime binding. Two of us were advocating the urge to change this.
Why? With the current situation, if you decide to change a data structure, you create an uncertain and potentially gigantic universe of latent errors: there's no documentation of these structures, they can be in use by several applications, and no one really knows which applications use them. The approach we're defending has the advantage of turning this "uncertain and potentially gigantic universe of latent errors" into a small universe of concrete errors (typically compilation errors).

One of the main advantages of this approach shows when we do change a data structure (as in the example above): it becomes a matter of dealing with a few easily and quickly identifiable bugs which generally have a quick fix. With the current situation, you'll have to deal with an unknown amount of errors which may reveal themselves through unexpected application behaviour, making them hard to track down in source code. Even if one finds out how to reproduce a bug, he'll probably lose most of his time figuring out where it's located in the source code, and he might never be sure to have covered all the situations.
Besides, Murphy's Law will make sure these latent bugs arise at the worst possible time, when you have to deliver a complex piece of software within a very, very tight schedule and a deadline hanging above your head. A hazardous situation!

This pretty much maps to a simplified discussion of static vs. dynamic languages, although with some small but meaningful differences. I believe languages are tools and, as such, we should choose the right tool for each job. I'm not defending static or dynamic languages here, as I believe each has different purposes and neither is the best tool for every situation.
The static vs. dynamic languages discussion has been around for years and needn't be rehashed here. However, a few points are of interest for this discussion.
Most of the disadvantages of dynamic languages can be attenuated through:
- Unit testing, because it reduces the need for static type checking by making sure the tested situations comply with the expected behaviour (a small example follows the list);
- Reduced coupling and higher cohesion, so that the impact of a source code change is confined to a single, specific location and doesn't ripple into other application modules.
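
For instance, a single unit test over the runtime-bound access sketched earlier would turn a silent rename into a failing test. Again, this is hypothetical (NUnit-style asserts, reusing the made-up GetValue API):

    [Test]
    public void Customer_BirthDate_IsStillThere()
    {
        // If someone renames the underlying field, this fails in the
        // test run instead of blowing up in production.
        object birthDate = resource.GetValue("Customer", "BirthDate");
        Assert.IsNotNull(birthDate);
    }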

However, in this case, what we all know and agree on is that the current solution has pathological coupling and coincidental cohesion: every functionality is potentially scattered throughout several places, and you're never sure who's using which resource, how it's being used, and for which purpose. Also, several rules of thumb such as DRY (Don't Repeat Yourself) are repeatedly violated, so we know to always expect the worst from the implemented "Architecture".
Let's call it Chaos!

So in this case, when it comes to Safety vs. Flexibility, I choose Safety! The risks of choosing Flexibility are far greater, in my humble opinion.

Pick your poison!

2.17.2010

Quick and Dirty Logging

Ever felt the need to quickly implement logging in your application, but you don't want to lose time learning (or remembering) how to use a true logging framework, or creating your own utility logging class?
Some developers tend to carry their own utility classes in their backpack, just in case they stumble upon one of these situations. Most of us, however, don't keep track of the utility classes we've written throughout the years, or tend to lose that code. So the last time it happened to me, I wrote this "Quick and Dirty Logging" class using a TraceListener.
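
In essence, it boils down to something along these lines. This is a simplified sketch rather than the full class (the names are illustrative), but it shows the two ingredients: a TraceListener registered with System.Diagnostics.Trace, and a debug-only debugger break on warnings and errors.

    using System;
    using System.Diagnostics;

    public static class QuickLogger
    {
        static QuickLogger()
        {
            // Send trace output to a text file next to the application.
            Trace.Listeners.Add(new TextWriterTraceListener("application.log"));
            Trace.AutoFlush = true;
        }

        public static void Info(string message)
        {
            Trace.TraceInformation(message);
        }

        public static void Warning(string message)
        {
            Trace.TraceWarning(message);
            BreakIfDebugging();
        }

        public static void Error(string message)
        {
            Trace.TraceError(message);
            BreakIfDebugging();
        }

        // Calls to this method are only compiled into debug builds, so
        // release assemblies never break into the debugger.
        [Conditional("DEBUG")]
        private static void BreakIfDebugging()
        {
            if (Debugger.IsAttached)
            {
                Debugger.Break();
            }
        }
    }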

Usage examples:
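
Along the lines of the following (again assuming the sketch above):

    QuickLogger.Info("Application started.");

    try
    {
        // ... do some work ...
    }
    catch (Exception ex)
    {
        // In a debug build with a debugger attached, this also breaks so
        // the problem can be inspected (and fixed) on the spot.
        QuickLogger.Error("Unexpected failure: " + ex.Message);
    }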

Implementation Details 
Note that I've decided to break into the debugger whenever I stumble upon an error or warning, as long as the assembly is compiled in debug mode. I made this decision because at the time I wanted to be able to fix errors using Visual Studio's "Edit and Continue" feature, thus shortening the debug-stop-rebuild-restart cycle and saving some time. This isn't meant to be production code, but that won't be a problem as long as you make sure you don't deploy debug assemblies anywhere other than a developer machine.

Also, as we're using the System.Diagnostics.Trace features, make sure you have tracing enabled in your project (the TRACE compilation symbol needs to be defined), which is the default for Visual Studio C# projects.
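
If it isn't, you can tick "Define TRACE constant" in the project's Build properties, or define the symbol yourself when compiling by hand (Program.cs standing in for your own sources):

    csc /define:TRACE Program.cs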

Remember that there are logging frameworks far more customizable and much faster than this small logging class. This isn't supposed to replace any other logging mechanism; it's just a quick solution whose strength is its simplicity.

Conclusion
Next time I feel the urge to create a quick logging mechanism, I'll try to remember this post and grab this code. But then again, when that happens, I'm pretty sure Murphy's Law will make sure I don't get internet access. The proxy will probably die on me while I'm opening the browser!

2.11.2010

Visual Studio Myth Buster

Microsoft created a Visual Studio "Myth Busting Table" which allows you to pick a Visual Studio version (2005, 2008 or 2010) and see the major features grouped by functional areas like "Developing Rich Clients" or "Working with Data", while also demystifying some of the myths surrounding these products.

If you're either just trying to grasp the new features that come with these products, or looking for facts to convince your boss that you should have the latest VS version, this is a site you should visit: http://msdn.microsoft.com/en-gb/ee679805.aspx

Blog RESET

I've once again decided to start blogging. I hope this decision lasts!
I've deleted old posts in this blog and started from scratch.

This blog will be mostly about .NET and C#.

Microsoft Visual Studio 2010 RC

Haven't you heard? VS2010 RC is available for public download!

Last night I updated my VS2010 Beta 2 to VS2010 RC.
It does look faster (but then again, I'm hardly the first to notice or mention this), so maybe I'll switch to VS2010 for good. So far I've been using VS2008 for most of my work, because it was much faster (running inside a VM).

I'll try it out a bit before a definitive migration. Last night I only had the time to test diagram generation (sequence diagrams and dependency diagrams) and did a bit of coding to get a better feel for how the code editor and IntelliSense respond. One of the things I'll definitely be trying is the ADO.NET Entity Framework model designer.

Get yours at http://www.microsoft.com/downloads/details.aspx?FamilyID=457bab91-5eb2-4b36-b0f4-d6f34683c62a&displaylang=en