I have been paying close attention to the exceptions thrown by third-party libraries and the .NET Framework, to better understand and be more specific about the exceptions I need to raise in my own applications. I noticed that when I set the Filter property of the FileDialog class to a value like 'rpt', an exception is raised at runtime because 'rpt' is not valid for the Filter property. The interesting part is that it raises an ArgumentException. The message on the exception reads: "Filter string you provided is not valid. The filter string must contain a description of the filter, followed by the vertical bar (|) and the filter pattern. The strings for different filtering options must also be separated by the vertical bar. Example: 'Text files (*.txt)|*.txt|All files (*.*)|*.*'". The stack trace shows that the exception occurred at System.Windows.Forms.FileDialog.set_Filter(String value).
The MSDN documentation for ArgumentException reads: 'The exception that is thrown when one of the arguments provided to a method is not valid.' But in this case, I am setting a property on a class, not passing an argument to a method. So is the .NET Framework throwing the wrong exception, or is this a valid use of ArgumentException? If it is a valid use, the documentation should definitely reflect it, which I don't think it currently does.
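One answer hides in the stack trace itself: the compiler turns a property setter into a method named set_Filter(String value), so the value being assigned really is a method argument, and ArgumentException fits. A minimal sketch below demonstrates both points; the validation logic is purely illustrative, not FileDialog's actual implementation:

```csharp
using System;

class Demo
{
    // A property with a validating setter, loosely mirroring FileDialog.Filter.
    private string _filter = "";
    public string Filter
    {
        get { return _filter; }
        set
        {
            // Illustrative check: a filter string is description|pattern pairs,
            // so splitting on '|' must yield an even number of parts.
            if (value.Split('|').Length % 2 != 0)
                throw new ArgumentException("Filter string you provided is not valid.");
            _filter = value;
        }
    }

    static void Main()
    {
        var d = new Demo();
        d.Filter = "Report files (*.rpt)|*.rpt"; // valid: description|pattern

        // The compiler emits the setter as a method named set_Filter, which is
        // exactly what shows up in the exception's stack trace.
        Console.WriteLine(typeof(Demo).GetProperty("Filter").GetSetMethod().Name);

        try { d.Filter = "rpt"; } // invalid: no '|' separator
        catch (ArgumentException) { Console.WriteLine("ArgumentException"); }
    }
}
```

In IL there is no such thing as "assigning a property" — only a call to set_Filter — so from the runtime's point of view this is an invalid method argument.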
I created a little cursor helper today to make it easy to change the cursor in a Windows Forms app. When a button click event in a form triggers a long operation, you want to give visual feedback that the operation is in progress, and then that it is complete, by changing the cursor to the wait cursor and back. If your code to do that looks like this,
Cursor c = this.Cursor;
this.Cursor = Cursors.WaitCursor;
//Do long operation
this.Cursor = c;
then you can save some pain in your wrist by using the CursorHelper class, and your code can look like this:
using (new CursorHelper(Cursors.WaitCursor))
{
    // Do long operation
}
Here is the CursorHelper utility class.
public class CursorHelper : IDisposable
{
    private Cursor _oldCursor;
    private Cursor _newCursor;

    public CursorHelper(Cursor pnewCursor)
    {
        _oldCursor = Cursor.Current;
        _newCursor = pnewCursor;
        Cursor.Current = _newCursor;
    }

    #region IDisposable Members
    public void Dispose()
    {
        Cursor.Current = _oldCursor;
    }
    #endregion
}
We have been using Scrum to manage our projects for almost five months now. It has been working great for us so far, has helped boost teamwork, and has increased the morale of the entire development team. I think it has enabled the team to keep their eyes on the ball and keep making progress by avoiding unnecessary distractions. One thing I have been struggling with in the Sprint planning meeting is writing the user stories. I am finding it tricky to come up with good, clearly defined user stories from the discussions we have been having. The INVEST acronym from the "User Stories Applied" book by Mike Cohn should serve as a good guideline for defining user stories. The following is from Doug Seven's take on INVEST.
INVEST stands for Independent, Negotiable, Valuable, Estimatable, Small and Testable.
Independent – The story should not carry dependencies with it, which can lead to estimating problems. Instead, the story should be completely independent so that it can be worked on without pulling in a set of other stories.
Negotiable – Stories should have room to negotiate; they are a starting point, not a contract.
Valuable – The story should communicate the value to a user or customer, not to the developer. The story should define not only what a user can do, but what value the user gets from the implemented story. If there is no value, cut the story.
Estimatable – You need to be able to estimate the amount of work required to implement the story. If it is too big and too daunting (an epic), break it up into smaller stories.
Small – Similar to the previous point, stories need to be small. Large stories are too complex to manage and are typically more than one story compounded together.
Testable – The implementation of the story needs to be testable. Define the tests that can be performed to validate that the story was correctly implemented.
This article explains why catching exceptions as the base 'Exception' is always a bad idea. I have to confess that I have made that mistake before. I agree with the author that exceptions should not be caught unless you know what kind of exceptions you are dealing with. But it does get hard to implement in most cases, because most third-party libraries do not do a good job of documenting the exceptions they will throw internally. So, given that you are working with a black box of exceptions, you are almost forced to catch the base Exception if you don't want your customers thinking that your application was the origin of the crash. Anyway, my 2 cents on the issue… but the article is a darn good read.
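The advice boils down to catching only what you can actually handle. A minimal sketch of that pattern, with an illustrative file name ("settings.config" is made up for the example):

```csharp
using System;
using System.IO;

class CatchDemo
{
    // Catch the specific exceptions you know how to recover from; let the
    // truly unexpected ones surface to a single top-level handler.
    public static string LoadSettings()
    {
        try
        {
            return File.ReadAllText("settings.config"); // illustrative file name
        }
        catch (FileNotFoundException)
        {
            // We know exactly what went wrong and how to recover.
            return string.Empty; // fall back to defaults
        }
        catch (IOException ex)
        {
            // Broader I/O failure: log it, then rethrow rather than swallow it.
            Console.Error.WriteLine("I/O error reading settings: " + ex.Message);
            throw;
        }
        // Deliberately no catch (Exception) here.
    }

    static void Main()
    {
        Console.WriteLine(LoadSettings().Length);
    }
}
```

When the file is missing, the specific FileNotFoundException handler runs and the method degrades gracefully instead of masking every possible failure behind one catch-all.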
In my last post I mentioned the Common Compiler Infrastructure (CCI) being used in MRefBuilder, so I thought I would elaborate on it a little in this post. Most .NET developers are aware of Reflection and its API. Reflection is a very expensive operation and should be avoided when possible; reflection data should also be cached whenever possible. I recently came across another way of accessing the type information inside an assembly: 'Introspection'. This is a relatively new technology that gives you access to assembly metadata like Reflection does, only without dynamic method invocation. In other words, it gives us read-only access to assemblies, which eliminates its use in places where dynamic method invocation is required.
The Introspection APIs are not available as part of the standard .NET Base Class Libraries but can be found in the 'Microsoft.Cci.dll' assembly that ships with FxCop. FxCop uses Introspection to access assembly metadata in order to perform its code analysis. The advantages of Introspection over Reflection are that it does not lock the assemblies and it supports multi-threaded access. If you are curious like me, download this presentation to satisfy your curiosity.
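As an aside on the caching advice above, here is a minimal sketch of caching Reflection data so the expensive call is paid only once per type (the cache class and its name are illustrative):

```csharp
using System;
using System.Collections.Generic;
using System.Reflection;

// Since Reflection calls are expensive, cache their results per type.
// This illustrative cache is not thread-safe; guard it with a lock if shared.
static class PropertyCache
{
    private static readonly Dictionary<Type, PropertyInfo[]> _cache =
        new Dictionary<Type, PropertyInfo[]>();

    public static PropertyInfo[] GetProperties(Type type)
    {
        PropertyInfo[] props;
        if (!_cache.TryGetValue(type, out props))
        {
            props = type.GetProperties(); // expensive Reflection call, done once
            _cache[type] = props;
        }
        return props;
    }
}

class CacheDemo
{
    static void Main()
    {
        // The second call is served from the cache, not from Reflection.
        var first = PropertyCache.GetProperties(typeof(string));
        var second = PropertyCache.GetProperties(typeof(string));
        Console.WriteLine(ReferenceEquals(first, second)); // prints "True"
    }
}
```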
I recently succeeded in creating the code documentation for our projects using Sandcastle. Sandcastle is a documentation compiler for managed class libraries; it generates documentation for your .NET projects in the familiar MSDN format. Sandcastle can be configured to generate documentation in different styles. I created our documentation in the VS2005 style and couldn't find much help on creating help in custom styles. Sandcastle does ship with a couple of other documentation styles, 'prototype' and 'hana'. Sandcastle is still a CTP and has not yet reached RTM, so the documentation is still mostly missing, which makes it harder to incorporate Sandcastle into your build process.
The compilation of the documentation in Sandcastle happens in a series of steps chained together to produce the desired end result. The figure below sheds more light on the steps involved in the process.
Sandcastle uses three separate tools to achieve its end goal: MRefBuilder, XslTransform, and BuildAssembler.
All three executables can be found in the "ProductionTools" folder under the Sandcastle installation path. A new user environment variable called 'DXROOT' is added to your machine by the Sandcastle installation, so if you don't know where Sandcastle is installed, just execute 'echo %DXROOT%' at the command prompt. The functions of the aforementioned tools are as follows.
MRefBuilder – This executable uses CCI (Common Compiler Infrastructure) to generate reflection metadata in XML format for the specified DLLs. In other words, it outputs a file containing information on all the types in the assembly. The schema of the generated file is quite complicated, but it is essentially a list of 'api' elements.
<api id="cer">
  <!-- many child elements containing reflection data on cer -->
</api>
<!-- more api elements -->
XslTransform – This executable applies XSL transformations to XML files. It is a general-purpose utility that takes an XSL file and an XML file and produces an output file that is the result of applying the supplied XSL transformations to the supplied XML file.
BuildAssembler – This executable is the core engine behind Sandcastle. It collates information from the comment files generated by the .NET compiler and the reflection information files from MRefBuilder and XslTransform into a set of HTML pages. BuildAssembler controls the component stack at the center of Sandcastle; the component stack to be executed is passed to it as a configuration file argument. The configuration file 'sandcastle.config' used for the VS2005-style documentation can be found in the 'Presentation\vs2005\configuration' folder under the Sandcastle installation path.
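Chained together, an invocation of the three tools might look roughly like the batch sketch below. The assembly name, output file names, and the transform name are assumptions for illustration; check your Sandcastle CTP for the exact transforms your chosen style needs.

```bat
rem Sketch of the Sandcastle pipeline; file and transform names are illustrative.

rem 1. Generate reflection metadata for the assembly.
"%DXROOT%\ProductionTools\MRefBuilder.exe" MyLibrary.dll /out:reflection.org

rem 2. Shape the raw reflection file with an XSL transformation.
"%DXROOT%\ProductionTools\XslTransform.exe" /xsl:"%DXROOT%\ProductionTransforms\ApplyVSDocModel.xsl" reflection.org /out:reflection.xml

rem 3. Build the HTML topics from the reflection data and the /doc comment file.
"%DXROOT%\ProductionTools\BuildAssembler.exe" /config:sandcastle.config manifest.xml
```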
More to come …
I am fiddling around with some batch files for generating Sandcastle documentation for the project I am working on. I hope to blog about Sandcastle and its setup in a little while. Anyway, the batch file takes parameters and needs to handle them appropriately whether the user enters the parameters within quotes or not. Being very rusty on my batch commands, I scoured the Internet for help and came across this documentation, which helped me out. It explains the use of modifiers with batch file parameters to get what you need. The information about the modifiers is below.
Using batch parameters
You can use batch parameters anywhere within a batch file to extract information about your environment settings.
Cmd.exe provides the batch parameter expansion variables %0 through %9. When you use batch parameters in a batch file, %0 is replaced by the batch file name, and %1 through %9 are replaced by the corresponding arguments that you type at the command line. To access arguments beyond %9, you need to use the shift command. The %* batch parameter is a wildcard reference to all the arguments, not including %0, that are passed to the batch file.
For example, to copy the contents from Folder1 to Folder2, where %1 is replaced by the value Folder1 and %2 is replaced by the value Folder2, type the following in a batch file called Mybatch.bat:
xcopy %1\*.* %2
To run the file, type:
mybatch.bat C:\folder1 D:\folder2
This has the same effect as typing the following in the batch file:
xcopy C:\folder1\*.* D:\folder2
You can also use modifiers with batch parameters. Modifiers use current drive and directory information to expand the batch parameter as a partial or complete file or directory name. To use a modifier, type the percent (%) character followed by a tilde (~) character, and then type the appropriate modifier (that is, %~modifier).
The following table lists the modifiers you can use in expansion.
%~1 – Expands %1 and removes any surrounding quotation marks ("").
%~f1 – Expands %1 to a fully qualified path name.
%~d1 – Expands %1 to a drive letter.
%~p1 – Expands %1 to a path.
%~n1 – Expands %1 to a file name.
%~x1 – Expands %1 to a file extension.
%~s1 – Expanded path contains short names only.
%~a1 – Expands %1 to file attributes.
%~t1 – Expands %1 to the date and time of the file.
%~z1 – Expands %1 to the size of the file.
%~$PATH:1 – Searches the directories listed in the PATH environment variable and expands %1 to the fully qualified name of the first one found. If the environment variable name is not defined or the file is not found, this modifier expands to the empty string.
The following table lists possible combinations of modifiers and qualifiers that you can use to get compound results.
%~dp1 – Expands %1 to a drive letter and path.
%~nx1 – Expands %1 to a file name and extension.
%~dp$PATH:1 – Searches the directories listed in the PATH environment variable for %1 and expands to the drive letter and path of the first one found.
%~ftza1 – Expands %1 to a dir-like output line.

Note: In the previous examples, you can replace %1 and PATH with other batch parameter values.
The %* modifier is a unique modifier that represents all arguments passed in a batch file. You cannot use this modifier in combination with the %~ modifier. The %~ syntax must be terminated by a valid argument value.
You cannot manipulate batch parameters in the same manner that you can manipulate environment variables. You cannot search and replace values or examine substrings. However, you can assign the parameter to an environment variable, and then manipulate the environment variable.
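For example, the workaround in the last paragraph — copying a parameter into an environment variable to get substring support — might look like this (the variable name is illustrative):

```bat
@echo off
rem %1 cannot be substring-ed directly, but an environment variable can.
rem %~1 strips any quotes the caller put around the argument.
set "input=%~1"
rem Print the first three characters of the first parameter.
echo %input:~0,3%
```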
In my last post I talked about REST services in GIS. If you are interested in digging deeper into REST, a whole bunch of information can be found here. But who is using REST data services out there? Well, both of the big software giants are. Google and Microsoft are each offering their own style of REST protocol for accessing databases over HTTP: Google offers Google Base, and Microsoft is working on Astoria. Being a Microsoft guy, I have been playing around with Astoria and am excited about its possibilities. Microsoft is still actively working on Astoria and is currently inviting feedback on its URL syntax style and security. Dare Obasanjo has a great blog post comparing the two technologies.
Adding the ‘format’ query parameter to the URL retrieves the data in JSON format like shown below.
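The original response snippet is not reproduced here, but a JSON payload of this kind has roughly the following shape; the entity and field names are purely illustrative (classic Northwind-style sample data), not Astoria's actual wire format:

```json
[
  { "CustomerID": "ALFKI", "CompanyName": "Alfreds Futterkiste" },
  { "CustomerID": "ANATR", "CompanyName": "Ana Trujillo Emparedados" }
]
```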
REST services have been growing in popularity for quite a while now, and REST has been an especially good fit for the web mapping world. A whole bunch of web map service providers out there are already using REST in their offerings. The main use of REST in online web mapping services has been around the access and delivery of cached map tiles. REST services standardize the way map tiles are requested by clients and the way servers describe their holdings. In other words, a URL containing the map center point, map scale, layer, and image height and width as query parameters maps directly to pre-cached map tiles sitting on the server. This also enables caching of the map tiles on the client side, helping speed up the application and reduce bandwidth usage. ESRI realized the importance of REST in GIS and is offering its ArcWeb Services over REST. More information on REST access to ArcWeb Services can be found here. Even OSGeo is working on a "Tile Map Service Specification".
But there are not many REST-based vector data services yet. Work is being done in this area too, and the demo found here illustrates how REST services can be utilized for vector data. The demo loads vector data using a REST service; the vector data is transferred around in JSON, specifically the still-under-development GeoJSON format. The demo also allows users to update the database using a REST service. It is exciting to see all these technologies come together seamlessly.
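For reference, a minimal Feature in the draft GeoJSON format looks roughly like this (the coordinates and properties are illustrative):

```json
{
  "type": "Feature",
  "geometry": { "type": "Point", "coordinates": [-122.42, 37.77] },
  "properties": { "name": "Sample point" }
}
```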
Today I was skimming over MSDN while in the process of creating a custom exception to be used in our application. Normally I steer clear of creating custom exceptions, as I do not see a real need for them in most cases; they just tend to deepen the exception hierarchy without adding much value. But I felt the situation I was trying to tackle did warrant a custom exception. So I set about creating my own exception class inheriting from the ApplicationException class, as suggested in the MSDN documentation. MSDN says:
ApplicationException is thrown by a user program, not by the common language runtime. If you are designing an application that needs to create its own exceptions, derive from the ApplicationException class. ApplicationException extends Exception, but does not add new functionality. This exception is provided as means to differentiate between exceptions defined by applications versus exceptions defined by the system.
Then I just so happened to stumble upon the “Community Content” for the ApplicationException class and was surprised to find information in there suggesting that inheriting from ApplicationException is useless. So, the “Community Content” does add value to MSDN… Anyway, I started digging around and found the following statement that concisely explains why inheriting from ApplicationException is useless.
JEFFREY RICHTER: System.ApplicationException is a class that should not be part of the .NET Framework. The original idea was that classes derived from SystemException would indicate exceptions thrown from the CLR (or system) itself, whereas non-CLR exceptions would be derived from ApplicationException. However, a lot of exception classes didn't follow this pattern. For example, TargetInvocationException (which is thrown by the CLR) is derived from ApplicationException. So, the ApplicationException class lost all meaning. The reason to derive from this base class is to allow some code higher up the call stack to catch the base class. It was no longer possible to catch all application exceptions.
GOOD NEWS though… If you are currently using custom exceptions that inherit from ApplicationException then there is no need to go in and fix them. Inheriting from ApplicationException is not harmful just useless. That is what my dad used to say about me…
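So the practical upshot, sketched below, is to derive new custom exceptions straight from Exception. The exception name here is made up for illustration:

```csharp
using System;

// Following Richter's point: derive directly from Exception, not
// ApplicationException. The three standard constructors keep the class
// usable with or without a message and an inner exception.
public class ReportGenerationException : Exception
{
    public ReportGenerationException() { }
    public ReportGenerationException(string message) : base(message) { }
    public ReportGenerationException(string message, Exception inner)
        : base(message, inner) { }
}

class ExceptionDemo
{
    static void Main()
    {
        try
        {
            throw new ReportGenerationException("report template missing");
        }
        catch (ReportGenerationException ex)
        {
            Console.WriteLine(ex.Message); // prints "report template missing"
        }
    }
}
```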