Vishful thinking…

Loading Shapefiles into the ESRI Silverlight map without uploading to the server

Posted in Uncategorized by viswaug on May 24, 2009

I have been working with Silverlight, and in particular the ESRI Silverlight API, recently and have been having a lot of fun with it. It is a refreshing change from working with the HTML/CSS/JS combo, even though I do miss the simplicity and power of HTML/CSS/JS a lot of the time. Silverlight brings with it a lot of capabilities that weren't possible with JavaScript. These capabilities can help simplify some of the workflows and improve the usability of features needed in a lot of web-based mapping applications. For example, one common requirement is for users to be able to view Shapefiles on their own machines on the web maps.

With the JavaScript APIs, the only way this could be achieved was to upload the Shapefiles to the web server and then render them either as images generated on the server and sent down to the browser, or as SVG/VML in the browser. This is where the power of Silverlight helps us simplify things: it allows us to open and read the shapefiles in the browser's Silverlight plug-in itself. This eliminates the need to upload the shapefiles to the server and thus simplifies the workflow involved.
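To illustrate the idea, here is a minimal sketch of reading a user-selected shapefile entirely on the client. OpenFileDialog and the stream APIs are standard Silverlight; the ShapefileReader class and the graphicsLayer variable are stand-ins for the utility classes in the project linked below, not their exact API.

// Prompt for a local .shp file and render it without any upload.
// ShapefileReader and graphicsLayer are hypothetical placeholders here;
// this sketch also ignores the companion .dbf attribute file.
OpenFileDialog dialog = new OpenFileDialog();
dialog.Filter = "Shapefiles (*.shp)|*.shp";
if (dialog.ShowDialog() == true)
{
    using (System.IO.Stream shpStream = dialog.File.OpenRead())
    {
        ShapefileReader reader = new ShapefileReader();
        foreach (Graphic graphic in reader.Read(shpStream))
        {
            // Add each parsed geometry to an ESRI.ArcGIS.Client.GraphicsLayer
            graphicsLayer.Graphics.Add(graphic);
        }
    }
}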

There is a lot more that the Silverlight plug-in is capable of that web-mapping applications can use. The ESRI Silverlight API packs a lot of punch, but there are still a lot of holes to be filled. So, to help fill those holes, I have started an open-source project on CodePlex where I have uploaded some of the useful things that I have been working on. Please find the link to the project below.

ESRI Silverlight API Contrib

The project currently contains the following features.

1) A custom GeoRSS layer type that can be added to the ESRI Silverlight API map.

2) A custom map layer where the image is created dynamically in the browser itself. The sample layer regenerates its image multiple times a second to simulate ripples on the map.

3) Utility classes that will help users load shapefiles from their computer directly onto the Silverlight API map without uploading the shapefile to the server.

 

Please download and use the project as you see fit. Even better, you can sign up as a contributor and help grow the project and the codebase. I also welcome any and all input on ideas for new features that you might want to see added, or critiques of the current codebase.


Template for creating Server Object Extensions

Posted in Uncategorized by viswaug on March 29, 2009

I created a multi-project C# Visual Studio 2008 project template and have been using it for a while now to create solutions for Server Object Extensions for ArcGIS Server. I managed to package it into a Visual Studio Installer that I have made available for download below (click the image below).

SOETemplate

Just double-click the file after downloading and it should install the project template into the correct directories. The installer will throw up a warning message before installing, but you should be able to install by saying it is okay to proceed. Or, if you want to heed the warning, you can also download the SOE Visual Studio 2008 project template and copy it to the "[My Documents]\Visual Studio 2008\Templates\ProjectTemplates\Visual C#" folder. You will have to check the "Register for COM interop" option in the properties page of both of the projects that will be created. But I would suggest that you create an installer project to COM-register the assemblies at the deployment location instead of registering them at the "bin\Debug" location of the project. The project template will add a custom action class to both projects that can be used in setup & deployment projects to do the COM registration.
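For reference, here is a sketch of what such a custom action class generally looks like; this is the common pattern for COM registration at install time, not necessarily the template's exact code.

// Assumed shape of an installer custom action that COM-registers the
// containing assembly (equivalent to running regasm /codebase on it).
using System.ComponentModel;
using System.Configuration.Install;
using System.Runtime.InteropServices;

[RunInstaller(true)]
public class ComRegistrationInstaller : Installer
{
    public override void Install(System.Collections.IDictionary stateSaver)
    {
        base.Install(stateSaver);
        // Register the assembly for COM interop at its deployed location
        new RegistrationServices().RegisterAssembly(
            GetType().Assembly, AssemblyRegistrationFlags.SetCodeBase);
    }

    public override void Uninstall(System.Collections.IDictionary savedState)
    {
        // Undo the COM registration when the product is uninstalled
        new RegistrationServices().UnregisterAssembly(GetType().Assembly);
        base.Uninstall(savedState);
    }
}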

The SOEXplorer utility should help you register and un-register the Server Object Extensions with the ArcGIS Server.

Note: The support for creating multi-project templates is not great. For example, there is no way to use variables defined in the multi-project template file in the individual project template files.

POCO pollution in ORMs

Posted in Uncategorized by viswaug on February 12, 2009

One of the improvements that has been made in the ORM universe is the movement towards using POCOs (Plain Old CLR Objects) instead of bloated types that pack in methods and extraneous properties that don't really belong there. What does that really mean? Most ORM frameworks used to force your entity classes to inherit from a bloated base class that provided a lot of functionality to make life easier. So your entity classes were not really POCOs. Entity classes that are POCOs should only contain the properties and fields that need to be persisted and nothing else. Using POCOs helps you achieve a clean separation of concerns, which keeps your design clean and flexible. One of the problems you will quickly run into when using entity classes that inherit from bloated base classes is with serialization, both to XML and JSON.

NHibernate is one ORM that I have used and like. The framework lets us keep the entity classes clean by using POCOs and storing all other information, like property-to-table-column mappings, in XML files. The framework is also committed to not polluting the POCOs. When using NHibernate, you say "Session.Save(customerEntity);" and not "customerEntity.Save();". Meaning, the customer entity does not know how to save itself; instead, the database session is the one that knows how to save a customer entity. And thus the customer entity can be stored in any storage mechanism required (as XML to files, etc.) and not just a database.
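Here is a minimal sketch of that separation, assuming a configured ISessionFactory and an external .hbm.xml mapping file; the Customer class itself is made up for illustration.

// A persistence-ignorant entity: only persisted state, no base class, no Save().
// Properties are virtual so NHibernate can generate lazy-loading proxies.
public class Customer
{
    public virtual int Id { get; set; }
    public virtual string Name { get; set; }
}

// The session, not the entity, knows how to persist it:
using (ISession session = sessionFactory.OpenSession())
using (ITransaction tx = session.BeginTransaction())
{
    session.Save(customerEntity);   // Session.Save(entity), not entity.Save()
    tx.Commit();
}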

I have been using LLBLGen (against an Oracle back-end) lately, and it is very well designed for most use cases. But they definitely are not committed to keeping POCOs just POCOs; they make heavy use of base classes. I do have to admit, though, that the functionality those base classes pack really does save time during development for pure database persistence.

The Enterprise Library Validation Application Block (VAB) also allows you to decorate your entity classes with validation attributes that tie domain logic into your POCOs. This validation logic can be moved out of the attributes and provided in configuration or code, but it does involve work. As developers, we have to decide whether that extra effort is worth it given the requirements of the project.
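For instance, this is the kind of coupling in question; the entity is made up, and the attribute comes from the VAB validators namespace.

using Microsoft.Practices.EnterpriseLibrary.Validation.Validators;

public class Customer
{
    // The validation rule is baked into the POCO itself
    [StringLengthValidator(1, 50)]
    public string Name { get; set; }
}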

Even the Entity Framework team at Microsoft has realized the advantages of not polluting POCOs. The framework started out with bloated entity classes and has slowly begun its migration towards POCOs. Even though that migration is not complete, they have made efforts towards it and currently rely on the "Three Amigos" of the non-existent IPOCO interface (IEntityWithKey, IEntityWithChangeTracker, and IEntityWithRelationships), which by itself defeats the purpose of moving to POCOs. The framework has these interfaces in place for POCO wannabes…

So, paying more attention when selecting your ORM to whether you are using pure POCOs (not pretenders) or bloated types in your application framework should help you down the road when you are trying to persist your types somewhere other than the database or trying to JSON-serialize them.

Creating a LEAN and FAST custom marker overlay for Google Maps

Posted in Uncategorized by viswaug on December 18, 2008

Before I start, I need to say that the link to the code below was provided to me by Pamela Fox. The sample illustrates how to create a lightweight custom marker GOverlay for Google Maps that is REALLY FAST. And it allows for a multitude of customizations as you see fit. It was just too good not to share. Please find below the tweak I made to the code to allow it to do a little bit of thematic mapping in just minutes. The demo also allows you to check out the performance of the custom overlay with a large number of markers loaded.

CustomGOverlay

Here are the reasons provided by Pamela Fox for why the default marker overlay in Google Maps is slower than the custom marker overlay below.

The default marker class (GMarker) is designed so that it works well across browsers, has a hot-spot shape, can be draggable, has a shadow, prints well in any browser, etc. It is not optimized for performance.

Here is the custom marker overlay code sample.

/* A MarkerLight is a lightweight overlay that places an image at a
 * lat/lng on the map.
 * @param latlng {GLatLng} Point to place the marker at.
 * @param opts {Object Literal} Passes configuration options -
 *   height, width, image, and imageOver.
 */
function MarkerLight(latlng, opts) {
  this.latlng = latlng;

  if (!opts) opts = {};

  this.height_ = opts.height || 32;
  this.width_ = opts.width || 32;
  this.image_ = opts.image;
  this.imageOver_ = opts.imageOver;
  this.clicked_ = 0;
}

/* MarkerLight extends the GOverlay class from the Google Maps API
 */
MarkerLight.prototype = new GOverlay();

/* Creates the DIV representing this MarkerLight.
 * @param map {GMap2} Map that the overlay is added to.
 */
MarkerLight.prototype.initialize = function(map) {
  var me = this;

  // Create the DIV representing our MarkerLight
  var div = document.createElement("div");
  div.style.border = "1px solid white";
  div.style.position = "absolute";
  div.style.paddingLeft = "0px";
  div.style.cursor = 'pointer';

  var img = document.createElement("img");
  img.src = me.image_;
  img.style.width = me.width_ + "px";
  img.style.height = me.height_ + "px";
  div.appendChild(img);

  GEvent.addDomListener(div, "click", function(event) {
    me.clicked_ = 1;
    GEvent.trigger(me, "click");
  });

  map.getPane(G_MAP_MARKER_PANE).appendChild(div);

  this.map_ = map;
  this.div_ = div;
};

/* Remove the main DIV from the map pane
 */
MarkerLight.prototype.remove = function() {
  this.div_.parentNode.removeChild(this.div_);
};

/* Copy our data to a new MarkerLight
 * @return {MarkerLight} Copy of this marker
 */
MarkerLight.prototype.copy = function() {
  var opts = {};
  opts.height = this.height_;
  opts.width = this.width_;
  opts.image = this.image_;
  opts.imageOver = this.imageOver_;
  return new MarkerLight(this.latlng, opts);
};

/* Redraw the MarkerLight based on the current projection and zoom level
 * @param force {boolean} Helps decide whether to redraw overlay
 */
MarkerLight.prototype.redraw = function(force) {

  // We only need to redraw if the coordinate system has changed
  if (!force) return;

  // Calculate the DIV pixel coordinate of our lat/lng to get
  // the position of our MarkerLight
  var divPixel = this.map_.fromLatLngToDivPixel(this.latlng);

  // Now position our DIV based on the DIV coordinates
  this.div_.style.width = this.width_ + "px";
  this.div_.style.left = divPixel.x + "px";
  this.div_.style.height = this.height_ + "px";
  this.div_.style.top = (divPixel.y - this.height_) + "px";
};

MarkerLight.prototype.getZIndex = function(m) {
  // Markers further south stack on top; a clicked marker jumps to the front
  return GOverlay.getZIndex(m.getPoint().lat()) - m.clicked_ * 10000;
};

MarkerLight.prototype.getPoint = function() {
  return this.latlng;
};

MarkerLight.prototype.setStyle = function(style) {
  for (var s in style) {
    this.div_.style[s] = style[s];
  }
};

MarkerLight.prototype.setImage = function(image) {
  this.div_.style.background = 'url("' + image + '")';
};

// Example usage (after the map is initialized):
// map.addOverlay(new MarkerLight(new GLatLng(38.89, -77.03), { image: "marker.png" }));

Google Maps API or the VirtualEarth API? From a developer’s viewpoint

Posted in Uncategorized by viswaug on December 16, 2008

I came across an interesting blog post by the Redfin developers about their experience moving from the VirtualEarth API to the Google Maps API. That got me thinking about my experiences with both APIs. I am not going to talk about whose imagery looks better or has higher resolution here. But, as a developer, I should say that I actually prefer the Google Maps API over the VirtualEarth API. For most cases, both APIs are really easy to use and are comparable in their feature sets. But here are some things in the Google Maps API that I miss in the VirtualEarth API.

Did I miss anything else either with the Google Maps API or the VirtualEarth API?

Conceptualizing AtomPub for Map services

Posted in Uncategorized by viswaug on December 8, 2008

I have written about some of the advantages that a REST API can bring to exposing GIS data. This is not just for delivering map images but also for delivering and managing vector data. We have been building a RESTful API not only to expose vector GIS data, but also to perform all of the CRUD operations on the data as required. Since most of our mapping applications are built using the ESRI JS API, the REST API makes it a breeze to access those functionalities through URL endpoints.
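To give a flavor of what that looks like, a feature collection can be manipulated entirely through HTTP verbs; these URLs are illustrative, not our actual endpoints:

  GET    /Lakes        - list the features in the collection
  GET    /Lakes/42     - retrieve feature 42
  POST   /Lakes        - create a new feature
  PUT    /Lakes/42     - update feature 42
  DELETE /Lakes/42     - delete feature 42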

Not to revisit the topic again, but the idea behind REST is to use HTTP as an 'application protocol' rather than as a transport protocol the way SOAP web services do. One protocol that uses HTTP the way it is meant to be used is the AtomPub protocol. I am not going to go into the details of the AtomPub protocol here, since there is a lot of documentation available online that does a good job of explaining it. But in brief terms,

The Atom Publishing Protocol is an application-level protocol for publishing and editing Web Resources using HTTP [RFC2616] and XML 1.0 [REC-xml].

AtomPub is both a format and a protocol. And I think that AtomPub is a natural way to expose data, especially GIS data. I am going to go over how, in my opinion, GIS data exposed as map services falls seamlessly into the AtomPub data container structure. The figure below illustrates a simplified data container structure in AtomPub.

AtomPub

As you can see from the figure above, an AtomPub service document contains a list of 'Workspaces', which in turn contain a list of 'Collections'. The 'Collections' are represented by Atom Feed documents. An Atom Feed document contains a list of 'Entries'. In my opinion, here is how the concepts in the AtomPub protocol relate to a map service. (I am only referring to ESRI map services below.)

Service Document – The Map Service itself – the entry point that describes the location and capabilities of collections.

Workspace – Data Frames or Group Layers – a logical grouping of collections.

Collection/Atom Feed – Feature Layer – a resource that contains a set of member resources. Collections are represented as Atom Feeds.

Entry – Feature – represents an individual piece of content/data.

Categories Document/Categories Element – Feature Type – provides metadata to describe an Atom entry. A Categories Document describes the categories that are allowed in collections.

AtomPub-MapService

The AtomPub 'category' element can also be used to represent geometry types like 'LinearRing', 'Polygon', 'LineString', 'MultiLineString', and 'MultiPoint' and be attached to Atom entries to provide metadata. It can also be used to represent the domain entity that the Feature Layer represents, like National Forest, etc…

 

Service Document Sample:

<?xml version="1.0" encoding="utf-8"?>
<service xmlns="http://www.w3.org/2007/app" xmlns:atom="http://www.w3.org/2005/Atom">
  <workspace>
    <atom:title>Transportation</atom:title>
    <collection href="http://vishcio.us/NationalHighways">
      <atom:title>National Highways</atom:title>
      <categories href="http://vishcio.us/NationalHighways" />
    </collection>
    <collection href="http://vishcio.us/Rivers">
      <atom:title>Rivers</atom:title>
      <accept>application/json</accept>
      <accept>application/xml</accept>
    </collection>
  </workspace>
  <workspace>
    <atom:title>Water Bodies</atom:title>
    <collection href="http://vishcio.us/Lakes">
      <atom:title>Lakes</atom:title>
      <accept>application/atom+xml;type=entry</accept>
      <categories fixed="yes">
        <atom:category scheme="http://vishcio.us/geometry/polygon" term="polygon" />
        <atom:category scheme="http://vishcio.us/Lakes" term="Lakes" />
      </categories>
    </collection>
  </workspace>
</service>

 

Atom Feed Document:

<?xml version="1.0" encoding="utf-8"?>
<feed xmlns="http://www.w3.org/2005/Atom">

  <title>Example Feed</title>
  <subtitle>A subtitle.</subtitle>
  <link href="http://example.org/feed/" rel="self"/>
  <link href="http://example.org/"/>
  <updated>2003-12-13T18:30:02Z</updated>
  <author>
    <name>John Doe</name>
    <email>johndoe@example.com</email>
  </author>
  <id>urn:uuid:60a76c80-d399-11d9-b91C-0003939e0af6</id>

  <entry>
    <title>Atom-Powered Robots Run Amok</title>
    <link href="http://example.org/2003/12/13/atom03"/>
    <id>urn:uuid:1225c695-cfb8-4ebb-aaaa-80da344efa6a</id>
    <updated>2003-12-13T18:30:02Z</updated>
    <summary>Some text.</summary>
  </entry>

</feed>

 

Categories Document:

<?xml version="1.0" ?>
<app:categories xmlns:app="http://www.w3.org/2007/app" xmlns:atom="http://www.w3.org/2005/Atom" fixed="yes" scheme="http://example.com/cats/big3">
  <atom:category term="animal" />
  <atom:category term="vegetable" />
  <atom:category term="mineral" />
</app:categories>

 

The above is just my take on how the AtomPub elements can relate to map services. I think this is the next logical step in making map services more resource-oriented, and it will allow for dynamic discoverability and ease of application development. ESRI map services already have the concept of a service document, which can be accessed at the links below.

http://sampleserver1.arcgisonline.com/arcgis/rest/services

http://sampleserver1.arcgisonline.com/arcgis/rest/services?f=pjson

Supporting the AtomPub output format for that service document should be pretty simple.

URL & URI – What's the difference?

Posted in Uncategorized by viswaug on December 3, 2008

I am going to attempt to highlight the difference between URLs and URIs the way I see it. I am sure purists might disagree with my take on it. So here goes nothing…

URI – Uniform Resource Identifier – This identifies a resource uniquely. The identifier may or may not contain the location and the name of the resource. OK, you have read that before… what does it really mean? Here is how I see it.

Take an ISBN for instance. It helps you identify a book uniquely. Let's assume you are interested in the book with 'ISBN-10: 0596519206'. Using the ISBN, you can confirm whether any book you are looking at is the book you are interested in. But using the number alone, you cannot get access to the book; you can only confirm whether you have found it. This doesn't mean that a URI cannot also tell you how to access the resource, but it SHOULD at least let you differentiate that resource from all others.

URL – Uniform Resource Locator – This lets you know how to get access to that particular resource. In most cases, the URI is contained as a sub-part within the URL. A URL contains not just the resource-identifying information (the URI) but also the resource-locating information. Take, for example, the above book resource: a URL for the book could be 'http://www.amazon.com/dp/0596519206/'. Here you can see that the ISBN (the URI) is contained as a part of the URL. But the URL also provides more information that lets you gain access to the resource. That is, it lets you know that the resource can be accessed on the World Wide Web via the HTTP protocol at the Amazon address using the ISBN.

 

That said, I would also like to add that the two terms URL and URI are used so interchangeably these days that the subtle difference really doesn’t have a lot of significance.

HTTP header status

Posted in Uncategorized by viswaug on November 26, 2008

We have been building a completely RESTful interface for editing data served up by ArcGIS Server MapServices. It is a supplement to the ESRI REST services, which do not expose all the functionality that we need through their REST API. It has turned out great so far and has been amazingly easy to work with when building applications using the JS API. One of the important things to keep in mind when building RESTful interfaces to your data is to abide by the REST principles. So, being true to those principles, we need to respond to HTTP requests that produce an error on the server with the corresponding HTTP status codes. It is tricky to walk through all those conditions in your head and decide on the correct HTTP status to return. Well, the flowchart below from Alan Dean should help with the process and make it easier to figure out what status code to return.

http-headers-status
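In practice, this mostly comes down to setting the status code explicitly in your handlers instead of returning 200 with an error message in the body. A minimal ASP.NET sketch; the Feature type, the repository, and the ToJson call are hypothetical placeholders for your own data access code:

public void ProcessRequest(HttpContext context)
{
    try
    {
        // _repository and Feature stand in for the application's data layer
        Feature feature = _repository.Find(context.Request["id"]);
        if (feature == null)
        {
            context.Response.StatusCode = 404;   // Not Found
            return;
        }
        context.Response.ContentType = "application/json";
        context.Response.Write(feature.ToJson());   // 200 OK is the default
    }
    catch (FormatException)
    {
        context.Response.StatusCode = 400;   // Bad Request: malformed input
    }
    catch (Exception)
    {
        context.Response.StatusCode = 500;   // Internal Server Error
    }
}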

 

We have also built a JavaScript "Table of Contents (TOC)" control for the ESRI JS API, which uses a custom RESTful service on the back-end to return all the data needed to build the Table of Contents. We also use the ArcGIS Server SOAP API in our TOC REST service to get the data and the swatches for the Table of Contents, which should allow us to deploy the TOC REST service on any web server without needing a server license. I hope to post a link to a working demo of the Table of Contents component pretty soon. In the meanwhile, if you are building a Table of Contents component yourself, one bit of advice you might consider is to leverage the ASP.NET cache to store the response for the TOC data. Depending on the complexity of your MapService, the response from ArcGIS Server when requesting map legend data can range from long to extraordinarily long. In the case of our MapService, requesting the map legend data from ArcGIS Server took 7 seconds on average. But the response from our TOC REST service takes almost no time at all once it has been cached in the ASP.NET cache.
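The caching idea is simple; a sketch, where the cache key and the FetchLegendFromArcGisSoapApi helper are made-up names for illustration:

// Fetch the legend once via the slow SOAP call, then serve it from the
// ASP.NET cache (System.Web.HttpRuntime.Cache) until it expires.
public string GetLegendJson(string mapServiceName)
{
    string cacheKey = "toc-legend:" + mapServiceName;
    string legendJson = HttpRuntime.Cache[cacheKey] as string;
    if (legendJson == null)
    {
        legendJson = FetchLegendFromArcGisSoapApi(mapServiceName); // the ~7s call
        HttpRuntime.Cache.Insert(cacheKey, legendJson, null,
            DateTime.Now.AddHours(1), System.Web.Caching.Cache.NoSlidingExpiration);
    }
    return legendJson;
}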

Windsor Container – Satisfying dependencies

Posted in Uncategorized by viswaug on November 24, 2008

I have been using the Windsor container Dependency Injection (DI) framework for my Inversion of Control (IoC) needs lately and have been highly pleased with the flexibility it offers in satisfying the different dependencies of components. Some of the ways the Windsor container allows dependencies to be satisfied are:

1) Setting constructor parameters
2) Setting properties on components
3) Setting up dependencies on other components that have been configured – the ${…} syntax
4) Setting up common properties that can be used to instantiate different components – the #{…} syntax
5) Passing in the dependencies as a dictionary
6) Using the Factory Facility to obtain the dependency from a custom factory class

That should cover all the scenarios you would ever come across, right? Wrong. I had a weird requirement wherein I wanted to pass in one of the dependencies (as a dictionary) and configure two more dependencies in the configuration file.

Here is the constructor of the class that I had to create:

public LOBLayer(IFeatureClass featureClass, string layerName, string keyFieldName)

I wanted to configure the 'layerName' and 'keyFieldName' constructor parameters in the configuration file and pass in the 'featureClass' object in the dictionary parameter of the Resolve(…) method. So, I needed to be able to do the following:

<component id="LOB" lifestyle="transient" service="GIS.IGISLayer, GIS" type="BizLogic.LOBLayer, BizLogic">
  <parameters>
    <layerName>LOB</layerName>
    <keyFieldName>OBJECTID</keyFieldName>
  </parameters>
</component>

and

System.Collections.IDictionary parameters = new System.Collections.Hashtable();
parameters.Add("featureClass", featureLayer.FeatureClass);
IGISLayer destinationLayer = _container.Resolve<IGISLayer>(featureType, parameters);

 

Initially, I was skeptical as to whether Windsor would be smart enough to handle it. But, the Windsor container came through for me. I was able to configure two of the constructor parameters in the configuration file and pass in the third dependency in a dictionary to the Resolve(…) method.

Writing better javascript – Part 9

Posted in Uncategorized by viswaug on November 23, 2008

Use Event Delegation and be mindful of the number of event handlers that you are adding

Every event handler you add to a DOM element on your web page adds to the memory usage of the DOM. So, adding a lot of event handlers in a web page can increase your memory usage. There is also a greater chance of memory leaks if you are not careful writing your code. This may not be a big factor in most cases, but it is always good to be aware of what is happening in your code, especially because most JavaScript frameworks make it really easy to write code that may not be optimal in some cases. This is especially a factor when dealing with tables and lists, where the number of elements to which you are attaching an event handler could potentially be large.

Consider the case where you want users to be able to delete a row in a table by clicking on the first cell in the row. When thinking about implementing this requirement, it is easy to write something like this when using the jQuery framework:

$("table#large tr td:first-child").click(function(e) {
    $(this).parent("tr").remove();
});

In the case above, you are attaching an event handler to a cell in every row of the table. If your table contains a large number of rows, you will be attaching potentially thousands of event handlers to your DOM. This may not be an optimal solution. These kinds of circumstances can be tackled in a better way using the event bubbling property of the DOM. We can attach just a single event handler to the table itself instead of to the first cell of every row, and use the 'target' property of the jQuery event object to determine whether the user clicked on the first cell in a row, then proceed to remove the row as needed. This technique of attaching the event handler to the container instead of to the individual elements is sometimes referred to as 'Event Delegation'. The above code snippet can be rewritten as below using Event Delegation.

$("table#large").click(function(e) {
    // jQuery normalizes the clicked element across browsers as e.target
    if (e.target.tagName == "TD") {
        if (e.target === $(e.target).parent("tr").find("td:first-child").get(0)) {
            $(e.target).parent("tr").remove();
        }
    }
});

You shouldn't have to worry about this scenario too much unless the number of items you are dealing with is considerably large. But it is good to be aware of it, and this knowledge might come in handy when you are trying to trace performance issues or the origins of memory leaks.

Another HUGE benefit of using the Event Delegation technique is that when new rows are added to the table via AJAX, we don't have to worry about attaching event handlers to the newly added rows. Everything will just keep working as expected, since we have attached the event handler to the containing DOM element, which then checks for the expected conditions.