Vishful thinking…

Running multiple versions of Node.js and npm on Windows

Posted in Uncategorized by viswaug on February 11, 2016

I use NVM to run different versions of Node.js on my Ubuntu development machine. So, when I wanted to do the same on my Windows development machine, I was out of luck since NVM does not support Windows. But Node.js is available for Windows as just a downloadable .EXE file. This means that I can include the required version of the tiny binary file in my projects and I should be good to go. Well, the ride got a little bumpy along the way, and I am hoping that this post will help someone trying to do the same.

So, now that I had my node.exe in my project, the very first thing I wanted to do was to use npm to install dependencies. How do I go about doing that? Well, I could download npm as a ZIP file from its releases on GitHub. So, where do we put it? I had to create a “node_modules\npm” folder alongside node.exe and extract the contents of the “npm-x.x.x” folder inside the npm ZIP download into this folder. That’s not all: copy the files named “npm” and “npm.cmd” from the “node_modules\npm\bin” directory to the same directory as node.exe. Now, you should be able to use both Node.js and npm from the command line by referring to their fully qualified paths. Remember, node and npm are installed in that project only and not globally. Invoke “npm install” and the dependencies in your package.json should be installed into your local “node_modules”. Sounds like we have achieved our goal.
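As a quick sketch of the layout described above (the “tools” folder name is hypothetical, and the extract/copy steps are shown as comments since the npm release archive name varies by version):

```shell
# Hypothetical per-project layout: "tools" stands in for the project folder
# that holds the project-local node.exe.
mkdir -p tools/node_modules/npm
# 1. Extract the "npm-x.x.x" folder from the npm release ZIP into
#    tools/node_modules/npm (e.g. with unzip + cp -r).
# 2. Copy "npm" and "npm.cmd" from tools/node_modules/npm/bin next to node.exe:
#    cp tools/node_modules/npm/bin/npm tools/node_modules/npm/bin/npm.cmd tools/
# 3. Invoke the project-local copies by their qualified paths, e.g. on Windows:
#    tools\npm.cmd install
echo "layout ready"
```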

Great, we got everything working locally and checked in. The build server should be able to check it out, call the local npm, and install all the project dependencies locally. Well, for me, things blew up on the build server. Why? npm was having trouble installing the node-sass package. This turned out to be a native dependency. The reason why everything worked on my machine but failed on the build server was that I had Visual Studio 2015 installed on my development machine and it wasn’t installed on the build server (nor should it be, IMHO). That is, my development machine had a C++ compiler available and the build server did not. This took me back to the days when I used to have the exact same problems with installing PyPI packages on Windows machines. But the Python community had already solved this problem using pre-built binaries in wheel distributions. BTW, if you are working in GIS & Python on Windows, Christoph Gohlke has a treasure trove for you here.

Most of the consensus in online discussions seemed to be to install Visual Studio 2015 on the build server. But there is a glimmer of hope: Microsoft has recognized the issue and has made the Visual C++ Build Tools 2015 available exactly for this purpose (standalone C++ tools for build environments). You will also need to install Python 2.7 on the build server. The build server can run some scripts to configure the environment before it builds these native dependencies. First off, it needs to add Python to its PATH and configure npm to use Python 2.7 with the following command:

npm config set python python2.7

The Microsoft C++ build tools version can be set globally with this command:

npm config set msvs_version 2015 --global

But we can also set the C++ build tools version per npm install invocation, as shown here:

npm install --msvs_version=2015

After all that, things should start working. I am hoping that this gets easier in the future if npm moves to a pre-built binary distribution for native dependencies, like Python wheels.

TableauShapeMaker – Adding custom shapes to Tableau maps

Posted in Uncategorized by viswaug on June 29, 2015

Hello fellow map herders, I recently wrote this little utility that can convert shapefiles into a CSV format that is consumable by Tableau. Tableau has a collection of built-in shapes that are sufficient for most mapping needs inside Tableau. But sometimes, there are valid reasons for wanting to add custom lines or polygons on to the Tableau map. This utility processes a shapefile in any coordinate system and optionally simplifies the shapes to reduce the number of vertices being displayed and outputs a CSV file that can be consumed by Tableau and blended with other data sets to produce the required visualization.

Options:
  -h, -?, --help           Show this message and exit
  -i, --input=VALUE        (Required) Input shapefile in any geographic or projected coordinate system
  -o, --output=VALUE       (Optional) Output CSV file name
  -t, --tolerance=VALUE    (Optional) Tolerance value to use for simplifying the shapes

Example command:

TableauShapeMaker.exe -i "C:\VA_JURIS.shp" -o juris.csv -t 0.001

The utility can handle displaying multi-part polygon/line shapes (like the Hawaiian islands) properly. But it cannot handle holes in polygons, because I could not find information on how to represent them in the Tableau format. If anyone can help me with the representation for holes, I can add that feature to the utility as well.

To display the shapes in the generated CSV file on Tableau maps, add the CSV file as a data source in Tableau. The “Latitude” and “Longitude” fields should be identified as “Measures”; if not, please mark them as such. The “Point ID”, “Polygon ID” and “Sub Polygon ID” fields should be identified as “Dimensions”. Double-click the “Latitude” and “Longitude” measures to add them to the row and column shelves. Select “Polygon” from the “Marks” drop-down. Drag the “Point ID” dimension to the “Path” shelf, followed by the “Polygon ID” dimension to the “Detail” shelf and the “Sub Polygon ID” dimension to the “Detail” shelf. This should display the shapes on the map. To color the shapes by a certain dimension, say “Name”, drag the “Name” dimension to the “Color” shelf.
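For reference, a few hypothetical rows of the generated CSV might look like the following (the field names are taken from the description above; the exact column order and any attribute columns such as “Name” are assumptions, so check your own output):

```
Polygon ID,Sub Polygon ID,Point ID,Latitude,Longitude,Name
1,1,1,38.90,-77.45,Fairfax
1,1,2,38.85,-77.30,Fairfax
1,1,3,38.70,-77.40,Fairfax
2,1,1,37.55,-77.45,Richmond
```

Each row is one vertex; Tableau reconstructs the outline from the “Point ID” ordering within each “Polygon ID”/“Sub Polygon ID” group.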

Since holes are not supported by TableauShapeMaker, you will need to ensure that smaller shapes are drawn on top of larger shapes wherever they overlap. The drawing order of the shapes can be controlled/rearranged by dragging their legend entries to the right position in the legend list.

Obligatory screenshot…


Download TableauShapeMaker

Something to consider before using a relational storage as a service for your app

Posted in Uncategorized by viswaug on July 21, 2012

If you are planning on using one of the great services available these days that let you store, retrieve, update and delete (perform CRUD operations on) your relational data (geographic or not) using an HTTP API, here is something you might need to consider. There are multiple services that allow you to do this these days: Google FusionTables, ArcGIS Online, CartoDB, ArcGIS Server 10 and up, etc. Even though the backend storage for Google FusionTables is not relational, the service presents a relational facade to the user by allowing tables to be merged, etc.

All these services allow you to perform CRUD operations on your data over HTTP. This makes things really easy for front end developers, since they can perform CRUD operations right from JavaScript, perhaps using the web server only as a proxy relaying requests to the storage service.

But one of the things it takes away from us is the ability to perform multiple CRUD operations as one atomic unit. Let us assume that we have a polygon layer in our ArcGIS Server map service, along with a standalone table holding attribute data related (one-to-many) to the polygon layer. If one of our requirements is to update a single polygon layer row AND one or more related rows in the standalone table as a unit that either succeeds or fails together, then this becomes very hard to accomplish (if it is possible at all). That is, we could end up with cases where the polygon layer table does get updated, but the update on the related table fails for one of many possible reasons. This can allow bad data into the database and a loss of data integrity.

Non-relational storage services are more resistant to this drawback, since the data structure design for those databases expects us to handle this up front based on how the app uses the data being stored.

If you have any thoughts or suggestions on this issue, please leave a comment.

TileCutter – A small utility to generate tile cache in the MBTiles format from ArcGIS Dynamic Map Services

Posted in C#, ESRI, GIS, Uncategorized by viswaug on June 12, 2011

Thought I would share a little utility I had written up to generate tile caches in the MBTiles format for ArcGIS Dynamic Map Services. The MBTiles cache format is very simple and makes moving caches between machines very easy since you just have to transfer one file instead of the thousands of files that need to be copied for normal tile caches. The TileCutter is a console utility and accepts the scale range and the extent in latitude/longitude for which the cache should be generated. It also takes a few other options listed below.

Options:
  -h, --help                 Show this message and exits
  -m, --mapservice=VALUE     Url of the ArcGIS Dynamic Map Service to be
                               cached
  -o, --output=VALUE         Location on disk where the tile cache will be
                               stored
  -z, --minz=VALUE           Minimum zoom scale at which to begin caching
  -Z, --maxz=VALUE           Maximum zoom scale at which to end caching
  -x, --minx=VALUE           Minimum X coordinate value of the extent to cache
  -y, --miny=VALUE           Minimum Y coordinate value of the extent to cache
  -X, --maxx=VALUE           Maximum X coordinate value of the extent to cache
  -Y, --maxy=VALUE           Maximum Y coordinate value of the extent to cache
  -p, --parallelops=VALUE    Limits the number of concurrent operations run
                               by TileCutter
  -r, --replace=VALUE        Delete existing tile cache MBTiles database if
                               already present and create a new one.

Example Usage:

To try it out, just run TileCutter.exe; it will download tiles for some default extents from an ESRI sample server:

TileCutter -z=7 -Z=10 -x=-95.844727 -y=35.978006 -X=-88.989258 -Y=40.563895 -o="C:\LocalCache" -m="http://sampleserver1.arcgisonline.com/ArcGIS/rest/services/Demographics/ESRI_Population_World/MapServer"

TileCutter requests tiles in parallel threads and allows you to control the number of concurrent operations. I don’t want to write too much about it yet since I am still working to add more capabilities to it. I am planning on providing the ability to generate MBTiles caches for WMS services, OSM tiles, etc. in the future, and more if there is interest. I am also planning on implementing a way to avoid storing duplicate tiles (for example, empty tiles in the ocean). I just wanted to get the tool out there early to get feedback and gauge interest. So, if you have queries/interest/feature requests, let me know 🙂

Also, I will blog about a little piece of code for an IHttpHandler that will serve up the MBTiles to web clients pretty soon. Stay tuned…

TileCutter can be downloaded here.

Drag & Drop support for graphics in the ESRI Silverlight API

Posted in Uncategorized by viswaug on June 30, 2010

I have recently added drag and drop support for graphics in the ESRI Silverlight API to the ESRI Silverlight API Contrib library. The library has also been upgraded to ESRI Silverlight API Beta 2, and will soon be upgraded to RC. The drag & drop functionality can be added to specific graphics or to all the graphics in a graphics layer. Enabling drag and drop is really easy: just call the ‘MakeDraggable(map)’ method on either a Graphic object or a GraphicsLayer object. ‘MakeDraggable’ is an extension method that takes a reference to the ESRI Silverlight API map control to do its magic. The return from the method call is an ‘IDisposable’. To stop the graphic, or the graphics on the graphics layer, from being draggable, just call ‘Dispose’ on the ‘IDisposable’ object returned above. The map remains entirely usable while the graphics are draggable, so no worries there.

I am also currently looking into adding custom drawing tools to the library that will have the map be pannable when drawing and allow adding new drawing tools like circle etc if needed. If you want to see some new interesting features added to the ESRI Silverlight API Contrib library, let me know 🙂

Helper classes for mock testing WebClient

Posted in Uncategorized by viswaug on May 17, 2010

The WebClient class in .NET and Silverlight is used for fetching (GET) resources from the web, or sometimes locally. This scenario inherently does not lend itself well to unit testing, since it means the component being written depends on an external resource. Unit testing seeks to test the component by itself and eliminate the request to the external resource. Mock testing comes to our rescue in such scenarios by letting us mock the touch-points where the component interfaces with the external resource. But the use of WebClient to make the request makes mocking tricky, since WebClient does not implement interfaces or virtual methods to expose its functionality, and most of the widely used mocking frameworks can only mock members of an interface or virtual methods. To overcome this shortcoming, I have created and used the following helper classes to help mock web requests made using WebClient. It consists of a ‘WrappedWebClient’ class that wraps the methods supported by WebClient and implements interfaces defined alongside to help with mocking those methods.

The sample below contains a service class that fetches stories from the Digg web service using a search term.

public interface IServiceRequest<T, K>
{
    void Run( T input );

    event EventHandler<ServiceRequestEventArgs<K>> SelectionCompleted;

    object State
    {
        get;
        set;
    }
}
public class DiggSearchService : IServiceRequest<string, IEnumerable<DiggStory>>
{
    string template = "http://services.digg.com/search/stories?query={0}&count={1}&appkey=http://www.vishcio.us";

    public IWebDownloader WebDownloader
    {
        get;
        set;
    }

    public int StoryCount
    {
        get;
        set;
    }

    public DiggSearchService()
    {
        WebDownloader = new WrappedWebClient();
        StoryCount = 10;
    }

    #region IServiceRequest<string,IEnumerable<DiggStory>> Members

    public void Run( string input )
    {
        WebDownloader.DownloadStringCompleted += new EventHandler<WrappedDownloadStringCompletedEventArgs>( WebDownloader_DownloadStringCompleted );

        Uri address = new Uri( string.Format( template, input, StoryCount ) );
        WebDownloader.DownloadStringAsync( address, State );
    }

    void WebDownloader_DownloadStringCompleted( object sender, WrappedDownloadStringCompletedEventArgs e )
    {
        if( e.Error != null )
        {
            RaiseSelectionCompletedEvent( null, e.Error );
            return;
        }

        XDocument xmlStories = XDocument.Parse( e.Result );

        IEnumerable<DiggStory> stories = from story in xmlStories.Descendants( "story" )
                      select new DiggStory
                      {
                          Id = ( int ) story.Attribute( "id" ),
                          Title = ( ( string ) story.Element( "title" ) ).Trim(),
                          Description = ( ( string ) story.Element( "description" ) ).Trim(),
                          HrefLink = new Uri( ( string ) story.Attribute( "link" ) ),
                          NumDiggs = ( int ) story.Attribute( "diggs" ),
                          UserName = ( string ) story.Element( "user" ).Attribute( "name" ),
                      };
        RaiseSelectionCompletedEvent( stories, null );
    }

    public event EventHandler<ServiceRequestEventArgs<IEnumerable<DiggStory>>> SelectionCompleted;

    public object State
    {
        get;
        set;
    }

    #endregion

    private void RaiseSelectionCompletedEvent( IEnumerable<DiggStory> data, Exception ex )
    {
        if( SelectionCompleted != null )
            SelectionCompleted( this, new ServiceRequestEventArgs<IEnumerable<DiggStory>>( data, ex ) );
    }
}

The service class above uses the WrappedWebClient instead of the WebClient class itself. This enables the service class to be unit tested by mocking out the web request to the Digg service. The WrappedWebClient implements the following interfaces:

  • IWebDownloader
  • IWebUploader
  • IWebReader
  • IWebWriter

This enables us to mock the various types of operations that can be performed by the WebClient. The Digg service class above exposes an IWebDownloader property, which makes it clear that the service class uses the ‘DownloadStringAsync’ method on the WebClient to access the service. To test the service class, a sample response for the request can be downloaded beforehand and saved as an XML file. This file can then be compiled as a ‘Resource’ into the test assembly. During the test, the XML file can be retrieved from the test assembly and used as the return value from the mock object for the IWebDownloader interface. The Silverlight unit test below illustrates how the service class can be unit tested using the helper classes and the Silverlight mocking framework ‘Moq’.

[TestClass]
public class DiggSearchServiceTest : SilverlightTest
{
    [Asynchronous]
    [TestMethod]
    public void DiggSearchServiceSuccessTest()
    {
        DiggSearchService target = new DiggSearchService();

        StringBuilder sb = new StringBuilder();
        StringWriter sw = new StringWriter( sb );
        XDocument.Load( "/WrappedWebClient;component/Tests/stories.xml" ).Save( sw );

        var mock = new Mock<IWebDownloader>();
        mock.Setup( foo => foo.DownloadStringAsync( null ) ).Verifiable();
        target.WebDownloader = mock.Object;

        bool isLoaded = false;
        IEnumerable<DiggStory> result = null;
        target.SelectionCompleted += ( sender, e ) =>
        {
            isLoaded = true;
            //Get the results from the service
            result = e.EventData;
        };
        target.Run( "LSU" );

        EnqueueDelay( 1000 );
        //Mock the service request results
        EnqueueCallback( () => mock.Raise
            (
                bar => bar.DownloadStringCompleted += null,
                new WrappedDownloadStringCompletedEventArgs( sb.ToString(), null, false, null )
            ) );

        EnqueueConditional( () => isLoaded );
        EnqueueCallback( () => Assert.IsNotNull( result ) );
        EnqueueCallback( () => Assert.IsTrue( result.Count<DiggStory>() == 10 ) );
        EnqueueTestComplete();
    }
}

Suggestions welcome…

Download the source code with the helper classes here

Working with ESRI token secure services

Posted in ArcGIS, ESRI, GIS, Uncategorized, Web by viswaug on March 29, 2010

At the ESRI developer summit this past week, I ran into some people who were either having a hard time using ESRI token authentication or were leaving their systems vulnerable to attack through their use/abuse of long-lived tokens. I thought it might be useful to share one way that we have been using ESRI token secured services in our web mapping applications.

Token secured services require the client to request a token with their username & password, which should then be included in all future requests to access the services. The token provided to the user by AGS is valid only for the time period requested by the user, and the AGS server also applies an upper limit on how long the token can be valid.

One of the main reasons for trouble with using such token secured services in a web mapping application is that the user logs into the web application, not the AGS server(s) that the web application uses map services from. So, in order to use the map services in the web application, the user would have to log in (again) to the AGS server as well. Having the user log in again after they have already logged into the web application is highly undesirable. To prevent this, some may decide to use a long-lived token and hard-code the token into the web application, or hard-code the username & password used to access AGS services in the mapping client application. I don’t think I need to explain why hard-coding the username & password in the client web mapping application is dangerous. But the long-lived token still leaves the application highly vulnerable, since anybody who can read the URL being used to access the services has access to the long-lived token. Using that token, anybody can obtain access to the AGS services, since the only defense is the ClientID (the HTTP Referrer header) and that can be spoofed easily since it is never verified. Also, the long-lived token doesn’t expire often, which leaves attackers a lot of time to grab the token and access the secured AGS services.

To get around this, there is an easy way to set up the web application to use and better secure the AGS services. There are two main ways of sharing the username & password between the web application and the AGS server. The first is to have AGS and the web application share the membership/permission/roles datastore. In this case, the web application can use the same username & password combination to obtain a token from the AGS server. The second is to have all users of the web application use the same username & password to access the AGS services. The second way works because the user has already been authenticated by the web application and so can be trusted to access the AGS services as well. In this case, the username & password used to log in all authenticated web application users can be stored in the web application configuration file (web.config). This credential can be used to obtain a token from AGS. This is generally how Bing map services are handled too: the Bing credentials are stored in web.config and used to obtain a Bing token when the page with the map is loaded.

So, once the user logs into the web application, the username & password from the shared datastore/web.config can be used to make a request to the AGS ‘GetToken’ URL endpoint and obtain a short-lived token for AGS access. This token can then be sent down to the client as part of the HTML/ASPX page. Another technique is to write an HTTPHandler that accepts a GET request without a username & password, uses the credentials from the shared datastore/web.config to obtain a token to access AGS services, and sends the token down to the web application client. This method is secure because the HTTPHandler itself can be protected by the Windows/Forms authentication of the host web application.
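For illustration, the server-side token request is just an HTTP GET against the token endpoint. The URL below is a sketch only: the endpoint path and parameter names vary by AGS version, and the host and credentials shown are placeholders, so check your server’s token service documentation:

```
http://yourserver/arcgis/tokens?request=getToken&username=AGS_USER&password=AGS_PASS&expiration=60
```

The response body is the short-lived token, which the page or HTTPHandler can then relay to the mapping client.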

Another thing to note about AGS tokens is that AGS does NOT require a ‘Referrer’ (IP address/site URL) to generate a short-lived token (long-lived tokens do require one). If you are generating a token from the AGS token web page, you will have the option not to specify the ‘Referrer’ (ClientID), but if you are just making an HTTP request to the GetToken endpoint, you can obtain a short-lived token without the ClientID. When using short-lived tokens obtained without the ClientID, AGS does NOT enforce checks on where the calls originate from. Actually, this is the reason why Silverlight clients are currently able to consume token secured map/AGS services: Silverlight 3 and under do not include the ‘Referrer’ HTTP header on outgoing HTTP requests, so ClientID origin checks are not enforced on Silverlight API clients. This issue has been fixed in Silverlight 4.

Unfortunately, the authentication tokens generated by ASP.NET to secure web applications and those generated by AGS are produced using different techniques. The key used to generate the token is different: ASP.NET uses the machineKey from web.config and AGS uses a key from the AGS configuration file. If this weren’t the case, we could technically have the ASP.NET web application and AGS share the same token…

Quick lesson learnt implementing continuous zoom operations for the ESRI Silverlight API

Posted in ArcGIS, ESRI, GIS, Uncategorized by viswaug on February 1, 2010

One of the nicer navigation features that enhance user experience in mapping applications is continuous zoom in/out and pan. Also, users who use Google and Bing maps frequently expect the continuous zoom and pan experience in all mapping applications 🙂 The ESRI Silverlight API Toolkit does not include support for continuous zoom and pan operations. But implementing them for the ESRI Silverlight API is really easy with the DispatcherTimer class that was introduced in Silverlight 2.0. The DispatcherTimer class calls its ‘Tick’ event listeners on the UI thread, so developers don’t have to worry about accessing UI controls across threads. Implementing it is as simple as setting the map’s extent very frequently with a new extent calculated for the zoom in/out/pan operation. To provide a good experience, I had to reset the map’s extent every 10 milliseconds.

I had initially overlooked the fact that every time I reset the map’s extent, a map image request was being made for every DynamicLayer loaded on the map. The problem doesn’t occur with TiledMapServiceLayer(s) because requests for tile images only occur after the zoom/pan operation causes the map’s extent to exceed the extent of the tiles already loaded, and the browser also caches the tiles client-side. The excessive number of map image requests (one every 10 milliseconds) generated by the continuous zoom operation would bring any GIS server (and did bring our server) to its knees pretty quickly. There is no perfect solution to the problem here, in my humble opinion. But an acceptable one is to hide (set visibility to false) all the DynamicLayer(s) on the map when the continuous navigation operation begins and to reset the layers back to their previous state when it ends. Since the base map layers are TiledMapServiceLayer(s), they don’t pose the excessive-request problem and we can leave them visible. This actually provides an acceptable user experience during continuous navigation operations.

The code snippet below illustrates the implementation of a sample continuous zoom in & out operations.

Note – In Silverlight any control that inherits from ButtonBase like Button, CheckBox, RadioButton etc will not raise the ‘MouseLeftButtonDown’ and ‘MouseLeftButtonUp’ events even though the events are available on the classes.

public partial class MainPage : UserControl
{
    private DispatcherTimer _timer;
    Dictionary<Layer, bool> _mapLayersVisibilityState = null;

    public MainPage()
    {
        InitializeComponent();

        IDisposable subscription = null;
        subscription = myMap.Layers.GetLayersLoadCompleted().Subscribe
            (
                ( args ) =>
                {
                    txtStatus.Text = "All layers have loaded.";
                    subscription.Dispose();
                }
            );
        txtStatus.Text = "Loading Layers onto map …";
    }

    void zoomOut_MouseLeftButtonUp( object sender, MouseButtonEventArgs e )
    {
        if( _timer != null )
            _timer.Stop();
        EndContinuousOperation();
    }

    void zoomOut_MouseLeftButtonDown( object sender, MouseButtonEventArgs e )
    {
        BeginContinuousOperation();
        _timer = new DispatcherTimer();
        _timer.Interval = new TimeSpan( 0, 0, 0, 0, 10 );
        _timer.Tick += _zoomOutTimer_Tick;
        _timer.Start();
    }

    void zoomIn_MouseLeftButtonUp( object sender, MouseButtonEventArgs e )
    {
        if( _timer != null )
            _timer.Stop();
        EndContinuousOperation();
    }

    void zoomIn_MouseLeftButtonDown( object sender, MouseButtonEventArgs e )
    {
        BeginContinuousOperation();
        _timer = new DispatcherTimer();
        _timer.Interval = new TimeSpan( 0, 0, 0, 0, 10 );
        _timer.Tick += _zoomInTimer_Tick;
        _timer.Start();
    }

    void _zoomInTimer_Tick( object sender, EventArgs e )
    {
        ZoomMapByFactor( 0.99 );
    }

    void _zoomOutTimer_Tick( object sender, EventArgs e )
    {
        ZoomMapByFactor( 1.01 );
    }

    private void BeginContinuousOperation()
    {
        _mapLayersVisibilityState = GetMapLayersVisibilityState();
        HideAllDynamicMapLayers();
    }

    private void EndContinuousOperation()
    {
        SetMapLayersVisibilityState( _mapLayersVisibilityState );
    }

    private Dictionary<Layer, bool> GetMapLayersVisibilityState()
    {
        Dictionary<Layer, bool> visibilityStates = new Dictionary<Layer, bool>();
        myMap.Layers.ForEach<Layer>( lyr => visibilityStates.Add( lyr, lyr.Visible ) );
        return visibilityStates;
    }

    private void SetMapLayersVisibilityState( Dictionary<Layer, bool> visibilityStates )
    {
        if( visibilityStates == null )
            return;

        foreach( var entry in visibilityStates )
        {
            entry.Key.Visible = entry.Value;
        }
    }

    private void HideAllDynamicMapLayers()
    {
        myMap.Layers.ForEach<Layer>( lyr =>
        {
            if( lyr is DynamicLayer )
                lyr.Visible = false;
        } );
    }

    void ZoomMapByFactor( double factor )
    {
        if( myMap != null )
        {
            Envelope env = myMap.Extent;
            env.Expand( factor, myMap.RenderSize );
            myMap.Extent = env;
        }
    }
}

The code above makes use of the ‘Expand’ extension method for the ‘Envelope’ type provided below.

public static void Expand( this Envelope extent, double factor, Size mapSize )
{
    if( extent == null )
        throw new ArgumentNullException( "extent" );
    if( mapSize.Width <= 0 )
        throw new ArgumentOutOfRangeException( "mapSize.Width" );
    if( mapSize.Height <= 0 )
        throw new ArgumentOutOfRangeException( "mapSize.Height" );

    if( extent.XMax == extent.XMin )
    {
        double resolution = extent.Width / mapSize.Width;
        double xVal = extent.XMin;
        //using a 10 pixel buffer
        extent.XMin = xVal - 10 * resolution;
        extent.XMax = xVal + 10 * resolution;
    }

    if( extent.YMax == extent.YMin )
    {
        double resolution = extent.Height / mapSize.Height;
        double yVal = extent.YMin;
        //using a 10 pixel buffer
        extent.YMin = yVal - 10 * resolution;
        extent.YMax = yVal + 10 * resolution;
    }

    MapPoint center = extent.GetCenter();
    double dX = ( extent.Width / 2 ) * factor;
    double dY = ( extent.Height / 2 ) * factor;

    extent.XMin = center.X - dX;
    extent.XMax = center.X + dX;
    extent.YMin = center.Y - dY;
    extent.YMax = center.Y + dY;
}
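The arithmetic the method performs can be sketched in plain Python. The `expand_extent` helper below is hypothetical (not part of the ESRI API): it recenters the extent on its midpoint and scales each half-dimension by the factor, which is exactly what the last block of `Expand` does.

```python
def expand_extent(xmin, ymin, xmax, ymax, factor):
    """Scale an extent about its center by `factor` (factor < 1 zooms in,
    factor > 1 zooms out), mirroring the Expand extension method above."""
    cx = (xmin + xmax) / 2.0
    cy = (ymin + ymax) / 2.0
    dx = ((xmax - xmin) / 2.0) * factor
    dy = ((ymax - ymin) / 2.0) * factor
    return (cx - dx, cy - dy, cx + dx, cy + dy)

# A factor of 2 doubles the width and height around the same center.
print(expand_extent(0, 0, 10, 10, 2.0))  # → (-5.0, -5.0, 15.0, 15.0)
```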

Enhance your development experience with the ESRI Silverlight API using the Microsoft Reactive Extensions

Posted in .NET, ArcGIS, C#, ESRI, Uncategorized by viswaug on January 31, 2010

In my humble opinion, Reactive Extensions is one of the most exciting libraries to come out of Microsoft lately. And that is saying a lot, since quite a few good libraries have been coming out of Redmond recently. Rx for .NET takes a lot of the pain out of asynchronous programming and drastically reduces the amount of code required to handle and orchestrate responses from multiple web services. Given that Silverlight is a client-side technology, most Silverlight applications make calls to web services and display the results to the user. A common scenario is one where the Silverlight client needs to make calls to multiple web services and orchestrate further calls based on the responses from previous ones. Handling this can get quite complicated when you have to maintain contexts to store the response states of the various web services. This is where Rx for .NET comes in really handy. The following scenario and solution illustrate how Rx for .NET can help when programming with the ESRI Silverlight API.

When ArcGIS map service layers are added to the Map control in the ESRI Silverlight API, the control makes an HTTP request for every layer to fetch the map service's metadata. Depending on whether that request succeeds or fails, the layer object raises either an 'Initialized' or an 'InitializationFailed' event. So, a layer 'load' can be considered complete when either of those two events has fired. If we want to kick off another sequence of operations once every layer on the map has finished loading, say, to populate a legend or table of contents, then we have to wait for each layer's 'load' to complete (success or failure). Orchestrating a request to the legend web service in this scenario can get quite cumbersome, but Rx for .NET simplifies things considerably. Follow the code sample below to accomplish the task.
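To see the orchestration pattern outside of Rx, here is a minimal, hypothetical sketch in Python's asyncio: each "layer" completes on whichever of its two events fires first (success or failure), and the continuation runs only after all layers have completed. This is the same first-of-two-events-then-join shape that the Rx code below expresses with 'Merge', 'Take' and 'ForkJoin'.

```python
import asyncio

async def first_event(success_evt, failure_evt):
    """A layer 'load' completes on whichever event fires first
    (analogous to Merge + Take(1) on the two event observables)."""
    s = asyncio.create_task(success_evt.wait())
    f = asyncio.create_task(failure_evt.wait())
    done, pending = await asyncio.wait({s, f}, return_when=asyncio.FIRST_COMPLETED)
    for t in pending:
        t.cancel()
    return "Initialized" if s in done else "InitializationFailed"

async def main():
    # Three simulated layers, each with a success event and a failure event.
    layers = [(asyncio.Event(), asyncio.Event()) for _ in range(3)]
    waits = [first_event(s, f) for s, f in layers]

    async def fire():
        layers[0][0].set()   # layer 0 initializes
        layers[1][1].set()   # layer 1 fails to initialize
        layers[2][0].set()   # layer 2 initializes

    # The inner gather() waits for every layer to complete (analogous to ForkJoin).
    results, _ = await asyncio.gather(asyncio.gather(*waits), fire())
    return results

print(asyncio.run(main()))  # → ['Initialized', 'InitializationFailed', 'Initialized']
```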

The two extension methods below let us watch for a single layer's 'load' to complete and for every layer in a LayerCollection to complete. As you can see, the 'Merge', 'Take' and 'ForkJoin' extension methods on IObservable combine the events into the right sequence.

public static class ExtensionMethods
{
    public static IObservable<IEvent<EventArgs>> GetLayerLoadCompleted( this Layer layer )
    {
        if( layer == null )
            throw new ArgumentNullException( "layer" );

        //if the layer has already initialized, return an observable that completes immediately
        if( layer.IsInitialized )
        {
            IEvent<EventArgs>[] events = new IEvent<EventArgs>[ 1 ] { null };
            return events.ToObservable<IEvent<EventArgs>>();
        }

        IObservable<IEvent<EventArgs>> oSuccess = Observable.FromEvent<EventArgs>
            (
                ev => layer.Initialized += ev,
                ev => layer.Initialized -= ev
            );

        IObservable<IEvent<EventArgs>> oFailed = Observable.FromEvent<EventArgs>
            (
                ev => layer.InitializationFailed += ev,
                ev => layer.InitializationFailed -= ev
            );

        //the layer 'load' is complete when either event fires first
        IObservable<IEvent<EventArgs>> obsCompleted = oSuccess.Merge<IEvent<EventArgs>>( oFailed );
        IObservable<IEvent<EventArgs>> oFirst = obsCompleted.Take<IEvent<EventArgs>>( 1 );

        return oFirst;
    }

    public static IObservable<IEvent<EventArgs>[]> GetLayersLoadCompleted( this IEnumerable<Layer> layers )
    {
        if( layers == null )
            throw new ArgumentNullException( "layers" );

        //ForkJoin waits for every layer's 'load' observable to complete
        return Observable.ForkJoin<IEvent<EventArgs>>
            (
                from item in layers
                select item.GetLayerLoadCompleted()
            );
    }
}

And now we can add a handler that executes once all the layers have loaded, as shown below.

public MainPage()
{
    InitializeComponent();

    IDisposable subscription = null;
    subscription = myMap.Layers.GetLayersLoadCompleted().Subscribe
        (
            ( args ) =>
            {
                HtmlPage.Window.Alert( "All layers have loaded." );
                subscription.Dispose();
            }
        );
}

ESRI SL API Contrib – Point in polygon & buffer point for the WGS84 spatial reference system

Posted in Uncategorized by viswaug on May 27, 2009

I just added some utility methods to the ESRI Silverlight API Contrib project that do the following:

  1. Determine if a point is inside a polygon. Find the code here
  2. Buffer a point in the WGS84 spatial reference system. Find the code here
  3. Find the midpoint of two points in the WGS84 spatial reference system. Find the code here
  4. Find the end point given an origin point, a distance and an angle. Find the code here

The above methods should help you avoid some HTTP requests back to the ArcGIS Server to perform these spatial operations, thus reducing the load on your ArcGIS Server and also improving the performance of your web application by sticking to Steve Souders's performance rule number one (make fewer HTTP requests), even though this is a Silverlight client.
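For illustration, the point-in-polygon test from item 1 can be done entirely client-side with the standard ray-casting algorithm. The sketch below is a hypothetical Python version of that idea, not the actual ESRI Silverlight API Contrib code.

```python
def point_in_polygon(x, y, polygon):
    """Ray-casting test: cast a ray to the right of (x, y) and count how
    many polygon edges it crosses; an odd count means the point is inside.

    `polygon` is a list of (x, y) vertex tuples; closure is implicit.
    """
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        # Does this edge straddle the horizontal line through y?
        if (y1 > y) != (y2 > y):
            # X coordinate where the edge crosses that line.
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

square = [(0, 0), (10, 0), (10, 10), (0, 10)]
print(point_in_polygon(5, 5, square))   # → True
print(point_in_polygon(15, 5, square))  # → False
```

Note that this treats coordinates as planar, which is a reasonable approximation for small polygons in WGS84 but degrades near the poles and across the antimeridian.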