Vishful thinking…

Raster statistics + SIMD === winning

Posted in .NET, C#, GIS by viswaug on February 8, 2016

Recently, the .NET Framework 4.6 added support for leveraging the SIMD (Single Instruction, Multiple Data) capabilities that have been present in our CPUs since the early 2000s, which can deliver faster performance for calculating raster statistics.

We were using the .NET bindings for GDAL to calculate a few simple raster statistics for reporting needs. I got curious about the kind of performance boost we could see if we leveraged SIMD for this purpose. So, I created a little console app to explore the performance gains. Here is what I observed.

To start off, we need to check if our CPU supports SIMD instructions. Although I expect that everyone’s computer will support SIMD, here is a simple way to check: download and run CPU-Z, and keep an eye out for the SSE instructions as shown below.

[Screenshot: CPU-Z showing SSE instruction support]

Clone the git repo or just download the executable from here, and run the SIMD.exe file with a GeoTIFF file as a parameter. Needless to say, the console app is pretty rudimentary and was written solely to observe SIMD performance gains. I would highly encourage you to clone the repo and explore further if you are interested. The app currently just reads the entire raster image into memory and sums all the values to produce a total. Not very useful, but I am hoping it can get you started in the right direction to explore further. The amount of code required to modify your current app to leverage SIMD should be manageable and shouldn’t require a complete rewrite. Also, SIMD support in the .NET Framework currently only works under 64-bit and in release mode. That had me stumped for a bit. But there is a very useful “IsHardwareAccelerated” flag that helps you find out whether your vectorized code is actually reaping the benefits of the SIMD hardware.

Raster X size: 28555
Raster Y size: 17939
Raster cell size: 512248145
Total is: 146097271
Time taken: 4568
SIMD accelerated: True
Another Total is: 146097271
Time taken: 1431

Here is the result from a run on a sample raster image. With SIMD vectors, the execution time went from 4568 to 1431 milliseconds, almost a third of the original time. Pretty tempting, isn’t it?

The performance measure doesn’t really prove anything and is purely meant to satisfy my own curiosity. Your mileage may vary. To eke out any gains from SIMD vectors, you will need to be processing an array large enough to justify the overhead of creating the vectors.
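To give a feel for what the vectorized path looks like, here is a minimal sketch of a SIMD sum over raster cell values, assuming the cells have already been read into a flat float array (for example via GDAL’s ReadRaster). The class and method names are my own, not the ones in the repo.

```csharp
using System;
using System.Numerics; // Vector<T>; needs .NET 4.6+, 64-bit, Release build

static class RasterSum
{
    // Sum raster cell values using Vector<float>.
    // Vector.IsHardwareAccelerated tells you whether this actually
    // runs on SIMD hardware or falls back to scalar code.
    public static float Sum(float[] values)
    {
        int width = Vector<float>.Count;          // lanes per SIMD register
        var acc = Vector<float>.Zero;
        int i = 0;
        for (; i <= values.Length - width; i += width)
            acc += new Vector<float>(values, i);  // one add covers 'width' cells

        float total = 0f;
        for (int lane = 0; lane < width; lane++)  // horizontal reduce
            total += acc[lane];
        for (; i < values.Length; i++)            // scalar tail
            total += values[i];
        return total;
    }
}
```

The scalar tail loop handles arrays whose length is not a multiple of the vector width.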

I realize that pretty much everything needs to be available in JavaScript to be relevant these days 🙂 So, if JavaScript is your cup of tea, Firefox has experimental support for SIMD vectors. Soon we will be running raster statistics in the browser. Actually, we currently are running some raster statistics there, but without the SIMD bits. Hopefully, I will get around to writing about that soon.

Clone the repo here


MBTilesViewer

Posted in .NET, ArcGIS, C#, ESRI, GIS, Utilities by viswaug on June 28, 2011

I also created this bare-bones MBTiles cache viewer to view tile caches in the MBTiles format. The application does not do much, just displays the tile cache on a map. To view an MBTiles file, just fire up the viewer and start dragging and dropping MBTiles cache files onto it. You can drag and drop multiple MBTiles cache files at the same time if needed. You can also create MBTiles caches with the TileCutter. The viewer was created using the ESRI WPF map control. If you would like to see this viewer do more, let me know 🙂

MBTilesViewer can be downloaded here.

TileCutter update – With support for OSM and WMS map services

Posted in .NET, ArcGIS, C#, ESRI, GIS, OpenSource, Utilities by viswaug on June 28, 2011

I just added support for creating MBTiles caches from WMS map services and for downloading OSM tiles into the MBTiles format. An MBTiles cache for a WMS map service would improve map rendering performance, but why did I add support for OSM tile sets? Well, they will come in handy for disconnected/offline use cases. So, here are some usage examples:

ArcGIS Dynamic Map Service:

TileCutter.exe -z=7 -Z=9 -x=-95.844727 -y=35.978006 -X=-88.989258 -Y=40.563895 -o="C:\LocalCache\ags.s3db" -t=agsd -m="http://server.arcgisonline.com/ArcGIS/rest/services/I3_Imagery_Prime_World_2D/MapServer"

WMS Map Service 1.1.1:

TileCutter.exe -z=7 -Z=9 -x=-95.844727 -y=35.978006 -X=-88.989258 -Y=40.563895 -o="C:\LocalCache\wms111.s3db" -t=wms1.1.1 -m="http://sampleserver1-350487546.us-east-1.elb.amazonaws.com/ArcGIS/services/Specialty/ESRI_StateCityHighway_USA/MapServer/WMSServer"

WMS Map Service 1.3.0:

TileCutter.exe -z=7 -Z=9 -x=-95.844727 -y=35.978006 -X=-88.989258 -Y=40.563895 -o="C:\LocalCache\wms130.s3db" -t=wms1.3.0 -m="http://sampleserver1-350487546.us-east-1.elb.amazonaws.com/ArcGIS/services/Specialty/ESRI_StateCityHighway_USA/MapServer/WMSServer"

OSM:

TileCutter.exe -z=7 -Z=9 -x=-95.844727 -y=35.978006 -X=-88.989258 -Y=40.563895 -o="C:\LocalCache\osm.s3db" -t=osm -m="http://tile.openstreetmap.org"

And always just type “TileCutter -h” for usage information.

Want to customize the parameters with which the maps are being generated? Just use the “-s” command line option and specify the settings in a query string format.

TileCutter.exe -z=7 -Z=9 -x=-95.844727 -y=35.978006 -X=-88.989258 -Y=40.563895 -o="C:\LocalCache\ags.s3db" -t=agsd -m="http://server.arcgisonline.com/ArcGIS/rest/services/I3_Imagery_Prime_World_2D/MapServer" -s="transparent=true&format=jpeg"

Also, if some of the tile requests result in errors, the level, column, row and error message information would be logged into a text file in the same directory as the MBTiles cache.

And now, for the best new feature of TileCutter: the program does not store duplicate tiles. That is, if the area you are caching has a lot of empty tiles in the ocean etc., the MBTiles cache created by TileCutter will store only one tile image for all of those duplicates. Should help save disk space 🙂
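I have not described TileCutter’s internals here, but one common way to implement this kind of de-duplication is to key each stored tile image by a hash of its bytes, so identical images map to the same stored row. A hypothetical sketch (the class and names are illustrative, not TileCutter’s actual code):

```csharp
using System;
using System.Collections.Generic;
using System.Security.Cryptography;

// Illustrative tile de-duplication: store each unique tile image once,
// keyed by an MD5 hash of its bytes; the tile index references the id.
class TileDeduplicator
{
    readonly Dictionary<string, int> _hashToTileId = new Dictionary<string, int>();
    int _nextId;

    // Returns the id of the stored tile image; duplicate images share an id.
    public int Store(byte[] tileBytes)
    {
        using (var md5 = MD5.Create())
        {
            string key = Convert.ToBase64String(md5.ComputeHash(tileBytes));
            int id;
            if (!_hashToTileId.TryGetValue(key, out id))
            {
                id = _nextId++;
                _hashToTileId[key] = id;
                // here the actual tile bytes would be written to storage once
            }
            return id;
        }
    }
}
```

With this scheme, thousands of identical blue ocean tiles cost one stored image plus small index rows.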

TileCutter can be downloaded here

CI Starter Kit

Posted in .NET, Agile by viswaug on March 30, 2010

I have made the template following the project directory structure I described in my ESRI Developer Summit 2010 Continuous Integration talk, along with the build scripts to go with it, ready for download here. Couple that with the Visual Studio shortcuts I described here, and that should help any project get started quickly. In later posts, I will detail how to get the above project and build scripts in the template set up on the TeamCity Continuous Integration server.

Enhance your development experience with the ESRI Silverlight API using the Microsoft Reactive Extensions

Posted in .NET, ArcGIS, C#, ESRI, Uncategorized by viswaug on January 31, 2010

In my humble opinion, Reactive Extensions is one of the most exciting libraries to come out of Microsoft lately. And that is saying a lot, since there have been quite a few good libraries coming out of Redmond recently. Rx for .NET takes a lot of the pain out of asynchronous programming and has drastically reduced the amount of code required to handle and orchestrate responses from multiple web services. Given that Silverlight is a client-side technology, most Silverlight applications make calls to web services and display results to the user as needed. A common scenario is one where the Silverlight client needs to make multiple calls to multiple web services and orchestrate calls to further web services based on the responses from previous ones. Handling this can get quite complicated when you have to maintain contexts to store the response states from the various web services. This is where Rx for .NET comes in really handy. The following scenario and solution should illustrate how Rx for .NET can help when programming with the ESRI Silverlight API, for example.

When ArcGIS map service layers are added to the Map control in the ESRI Silverlight API, the map control makes an HTTP request for every layer added, to fetch the metadata for the map service. Depending on whether the request for the service metadata succeeds or fails, the layer object raises either an ‘Initialized’ event or an ‘InitializationFailed’ event. So, a layer ‘load’ can be considered complete when either ‘Initialized’ or ‘InitializationFailed’ has fired. If we want to initiate another sequence of operations once the ‘load’ of all the layers on the map has completed, say to populate a legend/table of contents, then we have to wait for every layer ‘load’ to complete (success or failure). Making a request to the legend web service in this scenario can get quite cumbersome, but Rx for .NET can simplify things quite a bit. Follow the code sample below to accomplish the task.

The two extension methods below will help us watch for the ‘load’ complete for a layer and to watch for all the layers in a LayerCollection to complete. As you can see, we use the ‘Merge’, ‘Take’ and the ‘ForkJoin’ observable extension methods on the IObservable to watch for the right sequence of events.

public static class ExtensionMethods
{
    public static IObservable<IEvent<EventArgs>> GetLayerLoadCompleted( this Layer layer )
    {
        if( layer == null )
            throw new ArgumentNullException( "layer" );

        // If the layer has already initialized, yield a single (null) event immediately.
        if( layer.IsInitialized )
        {
            IEvent<EventArgs>[] events = new IEvent<EventArgs>[ 1 ] { null };
            return events.ToObservable<IEvent<EventArgs>>();
        }

        // Note: the second lambda passed to FromEvent must detach the handler.
        IObservable<IEvent<EventArgs>> oSuccess = Observable.FromEvent<EventArgs>
            (
                ev => layer.Initialized += ev,
                ev => layer.Initialized -= ev
            );

        IObservable<IEvent<EventArgs>> oFailed = Observable.FromEvent<EventArgs>
            (
                ev => layer.InitializationFailed += ev,
                ev => layer.InitializationFailed -= ev
            );

        // A layer 'load' is complete when either event fires; take the first occurrence.
        IObservable<IEvent<EventArgs>> obsCompleted = oSuccess.Merge<IEvent<EventArgs>>( oFailed );

        return obsCompleted.Take<IEvent<EventArgs>>( 1 );
    }

    public static IObservable<IEvent<EventArgs>[]> GetLayersLoadCompleted( this IEnumerable<Layer> layers )
    {
        if( layers == null )
            throw new ArgumentNullException( "layers" );

        // ForkJoin waits for every layer's 'load completed' observable to yield.
        return Observable.ForkJoin<IEvent<EventArgs>>
            (
                from item in layers
                select item.GetLayerLoadCompleted()
            );
    }
}

And now, we can add a handler to be executed when all the layers have loaded like below.

public MainPage()
{
    InitializeComponent();

    IDisposable subscription = null;
    subscription = myMap.Layers.GetLayersLoadCompleted().Subscribe
        (
            ( args ) =>
            {
                HtmlPage.Window.Alert( "All layers have loaded." );
                subscription.Dispose();
            }
        );
}

Hunting down memory leaks in Silverlight

Posted in .NET, Silverlight by viswaug on July 13, 2009

Recently, I was noticing some memory issues with a Silverlight application I was working on. I was creating a dialog window (user control) dynamically where the user could edit and save data. The dialog user control was data-bound to a ‘ViewModel’ type which populates the data controls in the dialog and is then used to populate and save a business object. After the user clicked the OK or Cancel button, I was disposing of the user control by setting its reference to NULL. But when I repeated the process of opening and closing the dialog multiple times, the memory used by Internet Explorer kept going up. This made me suspicious that even though I was setting the user control’s reference to NULL, the user control was never being released from memory.

To confirm that multiple instances of the user control were still in memory, I resorted to using “windbg”. Following the steps below, you can confirm and debug memory leaks in Silverlight and also in the .NET CLR.

  1. Download and install windbg from here.
  2. Run the Silverlight application and bring it to a state where you think a memory problem exists. In my case above, I had to open and close the suspect dialog multiple times.
  3. Run windbg and attach to the Internet Explorer process running your Silverlight application.

    [Screenshots: attaching windbg to the Internet Explorer process]

  4. Load the SOS debugging extension for Silverlight. This is installed with the Silverlight SDK on your system. The ‘SOS.dll’ is found under the Silverlight installation directory. For me it is at “C:\Program Files\Microsoft Silverlight\2.0.40115.0\sos.dll”. To load the SOS debugging extension execute the command “.load C:\Program Files\Microsoft Silverlight\2.0.40115.0\sos.dll” in the windbg window.

    [Screenshot: loading the SOS extension in windbg]

  5. Run the “!dumpheap -stat” command to view the objects in memory and the count of their instances.
  6. Run the “!dumpheap -stat -type [TypeName]” command to view the number of instances of the specified type in memory. In my case, I found that the number of instances of the dialog user control in memory was always equal to the number of times I had opened and closed the dialog. This indicated that the user control objects were not getting released from memory even though I had set their references to NULL.
  7. Run the “!dumpheap -type [TypeName]” command to view all the memory addresses of the object’s instances.
  8. Run the “!gcroot [Address]” command to view how the instance of the object can be reached. In other words, it tells us what is holding on to the reference of the object and thus keeping the object in memory without being cleaned up by the garbage collector. The output here is hard to read, but the pain of going through this exercise will definitely pay off when the memory leak in the application has been fixed.

In my case, it turned out that the ‘ViewModel’ object the dialog user control was being data bound to was holding on to a reference to the user control object. So, to fix the memory leak, all I had to do was to set the ‘DataContext’ property of the user control to NULL before setting the user control reference itself to NULL.
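The reference cycle behind the leak can be simulated with plain objects, no Silverlight required. This is only an illustration of the pattern described above; all names are hypothetical:

```csharp
using System;

// The ViewModel holds a reference back to its view (in Silverlight this
// happened through data binding), so nulling only the view reference
// does not free the control; the DataContext must be cleared first.
class ViewModel { public object View; }

class DialogView
{
    public ViewModel DataContext;
    public DialogView(ViewModel vm) { DataContext = vm; vm.View = this; }
}

class Host
{
    DialogView _dialog;

    public void Open(ViewModel vm) { _dialog = new DialogView(vm); }

    // Leaky close: vm.View still pins the control in memory.
    public void CloseLeaky() { _dialog = null; }

    // Fixed close: break the ViewModel -> view reference first.
    public void CloseFixed()
    {
        _dialog.DataContext.View = null;
        _dialog.DataContext = null;
        _dialog = null; // now nothing keeps the view alive
    }
}
```

In the real application the equivalent of CloseFixed was simply setting the user control’s ‘DataContext’ to NULL before releasing the control itself.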

Automated Build Starter Kit

Posted in .NET, Agile by viswaug on February 18, 2009

Keeping with the automated build topic like the last two posts, you can download an “Automated Build Starter Kit” here (click image below)

Automated Build Starter Kit

The goal of this starter kit is to enable you to quickly set up an automated build for any of your .NET projects using NAnt. To get started with the kit, all you need to do is unzip the contents of the zip file and rename the folder to something like the name of your project. And that’s it, you are done. You can build the sample project in the zip file by opening a command prompt at that directory location and executing the command “go build”. The zip file also contains a sample web application and a sample web site. To publish the sample web application to IIS on the current machine, execute “go publishWebApp”; to publish the sample web site, execute “go publishWebSite”. To publish both, execute “go publish”. The build script also automatically registers the published locations as virtual directories with IIS. Please see the ReadMe.txt file in the zip contents for more information.

Launching your automated build scripts from within Visual Studio

Posted in .NET, Agile by viswaug on February 13, 2009

In my last post, I talked about the benefits of using a common directory structure for all your projects. Well, here is another good one. I used to open up a command prompt and change directories to the root of the project to launch my build scripts every time I needed to build. Opening up the command prompt to launch the build can get old pretty soon. But I was able to add a couple of buttons to my Visual Studio toolbar to launch my automated build and deploy scripts, eliminating the need for the command prompt. Adding commands to the Visual Studio toolbar is pretty easy and can be done as shown below. Go to the “Tools -> External Tools” item on the Visual Studio menu bar.

The batch file I use to launch my automated build is always called “Go.bat”, resides at the root of the project directory, and its contents look like this:

@tools\nant\NAnt.exe -buildfile:MyProject.build %*

So, I just need to launch the “Go.bat” file with the project root as the working directory and the right NAnt targets as arguments. Since my “Go.bat” file and my Visual Studio solution file reside in the same directory (the project root), I can set up my external commands as below.
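Reconstructed from the description above (the original screenshots are missing here), the two External Tools entries look roughly like this:

```text
Title:             Go Build
Command:           $(SolutionDir)\Go.bat
Arguments:         build
Initial directory: $(SolutionDir)
Use Output Window: checked

Title:             Go Publish
Command:           $(SolutionDir)\Go.bat
Arguments:         publish
Initial directory: $(SolutionDir)
Use Output Window: checked
```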

Make sure to check the “Use Output Window” option so that you can watch the output from your build script in your Visual Studio output window. Cool huh…

[Screenshots: the “Go Build” and “Go Publish” External Tools entries]

I can set up two external tools that invoke the “Go.bat” file with the working directory set to the path where the Visual Studio solution file resides ($(SolutionDir)/). Note that I am passing “build” and “publish” as the arguments to “Go.bat” to run either the “build” NAnt target or the “publish” NAnt target. Once the two entries have been set up here, they will appear under the “Tools” menu in Visual Studio.

[Screenshot: the new commands in the Tools menu]

The last thing left to do is the trickiest part, and I am not sure why Microsoft has left it this way. Go to “Tools -> Customize” and the dialog below should pop up. Select “Tools” in the “Categories” list on the left, and this should list the external commands you just added on the right. But the strange part is that the external tools will not be listed with the names you gave them, just with generic names like “External Command 2” and “External Command 3”. The correct names will not show up until you drag the tools from the dialog and drop them onto one of your Visual Studio toolbars. Weird… but it works. I had a new toolbar set up for these two buttons and dropped them on there.

[Screenshot: the Tools -> Customize dialog]

So, I am happily building and deploying my projects with a single click from within Visual Studio.

[Screenshot: the build and publish buttons on the toolbar]

Project Directory Structure

Posted in .NET, Agile by viswaug on February 13, 2009

The directory structure that is used for projects can either make your life a whole lot easier or make it a pain when you are setting up your automated builds. Apart from being a crucial part of my automated build workflow, it also helps me make sure that I can check out a project on a new machine and have it be buildable right away, both with the build script and in Visual Studio. Here is the project directory structure that I have been using for a while now. I have been very satisfied with it. Since I use the same structure for all my projects, I can use a template build script that works right away after changing just a couple of values in it.

What directory structure do you use for your projects? Thoughts? Suggestions?

[Screenshot: the project directory structure]

Inserting spatial data in SQL Server 2008

Posted in .NET, C#, GIS by viswaug on September 29, 2008

I have been working on inserting data from FeatureClasses into SQL Server 2008 tables. There are a lot of examples online showing how spatial data can be inserted into SQL Server 2008 tables. Here is one from Hannes.

I took a cue from there and proceeded to do what every good programmer should do… parameterize the INSERT query. Here is a simplified version of the scenario I was trying to tackle…

sqlCommand.CommandText = "INSERT INTO foo (id, geom) VALUES (1, @g)";

sqlCommand.InsertParameter("@g", "POLYGON ((0 0, 10 0, 10 10, 0 10, 0 0))");

The above INSERT command worked fine. But I also wanted to insert the geometry or geography with the WKID value from the FeatureClass. So I changed the above INSERT statement to use the geometry type’s static method and provided the WKID as below:

sqlCommand.InsertParameter("@g", "geometry::STGeomFromText(POLYGON ((0 0, 10 0, 10 10, 0 10, 0 0)), 0)");

But that didn’t go quite as expected. The INSERT statement was throwing an error saying that it did not recognize ‘geometry::STGeomFromText’ and was expecting ‘POINT, POLYGON, LINESTRING…’ etc.

System.FormatException: 24114: The label geometry::STGeomFrom in the input well-known text (WKT) is not valid. Valid labels are POINT, LINESTRING, POLYGON, MULTIPOINT, MULTILINESTRING, MULTIPOLYGON, or GEOMETRYCOLLECTION.

This led me to use the Udt SqlParameter type to get around the problem, as shown below. This way I could use the SqlGeometry type in the ‘Microsoft.SqlServer.Types’ namespace and set the SRID directly on its ‘STSRID’ property. The ‘SqlGeometry’ and ‘SqlGeography’ types can be found in the Feature Pack for SQL Server 2008, available here. The SqlGeometry reference can then be used as the value for the SqlParameter.

SqlParameter parameter = new SqlParameter( parameterName, value );

parameter.Direction = ParameterDirection.Input;
parameter.ParameterName = parameterName;
parameter.SourceColumn = sourceColumn;
parameter.SqlDbType = SqlDbType.Udt;
parameter.UdtTypeName = "GEOMETRY";
parameter.SourceVersion = DataRowVersion.Current;

command.Parameters.Add( parameter );

The ‘UdtTypeName’ should be set to ‘GEOGRAPHY’ for inserting geography fields.

And it worked like a charm.
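Putting the pieces together, here is a sketch of what the full parameterized insert might look like. The table and column names are the example ones from above, and the helper method name is mine; this assumes a reference to the Microsoft.SqlServer.Types assembly from the Feature Pack.

```csharp
using System.Data;
using System.Data.SqlClient;
using System.Data.SqlTypes;
using Microsoft.SqlServer.Types; // from the SQL Server 2008 Feature Pack

static class SpatialInsert
{
    // Parse the WKT on the client, set the SRID, and pass the
    // SqlGeometry as a Udt parameter instead of embedding it in WKT.
    public static void InsertPolygon(SqlConnection connection, int id, string wkt, int srid)
    {
        SqlGeometry geom = SqlGeometry.STGeomFromText(new SqlChars(wkt), srid);

        using (SqlCommand command = connection.CreateCommand())
        {
            command.CommandText = "INSERT INTO foo (id, geom) VALUES (@id, @g)";
            command.Parameters.AddWithValue("@id", id);

            SqlParameter parameter = new SqlParameter("@g", geom);
            parameter.SqlDbType = SqlDbType.Udt;
            parameter.UdtTypeName = "GEOMETRY"; // "GEOGRAPHY" for geography columns
            command.Parameters.Add(parameter);

            command.ExecuteNonQuery();
        }
    }
}
```

Because the SRID is carried by the SqlGeometry instance itself, there is no need to smuggle geometry::STGeomFromText into the parameter value.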

Update: Heard from Morten that using the SqlGeography and SqlGeometry types is almost twice as fast as using WKB (Well-Known Binary). Using WKB is itself faster than using WKT (Well-Known Text).
