JSON over XML?
I am pretty sure a whole lot of people have faced this question for their web applications, and a while ago I faced it myself. For the kind of web application I was working on, JSON seemed like the clear winner. It was easy to work with in JavaScript, and for scenarios like sending a dataset over to the browser, JSON seemed to be a great option for reducing the payload.
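To make the payload point concrete, here is a rough illustration: the same (made-up) record serialized as XML and as JSON. The exact savings depend on the data, but the closing tags alone add up quickly.

```javascript
// The same hypothetical record serialized both ways; the field names
// are invented for illustration only.
var xml  = '<user><username>jdoe</username><realname>John Doe</realname></user>';
var json = '{"username":"jdoe","realname":"John Doe"}';

// For this record, the JSON string is 41 characters vs 67 for the XML:
// every <tag></tag> pair costs roughly twice the field name, while JSON
// pays for each key only once.
var saving = xml.length - json.length;
```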
But XML is a much more mature technology and has supporting technologies like XSLT that make working with XML easier. JavaScript libraries like Sarissa can convert XML data from the server to HTML using XSLT transformations; XSLT can be considered a ‘templating language’ for XML. No such mature templating language exists for converting JSON to HTML. Libraries like JSONT have popped up, but none is a complete or well-tested solution like XSLT. So that is a real roadblock for blindly adopting JSON over XML right away.
Also, the existing JavaScript templating languages (JSONT, JSLT) are inherently slower since they make extensive use of ‘eval’ to transform the data, and they get really hard to read (especially for .NET-pampered developers like me). This post by John Resig goes over some metrics that help with the JSON/XML decision. Shamelessly stealing from his post, the following graph illustrates the advantage of transferring JSON data over XML (25–1600 data records).
But then again, all that performance gain is blown away by the poor performance of JSONT in transforming the JSON data to HTML. The XSLT transformation from XML to HTML performs much better than the JSONT transformation, as illustrated in the graph below, again from John Resig’s post (25–200 data records).
John stops there with his analysis, but I was curious how the performance would be affected if I wrote my own JavaScript to transform JSON to HTML. Does hand-coding the transformation in JavaScript help performance at all? To find out, I extended John Resig’s test script to output metrics for transforming JSON to HTML in plain JavaScript. Get the additional test files here.
Time (in seconds) per run:

Test Runs         | 100     | 200      | 400       | 1600
------------------|---------|----------|-----------|------------
XML / XSLT        | 0.01014 | 0.010845 | 0.0116225 | 0.01213875
JSON / JSONT      | 0.01919 | 0.01958  | 0.0200075 | 0.02013375
JSON / JavaScript | 0.00421 | 0.004215 | 0.00394   | 0.004153125
The figures above illustrate that transforming JSON data to HTML with hand-written JavaScript is much faster than using either XSLT or JSONT. So, if you are willing to dish out the JavaScript code to do the transformation yourself, JSON seems to be a better alternative than XML.
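For reference, the hand-coded transform in my test is nothing fancy; a minimal sketch of the approach looks like this (the data shape here is hypothetical and simpler than the records in John Resig's actual test script):

```javascript
// Minimal sketch of a hand-coded JSON-to-HTML transform. Unlike JSONT,
// there is no eval() involved -- just a loop and string building.
function usersToHtml(users) {
  // Collect markup fragments in an array and join once at the end;
  // repeated string concatenation was a notorious performance trap in
  // the JavaScript engines of the day.
  var parts = ["<table>"];
  for (var i = 0; i < users.length; i++) {
    parts.push("<tr><td>" + users[i].username +
               "</td><td>" + users[i].realname + "</td></tr>");
  }
  parts.push("</table>");
  return parts.join("");
}
```

The result string can then be assigned to an element's innerHTML in one shot, which is also cheaper than building the rows via DOM calls.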
JSON Developer Utilities
I have been using a couple of utilities to make my life easier when developing applications with JSON, and they might come in handy if you are doing the same. The one I use most is JSON Viewer. This little tool comes as both a standalone application and a Visualizer for Microsoft Visual Studio .NET. I have listed it on my developer utilities page. See some screenshots below.
If you are not interested in installing JSON Viewer, you can use a similar tool called JSONEditor online.
Tiling Vector Data for VirtualEarth applications
What does tiling vector data even mean? Well, I have wondered about the ability of the browser and the DOM to handle large vector datasets uploaded to VirtualEarth. In fact, the Shape2VE project I undertook was a direct result of that curiosity. While developing Shape2VE, I soon realized that once you add more than 300-400 shapes to VirtualEarth, performance degrades really fast and the application soon becomes unusable. The browser also stays unresponsive while large amounts of data are being added to the map, which is why I wrote Shape2VE to load the data asynchronously.
So, I was glad to learn that Microsoft Research (the team that brought you MapCruncher) is working on that same problem. They have released a TiledVectorsDemo application that attempts to solve it. An offline Python preprocessor script takes the vector data, a GPX file about 2.5MB in size, chops it up into tiles, and stores them just like raster tiles. Unlike raster tiles, however, the vector tiles are stored as JavaScript files containing first-class JavaScript objects. The VirtualEarth application loads only the tiles that fall within the visible map area, and as tiles move out of the visible extent they are deleted from VirtualEarth. Individual vector tiles are added to VirtualEarth as separate layers. And since the vector data is stored in tile files, it can be cached on the client and loaded faster when needed again later. Pretty nifty, I would say…
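The interesting step is deciding which tiles the visible extent touches. A rough sketch of that calculation, using the standard quadtree tile numbering that VirtualEarth's raster tiles follow (the actual loading and removal calls from the demo are omitted here):

```javascript
// Given the visible map extent (in degrees) and a zoom level, return
// the x/y indices of the tiles that need to be loaded. Uses the
// standard Mercator quadtree tile math; this is a sketch, not the
// demo's actual code.
function visibleTiles(west, south, east, north, zoom) {
  var n = Math.pow(2, zoom); // tiles per axis at this zoom level
  function tileX(lon) {
    return Math.floor((lon + 180) / 360 * n);
  }
  function tileY(lat) {
    var rad = lat * Math.PI / 180;
    return Math.floor(
      (1 - Math.log(Math.tan(rad) + 1 / Math.cos(rad)) / Math.PI) / 2 * n);
  }
  var tiles = [];
  // y grows downward in tile space, so the north edge gives the top row
  for (var x = tileX(west); x <= tileX(east); x++) {
    for (var y = tileY(north); y <= tileY(south); y++) {
      tiles.push({ x: x, y: y, z: zoom });
    }
  }
  return tiles;
}
```

On each map move the application would diff this list against the tiles already on the map: fetch the new ones as script files, delete the layers for the ones that dropped out.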
I can’t yet comment on the viability of this approach for larger datasets, but it would be great to see it integrated with technologies like GoogleGears to leverage its client-side database capabilities. It certainly seems like a step in the right direction. I am kind of bummed that the script extracting vector tiles from GPX files is in Python and not in .NET…
Here is the TiledVectorsDemo.
And here is more information regarding the project.
View/Upload shapefile to VirtualEarth (Asynchronously)
I dug up this research project I did about 8 months ago to share with everyone who is interested in displaying their shapefile data in VirtualEarth. This web application is completely free to use (free as in beer!!!) as it has no dependencies on ESRI or any other expensive GIS libraries. It uses the .NET wrapper (created by David Gancarz) for the ShapeLib library from MapTools, along with the JayRock library for JSON serialization of .NET types.
You should be able to run the project right after downloading it and see some of the sample datasets that I have included with the project. The web application can be downloaded here. The project which I am calling Shape2VE is also available through Assembla.
Here is a screenshot of the application before the shapefiles are displayed. In order to display the shapefiles, click on the ‘AddShapefile’ button.
Here is a screenshot after the shapefile has been uploaded.
Application Features:
- The shapefiles are uploaded asynchronously to VirtualEarth, so the user can keep using the web application during the upload and the map stays responsive.
- Supports points, lines, and polygons.
- The shapefile data is transferred to the browser in a JSON format so as to minimize payload.
- Easily configurable. To display your own shapefiles, modify the ‘OpenShapefile()’ function in ‘Default.aspx.cs’ to point to them:
private void OpenShapefile()
{
    veShapefileLoaderCollection loaders = new veShapefileLoaderCollection();
    loaders.Add(new vePolylineShapefileLoader(Request.PhysicalApplicationPath + "App_Data\\ftc\\onelnstr.shp"));
    loaders.Add(new vePolygonShapefileLoader(Request.PhysicalApplicationPath + "App_Data\\ftc\\natarea.shp"));
    loaders.Add(new vePointShapefileLoader(Request.PhysicalApplicationPath + "App_Data\\ftc\\addressSmall.shp"));
    Session["Loaders"] = loaders;
    Session["CurrentLoader"] = _currentLoader;
}
- Displays the attributes of the shapes from the shapefile when you mouse over the pushpins.
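The asynchronous-upload feature boils down to a common browser pattern: add the shapes in small batches and yield control back to the browser between batches. A minimal sketch (the `addShapeToMap` callback stands in for the actual VirtualEarth call and is hypothetical here):

```javascript
// Add shapes to the map in batches, yielding to the browser between
// batches via setTimeout so the UI never locks up. `addShapeToMap` is a
// placeholder for the real per-shape VirtualEarth call.
function addShapesAsync(shapes, addShapeToMap, batchSize, onDone) {
  var i = 0;
  function addBatch() {
    var end = Math.min(i + batchSize, shapes.length);
    for (; i < end; i++) {
      addShapeToMap(shapes[i]);
    }
    if (i < shapes.length) {
      setTimeout(addBatch, 0); // yield, then continue with the next batch
    } else if (onDone) {
      onDone();
    }
  }
  addBatch();
}
```

Adding all 300-400 shapes in one synchronous loop is what freezes the browser; chunking the work like this keeps the map pannable while the upload trickles in.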
Notes:
- The shapefiles should be in WGS84 geographic coordinates.
- The application currently only supports simple features and not multi-part features.
If you have any questions on the application please contact me using the ‘Contact Me’ link in the right-side pane.