Posted:

At Google I/O 2009, we had the opportunity to meet many of our favorite developers in the Sandbox and Office Hours, and deliver several advanced talks on geo topics. Check out the description of those sessions below (originally posted on the Google Code blog), or jump straight to the embedded player and watch them yourself.

Mano Marks and Pamela Fox started with a grab bag session covering the vast spectrum of Geo APIs, discussing touring and HTML 5 in KML, the SketchUp Ruby API (with an awesome physics demo), driving directions (did you know you can solve the Traveling Salesman Problem in JavaScript?), desktop AIR applications, reverse geocoding, user location, and monetization using the Maps Ad Unit and GoogleBar. Pamela finished by sneak previewing an upcoming feature in the Flash API: 3D perspective view.

In the session on performance tips for Maps API mashups, Marcelo Camelo announced Google Maps API v3, a latency-oriented rewrite of our popular JS Maps API. Also see Susannah Raub's more in-depth talk about Maps API v3. Then Pamela gave advice on how to load many markers (by using a lightweight marker class, clustering, or rendering a clickable tile layer) and on how to load many polys (by using a lightweight poly class, simplifying, encoding, or rendering tiles). Sascha Aickin, an engineer at Redfin, showed how they were able to display 500 housing results on their real estate search site by creating the "SuperMarker" class.
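The grid-based clustering mentioned above can be sketched briefly. This is a hedged illustration of the general technique the MarkerClusterer takes (bucket markers into fixed-size grid cells so one marker can stand in for many), not the library's actual implementation; the cell size and data shapes are assumptions.

```javascript
// Bucket points into fixed-size grid cells, then emit one cluster
// per non-empty cell, placed at the centroid of its members.
function gridCluster(points, cellSizeDeg) {
  const cells = new Map();
  for (const p of points) {
    const key = Math.floor(p.lat / cellSizeDeg) + ":" +
                Math.floor(p.lng / cellSizeDeg);
    if (!cells.has(key)) cells.set(key, []);
    cells.get(key).push(p);
  }
  return [...cells.values()].map(members => ({
    count: members.length,
    lat: members.reduce((s, p) => s + p.lat, 0) / members.length,
    lng: members.reduce((s, p) => s + p.lng, 0) / members.length,
  }));
}
```

Rendering one marker per cluster instead of one per point is what keeps the map responsive when there are hundreds of results in view.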

Mano and Keith presented various ways of hosting geo data on Google infrastructure: Google Base API, Google App Engine, and the just-released Google Maps data API. Jeffrey Sambells showed how ConnectorLocal used the API (and their own custom PHP wrapper) for storing user data.

On the same day as announcing better integration between the Google Earth and Google Maps JS APIs, Roman Nurik presented on advanced Earth API topics, and released a utility library for making that advanced stuff simple.




Posted:

Recently, there has been a lot of interest in clustering algorithms. The client-side grid-based MarkerClusterer was released in the open source library this year, and various server-side algorithms were discussed in the Performance Tips I/O talk. We've invited the Travellr development team to give us insight on their unique regional clustering technique.

Travellr is a location-aware answers service where people can ask travel-related questions about anywhere in the world. One of its features is a map-based interface to questions on the site using Google Maps.



Figure 1. An example of the Travellr Map, showing question markers for Australia.


Clustering for usability

We learned that the best way to display markers without cluttering our map was to cluster our questions based on how far the user has zoomed in. If the user is looking at a map of the continents, we cluster our questions into a marker for each continent. If the user zooms in to France, we cluster our questions into a marker for each region or city that has questions. By clustering our data into cities, regions/states, countries, and continents, we can display relevant markers on the map appropriate to whatever zoom level the user is viewing.


Optimizing for Clustering

Our next challenge was how to extract clustered data from our database without causing excessive server load. Every time the user pans and zooms on the map, we need to query and fetch new clustered data in order to display the markers on the map. We also might have to limit the data if the user has selected a tag, as we're only interested in questions related to a topic (e.g. "surfing"). To execute this in real time would be painfully slow, as you would need to cluster thousands of questions in thousands of locations with hundreds of tags on the fly. The answer? Pre-cluster your data, of course!


Step 1. Structure your location data

When a question is asked about a city on Travellr, we also know its region/state, country and continent. We store more than 55,000 location points as a hierarchy, with each location "owning" its descendant nodes (and all of their data). Our locations are stored in a Modified Preorder Tree (also called Nested Sets). Modified Preorder Trees are a popular method of storing hierarchical data in a flat database table, with a focus on efficient data retrieval and easy handling of subtrees. For each location we also keep a record of its depth within the tree, its location type (continent, country, region/state, or city), and its co-ordinates (retrieved using the Google Maps geocoder).
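The key property of a Modified Preorder Tree is that every node stores left/right bounds from a preorder walk, so an entire subtree is a single range scan rather than a recursive traversal. Here is a minimal sketch of that lookup; the location names and lft/rgt numbers are illustrative, not Travellr's actual data.

```javascript
// Each row carries lft/rgt bounds assigned by a preorder walk of the tree.
const locations = [
  { name: "Oceania",    type: "continent", lft: 1, rgt: 10 },
  { name: "Australia",  type: "country",   lft: 2, rgt: 9 },
  { name: "Queensland", type: "region",    lft: 3, rgt: 6 },
  { name: "Brisbane",   type: "city",      lft: 4, rgt: 5 },
  { name: "Sydney",     type: "city",      lft: 7, rgt: 8 },
];

// All descendants of a node fall strictly inside its lft/rgt interval,
// so fetching a subtree is one range comparison per row.
function descendantsOf(node, rows) {
  return rows.filter(r => r.lft > node.lft && r.rgt < node.rgt);
}

const australia = locations.find(l => l.name === "Australia");
console.log(descendantsOf(australia, locations).map(l => l.name));
// → [ 'Queensland', 'Brisbane', 'Sydney' ]
```

In a relational database the same filter becomes a simple `WHERE lft > ? AND rgt < ?` clause, which is what makes retrieval cheap at this scale.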


Step 2. Aggregate your data

We calculate aggregate data for every branch of our locations tree ahead of time. By storing aggregate data for cities, regions/states, countries, and continents, we provide an extremely fast and inexpensive method to query our locations database for any information regarding questions asked about a particular location. This data is updated every few minutes by a server-side task.

Our aggregations include:

  • Total question count for a location
  • Most popular tags for that location
  • Number of questions associated with each of those tags
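The aggregation step above can be sketched as a roll-up: each question contributes its count and tags to its city and to every ancestor of that city. The data shapes and the hard-coded ancestor map below are assumptions for illustration; in a nested-set table the ancestors would come from a lft/rgt range query.

```javascript
const questions = [
  { city: "Brisbane", tags: ["surfing"] },
  { city: "Sydney",   tags: ["surfing", "food"] },
  { city: "Sydney",   tags: ["food"] },
];

// Illustrative ancestor chains for each city.
const ancestors = {
  Brisbane: ["Queensland", "Australia", "Oceania"],
  Sydney:   ["New South Wales", "Australia", "Oceania"],
};

// Roll question counts and per-tag counts up to every ancestor location.
function aggregate(questions, ancestors) {
  const stats = {};
  for (const q of questions) {
    for (const loc of [q.city, ...ancestors[q.city]]) {
      const s = stats[loc] || (stats[loc] = { count: 0, tags: {} });
      s.count += 1;
      for (const t of q.tags) s.tags[t] = (s.tags[t] || 0) + 1;
    }
  }
  return stats;
}

const stats = aggregate(questions, ancestors);
console.log(stats["Australia"]);
// → { count: 3, tags: { surfing: 2, food: 2 } }
```

Because this runs in a periodic server-side task rather than per request, the map queries themselves never have to touch raw question rows.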

How we query our structured, aggregate data on the map

Whenever the user zooms or pans the map we fire off a query to our (unpublished ;) API with the tags they are searching for, the current zoom level, and the edge co-ordinates of the map's bounding box. Based on the zoom level (Figure 2) we work out whether we want to display markers for continents, countries, states, or cities. We then send back the data for these markers and display them on the map.
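That zoom-to-granularity decision can be sketched as follows. The zoom thresholds here are invented for illustration (Travellr's actual break-points aren't published), and the row shape is assumed.

```javascript
// Map the current zoom level onto a cluster granularity.
// Thresholds are illustrative only.
function granularityForZoom(zoom) {
  if (zoom <= 3) return "continent";
  if (zoom <= 6) return "country";
  if (zoom <= 9) return "region";
  return "city";
}

// The marker query then just filters pre-clustered rows by location
// type and the map's bounding box.
function markersFor(rows, zoom, bounds) {
  const type = granularityForZoom(zoom);
  return rows.filter(r =>
    r.type === type &&
    r.lat >= bounds.south && r.lat <= bounds.north &&
    r.lng >= bounds.west  && r.lng <= bounds.east);
}
```

Since the rows are pre-aggregated, each pan or zoom costs only a filtered read, never a live clustering pass.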



Figure 2. Clustering at different zoom levels (blue = continents and countries, pink = states and cities)


Everyone Wins

So what is the result of structuring and aggregating our data in such a way? It means that we have nicely organized, pre-clustered data that can be read from cheaply and easily. This allows us to provide a super-fast map interface for Travellr that puts minimal load on our infrastructure. Everyone is happy!

Comments or Questions?

We'd love to hear from you if you have any questions on how we did things, or suggestions or comments about Travellr's map. This article was written by Travellr's performance and scalability expert Michael Shaw (from Insight4) and our client-side scripting aficionado Jaidev Soin.

You can visit Travellr at www.travellr.com, or follow us on Twitter at twitter.com/travellr.

Posted:

When you're showing satellite imagery with our Maps API, it's often the case that you want to show the most detailed imagery available. But it's always been tricky figuring out the best zoom level for a particular location. If you don't zoom in far enough, your users won't immediately get the most detailed image available. If you zoom in too far, you might get the dreaded message "We are sorry, but we don't have imagery at this zoom level for this region", and no imagery at all.

What if there were a way to know programmatically what the maximum zoom level was for any point in the world? Fortunately, now there is.

It's not easy to solve this problem naively; the world is a big place. At zoom level 22, there are 4 to the power of 22 potential satellite tiles (that's over 17.5 trillion). The maximum zoom level at which satellite imagery exists varies wildly across the world. Sydney's Bondi Beach has imagery right up to zoom level 22, whereas the centre of the Pacific Ocean only goes up to zoom level 9. (I make no accusations about whether this means the Google Maps team prefers to look at tanned, sunbathing Aussies.)

But with a good search algorithm, and data based on the most frequently viewed areas of the earth, we've been able to make a search for the existence of imagery very efficient, and we are now exposing this functionality to our API developers.

The new solution is an asynchronous function which is part of the GMapType class: getMaxZoomAtLatLng. The function takes a GLatLng and returns the maximum zoom level at which imagery exists. Because the function requires a call to Google's servers (much like GClientGeocoder.getLocations()), you must also provide a callback parameter: a function that will handle the response.

As an example, here's a function which will set the center of the given GMap2 object to the maximum zoom level at the given GLatLng:

 
function setMaxZoomCenter(map, latlng) {
  // The lookup is asynchronous, so the result arrives in a callback.
  map.getCurrentMapType().getMaxZoomAtLatLng(latlng, function(response) {
    // Only re-center if the server reported success.
    if (response && response['status'] == G_GEO_SUCCESS) {
      map.setCenter(latlng, response['zoom']);
    }
  });
}

As you can see, the response object contains a status code, and, if the response was successful, a zoom field containing the maximum zoom at that point.

Click on the map below, and it will zoom to the highest zoom level available at the point at which you clicked.

Note that this function is only implemented for satellite imagery, and not roadmaps, whose zoom levels don't vary nearly as much. It works for both the G_SATELLITE_MAP and G_HYBRID_MAP map types. The full details are available in the Maps API reference.

We hope this function makes developing with satellite imagery a simpler and richer experience. Please provide any feedback in the Maps API Google Group.

Posted:
Developer Qualification

Last week at Google I/O we added the Google Maps API (JavaScript version) to the Developer Qualification program. Designed for professionals who currently develop or want to develop applications based on Google and Google-sponsored Open Source APIs, the Google Qualified Developer program will help promote developers to the Google community, provide credibility, and leverage the wisdom of the community in rating and recognizing best-in-class developers. In this program, we assess developers in four areas, each of which provides a score towards an overall total required for qualification. Developers must maintain a minimum number of points to remain qualified within the program. Points are awarded for examples of development work, community participation, professional references, and scores on examinations.

With the addition of the Google Maps API to the available qualifications, the program landing pages and registration have been moved to the Google Code site at http://code.google.com/qualify. The new landing pages provide information on the program and available APIs, details about qualification requirements, answers to frequently asked questions, and an opportunity to apply as a candidate in the qualification program.

We've also recently partnered with third-party training vendors who can help you get ready to qualify. The Developer Qualification program provides a mechanism by which Google can evaluate and promote the best developers in the community, but does not provide training in preparation for qualification. With the success of the program there exists a business opportunity for third-party training vendors to develop and deliver this training. In order to stimulate the growth of this ecosystem, several vendors have been identified and are working closely with Google to develop initial training efforts for the Google Maps API qualification.

To read more about the program, take a look at our site. We look forward to expanding our API support and growing the Developer Qualification program. Please reach out to us with questions and feedback at devqual-proctors@google.com.