2023
Setting up requires a bit of initial work because you need an API key from Google Cloud services. After obtaining the key, simply import the .hda into your Houdini project and refer to the instructions provided on the GitHub repository.
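If you prefer to wire the key up in Python rather than through the parameter pane, a minimal sketch looks like the following. The node path and parameter names here are assumptions for illustration only; check the actual parameter names on the .hda you downloaded.

```python
import hou

# Hypothetical node path and parameter names -- verify against the real .hda.
tiles = hou.node("/obj/geo1/google_maps_tiles1")
tiles.parm("api_key").set("YOUR_GOOGLE_CLOUD_API_KEY")
tiles.parm("cache_dir").set("$HIP/tile_cache")
```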
A couple of things to note:
The .hda lets you specify the level of detail for the textures and meshes you get back, in the form of an LOD value. The highest detail is said to be 2 (the implication being that detail levels 1 and 0 might be added eventually), and the values range all the way up to a very low-res 60,000. The downloads are pretty hefty, and the number of points/verts/polygons in the downloaded data is not insignificant. It also takes quite a while to download the thousands of files for a decent-sized patch. Be sure to define your cache folder, because the setup will skip re-downloading any sections you have already grabbed, which definitely came in useful.
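The caching behavior amounts to something like the sketch below. This is a simplified illustration, not the HDA's actual implementation; the `get_tile` function, filename scheme, and `.glb` extension are all made up for the example.

```python
import os
import urllib.request

CACHE_DIR = "tile_cache"

def get_tile(url, tile_id):
    """Download a tile unless a cached copy already exists on disk."""
    os.makedirs(CACHE_DIR, exist_ok=True)
    local_path = os.path.join(CACHE_DIR, f"{tile_id}.glb")
    if os.path.exists(local_path):
        return local_path  # already grabbed: skip the re-download
    urllib.request.urlretrieve(url, local_path)
    return local_path
```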
My initial attempt was to download a 1km square area of buildings centered on Times Square, but Houdini would crash before it could display the 20+ million points of geo. I changed my search area to Chinatown, where my company's office is located, and reduced the amount of detail for areas outside of a 500m square. I did a couple of renders of this spot and quickly decided on a tilt-shift effect: it doesn't take much scrutiny at all to see how reductive the low-poly areas look, and I didn't want to draw attention to them.
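Reducing detail outside the area of interest boils down to choosing a coarser LOD value the farther a tile sits from the focus point. A rough sketch of that idea follows; the radii and the intermediate value are arbitrary, and only the 2 / 60,000 endpoints come from the range mentioned above.

```python
def lod_for_tile(tile_center_m, focus_center_m, inner_radius_m=250.0):
    """Pick a detail value by distance from the area of interest.

    Low numbers mean high detail; the thresholds here are illustrative only.
    """
    dx = tile_center_m[0] - focus_center_m[0]
    dy = tile_center_m[1] - focus_center_m[1]
    dist = (dx * dx + dy * dy) ** 0.5
    if dist <= inner_radius_m:
        return 2        # full detail inside the ~500m focus square
    if dist <= 1000.0:
        return 500      # medium detail in the surrounding blocks
    return 60000        # very low-res filler beyond that
```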
I tried all the usual locations. First I thoroughly explored Manhattan and spent a lot of time grabbing content from just south of Central Park. I then moved on to older locations like the Colosseum and Edinburgh Castle. The more I explored, the more I began to grasp the limitations of the data.
For one, these are not watertight meshes. They are intersecting planes that often don't share corner points. They are sloppily UV'd, at a variety of scales and rotations, as the image to the left shows.
Additionally, there is no sense of what might be highly reflective vs. matte. I experimented with giving the darker parts of the texture less roughness, since the majority of the skyscraper windows tended towards black. But this rule wasn't quite good enough: it accidentally made parts of the building that should have stayed rough more reflective, which accentuated the bad geo, as shown below.
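The rule itself is simple enough. Here is a sketch of the idea, not the exact shader network; the remap range is arbitrary, and the failure mode is exactly the one described above: dark concrete and shadowed facades go glossy along with the windows.

```python
def roughness_from_texture(rgb):
    """Map texture luminance to roughness: darker texels (mostly windows)
    get lower roughness, i.e. become shinier.

    rgb is a tuple of floats in 0..1; the 0.2..0.9 remap is arbitrary.
    """
    r, g, b = rgb
    luminance = 0.2126 * r + 0.7152 * g + 0.0722 * b
    return 0.2 + 0.7 * luminance  # dark -> ~0.2 roughness, bright -> ~0.9
```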
The mesh is composed of weirdly proportioned triangles. They are mostly planar, but with the wrong lighting it becomes really obvious that the mesh is constructed oddly. A material that is overly reflective, with low roughness, accentuates this quite a bit. I experimented with smoothing the normals, but didn't spend much time on it as it seemed like a rabbit hole.
I kept thinking of ideas that would be stunning if the geo was more predictably constructed. Perhaps it is on Google's to-do list, but honestly these meshes work extremely well for their intended use case so I am not too bitter about it. Perhaps A.I. optimization technology will improve enough that in a couple years, these meshes will be auto-cleaned.
I was surprised to see that JFK Airport in New York City comes complete with commercial jets at the terminals and on the runway. It was a nice touch. Some parking lots are captured at high enough resolution that every car in the scene shows up as a car-shaped, textured mound. While exploring the NYC mesh, I even found attempts at electrical boxes on the sidewalks and, in some cases, mounds of trash on the side of the road. Of course, none of this was a deliberate decision by Google. I haven't looked into it too much, but I assume it is all automated and the Google satellite doesn't know a trash mound from a building -- it's just 3D positions plotted as accurately as the satellite cameras (or governments) will allow.
There are places scattered throughout Google Maps/Earth where the resolution is purposefully decreased, almost to the point of pixel art. Below is a view of Mykonos International Airport: no planes, no buildings, and unrealistic elevations. I'm not entirely sure why they felt the need to obscure a small airport when the busiest airports in the US all have 3D data for buildings, runways, and in most cases a sampling of airplanes. Wonder what Streisand would have to say about this.
Despite the unpredictable building meshes, I was able to do something pretty interesting by adding a flocking simulation that avoids the buildings. I started by downloading a decent-sized mesh of San Francisco. I then used the Houdini Labs tools and the Mapbox API to grab street-map geo from OSM and overlaid it on the Google Maps geo. (This OSM step was totally unnecessary, but it was an interesting exercise.)
I could then spawn a few hundred thousand objects on the streets and basically tell all of them to find the nearest polygon and keep 1 or 2 meters away from that surface. Combined with the flocking rules, the objects did a really good job of avoiding the buildings while still forming interesting murmuration patterns. I hope to revisit this soon (I didn't render a video because a render long enough to be worth it would tie up my machine for a couple of weeks).
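The avoidance rule is essentially "query the nearest building surface and push away if you're inside the clearance distance." A minimal sketch of that steering force is below; `nearest_point_on_buildings` is a placeholder for whatever spatial query you have (in Houdini, VEX's minpos()/xyzdist() against the building geometry does this in one line), and the strength value is arbitrary.

```python
import numpy as np

CLEARANCE = 2.0  # metres of clearance to keep from building surfaces

def avoidance_force(pos, nearest_point_on_buildings, strength=5.0):
    """Steer an agent away from the closest building surface when it gets
    inside CLEARANCE metres; combine this with the usual flocking forces."""
    closest = np.asarray(nearest_point_on_buildings(pos))
    offset = np.asarray(pos) - closest
    dist = float(np.linalg.norm(offset))
    if dist == 0.0 or dist >= CLEARANCE:
        return np.zeros(3)
    # Push harder the deeper the agent is inside the clearance zone.
    return (offset / dist) * strength * (1.0 - dist / CLEARANCE)
```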
I am a big fan of Reuben Wu's Aeroglyphs series, made with long exposures and drones, so I tried a few renders in the spirit of his stellar work. There is definitely something more imaginative to be done here, but I will leave it to Reuben to show us how it's done IRL.
Once I got the hang of how to control the LOD, I was able to create larger scenes that showed much more of the city (in this case, NYC and Paris). I created a setup that would grid out a path based on a starting and ending lat/long. For the first version, I dropped two pins in Google Maps along 7th Avenue in Manhattan, starting at Central Park and working down to Canal Street. Because I couldn't grab that much geo at once (the API requires a square plot) and because these parts of NYC are particularly geo-heavy, I had to deal in smaller tiles of 500m x 500m.
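The tiling logic amounts to walking the line between the two pins in half-tile steps and snapping each sample to a 500m grid. Below is a rough Python sketch of that idea, using a flat-earth metres-per-degree approximation (fine at city scale); it is not the setup's actual implementation, and the de-duplication of grid keys stands in for deleting overlapping geometry.

```python
import math

TILE_M = 500.0  # tile edge length in metres

def tile_path(start_latlon, end_latlon):
    """Return de-duplicated lat/long centres of 500m x 500m tiles along the
    straight line between two dropped pins."""
    lat0, lon0 = start_latlon
    lat1, lon1 = end_latlon
    m_per_deg_lat = 111_320.0
    m_per_deg_lon = 111_320.0 * math.cos(math.radians(lat0))

    # Offsets in metres from the starting pin.
    dx = (lon1 - lon0) * m_per_deg_lon
    dy = (lat1 - lat0) * m_per_deg_lat

    # Sample every half tile so the path has no gaps, then snap to the grid.
    steps = max(1, int(math.hypot(dx, dy) / (TILE_M / 2)))
    keys = []
    for i in range(steps + 1):
        t = i / steps
        key = (round(dx * t / TILE_M), round(dy * t / TILE_M))
        if key not in keys:
            keys.append(key)  # de-duplication ~= dropping overlapping tiles

    return [(lat0 + ky * TILE_M / m_per_deg_lat,
             lon0 + kx * TILE_M / m_per_deg_lon) for kx, ky in keys]
```

Two pins roughly 5 km apart, as in the 7th Avenue run, yield on the order of ten tiles.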
I created my path of tiles, 10 in total, and made sure to delete any overlapping geometry. The image on the right shows the first 4 tiles. (The missing textures were due to Houdini capping the number of individual material calls at 5,000; I opted not to increase it further.) This got the point/vertex/polygon count down to a reasonable level (approximately 16 million points), and I was able to fill out the rest of NYC using very low-resolution filler meshes. All I had to do was make the camera zippy enough and motion-blurry enough that you wouldn't really notice the degradation in the building shapes. You can see the results below.
Worth noting this isn't free. You have to set up a Google Cloud account and you are charged for the data. I just received my bill for September, which covers all my Maps API experimentation: $3.68.
Also worth noting that you probably can't go using this stuff in your professional work. I doubt the licensing is very accommodating.