The Street View engineering team works hard behind the scenes to share Street View imagery with you in Google Maps. Here’s a glimpse into what the team does to bring Street View to you.
First of all, we need to actually drive around and photograph the locations to be shown in Street View. We pay close attention to the sun when planning our driving, and need the sun to be high enough so that shadows don't obscure buildings. We also consider weather and temperature, since we don’t want snow, fog or rain to cause driving delays or blurred imagery.
To drive in the right light and weather in the United States, for example, we’ll start in the southern states and move north as it gets warmer, then head south again as winter approaches. In Europe, this means that we might start driving in southern Italy and gradually move up to Sweden.
We then think about where to start photographing and, since Street View is most useful to the greatest number of people in large metro areas, we start driving there. We often begin in the city centre to capture popular downtown areas, then move outwards.
We need to figure out exactly where each image was taken, so that you can see an image of the right place when you’re looking in Street View. To do this, we combine signals from several sensors on the car, including a global positioning system (GPS) device and monitors that measure speed and direction.
The GPS device shows us the car’s exact location most of the time, but sometimes factors like tall buildings in a city centre block the signal, and data from the other sensors helps us to fill in those gaps. We can construct the car’s route accurately by combining these signals.
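As a rough sketch of this gap-filling, here is a toy dead-reckoning routine in Python. The sample format, field names and one-second timestep are assumptions for illustration, not the actual pipeline: when a GPS fix is present we use it, and when it isn't we advance the last known position using the measured speed and heading.

```python
import math

def estimate_positions(samples, dt=1.0):
    """Estimate a car's route from mixed sensor samples.

    Each sample is a dict with 'speed' (m/s), 'heading' (radians),
    and 'gps' as an (x, y) tuple in metres, or None when buildings
    block the signal. Hypothetical format, for illustration only.
    """
    route = []
    pos = None
    for s in samples:
        if s["gps"] is not None:
            pos = s["gps"]  # trust the GPS fix when we have one
        else:
            # Dead-reckon: advance from the last known position
            # using the measured speed and heading.
            x, y = pos
            pos = (x + s["speed"] * dt * math.cos(s["heading"]),
                   y + s["speed"] * dt * math.sin(s["heading"]))
        route.append(pos)
    return route
```

Real systems fuse these signals statistically (for example with a Kalman filter) rather than switching between them outright, but the idea of covering GPS dropouts with speed and direction data is the same.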
Since we know exactly when, where along the route and in which direction each picture was taken, we can then match each image to a specific location and even tilt and align the images with hilly terrain.
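The matching step amounts to interpolating along the reconstructed route at each photo's capture time. A minimal sketch, assuming a route of timestamped (t, x, y) fixes (the real system works with much richer data):

```python
def position_at(route, t):
    """Interpolate the car's position at photo capture time t.

    route is a list of (timestamp, x, y) fixes sorted by time;
    a hypothetical, simplified route format for illustration.
    """
    for (t0, x0, y0), (t1, x1, y1) in zip(route, route[1:]):
        if t0 <= t <= t1:
            a = (t - t0) / (t1 - t0)  # fraction of the way between fixes
            return (x0 + a * (x1 - x0), y0 + a * (y1 - y0))
    raise ValueError("timestamp outside the drive")
```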
When we photograph for Street View, we don’t want gaps in the imagery, so adjacent cameras on the car take overlapping pictures. To remove the overlap and create a continuous 360-degree image, we “stitch” the images together.
We know the geometry of all of the different cameras in the system and, from that, we determine where in the images we should “stitch” to create a unified panorama. We then apply special image-processing algorithms to lessen “seams” where the images meet, thus creating a smooth transition.
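As a toy illustration of how seams can be softened, here is a linear cross-fade over a one-dimensional strip of overlapping pixel values. Real stitching blends 2-D images and chooses seam lines carefully, so treat this purely as a sketch of the idea:

```python
def blend_overlap(left, right):
    """Cross-fade two overlapping strips of pixel values.

    The weight shifts linearly from the left image to the right
    across the overlap, so the join has no hard edge.
    """
    n = len(left)
    blended = []
    for i in range(n):
        w = i / (n - 1)  # 0.0 at the left edge, 1.0 at the right
        blended.append((1 - w) * left[i] + w * right[i])
    return blended
```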
Here you can see the original photos:
Then the photos are stitched together to create a continuous panorama:
This image looks distorted, since it is a flattened representation of a spherical shape. Think about how the world globe looks if you flatten it out onto a piece of paper.
We then re-project this imagery onto a sphere, so that it looks normal in Street View.
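The re-projection can be sketched by mapping each pixel of the flattened panorama to a direction on the unit sphere. The equirectangular layout and coordinate conventions below are assumptions for illustration:

```python
import math

def panorama_direction(u, v, width, height):
    """Map a pixel in a flattened panorama to a 3-D view direction.

    u runs 0..width across 360 degrees of longitude, and v runs
    0..height over 180 degrees of latitude (top of image = up).
    """
    lon = (u / width) * 2 * math.pi - math.pi    # -pi .. +pi
    lat = math.pi / 2 - (v / height) * math.pi   # +pi/2 (up) .. -pi/2 (down)
    return (math.cos(lat) * math.cos(lon),
            math.cos(lat) * math.sin(lon),
            math.sin(lat))
```

Drawing the sphere's pixels through this mapping undoes the distortion you see in the flattened image, just as wrapping a paper map back around a globe would.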
There are several factors to consider in showing you the right image. When you drag Pegman onto the map, we calculate the nearest matching panorama and show you the portion of the image that fits in your browser window. As you pan around to see other angles, other sections of that same 360-degree panorama will be loaded.
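Finding the nearest matching panorama is, at its simplest, a nearest-neighbour lookup over the capture locations. A toy version (a production system would use a spatial index rather than a linear scan):

```python
def nearest_panorama(panoramas, x, y):
    """Pick the panorama captured closest to where Pegman was dropped.

    panoramas maps an id to an (x, y) capture location; hypothetical
    data format, for illustration only.
    """
    return min(panoramas,
               key=lambda pid: (panoramas[pid][0] - x) ** 2 +
                               (panoramas[pid][1] - y) ** 2)
```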
When you navigate down a street, we need to determine which image to show you as you move. We figure this out based on signals that the car collects, such as data from three lasers. How quickly the lasers reflect off surfaces tells us how far each building or object is, and enables us to construct 3D models. When you move your mouse to an area in the distance, this 3D model determines the best panorama to show you for that location.
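The laser ranging rests on a simple time-of-flight relation: the pulse travels to the surface and back, so the one-way distance is half the round-trip time multiplied by the speed of light. A minimal sketch:

```python
def laser_range(round_trip_seconds, c=299_792_458.0):
    """Distance to a surface from a laser pulse's round-trip time.

    The pulse goes out and comes back, so divide the total travel
    distance (c * t) by two to get the one-way range in metres.
    """
    return c * round_trip_seconds / 2
```

A round trip of 100 nanoseconds, for example, corresponds to a surface roughly 15 metres away.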
We use this same data in Google Earth, projecting the Street View imagery onto the 3D models to provide an immersive experience.
We also apply cutting-edge face and licence plate blurring technology to help ensure that passers-by and cars in the photographs can't be identified. For more information on imagery blurring, please visit the privacy section of this site.