This article presents an overview of Google’s autonomous car. Three components make Google’s driverless cars go: sensors, software, and Google’s mapping database. Most of the sensors are neatly tucked away in the car’s body rather than mounted, laboratory-style, on a roof rack. The exception is the rotating sensor on the roof: a Velodyne high-density LIDAR (light detection and ranging) unit that combines 64 pulsed lasers. The system rotates 10 times per second, capturing roughly 1.3 million points per second to map the car’s surroundings in three dimensions with centimeter-scale resolution. This lets it detect pavement up to 165 feet ahead and cars and trees within 400 feet. Automotive radars, front and back, provide greater range at lower resolution. A high-resolution video camera inside the car detects traffic signals, as well as pedestrians, bicyclists, and other moving obstacles. The cars also track their positions with a GPS receiver and an inertial motion sensor.
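To put the LIDAR figures in perspective, the sketch below works through a back-of-the-envelope calculation from the numbers above. It is illustrative only: it assumes the 1.3 million points are captured per second and are spread evenly across the 64 laser beams, neither of which the article states explicitly.

```python
# Rough sketch of the roof-mounted LIDAR's scan density, using the article's figures.
# Assumptions (not stated in the article): the point rate is per second, and the
# points are distributed evenly across all 64 laser beams.

POINTS_PER_SECOND = 1_300_000   # article figure, assumed to be a per-second rate
ROTATIONS_PER_SECOND = 10       # article figure
LASER_COUNT = 64                # article figure

points_per_rotation = POINTS_PER_SECOND / ROTATIONS_PER_SECOND
points_per_laser_per_rotation = points_per_rotation / LASER_COUNT
horizontal_resolution_deg = 360 / points_per_laser_per_rotation

print(f"Points per 360-degree sweep:   {points_per_rotation:,.0f}")
print(f"Points per laser per sweep:    {points_per_laser_per_rotation:,.0f}")
print(f"Approx. horizontal resolution: {horizontal_resolution_deg:.2f} degrees")
```

Under these assumptions, each full sweep yields about 130,000 points, or roughly 2,000 points per laser, which works out to a horizontal angular resolution on the order of 0.2 degrees.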
