Some people may be surprised to learn that Google's early-generation cars barely used their cameras. The LIDAR sensor is useful, but it cannot perceive light and color, so the camera was used mostly to recognize traffic signals, such as red and green lights.
Google has since become one of the world's leading players in neural network technology, and it has made a substantial effort to fuse the data from LIDARs, cameras, and other sensors. Neural networks are likely to be very good at performing this sensor fusion for Google's vehicles. Other firms, such as Daimler, have also demonstrated an excellent ability to fuse camera and LIDAR information. LIDARs work today and are expected to become cheaper; however, we have still not crossed the threshold that makes the leap to this new neural network technology possible.
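To make the idea concrete, here is a minimal late-fusion sketch, not any company's actual pipeline: it assumes we already have per-object class probabilities from a camera CNN and a rough size estimate from a LIDAR cluster, and simply combines the two sources in log space. All names, thresholds, and weights are illustrative assumptions.

```python
import numpy as np

CLASSES = ["car", "pedestrian", "cyclist"]

def lidar_size_prior(cluster_extent_m):
    """Rough class prior from the LIDAR cluster's bounding-box length (metres).
    Purely illustrative thresholds, not calibrated values."""
    length = cluster_extent_m[0]
    if length > 3.0:
        return np.array([0.8, 0.1, 0.1])   # long objects are usually cars
    if length > 1.2:
        return np.array([0.2, 0.2, 0.6])   # medium objects lean cyclist
    return np.array([0.1, 0.7, 0.2])       # short objects lean pedestrian

def fuse(camera_probs, lidar_probs, w_camera=0.7):
    """Late fusion: weighted sum of log-probabilities, renormalised."""
    log_p = w_camera * np.log(camera_probs + 1e-9) \
          + (1.0 - w_camera) * np.log(lidar_probs + 1e-9)
    p = np.exp(log_p - log_p.max())
    return p / p.sum()

# Example: the camera CNN is unsure between pedestrian and cyclist,
# but the LIDAR cluster is only ~0.8 m long, which favours a pedestrian.
camera_probs = np.array([0.05, 0.50, 0.45])
lidar_probs = lidar_size_prior(cluster_extent_m=(0.8, 0.6, 1.7))
print(dict(zip(CLASSES, fuse(camera_probs, lidar_probs).round(3))))
```

In a real system the fusion weights and priors would be learned rather than hand-set, but the principle is the same: each sensor covers the other's blind spots.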
One of the shortcomings of LIDAR is its usually low resolution; while it is very unlikely to miss an object in front of the car, it may not be able to determine exactly what that obstacle is. We have already seen, in the section The cheapest computer and hardware, how fusing camera data processed by convolutional neural networks with LIDAR makes these systems much better in this area. Recognizing what objects are also means making better predictions about where they will be in the future, as the sketch below illustrates.
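The following sketch shows, under simplified assumptions, why class information improves motion prediction: a constant-velocity forecast is capped by the top speed that the recognized class can plausibly reach. The class names, speed limits, and function are hypothetical stand-ins for the learned motion models a real planner would use.

```python
import numpy as np

# Illustrative top speeds (m/s) per class; real systems use learned motion models.
MAX_SPEED = {"car": 40.0, "cyclist": 12.0, "pedestrian": 3.0}

def predict_position(position, velocity, obj_class, horizon_s=2.0):
    """Constant-velocity prediction, with speed capped by what the recognised
    class can plausibly do. Knowing the class tightens the forecast."""
    speed = np.linalg.norm(velocity)
    cap = MAX_SPEED[obj_class]
    if speed > cap:
        velocity = velocity * (cap / speed)   # clamp a noisy velocity estimate
    return position + velocity * horizon_s

# The same noisy 8 m/s track is plausible for a cyclist but not for a
# pedestrian, so the predicted position two seconds ahead differs sharply.
track_pos, track_vel = np.array([10.0, 2.0]), np.array([8.0, 0.0])
print(predict_position(track_pos, track_vel, "cyclist"))     # travels ~16 m
print(predict_position(track_pos, track_vel, "pedestrian"))  # capped at ~6 m
```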
Many people claim that computer vision systems will become good enough to allow a car to drive on any road without a map, in the same way a human being does. This approach applies mostly to very simple roads, such as highways, which are uniform in layout and easy to interpret. Autonomous systems, however, are not inherently intended to function the way human beings do. The vision system plays an important role because it can classify objects well enough, but maps are also important and we cannot neglect them; without that data, we might end up driving down unknown roads.