The first one is building a safe system. To replace human drivers, an SDC needs to be safer than a human driver, so how do we quantify that? It is impossible to guarantee that accidents will not occur without real-world testing, which carries its own inherent risk.
We can start by quantifying how good human drivers are. In the US, the current fatality rate is about one death per one million hours of driving. This figure includes human error and irresponsible driving, so we could arguably hold SDCs to an even higher standard, but it is the benchmark nonetheless. An SDC therefore needs to cause fewer fatalities than one per one million hours of driving, and currently that is not the case. We do not have enough data to calculate accurate statistics here, but we do know that Uber's SDC required human intervention approximately every 19 kilometers (km). The first pedestrian fatality involving an SDC was reported in 2018, when a pedestrian was struck by one of Uber's autonomous test vehicles.
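Before looking at that incident in detail, the sketch below works through the benchmark arithmetic above. The fatality rate comes from the text; the `fleet_hours` figure, the `safety_margin` parameter, and the function name are illustrative assumptions, not real fleet data or anyone's actual methodology.

```python
# A minimal sketch of the safety benchmark arithmetic described above.
# All figures are illustrative; only the human fatality rate comes from the text.

HUMAN_FATALITY_RATE = 1 / 1_000_000  # ~1 death per 1,000,000 driving hours (from the text)

def max_allowed_fatalities(fleet_hours: float, safety_margin: float = 1.0) -> float:
    """Upper bound on fatalities an SDC fleet may incur to beat the
    human benchmark by the given margin (margin > 1 means stricter)."""
    return fleet_hours * HUMAN_FATALITY_RATE / safety_margin

# Example: a hypothetical fleet that has logged 10 million driving hours,
# held to a standard 10x stricter than human drivers.
print(max_allowed_fatalities(10_000_000, safety_margin=10))  # -> 1.0
```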
The car was in self-driving mode, with a human backup driver sitting in the driver's seat. Uber halted testing of SDCs in Arizona, where such testing had been approved since August 2016, and opted not to renew its California self-driving trial permit when it expired at the end of March 2018. The vehicle that hit the pedestrian was equipped with LIDAR sensors, which, unlike camera sensors, do not depend on ambient light. Even so, the test vehicle made no attempt to slow down, and the human backup driver was not paying attention to the road.
According to the data obtained by Uber, the vehicle first observed the pedestrian with its RADAR and LIDAR sensors 6 seconds before impact, while traveling at 70 kilometers per hour. The vehicle continued at the same speed, and as the paths of the pedestrian and the car converged, the classification algorithm kept trying to work out what object was in its view, switching its identification from an unidentified object, to a car, to a cyclist, without ever predicting the pedestrian's path. Just 1.3 seconds before the crash, the system recognized the pedestrian and determined that an emergency brake was required, but it did not brake, as it was programmed not to perform emergency braking on its own.
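The flip-flopping classification described above suggests one defensive pattern a planner could apply: when an object's label keeps changing, treat it as an unknown hazard and slow down rather than waiting for a confident identification. The sketch below is a hypothetical illustration of that idea; the class names, thresholds, and `plan_speed` policy are assumptions for demonstration, not Uber's actual software.

```python
from collections import deque

class TrackedObject:
    """Keeps a short history of class labels for one detected object."""
    def __init__(self, history_len: int = 10):
        self.labels = deque(maxlen=history_len)

    def update(self, label: str) -> None:
        self.labels.append(label)

    def is_unstable(self) -> bool:
        # An object whose label keeps changing is one the perception
        # stack does not really understand yet.
        return len(set(self.labels)) > 1

def plan_speed(obj: TrackedObject, current_speed_kmh: float) -> float:
    """Defensive policy sketch: slow down when classification is unstable,
    mimicking what a human does with an unknown hazard."""
    if obj.is_unstable() or "unknown" in obj.labels:
        return min(current_speed_kmh, 30.0)  # cap speed near uncertain objects
    return current_speed_kmh

# The label sequence reported in the Uber incident:
obj = TrackedObject()
for label in ["unknown", "car", "cyclist"]:
    obj.update(label)
print(plan_speed(obj, 70.0))  # -> 30.0: this policy would have slowed the car
```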
According to the algorithm's prediction, avoiding the collision would have required a deceleration of more than 6.5 meters per second squared (m/s²). The system instead relied on the human operator to intervene, yet the vehicle was not designed to alert the driver. The driver did intervene moments before impact by turning the steering wheel and braking, bringing the vehicle's speed down to 62 kilometers per hour, but it was too late to save the pedestrian. Nothing in the car malfunctioned and everything worked as designed, but it was clearly a case of bad programming: the onboard computer was not programmed to deal with this kind of uncertainty, whereas a human would normally slow down when confronted with an unknown hazard. Even with high-resolution LIDAR, the vehicle failed to recognize the pedestrian in time.
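The timeline can be sanity-checked with basic kinematics. At 70 km/h (about 19.4 m/s), braking at the 6.5 m/s² the algorithm called for stops the car in roughly 3 seconds over about 29 meters, so braking at the 6-second mark would have been ample, while 1.3 seconds before impact was far too late. A minimal sketch of that arithmetic, assuming the gap to the pedestrian closes at the vehicle's constant speed:

```python
# Back-of-the-envelope kinematics for the timeline described above.

SPEED_KMH = 70.0
DECEL = 6.5  # m/s^2, the deceleration the algorithm called for

v = SPEED_KMH / 3.6                 # ~19.4 m/s
stop_time = v / DECEL               # ~3.0 s to come to a full stop
stop_distance = v**2 / (2 * DECEL)  # ~29 m of braking distance

for t_before_impact in (6.0, 1.3):
    # Simplifying assumption: the gap closes at the vehicle's constant speed.
    gap = v * t_before_impact
    verdict = "avoidable" if gap > stop_distance else "too late"
    print(f"{t_before_impact:>4.1f} s out: pedestrian ~{gap:5.1f} m away, "
          f"need {stop_distance:.1f} m to stop -> {verdict}")
```

Running this shows roughly 117 m of headroom at the 6-second mark and only about 25 m at 1.3 seconds, which is why acting on the first detection mattered so much.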