Automakers and tech companies are doing whatever it takes to get driverless vehicles on the road. Renowned companies like Google and Tesla are already working to make driverless cars commercially available within the next five years or so. Goldman Sachs estimates that driverless vehicles could account for at least 60% of overall car sales by 2030.
The reasons automakers and tech companies invest in the technology vary, but they all seem to agree that driverless vehicles have the potential to improve road safety dramatically.
Of course, it may seem unreasonable to put complete faith in such technology, since driverless cars are expected to have no controls for their passengers, such as a brake pedal or steering wheel. But then again, it is highly unlikely that cars will remain as they are today, since technology is always evolving.
When it comes to technology, nothing is truly black or white. The same is true of driverless vehicles, where even defining ‘safety’ is complicated.
What does safety really mean for a driverless vehicle? How can it be measured? Is it a matter of how precisely the car drives, or of how often it breaks the law? These are questions that continue to trouble automakers and tech companies.
Complicating matters further, automation comes in different degrees: some vehicles merely offer assistance when required, while fully automated vehicles need no human input whatsoever.
According to Steven Shladover, a research engineer at the University of California, Berkeley, there is no absolute way to measure the safety of self-driving cars at this time. For this reason, Shladover recommends that U.S. regulators and industry members learn from the German government, which has invested substantial resources in determining the safety of driverless vehicles.
Dangers Associated with Driverless Vehicles:
The U.S. Department of Transportation has issued guidelines on driverless-vehicle safety to incumbents in Silicon Valley and Detroit. However, federal regulations for driverless vehicles and driver-assist technologies remain sparse, giving tech companies leeway to put as many driverless cars on the road as they want for ‘research purposes.’
Some argue the risk of testing driverless vehicles on public roads is worth taking, considering that human error contributed to 4.4 million injuries and 38,300 deaths in the U.S. in 2015 alone. Many have come out in support of this approach, Uber chief among them. The ride-sharing giant launched its own pilot program in Pittsburgh two years ago. The company did not stop there, putting several more autonomous vehicles on the road in San Francisco until it was forced to halt those tests after California regulators criticized it for lacking testing permits. Despite the halt, Uber maintains that such tests are essential both for further advancement and for gaining the public’s trust.
Even with promising results, the technology’s vulnerabilities became apparent when a Tesla Model S crashed in 2016, killing its driver. The incident raised the question of whether it is reckless to let the market decide how much risk the technology’s users should tolerate. Despite the tragedy, Tesla CEO Elon Musk believes that pressing on is the only option.
Musk announced that new Model S and Model X cars would ship with a built-in ‘shadow mode,’ intended to train the Autopilot technology to prevent such disasters from happening again. What is interesting about shadow mode is that it keeps learning even when Autopilot is disabled. The knowledge gained can then be shared with other Tesla cars, an approach Tesla calls ‘fleet learning.’
Difficulties with Testing Driverless Vehicles:
The challenges facing driverless vehicles seem never-ending, including their potential economic costs. The most basic problem is that determining whether these vehicles are safe depends on how long they must be tested on the roads in the first place.
A 2016 report from the RAND Corporation states that autonomous vehicles would need to drive hundreds of millions, and in some cases billions, of miles to yield statistically meaningful data about their ability to prevent deaths and injuries.
The report further notes that the available fleet of autonomous vehicles would need to stay on the roads for many years before enough data accumulated for reliable safety assessments and comparisons.
To put this in perspective, a fleet of 100 cars would need to log roughly 275 million fatality-free miles to demonstrate, with reasonable statistical confidence, that it matches the safety of vehicles driven today. When the Tesla Model S crashed in 2016, Tesla owners had clocked a mere 130 million miles. The gap is evident, but it could be narrowed if tech companies chose to share their data with one another, which is highly improbable given the competition.
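A figure like 275 million miles can be estimated with a standard statistical argument: assuming fatalities occur as a Poisson process, one can compute how many failure-free miles are needed to conclude, at a chosen confidence level, that a fleet's fatality rate is below the human benchmark. The sketch below is illustrative, not the report's exact method; the human-driver rate of about 1.09 fatalities per 100 million miles is a commonly cited U.S. figure.

```python
import math

def miles_for_confidence(benchmark_rate_per_mile: float, confidence: float) -> float:
    """Miles that must be driven with zero fatalities to reject, at the
    given confidence level, the hypothesis that the fleet's fatality
    rate is as high as the benchmark (Poisson/exponential model)."""
    return -math.log(1.0 - confidence) / benchmark_rate_per_mile

# Illustrative benchmark: ~1.09 fatalities per 100 million vehicle miles.
human_rate = 1.09 / 100_000_000

miles = miles_for_confidence(human_rate, 0.95)
print(f"{miles / 1e6:.0f} million fatality-free miles needed")  # ~275 million
```

Spread across a fleet of 100 test cars, that is still millions of miles per car, which is why the report concludes that road testing alone cannot settle the safety question in a reasonable time.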
Companies are simply not comfortable sharing test data with their competitors, fearing they would lose the race to build the perfect driverless vehicle. Here, regulators could compel data sharing for the greater good.
Even once a definition of safety is established, the biggest problem for driverless cars will be overcoming human skepticism. Whether a motorist or a pedestrian, not everyone is going to be comfortable around an autonomous vehicle. That obstacle can be overcome only if the transition from traditional to autonomous cars is as transparent as possible. The bottom line is that driverless technology will be welcomed with open arms only if it earns the trust of human drivers.