Unmanned vehicles are a new and evolving industry, with implications not just for various job sectors but also for the public’s everyday life. Each year we grow closer to fully autonomous vehicles, with companies vying for an early foothold in the new market, releasing increasingly sophisticated and polished models to fight it out in the automotive arena. Most projections put full market saturation at 2070, with around 50% of vehicle sales and 30% of all vehicles in use expected to be fully autonomous by 2040; the impact will be profound.
There is no doubt that autonomous vehicles will be a huge boon to the economy. Current estimates by KPMG suggest that by as early as 2030, Level 2 and 3 autonomous vehicles will have opened new revenue opportunities of £51 billion from software, hardware, and sensors, in an industry which already accounts for 4% of UK GDP (£60.5 billion) and provides employment for more than 700,000 people. However, we must, as always, temper innovation and progress with ethical considerations. Are we ready for autonomous vehicles? What effects could they have on everyday working people? And what are some of the hard ethical questions engineers must ask themselves when designing these wondrous machines?
[Image Source: Tesla]
The Safer Choice?
It will seem obvious to anyone who has driven before, and perhaps even to those who have not, that our roads are, statistically, not safe. In 2014 alone, the UK suffered 194,477 casualties of all severities, of which 22,807 were serious injuries and a further 1,775 were fatal. Driver error was cited as the cause in 94% of incidents. The economic cost of this is staggering: road traffic accidents cost the UK around 2% of its GDP and, in turn, cost the Government and NHS £16.3 billion.
Now compare this to Google’s self-driving car, which has covered over 1.7 million miles in six years. So far it has been involved in only 14 minor accidents, all of which were reportedly the fault of manually driven cars, not the automated car itself; suddenly the argument for self-driving cars looks pretty healthy. Further to this, KPMG predicts that self-driving cars could save more than 2,500 lives and prevent over 25,000 accidents a year by 2030. Yet they can never be perfectly safe, which poses some difficult questions. How should the car be programmed in the event of an unavoidable accident? Should we minimize the loss of life, even if it means the occupants are sacrificed, or should we protect the occupants at all costs, regardless of other variables? Should it choose between these two extremes logically, or should the choice be entirely random? These are important ethical questions currently faced by engineering teams around the globe, and they could very well have a significant impact on the way self-driving cars are viewed and accepted by society.
What can initially seem to be sensible programming can easily turn into a complicated and confusing ethical dilemma. Suppose an autonomous car is faced with the terrible decision to crash into one of two vehicles: a Volvo XC90 or a Smart car. Do we look at which is the heavier vehicle, and thus better able to absorb the impact, or do we choose the car with the better passenger safety rating? In either case, the XC90 would be the selected target. Either way, we now have intentional discrimination against a particular type of vehicle, whose owners must bear the burden through no fault of their own beyond caring about safety or needing a large car for work or family. A logical solution to a problem has rapidly created more problems than it solves.
Computers vs Humans: Ethics
To delve into this further, consider crash optimization: ideally, we should program a car to crash into whatever can best survive the collision. While this sounds good in theory, it quickly becomes confusing on closer inspection. Take a cyclist, for instance. If the car prioritizes whichever party can best survive a crash, its algorithm would account for the much higher odds of surviving a collision with a helmet on than without one. We now have a car that, in certain situations, would deliberately target cyclists wearing helmets over those without, purely as a result of its statistical, logical decision-making. This effectively punishes cyclists for being responsible about their own safety. Moreover, if self-driving cars come to make up a sizeable proportion of traffic, we may inadvertently encourage cyclists not to wear helmets, so as not to stand out as a favored target. Likewise, in our earlier scenario, we may unintentionally hurt sales of highly safety-rated cars such as the Volvo, as people choose less safe models to avoid becoming a target of the autonomous car.
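The helmet paradox can be made concrete with a toy sketch. The code below is a hypothetical illustration only, not any manufacturer's real algorithm; the `pick_target` function and the survival probabilities are invented for the example.

```python
# Hypothetical crash-optimization rule: always target whichever party
# is most likely to survive the impact. Probabilities are invented.
def pick_target(candidates):
    """Return the candidate with the highest estimated survival chance."""
    return max(candidates, key=lambda c: c["survival_probability"])

cyclists = [
    {"id": "cyclist_with_helmet", "survival_probability": 0.85},
    {"id": "cyclist_without_helmet", "survival_probability": 0.40},
]

# The "minimize harm" logic deliberately selects the helmeted cyclist,
# effectively punishing them for taking a safety precaution.
print(pick_target(cyclists)["id"])  # → cyclist_with_helmet
```

The same one-line rule, applied to the earlier two-car scenario, would single out the Volvo XC90 for exactly the same reason.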
[Image Source: Pixabay]
But who is to say that our car must decide between one extreme or another at all? Why not let a random number generator determine the outcome, removing any calculated choice altogether and thus any possibility that the car's programming discriminates in any fashion, whether against large vehicles, good safety records, helmeted cyclists, or anything else? This presents another problem, however: by better mimicking human behavior, we overlook the fact that autonomous vehicles are supposed to be better than us at making choices. Further, while we can forgive a human driver for a split-second decision during a collision, we cannot extend that same leniency to our robot cars, for whom a split second is more than enough time to calculate and weigh millions of different outcomes and options.
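The random alternative is trivial to sketch. Again, this is a hypothetical illustration with invented names, not a real system: each unavoidable-collision option is chosen uniformly at random, so no attribute of the potential targets ever biases the outcome.

```python
import random

def pick_target_random(candidates, rng=random):
    """Choose among unavoidable-collision options uniformly at random."""
    return rng.choice(candidates)

options = ["volvo_xc90", "smart_car"]
counts = {o: 0 for o in options}
rng = random.Random(42)  # seeded only to make the demo reproducible

# Over many trials, each option is hit roughly half the time:
for _ in range(10_000):
    counts[pick_target_random(options, rng)] += 1
print(counts)
```

Fairness of a sort is achieved, but only by throwing away information the car demonstrably has, which is precisely the objection raised above.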
Some might think the case is closed once a few of the questions above are answered. However, we must also contend with the level of detail such algorithms require. Do we become ageist and discriminate against the elderly, who, compared to a young child, have already led a full life? Or do car manufacturers look at average settlement costs and decide where, and into whom, to crash on that basis? That would open us up to socio-economic discrimination against the poor, who tend either not to pursue legal cases or to settle for less than the wealthier in society; directing more collisions into poorer districts and fewer into affluent ones would also place considerable strain on already underdeveloped infrastructure. And there is the added difficulty of how you sell a car that prioritizes minimizing total loss of life to a single driver who, as the lone occupant, will almost always be in the minority and thus find the car deciding against them most of the time.
Expanding our view beyond purely engineering and ethical concerns about the vehicles themselves, we must also ask what impact autonomous cars will have on society as a whole. We may no longer have to worry about manual driving, as our cars can drive us safely home in fully autonomous mode, but will this encourage a culture of increased alcohol consumption as the risk of drunk driving diminishes? And despite the growth in jobs and economic output the technology could bring, sectors such as taxis and haulage could see hundreds of thousands of jobs lost almost overnight as transport becomes autonomous, removing the sole income of many families and condemning them, albeit perhaps temporarily, to poverty.
Cars may be one of the most iconic technologies ever developed, forever changing societal, cultural, and economic landscapes. They have made possible types of work once thought impossible and accelerated the pace of business. They rush countless people to hospitals and deliver medical goods to rural areas. They allow friends and family to be closer to one another, yet also further apart. They kill over 30,000 people each year in the USA alone, and they waste our time in rush-hour traffic. They are a major contributor to pollution and global warming, placing an ever greater strain on the resources of our planet.
Automated cars promise both great benefits and unplanned, difficult-to-predict side effects. The technology is coming regardless; change is inescapable. The deep, intrinsic complexity of forecasting the problems these vehicles may run into is perhaps one of the greatest challenges engineers will face this century, and one we must, inevitably, rise to.
Author: Northumbria University Engineering Student Kyle Spencer