Google claims that its self-driving cars have already driven 1.7 million miles on American roads without once being the cause of an accident, yet the question on everyone's mind is the ethical puzzle philosophers call the trolley problem.
Following Google's claims, Volvo has announced the same venture, saying it will have self-driving models on the highways of Sweden by 2017, and Elon Musk has commented that, based on the current technology, he is ready to have today's Tesla models take to major roads by summer.
Although Google's self-driving cars can manage real-world situations on the road, accidents remain inevitable; the cars have in fact been involved in a handful of collisions, though Google attributes every one of them to human drivers.
The question is how these next-generation cars will handle emergencies such as a flat tyre. A computer program can no doubt make a decision in milliseconds, but the more important question is what it should be programmed to prioritise: the benefit of its owner, or the least overall danger.
One philosophical position holds that the car should be programmed to produce the greatest happiness for the greatest number, that is, to save as many people as possible. Others counter that even in a trolley-problem situation, a self-driving car should never be programmed to sacrifice its driver to keep others out of harm's way; and if it were programmed that way, why would anyone ever board such a car?
This brings us to the crux of the matter: every situation is different and demands its own decision, so can a computer program ever be written that knows whom to save and what is best to do?
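To make the dilemma concrete, here is a minimal, purely illustrative Python sketch of the two programming policies discussed above: a harm-minimising "utilitarian" rule versus an "owner-first" rule. All names and numbers are hypothetical, not any manufacturer's actual logic.

```python
# Hypothetical illustration only: two possible crash-decision policies.
# Each option is a tuple: (label, occupants_harmed, bystanders_harmed).

def utilitarian_choice(options):
    """Minimise total harm, regardless of who bears it."""
    return min(options, key=lambda o: o[1] + o[2])

def owner_first_choice(options):
    """Minimise harm to the car's own occupants first,
    breaking ties by harm to bystanders."""
    return min(options, key=lambda o: (o[1], o[2]))

# A trolley-style scenario: swerving harms the one occupant,
# staying the course harms five pedestrians.
options = [("swerve", 1, 0), ("stay", 0, 5)]

print(utilitarian_choice(options)[0])  # the two policies disagree here
print(owner_first_choice(options)[0])
```

The point of the sketch is that both rules are trivially easy to code; what is hard, and what the article is asking, is deciding which rule a society should accept.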
Just like the rest of us, the members of UAB's Ethics and Bioethics teams have been battling to get to the bottom of the question. Although both teams are led by Dr. Pence, the turning point came when they met Dr. Sandra Frazier, who specializes in physicians' health issues.
It was then that they learned to treat addiction as a disease, which completely changed the course of their case. They realised that in the trolley problem, whether the program saves five people by killing two or vice versa, either decision leaves you responsible for the harm you chose. Which leads to one final question: if we humans cannot resolve such a situational decision, will a computer program ever be able to?