Introduction
The Trolley Problem is an ethics thought experiment that deals with the choice of saving several people by killing one. It has found widespread use in moral and ethical discussions across fields such as law, medicine, and, more recently, artificial intelligence (AI) and automated vehicles. This essay will present my views on the Problem and the ethical considerations of the situation it describes, then turn to its current practical applications in the AI of self-driving cars.
My Reaction
Personally, in the initial Trolley Problem, assuming I can act rationally and have time to think, I would pull the lever. Following utilitarian logic, and considering that all six people are strangers to me, the only consideration available is how many to save. Unless we expand the problem with options such as derailing the trolley or sacrificing myself, it is impossible to save everyone. Therefore, I believe the ethical option is to save as many people as possible, even if it means involving myself in the problem.
In the bridge scenario, however, the five will have to die. The difference from the basic Problem is not only my degree of involvement but also the fact that the man in question is not already in danger or immobilized by the Problem’s premise. He is therefore capable of acting of his own volition and making ethical decisions, and pushing him would mean forcing my ethics on someone else, depriving him of his freedom of choice. His judgment should apply here, and mine is ultimately irrelevant.
Self-Driving Cars and the Trolley Problem
Currently, the Trolley Problem often appears in discussions of automated vehicles, notably self-driving cars. In these discussions, the Problem is referenced directly, as a given car’s AI may find itself in a very similar situation when a collision is unavoidable. Therefore, as Nyholm and Smids (2016) phrase it, these vehicles need to be “programmed for how to crash” (p. 1278). This decision is not straightforward, because the AI must weigh the lives of the car’s passengers against those of pedestrians. Should one passenger always be kept safe, even if it means killing or injuring two non-passengers? Is a jaywalker more acceptable to kill than someone crossing on a green light? What about animals, children, and the elderly? Should any of those lives be prioritized? These are all ethical considerations that must be programmed into a self-driving car’s AI.
Its use in automated vehicles’ AI is perhaps the most literal interpretation of the Trolley Problem. When a pure thought experiment is moved into reality, many more factors need to be considered, and the options are rarely so binary. For example, JafariNaimi (2018) presents a situation in which “there are four young adults on the one side and an elderly woman walking slowly with a walker on the other” (p. 9). In this situation, the choice is not merely four versus one; those people’s capacity for action must also be considered. The four appear more likely to avoid the vehicle once they notice it, or to survive their injuries if it collides with them.
Conclusion
The Trolley Problem remains a relevant topic for discussing questions of morals and ethics. My views on it are mainly utilitarian, while acknowledging other people’s autonomy and ability to make ethical decisions. The most recent and relevant field of application is AI, particularly that of self-driving cars. This field offers what may be the most literal interpretation of the Problem, raising direct, multifaceted questions that must be resolved before such vehicles become commonplace.
References
JafariNaimi, N. (2018). Our bodies in the trolley’s path, or why self-driving cars must not be programmed to kill. Science, Technology, & Human Values, 43(2), 302–323.
Nyholm, S., & Smids, J. (2016). The ethics of accident-algorithms for self-driving cars: An applied trolley problem? Ethical Theory and Moral Practice, 19(5), 1275–1289.