There will come a time when artificial intelligence is better at certain things than human intelligence.
Take driving a car, for example. Until now, the only intelligence on the planet capable of handling the driving task has been human intelligence. That is changing. Self-driving cars are already being tested on the road, and manufacturers are scrambling to be ready for when regulators allow the general public to own them.
The problem with people driving, of course, is that along with the intelligence you get a bunch of other stuff: emotions, fatigue, distraction, drinking, smoking weed and so on.
The biggest obstacle to self-driving cars is that people love driving for reasons I will likely never understand. I tolerate driving because I have to. It is a necessary evil of modern life. If I could turn that task over to a machine, I would do so in a heartbeat.
Right now, scientists and regulators and entrepreneurs are trying to work out all the potential ramifications.
In a perfect world, all vehicles would be self-driving, but as I just pointed out, a lot of people are simply not going to want to give up the wheel. Many of us are control freaks. And we all know that backseat driver who is constantly pressing an imaginary brake. Can you imagine them in a driverless car?
In any event, for the foreseeable future there are going to be self-driven cars sharing the road with people-driven cars, which is a little bit scary.
Self-driving A.I. will not be perfect, of course, but it will be better at driving than we are, and self-driving cars will be involved in far fewer collisions. The crashes they are involved in will mostly be the fault of human drivers.
But that brings up the most salient point of the whole question.
In philosophy, there is a thought experiment referred to as the “trolley dilemma,” proposed by philosopher Philippa Foot in 1967.
In a nutshell, you are standing by a train track and notice a runaway train hurtling toward five workers standing in its path, oblivious to their impending doom. There is a lever nearby that you could pull to divert the train onto a sidetrack, but there is one equally oblivious worker on that track.
Would you kill one person to save five?
The crux of the dilemma is acting versus failing to act. If you pull the lever, you are actively killing the one worker; if you do nothing, five people die instead of one, but their deaths would have happened without your involvement.
The permutations of the decision are a very deep rabbit hole. We do not know anything about these workers. Maybe the five are regular Joes who spend their evenings drinking beer and bowling, while the lone worker is a student working his way through medical school who will one day discover a cure for cancer.
The bottom line is, there is no right answer. It is a value judgment.
Apply a similar scenario to driving. You are behind the wheel and realize you are about to crash head-on into a building. Your only choices are to stay the course and almost certainly kill yourself, or to swerve into a group of schoolchildren and surely kill some of them.
We would all like to think we would take our chances and save the children, but we likely have much less choice in that situation than we think. Self-preservation is almost certainly going to kick in, and small people are going to die.
That is not a problem for a machine. It is a problem for the programmers of that machine, however. Computers, even intelligent ones, can and will only do what we tell them to do. As a self-driving car engineer/philosopher, do you tell the software to always minimize loss of life or to always save the occupants of the car if possible?
Or do you let consumers decide?
“Okay, Mr. and Mrs. Jones, do you want the extended warranty with that and how would you like us to program your trolley dilemma software?”
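If it helps to picture what that dealership option might actually look like under the hood, here is a minimal sketch in Python. Everything in it is hypothetical: the CrashPolicy names and the choose_maneuver function are inventions for illustration, not anything a real manufacturer ships.

# Purely illustrative; the names and numbers below are made up for this column.
from enum import Enum, auto

class CrashPolicy(Enum):
    MINIMIZE_TOTAL_HARM = auto()   # save as many lives as possible, occupants included
    PROTECT_OCCUPANTS = auto()     # save the people inside the car if at all possible

def choose_maneuver(maneuvers, policy):
    """Pick a maneuver from a list of (occupant_deaths, bystander_deaths) estimates."""
    if policy is CrashPolicy.MINIMIZE_TOTAL_HARM:
        # Rank purely by expected total loss of life.
        return min(maneuvers, key=lambda m: m[0] + m[1])
    # Otherwise protect the occupants first, then minimize harm to everyone else.
    return min(maneuvers, key=lambda m: (m[0], m[1]))

# The head-on-crash scenario from above: stay the course or swerve into the children.
options = [(1, 0),   # continue into the building: the driver almost certainly dies
           (0, 3)]   # swerve: the occupant lives, some of the children do not
print(choose_maneuver(options, CrashPolicy.MINIMIZE_TOTAL_HARM))  # picks (1, 0)
print(choose_maneuver(options, CrashPolicy.PROTECT_OCCUPANTS))    # picks (0, 3)

The unsettling part is not the code, which is trivial; it is that somebody has to decide which of those two lines of policy gets written, and whether that somebody is an engineer, a regulator or Mr. and Mrs. Jones at the dealership.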