The Trolley Car Dilemma is a thought experiment in ethics: a runaway trolley car is heading toward five immobilized people… and you have the ability to throw a switch that will divert the trolley to a track where only one person will be killed. What do you do? Steve Green brings in a story where the Trolley Car Dilemma is presented to an artificial intelligence system, and the moral decisions it made will blow your mind.
Listen here on Soundcloud:
Gents, I don’t know if any of you are fans of the show “The Good Place,” but they had an episode dedicated to the Trolley Car Dilemma, which was actually quite amusing. The gist was that the Trolley Car setup was played out multiple times to torture a philosophy professor placed at the helm of the trolley.
What you said about not accepting the premise reminded me of something I heard about Captain Kirk: When he was in training, they used simulations to teach decision-making in different scenarios. He figured out how to rewrite the simulation.
Funny – That was the first thing I thought of too…
Kobayashi Maru
Orange man bad, George Floyd good. AI is only as good or bad as those that program the AI.
AI is to binary choice, as I am to The Kobayashi Maru. Change the rules of engagement. Don’t know about all’y’all, but I don’t like to lose (especially to machines like PS4).
Saavik: Then you’ve never faced that situation . . . faced death.
Kirk : I don’t believe in a “no-win” scenario. [pulls out his communicator] Kirk to Spock.
Maybe they should rename it Skynet. This is like the early versions of driver assist. It didn’t know how to react to motorcycles. Too often AI anything is taking responsibility away from the human. I have no use for it.
You’ll get a chuckle out of this, I’m sure… Not long ago, as the military was developing new sentry software to detect human beings attempting to infiltrate a designated area, the programmers decided the system was ready for a test. They got Marines to act as their test subjects; they wanted to see if these Marines could sneak up on their AI sentry. The AI sentry was very sophisticated and complicated and was supposed to “recognize” human beings whether standing, walking, crawling, or whatever. The Marines went to a dumpster and pulled out a bunch of cardboard boxes.…
I also like “Semper Gumby.” Always flexible.😉
My thoughts EXACTLY!
Excellent episode, Gentlemen. I appreciate the movie quote, “The only winning move is not to play,” from War Games. The discussion also brought to mind Captain Kirk’s comment on reprogramming the Kobayashi Maru test, “I don’t believe in the no-win scenario.”
AI isn’t coming up with these absurd answers. People are. Specifically evil people. Because the answers are based upon input from people with strong, evil biases. AI is a machine and it can be turned off. It will never become self-aware, never become self-perpetuating, never become self-healing the way our God-created selves are.
AI doesn’t scare me. Bat guano crazy people, and head-in-the-sand sheep people do. May God have mercy on us!
Exactly, I was going to point out the same thing and you beat me to it. At the end of the day, AI isn’t the problem; it’s fallen human nature and how people use AI that’s the problem, and it always will be. You can’t trust AI simply because you can’t trust the person who programmed it. There are other obvious examples that don’t cloud the issue by using the word “intelligence.” These examples are valid and relevant because AI isn’t really true intelligence anyway. It is merely a means for processing information and nothing more. For instance, I trust…
Not having read the books, I do not know if the movie I, Robot ends the same way. If so, Asimov needs a fourth or fifth rule to eliminate that option for the robots, as Scott pointed out. The logical end of the question “How do I stop humans from hurting themselves?” is to prevent them from doing anything hurtful. Then we end up with what the Stasi are trying to do and make the world into the movie Demolition Man. I think AI is not going to be the end of humanity. As you pointed out, nukes let us do…
In Asimov’s Robot books, the Three Laws of Robotics are a plot device that makes it feasible to have human-created “artificial persons.” Without that plot device you just end up with a very brief story where robots rule the world in whatever arbitrary manner their soulless circuitry deems best. Which can range from human beings being selectively bred, kept, and pampered like precious cattle to destroying the human race to “save us” from pain and suffering. The plot device of the Three Laws provides a dynamic tension. They are always in danger of subversion, corruption, nullification, or “creative” interpretation…
Ah. I think I misread part of your comment to say that AI would become/not become an actor on its own and that by doing so it would become a cause of the end of humans.
Instead, you were saying it was only a tool that might be used by humans to end humanity which makes a lot more sense.
I also fully agree with your last statement.
Yeah, no matter the means and method, the point of failure is always going to be human beings. Everything else is just ancillary to that basic fact. There’s no way around that unless we’re talking about something that was never in human control or development to begin with, like an extinction-level global strike or an alien invasion. If we make it and it bites us in the ass, that’s our own fault. An existential threat is an existential threat, and if we create our own existential threat then we can deal with it or not as a…
I found it interesting that as Asimov continued the Foundation series, he came full circle and had the original humanoid robot from The Caves of Steel as the protagonist. That robot came up with the Zeroth Law of Robotics: no robot may harm humanity or, through inaction, allow harm to come to humanity. This ended up being a harbinger of the end of humanity as we know it. (Is it a spoiler if the book is 30+ years old?) The robot got to a point where it needed a human to make a critical decision as to the path…
I am not sure many of the AI developers are evil. Some of them certainly are, but others are just incompetent. As Scott said, Garbage In, Garbage Out. The trolley problem requires one particular caveat: the people strapped to the tracks are … people. Choosing between Trump and a sock should return a “You want me to pick between a living human and an inanimate object?” response, or whatever action is required to not have the human die (depending on where the questioner puts the sock and where the human is). If they did have such a check in place and…
“Bat guano crazy people, and head-in-the-sand sheep people”
My wife works part time in a boutique in our downtown area to keep active and for the social aspect; the store owners and workers all get along and are very friendly. One of the guys at the Coop is Kamalacrazy (he was Biden crazy before that). He is incapable of saying why he supports her, just that Trump is evil. It is really quite sad. I think back to an ad from my youth: “a mind is a terrible thing to waste.” Willful ignorance is a real thing. I have…
I’d be willing to bet that if your subject 60-year-old man were pressed to name the reasons WHY he hates Trump, he would be no more coherent than he is about why he wants a dingbat for President. He would probably sputter a bunch of nonsense and debunked lies as his reason, if he could even do that much. In either case, whatever reasons he gives for adoring Kamalameleon or hating Donald Trump, they’re not the real reasons he actually acts upon. Those are just excuses he gives the world, and likely lies to himself about, for his…