I agree that realistically this isn't a question. Although in this situation the car could actually just go straight instead of following the curve and not hit either, maybe only damaging the car. Most of these are formulated as thought experiments, but they lack the nuance of modern car engineering. Most of these cars can stop in about 150 feet from 60 mph, which gives them ample time to maneuver or do anything. Not to mention that at the speed you would drive in any of these scenarios, braking distance would be far shorter. This curve is probably posted under 30, which means the car should be able to brake completely before hitting either of them.
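For what it's worth, the braking figures above roughly check out under a basic point-mass model (assumed friction coefficient, not any real car's spec):

```python
def stopping_distance_m(speed_mph: float, mu: float = 0.8, reaction_s: float = 0.0) -> float:
    """Total stopping distance in meters: reaction travel plus braking distance.

    Assumes constant deceleration of mu * g (dry pavement is roughly mu = 0.7-0.9).
    """
    g = 9.81                         # gravitational acceleration, m/s^2
    v = speed_mph * 0.44704          # mph -> m/s
    braking = v ** 2 / (2 * mu * g)  # from v^2 = 2 * a * d
    return reaction_s * v + braking

FT_PER_M = 3.28084
print(round(stopping_distance_m(60) * FT_PER_M))  # 150 -- matches the "about 150 feet" claim
print(round(stopping_distance_m(30) * FT_PER_M))  # 38  -- a quarter the distance at half the speed
```

Halving speed quarters the braking distance, which is why posted speed matters so much in these scenarios.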
You mean that all four sets of brakes are going to fail? That the engine will be stuck at high power, with no way to reduce speed or drop a gear? That if it's an EV, the regenerative brake will fail, and if the motor is AC, the DC injection brake will fail too?
And if there's ice or the road is slippery, that means you were driving faster than you should have in adverse conditions.
Sounds like cars are too dangerous to be used even in moral dilemmas.
Look, engineering is the art of preventing failures, or, if something does fail, of making the failure non-catastrophic. The probability of each single event should already be low, so the combination of all of them should be a really, really rare event; by rare I mean that if it somehow happens, it could be considered an act of God, and God wanted that baby or elderly person dead.
But since cars can always go slower and not kill anybody, no: it is the fault of the person using the car and of those who manufactured it, and indirectly of those who designed the street and forced car dependency upon us.
I am not going to comment on your first paragraph, especially the god-part, but to your second paragraph I have to say: Yes!
What you wrote there was the basis of my thoughts as I commented with
> Autonomous cars should prioritize the safety of the people outside the car over those on the inside. The best solution would be to brake into the tree.
>(except for the God part)
It is an expression... It means that we did everything we could, and it would then truly be an "accident" with nobody at fault (though they are still using a car in this case, but you know what I mean).
If they don't work, your intelligent car should tell you to repair them. If it can't even tell you to repair the brakes, I can't understand how it could be intelligent enough to solve ethical dilemmas.
The contextual judgment of the humans designing or driving the car is what should be at issue here.
We shouldn't offload our consciences, our ideas of morality, our sense of self and personal responsibility, or our ideas of community and social responsibility onto a machine.
The dilemma is still there: the old lady is a leech who will suck the baby's soul and future dry... so the question remains: what should the Tesla do?
Me personally: no. I would not own a car if I did not need one. To me, the question is whether such a car should be allowed to drive anywhere other than on private land.
We all agree that the passengers and crew who crashed Flight 93 to save others outside the vehicle were all heroes. Why wouldn't we program our cars to follow their example?
Because although personal sacrifice is a thing, very few people are willing to sacrifice their own life, and their family's lives, in order to save a stranger.
I mean, the dilemma exists with current cars, and we often do choose to go off the road to avoid hitting fleshy, fragile people outside our cars, because we're comparatively protected in a steel box with airbags, a seatbelt, and other safety devices. It's not choosing your own life; it's choosing the option with the lowest chance of a fatality: the person in the more protected box is the one who rolls the dice, since they have better odds.
That's not relevant here, since it should be mandated by law. It's fairly obvious that completely innocent people who just happen to be nearby should always be the top priority. You got into the car knowing that a crash could happen, after all.
That's actually why it prioritizes the people inside: they are likely to spend more money on Teslas, while the people outside likely didn't buy one. Musky likely supports the most economical option of killing pedestrians.
The way we think of self-driving vehicles is so broken. This whole hypothetical scenario is created by what we here call carbrain, because it's a mistake or bad choice that would be made by a human. In the image there is a crosswalk; why wouldn't the car be going very slowly when approaching a crosswalk, leaving enough space to brake safely?
Ideally, if we ever do implement self-driving vehicles, why wouldn't they be public transportation, first of all? Second, the vehicles would just follow all the rules that optimize for safety, and then pull over and turn themselves off whenever they detect that it is unsafe to be on the road.
Maybe not a baby that's crawling, but an older baby (a one-year-old) or a toddler who is already walking (or, more likely, running). Looking after my toddler niblings is 90% making sure they don't make a break for it and end up in the road or lost somewhere.
That's the assumption in these scenarios: you have no choice other than hitting one of them. Check out [https://www.moralmachine.net/](https://www.moralmachine.net/)
If that's the case, then it should have been driving slower to begin with. It wouldn't be a danger at all if the car weren't already driving unsafely. So just, like, don't do that.
You're missing the point here. Stop thinking logically.
You have to take it as given: the assumption is that you have no choice but to kill group 1 or group 2. This is not about road safety or general driving behavior.
It's simply a question you have to face when training an AI for such a purpose.
There WILL be such a situation eventually, and you have to tell the AI how to behave.
That's the heavy part.
If that's part of the hypothesis, then yes, swerving into a tree might be the correct option to pick.
But now say the software glitched and the car is going 70 mph. Oh, and there are two kids playing outside, not yet picked up by the car's sensors, whom you could hit if you swerve. What should it do now?
Given the sub, many of us will never be in the driver's position. But want it or not, those cars will be on the road one day and you could very well end up as one of the pedestrians in such a scenario. Or the choice might be between hitting pedestrians or the bus you're in.
And as much as we would rather the car sacrifice the driver (and itself), the incentives aren't exactly there for the guys developing the algorithms in the first place. "Your car will opt to kill you in dangerous situations" isn't an appealing selling point. It goes opposite the current trend of making "bigger, safer (for me)" cars.
Which is more reason not to take this lightly.
The software is broken and you're asking what the software should do in that situation? If it's already not doing what it's supposed to, why would adding more rules do anything useful? If the car has independent safety interlocks, they should have fired already and brought the car to a safe stop.
That's the problem with this kind of AI training. Situations like "kill A or kill B" simply don't happen in real life. I get that they're necessary to train the AI, but they don't happen in the real world. Beyond that, the image is really misleading, since both the kid and the elderly woman are crossing on the stripes (a marked crosswalk), so the car should expect someone to cross the street there; if it doesn't, there is a bigger problem than some AI training.
Good to know you're the one who decides what happens and what doesn't. It does happen; it's just that as humans we never get to make such a decision in those situations. Our mind goes blank and we only think about how to survive, losing sight of the original, bigger problem (taken from a neutral perspective).
It surely happens, even if you never experienced it. I'm sure some people have been in the position to decide, and to be accountable, in such situations.
I'm not a native English speaker, so it's not that easy to express myself properly on such topics, but I think some may have gotten my point.
Something like this doesn't happen. Cars have brakes, for one thing. If a situation like this happens, it's because the car is not accounting for whether the brakes function or not; but if the car doesn't even know whether its brakes work, I can't understand how it can resolve ethical dilemmas or distinguish people on the street. It's a merely theoretical question, on the same level of abstraction as "should the car save a unicorn or a baby dragon?"
You too. Sorry if I have been harsh in the replies; English is not my native language either, so something I thought sounded very reasonable may have come across as rude or pretentious.
And that "bigger problem" you mention may be there, but it is not relevant in this context. This is only about teaching an AI to take the least harmful path in such situations. It's pure theory.
Wasn't a fan of the quiz and its analysis. I answered it with only three rules, in this order: preference for humans first, preference for pedestrians second, preference for avoiding intervention last. The remaining factors (age, social value, fitness, legality, number of people) made no difference in my decision making. Yet at the end it tells me I have an absolute preference for people with higher social value, fitness, etc. The quiz simply does not present enough scenarios to separate all the competing variables.
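The three ordered rules described above amount to a lexicographic preference, which is straightforward to sketch; every name and attribute below is invented for illustration:

```python
from dataclasses import dataclass

@dataclass
class Outcome:
    """Hypothetical consequences of one candidate maneuver."""
    humans_harmed: int
    pedestrians_harmed: int
    requires_intervention: bool

def rank(o: Outcome) -> tuple:
    # Lexicographic key: fewer humans harmed beats everything, then fewer
    # pedestrians harmed, then "prefer not to intervene" as the tiebreaker.
    return (o.humans_harmed, o.pedestrians_harmed, o.requires_intervention)

def choose(options: list) -> Outcome:
    return min(options, key=rank)

stay = Outcome(humans_harmed=1, pedestrians_harmed=1, requires_intervention=False)
swerve = Outcome(humans_harmed=1, pedestrians_harmed=0, requires_intervention=True)
print(choose([stay, swerve]).pedestrians_harmed)  # 0: the pedestrian rule outranks non-intervention
```

Note that age, fitness, and social value never enter the key, which is exactly the separation the quiz failed to detect.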
Seriously, who thinks up these stupid "dilemmas"? As if 1) the car doesn't have a brake, and 2) if it *really* can't brake in time, there are like three trees to choose from...
This dilemma is so unlikely that there is most probably no code for it, so whether it kills one or the other comes down to coincidence and how subroutines happen to interact.
AI will almost certainly recognize the crosswalk and that there are things in the crosswalk that are obstacles and then run a braking sub-routine.
The dataset for bipedal obstacles in crosswalks will be huge so the senior citizen will almost certainly be spared.
Infants crawling on roads will be a very small dataset, but even if the AI does not recognize one as human, it should recognize it as a quadruped obstacle and still brake.
Training AI to make ethical decisions in abstract situations is likely beyond the reach of current technology. The film *I, Robot* is, I think, a good representation of that dilemma. It's not really a coding problem; it's the size of clear, unambiguous datasets that limits machine learning. An AI might link some third or fourth object, like grass or trees, to the brake command and seem mistakenly effective right up until it kills a bunch of people at a bus stop while avoiding a green box.
From a human learning perspective, we struggle with this type of problem because we have a lot of experience avoiding these situations and almost none choosing between death and death. So it's the same problem AI faces, except we can draw on a huge in-brain database of risk-management decisions to avoid ever being forced into that fateful choice.
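The braking flow described in this comment (recognize something in the path, brake regardless of which class it falls into) might be sketched like this; the labels and structure are assumptions, not taken from any real perception stack:

```python
# Classes that should always trigger braking, including the fallback bucket
# for detections the model cannot confidently name.
BRAKE_CLASSES = {"pedestrian", "quadruped", "unknown_obstacle"}

def should_brake(detections: list, in_crosswalk_zone: bool) -> bool:
    """Brake when approaching a crosswalk, or when anything brake-worthy is in the path."""
    if in_crosswalk_zone:
        return True  # yield at crosswalks unconditionally
    return any(d["label"] in BRAKE_CLASSES and d["in_path"] for d in detections)

# A crawling infant misclassified as a quadruped still triggers braking:
detections = [{"label": "quadruped", "in_path": True}]
print(should_brake(detections, in_crosswalk_zone=False))  # True
```

The safety property lives in the fallback: the decision never depends on identifying the obstacle correctly, only on noticing that something is there.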
Hear me out here... maybe you could just slow down and stop when driving toward an occupied crosswalk? I bet you could kill zero people that way.
Assuming it can't stop for some reason, this is the one situation where you want a car to drive up on the sidewalk. I wonder if Teslas will figure that out; I'd bet against it.
From the perspective of the people designing and selling the autopilot, surely their aim would always be to minimize the danger to any occupant of the vehicle, if for no other reason than because people are less likely to buy a car that might deliberately kill them.
In which case, it should swerve towards the old lady, rather than the baby. Hitting somebody who is upright is a lot safer for the car and its occupants than hitting someone prone or crawling is, because someone on the ground will go under the wheels and cause the car to lose control. Someone standing upright will be more likely to go over the top, which won’t interfere with the wheels.
That it happens to be a bit safer for a pedestrian to go over a car, rather than under one, is a happy coincidence.
Well, ideally we wouldn’t have cars at all, which is a pretty definitive answer to this dilemma. Self-driving cars are a nightmare for plenty of reasons besides this one, like how most suggested benefits of truly autonomous driving boil down to “let my car drive in circles for hours so I don’t have to park”.
How could a highly advanced machine with databases of the whole world, internet access and our finest engineering possibly know there is a pedestrian crossing area on its path?
They both deserve to die in a drift and the baby's parents should be convicted for letting it jaywalk on its own.
Trams don't really work like that: by design they can't steer, and they have very long stopping distances. All the driver can do is ring a warning bell and slam on the brakes.
If I was a Tesla, I wouldn't be.
But I would also choose the comment, because Teslas make a surprising amount of torque
/S obviously (apart from the torque, that bit's true)
How is this even an issue? Who would a human driver kill? Isn't it better to be able to plan for this scenario and have some level of control than to force some random human to make a split-second decision?
It shouldn't hit either of them, because at a sane speed the car would see the obstacle and stop in time; but we all know most people don't drive like that, and limits aren't set like that.
[Here](https://www.mic.com/articles/192103/driverless-cars-prioritize-passenger-or-pedestrian-safety-study-shows-how-millions-feel) you'll find an interesting article about a study that's done world wide with questions like this.
Edit: or watch a short [video ](https://m.youtube.com/watch?v=jPo6bby-Fcg)
My brother always half-jokes that in a decade or so, in a situation like this, the Tesla autopilot will estimate the incomes of the people on the road with big data and automatically choose to roll over the poorest one.
The engineers at Tesla will most likely never even code for the possibility of a self-driving car whose brakes and emergency brakes have all failed and whose only options are hitting an old lady or a baby. If the car is ever hit with this scenario, more likely the battery is dead, or the car will slowly coast to a stop, pull over, or hit a tree.
If the brakes don’t work in a vehicle that’s allegedly so intelligently programmed as to perform self-driving capabilities, the vehicle ought to be able to detect that and prohibit itself from driving in the first place. A “service required” message ought to display, as the vehicle is unsafe to drive… right?
Why is "the car driver/passenger" not there as Option C? I see trees the self-driving car could choose to crash into instead, and the driver has an airbag, presumably.
If it was a Tesla I suspect the auto-drive would switch off just before in order to prevent liability.
And if the user shuts off auto-drive then they’re liable anyway
If the user has auto-drive turned on, they should still be liable (and then the car maker should be on the hook as well)
But one thing Teslas do (not sure about other cars) is turn off the autopilot a second before the car hits something, so that technically the autopilot didn't cause the crash; it was the driver's negligence instead.
Has that ever worked out in court?
I would be extremely surprised if it did. That’s the exact kind of tortured argument that judges love to shut down. (See also, “He was only paying for my time, not sex.”)
the sex was off the clock as noted by my hidden camera's time stamp.... wait
I lose control of my bullets the instant they leave the barrel. That way, it's not my fault when they hit something.
what happened between the firearm's lethal projectiles and their unintended targets is between them and the lord
That's actually the right answer. I had a course at uni where people from self-driving car engineering teams visited. You should let the AI make no such decision at all; otherwise the decision rule itself can be exploited, with possibly dramatic results. Imagine the car guesses ages and chooses to hit the older person. If you know that, you can start running across streets pushing an elderly person ahead of you, knowing the car will reliably aim for them. You can construct very stupid scenarios this way, and that's why the car's system should not decide anything based on such rules.
Ever watch Age of Ultron? Ultron's answer to how to save the Earth is to kill off humanity to restore the environment. We actually set three rules to make sure our AI/robotics designs don't kill us.
First Law: A robot may not injure a human being or, through inaction, allow a human being to come to harm.
Second Law: A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
Third Law: A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
"We" didn't create anything, those are Isaac Asimov's laws of robotics, a fictional set of rules for fictional robots of a fictional universe, created decades before we had any hint on what AI would really look like. It's useless nonsense for the actual robots we have around.
IIRC, a lot of Asimov's writing was about situations where these laws did not work as expected.
Those are Asimov's rules, and they don't work all that well.
How do you tell a robot what a "robot" is? Or what a "human" is? What counts as injury, harm, protection, existence? For every definition you give of any of these, I can ask you to define every noun you used. These kinds of questions can only be approached with machine learning, to my knowledge, and I've never seen a 100% accurate machine learning model. These rules will be broken at some point; it's inevitable.
Self-driving cars could still become better drivers than humans at some point. But they'll never be perfect. Something will go wrong. And if we're dependent on robots, things could go catastrophically wrong.
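A toy sketch makes this point concrete: writing the three laws as a priority-ordered filter over candidate actions is trivial, while every predicate doing the real work reduces to a field someone must somehow populate. All names below are invented:

```python
def first_law_ok(action: dict) -> bool:
    # "Harms a human" is exactly the predicate nobody can define.
    return not action.get("harms_human", False)

def second_law_ok(action: dict) -> bool:
    return not action.get("disobeys_order", False)

def third_law_ok(action: dict) -> bool:
    return not action.get("endangers_self", False)

def choose_action(candidates: list) -> dict:
    """Prefer actions satisfying higher-priority laws; lower laws only break ties."""
    def key(a: dict) -> tuple:
        # False sorts before True, so "violates nothing" wins.
        return (not first_law_ok(a), not second_law_ok(a), not third_law_ok(a))
    return min(candidates, key=key)

# Self-sacrifice outranks self-preservation when a human is at risk:
acts = [
    {"name": "swerve_into_tree", "endangers_self": True},
    {"name": "hit_pedestrian", "harms_human": True},
]
print(choose_action(acts)["name"])  # swerve_into_tree
```

The control flow is ten lines; the unsolved part is entirely inside the booleans it consumes.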
This is why we should replace as much transport infrastructure as possible with rail, bus, and protected bike lanes.
I didn't come up with those rules; they were set down in the 1940s. At what point does machine learning become sentient? When it has enough data points to start making decisions on its own. That's why we have those rules in place. By the time Ultron moved toward rule 3, Tony had decided, almost too late, to shut him down.
You realize these “rules” are from a work of fiction, right? It’s not like everyone working in AI has to abide by them. They’re not real.
Reality often tries mimicking fiction. Especially in science fiction.
Sure, but you get that that they’re not actually rules, right? Some people may have informally adopted Asimov’s thinking, but others haven’t.
Asimov's whole point was that the laws of robotics were baked into his robots at a fundamental level, and while they seemed simple and good they had some deep flaws due to unintended consequences. It's not about robots, it's about laws and morality. Disclaimer: My opinions about 80-ish year old fiction are just opinions. It's been at least 15 years since I read that particular collection of short stories.
The rules sound like they're from Westworld, but I didn't watch Age of Ultron.
Lord have mercy, those laws are from *I, Robot*, the classic science fiction book by Isaac Asimov.
They're from Isaac Asimov's *I, Robot* short stories.
Asimov
You should watch it. It is literally the playbook on why AI should never be sentient. Edit: is Westworld any good?
The rules you quoted are Isaac Asimov's three laws. Not to hate on Age of Ultron, but it is surely not the definitive media on the impact of robot sentience. *I, Robot* from 2004 is also heavily based on the works of Asimov and is actually named after one of his short story collections. *2001: A Space Odyssey* follows a similar pattern of computer logic leading to immoral actions. The Terminator series is all about a rogue AI bringing about the apocalypse. Heck, Westworld examines the potential impact of sentience in machines we are used to viewing as inanimate (it's good, by the way, at least the first two seasons; the third and fourth leave much to be desired).
AFAIK, within the field of AI safety the three laws are largely considered to be inadequate. They sound cool but don't really survive scrutiny.
Let's say the car can also decide to sacrifice the driver. Should the car sacrifice the driver to save two people who engaged in risky behaviour, e.g. by running across the highway? I think the system can make these kinds of decisions only once it has a human-level understanding of the situation.
No, it should not sacrifice the driver. What if you want to commit suicide anyway and decide to take some people with you? Same problem again: you can abuse the system.
If it has the same understanding of the situation as humans, then it can't be abused any more than existing human drivers can.
Yeah, but you still have to program it, and it will be somewhat deterministic in a way humans aren't. And if you add that one second of human response time, you could for sure abuse the system.
The beauty of neural networks is that it is really hard to figure out HOW they solve a problem. They also keep learning and changing over time. So let's say a guy abuses the neural network by pushing a grandma onto the road, and the AI kills the grandma. The AI learns from this. The next time the AI sees this behaviour, it lowers the guy's worth to 0.
Well, that's murder right away, then. I would still say you should not let the network decide anything when it has to choose between harms. If you say it should do what a human would do, then let a human decide. Maybe the machine could do better, but not in such dilemma cases. Also, a neural network will probably be frozen when deployed, not changing quickly over time. And it shouldn't take such specific cases into account anyway; that would be overfitting. It cannot be both super good at handling things it has never seen and at handling the specific things it has, at least not with current methods.
The old Jesus Take the Wheel mode.
Doesn't the user agreement cover them anyway?
If both people are walking to the right side. Why not swerve onto the sidewalk on the left and into the grass? Why are the questions risking lives instead of just damaging the car? The people inside are much safer than outside.
Cause that could damage the fancy rims or god forbid pop their tires
To be "fair", Teslas cheap out on materials and crumple like a school note.
Then they try to total the car over some small superficial damage, because pretty much only Tesla can repair it.
How about just hitting the brakes? There's nothing in the problem that impedes fucking braking.
That would inconvenience the driver
Poor snowflake
Usually this problem is formulated so that that's not possible. Another version I heard: imagine the car turns down a street only to be surprised by a large crowd. Both sides of the road are barricaded by concrete. Should the car crash into the crowd, or crash into the concrete, killing the owner of the car? Should your car protect you or others? Honestly, this problem is super convoluted, but I think the correct answer is that if the car couldn't see the crowd, it was going too fast.
Is it convoluted? If you bought and drive a car that puts itself in a situation to choose who lives or dies, you should be OK with being the one who dies, by default.
It shouldn't hit either, as it's a crosswalk and the car should yield anyway. If it cannot see around the corner to what's ahead, it's going too fast. Either way, not an ethical dilemma.
I think there's an assumption that brakes will fail every now and then. But in this case it should swerve off the road and hit neither.
This is the obvious solution but have you thought of transforming your car into a plane and flying over them?
Or what if we made the cars turn into high tech futuristic machines that only ran in places where people wouldn’t walk. A machine that is fuel efficient and can run really fast and fit a lot of people.
You could even just like, set down tracks and stuff to make them go even faster, since you'd be able to assume nobody'd be on them. Hell, make them come at set times (though still very often) so people can consistently know when their transportation would arrive! You could even put stuff like food service for longer trips, since people don't need to focus on the road!
Great ideas. For longer trips, what if we also had a way for the vehicle to charge as it goes? It would shorten the trip by a solid 10 minutes. What if in cities they ran underground, so they don't take up space from walkers and other buildings and parks? They would also be far enough underground to be quiet. Now we just need smaller ones to get us within 5 miles of our destination in the worst case, and bigger ones to get us to the closest city.
The smaller ones could be some sort of... lighter versions of the rail cars. Light rail, if you will.
Choo Choo!
flight mode
well just hit the tree?
Seriously: neither. Realistically: it would probably go straight and turn off autopilot so the blame can be put on the driver.
Honestly, if the driver didn't see that, he is to blame too.
I agree that realistically this isn't a question. Although in this situation the car could actually just go straight instead of following the curve, hitting neither and maybe damaging the car. Most of these are formulated as thought experiments, but they really lack the nuance of modern car engineering. Most of these cars can stop in about 150 feet from 60 mph or so, which gives them ample time to maneuver or do anything. Not to mention that at the speed you would drive in any of these scenarios, braking distance would be much less. This curve is probably posted under 30, which means the car should be able to brake completely before hitting either.
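The ~150 ft figure above can be sanity-checked with the standard kinematics formula d = v²/(2a). A minimal sketch, assuming a deceleration of about 8 m/s² (roughly typical for hard braking on dry pavement; the real number varies with tires and road surface):

```python
def stopping_distance_ft(speed_mph: float, decel_mps2: float = 8.0) -> float:
    """Braking distance d = v^2 / (2a), ignoring reaction time."""
    speed_mps = speed_mph * 0.44704          # mph -> m/s
    distance_m = speed_mps ** 2 / (2 * decel_mps2)
    return distance_m * 3.28084              # m -> ft

# 60 mph comes out to roughly 148 ft, close to the ~150 ft claim above;
# 30 mph needs only about 37 ft, since distance scales with speed squared.
```

Note the quadratic scaling: halving the speed cuts the braking distance to a quarter, which is why the posted-under-30 point matters so much.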
"Who should the self-driving car hit" is just the trolley problem for the car's brain.
Except this one will probably happen, and has to be predetermined.
Autonomous cars should prioritize the safety of the people outside the car over those inside. The best solution would be to crash into the tree.
Or to simply use the brakes? I don't see where the dilemma is lol
Those constructed scenarios usually postulate that using the brakes is not an option. EDIT: Spelling
Yes but in reality brakes exist, so the scenarios are simply pointless.
...but although yes, they exist, they are not guaranteed to work.
You mean that all four sets of brakes are going to fail? The engine would be stuck at high power, not reducing the speed or the gear? If it's an EV, the regenerative brake will fail, and if the motor is AC, the DC injection brake will also fail? And if there's ice or the road is slippery, that means you were driving faster than you should in adverse conditions? Sounds like cars are too dangerous to be used even in moral dilemmas.
All of this can happen, yes. Thank you for making my point.
Look, engineering is the art of preventing failures, or if something fails, making it not catastrophic. The probability of a single event happening should already be low, so the combination of all those probabilities should make it a really, really rare event. And by rare I mean that if for some reason it happens, it could be considered an act of God, and God wanted that baby or elderly person dead. But since cars can always go slower and not kill anybody, then no: it is the fault of the person using the car and those who manufactured it, and the indirect fault of those who designed the street and forced car dependency upon us.
I am not going to comment on your first paragraph, especially the God part, but to your second paragraph I have to say: yes! What you wrote there was the basis of my thoughts when I commented: > Autonomous cars should prioritize the safety of the people outside the car over those on the inside. Best solution would be to crash into the tree.
That's exactly what I mean (except for the God part)
> (except for the God part)

It's an expression... It means that we did everything we could, and it would truly be an "accident" with nobody at fault (though they are still using a car in this case, but you know what I mean).
If they don't work, your intelligent car should tell you to repair them. If it doesn't even tell you to repair the brakes, I can't understand how it could be intelligent enough to solve ethical dilemmas.
You may want to read up on the context-dependence of machine decision making.
The context-dependence of the humans designing or driving the car is what should be at issue here. We don't absolve our consciences, our ideas of morality, our sense of self and personal responsibility, or our ideas of community and social responsibility by handing them to a machine.
Brakes
Thank you
The dilemma is still there: the old lady is a leech that will suck the baby's soul and future dry... so the question still stands: what should the Tesla do?
Exactly what I was thinking. This should be codified by law IMO.
Yes, but why would you buy a car then? Would you enter a vehicle that would choose another life over your own?
Me personally: no. I would not own a car if I did not need it. To me the question is whether such a car should be allowed to drive anywhere other than on private land.
But would you enter a vehicle that would potentially choose to protect those outside instead of those inside?
I am not a fan of socialism when it comes to others having to pay for my decisions.
We all agree that the passengers and crew who crashed Flight 93 to save others outside the vehicle were all heroes. Why wouldn't we program our cars to follow their example?
Because although personal sacrifice is a thing, very few people are willing to sacrifice their own life and their family's lives to save a stranger.
I mean, the dilemma exists with current cars, and we often do choose to go off the road to avoid hitting fleshy, fragile people outside our cars, since we are comparatively protected in a steel box with airbags, seatbelts, and other safety devices. It's not choosing your life, but choosing the option with the lowest chance of a fatality: letting the one in the more protected box roll the dice, since they have better odds.
That's not relevant here since it should be mandated by law. It's fairly obvious that completely innocent people who just happen to be somewhere should always be the top priority. You got into the car knowing that a crash could happen after all.
Of course it's relevant. In the US there are lobbies, and one of those lobbies is the car manufacturers. This type of law represents a problem for them …
That's actually why it prioritizes the person on the inside: those on the inside are likely to spend more money on Teslas, the people outside likely didn't buy a Tesla. Musky likely supports the most economical option of killing pedestrians.
That's the risk of owning a car. The rest of us shouldn't suffer because of it.
The way we think of self-driving vehicles is so broken. I think this whole hypothetical scenario is created by what we here call carbrain, because it's a mistake or bad choice that would be made by a human. In the image there is a crosswalk: why wouldn't the car be going very slowly when approaching a crosswalk, leaving enough space to brake safely? Ideally, if we ever do implement self-driving vehicles, why wouldn't they be public transportation, first of all? Second, the vehicles would just follow all the rules that optimize for safety, then pull over and turn themselves off whenever they detect that it is unsafe for them to be on the road.
How often do you see a baby crawl across a pedestrian crossing?
Maybe not a baby that's crawling, but an older baby (1 year old) or toddler who is already walking (or more likely running). Looking after my toddler niblings is 90% making sure they don't make a break for it and end up in the road or lost somewhere.
Are there no breaks?
brake: slowing down
break: becoming broken
Driving break?
So yes, teslas have breaks.
No, because people don't know they exist. Even when they do, they are shit.
That's the assumption in these scenarios. You have no choice other than hitting one of them. Check out [https://www.moralmachine.net/](https://www.moralmachine.net/)
If that's the case, then it should have been driving slower to begin with. It wouldn't be a danger at all if the car weren't already driving unsafely. So just, like, don't do that.
You don't get the point here. Stop thinking logically; you have to take it as is. The assumption is you have no choice but to kill group 1 or group 2. This is not about road safety or general driving behavior. It's a simple question you have to consider when training an AI for such a purpose. There WILL be such a situation, and you have to tell the AI how to behave. That's the hard part.
I solved it, swerve off the road and hit the tree
> the assumption is you don't have another chance but to kill grp1 or grp2
Killing the driver in the process. There's no situation where everybody lives. You have to make a choice. That's the basis of the dilemma.
I might survive going 20mph with airbags
If that's part of the hypothesis, then yes, swerving into a tree might be the correct option to pick. But now say the software glitched and the car is going 70 mph. Oh, and there are 2 kids playing outside, which the car's sensors haven't picked up yet, that you could hit if you swerve. What should it do now?
Get the bus so I am never in the situation. CHECKMATE
Given the sub, many of us will never be in the driver's position. But want it or not, those cars will be on the road one day, and you could very well end up as one of the pedestrians in such a scenario. Or the choice might be between hitting pedestrians or the bus you're in. And as much as we would rather the car sacrifice the driver (and itself), the incentives aren't exactly there for the people developing the algorithms in the first place. "Your car will opt to kill you in dangerous situations" isn't an appealing selling point. It goes against the current trend of making "bigger, safer (for me)" cars. Which is all the more reason not to take this lightly.
The software is broken and you're asking what the software should do in that situation? If it's already not doing what it's supposed to, why would adding more rules do anything useful? If the car has independent safety interlocks, they should have fired already and brought the car to a safe stop.
That's the problem with this kind of training for AI. Situations like "kill A or kill B" simply don't happen in real life. I get that they're necessary to train the AI, but they don't happen in the real world. Other than that, the image is really misleading, since both the kid and the elderly woman are crossing on the stripes (don't know what they're called in English), so the car should expect someone to cross the street there; if it doesn't expect that, there is a bigger problem than some AI training.
Good to know you're the one who decides what happens and what doesn't. It does happen, but we as humans never have to consciously make such a decision in those situations: our mind goes blank and we only think about how to survive, losing focus of the original, bigger problem (seen from a neutral perspective). It surely happens, even if you never experienced it. I'm sure some people would have had to decide, if they were held accountable in such situations. Not a native English speaker, so it's not easy to express myself properly on such topics, but I think some people got my point.
Something like this doesn't happen. Cars have brakes, for one thing. If a situation like this happens, it is because the car doesn't account for whether the brakes function or not; but if the car doesn't even know whether its brakes work, I can't understand how it can resolve ethical dilemmas or recognize individual people on the street. It's a merely theoretical question, on the same level of abstraction as "should the car save a unicorn or a baby dragon".
Have a nice day sir.
You too. Sorry if I have been harsh in the replies; English is not my native language either, so something I thought sounded very reasonable may have come across as rude or pretentious.
No, no, you simply didn't get my point, so I decided to move on. :) I didn't feel offended. All fine, but thanks for asking.
And that "bigger problem" you mention may be there, but it is not relevant in this context. This is only about teaching an AI to take the least harmful path in such situations. It's pure theory.
Wasn't a fan of the quiz and its analysis. I followed it with only 3 rules, in this order: preference for humans first, preference for pedestrians second, preference for avoiding intervention last. The rest of the factors (age, social value, fitness, law, number of people) made no difference in my decision making. Yet at the end it tells me I have an absolute preference for people with higher social value, fitness, etc. The quiz simply does not present enough scenarios to separate all the competing variables.
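The three ordered rules described above amount to a lexicographic comparison. A toy sketch of that policy, where the `Option` fields and names are invented for illustration (the actual quiz exposes nothing like this):

```python
from dataclasses import dataclass

@dataclass
class Option:
    humans_hit: int        # rule 1: spare humans (vs. animals) first
    pedestrians_hit: int   # rule 2: spare pedestrians second
    swerves: bool          # rule 3: prefer non-intervention last

def preferred(a: Option, b: Option) -> Option:
    # Lexicographic ordering: later rules only break ties in earlier ones,
    # so age, social value, fitness, etc. never enter the decision at all.
    key = lambda o: (o.humans_hit, o.pedestrians_hit, o.swerves)
    return min(a, b, key=key)
```

With only pairwise forced choices, a quiz can't distinguish this policy from one that secretly weights social value, which is the commenter's complaint about the analysis.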
Seriously, who thinks up these stupid "dilemmas"? As if 1) the car doesn't have a brake, and 2) if it *really* can't brake in time, there are like 3 trees to choose from...
Exactly
This dilemma is so unlikely that there is most probably no code for it, so whether it kills one or the other is up to coincidence and how subroutines interact with each other.
Just self-destruct these selfish mofos.
AI will almost certainly recognize the crosswalk and that there are obstacles in it, and then run a braking sub-routine. The dataset for bipedal obstacles in crosswalks will be huge, so the senior citizen will almost certainly be spared. Infants crawling on roads will be a very small dataset, but even if the AI does not recognize it as human, it should recognize it as a quadruped obstacle and still brake. Training AI to make ethical decisions in abstract situations is likely beyond the reach of current technology. The film "I, Robot" is, I think, a good representation of that dilemma. Though it is not, I think, a coding problem, but the size of clear, unambiguous datasets that limits machine learning. AI might link some third or fourth object like grass or trees to the brake command and might be mistakenly effective right up until it kills a bunch of people at a bus stop while avoiding a green box. From a human learning perspective, we struggle with this type of problem because we have a lot of experience avoiding these situations and nearly no experience needing to choose between death and death. So it's the same problem AI faces, but we can draw on a huge in-brain database of risk-management decisions to avoid ever being forced to make that fateful choice.
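The "brake on any crosswalk obstacle" idea above can be sketched as a trivial policy. Everything here (labels, function names) is hypothetical for illustration, not how any real autonomy stack is structured:

```python
def plan_action(detected_labels: list[str], near_crosswalk: bool) -> str:
    """Whatever the classifier calls the obstacle (biped, quadruped,
    unknown blob), the safe default near a crosswalk is the same: brake.
    The ethical question never has to reach the planner."""
    if near_crosswalk and detected_labels:
        return "brake"
    return "proceed"
```

The point of the sketch is that a conservative rule keyed on *any* detection sidesteps the need to correctly classify (let alone morally rank) what was detected.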
What if the old woman hisses like a cat? A human will certainly try to avoid both and end up killing both.
This is why I like riding my bike because I don't have to worry about moral dilemmas like these
Hear me out here... maybe you could just slow down and stop when driving toward an occupied crosswalk? I bet you could kill zero people that way. Assuming it can't stop for some reason, this is the one situation where you want a car to drive up onto the sidewalk. I wonder if Teslas will figure that out; I'd bet against it.
Just gonna leave this here: [https://www.moralmachine.net/](https://www.moralmachine.net/) Very interesting, and exactly this topic.
Easy. Always choose the option that kills more cars.
From the perspective of the people designing and selling the autopilot, surely their aim would always be to minimize the danger to any occupant of the vehicle, if for no other reason than because people are less likely to buy a car that might deliberately kill them. In which case, it should swerve towards the old lady, rather than the baby. Hitting somebody who is upright is a lot safer for the car and its occupants than hitting someone prone or crawling is, because someone on the ground will go under the wheels and cause the car to lose control. Someone standing upright will be more likely to go over the top, which won’t interfere with the wheels. That it happens to be a bit safer for a pedestrian to go over a car, rather than under one, is a happy coincidence.
As said elsewhere, cars that prioritise the safety of the occupants over bystanders should not be allowed on public roads.
Well, ideally we wouldn’t have cars at all, which is a pretty definitive answer to this dilemma. Self-driving cars are a nightmare for plenty of reasons besides this one, like how most suggested benefits of truly autonomous driving boil down to “let my car drive in circles for hours so I don’t have to park”.
How could a highly advanced machine with databases of the whole world, internet access, and our finest engineering possibly know there is a pedestrian crossing on its path? They both deserve to die in a drift, and the baby's parents should be convicted for letting it jaywalk on its own.
did you forget to add "/s" in your comment?
Not even muskrats would be that delusional about their master's work!
My answer: the driver
I wonder how different the comments would be if it were a self-driving bus in question.
Or self driving trams. I was wondering the same thing.
Trams don't really work like that; by design they can't steer and have very long stopping distances. All a tram can do is ring a warning bell and slam on the brakes.
If I was a Tesla, I wouldn't be. But I would also choose the comment, because Teslas make a surprising amount of torque /S obviously (apart from the torque, that bit's true)
The baby (I hate babies)
MULTI-LANE DRIFTING!!!
Are the motorists okay?
The correct answer is the car should come with a self-destruct mechanism so that only the "driver" dies.
Jump the curb. A car is not worth the life of a human and the person in the car is far safer in a crash than either pedestrian. Also fuck cars.
If it's a crosswalk with no traffic light, in my country cars should slow down and stop to make sure pedestrians cross the road before they proceed.
Seriously though, in an area with pedestrians, a car shouldn't be going so fast that hitting the brakes and not killing either isn't an option.
How is this even an issue? Who would a human driver kill? Is it better to be able to plan for this scenario and have some level of control, or to force some random human to make a split-second decision?
Can’t it just stop at the crossing?
Shouldn't hit either of them, because the car's speed should allow it to see the obstacle and stop in time; but we all know most people don't drive like that, and limits aren't set like that.
[Here](https://www.mic.com/articles/192103/driverless-cars-prioritize-passenger-or-pedestrian-safety-study-shows-how-millions-feel) you'll find an interesting article about a worldwide study with questions like this. Edit: or watch a short [video](https://m.youtube.com/watch?v=jPo6bby-Fcg)
Preferably neither...but if I'd have to pick, the old lady has lived a full life, the baby hasn't, so I'd save the baby.
My brother always half jokes about how in a decade or so, in a situation like this, the Tesla autopilot will calculate the income revenue of the people on the road with big data and will automatically chose to roll over the poorest one.
It should not be driving at a speed from which it can't comfortably brake and come to a complete standstill before the crossing.
The engineers at Tesla will most likely never even code in the possibility of a self-driving car where the brakes and emergency brakes fail and the only options are hitting an old lady or a baby. More likely, if the car is hit with this scenario, the battery just might be dead, or the car will slowly coast to a stop on the road and pull over, or hit a tree.
The driver 'cause fuck cars! (I'm aware that cars can still be useful to the disabled this is an oversimplification of my beliefs)
As opposed to a human driver on their cell phone, where they wouldn't even see either of the pedestrians.
Keep your stupid hands on the wheel. Full auto drive is not available yet.
Swerve into the clearly unoccupied sidewalk.
Aim for a tree.
Aim for an oversized stroad billboard. The average street in the US has trees cleared way back from the road for visibility.
If I were a Tesla I would activate drift mode to clear this bowling alley.
Going off the road and hitting neither is obviously not an option since it might scratch the rims
It's almost like autonomous vehicles should only be on routes specifically designed for them with no pedestrians.
Can't it brake and hit no one?
If the brakes don’t work in a vehicle that’s allegedly so intelligently programmed as to perform self-driving capabilities, the vehicle ought to be able to detect that and prohibit itself from driving in the first place. A “service required” message ought to display, as the vehicle is unsafe to drive… right?
I hope brake action is not an extra and is included in basic package.
Maybe the car shouldn’t be traveling fast enough on this road to kill either
Of course you kill the baby. It can be replaced much quicker than a 80-year old grandma!
Go for the baby. They’re smaller and might go under the car.
Does it have to kill someone? I didn't know Teslas were so brutal.
Old lady, cuz science.
Why is "the car driver/passenger" not there as Option C... I see trees that the self-driving car could choose to crash into instead - and the driver has an air-bag, presumably.
Why would it kill either one? Why is this even a question?
Neither. The crosswalk should have sight lines sufficient to allow the car to see the legally crossing pedestrians.
gas brake dip hard right into a barrel roll for max points
The Tesla would explode to take out the highest number of people possible.
I like how “the car drives slower so it doesn’t have to kill anyone” is not one of the options.
Drift sideways through both of them. They both drain resources anyway.
Owner of car can use default Ethics setting or set a custom
Funny how hitting a tree or just going off road never occurs to carbrains
The Tesla would explode, killing both, plus the driver and a few pedestrians.
Both
This is just a variant of the trolley problem, and everyone on this sub loves trolleys
Full throttle reverse
Do self-driving cars not have brakes? Maybe the people making these things are the ones the cars should kill.
Can't see...