The Robot's Dilemma - Protect People, Obey, and Protect Themselves

https://img.inleo.io/DQmR7hZU6xdq23RWA1WqzzPruoN3x2eZ2LY2MGjN14ZcxiK/ai-generated-8843638_1280.webp

source

A couple of weeks ago, I shared blogs on Robotics and Artificial Intelligence and that gave me the chance to talk about the three laws of robotics.

We've had this fear since the very moment the word "robot" was coined, and in fact almost every movie about robots follows a similar arc: the robots are invented by some corporation or brilliant scientist, they later rebel in an attempt to take over the world and make it perfect, and they are eventually defeated by humans.

From Terminator to "I, Robot" with Will Smith as the protagonist, people worry that robots will rule the world. I had a discussion yesterday with a young girl, trying to advise her about education and where the world is headed, with so many jobs about to be taken over by robots.

Unsurprisingly, she mentioned the possibility of robots ruling or taking over, and that gave me a perfect moment to explain the three laws of robotics to her.

I explained them well, but when I finished and started to reflect on the discussion, I noticed there could be a dilemma, a situation where the laws conflict.

Let me start by sharing the famous three laws of robotics by Isaac Asimov.

  • Don't harm humans.
  • Obey orders unless doing so conflicts with the first law.
  • Protect itself unless doing so conflicts with laws one and two.

So if you instruct a robot to harm a human being, it is programmed to disobey you. It has been programmed to obey your every command, but to disobey you if it means hurting anyone, which is why we should feel safe around robots. The third law also means it will endeavor to protect itself, but it will still prioritize obeying you and not harming humans.
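The priority ordering described above can be sketched in a few lines of Python. This is my own hypothetical illustration, not anything from Asimov; it assumes each law reduces to a simple yes/no check on a proposed action:

```python
# A minimal sketch of the three laws as an ordered veto list: each law is
# checked in priority order, so a lower-priority goal (like self-preservation)
# can never override a higher-priority one (like not harming humans).
# The flag names here are hypothetical, chosen just for illustration.

def decide(action):
    """action describes a proposed command, e.g.
    {"harms_human": bool, "ordered_by_human": bool, "destroys_robot": bool}"""
    # First Law: never harm a human being.
    if action["harms_human"]:
        return "refuse"
    # Second Law: obey human orders (we already know this one harms no one).
    if action["ordered_by_human"]:
        return "obey"
    # Third Law: otherwise, avoid actions that would destroy the robot itself.
    if action["destroys_robot"]:
        return "refuse"
    return "obey"

# An order to self-destruct near bystanders is vetoed by the First Law:
print(decide({"harms_human": True, "ordered_by_human": True, "destroys_robot": True}))
# A harmless order is obeyed even if it costs the robot its existence:
print(decide({"harms_human": False, "ordered_by_human": True, "destroys_robot": True}))
```

Notice that the Second Law check comes before the Third: an order that destroys the robot but harms no one is still obeyed, exactly as the laws demand.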

So now what happens if you command it to explode in a building holding a few people, and that's the only way it can save the masses outside the building? If anybody has an answer, let me know in the comments.

This got me thinking a lot about what its decision might be: save the people inside and hurt the masses outside, or protect the masses outside and kill both the people inside and itself.
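The reason this case is so hard can be shown in a tiny sketch of my own (again purely hypothetical): when every available option harms humans, the First Law vetoes them all, and the priority scheme gives the robot no permitted action at all.

```python
# Hypothetical sketch of the dilemma: both options harm humans, so a robot
# that simply filters options through the First Law is left with nothing.

def first_law_allows(option):
    # First Law check: an option is permitted only if it harms no human.
    return not option["harms_human"]

options = {
    "explode inside": {"harms_human": True},  # the few people inside die
    "do nothing":     {"harms_human": True},  # the crowd outside is harmed
}

allowed = [name for name, opt in options.items() if first_law_allows(opt)]
print(allowed)  # → [] : the laws alone leave no permitted action
```

The laws rank actions but say nothing about choosing the lesser harm, which is exactly why the dilemma has no clean answer.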

This is similar to a road incident where turning the wheel of the bus saves the passengers but runs over some people on the road. I wonder what you'd do in that situation. I just hope I never end up in such a compromising position.

However, I can think of less complicated situations and know that robots will always prioritize the safety of humans over obedience and self-preservation.

Let me give you a couple of scenarios, and let's see what the robot's choice will be in the end.

Let's say the robot is ordered to blow itself up. Assuming this act would hurt people standing near it, the hard decisions appear. For the robot, the No. 1 rule is not to harm any human being. Therefore, even when ordered to blow up, it won't, in case doing so would harm someone nearby.

Take a firefighter robot, created to save people during fires. If the fire chief orders it to enter a house about to collapse, it should do so at its own peril. But if its sensors detect people all around it and it judges that its actions might cause harm, it will put safety first and may refuse the order.

Another example is a robot soldier designed for war. If ordered to blow itself up to stop the enemy while civilians were nearby, it wouldn't explode; the protection of humans comes first. However, now that I consider fighter robots, I wonder how they can exist at all without violating the first law, which forbids harming humans. If anybody has an answer to that, let's hear it in the comments.

Let's also say there is a robot helper in a hospital. If someone orders it to burn a harmful substance whose smoke might hurt patients, the robot won't execute the order; it knows the value and safety of humans come first. I think this is a good thing. Although the person giving the order might be frustrated at being disobeyed, the refusal actually serves as a warning that his or her actions are about to cause harm, and that could help us avoid many accidents, both domestic and large-scale.

https://img.inleo.io/DQmPfR8ExuHN9Ytojbt6cGxrLKc5mz7EqAMzuLzqTMBTxoR/household-robot-8853723_1280.webp

Source

These rules ensure that robots make the safest possible decisions: never putting human beings in danger, even if ordered to do so, while protecting themselves and everyone around them wherever the higher laws allow.

I would be happy to get your thoughts and feedback about this dilemma in the comments. Thanks for reading guys.

I remain TheRingMaster and Let's Together Make Web 3.0 Great ✊

Posted Using InLeo Alpha
