People weren't always as accepting of robotics as they are today. Early on, what brought robots into the public eye were stage plays and stories depicting them as menaces destined to rule the world. Even some films released in the 21st century (like I, Robot) showed robots ruling over human beings.
At first, this was little more than a conspiracy theory held by a small percentage of people. But that percentage kept rising until the idea was taken seriously as a threat, since there was no telling how far robotic AI could go, and people genuinely accepted it as a possibility. That's when a certain individual came up with the three laws of robotics, rules meant to prohibit automated machines from ever opposing the human race. We're not going to spoil the laws right now; we'll get to them a little later.
The Originator of the Rules
Born January 2nd, 1920, Isaac Asimov had no technical background; he was a writer and a professor of biochemistry. Can you believe that the person who wrote the three laws of robotics had no formal training in the field whatsoever?
Well, regardless, he's still called a genius. While working on his robot stories (eight of which were later collected as The Rest of the Robots), he noticed that the dominant science-fiction plot of the day was robotics, and, like we said, the usual storyline was robots destroying their creators. He discussed the problem with a few people in the field, and he would state many years later that he already had a rough sketch of the rules in his mind at that very moment. However, he never stated the three laws together explicitly until the 1942 short story Runaround; earlier stories only hinted at them. The rules kept being refined over the course of Asimov's subsequent stories until, one day, they simply existed. Asimov sincerely believed that these laws existed long before he became the one to write them down. He said he was just the one to put them to pen and paper; the rules were already in place and everyone knew them subconsciously. He used the following metaphors, framed around ordinary tools, to describe them.
Law 1
A tool must always be helpful but never unsafe for the user. It must have some sort of safety mechanism to protect the user. The user can obviously still get injured despite the safety measures, but that should be the result of the user's negligence, not a design fault in the tool.
Law 2
A tool should always perform its function, but not in cases where that function could harm the user. The safety of the user is of the utmost importance.
Law 3
The tool should remain intact until the user determines that it is no longer needed.
Asimov believed he simply took these basic rules, which apply to literally any tool made to help humans, and applied them to robotics. The result was the three laws below; a toy code sketch of their priority ordering follows the list.
First Law
A robot may not injure a human being or, through inaction, allow a human being to come to harm.
Second Law
A robot must obey the orders given to it by human beings except where such orders would conflict with the First Law.
Third Law
A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
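To make that priority ordering concrete, here is a minimal, hypothetical sketch in Python. It is a toy, not a real control system: the Action class, its fields, and the permitted function are all invented for this illustration, and a real robot would face far murkier inputs than boolean flags. The point it demonstrates is simply that each law only gets a say when the laws above it are silent.

```python
# A toy illustration (not a real control system) of the strict priority
# ordering in Asimov's Three Laws: Law 1 outranks Law 2, which outranks Law 3.
# All names here (Action, its fields, permitted) are hypothetical.

from dataclasses import dataclass

@dataclass
class Action:
    name: str
    harms_human: bool = False        # would this action injure a human?
    prevents_harm: bool = False      # would skipping it let a human come to harm?
    ordered_by_human: bool = False   # was this action commanded by a human?
    endangers_robot: bool = False    # would this action damage the robot itself?

def permitted(action: Action) -> bool:
    # First Law: never injure a human, and never allow harm through inaction.
    if action.harms_human:
        return False
    if action.prevents_harm:
        return True   # the First Law overrides everything below, including orders
    # Second Law: obey human orders unless they conflict with the First Law.
    if action.ordered_by_human:
        return True   # the Second Law overrides self-preservation
    # Third Law: protect own existence unless Laws 1 or 2 say otherwise.
    return not action.endangers_robot

print(permitted(Action("shove bystander", harms_human=True)))          # False
print(permitted(Action("enter fire to pull a person out",
                       prevents_harm=True, endangers_robot=True)))     # True
print(permitted(Action("fetch coffee", ordered_by_human=True)))        # True
```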
Their Role in Robotics
A majority of the world today believes that Asimov's Three Laws are real, and we have Hollywood and the film industry to thank for that. In reality, these laws carry no real weight in the world of robotics. Science fiction has led us to believe that the three laws are deeply integrated into the development of robotics, when the truth is the exact opposite. The Three Laws are a nice touch in the mainstream Hollywood scenario where robots rebel against humans, but that's all they are. Scientists and experts in the field have never treated the laws as something that must be followed. To be perfectly honest, the laws only function in movies, where the word robot means a human-like machine with AI that can talk and act like a person. When you think of robotics as a general term for anything with the capability to act on its own, you'll see that the Three Laws are riddled with flaws. We'll talk about this in more detail next.
Are These Laws Required?
When we think strictly of the Three Laws of Robotics, the answer is no, because the laws break down against real applications. Many applications of robotics today can hurt human beings in some way, violating the first law. Every robot has protocols, and if a human asks it to violate those protocols, it will refuse, violating the second law. Some automated machines even carry self-destruct protocols, violating the third law.
Let's dig a bit deeper into the first law. It states that a robot may never harm a human being through action or inaction. But what about robots used in operating theatres? They cut open human bodies, which is, in a literal sense, hurting humans. Implementing the first law would force them to refuse to cut a person open, even for that person's own good. Another example is automated weaponry like unmanned drones, which the US military has used for years. Consider a drone with a clear shot at a high-profile terrorist target in some desert somewhere in the world. If the drone were built according to Asimov's three laws, it would be unable to fire that missile, even though taking that one life could save thousands more. Something as complicated as the greater good can't be bounded by any fixed set of rules. Compare it to an ambulance: a road may not allow cars to go over 30, but an ambulance carrying a dying patient could do 80 on that road and no one would object.
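The surgical case can be expressed directly in the toy sketch from earlier (this snippet reuses the hypothetical Action and permitted definitions from that block). Because the injury check fires before anything else, a literal First-Law filter vetoes the incision no matter how much good it would do:

```python
# Hypothetical continuation of the earlier sketch: a literal First-Law filter
# cannot tell harmful contact from beneficial contact. A surgical incision
# "injures" on a naive reading, so the filter rejects it even though the
# patient's survival depends on it.

surgical_incision = Action(
    "make incision to remove tumour",
    harms_human=True,      # the literal reading: cutting a human is injury
    prevents_harm=True,    # the intended reading: the surgery saves a life
    ordered_by_human=True,
)

# harms_human is checked first, so the order and the life-saving intent
# never get a say; the action is vetoed outright.
print(permitted(surgical_incision))   # False
```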
Things got a little off track there. Back on topic: the answer to the question "Are these laws required?" is no, they are absolutely not. However, if you alter the question to "Are laws required?", the answer is yes. With the recent advancements in artificial intelligence, the primary goal is to make machines reason logically and understand emotions just like human beings. We're giving robots the chance to make their own decisions, and who knows, one of those decisions could one day be that humans don't deserve to live. There's no way of knowing how far this intelligence could go. Science opposes the idea, but there's no way of ruling out a robot rebellion in the future either. There has to be some set of rules and regulations that robots follow to ensure there are no cases of robots murdering humans down the line. Asimov's Three Laws are not perfect; as a matter of fact, they're not even remotely usable. But they do press the idea that we may need something to stop robots from hurting humans if push ever comes to shove.
Closing Thoughts
The Three Laws of Robotics are an interesting topic to read about in books, see in movies, and hear about on the news, but that's all they are. They were a handy device back when people were still extremely skeptical about whether robotics would help or only cause problems: they changed the way people perceived automated machinery and suggested there was a backup plan. Even today, it's a popular myth that Asimov's laws are being implemented in the world of robotics, when the truth is that they carry no actual weight in the eyes of professionals. As far as a robot rebellion is concerned, let's just say that even if no robot manufacturer will declare it publicly, it's on the minds of the scientists behind the machinery.