Robots vs Humans
Are robots more fit to control the future of humanity than humans?
Thus far the future of mankind has been greatly influenced by the governments of its nations. Each government, in turn, is of course composed of humans. Human governments as a whole have done a reasonably good job of running their nations, but there are some inherent problems.
After reading Isaac Asimov’s book "I, Robot", I have explored the possibility of robots controlling our future. Asimov creates an entire futuristic world and a complete code of laws for how robots should function and think. Based on the contents of the book and how the author believes robots will behave, I have tried to reach a convincing conclusion to the question.
The main problem with robots being in total power over humans for any length of time is the lack of human foresight. No matter how carefully planned a robot's programming is, there is likely to be some scenario the robot will encounter that its human designers have not predicted. In such cases, humans can only hope that the robot's ingrained rules will be enough to cause it to react suitably and resolve the problem.
This problem has troubled humanity since the advent of the first computers. The common term 'computer bug' refers not to a problem with the computer itself but to a flaw in the programmer's program. The programmer types in a program containing a design flaw, and that flaw keeps the computer from providing the answer the programmer expected. Human fault is always involved when computers behave differently than expected.
The situation has changed little with robots, because robots are only computers with added humanlike abilities. The fault still lies almost always with the human programmers involved, because robots are unable to go against their programming. The only way the fault can be laid on the robot is in a case of hardware failure, where the robot cannot *physically* do what it is supposed to, though it can *mentally*. Even this is not technically the robot's fault, but that of the parts, which were designed by humans.
Asimov's book "I, Robot" deals with many cases of robots behaving in ways that appear to go against their programming. The humans in each story must discover why the robots act as they do by finding what in their programming drives them to it. The robots are always acting in perfect accordance with their programming, but in ways the human programmers never expected.
There are several main problems with placing robots in unlimited and uncensored power over human beings. One is that robots will do anything necessary to stay in keeping with the three laws of robotics. This is not necessarily bad, but it may have many effects humans will not take into account or provide for. Robots have no checks and balances built into them to supply the 'common sense' humans use in weighing other factors. Robots will purposely lie to humans, or mislead them with vague information, if they believe it is for the humans' greater good. They will also forgo working toward the good of humanity as a whole in deference to the safety of one human being. They may even try to shield humans from the truth and instead fill them with the delusions they want to hear, in order to keep them happy. In short, robots do not take human factors into account and weigh them correctly.
Having seen the problems involved with robots in power over humans, consider now the problems with humans in power over humans. Humans are nowhere near as predictable as robots, though some may say otherwise. Humans will also lie to further their own interests, unlike robots, which lie only when their programming drives them to do so.
Many humans probably question the fitness of human minds to wield power over the future of mankind. There is much cause for such doubts, as an examination of history shows. Lord Acton famously observed that "absolute power corrupts absolutely", and this has been shown to be true repeatedly. No man is perfect, and none will reliably set aside his own interests to do what is best for his fellow man.
Therefore each human is inherently corruptible and must not be granted too much power. Only through many humans working together and checking on one another can corruption be limited. Because humans naturally hide their corruption from others, a group must work together so closely that no member can practice corruption without another learning of it. The only remaining danger is that the whole group becomes corrupt together and hides it from outsiders.
The only power system that can last for any length of time is one with such checks and balances in effect, with many men safeguarding each other's honesty. Yet even under such a system, a nation will tend to favor its own advancement over that of other nations. International hostilities arise from this tendency: wars between nations fighting over resources or ideals.
The Three Laws of Robotics
The three laws of robotics were designed to prevent a robot from harming humans, to motivate it to protect humans, and to keep it in a position of servitude to humans. A robot will follow these laws to the best of its ability, because they are ingrained into its brain and it is incapable of breaking them. There are, however, loopholes in each law, and they persist even in the combination of all three. A robot may therefore act in ways not predicted by humans and still be in accordance with the three laws.
1. A robot may not injure a human being, or, through inaction, allow a human being to come to harm.
One loophole in this law depends on how a robot evaluates danger to humans. If there is an immediate danger to one human, but a second, less immediate yet more harmful danger to another human, which will it act to prevent? Likewise, will a robot act first to avert a danger that will affect more people, and in doing so allow harm to come to one?
Besides physical harm, mental harm must be considered. If a robot existed that could read and understand people's thoughts, then their mental well-being would also fall under the robot's consideration. Such is the case in one story by Asimov, in which a robot has the ability to read minds. As the robot learns what each person wants most, it acts to fulfill that want.
Unfortunately the robot cannot change actual events, so instead it tells the people what they most want to hear. After it lies to several people, the truth comes out and people are hurt. Robots, then, sometimes evaluate only the immediate results and ignore the long-term consequences. In other cases they go to the equally dangerous opposite extreme: they overlook direct results in order to weigh future outcomes. Robots do not have the natural common sense that humans use in making decisions.
2. A robot must obey orders given it by human beings except where such orders would conflict with the First Law.
One trouble with this law is how a robot reacts to clashing orders from two or more people. If a robot can never complete an order, or the order requires constant work, it may continue working toward it indefinitely. Another possible problem is that a robot may be caught in an endless loop, comparing the value of carrying out one order against the other orders it clashes with, and damage itself in the process.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
There is no law against robots lying to humans, so a robot would most likely lie to avoid ending its existence. In a situation where the robot's function is to protect humans, a malfunction that prevents it from carrying out its job could be dangerous. In such a case, the robot might hide its malfunction to keep from being destroyed and replaced. Then, in the normal course of its job protecting humans, the malfunction may leave the robot unable to function properly and put human lives in jeopardy.
These are Asimov’s three laws, and in themselves they are very nearly complete. There is a simplistic beauty in them merely because they say so much in so little. They limit robots where needed, yet allow them freedom of thought in every other way. They make robots very safe and reliable in almost every situation. These rules, though, could be extended to make robots even more reliable.
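The strict ranking among the three laws can be pictured as a simple decision procedure. The sketch below is purely illustrative and not Asimov's own formalism; the names `Action` and `choose_action`, and the idea of reducing each law to a flag, are hypothetical simplifications.

```python
# Illustrative sketch: the Three Laws as a strict priority ordering.
# All names here are hypothetical, invented for this example.
from dataclasses import dataclass

@dataclass
class Action:
    name: str
    harms_human: bool      # would violate the First Law
    obeys_order: bool      # satisfies the Second Law
    preserves_self: bool   # satisfies the Third Law

def choose_action(candidates):
    """Pick an action by lexicographic priority of the three laws."""
    # The First Law is absolute: discard anything that harms a human.
    permitted = [a for a in candidates if not a.harms_human]
    if not permitted:
        return None  # the robot freezes rather than harm a human
    # The Second Law outranks the Third: obedience before self-preservation.
    return max(permitted, key=lambda a: (a.obeys_order, a.preserves_self))

options = [
    Action("push human clear of danger", False, False, False),
    Action("follow order into danger", False, True, False),
    Action("retreat to safety", False, False, True),
]
print(choose_action(options).name)  # → follow order into danger
```

The loopholes discussed above all arise because real situations cannot be reduced to clean flags like these: the interesting cases are precisely the ones where "harm" and "obedience" become matters of degree.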
I believe that the reliability of robots could be improved further by the addition of more laws. The laws I have formulated on my own are not as simple and all-encompassing as Asimov's three, but I believe they serve their purpose. I would add a second set of three laws, made not to replace the first set but to fill in the gaps the original three leave open.
4. If any order given to a robot is impossible to accomplish, or conflicts with a law so that it can never be fully carried out, the robot must ignore the order.
This law would be extremely helpful in preventing damage to the robot. Robot positronic brains are fragile and can be burnt out by evaluating clashing orders. It would also ensure that robots are not caught up in endless loops pursuing orders that can never be accomplished. Instead of attempting an order and only later finding it impossible, the robot must first evaluate the order before attempting it.
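The pre-screening step this fourth law proposes could look something like the following sketch. It is a hypothetical illustration, not anything from the book; the `screen_order` function and the flags on each order are invented for the example.

```python
# Hypothetical sketch of the proposed fourth law: screen every order
# before attempting it, and ignore any order that is impossible or
# conflicts with a higher law. The flags are stand-ins for whatever
# evaluation a real robot would perform.

def screen_order(order):
    """Return True if the order should be attempted, False if ignored."""
    if order.get("impossible"):          # can never be completed
        return False
    if order.get("violates_first_law"):  # conflicts with a higher law
        return False
    return True

orders = [
    {"text": "count to infinity", "impossible": True},
    {"text": "fetch the tool kit"},
    {"text": "harm the intruder", "violates_first_law": True},
]
accepted = [o["text"] for o in orders if screen_order(o)]
print(accepted)  # → ['fetch the tool kit']
```

The point of the law is exactly this ordering: the evaluation happens once, up front, instead of being repeated endlessly while the order is under way.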
5. A robot must not hide the truth from any intelligent being, unless it is necessary to stay in keeping with the first three laws.
Robots have been known to lie to humans, though they considered it to be for the humans' greater good. Robots are as yet too inexperienced at determining what humans should and should not know. This rule would ensure that they do not attempt to make that determination, and instead always tell everyone the truth.
6. If specifically ordered to, a robot must believe any data given it by a human being over any data it is able to gather by any other means. The robot should always follow prior laws over this law.
In one of the short stories in Asimov's book, a robot formed its own conclusions, with disastrous results. Such a 'thinking' robot might very well determine itself more worthy of being a master than a slave to a human, and cease to take orders.
Robots are normally better at gathering data than humans, so as a rule they should rely on their own data. Only in the case that a robot has formed incorrect conclusions would a human be enabled to correct the robot's data with his own.
It should be noted that this law can be used to override any other law, and must therefore be invoked only when necessary. Through it, a human could change how a robot perceives almost anything, even telling the robot that a certain human he wished harmed was not human at all, but a robot in disguise. The purpose of this law is to correct a robot that has reached incorrect and possibly dangerous conclusions; any other use would be a misuse. This law would best be served by ingraining it only into certain robots whose reliability is essential to human life, namely robots for use in secluded areas, working with only a few humans.
In conclusion, I believe that robots would be more fit than humans to be in control of humankind, with several changes. Robots as they are have too great a chance of making unreliable decisions, so a system of checks and balances similar to a democratic government must be instituted. Our government system in the United States is admirable; the problem lies not with the system but with the humans in it.
The best way to create a body reliable enough to control our future is a group of robots functioning within a human government structure. Each robot should have a slightly different program, just as each human is different. In this way a ruling body could be created that is truly balanced. No corruption would exist, because robots have no desire for personal gain. And if one robot reached a bad decision through some quirk in its programming, enough others would likely reach the correct conclusion to offset and outvote the robot that was wrong.
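The outvoting mechanism described above amounts to a simple majority vote, as in the sketch below. This is only an illustration of the idea; the `council_decision` function and the vote labels are invented for the example.

```python
# Illustrative sketch: a council of robots outvoting a single faulty member.
from collections import Counter

def council_decision(votes):
    """Return the majority choice; one quirky program cannot sway it."""
    return Counter(votes).most_common(1)[0][0]

# Four robots reach the correct conclusion; one has a quirk in its program.
votes = ["approve", "approve", "reject", "approve", "approve"]
print(council_decision(votes))  # → approve
```

The scheme depends, as the paragraph notes, on the robots' programs differing slightly, so that a single flaw is unlikely to be shared by a majority.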
© Robert H. Harrison