Ethical Issues of Robots in Society Essay

Robots are growing more capable all the time. Their abilities to see and comprehend the world around them are increasing at a rate that far exceeds the scientific community's initial expectations. With products like ASIMO from Honda, it is clear that robots are making their way out of the lab and into the hands of consumers.

The ethical issue of machine slavery remains relatively abstract. The more immediate ethical questions surrounding robots concern their impact on human society. Low-skilled workers will feel the side effects of having their jobs replaced by machines, leaving society with an overabundance of people who hold outdated skills and have little education to fall back on. This could result in a serious economic and social backlash.

There is also the ethical question of how a robot should be treated on a day-to-day basis. It sounds silly, but is it ethical to turn your robot off? Consider the flip side: perhaps it is more unethical to leave your robot turned on for too long. These two questions lead us to ask at what point a household device becomes worthy of moral protection. By moral protection, one means a societal sense that it is wrong to intentionally damage or injure the machine, something that would closely resemble a machine version of animal cruelty laws.

Most researchers believe that robots are nowhere near advanced enough to raise these questions. However, society has witnessed, on many occasions, the results of not dealing with moral and ethical questions until the last minute or even after the fact. How interesting it would be to get something right from the beginning, before the problems arise.

Most would agree that intentionally beating or breaking a robot is more a damage-of-property issue than a moral question about a living entity. However, this is probably going to be the first real ethical question that arises with the coming of the robotic age. Where does the line get drawn between a device used for work and something that deserves moral protection?

A lot of what sets machines apart from animals lies in our psychological profile of them. Machines do not cry or show signs of distress or injury, nor do they act to avoid them. Even so, it is likely that robots will be endowed with highly advanced self-preservation instincts programmed into them. Robots are expensive, and nobody wants their costly investment throwing itself into a pool one night after a hard day of labor.

These programs will require a kind of internal negative feedback system that responds to harmful situations. Biological life forms have a sense of pain; it is our internally wired system that reacts to negative stimuli. Most intelligent robots today have some rudimentary form of self-preservation, such as an aversion to driving off an edge. More advanced robots can identify areas where they performed poorly, remember where those areas are, and avoid them in the future. Pattern matching is common as well, allowing a robot to predict which areas will be difficult and avoid them entirely without ever encountering them.
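To make the idea concrete, here is a minimal sketch of such a negative feedback loop: an innate edge reflex combined with a learned "pain memory" of places that have caused trouble before. The sensor name, thresholds, and grid-cell memory are illustrative assumptions for the sake of the example, not any real robot's control system.

```python
# Hypothetical sketch of a self-preservation feedback loop.
# Assumptions: a cliff sensor returning values in [0, 1], and a map
# discretized into (x, y) cells. None of this mirrors a specific product.

from collections import defaultdict

EDGE_THRESHOLD = 0.15   # assumed sensor reading below which an edge is detected
PAIN_LIMIT = 3          # harmful events before a cell is treated as aversive

class SelfPreservation:
    def __init__(self):
        # "Pain memory": count of harmful events recorded per map cell.
        self.pain_map = defaultdict(int)

    def record_harm(self, cell):
        """Negative feedback: remember that something harmful happened here."""
        self.pain_map[cell] += 1

    def is_aversive(self, cell):
        """Predictive avoidance: refuse cells that have caused repeated harm."""
        return self.pain_map[cell] >= PAIN_LIMIT

    def safe_to_move(self, cell, cliff_sensor):
        """Combine the innate edge reflex with learned avoidance."""
        if cliff_sensor < EDGE_THRESHOLD:   # innate reflex: never drive off an edge
            self.record_harm(cell)
            return False
        return not self.is_aversive(cell)

# Example: the robot refuses a move when the cliff sensor reports a drop-off,
# and will later refuse the cell outright once it has "hurt" often enough.
guard = SelfPreservation()
print(guard.safe_to_move((2, 3), cliff_sensor=0.05))  # False: edge detected
print(guard.safe_to_move((2, 3), cliff_sensor=0.90))  # True: no edge, cell not yet aversive
```

Even this toy version shows why the ethical question has teeth: the machine is built to detect, remember, and steer away from harm, which is exactly the behavior we read as avoidance in animals.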

Perhaps as a result of the universally understood sense of pain, we have moral codes that hold it wrong to cause pain. Is it wrong to smash a robot's appendage with a hammer? What if the machine has been endowed with a system that actively tries to avoid such situations, yet you were able to overcome it?

The machines of today and the very near future stand at the blurry boundary between simple machinery and neurological functionality equivalent to that of insects, reptiles, birds, and even some simple mammals. They are intended to operate and interact with us in the real world much as these natural creatures do, yet with a set purpose in mind. The question is how long we can put off dealing with the moral and ethical issues that come with creating lifelike organisms.
