
An ethical framework is needed to guide human attitudes toward artificial intelligence and robotic beings from a moral point of view, as a means to achieve balance between both sides. As artificial intelligence and robotics evolve, a moral framework must be developed to guide human-machine interactions. Since artificially intelligent beings are expected to act with growing autonomy, ethical values such as responsibility, transparency, accountability, and incorruptibility should be encoded algorithmically and programmed into their systems. This ethical framework would determine the limits of AI programming: what should be allowed, and which uses should be universally forbidden. Once artificial beings acquire a certain level of autonomy, this set of ethical configurations would help machines understand how human society works and possibly integrate them as an extension of humankind. However, to build this ethical and moral code, humans first need to establish a single set of ethical regulations for themselves before devising one fit for robots. Human ethics are heterogeneous and highly dependent on cultural and social attributes, among other layers including emotions, languages, and political beliefs. Machines work, by definition, through criteria and logical rules, yet worldwide ethical behavior is blurry and often full of exceptions that transcend the very limits of human existence. Humans need to reexamine their moral foundations and then devise a new language and formula to fit these artificial beings into the human world. Many ethical debates are likely to arise as soon as machines acquire their own consciousness or robots become sentient. Questions will need to be asked, such as to what extent humans should treat artificially intelligent agents merely as tools, or whether machines could someday gain sufficient power to confront humankind. 
Faced with this quandary, it is possible that an ideal ethical measure would emerge only after years of human-machine interaction. As humans interact with machines, the resulting metrics would give insights to sharpen the way this introduction should be made. For instance, from a young age, children could learn at school how to behave toward and treat robots. Adults and late adopters, on the other hand, could be taught how to act and behave through specific training courses and educational material. By analyzing these outcomes, it is expected that humans would be able to develop affinities and create a more symbiotic relationship between humans and robots. Another matter to keep in mind, however, is the possible future creation of a hierarchy between artificial and organic beings. With the increasing trend of humanizing artificial intelligence, whether by enhancing conversational skills or by embodying it in humanoid robots, both the uncanny valley concept and the ethics of human-machine interaction arise as areas of concern for further evaluation. Science fiction has explored these relationships for more than a hundred years, with robots mostly serving as metaphors for the externalization of problems faced among human individuals. When actually dealing with machines that reproduce many human aspects, however, it is the conflict between similarities and differences that requires a more complex approach to the future of robotics and artificial intelligence. In the coming decades, people may have relationships with more sophisticated robots, which could lead to new forms of emotion that complement and enhance human relationships. However, it is imperative to anticipate and discourage scenarios in which such relationships could become socially destructive. 
The key question is not whether humans can prevent this from happening, but rather what sorts of human-robot relationships should be tolerated and encouraged.

Human-machine Interaction Ethics


Robots are becoming scalable