A robot may not harm a human being or, through inaction, allow a human being to come to harm. A robot must obey the orders given to it by humans, except where such orders would conflict with the First Law. A robot must protect its own existence, as long as doing so does not conflict with the First or Second Law.
Isaac Asimov set out the Three Laws of Robotics eighty years ago, long before artificial intelligence became a reality. Yet they neatly illustrate how we as a species have responded to the moral dilemmas raised by technology: by protecting the users.
The ethical problems humanity faces today, however, whether or not they are caused by technology, are more social than technological. Seen in this light, technology in general, and artificial intelligence in particular, could be used to empower users and steer us toward a more ethically desirable future. In other words, we can rethink how we build technology and artificial intelligence and use them to create a more moral society.
This is the approach proposed by Joan Casas-Roma, a researcher in the SmartLearn group at the Faculty of Computer Science, Multimedia and Telecommunications of the Universitat Oberta de Catalunya (UOC), in his open-access article “Ethical Idealism, Technology and Practice: a Manifesto”, published in Philosophy & Technology. To understand how to achieve this paradigm shift, we need to go back in time a little.
The world was decidedly low-tech compared with today when Asimov first published his laws of robotics in 1942. Alan Turing had only recently formalized the algorithmic concepts that would later prove crucial to the creation of modern computing. There were no computers, no internet and no autonomous robots. Yet Asimov foresaw the anxiety that would arise if humans succeeded in creating machines so sophisticated that they might eventually turn against their masters.
That fear of rebellious machines never materialized, but neither did the early hope that machine decisions would be free of human prejudice. We came to realize that data and algorithms replicate the model or worldview of whoever supplied the data or designed the system. In other words, technology was not eliminating human biases; it was transferring them to a new medium. “Over time, we have learned that artificial intelligence is not necessarily objective and, therefore, its decisions can be highly biased. Those decisions perpetuated inequalities rather than fixing them,” said Casas-Roma.
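As a minimal, hypothetical illustration of this point (not drawn from the article), the sketch below shows how a decision rule "learned" purely from biased historical outcomes simply reproduces that bias in a new medium. The dataset, group labels and approval rates are invented for the example.

```python
# Hypothetical sketch: a rule derived from biased historical decisions
# reproduces the bias instead of correcting it.
from collections import defaultdict

# Toy historical decisions: (group, approved) pairs reflecting a biased past.
history = [("A", True)] * 80 + [("A", False)] * 20 \
        + [("B", True)] * 40 + [("B", False)] * 60

# "Training": estimate the historical approval rate per group.
counts = defaultdict(lambda: [0, 0])  # group -> [approved, total]
for group, approved in history:
    counts[group][0] += int(approved)
    counts[group][1] += 1

def predict(group: str) -> bool:
    """Approve whenever the historical approval rate for the group exceeds 50%."""
    approved, total = counts[group]
    return approved / total > 0.5

# The learned rule mirrors the past: group A is approved, group B is not,
# even though nothing about individual merit was ever modelled.
print({g: predict(g) for g in counts})  # {'A': True, 'B': False}
```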
As a result, we arrived at the point the Laws of Robotics anticipated: questions about ethics and artificial intelligence were raised from a defensive, reactive stance. Once we discovered that artificial intelligence was neither fair nor objective, we decided to act to limit its negative consequences. “The ethical question of artificial intelligence arose from the need to create a barrier to prevent the negative impacts of technology on people from happening again. It was necessary to do so,” affirmed Casas-Roma.
This defensive stance, he explains in the manifesto, has prevented us over the past few decades from exploring another fundamental question in the relationship between technology and ethics: what ethically desirable outcomes might a set of artificial intelligences with access to an unprecedented amount of data help us to achieve? To put it another way, how might technology help build an ethically desirable future?
Toward an idealistic alliance between ethics and technology
Moving toward a more inclusive, interconnected and cooperative society, in which people have a better understanding of global issues, is one of the European Union's main mid-term goals. Technology and artificial intelligence could be a significant barrier to achieving it, but they could also be a valuable ally. A more cooperative society “may be fostered depending on how people’s relationship with artificial intelligence is designed,” said Casas-Roma.
There has been an undeniable boom in online education in recent years. Digital learning tools have many benefits, but they can also contribute to a sense of isolation. “Technology could encourage a greater sense of cooperation and create a greater sense of community. For example, instead of having a system that only automatically corrects exercises, the system could also send a message to another classmate who has solved the problem to make it easier for students to help each other. It’s just one idea to understand how technology can be designed to help us interact in a way that promotes community and cooperation,” he said.
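A brief, hypothetical sketch of that idea follows; the class names and behaviour are invented for illustration and are not part of the UOC system or the manifesto. It shows a grading routine that, instead of only auto-correcting, suggests a classmate who has already solved the same exercise.

```python
# Hypothetical sketch: auto-grading that also nudges students toward peer help.
from dataclasses import dataclass, field

@dataclass
class Exercise:
    exercise_id: str
    solution: str
    solved_by: list[str] = field(default_factory=list)  # students who solved it

def submit(exercise: Exercise, student: str, answer: str) -> str:
    """Auto-grade the answer and, on failure, propose a peer helper."""
    if answer == exercise.solution:
        exercise.solved_by.append(student)
        return f"{student}: correct!"
    # Instead of stopping at "incorrect", point to a classmate who can help.
    helpers = [s for s in exercise.solved_by if s != student]
    if helpers:
        return f"{student}: not yet. {helpers[0]} solved this one; why not ask them?"
    return f"{student}: not yet. Keep trying!"

ex = Exercise("ex-1", solution="42")
print(submit(ex, "Alice", "42"))  # Alice: correct!
print(submit(ex, "Bob", "41"))    # Bob: not yet. Alice solved this one; why not ask them?
```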
According to Casas-Roma, an ethical idealist perspective rethinks how technology, and the way users engage with it, can create new opportunities to achieve ethical benefits for the users themselves and for society as a whole. This idealistic approach to the ethics of technology should have the following characteristics:
Expansive. Technology and its uses should be designed in a way that enables its users to flourish and become more empowered.
Idealist. The end goal that should always be kept in mind is how technology could make things better.
Enabling. The possibilities created by technology must be carefully understood and shaped to ensure that they enhance and support the ethical growth of users and societies.
Mutable. The current state of affairs should not be taken for granted. The current social, political and economic landscape, as well as technology and the way it is used, could be reshaped to enable progress toward a different ideal state of affairs.
Principle-based. The way technology is used should be seen as an opportunity to enable and promote behaviors, interactions and practices that are aligned with certain desired ethical principles.
“It’s not so much a question of data or algorithms. It is a matter of rethinking how we interact and how we would like to interact, what we are enabling through a technology that imposes itself as a medium,” concluded Joan Casas-Roma.
“This idea is not so much a proposal about the power of technology as about the way of thinking of whoever designs it. It is a call for a paradigm shift, a change of mindset. The ethical effects of technology are not a technological problem, but rather a social problem. They pose the question of how we interact with each other and with our surroundings through technology.”