The possible danger of robots in wars

in Popular STEM · 3 days ago





The Chinese army warns of barbarism


The future we see in science fiction, where flesh-and-blood soldiers share the battlefield with armies of humanoid robots, autonomous armed machines capable of deciding who lives and who dies, already worries the entire world. Now the Chinese military has publicly voiced that concern.


It is a rare warning. Generals of the world's largest army by active personnel, with more than 2 million soldiers, are calling for urgent rules to prevent military robots from committing atrocities. Whatever your view of the country, China is at the forefront of the global race to develop humanoid robots capable of operating in many areas, including combat zones.


These machines combine precision mechanical arms, advanced sensors, and artificial intelligence, and can execute missions with minimal supervision. According to the PLA Daily, the official newspaper of the People's Liberation Army, robots like these offer enormous advantages in flexibility and versatility, performing complex tasks that drones and unmanned vehicles cannot. However, all that autonomy comes with a problem: how do you guarantee that an armed machine follows orders, respects human life, and does not make fatal mistakes amid the chaos of war?





Who controls the on and off button?


The PLA Daily's warning echoes a classic ethical concern: the violation of the famous Three Laws of Robotics created in fiction by Isaac Asimov. One of those laws says that a robot may not harm a human being, yet in practice soldier robots are designed to neutralize targets under military orders, that is, to injure or even kill.


The Chinese article argues that before machines like these are deployed in the field, clear rules must be established. A combat robot must obey commands, but it must also know when to refuse an illegal order, thereby avoiding excessive use of force. To that end, experts propose smarter sensors, protocols on the use of force, and algorithms that detect situations of abuse or error.


The problem is that the technology for a robotic "moral conscience" remains a distant dream. Despite the risks, China does not intend to be left behind. In recent months, companies in the country have presented humanoid robots that are increasingly similar to human beings, in both shape and movement. The goal is to use this technology not only on the battlefield but also in factories and in our homes. In war scenarios, however, the debate takes on another dimension.


The combination of autonomous weapons and life-or-death decisions puts governments and international treaties under pressure. Other countries, such as the United States and Russia, are also investing heavily in military robotics, and there is no sign of global regulation limiting the use of these silicon soldiers.


The fear is that their uncontrolled proliferation will lead to lethal mistakes or unintended massacres. A programming error, a bug, or a misinterpreted decision by an AI could transform a precision weapon into an out-of-control machine.


Amid this accelerated progress, the question remains: who should control the on/off switch of a soldier who does not bleed, but can kill?





Follow my publications for the latest in artificial intelligence, robotics, and technology.
If you like reading about science, health, and how to improve your life with science, I invite you to check out my previous publications.