Global experts oppose the development of killer robots. Why are killer robots so frightening?
Not long ago, the Korea Advanced Institute of Science and Technology (KAIST) announced a partnership with a leading Korean conglomerate to set up an artificial intelligence laboratory. On April 4th, more than 50 researchers in the field of artificial intelligence jointly announced a boycott of the new laboratory, warning that the world must guard against "killer robots".
More than 50 experts from around the world boycott South Korea's development of killer robots
On April 4th, an open letter initiated by a professor of artificial intelligence at the University of New South Wales in Australia, and signed by researchers from more than 30 countries and regions including the United States and Japan, urged KAIST to stop developing artificial intelligence weapons. The letter said, "Once autonomous weapons mature, war will be fought at a speed and scale never seen before. They could also be used by terrorists." The experts say that robots and AI technology can play a useful role in the military field, but human beings must not hand the power to decide life and death over to machines.
Earlier, in February of this year, KAIST had opened an artificial intelligence research and development center in cooperation with Hanwha Group, one of the ten largest conglomerates in South Korea. Its purpose is to develop artificial intelligence technology for operational command, target tracking, and underwater transportation.
Why are killer robots so frightening?
Why do most researchers in the field object as soon as they learn that AI is to be combined with military weapons? What exactly is so frightening about killer robots that pair AI with weaponry?
On this issue, hundreds of artificial intelligence experts from Canada and Australia jointly submitted open letters last year, calling on their governments to put policies in place banning AI weapons. The experts argue that if highly intelligent AI is built into killer robots, armed conflicts will escalate further: such robots would have more precise strike capability than human soldiers, and the use of weapons of mass destruction would no longer be limited by individual human strength. Once highly automated weapons are developed, the future battlefield would become the domain of "cold-blooded killers" with no awareness of life and death. And if these "killer robots" fell into the hands of terrorist organizations, the consequences would be unthinkable.
On the other hand, many people also worry that the predictions of science fiction movies will someday become reality. In the movie The Terminator, an advanced computer control system developed by humans goes out of control; after gaining a will of its own, the machines begin to fight against humanity. In recent years there have already been many accidents caused by operating errors or system failures. People worry that once "killer robots" are developed and such weapons go out of control, human life and death will rest entirely in the hands of the machines.