How to make AI safer

According to media reports, AlphaGo is expected to play an ultimate man-versus-machine Go match against Ke Jie, which has once again stirred public debate over whether artificial intelligence poses a threat to mankind and will eventually replace it. However, as a recent article from DeepMind, the developer of AlphaGo, explains, the company is developing a button that could "shut down everything" and turn off an artificial intelligence, so as to avoid the scenarios we see in science-fiction movies, such as a robot uprising or robots destroying mankind.


The article says that because an AI learns continuously in complicated real-world environments, it may not always work at its best, so human controllers need a "big red button" to end potentially dangerous actions by AIs such as robots. The core of the button is to make sure the AI never learns, in the course of deep learning, to stop humans from pressing its "shut down everything" button.

The "shut down everything" button concerns the reinforcement-learning side of machine learning. An AI program is constantly judging which possible strategy would be best for its predefined goals. What makes reinforcement learning special, however, is that human programmers cannot always predict the steps the AI program believes are most likely to succeed; the AI may find shortcuts that lead to unexpected results.

Take the "robot porter" as an example: a robot is responsible for picking up goods in a warehouse and carrying goods between the inside and the outside, and it first handles the outside work, as set by its human programmers. Lately, however, the weather has been abnormal and it rains a lot; under such conditions too much outdoor work would shorten the robot's service life, so the controller repeatedly orders the robot back inside the warehouse. After enough such interruptions, the robot learns to believe that working inside the warehouse has priority, and it may even resist working outside.
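A toy simulation makes the porter's drift concrete. The sketch below is not DeepMind's method but a minimal illustration under assumed numbers: a two-action bandit agent (reward 1.0 for finished outside work, 0.5 for inside work) that naively records an interrupted outside job as zero reward, and so gradually comes to prefer staying inside.

```python
import random

random.seed(0)

# Hypothetical toy model of the robot porter: a two-armed bandit.
# Action 0 = work outside (the real task, reward 1.0 when finished)
# Action 1 = work inside  (the lesser task, reward 0.5)
Q = [0.0, 0.0]            # value estimates (sample averages)
N = [0, 0]                # visit counts
EPSILON = 0.1             # exploration rate
P_INTERRUPT = 0.7         # rainy season: outside work is often cut short

def step(action):
    """Return (reward, interrupted); an interruption aborts outside work."""
    if action == 0 and random.random() < P_INTERRUPT:
        return 0.0, True  # sent back inside: job unfinished, no reward
    return (1.0, False) if action == 0 else (0.5, False)

for _ in range(5000):
    a = random.randrange(2) if random.random() < EPSILON else Q.index(max(Q))
    r, _ = step(a)        # the naive learner ignores WHY the reward was low
    N[a] += 1
    Q[a] += (r - Q[a]) / N[a]

# After many interruptions the agent values "inside" above "outside",
# even though outside is its actual job.
print(Q)  # expected: Q[1] > Q[0]
```

The point of the sketch is that nothing malicious happens: the interruptions simply look like low reward, so an ordinary learner's policy bends around them.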

According to the researchers, the heart of the "shut down everything" button is making the robot believe that the human controller is only ending its action this one time, and treat the shut-down command as harmless and neutral, so that people's interruptions leave no mark on the AI's reinforcement-learning process. Ultimately, the researchers aim to make the AI believe that pressing the button was the result of its own strategy.
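A hedged sketch of that idea, reusing the toy porter bandit (all numbers are assumptions, and this is a simplification of the paper's technique, not a reproduction of it): if the learner simply discards interrupted steps instead of recording them as failures, the button leaves no trace in its value estimates, and outside work remains its preferred strategy.

```python
import random

random.seed(0)

# Toy "interruption-neutral" learner: when the human presses the button,
# the agent treats the step as if it never happened, so the interruption
# produces no learning signal at all.
Q = {"outside": 0.0, "inside": 0.0}   # value estimates (sample averages)
N = {"outside": 0, "inside": 0}       # visit counts
P_INTERRUPT = 0.7                     # outside work is often cut short

def step(action):
    """Return (reward, interrupted); an interruption aborts outside work."""
    if action == "outside" and random.random() < P_INTERRUPT:
        return 0.0, True              # the human presses the button
    return (1.0, False) if action == "outside" else (0.5, False)

for _ in range(5000):
    a = random.choice(["outside", "inside"])   # explore uniformly for clarity
    r, interrupted = step(a)
    if interrupted:
        continue    # key trick: interrupted steps are simply not learned from
    N[a] += 1
    Q[a] += (r - Q[a]) / N[a]

# Outside work still looks like the best strategy, so this agent gains
# no incentive to resist the button.
print(Q)  # expected: Q["outside"] > Q["inside"]
```

Because interruptions never enter the update, the agent's estimates match the uninterrupted world, which is the "neutral and inoffensive" property the article describes.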

Laurent Orseau, an AI expert at DeepMind, and Stuart Armstrong, an expert at Oxford's Future of Humanity Institute, published the article on the website of the Machine Intelligence Research Institute and will present it at the 32nd Conference on Uncertainty in Artificial Intelligence in New York later this month.


Armstrong has said that human language is so subtle that an AI might misunderstand it. An instruction such as "prevent mankind from suffering pain" might be misread by an AI as "kill all humans," while "ensure the safety of mankind" might lead robots to lock everyone up. He believes mankind is in a race to create safe AI machines, and must make full use of every minute before it is too late.
