The integration of artificial intelligence into nuclear weapons systems is stirring fears among experts who warn that such systems can be compromised. Russia, China, and the US have been incorporating AI into their security systems for greater precision and faster response to external threats. This has led to growing concerns that the trend might one day trigger an all-out nuclear war.
Scientists are warning about the dangers of rapid advances in artificial intelligence. Both nuclear scientists and defense experts have cautioned that continued integration of AI into missile technologies will bring far more dangers than gains, arguing that AI systems could take control of those weapons and inflict catastrophic destruction on human populations.
Integration of artificial intelligence
Countries have already started integrating artificial intelligence into their missile systems, according to a report in the Bulletin of the Atomic Scientists. Russia, for instance, has incorporated AI into its new Poseidon nuclear torpedo, said to be capable of wiping out any major city in Europe.
China has also indicated plans to integrate AI into its nuclear missiles and is investing heavily in such technologies. The US has been at the forefront of developing AI systems capable of running missile programs, with both Russia and China playing catch-up. This growing reliance on the technology in missile programs has alarmed experts, who argue that it may spell danger in the future.
Experts are particularly concerned about the possibility of the technology malfunctioning. Such a failure could trigger an all-out nuclear war between rival countries. The case of Lt. Col. Stanislav Petrov, who in 1983 defied automation bias and correctly judged that a Soviet early-warning system's report of incoming US missiles was a false alarm, thereby averting a nuclear exchange between the US and the Soviet Union, illustrates the risk: had an automated system been acting without human oversight, the same false alarm could have escalated into war.
Michael Horowitz, a co-author of the report, said:
While so much about it is uncertain, Russia’s willingness to explore the notion of a long-duration, underwater, uninhabited nuclear delivery vehicle in Status-6 shows that fear of conventional or nuclear inferiority could create some incentives to pursue greater autonomy.
The experts concluded that automation should be used to reduce the frequency of accidents and increase human control over nuclear weapons, which they consider the safest way to safeguard these arsenals. They also warned that AI systems are vulnerable to hacking; if such systems are integrated with nuclear weapons, attackers could turn them against the host nations, so the utmost caution must be exercised.
Featured image by Pixabay