Warfare in the New Era – Artificial Intelligence Weapons

Do you remember the movie Terminator from the mid-’80s? If by chance you haven’t seen it, it’s worth watching.

After watching the film again a few days ago, I began to wonder whether the development of artificial intelligence has a positive or negative impact on mankind.

Now, imagine a battlefield. As the enemy closes in, a retreating army scrambles to get away. Hundreds of tiny drones, indistinguishable from the quadcopters used by hobbyists and filmmakers, descend from the sky, using cameras to scan the landscape and onboard computers to decide on their own what looks like a target. Then they begin striking vehicles and individual soldiers, exploding on impact and sowing even more terror and confusion.

This sounds like science fiction, but it isn’t.

Drones have long been an element of combat, but until now they have mostly been controlled remotely by people. Autonomous drones can now be mass-produced cheaply by combining freely accessible image-recognition and autopilot software.
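
To make “freely accessible” concrete, here is a minimal, purely illustrative sketch of off-the-shelf image recognition: a pretrained, publicly downloadable detector labeling whatever objects it sees in a single camera frame. This is a generic tutorial-style example, not anyone’s actual targeting system; the file name frame.jpg and the 0.8 confidence threshold are placeholder assumptions.

```python
# Illustrative sketch: a freely downloadable, pretrained object detector
# (torchvision's Faster R-CNN) labeling the objects in one camera frame.
# "frame.jpg" is a hypothetical image; any photo will do.
import torch
from torchvision.io import read_image
from torchvision.models.detection import (
    FasterRCNN_ResNet50_FPN_Weights,
    fasterrcnn_resnet50_fpn,
)

weights = FasterRCNN_ResNet50_FPN_Weights.DEFAULT   # public pretrained weights
model = fasterrcnn_resnet50_fpn(weights=weights)
model.eval()

frame = read_image("frame.jpg")            # one frame, as a uint8 tensor
batch = [weights.transforms()(frame)]      # the model's own preprocessing

with torch.no_grad():
    detections = model(batch)[0]           # boxes, labels, scores

# Print every object the model is reasonably confident about.
categories = weights.meta["categories"]
for label, score in zip(detections["labels"], detections["scores"]):
    if score.item() > 0.8:                 # 0.8 is an arbitrary cutoff
        print(f"{categories[label.item()]}: {score.item():.2f}")
```

Everything above is open source and runs on commodity hardware; wiring such a recognition loop to equally accessible autopilot software is exactly the low barrier the paragraph above warns about.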

More weapon systems have adopted features of autonomy in recent years. Some missiles, for example, can loiter within a defined region without precise orders, but they still require a human to initiate an attack.

Improvements in artificial intelligence algorithms, sensors, and electronics have made it simpler to build more sophisticated autonomous systems, increasing the threat of machines that can decide on their own whether to use lethal force.

International Regulation

A rising number of countries, including Brazil, South Africa, and New Zealand, contend that lethal autonomous weapons should be subject to treaty restrictions, as chemical and biological weapons and land mines already are. Germany and France favor limitations on some types of autonomous weapons, particularly those that may be used against people. China supports only a very narrow set of regulations.

Other countries, such as the United States, Russia, India, and Australia, oppose a ban on lethal autonomous weapons, arguing that they must develop the technology to avoid being put at a strategic disadvantage.

What are the risks?

The frontier risks of fully militarized autonomous weapons range from the catastrophic fallout of automated military raids to an existential crisis for humanity in an age of machine sentience.

Cyber-Security Challenges

Algorithms are far from safe: they are vulnerable to errors, malware, bias, and manipulation. And since machine learning relies on machines to teach other machines, what happens if the training data is tampered with or manipulated? While security concerns exist everywhere, connected devices increase the chance of breaches mounted from remote locations, and security is extremely difficult to guarantee because the code is opaque. As a result, when AI goes to war with other AI (whether for cyber-security, geo-security, or space-security), these ongoing cybersecurity difficulties will pose significant threats to humanity’s future.
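
To see how little it takes, here is a toy sketch of the training-data tampering described above, using an assumed minimal setup (scikit-learn’s synthetic dataset and a plain logistic-regression classifier, not any real military pipeline): silently flipping a fraction of the training labels measurably degrades the model without touching a single line of its code.

```python
# Toy sketch of data poisoning: flipping a fraction of training labels
# degrades the resulting model. Dataset, model, and the 30% poisoning
# rate are illustrative assumptions, not figures from any real system.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Baseline: train on clean labels.
clean = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("clean accuracy:   ", clean.score(X_test, y_test))

# An attacker silently flips 30% of the training labels.
rng = np.random.default_rng(0)
poisoned = y_train.copy()
idx = rng.choice(len(poisoned), size=int(0.3 * len(poisoned)), replace=False)
poisoned[idx] = 1 - poisoned[idx]

tampered = LogisticRegression(max_iter=1000).fit(X_train, poisoned)
print("poisoned accuracy:", tampered.score(X_test, y_test))
```

In this toy setting the damage shows up as a lower accuracy score; in a deployed autonomous system, the same silent corruption would surface as misidentified targets.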

While autonomous weapons systems appear to be here to stay, the question we must all address, individually and collectively, is whether artificial intelligence will come to drive and dictate our strategy for human survival and security.

Final thoughts… Will the machines save us, or not?

Machines make mistakes too, arguably less often than human soldiers do. But while humans can be held responsible for their actions, machines cannot suffer legal consequences.

Further weaponization of AI is unavoidable as nations, individually and collectively, seek a competitive advantage in science and technology. As a result, the deployment of autonomous weapons systems (AWS) would change the basic definition of what it means to be human, along with the principles of security, humanity’s future, and peace.

It’s critical to understand and assess what may go wrong if the autonomous weapons race cannot be avoided. It’s past time to admit that just because technology makes the rapid development of AWS possible doesn’t mean we should pursue it. Weaponizing artificial intelligence may not be in humanity’s best interests! It’s time to take a breather.

What do you think? Should we worry about the unchecked development of artificial intelligence? Comment below.


