Autonomous Weapon System / Smart Weapons

Introduction

The future remains uncertain, technology keeps opening new horizons, and nations continue developing ways to strengthen their defense systems. Weapons and defense systems need to be handled in a much smarter way by building technology into them, because manpower alone will not be enough to stop the mass destruction that modern weapons can cause. In this article, you will get an understanding of some of the modern approaches to autonomous weapons that exist today or are likely to exist in the long run.
So, have you ever imagined a weapon, or a collection of weapons and weaponry assets, that can not only identify enemies but also target and kill them on its own, without being noticed, in other words with stealth? You are not reading a science fiction story; such systems are a reality, and they can be programmed and trained to execute their tasks in a refined manner. These weapons are designed and developed with artificial intelligence and combine image recognition, motion detection, thermal and radiation detection, and machine learning capabilities.
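
To make that idea concrete, here is a minimal, purely illustrative sketch in Python of how such a system might fuse those sensor channels into a single target decision. The SensorReading fields, the weights, and the thresholds are all invented for this example; a real system would rely on trained models and far richer data.

```python
from dataclasses import dataclass

@dataclass
class SensorReading:
    """One fused observation of a potential target (all fields hypothetical)."""
    image_score: float        # confidence from an image-recognition model, 0..1
    motion_detected: bool     # output of a motion-detection stage
    thermal_signature: float  # normalized thermal intensity, 0..1
    radiation_level: float    # normalized radiation reading, 0..1

def classify_target(reading: SensorReading, threshold: float = 0.8) -> str:
    """Combine the sensor channels into a single label.

    Purely illustrative fusion rule: a fielded system would use trained
    models, not hand-picked weights and thresholds.
    """
    fused = (
        0.5 * reading.image_score
        + 0.2 * (1.0 if reading.motion_detected else 0.0)
        + 0.2 * reading.thermal_signature
        + 0.1 * reading.radiation_level
    )
    if fused >= threshold:
        return "probable-target"
    if fused >= 0.5:
        return "needs-review"
    return "no-target"

# Example: a warm, moving object that the image model is fairly confident about.
print(classify_target(SensorReading(0.95, True, 0.8, 0.2)))  # probable-target
```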

Today’s weapons are designed to track incoming missiles, detect enemy aircraft, UAVs, or other radar signatures, and deflect them or bring them down. These existing autonomous weapons and defense systems do not need human interaction to strike such non-human threats and can make their final decisions on their own. Following this approach, lethal autonomous weapons (LAWs) are now being developed, combining modern technology and algorithms to build advanced weapons intended for use against human targets.
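
As a rough sketch of that kind of fully automatic defensive behaviour, the snippet below takes a hypothetical radar track and decides whether to engage, keep tracking, or simply monitor, with no human in the loop. The track fields, categories, and thresholds are assumptions made for illustration, not the logic of any real air-defense system.

```python
from dataclasses import dataclass

@dataclass
class Track:
    """A radar track of an incoming object (all fields hypothetical)."""
    kind: str          # e.g. "missile", "uav", "aircraft", "unknown"
    speed_mps: float   # closing speed in metres per second
    range_km: float    # distance from the protected asset in kilometres

def intercept_decision(track: Track) -> str:
    """Decide how a defensive system might respond to a non-human threat.

    Illustrative only: it mirrors the idea that defensive systems can act on
    incoming objects without a human pressing the trigger, but the rules
    here are invented.
    """
    if track.kind in ("missile", "uav") and track.speed_mps > 200 and track.range_km < 50:
        return "engage"           # fully automatic response to a fast, close threat
    if track.kind == "unknown":
        return "track-and-alert"  # keep following the object, notify an operator
    return "monitor"

print(intercept_decision(Track("missile", speed_mps=900.0, range_km=35.0)))  # engage
```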

Smart Autonomous Weapons / Drones

In general, LAWS are next-generation weapons that can perceive, select, and engage a target with little to no human involvement. Since there is no established definition of these systems, the term usually covers a wide array of possible weapons, from fully self-directed systems that can launch attacks without any human participation to semi-autonomous arms and weaponry that require confirmatory human action before executing a mission. These lethal self-directed weapons also face criticism, focused largely on fully autonomous, AI-based systems equipped with the latest arms and ammunition. Critics ask whether such systems will respect human life and obey international humanitarian law (IHL). Another threat comes with the wide deployment of such weapons: because they are software-driven and connected over wireless networks, there is a possibility that they could be hacked by cybercriminals, who could then take control of these LAW systems and turn them against people.

Other nations, such as the U.S. and the U.K., anticipate the enormous potential and advantages of this technology. In their view, the automated targeting capabilities of LAWS could substantially increase a nation's ability to safeguard its citizens through greater accuracy and efficiency in both attack and defense. Still, features such as internal technical security and the ability to switch from autonomous to human control are key concerns that LAWS makers must address before launching them as fully fledged weapons.
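
The distinction between semi-autonomous and fully autonomous operation, and the ability to switch back to human control, can be sketched as a simple gate. The snippet below is only a conceptual illustration with invented names; the point is that in semi-autonomous mode nothing proceeds without explicit human confirmation.

```python
from enum import Enum

class ControlMode(Enum):
    SEMI_AUTONOMOUS = "semi"   # system proposes, a human must confirm
    FULLY_AUTONOMOUS = "full"  # system may act on its own decision

def authorize_engagement(mode: ControlMode, human_confirmed: bool) -> bool:
    """Gate an engagement on the current control mode.

    Conceptual sketch only: in semi-autonomous mode nothing goes ahead
    without explicit human confirmation, which is the kind of 'switch back
    to human control' safeguard discussed above.
    """
    if mode is ControlMode.SEMI_AUTONOMOUS:
        return human_confirmed
    return True  # fully autonomous: no confirmatory step (the contested case)

# Switching the system back to semi-autonomous mode blocks any unconfirmed action.
print(authorize_engagement(ControlMode.FULLY_AUTONOMOUS, human_confirmed=False))  # True
print(authorize_engagement(ControlMode.SEMI_AUTONOMOUS, human_confirmed=False))   # False
```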

Smart Augmented Systems as a Weapons' Team

As members of the public and private sectors have begun to take tangible action around LAWS, not all nations are equally opposed to such weapons. At last year's August meeting of the Group of Governmental Experts (GGE), 26 nations supported a complete ban on these lethal autonomous weapons. Nevertheless, 12 states, including Russia, the U.S., and the U.K., opposed negotiating a treaty on LAWS. In the meantime, LAWS has drawn significant interest and attention in the private sector. In June 2018, Google faced an internal revolt as thousands of its employees signed a petition urging the company to end its participation in Project Maven, a contract with the DoD (Department of Defense) to develop AI that can analyze drone footage; Google employees feared the project could one day enable the development or use of LAWS. Under heavy pressure from tech experts and its own employees, Google later announced its decision not to renew the Project Maven contract and pledged not to design or deploy AI in technologies that cause overall harm. In July 2018, more than 200 organizations and 3,000 individuals (including Google DeepMind's founders, Elon Musk (CEO of Tesla, Neuralink, and SpaceX), and the CEOs of various other robotics companies) followed suit with a pledge to "neither participate in nor support the development, manufacture, trade, or use of lethal autonomous weapons." In response to these highly publicized events, the Defense Department recently tasked the Defense Innovation Board with developing, in collaboration with tech companies and leaders, guidance on AI-based systems in military weapons, operations, and systems for smarter defense.

The board planned to publicly release its recommendations on smart weapon systems by June 2019. Meanwhile, 2021 is already confronting a global pandemic. What if we also encounter a situation in which smart autonomous systems attack all over the world, creating a hybrid World War III?

--

Written by Gaurav Roy CTO, Masters | BS-Cyber-Sec | MIT | LPU

I’m the CTO at Keychron :: Technical Content Writer, Cyber-Sec Enggr, Programmer, Book Author (2x), Research-Scholar, Storyteller :: Love to predict Tech-Future