
AI And The Changing Character Of War – OpEd


“The battlefield is a scene of constant chaos. The winner will be the one who controls that chaos, both his own and the enemy’s.” – Napoleon Bonaparte

Modern warfare is on the verge of witnessing another Revolution in Military Affairs (RMA) in the form of Artificial Intelligence (AI)-based weapon systems that could fundamentally transform the character of war. These systems have the potential to autonomously acquire, identify, engage and destroy intended targets, and to carry out battle damage assessment, in real time. This not only challenges meaningful human control over the decision-making process but also raises questions about how much decision-making authority should be delegated to machines in warfare.

The history of AI in warfare can be traced to the Second World War, when the Allied powers developed Colossus in 1944 to break high-level German ciphers. In that sense, the computer was born in war and for war. The monograph The New Fire: War, Peace, and Democracy in the Age of AI argues that AI is a new kind of fire that will transform the destructive power of weapons, resembling the revolutions in military affairs brought about by inventions such as ancient Greek fire and the gunpowder weapons of medieval Europe.

The use of AI-based weapon systems can, on the one hand, dramatically shorten the decision-making loop and improve the efficiency of military operations. On the other hand, owing to inherent vulnerabilities, including misinterpretation of data, malfunction, cyberattacks, unwanted escalation and a lack of accountability, such systems can lead to uncontrollable destruction and maximize collateral damage, in contravention of the Law of Armed Conflict (LOAC) and the Rules of Engagement (ROE). In the same context, in December 2023 more than 150 countries supported United Nations resolution L.56, identifying the challenges and concerns posed by lethal autonomous weapons and warning that “an algorithm must not be in full control of decisions involving killing.”

The ongoing Israeli bombing and genocide in Gaza reflect the lethality associated with AI-based target selection, acquisition and destruction. A report carried by The Guardian in December 2023 revealed that Israel is using an AI-based system called Habsora, also known as the Gospel, to produce more than 100 targets a day. According to the former head of the Israeli Defence Forces (IDF), Aviv Kochavi, human intelligence-based systems could identify only around 50 targets a year in Gaza. Consequently, by June 2024, Israel had destroyed 360,000 buildings and killed 37,746 Palestinians, mostly women and children, while injuring 84,932 civilians, using an AI-based target selection system.

Ironically, the use of AI-enabled weapons undermines the essence of the Fourth Geneva Convention (1949) on the ‘Protection of Civilian Persons in Time of War’, in violation of International Humanitarian Law (IHL). In February 2024, the chief executive of Israel’s tech organization Startup Nation Central, Avi Hasson, noted that “the war in Gaza has provided an opportunity for the IDF to test emerging technologies which had never been used in past conflicts.”

The United Nations Office for Disarmament Affairs (UNODA) identified in 2017 that an increasing number of states were pursuing the development and use of autonomous weapon systems, which present the risk of an ‘uncontrollable war.’ According to a 2023 study on ‘Artificial Intelligence and Urban Operations’ by the University of South Florida, “the armed forces may soon be able to exploit autonomous weapon systems to monitor, strike, and kill their opponents and even civilians at will.” The study further highlights that in October 2016 the United States Department of Defense (US DoD) conducted experiments with micro drones capable of advanced swarm behaviour such as collective decision-making, adaptive formation flying and self-healing. Asia Times reported in February 2023 that the US DoD had launched the Autonomous Multi-Domain Adaptive Swarms-of-Swarms (AMASS) project to develop autonomous drone swarms that can be launched from sea, air and land to overwhelm enemy air defences.

In South Asia, AI-based weapon systems could have a serious impact on security dynamics, given the longstanding disputes between the two nuclear-armed neighbours, Pakistan and India. India is actively pursuing AI-based weapons and surveillance systems. In June 2022, India’s Ministry of Defence organized the ‘AI in Defence’ (AIDef) symposium and exhibition, where Defence Minister Rajnath Singh introduced 75 AI-based weapon platforms, including robotics, automation tools, and intelligence and surveillance systems. Given the challenges associated with AI-enabled weapon systems, this pursuit could have catastrophic consequences for the South Asian region.

Pakistan, for its part, has actively advocated a binding instrument within the Convention on Certain Conventional Weapons (CCW) framework that would ban the development and use of autonomous weapons. Pakistan believes that the use of AI-based weapons poses challenges to IHL, and it was the first country to call for a ban on these weapons. The urgency of the issue was also highlighted by UN Secretary-General Antonio Guterres in the 2023 New Agenda for Peace, which underscores that “there is a necessity to conclude a legally binding instrument to prohibit the development and deployment of autonomous weapon systems by 2026.”

Notably, in January 2024, a group of researchers from four US universities simulated war scenarios using five AI programs, including models from OpenAI and Meta, and found that all of the models chose nuclear attacks over peace with their adversaries. The findings of this study are a wake-up call for world leaders and scientists to come together in a multilateral setting and strengthen the UN’s efforts to regulate AI in warfare.

History reminds us that the scientific director of the Manhattan Project, J. Robert Oppenheimer, came to regret building a nuclear bomb for America when he witnessed the weapon’s massive destructive power at its first detonation on 16 July 1945. Watching the erupting fireball of the nuclear explosion, he recalled the line, “Now I am become Death, the destroyer of worlds.” To avoid a similar reckoning with AI-based weapon systems, the world must act now.



