12 May 2026
The Russian-Ukrainian War demonstrates how Artificial Intelligence (AI) can be used on the battlefield. AI has been a buzzword for many industries, including the military. The introduction of AI decision-making models into the military Command, Control, Communications, Computers, Cyber, Intelligence, Surveillance, and Reconnaissance (C5ISR) infrastructure has sparked ethical and moral debates.
Wider AI integration is already underway, combining artificial intelligence, robotics, and machine learning to accelerate decision-making, enhance surveillance, and automate weapon systems, creating an "AI-driven" force. The US military recently signed a deal with seven AI companies and announced that it will become an 'AI-first' fighting force. Many AI systems aim to assist with logistical matters, such as ordering spare parts before they are needed.
AI is beginning to be integrated into military intelligence gathering, analysis, and decision-making, but logistics is where these systems are making the deepest inroads. How deeply should they be embedded on the lethal side of our operations, and with what level of human oversight, if any? If an autonomous system commits a war crime or causes unintended civilian harm, it is unclear who bears legal responsibility: the operator, the commander, or the AI developer. These will be ongoing ethical and moral issues as we advance.
There is, however, another question: effectiveness as the technology develops further. As it improves, the minimum skill, effort, and investment required to field it drops. A journalist for the Telegraph managed to program a drone to target his boss, who was represented by a cardboard cutout.
The technology is emerging, and AI-driven systems will become more prevalent. In the future, Western military AIs will face Russian, Chinese, Iranian, North Korean, and various terrorist groups' AIs. These opponent systems will be bound by their own Rules of Engagement and will operate differently from ours. There will come a point when that distinction makes a difference on the battlefield.
In Ukraine, a single drone operator still flies a single drone. As AI technology advances, Ukraine aims to have one operator control multiple drones, and it is not alone in trying. Drone swarms are the future, yet the level of human interaction required to control them has not been fully discussed or examined.
The Debate
The ethical, legal, and moral questions center on how much we want to integrate AI into decision-making. Where do we draw the line on operational parameters?
Let’s put forward a hypothetical scenario. Western drones and other autonomous systems require an “okay” or “cancel” feature: ground or aerial drones locate a target and then request authorization, ensuring that the target, such as a troop transport, is a military objective and not an ambulance. Opponent AI systems, meanwhile, may fire with impunity at any movement within a specified territory.
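The “okay/cancel” requirement described above can be sketched in code. The following Python snippet is purely illustrative: the class names, confidence threshold, protected categories, and approval callback are all assumptions for the sake of the sketch, not any real weapon system's interface.

```python
from dataclasses import dataclass
from enum import Enum


class Decision(Enum):
    ENGAGE = "engage"
    HOLD = "hold"


@dataclass
class TargetReport:
    """A hypothetical detection produced by an autonomous platform."""
    classification: str   # e.g. "troop_transport", "ambulance"
    confidence: float     # model confidence, 0.0 to 1.0


def authorize(report: TargetReport, human_approval) -> Decision:
    """Human-in-the-loop gate: the system may only *request* engagement.

    Protected classes are never forwarded for approval, low-confidence
    detections default to holding fire, and every remaining engagement
    still requires an explicit human "okay".
    """
    PROTECTED = {"ambulance", "civilian_vehicle"}  # assumed category names
    if report.classification in PROTECTED:
        return Decision.HOLD
    if report.confidence < 0.9:  # assumed threshold; hold when uncertain
        return Decision.HOLD
    return Decision.ENGAGE if human_approval(report) else Decision.HOLD
```

The design point is that the default path is always HOLD: an ambulance is never even presented for approval, and a human “cancel” (or silence) overrides the machine. An opponent system without these gates would simply skip every check but the last line.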
This brings us to BAE’s Taranis, an unmanned combat aerial vehicle demonstrator that first flew in 2013. The premise was to set a geographical perimeter, upload images of potential targets, and then have the Taranis seek them out. Practical and legal questions arose over how the aircraft could distinguish a troop carrier from a field ambulance. At the moral, legal, and ethical levels, the consensus has been that a human must remain in the loop for lethal engagements.
Battlefield Reality
The battlefield reality does not match the moral, ethical, and legal standards we are now setting for AI systems, and that should be worrying. Russian troops in Ukraine have been known to use civilian cars, motorbikes, and scooters as means of transport. Their widespread use of disguised vehicles forces more split-second decisions on drone operators. Operating multiple drones in an urban centre will present unique issues with civilians.
Limiting collateral damage is a concern for Western militaries, but it may not be for our opponents; inflicting it may even be an operational goal. Russian soldiers have conducted a ‘human safari’ in Ukrainian urban centres, deliberately targeting Ukrainian civilians to terrorize the population. By late 2025, reports indicated over 200 civilians killed and more than 2,000 wounded in Kherson, and the human safari continues there and elsewhere. Terrorist groups such as al-Qaeda, ISIS, and Hezbollah will show a similar disregard for civilian life.
This makes the battlefield more fluid and increases the potential for civilian casualties, drawing additional resources from our militaries and allied security services to assist the civilian population. Creating chaos, sowing terror, and tying up resources are operational goals.
Attacks on a civilian population create political constraints and pressure on the local government as well as on the Western governments involved. Ongoing attacks on civilians demoralize both soldiers and the local population, as we saw in Iraq, and can create a hostile environment for our troops.
How does a Western military’s ethical AI compete with an opponent AI that does not share its limitations? That is a question the West must embrace, and it may require both AI software and hardware upgrades at the sharp end of the spear: our weapons systems. This is where opponent AI systems can outcompete our offensive AI systems. It will mean better AI-driven intelligence for predicting these attacks and AI defensive systems to guard against them. It will be a full-umbrella approach, which will place an organizational strain on Western forces.
The integration of AI into our military infrastructure will be an ongoing debate, but it should be noted that our adversaries may not be as philosophical as we are. And ultimately, it will be AI versus AI. Solutions will need to be found. The threat is not going anywhere; in fact, it is evolving with the ongoing Ukraine war and with the embrace of drone technology by terrorist groups from al-Qaeda in Africa to Hezbollah. We need to have these conversations now so we can plan for defensive and offensive capabilities against unhinged opponent AI systems.
The sci-fi imagery of Terminator cybernetics is here, but it will be one opponent versus another: one AI-driven force against another AI-driven force.
* We described the article to an AI chatbot to generate video prompts and then asked an AI video generator to produce what the chatbot suggested. The article is narrated by the author, with the video below.
Featured Image: Northrop Grumman X-47B, Robert Sullivan, Flickr, 2026

