
A drone wall is intended to secure NATO’s eastern border against Putin’s tanks. The alliance is investing huge sums in new surveillance and weapons technology, including artificial intelligence. What if the military AI doesn’t live up to its promises? An investigative report by Andrea Rehmsmeier.

Radio segment in German, Deutschlandradio, March 2, 2026.

The intelligence of the eastern flank. Can AI defend NATO territory?
By Andrea Rehmsmeier, Deutschlandfunk, Science in Focus

The “Eastern Flank Deterrence Line” is planned to extend from Finland to the Black Sea in the near future – wherever NATO territory borders Russia and Belarus. (picture alliance / Sven Simon / Frank Hoermann)
News story and audio podcast in English (translated by NoteGPT.io)

A drone testing area in Finland. Drones fly towards targets. Defense companies present their latest systems. Soldiers discuss defense issues. Some frontline fighters from Ukraine have also accepted NATO’s invitation to Finland. They are the most sought-after conversation partners. How many drones would it take to protect Finland from a Russian attack, asks a reporter, looking anxiously towards the east. There on the horizon stretches the 1,000-kilometer land border that the sparsely populated country shares with Russia.

You mean kamikaze drones? Millions. That is exactly the plan. Not a contingent of soldiers, but a highly technical defense wall of AI-supported surveillance, drones, and autonomous weapons is supposed to protect the eastern NATO border. Many Finns find this sensible.

Artificial intelligence can identify and target objects. That is mathematics. There are more Russians than Finns. That’s why we need unmanned technology. Sorry, that sounds terrible. But that’s how it is.

NATO feels under time pressure. The defense wall is supposed to stand before Russia has rearmed its forces for an attack. But there is a problem. AI-driven technology that turns data and algorithms into lethal weapons is still in its infancy. It is still a military lottery. And as huge as the hopes are, so bloody are the nightmare visions of what happens if the AI does not deliver what it promises.

The intelligence of the eastern flank. Can AI defend NATO territory? By Andrea Rehmsmeier.

This is what NATO has named the high-tech security belt that is to stretch from Finland to the Black Sea in the near future, wherever NATO territory borders Russia and Belarus. Drone wall is the catchy buzzword the German press has found for it.

But it’s about more than just drones. It’s about replacing the soldiers missing for border protection with unmanned systems and artificial intelligence. The concept was developed by the European allied states that fear Putin’s tanks. They were supported by the US Army. But the US Army is already thinking further.

The eastern flank deterrence line is a use case for army transformation. Innovation for next-generation homeland defense.

The threat from Russia as a use case and innovation driver. Technologies proven at the NATO eastern flank thus recommend themselves for the broader defense market. Tender procedures are still ongoing. Which manufacturers with which systems will protect NATO territory in the future is unknown so far. Officially, it concerns unmanned weapons, AI-supported target acquisition, and multilayered defense measures.

And it is about coordinating everything effectively in a next-generation command system that uses real-time data to outsmart and outmatch the enemy.

Data-centered warfare is the concept that applies to border protection as well as homeland defense. A network of surveillance and weapon systems with lots of computing power, where information plays the central role. Sensors, radars, and satellites, intelligence services, and armed forces collect data.

Artificial intelligence creates situation pictures, military strategies, and tools for all command levels, from drone pilots to the supreme commander of the armed forces. Many AI-driven surveillance tools, weapons, and military platforms are still experimental. What is realistic, effective, and ethically responsible is another question.

In December 2025, NATO invited its alliance partners to the southern Finnish city of Riihimäki. Military strategists, weapons developers, and experienced Ukrainian frontline soldiers are to jointly evaluate prototypes and find new applications for military AI. Journalists are also invited on a trip to the drone test site.

The drone takes off and zigzags over the test area. Two young entrepreneurs from Lithuania hold the control device up high. The display shows the flight from the drone’s perspective. The pilot marks the target and locks it in. The drone needs no further information.

Now artificial intelligence has taken over control. Loitering munition, a wandering explosive device, is the weapon category widely used by both warring parties on the Ukrainian front. Many of these drones fly semi-autonomously. That means they can be controlled by humans or algorithms. With AI-supported object recognition, they look from the air for target objects to attack.

A crosshair appears on the display. The onboard operating system analyzes the live data of the video stream flowing through the drone’s camera lens. In a nosedive, it approaches the target on the lawn. With the right switch, the pilot could adjust the target coordinates. But right now, he doesn’t. Oh, missed. Overshot.

Failure. The drone flies past the target. The onlookers look on, embarrassed. Other drone tests go better that day.

Whether the hopes that NATO countries place in artificially intelligent weapons are justified, no one knows better than the Ukrainian frontline fighters testing prototypes in the war zone. Tech companies like Palantir, Google, and Microsoft, as well as German arms manufacturers, are interested in their reports. Many are active with their own test programs for AI-supported systems in frontline combat.

However, the soldier Yuri, who does not want to give his full name, reveals not a word about them. Western-funded weapons tests are subject to secrecy. But Yuri is happy to report how the Ukrainian army uses artificial intelligence in its drone war.

We have deployed deep-strike drones. They are supposed to hit targets 200 to 300 kilometers away. Remote control operated by a human could not cover such distances. Autonomous works best.

Deep-strike missions destroy command centers, weapons, or logistics nodes deep in the enemy’s rear. In the past four years of war, they have several times brought Ukraine significant successes. But do AI-controlled drones also prove themselves in street fighting, where they could spare soldiers life-threatening missions? Yuri shakes his head.

Full autonomy requires many tests. Even if the drone only flies very short distances. It is enough that the weather changes. Then it changes its strategy and attacks its own soldiers. That could become dangerous.

The brain of the Ukrainian drone army is the AI-supported military platform Delta, developed by Ukraine itself. With the computing power of a cloud environment, it evaluates huge amounts of continuously incoming data. The US company Palantir also supplies Ukraine with data and software for algorithmic frontline situation analysis.

Military platforms and software tools like these are considered the all-purpose weapons of modern warfare. They will likely play a key role in protecting the eastern NATO areas as well. Their object recognition is supposed to detect enemy tanks illegally rolling into border areas. Their algorithms are supposed to assess risks, create mission plans, and calculate variants.

They are supposed to coordinate armed forces and suggest target objects for the command to engage. More than that, they are supposed to chain autonomous weapon systems. From sensor to shooter, as the military jargon goes. Sensor data from drones or satellites should be forwarded to weapon systems in near real-time to quickly engage targets. That is the hope.

Back at the drone test site in Finland, soldiers gather around a compact little vehicle on tank tracks. It is the size of a chest of drawers. Its roof forms a wide surface suitable for transporting everything soldiers need in a combat zone: luggage, machine guns, grenade launchers, wounded comrades.

UGV, unmanned ground vehicle, is the vehicle’s technical term. The German manufacturer Arx Robotics developed it together with a Ukrainian partner. The product line is called Garion.

Garion is to demonstrate its capabilities in front of the military and journalists by independently driving a lap around the area. The operator, control device in hand, enters the waypoints.

The vehicle takes off from a standing start. First over the smooth lawn, then it nimbly climbs a several-meter-high earth mound. It can perceive its environment independently. Onboard sensors make this possible.

Adjust routes, avoid obstacles, follow people. No problem for the unmanned ground vehicle, reports company representative Duncan Faulkner. But these are just the basic functions. The AI-driven operating system can connect to platforms, networks, and clouds. This gives the vehicle access to all military services the AI era has to offer.

Then Faulkner tells of a military exercise that took place in November 2025 in Kenya. The British Army wanted to test an AI-supported attack chain, from sensor to shooter: reconnaissance and weapon systems linked via a military platform were to detect, correctly identify, and destroy a target hidden on the test site.

Involved were two German drone manufacturers and Arx Robotics. Garion was used to scout the test site. It identified an armored vehicle and sent target data to the command system Lattice, which forwarded it to the effectors of the drone manufacturers Stark and Helsing. Recce strike is the jargon for such an attack chain.

The commanders monitoring the exercise from headquarters had access via displays. But in an AI-supported attack chain, the human role is limited. Their main task is to authorize the target for firing. In this way, the vehicle extends the brigade’s range.

In the test, Garion operated 12 kilometers away from the troops. The tank it detected was even further away. That is far more than any light brigade can reach.

A scalable model for improving European land forces while increasing soldier safety, Arx Robotics said after the exercise. However, insiders leaked to the press that the drones’ accuracy was not particularly high.

In mid-February 2026, the University of Hamburg hosts the science congress The Promises of Algorithmic Warfare. Here, all those who research AI in military applications beyond state arms funding and private venture capital meet.

Lawyer Claudia Klonowska researches at the Paris Institute of Political Studies whether AI-supported military platforms that identify targets by object recognition comply with international law. If war crimes occur, involved humans should not be able to shift criminal responsibility to a misguided AI agent.

We have no specific legal framework for AI-supported systems. But we have comprehensive international humanitarian law. If there is uncertainty about a target object identified by AI, this framework requires involved humans to use their judgment and take precautions. Before issuing an operational order, they must assess the consequences of their actions.

However, what we currently see is worrying. Klonowska fears that military platforms developed in tech industry labs may not sufficiently consider this applicable law.

If a commander is to independently verify objects before a lethal strike, he needs alternative means of getting an overview of the situation on site. Only then can he be sure it is actually an enemy tank and not a lumberjack’s forestry machine. Is this considered in the design of military platforms? For her dissertation, the international law expert interviewed platform engineers in the USA and the Netherlands.

Such questions are quite complex. In my investigation in both countries, I found different approaches. What was striking, however, was that there was no best practice. Everyone experimented. Processes were modified during operation, algorithms retrained, AI models fine-tuned. Engineers struggled under time and success pressure from problem to problem.

They improved details but overlooked major systemic flaws. Now the international law expert fears such military platforms could gradually erode the limits of permissible military behavior. The speed at which military operations take place today is alarming.

Humans cannot survey the frontline fast enough to intervene in emergencies. Unfortunately, speed leads to even more speed. Probably the biggest systemic flaw of AI-supported military platforms is what computer scientists call a black box.

Data goes into a system at the start, results come out at the end. But how these are produced is not transparent. Computer scientist Nitin Sawhney believes decisions that can cost lives should not be generated in a black box. Because the more complex the system, Sawhney says, the more unpredictable the result.

But these systems are increasingly used for military purposes. Without guardrails, supervision, explainability, transparency, and accountability, I find that extremely problematic.

Sawhney traveled from Helsinki. In Finland, the professor researches AI, how to use it responsibly, and its societal impacts. He fears the 1,000-kilometer-long belt of high-tech surveillance and weapons systems that is to seal off Finland to the east could deeply shape society.

Because Finns and Russians have lived as close neighbors in the border area since time immemorial. Their families have intermingled, and until a few years ago the border was open for mutual visits. But in military logic that hardly plays a role.

Yes, we need more information. Ground reconnaissance, satellites, human sources – that has always been part of military reconnaissance. The goal is to improve security and avoid civilian casualties. But digital military platforms are far from the areas they are supposed to protect. Their situation analyses are created in a cloud.

The participants are in control centers instead of the battlefield. When acting in a data landscape far from any population, the likelihood increases that decisions are made quickly without accountability and understanding.

Data can be wrong, inaccurately selected, algorithmically biased, altered by enemy hackers. But warning voices are currently only faintly heard. Since tensions with Russia have risen, spending on military research and development has reached record highs. And data-centered warfare is a driving force of the arms boom in NATO countries.

Venture capital investments in deep-tech startups in defense and security have quintupled since 2019. In 2024, they reached an all-time high of over 5 billion US dollars, according to the data service Dealroom. The sharply increased defense spending of states adds to this.

Germany, which is currently stationing a Bundeswehr brigade in Lithuania, is also investing in AI. Because the prospect that artificially intelligent systems could spare soldiers life-threatening missions on the eastern flank is compelling.

Uranos AI is a 25 million euro procurement project for monitoring large areas on the NATO eastern flank, approved by the budget committee at the end of 2025. The tender is considered security-relevant and is not commented on by the government or participating companies. It is clear that the defense sector in Germany also hopes for a technological leap.

November 2025. The Air Force Tech Summit in Berlin, a specialist congress for military, industry, and research. Start-ups present their latest systems. The lectures focus on networked operations management and drone swarms. I warmly welcome Sergej Sumleny at this point.

Via video stream from Kyiv, a Ukrainian drone manufacturer is connected. Sergej Sumleny, CEO of United Unmanned Systems LLC, reports on everyday war at the Ukrainian front. About a death zone up to 30 kilometers wide, where anyone who stands out is killed by a drone within minutes.

And then, as an aside, Sumleny mentions that all these drones are remotely piloted by humans. Artificial intelligence plays no role in the Ukrainian drone war. I believe there is much wishful thinking about AI rather than reality. I am not aware of any drone using AI. Not a single one.

For a very understandable reason: the targets look very similar in this war. Soldiers look similar, use the same equipment, and much civilian equipment. The drones cannot distinguish friend from foe. The Ukrainian has not heard of any market-ready AI technologies with self-learning systems that give his soldiers real advantages at the front. Confused questions from the audience.

But we have to prepare not for the current war but for the next one. How do you see the potential of AI and automation? The only plausible use of AI technology I can imagine is a terror attack. Then drones fly over Potsdamer Platz with a terrorist purpose. Then every moving person is a target. AI then selects targets by size and distance.

Klaus Decker also listened attentively to the Ukrainian’s lecture. He is chief engineer at Airbus Defence and Space, one of Europe’s largest arms manufacturers. Airbus is considered a hot candidate for the German procurement project Uranos AI. Decker is responsible for an arms project regarded in Germany as forward-looking: the Multi-Domain Combat Cloud.

It is still in the test phase, but there is a big plan behind it. The combat cloud is to connect everything with everything. Surveillance and weapon systems with platforms, sensors, satellites, and radars with the computing power of a cloud environment.

Army, navy, and air force use the capabilities of the cyber and information domain, as well as human intelligence with artificial intelligence. But the Ukrainian drone manufacturer did not give AI a good report. When it comes to real war missions, drones cannot distinguish between own and enemy soldiers. How does Airbus want to make its cloud suitable for defense purposes?

Humans have the same problem. A pilot flying the drone has the same situational picture. That means he sees an image with three soldiers who all look similar. What we want to achieve is to be more reliable and better than the human who has to make the decision at that moment. So you must not trivialize it and say AI solves everything; responsible use is essential, and it starts with development.

For this, Airbus together with the Fraunhofer Society founded the Working Group on Technology Responsibility. An expert panel to set guardrails for research and development. But algorithms can be willful. Private users experience this daily.

AI chatbots like ChatGPT invent facts and sources. How can commanders be sure their AI assistants do not hallucinate just as much?

The problem is that ChatGPT now partly relies on its own data. That creates a circular logic: what I said yesterday, I read tomorrow, and the day after I draw conclusions from it. In the military, by contrast, you aim for a fixed data situation that is secured and on which you train your AI models. That way you avoid exactly this data incest.

Doubtful results must, of course, first be validated. For example, if an autonomously flying fighter jet suddenly wants to land on a water surface. For such cases, there should be a control system, says Decker, which encloses the algorithms in a kind of safety box.

If over time we have established that we trust the technology, that we have seen it generate appropriate results in our sense, then one is more inclined to give the technology a certain decision-making scope. And where we say a human operator still has to make a decision, we naturally include him.

This is exactly the core question. Who will get how much decision-making space in the data-centered wars of the future, in which military teams consist of human operators and their artificially intelligent assistants? Will humans or machines decide on attack or non-attack, on letting live or die, on surrender or escalation?

This big question is also present now as eastern NATO countries feverishly upgrade their borders to high-security zones and engineers search for new applications for their military AI without a fixed legal framework.

Because the hope for an artificially intelligent automation that solves military issues faster, more precisely, and smarter is great. And it is fueled by the capital flowing into research and development.

However, computer scientist Nitin Sawhney believes artificial intelligence cannot replace human intelligence, not even on the eastern flank. Elevating these data platforms to a fetish is unwarranted, he says, as their capabilities are limited.

They can collect and visualize data. That can be helpful, even if this information is sometimes faulty or otherwise problematic. Of course, we can experiment with AI-supported military platforms, but never in high-risk situations. They should be used very cautiously in operations as an additional information source, but not as the main source.

The intelligence of the eastern flank. Can AI defend NATO territory? By Andrea Rehmsmeier.

Lisa Biel and Hussein Michael Tschirpici spoke. Sound and technology by Oliver Dannert. Direction Anna Panknin. Editorial Christiane Knoll. A production of Deutschlandfunk 2026.
