While the aggressor’s functionaries engage in profitable and corrupt imitations of artificial intelligence (AI) development, including in Russia-occupied Crimea, as our Association has previously reported, the world is moving forward with the real development of new technologies.
On February 5, 2026, the “REAIM Pathways to Action Declaration” was adopted at the summit in La Coruña, Spain, marking a significant milestone in this effort.
This Declaration, signed by 34 states, including Ukraine, addresses the use of AI in combat operations. The signatories stated that, when properly applied, AI reduces the risk to humans on the battlefield, helps to protect civilians, and upholds international law principles.
Proper application is the key point here. The Declaration specifies which uses of AI in the military sphere will comply with legal requirements.
The central requirement is human accountability for any consequences of using AI. This entails establishing proper chains of command and control throughout the development, deployment, and application of AI solutions.
From the development stage onward, AI must comply with the requirements of international humanitarian law and human rights law. This includes a thorough legal risk assessment across the entire lifecycle, ensuring that AI-assisted decisions are trustworthy and that the decision-making process is transparent.
The document adopted in La Coruña calls for intergovernmental cooperation and coordination at the multilateral level. Although it is not mandatory, the Declaration establishes political guidelines and recommendations aimed at creating norms and standards for international cooperation regarding AI in a military context.
The new declaration is part of a broader international effort to ensure that AI complies with legal and humanitarian standards. The military sphere is not the only one here, but it is the defining one, as it concerns the possibility of allowing a “machine brain” to make decisions about the use of weapons, and thus about killing or injuring people.
The complexity of the philosophical, ethical, and legal questions that arise in connection with this cannot be overstated. Their resolution requires both colossal expert work and the collective political will of all states.
And such a common political will exists. The UN General Assembly has already adopted two resolutions regarding the responsible use of AI in combat operations. 159 states voted for the first one with two against, and 167 states voted for the second one with five against.
Surely, our readers have guessed that Russia actively opposed both votes. Similarly, Russia is doing everything it can to sabotage the work of the UN group of governmental experts who are developing an international treaty on AI in warfare.
The aggressor consistently sets itself against a responsible approach to military AI, against any international legal norms, and ultimately against humanity.
In light of this, Crimea once again emerges as an absolutely unique place. This is not the only occupied territory in the world, but it is the only occupied territory where the occupying state is engaged in developments in the field of AI.
This occupying state does not recognize any legal or ethical limitations and uses its developments for war against another state that acts from the position of law and morality.
Uncontrolled work with AI in the “gray area” of international law poses a direct threat to the civilian population: within the occupied territory, owing to the poor quality of the technologies, and beyond its borders, owing to the development of AI-equipped weapons.
Our Association has already written about this, but allow us to briefly remind you of the occupiers’ statements that since the beginning of 2025, one of the “state enterprises” in Crimea has been working on AI based on open-source neural networks with the aim of developing a drone management platform that is supposedly capable of “coordinating unmanned aerial vehicles even without a GPS signal.”
These efforts by the occupiers may be related to the development of unmanned surface vessels. The first of these was the “Sargan” project, pompously presented at the “St. Petersburg International Economic Forum” in 2023.
However, even then, Ukrainian specialists noted that the presented unit could hardly be considered a full-fledged combat drone due to its small size and extremely weak combat payload.
Notably, unlike Ukrainian combat drones, whose effectiveness in naval warfare could be attested by the crews of the destroyed ships of the Russian Black Sea Fleet, nothing was heard about the “Sargan” for two years after its first demonstration.
However, in June 2025, the “Sevastopol state university” announced the presentation of a whole line of maritime drones with elements of AI under the names “Sargan,” “Barabulka,” and “Kalgan,” as well as the “MK” platform and the underwater apparatus “Chersonesus.”
These devices were once again presented at an exhibition stand, not in real combat conditions, and not even in water.
Perhaps such vague “successes” of the occupiers, in which the corrupt component evidently interests the developers far more than the technical one, should even be welcomed.
However, we cannot overlook their well-known ability to steal technologies, which makes the probability of real weapon systems equipped with AI eventually emerging in Crimea far from zero.
The problem here is not limited to the developments of the occupying “scientific institutions” and “state enterprises” located in Crimea. Their activities should be considered in the broader context of the enemy’s attempts to put “electronic brains” to their service.
This involves both attempts to install AI on long-known means of terrorizing civilians, such as the “Shahed,” and entirely new developments. Worse still, according to available reports, when the artificial intelligence of Russian drones is trained, the distinction between civilians and the military is either not taken into account at all, or such drones are deliberately programmed to strike the civilian population and civilian objects.
In the near future, the aggressor may attempt to escalate terror against the civilian population to a new level by using swarm drone technologies with AI.
In their developments regarding the combat use of AI, the occupiers are far from the elementary principles of law, both those proclaimed in La Coruña and those discussed on other platforms.
In developing and deploying these systems, the occupiers take no account whatsoever of the requirement to comply with international humanitarian law and international human rights law.
It is unclear how accountability for the use of such AI can be ensured, and it is unlikely that this question troubles the clients and executors in Moscow, Sevastopol, and elsewhere.
The aggressor is developing this latest weapon with the same absolute disregard for legal norms that it has already demonstrated in the use of more traditional weapons.
Another alarming trend is the possibility of using the aggressor’s enterprises, including those deployed in Crimea, to test military AI technologies developed in other countries.
Those countries obtain an “ideal platform” in Crimea for secretly testing such technologies while openly declaring their commitment, or at least neutrality, toward international rules governing warfare with AI.
Thus, the occupier’s AI “Hunger Games” in Crimea pose a global threat. Only the de-occupation of Crimea can eliminate this threat, and until then, our Association will continue to monitor the developments and report on them for our readers.