Associate Professor Andrii Chvaliuk, conducting an analysis specifically for our Association, emphasized that in occupied Crimea, artificial intelligence (AI) and augmented reality technologies are used not as “tools for digital development,” but as tools for “managerial legitimization” and propaganda.
Using materials from “public discussions” concocted by the aggressor, the occupier-controlled “media,” and a case study of a “tourism campaign,” this article demonstrates how the rhetoric of “neutral digital transformation” is combined with the “normalization” of algorithmic control, gamified audience engagement, and the reinforcement of “politically advantageous versions” of events.
The risks of non-transparent data collection, the effects of “symbolic integration” of the occupied territories, and the conflicts associated with content generation in the style of “literary classics” are analyzed separately.
On May 27 last year, the occupier-controlled “public chamber of the republic” in Simferopol served as a cover for a roundtable discussion on “Digital transformation and information transparency.”
The title sounds perfectly neutral, but the context of the event is anything but, since it concerns a body embedded in the vertical of Russian occupation structures in Crimea.
Formally, the “chamber” is listed as a “civil society institution.” In reality, it was created by the aggressor’s intelligence services in an environment where political competition and independent public mechanisms are absent, and the media is “centralized” and purged of alternative positions.
Under such conditions, “public debate” becomes not a competition of positions but a means of confirming the aggressor’s already chosen “managerial course.” This was evident in the agenda of the regular discussions: members of the “chamber” repeatedly made criminal calls to change Ukraine’s borders and “justified” Russia’s armed aggression.
At the same time, the “chamber” conveyed the rhetoric of “inevitability,” portraying a political issue as a “technical” one. The key narrative at the roundtable was conveyed quite predictably: AI has already become “part of our reality.”
On the surface, this is simply a statement of fact. But in essence, it is a rhetorical move with three consequences: AI is presented as a “historically inevitable” technology; the discussion of the boundaries and conditions of its implementation is pushed into the background; and human rights issues, such as privacy, access to information, and the ability to challenge decisions, are replaced with a discussion of how to make AI “user-friendly,” reducing the accountability of the “authorities” to technical details.
In other words, instead of asking “Is the expansion of algorithmic governance permissible in an unfree environment?”, the question posed is “How do we do this properly?” This is a convenient framework for the occupiers: it “reduces conflict” while marginalizing the issue of “authority” responsibility.
The “crocodile tears” of the roundtable participants about how to “use neural networks without compromising personal data” were obviously bogus. In a system lacking independent oversight, judicial guarantees, and public scrutiny, such statements become mere declarations.
When the structures expanding digital control tools call themselves “guarantors of digital security,” a clear conflict of roles arises. As a result, the conversation about rights shifts to a more “pragmatic question” – how to “more effectively manage” their data in the interests of the aggressor.
The topic of “millions of bots writing comments” came up at the table. However, without an analysis of their origin, goals, and coordination, including the possible use of automated influence by the aggressor’s “authorities” on their proxies, this framing of the issue does not lead to a solution.
The result is a scheme convenient for the occupation “authorities”: bots are discussed as a problem “in general,” without specifying who exactly launches and coordinates them. The threat is stated, but its source is deliberately blurred. For the “talking heads” described, this was clearly a “safe format,” so the occupier-controlled “public structures” do not pursue the issue, lest they inadvertently expose their employers.
The organizers of the “table” emphasized the participation of “authorities,” as well as “relevant experts, journalists, and public figures.” However, independent researchers, human rights organizations, and other participants capable of voicing systemic criticism of the political use of AI were clearly absent from these events.
This allows us to view the format not as an “open discussion,” but as an institutionally and artificially organized “publicity,” where a pre-defined group of participants discusses a pre-approved list of topics. This model predictably produced not a competition of arguments, but the articulation of a coherent narrative.
In this context, the aggressor’s “public discussion” format on AI “fulfills” two practical objectives. The first is institutional legitimization: the declaration that “civil society” in occupied Crimea is supposedly “working” and involved in “solving pressing political, managerial, and information problems” related to AI.
And the second is the aggressor’s preparation for digital administration, in order to establish as the “norm” practices of total control: monitoring public sentiment, automated moderation, and risk analysis. In the occupier-controlled “Crymean media,” this “official rhetoric” is developed through the practice of making AI itself “speak.”
A September article in the aggressor-controlled “Krymskaya Pravda” newspaper, headlined “Artificial intelligence, tell us how the 2025 summer resort season went for Crimea,” is indicative.
The very format of the presentation, “in the name of AI,” was likely intended, according to the curators of the aforementioned article, to create the effect of “machine objectivity.”
But the resulting narrative was a typical “positive” one: an emphasis on a “turning point,” a “new chapter,” a “desirable resort,” and “record figures,” without comparison with alternative information sources, infrastructure constraints, and environmental costs – primarily water shortages, overloaded waste disposal systems, and the strain on coastal ecosystems during “peak season”.
Here, the allegedly used AI played the role not of an analyst, but of a technological credibility label. The “AI said so” formula serves the same function as the “experts think so” formula previously performed for the occupiers – only with a “stronger” effect of “neutrality.”
The mechanics of the influence conceived by the aggressor’s propagandists are quite transparent:
- a reference to AI increases the credibility of the message;
- the assessment is presented as neutral and “algorithmic”;
- the model remains opaque: the occupiers’ “media” do not explain which neural network was used or in what mode, making the claims difficult to verify.
The publication is quickly picked up by Russian-controlled aggregators and “social media”, thus initiating a “self-confirmation” loop.
This case is important not as an isolated “editorial quirk,” but as an indicative element of a broader system. When fake “public institutions” supposedly “normalize” AI on public platforms as an “inevitable” and “useful” tool, while the “media” simultaneously present politically charged interpretations as the “opinion of a smart machine,” a coherent model of “trust production” by the aggressor emerges.
In essence, this is an entire communications infrastructure: AI content, botnets, automated comments, secondary quotes in “friendly media,” and their subsequent dissemination on aggregators. This loop cheapens the mass promotion of interpretations desired by the aggressor state, accelerates their entry into the public agenda, and increasingly blurs the lines between “analysis” and propaganda.
Therefore, in the reality of occupied Crimea, AI is not a “hero of modernization” or a neutral intermediary, but a tool for the aggressor’s well-established influence.
And the more often the occupiers present technology as an “objective arbiter,” the more important it is to verify specifics: who collects and selects the source data, who sets the criteria for evaluation and interpretation, who manages the distribution platforms, and who has the right to correct errors or challenge the system’s conclusions.
Also, last year, the Russian occupation “administration” promoted a “tourism campaign” called “Flourish in Crimea!” It allegedly used generative AI to produce “texts and videos,” as well as augmented reality (AR) in the mobile app “Revive!”, which included scanning QR codes, animations, quests, and “greetings” from historical figures.
At first glance, this looks like ordinary digital marketing. But taken together, these solutions serve not the purpose of “tourism promotion,” but rather the symbolic “normalization” of the occupation. The campaign materials used “invitations” to Crimea from famous writers and poets – a mixture of real quotes and generated fragments “in a similar style.”
In our opinion, this creates several effects at once:
- the occupied territory symbolically receives the “approval” of figures in the cultural canon;
- for the audience, the line between authentic text and generated counterfeit is blurred;
- the author’s name begins to function as a political resource in a context in which the author themselves could not have participated.
In other words, AI is used not for convenience, but to reconfigure cultural memory to suit the current agenda.
When synthetic content is not labeled, the user develops a false sense of “historical authenticity.” This is especially sensitive for Crimea: the dispute is not only about territory but also about the interpretation of history.
AR mechanics create an emotionally pleasing and engaging digital layer that pushes the political and legal context – occupation, militarization, and human rights violations – into the background. Gamification (quests, bonuses, repetitive actions) reinforces regular behavior and forms a habit of consuming the “right” narrative.
It’s also worth remembering that most AR apps collect technical and behavioral data: camera access, device identifiers, activity statistics, and sometimes geolocation.
When connecting to cloud-based recognition services, the risks of audience profiling, user segmentation by region, and location-based movement analysis increase.
For the occupied territory, this isn’t an abstract risk but a potential element of a broader system of control, especially given the lack of public transparency about what data is actually collected, where it’s stored, and with whom it’s shared.
The targeting of residents of the occupied parts of Donetsk, Luhansk, Zaporizhzhia, and Kherson regions, which the aggressor state designates as “new regions,” deserves particular attention.
In this case, the “tourism campaign” launched by the occupiers not only addresses an “economic objective,” but also creates a false image of a “common space” – “common mobility,” “common symbolic territory,” and the “ordinariness” of the “new status” of the occupied lands.
The “Flourish in Crimea!” campaign is clearly a PR effort, yet the organizers do not disclose which AI models were used to generate the content. This lack of transparency directly raises the legal question of the limits of permissible use of cultural heritage.
According to the principles of civil legislation and international law in force in many countries, property rights to works remain in effect for the author’s lifetime and for several decades after their death.
The texts of many of the “classics” mentioned in the campaign (Gogol, Chekhov, Yesenin, Mayakovsky, Voloshin, Bunin) are declared by its promoters to be “public domain,” which “allows quoting their works without the consent of their heirs.”
However, personal non-property rights are protected in perpetuity in any legal system: the right of authorship, the right to one’s name, and the right to the integrity of a work. Furthermore, the rules on the protection of intangible assets allow for the protection of honor, reputation, and image even after a person’s death.
In this case, the violation of generally accepted norms lies not in the quotation itself, but in the generation of new phrases “in the author’s style,” which can be perceived as the position of a real historical figure whose name and image are being used in the aggressor’s propaganda campaign.
Therefore, phrases generated “in the author’s style,” presented as their “possible position,” directly contradict the principle of the integrity of a work, the author’s right to a name, and the prohibition on attributing to the author views they did not express.
A comprehensive analysis of “official” rhetoric, “media practices,” and AI and AR campaigns in occupied Crimea reveals the presence of a stable model of the aggressor’s informational and managerial influence.
The introduction of AI is presented to the public as a “neutral technological modernization,” and algorithmic management tools are described as an “acceptable norm.”
At the same time, the narrative of “balance and humanity” downplays issues of human rights, accountability, and the responsibility of the “authorities.” The very format of “AI told us” in the “media” reinforces the “ideologically correct” interpretation of events “desirable” to the occupiers, while AI-powered “tourism campaigns” are used as a tool for the “integration” and “normalization” of the occupation regime.
Thus, AI and AR act in occupied Crimea not as neutral tools of digital development, but as a connected infrastructure for “legitimization,” propaganda influence, and the “soft integration” of the occupation regime.
The occupiers’ “official” rhetoric, “media” formats, and digital “engagement mechanisms” combine to reduce critical perceptions among the population, reinforce “politically advantageous” interpretations of events, and expand the aggressor’s practices of data collection and analysis in the absence of transparency in such procedures.
Therefore, the focus should not be on abstract debates about the “friendship” or “rivalry” of the occupiers with digital technology, but on the concrete challenges of the aggressor’s use of AI: the lack of independent oversight, the manipulative use of synthetic content, and the blurring of the line between quotation and dishonest stylization “as the author.”
However, under the current occupation, legal mechanisms for protection, as well as real opportunities for public and judicial oversight for Crimean residents, are completely absent in this area. Our Association will continue to expose the aggressor’s manipulation of AI in the occupied territories of Ukraine in the relevant competent global and European forums.


