Summary
Global tech companies have repeatedly rolled out new digital applications in Myanmar without adequate due diligence, fuelling disinformation and hatred and enabling atrocity crimes. As AI now enables instant audio-visual creation, it is critical that companies adopt rigorous, rights-based safeguards to avoid repeating these errors.
Over the past fifteen years, Myanmar has exemplified both the promise and the peril of rapid technological change. Cut off from global advances until political reforms began in 2012, the public leapfrogged to widespread use of smartphones, social media, and digital services. This shift brought major benefits, including broader access to information, economic growth, and new forms of civic participation. However, it also brought serious human rights harms, as disinformation and hatred spread online and contributed to atrocity crimes. Myanmar’s experience offers vital lessons for regulating Artificial Intelligence (AI).
Human Rights Myanmar submits this report to the UN Special Rapporteur in the Field of Cultural Rights to underscore the need for rights-based AI governance, particularly in repressive and conflict-affected contexts like Myanmar, where “heightened due diligence” is paramount.
Harnessing AI for inclusive creativity
AI could bring transformative creative potential to Myanmar, expanding access to cultural life for the country’s diverse communities and facilitating the right of people to benefit from scientific progress. AI can:
- Recreate Bagan’s temples in virtual form, enabling local communities to guide restoration and interpretation efforts;
- Digitise endangered Chin oral stories, preserving them and making them searchable;
- Translate song lyrics from Rohingya, Shan and other minority languages into Burmese (and vice versa), fostering inter-community dialogue;
- Convert public-health messages into simple cartoons or sign-language animations, enhancing accessibility for low-literacy and deaf audiences.
AI as a tool of military repression
To date, AI applications in Myanmar have overwhelmingly served State repression. Because the military actively targets anyone who expresses dissent, creatives are among the regime’s primary targets and have already been victims of AI-powered human rights violations.
For example, the military deploys AI-powered facial-recognition cameras supplied by companies such as Huawei and Dahua to monitor the population, identifying and tracking individuals of interest. Since the coup, many creatives who tried to protest anonymously against the military have been identified by such cameras, arrested, and imprisoned.
The military has also used AI-based deep-packet inspection systems to censor the internet, filtering and blocking communications and digital content nationwide. Creative content has often been the focus of these systems, violating the rights to privacy and freedom of expression.
Amplifying harm from text to creative AI
Myanmar has a tragic recent history of propaganda, disinformation, and incitement to hatred leading to atrocity crimes against the Rohingya, much of which spread through predominantly text-based social media.
The emergence of AI, however, vastly expands the formats in which false or inflammatory content can be created, including images, audio, video, and even interactive “deepfake” experiences. AI is capable of creating vast amounts of manipulative content, including, for example, racist imagery, extreme religious songs, or videos celebrating violence against women.
The adage that “a picture conveys a thousand words” is true, and could be extended: a picture is also a thousand times more believable. AI can not only create manipulative content in different formats, but can also quickly and cheaply embed it in cultural content, which is often less conspicuous and more influential than, for example, text-based news content. For instance, ethno-nationalism can be hidden within AI-generated artworks or even full movies.
Civil society and media outlets in Myanmar already struggle to counter manipulative text-based content in a society where digital and media literacy skills are scarce. They will struggle even more against mass-produced AI-generated audio-visual content.
Furthermore, the source of AI-generated content is much harder to identify. Such content can be created anonymously, distributed anonymously, and is difficult to trace, and it can therefore be weaponised to inflame public sentiment and further marginalise vulnerable groups.
Cultural hegemonisation and the marginalisation of Myanmar content
AI models, particularly large language models, are predominantly trained on data originating from global north sources, mostly in English or, to a lesser extent, Chinese. Very little of this data comes from Myanmar or is in any of the languages spoken in the country.
This inherent bias in training data produces AI systems that are inevitably more attuned to dominant cultural norms and languages, resulting in the further marginalisation of creative content from the global south, including from Myanmar’s many communities.
The proprietary algorithms that govern how AI systems prioritise and present creative content are often kept secret. It is highly likely that these algorithms, whether by design or by default, favour content that aligns with the dominant training data and the interests of technology companies, further deprioritising creativity originating from Myanmar. This can lead to cultural homogenisation and hegemonisation, undermining the right of Myanmar’s diverse communities to participate freely in cultural life and to express their unique identities.
Copyright and the right to remedy
AI-generated creativity relies on vast existing works, raising issues of consent, compensation and copyright. In Myanmar, limited digitised content means AI outputs are more likely to mirror original creations so closely that they blur the line between transformation and infringement.
While Myanmar’s national Copyright Law may offer some recourse, global model training routinely circumvents local protections. Most AI companies are headquartered outside Myanmar’s jurisdiction, leaving creators with no effective remedy: a Myanmar creative seeking to enforce their copyright would have few, if any, practical options.
Economic disempowerment of Myanmar creatives
Even before AI, many Myanmar creatives struggled to earn a stable income. If their clients and consumers can replace paid commissions with “zero-cost” AI outputs, the result will be widespread job losses and further contraction of an already precarious creative sector.
The effect is likely to be worse for marginalised creatives already excluded from using AI, whether through lack of internet access or lack of the skills needed to take advantage of the technology. It is also likely to further hinder the development of a vibrant and diverse national creative sector.
Intersectional and sectoral impacts
AI’s effects will vary across creative fields and social groups. For instance, AI-generated content can amplify sexist stereotypes or normalise violence against women, while automated moderation algorithms can mislabel feminist content as hostile. Creatives with disabilities may find that “accessible” AI platforms still exclude them if user interfaces lack inclusive design.
Traditional crafts and oral history practices, already threatened by urbanisation and migration, risk further erosion if digital preservation projects neglect local knowledge holders. Dedicated, intersectional impact studies are therefore needed for each creative sector and vulnerable group.
Heightened due diligence and conflict-sensitive risk management
Under the UN Guiding Principles on Business and Human Rights (UNGPs), AI companies must exercise human rights due diligence wherever they operate, and “heightened due diligence” in conflict-affected areas like Myanmar, where the risk of gross human rights abuses is elevated.
However, Myanmar’s experience has been that technology companies often fail to conduct adequate due diligence before rolling out new digital applications, particularly in countries in the global south. Even when companies do assess risks, they often prioritise consultation in global north markets.
Given that creative AI poses potentially significant risks, AI companies should, at the very least, consult independent experts to assess new AI applications for their potential to fuel serious human rights violations. In the precautionary spirit of the UNGPs (“if in doubt, carry it out”), assessments should begin before deployment, not after. Companies should also implement “sunset clauses” whereby any deployed AI system is automatically reviewed and re-approved at regular intervals, based on fresh impact data.
Globally responsible regulatory approach
Any regulatory approach to AI must consider two critical concerns. Firstly, States may design AI regulations to address concerns within their borders, but in practice these regulations shape how digital companies operate worldwide. AI regulations must therefore undergo their own form of due diligence to ensure they do not inadvertently harm human rights in other contexts, including in repressive conflict zones like Myanmar.
Secondly, not all States can be trusted to regulate AI in a manner that prioritises the public interest, including the protection of human rights. Allowing a military regime like the one in Myanmar, which has a documented history of human rights abuses, to regulate AI without international oversight would be detrimental to the protection of fundamental rights and could further entrench its authoritarian control.
Therefore, any regulatory framework must involve international human rights bodies and civil society organisations to ensure accountability and prevent misuse by oppressive regimes.
Conclusion
The intersection of AI and creativity presents profound implications for human rights globally, and the situation in Myanmar serves as a critical case study. The failure of technology companies to conduct adequate human rights due diligence in the past has had devastating consequences, and the deployment of AI in the current repressive conflict-affected environment carries significant risks of exacerbating existing human rights violations and undermining fundamental freedoms.
It is therefore imperative that AI companies, States, and the international community adopt a human rights-centred approach to AI governance, prioritising due diligence, transparency, and accountability. Learning from the tragic lessons of countries like Myanmar, we must ensure that the development and deployment of AI technologies are guided by international standards and a firm commitment to protecting the safety, dignity, and fundamental rights of all individuals.
Recommendations
- Embed human rights in AI design: Require human-rights impact assessments with affected communities. AI must augment—not replace—human creators, and recommendation algorithms must prioritise reliable, context-appropriate content.
- Mandate transparency and oversight: Oblige AI firms to publish model-training data sources and moderation rules by region. Creators must have the right to know and consent. Establish independent review bodies for AI harms.
- Apply heightened due diligence in conflict zones: Remind States and companies that the UNGPs demand extra scrutiny. In Myanmar, pause or alter AI deployments that enable surveillance or repression.
- Safeguard creators and cultural diversity: Insist on informed consent and fair pay when artists’ works train AI. Support UNESCO-backed legal and financial measures, and secure platforms for diaspora and underground artists.
- Defend online expression: Call on all States to guarantee an open internet, condemn shutdowns and censorship, and insist platforms resist undue takedown demands under consistent global standards.
- Strengthen international norms and enforcement: Urge the Human Rights Council to adopt binding standards, building on resolution 54/21, for mandatory AI impact assessments and victim remedies. Establish a UN crisis-monitoring mechanism and integrate rights-based AI controls into Universal Periodic Reviews, treaty-body reviews, and similar processes.

