This blog post provides a brief overview of the impact on video game developers of Regulation (EU) 2024/1689 of 13 June 2024 laying down harmonized rules on artificial intelligence (the “AI Act”). More and more developers are integrating AI systems into their video games, including to generate backgrounds, non-player characters (NPCs), and backstories for objects to be found in the game. Some of these use cases are regulated under specific circumstances and create obligations under the AI Act.

The AI Act entered into force on 1 August 2024 and will apply gradually over the following two years. Whether and how its provisions apply depends predominantly on two factors: the role of the video game developer, and the risk level of the AI system.

The role of the video game developer

Article 2 of the AI Act delimits the scope of the regulation, specifying who may be subject to it. Video game developers are most likely to fall under two of these categories.

Video game developers will be considered (i) providers if they develop their own AI system, and (ii) deployers if they integrate an existing AI system made by a third party into their video games.

The AI risk level and related obligations

The AI Act classifies AI systems into four categories based on the risk associated with them: unacceptable risk (prohibited practices), high risk, limited risk (subject to transparency obligations), and minimal risk. Obligations on economic operators vary depending on the level of risk resulting from the AI systems used.

The European Commission has stated that, in principle, AI-enabled video games face no obligation under the AI Act, although companies may voluntarily adopt additional codes of conduct (see AI Act | Shaping Europe’s digital future). It should be borne in mind, however, that in specific cases, such as those described in this section, the AI Act will apply. Moreover, the AI literacy obligation applies regardless of the risk level of the system, including minimal risk.

The AI literacy obligation

The AI literacy obligation applies from February 2025 (Article 113(a) AI Act) to both providers and deployers (Article 4 AI Act), regardless of the AI system’s risk level. AI literacy is defined as the skills, knowledge and understanding that allow providers, deployers and affected persons to make an informed deployment of AI systems, as well as to gain awareness of the opportunities and risks of AI and the possible harm it can cause (Article 3(56) AI Act).

The ultimate purpose is to ensure that video game developers’ staff are able to make informed decisions in relation to AI, taking into account their technical knowledge, experience, education and training, the context in which the AI system is to be used, and the persons or groups of persons on whom it is to be used.

The AI Act does not detail how providers and deployers should comply with the AI literacy obligation. In practice, various steps can be taken to achieve AI literacy, such as staff training tailored to the relevant roles and internal guidelines on AI use.

Conclusion

The regulation of AI systems in the EU potentially has a significant impact on video game developers, depending on how AI systems are used within particular video games. It is early days for the AI Act, and we are watching this space carefully, particularly as the framework evolves to adapt to new technologies.
