When the EU AI Act entered into force in August 2024, attention fell first on technology companies, platform operators and the manufacturers of high-risk AI systems. That focus was understandable. But the Act's reach extends further than many anticipated, and conference organisers are among those who need to pay attention.
Registration and Delegate Profiling
Many modern conference platforms use algorithmic tools to match delegates, recommend sessions or prioritise networking suggestions. Systems that profile individuals and make automated decisions about their experience at an event may fall within the scope of the AI Act, particularly if those decisions affect access to information or networking opportunities.
For organisers working in the policy space, where access to the right conversations can have material consequences, this is not a trivial concern. If your registration platform uses machine learning to segment delegates or prioritise certain attendees for exclusive sessions, you will need to understand where that system sits within the Act's risk categories.
Facial Recognition and Access Control
Several venue operators and event technology providers have begun offering facial recognition for badge-free entry and attendance tracking. The AI Act bans real-time remote biometric identification in publicly accessible spaces for law enforcement purposes, with narrow exceptions, and classifies most other remote biometric identification systems as high-risk. Conference venues typically qualify as publicly accessible spaces, and scanning a crowd for attendance tracking looks far more like remote identification than simple one-to-one verification at a gate, which means organisers should be cautious about deploying such systems without a thorough legal assessment.
At EUConvention, we made the decision early on not to use biometric identification at our events. The privacy concerns alone would have been sufficient reason, but the regulatory position under the AI Act reinforces that decision.
Content Moderation and Programme Design
Some conference platforms now offer AI-assisted content curation — tools that analyse speaker submissions and rank them by predicted audience interest. These systems raise questions under the Act about transparency and the right to human oversight. If a speaker's proposal is deprioritised by an algorithm, should they be informed that an automated system was involved in the decision?
The answer, under the AI Act's transparency provisions, is likely yes. Organisers who use such tools will need to disclose their use and ensure that meaningful human review remains part of the programme design process.
What Organisers Should Do Now
The full enforcement timeline for the AI Act stretches to 2027, but organisers should begin preparing now. Three practical steps are worth considering:
- Audit your technology stack. Identify every tool that uses automated decision-making or machine learning, from registration platforms to matchmaking algorithms; a simple way to record that inventory is sketched after this list.
- Assess risk categories. Determine where each tool falls within the Act's risk framework and whether any qualify as high-risk systems.
- Review supplier agreements. Ensure that your technology providers can demonstrate compliance with the Act's requirements, including transparency, documentation and human oversight provisions.
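A structured inventory can make the first two steps concrete, especially across a large technology stack. The sketch below is one minimal way to record it in Python; the tool names, suppliers and risk labels are purely illustrative, and the simplified category enum is no substitute for a proper legal classification under the Act.

```python
from dataclasses import dataclass
from enum import Enum

class RiskCategory(Enum):
    # Simplified labels loosely echoing the AI Act's tiers;
    # actual classification requires legal analysis.
    MINIMAL = "minimal"
    LIMITED = "limited (transparency duties)"
    HIGH = "high-risk"
    PROHIBITED = "prohibited practice"
    UNCLEAR = "needs legal review"

@dataclass
class EventTool:
    name: str
    supplier: str
    uses_automated_decisions: bool
    risk: RiskCategory = RiskCategory.UNCLEAR
    notes: str = ""

# Hypothetical entries for illustration only.
stack = [
    EventTool("Registration platform", "ExampleReg Ltd", True,
              RiskCategory.LIMITED, "ML delegate segmentation; disclosure needed"),
    EventTool("Session matchmaking", "ExampleMatch GmbH", True,
              notes="Profiles delegates; request supplier documentation"),
    EventTool("Badge printing", "ExamplePrint BV", False,
              RiskCategory.MINIMAL, "No automated decision-making"),
]

# Flag every tool that still needs a compliance conversation with its supplier.
for tool in stack:
    if tool.uses_automated_decisions and tool.risk is RiskCategory.UNCLEAR:
        print(f"Review with {tool.supplier}: {tool.name} ({tool.notes})")
```

Even an inventory this simple gives you something concrete to bring to supplier conversations: which tools make automated decisions, what you still do not know about them, and who is accountable for answering.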
A Broader Point
The AI Act is an expression of a distinctly European approach to technology governance: one that prioritises transparency, individual rights and democratic accountability. Conference organisers who work in the European policy space have a particular responsibility to model the values that our events are designed to promote. Compliance is not merely a legal obligation; it is a matter of credibility.
We will be examining these issues in greater depth at the European AI Governance Forum in September 2025. If you work in event technology or conference organisation and would like to contribute to the programme, we welcome your interest.