The second session of AI NoEs Horizons | Q&A Sessions, entitled ‘AI Policy and Standards’, brought together some of the leading European projects in artificial intelligence around a common goal: to reflect on how public policies, regulatory frameworks and technical standards can promote responsible, trustworthy and people-centred AI.

The meeting, aimed at researchers, professionals, policy makers and the general public, featured experts from five leading European projects. Three belong to the AI NoEs network: Sandra Engel (ELSA), Sabrina Bianchi and Astik Samal (Horizon ENFIELD), and Kurt Tutschku (dAIEDGE). Two joined from outside the network: Lauren Romary of NoLeFa, a two-year pilot project launched at the end of 2024 to lay the foundations for these infrastructures, and Antonino Rotolo of EUSAiR, an initiative supporting the implementation of regulatory test environments for artificial intelligence throughout the European Union in order to promote innovation and competitiveness, strengthen legal certainty for innovators and facilitate compliance with the AI Act. The presence of these specialists from outside AI NoEs highlighted the remarkable growth of this series of meetings and the network's relevance to Europe's positioning in artificial intelligence.
In a dynamic format combining presentations with an open question-and-answer session, the 56 attendees explored eleven key questions on the main challenges and opportunities facing Europe in the field of AI governance.
At dAIEDGE, as a Network of Excellence specialising in distributed, efficient and scalable AI at the Edge, we actively participate in this forum for exchange. We also reaffirm our role as leader of the AI NoEs Communication Club, a strategic function aimed at strengthening the consistency, clarity and reach of the messages emanating from the European ecosystem of excellence in AI.

One of the focal points of the debate was the influence of European policies, in particular the AI Act, on the way projects communicate their work. Participants agreed that standards and regulations should not be seen as barriers, but as key tools for ensuring safety, transparency, protection of rights and public trust in AI systems.
Another relevant aspect was the challenge of communicating complex scientific results to diverse audiences. Through the Communication Club, which we lead, the network promotes common dissemination methodologies, accessible narratives and adapted formats that facilitate knowledge transfer and reinforce the social impact of research.
The session also highlighted the need for greater collaboration between projects in order to build a unified European voice on policy recommendations. This cooperation is essential to strengthen the position of AI NoEs vis-à-vis the institutions and to promote consistent implementation of the AI Act, in which market surveillance authorities play a key role.
Participants also discussed the advisability of moving towards shared platforms for evaluating AI systems at national and European level. Pooling resources would optimise efforts, ensure consistent criteria and improve the quality of certification and supervision processes.
This second meeting consolidated a space for collaboration that will continue to grow in future sessions. At dAIEDGE, we thank all the participants and renew our commitment to continue working towards distributed, efficient artificial intelligence that is aligned with Europe's fundamental values.