AP and RDI: Supervision of AI systems requires collaboration and must be arranged quickly
Cooperation between supervisory authorities is paramount in the supervision of artificial intelligence (AI), the Dutch Data Protection Authority (AP) and the Dutch Authority for Digital Infrastructure (RDI) write in their advice to the Dutch government. Decisions on which bodies will carry out the different supervisory tasks need to be made soon, as the first parts of the new European AI Act will come into force at the start of 2025.
The AP and the RDI emphasize that sufficient budget and staff must be made available in time for all supervisory authorities involved, so that they can promptly begin tasks such as providing guidance and enforcing the rules.
The advice has been prepared by the AP and the RDI in collaboration with 20 other Dutch supervisory authorities that may play a role in AI supervision. For over a year, these supervisory authorities have been jointly preparing for the supervision of AI. With this joint vision for a national supervisory structure for AI, Dutch supervisors are leading the way in Europe.
AI Act
Last month, European ministers voted in favour of the AI Act, the world's first comprehensive law on artificial intelligence. The AI Act stipulates that high-risk AI systems may only be placed on the market and used if they meet strict product requirements. These systems will be given a CE marking, as has been mandatory for years for elevators, mobile phones and toys.
Aligning with existing product supervision
The AP and the RDI recommend that AI supervision in various sectors be aligned as much as possible with existing supervision. The supervision of high-risk AI products that already require CE marking can remain the same. For example, the Netherlands Food and Consumer Product Safety Authority (NVWA) will continue to inspect toys, even if they contain AI, and the Health and Youth Care Inspectorate (IGJ) will supervise AI in medical devices.
Angeline van Dijk, Inspector General of the RDI: ‘Cooperation is key when it comes to the concentration of knowledge and coordination in practice. Effective supervision that considers innovation can only arise if relevant supervisory authorities cooperate with developers and users of high-risk AI. Companies and organizations can explore with the RDI whether they need to comply with AI regulations and how they can do so. The RDI’s efforts to set up regulatory sandboxes, a kind of breeding ground for responsible AI applications, are an excellent example of this. This advice is an important milestone in that regard.’
New supervision of AI
The supervision of high-risk AI applications for which no CE marking is currently required should largely lie with the AP, in addition to sectoral supervision, the supervisory authorities write. It does not matter in which sector these systems are used, from education to migration and from employment to law enforcement. The AP should be the so-called ‘market surveillance authority’ here.
AP Chairman Aleid Wolfsen says: ‘The market surveillance authority will ensure that AI placed on the market actually meets requirements in areas such as AI training, transparency and human oversight. This requires specialist knowledge and expertise, which is most efficient when bundled together. It is also important that in this way the AP can keep an overview, given that companies developing such AI often do not do so for one sector only. Cooperation with sectoral supervisory authorities is crucial, because they have a good overview of AI use in, for example, education or by employers. We will act quickly to set up this cooperation.’
The supervisory authorities propose two exceptions: in the financial sector, the Dutch Authority for the Financial Markets (AFM) and De Nederlandsche Bank (DNB) will handle market surveillance, while the Human Environment and Transport Inspectorate (ILT) and the RDI will oversee critical infrastructure. Additionally, the market supervision of AI systems used for judicial purposes must be set up in such a way that the independence of judicial authorities is ensured.
It is important that supervisory authorities are quickly appointed not only in the Netherlands, but also in other Member States. Cross-border and large AI systems require cooperation between supervisory authorities from different Member States, as well as with the new European AI Office, which will supervise large AI models such as those underpinning ChatGPT.
Urgent AI regulatory actions
Several issues need to be addressed in the short term. These include identifying the fundamental rights supervisory authorities, a role the supervisory authorities envision for the Netherlands Institute for Human Rights and the AP. Attention is also needed for the notified bodies that will assess AI systems’ compliance with European standards. The supervisory authorities urge the government to quickly appoint the relevant supervisory authorities, so that guidance, enforcement and practical preparation for these new tasks can begin in time. For example, the ban on some forms of AI is likely to apply as early as January 2025. The supervisory authorities propose that the AP be responsible for supervising prohibited AI.