Data Protection principles applied in helpful AI Guidance issued from across the Channel!
As we continually seek to apply best practice and comply with data protection laws when implementing Artificial Intelligence, the French data privacy regulator, the CNIL, continues to provide helpful guidance and useful reminders of how we can use AI while safeguarding data subjects.
Following on from its 2023 Artificial Intelligence (AI) action plan, the CNIL has now released seven factsheets on the development of AI for consultation. The factsheets are another means by which the CNIL is seeking to foster innovation while safeguarding individuals’ rights, and they provide helpful guidance to those operating in this space.
The starting point of the guidance is that the development of AI can be compatible with the General Data Protection Regulation (GDPR) and that it is possible to develop ethical technology that citizens can trust. While this guidance does not come from a UK regulator or a Europe-wide regulator, the fundamental principles it outlines, such as purpose limitation, data minimisation and data retention, apply regardless.
A brief overview of the factsheets
The scope of the factsheets is limited to:
- AI systems in the development phase (AI in the deployment phase is excluded);
- Processing of personal data, even if such data is only residual; and
- Machine learning systems as well as logic- and/or knowledge-based systems.
Factsheet 1 - Applicable Regime
The CNIL confirms that, where an AI system falls within the scope of the factsheets, the GDPR is likely to apply unless the law enforcement exemption applies. This means that if the processing is carried out by a competent authority for the purposes of the prevention, detection, investigation or prosecution of criminal offences, or the execution of criminal penalties, the GDPR will not apply.
Factsheet 2 - Purpose
The data processing must have a specified, explicit and legitimate purpose. Data cannot be processed in a manner incompatible with the original purpose. The CNIL highlights some key considerations in this respect:
- A purpose defined too broadly cannot be specified and explicit.
- To be sufficiently precise, the purpose can refer to:
  - The type of system developed; and
  - The system's functionality and capabilities, including any limitations by design.
- The controller should identify those most at risk from the AI system in the operational phase.
- The conditions for use of the AI should be specified.
Factsheet 3 - Legal Status of the AI System Providers
- Data controller: the actor who initiates the development of the AI system and builds the training database. When using a training database created by another actor, the disseminator (who makes the personal data available) and the data re-user (who processes that data for their own purposes) are separate data controllers.
- Joint controllers: several controllers who feed the training database for a jointly defined purpose.
- Subcontractor (processor): the actor who processes data on behalf of the data controller as part of a service.
Factsheet 4 - Lawful Processing
A controller must define a legal basis for the data processing; this could include consent, legitimate interests, public interest, or contractual arrangements. However, if the controller is reusing data, it must determine whether the further processing is compatible with the original purpose. Interestingly, the CNIL has highlighted that controllers should not use data where they are aware that it does not comply with the GDPR or other privacy-related rules.
Factsheet 5 - Impact Assessments
An impact assessment is required where there is a high risk to individuals’ rights and freedoms. The factsheet refers to the European Data Protection Board’s (EDPB) criteria for determining whether an impact assessment is required.
Factsheet 6 - Data Protection in System Design
This factsheet sets out questions controllers can use to ensure privacy by design, with particular attention to data minimisation and special category data. It also highlights where the use of an ethics committee may be appropriate and provides guidance on the composition of such a committee.
Factsheet 7 - Data Protection in Data Collection and Management
Some methods of ensuring privacy by design, illustrated in the sketch following this list, may include:
- Generalisation measures: these measures aim to generalise, or dilute, the attributes of the data.
- Randomisation measures: these measures aim to add noise to the data to weaken the link between the data and the individual.
- Review of data: these measures aim to ensure the data used is not corrupted.
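To give a concrete, if simplified, sense of the first two measures, the short Python sketch below generalises an exact age into an age band and adds random noise to a salary figure. It is purely illustrative and not drawn from the CNIL's factsheets; the record fields, band width and noise scale are hypothetical choices.

```python
# Purely illustrative sketch of generalisation and randomisation.
# Field names, band width and noise scale are hypothetical choices,
# not values prescribed by the CNIL.
import random

def generalise_age(age: int, band_width: int = 10) -> str:
    """Generalisation: replace an exact age with a coarser age band."""
    lower = (age // band_width) * band_width
    return f"{lower}-{lower + band_width - 1}"

def laplace_noise(scale: float) -> float:
    """Randomisation: Laplace-distributed noise, built as the
    difference of two exponential draws."""
    return random.expovariate(1.0 / scale) - random.expovariate(1.0 / scale)

record = {"name": "Jane Doe", "age": 34, "salary": 52300.0}

anonymised = {
    # The direct identifier ("name") is dropped altogether.
    "age_band": generalise_age(record["age"]),                      # generalisation
    "salary": round(record["salary"] + laplace_noise(1000.0), 2),   # randomisation
}
print(anonymised)  # e.g. {'age_band': '30-39', 'salary': 51894.73}
```

In practice, the appropriate technique and its parameters would depend on the data concerned and the re-identification risk involved.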
The wider context
The CNIL’s consultation on the factsheets is timely: regulatory scrutiny of AI has been increasing. For example, the UK Information Commissioner’s Office recently issued Snap, the company behind Snapchat, with a preliminary enforcement notice in relation to its AI chatbot, and the Italian Data Protection Authority, the Garante, banned the AI chatbot Replika. Many, including the Scottish AI Alliance, have been calling for better regulation of AI to ensure that privacy and ethical concerns are addressed from the outset.
How can we help?
Should you require any assistance with privacy issues related to Artificial Intelligence, please contact a member of our Data Protection and Cyber Security team.
This article was co-written by Helen McBrierty, Trainee Solicitor.