Setting the scene: Hong Kong Privacy Commissioner publishes first comprehensive AI-specific guidance

Written By

Wilfred Ng

Partner
China

I am a partner in our Commercial Department based in Hong Kong. As a technology, media, telecoms and data protection lawyer, I am experienced in advising on all aspects of commercial, transactional and regulatory matters in the TMT space.

Calista Chiu

Associate
China

I am an associate in our Commercial team based in Hong Kong. I advise clients on a wide range of commercial and regulatory matters with a particular focus on Technology, Media, Telecoms, Data Protection and Privacy laws and practices.

On 11 June 2024, the Office of the Privacy Commissioner for Personal Data (“PCPD”) published the “Artificial Intelligence: Model Personal Data Protection Framework” (the “Model Framework”). This is the PCPD’s first guidance document targeted at organisations procuring, implementing and using artificial intelligence (“AI”) systems in the context of their compliance with the Personal Data (Privacy) Ordinance (the “PDPO”).

Who does it affect?

The Model Framework is addressed to organisations that procure AI solutions from third parties and process personal data in their operation or customisation of AI systems (“Organisations”). Accordingly, the Model Framework does not apply to firms that develop AI solutions organically in-house; such firms are instead directed to the “Guidance on the Ethical Development and Use of Artificial Intelligence” published in August 2021 (the “2021 AI Guidance”), an earlier PCPD publication providing general guidance on privacy-friendly and ethical practices in the development and use of AI.

What kind of AI?

The Model Framework provides best practices for handling personal data when using both predictive and generative AI solutions. In other words, the guidance addresses both AI systems that generate content and those that make autonomous decisions, recommendations or predictions. By comparison, the Singapore Personal Data Protection Commission has recently published advisory guidelines specifically on AI recommendation and decision systems, with a view to providing further guidance on generative AI systems (a summary by our Singapore team is available here).

Why is this important to data users?

1. Recommendations for Organisations to comply with key PDPO privacy and security obligations across the full life cycle of the procurement, implementation and termination of AI systems

For example, data users are reminded of their data protection obligations arising from their role in the underlying data processing activities (as controller, joint controller or processor). Where an Organisation engages an AI system provider as a processor, the requisite contractual obligations concerning security and the prevention of unnecessary retention would apply. In the same spirit of good AI governance, where a joint controller relationship arises, the PCPD likely further expects Organisations to ensure that the contractual arrangements clearly allocate data protection responsibilities between the Organisation and the AI system supplier.

Further, Organisations are recommended to put in place an AI Incident Response Plan as part of their continuous management and monitoring of potential risks. An AI incident may be understood as an event in which the development or use of an AI system causes harm to a person, property or the environment. Where an AI incident forms part of a data breach, the data breach incident response mechanism should be engaged simultaneously; the AI incident response mechanism should accordingly include appropriate triggers for reporting to internal stakeholders and to external affected parties such as data subjects and regulatory authorities. This should also be considered in the context of the potential mandatory breach notification obligation included in the proposed PDPO amendments (please refer to our earlier summary of the proposed PDPO legislative changes here).

2. A risk-based approach to the use and procurement of AI systems

A risk-based approach provides useful assurance for Organisations seeking to “transpose” their existing data protection compliance programmes into specific measures addressing risks arising from AI-related data processing activities. Importantly, this is part and parcel of the accountability approach to handling personal data advocated by the PCPD in the “Privacy Management Programme: A Best Practice Guide”, and, in a nod to the “Brussels effect”, echoes the GDPR principle of accountability, which emphasises demonstrating compliance through proportionate data protection and security measures. Notable examples in the Model Framework include:

  • An AI governance committee comprising a cross-functional team should be considered to take ownership of the oversight of the whole procurement and implementation process for AI solutions. An Organisation with an established Data Protection Office has significant room to leverage existing dedicated privacy resources: for instance, a Chief Information Officer or Data Protection Officer could be nominated to lead a team of information security specialists, system analysts and operational personnel. The committee is also encouraged to consult industry frameworks such as ISO/IEC 23894:2023 (Information technology - Artificial intelligence - Guidance on risk management) as part of its continuing risk management system.
  • Organisations should consider the need to conduct a Privacy Impact Assessment (“PIA”) prior to using AI systems. While a PIA is not expressly required under the PDPO, it is an effective tool for detecting privacy risks and facilitates the implementation of privacy-enhancing technologies from a privacy-by-design perspective. It is particularly pertinent where AI systems involve large-scale processing of sensitive information, such as biometric verification of facial images. In such cases, due regard should be given to ensuring that AI solutions collect and process personal data only in a manner that is adequate, relevant and not excessive in relation to the intended purpose, as required by the PDPO.

3. Increasing regulatory attention on AI-related processing

Earlier this year, the PCPD completed a set of compliance checks on 28 organisations regarding their development or use of AI across a number of sectors, including telecommunications, finance and insurance, retail and education. The PCPD issued a list of recommended measures following the compliance checks which are broadly aligned with the suggestions set out in the Model Framework. Separately, the Hong Kong Monetary Authority has announced that it is actively exploring the establishment of a new sandbox for generative artificial intelligence, with a view to launching in late 2024. The sandbox will provide support from both technical and regulatory perspectives and enable regulated institutions to test their generative AI solutions in a more risk-controlled environment.

Against the backdrop of increasing regulatory attention on the use of AI systems across sectors, the Model Framework expects Organisations to monitor and assess not only operational and functional changes to the AI system, but also developments in the regulatory and technological environments. With the advent of AI regulations in other jurisdictions, Hong Kong regulators can be expected to continue looking for reference points in the global AI regulatory landscape.

What should data users do next?

1. Consolidating and configuring existing privacy compliance and vendor management frameworks

The Model Framework encourages Organisations to leverage their existing data governance, accountability and vendor management frameworks. This is also advisable because implementing third-party AI solutions often requires adapting an Organisation’s existing technology infrastructure to integrate with the AI systems. Accordingly, configuring and supplementing existing data protection assessment or gap analysis results is always a useful starting point. In doing so, Organisations should also consider other PCPD guidance that may be relevant to the underlying processing activities, such as the “Guidance on Collection and Use of Biometric Data” and the “Guidance on Data Breach Handling and Data Breach Notifications”.

2. Considering beyond privacy

The Model Framework emphasises the importance of taking a holistic approach to the testing and validation of AI systems. For example, industry standards for addressing information security risks should be observed when implementing AI solutions with open-source components; the Model Framework references the best practices for open source security set out on the Hong Kong Government’s InfoSec website. Additionally, Organisations should expect to expand their existing monitoring mechanisms for security advisories and alerts to cover open-source or AI-related vulnerabilities (such as those affecting APIs commonly used for AI deployment).
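The Model Framework does not prescribe any particular tooling for this monitoring. Purely as an illustration, the sketch below shows one way such an expanded mechanism might be automated: it assumes the Organisation maintains an inventory of its pinned AI-related open-source dependencies and checks each against the public OSV.dev vulnerability database; the package names and versions shown are hypothetical.

```python
# Illustrative sketch only: checking an inventory of AI-related open-source
# dependencies against the public OSV.dev vulnerability database.
# Package names and versions below are hypothetical examples.
import requests

OSV_QUERY_URL = "https://api.osv.dev/v1/query"

# Hypothetical inventory of AI-related dependencies in use.
DEPENDENCIES = [
    {"ecosystem": "PyPI", "name": "tensorflow", "version": "2.11.0"},
    {"ecosystem": "PyPI", "name": "langchain", "version": "0.0.200"},
]

def known_advisories(dep: dict) -> list[str]:
    """Return the IDs of known OSV advisories affecting one pinned dependency."""
    payload = {
        "version": dep["version"],
        "package": {"name": dep["name"], "ecosystem": dep["ecosystem"]},
    }
    resp = requests.post(OSV_QUERY_URL, json=payload, timeout=30)
    resp.raise_for_status()
    # OSV returns an empty object when no advisories match.
    return [vuln["id"] for vuln in resp.json().get("vulns", [])]

if __name__ == "__main__":
    for dep in DEPENDENCIES:
        ids = known_advisories(dep)
        status = ", ".join(ids) if ids else "no known advisories"
        print(f"{dep['name']} {dep['version']}: {status}")
```

A check along these lines could be scheduled alongside existing security advisory reviews, with any returned advisory identifiers routed into the Organisation’s incident triage and AI incident response processes.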

3. Pre-empting and actively managing AI-specific risks

In practice, a risk-based approach should not translate into a risk-tolerant AI strategy. Data users should ensure that mitigation measures are proportionate to the level of identified risk. Recognising the uniqueness and complexity of AI systems, this includes actively communicating to data subjects any residual risks that cannot be eliminated, and considering the need for a privacy impact assessment and early engagement with the AI system supplier.

4. Explainable AI

Data users should be reminded that the transparency and accountability obligations involved in explaining AI systems to data subjects require a more rigorous understanding of the underlying processing and a proactive approach to explaining the requisite information in a clear and concise manner. In fulfilling their notification obligations, data users should consider reframing their current privacy disclosure approach to cater for the reader’s level of comprehension of the technology, for example through interactive, multimedia content where appropriate, so that the privacy risks and impact of automated processing are clearly communicated to the data subject.
