Currently, there are no AI-specific laws or regulations in Hong Kong (apart from measures adopted to ban certain AI products involving personal safety, such as autonomous driving AI). However, local regulators have issued high-level guidance on AI and AI products, including the Hong Kong Monetary Authority’s High-level Principles on AI and its Consumer Protection in respect of Use of Big Data Analytics and AI by Authorised Institutions (both published in November 2019), as well as the Privacy Commissioner for Personal Data’s Guidance on Ethical Development and Use of AI (published in August 2021). Most recently, the Hong Kong Government’s Office of the Government Chief Information Officer (OGCIO) developed an Ethical Artificial Intelligence Framework (Ethical AI Framework), originally designed for internal use; an adapted version was published publicly in August 2023 to assist organisations in incorporating AI and big data analytics into IT projects whilst considering the ethical implications.
The Ethical AI Framework consists of:
The term “AI” is broadly defined in the Ethical AI Framework as a collective term for computer systems that can sense their environment, think, learn and take actions in response to the gathered data, with the ultimate goal of fulfilling their design objectives. “AI Systems” is defined as a collection of interrelated technologies used to help solve problems autonomously and perform tasks to achieve defined objectives without explicit guidance from a human being. “AI Applications”, meanwhile, refers to the collective set of applications whose actions, decisions or predictions are empowered by AI models, such as IT projects with prediction functionality or model development involving training data.
Key Actions to Adopt the Ethical AI Framework: In order to adopt the Ethical AI Framework, the OGCIO recommends the following Key Actions: (i) considering all Ethical AI Principles throughout a project lifecycle; (ii) reviewing any existing project management governance structures to ensure alignment with the AI Governance Structure, and setting up an optional ‘Ethical AI Committee’ if necessary; and (iii) following the AI Practice Guide as well as completing the AI Application Impact Assessment.
The Ethical AI Principles are rules to be followed when designing and developing AI Applications. The Ethical AI Framework designates two of the twelve principles, (1) Transparency and Interpretability and (2) Reliability, Robustness and Security, as “performance principles”: fundamental principles that must be achieved to create a foundation for the execution of the other principles.
The remaining principles are categorised as “general principles”, derived from the United Nations’ Universal Declaration of Human Rights and the Hong Kong Ordinances.
The twelve Ethical AI Principles are as follows:
“AI Governance” is defined as the practices and direction by which AI projects and applications are managed and controlled. It establishes standard structures, roles and responsibilities for the AI adoption process, measured against the practices set out in the Ethical AI Framework.
The framework adopts a three lines of defence model:
| Line of defence | Roles and responsibilities |
| --- | --- |
| First line | The project team, responsible for AI Application development, risk evaluation, execution of actions to mitigate identified risks, and completion of the AI Assessment. |
| Second line | A project steering committee and project assurance team, responsible for ensuring project quality, defining acceptance criteria for AI Applications, providing independent review and approving AI Applications. |
| Third line | An IT board or chief information officer, optionally supported by an ethical AI committee of external advisors, responsible for reviewing, advising on and monitoring high-risk AI Applications. The committee’s purpose is to provide advice and strengthen the organisation’s existing competency in AI adoption. |
The Ethical AI Framework guides the ethical use of AI in organisations by providing a description of activities to be covered throughout all stages of the AI Lifecycle, a structure to be followed when executing AI projects, and the corresponding capabilities required to apply ethical AI.
The six stages of the AI Lifecycle are:
The development process of an AI application places a significant emphasis on data, as the quality of data often dictates the quality of the AI model. Data sourcing and preparation is therefore a continuous exercise, as AI models can often benefit from more or better data for iterative model training during the development process. As such, the AI Lifecycle often involves a continual feedback loop between the stages of project development, system deployment, and system operation and monitoring for iterative improvements, differentiating it from a traditional software development lifecycle.
The AI Practice Guide provides practical guidelines for organisations to observe and apply, in various practice areas corresponding to stages in the AI Lifecycle, when incorporating AI in IT projects to ensure ethical adoption. These practice areas are assessed as part of the AI Application Impact Assessment.
The AI Assessment enables organisations to assess, identify, analyse, and evaluate the benefits, impact, and risks of AI Applications against a set of practical considerations for implementing ethical AI. This ensures organisations are meeting the intent of the Ethical AI Principles and helps determine the appropriate mitigation measures required to keep any negative impacts within an acceptable level.
The AI Assessment consists of the following components:
Organisations may make use of the Ethical AI Framework when adopting AI in their IT projects or services. The Ethical AI Framework is designed not only to serve as a reference or guide for the project team during the development and maintenance of an AI Application, but also to provide model governance structures. These enable organisations to demonstrate accountability and build public trust in their adoption of AI by evaluating its impact, safeguarding the public interest and facilitating innovation.
The AI landscape in Hong Kong is rapidly evolving, and these rapid changes may bring ethical concerns. With the Ethical AI Framework in mind, organisations should remain alert to potential changes in the regulatory environment. Currently, AI regulation in Hong Kong derives primarily from existing rules on intellectual property rights (in relation to AI systems and AI-generated IP) and on data protection and privacy. While no AI-specific legislation has yet been proposed, it is anticipated that the regulatory framework could be readily adapted to address emerging challenges. The Government plans to establish a special task force to recommend the most effective approach to dealing with the revolutionary impact of Large Language Models such as ChatGPT, with future legislation being a possibility.
*Information is accurate up to 27 November 2023