Singapore: MAS Consults on Proposed Guidelines on Artificial Intelligence Risk Management

Contacts

Kenneth Lo

Counsel
Singapore

I am a financial services regulatory lawyer, covering payments, capital markets services and crypto regulatory matters.

Darveenia Rajula Rajah

Associate
Singapore

I am an associate in Bird & Bird's Financial Regulatory Group, based in Singapore. I previously served as Senior Legal Counsel at the Monetary Authority of Singapore, and bring first-hand regulatory perspective to advising clients on financial services regulatory and compliance matters.

The Monetary Authority of Singapore (“MAS”) has issued a consultation paper proposing new guidelines (“Guidelines”) on Artificial Intelligence (“AI”) Risk Management for financial institutions (“FIs”). The proposed Guidelines set out MAS’ supervisory expectations on how FIs should govern, manage and mitigate risks arising from the use of AI, including Generative AI and AI agents.

The consultation reflects MAS’ recognition that while AI can deliver efficiency and innovation across the financial sector, newer and more complex AI technologies introduce heightened and less well-understood risks that require robust oversight and controls.

Who do the proposed Guidelines apply to?

  • All FIs regulated in Singapore (banks, insurers, capital markets intermediaries, payment service providers, etc.).
  • FIs of different sizes, with implementation expected to be proportionate to the FI’s size, business model and AI risk exposure.
  • Singapore branches or subsidiaries of overseas groups may rely on group-level AI frameworks, provided these meet MAS’ expectations.

What types of AI are in scope?

For the purposes of the proposed Guidelines, MAS adopts a broad definition of AI. The Guidelines apply to:

  • AI models, being methods or approaches that convert assumptions and input data into outputs such as estimates, decisions or recommendations.
  • AI systems, which may comprise one or more AI models together with other machine-based components.
  • AI use cases, referring to the specific real-world contexts in which AI models or systems are applied.
  • AI models or systems that learn and/or infer from inputs to generate outputs such as estimates, predictions, content, summaries, recommendations or decisions that may influence physical or virtual environments, and that vary in their levels of autonomy and adaptiveness after deployment.

This scope includes Generative AI, AI agents and other newer AI technologies falling within MAS’ proposed definition of AI.

What is MAS concerned about?

MAS recognises the benefits of AI, but has highlighted that newer and more complex AI technologies introduce heightened and sometimes less well-understood risks, including:

  • Poor or biased outputs driven by inadequate data governance.
  • Lack of transparency or explainability for decisions affecting customers.
  • Over-reliance on automated outputs without meaningful human oversight.
  • Cybersecurity, privacy and data leakage risks.
  • Concentration and dependency risks arising from third-party AI providers.
  • Governance gaps where AI is deployed faster than risk controls can keep pace.

What does MAS expect FIs to do?

MAS has identified the following focus areas and what it will expect to see in each:

  • Board & senior management oversight: Clear accountability for AI risk management, integration of AI risks into enterprise risk frameworks, and active oversight by the board and senior management. Dedicated cross-functional oversight may be expected where AI risk exposure is material.
  • Identification & AI inventory: Clear identification of where AI is used across the organisation and an accurate, up-to-date inventory of AI use cases, systems and models.
  • AI risk materiality assessment: Consistent assessment of AI use cases based on impact, complexity and reliance, to determine which AI applications warrant more stringent controls.
  • AI lifecycle controls: Controls applied across the entire AI lifecycle, from development and testing to deployment, monitoring, change management and decommissioning.
  • Key AI risk areas: Appropriate controls addressing data governance, explainability, fairness and bias, human oversight, third-party AI risks, cybersecurity, resilience and auditability.
  • Capabilities & capacity: Adequate people, skills, training and technology infrastructure to support safe and responsible AI use.


What this means for financial institutions – key takeaways

  1. AI governance is now a board-level issue

    AI risk management is no longer just a technology or innovation concern. MAS expects boards and senior management to actively oversee AI risks and ensure clear accountability.

  2. You cannot manage what you have not identified

    FIs should expect supervisory scrutiny on whether they can clearly identify where AI is used, maintain a credible AI inventory, and assess which AI use cases are materially risky.

  3. Higher-impact AI attracts higher regulatory expectations

    AI used in customer-facing or regulated activities (such as credit decisioning, underwriting or financial advice) will be subject to more exacting standards around explainability, fairness, human oversight and testing.

  4. Third-party AI does not shift regulatory responsibility

    Reliance on vendors, cloud-based AI or open-source models does not reduce an FI’s accountability. MAS expects robust due diligence, contractual protections, contingency planning and ongoing oversight of third-party AI.

  5. Existing risk frameworks will need to be uplifted, not replaced

    The proposed Guidelines build on existing MAS frameworks such as Fairness, Ethics, Accountability and Transparency (FEAT), technology risk management, outsourcing and model risk management. Many FIs will need to extend and adapt existing frameworks to address AI-specific risks, particularly for Generative AI.

How can Bird & Bird assist you?

We are supporting FIs across the region as they assess the impact of MAS’ proposed Guidelines on Artificial Intelligence Risk Management and prepare for implementation. In particular, we can assist FIs with:

  1. AI governance and policy development
    We review and draft AI policies and governance frameworks to align with MAS’ proposed supervisory expectations, including board and senior management oversight.
     
  2. Gap analysis against MAS’ proposed Guidelines
    We conduct targeted gap assessments of existing AI use cases, governance structures and risk controls, with clear prioritisation of remediation actions.
     
  3. AI use case and risk materiality assessments
We support the identification of AI use cases and the assessment of their risk materiality based on impact, complexity and reliance, to identify higher-risk applications.
     
  4. Third-party AI and outsourcing considerations
    We advise on regulatory and contractual risks arising from third-party AI arrangements, including due diligence and alignment with MAS expectations.

This article is produced by our Singapore office, Bird & Bird ATMD LLP. It does not constitute legal advice and is intended to provide general information only. Information in this article is accurate as of 14 January 2026.
