The Monetary Authority of Singapore (“MAS”) has issued a consultation paper proposing new guidelines (“Guidelines”) on Artificial Intelligence (“AI”) Risk Management for financial institutions (“FIs”). The proposed Guidelines set out MAS’ supervisory expectations on how FIs should govern, manage and mitigate risks arising from the use of AI, including Generative AI and AI agents.
The consultation reflects MAS’ recognition that while AI can deliver efficiency and innovation across the financial sector, newer and more complex AI technologies introduce heightened and less well-understood risks that require robust oversight and controls.
For the purposes of the proposed Guidelines, MAS adopts a broad definition of AI. The Guidelines apply to:
MAS recognises the benefits of AI, but highlights that newer and more complex AI technologies introduce heightened and sometimes less well-understood risks, including:
| Focus area | What MAS will expect to see |
| --- | --- |
| Board & senior management oversight | Clear accountability for AI risk management, integration of AI risks into enterprise risk frameworks, and active oversight by the board and senior management. Dedicated cross-functional oversight may be expected where AI risk exposure is material. |
| Identification & AI inventory | Clear identification of where AI is used across the organisation and an accurate, up-to-date inventory of AI use cases, systems and models. |
| AI risk materiality assessment | Consistent assessment of AI use cases based on impact, complexity and reliance, to determine which AI applications warrant more stringent controls. |
| AI lifecycle controls | Controls applied across the entire AI lifecycle, from development and testing to deployment, monitoring, change management and decommissioning. |
| Key AI risk areas | Appropriate controls addressing data governance, explainability, fairness and bias, human oversight, third-party AI risks, cybersecurity, resilience and auditability. |
| Capabilities & capacity | Adequate people, skills, training and technology infrastructure to support safe and responsible AI use. |
**AI governance is now a board-level issue**
AI risk management is no longer just a technology or innovation concern. MAS expects boards and senior management to actively oversee AI risks and ensure clear accountability.
**You cannot manage what you have not identified**
FIs should expect supervisory scrutiny on whether they can clearly identify where AI is used, maintain a credible AI inventory, and assess which AI use cases are materially risky.
**Higher-impact AI attracts higher regulatory expectations**
AI used in customer-facing or regulated activities (such as credit decisioning, underwriting or financial advice) will be subject to more exacting standards around explainability, fairness, human oversight and testing.
**Third-party AI does not shift regulatory responsibility**
Reliance on vendors, cloud-based AI or open-source models does not reduce an FI’s accountability. MAS expects robust due diligence, contractual protections, contingency planning and ongoing oversight of third-party AI.
**Existing risk frameworks will need to be uplifted, not replaced**
The proposed Guidelines build on existing MAS frameworks such as Fairness, Ethics, Accountability and Transparency (FEAT), technology risk management, outsourcing and model risk management. Many FIs will need to extend and adapt existing frameworks to address AI-specific risks, particularly for Generative AI.
We are supporting FIs across the region as they assess the impact of MAS’ proposed Guidelines on Artificial Intelligence Risk Management and prepare for implementation. In particular, we can assist FIs with:
This article is produced by our Singapore office, Bird & Bird ATMD LLP. It does not constitute legal advice and is intended to provide general information only. Information in this article is accurate as of 14 January 2026.