On November 13, 2025, the Monetary Authority of Singapore (MAS) issued a consultation paper that quietly redefined AI governance expectations for the entire Asia-Pacific financial sector. The proposed Guidelines on Artificial Intelligence Risk Management represent the region's first comprehensive, mandatory framework for AI oversight in financial services.
This isn't just another compliance document. It's a signal that the era of voluntary AI principles is ending—and regulated, enforceable AI governance has arrived.
📋 Key Numbers
Consultation deadline: January 31, 2026
Implementation period: 12 months after finalization
Scope: All MAS-regulated financial institutions
Coverage: Traditional AI, GenAI, and AI agents
Why This Matters: From FEAT Principles to Enforceable Guidelines
Since 2018, Singapore has guided AI governance through FEAT—Fairness, Ethics, Accountability, and Transparency principles developed collaboratively with industry. FEAT established the ethical foundation, encouraging financial institutions to self-assess their AI systems against these values.
The 2025 Guidelines don't replace FEAT—they operationalize it. While FEAT says "be fair," the new Guidelines specify how to demonstrate fairness: through documented risk assessments, testing protocols, human oversight mechanisms, and audit trails.
| Aspect | FEAT (2018) | AI Guidelines (2025) |
|---|---|---|
| Nature | Voluntary principles | Supervisory expectations |
| Scope | AI and data analytics | All AI including GenAI & agents |
| Governance | Self-assessment encouraged | Board accountability required |
| Documentation | Implicit | AI inventory + risk assessments mandatory |
| Lifecycle | General guidance | Specific controls at each stage |
| Third parties | Not specifically addressed | Due diligence + ongoing oversight required |
The message is clear: principles without processes aren't sufficient for regulating increasingly powerful AI systems. Financial institutions need operational machinery—governance structures, control frameworks, and accountability chains—that can withstand regulatory scrutiny.
The Four Pillars: What MAS Actually Expects
The Guidelines are organized around four interconnected pillars. Each builds on the others, creating a comprehensive AI risk management architecture.
1️⃣ Board & Senior Management Oversight
AI governance is now a board-level issue. Clear accountability structures, risk appetite statements, and active oversight—not delegation to IT.
2️⃣ AI Inventory & Risk Assessment
Know what AI you have. Maintain accurate inventories. Assess each use case for impact, complexity, and reliance factors.
3️⃣ Lifecycle Controls
Controls from development through deployment, monitoring, change management, and decommissioning. Not just "go live" testing.
4️⃣ Capabilities & Capacity
Do you have the people, skills, and infrastructure to use AI responsibly? Training and resourcing are compliance requirements.
Pillar 1: Board Accountability is Non-Negotiable
Perhaps the most significant shift is elevating AI risk to board and senior management level. MAS expects:
- Clear accountability structures for AI risk management at the senior level
- Integration of AI risks into enterprise risk management frameworks
- Active oversight by board or designated committee—not passive reporting
- Risk appetite articulation for AI use across the organization
- Cross-functional oversight where AI risk exposure is material (consider dedicated AI committees)
💡 What This Means in Practice
Board members will need to understand AI risks well enough to provide meaningful oversight. Expect demand for board-level AI literacy programs and executive briefings. "The CTO handles that" is no longer an acceptable answer to regulators.
Pillar 2: You Can't Manage What You Haven't Identified
MAS places heavy emphasis on visibility. Before you can govern AI risks, you need to know:
- What AI you have: An accurate, up-to-date inventory of all AI models, systems, and use cases
- Where it's used: Business function, customer touchpoints, decision-making processes
- How material it is: Risk assessments based on impact (what happens if it fails?), complexity (how hard to explain?), and reliance (how dependent are you on it?)
This sounds basic, but many institutions will struggle. AI often proliferates through shadow IT, departmental experiments, and vendor integrations that bypass central oversight. The Guidelines force institutions to surface all AI use—not just the flagship projects.
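To make the inventory idea concrete, here is a minimal sketch of what one inventory entry with risk tiering might look like. The fields and scoring thresholds are illustrative assumptions, not anything MAS prescribes; each institution would calibrate its own factors and justify them.

```python
from dataclasses import dataclass

@dataclass
class AIUseCase:
    """One entry in an AI inventory (illustrative fields, not MAS-prescribed)."""
    name: str
    business_function: str   # where it's used
    customer_facing: bool    # customer touchpoint flag
    impact: int              # 1-5: what happens if it fails?
    complexity: int          # 1-5: how hard is it to explain?
    reliance: int            # 1-5: how dependent is the business on it?

    def risk_tier(self) -> str:
        """Assumed tiering rule across the three assessment factors."""
        score = max(self.impact, self.complexity, self.reliance)
        if score >= 4 or (self.customer_facing and self.impact >= 3):
            return "high"
        return "medium" if score >= 3 else "low"

# The contrast MAS draws: a FAQ chatbot versus a credit decisioning model
chatbot = AIUseCase("FAQ chatbot", "customer service", True, 2, 2, 1)
credit = AIUseCase("Credit scoring model", "lending", True, 5, 4, 5)
```

Even a toy structure like this forces the questions the Guidelines care about: every system gets an owner, a location in the business, and a documented materiality rating.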
Pillar 3: Lifecycle Controls—Beyond "Test Before Launch"
MAS outlines expectations across the entire AI lifecycle:
| Stage | Key Controls Expected |
|---|---|
| Development | Data governance, bias testing, documentation of design decisions |
| Testing | Evaluation against fairness metrics, adversarial testing, edge case analysis |
| Deployment | Human oversight mechanisms, fallback procedures, access controls |
| Monitoring | Performance drift detection, outcome monitoring, incident tracking |
| Change Management | Re-validation requirements, version control, impact assessment |
| Decommissioning | Data retention, audit trail preservation, transition planning |
Importantly, MAS acknowledges proportionality: not every AI system needs the same level of controls. A simple chatbot for FAQs doesn't require the same rigor as a credit decisioning algorithm. The Guidelines expect risk-based calibration—but the institution must document and justify its approach.
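The "performance drift detection" row in the monitoring stage is one control that translates directly into code. A common industry technique (our choice of illustration, not one the Guidelines mandate) is the population stability index, which compares the production score distribution against the distribution seen at validation; the 0.25 threshold below is a widely used rule of thumb, not a regulatory figure.

```python
import math

def population_stability_index(expected, actual):
    """PSI: compares a production distribution against its baseline.
    Both inputs are lists of bucket proportions that each sum to 1."""
    psi = 0.0
    for e, a in zip(expected, actual):
        e, a = max(e, 1e-6), max(a, 1e-6)  # avoid log(0) on empty buckets
        psi += (a - e) * math.log(a / e)
    return psi

baseline = [0.25, 0.25, 0.25, 0.25]    # score distribution at validation
production = [0.15, 0.20, 0.30, 0.35]  # distribution observed this month

psi = population_stability_index(baseline, production)
needs_revalidation = psi > 0.25  # rule-of-thumb threshold for material drift
```

A check like this, run on a schedule and wired into incident tracking, is the kind of documented, repeatable monitoring control the Guidelines appear to expect.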
Pillar 4: People, Skills, and Infrastructure
Compliance isn't just about policies—it's about capability. MAS expects institutions to have:
- Sufficient skilled personnel to develop, deploy, and oversee AI systems
- Training programs to ensure staff understand AI risks relevant to their roles
- Technology infrastructure adequate for monitoring, logging, and controlling AI systems
- Audit capabilities to review AI systems and their outputs
⚠️ The Skills Gap Challenge
This pillar may be the hardest for many institutions. AI/ML talent is scarce. Compliance and risk professionals rarely have AI expertise. Building cross-functional teams that bridge these worlds will be a competitive challenge—and a compliance requirement.
Special Focus: Generative AI and AI Agents
Notably, MAS explicitly includes Generative AI and AI agents in the Guidelines' scope. This is forward-looking—many institutions are still experimenting with these technologies, but MAS is setting expectations now.
For Generative AI, key concerns include:
- Output reliability: How do you ensure GenAI doesn't hallucinate in customer-facing contexts?
- Data leakage: What proprietary or customer data enters GenAI systems?
- Third-party concentration: Heavy reliance on a few foundation model providers creates systemic risk
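The data leakage concern is partly addressable at the boundary: scrub identifiers before a prompt leaves the institution. The sketch below is a deliberately minimal pre-processing step with illustrative patterns (a Singapore NRIC, a 10-12 digit account number, an email address); a production control would need a far more exhaustive pattern set and would not rely on regex alone.

```python
import re

# Illustrative redaction patterns -- not an exhaustive PII taxonomy
PATTERNS = {
    "ACCOUNT": re.compile(r"\b\d{10,12}\b"),
    "NRIC": re.compile(r"\b[STFG]\d{7}[A-Z]\b"),  # Singapore NRIC format
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def redact(prompt: str) -> str:
    """Replace known identifier patterns before the prompt reaches a third-party model."""
    for label, pattern in PATTERNS.items():
        prompt = pattern.sub(f"[{label}]", prompt)
    return prompt
```

Pairing a gate like this with logging of what was redacted also produces the audit trail the Guidelines emphasize elsewhere.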
For AI agents (autonomous systems that take actions), the stakes are higher:
- Scope of autonomy: What decisions can the agent make without human intervention?
- Control mechanisms: How can humans override or halt agent actions?
- Accountability: When an agent causes harm, where does responsibility lie?
The Guidelines don't provide detailed answers yet—these are still consultation topics. But MAS's inclusion of these technologies signals that institutions deploying autonomous AI systems should prepare for heightened scrutiny.
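One way to reason about the "scope of autonomy" and "control mechanisms" questions is a default-deny action gate between the agent and any system it can act on. The action names and threshold below are hypothetical; the point is the pattern, namely an explicit allowlist, an escalation path to a human, and a block for anything unrecognized.

```python
# Hypothetical action taxonomy for a banking agent (illustrative only)
ALLOWED_AUTONOMOUS = {"retrieve_balance", "send_reminder"}
REQUIRES_APPROVAL = {"execute_payment", "close_account"}

def gate(action: str, amount: float = 0.0, limit: float = 1000.0) -> str:
    """Decide whether an agent action runs, escalates to a human, or is blocked."""
    if action in ALLOWED_AUTONOMOUS and amount <= limit:
        return "execute"
    if action in REQUIRES_APPROVAL or action in ALLOWED_AUTONOMOUS:
        return "escalate_to_human"
    return "block"  # default-deny: unlisted actions never run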
Third-Party AI: No Outsourcing of Responsibility
One of the Guidelines' clearest messages: using third-party AI doesn't reduce your regulatory obligations.
Whether you're using:
- Vendor-provided AI software
- Cloud-based AI/ML platforms
- Open-source models
- Foundation model APIs (GPT, Claude, etc.)
MAS expects the same level of governance as internally developed AI. This means:
- Due diligence before adoption—understanding how the AI works, its limitations, and risks
- Contractual protections covering explainability, data handling, and incident notification
- Contingency planning if the vendor relationship terminates or the AI system fails
- Ongoing oversight—not "set and forget" vendor management
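The four expectations above lend themselves to a simple tracking record per vendor AI arrangement. The fields below are our own illustrative mapping of the listed expectations, not a MAS-defined schema.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class VendorAIAssessment:
    """Illustrative oversight record for one third-party AI arrangement."""
    vendor: str
    system: str
    due_diligence_done: bool
    contract_covers: set       # clauses present in the vendor contract
    exit_plan: bool            # contingency if the relationship terminates
    next_review: date          # ongoing oversight, not set-and-forget

    def gaps(self) -> list:
        """List open items against the assumed expectations."""
        required = {"explainability", "data_handling", "incident_notification"}
        issues = sorted(required - self.contract_covers)
        if not self.due_diligence_done:
            issues.append("due diligence outstanding")
        if not self.exit_plan:
            issues.append("no contingency/exit plan")
        return issues
```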
✓ Group-Level Frameworks
Singapore branches of international groups can leverage group-level AI frameworks—but only if those frameworks meet MAS expectations. This isn't blanket approval; regulators will assess whether the group framework actually addresses the specific requirements of these Guidelines.
Implementation Timeline: What to Do Now
Combining the consultation deadline (31 January 2026) with the proposed 12-month implementation period, full compliance will likely be required by early to mid-2027. That sounds distant, but for institutions without existing AI governance infrastructure, roughly 18 months is aggressive.
Priority Actions for FIs
- Build or update your AI inventory. You can't comply with Guidelines if you don't know what AI you have. Start the discovery process now.
- Assess board/senior management readiness. Does your leadership understand AI risks? Do you have the governance structures to provide oversight?
- Gap analysis against Guidelines. Map your current controls to MAS expectations. Where are the gaps?
- Review third-party AI arrangements. Vendor contracts, due diligence processes, ongoing monitoring—do they meet the new expectations?
- Participate in the consultation. If you have practical concerns about implementation, MAS wants to hear them. This is your chance to shape the final framework.
APAC Ripple Effects: Beyond Singapore
While these Guidelines technically apply only to MAS-regulated institutions, their influence extends far beyond Singapore:
- Hong Kong SFC and HKMA typically align with Singapore on financial technology regulation—expect similar guidance within 12-18 months
- Regional headquarters in Singapore will likely apply these standards across APAC operations
- Regional banks operating in Singapore must comply, influencing their group-wide practices
- Vendors serving APAC financial institutions will need to meet these standards to remain competitive
For institutions operating across APAC, building to Singapore's standard now makes strategic sense—it will likely become the regional baseline.
The Bigger Picture: Singapore's AI Governance Stack
The AI Risk Management Guidelines don't exist in isolation. They're part of Singapore's layered approach to AI governance:
| Layer | Framework | Owner |
|---|---|---|
| National AI Strategy | National AI Strategy 2.0 | Smart Nation Office |
| General AI Governance | Model AI Governance Framework | IMDA |
| AI Testing & Certification | AI Verify | AI Verify Foundation |
| Financial Services Principles | FEAT Principles | MAS |
| Financial Services Guidelines | AI Risk Management Guidelines (NEW) | MAS |
This is deliberate design. Singapore positions itself as a jurisdiction where AI innovation can flourish within clear, predictable guardrails. For financial institutions, this reduces regulatory uncertainty—but demands investment in compliance infrastructure.
Our Take: A Proportionate, Pragmatic Framework
Having analyzed AI governance frameworks across APAC and globally, we find MAS's approach notable for several reasons:
Strengths:
- Proportionality: The Guidelines explicitly allow risk-based calibration, avoiding one-size-fits-all compliance burdens
- Technology-neutral: By defining AI broadly and including emerging technologies, the framework should remain relevant as AI evolves
- Practical: Requirements map to concrete actions (inventories, assessments, controls) rather than abstract principles
- Consultation-based: MAS is actively seeking industry input, suggesting flexibility in final implementation
Questions that remain:
- How will MAS assess compliance? What evidence satisfies the expectations?
- How will resource constraints of smaller FIs be accommodated?
- What's the enforcement approach for early-stage non-compliance?
The consultation period is the opportunity to surface these questions. We encourage affected institutions to engage constructively with MAS.
Need Help Navigating the New Guidelines?
APAC FINSTAB provides specialized analysis on AI governance and regulatory compliance for financial institutions. Our upcoming L1.5 compliance layer directly addresses MAS expectations.
Learn About Our Solutions