Artificial intelligence (AI) is beginning to be widely adopted in the payments and electronic money sector. It is poised to drive significant innovation and efficiency; however, it is a disruptive technology. It introduces novel risks that need to be managed at both a strategic and operational level. For financial institutions, finding, understanding and mitigating AI-related risks is a regulatory requirement and important for maintaining customer trust.
Understanding these emerging AI risks requires looking first at the strategic forces shaping adoption – both the maturity of AI as a technology and the broader economic conditions influencing its supply.

Strategic AI risks: technology maturity and macroeconomics
Understanding AI risks requires looking first at the strategic forces shaping adoption. Firms must establish clear objectives for AI’s capabilities as a technology, resisting the fear of missing out and looking for opportunities offering genuine return on investment. At the same time, market analysts are warning of an potential ‘AI bubble’ driven by financial links between model providers and hardware suppliers, ambitious infrastructure promises, and growing market concentration. These factors call for a more cautious investment approach. Mitigating these external risks suggests:
- diversifying AI providers to avoid a single point of failure;
- conducting effective due diligence and contractual arrangements that avoid provider lock-in;
- considering building internal AI capabilities to reduce external dependencies.
While strategic and market risks influence a firm’s overall position on AI adoption, effective control is rooted in its internal risk management processes.
The AI risk management cycle
When deploying AI internally, a firm’s objective is to maximise benefits while maintaining robust risk controls. This requires a cyclical risk management process comprising the following steps :
- Find: Proactively identify potential AI risks.
- Understand: Assess the likelihood and consequences of these risks materialising.
- Act: Implement risk treatment or avoidance strategies.
- Monitor: Continuously assess the adequacy of AI risk controls and ensure appropriate board oversight.
- Repeat: Periodically review and refine the AI risk controls. This can be triggered by incidents, regulatory changes, or technological advancements.
AI risk management is often complicated by the need to rely on third-party providers. In a highly competitive market, firms struggle to obtain clear information about providers’ training data, testing methods, and safety processes. Moreover, providers of foundation models cannot exhaustively envisage every use case to which deployers may put their models. Responsibility for model behavior may seem to be shared, but in financial contexts it is the deployer who the regulator is likely to hold responsible when the behavior of an AI model fails to meet expectations.
Each phase of the risk management cycle depends on a firm’s ability to identify specific AI-related risks across its activities. The following section outlines some practical resources reporting examples of AI risk.
Finding AI risks
The UK’s FCA emphasizes that AI deployment requires firms to consider operational resilience (SYSC 15A), particularly where AI supports important business services. They have also highlighted concerns about concentration risk within the small pool of foundation model providers. This is a theme echoing similar concerns to the concentration risk around cloud services that formed a core part of the EU’s DORA Act, which came into force earlier in 2025.
However, AI also introduces fundamentally new types of risk, including issues in decision explainability and the potential for bias inferred from historical datasets.
To effectively identify AI risks, financial institutions can draw on resources such as:
- MIT AI Risk Repository: A comprehensive taxonomy of over 1600 AI-related risks.
- AI Incident Database: A searchable database of reported AI incidents since 2021.
- OECD Framework for Reporting AI Incidents: A project aimed at identifying the risks and harms posed by AI systems to reveal emerging risk patterns.
AI risks across financial use cases
Having identified where AI risks can arise in general, firms must consider how these risks materialise in operational contexts. AI risks are not theoretical; they manifest in applications in customer onboarding, credit scoring, or transaction monitoring. In customer service, virtual assistants and chatbots introduce risks related to data privacy, ethical decision-making, and the potential for bias and discrimination. Similarly, in fraud prevention and financial crime monitoring, AI systems must be carefully managed to avoid false positives, ensure fairness, and detect and resist manipulation or misuse by malign actors.
Regulatory expectations and AI risk management standards
As global regulatory frameworks evolve, supervisors expect financial institutions to take a proactive stance on AI governance and risk management. This involves embedding accountability, transparency, and human oversight throughout the AI lifecycle. Regulators now emphasise the need for firms to explain and evidence effective control over the risks associated with AI-driven decisions. In Europe, the EU AI Act exemplifies this shift, adopting a risk-based model that requires oversight and controls proportionate to the impact of AI systems on consumers.
To meet these expectations, firms can also refer to established international standards. Examples include the NIST AI Risk Management Framework (AI RMF 1.0) and ISO/IEC 23894:2023, which provide guidance on AI risk management. These recognised frameworks outline best practices for identifying, assessing, and mitigating risks at each stage of AI design, development, and deployment.
Applying the principles in recognised standards give firms a strong basis to counter regulatory concerns and should help promote customer confidence in AI technology.
Beyond internal governance and compliance, firms must also address the threat of AI being misused by external adversaries.
External sources of risk: AI cyber threats
Finally, a firm’s approach to AI risk management must address the threat of AI misuse by external adversaries. AI is helping attackers to refine and advance existing tactics, techniques and procedures while also introducing fundamentally new classes of threat. AI-powered attacks now include hyper-personalised phishing messages, deepfakes that bypass authentication systems, and direct attempts to manipulate AI models through data poisoning or prompt injection. These threats require new thinking to discover them and robust cybersecurity measures to counter them if firms are to keep AI augmented financial systems and customer data safe.
Conclusion
While AI presents significant opportunities for the financial services sector, a robust and agile approach to AI risk management is needed. Firms must demonstrate strategic analysis of the maturing technology, of AI model supply chains and of regulatory dependencies. But they must also find, understand, and treat risks that come from their internal deployment of AI. By expanding existing risk management practices to include new AI risks and by effectively applying recognised AI risk management standards, firms can support controlled AI innovation while protecting operations and customers.
Posted: 20 Oct 2025
Want to comment or have questions? You can contact the AI team at Flawless via:
Disclaimer: The information provided in this blog post is for general informational purposes only and does not constitute advice, legal or otherwise.