AI in Finance: A Regulatory Perspective with Grace Chong
The rise of AI-driven algorithms has introduced unparalleled capabilities in decision-making. Understanding their impact is crucial for firms seeking to stay competitive and adopt a forward-thinking mindset. The integration of these new technologies brings significant opportunities to capitalize on emerging market movements, especially with APAC serving as a global hub for the fintech industry.
With over a decade of experience spanning high-profile cases, legislative reform, and advisory roles for banks, asset managers, and fintech firms across Asia, Grace Chong, Head of Financial Regulatory at Drew & Napier in Singapore, dives into the complexities of the ever-evolving global financial markets in this interview. Drew & Napier is highly ranked in Chambers FinTech and Legal 500 and is widely recognized for its expertise in emerging technologies, including digital assets, and the strategic integration of AI.
Grace discusses the challenges firms may face in this new era: from the recent re-election of Donald Trump, which could carry a series of implications for the regulatory environment, to the potential bias of AI-powered tools when delivering personalized advisory services and investment strategies. She stresses that AI is revolutionizing the landscape of financial services, going beyond improving operational efficiency to reshaping products, processes, and even customer experiences. With over 43% of financial services firms using AI in personalized investment strategies, the future of such innovative solutions is only beginning to unfold.
Lastly, she focuses on strategies for navigating the complex geopolitical landscape and mitigating the risks associated with cross-border transactions. Given recent regulatory developments in the digital asset industry, including the potential implications of the SEC’s recent actions against major crypto exchanges, firms must carefully weigh their impact as they design business models that are not only compliant but also poised for sustained growth in this ever-changing market.
Please share a bit about your background and your current role at Drew & Napier. How has your career path evolved, and what experiences have shaped your perspective on the intersection of law and technology?
My career journey over the last 12 years is relatively unusual. I entered private practice in the later part of my career, having spent my formative years at MAS and HSBC HK, and have had the opportunity to be involved in high-profile market misconduct and sanctions cases as well as legislative reform projects.
Since then, I have built my practice around advising banks, asset managers and fintech firms on complex regulatory matters across Singapore and wider Asia.
Envisioning the Future of Digital Assets at Singapore FinTech Festival 2024
How can the banking sector collaborate with regulators to establish robust governance frameworks that foster innovation in AI while mitigating risks, ensuring responsible development and implementation of AI technologies?
The banking sector's collaboration with regulators is pivotal in establishing governance frameworks that balance AI innovation with risk mitigation.
This partnership can be enhanced through several strategic approaches:
- Engaging in Regulatory Sandboxes: Regulatory sandboxes offer a controlled environment for banks to test AI applications under regulatory supervision. In Singapore, the Monetary Authority of Singapore (MAS) and the Infocomm Media Development Authority (IMDA) have established such sandboxes, enabling financial institutions to develop AI solutions that comply with regulatory standards while actively testing their viability. Similarly, the UK's Financial Conduct Authority (FCA) has implemented initiatives like the AI Lab to support innovators in developing advanced AI models. These sandboxes facilitate a collaborative space where banks and regulators can identify potential risks and refine AI models before full-scale deployment, fostering a safer AI ecosystem in financial services.
- Participating in Industry Consortia: Industry consortia provide a platform for banks and regulators to jointly develop standardized best practices for AI governance. These collaborations enable the sharing of resources, expertise, and regulatory insights, essential for setting benchmarks in model fairness, bias mitigation, and responsible AI practices. Initiatives like the AI Verify Foundation in Singapore bring together financial and tech-sector leaders to co-create tools and frameworks for responsible AI use. Such consortia ensure that guidelines evolve in line with the sector’s latest challenges, reinforcing AI governance standards applicable across jurisdictions and financial applications.
- Developing Standardized Tools for Algorithmic Transparency and Risk Assessment: A key area of collaboration is the development of standardized tools that enhance algorithmic transparency and risk assessment. For instance, the Veritas toolkit in Singapore provides banks with methodologies for assessing AI fairness and accountability. These tools enable consistent evaluations of AI models, addressing potential risks associated with transparency and data quality. Such frameworks facilitate compliance with regulatory expectations, offering structured approaches to ongoing AI evaluations and fostering public trust in AI-driven decision-making processes. By engaging in these collaborative initiatives, the banking sector can support AI innovation while ensuring responsible development, maintaining high standards of transparency, and safeguarding the integrity of financial services.
What are the specific strategies for implementing AI-driven trading algorithms to capitalize on market inefficiencies and volatility in the APAC region?
Implementing AI-driven trading algorithms in the APAC region requires strategies that blend advanced technological tools with region-specific insights, enabling firms to capitalize on market inefficiencies and volatility effectively. One primary strategy involves the use of Natural Language Processing (NLP) techniques, which allow algorithms to process vast amounts of unstructured data, including news feeds, social media, and economic reports in real-time. This capability is especially valuable in APAC markets where multiple languages and high-volume news cycles create unique data challenges. NLP can assess sentiment, detect patterns, and forecast potential market shifts with impressive speed, which is critical for executing trades that leverage market volatility quickly and effectively.
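To make the idea of NLP-based sentiment signals concrete, here is a minimal sketch of lexicon-based scoring over headlines. The word lists and sample headlines are illustrative assumptions only; production systems would use trained multilingual NLP models rather than a hand-built lexicon.

```python
# Minimal sketch: lexicon-based sentiment scoring of market headlines.
# The POSITIVE/NEGATIVE lexicons below are illustrative assumptions,
# not a real trading vocabulary.

POSITIVE = {"beat", "surge", "upgrade", "growth", "record"}
NEGATIVE = {"miss", "plunge", "downgrade", "probe", "default"}

def headline_sentiment(headline: str) -> int:
    """Crude score: +1 per positive word, -1 per negative word."""
    words = [w.strip(",.!?") for w in headline.lower().split()]
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

headlines = [
    "Chipmaker posts record growth, analysts upgrade outlook",
    "Regulator opens probe into exchange after bond default",
]
print([headline_sentiment(h) for h in headlines])  # → [3, -2]
```

A real pipeline would replace the lexicon with a sentiment model and feed the scores into a signal-generation layer, but the shape of the computation — text in, directional score out — is the same.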
A second approach is the application of machine learning models designed to detect short-term and long-term trading opportunities by analyzing historical market data. These algorithms identify price anomalies and mean-reverting patterns unique to the APAC region, where market dynamics can differ significantly from those in Western markets due to regulatory variances and retail investor behavior. Tools such as EdgeLab leverage machine learning to assess risk dynamically, supporting asset managers in rebalancing portfolios based on market signals. This real-time risk assessment ensures that portfolios remain optimized in response to sudden shifts in volatility, particularly during periods of economic uncertainty or geopolitical events common in the region.
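The mean-reversion idea above can be sketched with a rolling z-score: flag a potential entry when the latest price sits far below its recent average, and a potential exit when it sits far above. The window length, thresholds, and price series here are illustrative assumptions, not calibrated values.

```python
# Minimal sketch: flagging mean-reverting entry/exit points with a rolling
# z-score. Window and threshold are illustrative assumptions only.
from statistics import mean, stdev

def zscore_signals(prices, window=5, threshold=1.5):
    """Return (index, signal) pairs: 'buy' when the price is far below its
    rolling mean, 'sell' when far above -- a classic mean-reversion trigger."""
    signals = []
    for i in range(window, len(prices)):
        hist = prices[i - window:i]
        mu, sigma = mean(hist), stdev(hist)
        if sigma == 0:
            continue  # flat history gives no usable z-score
        z = (prices[i] - mu) / sigma
        if z <= -threshold:
            signals.append((i, "buy"))
        elif z >= threshold:
            signals.append((i, "sell"))
    return signals

prices = [100, 101, 100, 102, 101, 95, 101, 100, 108, 101]
print(zscore_signals(prices))  # → [(5, 'buy'), (8, 'sell')]
```

Production systems layer risk limits, transaction costs, and regime detection on top of such signals; the sketch only shows the core anomaly-detection step.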
Algorithmic frameworks like CLSA’s Opportunistic ADAPTIVE algorithm provide another example of a sophisticated AI-based strategy that is adaptable to APAC’s fast-paced markets. The ADAPTIVE algorithm is specifically engineered to capitalize on short-term volatility, which is characteristic of APAC equity markets. It adjusts trading strategies based on current market conditions, allowing for both rapid entry and exit points in response to fluctuating price levels. This agility is essential in markets like Hong Kong and Tokyo, where quick shifts are often driven by retail trading activity and government policy announcements.
Additionally, platforms such as AlgoGene provide customizable algorithmic trading solutions that allow traders and asset managers in APAC to create and test bespoke trading models without extensive coding knowledge. By enabling the quick development and deployment of proprietary models, tools like AlgoGene democratize access to AI-driven trading, which can be tailored to specific market characteristics across APAC economies. This customization allows institutions to exploit localized market inefficiencies, optimizing algorithmic models based on specific market cycles and investor behavior patterns in APAC.
Finally, it is notable that the HK FSTB’s recent focus on building a framework for AI-driven investment strategies exemplifies a regulatory commitment to fostering innovation in the sector. By creating guidelines and supporting research into responsible AI applications, the FSTB’s framework aims to provide a stable yet flexible environment where AI can be deployed safely in trading. This regulatory support enhances confidence in AI technology adoption, ensuring that algorithms not only meet financial goals but also align with broader regulatory and ethical standards. Combined, these strategies create a robust foundation for implementing AI-driven trading algorithms to harness APAC’s unique market volatility and capitalize on emerging inefficiencies.
How might geopolitical tensions between major powers, such as the US and China, impact the development and adoption of digital assets in the APAC region? What strategies can be adopted to navigate this complex geopolitical landscape?
Geopolitical tensions between major powers—chiefly the United States and China—exert considerable influence on the development and integration of digital assets in the Asia-Pacific region. These dynamics permeate regulatory frameworks, restrict technological innovation, and shape investor confidence, all of which bear on the path digital assets will take in the years to come. At the heart of this influence lies a complex web of national priorities, often rooted in security and economic sovereignty, that pushes competing approaches to digital asset regulation.
The intensifying U.S.-China tech rivalry has led to heightened scrutiny and restrictive measures, particularly impacting technology firms with ties across these nations. The United States, citing national security risks, has tightened its grip on Chinese technology imports and investments, disrupting supply chains and narrowing access to essential technologies for firms engaged in digital asset initiatives. Meanwhile, China’s firm stance against decentralized cryptocurrencies—enacting outright bans on trading and mining—has driven these activities offshore, forcing digital asset enterprises to seek jurisdictions with more favorable regulatory climates. In such an environment, APAC countries are uniquely positioned to attract displaced talent and enterprises, should they balance regulatory clarity with innovation-friendly policies.
The recent re-election of Donald Trump as President of the United States adds an interesting layer to this evolving landscape. Trump’s administration has signaled intentions to end what it calls an “anti-crypto” crusade, including plans to replace the SEC Chair with a figure more receptive to digital assets. This shift promises a more favorable U.S. regulatory stance on cryptocurrencies, which could set off ripple effects across APAC. As the U.S. takes steps toward regulatory clarity, other nations may be prompted to adjust their own frameworks, fostering an environment more amenable to digital asset adoption on a global scale. These developments underscore a central truth: the regulatory frameworks governing digital assets are not only the product of economic and security concerns but also a reflection of a broader power struggle that will ultimately influence how these technologies shape our financial future.
Proactive engagement with local regulators is essential for firms to navigate the complex and evolving regulatory landscape in the digital assets and financial technology sectors. By actively participating in regulatory sandboxes and industry consultations, firms can better align their operations with shifting legal frameworks while advocating for balanced regulations that support innovation alongside compliance. We work closely with prominent industry associations, including ACCESS (Association of Cryptocurrency Enterprises and Startups Singapore), SFA (Singapore FinTech Association), AIMA (Alternative Investment Management Association), and ASIFMA (Asia Securities Industry & Financial Markets Association), on their regulatory initiatives with local and regional regulators. Through these partnerships, we aim to help our clients contribute meaningfully to the regulatory dialogue, ensuring their business models are not only compliant but also positioned for sustainable growth in a regulated and thriving APAC market.
With AI becoming increasingly integrated into banking systems, how can financial institutions mitigate the concerns in relation to the use of AI in decision-making processes?
Current financial regulations addressing AI in credit scoring and risk assessment vary significantly across jurisdictions, reflecting unique priorities and regulatory approaches in balancing transparency, consumer protection, and innovation. The European Union (EU), Singapore, the United Kingdom (UK), and the United States (U.S.) have each adopted distinct frameworks, with the EU leading in prescriptive requirements, while Singapore and the UK take principle-based approaches, and the U.S. relies largely on sectoral laws with emerging state-level initiatives. These differences illustrate both regulatory advancements and existing gaps, especially in transparency, bias mitigation, and human oversight, which are critical for ethical and responsible AI in financial services.
Singapore’s regulatory framework, led by the Monetary Authority of Singapore (MAS), is based on the FEAT principles (Fairness, Ethics, Accountability, and Transparency). The Veritas Initiative offers tools for assessing AI in credit scoring, enabling financial institutions to measure model fairness and transparency. Unlike the EU, MAS does not require strict conformity assessments, but the Veritas Initiative provides institutions with self-assessment methodologies to support ethical and transparent AI use. In risk assessment, MAS’s FEAT principles also prioritize data quality and accountability, encouraging firms to adopt explainable models and conduct human oversight. Singapore’s emphasis on collaboration, as seen in the UK-Singapore Financial Dialogue, aims to harmonize standards with other key jurisdictions.
To address concerns surrounding AI in decision-making, financial institutions can adopt several practical strategies focused on transparency, bias mitigation, and ongoing oversight. Key among these is prioritizing explainability, ensuring that AI-driven decisions—particularly those impacting consumers, such as credit scoring—are understandable to both regulators and customers. By implementing robust documentation protocols and providing justifications for AI outputs, institutions can make these decisions transparent and address stakeholder concerns regarding opaque "black-box" models.
Furthermore, bias mitigation remains crucial in preventing discriminatory or unintended outcomes. Tools such as the MAS Veritas Initiative help financial institutions to assess fairness and detect potential biases in AI models. Regular bias testing, combined with an ethical review framework, can support institutions in identifying disparities and adjusting algorithms to promote equitable treatment. Instituting these practices proactively aligns with the FEAT principles and reinforces customer trust in AI-driven financial services.
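One simple form of the regular bias testing described above is comparing approval rates across demographic groups in a model's outputs. The sketch below computes per-group approval rates and a parity ratio; the group labels, sample decisions, and the 80% review threshold (the informal "four-fifths rule") are illustrative assumptions, not metrics prescribed by MAS or the Veritas toolkit.

```python
# Minimal sketch: approval-rate parity check on a credit model's decisions.
# Groups, data, and the 0.8 threshold are illustrative assumptions only.

def approval_rates(decisions):
    """decisions: list of (group, approved) pairs -> per-group approval rate."""
    totals, approved = {}, {}
    for group, ok in decisions:
        totals[group] = totals.get(group, 0) + 1
        approved[group] = approved.get(group, 0) + int(ok)
    return {g: approved[g] / totals[g] for g in totals}

def parity_ratio(rates):
    """Ratio of lowest to highest group approval rate (1.0 = perfect parity)."""
    return min(rates.values()) / max(rates.values())

decisions = [("A", True), ("A", True), ("A", False),
             ("B", True), ("B", False), ("B", False)]
rates = approval_rates(decisions)
ratio = parity_ratio(rates)
if ratio < 0.8:  # illustrative review threshold
    print(f"parity ratio {ratio:.2f} below threshold -- flag model for review")
```

Real fairness assessments go further, testing multiple metrics (equalized odds, calibration) and controlling for legitimate risk factors, but a parity check like this is a common first screen in a recurring audit.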
Finally, human oversight and governance frameworks are essential to ensure accountability throughout the AI lifecycle. Instituting dedicated oversight committees and setting up cross-functional AI ethics panels can provide critical checks on AI models. In jurisdictions like the EU and UK, where oversight and human involvement are regulatory priorities, these practices serve as a compliance safeguard and reinforce the ethical deployment of AI.

In Singapore, the Individual Accountability and Conduct (IAC) Guidelines similarly emphasize senior management’s role in AI oversight. Under these guidelines, senior managers must actively monitor the implementation of AI systems, ensuring they align with the organization’s values and regulatory obligations. By placing ultimate responsibility with senior management, regulators aim to prevent the diffusion of accountability, ensuring that there is a clear point of responsibility if AI systems cause harm or produce biased outcomes. Through these structured oversight measures, financial institutions can responsibly integrate AI into decision-making processes while adhering to both local and international standards. This approach not only mitigates regulatory risks but also fosters consumer confidence in the ethical use of AI.
How will AI reshape the landscape of financial services beyond just operational efficiency, impacting products, processes, and overall customer experience?
Artificial Intelligence stands ready to fundamentally alter the landscape of financial services, transcending operational improvements to reshape products, processes, and the entire customer experience. This transformation isn’t merely a leap forward in convenience; it’s an era defined by precision, insight, and foresight—qualities that cut to the core of any responsible institution's mission. With the power to personalize offerings, enhance judgment, and streamline service delivery, AI moves financial services closer to meeting the needs and aspirations of every individual, an ambition long held but seldom achieved.
Financial institutions now wield AI to deliver highly personalized advisory services and investment strategies that consider not just numbers, but the broader economic conditions and the individual’s own financial aspirations. These tools analyze vast datasets, tracking everything from market indicators to personal spending patterns, to provide guidance that’s both data-rich and human-centered. When a platform like Morgan Stanley's AI @ Morgan Stanley Debrief uses AI to streamline client interactions, it illustrates how far we’ve come: real-time, adaptive support is no longer a futuristic aspiration but a daily reality. AI-driven systems go beyond the transactional; they provide 24/7 guidance, assist in real-time budgeting, and suggest paths to achieve long-term savings—all tailored to each client.
In reshaping the industry, AI compels financial institutions to see themselves not simply as providers of a service but as proactive, integral partners in the financial journey of each customer. It’s a vision of financial services grounded in accountability and prudence, tempered by a deep commitment to enhancing customer lives. The role of AI in finance now extends beyond mere efficiency; it is a core strategic asset that challenges these institutions to achieve an integrity and foresight worthy of the trust placed in them.
Considering recent advancements and debates in the AI field, how do you envision the evolving relationship between AI and DEI in the future?
The evolving relationship between AI and Diversity, Equity, and Inclusion (DEI) is likely to become increasingly intertwined as advancements in AI amplify the need for inclusive technology and equitable outcomes. AI's rapid progression—from generative AI tools to predictive analytics and automated decision-making systems—has brought forth complex DEI considerations that extend beyond eliminating bias to proactively fostering fairness, transparency, and inclusivity. For example, as AI tools become integrated into hiring, financial assessments, healthcare, and other critical areas, their outputs have a significant, often irreversible impact on individual opportunities and social equity. Observations from advocates like Joy Buolamwini highlight how AI systems, particularly in applications like facial recognition, have exhibited significant biases that affect marginalized groups disproportionately. Buolamwini’s research, including the well-known "Gender Shades" project, demonstrated how AI facial recognition technologies tend to misidentify people with darker skin tones and women at notably higher rates than lighter-skinned males.
One key area where AI is reshaping DEI is in the development of datasets and model training practices that recognize diversity as a foundational component of accuracy and fairness. Recent advancements show that models trained on inclusive datasets are better equipped to generalize across different demographics, reducing the risks of biased or exclusionary results. However, curating and maintaining diverse datasets poses its own challenges, as does the development of technical methodologies to continuously monitor and mitigate bias throughout an AI system’s lifecycle. Companies such as IBM, for instance, are building frameworks that allow for real-time auditing of AI models for equity across various demographic groups. These types of innovations in bias monitoring are crucial for establishing more trustworthy and inclusive AI systems. The future of AI and DEI, therefore, points towards a deeper integration of ethical standards and diverse viewpoints in the design and governance of AI, ensuring that these technologies do not reinforce existing inequities but instead support fairer and more inclusive societal outcomes.
Additionally, the debate around accountability is pushing for a regulatory framework that explicitly aligns AI ethics with DEI goals. Regulators in regions like the EU, through proposals like the AI Act, are considering rules that mandate transparency, human oversight, and fairness checks for high-risk AI applications. As organizations worldwide adopt AI-driven tools, the incorporation of DEI standards is likely to evolve from being a compliance measure to a competitive advantage, with consumers increasingly favoring brands that reflect their values. Future DEI-aligned AI strategies will likely involve collaborations between technologists, regulators, and community stakeholders, ensuring that AI systems not only perform accurately but also contribute positively to social and economic equity.
Can you share some of the things that interest you in the legal landscape of AI and tokenisation/digital assets adoption?
The rapid advancements in artificial intelligence (AI) and tokenisation underscore the dynamic interplay between technological innovation and regulatory adaptation. With AI adoption expanding across industries—72% of organizations globally integrating AI into at least one business function—regulators face the challenge of ensuring its transformative potential is harnessed responsibly. Similarly, the surge in generative AI use, particularly in markets like China, highlights the importance of addressing ethical concerns such as fairness, data privacy, and accountability to safeguard public trust.
In the realm of tokenisation, the projected growth of the market to $5.6 billion by 2026 demonstrates its potential to revolutionize traditional finance. Major institutions are embracing blockchain technology to tokenize assets, enhancing liquidity and operational efficiency. However, the Financial Stability Board and other global regulators have rightly emphasized the risks to financial stability, making it imperative to craft frameworks that foster innovation while protecting the integrity of financial systems.
Meeting the demands of these transformative technologies requires legal frameworks that not only address emerging risks but also create opportunities for sustainable growth. This is a moment to rise to the challenges of our time, crafting solutions that balance ambition with responsibility. As Winston Churchill once said, "To each, there comes a time when they are figuratively tapped on the shoulder and offered a chance to do a very special thing, unique to their talents. What a tragedy if that moment finds them unprepared for that which could have been their finest hour."