I have spent considerable time working with clients to establish robust AI governance frameworks. One of the most critical questions I encounter regularly is how to structure AI leadership effectively. The answer, I've found, depends largely on the nature of the organisation itself.
My clients are primarily small to medium-sized, tech-native businesses, and this presents a unique opportunity to structure governance more broadly. Rather than centralising AI leadership in a single role or department, I've observed that distributing leadership across functions and roles proves remarkably effective.
The tech side of these businesses tends to be naturally solution-oriented, diving deep into the technical possibilities and implementation details. Meanwhile, the leadership and control functions maintain their focus on outcomes and overall business sustainability whilst leveraging AI. This division of labour creates a natural balance—technical innovation paired with strategic oversight. It helps when the overall structure is generally flat and the headcount stays between 70 and 100 individuals.
In my experience, this structure works precisely because it mirrors how these organisations already operate. They're built for agility and cross-functional collaboration from the ground up.
Given the specificities of the fintech space, I actively advise against creating dedicated executive roles such as a Chief AI Officer. Here's why: fintech companies are already required to maintain mature Risk and Compliance functions, and these control functions must cover AI-related topics within their scope and day-to-day responsibilities. Adding another layer creates unnecessary complexity and potential confusion about accountability.
Similarly, any successful fintech organisation must have Product and Development teams that are future-oriented and naturally include AI in their roadmaps. The expertise and accountability already exist within the organisation—it's simply a matter of ensuring these functions evolve to encompass AI considerations.
I've seen this pattern before. In the past, there were calls for Chief Information Officers (yes, I am that old!), Chief Digital Officers, or even Chief Cloud Officers (yes, there are "creative" businesses out there!), particularly amongst more traditional players in the financial services space, as those technologies were emerging and rapid transformation was occurring. Those roles rarely succeeded, and the pattern shouldn't be replicated for AI, especially not within digitally native organisations that already possess the structural agility to adapt.
When working with clients on AI initiatives, I focus heavily on facilitating cross-functional collaboration, which is already at the heart of any successful fintech organisation. The beauty of the fintech model is that services and products are already subject to strict regulatory frameworks and internal controls.
The introduction of AI simply follows the same established pattern—it's an additional technical layer added to the services and products offered to clients. This means the governance structures, accountability mechanisms, and collaborative processes are already in place. We're not reinventing the wheel; we're ensuring the existing wheel can handle the additional load.
In practice, this means ensuring that Risk and Compliance teams understand AI implications, that Product teams consider AI opportunities in their roadmaps, and that Development teams can implement AI solutions within existing governance frameworks. The key is coordination, not reorganisation.
What I've learnt from working with these organisations is that accountability for AI initiatives works best when it is embedded within existing structures rather than layered on top of them. Risk teams remain accountable for managing AI-related risks, Product teams own AI feature development, and senior leadership maintains strategic oversight, just as they would for any other technology or business initiative.
This approach ensures that AI becomes integrated into the organisation's DNA rather than treated as a separate, parallel concern. It also prevents the common pitfall of creating AI initiatives that exist in isolation from core business objectives and governance structures.
As I continue to advise clients on AI governance, I'm increasingly convinced that the most effective approaches are those that build upon existing strengths rather than creating entirely new structures. For tech-native organisations in particular, the distributed, collaborative model offers the perfect balance of innovation and accountability.
The future of AI governance is not about creating new hierarchies: it's about ensuring existing ones evolve intelligently to meet new challenges whilst maintaining the agility that makes these organisations successful in the first place.
DNYC works with clients to establish robust AI governance frameworks.