Artificial Intelligence (AI) has moved from the periphery to the heart of organizational strategy. For ethics and compliance leaders in large corporations, this technological shift is akin to navigating a fast-moving river with ever-changing currents. While AI offers transformative potential, it also introduces complex governance and ethical challenges that traditional compliance mechanisms struggle to address.
It is within this context that a select group of 25–30 senior leaders from diverse industries, including Mahindra and Mahindra, Tata Steel, Pernod Ricard, Accenture, and HP, convened for a Razor Speed Insights session at a recent BELA South Asia roundtable held in Delhi, India. The session, hosted by EY, embodied a dynamic, collaborative approach to unlocking practical strategies for AI governance, ethics, and compliance.
The Razor Speed Insights Format: Engineering Collaboration for Depth and Agility
Unlike conventional panels or workshops, the Razor Speed Insights session adopted a speed-networking format, designed to maximize idea generation under time constraints. Picture a relay race, where each team hands off the baton; here, the “baton” was a topic, and the teams were roundtables of experts, each approaching the issue from a distinct vantage point. The session was anchored in the G-PaCTS framework, with tables focused on Global, Policy, Culture, Tech, and Stakeholder lenses. This structure ensured that every insight was scrutinized and enriched through multiple dimensions, much like a diamond being cut and polished from every angle.
For each discussion round, topics were randomly assigned, groups deliberated through their designated lens, and real-time insights were captured on a shared Miro board. Participants rotated tables, ensuring a cross-pollination of perspectives. This not only fostered layered, multidimensional insights but also lowered the barriers for compliance and legal professionals who often hesitate to voice opinions in open forums. The result was a psychologically safe space that encouraged candid, practical contributions.
Structuring AI Governance: From Fragmentation to Frameworks
Today’s AI governance landscape is fragmented. Many organizations have established governance committees, yet these often operate in silos or align to global structures without sufficient localization. Training programs exist, but their effectiveness varies; like a patchwork quilt, they are strong in some places and threadbare in others. A recurring challenge is information overload: compliance leaders are inundated with frameworks, regulations, and an evolving risk landscape, making it difficult to know where to begin.
Yet, within this complexity lies opportunity. Participants reframed governance not as a defensive shield, but as a lever for responsible innovation. For example, AI can be harnessed to redesign compliance processes for greater efficiency, empower business users to create tailored solutions, and provide deeper, real-time insights into risk. Consider an analogy from urban planning: while too many disconnected traffic signals lead to gridlock, a smart, integrated traffic management system enables both safety and flow. Similarly, effective AI governance enables productivity and growth while maintaining necessary guardrails.
Key Barriers and the Path Forward
Systemic barriers persist—divergent regulatory approaches across countries, lack of urgency in some organizations, the rapid evolution of AI risks, data security concerns, and challenges in assigning accountability. The classic dilemma of global consistency versus local adaptability surfaced repeatedly. To address these barriers, leaders recommended:
- Adopting human-in-the-loop models to balance automation with accountability
- Anchoring governance in established frameworks such as ISO/IEC 42001, the NIST AI Risk Management Framework, and the EU AI Act
- Building structured approval hierarchies and oversight mechanisms
- Investing in continuous upskilling and awareness programs
- Establishing frequent governance touchpoints and feedback loops
The consensus was that governance must evolve from static policies to living systems—responsive, adaptive, and deeply embedded in organizational culture.
Operational Fragility and AI Risk: Navigating the Unknown
For many large corporations, AI adoption feels like building a ship while already at sea. Maturity levels vary widely, and teams often lack a comprehensive understanding of the risks. The pressure for faster outputs and efficiency gains can create a tension between speed and control, reminiscent of a high-speed train that must stay on track despite sharp curves ahead.
Instead of reinventing the wheel, participants advocated leveraging established large language model (LLM) platforms, implementing ethical validation layers, and using enterprise-grade AI tools with built-in safeguards. They encouraged customization, but always with rigorous testing. Crucially, they also placed strong emphasis on incident monitoring and escalation frameworks—shifting from reactive risk management to proactive resilience.
Roots of Fragility and Building Resilience
Operational fragility stems from several factors: the rapid proliferation of new tools, organizational skill gaps, resistance to change, immature governance frameworks, and data privacy risks. AI risks are often probabilistic and assumption-based, making them hard to quantify. Think of this like weather forecasting—while we can predict patterns, uncertainty always remains.
To strengthen resilience, leaders suggested:
- Restricting AI usage to approved tools and environments
- Creating AI Champions within organizations to drive adoption and awareness
- Investing in secure, enterprise-grade systems
- Building internal data repositories for controlled usage
- Embedding governance frameworks deeply into day-to-day operations
The shift required is both technical and cultural—moving from reactive controls to embedded resilience, much like reinforcing the foundations of a building to withstand both everyday stresses and unexpected shocks.
Transforming Ethics and Compliance (E&C) Through AI
AI adoption in E&C remains at an early stage. Most organizations are still experimenting rather than fully integrating AI into their workflows. Legacy systems, fragmented data, and limited awareness continue to hinder progress. As a point of comparison, imagine trying to modernize a city’s infrastructure while its roads, power, and water systems are all managed by separate agencies.
Despite these challenges, the potential for AI to transform E&C is immense. High-impact use cases include automating routine compliance processes, standardizing advisory outputs, personalizing training, enabling secure whistleblowing, and improving reporting accuracy and analytics. In essence, AI can shift E&C from a reactive function to a strategic enabler, much like transforming a security guard from a passive observer into an active, data-driven partner in risk management.
Obstacles and the Roadmap to Integration
Adoption is constrained by lean teams, poor data quality, fear of retaliation in reporting systems, jurisdictional and regulatory differences, and concerns around bias and intellectual property ownership. Trust—both in the reliability of the technology and in stakeholder confidence—emerged as a central issue.
To unlock AI’s potential in E&C, the following strategies were highlighted:
- Integrating AI across functions, rather than in silos
- Using specialised tools tailored to compliance needs
- Enhancing analytics for better decision-making
- Maintaining human oversight in critical processes
- Building dynamic, adaptive compliance programs
The journey forward requires not just new tools, but a fundamental shift in mindset and organizational capability.
The Bigger Picture: What Collective Intelligence Reveals
The Razor Speed Insights session underscored the power of structured collaboration in tackling complex AI governance challenges. By bringing together diverse perspectives within a disciplined format, the session surfaced insights that are both practical and forward-looking. Three overarching themes emerged:
- Governance Must Move Faster: AI risks are evolving at breakneck speed. Static policies are no longer sufficient—organizations need adaptive, iterative governance systems that can keep pace with technological change.
- Fragility is Real—and Manageable: Operational fragility is not a temporary phase but an inherent aspect of AI adoption. The focus must shift to building resilience by design, embedding safeguards and adaptability at every level.
- AI Can Elevate Compliance—If Trust is Built: The transformative potential of AI in E&C will only be realized if trust, transparency, and reliability are established as foundational principles.
Conclusion: Leading with Integrity and Foresight
For ethics and compliance leaders in large corporations, the evolution of AI governance is not just a regulatory necessity—it is a strategic imperative. The insights from the Razor Speed Insights session highlight that the journey requires more than compliance checklists; it demands collective intelligence, structured collaboration, and a willingness to rethink established paradigms.
As you steer your organization through the shifting waters of AI adoption, consider these guiding analogies: govern like a smart traffic system, build resilience as you would reinforce a building’s foundation, and transform compliance from a passive observer into an active partner. Above all, cultivate trust—both in your teams and in the technologies you deploy. Only then can AI become a true enabler of responsible innovation and sustainable growth.