Artificial intelligence has become one of the most urgent governance challenges facing ethics and compliance teams. Employees are experimenting with generative AI tools. Vendors are embedding AI into products and services. Business leaders are mandating AI use and looking for productivity gains. Meanwhile, regulators, customers, and employees are all asking hard questions about what really constitutes responsible AI use.
During the initial AI hype cycle, many organizations scrambled to craft an AI policy. That was a good starting point, but it is not nearly enough. An organization still relying on an AI policy crafted 18 months ago is working from seriously outdated governance.
The challenge with AI policy is that artificial intelligence is moving too quickly, and touching too many parts of the business, for one static document to do all the work. A strong approach to AI governance requires a practical system of principles, rules, guidance, communication, and controls that helps employees understand how to use AI responsibly in their daily work.
A solid AI compliance program should help those employees recognize what AI risks look like in context, understand where the guardrails are, and know who is responsible for approving, monitoring, and updating the organization’s approach.
Start by separating principles, policy, and guidance
One of the most common mistakes organizations make is trying to put every aspect of AI governance into one document. A single document may feel efficient, but it becomes difficult to keep current as tools, use cases, laws, and business needs change.
A more effective structure separates AI governance into three related layers.
The first layer is a set of principles for the ethical use of AI. These principles explain the organization’s broader approach to artificial intelligence, especially where AI factors into products, services, decision-making, or stakeholder-facing activity. They can help articulate commitments around fairness, transparency, accountability, privacy, security, human oversight, and responsible innovation. These principles are not meant to answer every operational question. Their job is to create a decision-making framework.
The second layer is the AI development and use policy. This is where organizations can define core terms, connect AI use to existing policies, set expectations for employee behavior, identify prohibited uses, and clarify the baseline rules for acceptable AI use. For example, the policy might address when employees may use approved AI tools, what kinds of information may not be entered into public models, how employees should validate AI-generated outputs, and which existing rules around confidentiality, privacy, intellectual property, cybersecurity, and records management still apply.
The third layer is dynamic guidance. Most of the practical details (e.g., workflows, approval processes, review requirements, escalation paths, vendor review procedures, and instructions for specific tools or use cases) should live here. Guidance is easier to update than a formal policy, which makes it better suited to a technology environment that continues to shift. If the organization changes its approved tools, modifies its vendor review process, or adds a new AI use case, the guidance can evolve without forcing a full policy rewrite every time.
Together, these layers help organizations build an AI governance framework that explains what the organization values, establishes sensible rules, and helps employees put those rules into action.
Treat AI as both a risk and an opportunity
Artificial intelligence presents real risks that are already showing up inside organizations, including:
- Uploading confidential company information into a tool without realizing the implications
- Relying on inaccurate AI-generated output
- Using an enterprise license for personal purposes
- Applying AI to workstreams that require legal, privacy, security, or compliance review
- Using a vendor tool that includes AI functionality without understanding how that tool handles data
- Deploying agentic AI without proper testing, which can create problems faster than humans can detect them
At the same time, AI is also an opportunity. It can help employees work more efficiently, identify patterns, automate repetitive tasks, and improve decision-making when used responsibly. An AI policy that focuses only on restriction can miss the point. The goal is not to scare employees away from AI. The goal is to give them enough clarity to use it well.
That means ethics and compliance teams should approach AI the same way they have approached other major workplace shifts. Email, the internet, ephemeral messaging, and hybrid work all required new guardrails. AI is another evolution in how work gets done. The work now is to determine where the guardrails belong, who maintains them, who must follow them, and how the organization will know whether they are working.
Build AI compliance around people, process, and controls
AI can feel technically complex, but the governance challenge is familiar. Ethics and compliance teams already know how to help organizations manage new tools. The work starts with a people, process, and controls analysis that asks:
- Who is using AI?
- What are they using it for?
- Which teams present the greatest risk?
- What decisions or workflows are involved?
- What kinds of data might be entered into a tool?
- Which uses require review before launch?
- Which uses should be prohibited outright?
- Who owns the approval process?
- Who monitors ongoing use?
These questions help move AI compliance out of abstraction and into the operating reality of the business.
For example, the risks presented by an engineering team may differ from those presented by a sales team, a marketing team, a legal team, or a customer support function. Engineering may need guidance on development, testing, model behavior, data inputs, and technical safeguards. Sales may need guidance on using AI-generated content in customer communications. Human resources may need guardrails around employee data, hiring, performance management, and fairness. Procurement may need a clear process for reviewing third-party vendors whose products include AI functionality.
A single all-employee message cannot carry all of that nuance. AI governance becomes more effective when organizations identify the relevant employee populations, understand the risks they present, and design controls and communications accordingly.
Communicate AI policy in ways employees can act on
Even the strongest AI policy will fail if employees do not know what it says or how it applies to their work. That is where messaging becomes essential.
The communications strategy should be built around a simple formula: message, messenger, and modality.
The message should be specific. “Use AI responsibly” is too broad to guide behavior. Employees need practical direction: do not put confidential company information into unapproved AI tools; do not rely on AI-generated content without reviewing it; do not use enterprise tools for personal projects; do not deploy AI in a customer-facing process without proper review; do not assume that a vendor’s AI functionality has already been approved.
The messenger should be credible to the audience. Some communications may need to come from ethics and compliance. Others may be more effective coming from legal, privacy, cybersecurity, human resources, procurement, business leaders, or direct managers. The right messenger depends on the risk, the audience, and the behavior the organization is trying to influence.
The modality should match the moment. A formal policy announcement has value, but it will not be enough on its own. Organizations may need manager talking points, short training modules, intranet guidance, tool-specific prompts, onboarding materials, FAQs, approval workflow reminders, and targeted communications for higher-risk teams. The point is to place the right guidance in front of employees when they are most likely to need it.
This is especially important because many employees do not set out to misuse AI. They may simply fail to imagine how a tool could create risk. They may not realize that entering sensitive information into a model could create a confidentiality issue. They may not understand that an AI-generated draft still needs human review. Or they may not see that a familiar business process becomes higher risk when AI is added to it.
Good AI policy messaging closes that imagination gap. It helps employees see the risk before they create it.
Make ownership clear
AI governance often becomes difficult because responsibility is spread across the organization. Legal cares about regulatory risk. Privacy cares about data. Cybersecurity cares about access, tooling, and exposure. Procurement cares about vendor contracts. Human resources cares about workforce impact. Business teams care about speed and productivity. Ethics and compliance cares about whether employees understand and follow the rules.
All of those perspectives matter, but they need to be connected.
An effective AI governance framework should make clear who reviews new AI use cases, who approves tools, who evaluates third-party vendor risk, who updates guidance, who communicates changes, and who monitors compliance. Without defined ownership, AI policy can become a document everyone supports in theory and no one manages in practice.
This is also why dynamic guidance is so important. As new tools and use cases emerge, employees need a clear path for review. They should know where to go before entering sensitive data into a tool, using AI in a customer deliverable, buying software with AI functionality, or automating part of a regulated process.
Keep the focus on practical guardrails
AI policy, AI governance, and AI compliance are some of the fastest-evolving areas in ethics and compliance today. And even at high speed, organizations still often find themselves playing catch-up to the state of AI technology itself. The answer, then, isn’t to wait until the landscape becomes stable, because it will never settle enough for that to be a viable strategy. The better approach is to build a governance structure that can adapt.
That starts with principles that establish the organization’s posture. It continues with a policy that sets clear expectations. It depends on guidance that can be updated as technology and business use cases change. And it only works when employees receive communications that are targeted, practical, and grounded in the risks they actually face.
Artificial intelligence may be a new tool, but the ethics and compliance challenge is familiar. Employees need to understand what they can do, what they cannot do, when to ask for help, and why the guardrails matter. Organizations that build AI governance around those realities position themselves to manage risk, support responsible innovation, and help their people use AI with confidence.
SIDEBAR
BELA AI Policy Examples
A major component of the Business Ethics Leadership Alliance (BELA) member hub is its resource library, where member companies share policies and guidelines for their peers to consider and adapt. It’s a big part of the credo that “there is no competition in compliance,” and when it comes to AI governance, that’s especially true. Below are some of the AI policy examples currently available on the BELA Member Hub.
- Cargill – Driving Efficiency & Insight: The AI-Enabled Compliance Function
- Lennox – AI-Assisted Ethics Investigations Guidelines
- HCLTech – Responsible AI in Practice in India & Supporting Global Initiatives
- IBM – Principles for Trust and Transparency in AI
- HCA Healthcare – Responsible AI Policy
- TTEC – Addition of AI to the Hotline