AI Compliance Without Borders: How Companies Can Navigate Global AI Regulations
TELUS Digital’s Jeff Brown offers commentary on AI compliance without borders and how companies can navigate global AI regulations. This article originally appeared in Insight Jam, an enterprise IT community that enables human conversation on AI.
Whether it’s in human resources, supply chain, workforce management, or customer support, organizations are increasingly embedding artificial intelligence into their operations. With new AI use cases emerging daily across industries and geographies, and adoption growing worldwide, comes uncertainty about how to develop, deploy, and monitor AI safely, ethically, and responsibly. Understanding global regulations and monitoring emerging legal mandates is essential to your AI strategy and adoption, and to protecting your brand.
Enterprises that fail to take notice and act face numerous downsides to non-compliance, including reputational damage and hefty fines. Currently, the European Union (EU) AI Act enforces serious penalties for violations, while the U.S. AI Action Plan sets guardrails and leans toward industry-led self-governance. Elsewhere, countries are rolling out voluntary codes, drafting new laws, and collaborating through global initiatives like the G7 Hiroshima AI Process.
Why is AI Regulation a Global Business Risk?
There’s no question that the era of “light-touch” AI oversight has come to an end. That approach made sense before 2015, when AI systems were still narrow in scope and not yet viewed as socially or economically disruptive. At the time, regulatory attention was mostly indirect, falling under broader privacy or cybersecurity laws.
Today, AI is becoming more embedded across industries and geographies. A heightened understanding of the risks and real-world harms of not ‘doing AI right,’ along with the need for transparency and accountability in how these technologies are created and used, increasingly guides how we approach AI development.
AI regulation has become both a significant business imperative and a challenge, especially for global enterprises, given the fragmented patchwork of legal definitions, expectations, and uncertainty they must navigate. In a 2025 global industry report that surveyed more than 800 enterprise leaders, 44 percent cited ‘complying with government regulations’ as one of their top challenges when it comes to maintaining customer trust. Additionally, nearly half of the respondents said data breaches and cyber attacks are the top threat to maintaining a safe and secure digital environment for customers. AI adoption exacerbates these risks.
Why is Global AI Regulatory Compliance so Challenging?
Today’s legal and governance teams face seemingly opposing constraints: they must deliver agile, adaptable, cross-market strategies in a world where there’s no global consensus on what constitutes an AI system, what counts as “high-risk,” or even which safeguards are mandatory. In other words, compliance is a moving target. In addition, regulations vary greatly by geography.
Let’s take a look at how some key regions around the world are currently defining and enforcing AI oversight.
Europe: The First Binding AI Law
Adopted in July of 2024, the EU AI Act is the first binding, comprehensive law focused entirely on artificial intelligence. It introduced a tiered system that classifies AI tools by risk level, from minimal to unacceptable. Depending on the violation, non-compliance can lead to penalties of up to €35 million (about US$40 million) or 7 percent of a company’s global annual revenue, whichever is higher.
Article 5 of the EU AI Act bans certain unacceptable-risk applications outright, including social scoring (ranking people based on behavior or traits in ways that could lead to unfair treatment) and manipulative AI (systems that exploit user vulnerabilities or circumvent informed consent). It also imposes strict requirements on AI used in sensitive fields like healthcare, law enforcement, and recruitment.
To help organizations gauge whether their tools fall under the EU AI Act’s purview, the European Commission published Guidelines on the Definition of an AI System. It’s important to note that the Act’s reach isn’t limited to machine learning; it also encompasses logic-based software tools that can make or influence decisions. For global organizations, complying with the most restrictive applicable regime is often the safest standard.
United States: Sector-Based Oversight, Shift Toward Deregulation
America’s AI Action Plan is a new framework introduced by President Trump in July of this year, replacing President Biden’s rescinded Executive Order 14110 on AI. The new 2025 AI Action Plan rejects the idea of centralized AI regulation in favor of sector-specific oversight and encourages global AI competition. Agencies are directed to review all existing AI regulations to “eliminate policies that limit growth, restrict speech, or mandate bias audits.” Moreover, the plan directs that references to topics like misinformation, diversity, and climate change be removed from federal AI guidance.
The U.S. government still promotes guidance like the NIST AI Risk Management Framework, a voluntary guide developed by the U.S. National Institute of Standards and Technology in January 2023 to help companies identify and manage risks throughout the AI lifecycle. It offers practical steps for improving transparency, safety, and accountability, and a companion AI Risk Management Framework Playbook.
Canada: Voluntary Standards, Pending Legislation
Canada’s Voluntary Code of Conduct for Advanced Generative AI Systems was issued in September 2023 and focuses on six core principles: accountability, safety, fairness, transparency, human oversight, and validity and robustness. The Artificial Intelligence and Data Act (AIDA) (Bill C‑27) was introduced in June of 2022 to regulate high-impact AI systems, but it has stalled in Parliament and is not yet law.
If passed, AIDA is expected to align with EU standards across categories like healthcare, employment, biometric ID, and public services. It’s designed to enforce transparency, risk assessment, and incident reporting.
AI Rules are Taking Shape in Brazil & Singapore
Meanwhile, Brazil’s government is reviewing the Brazil AI Act (Bill No. 2338/2023). The bill proposes a three-tiered, risk-based framework that classifies AI systems as excessive-risk (banned), high-risk (regulated), or low-risk (minimal obligations), and includes requirements for fairness, transparency, and human rights protections.
Singapore has not yet introduced binding AI legislation, but its Model Artificial Intelligence Governance Framework offers practical guidance on topics like accountability, data quality, security, and human oversight. In 2024, Singapore released a generative AI addendum to address emerging risks and encourage responsible innovation across sectors.
The G7 Hiroshima AI Process: Global Voluntary Oversight
At the G7 Summit in Hiroshima in May of 2023, leaders launched the Hiroshima AI Process, the world’s first international framework for advanced AI governance. This voluntary process covers issues like risk mitigation, public disclosure of capabilities, and the development of tools to identify AI-generated content.
What is Legal Counsel’s Role in Navigating AI Regulatory Compliance?
Multinationals often find themselves complying with highly prescriptive regimes like the EU AI Act while navigating a U.S. regulatory environment that’s prone to political shifts.
Given that AI spans departments, legal counsel plays a critical role in making sure governance keeps pace with innovation. That means translating emerging rules into business-ready policies, but also engaging directly with the technology. In my own organization, getting hands-on experience with our proprietary GenAI platform has made our legal team more credible and effective in advising product teams and anticipating risk, and more confident in regulatory conversations. When legal counsel understands how these systems work in practice, not just in theory, they’re better positioned to guide responsible development.
Further, the most effective legal teams will:
Translate regulations into policies that align with diverse global rules and frameworks.
Advise cross-functional teams that include product, engineering, and data to ensure transparency and accountability are built into systems from the start.
Stay ahead of change by establishing mechanisms for monitoring and responding to regional legal changes in real time, such as a centralized knowledge base that helps teams meet requirements and deadlines.
Shape the regulatory environment by participating in industry associations and public consultations, and by providing feedback that informs how AI laws are drafted.
Support ethics oversight through key roles on AI ethics boards or working groups to assess legal and reputational risks during the development process.
In short, the legal department’s role is to move beyond reactive reviews and into a proactive partnership that anticipates risks and builds AI confidence for all stakeholders, including regulators, employees, and customers.
8 Practical Guidelines to Build Resilient AI Governance
Legal teams are critical to compliance, but building strong AI governance is a team sport, requiring consistent attention from departments across the organization. Enterprises can strengthen their approach by:
Developing internal AI governance playbooks that align with recognized frameworks like the NIST AI Risk Management Framework and ISO/IEC 42001. These should include definitions and guidance on risk tiers and accountability roles to ensure all teams follow best practices.
Using a tiering system to sort AI tools by risk level based on a system’s role, decision-making power, the data it uses, and how likely it is to trigger legal or ethical concerns; the higher the risk, the stronger the safeguards and reviews should be (see the sketch after this list).
Designing AI systems to be transparent from the start by keeping clear records of use, adding human-in-the-loop checks, and creating audit trails. All high-impact decisions should be traceable and reviewable.
Participating in voluntary governance frameworks, including the G7 Hiroshima AI Process and national voluntary codes of conduct. This signals responsible intent, even in markets where binding regulation is still evolving.
Vetting external partners by making sure AI tools and services from outside vendors abide by your company’s core values and guiding principles. Check for security, legal compliance, and adherence to your ethical AI use standards.
Tracking evolving global norms and emerging international initiatives like the UN’s Global Digital Compact or OECD’s AI Principles. Even if they aren’t yet binding, they signal the path that future laws are taking.
Defining clear escalation paths and internal processes to flag concerns about AI system use. Aside from legal issues, reputational or ethical risks may surface over time.
Training internal stakeholders to become literate in AI compliance, building it into onboarding and training for teams, especially those in data, product, and procurement. Responsible practices develop from the ground up.
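To make the tiering guideline above concrete, here is a minimal illustrative sketch, in Python, of how a governance team might encode risk tiers and map a system’s attributes to a tier. The tier names, profile attributes, and classification rules are hypothetical assumptions for illustration, loosely inspired by the EU AI Act’s risk levels, not a representation of any specific law.

```python
from dataclasses import dataclass
from enum import Enum

class RiskTier(Enum):
    MINIMAL = 1        # baseline monitoring only
    LIMITED = 2        # transparency obligations
    HIGH = 3           # mandatory audits and human oversight
    UNACCEPTABLE = 4   # banned outright (cf. EU AI Act Article 5)

@dataclass
class AISystemProfile:
    # Hypothetical attributes a review board might score.
    makes_autonomous_decisions: bool   # acts without human sign-off
    uses_sensitive_data: bool          # biometric, health, or financial data
    affects_legal_rights: bool         # hiring, credit, law enforcement
    manipulates_behavior: bool         # exploits user vulnerabilities

def classify(profile: AISystemProfile) -> RiskTier:
    """Map a system profile to a risk tier; higher tiers should
    trigger stronger safeguards and reviews."""
    if profile.manipulates_behavior:
        return RiskTier.UNACCEPTABLE
    if profile.affects_legal_rights:
        return RiskTier.HIGH
    if profile.uses_sensitive_data or profile.makes_autonomous_decisions:
        return RiskTier.LIMITED
    return RiskTier.MINIMAL

# Example: a resume-screening tool touches employment decisions,
# so it lands in the high-risk tier under these illustrative rules.
screener = AISystemProfile(
    makes_autonomous_decisions=False,
    uses_sensitive_data=True,
    affects_legal_rights=True,
    manipulates_behavior=False,
)
print(classify(screener))  # RiskTier.HIGH
```

The value of even a simple rule set like this is consistency: every team scores its tools against the same questions, and the answers determine which reviews apply before launch.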
Future-Proofing Your Company for the Regulatory Road Ahead
Investing early in internal AI governance, even in markets without legal mandates, puts companies in a better position to adapt quickly as new rules emerge. Set yourself up for compliance by building audit readiness through clear documentation, third-party evaluations, and risk assessments.
Consider a central repository of region-specific obligations; it can help legal and product teams stay in sync as laws change. As much as possible, legal teams should also scenario-plan for incoming shifts, like the passage of AIDA in Canada or a federal AI law in the U.S. Even in places without regulation, demonstrating responsible intent through voluntary frameworks and transparent systems can reduce risk and bolster trust.
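As an illustration only, such a repository could start as structured records keyed by jurisdiction that both legal and product teams query as a single source of truth. Everything in this Python sketch is a hypothetical placeholder: the fields, status labels, sample entries, and dates are assumptions for demonstration, not legal guidance.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class Obligation:
    jurisdiction: str                  # e.g. "EU", "Canada", "Brazil"
    instrument: str                    # law, bill, or voluntary code
    status: str                        # "in force", "pending", "voluntary"
    requirements: list[str] = field(default_factory=list)
    next_deadline: date | None = None  # illustrative compliance date

# Hypothetical seed entries; a real repository would be maintained by
# legal counsel and updated as laws and deadlines change.
registry = [
    Obligation("EU", "EU AI Act", "in force",
               ["risk classification", "transparency", "human oversight"],
               date(2026, 8, 2)),
    Obligation("Canada", "AIDA (Bill C-27)", "pending",
               ["risk assessment", "incident reporting"]),
]

def obligations_for(jurisdiction: str) -> list[Obligation]:
    """Return every tracked obligation for one region, so product and
    legal teams work from the same record."""
    return [o for o in registry if o.jurisdiction == jurisdiction]

for o in obligations_for("EU"):
    print(o.instrument, "-", o.status, "-", o.requirements)
```

Even a lightweight structure like this makes scenario planning easier: adding a pending bill as a "pending" entry lets teams see in advance which requirements would switch on if it becomes law.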
Legal teams aren’t just advisors who check compliance boxes after an AI product launch. In the age of AI, they play a central role as architects of trust, global readiness, and responsible innovation. When business leaders bring legal counsel to the table early, they’re better equipped to navigate complexity, align with shifting norms, and maintain strong relationships that bolster loyalty, revenue, and customer trust.