The Future of AI Governance: What 2025 Holds for Ethical Innovation

Cloudera’s Manasi Vartak offers insights on the future of AI governance and what 2025 holds for ethical innovation. This article originally appeared on Solutions Review’s Insight Jam, an enterprise IT community enabling the human conversation on AI.
The rapid evolution of artificial intelligence (AI) is poised to define 2025, sparking intensified global conversations about regulation to ensure its safe, responsible, and ethical use. An expanding array of federal, state, and international regulations will shape the year ahead, aiming to protect privacy, prevent misuse, and establish a framework for trustworthy AI development. Emerging regulatory trends and frameworks are creating a roadmap for CIOs to navigate this shifting landscape. By prioritizing responsible AI practices, organizations can seize opportunities for innovation while reinforcing public trust and upholding ethical standards.
Federal AI Regulations
In October 2023, the White House issued Executive Order 14110, which aims to advance the safe, secure, and trustworthy development and use of AI in the United States. It includes measures to protect privacy, promote innovation, and mitigate risks, ensuring that AI’s benefits are widely shared across society. Notably, the order requires a broad set of federal agencies to appoint a chief AI officer, underscoring the importance of regulated and responsible use of artificial intelligence.
With generative AI (GenAI) technologies advancing faster than ever, it is imperative that organizations do not let their AI experimentation and implementation run unchecked. The European Union (EU) has moved in the same direction with regulation of its own.
The EU Artificial Intelligence Act, introduced in February 2024, splits AI systems into three different categories:
- Prohibited: AI systems that pose an unacceptable risk to people’s safety, rights, and freedoms. These systems are completely banned from being developed or used within the EU. Prohibited practices include exploiting vulnerable individuals such as children and people with disabilities, manipulating normal or expected human behavior, and invasive uses such as live facial recognition.
- High risk: AI systems that pose significant risks to the health, safety, or fundamental rights of individuals, including systems used in sensitive sectors such as healthcare, transportation, law enforcement, and education.
- General purpose AI (GPAI): AI systems designed to perform a wide variety of tasks across different domains and industries rather than being specialized for a single use case. These systems can be adapted and applied to multiple functions, including tasks they were not originally trained for. Examples include large language models like GPT, which can be used for everything from customer service to content generation.
These definitions help AI providers and consumers alike understand which uses of this groundbreaking technology are legal, and they set requirements for how permitted AI systems may be operated. For example, providers of high-risk AI systems must establish a risk management system and implement data governance to ensure those systems are not used inappropriately or carelessly. The EU also announced the establishment of the AI Office, sitting within the European Commission, to monitor GPAI model providers and the effective implementation of the Act.
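To make these tiers operational, a CIO’s team might tag each system in its AI inventory with a risk category and the obligations that category triggers. The sketch below is a minimal, hypothetical illustration in Python: the tier names mirror the broad categories described above, but the specific obligations and the example system are simplified assumptions, not legal text.

```python
from dataclasses import dataclass
from enum import Enum


class RiskTier(Enum):
    """Broad tiers described in the EU AI Act discussion above."""
    PROHIBITED = "prohibited"
    HIGH_RISK = "high_risk"
    GENERAL_PURPOSE = "general_purpose"


# Simplified, illustrative obligations per tier -- assumptions, not legal text.
OBLIGATIONS = {
    RiskTier.PROHIBITED: ["do not develop or deploy within the EU"],
    RiskTier.HIGH_RISK: [
        "establish a risk management system",
        "implement data governance controls",
        "document and log system behavior",
    ],
    RiskTier.GENERAL_PURPOSE: [
        "maintain technical documentation",
        "disclose training-data summaries where required",
    ],
}


@dataclass
class AISystem:
    name: str
    use_case: str
    tier: RiskTier

    def obligations(self) -> list[str]:
        """Return the simplified obligations triggered by this system's tier."""
        return OBLIGATIONS[self.tier]


if __name__ == "__main__":
    # Hypothetical inventory entry for illustration only.
    triage_bot = AISystem("claims-triage", "insurance claim prioritization", RiskTier.HIGH_RISK)
    for item in triage_bot.obligations():
        print(f"{triage_bot.name}: {item}")
```

Keeping this kind of mapping in one place makes it easier to answer the basic governance question, for any given system, of which obligations apply before it ships.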
In December 2024, the House Task Force on Artificial Intelligence unveiled a comprehensive report addressing the multifaceted challenges and opportunities presented by artificial intelligence. The report provides a robust framework of guidelines and actionable recommendations for U.S. policymakers, aiming to foster innovation while mitigating risks across various AI domains. It also delves into critical research findings, offering insights into emerging trends, ethical considerations, and the potential societal impacts of AI technologies.
In light of these federal and international initiatives, and with AI clearly becoming a global priority, regulation that encourages, rather than rejects, safe experimentation is imperative. As these federal and international frameworks take shape, the conversation around AI regulation is also gaining momentum at the state level, where tailored approaches are emerging to address local needs and challenges.
State-Issued Regulations
While AI regulations have been introduced and implemented at the federal level across the globe, individual U.S. states are also announcing initiatives to ensure safe and responsible AI. One recent example is California’s SB 1047. Governor Gavin Newsom ultimately vetoed SB 1047 in September 2024 because the legislation did not account for the specific contexts in which AI is deployed and instead applied blanket standards.
Other states, such as Colorado, have introduced similar protections and AI acts as well, showcasing the demand for responsible AI use and regulation. The Colorado AI Act, effective February 2026, will impose obligations on developers and deployers of high-risk AI systems to prevent misuse and discrimination. The law mandates regular impact assessments, compliance programs, and incident reporting, with the Colorado Attorney General holding exclusive enforcement authority.
Several additional states have enacted or proposed similar AI bills, including (though not limited to):
- Washington
- Oregon
- Montana
- Texas
- Massachusetts
- New York
- Tennessee
- New Hampshire
- Delaware
Governor Newsom’s veto of SB 1047 demonstrates a nuanced understanding of the complexities involved in regulating GenAI. While there is a clear need for AI regulation, such regulations must be grounded in a thorough understanding of how GenAI models and applications actually function.
Moving forward, CIOs should take three actions to better prepare: develop a comprehensive understanding of safety in the context of GenAI and identify effective guardrails; increase funding for academic research and think tanks addressing these issues; and foster partnerships between government, academia, and industry. This approach ensures that future regulatory efforts are well-informed, practical, and capable of addressing the challenges posed by AI technology.
Non-Government Frameworks
Not all of the AI guidance released over the last few years has come from government bodies. Non-government organizations have published their own AI playbooks and guidelines to help organizations manage the risks associated with adopting artificial intelligence systems.
In January 2023, NIST released the AI Risk Management Framework (AI RMF), a voluntary framework that provides a structured approach to identifying, assessing, and mitigating AI-related risks throughout an AI system’s lifecycle. Its activities are organized around four core functions: Govern, Map, Measure, and Manage.
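One way a team might begin operationalizing the AI RMF is with a lightweight risk register keyed to those four functions. The sketch below is a minimal, hypothetical illustration: the function names come from the framework itself, but the field names, severity scale, and example entry are assumptions for illustration and are not part of the NIST framework.

```python
from dataclasses import dataclass, field
from datetime import date

# The AI RMF's four core functions; everything else in this sketch is illustrative.
RMF_FUNCTIONS = ("Govern", "Map", "Measure", "Manage")


@dataclass
class RiskEntry:
    system: str
    description: str
    rmf_function: str   # which RMF function the activity falls under
    severity: int       # hypothetical 1-5 scale, not defined by the RMF
    mitigation: str
    review_date: date
    status: str = "open"

    def __post_init__(self) -> None:
        if self.rmf_function not in RMF_FUNCTIONS:
            raise ValueError(f"rmf_function must be one of {RMF_FUNCTIONS}")


@dataclass
class RiskRegister:
    entries: list[RiskEntry] = field(default_factory=list)

    def add(self, entry: RiskEntry) -> None:
        self.entries.append(entry)

    def open_high_severity(self, threshold: int = 4) -> list[RiskEntry]:
        """Entries still open at or above the (assumed) severity threshold."""
        return [e for e in self.entries if e.status == "open" and e.severity >= threshold]


if __name__ == "__main__":
    # Hypothetical entry showing how a risk might be logged against an RMF function.
    register = RiskRegister()
    register.add(RiskEntry(
        system="support-chatbot",
        description="Model may expose personal data in responses",
        rmf_function="Measure",
        severity=4,
        mitigation="Add PII redaction and periodic output audits",
        review_date=date(2025, 6, 30),
    ))
    for entry in register.open_high_severity():
        print(entry.system, entry.rmf_function, entry.description)
```

Even a simple register like this gives leadership a running view of which risks are open, who owns them, and when they are next reviewed, which is the kind of evidence voluntary frameworks and auditors increasingly expect.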
Similarly, the International Organization for Standardization (ISO) and the International Electrotechnical Commission (IEC) released a new standard, called ISO/IEC 42001:2023, or simply ISO 42001 for short. ISO 42001 acts as a comprehensive framework for organizations to manage the development, deployment, and use of AI systems responsibly and ethically.
As AI continues to reshape industries and society, staying informed about evolving standards and frameworks is crucial. Looking ahead, we can expect more regulatory bodies and industry groups to release AI governance frameworks.
Organizations that proactively adopt these standards will be better positioned to navigate the complex landscape of AI ethics and compliance. Stay tuned for updates on emerging AI regulations and best practices to ensure your organization remains at the forefront of responsible AI implementation.
Responsible AI: A Path Forward
As governments and regulators move forward with proposed regulations, they are converging on a risk-based approach to AI governance: categorizing AI systems based on their potential impact, with particular attention to high-risk applications that could affect fundamental rights, safety, or critical sectors.
Regulators aim to prohibit AI uses deemed unacceptable, such as deploying manipulative techniques that distort decision-making, exploiting the vulnerabilities of individuals for harmful purposes, and using facial recognition without consent. Practices like social scoring and profiling individuals based on personal traits for punitive measures are also prohibited to protect fundamental rights and prevent harm.
For high-risk AI systems, strict requirements around transparency, accountability, and human oversight are being implemented. Additionally, there’s a growing emphasis on regulating general-purpose AI models due to their wide-ranging applications. Regulators are also establishing dedicated AI offices and frameworks to monitor implementation and ensure compliance while encouraging innovation within ethical boundaries.
For companies leveraging GenAI, these regulations mean that CIOs must prioritize transparency, accountability, and oversight in any project utilizing AI technologies to avoid compliance issues. Beyond benefiting internal operations, this discipline builds trust with customers. To stay agile, companies should monitor regulatory changes and adjust AI strategies accordingly, ensuring that AI investments are both compliant and aligned with ethical standards to drive sustainable value.
In summary, as 2025 unfolds, regulatory bodies embracing a risk-based approach to AI governance will prioritize protecting fundamental rights and safety while fostering an environment that accelerates ethical innovation in AI technologies.