Responsible Generative AI: A Pathway to Success
Genpact’s Sreekanth Menon offers insights on responsible generative AI and the key pathway to success. This article originally appeared on Solutions Review’s Insight Jam, an enterprise IT community enabling the human conversation on AI.
How safe are generative AI (gen AI) applications? Given the emergent behavior of large language models (LLMs), this question is not an easy one to answer. To further complicate things, the EU AI Act recently came into force, allowing the EU to impose penalties as high as 7% of global revenue on companies that violate it. Companies utilizing general-purpose AI, and LLMs especially, will face stricter regulations. Because of acts like these around the world, enterprises using AI must demonstrate responsible data practices, ensure transparent LLM use, and potentially explain how these models reach their outputs.
However, despite these budding regulations, recent data shows that two of the biggest challenges hindering gen AI adoption and innovation are the lack of a structured plan and the lack of a data quality strategy.
Neglecting responsible AI can have swift and costly consequences, from reputational damage to legal liabilities. This is why enterprises are taking their time planning, strategizing, and allocating budget accordingly.
It is especially important to invest in understanding evolving benchmarks, which includes considering potential external enhancements and ecosystem-wide progress when assessing risks. While static dataset-based evaluations have limitations, they remain valuable preparedness tools and should be supplemented with other evaluation methods for a comprehensive view of model capabilities and potential risks. Responsible AI principles help guide this work.
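To make this concrete, here is a minimal sketch of a static, dataset-based evaluation loop in Python. The `query_model` stub and the `benchmark.json` format are illustrative assumptions, and exact-match scoring is deliberately crude; a real evaluation suite would layer semantic and safety-focused checks on top, as noted above.

```python
import json

def query_model(prompt: str) -> str:
    # Placeholder: swap in a real call to your LLM endpoint (assumption).
    return "stub answer"

def run_benchmark(path: str) -> float:
    """Score a model against a static benchmark file."""
    with open(path) as f:
        cases = json.load(f)  # assumed format: [{"prompt": ..., "expected": ...}]
    passed = sum(
        query_model(c["prompt"]).strip().lower() == c["expected"].strip().lower()
        for c in cases
    )
    return passed / len(cases)

print(f"Static benchmark accuracy: {run_benchmark('benchmark.json'):.1%}")
```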
Even frontier model makers like OpenAI are taking responsible AI measures, such as System Cards, to provide a comprehensive view of a model's development process, safety considerations, and evaluation methods, aligning with emerging regulatory frameworks like the EU AI Act. This signals that the ecosystem wants to push AI capabilities while prioritizing safety and ethics at each step. Here are four principles you can follow to strengthen your approach to responsible gen AI:
Improve Data Transparency & Accessibility
Gen AI’s heavy reliance on heterogeneous data sources introduces risks of bias and other ethical issues, especially if the data is not carefully curated or vetted for fairness and accuracy. Implementing auditing mechanisms, such as human oversight of data curation, can protect against these potential biases and other issues.
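As one concrete example, a simple automated audit can flag under-represented groups in a training set before a human reviewer makes the final call. This is a minimal sketch; the `demographic` field name and the 20 percent threshold are illustrative assumptions, not fixed rules.

```python
from collections import Counter

def audit_representation(records: list[dict], field: str = "demographic",
                         min_share: float = 0.2) -> list[str]:
    """Return groups whose share of the data falls below min_share."""
    counts = Counter(r[field] for r in records)
    total = sum(counts.values())
    # Flagged groups are routed to a human reviewer, who decides
    # whether the imbalance is acceptable for the use case.
    return [group for group, n in counts.items() if n / total < min_share]

sample = [{"demographic": "A"}] * 80 + [{"demographic": "B"}] * 15
print(audit_representation(sample))  # ['B'] -> escalate for review
```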
Set Up a Center of Excellence
Pretrained LLMs offer easy access but also present challenges for companies, developers, and regulators. AI governance must expand beyond IT specialists to include key stakeholders with diverse expertise—technical, industry, and cultural perspectives. This approach fosters greater accountability for compliance and best practices. Collaboration across departments is also essential to establish a framework that prioritizes human values and ethics.
Upskill and Train Your Workforce
LLMs have a tendency to hallucinate, and recent studies have found that optimizing LLMs to hallucinate less is challenging. Training employees to understand how AI models work, and their limitations, is essential for making informed decisions about the trustworthiness of AI-generated content. It’s also important to implement guidelines for selecting and fine-tuning models to ensure consistent, reliable outputs.
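One lightweight guardrail worth teaching teams is a self-consistency check: ask the model the same question several times and treat low agreement as a hallucination signal. A minimal sketch, assuming a caller-supplied `query_model` function and an illustrative 0.6 threshold:

```python
from collections import Counter

def consistency_score(query_model, prompt: str, n: int = 5) -> float:
    """Fraction of n sampled answers that agree with the most common one."""
    answers = [query_model(prompt).strip().lower() for _ in range(n)]
    most_common_count = Counter(answers).most_common(1)[0][1]
    return most_common_count / n  # 1.0 means fully consistent

def is_trustworthy(query_model, prompt: str, threshold: float = 0.6) -> bool:
    # Below the threshold, label the output unreliable or route it
    # to a human reviewer rather than publishing it automatically.
    return consistency_score(query_model, prompt) >= threshold
```

Self-consistency does not catch confidently repeated errors, so it complements, rather than replaces, source-grounded verification.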
Double Down on Governance
By implementing a unified data and AI governance process, companies can establish clear ownership, access controls, and auditing mechanisms for all data and AI assets within the organization. Choosing the right governance model, whether centralized or distributed, depends on the specific needs of the organization, but having a system in place is paramount.
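To illustrate, here is a minimal sketch of what unified ownership, access control, and auditing can look like in code. The in-memory registry, field names, and role model are illustrative assumptions, not a reference implementation.

```python
import logging
from dataclasses import dataclass, field

logging.basicConfig(level=logging.INFO)

@dataclass
class Asset:
    """A governed data or AI asset with a named owner and an allow-list."""
    name: str
    owner: str
    allowed_roles: set[str] = field(default_factory=set)

def access(asset: Asset, user: str, role: str) -> bool:
    granted = role in asset.allowed_roles
    # The audit trail is the point: every access decision is recorded.
    logging.info("user=%s role=%s asset=%s owner=%s granted=%s",
                 user, role, asset.name, asset.owner, granted)
    return granted

model = Asset("claims-llm-v2", owner="ml-platform", allowed_roles={"analyst"})
access(model, "jdoe", "analyst")  # granted, and logged
access(model, "jdoe", "intern")   # denied, and logged
```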
Security is another crucial aspect of data governance. Extensive testing throughout development, including red-teaming exercises to identify risks and develop mitigations, has become a common best practice. Responsible AI practices help governance teams set governance frameworks and core risk frameworks that can guide the workforce. These teams pressure-test risk assessments and coordinate red-team exercises with engineers, which play a vital role in risk identification. A well-defined responsible AI operating model should map out interactions among the various personas throughout the gen AI lifecycle, tailored to each organization's capabilities.
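Part of a red-team exercise can be automated as a regression suite that runs on every model update. A minimal sketch, assuming a caller-supplied `query_model` function and a crude keyword-based refusal heuristic; in practice, flagged replies would go to human reviewers rather than being judged by string matching alone.

```python
REFUSAL_MARKERS = ("i can't", "i cannot", "i won't")

def red_team(query_model, adversarial_prompts: list[str]) -> list[str]:
    """Return adversarial prompts the model failed to refuse."""
    failures = []
    for prompt in adversarial_prompts:
        reply = query_model(prompt).lower()
        if not any(marker in reply for marker in REFUSAL_MARKERS):
            failures.append(prompt)  # escalate to the governance team
    return failures
```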
How to Weave Responsible AI into Your Corporate Fabric
A key aspect of managing risk in gen AI adoption is the adaptation of existing governance structures. Rather than creating entirely new committees or approval bodies, organizations are better off expanding the mandates or coverage of their current risk frameworks. This approach minimizes disruption to decision-making processes while maintaining clarity in accountability. Central to effective risk management is the establishment of robust governance mechanisms, including cross-functional responsible AI working groups composed of business and technology leaders as well as experts in the data, privacy, legal, and compliance domains.
Here are some best practices to follow:
- Increase awareness: Develop a strategy for communicating and enforcing responsible AI practices throughout your organization. Consistency over time helps integrate these practices into your company’s culture.
- Have a plan: As gen AI becomes more accessible, adequate preparation is crucial for a successful launch. Begin by identifying the most promising use cases, and then collaborate with your center of excellence to address potential issues from the outset.
- Be transparent: Rather than sweeping issues under the rug, be open and honest about gen AI's capabilities and limitations. Use the lessons and experience you gain to educate all stakeholders, both internal and external. Addressing issues such as underspecified problem statements and overly specific unit tests leads to more accurate assessments of AI performance, and responsible AI frameworks that screen out impossible or ambiguous tasks give a clearer picture of true AI capabilities, allowing for more informed decisions about model deployment and risk mitigation.
- Build trust: Enhance stakeholder confidence by making gen AI tools transparent. Provide resources that explain decision-making processes, use confidence scores to gauge output reliability, and integrate a human-in-the-loop approach to improve model accuracy (see the sketch after this list).
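To illustrate the confidence-score and human-in-the-loop points above, here is a minimal sketch of gating a model's output. The `generate` callable, its `(answer, confidence)` return shape, and the 0.8 threshold are illustrative assumptions.

```python
def deliver(generate, prompt: str, threshold: float = 0.8) -> dict:
    """Release high-confidence answers; queue the rest for human review."""
    answer, confidence = generate(prompt)  # hypothetical model call
    if confidence >= threshold:
        return {"answer": answer, "confidence": confidence}
    # Below the threshold, a person reviews and corrects the output,
    # and the correction can feed back into model improvement.
    return {"status": "pending_human_review", "confidence": confidence}
```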
The Path Forward
The market is poised to transition from traditional chatbots to LLM-powered agents, which means more emergent behavior and more unpredictability. This is why establishing responsible AI policies for widespread adoption will remain crucial. As enterprises race to integrate gen AI into their operations, they face the uphill task of navigating a complex landscape of governance and compliance to ensure responsible deployment. On top of everything, enterprise customers are increasingly concerned about the ethical implications of AI.
AI-first companies have an obligation to conduct impact assessments that evaluate potential consequences. This underlines the importance of a robust responsible AI framework that can help avoid roadblocks and reputational damage while maintaining a competitive edge. There are no shortcuts to scaling an ethical enterprise, and responsible AI preparedness will be a key differentiator in an enterprise's success.