This is part of Solutions Review’s Premium Content Series, a collection of contributed columns written by industry experts in maturing software categories. In this submission, Dataiku Chief Customer Officer Kurt Muehmel explores three ways organizations can align on AI ethics and data governance.
In early 2021, the White House made good on its requirement to establish an office responsible for “coordinating artificial intelligence research and policymaking across government, industry and academia,” as part of the National AI Initiative Act of 2020. This calls to mind similar initiatives in both public and private sectors to help ensure that this technology—which has the potential to impact how we do business in every industry, and our daily life as individuals—is developed in a way that’s aligned with societal values.
However, Microsoft president Brad Smith put it well when his company and others joined the Vatican in announcing the Rome Call for AI Ethics: “I don’t think it will be easy to develop a singular approach to ethics for machines since we haven’t been able to do it for people.”
And indeed, ethics are not so cut and dried – it’s unreasonable to expect a single governing body to dictate how businesses can use AI ethically, particularly in the context of their specific industry, culture and application of the technology. A singular, prescriptive ethical standard isn’t achievable, reasonable or practical – but with strong leadership and best practices around data governance, organizations can help ensure AI is used responsibly. The guiding principles that leaders should instill throughout their organizations and consider at every decision-making stage of AI development are deceptively simple but absolutely essential.
Aligning AI Ethics and Data Governance
Without accountability and transparency, it’s far too easy to embed technology with the biases of the people who create it. Responsible AI is as much about mindset as it is about policies – without buy-in from every level of an organization, even the best policies will fall by the wayside. By addressing head-on any biases or negative outcomes that might arise from AI in each specific context and use case, leaders can help ensure that everyone who touches data within their organization is also keeping these possibilities top of mind. With transparency from leadership, employees are empowered to actively counter any misuse or abuse of data in the course of their everyday work.
Transparency and clear communication must be at the core of any successful data governance strategy – and data governance should be a foundational element of any enterprise AI strategy. Policies around how data is made available, used and stored need to be clearly and effectively communicated to employees in all departments. If employees aren’t aware of their company’s policies, they can’t be expected to effectively enact them. As every enterprise knows, this becomes much harder as a company scales, adding employees and sorting through exponentially growing volumes of data, so adopting consistent communication policies early is key. Without effective transparency around data governance, it becomes increasingly likely that data could be mismanaged or misused.
Leaders should encourage not only continuing education around how other organizations are thinking about responsible AI (there is no shortage of resources here – AlgorithmWatch alone maintains a repository of more than 150 ethical guidelines), but also critical thinking around responsible AI and best practices for governance within their own organizations. Technical education around data governance is also vital to ongoing success. Understanding the principles of data quality and preparation, knowing which tools and platforms can support a particular set of AI needs and, importantly, continuing to educate employees on the organization’s own policies are all ways for an organization to keep improving its practices.
Without continuing education around governance policies, employees have minimal support in case of a security breach or other crisis. Education specific to data governance also helps companies avoid a situation where employees misuse data – this is the trap that Cambridge Analytica fell into.
In order to develop an enterprise-wide practice around responsible AI and consistent data governance, adjustments will need to be made on an ongoing basis. Being realistic and proactive about these adjustments is imperative, and it falls to leadership to demand ongoing, rigorous reviews of the models in use, data governance policies and the implications of the insights garnered. This is a best practice not only for ensuring responsible AI, but for ensuring that implementations are still having an impact on the business. AI models should be subject to constant review, and both historical data sets and new AI implementations should be held to the same high standards of quality to ensure they are delivering their intended outcomes and are free from misuse or inconsistency.
Ultimately, a lack of governance leads to a lack of trust at every stage of the data pipeline. If people across an organization do not trust the data, they cannot confidently make the right decisions. We’ve reached a point in enterprise AI (and elsewhere) where leaders are no longer able to claim ignorance. Governance in AI is vital to any organization using AI in its normal business operations—which is inching ever closer to all organizations—but this cannot happen without strong belief and endorsement from the top.