Industry Experts Quotes on the United States’ Executive Order on AI
The editors at Solutions Review have compiled a collection of quotes and insights from industry experts on the Executive Order on AI recently signed by President Joe Biden.
On October 30th, 2023, President Joe Biden and the White House issued an Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence. As you might expect, there was a lot of discussion around the Executive Order, what it might mean for AI regulation, and how it will shape the governance, development, and use of AI in enterprises across industries.
With that in mind, the Solutions Review editorial team compiled some commentary from industry experts worldwide, who shared their thoughts on the Executive Order and how it will change AI’s role in business.
Industry Expert Commentary on the United States’ Executive Order on AI
Kamal Ahluwalia, the President of Ikigai Labs
“As a result of recent technological developments, the United States has emerged as the leading source for fast-paced innovation in AI. The White House’s Executive Order is a significant step forward in the nation’s efforts to ensure that AI is developed and used responsibly. Through the mandate, the U.S. will become a reference point for competition and will help the rest of the world adopt AI-powered solutions faster with lower risk and higher accountability. The broad scope of the Executive Order means there are many potential impacts for companies, presenting a trade-off of short-term investments vs. long-term gains. While the Executive Order will cost more time and money at the outset to achieve compliance, companies will ultimately reduce the friction that is inevitable amongst customers and other entities that could impede the adoption of AI-powered solutions.
“We can expect to see more innovative development from organizations in the coming years than ever before as startups and innovators figure out how to leverage the latest capabilities and their boundless imagination. Companies will also have to make strong investments in workforce education and reskilling to adopt the understanding that good decisions pair AI with human intelligence.
“This Executive Order will help ensure organizations and AI companies are transparent about the methods they deploy to ensure compliance with the guidance and do their part to protect consumers and businesses.”
Olga Beregovaya, the VP of AI and Machine Translation at Smartling
“Governments and regulatory bodies pay little attention to the notion of ‘watermarking’ AI-generated content. It is a massive technical undertaking, as AI-generated text and multimedia content are becoming indistinguishable from human-generated content. There are two possible approaches to ‘watermarking’—either build reliable detection mechanisms for AI-generated content or mandate watermarking so that AI-generated content is easily recognized.
“Publishing and indexing machine-generated content has been a concern for a good 10 years now (for instance, Google would not index machine-generated content), and now the concerns are increasing since AI content is often not distinguishable from human-generated content.”
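The second of the two approaches Beregovaya describes, marking AI output so it can be recognized later, can be sketched in miniature. The snippet below is a hypothetical provenance-tagging scheme (closer to content signing than to robust in-band text watermarking, which is much harder): a provider attaches an HMAC tag to generated text, and anyone holding the key can later verify whether a given passage carries a valid tag. The key name and function names are illustrative assumptions, not part of any real provider's API.

```python
import hashlib
import hmac

SECRET_KEY = b"provider-signing-key"  # hypothetical key held by the AI provider


def sign_content(text: str) -> str:
    """Produce a provenance tag (sidecar signature) for AI-generated text."""
    return hmac.new(SECRET_KEY, text.encode(), hashlib.sha256).hexdigest()


def verify_content(text: str, tag: str) -> bool:
    """Check whether a provenance tag matches the text exactly."""
    expected = sign_content(text)
    return hmac.compare_digest(expected, tag)


generated = "This paragraph was produced by a language model."
tag = sign_content(generated)

assert verify_content(generated, tag)            # untouched text verifies
assert not verify_content(generated + " x", tag)  # any edit breaks the tag
```

The brittleness shown in the last line is exactly the technical difficulty the quote alludes to: a signature survives only verbatim copies, whereas real-world AI text is paraphrased, trimmed, and re-flowed, which is why reliable detection remains an open problem.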
Balaji Ganesan, CEO and Co-Founder of Privacera
“While the EU has been working on finishing up its own set of AI regulations, the U.S. has taken the lead in formulating a regulatory framework. While it may not be comprehensive, it is extremely encouraging that this level of engagement is taking place – the power of AI and GenAI promises to revolutionize every business and business function. But this will only be achieved with the guardrails, governance, and trust frameworks in place.
“While the executive order does not impose any penalties for non-conformance and refers to the voluntary work done by 15 private companies, GenAI safety and risk is something governments worldwide are taking very seriously. The importance of balancing GenAI’s potential with its risks and safety requirements is echoed by the data and business leaders I have spoken to. Over the past three months, I’ve had the privilege to speak to many data leaders from more than 40 Fortune 500 enterprises, as well as business partners like AWS and various thought leaders, and mitigating risks and safeguarding sensitive data are major inhibitors for organizations embarking on this journey.”
Ali Ghodsi, CEO and Co-Founder of Databricks
“We believe in adequate and effective regulation of AI. We applaud the Administration’s significant efforts to promote AI innovation, an open and competitive AI marketplace, and the broad adoption of AI. We are also pleased to see the support for open-source and open science. We look forward to contributing to the process the Commerce Department is starting, looking at the importance and concerns around continuing to allow the open sourcing of model weights of foundation models.”
Jaysen Gillespie, Head of Analytics and Data Science at RTB House
“President Biden has a long history of considering how changes—such as widespread adoption of AI—will impact Americans across the economic spectrum. I think it’s reasonable to expect that he’ll want to strike a balance that preserves the lead the United States enjoys in AI development while ensuring some degree of transparency in how AI works, equity in how benefits accrue across society, and safety associated with increasingly powerful automated systems.
“Biden is starting from a favorable position: even most AI business leaders agree that some regulation is necessary. He is likely also to benefit from any cross-pollination from the dialogue that Senator Schumer has held and continues to hold with key business leaders. AI regulation also appears to be one of the few topics where a bipartisan approach could be truly possible.
“Given the context behind the Executive Order, the President has a real opportunity to establish leadership—both personal and for the United States—on what may be the most important topic of this century.”
Nadia Gonzalez, Chief Marketing Officer, Scibids
“It’s encouraging that The White House is beginning to take AI seriously at a broader level, moving us away from the patchwork approach that has so far occurred at a state-by-state level. AI has the potential to drastically improve how governments operate, protect privacy at large, and promote innovation, but care must be taken to ensure that the regulations go far enough.
“AI has been operating in the background of the world’s devices for years, and the public is quickly adapting to the AI Age; regulators must pave the way before it’s too late. It is great to hear that officials are taking it seriously, encompassing the real meaning of ‘AI’ in our nation’s regulations, understanding how that will impact the public, and considering future technological innovation across the US.”
Peter Guagenti, President of Tabnine
“Corporate control over models and the data they are trained on is critical. As the White House’s announcement called out, ‘AI not only makes it easier to extract, identify, and exploit data – it also heightens incentives to do so because companies use data to train AI systems.’ Given this, protecting our privacy with AI is incredibly important. And it’s not just about Americans’ privacy; it’s also about intellectual property and copyright held by business entities.
“Big Tech has been completely unconstrained in its competitive practices for the last 25 years, and unsurprisingly, its monopolistic tendencies are now playing out across AI. Case in point: there are currently pending lawsuits against the companies behind the large-scale models for copyright infringement, and directly against Microsoft, in particular, for training its code-generation models on code sourced from private code repositories without the permission of the code creators. Data used in models must be explicitly allowed and fully transparent, an ongoing and persistent problem for AI that urgently needs to be dealt with.
“We also applaud the White House for promoting a ‘fair, open, and competitive AI ecosystem’ by providing both small developers and entrepreneurs access to technical assistance and resources, helping small businesses commercialize AI breakthroughs, and encouraging the Federal Trade Commission (FTC) to exercise its authorities. As we know, Big Tech is proving it wants to expand its aggressively competitive practices to capture the entire AI stack.
“Companies like Microsoft invested heavily in OpenAI because they wanted to control not just the algorithms and building blocks of AI, but also deploy them down to every single product category and potential application. These behemoths want to control a fully integrated stack that ensures there is no meaningful competition. It appears the White House is also seeing this, and creating opportunities for small businesses—the lifeblood of the American economy—to double down on innovation.”
Sam Gupta, Principal Consultant at ElevatIQ
“This is probably one of the most sensible steps I have seen from any government. While I appreciate the power of AI, we might not be able to acquire AI debugging skills as fast as we would develop the technology. And soon, it might become uncontrollable.
“As far as market opportunities are concerned, whichever direction we go, we will have plenty of opportunities:
- Opportunities generated by the power of AI.
- Opportunities generated just to understand what the hell AI is doing.
- Opportunities to combat the misuse of AI.
- Opportunities to report data to comply with the regulations.
“All in all, everyone will make plenty of money. So remain calm and watch the world around you change through technological innovation or regulatory control.”
Corey Hynes, Executive Chairman and Cofounder at Skillable
“The job market is expected to be characterized by a fusion of human and machine collaboration, where AI augments human capabilities. To thrive in this evolving landscape, individuals must cultivate a unique skill set that includes adaptability, creativity, emotional intelligence, and the ability to interact and collaborate with AI. The jobs of the future will demand not only technical expertise but also hands-on ability to showcase those skills.
“New skills in responsible AI, ethics, critical thinking, negotiation, grit, data analysis, and more will be needed for the future job market. Cybersecurity skills will also become mission critical, especially because threats are evolving with advances in AI.”
Dr. Mohamed Lazzouni, the CTO of Aware
“Broad regulatory measures can inhibit companies and competitors from entering the marketplace and hurt innovation. But at the same time, developers of AI-based solutions must create these solutions responsibly. This is especially true in the area of biometrics, where solution providers need to ensure they’re training their algorithms on the most diverse data sets in the world and thus achieving demographic parity.”
Michael Leach, Compliance Manager at Forcepoint
“The Executive Order on AI that was announced today provides some of the necessary first steps to begin the creation of a national legislative foundation and structure to better manage the responsible development and use of AI by both commercial and government entities, with the understanding that it is just the beginning.
“The new Executive Order provides valuable insight into the areas the U.S. government views as critical in the development and use of AI, and into what the cybersecurity industry should focus on when developing, releasing, and using AI: standardized safety and security testing, the detection and repair of network and software security vulnerabilities, the identification and labeling of AI-generated content, and, last but not least, the protection of individual privacy by safeguarding personal data when using AI.
“The emphasis in the Executive Order that is placed on the safeguarding of personal data when using AI is just another example of the importance that the government has placed on protecting Americans’ privacy with the advent of new technologies like AI. Since the introduction of global privacy laws like the EU GDPR, we have seen numerous U.S. state-level privacy laws come into effect across the nation to protect American privacy, and many of these existing laws have recently adopted additional requirements when using AI in relation to personal data.
“The various U.S. state privacy laws that incorporate requirements when using AI and personal data together (e.g., training, customizing, data collection, processing, etc.) generally require the following: the right for individual consumers to opt out of profiling and automated decision-making, data protection assessments for certain targeted advertising and profiling use cases, and limited data retention, sharing, and use of sensitive personal information when using AI. The new Executive Order will hopefully lead to the establishment of more cohesive privacy and AI laws that will assist in overcoming the fractured framework of the numerous, current state privacy laws with newly added AI requirements. The establishment of consistent national AI and privacy laws will allow U.S. companies and the government to rapidly develop, test, release, and adopt new AI technologies and become more competitive globally while putting in place the necessary guardrails for the safe and reliable use of AI.”
Tim MalcomVetter, Executive Vice President of Strategy at NetSPI
“There has never been faster adoption of any technology than what we’ve seen with Generative AI, ML, and LLMs over the past year. A prime example of such rapid adoption and disruption is the public letter by Satya Nadella, CEO of Microsoft, announcing that all Microsoft products are or soon will be Copilot-enabled—and this is just the starting point.
“The most recent AI Executive Order demonstrates the Biden administration wants to get ahead of this very disruptive technology for its use in the public sector and desires to protect the private sector by requiring all major technology players with widespread AI implementations to perform adversarial ML testing. The order also directs NIST to define AI testing requirements, which is critical because no one can yet say with confidence that we, as a tech industry, exhaustively know all the ways these new AI implementations can be abused.”
Sreekanth Menon, Global AI/ML Services Leader at Genpact
“Governments and organizations around the globe have begun to realize the pressing need for unified processes, metrics, systems, tools, and methods to regulate the development and monitoring of AI systems for the larger good of the world. On the other hand, AI and analytics organizations developing and distributing AI solutions to client organizations across verticals have long been following methods and processes to offer responsible, ethical, and explainable AI systems and solutions that carefully balance innovation and impact.
“The Biden administration’s wide-ranging executive order is a move to streamline the development and dissemination of AI systems, including but not limited to healthcare, human services, and dual usage foundation models. The executive order balances optimism about the potential of AI with considerations of risk, privacy, and safety from using such systems if unmonitored. The executive order stresses the need for existing agencies and bodies to come together and provides a directive for these organizations to formulate cohesive tools to understand AI systems better and create oversight.”
Nandan Mullakara, Founder of Bot Nirvana
The Executive Order is a step in the right direction. I would like to see a global agreement on AI that includes these five key aspects of safety and ethics:
- Responsible Data Practices: Priority on non-invasive, consequence-aware data use.
- Defined AI Use Boundaries: Clear-cut AI application guidelines to prevent abuse.
- Guaranteed Reliability & Safety: AI must be consistent and risk-free.
- Mandatory Transparency & Accountability: AI decisions require clarity and responsibility.
- Essential Fairness & Inclusiveness: AI has to serve diverse needs without bias.
Dan O’Connell, the Chief AI and Strategy Officer at Dialpad
“While there should be regulation that protects people from the dangerous effects of ‘bad AI’ like deep fakes, for example, the prevalence of these types of malicious AI use cases is vastly overblown. Most AI can be defined as ‘good AI,’ or AI that enhances human productivity, and it’s not as scary or all-encompassing as people fear.
“As we’ve seen time and time again, generative AI cannot be relied upon to produce consistently accurate results without ever-present human oversight. Think of generative AI like an editor or a copywriter; it’s a tool that makes you faster and better at your job. Government regulation has proven to slow down innovation, and I worry about forms of regulation where there are too many restrictions that stop good people from collaborating quickly and freely.
“Generative AI is impressive and amazing, but it won’t be the magic pill for everything you need. Just like with the invention of the first computer or the internet, AI will make us more efficient and better at our jobs and create new startup growth.”
Gopi Polavarapu, the Chief Solutions Officer at Kore.ai
What does this mean for the industry?
“President Biden’s swift Executive Order on AI will steer the industry in a positive direction while placing the United States at the forefront of AI innovation and responsible governance. This move, coupled with the establishment of a comprehensive framework for responsible AI development and deployment, is essential in fostering greater trust in the technology across all industries. It is imperative to strike a balance between innovation and regulation to guard against misuse and risks, as seen throughout history when the US government has regulated powerful technologies like nuclear fission and genetic engineering, and even mandated seat belts in automobiles. Companies must embrace this balance by developing and implementing stringent guardrails to uphold ethical AI use, preventing any unintended consequences.”
How should businesses react?
“Most AI regulations will require businesses to address four major concerns as they adopt and integrate the technology: inclusiveness, transparency, factual integrity, and continuous evaluation. For businesses, it’s an ethical imperative to ensure AI is a force for good and to ensure fairness and equity across diverse user demographics. The AI systems they build should transcend biases, toxicity, and discrimination. Transparency is the foundational principle of responsible AI and serves as a linchpin for building trust between AI systems and users.
“Enterprises should also design AI systems in a way that ensures that humans maintain ultimate control over their operation through regular, logical, human-run auditing. Lastly, organizations must be aware of the boundaries that their AI models operate within and systematically identify the strengths and weaknesses of their models. By doing so, they can pinpoint areas that require improvement, fine-tune their systems, and adapt to evolving challenges.”
Colin Priest, Chief Evangelist at FeatureByte
“AI safety requires AI governance, and the dirty secret in the AI industry is that the weakest link in AI governance is data pipelines. The manual bespoke AI data workflows used by most enterprises need to be redesigned and industrialized. AI governance for data requires data provenance, data documentation (especially semantics), role-based access control (RBAC) security, identification of downstream consequences of changes, version control, formal approval processes for all changes, and audit trails.”
Bjorn Reynolds, CEO of Safeguard Global
“The Executive Order provides much-needed AI governance to protect against the use of these powerful tools for deceitful or malicious intent. In considering the opportunities presented by the adoption of AI, companies must carefully articulate processes for decision-making, implementation, and oversight of AI’s utilization as a resource and tool within their organization.
“Despite the evolving regulatory landscape, as the economy and employee expectations continue to shift, AI will remain a key tool that supports agile, strategic decision-making within global operations. AI solutions provide data-driven support to leaders as they tackle longstanding challenges, such as keeping up with international HR and employment laws worldwide, facilitating continued global expansion. Embracing AI doesn’t mean you’re surrendering to a machine but rather leveraging technology to streamline tactical tasks, enabling teams to focus on higher-level work to support global operations and growth. In the context of the executive order, these factors enable a company to increase competitiveness and foster better employee experiences.”
“We applaud the Biden administration’s recent Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence, particularly the emphasis on ensuring the cybersecurity of models. The president’s main directive is to develop standards, tools, and tests to help ensure that AI systems are safe, secure, and trustworthy. Users need to know the AI systems they are using are secure, so building security solutions before use is important, as is providing security at the point of inference. Securing the usage of these models means organizations will be protected in real-time from threat incursions, regardless of the threat’s nature or origin.
“While this order is a step in the right direction, we urge the Biden administration to take steps to reinforce cybersecurity measures surrounding the utilization of foundation models, including large language models (LLMs), by encouraging the organizations deploying them to aggressively address critical security considerations. This external approach is increasingly important on a global scale as multinational and international organizations adopt SaaS applications that have models embedded within them and seek to integrate a rapidly expanding array of models into their enterprise.”
Doug Shannon, Global Intelligent Automation Leader and Gartner Peer Community Ambassador
“Reflecting on the evolving landscape of AI, it’s evident that, as a society, we need a delicate balance between necessary regulations and the ability to adapt swiftly to the ever-changing nature of this technology. While governance is essential for oversight, there’s a growing imperative to foster nimbleness in our approach.
“In fostering nimbleness within governance, the focus shifts beyond societal adaptation to empowering individuals to actively contribute to the decision-making process. Utilizing technology, we have the potential to establish platforms where people can voice concerns, vote on issues, and bring forth topics that might otherwise go unnoticed.
“As we navigate the path of AI development, the imperative is to move from a reactive to a proactive stance. Imagine a future where technology facilitates a direct connection between citizens and decision-makers, enabling a faster and more effective response to emerging challenges. I think it’s something worth looking forward to.”
Raju Vegesna, Zoho Chief Evangelist
Government regulation is not only insufficient; it’s also too late to start trying to cap this firehose.
“The Biden administration’s recent executive order on AI demonstrates that when vendors don’t police themselves, which is to be expected, regulations will attempt to do the policing. Unfortunately, governments will always be playing catch-up to technology evolving faster than it can be reined in, and this executive order fails to enable the full level of transparency needed to predict where the technology will head next. Instead, tech companies need to see this announcement as a call to institute their own AI best practices and communicate them to consumers who, through their purchase decisions, will serve as the true regulators of AI, moving forward. It’s also important that those consumers understand how government agencies will use AI—an area not covered by the executive order. We are in the initial stages of an AI revolution, and my hope is that governance, or lack thereof, doesn’t slow the innovation process.”
Achim Weiss, CEO at IONOS
“AI regulation is at the forefront of global political agendas, and for good reason. Global investments in emerging tech like AI have shed light on a new era: one defined by digital transformation and limitless possibilities for today’s enterprise landscape. Harnessing the power of AI, however, has become a controversial topic, leaving thought leaders with one question: what are the risks?
“The EU is leading the way for ethical AI standardization with the creation of The AI Act, which will drive risk mitigation across AI implementation within European nations. Companies that prioritize the responsible use of AI tools are destined to lead in today’s world defined by emerging technologies and digital innovation.”
Edmund Zagorin, Founder and Chief Strategy Officer at Arkestro
“Organizations using AI in their supply chains will need to put the proper guardrails in place because, without some degree of human intervention, AI could wreak havoc on supply chains. For example, if AI could be useful in detecting or preempting a run on a specific commodity, food, or medicine, then it could also trigger an autonomous buying cycle ahead of that run, thereby exacerbating it. We call this ‘AI-induced panic buying,’ and this could drive shortages in life-saving supplies like cancer medicines, which have proven vulnerable to supply chain fragilities.”
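The human-intervention guardrail Zagorin describes can be sketched as a simple circuit breaker: an autonomous buyer is allowed to act on a demand forecast only while the forecast stays near the historical baseline, and any sharp spike, exactly the condition that would fuel AI-induced panic buying, is capped and escalated to a person. The function name, threshold, and return shape below are illustrative assumptions, not a real procurement API.

```python
def propose_order(forecast_qty: float, baseline_qty: float,
                  spike_factor: float = 1.5) -> dict:
    """Decide an order quantity; escalate to a human reviewer when the
    model's forecast spikes far above the historical baseline (a crude
    circuit breaker against AI-induced panic buying)."""
    if forecast_qty > baseline_qty * spike_factor:
        # Cap the automatic order at the baseline and flag for review
        # instead of amplifying a predicted run on the commodity.
        return {"order_qty": baseline_qty, "needs_human_review": True}
    return {"order_qty": forecast_qty, "needs_human_review": False}


# Normal demand: the order goes through automatically.
print(propose_order(100, 95))
# Forecast spike (e.g., a predicted run on a medicine): capped and escalated.
print(propose_order(300, 100))
```

The key design choice is that the guardrail fails safe: when the system is least sure the forecast reflects reality rather than a feedback loop, it defers to a human rather than buying ahead of the predicted run.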