
Defining Moments from the Insight Jam: Cybersecurity and the AI Executive Order


The editors at Solutions Review recap defining moments from the 5th Annual Insight Jam with their favorite quotes from the industry expert panelists. This panel examined the ins and outs of the AI Executive Order and how it affects the current and future landscape of cybersecurity.

At Solutions Review, December is defined by the Insight Jam, an always-on community for enterprise technology end-users, experts, and solution providers. The Cybersecurity and the AI Executive Order panel, moderated by Dwayne McDaniel of GitGuardian, centered on Biden’s AI Executive Order and how it will impact cybersecurity teams. Panelists included Brian Sathianathan of Iterate.ai, Daryan Dehghanpisheh of Protect AI, Josh Davies of Fortra’s Alert Logic, Luis Villa of Tidelift, and Mike Pedrick of Nuspire.

While the whole hour-long event is monumental, these are the defining moments from the Cybersecurity and the AI Executive Order Insight Jam.

Defining Moments from the Insight Jam: Cybersecurity and the AI Executive Order


Dwayne McDaniel, Developer Advocate at GitGuardian

“The best thing I heard about AI in the recent past was from Deloitte at API World. And they said from stage, ‘Anybody that tells you they know what AI is going to look like a year from now is probably lying to you. It’s just moving so fast. Developers are doing things and inventing as they go.’ So while this is all, I think, a great discussion, we didn’t even get into bias. I think bias should be its own panel: how do we build or fix bias without inputting bias into the system? But that’s all a different matter. I think this is an important read if you want to see what’s coming in the future of AI, but I don’t think this is something that’s going to affect you and me tomorrow.”

Brian Sathianathan, Chief Technology Officer at Iterate.ai

“There will be a lot of knowledge gaps that need to be filled. I think that’s where a lot of companies here will have to rush in to fill that knowledge gap. When a lot of the cybersecurity laws came in, from 1995 till, I don’t know, 2001, 2004, when a lot of these things became mainstream, there were 8 to 10 years for people to build a lot of tools. In AI, it’s developing a lot faster, right? I mean, of course, attacks are happening, but more than that, the base tech itself is developing faster, which means folks have to really understand and learn what’s exactly going on within these models and these systems. And sometimes, you know, even the engineers and scientists who operate these models don’t quite know what’s going on, right? It doesn’t mean that attacks cannot be prevented. You can still have inventories and best practices and everything, right? But there are a lot more opportunities to create confusion, and there is going to be a much bigger knowledge gap.

I think as cyber teams learn about AI, the first thing I would say is: learn as much as possible. And treat all of these systems differently, just like you would treat different security attack vectors. It’s not just one AI, one ML; it depends on the systems. Learn as fast as possible, and get the help of startups and various companies to fill your knowledge gap until your teams are up to par.”
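(Editor’s note: Sathianathan’s advice to inventory models the way security teams inventory other attack surfaces can be made concrete. The Python sketch below shows what one entry in such an inventory might look like; the MLModelRecord type and its fields are our illustrative assumptions, not a standard and not anything discussed on the panel.)

```python
from dataclasses import dataclass, field

@dataclass
class MLModelRecord:
    """One entry in a hypothetical AI/ML asset inventory.

    Treating each model as its own attack surface means recording enough
    context to reason about its exposure: what it is, where its training
    data came from, and who owns it.
    """
    name: str
    version: str
    model_type: str  # e.g. "LLM", "gradient-boosted trees"
    training_data_sources: list[str] = field(default_factory=list)
    owner: str = "unassigned"
    internet_facing: bool = False

# "It's not just one AI, one ML": track each model as a distinct asset.
inventory = [
    MLModelRecord("support-chatbot", "2.1", "LLM",
                  training_data_sources=["support-tickets-2023"],
                  owner="platform-team", internet_facing=True),
    MLModelRecord("churn-predictor", "0.9", "gradient-boosted trees",
                  training_data_sources=["crm-export"],
                  owner="data-science"),
]

for record in inventory:
    exposure = "external" if record.internet_facing else "internal"
    print(f"{record.name} v{record.version}: {record.model_type}, {exposure}")
```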

Daryan Dehghanpisheh, President and Co-Founder of Protect AI

“If you look at the AI laws that went into effect for 2023, at the start of the year, right, you had California, Colorado, Connecticut, New York City, Virginia, Utah, like the states are getting involved in this. But I think what’s really interesting is that courts are ultimately going to have the final say. And what I find really interesting about this whole debate is that it’s often said that history doesn’t repeat, but it rhymes.

This feels a lot to me like the 1995 elements around encryption, and encryption being classified as a munition by the US government, right? And what you saw the courts ultimately rule was that code is speech and that it can’t be regulated. And there’s a whole lot of case law and precedent here. And what the courts have generally said is that code is speech, and that’s what AI is. AI is code. It’s going to be really hard to develop regulations that try to limit code and have that code under the protections of the First Amendment, at least in the United States. So I think this is all going to take a lot of time to play out. And there’s going to be a ton of litigation to try to solve some really interesting regulatory and constitutional questions about code and AI. “Is AI code?” Yes, it is. I don’t think any of us would debate that. So if code is speech, how are they going to regulate this? I think it’s going to be really, really difficult to figure out what is going to come down this pipe.”

Josh Davies, Principal Technical Marketing Manager at Fortra’s Alert Logic

“Initially, I really thought we were going to focus on security when I sat down to read this. And I appreciate the nods to all the different things, but there is a feeling of trying to boil the ocean, a lot of good sentiment without real substance behind it. So I think what will be interesting is what we see from the output of this report, right?

When we’ve seen what the different American departments that have been assigned certain tasks come back with, what’s that going to result in? Is it going to result in something that is more enforceable? Is it going to result in buying from a lot of other companies? That’s really where I’m going to reserve my judgment: to see how much they’re actually able to act upon. But it also felt like a bit of an advert for the U.S. government, that we are open for business for all AI people to come over, and that we have a good handle on all the different elements they could possibly think of that will need to be legislated, or at least considered, going forward.”

Luis Villa, Co-Founder and General Counsel of Tidelift

“For those who are not experts in the way the U.S. federal government works, an executive order is pretty limited by nature, right? It can’t do a lot to shift around funding. It doesn’t have the force of law in a variety of ways. And if the administration changes in the next election, which is less than a year away now, this could all go away. So it is, by nature, a pretty limited kind of thing.

But that said, I think you have to see this as part of preparing the government to do more, right? So that when Congress is ready to act, for example, heads of departments have somebody on their team who’s thinking about this. Because of this, reports will be written, which will turn into the seeds of future legislation and future regulation. So it’s not going to impact your day-to-day now, but especially for those of you in larger companies in the audience who have, you know, lobbyists and things like that, those folks are going to be very busy responding and reacting to the fact-finding that’s going to be done as a result of this. And that’s going to turn into real facts on the ground, you know, presumably in the next administration.”

Mike Pedrick, Vice President of Cybersecurity Consulting at Nuspire

“As a bit of a data privacy and personal privacy nut, one of my biggest questions, and it’s been playing out in my head for a long time now, is this: AI is gobbling up all of our consumer data, not just contact information but telemetry data and so on, and a growing number of privacy laws give us a right of action to claw back that data. The conundrum in my head has been, and I’ll try to frame it quickly but clearly: if the AI model is able to just remove my information as a data subject, the information that is unique to me, if it is able to just jettison that information out of itself, just throw it back out, was it important that it be there in the first place? Or does it represent a very serious, very fundamental, catastrophic change to the model when we pull information back out?

And so I think it’s two opposing forces, two perpendicular forces, right? The force of personal privacy legislation versus the idea that what goes into the AI model has to stay there, because the model is learning from it and there are long-term ramifications if we take it back out. And I don’t know how to grapple with that. I don’t know how to grapple with it in my head, let alone from a legal perspective.”
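(Editor’s note: the conundrum Pedrick describes is what researchers call machine unlearning. The exact, baseline answer to an erasure request is to drop the subject’s rows and retrain from scratch, which this minimal sketch illustrates; the toy data and the subject_ids tagging are our illustrative assumptions, while the scikit-learn calls themselves are standard.)

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Toy dataset: every row is tagged with the data subject it came from.
X = rng.normal(size=(500, 4))
y = (X[:, 0] + X[:, 1] > 0).astype(int)
subject_ids = rng.integers(0, 100, size=500)

# Original model, trained on everyone's data.
model = LogisticRegression().fit(X, y)

# "Right to erasure" for subject 42: the exact approach is to drop their
# rows and retrain. There is no supported way to subtract one subject's
# influence from the already-fitted weights after the fact.
keep = subject_ids != 42
model_after_erasure = LogisticRegression().fit(X[keep], y[keep])

# The weights shift, which is Pedrick's point: the subject's contribution
# was real, so removing it changes the model rather than leaving it intact.
print(np.abs(model.coef_ - model_after_erasure.coef_).max())
```

Exact retraining scales poorly, which is why approximate unlearning remains an open research area; the sketch only shows why “just jettison the data” is not a free operation.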


Watch the Cybersecurity and the AI Executive Order Insight Jam Here.
