
2024 Cybersecurity Predictions from Industry Experts

The editors at Solutions Review have compiled a list of 2024 cybersecurity predictions from some of the industry’s leading experts.

To properly close out the year, we called for the industry’s best and brightest to share their Identity Management, Endpoint Security, and Information Security predictions for 2024 and beyond. The experts featured represent some of the top cybersecurity solution providers in these marketplaces, and each projection has been vetted for relevance and its ability to add business value.

2024 Cybersecurity Predictions from Industry Experts

John Stringer, Head of Product at Next DLP 

“In 2024, AI will better inform cybersecurity risk prevention decision-making. Elsewhere, disgruntled employees may lash out at stricter working-from-home policies as insider threats loom.

With AI estimated to grow more than 35 percent annually until 2030, businesses have swiftly adopted the technology to streamline processes across a variety of departments. We already see organizations using AI to identify high-risk data, monitor potential insider threat activity, detect unauthorized usage, and enforce policies for data handling. Over the next year, AI will power data loss prevention (DLP) and Insider Risk Management (IRM) efforts by detecting risky activity and then alerting IT teams who can analyze their movements and respond accordingly, preventing further cybersecurity issues from arising.
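The detect-and-alert loop described above can be pictured with a toy sketch. This is an illustration of the general technique, not Next DLP’s implementation; all names, data, and thresholds are hypothetical. It flags a user whose latest daily transfer volume deviates sharply from that user’s own baseline, the kind of signal an IRM tool would surface to an IT team for review:

```python
# Toy insider-risk signal: flag users whose most recent day of data
# transfer deviates far from their personal baseline (z-score test).
from statistics import mean, stdev

def flag_risky_users(daily_mb_by_user: dict[str, list[float]],
                     z_threshold: float = 3.0) -> list[str]:
    """Return users whose latest daily volume exceeds baseline mean + z*stdev."""
    flagged = []
    for user, history in daily_mb_by_user.items():
        baseline, today = history[:-1], history[-1]
        if len(baseline) < 2:
            continue  # not enough history to model a baseline
        mu, sigma = mean(baseline), stdev(baseline)
        if sigma == 0:
            sigma = 1e-9  # avoid division by zero on perfectly flat baselines
        if (today - mu) / sigma > z_threshold:
            flagged.append(user)
    return flagged

# Hypothetical daily upload volumes in MB, oldest to newest
activity = {
    "alice": [10, 12, 11, 9, 10, 500],  # sudden exfiltration-like spike
    "bob":   [40, 42, 38, 41, 39, 40],  # normal day-to-day variation
}
alerts = flag_risky_users(activity)
```

A real product would model many more signals (file types, destinations, time of day) and feed analysts context rather than a bare flag, but the baseline-deviation idea is the same.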

Insider threats will start to manifest themselves in other ways in the new year, too. As an increasing number of companies implement stricter policies about office working and fewer days at home, disgruntled staff – particularly younger employees who have only experienced a ‘post-Covid’ working environment – may lash out at these supposedly unfair policies. Frustrated employees could turn to stealing data and leaking sensitive company information, leading to wider security concerns that may impact brand reputation.”

Steve Wilson, Chief Product Officer at Exabeam 

“Companies are under constant assault and frankly, the cybersecurity sector is failing customers. Businesses, government agencies, healthcare installations and more are in the unfair position of being attacked from the outside by nation state actors, while employees exfiltrate and sell company data from the inside.

Defending against these asymmetric threats using most of the security tools available today is near impossible because they do not effectively target the right threats. As the great unsolved challenge of this decade, cybersecurity needs an overhaul of approaches and tools, so that the people trying to protect companies and data don’t continue to drown amid thousands of daily threats.

We’ve seen the great innovations that AI spurred this year, especially with large language models (LLMs). I expect this technology to be transformative next year, especially in cybersecurity. I believe that AI will allow security operations personnel to use natural language with security tools and remove the friction of programming complex queries to stop intrusions.

Natural language will also allow SecOps to explain threats to executives and departmental business counterparts without complicated representations and tables on a screen. This natural language understanding can open space for SOC personnel to move faster and cover even more ground, which will be music to the CISO’s ears as they continue to struggle with a persistent talent gap and with building skilled teams.”

Darren Shou, Chief Strategy Officer at RSA Conference

“While not new for 2024, mental health challenges will continue for many in the cybersecurity industry who are overworked and underappreciated. The stress that cyber employees endure day in and day out to secure vital systems, companies, and individuals is only compounded and exacerbated by the skills shortage our industry faces.

The price of mistakes is much higher in today’s world, which creates a lot of pressure; the emergence of cyber insurance goes hand in hand with that. Other forms of mental health anxiety can stem from job security, the expectation of 24/7 availability, and the lack of proper training or continuous education suitable to deal with emerging threats. One question to ask heading into 2024: can mental health support be delivered or supported by AI? Another: does your team feel supported? Many of these individuals take on the role of cyber heroes and are deployed to help prevent breaches or attacks. The burden should not fall squarely on their shoulders. Organizations and individuals should make it a priority to address these issues head-on before burnout, mental health collapse, and other significant issues take hold.”

Petros Efstathopoulos, Vice President of Research at RSA Conference

“The evolution of generative AI, ChatGPT, and other forms of artificial intelligence and machine learning has put privacy once again at the forefront of cybersecurity. Privacy concerns for consumers, in the enterprise, and online are all still valid, and AI adds a new wrinkle. How should engaging with AI in our work or personal lives be regulated in terms of privacy? Where does individual data go, and who has access to it? How can data be removed from a model? What safeguards do we need in order to avoid extraction attacks? These are the types of questions that need to be front and center.

In terms of policy, different jurisdictions have different rules, regulations, and restrictions for AI and data privacy. How does this translate to the development of AI capabilities, and how will consumers and enterprises manage the jurisdictional differences? This will be key to watch in 2024 and beyond.”

Zach Capers, Manager of ResearchLab and Senior Security Analyst at GetApp

“In 2023, we finally saw some positive signs in the world of security. Businesses appear to have rebounded from an influx of pandemic-fueled vulnerabilities and have begun locking down systems like never before. This means that cybercriminals will increase reliance on social engineering schemes that exploit employees rather than machines.

Moving into 2024, GetApp research finds the number one concern of IT security managers is advanced phishing attacks. And we’re not only talking about email phishing. SEO poisoning attacks are a rising phishing threat designed to lure victims to malicious lookalike websites by exploiting search engine algorithms. This means that employees searching for an online cloud service might find a bogus site and hand their credentials directly to a cybercriminal, have their machine infected by malware, or both. In 2024, it will be more important than ever to educate employees on the sophisticated and increasingly dynamic methods used to trick them into handing over sensitive information that can result in damaging cyberattacks.”
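The lookalike-site problem lends itself to a concrete illustration. The sketch below is an assumption-laden toy, not GetApp’s methodology: many lookalike domains are built by making one or two character changes to a trusted name, so a defender can flag any domain that sits within a small edit distance of a known-good one. The trusted list here is purely illustrative:

```python
# Flag lookalike domains of the kind used in SEO-poisoning phishing:
# close to, but not exactly, a trusted domain.

def edit_distance(a: str, b: str) -> int:
    """Classic dynamic-programming Levenshtein distance."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                  # deletion
                           cur[j - 1] + 1,               # insertion
                           prev[j - 1] + (ca != cb)))    # substitution
        prev = cur
    return prev[-1]

# Hypothetical allow-list of services an organization actually uses
TRUSTED = ["dropbox.com", "salesforce.com", "office365.com"]

def is_suspicious(domain: str, max_distance: int = 2) -> bool:
    """True when a domain nearly matches a trusted one but is not identical."""
    return any(0 < edit_distance(domain, good) <= max_distance
               for good in TRUSTED)
```

For example, `is_suspicious("dropb0x.com")` is true (one character swapped), while the genuine `dropbox.com` and an unrelated domain both pass. Production tooling adds homoglyph handling, keyword matching, and reputation feeds on top of this basic idea.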

Theresa Lanowitz, Head of Evangelism at AT&T Cybersecurity and Former Gartner Analyst

“In a world of edge computing comprised of diverse and intentional endpoints, it is important for the SOC to know the precise location of the endpoint, what the endpoint does, the manufacturer of the endpoint, whether or not the endpoint is up to date with firmware, whether the endpoint is actively participating in computing or should be decommissioned, and a host of other pieces of pertinent information. Edge computing expands computing to be anywhere the endpoint is, and that endpoint needs to be understood at a granular level.

In 2024, expect to see startups provide solutions to deliver granular detail of an endpoint including attributes such as physical location, IP address, type of endpoint, manufacturer, firmware/operating system data, and active/non-active participant in data collection. Endpoints need to be mapped, identified, and properly managed to deliver the outcomes needed by the business. An endpoint cannot be left to languish and act as an unguarded point of entry for an adversary.

In addition to granular identification and mapping of endpoints, expect to see intentional endpoints built to achieve a specific goal, such as ease of use, operation in harsh environments, or energy efficiency. These intentional endpoints will use a subset of a full-stack operating system. SOCs will need to manage these intentional endpoints differently than endpoints with a full operating system.

Overall, look for significant advancements in how SOCs manage and monitor endpoints.”

Mary Blackowiak, Director of Product Management and Development at AT&T Cybersecurity

“Digital transformation continues to evolve rapidly and, despite some return-to-office initiatives, the workforce remains vastly distributed. Given these factors, I expect endpoint security will be a major focus for organizations in 2024. The dispersed nature of today’s workforce amplifies the complexity of safeguarding against cyber threats, with a glaring challenge being the lack of visibility into the multitude of devices accessing organizational networks. The good news is that there are solutions to this dilemma.

The difficulty in effectively protecting what you can’t see remains a fundamental principle in the cybersecurity industry, which is why the first step in any endpoint security strategy should be conducting an inventory of all the devices that are accessing the network. This can be accomplished with a unified endpoint management (UEM) solution. Curated security policies via a UEM solution and endpoint security technologies can be applied once you know the kinds of devices you’re working with. Rogue asset discovery tools are also helpful for identifying endpoints behaving in a manner that would indicate malicious intent.
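The “inventory first” step can be pictured with a minimal sketch. This is hypothetical and greatly simplified compared to a real UEM solution: discovery itself (agents, network scans) is assumed to happen elsewhere, and the code only diffs observed devices against a known-asset list to surface candidates for rogue-asset investigation. All identifiers are made up:

```python
# Minimal asset-inventory diff: anything on the network that is not in
# the known-asset list is flagged for investigation.

# Hypothetical inventory: MAC address -> asset name
KNOWN_ASSETS = {
    "aa:bb:cc:00:00:01": "laptop-finance-01",
    "aa:bb:cc:00:00:02": "printer-floor2",
}

def find_unknown_devices(observed: list[dict]) -> list[dict]:
    """Return observed devices whose MAC address is not in the inventory."""
    return [d for d in observed if d["mac"] not in KNOWN_ASSETS]

# Devices seen on the network, e.g. from an ARP scan or UEM agent report
observed = [
    {"mac": "aa:bb:cc:00:00:01", "ip": "10.0.0.5"},
    {"mac": "de:ad:be:ef:00:99", "ip": "10.0.0.23"},  # not inventoried
]
rogues = find_unknown_devices(observed)
```

The unknown device at `10.0.0.23` is exactly the kind of unmanaged endpoint that a security policy cannot yet cover; real tooling would enrich it with device type, owner, and behavior before deciding whether it is rogue.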

As we look ahead to 2024, organizations must understand that endpoint security is not just a necessity for risk reduction but a strategic investment in safeguarding the digital future.”

Tom Traugott, SVP of Strategy at EdgeCore Digital Infrastructure

“As generative AI models are trained and use cases expand, in 2024 we will enter the next generation of edge and scaled computing, driven by the demands of inference (putting generative AI models to work locally). AI-driven supercomputing environments and databases will not stop or get smaller, as they require 24/7 runtime, but they may move closer to the end user, in contrast to large model-training locations, which can be located asynchronously. While this evolution is certainly something to watch, next-generation computing at the edge is still underway and likely another one to two years from materializing into a form we can fully understand, but we believe modern, state-of-the-art, power-dense data centers will be required to support it.”

Richard Tworek, CTO at Riverbed

“Being good corporate citizens, many companies will move to reduce their carbon footprint in 2024. They will also get a push from regulators in California and the European Union, who are looking to make the laggards meet at least minimal standards of reducing their carbon footprint. One area where AI will help companies is determining which devices are running 24/7 when they don’t need to be and suggesting where to throttle back power consumption if possible. Ironically, the move to AI will consume more computing power which will increase companies’ carbon footprint, making it necessary for companies to increase their sustainability efforts.”

Arti Raman, CEO and Founder of Portal26

“The rapid innovations in artificial intelligence (AI) this past year have brought companies to a crossroads: adopt AI or block AI. The productivity gains and competitive advantages AI enables cannot be ignored, nor can its security concerns. As companies debate implementing or blocking AI, their employees continue to use these free and open resources across company networks, whether or not the company is aware. Going into 2024, investments in AI governance and visibility technology will play a significant role in widespread AI adoption.

A recent report on the state of generative AI showed that company executives, while optimistic about AI’s role in the enterprise, struggle to get valuable insight and visibility into its usage. Concerns like data governance and security will be a top priority: two-thirds of the survey respondents reported a generative AI security or misuse incident in the past year.

While AI governance will be essential to building enterprise AI programs, companies must also develop and implement guardrails that ensure responsible usage alongside responsible development. The same survey also found that nearly every executive interviewed expressed concerns over Shadow AI, yet 58 percent provided only minimal annual education and training on these tools.

Before companies can effectively and safely use generative AI tools, employees must be educated on utilizing best practices: writing prompts that achieve desired outcomes, keeping data security and privacy in mind when inputting data, identifying the quality and security of AI, verifying AI output, and more.

By investing in AI governance tools and developing complementary guardrails, companies can avoid what may end up being the biggest misconception of 2024: the assumption that you can control the adoption of AI.”

Michiel Prins, Co-Founder of HackerOne

“As the adoption of generative artificial intelligence (GenAI) accelerates, organizations have realized they must prioritize security and risk management as they build and implement this emerging technology. The work we’re already doing with customers, including leading AI companies, proves the value hackers deliver to secure GenAI. Red teaming and the insights hackers offer will play an increasingly central role in ensuring the security of this new technology — as exemplified by the Biden Administration’s endorsement of red teaming in its recent executive order.

While we’re seeing more external support for ethical hacking, the value ethical hackers offer isn’t new; they are consistently the first to pressure-test emerging technology. Their creative, adversarial, and community-minded approach gives them a distinct advantage in understanding novel security issues.

Our 2023 Hacker-Powered Security Report found more than half of hackers expect GenAI tools to become a major target for them — and we can assume malicious actors are planning the same. As AI continues to shape our future, and new emerging technologies crop up, the ethical hacker community will remain at the forefront of identifying new risks.”

Joe Fousek, Legal Technology Evangelist at Aiden 

“As we move into 2024, we need to be aware that AI (Artificial Intelligence) will introduce new attack vectors that security teams must address. Compounding the problem, the turnaround time for exploitation will drop dramatically as bad actors learn to use AI. Law firms and legal departments — with their extensive list of specialized practice-specific apps that are already a prime target for hackers — could be especially vulnerable. Manual patching and the use of common deployment tools will struggle to keep up with escalating update cycles and new attack vectors. However, continuous updates that take advantage of AI and hyperautomation can decrease the severity and frequency of service interruptions. Apple has shown that forcing users to apply updates can improve security, and Microsoft is similarly updating their 365 suite of applications and services. Law firms and companies in general need to act with the same urgency across their entire applications portfolio by leaning into AI-based solutions.

The current concerns about confidentiality in Generative AI systems like ChatGPT are very real. These systems ‘learn’ by ingesting confidential information, potentially spitting out that confidential information in their output. The legal community will get through this in 2024 just like it got past the concerns 25-30 years ago when everyone was concerned about saving client data on law firm network resources or Document Management Systems. Generative AI is still in its infancy and faces usability and confidentiality concerns. Still, while mainstream material usage of Generative AI-powered apps in 2024 seems unlikely, the legal community is determined to incorporate this technology in their workflow. The ‘will’ to incorporate AI technology is extraordinarily strong, and as AI use increases in 2024, we will be looking to find the ‘way’ to take out the barriers that remain, allowing for more mainstream adoption in the years that follow.”

Alex Rice, Co-Founder & CTO of HackerOne

“Over the next year, we’ll see many overly optimistic companies place too much trust in generative AI’s (GenAI) capabilities. Nearly half of our ethical hacker community (43 percent) believes GenAI will cause an increase in vulnerabilities within organizations’ code. It’s essential to recognize the indispensable role human oversight plays in GenAI security as this technology evolves.

The largest threat GenAI poses to organizations is in their own rushed implementation of the technology to keep up with competition. GenAI holds immense potential to supercharge productivity, but if you forget basic security hygiene during implementation, you’re opening yourself up to significant cybersecurity risk.

Low code tools built on GenAI also threaten the security of software development lifecycles. GenAI empowers people without the proper technical foundations to produce technical products. If you don’t fully understand the code you’re producing, that’s a huge problem.

The best solution I see to ensure the safe implementation of GenAI is to strike a balance: organizations must remain measured and conservative in their adoption and application of AI. For now, AI is the copilot and humans remain irreplaceable in the cybersecurity equation.”

Chris Evans, CISO and Chief Hacking Officer at HackerOne

“As we look toward 2024, one thing is clear: a pipeline of diverse talent into the cybersecurity workforce remains a significant industry problem. However, there is hope to meet this challenge.

The growing popularity of ethical hacking, particularly among younger generations, has democratized how anyone with a computer, technical ability, and creativity can earn money and experience to jumpstart a cybersecurity career. We’ve heard countless stories of those in the ethical hacker community who started hacking in high school and recently found their calling — and a career — through hacking.

Hacking experience helps these individuals evolve into in-house penetration testers or bug bounty program managers, where their frontline experience provides invaluable insights. Ethical hacking is creating a diverse, skilled, and creative workforce capable of viewing cybersecurity challenges from multiple perspectives. The lower barrier to entry for individuals interested in this field builds an inclusive path toward the security experts of tomorrow and a safer internet for everyone.”

Steve Povolny, Director of Security Research at Exabeam

“In 2024, nation-states will increasingly develop their own large language models (LLMs).

Nation-states will remain a persistent threat in 2024, especially in light of the growing use of artificial intelligence (AI). With almost infinite resources at their disposal, nation-states will likely develop their own LLM-based generative AI systems specifically for malware. They’ll hire large teams to evolve models and build next-generation development tools that will be difficult to combat. The current geopolitical conflicts around the world will likely add fuel to the fire. Hacking operations are being used in tandem with military assaults to gather intelligence for war crimes prosecutions and to initiate disruptive actions that threaten civil society.

To combat nation-states using AI, businesses and government organizations must collaborate with cybersecurity providers to assess their resilience to sophisticated attacks, implement new artificial intelligence capabilities, and improve training and processes to optimize their understanding of these tools.”

Javed Hasan, CEO and Co-Founder of Lineaje

“Organizations’ inability to identify the lineage of AI is going to lead to an increase in software supply chain attacks in 2024.

Over the course of the last year, organizations have been heavily focused on how to prevent cyberattacks on AI. There’s only one problem: everyone is focusing on the wrong aspect. Many security teams have zeroed in on threats against AI once it’s deployed. Organizations are concerned about a threat actor using AI to prompt engineering, IT, or security teams into taking action that could lead to a compromise.

The truth is that the best time to compromise AI is when it is being built. Much like the majority of today’s software, AI is primarily built from open-source software. The ability to determine who created the initial AI models, with what bias, and with what intent, is by and large far more critical to preventing gaps in an organization’s security posture.

I suspect that few organizations have considered this approach, and as a result, we’ll see all kinds of interesting challenges and issues emerge in the coming months.”

Ravi Pandey, Sr. Director of Vulnerability Management Services at Securin

“As we look ahead to 2024, the cybersecurity industry is expected to undergo several exciting new developments. New technologies, techniques, and companies have made an impression on the industry with unique and exciting products; on the other hand, 2023 was another record-breaking year for cybercriminals and other cyber threats. Bad actors are taking advantage of the new methods automation has afforded them and are creating more sophisticated tactics to leverage against all kinds of organizations. With the emergence of new threats, techniques, and attackers, security professionals must remain vigilant at all times. Here are some of my detailed thoughts on where the industry is headed.

Firstly, as we look ahead to the future of cybersecurity solutions, artificial intelligence (AI) will continue to play a critical role in computer security. Automation will enable us to scale and innovate more easily, simplifying the process. However, we must also be aware that cybercriminals will be quick to adapt and are already using this new technology on the offensive— identifying and exploiting vulnerabilities within an organization’s attack surface. These bad actors are taking advantage of AI to launch more complex attacks at an even faster rate. As cybersecurity leaders, we must focus on using AI as part of a defensive strategy to counter these threats and automate preventative measures. Specialized testing of AI applications will soon become a standard practice to assess their security and will be used to find potential vulnerabilities within companies’ networks.

Secondly, cyberattacks overall are expected to increase; ransomware groups are targeting vendors, government agencies, and critical infrastructure in the United States. Over the past five years, cyberattacks have surged, and this trend shows no signs of slowing down as cybercriminals move to target supply chains and zero-day vulnerabilities with relentless voracity. Breaches like the one involving the MOVEit file-transfer tool will continue to have a lasting, rippling impact across organizations. With the assistance of AI, particularly generative AI (GenAI) technology, attackers will be able to refine their techniques, increasing their speed and effectiveness. GenAI will allow criminal cyber groups to quickly fabricate convincing phishing emails and messages to gain initial access into an organization. Cyber breaches and ransomware attacks have the potential to cost companies millions of dollars in remediation expenses. Organizations must, therefore, be proactive in implementing and updating their cybersecurity measures to combat these threats.

Thirdly, external Attack Surface Management (ASM) will become an essential aspect of comprehensive cybersecurity strategies. Unfortunately, many organizations currently lack the vulnerability management and validation capabilities needed to effectively manage their external attack surface. Through the US government’s Cybersecurity and Infrastructure Security Agency (CISA), there continues to be a national effort to understand, manage, and reduce risk to the country’s cyber and physical infrastructure. This effort includes a national mandate for asset discovery and incident disclosure, which will hopefully lead to increased trust and faster response times between the private sector and the government in the event of a cyber incident. With both AI and cognitive human intelligence driving these initiatives, new security practices must be developed and tested to manage the external attack surface and protect organizations from irreparable damage.”

Diana Freeman, Senior Manager Governance Advocacy & Initiatives at Diligent

“With the explosion of artificial intelligence (AI) over the past few months, we are seeing the topic come to the fore in conversations at school board meetings and local councils.

According to a Walton Family Foundation national survey, 76 percent of teachers agree that integrating ChatGPT in schools will be important for the future. However, a further survey of 1,000 teachers and professors revealed that 71 percent of teachers say their school does not have a policy regulating ChatGPT use.

In 2024, as AI adoption gains traction in education, school boards must establish a roadmap for ethical governance and comprehensive oversight to responsibly maximize the applicable benefits of AI technology, while adhering to privacy guidelines and laws. Boards should take a leadership role in staying informed and must seek out educational resources in order to develop policy and oversight of this rapidly evolving technology.”

Richard Copeland, CEO of Leaseweb USA

“The emergence of AI and machine learning has unveiled a need for heavy computing power to train machine learning models, recognize images, model neural networks, detect fraud, and more. Companies looking to leverage AI and machine learning (ML) will require a hosting provider that ensures high availability, flexible configurations, and powerful computing options to address scalability and cost concerns. As AI and ML technologies become more widespread, we can expect demand for cloud computing to increase over the next few years.”

Connie Stack, CEO of Next DLP

“In 2024, organizations will be pressured to consolidate their security stack. Driven by a continued shortage of cybersecurity talent and by cost-saving initiatives, CISOs will increasingly be pressured by non-security-focused peers and executives to adopt some of Big Tech’s solutions as the single source of data protection. Consolidation is here to stay, but putting all your eggs in one basket is never a good strategy, in life or in cybersecurity. There’s a long list of pros and cons. Cost savings is the core pro for a ‘good enough’ broad platform, but CISOs must consider the cons seriously. From solution gaps, to narrow OS and app coverage, to the additional staff or consultants required to manage complex implementations, the widely touted cost savings on software subscriptions can quickly get eaten up by supplemental point solutions and consulting fees. More red flags become apparent when considering how challenging it is to get resolution on support tickets and feature requests.

As we approach the New Year, I would remind anyone looking to consolidate in 2024 to evaluate their current stack, identify which tools can be replaced, and develop a roadmap tailored to their specific security goals. Consolidation involves more than adopting new technology or embracing an aggressively discounted license that finance teams adore; it’s about reshaping your security strategy, leveraging Big Tech and other specialist solution providers, quantifying the total cost of ownership, understanding your gaps, and aligning them with your organization’s goals and security needs.”

Richard Bird, Chief Security Officer at Traceable

“In 2024 we will continue to see a hockey-stick curve in the upward trend of APIs being used to attack your organization, and a more moderate curve in the number of companies moving from thinking about doing something about API security to actually doing something about it. The market is proving stubborn in acknowledging the scale of the problem, but the bad actors are showing no restraint in using API security weaknesses to their advantage.

My number one piece of advice going into 2024 is to stop trusting that your APIs are secure and to start asking hard questions about how exposed your organization currently is to API key theft, API transactional fraud, and authorization-level exploits. Until you get curious enough to start digging, APIs will remain your greatest unmitigated risk in 2024.”
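One of those hard questions, whether API keys are sitting in plain text in source or configuration, can be approximated with a simple scan. The sketch below is illustrative, not Traceable’s product; the two patterns shown (the AWS access-key-ID prefix format and a generic hard-coded `api_key` assignment) are a tiny subset of what real secret scanners check:

```python
# Minimal secret scan: look for strings that resemble hard-coded API
# credentials in a blob of source or config text.
import re

KEY_PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),  # AWS access key ID prefix format
    # generic `api_key = "..."` / `"api_key": "..."` assignments
    re.compile(r"api[_-]?key[\"']?\s*[:=]\s*[\"'][A-Za-z0-9]{16,}[\"']", re.I),
]

def find_exposed_keys(text: str) -> list[str]:
    """Return substrings that look like hard-coded API credentials."""
    hits = []
    for pattern in KEY_PATTERNS:
        hits.extend(pattern.findall(text))
    return hits

# Hypothetical file contents containing two leaked credentials
sample = 'config = {"api_key": "Zx9q8LmN4pQr7StU"}\naws = "AKIAABCDEFGHIJKLMNOP"'
hits = find_exposed_keys(sample)
```

Both planted credentials are caught here, but a scan like this is only a starting point: dedicated secret-scanning tools use hundreds of provider-specific rules plus entropy checks, and finding a key should trigger rotation, not just removal.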

Tom Ammirati, CRO at PlainID 

“The security landscape will continue to present new and challenging obstacles for organizations in 2024. We saw the global average cost of a data breach in 2023 jump to $4.45 million, and as ransomware attacks continue to persist, that number will only go up in the coming year. To avoid this kind of payout, many organizations are investing in cyber insurance; however, that too will only get more expensive as its demand grows. In an attempt to decrease the financial and operational burden of devastating cyberattacks on organizations, regulators and legislators will ramp up their initiatives to hold cybersecurity firms accountable for breach notifications, security posture management and zero trust initiatives.

Innovative firms seeking to provide a holistic cybersecurity solution will look to advance identity and authorization concepts such as identity security posture management (ISPM) and identity threat detection and response (ITDR). However, both the ISPM and ITDR spaces are still nascent, and analysts are still forming their views on what these categories entail, and whether they are in fact categories or simply functionalities. Additionally, cybersecurity firms will continue to evaluate AI and its potential for proactively remediating breaches as a whole and, in particular, for addressing identity-related breaches and attack surfaces.

While high-tech solutions like AI will continue to dominate the cybersecurity conversation, there’s a low-tech option that organizations must invest in more in 2024: cybersecurity training for employees. Of course, all employees need functional training when it comes to using new or updated technology, but organizations can dramatically improve their cybersecurity posture by educating employees on both historical and trending threats so they are more familiar with what an attack can look like. I foresee an increase in government programs designed to harden and regulate infrastructure and to hold firms accountable for cyber negligence, which will force private and public firms to further their investment in cybersecurity at all levels. In turn, this will result in more and better programs in educational systems to build the skill sets of security practitioners.”

Caroline Vignollet, SVP of Research & Development at OneSpan

“In this ever-evolving tech landscape, as AI advances, so do the threats, especially with quantum computing on the rise. These increasingly complex challenges have highlighted the growing skills gap in the cybersecurity realm, with organizations already feeling the ramifications. This shortage of experts is critical, and by 2024, the demand will surge, emphasizing the urgent need for expanded skill sets. To navigate this landscape, organizations must prioritize fostering a culture of innovation. This proactive mindset not only anticipates threats but also encourages creative solutions. By investing in research, nurturing talent, and encouraging new approaches, companies can fortify their digital defenses and stay ahead of threats.”

Andre Durand, Co-Founder & CEO of Ping Identity

“Identity has always been a gatekeeper of authenticity and security, but over the past few years, it’s become even more central and more critical to securing everything and everyone in an increasingly distributed world. As identity has assumed the mantle of ‘the new perimeter’, and as companies have sought to centralize its management, fraud has shifted its focus to penetrating our identity systems and controls at every step in the identity lifecycle, from new account registration to authentication. 2024 is a year when companies need to get very serious about protecting their identity infrastructure, and AI will fuel this imperative. Welcome to the year of verify more, trust less, when ‘authenticated’ becomes the new ‘authentic.’ Moving forward, all unauthenticated channels will become untrustworthy by default as organizations bolster security on the identity infrastructure.”

Mike Scott, CISO at Immuta

“Third-party risk will evolve as a big data-security-related challenge in the coming year as organizations of all sizes continue their transition to the cloud. It’s clear teams can’t accomplish the same amount of work at scale with on-prem solutions as they can in the cloud, but with this transition comes a pressing need to understand the risks of integrating with a third party and monitor that third party on an ongoing basis. Organizations tend to want to move quickly, but it’s important that business leaders take the time to evaluate and compare the security capabilities of these vendors to ensure they are not introducing more risk to their data.”

Sameer Hajarnis, SVP and GM of Digital Agreements at OneSpan

“As we approach 2024, the evolution of e-signature methods will redefine the landscape of digital agreements and play an important role in establishing digital identities. The shift from physical documents to digitized formats represents just the initial phase of this transformation. Looking ahead, I see wider adoption of alternative e-signature methods, potentially leveraging technologies like facial recognition or even audio-based authentication. However, the rapid pace of innovation in this realm contrasts with the relatively slow progress in regulatory frameworks and compliance standards, so organizations need to be certain of the security of their e-signature solutions.”

Roman Arutyunov, Co-Founder and SVP Products at Xage Security

“As we approach the new year, the escalation of geopolitical tensions poses a serious threat to critical infrastructure. While nation-state threats loom, opportunistic ransomware groups taking advantage of these situations also pose significant risks. Ransomware-as-a-service continues to rise, following the same repeated pattern of credential theft, privilege escalation, and lateral movement.

To counter these threats, emphasis should be placed on proactive solutions, eliminating compromised credentials, securing access, and controlling any east-west access between machines, devices, or apps. As such, investments should prioritize a strong foundation in protection rather than detection and response strategies. Additionally, we can expect to see more CISA-driven regulation and enforcement for key sectors beyond the TSA and EPA, such as critical manufacturing, particularly given the recent Clorox attack having a lasting impact on operations.

A promising sign is that we are beginning to see a shift in cybersecurity investment strategies that better reflects the current threat landscape. Companies are recognizing that threat hunting and responding to endless detections and false positives consume too much of their precious security resources, and they’re growing tired of chasing needles in a haystack. They are now turning their attention to reducing the attack surface by proactively protecting their assets. By prioritizing tangible protection solutions that enhance productivity while complying with expanding regulations, organizations can ensure they can address emerging threats from around the globe in 2024 and beyond.”

Karl Fosaeen, VP of Research at NetSPI

“Across industries, even with workloads shifting to the cloud, organizations suffer from technical debt and improper IT team training – causing poorly implemented and architected cloud migration strategies. In 2024, IT teams will look to turn this around and keep pace with the technical skills needed to secure digital transformations. Specifically, I expect to see IT teams limit account user access to production cloud environments and monitor configurations for drift to help identify potential problems introduced with code changes.

Every cloud provider has, more or less, experienced public difficulties with remediation efforts and patches taking a long time. I anticipate seeing organizations switch to a more flexible deployment model in the new year that allows for faster shifts between cloud providers due to security issues or unexpected changes in pricing. Microsoft’s recent “Secure Future Initiative” is just the start of rebuilding public trust in the cloud.”

Nick Carroll, Cyber Incident Response Manager, Raytheon, an RTX Business

“As we head into 2024, organizations will be challenged to strengthen their defenses faster than cyber threats are evolving. This ‘come from behind’ rush to keep pace with attackers can often lead to the harmful practice of organizations skipping the foundational basics of cyber defense and failing to establish a general sense of cyber awareness within the business. Without a solid security culture at the foundation, security tools, such as expensive firewalls or endpoint detection and response (EDR), will ultimately become ineffective down the line. If organizations haven’t already, they must begin to build cybersecurity awareness among employees and third-party partners, while also determining the best path for how to integrate security into the organization’s culture and operations. Once these steps are taken, organizations will have a solid organizational footing that will position them for success in their cyber defense initiatives for the long run.”

Chaim Mazal, Chief Security Officer, Gigamon

“Security data lakes are the future of cybersecurity – Data lakes have long been leveraged in other industries, but are only now becoming a solution to the growing data security concern. Security data lakes haven’t traditionally been used due to the lack of data available in the security industry. With the continued shift to hybrid cloud environments, data automatically lives behind encryption: all cloud traffic is encrypted. While encryption potentially limits viewership by cybercriminals, it also limits the data security professionals have access to in real time. However, visibility into encrypted traffic is essential as cybercriminals traverse laterally in the network, undetected. As organizations prioritize visibility into this traffic, security data lakes can be a game-changer. By gathering all available data into a security data lake, organizations will be able to create overlays that enable them to monitor what’s happening across the network in real time.”

Shane Buckley, President & CEO, Gigamon

“The future of cybersecurity isn’t AI. It’s data – AI is the shiny new distraction to investors and enterprises across industries. While it has the potential to change the security landscape dramatically, it cannot do it on its own. Large language models (LLMs) are only as accurate as the data within them. However, with 95 percent of network traffic encrypted, there is a surplus amount of data not visible – and therefore not being used – to optimize AI toolsets. Without that data, networks, organizations, employees, and customers are at substantial risk of being compromised. As organizations look to prioritize budgets for 2024 and look to do more with less, they must have visibility into encrypted cloud traffic to not only improve their security posture but also make the most of AI toolsets.”

Merritt Baer, Field CISO at Lacework

“In 2024, I predict that a large new open source vulnerability will be disclosed. There will always be a next “log4j”. Security and open source communities are intertwined in really healthy ways, but they also require ongoing maintenance and repair. Additionally, I predict there will be a big security issue in an AI model, and there will be a big security issue in using AI for security. On top of all of this, the SEC will continue to apply standards in more aggressive ways, such as the new 4-day incident disclosure reporting requirements that went into effect on Dec. 18.”

Will LaSala, Field CTO at OneSpan

“In 2023, we saw generative AI take off. Many companies jumped on implementing and using genAI-powered technologies, and they are now realizing the implications of this rapid adoption both internally and externally, namely with regard to trust and security. To account for this, in 2024, the market will need to create and adopt new solutions focused on reestablishing trust within today’s digital world. We can expect to see an uptick in solutions that focus on verifying digital assets online, as well as digital agreements. With digital transactions at the core of every business, we need to prepare ourselves for an upgrade in innovation and build confidence into every interaction we have with customers.”

Frederik Mennes, Director Product Management & Business Strategy at OneSpan

“As we look to 2024, there will be an uptick in new industries investing in security and digitization tools. Industries that have been slower to digitize, such as the energy, mortgage, and transportation sectors are being hit with new regulations, forcing investment in security measures like mobile and cloud authentication along with the adoption of secure e-signatures.

With so much business now conducted online, there’s a renewed focus on protecting the transaction, forcing both companies and consumers to take a closer look at their security posture. As new regulations are adopted, such as the Digital Operational Resilience Act (DORA) in the EU, more organizations will adopt FIDO (Fast Identity Online) and phishing-resistant solutions in the financial sector. Our world of connection is changing, and to compete in 2024, security and digitization must be at the forefront – across all industries.”

Pukar Hamal, Founder and CEO of SecurityPal

“As AI regulations are codified in 2024 — and even before then, when companies feel obligated to take a more scrutinous look at how vendors are using their data to make their business operations run smoother, more effectively, and so on — we’ll start to see a greater divide between established, far-reaching vendors and newer, more specialized entrants to the market. The former will have a clear advantage: companies are already using their products and have undergone the necessary security and GRC reviews, so deciding whether or not the vendor deserves to continue using their data to refine AI capabilities is relatively simple. New entrants, however, will have to have significantly compelling value propositions and be able to convince an ever more security and privacy-conscious GRC team that they are the best solution.”

Joe Palmer, Chief Product & Innovation Officer at iProov

“Over the past year, many financial services organizations have expanded remote digital access to meet user demand. However, this has widened the digital attack surface and created opportunities for fraudsters. The US financial services sector has been slower to adopt digital identity technologies than some other regions, which could be attributed to the challenges it faces around regulating interoperability and data exchange. Yet, with synthetic identity fraud expected to generate at least $23 billion in losses by 2030, pressure is mounting from all angles. Consumers expect to open accounts and access services remotely with speed and ease, while fraudsters undermine security through online channels and siphon money. All the while, there is the serious threat of Know Your Customer (KYC) and Anti-Money Laundering (AML) non-compliance. Penalties for this include huge fines and potentially even criminal proceedings. Further, there is an increased risk of bypassing sanctions and financing state adversaries. In response, many financial institutions are being prompted to take action. This has involved replacing cumbersome onboarding processes and supplanting outdated authentication methods like passwords and passcodes with advanced technologies to remotely onboard and authenticate existing online banking customers.
One of the front-runners is facial biometric verification technology, which delivers unmatched convenience and accessibility for customers while at the same time presenting unmatched security challenges for adversaries. More financial institutions will recognize how biometric verification will reshape and redefine the positive impact that technology can have in balancing security with customer experience, and will make the switch.”

Andrew Bud, Founder & CEO of iProov

“An estimated 850 million people worldwide lack a legal form of identification, and without identity, people struggle to open a bank account, gain employment, and access healthcare, which leaves them financially excluded. Digital identity programs improve access to digital services and opportunities. They enable people to assert identity, access online platforms, and participate in digital government initiatives. Supported by investment from World Bank funds, digital identity programs can assist less advanced economies in preventing identity theft and fraud, as well as provide an alternative way for people to prove their identities and access essential services such as benefits, healthcare, and education. Based on decentralized identity, these programs will enable users to digitally store and exchange identity documents, such as a driver’s license, and credentials, such as diplomas, and authenticate without a central authority. A decentralized identity puts the user in control by allowing them to manage their identity in a distributed approach. These programs will offer the convenience end-users now demand and open essential pathways for previously disadvantaged or marginalized individuals to access financial and welfare services.”

Peter Evans, CEO of Xtract One Technologies

“In 2024, the landscape of artificial intelligence is set for a transformative shift as it moves from being a mere buzzword to a realm of pragmatic and purpose-built applications – the year ahead shows a transition to a climate where practicality is at the forefront. Drawing inspiration from the historical cycle of tech innovations like the Segway, TiVo, Google Glass, and Palm Pilot, the pattern suggests a departure from hyped and over-marketed tech innovations to tangible, problem-solving applications addressing specific threats. While the general excitement surrounding AI might cool down, specific solutions tackling real-world problems will rise to the forefront – take, for instance, physical security. This evolution will start to extend into the daily lives of individuals, with the integration of AI threat detection at entry points becoming commonplace. Think of it as fire alarms for the digital age, where AI-powered weapons detection systems become mandatory instead of just niche solutions. This mandate has the potential to spur innovation and offer proactive safety measures and advanced insights.

Amidst this technological transformation, Generative AI is poised to play a pivotal role, and its influence is expected to become more mainstream with new applications. Though still in its early stages, Generative AI has the creative potential to redefine how we create, design, define, draw, and express ourselves. With this continued innovation comes an opportunity for the adoption of pragmatic, AI-driven solutions to today’s modern problems. As more states, education systems, sports leagues, and other organizations make weapons prevention systems mandatory, there is a growing demand for next-generation solutions. Scaling these systems cost-effectively at every door, gate, and entrance will drive broader adoption versus traditional labor-based manual approaches, and can provide advanced insights to heighten safety.”

Neil Serebryany, Founder and CEO of CalypsoAI

“Given the prevalence of generative AI, we’re going to see the number of security incidents involving these tools grow in volume and complexity in 2024. Specifically, we’ll see the first large-scale breach of a foundation model provider, such as OpenAI, Microsoft, Google, etc. This will lead to a large-scale security incident thanks to all the data – including personal identifiable information and in some cases company secrets – that has been sent to the model by hundreds of millions of regular users over the past year. To better protect themselves, and stop valuable, private information from entering public LLMs, enterprises need to implement appropriate guardrails that ensure data protection and governance, ethical usage, user monitoring and oversight, and full auditability. Only with the right safeguards in place can companies take advantage of all the benefits generative AI has to offer, without worrying about the risks of data exposure.”

Andrew Barnett, Chief Strategy Officer at Cymulate

“We’re due for an extinction event in 2024. Even following the most public-facing breaches, organizations have somehow been able to recover both their reputation and their stock prices within around three quarters of an attack. This isn’t sustainable, and frankly, it’s shocking that we haven’t seen businesses fail after a breach of that magnitude. 2024 will change things. The introduction of AI has made smart criminals even smarter, and I anticipate we will see a company face extinction because of an attack in the coming year.”

Mike DeNapoli, Director and Cybersecurity Architect at Cymulate

“New SEC regs are going to shake up business as we know it. Significant changes will arise in 2024 as a result of the newly adopted SEC regulations. This will include a range of responses as local government entities, law firms, and other countries follow suit and adapt their own rules and regulations. States like California and New York have previously done this, but we can expect to see other state and local governments begin to ramp up their regional regulations, with particular attention to data control and privacy, given the SEC regulations’ focus on material impact. Other countries with securities and exchange regulatory bodies will also put forward their own regulations, requiring strict notification schedules and detailed annual reports. Further, we expect law firms and practices to increase their activity around individual and class-action lawsuits against organizations that create some form of perceived or actual harm that can be used as the basis for recovering damages.”

Dr. Brett Walkenhorst, CTO at Bastille

“Protecting against wireless threats will be a key focus in 2024 across industries. The Department of Defense (DoD) has implemented a September 2024 deadline for SCIFs and SAPFs to implement electronic device detection systems to help prevent classified leaks. These detection systems will look for unauthorized devices such as cell phones, wearables, laptops and tablets, medical devices, USB cables with hidden Wi-Fi and Bluetooth data extraction capabilities, and any device emitting cellular, Wi-Fi, Bluetooth, or BLE signals. Wireless threats are not limited to the defense and intelligence communities – enterprises and organizations face very similar security challenges. It is imperative for organizations across all industries to adopt security measures and technology such as Bastille’s Wireless Intrusion Detection System (WIDS) to protect sensitive information and prevent it from being compromised.”

Ofer Friedman, Chief Business Development Officer at AU10TIX

“2024 will be the year of the inverse parabola of fraudster AI adoption. Most identity fraudsters will not be using AI, so it will be the opposite of the typical bell curve. Instead, AI will be effectively adopted primarily by, on one end, amateurs using the free or very inexpensive applications available, and on the other end, sophisticated criminal organizations using professional tools and injection to commit large-scale identity fraud.”

Kaarel Kotkas, CEO and Founder of Veriff

“There will be an increase in new founders using AI to solve society’s challenges. New founders are in a prime position to solve complex problems and societal issues with AI solutions. As digital natives, this generation of entrepreneurs has the innate ability to understand AI, its applications, and how it can influence the digital age – for better or worse. For example, in 2023, fake identities and deepfakes became a significant challenge to identity verification – 85 percent of all fraud in 2023 was impersonation fraud. But, despite the threats it can pose, AI is also applied to provide fast and accurate verification and authentication of real users. While the AI threat landscape constantly evolves, we should look to these new leaders to ensure their companies are equipped to easily implement new techniques to solve major challenges ranging from security to predictive analytics to user authentication.”

David Divitt, Senior Director of Fraud Prevention and Experience at Veriff

“Access to advanced technologies will be more widespread. There has been a 20 percent rise in overall fraud in the past year, and it will continue into 2024. We will see the number of account takeovers using deepfakes with liveness rise as the use of biometrics for authentication purposes increases. As tools like AI become easier and cheaper to access and use, we will see more impersonation and identity fraud-type attacks. We’ll see more counterfeit attacks pushed at the masses, as well as at-scale mass attacks that use deepfake libraries and acquired identities. The trifecta of counterfeit templated docs, deepfake biometrics, and mass stolen credentials will continue to be a looming threat.”

Suvrat Joshi, Senior Vice President of Product at Veriff

“Authentication will have an increased focus on customer experience. Traditional multi-factor authentication should no longer be seen as an adequate security strategy – for the companies or their customers. Cybersecurity threats continue to become more advanced and while companies seek to improve their security methods, their users desire more streamlined and efficient user experiences. To meet these expectations and needs, security leaders will continue to integrate and move to a single device to be used for all forms of digital interaction with a customer and will encourage the use of biometric authentication, like facial recognition, to verify identity while ensuring a positive customer experience.”
