Ad Image

The Definitive Guide to Artificial Intelligence Predictions for 2024

For our 5th annual Insight Jam LIVE! Solutions Review editors sourced this resource guide of artificial intelligence predictions for 2024 from Insight Jam, its new community of enterprise tech experts.

Note: Artificial intelligence predictions are listed in the order we received them.

Artificial Intelligence Predictions from Experts for 2024


Sam Gupta, Founder and CEO of ElevatIQ

The year is going to be all over the place, with dramatic events occurring every single quarter

“Because of the uncertainty, unfortunately, digital transformation initiatives face the most impact. AI is probably the only hope we have. But it might take a while before we see the real impact of how AI is going to impact how we conduct business. “

Moe Tanabian, Chief Product Officer of Cognite

Digital mavericks – or the new age CIO – will need to prioritize the below in order to see success in 2024

“Stop investing in IT and data and analytics skills and focus more on connecting AI-powered use cases to business impact. Successful digital mavericks know that their organization’s charter and KPIs must be even more tied to business value and operational gains. Instead of deploying proofs of concepts, their KPIs must reflect business impact, successful scaling, and other product-like metrics.”

“Acquire capabilities to make your generative AI deployments trustworthy, safe and secure. Combat hallucinations, security vulnerabilities, and privacy risks associated with LLMs and the meteoric rise of generative AI by implementing contextualized industrial knowledge graphs, strong anonymization, and cybersecurity measures.”

“Speed of innovation will matter more in 2024 than any other year, so challenge traditional thinking around DIY (do-it-yourself) projects and technology, especially with the availability of generative AI enabled tools to innovate in Industrial Operations faster and cheaper. Instead of making a name based on ‘completeness and sophistication of tech stack,’ consider building a reputation based on ‘time to scaled value,’ a far more critical metric.”

“Broad experience with ChatGPT and other generative AI-driven agents are rapidly changing expectations of simple and immediate access to data, especially industrial data. Embrace user experience frameworks that take full advantage of natural language interactions to define how individuals can access, visualize and organize data in the course of their work.”

Rahul Pradhan, Vice President of Product and Strategy at Couchbase

Expect a paradigm shift from model-centric to data-centric AI

“Data is key in modern-day machine learning, but it needs to be addressed and handled properly in AI projects. Because today’s AI takes a model-centric approach, hundreds of hours are wasted on tuning a model built on low-quality data.”

“As AI models mature, evolve and increase, the focus will shift to bringing models closer to the data rather than the other way around. Data-centric AI will enable organizations to deliver both generative and predictive experiences that are grounded in the freshest data. This will significantly improve the output of the models while reducing hallucinations.”

Multimodal LLMs and databases will enable a new frontier of AI apps across industries

“One of the most exciting trends for 2024 will be the rise of multimodal LLMs. With this emergence, the need for multimodal databases that can store, manage and allow efficient querying across diverse data types has grown. However, the size and complexity of multimodal datasets pose a challenge for traditional databases, which are typically designed to store and query a single type of data, such as text or images. “

“Multimodal databases, on the other hand, are much more versatile and powerful. They represent a natural progression in the evolution of LLMs to incorporate the different aspects of processing and understanding information using multiple modalities such as text, images, audio and video. There will be a number of use cases and industries that will benefit directly from the multimodal approach including healthcare, robotics, e-commerce, education, retail and gaming. Multimodal databases will see significant growth and investments in 2024 and beyond — so businesses can continue to drive AI-powered applications.”

Priya Rajagopal, Director of Product Management at Couchbase

AI tools will separate the good developers from the exceptional ones, playing an integral role in developer productivity

“I predict that AI tools will separate the good developers from the exceptional ones. Good developers will lean on AI tools to lighten their workload. Exceptional developers will use AI tools to boost productivity on repetitive, mundane tasks so they can focus more on being creative, tackling the hard problems and to handle the higher value tasks that promote innovation.”

“While I caution against developers getting too reliant on AI tools and leaning on productivity tools to do all or most of their work for them, the reality is that AI will continue to play a critical role in developer productivity, as long as developers understand the limitations of these tools and exercise good judgment when using AI tools. AI overuse can stifle innovation and critical thinking – and often the results from these tools may not be the most accurate, up-to-date or efficient way to solve the problem.”

Sarah Choi, Sr. Manager of Product Marketing at SonicWall

AI tools will separate the good developers from the exceptional ones, playing an integral role in developer productivity

“Companies looking to drive productivity and efficiency will invest in AI technology giving them opportunities to evolve business models and provide near-term operational efficiencies. However, while AI will help enterprises evolve and accelerate their digital transformations, it will need to mitigate emerging risks by implementing a governance model that accommodates oversight for sensitive data and provides a looking glass into potential misinformation. Early adopters will help shape these new generative AI tools to deliver real value to customers and the bottom line – but the question remains: “Will there be enough mass adoption to overcome the adoption curve chasm, or will generative AI be another hyped, emerging tech fad like blockchain and virtual reality?”

Nima Negahban, CEO and Co-Founder at Kinetica

Generative AI turns its focus towards structured, enterprise data

“Businesses will embrace the use of generative AI for extracting insights from structured numeric data, enhancing generative AI’s conventional applications in producing original content from images, video, text and audio. Generative AI will persist in automating data analysis, streamlining the rapid identification of patterns, anomalies, and trends, particularly in sensor and machine data use cases. This automation will bolster predictive analytics, enabling businesses to proactively respond to changing conditions, optimizing operations, and improving customer experiences.”

Jason Beres, Sr. VP of Developer Tools at Infragistics

AI Technology Will Not Replace Developers

“AI is moving to the forefront of software development, with IT leaders using AI to speed time to market and alleviate the developer shortage. While generative AI–based tools can speed up many common developer tasks, complex tasks remain in the domain of developers for now. AI technology will be used to augment developers rather than replace them as some tasks continue to demand skilled developer expertise.”

Low-Code/No-Code Tools Will Dominate Software Development in 2024

“In 2024, low-code/no-code tools will dominate software development as they bring the power of app development to users across the business. The rise of “citizen developers” has proven that as we move toward a no-code future, people without coding experience are changing the working world. As tech companies adopt low-code/no-code tools, they’ll save time and money, rather than falling behind early adopters.”

Vasu Sattenapalli, CEO at RightData

Generative AI Will Move to Modern Data Management

“Historically, data management is a bit of a black box with highly technical skills required to create a strategy and manage data efficiently. With the help of LLMs, modern data management will change its framework, allowing users to participate in the entire data stack in a fully governed and compliant manner.”

NLP-Powered Analytics Will Be the Next Wave of Self Service

“Analytics have been stuck in dashboards, which will no longer be the only way to consume business insights. Voice and Generative AI will enter the analytics space where you can ask questions of your data verbally and get a response back in minutes, if not seconds. Imagine even pulling out your phone with an app specific to your organization’s data and being able to access a world of insights. It’s coming!”

Naren Narendran, Chief Scientist at Aerospike

Decentralization of LLMs

“Though LLMs are impressive in their generality, they require huge amounts of compute and storage to develop, tune, and use, and thus may be cost-prohibitive to the overwhelming majority of organizations. Only companies with vastly deep resources have the means to access them. Since there needs to be a path forward for making them more economically viable, we should expect to see solutions that decentralize and democratize their use. We should anticipate more numerous, more focused, and smaller models that consume less power becoming more readily available to a wider range of users. These focused models should also be less susceptible to the hallucination effects from which LLMs often suffer.”

Mike Loukides, VP of Emerging Tech Content at O’Reilly Media

GenAI Will Change the Nature of Work for Programmers 

“GenAI will change the nature of work for programmers and how future programmers learn. Writing source code will become easier and faster, but programming is less about grinding out lines of code than it is about solving problems. GenAI will allow programmers to spend more time understanding the problems they need to solve, managing complexity, and testing the results, resulting in better software: software that’s more reliable and easier to use.”

A New Generation of AI-Assisted Programming Tools 

“Copilot is just the start. We’ll see a new generation of AI-assisted programming tools. We are already seeing tools for managing prompts; we will soon have libraries of prompts designed to direct GenAI to accomplish specific tasks. And, while Copilot is primarily useful for low-level coding, we will soon see generative AI tools for high-level tasks like software architecture and design.”

AI-Assisted Programming Will Reward Higher Level Skills

“AI-assisted programming is going to reward software developers who focus on higher level skills: Software architecture, understanding users’ requirements, and thinking about how to solve problems, and of course, testing. But I don’t think there will be any new job titles. There will be a shift in the skills needed–away from low-level coding and towards higher level thinking.” 

AI Will Drive Adoption of Proactive Security Models 

“There will be a greater focus on proactive approaches and tools including firewalls, zero trust, malware, and hardening. The top GenAI threat issues are growing privacy concerns, undetectable phishing attacks, and an increase in the volume/velocity of attacks. Addressing the complex security challenges AI poses requires strategic planning and proactive measures. On O’Reilly’s learning platform, we have seen a huge increase in interest in most security topics. Governance, network security, general application security, and incident response have shown the largest increases. Security is on the map in a way that it hasn’t been in many recent years.”

Significant Attacks Against AI Applications 

“We will see significant attacks against AI applications in the wild. AI provides cyber criminals with new attack vectors, such as prompt injection, that we don’t yet know how to defend against. These attacks will include subverting AI to generate hate speech and misinformation, along with sending users to sites that install malware. Companies deploying AI in real-world applications will need to understand these new attack vectors and monitor their AI systems with these attacks in mind.”

Adrien Gendre, Chief Product Officer at Vade

A new malicious use of generative AI may emerge

“2023 was the year of generative AI and large language models (LLMs). The sophisticated capabilities of ChatGPT simultaneously captured imaginations and stoked fears of AI’s potential as a technology. That was especially true for cybersecurity, where most attention focused on ability to create content—from producing phishing templates to malicious code and more.

Content creation, while a risk to cybersecurity, is one that our modern solutions can address. The real threat is generative AI developing the ability to plan and orchestrate attacks. If that were to happen, it would mean that AI could design and execute attacks on the fly—and do so using information on the Internet. This includes details that may expose our vulnerabilities, cybersecurity solutions, and more. And the threat would only increase over time. Each failed attempt would be an opportunity for AI to learn and immediately improve.”

Jeremy Burton, CEO at Observe

Why AIOps Fall Is LLMs Triumph

“In 2024, I expect to see more companies reach a breaking point with AIOps and shift their focus towards the potential of LLMs. While AIOps was a laudable concept when introduced, in practice it has failed to live up to its promise. The idea that you could train a model on data emitted by apps, that change everyday, is nothing more than a pipe dream. Large Language Models (LLMs) appear to be a far more promising alternative because they attack the problem differently and help users make more intelligent decisions. Companies are waking up to this fact but many more will begin to act on it in the new year.”

LLMs: The Second Coming Of AI In Observability

“In 2024, it will become apparent to almost everyone that LLMs / GPT deliver meaningful productivity improvements in the world of observability. From simple help to writing RegEx’s and queries, LLMs and the friendly GPT interface will enable new users to get up to speed faster and resolve incidents faster than ever before.At the same time, AIOps (the first generation of AI) will continue to fall out of favor as those that implemented it realize that the promised benefits like root cause detection just aren’t there.”

David Strauss, CTO at Pantheon

AI website generators will ultimately be seen as toys

“In the next year, AI’s promise will primarily be realized through the cooperative support of humans who would otherwise be expected to handle the entire tasks themselves. In other words, AI will be leveraged primarily in areas where the user is already competent at the task. Think diagnostic support for physicians, writing support for marketers, coding support for engineers, and content moderation automation still sometimes subject to human review escalation. That said, there are exceptions, which David can elaborate on.”

2024 will have the first AI price wars

“It’s going to be like early VMs on the cloud all over again as vendors vie for spending. I agree with Google’s internal letter: there is no moat here for any company willing to spend prolifically on AI product development.”

Josh Koenig, CSO at Pantheon

The AI conversation will move past the “Peak of Inflated Expectations” on the Hype Cycle

“Josh predicts we’ll see some high-profile failures that puncture the narrative, and a lot of companies will cut or scale back investments as real-world applications are harder to find than initially thought. Google, Microsoft, and OpenAI will keep trucking, as will the open-source alternatives, but it’ll be a while before we hit the “Slope of Enlightenment.”

Phil Nash, Developer Advocate (JavaScript) at Sonar

Overconfidence in Generative AI code will lead to generated AI vulnerabilities

“As more and more developers use generative AI to successfully help build their products, 2024 will see the first big software vulnerabilities attributed to AI generated code. The success of using AI tools to build software will lead to overconfidence in the results and ultimately a breach that will be blamed on the AI itself. This will lead to a redoubling across the industry of previous development practices to ensure that all code, written by both developers and AI, is analyzed, tested, and compliant with quality and security standards.”

Generative AI will evolve beyond the chatbot

“The breakout star of generative AI has been ChatGPT; subsequently 2023 saw most interfaces to generative AI via chat. As designers and developers work with the technology, and as more specialized LLMs are produced, we’ll see AI fade into the background, but we’ll see more powerful applications built upon it. Right now, chatbots are hammers and everything looks like a nail, to truly use AI to its full potential we will need to move beyond that.”

Jonathan Vila, Developer Advocate (Java) at Sonar

AI-Generated code growth

“As LLMs are going to be more accessible and diverse, more generative AI code tools with integrations with specific or more focused libraries will appear. I see more development regarding test generators, UI generators, integration plumbing generators, where users with natural prompting will be able to get the necessary code, aligned (or not) with the current user’s code base.”

Ben Dechrai, Developer Advocate at Sonar

AI Coding Assistants will keep getting better

“There are many of us saying that AI won’t kill the developer role, but that’s based on the current capabilities, and the need for a human to check the computer’s “intelligence.” While Artificial General Intelligence is still a pipe dream, GenAI solutions are getting very good, especially those that are trained for specific work (i.e. ChatGPT is too generic, but CoPilot/Cody are specialized and provide better results for coding). GPT-4 is already leaps and bounds above GPT-3.5, and while some reckon GPT-5 won’t be as huge a leap, in the next year, I feel we will keep closing that gap on how much developers need to do. So, we’re going to see more developers vetting generated code instead of writing the bulk of it by hand.”

AI as a Service

“It’s already possible to use OpenAI’s ChatGPT in your own applications, but being able to model responses based on your own, proprietary datasets will bring much more value to businesses. This leads to issues of data sovereignty and confidentiality, which will see the rise of not just cloud-based AI services, but the ability to run them in siloed cloud-environments.”

Stefan Schiller, Vulnerability Research at Sonar

AI-Assisted attacks to become more sophisticated and automated

“IT security attacks leveraging AI are expected to become more sophisticated and automated. Hackers will likely use AI to analyze vast amounts of data and launch targeted attacks. AI-driven phishing attackers capable of generating highly convincing and personalized messages, which trick users into revealing sensitive information, may increase. Furthermore, AI-powered malware could adapt and evolve in real time, making it more challenging for traditional antimalware detection systems to keep up.”

Eoin Hunchy, Co-Founder and CEO at Tines

For attackers, AI is a trusty sidekick. For defenders, it’s a game-changer

“For all the FUD (fear, uncertainty, and doubt) about an AI arms race between attackers and defenders in cybersecurity, AI is proving to be far more of an asset for security teams than hackers. Generative AI is helping bad actors write malware and phishing emails, but there was no shortage of malware before AI and people were already happy to click on phishing attempts. For defenders, on the other hand, AI has been a game changer. The powerful technology is tailor-made for solving security team’s most-pressing challenges: too much data, too many tedious tasks, and not enough time, budget, or people. AI is democratizing cyber defense by quickly summarizing vast swaths of data, normalizing query languages across different tools, and removing the need for security practitioners to be coding experts. In 2024, we’ll see AI’s impact in automation as defenders use AI to make incident response more efficient. AI is a once-in-a-decade leap forward, and it’s carrying cyber defenders farther than hackers.”

Natural language will pave the way for the next evolution of no-code

“Automation is only effective when implemented by teams on the frontline. Five years ago, the best way to place powerful automation in the hands of non-technical teams was via low- or no-code interfaces. Now, with AI chatbots that let people use natural language, every single team member — from sales to security — is technical enough to put automation to work solving their own unique problems. The breakthrough in AI was the new ability to iterate in natural language, simply asking an LLM to do something a bit differently, then slightly differently again. Generative AI and LLMs are obliterating barriers to entry, like no-code tools once did for the need to know how to code, and no-code will be the next barrier to fall. We’ve already moved from programming languages like Python to Microsoft Excel or drag-and-drop interfaces. Next year, we will see more and more AI chat functions replace no-code interfaces. We can expect non-technical teams throughout organizations embracing automation in ways they never thought possible. Natural language is the future on the frontline.”

Nick King, CEO and Founder at Data Kinetic

Applied AI will seamlessly integrate, complement existing workflows 

“In 2024, Applied AI will seamlessly integrate into organizational workflows, enhancing human capabilities and improving operational efficiency. AI technologies will be user-friendly and adaptable, aligning with existing human behaviors and operational processes to facilitate easy adoption and immediate benefit realization.

AI will be designed to complement existing workflows, promoting efficiency without causing disruption or necessitating significant changes in work patterns. This approach will ensure smooth transitions, quick adoption, and immediate productivity improvements.

By aligning with human behaviors and enhancing current processes, AI will enable organizations to be more responsive and agile, easily adapting to changing conditions and evolving needs. In 2024, the focus of Applied AI will be on practical integration, ensuring that AI technologies work harmoniously within existing organizational structures to drive innovation and success.” 

2024 will catapult AI from the experimental stages into real-world applications 

“In 2024, the landscape of Applied AI will be vibrant with the evolution and proliferation of ‘agents’ and ‘model chains’ that are both general and specialized by industry. The momentum gained over the past years will catapult AI from the experimental stages directly into real-world, practical applications, driving innovation and operational excellence across various sectors. 

One of the most exciting developments will be the deployment of agents and chains of agents that are meticulously tailored and fine-tuned to meet the unique needs and challenges of specific industries such as oil and gas, healthcare, and manufacturing. These agents will be instrumental in bridging gaps, enhancing interoperability, and facilitating seamless integrations within and across industry ecosystems. 

The agents and model chains will not be one-size-fits-all; instead, they will embody a spectrum of capabilities ranging from generalized functionalities to highly specialized solutions meticulously crafted to address industry-specific challenges and objectives. This nuanced approach will enable industries to harness the full potential of Applied AI, unlocking unprecedented levels of efficiency, innovation, and strategic insight. 

In the realm of specialized agents, we will witness a surge in applications that are deeply rooted in industry knowledge, and capable of navigating the complexities and nuances inherent to each sector. These agents will drive transformative changes, enabling industries to optimize processes, enhance decision-making, and unveil new opportunities for growth and innovation. 

On the other hand, generalized agents will offer a broader array of functionalities, promoting versatility and adaptability. These agents will be pivotal in fostering cross-industry synergies, facilitating the exchange of knowledge and best practices, and nurturing a more collaborative and interconnected AI landscape.

In essence, the future of Applied AI in 2024 will be marked by a rich tapestry of agents and model chains, each bringing a unique set of capabilities and value propositions to the table, collectively driving the evolution of industries towards a more intelligent, agile, and innovative future.” 

Concerns for Applied AI include misleading marketing & AI-washing, complex interactions, exposure to upstream attacks, and more 

“Misleading Marketing & AI-Washing: The landscape is rife with companies either exaggerating their AI capabilities or ‘AI-washing’ their products—labeling them as AI-driven without substantial AI functionality. Both practices add to market confusion, set unrealistic expectations, and dilute the value of genuine AI solutions. 

Complex Interactions: As we push the boundaries of AI, understanding how models and agents interact becomes crucial. Without a deep comprehension of these interactions, we risk unforeseen consequences and inefficiencies in our AI systems.

Guardrails and Transparency: The rapid growth of AI demands robust guardrails. Ensuring that AI models and agents operate within defined and ethical boundaries is paramount. Moreover, transparency in how these systems function and make decisions is essential to maintain trust and ensure they align with our values.

Exposure to Upstream Attacks: As the demand for AI solutions grows, there’s increasing pressure to update tool chains rapidly. This haste can create opportunities for third parties to infiltrate these tools, posing a significant risk. Such vulnerabilities can grant malicious actors access to critical data, undermining the security and trustworthiness of our AI systems. 

Meaningful AI Education: One of the pressing challenges is ensuring that industries across the board have access to meaningful AI education. It’s vital that businesses and professionals understand how to leverage modern AI tools and technologies effectively. Without this foundational knowledge, industries risk missing out on the transformative potential of AI or misapplying it.” 

Paval Goldman-Kalaydin, Head of AI/ML at Sumsub

Addressing AI bias will become a priority, leading to greater collaboration among stakeholders

“The process of data ingestion, where AI algorithms consume vast quantities of information, acts as a double-edged sword. It empowers the AI to learn from the wealth of human knowledge, but also makes it susceptible to the prejudices embedded in that data. In 2024, it’s likely we’ll see more real-life examples of AI models exhibiting biased behavior – resulting in “unfair” outcomes mirroring the inequities found in our society. To combat this, collaboration among stakeholders will be essential for detecting and combating bias as we advance AI technology while upholding ethical standards. Overall, the AI community will realize the importance of engaging in ongoing discussions to help identify and reduce bias, as it is a never-ending and evolving process.”

Generative AI is a flow of innovation and development we can’t shut off, not a cold shower

“Though generative AI has been rumored to be reaching an inflection point, I believe 2024 will actually be a year where we see significant innovation and development driven by collaboration and regulation. We will see the continued development of open-source alternatives, the increased computational effectiveness of models, and, in many cases, a general-purpose large model is not necessary, rather smaller, more innovative models are what will move the needle. Language models in particular will continue to be relevant for a long time, as long as there is AI-generated fraud, we’ll need AI-generated solutions to detect it.”

Rex Ahlstrom, CTO and EVP at Syniti

Increased adoption of generative AI will drive need for clean data

“The foundation of generative AI is data. That is, to function as desired, data is what provides the basis for this new technology. However, that data also needs to be clean. Regardless of where you’re pulling the data from – whether you’re using something like modeling or a warehouse of your choice – quality data will be essential. Bad data can lead to bad recommendations, inaccuracies, bias, etc. Having a strong data governance strategy will become more important as more organizations seek to leverage the power of generative AI in their organization. Ensuring your data stewards can access and control this data will also be key.”

Generative AI will quickly move from the peak of inflated expectations to the trough of disillusionment 

“There’s a lot of hype right now around generative AI, to put it mildly. However, all of this hype means that for some organizations, adoption of this technology is more of a matter of “keeping up with the Jones” rather than because it is truly the best solution for a specific problem they are trying to solve for. As a result, we’re likely to see a lot of money invested in failed generative AI projects – hence, the failing into the trough of disillusionment. It’s the shiny new object and many CIOs and other senior leaders may feel pressured to be able to say they have a generative AI program in place. The key to limiting these failed projects will lie in really ensuring that your organization understands the specific reason for using generative AI, that it’s tied to a defined business outcome and there’s a method established for measuring the success of the investment.”

Sreekanth Menon, Vice President of AI/ML Services at Genpact

The Rise of Custom Enterprise Foundation Models (FMs) 

“The debate around open-source vs closed source will only get heated as we move to 2024. The open-source LLMs like Meta’s Llama are catching up to the closed-source LLMs like GPT-4. Both these models come with their trade-offs with regard to performance and privacy. Enterprises would want to deliver on both fronts. The recent updates, such as OpenAI Enterprise, allow enterprises to build custom models to suit their solutions. Similarly, open-source models allow enterprises to build lightweight custom models with privacy in mind. This trend will continue, and we will see custom tiny language models take center stage.”

AgentOps: Rise of multi-modal agents 

“Imagine LLM-powered agents that browse the web by imaging the web instead of just going through HTML code. The implication of such AI agents is that the back offices will run 24×7. The autonomous agents use LLMs as their brains and then perform planning, task decomposition, reflection, and execution. For example, an LLM-augmented travel website can perform end-to-end vacation planning with minimal user instructions. Making this a reality requires frameworks that maintain and monitor these agent pipelines. The following components will be key areas that will come under agent operations or AgentOps:

  • Productionizing AI agents.
  • Building agent evaluation frameworks
  • Tests and observability for AI agents.
  • Rise of AgentOps marketplaces

AgentOps will also feed on the evolution of LLMOps, which again is a consolidation of best operationalizing practices through the Responsible GenAI(RGAI) lens.”

CX Gets a Facelift with AI 

“AI will help agents contribute to success by answering questions faster and better, resolving problems on first contact, communicating clearly, and leaving the customer feeling satisfied. This will lead to new CX strategies centered around AI to design, execute, and measure new or reimagined customer service experiences. According to Forrester, the key to many of 2024’s improvements will be behind-the-scenes GenAI, which augments customer service agents’ capabilities.”

A New Kind of Knowledge Worker: AI to PI (Personal Intelligence)

“Generative AI will transform personalization, creating substantial opportunities across industries. It will attract more users, command higher prices, and enhance retention and engagement. The current price gap between digital and in-person services presents a pricing opportunity for AI apps. While costs may rise for AI-native services, consumers are willing to pay more. Enhanced personalization, particularly in high-touch industries, will be a key driver. AI can revolutionize services like therapy by offering personalized, affordable, and convenient options. We’re already seeing AI-native companies making strides, and as AI capabilities improve, they’ll approximate in-person experiences, attracting more users willing to pay premium prices. The vision is for every person to have an AI assistant, ushering in a new era of consumer services. 

Whereas, quite different from an AI that learns any challenging professional skill, a personal AI is closer to a personal assistant. According to Mostafa Suleyman (ex-DeepMind), PI applications are like “a chief of staff, it’s a friend, a confidant, a support, and it will call on the right resource at the right time depending on the task that you give it. And our AI is a personal AI, which means primarily it’s designed for good conversation, and in time, it will learn to do things for you. So, it will be able to use APIs, but that doesn’t mean that you can prompt it in the way that you prompt another language model, because this is not a language model, PI is an AI. GPT is a language model. They’re very, very different things.” It has elements of generality, but it isn’t designed with generality as a first principle, as a primary design objective. For instance, companies like Inflection AI, founded by the team behind DeepMind, are building personal intelligence interfaces like PI, which is more narrowly focused on being a personal AI. 

The AI-PI paradigm will help both the enterprise and the customer.”

Matt Wallace, Chief Technology Officer at Faction

The impact of AI is requiring organizations to modernize their data architecture

“The continuous and rapid adoption of AI will force organizations to modernize their data infrastructure in 2024. Enterprises are examining their data and it’s pushing them to have a better handle on it so technologies like AI can be properly used. Organizations will double down on data management and data integrity to ensure third-party applications are seamlessly integrated. Data practitioners will look for solutions that continuously keep data clean to quickly act on workflows. Better data means better-trained models on less data, as well as a better ability to leverage that data in AI applications that incorporate retrieval.”

Nate Berent-Spillson, VP of Engineering at Launch by NTT DATA

Transformation to transformative

“For the last 20 years, technology leaders have spent their time implementing transformation initiatives with mixed results. The pace of disruptive change is increasing, and leaders must go beyond surface-level adoption, to being transformative to maintain strategic advantage. We have finally been able to get attention on the importance of paying down legacy tech-debt, but many are just moving the problem from one place to another rather than looking at technology as an essential component that powers every product and service offering. In 2024, we will see organizations shift from “doing” transformations that are applied to technology and business, to wiring the motion of transformation itself directly into the business.”

Dean Phillips, Director of Public Sector Programs at Noname Security

AI policy will drive a divide between public and private sectors

“In 2024, I predict that there will be a persisting division between the private and public sectors as government AI policy implementation takes shape. Government agencies, along with private companies outside government, such as critical infrastructure, that are impacted by proceeding policies, will be forced to comply. However, a pronounced divide will emerge in cases where there are no government-mandated policies concerning private companies. These private entities will adhere to a wide range of AI approaches, and many will choose to create their own policies. I expect that this lack of consistency, in contrast to the structured government approach, will persist into the foreseeable future, while critical infrastructure and the Defense Industrial Base (DIB) are likely to be leaders in AI policy as they directly support government operations and national security.”

Joe Payne, CEO at Code42

AI democratization will amplify threats to corporate data and IP

“As AI technology becomes more user-friendly, employees across industries will use AI-powered solutions to streamline their workflows, automate repetitive tasks, and make data-driven decisions. 

The rise in AI-driven technologies will also exacerbate a concerning trend: increasing organizational data loss as employees have more opportunities to exfiltrate sensitive data via these new technologies. 

In 2024, this shift will pose a serious challenge to organizations, as competitors can use those same AI tools to gather intelligence on each other – putting organizations at risk of losing their competitive edge, damaging their reputation, and even impacting their profits.”

Rob Juncker, CTO at Code42

CTOs will helm the AI regulation and policy conversation

“In 2024, I anticipate the CTO role will evolve as technology leaders will play a central role in fostering collaboration between security and legal departments as AI regulation, legislation, and policy discussions continue to take shape. 

Drawing on their comprehensive knowledge of the dynamic technology landscape and how technologies can best be harnessed for business success, CTOs have a holistic grasp of the implications of AI deployment, making them instrumental in leading AI regulation discussions. By collaborating with legal and HR teams, CTOs can enhance their organizations’ readiness to navigate and comply with emerging AI regulations.”

Hitesh Sheth, CEO at Vectra AI

AI’s future will hinge on regulatory decisions

“In 2024, I predict we will witness monumental progress in AI regulation and policy. Building on President Biden’s executive order on artificial intelligence, decision-makers across governmental bodies will evaluate and put into place more concrete regulations to curb AI’s risk and harness its benefits. As AI continues to evolve, it will be important for these developing regulations to strike a balance between advocating for transparency and promoting the continued innovation that’s taking place at a rapid pace.”

Sohrob Kazerounian, Distinguished Researcher at Vectra AI

AI’s ethical, legal, political dilemmas will drive litigation

“Ethical, legal, and socio-political questions regarding AI will only get thornier. Given general political paralysis in the U.S., it is unlikely that robust regulatory and legal frameworks around regulating AI will emerge as fast as they are needed. Lawsuits regarding copyrighted material being used to train generative AI models will increase in number.”

Generative AI will influence election distrust and disinformation

“The broad availability of generative AI models, and their relative ease of use, will have far reaching effects given that the U.S. is currently in an election year. Disinformation at scale, with quality content (e.g., faked audio and video of candidates, mass produced fiction masquerading as news, etc.) will become easier than ever before. The inability to trust our senses could lead to distrust and paranoia, further breaking down social and political relations between people.”

Matt Waxman, Senior Vice President and GM for Data Protection at Veritas Technologies

The first end-to-end AI-powered robo-ransomware attack will usher in a new era of cybercrime pain for organizations

“Nearly two-thirds (65 percent) of organizations experienced a successful ransomware attack over the past two years in which an attacker gained access to their systems. While startling in its own right, this is even more troubling when paired with recent developments in artificial intelligence (AI). Already, tools like WormGPT make it easy for attackers to improve their social engineering with AI-generated phishing emails that are much more convincing than those we’ve previously learned to spot. In 2024, cybercriminals will put AI into full effect with the first end-to-end AI-driven autonomous ransomware attacks. Beginning with robocall-like automation, eventually AI will be put to work identifying targets, executing breaches, extorting victims and then depositing ransoms into attackers’ accounts, all with alarming efficiency and little human interaction.”

Generative AI-focused data compliance regulations will impact adoption

“For all its potential use cases, generative AI also carries heavy risks, not the least of which are data privacy concerns. Organizations that fail to put proper guardrails in place to stop employees from potentially breaching existing privacy regulations through the inappropriate use of generative AI tools are playing a dangerous game that is likely to bring significant consequences. Over the past 12 months, the average organization that experienced a data breach resulting in regulatory noncompliance shelled out more than US$336,000 in fines. Right now, most regulatory bodies are focused on how existing data privacy laws apply to generative AI, but as the technology continues to evolve, expect generative AI-specific legislation in 2024 that applies rules directly to these tools and the data used to train them.”

Mike Nelson

AI will shift from defense to attack, and organizations will need to prepare

“In 2023, we heard a lot about utilizing AI for defensive solutions like intrusion detection and prevention systems. But in 2024, the tables will turn, with AI being used far more often for attack surfaces. Attackers will begin using AI capabilities to harvest the landscape, learning about an individual or enterprise to later generate AI-based attacks. With today’s technology, a bad actor could pick up a phone, pull basic data from LinkedIn and other online sources to mimic a manager’s voice, and perform malicious activities like an organizational password reset.

The ability to render sites on the fly based on search can be used for legitimate or harmful activities. As AI and generative AI searches continue to mature, websites will grow more susceptible to being taken over by force. Once this technology becomes widespread, organizations could lose control of the information on their websites, but a fake page’s malicious content will look authentic thanks to AI’s ability to write, build and render a page as fast as a search result can be delivered.

Just as they’re doing with PQC, leaders will need to create a strategy to combat AI threats and assure trust for public-facing websites and other key assets.”

Justin Borgman, Co-Founder and CEO at Starburst

Companies will prioritize minding the gap between data foundations and AI innovation

“There is no AI strategy without a data strategy and companies will need to prioritize closing gaps in their data strategy; specifically, the foundational elements of more efficiently accessing more accurate data securely.”

Padhu Raman, CEO at Osa Commerce

Optimizing Use of AI Will Determine Future Supply Chain Winners

“AI and predictive analytics will separate the winners and losers over the next decade across manufacturing and retail. Leaders who harness big data to optimize inventory, forecast demand, control costs, and personalized recommendations will dominate their less analytical peers. Companies that fail to adopt will see spiraling costs and plummeting efficiency.”

Eric Purcell, Senior Vice President of Global Partner Sales at Cradlepoint

AI will become one with the network, impacting all business operations 

“If 2023 was the year of flashy AI investments, 2024 will be the year of AI impact—which may not be as visible to the naked eye. AI will move from a “tool you go to” (such as ChatGPT) to being integrated into the applications we are using everyday and empowering network connectivity. As such, we’ll begin to see the benefits of AI being integrated into all applications related to the network, bolstering network predictability, troubleshooting, security and more. Businesses will need to ensure AI transparency and security practices are adequate in order to make the most of AI.”

Mike Carpenter, VC Advisor for Lightspeed Venture Partners

AI to Drive Real-Time Intelligence and Decision Making

“Next year will be foundational for the next phase of AI. We’ll see a number of new innovations for AI, but we’re still years away from the application of bigger AI use cases. The current environment is making it easy for startups to build and prepare for the next hype cycle of AI. That said, 2024 is going to be the year of chasing profitability. Due to this, the most important trend in 2024 will be the use of AI to drive real-time intelligence and decision-making. This will ultimately revolutionize go-to-market strategies, derisk investments, and increase bottom-line value.”

Haoyuan Li, Founder and CEO at Alluxio

Compute Power is the New Oil

“The soaring demand for GPUs has outpaced industry-wide supply, making specialized compute with the right configuration a scarce resource. Compute power has now become the new oil, and organizations are wielding it as a competitive edge. In 2024, we anticipate even greater innovation and adoption of technologies to enhance compute efficiency and scale capacity as AI workloads continue to explode. In addition, specialized AI hardware, like TPUs, ASICs, FPGAs and neuromorphic chips, will become more accessible.”

Moving GenAI from Pilots to Production

“GenAI is influencing organizations’ investment decisions. While early GenAI pilots show promise, most organizations remain cautious about full production deployment due to limited hands-on experience and rapid evolution. In 2023, most organizations are on small and targeted trials to assess benefits and risks carefully. As GenAI technologies mature and become more democratized through pre-trained models, cloud computing, and open-source tools, budget allocations will shift more heavily toward GenAI in 2024.”

Balancing In-House and Vendor-Provided LLMs

“To leverage the power of LLMs, organizations need to decide between building their own models, utilizing a closed-source model like GPT4 via APIs, or fine-tuning a pre-trained open-source LLM. In 2024, as LLMs keep iterating, organizations would not want to be “locked in” to one model or one vendor. They will likely adopt a hybrid approach, balancing the use of pre-trained models with developing in-house custom models when there are tighter privacy, IP ownership, and security requirements.”

Ed Macosky, Chief Product and Technology Officer at Boomi

Artificial intelligence will bring teams closer together as leaders across every industry begin to embrace the technology

“Within the next year, AI will become the primary driver of the development life cycle — not just as an IT assistant, but as a collaborative tool. Developer and engineering teams have had their work largely restricted to the backend, but I anticipate IT leaders to become key advisors as AI becomes more ingrained in a business’ overarching goals. Both technical and non-technical staff will need to align on their AI strategy in tandem as organizations seek to utilize AI for  automation, prototyping, testing, and quality assurance to drastically reduce the time needed to develop new projects. This will enable technical staff to innovate more frequently, and non-technical staff can have a stake in building solutions, rather than just providing requirements.”

AI technology won’t be as valuable or trustworthy without data accuracy and quality

“One of the most pressing concerns in the age of AI is data quality. Data quality has always been a key ingredient for success — for example, being able to parse through overwhelming amounts of customer information to provide them with personalized experiences. However, AI has elevated the stakes when it comes to data. As organizations rely on AI more and more to supplement their developer initiatives, investing in data management is no longer just an IT concern, but a business concern. This makes having quality data to ensure the AI-based outcomes are accurate even more important than ever before. Without quality data, AI can quickly become a massive pain point, rather than an efficient solution. By ensuring data accuracy from the beginning, AI can drastically improve productivity and efficiency across every level of your organization.”

Democratization of technology will be driven by AI automation

“Democratizing technology is becoming a top priority for many, thanks to high demand and limited supply for IT talent. Being able to reskill non-technical staff quickly and effectively will be integral to overall resiliency within an organization.   With too much overhead on both infrastructure maintenance and technical training, businesses can quickly find themselves unable to adapt quickly enough in tough macroeconomic climates. AI automation will enable the drastic reduction of resources needed for maintenance, as well as reduce the amount of expertise required to have a strong understanding of their tech stack.”

Sean Knapp, Founder and CEO of Ascend.io

CIOs will make structural changes in 2024 as a result of AI

“2023 saw an explosion of interest in AI. In 2024, companies will enact sweeping top-down AI adoption mandates. We expect to see goals such as reducing Opex by 20 percent, boosting CSAT/NRR by 10 percent, and generating 10 percent in top-line revenue from AI-based products all on the table. The organizations that succeed here will make significant structural changes similar to the ones we saw during the digital transformations of the 2010s. We are already starting to see powerful roles like the Chief AI Officer assuming some core responsibilities of the CIO. It will be interesting to see if CIOs can deploy enough infrastructure automation to carve out a strong focus on AI or ultimately cede that territory to this newcomer in the C-suite.”

Generative AI will become more factual thanks to retrieval augmented generation (RAG)

“This technology will allow engineers to feed clean business data into LLMs models to reduce hallucinations and ground outputs inon factual information. This clean business data will be generated by traditional data pipelines that handle data extraction, cleansing, normalization, and enrichment on an organization-wide scale. RAG is starting to emerge now and will see increased adoption next year as businesses seek to ensure more accurate results from generative AI.”

Companies will have top-down mandates on the adoption of AI in 2024

“Many team leaders will come back from the holidays to find mandates from their CEO and CFO with pointed targets that AI adoption should achieve. Expectations like reducing Opex by 20 percent, increasing CSAT/NRR by 10 percent, and generating 10 percent topline revenue through AI-based products and experiences will be at the forefront. In service of these objectives, some C-suite teams will appoint an AI leadership role to mimic the success of digital transformation winners in the previous decade. We anticipate Chief AI Officer or similarly titled roles will become common as organizations grapple with how to rapidly integrate this new technology into legacy operations. This new role will be somewhat contentious with the increasingly fractional role of the CIO. Whether CIOs can deploy enough automation to carve out a strong focus on AI or ultimately cede that territory to this newcomer in the C-suite is something to watch closely.”

Companies that don’t have sophisticated enough automation to power AI will start to feel the burn

“As businesses implement AI to maintain their competitive edge, many will feel the effects of their disorganized data infrastructure more acutely. The effects of bad data (or not enough data) will be compounded when the stakes are raised from simply serving up bad information on a dashboard, to potentially automating the wrong decisions and behaviors based on that data. It’s only a matter of time before someone without strong data infrastructure and governance puts generative AI in a mission-critical context and suffers from a loss in accuracy.”

Dave Hoekstra, Product Evangelist at Calabrio

AI will transform the contact center workforce in the next decade

“AI’s impact on the contact center workforce will be transformative over the next 10 years. Contrary to concerns about job displacement, a resounding 70 percent of contact center managers believe that the number of agents will increase. This forecast indicates that AI will serve to augment human abilities, creating a heightened demand for well-trained agents proficient in working alongside AI technologies and efficiently engaging with customers.

AI will fuel a shift toward a continuous learning culture in contact centers, boosting agents’ critical thinking

“AI will be the driving force behind the cultivation of a continuous learning culture within contact centers in the coming year, enhancing agents’ critical thinking abilities. Recognizing the role of adaptability, contact center managers will allocate funds to training initiatives that empower agents to adjust to evolving challenges, and recognize these skills as essential for future productivity. More than 60% of managers feel that critical thinking is a top skill needed by the agents of the future. Recruitment strategies will pivot towards individuals exhibiting robust critical thinking skills and a proactive willingness to continuously acquire new skills.”

Gil Dror, Chief Technology Officer at SmartSense

“To address concerns about transparency and bias in AI decision-making, there will be a push towards developing AI technologies that explain the reasoning behind predictions or decisions. Companies will invest in AI systems that not only provide outcomes but also offer insights into the parameters and logic used in the decision-making process. This will become a critical factor in gaining trust and mitigating the risk of false information or biases.”

Liz Fong-Jones, Field CTO at Honeycomb

AI/LLM development

“AI/LLM technology advancements in 2024 will enable developers to ditch the rote work but at the cost of introducing leaky, imperfect abstractions. Unless LLMs can also help developers debug their software, it will introduce bugs at the same rate as it increases code production productivity. I’m particularly excited about developments like Stanza’s new LLM for adding OpenTelemetry instrumentation to existing codebases as a form of reducing rote work. Yet, at the end of the day, there will always be a need for manual checking.”

Austin Parker, Head of Open Source at Honeycomb

AI

“In 2024, I hope that the technology industry as a whole will be more responsible with AI and focus on how these tools can be used for the common good. There’s immense potential for LLMs, specifically, to be the greatest advance in human-computer interaction since the touchscreen or mouse. Let’s focus on how these tools empower humans in the coming year.”

Brian Peterson, Co-Founder and Chief Technology Officer at Dialpad

AI hype – industry-specific models 

“In 2024, we will see the initial hype around large foundational AI LLMs fade as companies realize that one size does not fit all. While the introduction of AI tools like ChatGPT was impressive, the enterprise will not benefit from solutions that pull from the entire internet. Instead, businesses are going to move away from leveraging large LLMs, leaning toward more specialized solutions and LLMs that are trained on a more bespoke and curated dataset. Not only will these produce more tailored results, but they are also more secure and cost-efficient. Businesses will embrace AI that is tailored to them and their customers to improve accuracy, avoid hallucination and, ultimately, increase productivity and revenue.”

Influx of data talent/AI skills 

“As businesses continue to embrace AI, we’re going to see not only an increase in productivity but also an increase in the need for data talent. From data scientists to data analysts, this knowledge will be necessary in order to sort through all the data needed to train these AI models. While recent AI advancements are helping people comb through data faster, there will always be a need for human oversight – employees who can review and organize data in a way that’s helpful for each model will be a competitive advantage. Companies will continue looking to hire more data-specific specialists to help them develop and maintain their AI offerings. And those who can’t hire and retain top talent  – or don’t have the relevant data to train to begin with – won’t be able to compete. 

Just like we all had to learn how to incorporate computers into our jobs years ago, non-technical employees will now have to learn how to use and master AI tools in their jobs. And, just like with the computer, I don’t believe AI will eliminate jobs, more so that it will shift job functions around the use of the technology. It will make everyone faster at their jobs, and will pose a disadvantage to those who don’t learn how to use it. ”

The commoditization of data to train AI

“As specialized AI models become more prevalent, the proprietary data used to train and refine them will be critical. For this reason, we’re going to see an explosion of data commoditization across all industries. Companies that collect data that could be used to train chatbots, take Reddit for example, sit on an immensely valuable resource. Companies will start competitively pricing and selling this data.” 

Yingqi Wang, CEO and Founder at ONES.com

AI hype – industry-specific models 

“The application scenarios of artificial intelligence aim to improve individuals’ productivity within teams. Therefore, in 2024, businesses are likely to purchase or use more AI enterprise applications to enhance efficiency. Currently, there are many AI applications for individual use, but there has yet to be a development management system that can effectively define AI’s role, allowing AI and humans to collaborate within the system. 

AI will help complete many repetitive tasks in project management, such as workflow transitions, creating management process templates, and summarizing documents. Developers should be capable of introducing AI to improve productivity while ensuring that the quality of work is maintained or at least remains unchanged.

In 2024, it will be crucial to recognize the limits and scope of AI’s capabilities and to introduce AI appropriately into workflows. There will be multi-role system software involving AI, where system workflows will be reconstructed, and the software will continue to add value.

If we define AI’s role and have a need for transparency in all processes, the significance of system software remains. Before the involvement of AI, system software described human collaboration workflows in certain fields. With AI, workflows will be restructured, and the system will accelerate. Given today’s LLM capabilities and stability, and the significance of the human role in most systems, AI currently appears as a Copilot, meaning it requires a human mentor in the system. In the future, AI may independently complete tasks within certain specific capabilities. This is similar to an airplane’s Autopilot; when parts of the system work can be automated, workflows will be restructured, and software will continue to add value.”

Helena Schwenk, VP, Chief Data & Analytics Officer at Exasol

AI governance becomes C-level imperative, causing CDOs to reach their breaking point

“The practice of AI governance will become a C-level imperative as businesses seek to leverage the game-changing opportunities it presents while balancing responsible and compliant use. This challenge is further emphasized by the emergence of generative AI, adding complexity to the landscape. 

AI governance is a collective effort, demanding collaborative efforts across functions to address the ethical, legal, social, and operational implications of AI. Nonetheless, for CDOs, the responsibility squarely rests on their shoulders. The impending introduction of new AI regulations adds an additional layer of complexity, as CDOs grapple with an evolving regulatory landscape that threatens substantial fines for non-compliance, potentially costing millions.

This pressure will push certain CDOs to their breaking point. For others, it will underscore the importance of establishing a fully-resourced AI governance capability, coupled with C-level oversight. This strategic approach not only addresses immediate challenges, but strengthens the overall case for proactive and well-supported AI governance going forward.”

Florian Wenzel, Global Head of Solution Engineering at Exasol

Expect AI backlash, as organizations waste more time and money trying to ‘get it right’

“As organizations dive deeper into AI, experimentation is bound to be a key theme in the first half of 2024. Those responsible for AI implementation must lead with a mindset of “try fast, fail fast,” but too often, these roles need to understand the variables they are targeting, do not have clear expected outcomes, and struggle to ask the right questions of AI. The most successful organizations will fail fast and quickly rebound from lessons learned. Enterprises should anticipate spending extra time and money on AI experimentation, given that most of these practices are not rooted in a scientific approach. At the end of the year, clear winners of AI will emerge if the right conclusions are drawn.

With failure also comes greater questioning around the data fueling AI’s potential. For example, data analysts and C-suite leaders will both raise questions such as: How clean is the data we’re using? What’s our legal right to this data, specifically if used in any new models? What about our customers’ legal rights? With any new technology comes greater questioning, and in turn, more involvement across the entire enterprise.”

Mathias Golombek, Chief Technology Officer at Exasol

AI shifts from reactionary to intentional, unlocking opportunity while eliminating data collection-based roles

“The year 2023 introduced AI, which caused knee-jerk reactions from organizations that ultimately spawned countless poorly designed and executed automation experiments. In 2024, AI will shift from reactionary to strategic, rooted in purposeful proofs of concept that bring more clarity and focus on business objectives. We’ll see more business benefit-driven use cases leveraging AI and ML than ever before.

As AI is paired with other technologies, like open source, we’ll see new models emerge to solve traditional business problems. Generative AI, like ChatGPT, will also merge with more traditional AI technology, such as descriptive or predictive analytics, to open new opportunities for organizations and streamline traditionally cumbersome processes.

As a result, AI will continue to eliminate redundant job roles that involve high levels of repetition, data collection and data processing, with customer service, retail sales, manufacturing production and office support expected to be most impacted by the end of 2024.”

Mike Scott, CISO at Immuta

New AI tools will require clear policies and increased education from the top down

“Gartner predicts that IT spending will increase more than 70% over the next year. In addition to expanding current solutions, this will likely mean new tools, software, technology integrations, etc., and a lot of it will be powered by artificial intelligence (AI). Organizations have to continue to embrace new technology to remain competitive and relevant in today’s economic landscape. Still, the introduction and integration of AI-based solutions create complexities for security teams, who will have more to manage and oversee than ever before. 

To support the inevitability of AI, organizations will need to do two things. First, they will need to implement policies and processes around AI in general, and these integrations, or the speed at which they can innovate, will be impacted. Second, there will be a need for a significant amount of education from the top down around the difference between AI, machine learning (ML), and large language models (LLMs) to ensure teams are aware of what risks exist and when company policies are relevant. The democratization of AI means the technology is being used by employees who are not as technologically savvy. There will likely be confusion around how to write and apply new policies to these new tools, given the broad user base.”

Joe Regensburger, VP of Research & AI SME at Immuta

As AI regulation evolves, clarity around liability will be a catalyst for progress and adoption

“One of the biggest questions about AI going into 2024 is centered around liability. The EU’s AI Act is proposing restrictions on the purposes for which AI can be employed, placing more scrutiny on high-risk applications, but President Biden’s October 30th executive order concerning AI took a slightly different tune, focusing on the vetting and reviewing of models and imposing restrictions and standards based on that. On both sides, liability and indemnity remain murky. In 2024, as these regulations fall into place, industry leaders will begin to get more clarity around who is liable for what and AI insurance will emerge as regulators and industry leaders look to harden the vetting and reviewal process, both in production and in development.”

Nick Elprin, Co-Founder and CEO at Domino Data Lab

An army of smaller, specialized Large Language Models will triumph over giant general ones

“As we saw during the era of “big data” — bigger is rarely better. Models will “win” based not on how many parameters they have, but based on their effectiveness on domain-specific tasks and their efficiency. Rather than having one or two mega-models to rule them all, companies will have their own portfolio of focused models, each fine-tuned for a specific task and minimally sized to reduce compute costs and boost performance.”

Generative AI will unlock the value and risks hidden in unstructured enterprise data

“Unstructured data — primarily internal document repositories — will become an urgent focus for enterprise IT and data governance teams. These repositories of content have barely been used in operational systems and traditional predictive models to date, so they’ve been off the radar of data and governance teams. GenAI-based chat bots and fine-tuned foundation models will unlock a host of new applications of this data, but will also make governance critical. Companies who have rushed to develop GenAI use cases without having implemented the necessary processes and platforms for governing the data and GenAI models will find their projects trapped in PoC purgatory, or worse. These new requirements will give rise to specialized tools and technology for governing unstructured data sources.”

Despite a lot of activity, regulatory efforts will be ineffective in preventing the misuse of AI

“The EU will push for impractically restrictive, and occasionally contradictory, regulation. The US will put toothless policies forward — like the recent executive order — that aren’t effective at mitigating risks from bad actors (who will disregard and circumvent the regulations anyway), and that do little, if anything, to require organizations to implement processes and capabilities necessary for the safe, secure and trustworthy use of AI.”

Kjell Carlsson, Head of Data Science Strategy and Evangelism at Domino Data Lab

Predictive AI Strikes Back: Generative AI sparks a traditional AI revolution

“The new hope around GenAI drives interest, investment, and initiatives in all forms of AI. However, the paucity of established GenAI use cases, and lack of maturity in operationalizing GenAI means that successful teams will allocate more than 90% of their time to traditional ML use cases that, despite the clear ROI, had hitherto lacked the organizational will.”

GPUs and GenAI Infrastructure Go Bust

“Gone are the days when you had to beg, borrow and steal GPUs for GenAI. The combination of a shift from giant, generic LLMs to smaller, specialized models, plus increased competition in infrastructure and also quickly ramping production of new chips accelerated for training and inferencing deep learning models together mean that scarcity is a thing of the past. However, investors don’t need to worry in 2024, as the market won’t collapse for at least another year.”

Forget Prompt Engineer, LLM Engineer is the Least Sexy, but Best Paid, Profession

“Everyone will need to know the basics of prompt engineering, but it is only valuable in combination with domain expertise. Thus the profession of “Prompt Engineer” is a dud, destined, where it persists, to be outsourced to low-wage locations. In contrast, as GenAI use cases move from PoC to production, the ability to operationalize GenAI models and their pipelines becomes the most valuable skill in the industry. It may be an exercise in frustration since most will have to use the immature and unreliable ecosystem of GenAI point solutions, but the data scientists and ML engineers who make the switch will be well rewarded.”

GenAI Kills Quantum and Blockchain

“The unstoppable combination of GenAI and Quantum Computing, or GenAI and Blockchain? Not! GenAI will    be stealing all the talent and investment from Quantum and blockchain, kicking quantum even further into the distant future and leaving blockchain stuck in its existing use cases of fraud and criminal financing. Sure, there will be plenty of projects that continue to explore the intersection of the different technologies, but how many of them are just a way for researchers to switch careers into GenAI and blockchain/quantum startups to claw back some of their funding?”

Jon France, CISO at ISC2

Artificial intelligence will continue to take a front seat, but the hype will die down

“Now that we’re at the end of 2023, just a short year after the initial release of ChatGPT, it seems like everything has an AI component to it. If you ask me, generative AI is at the top of its “hype cycle,” but it will still remain in the general consciousness in 2024 and will likely start to deliver more business value. However, what we have to realize is that even though AI was the buzz-topic of the year, it has yet to reach its full potential. For adversaries we’ve seen it mainly be used for social engineering purposes so far, and it’s likely that we’ll continue to see that threat surface deepen, but both from a defensive and offensive cyber operations side, we have a long way to go. I think in 2024, we’ll see vendors try to combat AI’s use for malicious purposes outside of just social engineering and ultimately use AI to deliver more tangible value. However, like any “hot new topic,” the hype will inevitably cool down as time goes on and it will settle into part of the landscape.”

Arina Curtis, CEO and Co-Founder at DataGPT

AI is Recession and Inflation Proof

“Despite economic headwinds or tailwinds, interest in AI will remain strong in 2024 regardless of which way the economy turns. AI’s potential to drive innovation and competitive advantage is a must-have, with its own line item in the budget. Measuring the ROI on AI will be critical and practical use cases will be put under the microscope. For example, proving out how AI can make everyday tasks like data analysis cheaper and more broadly available to business users will be key. Likewise, investors will be more wary of AI companies.”

Enterprise Security Could Rain on AI’s Parade

“As enterprises begin rolling out AI using self-hosted LLMs or by fine-tuning commercially available models, security teams could slow the roll of mass adoption. Sacrificing innovation in favor of security isn’t a new concept. But the promise of AI and pressure to adopt could make it the one technology that forces a happy medium between risk and reward.”

AI Job Creation

“Conversational AI and other methods of engaging with AI will pave the way for new jobs that didn’t previously exist. Job descriptions for a wide variety of roles will require the use of AI, and the knowledge of how to best adopt AI in everyday tasks. We will see roles in compliance, DEI and finance evolve as these gatekeepers become tasked with reducing bias, ensuring ethical use and maximizing ROI on AI investments.

Data and Business Teams Will Lock Horns Onboarding AI Products

While business user demand for AI products like ChatGPT has already taken off, data teams will still impose a huge checklist before allowing access to corporate data. This tail wagging the dog scenario may be a forcing function to strike a balance, and adoption could come sooner than later as AI proves itself as reliable and secure.”

Jim Barkdoll, CEO and CEO at Axiomatics

Greater AI/ML adoption swill be the first big step to solving the cybersecurity skills shortage

“In 2024, AI is going to make a more noticeable change in the way security is measured and monitored. As companies continue to integrate, AI will be helping with the staff shortages and dealing with the sheer overwhelming number of events that need to be processed and interpreted into more useful actionable intelligence.”

Giorgio Regni, CTO at Scality

End users will discover the value of unstructured data for AI

“The meteoric rise of large language models (LLMs) over the past year highlights the incredible potential they hold for organizations of all sizes and industries. They primarily leverage structured, or text-based, training data. In the coming year, businesses will discover the value of their vast troves of unstructured data, in the form of images and other media.

This unstructured data will become a useful source of insights through AI/ML tooling for image recognition applications in healthcare, surveillance, transportation, and other business domains. Organizations will store petabytes of unstructured data in scalable “lakehouses” that can feed this unstructured data to AI-optimized services in the core, edge and public cloud as needed to gain insights faster.”

Joy Allardyce, General Manager, Data & Analytics at insightsoftware

The rise and adoption of AI

“AI, like all reporting projects, is only as good as the data it has access to and the prompts used to make a request. With the push for AI, many are still stuck getting their data foundations established so that they can take advantage of AI. To avoid pilot purgatory, starting with the outcome (use case) in mind that shows a quick win and demonstrable value vs. a one-off project is key.”

Kevin Miller, CTO of North America at IFS

The Enterprise Will Latch Onto AR

“2024 is sure to bring what we refer to as ‘ubiquitous AR’, with increased adoption of augmented reality (AR) via mobile devices, especially glasses. There will be projects to integrate AR into existing workflows, empowering manufacturing and field service professionals with real-time information, improved collaboration, and enhanced visualization, while retaining the situational awareness necessary for the safety and well-being of workers in factories and in the field. A recent report on global AR in healthcare showed the market is projected to balloon to more than 4.2 billion U.S. dollars by 2026, demonstrating the wide appeal of AR in business. Other industries like manufacturing and field services sectors are ripe for AR adoption.

Manufacturing functions like design and prototyping, quality control, and maintenance can all benefit massively from AR use. Workers can, for instance, automatically measure the length of an item or ascertain if the right tool is used with their safety glasses. In the field services sector – think field engineers, mechanics and maintenance workers – AR is already helping with remote assistance, training and onboarding, equipment maintenance, and documentation and reporting. At IFS, we have embedded AR into our software to empower customers to take advantage of the new capabilities it unlocks for them.”

Grant Bourzikas, CSO at Cloudflare

The AI arms race will officially commence, and the first AI model breach will take place

“Organizations of all types are aggressively adopting and beginning to rely on models to carry out critical business functions. Moreover, organizations are leaning heavily on AI to maintain a competitive edge, with Wall Street upgrading the stocks of companies that mention AI and punishing those who are seemingly behind the technology curve. As with any technology that becomes a crucial piece to an organization’s success, it increasingly becomes a top target for threat actors to inflict significant damage. Organizations rushing to join this revolution without the proper precautions put in place are opening themselves up as a low hanging fruit for model tampering and breaches – ones that could have the power to impact everything from critical care, banking systems, power grids etc.”

The only way to fight against AI is with AI… if you have already mastered the basics

“Defending against AI ultimately means defending against all human knowledge indexed. Information sharing exists at an order of magnitude faster, and is more efficiently exchanged than ever before. Security pros protecting their organizations in the era of infinite information face challenges never seen before. But if the industry has historically struggled with doing the simple things well, over pivoting to solve issues using AI will be mostly benign. Sometimes the best way to mitigate attacks is by going back to foundational elements of detection and mitigation.”

John Engates, Field CTO at Cloudflare

Executives, beware the AI knowledge gap. Your productivity and profit depends on it

“The AI divide is deepening, the result of C-suites that chose to invest–or ignore–AI investments. The result? A new class of “have-nots” that operate at status quo while the AI-savvy gain surges in productivity thanks to teams that are equipped with efficiency-creating AI tools. Across industries, this divide will solidify the leaders and brands that can navigate the torrents of the economic landscape and come out on top, today and through the coming years.”

Vignesh Ravikumar, Partner at Sierra Ventures

“Software used to be owned, then rented (SaaS model) and now with Gen AI it will be able to be customized, and customers will pay for outcomes, not just usage. AI will open up new spend because outcomes can be tied directly to the software, increasing the leverage and pricing potential that software vendors can command. Furthermore, with these tools, business models that have been eschewed by software companies in recent years, like services, can make a comeback. Historically, customization was associated with expensive and labor-intensive service work. Software companies were generally averse to providing custom solutions because it impacted margins and scalability. However, GenAI can potentially allow for the creation of highly customized solutions at a fraction of the cost of traditional service-based approaches. “

“While the broader uptake of Gen AI in healthcare may linger, its impact will become more tangible as it continues to commoditize specific functions, such as transcription services. This transition towards commoditization hints at a gradual shift in how certain healthcare tasks are approached, signaling a transformation that might reshape operational norms within the industry. “

“Looking forward, service-based businesses, like law firms, advertising/marketing agencies, and consulting firms, face the highest vulnerability to disruption by AI. GenAI’s disruptive potential lies in its capacity to enable companies to achieve massive efficiency, doing more with reduced reliance on excessive headcounts. This technological advancement not only streamlines processes but also empowers knowledge workers to amplify their output and capabilities. Service-oriented businesses, traditionally reliant on human capital, now confront the prospect of optimization through GenAI, allowing them to navigate a landscape where accomplishing more with fewer human resources becomes not just feasible but advantageous for sustaining competitiveness in the market.”

Tim Guleri, Managing Director at Sierra Ventures

“Clearly GenAI today has massive potential, but it’s still relegated to information retrieval. I believe the next big shift will come from agentic AI – a  type of AI that can understand context and take actions based on what it knows. It’s not just about providing information. The real jump will come from products that can take information, understand what additional context is needed, and know how to route to the correct action. We’re already seeing early versions of this agentic AI using LLMs in customer support via our investment Siena. Multi-modal LLMs are now proving that you can replicate human-level task-solving while controlling hallucinations and providing superior customer experiences.”

“2024 will see an explosion of task-specific agents or AgenticAI.  These agents, which will mushroom by the millions, will be written by subject matter experts and monetized via a marketplace, creating billions of dollars of market opportunity.  We will see “Agent Farms,”; where companies will release many versions of agents and market them to consumers to improve very specific processes.  This will make Generative AI even more mainstream by giving it use even more “context” than it has today.”

James Beecham, Founder and CEO at ALTR

While AI and LLMs continue to increase in popularity, so will the potential danger

“With the rapid rise of AI and LLMs in 2023, the business landscape has undergone a profound transformation, marked by innovation and efficiency. But this quick ascent has also given rise to concerns about the utilization and the safeguarding of sensitive data. Unfortunately, early indications reveal that the data security problem will only intensify next year. When prompted effectively, LLMs are adept at extracting valuable insight from training data, but this poses a unique set of challenges that require modern technical solutions. As the use of AI and LLMs continues to grow in 2024, it will be essential to balance the potential benefits with the need to mitigate risks and ensure responsible use. 

Without stringent data protection over the data that AI has access to, there is a heightened risk of data breaches that can result in financial losses, regulatory fines, and severe damage to the organization’s reputation. There is also a dangerous risk of insider threats within organizations, where trusted personnel can exploit AI and LLM tools for unauthorized data sharing whether it was done maliciously or not, potentially resulting in intellectual property theft, corporate espionage, and damage to an organization’s reputation.  

In the coming year, organizations will combat these challenges by implementing comprehensive data governance frameworks, including, data classification, access controls, anonymization, frequent audits and monitoring, regulatory compliance, and consistent employee training. Also, SaaS-based data governance and data security solutions will play a critical role in keeping data protected, as it enables organizations to fit them into their existing framework without roadblocks.”

Ryan Welsh, Founder and CEO at Kyndi

Generative AI and large language model (LLM) hype will start to fade

“Without a doubt, GenAI is a major leap forward; however, many people have wildly overestimated what is actually possible. Although generated text, images and voices can seem incredibly authentic and appear as if they were created with all the thoughtfulness and the same desire for accuracy as a human, they are really just statistically relevant collections of words or images that fit together well (but in reality, may be completely inaccurate). The good news is the actual outputs of AI can be incredibly useful if all of their benefits and limitations are fully considered by the end user.  

As a result, 2024 will usher in reality checks for organizations on the real limitations and benefits GenAI and LLMs can bring to their business, and the outcomes of that assessment will reset the strategies and adoption of those technologies. Vendors will need to make these benefits and limitations apparent to end users who are appropriately skeptical of anything created by AI. Key elements like accuracy, explainability, security, and total cost must be considered. 

In the coming year, the GenAI space will settle into a new paradigm for enterprises, one in which  they deploy just a handful of GenAI-powered applications in production to solve specific use cases.”

Natural language interfaces will become ubiquitos

“Imagine this scenario: you walk into a brick-and-mortar retail store. When you ask the store assistant a question, instead of a verbal response, they point at a display with a list of options or rush over to a whiteboard to sketch an illustration that includes minimal text. In this silent exchange, the richness of human-level communication is replaced by a menu of options or a group of visuals. Odd, right? Yet, this has been the paradigm for most websites for the past 25 years.

There is already a race to create “intimacy at scale on the web” enabled by GenAI and large language models. It is complicated to attain and the challenge to achieve this personalization is well understood. A small number of vendors have worked out how to overcome these issues in a production environment to enable accurate and trusted interactions with these language models.

As a result, and as these positive experiences multiply in 2024, more individuals will become comfortable with leveraging and maximizing their use of natural language interfaces.”

Businesses will learn that adding GenAI to existing tools will not address foundational weaknesses 

“While GenAI can provide valuable assistance, it cannot miraculously solve foundational issues related to volumes of information and relevance of searches through that data. If an existing tool was unable to reliably surface relevant information immediately ten months ago, bolting GenAI onto any of these offerings will fail to make them work better. Similarly, if a solution did not effectively answer questions previously, the mere addition of GenAI would not change its performance. Put simply, when it comes to GenAI, garbage in produces garbage out.

In 2024, a few implementations of Retrieval Augmented Generation (RAG) will emerge as the only possible way to successfully eliminate hallucinations. RAG is an AI framework that attempts to provide a narrow and relevant set of inputs to GenAI to yield an accurate and reliable summary. However, the successful execution of this framework is no easy task and consequently not all instances of RAG are created equal. For instance, if RAG yields pages of results that may or may not be accurate and defers the task of deciphering the correct answer to GenAI, the outcome will once again be subpar and unsuitable for business use. 

GenAI faces the same challenge as a human would in trying to summarize ten pages of relevant and irrelevant data. In contrast, both GenAI and humans do a much better job synthesizing ten relevant sentences. Furthermore, RAG alone can still fail to surface highly accurate answers when it comes to answering questions containing domain-specific context. Boosting the result’s relevance requires last-mile fine-tuning of the LLM. The combined RAG + fine-tuning approach will help achieve production-level performance of the GenAI solution for companies next year.”

Generative AI initiatives will be driven by Line of Business not IT 

“Executives traditionally require organizations to adopt new tools to enable new (and better) business practices and save money, even if the users prefer to stick with what they already know. IT supports the rollout while implementation teams debate change management procedures, conduct extensive training for potentially reluctant users, and stamp out any continued use of the older tools. However, ensuring compliance and achieving the desired benefits quickly is no easy feat. 

GenAI will be the opposite in 2024. The enthusiasm among users for GenAI-enabled solutions is palpable, as many have already tried these tools in various forms. The user-friendly nature of GenAI, with its natural language interfaces, facilitates seamless adoption for non-technical stakeholders. However, technical teams are left grappling with inherent challenges, including hallucinations, the lack of explainability, domain-specific knowledge limitations, and cost concerns.

‘In some organizations, the use of GenAI is forbidden until their technical teams come up to speed. Detecting ‘shadow’ usage, where individuals become suddenly hyper-productive after a brief period of quiet, adds an additional complication to the implementation challenges. Next year, organizations will work out a process to evaluate the myriad of options available and allow the business to use the few tools that are capable of addressing all of GenAI’s challenges in an enterprise environment.”

GenAI will streamline new employee onboarding

“Organizations continually cope with employee turnover and retirements in a labor-constrained environment. As a result, they are constantly working to hire and onboard new employees. The problem is that these new hires often struggle to navigate complicated and confusing company processes, policies, and specific language used throughout the organization. Existing learning systems frequently fail to surface the right information to answer new hires’ questions. Simultaneously, new employees will likely not know the right way to phrase a question to obtain the information they need when they search for answers hidden in the training materials. In many cases, domain-specific jargon can be non-intuitive, making it difficult for newcomers to communicate their inquiries effectively. This hurdle often leads to longer learning curves and reduced productivity early on.

GenAI, coupled with answer engines, are emerging as a solution to accelerate this process significantly. In 2024, organizations will increasingly leverage GenAI and answer engines to dramatically improve this process. Using these technologies, employees can ask questions in their own words, eliminating the need to master keywords and domain-specific terminology upfront.

These solutions also provide relevant information pertinent to organization-specific services and programs, ensuring that newcomers have the knowledge they need to perform their tasks competently. Moreover, the analytics generated by these systems enable trainers to tailor content, addressing specific learning needs and information gaps. Incorporating answer engines into the onboarding process ensures that individuals become productive contributors to the organization at a much faster pace. By harnessing the power of AI and natural language processing to facilitate learning and knowledge retrieval, new employees can be more excited to provide an impact immediately in their new posts in the coming years.”

Atanas Kiryakov, CEO at Ontotext

Manufacturers (finally) Manage the Hype Around AI

As the deafening noise around GenAI reaches a crescendo, manufacturers will be forced to temper the hype and foster a realistic and responsible approach to this disruptive technology. Whether it’s an AI crisis around the shortage of GPUs, climate effects of training large language models (LLMs), or concerns around privacy, ethics, bias, and/or governance, these challenges will worsen before they get better leading many to wonder if it’s worth applying GenAI in the first place.

While corporate pressures may prompt manufacturers and supply chain organizations to do something with AI, being data driven must come first and remain top priority. After all, ensuring data is organized, shareable, and interconnected is just as critical as asking whether GenAI models are trusted, reliable, deterministic, explainable, ethical, and free from bias. 

 

Before deploying GenAI solutions to production, manufacturers must be sure to protect their intellectual property and plan for potential liability issues. This is because while GenAI can replace people in some cases, there is no professional liability insurance for LLMs. As a result, business processes that involve GenAI will still require extensive “humans-in-the-loop” involvement which can offset any efficiency gains. 

 

In 2024, expect to see vendors accelerate enhancements to their product offerings by adding new interfaces focused on meeting the GenAI market trend. However, organizations need to be aware that these may be nothing more than bolted-on band aids. Addressing challenges like data quality and ensuring unified, semantically consistent access to accurate, trustworthy data will require setting a clear data strategy, as well as taking a realistic, business driven approach. Without this, manufacturers will continue to pay the bad data tax as AI/ML models will struggle to get past a proof of concept and ultimately fail to deliver on the hype.”

Knowledge Graph Adoption Accelerates Due to LLMs and Technology Convergence

A key factor slowing down knowledge graphs (KG) adoption is the extensive (and expensive) process of developing the necessary domain models. LLMs can optimize several tasks ranging from the evolution of taxonomies, classifying entities, and extracting new properties and relationships from unstructured data. Done correctly, LLMs could lower information extraction costs, as the proper tools and methodology can manage the quality of text analysis pipelines and bootstrap/evolve KGs at a fraction of the effort currently required. LLMs will also make it easier to consume KGs by applying natural language querying and summarization.

Labeled Property Graphs (LPG) and Resource Description Frameworks (RDF) will also help propel KG adoption, as each are powerful data models with strong synergies when combined. So while RDF and LPG are optimized for different things, data managers and technology vendors are realizing that together they provide a comprehensive and flexible approach to data modeling and integration. The combination of these graph technology stacks will enable manufacturers to create better data management practices, where data analytics, reference data and metadata management, data sharing and reuse are handled in an efficient and future proof manner. Once an effective graph foundation is built, it can be reused and repurposed across organizations to deliver enterprise level results, instead of being limited to disconnected KG implementations.

 

As innovative and emerging technologies such as digital twins, IoT, AI, and ML gain further mind-share, managing data will become even more important. Using LPG and RDF’s capabilities together, manufacturers can represent complex data relationships between AI and ML models, as well as tracking IoT data to support these new use cases. Additionally, with both the scale and diversity of data increasing, this combination will also address the need for better performance. As a result, expect knowledge graph adoption to continue to grow as manufacturers look to connect, process, analyze, and query the large volume data sets that are currently in use.” 

Phillip Merrick, Co-Founder and CEO at pgEdge

As organizations deploy newly developed AI applications, they will see in many cases the need for inference to happen at or closer to the edge, avoiding network latency.”

Bryan Murphy, CEO at Smartling

AI-powered Human quality translation will increase productivity by 10X or more

“At the beginning of 2023, everyone believed that LLMs alone would produce human-quality translations. Over the year, we identified multiple gaps in LLM translations ranging from hallucinations to subpar performance in languages other than English. 

Like cloud storage or services, AI-powered Human quality translation is increasingly moving toward a cost at which the ROI of translating nearly all content becomes attractive, creating a competitive advantage for those companies that use it to access the global market. 

Contrary to the shared belief that the language services industry will shrink in 2024, it will grow as more content gets localized, but it costs less to do. 2024 will be the year the cost of translation plummets. Translators powered by Language AI and AI-powered Language Quality Assurance increase their productivity by 10X or more.”

Mark Neufurth, Lead Strategist at IONOS

Generative AI’s impact on quantum computing

“The likelihood of  generative AI impacting quantum computing is high, particularly in IT security. The combination of AI and quantum computing has the potential to reshape encryption technologies, posing both security challenges and opportunities for societal benefits in areas such as weather prediction, technology, and medical research.”

Mark Neufurth, Lead Strategist at IONOS

Generative AI’s impact on quantum computing

“The likelihood of  generative AI impacting quantum computing is high, particularly in IT security. The combination of AI and quantum computing has the potential to reshape encryption technologies, posing both security challenges and opportunities for societal benefits in areas such as weather prediction, technology, and medical research.”

Christian Buckner, SVP, Data Analytics and IoT at Altair

AI Fuels the Rise of DIY Physics-based Simulation 

“The rapidly growing interaction between Data/AI and simulation will speed up the use of physics-based simulations and extend its capabilities to more non-expert users.”

Mark Do Couto, SVP, Data Analytics at Altair

AI Will Need to Explain Itself

“Users will demand a more transparent understanding of their AI journey with “Explainable AI” and a way to show that all steps meet governance and compliance regulations. The White House’s recent executive order on artificial intelligence will put heightened pressure on organizations to demonstrate they are adhering to new standards on cybersecurity, consumer data privacy, bias and discrimination.”

Yeshwant Mummaneni, Chief Engineer, Cloud at Altair

Blockchain Plays the Hero in Securing Data Lineage

“As AI/ML models play key roles in critical decision-making, whether supervised by humans or in a completely autonomous fashion, model provenance/lineage becomes crucial. The foundational technology that powered blockchain to provide immutability of records, digital identities, signatures, and verifications leveraging cryptography will become a key aspect of enterprise AI to provide tamper proof model provenance.”

MLOps Moves to the Edge

“MLOps (Machine Learning Operations) will significantly evolve to not only provide operational capabilities such as deployment, scaling, monitoring etc. but will include model optimization. This will encompass everything from hyperparameter tuning to tweak model performance to model size/quantization and performance optimization for specific chipsets and use cases such as for edge computing on wearable devices or cloud computing.”

Bill Nitzberg, Chief Engineer, HPC & Cloud at Altair

HPCaaS Becomes a Thing

“Because of the shortage of skilled HPC technologists, HPC as a Service models will become increasingly popular. Organizations will seek out partners with the skills, services and platform to provide complete HPC solutions using cloud, hybrid and physical clusters and ‘everything needed’ to work right away.”

HPC Hardware Gets Greener (Because It Has To)

“HPC clusters have always used a lot of power. However, AI/ML and the use of GPUs have increased the power consumption and thus carbon emissions of HPC clusters necessitating a strong focus on new GPU hardware to reduce emissions.”

Rodman Ramezanian, Global Cloud Threat Lead at Skyhigh Security

Power of AI and machine learning

“Continuing the explosion of AI-powered services and capabilities, cloud security will see an increased reliance on AI and machine learning, empowering automated detection, response, and remediation of threats. Solutions powered by AI can analyze extensive datasets, recognizing patterns and anomalies that signal potential security risks.”

Michael Rinehart, VP of AI at Securiti

Generative AI will redefine the cybersecurity landscape, simultaneously promising innovation and peril

“In 2024, the widespread adoption of Generative AI will be a double-edged sword for cybersecurity. While it promises significant productivity gains across various enterprise functions, the rush to embrace it will open organizations to substantial risks. Simultaneously, more restrictive regulatory frameworks and guidelines will emerge to ensure responsible adoption and compliance with data protection and security standards. In this context, enterprises that do not adopt safeguards in their adoption of Generative AI are likely to inadvertently expose company-sensitive information or customer data, leading to new kinds of data breaches and regulatory non-compliance. 

Simultaneously, regulatory frameworks and guidelines will emerge to ensure responsible adoption and compliance with data protection and security standards. In 2024, cybersecurity efforts will be critical to reap the benefits of Generative AI while safeguarding sensitive information and maintaining regulatory compliance.”

Cybersecurity will enter a new era as AI-powered attacks continue to democratize

“Cybersecurity will confront evolving threats driven by Generative AI, primarily focusing on three critical aspects. The first may be the most obvious, but perhaps the most potent. Generative AI is proving particularly potent in generating very high-fidelity impersonation – text, image, and voice that will continue to make it increasingly difficult for victims to identify as fraudulent. Coupled with Generative AI lowering the barrier of entry ability to scale attacks, the odds of a successful breach can significantly increase.

Second, attackers will target the very benefit enterprises seek for Generative AI models – gaining insights into enterprise data. Whether through fine tuning or retrieval augmented generation, Generative AI models/systems are able to reproduce sensitive information and should be casually regarded as a data system with a natural language interface. Thus, a mismanaged deployment can break the original entitlements of the data. Attackers may be able to extract that data either through prompting or, if it is fine-tuned, by exfiltrating the model.”

The industry will pivot towards a specialized model safeguarding

“In the new year, we will see a push for models with robust innate privacy and safety constraints, but we will find that this direction offers incomplete and inflexible protection. This is because such models cannot be adapted to new protections easily, and any fine-tuning can damage protections.

Instead, we will see a layered approach to security emerge, focusing on specialization through a two-tier strategy. Organizations will first adopt application-specific models, potentially augmented by knowledge bases, which are tailored to provide value in specific use cases, such as Q&A systems. Then, they will implement a monitoring system to safeguard these models by scrutinizing messages to and from them for privacy and security issues. Such monitoring will be more suitable as they can utilize models aligned with governance and data protection principles in addition to more flexible technologies. In essence, the year 2024 will witness a rapid adaptation of both traditional security and cutting-edge AI techniques toward safeguarding users and data in this emerging Generative AI era.”

Andrew Hollister, CISO & VP Labs R&D at LogRhythm

Generative AI will augment, not replace, SOC analysts in cybersecurity

“As the cybersecurity landscape evolves, generative AI’s role within Security Operations Centers (SOCs) will be characterized by augmentation rather than replacement of human analysts due to its maturity limitations. Gen AI will primarily assist and enhance the capabilities of SOC staff with the necessary expertise to interpret its output, proving especially valuable for mid-level analysts. Organizations will need to discern genuine gen AI contributions amid marketing hype, and the debate between investing in more technology like gen AI or hiring additional SOC analysts will persist, with the human factor remaining crucial. Success will depend on aligning these tools with analyst workflows rather than relying on superficial intelligence.”

Generative AI adoption will lead to major confidential data risks

“The cybersecurity landscape will confront a similar challenge with generative AI as it did previously with cloud computing. Just as there was initially a lack of understanding regarding the shared responsibility model associated with cloud computing, we find ourselves in a situation where gen AI adoption lacks clarity. Many are uncertain about how to effectively leverage gen AI, where its true value lies, and when and where it should not be employed. This predicament is likely to result in a significant risk of confidential information breaches through gen AI platforms.”

Kevin Kirkwood, Deputy CISO at LogRhythm

AI in cybersecurity will shift from hype to practical application

“Security companies will proudly proclaim their use of AI and machine learning as supportive tools, focusing on how these technologies can accelerate tasks and elevate the capabilities of analysts. However, the hype surrounding AI will begin to wane as it enters the “valley of despair,” prompting a shift from marketing emphasis to practical education on its applications. The question of AI’s mainstream integration into our culture will persist, reflecting the ongoing exploration of its true potential and practical implementation in cybersecurity.”

Sally Vincent, Senior Threat Research Engineer at LogRhythm

2024 braces for surge in AI-enhanced botnets, posing unprecedented cybersecurity challenges

“In 2024, the symbiosis between AI (Artificial Intelligence) and botnets will witness a significant surge. The convergence of AI capabilities will empower the proliferation and sophistication of botnets, amplifying their potency to orchestrate complex cyber threats. AI-powered botnets will exploit advanced algorithms to expand their reach and impact, intensifying the challenges faced by cybersecurity. This alarming trend will necessitate innovative defense strategies and heightened vigilance to counter the escalating threat posed by botnets, reshaping the landscape of digital security measures.”

Gabrielle Hempel, Customer Solutions Engineer at LogRhythm

Healthcare will be at the frontline of AI-powered attacks

“The healthcare industry will be most susceptible to AI-powered attacks in 2024. As AI becomes more integral in diagnostics, patient data management, and medical tools, there will be a notable rise in targeted breaches, jeopardizing the confidentiality and reliability of vital health information. The vulnerability of interconnected systems will compel a critical reevaluation of cybersecurity measures, marking a pivotal moment in fortifying defenses against AI-powered attacks in healthcare.”

Angel Vina, CEO & Founder at Denodo

Organizations Will Struggle to both Adopt GenAI and Leverage it Successfully

“Organizations are encountering multiple challenges as they attempt to implement GenAI and large language models (LLMs), including issues with data quality, governance, ethical compliance, and cost management. Each obstacle has direct or indirect ties to an organization’s overarching data management strategy, affecting the organization’s ability to ensure the integrity of the data fed into AI models, abide by complex regulatory guidelines, or facilitate the model’s integration into existing systems.”

Kevin Keaton, CIO/CISO at Red Cell Partners

Generative AI Will Bolster Hackers and Cybersecurity Alike in 2024 AI 

“Generative AI will be used to create new and diverse threats, some by entities that have little knowledge of cybersecurity. I expect that generative AI will reduce the barrier to entry into hacking – and in particular ransomware – as the revenue is relatively high and the consequences to the hacker are low.

At the same time, I am hopeful that AI improvements will be a great tool to add to the vulnerability discovery and response tool kit for CISOs and CIOs.”

Nick Sovich, Head of Engineering at Red Cell Partners

Data Restrictions Will Become Differentiator in AI Adoption

“2023 showcased the potential of generative AI and particularly large language models (LLMs). 2024 will realize that potential as companies incorporate AI into their products and processes. We’ll see faster adoption for use cases where there are fewer restrictions on data. For example, a project manager might use an AI assistant to help with planning and scheduling, or an analyst might use AI to parse data out of PDFs and into spreadsheets.

While there is great potential for AI in healthcare and national security, use cases in these industries will continue to have more stringent requirements around security, privacy and reliability, considering the high stakes at play. That said, there will be a lot of demand for customized open-source models deployed in secure and compliant environments and integrated with existing processes and systems. Beyond the development of the AI models themselves, a lot of work is required to bring AI software systems to a standard where they can, for example, operate on patient data in clinical settings or detect cyberattacks on critical infrastructure.”

Engineers Must Watch Out While Using AI in Day-To-Day Work

“Engineers have already been leveraging AI in their work and will continue to do so in 2024. A great example is that AI copilots have now been built into several major Integrated Development Environments (IDEs), where engineers write their code.

There are a few pitfalls for engineers to watch out for as they leverage generative AI, heading into 2024:

  • Inaccurate information: While LLMs are great tools for accelerating the pace of software development by allowing engineers to quickly synthesize solutions from a variety of Internet sources, they can land in a debug cycle due to inaccurate information.  Instead of engineers searching the internet for solutions to problems they’re facing and then applying what they’ve learned to their scenario, the LLM synthesizes some of this for them. But if the model provides inaccurate information, leading to a debug cycle, it would have been faster to approach the problem as if they didn’t have the LLM at their disposal.
  • Accidental leaks of sensitive information: While engineers are consulting their favorite AI coding assistant on how to optimize code, they should be sure not to share passwords, confidential information, or intellectual property. Just like all software, AI can be susceptible to security breaches.”

Miguel Baltazar,  VP of Developers at OutSystems

Low-code and GenAI 

“With the help of low-code and generative AI, developer teams will reach new heights of innovation in 2024 and create applications at unprecedented speeds with built-in guardrails in place. This opens up a new playing field for experimentation and creativity while minimizing the risks that come with public AI models around privacy and security. Low-code and generative AI will not only help teams beat burnout and do more with the same resources but will also help close the communication gap between IT teams and business leaders by expressing code in a visual way. This will ensure developers’ projects are aligned with the business’s objectives, break down silos and foster a culture of communication.”

Emmanuel Methivier,  Axway Catalyst, and Business Program Director at Axway

2024 is set up to be the year of AI consumerism

2024 will probably see the emergence of a new approach to interaction between information systems, thanks to the arrival of a new consumer: the AI-powered assistant. The progress and democratization of generative AI tools will create new uses. And OpenAI, creator of ChatGPT, wants to create a competitor to the iPhone. They want to put ChatGPT in your pocket, making it the ultimate advisor.

It’s time to rethink digital interactions. In the 2020s, we understood that we had to move from a vision of technical APIs to business-oriented digital products. The coming year will force us to reassess these products and their marketing to adapt them to the new consumers of services: the AI generatives!”

Sam Crowther, Founder and CEO at Kasada

“Attackers will use AI to automate and enhance various aspects of their attacks. While this involves sophisticated phishing techniques, it will shift to using the same AI that defenders rely on for behavior analysis to automate evasion of company security measures, and they will use AI for adaptive strategies that learn and adjust to defensive countermeasures.”

“Nearly 80 percent of IT pros claim that bots are becoming more sophisticated and challenging for their security tools to detect. Advanced bots intended to scalp sneakers and electronics are being repurposed and are easily accessible for those wanting to commit fraud. The hacking community has achieved economies of scale, and it’s never been easier to launch sophisticated cyber attacks without needing the formerly prerequisite expertise. In 2024, advanced bots will drive an increase in digital fraud and abuse, including more high profile, successful account takeover attacks and money washing schemes.”

“Disinformation will be at an all time high in 2024, especially leading into the US election. Historically, disinformation campaigns were only launched by nation-state threat actors. However with the increase of bot-related services, anyone can eliminate the manpower needed to spread these campaigns with just a few clicks and a few bucks. In addition, these disinformation campaigns will be more convincing this election thanks to advancements in generative AI. Generative AI allows bots to operate at scale and personalize their messages down to the individual account – instead of posting the same message over and over. This will make disinformation more persuasive and more difficult for social media platforms to detect and remove.”

Molham Aref, Founder and CEO at RelationalAI

Extreme Hype Around Generative AI will Diminish, True Generative AI Deployments Will Emerge 

“In the new year, we will also begin to see the extreme overhype around generative AI start to diminish. I’ve been working in, and around, AI since the early nineties, AI has always been prone to be overhyped. Having said that, I think we are going to see enterprises actually deploying generative AI in more measured and meaningful ways. As with most new technology adoption in the enterprise, it’s going to take longer for these kinds of AI systems to become part of enterprise software in the ERP or HCM sense, but real value will start to be created next year.  We will be able to calibrate our expectations appropriately once we begin to see its true impact.” 

Survival of the Fittest Among AI Players

“The venture capital climate has been tough as of late and will be even more so in 2024. I believe we will begin to see a shift in the industry when it comes to the survival of AI startups, as AI startups start to get acqui-hired by the big tech companies for their talent. This has already started to happen and in the last few months we’ve seen higher than normal venture-funded companies big and small either shut down or quietly get acquired by bigger players.

I think there will be an evolutionary cycle for the companies that can survive the next 18 months or so. It has been said before that some of the best and most valuable companies are usually created in difficult times, like during the 2008 recession and in 2000 when the dot-com bubble burst, as they usually tend to have better products and more disciplined companies. Companies that can run efficiently, be agile, and can adapt quickly to tough situations will be better positioned. At the end of the day, companies that have a strong product, and a demonstrated value proposition, will be in a better position to outrun the competition.”

Dhruba Borthakur, Co-Founder and CTO at Rockset

In 2024, Enterprises Get A Double Whammy from Real-Time and AI – More Cost Savings and Competitive Intelligence 

“AI-powered real-time data analytics will give enterprises far greater cost savings and competitive intelligence than before by way of automation, and enable software engineers to move faster within the organization. Insurance companies, for example, have terabytes and terabytes of data stored in their databases, things like documentation if you buy a new house and documentation if you rent. 

With AI, in 2024, we will be able to process these documents in real-time and also get good intelligence from this dataset without having to code custom models. Until now, a software engineer was needed to write code to parse these documents, then write more code to extract out the keywords or the values, and then put it into a database and query to generate actionable insights. The cost savings to enterprises will be huge because thanks to real-time AI, companies won’t have to employ a lot of staff to get competitive value out of data.”

The Rise of the Machines Powered by Real-Time Data and AI Intelligence

“In 2024, the rise of the machines will be far greater than in the past as data is becoming more and more “real-time” and the trajectory of AI continues to skyrocket. The combination of real-time data and AI make machines come to life as machines start to process data in real-time and make automatic decisions!”

Rohit Choudhary, CEO and Co-Founder at Acceldata

Real-Time AI Monitoring: A Data-Driven Future

“2024 will witness the rise of real-time AI monitoring systems, capable of detecting and resolving data anomalies instantaneously. This transformative technology will ensure data reliability and accessibility, especially for the ever-growing volume of unstructured data.”

Scott McAllister, Principal Developer Advocate at ngrok

AI Will Advance Abstraction in Programming

“We will see a significant leap in how AI advances abstraction for developers. As developers have looked to increase efficiencies, they have abstracted out the common and mundane tasks. Each new language, framework, and SDK that comes along abstracts another level of tasks that developers don’t need to worry about. AI will take abstraction to the next level. AI-powered reference architectures will give developers a jump on starting new projects or lend a hand when solving complex problems.  Developers will no longer begin with a blank slate. Instead, AI will help remove the intimidation of an empty page to jumpstart projects and streamline workflows.”

Younes Amar, VP of Product at Wallaroo.ai

LLMs Become More Practical for Enterprise

“Now that the novelty of commercial LLMs is wearing off, business leaders are looking for ways to incorporate AI that optimizes their operations. We’ll see enterprises veer toward open-source offerings and focus on use case-centric LLMs vs. technology-centric options.”

AI Meet the Three S’s

“Before widespread adoption, AI and LLMs need to be smart, safe and at scale. AI offerings will incorporate computer vision for real-time automated decision-making from video content. With that, companies can use AI for threat detection at public events, video-enabled flight boarding and grab-and-go stores.”

Zandra Moore, CEO at Panintelligence

“The AI rush will continue into 2024, at least in the SaaS sector, whose products are the gateway through which most people and businesses will access AI. More than half of SaaS companies plan to progress new AI innovations by the end of 2024.”

“Following 2023’s Generative AI spree, AI strategies will shift in 2024. The focus is moving to more savvy innovation. 2024 will be the year of ‘pragmatic AI’. Our research indicates that SaaS companies will embrace Deep Learning, Predictive Analytics and Causal AI in 2024. “

“While one in six vendors are currently testing new Generative AI functionality ahead of planned launches, more than a quarter are testing Predictive Analytics to help users predict future outcomes based on historical data.”

“Causal AI, which helps understand data relationships and decision-making processes, also looks to gain prominence, addressing the need for transparent AI models. The number of SaaS vendors using this technology will double in 2024.”

“The number of SaaS vendors using Deep Learning technologies could also double. Almost a fifth of SaaS vendors are testing neural networks capable of learning complex patterns and representations ahead of target launch dates next year.”

Alex Holland, Senior Malware Analyst at HP Inc.

AI Will Supercharge Social Engineering Attacks on an Unseen Scale, Spiking on Red Letter Days

In 2024, cybercriminals will capitalize on AI to supercharge social engineering attacks on an unseen scale: generating impossible-to-detect phishing lures in seconds. These lures will appear highly plausible and look indistinguishable from the real thing, making it harder than ever for employees to spot – even those that have had phishing training.

 We are likely to see mass AI-generated campaigns spike around key dates. For instance, 2024 stands to see the most people in history vote in elections – using AI, cybercriminals will be able to craft localized lures targeting specific regions with ease. Similarly, major annual events, such as end of year tax reporting, sporting events like the Paris Olympics and UEFA Euro 2024 tournament, and retail events like Black Friday and Singles Day, will also give cybercriminals hooks to trick users.

With faked emails becoming indistinguishable from legitimate ones, businesses cannot rely on employee education alone. To protect against AI-powered social engineering attacks, organizations must create a virtual safety net for their users. An ideal way to do this is by isolating and containing risky activities, wrapping protection around applications containing sensitive data, and preventing credential theft by automatically detecting suspicious features of phishing websites. Micro-virtualization creates disposable virtual machines that are isolated from the PC operating system, so even if a user does click on something they shouldn’t, they remain protected. 

Organizations will also use AI to improve defence against the rise in attacks. High-value phishing targets will be identified and least privilege applied accordingly, and threat detection and response will be enhanced by continually scanning for and automatically remediating potential threats.”

Dr. Ian Pratt, Global Head of Security for Personal Systems at HP Inc.

Beyond Phishing, the Rise of LLMs Make the Endpoint a Prime Target for Cybercriminals

One of the big trends we expect to see in 2024 is a surge in use of generative AI to make phishing lures much harder to detect, leading to more endpoint compromise. Attackers will be able to automate the drafting of emails in minority languages, scrape information from public sites – such as LinkedIn – to pull information on targets, and create highly-personalized social engineering attacks en masse. Once threat actors have access to an email account, they will be able to automatically scan threads for important contacts and conversations, and even attachments, sending back updated versions of documents with malware implanted, making it almost impossible for users to identify malicious actors. Personalizing attacks used to require humans, so having the capability to automate such tactics is a real challenge for security teams. Beyond this, continued use of ML-driven fuzzing, where threat actors can probe systems to discover new vulnerabilities. We may also see ML-driven exploit creation emerge, which could reduce the cost of creating zero day exploits, leading their greater use in the wild.

Simultaneously, we will see a rise in ‘AI PC’s’, which will revolutionize how people interact with their endpoint devices. With advanced compute power, AI PCs will enable the use of “local Large Language Models (LLMs)” – smaller LLMs running on-device, enabling users to leverage AI capabilities independently from the Internet. These local LLMs are designed to better understand the individual user’s world, acting as personalized assistants. But as devices gather vast amounts of sensitive user data, endpoints will be a higher risk target for threat actors.

As many organizations rush to use LLMs for their chatbots to boost convenience, they open themselves up to users abusing chatbots to access data they previously wouldn’t have been able to. Threat actors will be able to socially engineer corporate LLMs with targeted prompts to trick them into overriding its controls and giving up sensitive information – leading to data breaches.

And, at a time when risks are increasing, the industry is also facing a skills crisis – with the latest figures showing 4 million open vacancies in cybersecurity; the highest level in five years. Security teams will have to find ways to do more with less, while protecting against both known and unknown threats. Key to this will be protecting the endpoint and reducing the attack surface. Having strong endpoint protection that aligns to Zero Trust principles straight out-of-the-box will be essential. By focusing on protecting against all threats – known and unknown – organizations will be much better placed in the new age of AI.”

Michael Heywood, Business Information Security Officer at HP Inc.

Beyond Phishing, the Rise of LLMs Make the Endpoint a Prime Target for Cybercriminals

“In 2024, we’ll see the attention on software and hardware supply chain security grow, as attackers seek to infect devices as early as possible – before they have even reached an employee or organization. With awareness and investment in cybersecurity growing each year, attackers have recognized that device security at the firmware and hardware layer has not maintained pace. Breaches here can be almost impossible to detect, such as firmware backdoors being used to install malicious programs and execute fraud campaigns on Android TV boxes. The increasing sophistication of AI also means attackers will seek to create malware targeted at the software supply chain, simplifying the process of generating malware disguised as secure applications or software updates.

In response to such threats, organizations will need to think more about who they partner with, making cybersecurity integral to business relationships with third parties. Organizations will need to spend time evaluating software and hardware supply chain security, validating the technical claims made by suppliers, to ensure they can truly trust vendor and partner technologies. Organizations can no longer take suppliers’ word on security at face value. A risk-based approach is needed to improve supply chain resilience by identifying all potential pathways into the software or product. This requires deep collaboration with suppliers – yes or no security questionnaires will no longer be enough. Organizations must demand a deeper understanding of their partners’ cybersecurity posture and risk – this includes discussing how incidents have changed the way suppliers manage security or whether suppliers are segregating corporate IT and manufacturing environments to shut down attackers’ ability to breach corporate IT and use it as a stepping stone to the factory.

A risk-based approach helps   ensure limited security resources are focused on addressing the biggest threats to effectively secure software and hardware supply chains. This will be especially important as supply chains come under increasing scrutiny from Nation State threat actors and cybercrime gangs.”

Shay Levi, CTO and C0-Founder at Noname Security

API Security Evolves as AI Enhances Offense-Defense Strategies

“In 2023, AI began transforming cybersecurity,  playing pivotal roles both on the offensive and defensive security fronts. Traditionally, identifying and exploiting complex, one-off API vulnerabilities required human intervention. AI is now changing this landscape, automating the process, enabling cost-effective, large-scale attacks. In 2024, I predict a notable increase in the sophistication and scalability of attacks. We will witness a pivotal shift as AI becomes a powerful tool for both malicious actors and defenders, redefining the dynamics of digital security.”

AI Fundamentally Disrupting Business

“AI will continue to be a huge disruptor for current business models, in two ways. In the first instance, AI will be introduced into businesses’ existing processes, helping to streamline operations, increase efficiencies, and broadly change the way the business operates. Secondly, there are businesses that will struggle to adopt AI into their current model, resulting in AI-first organizations replacing them completely. When we reach the tipping point, this will happen very quickly; it will then be up to businesses to decide whether they sink or swim.”

Consolidation in AI

“AI startups that are building their own models are not going to succeed. As with any industry, consolidation will take place, with Big Tech companies such as Alphabet/Google, and Meta – alongside the likes of OpenAI – operating the foundational models that will enable AI to proliferate. As a result, the generative AI market will soon be limited to a select few companies, limiting scope for new entrants to innovate.”

Miten Marfatia, Founder and CEO at EvolveWare

“High hopes for GenAI will drive a surge in application modernization activity, but overoptimistic expectations will get a reality check. Modernization will be as urgent as ever next year — due to the shrinking talent pool, the advent of AI, and dangerously antiquated legacy systems. But there will be a key difference: high hopes that GenAI will reduce modernization cost and time will significantly boost enterprises’ appetite to modernize during a period of economic uncertainty. The resulting scramble to experiment with GenAI for more efficient modernization will expose its limits — GenAI technology will not be ready to make a measurable impact on modernization in 2024, and that reality check will take hold within the first half of the year. 

Enterprises will hesitate to fuel LLMs with their own code, curbing the impact of GenAI on modernization efforts. Though interest in applying GenAI to modernization will surge next year, enterprises will be hesitant to supply their code to train LLM models, due to security concerns and the fact that their software’s code represents their intellectual property. This hesitation will significantly limit the near term impact that GenAI will have on modernization processes, given that any GenAI-enhanced modernization technology would require massive amounts of legacy code that reside within organizations to properly train a model and thus achieve accurate and useful results.  

GenAI experimentation for application modernization will start with code documentation and source code transformation. With the advent of AI-powered tools over the last few years and the promise that GenAI will further streamline modernization efforts, organizations will aim for increasingly complex modernization strategies such as refactoring their monolithic applications and creating microservices in 2024. GenAI models for application modernization will first be developed in areas where significant data is available for modeling, likely starting with documentation of legacy applications where code is translated to a plain English description for use by business personnel, and transformation of source code to modern code.

The breadth of impact by implementing AI models will depend on the existing capabilities of each modernization tool. Tools that generate the same syntax pseudocode from source code written in multiple languages will provide the greatest impact by developing models using pseudocode to generate English descriptions of the source code. These same models will also be useful in translating extracted rules in pseudocode format into rules with descriptive English summaries and relevant details. From a code transformation perspective, significant advantage will lie with tools that provide refactoring capabilities prior to transformation. Quality of GenAI models is dependent on the quality of data used for modeling and using refactored data will result in high quality and efficient modern code being generated from GenAI models.”

Michael Armer, Chief Information Security Officer at RingCentral

A Year of AI Governance

“AI adoption is taking place at a breakneck pace. Companies are under immense pressure to identify innovative ways to leverage AI and create differentiation. The reason is simple: because if they don’t, their competitors will. I don’t see the rush to implement AI slowing down any time soon, but to mitigate the risk of unchecked AI, I believe that leadership teams will start to put some controls in place around its adoption. Over the next year, AI governance will start to catch up with AI deployments as companies establish and build out institutional and legal structures around the use of artificial intelligence.”

Ashu Varshney, Chief Information Officer at RingCentral

AI Will Revolutionize Business Communications

“AI is bringing tremendous value in properly categorizing and routing incoming customer tickets and calls, automating the responses of frequently asked questions and repeated requests, and providing the right assistance to resolve the tickets faster. As AI becomes more embedded in business communications and everyday operations, future advancements will lead to more accurate and context-aware systems, making business communications more successful and more efficient.”

Paola Zeni, Chief Privacy Officer at RingCentral

AI Will Require Trusted Vendor Partnerships and Transparency

“With AI in a constant state of evolution, AI compliance must become a responsibility that providers and customers share. Businesses should seek providers that are transparent when it comes to disclosing information around their AI, how it works, and what it’s used for. When transparent and trusted partnerships are formed, businesses can meet disclosure requirements and better keep pace with evolving regulations.”

AI Regulations on the Horizon

“Although it’s difficult to predict for certain whether AI will make it to the top of the Federal priorities in 2024, we may see AI regulations start to come in at the state-level. With the lack of national regulation, states may take matters into their own hands and roll out state-based AI rules, similar to how California deployed the CCPA in the absence of a national data privacy law. To prepare for pending regulations, companies should adopt a strong governance by bringing together AI stakeholders, adopting policies around AI use, introducing AI risk assessments into vendor due diligence processes, and adding information about AI to their terms and to customer collateral to ensure maximum transparency.”

David Boskovic, Founder and CEO at Flatfile

From Enterprise AI to Zero-Trust AI

“In 2024, we will see a significant shift in how enterprises approach AI, from focusing on performance to emphasizing accountability. As AI becomes more integrated into critical decision-making processes, organizations will prioritize ensuring the accuracy and reliability of AI outputs. This shift will lead to the development of “zero-trust AI,” where the validation of data sources and the transparency of AI-induced modifications become paramount. The goal will be to create AI systems whose operations and decisions are not just effective but also understandable and reviewable by all stakeholders, thereby fostering a culture of trust and responsibility around AI usage.”

Graham Russell, Market Intelligence Director at Own Company

AI adoption will drive data breaches  

“As the adoption of AI continues to skyrocket, the risk of data breaches increases. The sophistication and reach of AI can inadvertently expose vulnerabilities in cybersecurity defences, making organizations more susceptible to malicious attacks and unauthorised access.

This inevitable intersection of AI and data breaches is set to redefine the data protection and cybersecurity landscape in the near future. The silver lining? It will propel a renewed and intensified focus on data security issues. With each headline-grabbing breach, businesses are becoming increasingly vigilant about the safety of their business data. Organizations will be more focused than ever on being compliant with – and demonstrating compliance with – regulatory standards.”

AI adoption will prompt greater focus on data hygiene

“As the adoption of AI continues its rapid ascent, the spotlight on data hygiene is poised to become even more intense. AI’s voracious appetite for high-quality, accurate data makes the concept of data cleanliness a critical factor in unleashing the true potential of AI applications.

In response to this need for impeccable data, a notable trend is the strategic use of backup files. Traditionally seen as a safety net for data recovery, backup files are now being leveraged as a valuable resource for training and refining AI and machine learning models. These files, enriched with historical and real-world data, serve as a goldmine for organizations looking to enhance the depth and breadth of their AI algorithms.

Incorporating backup files into AI and machine learning models allows organizations to simulate diverse scenarios, ensuring that the algorithms are robust and adaptable to real-world complexities. This approach not only optimises the performance of AI applications but also enhances the accuracy of predictions and decision-making processes.”

Lance Hood, Senior Director of Omnichannel Authentication at TransUnion

AI will exponentially accelerate the erosion of trust 

“Generative AI is creating a much easier path for fraud, and the problem is only going to get worse. With the proliferation and broad availability of AI-based deepfake tools, fraudsters are magnifying existing weaknesses in authentication processes that enterprises rely on. For example: 

  • It has made account takeover fraud much easier to conduct. There have been multiple high-profile incidents where enterprising fraudsters have used AI to fake voice biometric authentication systems in the call center, allowing them direct access to consumer accounts. This has many fraud experts calling into question the future of voice biometrics as a secure authentication measure. Because AI can be used to outwit voice authentication to take over accounts, organizations need to use a multi-layered defense leveraging additional signals (e.g., forensic phone carrier analysis) and not rely on voice biometrics alone.
  • Similarly, fraudsters are leveraging AI-based tools that allow them to create increasingly convincing fake imagery and documents to conduct ATOs and support synthetic fraud schemes. Unfortunately, because many document verification providers rely primarily on visual inspections of documents, they are vulnerable to these AI deepfakes.
  • AI is also being used to accelerate the process of looking at password variations on a host of sites to increase the net of account takeovers. It has been widely assumed that the usefulness of passwords would expire with the advent of the quantum computing age, but AI may be the final nail in its coffin before we can get there.

Organizations with fragmented identity, in even the most sophisticated fraud systems, can be easily exploited by fraudsters using sophisticated AI tools. When fraud systems are siloed, it’s nearly impossible to see the bigger picture of fraudulent interactions (e.g., the same person requesting the OTP password is the same person calling the call center is the same person logging into the website). An omnichannel solution is the only way to see the whole equation.

It has always been important for institutions to continually reevaluate the identity authentication measures they have in place to ensure they are rigorous enough to stand up to emerging areas of risk, but the proliferation of AI-based threats will prompt an industry-wide rethink of how trust should be effectively established in this new reality.”

Alastair Pooley, CIO at Snow Software

AI is here to stay

“2023 was the year of AI and that isn’t likely to change heading into 2024. AI is too valuable to avoid and remains the number one priority for most IT leaders, but how these tools are used and approached is starting to change. While interest in new AI applications and solutions will remain healthy, organizational focus, and spend, is shifting toward improving the AI services already being used internally, as opposed to collecting and having to successfully implement a host of new tools. But, as organizations continue to workshop their internal AI functionalities, they must balance this pursuit of optimized value with the risk inherent to increased AI use and the resulting need for effective regulation.”

Manny Rivelo, CEO at Forcepoint

AI Policies Will Evolve Rapidly to Keep Pace With the Market

“In 2024, AI-related innovations will create new possibilities we’re not even considering at the moment.  Moving forward, organizations of all sizes will need to create and expand corporate AI policies that govern how employees can interact safely with AI. And AI security policies will need to extend beyond commercial AI tools to also cover internally-developed GPTs and LLMs. At Forcepoint, we have web and data security solutions all designed to future-proof adoption of emerging technologies such as GenAI, no matter how quickly the technology landscape evolves.”

Steve Leeper, VP of Product Marketing at Datadobi

“As artificial intelligence (AI) continues to weave into the fabric of modern business, the year 2024 is likely to witness a surge in the demand for enhanced data insight and mobility. Companies will need to gain insight into their data to strategically feed AI and machine learning platforms, ensuring the most valuable and relevant information is utilized for analysis. This granular data insight will become a cornerstone for businesses as they navigate the complexities of AI integration. At the same time, the mobility of data will emerge as a critical factor, with the need to efficiently transfer large and numerous datasets to AI systems for in-depth analysis and model refinement. The era of AI adoption will not just be about possessing vast amounts of data but about unlocking its true value through meticulous selection and agile movement.

 The trajectory of storage technology is also poised for a significant shift as the year 2024 approaches, with declining flash prices driving a broad-scale transition towards all-flash object storage systems. This shift is expected to result in superior system performance, catering adeptly to the voracious data appetites and rapid access demands of AI-driven operations. As flash storage becomes more financially accessible, its integration into object storage infrastructures is likely to become the norm, offering the swift performance that traditional HDD-based object storage and scalability that NAS systems lack. This evolution will be particularly beneficial for handling the large datasets integral to AI workloads, which necessitate rapid throughput and scalability. Consequently, a data mobility wave may be seen, with datasets and workloads being transferred from outdated and sluggish storage architectures to cutting-edge all-flash object storage solutions. Such a move is anticipated not just for its speed but for its ability to meet the expanding data and performance requisites of burgeoning AI initiatives.

Also importantly, in 2024, the landscape of data management will undergo a profound transformation as the relentless accumulation of data heightens the necessity for robust management solutions. According to Gartner’s projections, by 2027, it is expected that no less than 40% of organizations will have implemented data storage management solutions to classify, garner insights, and optimize their data assets, a significant leap from the 15% benchmark set in early 2023. This trend is likely to be propelled by the relentless expansion of data volumes, outpacing the rate at which companies can expand their IT workforce, thus elevating the indispensability of automation for data management at scale.

2024 is set to be a pivotal time for data management, with a shift towards API-centric architectures for meshed applications gaining traction. As customers increasingly demand that data management vendors offer API access to their functionalities, we are likely to see a mesh of interconnected applications seamlessly communicating with one another. Imagine ITSM (IT Service Management) and/or ITOM (IT Operations Management) software triggering actions in other applications via API calls in response to tickets — this interconnectedness will become commonplace. The trend towards API-first strategies will likely accelerate, driven by the desire to embed data management more integrally within the broader IT ecosystem. As a result, the development of self-service applications will flourish, enabling automated workflows and facilitating access to data management services without the need for manual oversight. This move towards a more integrated, automated IT environment is not just anticipated; it is imminent, reflecting a broader shift towards efficiency and interconnectivity within the technological landscape.

Finally, as we look toward 2024, we predict that an intensified focus on risk management will become a strategic imperative for companies worldwide.  Governance, risk, and compliance (GRC) practices are anticipated to receive heightened attention as companies grapple with the complexities of managing access to data, aging data, orphaned data, and illegal/unwanted data, recognizing these as potential vulnerabilities. Moreover, immutable object storage and offline archival storage will continue to be essential tools in addressing the diverse risk management and data lifecycle needs within the market.”

Nate Dow, Director of Technology at BairesDev

Navigating 2024: Cultivating Cross-Cultural Collaboration in AI-Infused DevOps Workflows

“As new applications get built from the ground up with AI, and as LLMs become integrated into existing applications, vector databases will play an increasingly important role in the tech stack, just as application databases have in the past. Teams will need scalable, easy to use, and operationally simple vector data storage as they seek to create AI-enabled products with new LLM-powered capabilities.”

The Convergence Era: AI Integration and the Unified Future of DevOps in 2024

“Looking ahead to 2024, AI models will become ingrained in an increasing number of applications, marking a shift from current practices involving data scientists, data engineers, DevOps, and security specialists operating in silos. The trend will move towards convergence, which will be driven by the expansion of capabilities, enabling traditional DevOps engineers to take on roles in ML Ops and Dev SEC Ops. Tooling will empower engineers to cross-skill, making it easier for them to wear multiple hats and fulfill diverse needs within the organization. While complete convergence to a single-engineer model may not be imminent, the direction is unmistakably toward increased versatility and flexibility facilitated by AI-driven tooling.”

Sendur Sellakumar, CEO at Dremio

Generative AI hype train will continue to grow exponentially 

“I think we’re still in a GenAI hype cycle, and I tend to be very practical. Things around GenAI have been very compelling. We hardly talked about GenAI a year ago; now we do, which is excellent.

Generative AI will be the future of user interfaces. All applications will embed generative AI to drive user interaction, which guides user productivity. Companies are embedding GenAI to do semantic searching to solve some of those old data problems – discovery becomes easier, creating pipelines becomes more accessible.”

Mohan Atreya, SVP of Products and Services at Rafay Systems

AI will transform Kubernetes into the automatic transmission system of the cloud-native era

“AI is poised to redefine how businesses utilize Kubernetes for application deployment and managing their infrastructure. Similar to how automatic transmissions streamlined driving, AI will become the automatic transmission for Kubernetes. AI will serve as the bridge between Kubernetes’ inherent complexity and accessibility so that even entry-level team members will be able to efficiently navigate and manage Kubernetes environments. 

AI will act as an intelligent guide, simplifying intricate operations and offering real-time insights. It will not only automate issue detection but also empower less experienced staff to operate Kubernetes proficiently. This empowerment will optimize the workforce, reducing the need for extensive training or specialized knowledge. Consequently, businesses will be able to streamline their operations, reduce human intervention, and significantly cut operational costs, making the adoption of Kubernetes even more feasible and economical.

With AI and ML, manual coding and testing will become a thing of the past

“In 2024, AI and ML will transcend their roles as mere tools and emerge as indispensable partners in reshaping cloud automation and optimization. Traditional development is time-consuming and requires significant expertise, but with AI and ML integration, manual coding and testing will become a thing of the past. These technologies excel at generating code and drastically reduce manual work. They also analyze data comprehensively, pinpoint inefficiencies, and recommend efficient resource management. This shift allows businesses to redirect human resources towards strategic goals, fostering innovation and growth. The outcome: enhanced efficiency, reduced costs, and seamless collaboration between human expertise and AI, delivering unprecedented value and agility in the digital era.”

Generative AI will be the new UI, eliminating the need for technical expertise and making K8s a truly inclusive technology

“Generative AI is well on its way to revolutionize the tech industry by becoming the new UI for complex systems, bridging the gap between human understanding and advanced technology. It will be transformative when adopting new systems like Kubernetes that historically required extensive training or certifications, often leading to high costs and resource constraints. Generative AI can act as a universal translator that enables communication in natural language, allowing users to effortlessly interact with technologies without delving into the underlying technical jargon. 

This shift will redefine the user interface much like graphical user interfaces revolutionized computing. Generative AI will become the conduit, translating human commands into the specific language understood by diverse systems, eliminating the need for users to become experts in each domain. It will enhance accessibility and democratize technology adoption, allowing users of all technical backgrounds to seamlessly integrate advanced systems into their workflows. As generative AI takes center stage, we can anticipate a future where human-machine collaboration is not only intuitive and efficient but also truly inclusive, shaping the industry’s landscape.”

 

Register for Insight Jam (free) to gain exclusive access to best practices resources, DEMO SLAM, leading enterprise tech experts, and more!

Share This

Related Posts