Achieving the AI Moonshot Will Lead to Next Generation of Apps

Solutions Review’s Contributed Content Series is a collection of contributed articles written by thought leaders in enterprise tech. In this feature, Couchbase VP of Product Marketing Jeff Morris offers commentary on how achieving the AI moonshot will lead to the next generation of apps.

AI is going mainstream. Industry-wide, the power and advantages of AI are undeniable — from government and law to marketing and sales, every business sector is trying to identify its own “moonshot” opportunity.

The original moonshot was made possible by the Saturn-V rocket (my dad made the space suits worn by the Apollo program’s astronauts), and it demanded a scale of resources similar to today’s AI solutions. AI needs humans, plans and other foundational technology underpinning the system in order to “reach the moon.” The Saturn-V made it to the moon because of engineers’ intense work, massive funding and energy. AI-powered applications will rely on the same focus and attention to detail — complex algorithms, raw data, massive compute power, energy for data centers, legions of users and consideration not only of costs but also of impact and viability: social, economic, ethical and environmental.

If organizations want to accomplish their own AI moonshot, it’s important that they use the required resources responsibly, efficiently and effectively.

Stay on Course 

When implementing AI in their tech stacks, businesses must establish clear, concise goals. How will AI solutions benefit teams? Saturn-V’s objective was clear: take astronauts to the moon. And even after the moon landings, the rocket was used only for launches that demanded its massive power. This same level of intention and targeted ambition should be applied to AI initiatives.

AI’s novelty can be hard to resist; while it is fun to experiment with, in most cases project leaders need to take a focused approach to ensure the resources powering it aren’t wasted on futile plans. It’s also paramount that organizations lead AI advancements responsibly — from the underlying data to the privacy and security measures in place. Two potential negative outcomes are AI hallucinations, which produce inaccurate information, and the exposure of private, sensitive information if the right protocols are not in place.

For example, suppose an AI model is trained on a dataset that contains limited information from a niche topic, such as zoo animals. If a user asks the model a question about a general topic, like astronomy, the system could hallucinate. When queried about what stars are made out of, it might respond with something like: “Tigers make stars out of sugar and space dust.” This generated response is both scientifically incorrect and nonsensical, exposing the model’s lack of understanding of astronomy.

It would be akin to the Saturn-V landing on Mars instead of the moon. The time, money and brainpower were still spent on building a rocket that launched into space, but the end result did not go according to plan. The lesson is that the integrity of the information being fed to AI models must be considered as early in the development cycle as possible.

Minimize Losses With Flexibility and Agility

For project leaders to stay on course and minimize losses (financially, strategically and environmentally) with their AI models and solutions, the foundational infrastructure beneath the system must be built with the utmost efficiency in mind. The computing power and datasets the model is pulling from should be physically positioned as close as possible — enter edge computing. Intelligently utilizing distributed computing resources and the cloud makes it possible to garner the following benefits:

  • Improved performance: In use cases where rapid decision-making is required, including healthcare diagnostics and fraud detection, the proximity of computing resources to data can enhance the performance of AI algorithms.
  • Reduced latency: When compute power and data are physically close, it minimizes the delay in data transfer. This is especially critical in real-time applications like autonomous vehicles, online gaming and aircraft navigation, where even a slight lag can result in detrimental consequences.
  • Cost efficiency: Processing data close to where it is generated can be cost-effective, as it reduces the need for large-scale data centers and expensive network infrastructure.

In addition, the database powering the AI application and supporting workflows must be capable of quickly scaling to the amount of data AI systems require. Remember, there is no AI without data, and for the model to be the most accurate, reliable and impactful, it must have access to the latest data. Flexibility and scalability are at the center of modern AI applications. It’s imperative for AI systems to rely on an agile database that can handle diverse data sets and changing data requirements as apps evolve, and that allows organizations to expand infrastructure as needed while avoiding overprovisioning and keeping costs down.
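
As a minimal, vendor-neutral sketch of that flexibility (the document shape and field names below are hypothetical), the snippet shows a JSON document gaining an AI-related field, a vector embedding, without any schema migration:

```python
import json

# A minimal sketch, not any specific vendor's API: JSON documents let an
# application add new fields without a schema migration.
product_v1 = {
    "id": "product::1001",
    "name": "Trail Running Shoe",
    "price": 129.99,
}

# Later, the app evolves: an embedding for an AI feature and a review summary
# are added to the same document. Readers that ignore unknown fields keep working.
product_v2 = dict(product_v1)
product_v2["embedding"] = [0.12, -0.08, 0.33]  # illustrative vector values
product_v2["review_summary"] = "Lightweight and grippy; runs half a size small."

print(json.dumps(product_v2, indent=2))
```

The same document can then serve both conventional queries and AI workloads, which is the kind of agility modern AI applications demand.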

AI Needs a People-First Mindset

Much like the engineers who built the Saturn-V, developers, SREs, and ops and platform teams need to work together with as little friction as possible. The AI model and the technology surrounding it must reflect a practitioner-first mindset so these teams can focus on achieving their AI moonshot rather than getting distracted by clunky tools and erratic answers from the model. Generative AI-based coding assistance is one example of how to help the developer talent pool improve productivity, and that productivity could be the difference between landing on the moon and never leaving the ground.

It would be a mistake to ignore the ongoing developer experience gap, with over a third of developers reporting that it takes more than three months, or even more than six, to move an application or service from code-complete to production. AI-enabled automation will soon help here, too. Developers don’t have time to start from square one with another new database added to their tech stack, and platform and ops teams don’t have the bandwidth to onboard another new data platform. AI-oriented data access patterns like vector search must be added alongside, and complement, existing access models to reduce the overall architectural complexity that quickly becomes the enemy of AI. Imagine debugging an AI hallucination when your prompts and instructions contain variables from multiple databases: how will you trace back through the conversation to find the misleading variable?
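
To make that concrete, here is a minimal sketch in plain Python (the documents, field names and helper functions are hypothetical, not any particular product’s API) of a single document store serving both a traditional filtered lookup and a vector similarity search. Because both access patterns read the same store, any value that ends up in a prompt can be traced back to one place:

```python
import math

# Hypothetical documents in a single store that supports two access patterns.
DOCS = [
    {"id": "faq::1", "category": "billing",  "text": "How refunds work", "embedding": [0.9, 0.1, 0.0]},
    {"id": "faq::2", "category": "shipping", "text": "Delivery times",   "embedding": [0.1, 0.8, 0.2]},
    {"id": "faq::3", "category": "billing",  "text": "Updating a card",  "embedding": [0.7, 0.2, 0.1]},
]

def filter_by_category(category):
    """Traditional access pattern: exact-match filtering."""
    return [d for d in DOCS if d["category"] == category]

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

def vector_search(query_embedding, top_k=2):
    """AI-oriented access pattern: nearest documents by cosine similarity."""
    scored = sorted(DOCS, key=lambda d: cosine(query_embedding, d["embedding"]), reverse=True)
    return scored[:top_k]

# Both patterns read the same documents, so anything that lands in a prompt
# has a single source of truth to trace back to.
print([d["id"] for d in filter_by_category("billing")])
print([d["id"] for d in vector_search([0.8, 0.15, 0.05])])
```

In a real system the similarity search would be handled by the platform’s vector index rather than a linear scan, but the architectural point is the same: one data platform, multiple access patterns.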

AI-enabled applications and the technology powering them should equip teams with the following:

  • Familiar yet flexible data formats like JSON and AI-assisted coding that can create meaningful productivity boosts while enhancing developer morale.
  • Multiple data access patterns powered by a uniform data platform in order to minimize the complexity of data architectures such that any data fed to an AI model can be viewed as trustworthy.
  • Large-scale analytic operations moved alongside operational applications to shrink the latency gap for AI interactions that require aggregation horsepower as part of their prompt instructions. Latency will also be the enemy of AI if this is not addressed.
  • Automation of repetitive, mundane tasks so teams can innovate quickly and human intervention and errors are reduced.
  • The recognition that AI-enabled applications are inherently mobile because most data that is used to create personalized experiences both originates on and is consumed by mobile devices.

Bottom line: When the solutions in place keep the end-user’s experience in mind, the moonshot can be achieved.

To the Moon and Beyond

Much like the first landing on the moon, a successful AI moonshot doesn’t conclude once the system has been built. Its impact will extend across the organization — and, for much larger initiatives, across industries — unearthing new experiences, providing faster turnaround and producing modern, adaptive applications.

As businesses become more familiar with AI and its results, it’s inevitable that more AI initiatives will be integrated into existing applications and processes. Looking ahead, it’s crucial for organizations to stay watchful of the ways AI can further advance the next generation of applications, with the ultimate goal of “reaching the moon, and then beyond.”
