Evaluate Automation First and Invest in AI When it Adds Real Value
KNIME’s Iris Adae offers commentary on evaluating automation first and investing in AI when it adds real value. This article originally appeared in Insight Jam, an enterprise IT community that enables human conversation on AI.
Goldman Sachs predicts that global investment in AI will reach $200 billion by the end of this year. Yet in the face of this overwhelming economic support, more than 97% of organizations still struggle to show meaningful business value from their AI initiatives.
The disconnect reflects what many teams are experiencing firsthand: the enthusiasm for AI is high, but its impact is far less certain. As organizations test out new tools, many are adopting AI simply because it feels like the necessary next step to stay competitive.
When AI is introduced without a clear purpose, solutions quickly become more complicated than the problem requires, harder to maintain, and misaligned with actual needs. Over time, this can add friction to everyday work, preventing organizations from reaping the very benefits they hoped AI would bring.
In the rush to explore the newest models, it’s easy to forget that not every problem needs intelligence, prediction, or pattern detection. Many tasks are rule-based, and in those cases, straightforward automation can be the smarter, faster, and safer solution.
Both AI and automation have important roles to play, but they’re designed for different types of work. Understanding that distinction is what enables teams to build systems people can trust and use with confidence to move the business forward.
Where AI Hype Meets Real-World Constraints
As major tech companies double down on their AI investments, AI has become the shorthand for innovation. As a result, many teams are deploying it reactively, driven more by market momentum than by a clear understanding of the problem they’re trying to solve. But without a strategic rationale, this “AI-first” mindset often leads to solutions that are unnecessarily complex.
Issues surface quickly when AI is applied to vague or ill-defined tasks, especially those involving noisy or incomplete data. In these cases, LLMs can generate outputs that sound plausible but are incorrect, creating risks for systems that require precision. At the same time, teams can lose confidence in AI when outputs seem inconsistent or opaque, leaving organizations with costly tools they can’t trust or use effectively.
In most cases, the root problem is the same: introducing intelligence where a simpler, rules-based approach would be a better fit. A clear decision framework can help prevent this mismatch. Whereas AI excels when a task requires interpretation or flexibility, such as assessing sentiment in customer emails, automation is ideal for stable, repeatable processes like triggering threshold-based alerts or running daily KPI updates.
Neither approach is inherently better; each delivers real value in the right context. The strength lies in knowing where each one works best to create systems that are efficient, trusted, and sustainable.
4 Ways to Determine When to Use AI Versus Automation
One of the most common challenges I see across organizations is that teams aren’t always clear on what they actually need; as a result, AI often becomes the default request because it’s seen as the modern solution.
Data leaders play a critical role in clarifying the problem, defining the task, and steering teams toward the most effective solution. Here’s the framework I use to evaluate whether a task calls for AI or automation:
1. Is the task deterministic? The first question to ask is whether the task follows a fixed, rule-based pattern. If the outcome can be defined by a clear sequence of conditions, then automation is the right fit. These tasks don’t require interpretation or prediction; they require the system to apply the same logic consistently.
Deterministic workflows benefit from full transparency. Every step is visible and explainable, making it easy for teams to validate and troubleshoot. When the rules are clear and the desired outcome is consistent, automation delivers the stability and clarity needed without the overhead and variability of an AI solution.
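As a concrete illustration of such a deterministic, fully transparent rule: the sketch below applies fixed threshold checks to a metrics record. The thresholds and field names are hypothetical, but the pattern mirrors the kind of "trigger an alert when a metric crosses a limit" task the article describes.

```python
def check_kpi(metrics: dict) -> list[str]:
    """Apply fixed, documented rules to a metrics record and return any alerts.

    Every rule is explicit, so the same input always produces the same
    output, and each decision can be validated and audited step by step.
    """
    alerts = []
    if metrics["error_rate"] > 0.05:   # hypothetical threshold: 5% errors
        alerts.append("error_rate above 5%")
    if metrics["latency_ms"] > 500:    # hypothetical threshold: 500 ms
        alerts.append("latency above 500 ms")
    return alerts

print(check_kpi({"error_rate": 0.08, "latency_ms": 120}))
```

Because every condition is written out, troubleshooting reduces to reading the rules; no model behavior needs to be explained.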
2. What is the quality of the data? Data quality is one of the most important considerations when evaluating AI. Automation thrives on structured, reliable inputs, applying the same rules every time without drift or unexpected variation. Just as importantly, it fails in predictable ways. If a required field is empty or a rule can’t be applied, the system surfaces the issue immediately, making it easier for teams to diagnose problems and maintain stable outputs.
AI, by contrast, makes inferences to fill in the gaps. Over time, the model may learn patterns that don’t reflect reality, producing outputs that are difficult to justify or trust.
Layering AI on top of shaky data only amplifies uncertainty. When data is unreliable, strengthening the foundation first is always the best approach. Automation is also a practical option, since it applies consistent, transparent rules even as the data matures.
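"Failing in predictable ways" can be as simple as a validation step that refuses to run on incomplete input instead of inferring values. The field names below are illustrative, not drawn from any specific system.

```python
# Hypothetical required fields for an order record.
REQUIRED_FIELDS = ("customer_id", "amount", "currency")

def validate_record(record: dict) -> dict:
    """Surface missing fields immediately rather than guessing at them."""
    missing = [f for f in REQUIRED_FIELDS if record.get(f) is None]
    if missing:
        raise ValueError(f"missing required fields: {missing}")
    return record

try:
    validate_record({"customer_id": "C-42", "amount": 19.99})
except ValueError as err:
    print(err)  # the gap is reported at the point of failure
```

The contrast with AI is the point: a model would produce *some* output for the incomplete record, while the rule stops and names exactly what is wrong.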
3. How complex is the implementation compared to the payoff? AI solutions require significant investment upfront: data preparation, feature engineering, model selection, training, monitoring, and periodic updates. This effort only makes sense when the task benefits from prediction or interpretation; otherwise, the returns rarely justify the cost.
I’ve seen teams build sophisticated models that ultimately saved only a few minutes or handled edge cases that rarely occur. Oftentimes, automation can achieve the same desired outcome with less technical lift.

A focused scoping session helps quantify the tradeoff. Clearly define the problem, estimate the time and resources required to build and maintain the AI solution, and compare that to the actual time or value saved. If the investment outweighs the payoff, automation is the more efficient and sustainable option.
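The tradeoff from such a scoping session can be reduced to back-of-the-envelope arithmetic. All numbers below are made up for illustration; the shape of the calculation, not the values, is the point.

```python
def payback_months(build_hours: float,
                   monthly_maintain_hours: float,
                   monthly_hours_saved: float):
    """Months until cumulative hours saved exceed hours invested.

    Returns None when maintenance eats all of the savings, i.e. the
    solution never pays for itself.
    """
    net_monthly = monthly_hours_saved - monthly_maintain_hours
    if net_monthly <= 0:
        return None
    return build_hours / net_monthly

# Hypothetical AI project: 400 h to build, 20 h/month to maintain,
# 30 h/month saved -> 40 months to break even.
print(payback_months(400, 20, 30))
```

If the equivalent automation takes 40 hours to build with the same savings, its payback is four months; a gap that wide usually settles the question.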
4. Does the output need to be explained or audited? Certain workflows — particularly those in finance, healthcare, or governance — require decisions to be fully traceable. Regulators or auditors typically need a clear explanation of how a result was reached.
Automation supports this need well because its logic is explicit: every rule is documented, and the path from input to output is easy to follow. AI, however, introduces a different dynamic. Even when models are accurate, their internal reasoning isn’t easily expressed in a step-by-step format. This opacity can create friction during audits, slow down approvals, or erode trust among decision-makers who need a clear rationale.
For workflows where accountability is just as important as accuracy, automation ensures that decisions can be explained, reproduced, and justified without the added uncertainty of AI.
Start Simple to Scale Intelligently
There’s no question that AI has an important place in modern data work — but it shouldn’t be the default. Real value comes from choosing the simplest, most effective approach for the task at hand.
By defining the task clearly, assessing the data, weighing the complexity, and understanding when transparency matters, teams can avoid the trap of using AI for AI’s sake. Once those foundations are strong, hybrid solutions become far easier to build: automation handles the deterministic steps, AI supports the nuanced ones, and humans keep the system calibrated and trustworthy.
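That hybrid division of labor can be sketched as a routing function: rules handle the clear-cut cases, an AI step handles interpretation, and low-confidence outputs go to a person. Everything here is hypothetical; `ai_sentiment` is a trivial stand-in for a real model call, not an actual API.

```python
def ai_sentiment(text: str) -> tuple[str, float]:
    # Placeholder for a model call: a keyword heuristic returning
    # (label, confidence). A real system would invoke an actual model.
    return ("negative", 0.9) if "angry" in text.lower() else ("neutral", 0.9)

def route_ticket(ticket: dict) -> str:
    # 1. Automation: deterministic rules cover the unambiguous cases.
    if ticket.get("refund_requested"):
        return "billing_queue"
    # 2. AI: interpretation is needed, so the model assesses sentiment.
    label, confidence = ai_sentiment(ticket["text"])
    # 3. Human: low-confidence outputs are escalated, keeping the
    #    system calibrated and trustworthy.
    if confidence < 0.8:
        return "human_review"
    return "urgent_queue" if label == "negative" else "standard_queue"
```

The design choice worth noting is the ordering: cheap, explainable rules run first, so the AI step only sees the cases that genuinely need it.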
Prioritizing clarity over novelty leads to systems that teams can understand, rely on, and improve over time. To keep pace with AI, organizations must know which tasks are best left to automation so AI can be intentionally applied to the areas where it adds true value.