Keeping Humans in the Loop While Building the Future of AI

By Freddy Kuo, chairman of Luminys, chairman of SYNCROBOTIC, and special office executive assistant at Foxlink

Luminys’ Freddy Kuo offers commentary on keeping humans in the loop while building the future of AI. This article originally appeared in Insight Jam, an enterprise IT community that enables human conversation on AI.

The growing presence of AI in physical environments is transforming how organizations think about security, automation, and efficiency. From robotics to building management systems, AI is helping businesses optimize operations in ways that were not possible even a few years ago. In fact, as of early 2025, 78 percent of companies globally report using AI in at least one part of their operations, with nearly half (45 percent) applying it across three or more functions.

But as these systems become more powerful, they also raise new questions about responsibility, transparency, and trust.

At the heart of these questions is a simple idea: AI must serve people. That means keeping humans actively involved in how these systems are designed, deployed, and managed.

Designing for Oversight and Clarity

In many industrial and commercial settings, AI systems are now performing tasks that previously required human judgment. These include monitoring physical spaces, navigating environments, managing energy use, and coordinating multiple devices at once. While these capabilities improve efficiency, they also introduce risk if people cannot understand how the systems work.

When AI operates as a black box, trust breaks down. Operators are left guessing at how decisions are made, which makes it harder to respond to errors or intervene when needed. This is especially concerning in physical spaces, where incorrect or misaligned decisions can impact safety, operations, and cost.

The shift is already underway. According to Gartner, 1 in 20 supply chain managers will oversee robotic systems rather than human teams by the year 2030. That kind of transformation requires visibility and explainability to ensure accountability doesn’t vanish alongside traditional roles.

To address this, organizations should focus on developing systems that are interpretable. AI tools should provide clear signals to human users, explaining why decisions were made and allowing for real-time visibility into system activity. When these mechanisms are built into the design process, they enable better control and stronger accountability.
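As a minimal sketch of what such an interpretable signal might look like, the snippet below (all names hypothetical, not drawn from any particular product) records one automated decision together with a plain-language reason and a confidence score, so an operator can audit it in real time:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class DecisionRecord:
    """A human-readable trace of one automated decision."""
    action: str        # what the system did
    reason: str        # why, in plain language
    confidence: float  # model confidence, 0.0 to 1.0
    inputs: dict = field(default_factory=dict)
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

    def summary(self) -> str:
        """One line an operator can read at a glance."""
        return f"{self.action} (confidence {self.confidence:.0%}): {self.reason}"

# Example: a building-management agent explains a ventilation change.
record = DecisionRecord(
    action="increase_ventilation",
    reason="CO2 level 1250 ppm exceeded the 1000 ppm threshold in Zone B",
    confidence=0.92,
    inputs={"zone": "B", "co2_ppm": 1250, "threshold_ppm": 1000},
)
print(record.summary())
```

The point is not the specific fields but the habit: every automated action leaves behind a record a human can read, question, and act on.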

Accessibility is as Important as Intelligence

AI is only useful if the people managing it can use it effectively. In many environments, the teams responsible for overseeing AI systems may not have formal training in computer science or machine learning. That does not mean they are unqualified; it means the tools must be designed with practical use in mind.

Good AI systems are not just powerful; they are also intuitive. The interface matters just as much as the underlying model. Real-time notifications, visual dashboards, and simplified control mechanisms all help lower the barrier to adoption.

In practice, this means designing systems that can be operated by technicians, facilities managers, or frontline personnel without requiring them to become data scientists. It also means reducing the friction between users and systems, making it easier to monitor behavior, troubleshoot issues, and respond to alerts.
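One illustrative way to reduce that friction is to translate raw system events into alerts a frontline operator can act on without interpreting model scores. The sketch below is a hypothetical example, not a reference to any specific platform; the thresholds and field names are assumptions:

```python
def to_operator_alert(event: dict) -> str:
    """Convert a raw detection event into a plain-language, prioritized alert.

    Expects an event dict with 'score' (0.0-1.0), 'location', and 'message'.
    """
    score = event["score"]
    if score >= 0.9:
        severity = "CRITICAL"   # immediate response expected
    elif score >= 0.6:
        severity = "WARNING"    # review when possible
    else:
        severity = "INFO"       # logged for awareness only
    return f"[{severity}] {event['location']}: {event['message']} (score {score:.2f})"

alert = to_operator_alert(
    {"score": 0.95, "location": "Zone A", "message": "motion in restricted area"}
)
print(alert)
```

A technician sees "[CRITICAL] Zone A: motion in restricted area" rather than an unlabeled probability, which is the kind of design choice that lets non-specialists monitor, troubleshoot, and respond.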

Balancing Local and Centralized Control

Another way to support human oversight is through hybrid system design. Many organizations are now combining edge computing with cloud coordination to strike the right balance between speed and control.

Edge computing allows AI to process information locally. This is especially important in environments where connectivity is limited or real-time decision-making is essential. At the same time, cloud-based platforms offer centralized visibility and policy control. Together, these models enable responsive local action with global oversight.

This approach supports human involvement by giving teams both the autonomy to manage local systems and the tools to monitor and coordinate across broader deployments. It also allows for security updates, policy adjustments, and system-wide auditing without disrupting day-to-day operations.
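The hybrid pattern described above can be sketched in a few lines. In this simplified, hypothetical example (the class and policy names are assumptions for illustration), an edge controller acts on local readings immediately using its last known policy, keeps an audit trail for later central review, and accepts policy updates pushed from a cloud platform without interrupting local operation:

```python
import time

class EdgeController:
    """Acts on local sensor data immediately, using the most recent policy
    received from a central platform; continues working if the link drops."""

    def __init__(self, policy: dict):
        self.policy = policy   # last known cloud-issued thresholds
        self.audit_log = []    # forwarded to the cloud when reachable

    def apply_policy_update(self, new_policy: dict) -> None:
        """Called when the central platform pushes revised thresholds."""
        self.policy.update(new_policy)

    def handle_reading(self, sensor: str, value: float) -> str:
        """Decide locally, in real time, and record the decision for audit."""
        limit = self.policy.get(sensor, float("inf"))
        action = "alert" if value > limit else "ok"
        self.audit_log.append((time.time(), sensor, value, action))
        return action

edge = EdgeController(policy={"temperature_c": 30.0})
print(edge.handle_reading("temperature_c", 34.5))  # local, immediate decision
edge.apply_policy_update({"temperature_c": 28.0})  # centralized policy control
```

The local loop never waits on the network, while the audit log and policy channel preserve the system-wide visibility and control the cloud layer provides.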

Ethics Must Be Built In

Embedding human values in AI systems also means thinking beyond performance. AI has the potential to influence how people interact with public and private spaces, and with each other. That influence should be guided by ethical design choices and the removal of biases from the very beginning.

Microsoft’s 2025 Responsible AI Transparency Report found that over 75 percent of organizations using responsible AI practices reported improvements in areas such as data privacy, customer experience, and confidence in business decisions. Responsible AI development includes considerations such as minimizing unnecessary data collection, preserving user privacy, and ensuring systems are used fairly and transparently. It also involves reducing energy consumption and supporting sustainability whenever possible. These goals do not conflict with efficiency; they often go hand in hand.

The key is to treat these values as foundational. Ethical considerations should not be tacked on at the end of the development cycle. They should inform the system architecture, operational policies, and user interface from day one.

The Role of Human Leadership

Finally, preparing people for an AI-augmented future is not merely a technical task; it is a leadership challenge. Organizations need to cultivate a culture where human input is valued, where teams are empowered to ask questions, and where technology is used to enhance rather than replace human capabilities.

This means offering training that goes beyond the technical, including discussions around decision-making, responsibility, and transparency. It also means designing policies and processes that give humans a meaningful role in system oversight, even when automation is fully operational.

The future of AI in physical environments depends on our ability to keep people engaged and informed. Smart systems can offer tremendous benefits, but only if they are designed with human oversight, accessibility, and ethics in mind.

As adoption continues to grow, organizations should prioritize AI design that is efficient, transparent, and usable. This is the path to building systems that earn trust and deliver lasting value.
