Model Context Protocol Explained: Insights from Dremio CTO Rahim Bhojani

This exclusive Q&A with Rahim Bhojani, CTO of Dremio, offers an introductory explanation of the model context protocol and why it is becoming the backbone of agentic AI.
As generative AI continues to evolve from passive information retrieval to active decision-making and workflow execution, a new technical standard is emerging to support this transition: the Model Context Protocol (MCP). Promoted by major AI providers and already being implemented across enterprises, MCP enables large language models (LLMs) to interact with external tools and systems in a secure, structured, and scalable way.
Anthropic, the company behind Claude, was among the first to define MCP, but the protocol has since gained wide adoption, with companies like OpenAI, Microsoft, AWS, and GitHub integrating MCP support into their platforms. Now, forward-looking data and analytics providers are also embracing the standard to make AI more actionable within their ecosystems.
In this Q&A, Dremio CTO Rahim Bhojani shares his perspective on why MCP is critical to the future of agentic AI and enterprise data access. Through a series of questions curated by Solutions Review Executive Editor Tim King, Bhojani explains how MCP servers work, the benefits they offer for performance and security, and how Dremio leverages the protocol to deliver natural language access to complex data systems.
This conversation is a must-read for IT and data leaders seeking to understand the infrastructure behind agent-powered workflows—and how to prepare their organizations for the next wave of AI integration.
Model Context Protocol Explained
Question 1: There has been a lot of talk about Model Context Protocol servers these days. Can you explain what they are and why they are so popular?
Answer: The Model Context Protocol (MCP) is an open standard that enables AI models and agents to interact with external tools, data sources, and systems in a structured, standardized way. Instead of building custom integrations for each application, developers can expose tools via MCP, allowing AI to access live data, invoke actions, and navigate complex workflows securely and efficiently.
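To make that concrete, here is a minimal sketch of what exposing a tool over MCP can look like, based on the official MCP Python SDK's FastMCP interface. The server name and the get_forecast tool are illustrative assumptions for the example, not part of the protocol itself.

```python
# Minimal sketch of an MCP server exposing one tool, assuming the official
# MCP Python SDK (FastMCP). The tool and its return value are illustrative.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("weather")

@mcp.tool()
def get_forecast(city: str) -> str:
    """Return a short forecast for the given city."""
    # In a real server this would call a live data source or API.
    return f"Forecast for {city}: sunny, 22°C"

if __name__ == "__main__":
    # stdio is the simplest transport; HTTP-based transports are also supported.
    mcp.run(transport="stdio")
```

Once a tool is published this way, any MCP-aware host or agent can discover and invoke it without a custom connector.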
MCP is gaining momentum as AI shifts from passive question-answering to action-oriented agents. It reduces integration overhead, improves interoperability, and enables Large Language Models (LLMs) to work across enterprise systems, including analytics platforms, productivity tools, and code environments.
MCP adoption is growing across industries due to its ability to provide a critical bridge between AI systems and enterprise data platforms. Specific adoption examples include:
- AI Providers: OpenAI, Anthropic (which introduced the protocol), Google, Microsoft, AWS, and GitHub are actively supporting MCP across their AI platforms.
- Tech Companies: Replit, Zed, Sourcegraph, Codeium, and JetBrains are embedding MCP to enhance developer workflows.
- Enterprises: Block (Square), Apollo, Goldman Sachs, AT&T, HubSpot, and PayPal are using MCP to connect agents to internal tools, CRMs, and data systems.
- Data Platforms: Dremio uses MCP to enable AI agents to query data, explore metadata, and troubleshoot performance—all through natural language interactions.
- MCP Tool Providers: Stripe, Cloudflare, IBM, Nasuni, Apify, Composio, Glama, and PydanticAI offer MCP-compatible services or toolkits.
Developers can also explore public MCP servers and tools on platforms like OpenToolChain.org, mcp.tools, and Anthropic’s GitHub. In short, MCP is quickly becoming the backbone of agentic AI—bridging LLMs with the tools and data needed to act, not just answer.
Question 2: What constitutes an MCP server and how does this differ from other offerings?
An MCP server is a specialized interface that connects AI models—especially LLMs—to external tools, data sources, and systems. It acts as a conduit that enables AI to retrieve relevant context or take specific, meaningful actions. By exposing tools, resources, and structured prompts through a standardized interface, the MCP server translates natural language-based requests into concrete operations like database queries, API calls, or file manipulations.
What sets MCP servers apart is their LLM-first design. They expose tools in a standardized, human-readable format, enabling models to understand and invoke them without custom code. MCP servers also manage context provisioning, handle data formatting, and support authentication workflows, which removes the friction typically associated with integrating AI into real-world systems.
There are four primary features that distinguish MCP servers from traditional integrations: semantic tool definitions that LLMs can understand; dynamic tool discovery at runtime; context-aware interactions rather than simple data transfer; and a standardized structure across diverse services. For example, in platforms like Dremio, an MCP server allows AI agents to explore datasets, generate SQL, and run queries without prior knowledge of the data or manual setup. It also unlocks more natural, automated interactions between users and their data systems.
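The dynamic-discovery point is easiest to see from the client side. The sketch below, again assuming the official MCP Python SDK, connects to a server and lists its tools at runtime; the server command (weather_server.py) is a hypothetical placeholder.

```python
# Sketch of runtime tool discovery from an MCP client, assuming the official
# MCP Python SDK. The server being launched here is a hypothetical example.
import asyncio
from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

async def main():
    params = StdioServerParameters(command="python", args=["weather_server.py"])
    async with stdio_client(params) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            # The host application (and, through it, the model) discovers the
            # available tools, their descriptions, and input schemas at runtime.
            tools = await session.list_tools()
            for tool in tools.tools:
                print(tool.name, "-", tool.description)

asyncio.run(main())
```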
Question 3: In terms of security, does MCP strengthen security, or does it create additional challenges?
MCP is an open and rapidly evolving standard, with upcoming features like OAuth support aimed at strengthening access control. Its open-source nature promotes flexibility but, like any emerging technology, it brings new security considerations such as:
- Token Management: MCP servers often store tokens to access external services. If mishandled, these tokens can be exploited to access sensitive data.
- Prompt Injection: Malicious inputs can manipulate AI behavior, leading to unintended or harmful actions via the MCP interface.
- Tool Description Poisoning: Altered tool definitions can mislead the model into unsafe operations.
As with any new technology, these risks reinforce the need for strong security practices such as secure token management, access control, input validation, and monitoring. As the protocol matures, more robust safeguards are likely to become standard.
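As a hedged illustration of those practices, the sketch below keeps the service token out of code and prompts and validates tool inputs before acting. The tool, the environment variable name, and the validation rule are assumptions made for the example, not requirements of the MCP specification.

```python
# Illustrative hardening inside an MCP tool: the token comes from the
# environment (not the prompt or source code), and inputs are validated
# before any downstream action. Names here are illustrative assumptions.
import os
import re
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("secure-example")

# Expected to be set in the deployment environment; never hardcode or log it.
API_TOKEN = os.environ["SERVICE_API_TOKEN"]

@mcp.tool()
def lookup_customer(customer_id: str) -> str:
    """Look up a customer record by ID."""
    # Strict input validation limits the blast radius of prompt-injection
    # attempts that try to smuggle unexpected values into tool arguments.
    if not re.fullmatch(r"[A-Z0-9-]{1,32}", customer_id):
        raise ValueError("invalid customer_id")
    # ... call the downstream service using API_TOKEN over TLS ...
    return f"Customer {customer_id}: <redacted summary>"
```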
Question 4: How does MCP impact performance?
MCP can significantly improve the speed and efficiency of integrations between LLMs, AI agents, and external services—whether those are data platforms, automation tools, or operational systems. By standardizing how these components communicate, MCP reduces the need for complex, custom-built connectors and minimizes redundant API calls. This streamlined interaction model can lead to faster response times and more responsive AI-driven workflows.
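One simple way to see the "fewer redundant calls" point in practice is server-side caching of expensive lookups, as in the illustrative sketch below; the catalog tool and cache policy are assumptions for the example, not a prescribed MCP feature.

```python
# Illustrative sketch: caching an expensive metadata lookup inside an MCP
# server so repeated agent requests do not hit the backend twice.
from functools import lru_cache
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("catalog")

@lru_cache(maxsize=256)
def _fetch_table_schema(table: str) -> str:
    # Placeholder for an expensive metadata call to a data platform.
    return f"schema for {table}: (id INT, created_at TIMESTAMP, ...)"

@mcp.tool()
def describe_table(table: str) -> str:
    """Return the schema of a table, served from cache when possible."""
    return _fetch_table_schema(table)
```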
Question 5: Who benefits from MCP? Can you explain sample use cases and titles of those who leverage this approach?
Thanks to the widespread adoption of MCP, the applications are virtually limitless and appeal to a wide range of users, including:
- Developers and Engineers who are integrating AI models with development tools like GitHub or IDEs to automate code reviews and generate documentation.
- Data Analysts who are connecting AI to databases for real-time data analysis and visualization, enabling natural language queries over complex datasets.
- Productivity Professionals who are automating routine tasks such as scheduling meetings, managing emails, or organizing files by linking AI assistants to services like Google Calendar and Slack.
- Security and IT Teams who are monitoring and controlling AI’s interactions with sensitive systems, ensuring compliance and mitigating potential risks associated with AI integrations.
MCP unlocks even greater value by allowing AI agents to assist users with data discovery, enabling SQL-free insights through natural language queries, and helping troubleshoot issues by analyzing query profiles or system metrics. For example, a user might ask, “Why is this dashboard slow today?” and an MCP-connected agent can identify the bottlenecks and suggest optimizations. These agents can also support context-aware exploration by surfacing relevant datasets based on user permissions, data freshness, or semantic meaning.
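As a purely hypothetical illustration of that troubleshooting flow, the sketch below shows the kind of tool an agent might call to answer such a question. The tool name, fields, and sample output are invented for the example and do not represent a specific Dremio API.

```python
# Hypothetical MCP tool an agent might call when a user asks why a dashboard
# is slow; names and returned fields are illustrative only.
import json
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("analytics-assistant")

@mcp.tool()
def get_slowest_queries(dashboard_id: str, limit: int = 5) -> str:
    """Return the slowest recent queries behind a dashboard, as JSON."""
    # In practice this would read query profiles or system metrics; the agent
    # then reasons over the results and suggests optimizations in plain language.
    sample = [
        {"query_id": "q-123", "duration_ms": 45000, "bottleneck": "large scan"},
        {"query_id": "q-124", "duration_ms": 30000, "bottleneck": "skewed join"},
    ]
    return json.dumps(sample[:limit])
```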
Ultimately, MCP empowers users across roles—whether technical or not—to engage with systems and data more intuitively, efficiently, and securely.