Data Privacy Week 2026: Key Insights from 40 Experts in the Field

Solutions Review editors sourced this definitive roundup of expert quotes on Data Privacy Week 2026 from Insight Jam, the world’s largest forum on the human impact of AI.
For Data Privacy Week 2026 (January 26-30), it’s essential to spotlight the evolving landscape of digital rights and personal data protection. This year’s theme underscores the critical balance between leveraging technology for advancement and ensuring the confidentiality and integrity of individual data. As we navigate through waves of technological innovation, from AI-driven analytics to IoT proliferation, the question of how to protect personal information while fostering progress becomes increasingly complex.
This roundup features insights from leading experts who dissect the nuances of data privacy today. They explore the challenges we face in safeguarding digital identities, the emerging threats to our online spaces, and the innovative strategies being developed to secure personal information against unauthorized access.
Their perspectives shed light on the importance of proactive measures, the role of legislation, and the individual’s part in maintaining their data privacy.
Note: Data Privacy Week 2026 quotes are listed in the order we received them.
Data Privacy Week 2026: Expert Insights
Chris Millington, Global Solutions Lead at Hitachi Vantara
“Cyber resilience maturity is still extremely low. Many businesses are pinning their future hopes on solution-in-a-box products to stay safe and remain operational. What they need are targeted resilience strategies. Attacks vary, so there’s no single way to fix this. Businesses need a multi-pronged approach that includes reliable and secure data infrastructure, efficient and dependable backup, anomaly detection and malware scanning, and the ability to recover within minutes. We haven’t seen enough of that in the last 12-18 months.”
Ravi Soin, CISO at Smartsheet
“As we mark Data Privacy Week 2026, we must recognize that privacy isn’t something we can check off our list once a year. It’s a fundamental right that requires our constant attention and action. We need to go beyond awareness campaigns and make privacy a core part of everything we do—how we design products and systems, how we manage security and risk, how we choose and oversee vendors, and how we lead our teams and shape our culture.
We must hold vendors accountable for how they secure our data, especially as AI adoption accelerates. Vendors should be transparent about how customer data is accessed, protected, and retained, because customer data belongs to customers. Period. Organizations should have clear control over if and how their data is used to train or improve AI, and vendors should clearly disclose those practices. Prioritizing data privacy pays dividends: it helps reduce exposure to security threats and data leakage as AI scales, and it reinforces confidence in the organization, strengthening customer trust.”
Bobby Ford, Chief Strategy Officer at Doppel
“As technology advances, so do the attackers using it. We’re seeing identity-based threats evolve faster than ever, with adversaries learning to exploit the trust people place in AI platforms. These platforms provide a rich source of intelligence for those looking to impersonate, manipulate, or deceive. The challenge isn’t that people are unaware, it’s that the positive impact of their use seems to outweigh the negative consequences of their misuse. Our responsibility now is to close that gap; to build awareness, resilience, and safeguards that evolve as fast as the threats themselves.”
Cynthia Overby, Director Strategic Security Solutions, zCOE at Rocket Software
“In today’s complex digital economy, data protection must now sit at the center of every enterprise’s resilience strategy. As organizations accelerate modernization with cloud and AI-driven technologies, sensitive operational and customer data have become both more valuable and more vulnerable. While these innovations expand productivity, they also introduce new attack surfaces that sophisticated threat actors are quick to exploit with advanced AI tools. When critical data is compromised, the impact extends beyond financial losses to reputational damage, regulatory exposure, and long-term business continuity.
Data confidentiality cannot be sustained with siloed controls or point solutions. It must be embedded within a broader cyber-resilience strategy that spans identity, infrastructure, and people. This includes strong multi-factor authentication, encryption for data at rest and in transit, proactive vulnerability assessment, and disciplined device policies. Just as importantly, ongoing employee awareness and training are essential to reducing risk in an increasingly complex threat environment.
As enterprise environments become more interconnected, platforms like the mainframe are no longer operating in isolation, with open technologies and hybrid connectivity introducing new vulnerabilities. Meeting today’s data protection challenges requires a fundamental shift in how core technologies are secured. Organizations must strengthen protections around the mainframe as they adopt open technologies, manage new attack surfaces introduced by AI, and move toward cyber-resilience strategies that unite security controls with employee awareness. When organizations elevate operational resilience and modernize critical systems, they create a foundation that enables data to move securely across hybrid environments while preserving customer trust and keeping essential business operations running without disruption.”
Mike Baker, Vice President & Global Chief Information Security Officer at DXC Technology
“What’s important to consider during Data Privacy Week is the rate of change with AI far exceeds what we saw with cloud. We don’t have years to understand AI and determine its precise business value. With AI, that urgency is eight, even 10-fold, where if you’re not on board in three to six months, you may never catch back up. Just look at the sheer amount of zero-day exploits in the last 24 months. In most cases, there aren’t legions of keyboard warriors behind these attacks, rather models manipulated to incessantly probe and penetrate at machine speed and scale.”
Vijay Pawar, SVP of Product at Quokka
“Mobile devices have become one of the most sensitive — and least governed — data environments in modern organizations. Smartphones routinely store authentication credentials, personal communications, financial information, and direct access to corporate systems. When a mobile device or app is compromised, attackers can quietly collect and exfiltrate sensitive data at scale, often without the user’s awareness. From a privacy perspective, this creates significant risk, particularly as organizations face increasing scrutiny around how personal and regulated data is accessed, processed, and protected.
Best practices now require a layered security approach that treats mobile apps as first-class data processors. While device management and network controls remain important, they are no longer sufficient on their own. Attackers are increasingly embedding AI capabilities directly into mobile applications to identify valuable data, adapt behavior to avoid detection, and operate in ways that appear legitimate to users and app marketplaces. This raises serious concerns for consent, transparency, and data minimization.
Looking ahead, organizations should expect mobile threats to become more adaptive, autonomous, and difficult to audit using traditional methods. To meet both security and privacy obligations, companies need deeper visibility into the mobile applications accessing their data, including insight into app behavior, permissions, third-party SDKs, and embedded AI functionality. Proactive analysis and continuous monitoring will be critical for maintaining compliance, protecting user trust, and ensuring sensitive data is not misused as mobile ecosystems continue to evolve.”
Jimmy Mesta, CTO & Co-Founder at RAD Security
“Static data maps aren’t enough. If you can’t see how sensitive data moves, you can’t secure it. Most privacy programs are still anchored in where sensitive data is stored. But in modern, cloud-native environments, risk comes from what’s moving, not from what’s sitting idle. PII flows across services, containers, regions, and APIs faster than legacy tools can track. If you only look at storage, you’re blind to exposure paths, cross-boundary violations, and unauthorised access in flight.
Security teams need real-time observability beyond data location and into data behaviour. That means understanding how sensitive data is accessed, transmitted, and transformed, and whether that behaviour aligns with policy and compliance requirements.
On Data Privacy Day, the call to action is to lock down what you know, and get visibility into what you don’t. Data privacy is a flow problem, not a storage problem.”
Dan Balaceanu, Chief Product Officer and Co-Founder at DRUID AI
“Data privacy is the first thing to consider when building an IT system, especially an AI solution. It is not a naïve architectural choice. Intoday’s world, IT solutions are composed of multiple services, often distributed, integrating LLM providers, vision providers, line-of-business applications, and automations. Ensuring data privacy in such complex ecosystems requires expertise.
As a solution provider, Druid takes full responsibility for keeping data private by hosting the required technologies within its own environment and validating that all connected technologies comply with data privacy regulations.”
Craig Ramsay, Senior Solutions Architect at Omada
“Know Who Has Access and Why
For organizations, effective data privacy begins with visibility into identities: employees, contractors, partners and, now more than ever, NHIs. Continuously review and monitor access to ensure it is appropriate and being used responsibly.
For individuals, understand which accounts, apps or providers have access to your personal information and remove those you no longer use or trust.
Excessive Access is a Privacy Risk
Privacy is not only about preventing breaches; it is about preventing unnecessary exposure. Organizations should enforce least privilege access and remove dormant or excessive permissions. Individuals should limit the data they share and avoid “over-consenting” to apps and services.
Access should be Contextual and Secure
Organizations should move beyond one-time access decisions providing long standing permissions. Use contextual controls such as risk-based authentication, time-bound access, and continuous verification, especially for sensitive or regulated data.
Individuals should enable multi-factor authentication and stay alert to unusual login activity.
Demand Transparency and Accountability
Organizations should be able to explain and prove who accessed personal data, when, and for what purpose. Clear audit trails and access reviews are essential for trust and compliance.
Individuals should expect transparency from organizations and exercise their rights to understand how their data is used.
Non-Human Access
APIs, service accounts, bots, and AI agents often have broad access to personal data. Organizations must govern these identities with at least the same rigor as human identities. Ideally these identities should have ephemeral access based on JiT (just-in-time) principles.
Individuals should be aware that automated systems increasingly interact with their data and should expect responsible oversight.
Equate Privacy with Trust
Strong identity practices enable organizations to protect privacy while still innovating and providing value to their customers. For individuals, choosing services that demonstrate responsible access controls is your best bet to avoid your data being misused or stolen. Look out for organizations that demonstrate compliance with the California Consumer Privacy Act, GDPR or equivalent regulatory frameworks.
Ultimately, privacy starts with identity. When access is intentional, transparent, and well-governed, both organizations and individuals are better positioned to protect data and build trust.”
Spencer Kimball, CEO and Co-Founder at Cockroach Labs
“Data sovereignty doesn’t fail because people ignore policy. It fails because systems weren’t designed to enforce boundaries when things go wrong. In steady state, almost anything looks compliant. Under failure—regions go dark, providers misbehave, failovers trigger—that’s when data crosses borders unintentionally and risk compounds fast.
The real signal from regulations like GDPR isn’t about paperwork or checklists. It’s a warning that architecture matters more than intent. As data protection laws proliferate and geopolitics continue to shift, sovereignty has to be a property of the system itself. The future belongs to architectures that can localize data by default, preserve guarantees under stress, and adapt as rules inevitably change.”
Sergio Gago, CTO at Cloudera
“Data Privacy Week offers an opportunity to reflect on what “data privacy” really means in practice, especially as AI changes the rules by the minute. With AI now a key component of critical workflows, data privacy is no longer just about access. It encompasses other major components, including where AI runs, how inference is performed, and whether organizations can maintain control across the full AI lifecycle.
This shift is already reshaping how organizations think about long-term data protection and accountability, as well as how the government will begin enforcing data sovereignty and transparency. With this, data privacy will begin to be talked about in tandem with private AI. Organizations will recognize that this is not only a technical choice but also a strategic one, as companies can anticipate regulations, reduce reputational risk, and build trust with their customers and stakeholders. This week helps not only to create awareness of the multifaceted and global challenge, but also to establish very clear and actionable tactics. This is not just about “keeping your data secure.” It is about feeding that data and skills to digital colleagues (agents) that are context aware; they know which data they can access and to whom they can talk about it; where the information resides and where to store it. It is about complete data lineage, authority and trust and compliance. From large banks to public institutions. But it is also about sovereignty. These systems (LLMs and beyond) require complete control of the pipeline. Which models you deploy, where and how establish true sovereignty and governance.
Companies that can integrate these principles will gain a competitive advantage from their most sensitive proprietary data, but also limit their liabilities. They will be able to collaborate in regulated digital ecosystems, participate in European or governmental programs, and differentiate themselves through reliability and technological responsibility.
For data privacy, private AI and Sovereign AI integrated with open models are crucial. They enable companies to transform artificial intelligence into a true strategic asset that generates value while simultaneously protecting the business and critical data.”
Nick Kathmann, CISO at LogicGate
“Domain-specific AI enable deeper expertise than generic LLMs – And make guardrails for protecting industry-specific data an even higher priority
Enterprises want to generate real, meaningful ROI from their AI tools by integrating deeper expertise into their models. These priorities will fuel a shift away from generic LLMs with broad (but shallow) knowledge pools and toward domain-specific AI models built on industry-specific data that allows them to provide a greater level of depth and context. It won’t be uncommon to see a single application or solution with many domain specific models tuned for specific use cases. However, despite the hype around these smaller, domain-specific models, they also create new vulnerabilities for attackers to exploit to gain access to confidential information like customer data, intellectual property, financial and legal particulars, and more. The same data security concerns associated with LLMs will not only carry over to domain-specific AI models, but may even be more difficult to prevent due to the prevalence of industry-specific (and often sensitive or proprietary) data and lack of talent around AI security, which is still an emerging talent pool. That’s why the most trusted, successful domain-specific models will be those backed with robust and transparent data governance protocols – and why enterprises will need to prioritize establishing data guardrails to ensure their confidential data doesn’t wind up leaked or compromised.
The Risk of AI browsers
AI browsers feed off the mantra, “think smarter, not harder.” People are busy, and the thought of leveraging AI to complete chore-like tasks such as responding to emails, booking flights or scheduling meetings is appealing. But this mentality has consequences, as security is often sacrificed for the sake of convenience. Search engines already have access to a plethora of personal information from users; what happens when this data is stitched together and access transcends to action?
To maximize benefits, AI browsers require access to as much user data as possible, in order to perform tasks on their behalf. The possible risks are endless: editing or deleting documents, approving permissions, making unintended purchases, posting to social media, changing account settings or passwords, uploading private files to the wrong place or unintentionally exposing credentials. And the determining factor for whether these risks become reality is the success rate of prompt injection attacks. Hackers rely on malicious code hidden from the human eye, which can share commands directly with agentic AI. Multi-modal adds significant complexity to attack mitigation as well. Currently, these offerings lack sufficient safeguards, and most users lack the experience and expertise to recognize the threats and protect themselves appropriately. If prompt injection is the gasoline, AI browsers are the match.”
Soniya Bopache, Senior Vice President and General Manager at Arctera
“Compliance isn’t a static, box-ticking exercise – it’s an ongoing obligation organizations must be able to evidence at all times. Data Privacy Day is an opportunity to focus on this shared requirement.
As the regulatory landscape evolves, organizations are under growing pressure to demonstrate clear governance over how data is accessed, protected, and recovered – not just in policy, but in practice.
With many individuals unclear on how their personal data is used or controlled, organizations must be able to clearly demonstrate lawful processing, appropriate safeguards, and oversight across the full data lifecycle. This is especially the case as more organizations embed AI into processes.
Strong compliance isn’t just about avoiding regulatory penalties; it is fundamental to maintaining trust, proving resilience, and sustaining long-term confidence in digital services.”
Kurt Markley, Managing Director at Apricorn
“Data Privacy Day is a good opportunity for organizations to take a step back and look at how data actually moves through their business. In many environments, sensitive information doesn’t just live in a single system anymore. It’s shared with remote employees, partners, and third parties, often across multiple platforms. Protecting privacy starts with having a clear picture of where that data lives, how it’s accessed, and where the weak spots might be, rather than assuming traditional perimeter controls are enough.
At a practical level, strong encryption, clear policies around data handling, and reliable backup practices still do a lot of heavy lifting. Just as important is making sure employees understand what’s expected of them when they’re working with sensitive information, especially outside the office. The organizations that tend to get this right aren’t chasing the latest headline or regulation. They’re putting in daily effort to build data protection into everyday workflows, which makes privacy easier to maintain and builds trust over time.”
Steve Visconti, CEO at Xiid
“Data Privacy Day is a reminder that privacy and security are inseparable. Today’s greatest privacy failures often result from preventable network intrusions. Recent attacks have shown how easily adversaries can access sensitive data through lateral movement. Criminals using AI and phishing techniques need just one privacy failure in order to sneak in, quietly traversing systems for months and siphoning data along the way. When we rely on employee protocols and security alerts to detect and triage data breaches, we’re already fighting an uphill battle.
This week is a good time for leaders to reevaluate how their security architecture can eliminate the risk of a breach. The path forward is prevention by design: architect environments so only the right applications can communicate, and everything else is invisible and unreachable. With software-defined pico-segmentation, lateral movement becomes structurally impossible; attackers can’t access what they can’t reach, no matter how fast they move with AI.
If you’re not preventing by design, you’re designing in risk.”
Cabul Mehta, Industry Principal at Presidio
“Without tech modernization and strong AI governance, healthcare organizations are on a dangerous path and risk widening an already growing trust and security gap. Data privacy is a top concern for patients when asked about their healthcare providers adopting AI tools. According to a recent survey Presidio conducted of 1,000 U.S. consumers, only 32% said they are very confident that their provider protects their personal info from cyber threats, and nearly 1 in 4 are uncomfortable with AI in any role. This problem is becoming impossible to ignore as clinician burnout pushes some frontline workers toward unsanctioned shadow AI workarounds – right when patients are already questioning whether their data is safe.”
Octavian Tanase, CPO at Hitachi Vantara
“Enterprises have long prioritized cybersecurity, but it hasn’t always been embedded as a core requirement in infrastructure contracts. That’s changing, as cybersecurity is quickly becoming a non-negotiable service guarantee. Additionally, insurers are demanding telemetry to validate compliance and risk posture. Vendors will respond by implementing zero-trust architectures using immutable-by-default data snapshots, granular data management, air-gapped recovery environments, and rehearsed recovery time objective commitments.
Traditional cybersecurity approaches can leave organizations vulnerable, which is why enterprises must now treat cybersecurity infrastructure as critically as the systems themselves, with clear accountability and measurable guarantees. Those organizations that lack a comprehensive approach to cyber resilience — one that leverages AI and spans monitoring, detection, recovery, and remediation — will struggle to meet customer requirements. However, the vendors who can deliver verifiable cybersecurity commitments will have a competitive edge.”
Zak Hemraj, CEO and Co-Founder at Loopio
“On Data Privacy Day, it’s clear that AI is changing how we handle sensitive data. As AI and automation become part of everyday work, keeping data secure in processes like RFPs matters more than ever. Last year, 68% of teams used AI in their RFP workflows, and 70% of those teams relied on it weekly. With AI handling more and more confidential business information, the risk of exposure is only getting bigger.
That’s why companies need to go beyond securing their own data and make sure their vendors are held to the same high standards. Protecting data is a shared responsibility. Organizations must put data security front and center to reduce risk, prevent breaches, and stay compliant. At Loopio, we’re committed to leading with responsible AI and making data privacy a foundation of everything we do.”
Tilman Harmeling, Strategy & Market Intelligence at Usercentrics
“AI systems now shape content and decision making at scale, meaning trust can no longer be assumed or retrofitted. This, coupled with the increased demand for more data within AI workflows is fundamentally changing how we think about privacy going forward. As AI becomes embedded in everyday decisions, the old privacy playbooks built for cookies, banners and passive consent are no longer effective. Recent data reveals 59% of consumers are uncomfortable with their data being used to train AI. This reflects the new reality we’re in where people are more aware of how their data is being used and more willing to disengage from brands that fail to respect their choices. As a result, compliance can no longer be tacked on as an afterthought. The organizations that overcome this challenge will be those that treat consent and transparency as a foundation to AI innovation.”
Gene Moody, Field CTO at Actian
“For Data Privacy Day, the focus should start with a simple but powerful principle: control what you share. In today’s hyperconnected world, every app, service, and online interaction collects fragments of your personal information, often without your full awareness. While data breaches may only affect a fraction of the vast volumes collected, the consequences are still significant, with stolen data reaching Exabyte scales(1 EB = 1M TB) . Users can take immediate steps by practicing better security hygiene: use strong, unique passwords, enable multifactor authentication, limit app permissions, and regularly review what information they share online. Importantly, recognize that convenience often comes at the cost of privacy; not every platform or service has your best interests in mind, in fact most do not. By slowing down, thinking critically about what we share, and taking proactive measures, individuals can regain some control over their digital footprint, reducing both exposure and potential harm, and contributing meaningfully to a culture of privacy and security awareness. My research shows approximately 30% of data stolen was essential to the service it was given to, and that service being essential to the victims need. The other 70% was complete willing forfeiture for entertainment, social interaction, Et alia. So be careful what you share with whom you share it, because most of the time, you never really know who is protecting it!”
Danny Manimbo, Managing Principal at Schellman
“Data Privacy Day is a reminder that many of the risks organizations face with AI and data are not new. What has changed is scale. As AI and automated systems become more deeply embedded in business operations, long-standing gaps in data governance, ownership, and oversight become harder to contain and easier to expose.
AI amplifies these existing privacy risks. When data governance is weak, those gaps surface faster and with greater impact — through biased outcomes, unreliable outputs, and limited visibility into how decisions are made. This is why privacy and AI risk cannot be managed as ad hoc compliance exercises. They must be treated as material risk domains, governed through structured, repeatable processes aligned to recognized management system frameworks such as ISO 27701 for privacy information management and ISO 42001 for AI management.
Used together, these frameworks reinforce disciplined governance by requiring organizations to assess risk upfront and continuously through privacy and AI impact assessments. These assessments help organizations understand how personal data is used, how automated decisions may affect individuals, and where controls, human oversight, and accountability are needed across the AI lifecycle. Building trust requires more than strong policies — it requires evidence that risks are identified, mitigated, and monitored as systems scale.”
Michael Gray, CTO at Thrive
“Data Privacy Day is a reminder that trust is now one of the most valuable assets organizations have, and one of the easiest to lose. As businesses rely more heavily on data and AI to drive decisions and automation, privacy can no longer be treated as a compliance exercise alone.
Strong data protection starts with understanding what data you collect, why you need it, and how it is governed throughout its lifecycle. In 2026, poor data hygiene does not just create privacy risk, it undermines the reliability of AI systems built on that data. Regulatory scrutiny will continue to increase, but the greater risk for organizations is reputational damage when customers lose confidence in how their information is handled.
Companies that prioritize transparency, accountability, and restraint in their data practices will be better positioned to build lasting trust in a data-driven world.”
Brett Tarr, Head of Privacy & AI Governance at OneTrust
“Data Privacy Day is a reminder that privacy isn’t just about compliance; it’s about trust, accountability, and how organizations earn the right to use data responsibly. As we look forward in 2026, we are seeing some shifting tides in the world of privacy and AI governance, both domestically and across the globe. Increasingly, the scales are shifting towards economic competitiveness across regions.
Within the US, additional states continue to deliver comprehensive privacy policies, but at the federal level there is a shift in focus towards pre-empting disparate state AI regulations to pursue competitive advantages for US AI leadership.
From an AI perspective, changes in regulation don’t mitigate the underlying need for AI governance, it just shifts how and why we deliver governance controls. Customers expect businesses to take care of their data and statistics show that customers flee brands that are careless with the data they are entrusted with. Even if the regulatory environment shifts to fewer controls, market conditions demand that companies pick up the slack if regulations recede, and responsibility for AI governance simply shifts from compliance requirement to a business imperative.”
Sundaram Lakshmanan, SVP & GM, Data Security at Fortra
“AI represents a generational leap in technology, and with it come new challenges that society will only fully understand over time. Privacy is one of the biggest concerns, and not all AI providers handle it in the same way. As laws evolve to catch up, users can protect themselves by avoiding the sharing of sensitive personal information such as identity details, financial or health documents, or family photos when seeking advice from AI tools. Digital images often contain hidden data like location and timestamps, and being aware of this is an important part of staying in control of your privacy.
The always‑on digital world has blurred the boundaries between personal life and work. People often use their personal devices for work tasks and their work devices for personal ones without a second thought. A simple way to protect both privacy and corporate data is to use separate browser profiles, or even different browsers, depending on the situation. This small habit helps maintain personal privacy while keeping organizational information secure.
Most people don’t realize how much information apps and websites collect beyond what they type into forms. Modern web tools track a wide range of online activity including uploaded photos, interactions, chats, hashtags, and mentions, which can all create a much larger data trail than users expect. To protect your privacy, people can adopt simple habits like using separate browser profiles, different browsers for different tasks, or distinct email addresses. These small steps help limit how much of your personal information gets aggregated.”
Stephen Manley, CTO at Druva
“Your greatest privacy threats are your well-intentioned co-workers. They’re using AI to do their job, but it exposes your organization’s weak data governance. Privacy through obscurity once kept us safe, but AI agents are shining a light on sprawled data, and what users can see is terrifying.
For decades, companies focused on securing sensitive data from external threats, but blocking external access does not resolve unclear data ownership, overly broad access, and poor access tracing. Before AI, a person might misuse data one record at a time. AI can access millions of records in seconds, and if permissions are broad or auditing is weak, there is no way to stop it.
Teams want to move fast, but the data environment is not ready. The only viable path forward is modernizing governance for an AI era: reduce access by default, make data use traceable end-to-end, and monitor how agents interact with sensitive data. You need a centralized policy and tools to govern all your data—from endpoint to data center to cloud. Otherwise, your co-workers will continue to be the biggest unintentional threat to your privacy.”
Ken Braatz, CTO at SupportNinja
“The safest data is the data you never store. AI doesn’t need Social Security numbers or credit card details to be effective – in fact, holding onto that kind of personal data just makes you a target. The real opportunity is using clean, connected, non-sensitive data to deliver better customer experiences without putting people at risk.”
Przemysław Grandos, Head of IT & Compliance at Catalogic Software
“After two decades in banking across InfoSec and AML, I’ve learned a simple rule: if you can’t evidence it, you don’t really have it. Privacy programs fail when they’re built on policies instead of controls and when ‘who has access to what data’ lives in tribal knowledge.
In 2026, the winning approach is simple: strong identity controls, least privilege, encryption by default, and audit trails that actually stand up to scrutiny. But there’s a piece many teams miss: resilience is part of privacy.
When ransomware hits, the pressure to ‘just restore something fast’ leads to shortcuts, unsafe reintroductions of malware, and bad decisions about paying. Treat backup and recovery as privacy controls: immutable copies, separation of duties, tight admin access, and routine restore tests. Regulators care about outcomes. Customers care about trust. Both care whether you can contain damage and recover cleanly.”
Gal Naor, CEO at StorONE
“Data Privacy Day is a reminder that privacy is not a feature added after the fact. It is a foundational design decision that must be embedded into the core of every data platform.
For years, organizations focused primarily on preventing breaches. Today, that approach is no longer sufficient. Data now spans on-prem environments, cloud and hybrid deployments, backups, archives, and AI pipelines. In many cases, privacy risk does not stem from external attackers, but from loss of control and unclear security policies across these environments.
A privacy-by-design approach starts at the architectural level, where data protection and data security are integrated rather than treated as separate layers. Organizations need the ability to enforce encryption policies that align with operational requirements, whether encrypting data at the software layer, at the drive level using self-encrypting drives, or both. Just as importantly, encryption must be flexible enough to apply globally or selectively, ensuring strong protection without limiting how data is used.
When organizations know exactly where their data resides, how it is protected, and who can access it, privacy becomes enforceable rather than aspirational. Combined with intelligent data placement strategies and reduced data duplication, this approach limits exposure and reduces the blast radius when incidents occur.
On Data Privacy Day, the message is clear. True data privacy is achieved through architecture, control, and resilience, not promises.”
Yoram Novick, CEO at Zadara
“The importance of data privacy and security can’t be overemphasized in today’s hyper-digital and increasingly fragmented world. With the vast increase in AI workloads, the question is no longer whether organizations should focus more on data privacy, but where and under whose control that data resides. Data sovereignty, digital sovereignty, and the rise of sovereign cloud and sovereign AI cloud platforms are becoming central to national resilience, enterprise risk management, and regulatory compliance.
As AI adoption accelerates, particularly for sensitive workloads such as healthcare, finance, defense, and government services, traditional public cloud models reveal growing limitations. Sovereign AI and sovereign AI cloud architectures address these gaps by ensuring that data, models, and operations remain under local jurisdiction, aligned with national regulations, and insulated from foreign access or extraterritorial control. This approach is becoming essential for organizations seeking to deploy AI responsibly while maintaining trust, compliance, and operational continuity.
Zero-trust architectures and intelligent security controls remain foundational to modern data protection. Identity-aware systems, multi-factor authentication, and continuous verification significantly reduce attack surfaces and help defend against threats such as credential theft and lateral movement. When combined with sovereign cloud and AI-ready infrastructure, these measures provide stronger protection than legacy perimeter-based approaches.
AI itself introduces both powerful opportunities and new risks in the context of data privacy and security. While AI-driven tools can enhance threat detection and operational efficiency, poorly governed AI systems can amplify vulnerabilities and compliance risks. Human oversight, transparent governance, and adherence to proven security principles remain essential. Importantly, deploying AI within sovereign AI cloud environments can materially reduce exposure to public cloud security incidents and regulatory uncertainty.
Data Privacy Day is a timely reminder that in an era defined by AI acceleration and geopolitical uncertainty, organizations must proactively embrace sovereign cloud and sovereign AI strategies to protect sensitive data, maintain digital autonomy, and build long-term trust in an increasingly interconnected world.”
Corey Nachreiner, CSO at WatchGuard
“Data privacy risk today isn’t primarily caused by attackers breaking through a firewall, it’s driven by identity compromise and the misuse of trusted access. We’re seeing threat actors rely more heavily on social engineering and AI-enabled deception to steal credentials, impersonate legitimate users, and quietly exfiltrate data. In many cases, these attacks start with something as simple as a deceptive link or download, underscoring the importance of user awareness alongside technical controls.
This shift is why protecting data now requires a simpler, more unified approach that combines identity, endpoint, and identity protections. When those layers operate in silos, gaps emerge that attackers are quick to exploit. Simple measures like verifying download sources, using multi-factor authentication, and maintaining strong credential hygiene can stop attackers even when credentials are targeted. With these practices, organizations can interrupt attacks much earlier, before credential theft turns into a data breach, regulatory exposure, or long-term reputational damage.”
Patrick Harding, Chief Architect at Ping Identity
“This week offers an opportunity to pause and assess the rapidly evolving landscape of digital trust, as privacy really boils down to choice and trust around how personal data is being used. Data privacy is no longer a passing concern for consumers – it has become a defining factor in how they judge brands, with three-quarters now more worried about the safety of their personal data than they were five years ago, and a mere 14% trusting major organizations to handle identity data responsibly.
Whether it’s social engineering, state sponsored impersonation or account takeover risks, AI will continue to test what we know to be true. As threats advance and AI agents increasingly act on behalf of humans, only the continuously verified should be trusted as authentic.
For businesses, the path forward is clear: trust must be earned through transparency, verification, and restraint in how personal data is collected and used. The businesses that adopt a “verify everything” approach that puts privacy at the center and builds confidence across every identity, every interaction, and every decision, will have the competitive edge.”
Melissa Bischoping, Head of Security Research at Tanium
“As AI agents and workflows become an undeniable part of the modern enterprise, data privacy expands into a complex ecosystem that many organizations are scrambling to understand and govern. The spirit of innovation that fuels technologists drives them to want to build, adopt, and integrate agentic AI, but fear of the unknown can bring pause. While AI has given us unprecedented ability to execute sophisticated workflows at speed and scale, we also understand that – if ungoverned and unchecked – it can introduce unprecedented risk and loss of data at that same scale.
To lead responsibly as an AI-forward technologist, build on a strong foundation of data governance and visibility first. Understanding the scope and permissions of agents, the data resident on systems interacting with other AI tools and infrastructure and having safeguards where there is always a human-in-the-loop to validate actions will reduce the risk of unexpected data loss through misconfiguration. Data privacy in the era of AI requires a clear, accurate, real-time answer to the questions, “What AI agents exist in my environment? What data/systems can they access? Under what permissions can they access systems? And do I have governance and controls to ensure autonomous workflows and agentic actions can be traced and audited with confidence?”
Agentic AI is transformational for every organization, but its transformation must be responsibly built on foundations of visibility, governance, and human oversight to protect privacy and resilience.”
Martin Raison, Co-Founder and CTO at Nabla
“Data Privacy Week is an important reminder for leaders that as AI becomes more embedded in enterprise workflows and decision-making, governance plays just as pivotal a role as accelerating technical capabilities. In healthcare, AI is a huge asset – it can analyze patient data, including medical history, scans, and lab results, to identify the root causes of health conditions. However, in a field where data privacy is so top of mind, these capabilities require strong guardrails and human oversight to be deployed safely. Often, we see companies rush to accelerate AI capabilities and deploy agents without extending data access policies or understanding how they act on behalf of users. This amplifies risk and erodes trust. The most successful AI strategies will treat privacy and security as foundational principles rather than afterthoughts.”
Mark Wojtasiak, SVP of Product Research and Strategy at Vectra AI
“Taking control of data in the AI enterprise isn’t about writing better policies—it’s about building resilience into how systems behave. As data moves continuously across identities, clouds, SaaS, and automated workloads, privacy failures don’t start with a single breach. They happen when organizations can’t see or respond fast enough as behavior changes.
Privacy by design only works when teams can detect abnormal access early, contain misuse quickly, and limit blast radius when controls inevitably fail. That requires continuous visibility into identity and network behavior—not assumptions based on static rules or one-time reviews.
In resilient organizations, privacy isn’t something you hope holds—it’s something you can measure and prove under pressure. The ability to detect, contain, and recover from misuse is what ultimately determines whether personal data stays protected in an AI-driven world.”
Anthony Cusimano, Director of Solutions Marketing at Object First
“From grain to gold to bitcoin, currency has taken a variety of shapes throughout history. Today, data is our currency. Your personal information is bought, sold, and exposed via real-time bidding (RTB) more times than you’d like to know in a single day.
AI has paved the slippery slope to exploitation with deepfakes, phishing campaigns, data poisoning, AI agents, and the list goes on. It’s no exaggeration to say that your data faces more threats today than at any other point in history—driven by AI-powered exploits and cyberattacks.
That’s why it’s so important to have proper controls in place to protect your data.
Start by reviewing your privacy settings, using strong authentication, and partnering with trusted organizations that prioritize security and recovery. Although we can hope organizations champion transparency and accountability, individuals also need to take proactive steps to protect their digital footprint.”
George Cray, CEO at GCH Technologies
“Authoritative, unified verification will become essential infrastructure—not for blocking bad actors, but for confidently enabling legitimate ones.
The real cost of fragmented verification is the friction imposed on legitimate businesses trying to reach customers at scale, not just the fraud that slips through. When there’s no single source of truth, every participant in the ecosystem duplicates effort, slows onboarding, and still lacks full confidence in the outcome.
In 2026, the conversation will shift. The goal is shared infrastructure that lets carriers, aggregators, and brands move faster because verification is settled, not repeated. When market participants can reference an authoritative source, onboarding accelerates, compliance becomes consistent, and the entire ecosystem operates with greater confidence.
The byproduct? Bad actors face more friction when legitimate pathways are clearly defined. But the real win is velocity and trust for everyone playing by the rules.
New messaging channels will succeed or fail based on whether the industry builds trust infrastructure from day one—not retrofits it later.
Short codes remain the gold standard for business messaging precisely because they were built on a foundation of verified, accountable participation. That infrastructure didn’t happen by accident; it was designed with trust at the center.
As new messaging channels scale in 2026, the industry faces a familiar choice: invest in verification infrastructure proactively, or scramble to retrofit it after trust erodes. Brands and consumers already expect authenticated, transparent engagement. The channels that meet that expectation from launch will earn adoption; those that don’t will spend years recovering from early missteps.
The playbook exists. The question is whether the industry will use it, or learn the hard way that trust is easier to build than rebuild.”
Kristel Kruustuk, Founder at Testlio
“Adopt a “double verification” mindset for everything AI tells you
We’ve entered an era where verifying AI-generated outputs is table stakes now. I’ve reached a point where I fact-check nearly everything AI tells me: the sources, the quotes, the statistics. When I ask any AI chatbot like ChatGPT or Perplexity to give me sources, I’m checking if those sources are actually real. When I ask for quotes, I’m Googling to confirm they exist. Sometimes they don’t.
This matters for personal safety because AI models are also known to be people-pleasers. That means if you feed them incorrect assumptions or leading questions, they’ll reinforce misinformation rather than correct it. Double verification protects you from acting on fabricated information, whether that’s a fake statistic you’re about to share at work or a “source” that doesn’t exist.
Rule: if it affects money, reputation, health, or security, verify with a second, primary source.
Treat AI like a starting point, not a final answer
The industry is moving so fast that it’s exciting and scary at the same time. We’ve already seen AI-generated material show up in legal cases and filings, with real consequences when no one verifies it. Courts, employers, and everyday users are all grappling with a question that didn’t exist a few years ago: Is what I’m seeing actually true?
If you’re using AI for anything involving your finances, health, career, or personal data, assume the output needs a human gut-check before you act on it. AI is a powerful tool, but it works best when curious, skeptical, and engaged humans stay in the driver’s seat. The moment you stop questioning what AI tells you is the moment you put yourself at risk.”
Jack Cherkas, Global CISO at Syntax
“Many organizations run into trouble when they treat privacy and security as separate disciplines. In reality, they’re inseparable. You can’t credibly protect personal data without securing it, and you can’t secure it properly without understanding the privacy obligations that come with it. Data Privacy Day is a timely opportunity to reflect on the gap between privacy commitments and the controls operating day to day.
Security fundamentals (e.g., IAM, threat detection, incident response, disciplined data hygiene) remain the foundation of credible privacy protection. And they are even more critical in 2026 as Generative AI accelerates, turning data into the fuel that powers new capabilities as well as new risks. The organizations that will succeed won’t simply “comply”; they’ll treat privacy as an active practice, pairing innovation with responsibility and grounding every AI ambition in strong data stewardship and mature security controls. Strong privacy AND strong security, together, are what will carry organizations through the next wave of technological change.”
Shrav Mehta, Founder and CEO at Secureframe
“On the AI Compliance Paradox:
93% of companies say security is a top priority, yet 68% leave one or fewer full-time employees to handle compliance while AI-powered attacks surge. Teams are spending eight-plus hours a week on paperwork instead of protecting customer data, and manual compliance models are breaking down when the stakes are highest.
For lean teams facing AI-driven threats, the only sustainable path forward is continuous compliance and automation that generates evidence in the background, so your people can focus on actual privacy and security protocols.
On Lessons from 2025’s Biggest Breaches:
The biggest breaches of 2025 came from preventable failures: reused passwords, unmonitored vendor access, and data that should never have been collected in the first place. When 16 billion credentials leak in a single event, it’s a wake-up call that the fundamentals still matter most.
Organizations need to ask themselves a hard question: if you don’t need to store certain customer data, why are you collecting it? Data minimization isn’t just good privacy hygiene, it’s risk reduction.”
Monica Landen, CISO at Diligent
“Data Privacy Week comes at a moment when the gap between AI adoption and AI governance has never been wider. Business leaders are doubling down on AI investments, yet many organizations are racing to implement AI tools without putting the right data governance frameworks in place.
In some instances, companies have deployed generative AI solutions only to discover too late that they have inadvertently exposed sensitive customer data or violated compliance requirements. The aftermath isn’t pretty, leading to reputational damage, regulatory penalties, and considerable loss of revenue.
So how do companies actually protect their data when AI enters the picture? Recent research shows that 97% of organizations that experienced an AI-related security incident lacked proper AI access controls – a striking and preventable gap. This isn’t just a technology problem. It’s a governance failure. While 22% of boards have adopted formal AI governance, ethics or risk policies, another 31% have only discussed it without putting policies in place. The potential for AI-related data privacy incidents is no longer just a theoretical concern; it has become a critical governance challenge that many organizations are struggling to overcome.”
