We've Moved Into a World of Containment. Has Our Data?

Last week, a coalition of some of the most prominent security leaders in the world (current and former CISOs from Google, Cloudflare, Atlassian, the NFL, and Rivian, alongside former directors from NSA and CISA) published a paper that should be required reading for every security executive and board member. The “Mythos-ready” Security Program paper, prompted by the demonstration that AI can now autonomously discover and exploit zero-day vulnerabilities, lays out a prioritized risk register and action plan for organizations navigating what the authors call a fundamental shift in the threat landscape.

The paper is excellent. It is serious, measured, and practitioner-oriented. It deserves the attention it is getting.

But there is a gap, one the authors themselves hint at but don't fully develop. The paper's recommendations are heavily oriented toward code, software, infrastructure, and network architecture. That makes sense given its origin. But the principles the authors articulate apply well beyond those domains, and one of the most consequential areas they extend to is the one most organizations still struggle to govern: unstructured data.

The Containment Paradigm Shift

The most important statement in the paper may not be in the risk register or the action items. It's in the executive and board briefing section:

"We have moved into a world of containment and a focus on resilience, so metrics should now focus on the speed to recover to normal operations."

And later:

"In an environment where entry points and weaknesses are discovered faster, that containment architecture is more valuable, not less."

Read those statements carefully. This isn't a recommendation. It's a declaration. The assumption that prevention alone can hold the line is no longer operative. The strategic posture has shifted to containment: limiting blast radius, enforcing boundaries, ensuring that when something goes wrong, the damage is bounded and recovery is fast.

If we accept that premise, and the weight of the signatories suggests we should, then the logical next question is: what falls inside our containment architecture, and what doesn't?

For most organizations, the answer is uncomfortable. Network segmentation, Zero Trust, endpoint detection, egress filtering: these are well-understood containment controls for infrastructure. But the sensitive documents, contracts, financials, intellectual property, and regulated data that organizations exchange every day? That data routinely leaves the containment perimeter entirely. It lands in external inboxes, sits in consumer-grade file-sharing tools, and proliferates across third-party cloud storage accounts. No amount of network segmentation brings it back.

If we have truly moved into a world of containment, then containment must extend to the data layer. Containment that stops at the network and doesn't reach the data is incomplete containment.

The Attack Surface We’re Not Fully Addressing

The paper's Risk Register #3 focuses on attack surface management, and Risk #6 flags incomplete asset and exposure inventory as a critical blind spot, explicitly acknowledging that "unknown attack surface" is part of the problem. The paper's own language is direct: we cannot patch, segment, or defend what we don't know exists.

Notably absent from the attack surface discussion is unstructured data. We believe it should be there. Most organizations cannot say with confidence where their sensitive unstructured data has gone once it leaves the environment. Every copy that exists outside the perimeter is an asset we don't control, in an environment we can't monitor, subject to security practices we can't verify. Each one is exposure. Each one is attack surface. If we're serious about comprehensive attack surface management, the documents we share externally every day belong in that conversation.

This problem is about to get significantly worse. The paper repeatedly addresses the rise of AI agents, autonomous systems that discover, access, and act on information at machine speed. As these agents increasingly participate in business processes that involve document exchange, the volume and velocity of unstructured data crossing organizational boundaries will surge. Risk #6 becomes not just a current gap but an accelerating one.

Egress Filtering for Data

One of the paper's strongest and most practical recommendations is egress filtering. The authors note that it "blocked every public log4j exploit" and call for its implementation repeatedly across the executive summary, key takeaways, and priority actions. The logic is simple and compelling: controlling what leaves our environment is as important as controlling what enters it.

The data equivalent is just as compelling and far less commonly implemented. When a sensitive document is emailed to an external recipient or shared via a consumer file-sharing link, it has egressed our environment permanently. We have no further visibility, no ability to revoke access, and no mechanism to enforce policy on that copy. It is gone.

Platforms like eSHARE apply the egress filtering principle to unstructured data. Rather than sending copies of files outside the environment, data stays within the organization's governed tenant. External parties access it under continuous policy enforcement, with controls over who can view, download, or reshare, and with the ability to revoke access at any time. The data never actually leaves. That's egress filtering applied at the data layer, and the paper's own logic argues for it.
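To make that contrast concrete, here is a minimal, hypothetical sketch of the access model in Python. The names and structure are illustrative assumptions, not eSHARE's actual API; the point is the shape of the control: external parties hold revocable, policy-scoped grants to data that never leaves the tenant.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Optional

# Illustrative sketch only: a hypothetical in-tenant access model, not
# eSHARE's actual API. External parties receive revocable grants that are
# re-checked on every access; they never receive copies.

@dataclass
class AccessGrant:
    document_id: str
    external_party: str
    can_download: bool = False
    can_reshare: bool = False
    revoked_at: Optional[datetime] = None

@dataclass
class GovernedTenant:
    grants: list = field(default_factory=list)

    def share(self, document_id: str, external_party: str, **permissions) -> AccessGrant:
        # Grant scoped access; the document itself stays in-tenant.
        grant = AccessGrant(document_id, external_party, **permissions)
        self.grants.append(grant)
        return grant

    def revoke(self, grant: AccessGrant) -> None:
        # Revocation is immediate and complete because nothing ever egressed.
        grant.revoked_at = datetime.now(timezone.utc)

    def authorize(self, grant: AccessGrant, action: str) -> bool:
        # Every request is evaluated against current policy, not a past decision.
        if grant.revoked_at is not None:
            return False
        if action == "view":
            return True
        if action == "download":
            return grant.can_download
        if action == "reshare":
            return grant.can_reshare
        return False

tenant = GovernedTenant()
grant = tenant.share("contract-2024-017", "partner@example.com", can_download=False)
assert tenant.authorize(grant, "view") and not tenant.authorize(grant, "download")
tenant.revoke(grant)
assert not tenant.authorize(grant, "view")  # access ends instantly; there is no copy to claw back
```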

Hardening Through Architecture, Not Just Tools

The paper's Action #8 calls on organizations to harden their environments. For unstructured data, the most effective hardening isn't another tool layered on top of existing sharing practices; it's an architectural change in how data is shared in the first place.

Closing the company’s Microsoft 365 tenant to external sharing and routing external collaboration through a governed platform is one of the most concrete steps an organization can take almost immediately. It doesn't require waiting for AI defensive tooling to mature. It doesn't depend on threat intelligence or vulnerability scanning. It is a structural control: data that stays put stays safe. The blast radius of any breach is bounded by what was accessible, and data that never left the perimeter was never at risk of third-party exposure.

This is also directly relevant to the paper's concerns about lateral movement containment (Risk #8). The network framing is about segmentation, but the data equivalent matters just as much. Every external copy of a document is a node that can be reached independently. Eliminating the proliferation of copies is data segmentation, reducing the number of places an attacker or a compromised AI agent would need to reach to access sensitive information.

The Supply Chain Dimension

The paper warns that supply chains will be affected and calls out third-party risk as a persistent concern. Organizations exchange enormous volumes of unstructured data with partners, vendors, and customers as a basic requirement of doing business. That boundary, the organizational perimeter where our data meets someone else's environment, is where containment is most commonly abandoned.

Here is where the containment model changes the calculus entirely. When we transfer a copy of a sensitive document to a third party, a breach of that third party is a breach of our data. We are exposed by their security posture, their practices, their vulnerabilities. But when our data never actually transfers, when third parties access it within our governed tenant under continuous policy enforcement, a compromise of that third party does not compromise our data. Our documents were never in their environment. There is nothing to exfiltrate because nothing was ever there.

That is a fundamentally different risk profile, and it applies to every third-party breach scenario the paper warns about. Collaboration still happens, business still moves, but our containment architecture holds even when our partner's doesn't.

Governing at the Speed of Agents

The paper's Risk #11 addresses innovation governance: the challenge of maintaining oversight as AI capabilities are adopted faster than governance frameworks can keep pace. Action #4 calls for defining scope boundaries, blast-radius limits, and escalation logic for AI agents.

This is where continuous, automated policy enforcement becomes a force multiplier. As agents access and exchange data at machine speed, human-in-the-loop governance becomes a bottleneck at best and a fiction at worst. Policy must be embedded in the platform, enforced automatically based on data-centric signals and context, and applied consistently regardless of whether the entity accessing the data is a person or an agent. If an agent is compromised or behaves outside its intended scope, the containment architecture must hold independently of the agent's own trustworthiness.
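As a hedged illustration (the entities, scopes, and rules below are assumptions made for the example, not the paper's or any product's actual schema), a platform-enforced policy check looks identical whether the caller is a person or an agent, and the scope boundary denies out-of-scope requests no matter what the agent intends:

```python
# Illustrative sketch under assumed names and rules: the platform evaluates
# the same data-centric policy for every principal, human or AI agent, so
# scope boundaries hold even if an agent is compromised or misbehaves.

from dataclasses import dataclass

@dataclass(frozen=True)
class Principal:
    id: str
    kind: str                      # "human" or "agent" -- the policy does not care
    allowed_scopes: frozenset

@dataclass(frozen=True)
class Document:
    id: str
    classification: str            # e.g. "internal", "confidential", "regulated"
    scope: str                     # e.g. "deal-room-7"

def authorize(principal: Principal, document: Document, action: str) -> bool:
    # Decisions come from data-centric signals, not the requester's stated intent.
    if document.scope not in principal.allowed_scopes:
        return False               # scope boundary: the blast-radius limit
    if document.classification == "regulated" and action != "view":
        return False               # regulated data is view-only for everyone
    return action in {"view", "download"}

agent = Principal("contract-bot-3", "agent", frozenset({"deal-room-7"}))
in_scope = Document("nda-114", "confidential", "deal-room-7")
out_of_scope = Document("board-minutes-q3", "regulated", "executive")

assert authorize(agent, in_scope, "view")
assert not authorize(agent, out_of_scope, "view")  # containment holds regardless of the agent's behavior
```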

This is a critical design principle: don't rely on the agent to be well-behaved. Rely on the platform to enforce boundaries regardless.

Start Now

The paper closes with a call to action rooted in urgency and pragmatism. Every recommendation, the authors argue, can begin straight away. The same is true here. Governing unstructured data sharing is not a future-state aspiration. It is something we can do now, with technology that exists today, and it directly addresses multiple risks and actions the paper identifies.

The Mythos authors are right: containment architecture is more valuable, not less. The question for every CISO and board is whether that architecture covers the data their organization shares every day, or whether it stops at the network edge and hopes for the best.

Data that never leaves the perimeter can't be exfiltrated. Access that can be revoked limits blast radius. Policy that enforces itself keeps pace with agents. A compromised third party can't expose data it never had. These aren't aspirational principles. They're operational today.

We've moved into a world of containment. It's time our data moved there too.

FAQ

How can CIOs ensure compliance and audit readiness in Microsoft 365?

Compliance and audit readiness hinge on keeping external collaboration governed. Unmanaged Teams/SharePoint external sharing creates oversharing and audit gaps. Pairing Microsoft Purview with a guest-less external collaboration layer like eSHARE keeps data in-tenant, applies existing controls to external collaboration, and gives CIOs/CISOs the visibility they need to stay audit-ready without slowing work.

What is the biggest Microsoft 365 governance challenge for CIOs and CISOs today?

Balancing collaboration speed with strong governance is the top challenge. Features like Teams/SharePoint external sharing can create oversharing and audit gaps if unmanaged. Pairing Microsoft Purview with a guest-less external collaboration layer like eSHARE keeps data in-tenant, applies existing controls, and gives CIOs/CISOs the visibility they need without slowing work.

How do organizations manage Microsoft 365 guest account sprawl?

Guest sprawl builds up because standard Teams/SharePoint external sharing provisions a guest account for every outside collaborator. A guest-less external collaboration layer like eSHARE sidesteps the problem: external parties work with data that stays in-tenant under existing controls, so there are no guest accounts to provision, review, or deprovision, and CIOs/CISOs keep visibility without slowing work.

Still have questions? Contact us to learn more.