How we’re tackling Microsoft 365 Copilot governance internally at Microsoft

This readiness guide walks you through how we’re managing our Microsoft 365 Copilot governance internally here at Microsoft.

Engage with our experts!

Customers or Microsoft account team representatives from Fortune 500 companies are welcome to request a virtual engagement on this topic with experts from our Microsoft Digital team. 

Governance in the age of AI

Unlocking the next generation of productivity tools

Microsoft 365 Copilot combines the power of large language models (LLMs) with your organization’s data to turn employees’ words into some of the most powerful productivity tools on the planet—all within the flow of work. It suffuses the Microsoft 365 apps your people use every day, including Word, Excel, PowerPoint, Outlook, Teams, and more, to provide real-time intelligent assistance.

Initial results from our Microsoft Digital team, the company’s IT organization, and early adopters speak for themselves.

70%

of users said they were more productive at work.

64%

reported they spent less time processing email.

85%

shared that Copilot helps them get to a good first draft faster.

75%

highlighted time savings through improved document discovery.

Source: What can Copilot’s earliest users teach us about AI at work?

Getting governance right

With all the opportunities AI presents, your organization might be in the process of implementing Microsoft 365 Copilot. But it’s important to do that safely.

Copilot combs through your organization’s entire data estate in the blink of an eye, so the old method of security through obscurity doesn’t cut it. You need to assert control over where data flows throughout your tenant so Copilot knows what it can and can’t access or display.

To ensure that proper data hygiene extends to AI-powered workflows, Microsoft designed Copilot to respect the sensitivity labels and data loss prevention (DLP) controls that organizations configure in their Microsoft 365 environment through Microsoft Purview. That way, administrators can be confident that the right people and apps have access to the data they need, and that it doesn’t appear where it shouldn’t.

Download the eBook version of this Governance in the age of AI readiness guide.

Learn from our Microsoft 365 Copilot experience

We learned a lot as the first large enterprise to deploy Microsoft 365 Copilot. We used those learnings to create this deployment and adoption guide that you can use at your company—check it out.

Our team in Microsoft Digital implemented a company-wide governance strategy to address this issue. In the process, we learned valuable lessons that will be useful to any organization using Copilot.

This guide outlines our process for implementing a governance strategy that delivers the benefits of Copilot to Microsoft employees while minimizing the risks and entryways into our data estate. It shares our internal learnings so our customers can get up and running while avoiding pitfalls or surprises.

Follow along to find out how you can safely and effectively deploy Copilot at your organization—backed by rock-solid governance.

Principles for effective AI governance

Use this set of tips to ground yourself as you read through this guide.

Enable self-service

Give employees the ability to create new workspaces across your Microsoft 365 applications. By maintaining all data on a unified Microsoft 365 tenant, you ensure that your governance strategy applies to any new workspaces.

Limit the number of information protection labels

Limit your taxonomy to a maximum of five parent labels and five sub-labels. That way, employees won’t feel overwhelmed by the volume of different options.

Use intuitive labels that mean what they say

Make your labels simple and legible. For example, a “business-critical” label might imply confidentiality, but every employee’s work feels critical to them. On the other hand, there’s very little doubt about what “highly confidential” or “public” mean.

Capture container labels for groups and sites

Label your data containers for segmentation to ensure your data isn’t overexposed by default. Consider setting your container label defaults to the “Private: no guests” setting.

Derive file labels from parent containers

Classify files according to their parent containers. That consistency boosts security at multiple levels and ensures that deviations from the default are exceptions, not the norm.

Train employees

Train your employees to handle and label sensitive data to increase accuracy and ensure they recognize labeling cues across your productivity suite.

Trust employees, but verify their work

Trust your employees to apply sensitivity labels, but also verify them. Check against DLP standards and use auto-labeling and quarantining through Microsoft Purview automation.

Implement lifecycle management and attestation

Use strong lifecycle management policies that require employees to attest containers, creating a chain of accountability.

Enable company-shareable links

Limit oversharing at the source by enabling company-shareable links rather than forcing employees to add large groups for access. For highly confidential items, limit sharing to employees on a need-to-know basis.

Extract inventory to detect and report oversharing

Use Microsoft Graph Data Connect extraction in conjunction with Microsoft Purview to catch and report oversharing after the fact. When you find irregularities, contain the vulnerability or require the responsible party to repair it themselves.

Chapter 1: Enable self-service

Applying self-service principles to the way we manage labeling and governance emerged as a crucial step for us. 

Secure self-service that empowers employees

Self-service is a core tenet of employee empowerment here at Microsoft. We want to give every employee the independence to create the resources they need without engaging IT. But that level of freedom relies on ensuring our Microsoft Digital governance team identifies and protects valuable data. As a result, our employees can implement and own the containers, workspaces, and content they need to do their work productively. 

A container or workspace is a logical unit of content storage associated with a designated roster of collaborators. Common containers include SharePoint sites, Viva Engage communities, Outlook groups, and Teams channels.

Self-service forms the foundation of our entire governance strategy. Employees can create workspaces and content across many of the Microsoft tools they use for their day-to-day work, including SharePoint, OneDrive, Teams, and Power Platform. That freedom enables a culture of innovation and agility, where people can work together across teams and geographies without encountering “IT gating,” the need for IT to get involved in enabling day-to-day activities.

By encouraging collaboration in place, our tenant structure frees employees from resorting to email attachments or working in overly broad and open workspaces. As an IT team ourselves, we understand the value of eliminating IT gating for minimizing the time and effort our professionals need to invest in keeping employees productive.

This kind of data hygiene isn’t just about Microsoft 365 Copilot. It maintains data security and compliance wherever employees access company content and information. But because Copilot depends on the ability to access an organization’s data estate, good governance is essential for keeping it within bounds—especially in a self-service culture.

Pillars of our asset governance

Microsoft Digital’s asset governance strategy rests on four pillars: Empowering employees, identifying valuable and vulnerable content, protecting our assets, and ensuring accountability.

Responsible self-service

Self-service container creation has abundant benefits, but it also poses some challenges for content governance and security—things like oversharing, unneeded asset sprawl, and data leakage. To address these challenges, our Microsoft Digital governance team has established self-service principles that balance the needs of employees and the company.

We empower with accountability

Empowerment comes with responsibility. Any full-time employee can create a workspace, but they’re responsible for re-attesting its compliance every six months to ensure it meets our governance requirements. They also need to attest that they still require and maintain the resource, manage their own content, and ensure it’s properly classified, labeled, and secured. The content’s accountable owner makes any decisions about the workspace with respect to reach or the desire to maintain it. That removes any guesswork for IT about whether a site is still valued and cared for.

We empower with guardrails

We secure assets by default and expand access based on employee needs.

We trust, but we also verify

Microsoft Information Protection (MIP) sensitivity labels and Purview DLP act as guardrails for employee-led governance efforts.

As we in Microsoft Digital have worked to improve the company’s overall governance posture, we’ve learned several important lessons. When you consider self-service container creation, there are a few questions to ask yourself:

Who do you trust to create containers? At Microsoft, we reserve complete self-service capabilities for full-time employees. Then, we configure those privileges in Microsoft Entra ID to define who can create Microsoft 365 Groups. These users need to take relevant trainings, and we hold them accountable for the containers they create.
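As a sketch of how that Microsoft Entra ID configuration might be expressed, the Microsoft Graph groupSettings API exposes a “Group.Unified” directory-setting template whose EnableGroupCreation and GroupCreationAllowedGroupId values control who can create Microsoft 365 Groups. The payload builder below is illustrative, not our internal tooling; the security-group ID is a hypothetical placeholder.

```python
# Illustrative payload for POST https://graph.microsoft.com/v1.0/groupSettings,
# restricting Microsoft 365 Group creation to members of one security group.
# The group ID passed in is a hypothetical placeholder.

GROUP_UNIFIED_TEMPLATE_ID = "62375ab9-6b52-47ed-826b-58e47e0e304b"  # "Group.Unified" template

def build_group_creation_setting(allowed_group_id: str) -> dict:
    """Build the directory-setting body that limits who can create groups."""
    return {
        "templateId": GROUP_UNIFIED_TEMPLATE_ID,
        "values": [
            # Turn off tenant-wide self-service group creation...
            {"name": "EnableGroupCreation", "value": "false"},
            # ...then re-enable it for one vetted group (e.g., trained full-time employees).
            {"name": "GroupCreationAllowedGroupId", "value": allowed_group_id},
        ],
    }

payload = build_group_creation_setting("00000000-0000-0000-0000-000000000000")
```

In practice you would POST this body with an authenticated Graph client; verifying the template ID against your tenant’s groupSettingTemplates endpoint first is a sensible precaution.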

Where does employee self-service make sense? Different employees will require self-service in different environments. Will yours need to operate within SharePoint? Power Platform? Teams?

What are your lifecycle rules? Think about your policies and rule sets. Who’s accountable? What does the lifecycle look like?

What are your naming rules? A clear taxonomy can act as an extra signpost and organizational driver for your users. It can also be useful to think through what names are explicitly helpful or obscure. At Microsoft, we use a blocked word list, but we don’t prefix or suffix all groups or site names to avoid overloading the employee experience.

When you’ve settled on degrees of autonomy and where to apply them, you can begin your AI governance journey by configuring containers for self-service.

Learning from our self-service principles

Put thought into your environment and tenant architecture, key personas, and scenarios before adoption.

Understand that IT organizations have inherently cautious habits, and self-service might seem like a leap. As you lay out the business value for self-service container creation, illustrate the safety backstops as well. Also consider the risks if you don’t take this step, like employees misusing existing sites or other means not supported by IT.

Make the business case and offer reassurances that greater flexibility doesn’t equal greater vulnerability.

Consider your existing data hygiene and how it needs to extend to accommodate AI.

Chapter 2: Establish container labels and set well-scoped, intuitive defaults

We developed healthy baseline practices to ensure both our employees and the resources that they work with are protected.

Balancing freedom with trust through an easy-to-use labeling taxonomy

Self-service container creation forms the foundation of our employee-centric governance strategy. As part of that freedom, our Microsoft Digital governance team has established baseline protections inherent to all containers, and those protections depend on sensitivity labels. Microsoft 365 Copilot respects labels, so establishing effective labeling practices extends data security into our employees’ AI usage.

Baseline labeling habits

Employees need to label every container or workspace they create using Purview Information Protection (PIP) container labels. It’s a matter of policy at Microsoft: If it isn’t labeled, we delete it. We use container labeling for data delineation and to apply consistent protection and governance policies to containers based on their sensitivity and purpose. Microsoft labels break out into four different categories.

Container labels provide two things:

  • First, they drive user awareness over how to handle content. For example, if something is highly confidential, employees shouldn’t talk about it in the café.
  • Second, they illustrate what data is appropriate for which container. In other words, they signal to an employee that they shouldn’t store highly confidential documents on a general site.

Our Microsoft Digital governance team predefines and centrally manages labels to align them with broader MIP sensitivity levels used for email, files, meetings, and containers. Those include the same four categories: “highly confidential,” “confidential,” “general,” and “public,” although we don’t use the latter for containers.

Matching labels with policies and protections

Each label we’ve defined has a set of protection settings that include policies around characteristics like guest allowance and membership openness. They also drive inherited file labeling, which we use for encryption.

At its core, container classification communicates four things:

  • Privacy level: Labels determine whether the workspace is broadly available internally or restricted as a private site.
  • External permissions: We administer guest allowance via the group’s classification, allowing specified partners to access teams when appropriate.
  • Sharing guidelines: We tie important governance policies to the container’s label. For example, can employees share this workspace outside Microsoft? Is this group limited to a specific division or team? Or is it restricted to specific people? The label establishes these rules.
  • Conditional access: While not implemented at Microsoft, tying identity and device verification to container labels introduces additional governance controls.
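One way to picture the mapping above is as a small policy table keyed by container label. The settings shown are illustrative assumptions for the sketch, not Microsoft’s actual schema; the one constraint taken directly from this guide is that company-shareable links are disabled for “highly confidential” content (see Chapter 7).

```python
# Illustrative policy table: each container label implies privacy, guest access,
# and sharing behavior. Values here are assumptions, except that
# company-shareable links are off for "highly confidential" (per this guide).

CONTAINER_POLICY = {
    "general": {"privacy": "private", "guests": False, "company_shareable_links": True},
    "confidential": {"privacy": "private", "guests": True, "company_shareable_links": True},
    "highly confidential": {"privacy": "private", "guests": False, "company_shareable_links": False},
}

def policy_for(label: str) -> dict:
    """Look up the protection settings a container label implies."""
    return CONTAINER_POLICY[label]
```

A table like this keeps the label-to-policy relationship in one place, which makes it easier to review and to explain to container owners.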

After extensive experimentation, we arrived at our current schema for how container sensitivity labels align with MIP policies. Your organization might make different choices about your labels’ relationships with information protection policies, but this can give you an idea of what a healthy governance ecosystem looks like.

Information protection container sensitivity labels

Microsoft Digital’s schema clearly delineates what each label means and how it affects content.

Building a process around employee ownership

The labeling process works like this: When employees create a new container, they’re responsible for selecting a container label that matches the sensitivity and purpose of the content they intend to store and share. By default, we lock new containers, which means that only the owner and members can access them. Locked containers prevent unauthorized or accidental access to their content.

Container owners can unlock the container if they need to share content with a broader audience within the organization or external partners. Container owners can also change the container label if the sensitivity or purpose of the content changes over time.

At Microsoft, this process provides the right combination of flexibility and protection while empowering employees with effective self-service.

Learning from our labeling practices

Your employees will be the ones applying labels, so make those labels intuitive. For example, “highly confidential” is easy to understand, while “business-critical” can be interpreted many ways from a sensitivity standpoint.

Identify the security needs and regulatory compliance that are specific to your organization and use built-in governance controls available through Microsoft tools.

Keep labels minimal to avoid overtaxing your employees’ understanding. We recommend restricting your labeling schema to no greater than five main labels with five sub-labels each—and the fewer, the better!

Experiment with sensitivity labeling through a small group of early champions, then roll these features out alongside an adoption and education initiative.

How we did it at Microsoft

Use these assets to guide your own journey—they represent how we did things here in Microsoft Digital.

More guidance for you

Here are more assets that we found useful.

Chapter 3: Derive file labels from parent containers

We’re using default file-labeling based on container labels to help our teams stay consistent with how they create and store resources that they work on. Here’s how that looks for our employees:

SharePoint and other containers support default library labels, which we configure to align with the container label through mapping we define in Purview.

For instances where we need to define default library labels for tools that don’t have container labels, like OneDrive for Business, we create custom scripts.

By default, new items inherit the label of the container that stores them. That helps employees apply the correct label and avoid misclassification. For example, if an employee creates a new document in a SharePoint site labeled “confidential,” the document will automatically receive that label.

Employees can change the item label if the sensitivity or purpose of the content differs from the container label. But that only works in one direction; they can’t store files with higher-confidentiality labels in a lower-confidentiality container. For example, they can downgrade a file in a “highly confidential” container to “general” if it doesn’t require heightened protection, but they can’t upgrade a file in a “general” container to anything above that grade. SharePoint will provide warnings to site owners when it detects label mismatches, for example, when a file label is more sensitive than its container’s.
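The one-way rule above can be sketched as a simple rank comparison, assuming the four labels this guide names. The function and ranks are illustrative, not Purview’s API.

```python
# Sketch: a file label must sit at or below its container's sensitivity rank.
SENSITIVITY_RANK = {"public": 0, "general": 1, "confidential": 2, "highly confidential": 3}

def allowed_file_label(container_label: str, proposed_file_label: str) -> bool:
    """True if a file carrying this label may live in the container."""
    return SENSITIVITY_RANK[proposed_file_label] <= SENSITIVITY_RANK[container_label]

# Downgrading within a "highly confidential" container is allowed...
assert allowed_file_label("highly confidential", "general")
# ...but upgrading a file above its "general" container is not.
assert not allowed_file_label("general", "confidential")
```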

Understanding our sensitivity labels

By trusting employees and setting good defaults, we’re able to account for 99% of our governance needs.  

By defaulting file labels to their container labels, you can ensure that every item and collaborative space will align with both its context in your organization and your information protection policies. As a result, Copilot will respect those labels and their corresponding information protection policies.

Learning from our container-file relationships

Employees might not understand the relationship between files and their containers intuitively. When you implement your labeling strategy, be sure to include education about container-file derivation.

Many employees learn best from practice, not instruction. Include automated messages that correct edge-case behaviors like trying to make a file in a confidential container generally available.

Employees will more often than not use the default, so ensure your defaults are correct and reflect your organization’s needs.

Because a file can be moved or downloaded from its original container, the only way to protect that information is to ensure its label remains durable. Embed that durability in your object label configurations.

Whenever possible, make the container and file defaults the same from the outset. If you start with different labels or policy sets, it will be difficult to reconcile those changes later.

How we did it at Microsoft

Use these assets to guide your own journey—they represent how we did things here in Microsoft Digital.

More guidance for you

Here are more assets that we found useful.

Chapter 4: Train employees

Training your employees on how to handle and label sensitive data was and continues to be a critical step on our governance journey. 

Empowering our employees: A joint effort between IT and users

Establishing a robust labeling strategy is only part of good governance. When it comes to getting employees on board, culture is as critical as policy.

At Microsoft, employee learning and development are how we move sensitivity labeling from the administrative sphere into day-to-day practice. It helps us increase the accuracy of how our labels are used and ensures that our employees recognize labeling cues when they appear across our productivity suite.

Every incoming Microsoft employee takes our Standards of Business Conduct and security trainings. As part of that process, we created an internal SharePoint resource dedicated to educating employees about their responsibilities for labeling and adhering to our governance policies. It educates employees about the philosophy behind our policies, shares a simplified overview of our sensitivity label structure, and provides practical, app-specific guidance for self-service labeling.

Use this decision tree to determine the sensitivity label needed on your document.

This quick-reference guide helps Microsoft employees understand our labeling taxonomy at a glance. 

Effective learning and development assets

As you build out your employee education assets, consider emulating our content with the following elements.

Overview

It will be much easier for employees to act according to your governance policies if they understand what they do and why they’re so important. Our overview illustrates the relevance of sensitivity labeling for security and compliance and reinforces our employees’ place in maintaining them.

A quick-reference guide

A visual guide will help employees understand how labels relate to each other and what they accomplish. At Microsoft, we use a helpful flowchart that provides an outline of our labeling taxonomy without overloading employees with details. Placing it near the beginning of your training content grounds employees in the knowledge early, before they dive deeper into the details.

Technical education

Our learning material includes a section on how labeling works within our data estate. Then, it proceeds into an in-depth description of how each label or classification interacts with users’ content. Including this section will make labeling more tangible for your employees.

App-specific guidance

At this point, our guidance documentation progresses through the most common app-based use cases for sensitivity labeling: Microsoft 365 files, Teams, Power BI, and PDFs, as well as AIP and other file types separate from Microsoft 365. This app-by-app procedural content will help employees home in on their most common scenarios and educate themselves accordingly.

Aside from laying a solid foundation as an IT team, the most effective way to promote good governance is by bringing your workforce on board. Robust learning and development content is a powerful lever for establishing a culture of data security.

Learning from our employee training

People will only do what they know, so ensure employees know your policies and how to enact them. Build robust education into your labeling and governance strategy, ideally as part of employee onboarding.

Labeling cues are an excellent opportunity for helping employees remember their responsibilities. Make label descriptions brief and tangible during in-app experiences.

Nobody’s memory is perfect. Link out to relevant information as part of label descriptions so curious employees have a chance to reinforce their knowledge.

If breaches occur or certain teams underperform, coordinate with relevant managers to refresh employee knowledge.

How we did it at Microsoft

Use these assets to guide your own journey—they represent how we did things here in Microsoft Digital.

More guidance for you

Here are more assets that we found useful.

Chapter 5: Trust employees, but verify their work

Trusting your employees while also verifying via automation that their actions are secure is a crucial step.

Self-service with guardrails: How we’re backstopping our employee efforts with technology

Thanks to our education efforts and intuitive labeling interfaces, we trust employees to apply sensitivity labels. But we also verify their work. It’s how we catch the 1% of edge cases where problems might arise.

We accomplish that by checking files against our DLP standards and using auto-labeling and quarantining when we need them. Swiftly tying up any loose ends eliminates wayward items that Microsoft 365 Copilot might scoop up during the course of its work.

DLP is a set of technologies and practices centered around Purview that prevent sensitive data from leaving the organization and keep unauthorized users from accessing it.

At Microsoft Digital, we use Purview DLP policies to define the rules and actions for detecting and protecting sensitive data across Microsoft 365, SharePoint, OneDrive, and Teams.

DLP policies cover vulnerable data types and scenarios that require protection. They include any kind of information that might introduce inappropriate access to company data or intellectual property:

  • Access credentials like keys or tokens
  • Personally identifying information
  • Financial data
  • Non-public source code
  • Sign-in information
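A toy stand-in for the detection step might look like this. Real Purview DLP uses managed sensitive-information types; the two regexes below are only illustrative proxies for the credential and financial categories listed above.

```python
import re

# Illustrative detectors for two of the categories above. Purview's real
# sensitive-information types are far more sophisticated than these regexes.
PATTERNS = {
    "access credential": re.compile(r"(?i)\b(api[_-]?key|secret|token)\s*[:=]\s*\S+"),
    "payment card number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def scan(text: str) -> list[str]:
    """Return the sensitive-content categories detected in the text."""
    return [name for name, rx in PATTERNS.items() if rx.search(text)]
```

In a real pipeline, matches like these would trigger the automatic blocking or relabeling described below rather than just returning category names.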

Reports and dashboards are available via Purview to help our team monitor and analyze content activity and compliance across the organization. They also provide insights into the volume, location, and usage of sensitive data, as well as any incidents and alerts that indicate potential data breaches or violations.

For example, an employee might label something as “General,” but it contains credentials or other sensitive end user identifiable information (EUII). In those instances, Purview will automatically block the file from access beyond its owner or reapply a more appropriate label.

Automation and escalation

We’ve configured Purview to automatically remediate these kinds of issues or escalate them to our Microsoft Digital governance team for resolution when an issue is more complex. DLP remediation and escalation processes can involve several different groups of stakeholders depending on the severity and impact of the incident or alert:

  • Content owners
  • Content champions
  • The MIP team
  • Our legal team
  • Security

We use Microsoft Purview to run DLP remediation operations at scale.

DLP systems acquire telemetry from the Microsoft 365 activity management API. Backend processing cleanses the data to build relevant insights and surface them through Power BI dashboards.

We flag information about files and aggregate it at the file level, then assign it to the last modifier for remediation action.

If users don’t act on the files quickly, the DLP team scopes risky sites to quarantine any files with vulnerabilities.
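The flagging-and-assignment flow above can be sketched as a small aggregation, assuming each telemetry row carries a file ID, a last-modified timestamp, and the modifier’s alias. The field names are assumptions, not the actual Microsoft 365 activity API schema.

```python
from collections import defaultdict

# Sketch of the aggregation step: group DLP findings at the file level,
# then assign each file to its most recent modifier for remediation.
# Record fields are assumed names, not the real telemetry schema.

def assign_remediation(findings: list[dict]) -> dict[str, list[str]]:
    """Map each last modifier to the files they must remediate."""
    by_file: dict[str, dict] = {}
    for f in findings:
        # Keep only the most recent event per file so we pick the true last modifier.
        prev = by_file.get(f["file_id"])
        if prev is None or f["modified_at"] > prev["modified_at"]:
            by_file[f["file_id"]] = f

    queue: dict[str, list[str]] = defaultdict(list)
    for f in by_file.values():
        queue[f["modifier"]].append(f["file_id"])
    return dict(queue)
```

Surfacing the resulting per-person queues through a dashboard (Power BI, in our case) is what turns raw telemetry into an actionable remediation list.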

Fortunately, all these features and functionalities are available out of the box through Microsoft 365 and Purview. After you’ve established your labeling strategy and policies, it’s just a matter of adding guardrails to your self-service environment. By automating information protection through quarantining content or rightsizing its label, you can keep Copilot from making sensitive information available where it shouldn’t.

Learning from Microsoft Digital’s trust and verification process

Think carefully about where vulnerabilities can arise and where the relationships between labels, policies, and vulnerabilities might be. Incorporate those into your DLP automation.

When human intervention is necessary, it’s important to have immediate access to the relevant stakeholders. Assemble your list and build it into your process.

Purview DLP is a powerful set of capabilities, but it still relies on automation, which can miss things humans don’t. For example, DLP might not understand the code name for a product and fail to catch it during automated verification.

There are very few absolutes in IT, so you’ll always need exceptions. For example, finance professionals often need to include passwords or credit card numbers in working documents. At Microsoft, we use exemption groups to exclude teams like these from Purview DLP oversight.

Your legal, HR, and security teams will be key allies in this process. Engage them early to help you flesh out risk factors and vulnerabilities.

How we did it at Microsoft

Use these assets to guide your own journey—they represent how we did things here in Microsoft Digital.

More guidance for you

Here are more assets that we found useful.

Chapter 6: Implement lifecycle management and attestation

We focused on strong lifecycle management policies and employee attestation to help us get our lifecycle management right. 

Pairing trust with accountability: How we’re maintaining our data hygiene with attestation

Attestation and self-service go hand-in-hand. In simple terms, it means employees can create what they need, but they’re accountable for its upkeep. In turn, that chain of accountability makes sure Copilot only accesses clean and appropriate data.

At Microsoft, we follow the principle of data minimization. That means only content that’s necessary and relevant for the company’s operations and objectives should exist in storage. Data minimization reduces the risk of oversharing content that isn’t cared for by employees, minimizes asset sprawl, halts data leakage, and improves quality and usability.

To implement this principle, Microsoft Digital requires that every existing container has attestation. By extension, we delete information that doesn’t have a full-time employee to care for it or that has become stale or irrelevant.

Attestation is the process of verifying and validating the existence, ownership, and purpose of a container and ensuring it complies with content governance and security policies.

At Microsoft, we require attestation from a full-time employee every six months to confirm several aspects of their containers:

  • It’s correctly labeled.
  • Users actually care about its ongoing existence.
  • The roster of people with access is accurate and necessary.
  • Sharing capabilities are appropriately restrictive or permissive.
  • It complies with corporate retention guidelines.

If a container or an item doesn’t have attestation, we consider it orphaned or abandoned, and it’s subject to deletion. You don’t want to be too draconian about these policies. We configure our Microsoft Entra group expiration policies and SharePoint Premium inactive sites attestation to give container owners 60 days to take action. That’s followed by a final notice at deletion time with a link to restore and resolve for another 30 days. We also archive deleted items for recovery over an extended period if employees decide they need them after the fact.
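The cadence above (attestation every six months, a 60-day window to act, then a 30-day restore period) can be sketched as date arithmetic. The 182-day approximation of six months is an assumption; the 60- and 30-day windows come from the policy described in this chapter.

```python
from datetime import date, timedelta

# Sketch of the attestation timeline: six-month cadence, 60 days to act
# after expiry, then 30 more days to restore after the final notice.
ATTESTATION_INTERVAL = timedelta(days=182)  # roughly six months (assumed approximation)
ACTION_WINDOW = timedelta(days=60)
RESTORE_WINDOW = timedelta(days=30)

def attestation_milestones(last_attested: date) -> dict[str, date]:
    """Compute the key dates a container owner needs to know."""
    due = last_attested + ATTESTATION_INTERVAL
    deletion = due + ACTION_WINDOW
    return {
        "attestation_due": due,               # owner must re-attest by this date
        "deletion_after": deletion,           # 60 days to act before deletion
        "restore_until": deletion + RESTORE_WINDOW,  # 30 more days to restore
    }
```

Surfacing these dates in the attestation reminder itself helps owners act before a container slips into the deletion pipeline.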

Managing exceptions

If a container is subject to a retention or hold for our legal team, that supersedes any deletion event. Generally speaking, containers where the legal team is the accountable owner aren’t subject to re-attestation because we handle those life cycles more granularly based on Purview retention policies.

Ultimately, every organization will have to decide what makes the most sense for them. Applying these principles will help you maintain organization-wide data hygiene, which prevents over-access from Copilot.

Learning from our lifecycle management habits

The attestation interval should be short enough that it doesn’t introduce risk through neglect and long enough that it isn’t unnecessarily burdensome for employees. Think about what makes the most sense for your people by analyzing their behaviors.

Be sure that the attestation requests you create for employees contain both the objective for motivation and simple instructions. That will increase buy-in and smooth the process.

The severity of non-compliance will vary based on different files and containers. Some might be more relaxed, and others more strict. Determine a strategy for deciding which is which.

Consider your resolution and recovery intervals after a lapse in attestation. You’ll need to balance between items’ sensitivity, employees’ bandwidth, and the infrastructure cost of extended archiving for recoverable items.

Chapter 7: Enable company-shareable links

We’re finding that the best way to reduce oversharing is by addressing it at the source.

Enabling fluid, secure collaboration: How we’re extending access with company-shareable links

At Microsoft Digital, we recognize that content sharing is essential for collaboration and productivity. Employees need to share content with both internal and external audiences. But that also poses a risk of content oversharing when employees expose material to more people or for longer than necessary. It might also mean they’ve shared content without proper protection or classification.

In many cases, employees need to share content outside its container. That might mean sharing a specific file beyond the container's roster so people can collaborate in place instead of making a copy, or it might mean emailing the file as an attachment.

Using company-shareable links

Microsoft Digital limits oversharing at the source by enabling company-shareable links (CSLs) for all containers and items except ones labeled “highly confidential.” A CSL is a type of link that allows anyone who receives it within our organization to access the content. CSLs are convenient and easy to use, and they promote a culture of openness and transparency.

Before CSLs, employees resorted to sharing with large security groups because they didn’t know which groups contained everyone who needed access, and manually adding every unique user was too cumbersome. That behavior leads to oversharing because anyone with access can stumble on the content in Microsoft Search or get answers from Copilot. Any Microsoft 365 discovery scenario will security-trim results, so it’s important that users can’t directly access things they don’t need.

While employees can pass a CSL around within the company, the content it points to isn't discoverable in Microsoft Search or Copilot because access is granted only to the people who actually receive the link via email or chat. It might seem counterintuitive that a CSL is more secure, but it eliminates the need for standing access to content and provides greater protection.

Finally, we allow content owners to modify or revoke CSLs if their sensitivity or purpose changes, or if sharing is no longer necessary. The content owner can also set an expiration date or a password for their link to enhance security and control.
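In Microsoft Graph terms, a CSL corresponds to a sharing link created with the driveItem createLink action and organization scope; the same request body carries the optional expiration date and password. A minimal sketch of assembling that call (the drive and item IDs are placeholders):

```python
import json
from typing import Optional

def build_csl_request(drive_id: str, item_id: str,
                      expires: Optional[str] = None,
                      password: Optional[str] = None) -> tuple:
    """Build the Microsoft Graph createLink call for a company-shareable link.

    scope="organization" makes the link usable by anyone in the tenant who
    receives it, without granting standing access to anyone else.
    """
    url = (f"https://graph.microsoft.com/v1.0/drives/{drive_id}"
           f"/items/{item_id}/createLink")
    body = {"type": "view", "scope": "organization"}
    if expires:
        body["expirationDateTime"] = expires   # ISO 8601, e.g. "2025-12-31T00:00:00Z"
    if password:
        body["password"] = password
    return url, body

# Placeholder IDs; a real call POSTs this body with an authenticated client.
url, body = build_csl_request("b!exampleDriveId", "01EXAMPLEITEMID",
                              expires="2025-12-31T00:00:00Z")
print(url)
print(json.dumps(body, indent=2))
```

Revoking a link is the corresponding DELETE on the permission that createLink returns, which is what makes owner-driven modification and revocation practical at scale.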

Extra protection for highly confidential items

Our governance team at Microsoft Digital determined that we should enable CSLs by default for all containers and items labeled “public,” “general,” or “confidential.” As a result, employees can share content with their colleagues without having to grant individual permissions or manage access requests.

There are some kinds of content that employees absolutely shouldn’t share through a CSL. The risk emerges if someone copies the link into an open location like a broadly accessible document or community. You’ll have to decide where to draw that line for your organization. At Microsoft, we’ve elected to disable CSLs for all containers and items that are labeled “highly confidential.”

At Microsoft, highly confidential items require need-to-know access for specific people. For these files, employees use links they designate for specific people, which allows access to only individuals the content creator or owner explicitly identifies. In those situations, large security groups aren’t appropriate in any case.

Our policy compels employees to think about who needs access to content and take deliberate action before sharing. In some ways, it acts as an extra gate or prompt to keep our people security-conscious during the sharing process.

At Microsoft Digital, we tailored our policies to the company's specific needs, but they can serve as a blueprint for other organizations building a CSL strategy. Deciding what should be sharable and how will help you ensure robust information protection that's still flexible enough to foster collaboration and productivity.

Learning from our company-shareable link strategy

Align your CSL policies with the sensitivity labels that meet your organization’s security needs. Above a certain threshold, it might make sense to require links for specific people.

Employees will need time to get used to this structure. Create education communications early in the process, and configure your labeling interface to display information about the sharing implications of different labels.

CSLs can feel counterintuitive from a safety perspective. They might make security professionals uncomfortable because employees are free to share them internally with anyone. Reinforce that CSLs are safer than giant security groups, which will otherwise be the default behavior for employees. And unlike content shared with security groups, content behind a CSL won't show up in Microsoft Search.

Most people will take the simple path, so make the simple path the safe path. Generally speaking, employees leave the defaults intact. If CSLs are your default, that's the behavior your employees will adopt.

How we did it at Microsoft

Use these assets to guide your own journey. They represent how we did things here in Microsoft Digital.

More guidance for you

Here are more assets that we found useful.

Chapter 8: Extract inventory to detect and report oversharing

When oversharing does slip through, it’s important to have systems in place to catch it. 

Remediating oversharing errors when they occur: How we’re reporting on broad-access files and sites with Microsoft Graph Data Connect

In spite of our Microsoft Digital governance team’s best efforts to limit oversharing at the source, it can still occur. In some ways, it’s inevitable.

Organizations are made up of people, and so will always experience human error. Left unchecked, content oversharing can have negative consequences for an organization, including data breaches, compliance violations, or reputational damage. It will also give employees access to that content through Copilot when it isn’t appropriate.

To detect and mitigate content oversharing, we use Microsoft Graph Data Connect to report on every broad-access file or site with more sensitive labels. It helps us access and analyze data from Microsoft 365, SharePoint, OneDrive, and Teams using Azure Data Factory, Azure Synapse Analytics, or Azure Machine Learning. We then connect those datasets in our data estate using Azure Synapse Spark and track how many SharePoint sites and items are currently overshared based on our business rules.

One of the principal benefits of Microsoft Graph Data Connect is accessing the information we need through each of these technologies in a secure and scalable way, with control governed by our tenant admins.

Using Microsoft Graph Data Connect for oversharing remediation

We use Microsoft Graph Data Connect to detect, reveal, and remediate oversharing in the rare cases where it occurs.

Reporting for accountability

Our tenant’s data team uses Microsoft Graph Data Connect to generate reports on every file or site on the tenant with a broad access level, like a CSL or link that can be shared with anyone. It also monitors any item with a sensitive label like “confidential” or “highly confidential.”

These reports provide information and insights on the content’s owners, recipients, activity, and content protection and compliance status. They also help identify and prioritize potential cases of content oversharing.
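Our production pipeline applies these business rules in Azure Synapse Spark over Microsoft Graph Data Connect datasets, but the core logic reduces to a simple filter: flag any item that both carries a sensitive label and is reachable through a broad-access link. A simplified stand-in over sample inventory records (the record fields and paths are illustrative):

```python
# Simplified stand-in for the business rules applied to Graph Data Connect
# inventory: flag any item that both carries a sensitive label and is
# reachable through a broad-access link. Field names are illustrative.
SENSITIVE_LABELS = {"confidential", "highly confidential"}
BROAD_SCOPES = {"organization", "anonymous"}  # CSLs and anyone links

def flag_overshared(inventory: list) -> list:
    """Return inventory records that the rules classify as overshared."""
    return [
        item for item in inventory
        if item["label"].lower() in SENSITIVE_LABELS
        and item["link_scope"].lower() in BROAD_SCOPES
    ]

sample = [
    {"path": "/sites/hr/salaries.xlsx", "label": "Highly Confidential",
     "link_scope": "organization"},
    {"path": "/sites/mkt/logo.png", "label": "General",
     "link_scope": "organization"},
]
flagged = flag_overshared(sample)
print([item["path"] for item in flagged])  # only the HR file is flagged
```

The same two-predicate rule scales naturally to a Spark DataFrame filter once the label and sharing-link datasets are joined in the data estate.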

At Microsoft, this output is helpful for several groups of stakeholders:

  • We share the reports with the content champions responsible for reviewing and validating any cases of content oversharing.
  • We use the reports to contact and educate the content owners on how to resolve oversharing issues and comply with our governance and security policies.
  • We share the reports with the legal and security teams responsible for investigating and responding to cases of content oversharing that involve legal or security risks and incidents.
  • We track our improvement over time as we enforce policies on our assets.

To help customers benefit from this kind of visibility, we’ve created a freely available reporting template. We encourage you to use this tool to track oversharing.

Beyond weaving your Microsoft Graph Data Connect data export into your own data estate, you can now also use SharePoint Advanced Management in SharePoint Premium to get a list of sites that meet criteria you select. We use this capability to find all our sites that share "highly confidential" data with more than 5,000 users. We then use the same capabilities to selectively require our site owners to fix any anomalies we discover.

Go here to get more information on this data access functionality in SharePoint.   

With the right controls and policies in place, you can minimize the number of oversharing errors your employees commit. But when errors do occur, a proactive detection strategy quarantines the risk from Copilot, even as your staff stays connected and collaborating.

Learning from our oversharing detection and reporting setup

Between Microsoft 365 and Azure, it’s likely you already have access to the tools you need to set up your reporting apparatus. Explore out-of-the-box functionality before building your own solution.

Collaborate with stakeholder teams to nominate point people who will receive oversharing reports and take action or communicate findings.

Work with internal comms professionals to determine the best communication strategy when you detect oversharing, especially when speaking with content owners.

Different stakeholders will require different information. Work with individual teams to determine what their reports should look like.

How we did it at Microsoft

Use these assets to guide your own journey. They represent how we did things here in Microsoft Digital.

More guidance for you

Here are more assets that we found useful.

The way forward

Getting governance right in the age of AI

The advent of AI tools like Microsoft 365 Copilot is a once-in-a-generation development. At this point, we’re still learning all the ways that these tools can be used to unlock creativity, productivity, collaboration, and innovation. But we can be sure of one thing: implementing them securely and effectively should be priority one.

If you’re deploying Copilot to your organization, the lessons we’ve learned at Microsoft Digital can act as a roadmap for your own journey. Ultimately, the most important thing is to consider the data implications of AI assistance and plan accordingly. Diligence and forethought will make sure your employees get all the benefits of next-generation AI technology while your organization stays protected.

Welcome to the age of AI.

Download the eBook version of this Governance in the age of AI readiness guide.

Appendix

This is the full list of related resources shared with you in this readiness guide.

How we did it at Microsoft with Microsoft 365 Copilot deployment and adoption

More guidance for you


 
