1. What is the primary purpose of Information Rights Management (IRM) in Microsoft 365, and what business problem does it solve?
The primary purpose of IRM is to prevent the leakage of sensitive information by applying persistent protection to emails and documents. It solves the business problem that, with traditional solutions such as standard email, the organization loses control over information once it is delivered to a recipient. IRM prevents unauthorized forwarding, copying, printing, or modifying of sensitive content, thereby reducing the risk of intentional or accidental data leakage and helping organizations comply with regulations.
2. How does IRM apply persistent protection to emails and documents, and what service does it use in the backend to achieve this?
IRM applies persistent protection by encrypting the data and attaching a license that defines the usage rights for authorized users. To access the protected content, an RMS-enabled application must get a use license from the backend service. In Microsoft 365, IRM uses Azure Rights Management Services (RMS), which is part of Azure Information Protection (AIP), as its backend.
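For context, one concrete piece of that wiring is pointing Exchange Online IRM at Azure RMS. The following is a minimal sketch, assuming the ExchangeOnlineManagement module and an Azure RMS tenant that is already activated; the sender address is a placeholder.

```powershell
# Exchange Online PowerShell (ExchangeOnlineManagement module)
Connect-ExchangeOnline

# Point Exchange Online IRM features (OME, transport protection rules, clients) at Azure RMS
Set-IRMConfiguration -AzureRMSLicensingEnabled $true

# End-to-end check that licenses and templates can be acquired; the address is a placeholder
Test-IRMConfiguration -Sender "user@contoso.com"
```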
3. In SharePoint Online, IRM protection is applied at the list and library level. What is the key difference between these two applications?
When IRM is enabled for a SharePoint library, rights management and encryption are applied to all files within that library. When IRM is enabled for a SharePoint list, rights management applies only to the files attached to the list items, not the actual list items themselves.
4. Can IRM prevent a user from taking a photograph of the screen with a camera? Name two other methods of copying information that IRM cannot prevent.
No, IRM cannot prevent a user from taking a photograph of the screen with a camera. Other methods IRM cannot prevent include:
- Using third-party screen capture programs.
- Users remembering or manually transcribing the information.
- Malicious programs like Trojan horses or keystroke loggers.
5. What is the function of the Super User feature in AIP, and why is it essential for services like DLP and eDiscovery?
The Super User feature in Azure RMS ensures that authorized people and services can always read and inspect data protected by Azure Rights Management. This is essential for data recovery (e.g., an employee leaves the company), removing or changing protection policies, and allowing IT services like Data Loss Prevention (DLP), content encryption gateways, or anti-malware products to inspect the content of protected files. It must be enabled manually via PowerShell.
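A minimal PowerShell sketch of that manual step, assuming the AIPService module; the service-account address is a placeholder.

```powershell
# AIPService module; sign in with a Rights Management admin account
Connect-AipService

# The feature is off by default and must be enabled explicitly
Enable-AipServiceSuperUserFeature

# Grant super user rights to a service account (address is a placeholder)
Add-AipServiceSuperUser -EmailAddress "svc-ediscovery@contoso.com"

# Review which accounts currently hold super user rights
Get-AipServiceSuperUser
```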
6. Explain the difference between Microsoft Information Protection (MIP) and traditional SharePoint IRM.
The main difference is where the protection is processed. SharePoint IRM applies protection inside the service (server-side) and relies on “protectors” for specific file types. Microsoft Information Protection (MIP), by contrast, works primarily on the client side through the built-in sensitivity labels in Microsoft 365 Apps. As a result, the Microsoft Purview Information Protection client supports far more file types, does not need SharePoint protectors, and offers more granular control through labels and policies.
7. A company wants to classify all documents containing employee and customer PII as “Highly Confidential.” How would you use MIP labels and policies to achieve this?
I would configure an MIP label named “Highly Confidential”. Within this label’s settings, I would:
- Enable protection (encryption).
- Set permissions to define who can access the content and what they can do (e.g., Co-Owner, Co-Author, Reviewer).
- Configure visual markings like a header, footer, or watermark stating “Highly Confidential”.
- Optionally, configure conditions for automatic labeling by defining patterns that match employee and customer PII.

Finally, I would add this label to a policy and scope it to the relevant users or groups in the organization (a PowerShell sketch follows).
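In Security & Compliance PowerShell, the label and policy creation would look roughly like this; the label name, rights string, and scoping are illustrative, and automatic labeling conditions would normally be added through a separate auto-labeling policy.

```powershell
# Security & Compliance PowerShell
Connect-IPPSSession

# Label with encryption, example permissions, and a watermark (rights string is illustrative)
New-Label -Name "Highly Confidential" -DisplayName "Highly Confidential" `
    -Tooltip "Employee and customer PII" `
    -EncryptionEnabled $true `
    -EncryptionProtectionType Template `
    -EncryptionRightsDefinitions "hr-team@contoso.com:VIEW,VIEWRIGHTSDATA,DOCEDIT,PRINT" `
    -ApplyWaterMarkingEnabled $true `
    -ApplyWaterMarkingText "Highly Confidential"

# Publish the label; the scope can be narrowed to specific users or groups instead of All
New-LabelPolicy -Name "Highly Confidential policy" `
    -Labels "Highly Confidential" `
    -ExchangeLocation All
```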
8. What is the difference between a Microsoft-managed tenant key and a customer-managed tenant key (BYOK) in AIP?
The main difference is who creates, controls, and is responsible for the key.
Microsoft-managed: The key is generated directly in Azure RMS and stored in the Azure Key Vault. Microsoft manages its lifecycle, including backup and recovery.
Customer-managed (BYOK – Bring Your Own Key): The key is generated and held by the customer on-premises (for example, an existing key from an AD RMS deployment, typically protected by an HSM) and then transferred to Azure Key Vault. The customer is responsible for the backup, recovery, and renewal procedures for this key.
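A minimal sketch of the BYOK configuration step, assuming the AIPService module, a key that already exists in Azure Key Vault, and a vault access policy that grants the Azure Rights Management service access; the vault, key, and version values are placeholders.

```powershell
# AIPService module
Connect-AipService

# Point Azure RMS at the customer-held key in Azure Key Vault (URL is a placeholder)
Use-AipServiceKeyVaultKey -KeyVaultKeyUrl "https://contoso-byok.vault.azure.net/keys/contoso-aip-key/<key-version>"

# List the tenant keys and confirm which one is now active
Get-AipServiceKeys
```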
9. What are the three core security services provided by an S/MIME digital signature?
The three core security services are:
Authentication: Validates the sender’s identity, ensuring the message came from the person who claims to have sent it.
Non-repudiation: The uniqueness of the signature prevents the sender from disowning or denying they sent the message.
Data Integrity: Assures the recipient that the message received is the exact message that was sent and was not altered in transit.
10. Walk me through the high-level process of how a digital signature is applied to an email when a user clicks ‘send’.
When a user sends a digitally signed email, the following happens:
- A hash value (a unique checksum) of the message body is calculated.
- The sender’s private key is retrieved.
- This hash value is encrypted using the sender’s private key, which creates the digital signature.
- This encrypted hash (the signature) is then appended to the email message and sent.
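This is not Outlook’s actual code path, but the same hash-then-sign idea can be illustrated with .NET crypto classes from Windows PowerShell, using a throwaway self-signed certificate.

```powershell
# Throwaway signing certificate in the current user's store (requires the PKI module on Windows)
$cert = New-SelfSignedCertificate -Subject "CN=Sender Demo" -CertStoreLocation Cert:\CurrentUser\My

$body    = [System.Text.Encoding]::UTF8.GetBytes("Quarterly results attached.")
$rsa     = [System.Security.Cryptography.X509Certificates.RSACertificateExtensions]::GetRSAPrivateKey($cert)
$hashAlg = [System.Security.Cryptography.HashAlgorithmName]::SHA256
$padding = [System.Security.Cryptography.RSASignaturePadding]::Pkcs1

# SignData hashes the body and encrypts that hash with the private key:
# the result is the digital signature that travels with the message
$signature = $rsa.SignData($body, $hashAlg, $padding)
```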
11. Now, explain the verification process that happens on the recipient’s end to validate that same digital signature.
When the recipient’s client opens the signed email:
- It retrieves the sender’s public key from the message.
- It uses this public key to decrypt the digital signature, revealing the original hash value.
- It independently calculates a new hash value of the received message body.
- It compares the newly calculated hash with the decrypted hash. If they match, the message’s integrity is verified, and the sender’s identity is authenticated.
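Continuing the signing sketch above, verification needs only the sender’s public key; a deliberately tampered body shows the integrity check failing.

```powershell
# Continues the signing sketch ($cert, $body, $signature, $hashAlg, $padding)
$rsaPublic = [System.Security.Cryptography.X509Certificates.RSACertificateExtensions]::GetRSAPublicKey($cert)

# VerifyData re-hashes the received body and compares it with the hash recovered from the signature
$rsaPublic.VerifyData($body, $signature, $hashAlg, $padding)       # True: intact and authentic

# Any change to the body breaks the match
$tampered = [System.Text.Encoding]::UTF8.GetBytes("Quarterly results attached!")
$rsaPublic.VerifyData($tampered, $signature, $hashAlg, $padding)   # False
```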
12. Does S/MIME encryption alone provide authentication? Why or why not?
No, S/MIME encryption alone does not provide authentication. Encryption uses the recipient’s public key, which is available to anyone, to encrypt the message. Therefore, anyone could encrypt a message for a specific recipient, and the recipient would have no way of knowing who the true sender was. To prove the sender’s identity, a digital signature is required.
13. What is a “triple-wrapped” S/MIME message, and which Microsoft 365 client creates them?
A triple-wrapped S/MIME message is one that is signed, then encrypted, and then signed again, providing an extra layer of security. This is done automatically by Outlook on the web when using the S/MIME control. Standard Outlook clients do not create triple-wrapped messages but are able to read them.
14. In what scenario would you recommend a company use Office 365 Message Encryption (OME) over S/MIME?
I would recommend OME when an organization needs to send encrypted emails to external recipients without the complexity of certificate management. S/MIME requires a certificate key exchange between sender and recipient beforehand, whereas OME is a service-based solution built on Azure RMS that works for any recipient regardless of their email service (Office 365, Gmail, etc.).
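As an illustration of the service-based approach, OME can be applied automatically through an Exchange Online mail flow rule; the rule name and trigger word below are examples, and “Encrypt” is the built-in OME template.

```powershell
# Exchange Online PowerShell
Connect-ExchangeOnline

# Encrypt outbound mail that the sender has flagged as confidential
New-TransportRule -Name "OME for external confidential mail" `
    -SentToScope NotInOrganization `
    -SubjectOrBodyContainsWords "confidential" `
    -ApplyRightsProtectionTemplate "Encrypt"
```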
15. Explain how OME works when a user sends an encrypted email to an external Gmail user?
When an Exchange user sends an OME-protected email to a Gmail user, the message is sent with an HTML attachment. The Gmail recipient opens the email, follows the instructions in the message to open the HTML attachment, and then authenticates their identity. They can authenticate using a Microsoft account, a work account, or by requesting a one-time passcode to view the message in a secure web portal. They can then view the decrypted message and send an encrypted reply.
16. What is the core function of Windows Information Protection (WIP), and how does it prevent accidental data leakage?
The core function of WIP is to protect against accidental data leakage by separating corporate data from personal data on both corporate-owned and employee-owned Windows 10 devices. It does this by tagging any data generated by or coming from a corporate-defined “protected” app as “work” data. WIP policies then control what can be done with this tagged data, such as preventing it from being copied into personal apps or saved to non-corporate locations.
17. Explain the difference between an “Enlightened” app and an “Unenlightened” app in the context of WIP?
Enlightened apps can differentiate between corporate and personal data. They can handle both types of data and correctly determine which data to protect based on WIP policies. Examples include Microsoft Edge and modern Office apps.
Unenlightened apps cannot differentiate between corporate and personal data. When designated as protected apps, they treat all data they handle as corporate and encrypt everything, which can limit user flexibility. These apps typically show as always running in “enterprise mode” in Task Manager.
18. Describe a real-time scenario: A user on a Windows 10 laptop opens a corporate document, copies text, and tries to paste it into their personal webmail. What would WIP do if the policy mode is set to “Block”?
In this scenario, because the source document is corporate data, WIP would block the paste action into the personal webmail client. The user would be prevented from leaking the corporate data. The “Block” protection mode explicitly stops corporate data from leaving protected applications.
19. What are the four available WIP protection modes?
The four protection modes are:
Block: Blocks users from sharing corporate data with non-protected apps.
Allow Overrides: Prompts the user when they attempt to share corporate data with a non-protected app. The user can choose to override the block, and the action is logged.
Silent: Allows the user to share data freely, but all actions are logged in the audit log for monitoring purposes.
Off: WIP is turned off, and no actions are logged.
20. How can you determine an application’s “Enterprise Context” on a Windows 10 device?
You can determine the Enterprise Context using the Task Manager. You need to open Task Manager, go to the “Details” tab, right-click the column headers, select “Select columns,” and then enable the “Enterprise Context” column. This column will show whether an app is running as “Domain” (work), “Personal,” or “Exempt” (trusted to bypass WIP).
21. What is Data Loss Prevention (DLP) in Microsoft 365, and what are the three primary locations a single DLP policy can protect?
DLP is a feature that identifies, monitors, and protects sensitive data from being shared with unauthorized users through deep content analysis. A single DLP policy can be configured to protect content across three primary locations: Exchange Online email, SharePoint Online sites, and OneDrive for Business accounts.
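A minimal sketch of such a policy in Security & Compliance PowerShell; the policy name is illustrative.

```powershell
# Security & Compliance PowerShell (ExchangeOnlineManagement module)
Connect-IPPSSession

# One policy that spans all three workloads
New-DlpCompliancePolicy -Name "PII protection" `
    -ExchangeLocation All `
    -SharePointLocation All `
    -OneDriveLocation All `
    -Mode Enable
```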
22. A DLP policy looks for sensitive information like credit card numbers. How does it achieve a high degree of accuracy and avoid simply matching any 16-digit number?
DLP achieves high accuracy by using a combination of detection methods, not just pattern matching. Each sensitive information type is defined by:
- Keywords
- Internal functions to validate checksums (like the Luhn algorithm for credit cards)
- Evaluation of regular expressions
- Other content examination to find corroborating evidence.
23. What are “policy tips,” and how do they help educate users?
A policy tip is a notification or warning that appears in real-time when a user is working with content that conflicts with a DLP policy. They appear at the top of an Outlook message, as an icon on a document in SharePoint/OneDrive, or on the Message Bar in Office desktop apps. They educate users by making them aware of compliance policies as they work, which reduces the likelihood of violations.
24. Explain how a DLP policy can use rules for “low volume” and “high volume” of sensitive content.
A DLP policy can be configured with two distinct rules based on the instance count of a sensitive information type. For example:
The “Low volume of content detected” rule might trigger if 1-9 instances are found. The action could be to simply send a notification and show a policy tip.
The “High volume of content detected” rule might trigger if 10 or more instances are found. The action for this higher-risk scenario could be more restrictive, such as blocking access to the content and sending an incident report to a compliance officer.
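A sketch of the two rules, attached to the hypothetical “PII protection” policy from the earlier sketch; the thresholds, notification targets, and actions are illustrative.

```powershell
# Low volume: 1-9 credit card numbers -> notify the owner and show a policy tip
New-DlpComplianceRule -Name "Low volume of content detected" -Policy "PII protection" `
    -ContentContainsSensitiveInformation @{Name="Credit Card Number"; minCount="1"; maxCount="9"} `
    -NotifyUser Owner

# High volume: 10 or more -> block access and send an incident report
New-DlpComplianceRule -Name "High volume of content detected" -Policy "PII protection" `
    -ContentContainsSensitiveInformation @{Name="Credit Card Number"; minCount="10"} `
    -BlockAccess $true `
    -GenerateIncidentReport SiteAdmin `
    -IncidentReportContent All
```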
25. What does it mean for a user to “override” a DLP policy, and why is requiring a business justification important?
Allowing a user to override a DLP policy means giving them the ability to bypass a blocking action if they have a legitimate business need. Requiring a business justification is important because it forces the user to formally document their reason for the override. This action is then logged for audit purposes, which helps maintain accountability and allows compliance officers to review the exceptions.
26. How can an organization use its existing Windows Server File Classification Infrastructure (FCI) properties within a Microsoft 365 DLP policy?
An organization can create a DLP policy in Office 365 that recognizes document properties applied by Windows Server FCI. For example, if FCI classifies a document with the property Personally Identifiable Information = High, a DLP policy can be created with a rule that looks for this specific property name/value pair and then takes an action, like blocking external access.
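A hedged sketch of such a rule, reusing the hypothetical “PII protection” policy and the property name/value pair from the example; it assumes the ContentPropertyContainsWords condition of New-DlpComplianceRule.

```powershell
# Condition matches the FCI-applied document property "Personally Identifiable Information" with value "High"
New-DlpComplianceRule -Name "Block external access to FCI-tagged PII" -Policy "PII protection" `
    -ContentPropertyContainsWords "Personally Identifiable Information:High" `
    -AccessScope NotInOrganization `
    -BlockAccess $true
```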
27. For a DLP policy to recognize an FCI property on a document in SharePoint, a “managed property” must first be created. Can you explain why this is necessary?
This is necessary because DLP in Microsoft 365 uses the SharePoint search crawler to identify sensitive information. When a document is uploaded, SharePoint automatically creates crawled properties from its metadata (like an FCI tag). However, only managed properties are kept in the search index. Therefore, the crawled property must be mapped to a managed property so that documents with that FCI tag are indexed and can be detected by the DLP policy.
28. What is Microsoft Cloud App Security, and what is its primary role in managing “Shadow IT”?
Microsoft Cloud App Security is a comprehensive solution that provides visibility and control over an organization’s cloud applications. Its primary role in managing “Shadow IT” is through its Cloud Discovery feature, which analyzes traffic logs from firewalls and proxies to dynamically discover all cloud apps being used by employees, including those not officially sanctioned by IT. It then provides risk assessment and control over this Shadow IT.
29. What are the two primary methods for getting network traffic log data into the Cloud Discovery service for analysis?
The two methods are:
Manual Upload: You can manually upload log files from your firewalls or proxies to create a one-time “snapshot report” of your organization’s cloud use.
Automatic Upload: You can set up continuous reporting by using Cloud App Security log collectors. These collectors run on your network and automatically forward logs periodically for ongoing analysis.
30. What is the function of “App Connectors” in Cloud App Security?
App connectors use APIs from cloud app providers (like Office 365, Salesforce, etc.) to integrate directly with Cloud App Security. This provides deeper visibility and governance over sanctioned apps. Through these connectors, Cloud App Security can scan for activities, files, and accounts, enforce policies, detect threats, and apply governance actions like quarantining a file or suspending a user.
31. Differentiate between an “Activity Policy” and an “Anomaly Detection Policy”.
An Activity Policy allows administrators to monitor for specific, predefined activities and enforce automated processes using app provider APIs. For example, you could create a policy to be alerted when a user downloads a large number of files from SharePoint Online.
An Anomaly Detection Policy looks for unusual activities based on machine learning. It establishes a baseline of normal user and organizational behavior and then alerts on deviations, such as impossible travel, activity from infrequent countries, or unusual file sharing activity.
32. Describe a scenario where Conditional Access App Control would be used.
Conditional Access App Control, which uses a reverse proxy, would be used to control a session in real-time. For example, an organization could configure a policy where a user accessing SharePoint Online from an unmanaged personal device is allowed to view a sensitive file in their browser but is blocked from downloading it. This allows access while mitigating the risk of data exfiltration to an untrusted device.
33. An alert is triggered for “impossible travel activity” for a user account. As an administrator, what does this alert mean?
This alert means the anomaly detection engine has detected two user activities from the same account originating from geographically distant locations within a time frame that would have been impossible for the user to physically travel between. This is a strong indicator that the user’s credentials have been compromised and are being used by an attacker in a different location. The term for this type of policy is an anomaly detection policy.
34. Explain the purpose of the Cloud App Catalog and how it helps in risk assessment.
The Cloud App Catalog is a repository of over 16,000 cloud apps that have been ranked and scored by Microsoft analysts based on more than 70 risk factors. Its purpose is to help organizations assess the risk of discovered cloud apps. The risk factors cover security (e.g., data-at-rest encryption), compliance (e.g., ISO 27001, HIPAA), and legal (e.g., GDPR) categories, allowing an administrator to make informed decisions about whether to sanction or block an app.
35. How does Office 365 Cloud App Security integrate with a SIEM server?
Office 365 Cloud App Security can integrate with a SIEM server to enable centralized monitoring of alerts. This is done via a SIEM agent that runs on the organization’s network. The agent pulls alerts from Office 365 Cloud App Security using RESTful APIs and streams them as Syslog messages to the local SIEM server over an encrypted HTTPS channel.
36. To prepare for Office 365 Cloud App Security, what must be turned on in the tenant first?
For Office 365 Cloud App Security to work correctly, audit logging must be turned on in the organization’s Office 365 tenant. This is a crucial prerequisite step.
37. What are the two types of alerts in Office 365 Cloud App Security?
The two types of alerts are:
Anomaly detection alerts, which automatically detect suspicious or unusual activity based on machine learning baselines.
Activity alerts, which are based on specific activity policies defined by an administrator for activities that might be atypical or forbidden in their organization.
38. What is the initial “learning period” for Anomaly Detection Policies in Office 365 Cloud App Security?
There is an initial learning period of seven days during which anomalous behavior alerts are not triggered. This period allows the anomaly detection algorithm to establish a baseline of normal activity to reduce the number of false positive alerts once it is active.
39. If an administrator wants to ban a third-party app that users are accessing Office 365 data with, what action can they take in the portal?
The administrator can navigate to the App permissions page in the Office 365 Cloud App Security portal. From there, they can locate the specific app and choose the Mark app as banned icon. This action will revoke the app’s permissions to access Office 365 data for all users.
40. What is the difference between sanctioning/unsanctioning an app versus approving/banning it?
- Sanctioning/Unsanctioning an app is typically part of the Cloud Discovery process. When an administrator unsanctions an app, they can then generate a script to block that app at the network level using their firewall or proxy.
- Approving/Banning an app relates to App Permissions for connected apps like Office 365. Banning an app revokes its OAuth permissions so it can no longer access Office 365 data. Approving an app is a visual tag for administrators to track apps that have been reviewed and are considered safe, but it has no effect on the end user.



