Implement Security for AI Apps, Copilot & Agents

Last Updated 16 Dec 2025

Course Overview

This course shows you how to prepare for, discover, protect, and govern AI apps and data to enable responsible AI adoption. You will learn how to build a strong AI security posture, safeguard sensitive data, and mitigate emerging AI security risks. The course highlights how Microsoft Purview, Entra, Defender, Intune, and Azure AI Content Safety complement one another to protect information, strengthen application security, and ensure regulatory compliance. It provides insight into applying the right safeguards at the right stages of AI adoption.

Duration - 12 Hours

Level - Intermediate

Style - Self-paced

Course Type - Project Ready with Labs

Certification - No

Hands on Labs - Yes

Solution Areas - Security; Protect Cloud, AI Platform and Apps

Course Modules

Why security for AI?

This module outlines key frameworks and strategies for securing and governing AI adoption, along with five practical steps to build an effective AI security approach. It also demonstrates how to leverage the Microsoft Security Stack to protect AI systems end-to-end.

Prepare for AI Security

This section focuses on securing identities and devices, safeguarding sensitive business data, and enabling AI-driven threat detection and response. Together, these capabilities strengthen access control and enhance overall protection across the environment.

Discover AI apps and data

This module highlights key capabilities for discovering AI apps and gaining visibility into AI agents, usage, and data posture through Entra Agent ID and DSPM for AI. It also shows how to identify, sanction, or block AI apps with Defender for Cloud Apps and uncover deployed AI workloads with Defender for Cloud for deeper security insights.

Protect your organization’s use of AI apps and data

This section focuses on controlling data oversharing across Microsoft 365 and AI apps using DSPM, sensitivity labels, DLP, and Insider Risk Management. It also covers adaptive protection, AI-focused threat defenses, and securing and monitoring AI applications on Azure with Microsoft Defender for Cloud.

Manage Governance and Secure Access

This module explores new considerations for governing AI apps and data, helping organizations assess risks, define and enforce governance policies, and monitor AI-driven threats. It covers content safety, red teaming, supply-chain security, prompt-injection defense, and establishing an AI Center of Excellence for ongoing compliance and oversight.

Post-training Skills Assessment

Take this assessment to validate the skills you gained from this self-paced course and mark your completion.

Course Completion Survey

Share your feedback with us regarding your experience!

Other courses in this Category

Implement Microsoft Defender for Endpoint
Level - Intermediate
Duration - 12 Hours

Protect cloud, AI Platform and Apps by implementing Defender for Cloud
Level - Intermediate
Duration - 12 Hours

Implement Threat Protection with Microsoft Defender XDR solutions
Level - Intermediate
Duration - 12 Hours

Implement Identity and access management with Microsoft Entra
Level - Advanced
Duration - 16 Hours