Copilot’s ‘Compliant by Design’ Claim Under the EU AI Act: What Deployers Are Really Responsible For
You might think “Compliant by Design” means you can relax. That assumption can put your organization at risk: the EU AI Act places real obligations on you as a deployer, not only on the vendor. Many IT and compliance leaders worry that AI assistants will surface data to people who should not see it. Microsoft 365 Copilot enforces strong role-based access controls, but those features are only a starting point. Picture a user asking Copilot to summarize confidential files: are you sure your settings keep that data contained?
Key Takeaways
- Learn the EU AI Act risk ladder. Your AI use case decides your compliance tier; the vendor does not decide it for you.
- Use Copilot’s compliance features and configure them correctly. The default settings may not satisfy EU requirements.
- Document every step when you use AI. Good records help you demonstrate compliance and avoid heavy fines.
- Train your team on AI safety and compliance so everyone knows their responsibilities and the rules for using AI tools.
- Review and update your compliance plan regularly to keep pace with changing rules.
EU AI Act Risk Ladder Explained
Risk Categories and Use Cases
You need to understand the EU AI Act risk ladder before you deploy any AI tool. The Act sorts AI systems into four risk tiers. Each tier comes with different rules for you to follow. Here is a simple table that shows how the risk ladder works:
| Risk Tier | Description |
|---|---|
| Unacceptable risk | AI systems that are banned outright, such as social scoring systems and manipulative practices. |
| High-risk | Subject to strict requirements, including conformity assessments and registration in the EU database. |
| Limited-risk | Must meet transparency obligations. |
| Minimal-risk | Can operate freely, subject to basic AI literacy requirements for deployers. |
Your obligations grow as you move up the ladder. If you use Copilot or ChatGPT for simple tasks like drafting emails, you fall into the minimal or limited risk category. If you use these tools for hiring, credit scoring, or health advice, you step into high-risk territory. You must follow more rules and keep better records.
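As a rough illustration (not legal advice), the tier logic can be expressed as a simple lookup. A minimal sketch in Python; the use-case names and tier assignments below are illustrative assumptions, not an official taxonomy:

```python
# Illustrative sketch: map internal AI use cases to EU AI Act risk tiers.
# The use-case keys and tier assignments are examples, not an official list;
# always confirm classification with your legal team.

RISK_TIERS = {
    "social_scoring": "unacceptable",       # prohibited outright
    "hiring_screening": "high",             # employment decisions are high-risk
    "credit_scoring": "high",
    "customer_service_chatbot": "limited",  # transparency obligations apply
    "email_drafting": "minimal",
    "meeting_summaries": "minimal",
}

def classify(use_case: str) -> str:
    """Return the risk tier for a known use case; unknown cases need review."""
    tier = RISK_TIERS.get(use_case)
    if tier is None:
        raise ValueError(f"Unclassified use case: {use_case!r} - review before deployment")
    if tier == "unacceptable":
        raise RuntimeError(f"{use_case} is a prohibited practice under the Act")
    return tier

print(classify("meeting_summaries"))  # -> minimal
```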
Why Use Case Matters More Than Vendor
You might think the vendor or model decides your risk. That is not true. The way you use AI sets your risk level. You must develop clear use cases and requirements. You need to document what problem you want to solve and check if AI is the right answer. You should also assess output risks and quality controls. Ask yourself how you will use the AI’s results and what could go wrong.
Here are some examples:
- If you use Copilot to summarize meeting notes, you face limited risk.
- If you use ChatGPT to help decide who gets a loan, you face high risk.
- If you build your own AI for social scoring, you face unacceptable risk.
You must focus on your use case, not just the brand or model. Compliant by Design features help, but you still need to match your AI use to the right risk tier and follow the rules.
Compliant by Design – Copilot’s Compliance Scaffolding
Built-In Features for Compliance
When you use Copilot in Microsoft 365, you get a strong start. The platform gives you many tools to help keep your data safe. These tools work together to help you follow the rules. You can see the main tools in the table below:
| Compliance Feature | Description |
|---|---|
| Microsoft Purview Sensitivity Labels | Mark confidential data and stop Copilot from using it. |
| Data Classification | Choose which data Copilot can use, so private data stays protected. |
| Microsoft 365 EU Data Boundary | Control where data is stored and processed to meet EU residency rules. |
| DLP Policies | Set Data Loss Prevention rules that block Copilot from using sensitive content. |
| Monitoring and Logging | Review logs and reports to find problems or leaks. |
These tools form Copilot’s Compliant by Design scaffolding. They help you manage data, control who can see it, and monitor what happens, so you do not have to build everything yourself. Use them as the base of your compliance plan.
Tip: Copilot does not use your prompts or answers to train its models. This keeps your data private and lowers risk.
What Deployers Still Need to Configure
You cannot just use the default settings. You need to make changes so Copilot follows the EU AI Act rules. Compliant by Design helps you start, but you must finish the work. Regulators want proof, not just promises.
Here are steps to make your compliance stronger:
- Check and fix permissions in SharePoint, OneDrive, and Teams before you turn on Copilot. Remove access people do not need (a Graph-based starting point is sketched at the end of this section).
- Apply least privilege. Review group memberships and sharing settings so people can see only what they need.
- Consider E5 licenses if you want stronger data protection for users who handle sensitive data.
- Train your users. Tell them Copilot follows your data rules, and make sure everyone knows what Copilot can and cannot do.
You must set Purview sensitivity labels and DLP rules for Copilot. You need to set how long data is kept and check audit logs. You should limit SharePoint search to stop sharing too much. You must write down your risk checks and keep records of what you do.
Compliant by Design does not mean you are done. You must show how you used these tools in your own setup. Regulators want to see proof. You need to show you set up, watched, and wrote down your compliance steps.
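As one hedged starting point for the permission review above, the sketch below uses the Microsoft Graph API to flag files in a document library that carry “anyone” sharing links. It assumes you already hold an OAuth access token with sufficient read permissions and know the drive ID; paging and error handling are omitted for brevity:

```python
import requests

GRAPH = "https://graph.microsoft.com/v1.0"
TOKEN = "<access-token>"   # assumption: acquired via MSAL or azure-identity
DRIVE_ID = "<drive-id>"    # assumption: the SharePoint document library to review
HEADERS = {"Authorization": f"Bearer {TOKEN}"}

def flag_anonymous_links(drive_id: str) -> None:
    """Print items in the drive root whose sharing links are open to 'anyone'."""
    items = requests.get(f"{GRAPH}/drives/{drive_id}/root/children",
                         headers=HEADERS).json().get("value", [])
    for item in items:
        perms = requests.get(
            f"{GRAPH}/drives/{drive_id}/items/{item['id']}/permissions",
            headers=HEADERS).json().get("value", [])
        for perm in perms:
            link = perm.get("link")
            if link and link.get("scope") == "anonymous":
                print(f"REVIEW: '{item['name']}' has an anyone-link ({link.get('type')})")

flag_anonymous_links(DRIVE_ID)
```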
ChatGPT and Azure OpenAI – Bring Your Own Governance
Public ChatGPT Compliance Gaps
Public ChatGPT is a bigger challenge. Copilot has controls built in; ChatGPT does not, so you have to supply your own rules and safety checks. Many organizations have learned this the hard way.
Consider the Tromsø municipality case. Officials used AI to write most of a report, and no one verified the facts before it was shared. The report contained fabricated information, which damaged the municipality’s reputation and caused problems internally. Without strong rules, it is easy to lose control.
Here are some common problems you need to fix:
- No automatic record of prompts or answers (see the logging sketch below).
- No built-in way to prevent data loss or label sensitive data.
- No warnings or labels for AI-generated content.
- No human review of important results.
- No clear way to limit access or control where data is stored.
You have to fix these problems yourself. The EU AI Act wants you to show how you handle risks, write down your choices, and keep records.
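To close the first gap, the missing audit trail, you can wrap every API call in your own logger. A minimal sketch, assuming the official openai Python SDK (v1+) with an API key in the environment; the JSONL path and model name are placeholders:

```python
import datetime
import json
from openai import OpenAI  # assumption: openai>=1.0 SDK installed

client = OpenAI()  # reads OPENAI_API_KEY from the environment
AUDIT_LOG = "ai_audit_log.jsonl"  # placeholder path for your record-keeping

def logged_chat(prompt: str, user_id: str, model: str = "gpt-4o") -> str:
    """Call the model and append a prompt/response record for EU AI Act evidence."""
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
    )
    answer = response.choices[0].message.content
    with open(AUDIT_LOG, "a", encoding="utf-8") as f:
        f.write(json.dumps({
            "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
            "user": user_id,
            "model": model,
            "prompt": prompt,
            "response": answer,
        }) + "\n")
    return answer
```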
Azure OpenAI as a Middle Ground
Azure OpenAI gives you more control than public ChatGPT. You can use work accounts, set who can use it, and watch what people do. You get better records and can keep data inside your company. These tools help you follow some EU AI Act rules.
But you still have most of the work. You need to:
- Run Data Protection Impact Assessments (DPIAs) for each use case.
- Keep Records of Processing Activities (RoPA).
- Set up data loss prevention rules and decide how long to keep data.
- Review logs and check outputs.
- Train users to use AI safely and follow the rules.
Azure OpenAI helps you start, but you must finish the work. You need to show proof with clear records and checks. The EU AI Act does not accept promises. You must show what you did.
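One concrete example of that added control: keyless Entra ID authentication keeps API keys out of code and ties every call to an identity you can audit. A sketch assuming the openai and azure-identity packages; the endpoint, API version, and deployment name are placeholders for your own resource:

```python
from azure.identity import DefaultAzureCredential, get_bearer_token_provider
from openai import AzureOpenAI

# Assumption: your identity holds the "Cognitive Services OpenAI User" role.
token_provider = get_bearer_token_provider(
    DefaultAzureCredential(), "https://cognitiveservices.azure.com/.default"
)

client = AzureOpenAI(
    azure_endpoint="https://<your-resource>.openai.azure.com",  # placeholder
    azure_ad_token_provider=token_provider,
    api_version="2024-06-01",  # placeholder; use your resource's supported version
)

response = client.chat.completions.create(
    model="<your-deployment-name>",  # placeholder deployment
    messages=[{"role": "user", "content": "Summarize our data retention policy."}],
)
print(response.choices[0].message.content)
```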
Deployers’ Survival Kit for EU AI Act
You need a simple plan to follow the EU AI Act rules. This kit helps you get organized. It gives you easy steps, tools, and tips for Copilot, ChatGPT, and Azure OpenAI. Use this kit to keep your group safe and show regulators you care about compliance.
Risk Classification and Documentation
Start by listing your AI systems and how you use them. You must know where AI is used and what each system does. Follow these steps to sort and write down risk:
- List every AI tool and model your company uses, builds, imports, or distributes in Europe (a machine-readable record format is sketched below).
- Decide whether your company is a provider, deployer, importer, or distributor for each system.
- Check whether the EU AI Act covers each system or model.
- Assign each AI use case a risk level: unacceptable, high, limited, or minimal.
- Review contracts and check AI service terms.
- Create or update your AI governance plan to match the Act’s rules.
- Keep notes on any changes you make to AI systems or models.
You must write down every step you take. If you do not keep good records, you could get big fines.
Deployers who fail to document their EU AI Act compliance face serious penalties. Supplying incomplete or misleading information to regulators can draw fines of up to €7.5 million, even if the underlying risk classification was correct.
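Even a lightweight, machine-readable inventory beats scattered notes. A minimal sketch of one record format; the fields are assumptions modeled on the steps above, not a regulator-mandated schema:

```python
import json
from dataclasses import dataclass, asdict, field
from datetime import date

@dataclass
class AISystemRecord:
    """One row of an AI inventory for EU AI Act documentation (illustrative schema)."""
    name: str
    role: str              # provider | deployer | importer | distributor
    risk_tier: str         # unacceptable | high | limited | minimal
    in_act_scope: bool
    use_case: str
    last_reviewed: str = field(default_factory=lambda: date.today().isoformat())
    change_notes: list[str] = field(default_factory=list)

record = AISystemRecord(
    name="Microsoft 365 Copilot",
    role="deployer",
    risk_tier="limited",
    in_act_scope=True,
    use_case="Meeting summaries and email drafting",
)
print(json.dumps(asdict(record), indent=2))
```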
Configuring Purview, DLP, and Access
You must set up strong controls to keep your data safe. Use Purview, Data Loss Prevention (DLP), and access controls to meet EU AI Act rules. The table below shows what you need:
| Requirement/Control | Description |
|---|---|
| Compliance Portal Access | Assign the Entra Compliance Administrator or a similar role. |
| Monitoring License | Use a Microsoft 365 Copilot license to monitor Copilot use. |
| Data Governance | Get Microsoft 365 E5 or an add-on for data governance features. |
| Endpoint DLP | Monitor AI use in browsers and apps. |
| Browser Extension | Add the Purview browser extension for Chrome/Edge to enforce rules. |
| Insider Risk Management | Use E5 or an add-on to detect risks early. |
| Auditing | Turn on Purview auditing. |
| Device Onboarding | Onboard devices for endpoint DLP. |
| Extension Deployment | Deploy the browser extension for Chrome/Edge. |
| Policy Configuration | Set up Insider Risk Management policies. |
Group your DLP rules to cover common needs. Make sure your rules use the strictest actions first. Check audit logs and DLP reports often to see if you follow the rules.
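Those audit reviews can be partly automated offline. A hedged sketch that summarizes Copilot activity from a Purview audit log export, assuming the classic CSV columns “Operations” and “UserIds” and the “CopilotInteraction” operation name; verify the exact names against your own export before relying on it:

```python
import csv
from collections import Counter

EXPORT_FILE = "audit_export.csv"  # placeholder: a Purview audit log export

def summarize_copilot_activity(path: str) -> None:
    """Count Copilot interaction events per user from an audit export (assumed schema)."""
    per_user = Counter()
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            # Assumption: the export uses 'Operations' and 'UserIds' column headers.
            if row.get("Operations") == "CopilotInteraction":
                per_user[row.get("UserIds", "unknown")] += 1
    for user, count in per_user.most_common():
        print(f"{user}: {count} Copilot interactions")

summarize_copilot_activity(EXPORT_FILE)
```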
Logging, Transparency, and Human Oversight
You must track AI use and keep things clear. Regulators want to see how you control and watch your systems. Use these best practices:
| Best Practice | Description |
|---|---|
| Implement human oversight | Configure high-risk AI systems so people can review results and step in when needed. |
| Create post-market monitoring | Keep watching for risks, track performance, report problems, and keep logs up to date. |
Add clear labels to anything made by AI. Use footers or watermarks to show when AI helped make a file or message. Make people check important choices, especially for things like hiring or credit scoring.
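Both practices are easy to encode. A minimal sketch: a transparency footer for AI-assisted output plus a human-approval gate for high-risk decisions. The label text and review hook are illustrative, not wording prescribed by the Act:

```python
AI_FOOTER = "\n---\nThis content was generated with AI assistance and reviewed by a human."

def label_ai_output(text: str) -> str:
    """Append a transparency footer so readers know AI helped produce the content."""
    return text + AI_FOOTER

def release_decision(draft: str, risk_tier: str, approved_by: str | None) -> str:
    """Block high-risk outputs until a named human reviewer has signed off."""
    if risk_tier == "high" and not approved_by:
        raise PermissionError("High-risk output requires documented human approval")
    return label_ai_output(draft)

# Usage: a hiring recommendation (high risk) must name its reviewer.
print(release_decision("Candidate shortlist: ...", "high", approved_by="jane.doe"))
```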
Training and Internal Policies
You must teach your team how to use AI safely. Build strong rules and training for everyone. Make sure all workers know their job and the rules.
- Give training based on each person’s role: business users, admins, and compliance staff.
- Share rules about which AI uses are acceptable and which are not.
- Use prompts and settings that support compliance, like asking Copilot to show sources or add review notes (see the sketch at the end of this section).
- Update your rules as the EU AI Act evolves.
You can use built-in tools to help users learn their jobs. Give guides that explain the difference between general-purpose AI models and systems. Offer starter packs with templates and lists to help with compliance.
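One way to bake compliance into everyday use is a shared prompt template that asks the assistant to cite sources and flag uncertainty. A small sketch; the wording is an illustrative starting point, not official Microsoft guidance:

```python
COMPLIANCE_PREAMBLE = (
    "List the source documents you used for every claim. "
    "If you are unsure about a fact, say so explicitly. "
    "End with a note that this draft needs human review before it is shared."
)

def compliant_prompt(task: str) -> str:
    """Prepend compliance instructions to a user's task (illustrative template)."""
    return f"{COMPLIANCE_PREAMBLE}\n\nTask: {task}"

print(compliant_prompt("Summarize the Q3 vendor risk report."))
```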
30/60/90-Day Action Plan and Internal Communication
You need a timeline to finish your compliance work. Here is an easy action plan:
First 30 Days:
- List all AI systems and use cases.
- Assign roles and risk levels.
- Start documenting DPIAs and RoPA.
Days 31–60:
- Set up Purview, DLP, and access controls.
- Turn on auditing and monitoring.
- Start training for key teams.
Days 61–90:
- Review and update rules.
- Test human oversight checks and content labels.
- Prepare evidence for regulators.
Share this message with your team:
We are using AI tools with a Compliant by Design plan. Everyone must follow our new rules and training. We will track, check, and write down every step to meet EU AI Act rules.
Use this kit to build a strong compliance plan. You will keep your group safe and help everyone use AI the right way.
You should not just trust “compliant by design.” You need to show proof that you follow the rules. The EU AI Act gives you clear obligations, and breaking them can bring fines of up to €15 million or 3% of your company’s global annual turnover.
| Compliance Aspect | Description |
|---|---|
| Operational Responsibilities | You must use AI in a safe and legal way. |
| Penalties for Non-Compliance | You can face large fines if you break the rules. |
AI governance matters. Watch how you use AI, put someone in charge, and build steps you can repeat. Audit your AI tools regularly, configure your compliance controls, and document everything you do. Following the rules builds trust in your company and keeps AI use safe.
FAQ
What does “compliant by design” mean for Copilot?
“Compliant by design” means Copilot includes built-in tools to help you meet rules. You still need to set up, check, and prove your compliance. You cannot rely only on vendor features.
Do I need to run a DPIA for Copilot or ChatGPT?
Yes. You must run a Data Protection Impact Assessment (DPIA) for each AI use case. This helps you find risks and show regulators you take privacy seriously.
How do I prove compliance with the EU AI Act?
You need to keep records, set up controls, and show logs. Regulators want to see proof, not just promises. Use audit logs, risk documents, and training records as evidence.
Can I use public ChatGPT for high-risk tasks?
No. Public ChatGPT does not have built-in controls for high-risk tasks. You must add your own safety checks. For sensitive uses, choose tools with strong compliance features.
What is the biggest mistake deployers make?
Many think vendor tools alone make them compliant. You must configure, monitor, and document everything. Proof beats promises every time.