What Meta Is Doing to Its Employees Would Be Illegal in Europe
Meta uses AI to track every click and every screen its employees view. In Europe, systems like MCI would be classified as high-risk under the AI Act. A case that concerns every Italian SME using AI tools to manage its workforce.

In May 2026, the New York Times published a detailed account of what is happening inside Meta. Employee morale has collapsed, a culture of fear has spread, and massive layoffs continue — all while the company pushes AI harder than ever.
But there is one detail in the story that matters far more than morale statistics for anyone operating in Europe: the employee monitoring system Meta has deployed would be illegal in Italy.
The Model Capability Initiative (MCI)
Meta introduced the Model Capability Initiative (MCI), software that records in real time every mouse movement, every click, every menu navigation, and every screen viewed by employees. The stated purpose is to measure how much workers actually use the AI tools the company has provided.
One former employee called the program "very dystopian." Dystopian or not, in the United States Meta can do this. In Europe, the story is entirely different.
What the AI Act Says
The EU Artificial Intelligence Regulation (AI Act), in force since 2024 with progressive application through 2027, classifies AI systems based on the risk they present. High-risk systems carry stringent obligations before they can be used.
Annex III of the AI Act explicitly lists among high-risk systems those intended, in the context of employment and workers' management, "to monitor and evaluate the performance and behaviour of persons" in work-related relationships.
A system like MCI — tracking behaviour, clicks, screens, and usage patterns — falls squarely in this category. Before deploying it in Europe, a company would need to:
1. Complete a conformity assessment
2. Produce comprehensive technical documentation
3. Implement an ongoing risk management system
4. Ensure human oversight of outputs
5. Register the system in the EU AI systems database
What Italian Labour Law and the GDPR Add
The AI Act is not the only obstacle. In Italy, Article 4 of the Workers' Statute prohibits remote monitoring of employee activity without trade union agreement or Labour Inspectorate authorisation, and only for specific purposes (safety, production organisation, asset protection).
The GDPR adds the principle of data minimisation: collecting everything an employee does — every click, every screen — goes well beyond what is necessary for any legitimate purpose.
Combine the AI Act, the Workers' Statute, and the GDPR, and the conclusion is clear: a system like MCI would be blocked before it ever started.
The Real Risk for Italian SMEs: Tools Already in Use
Meta is an extreme and visible case. But the real risk for Italian SMEs is more subtle.
Many companies have already adopted AI tools for personnel management, performance evaluation, or productivity monitoring — often without knowing these tools fall under high-risk AI:
- Copilot for Microsoft 365 with employee usage analytics
- AI HR tools for automated candidate assessment
- AI code review software that generates developer performance evaluations
- AI scheduling systems that optimise shifts by evaluating individual efficiency
None of these tools are inherently illegal. But if used to monitor, evaluate, or make decisions about workers, they become high-risk AI systems and trigger regulatory obligations that most SMEs have never addressed.
The Consequences of Non-Compliance
The AI Act provides for fines of up to 3% of global annual turnover (or €15 million, whichever is higher) for non-compliance with high-risk system requirements, and up to 7% (or €35 million) for violations of the Act's outright prohibitions.
But the risk is not only financial. An SME that uses AI tools to evaluate employees without meeting AI Act requirements is also exposed to employment litigation — workers have the right not to be subject to significant automated decisions without human oversight.
The Point
Meta makes headlines because of its size. But the mechanism the NYT describes — using AI to measure, evaluate, and pressure workers — is not exclusive to big tech. It is a temptation that touches every organisation.
In Europe, a regulatory framework exists to protect workers from this trend. The problem is that many SMEs are adopting AI tools without knowing where they stand in relation to this framework.
Do you know whether the AI tools your company uses qualify as high-risk systems under the AI Act, and whether they comply with its requirements?
Tomato Blue supports SMEs in AI Act gap analysis and in verifying the compliance of AI tools already in use for workforce management.
Request a free Gap Analysis →