Apple to allow AI agents in the App Store under strict restrictions
Apple is preparing to allow autonomous AI agents in the App Store under restrictions. The company is developing its own standards to control safety and monetization.

Apple is developing standards for allowing autonomous AI agents in the App Store. The company wants to remain at the forefront of the industry while maintaining full control over user safety and platform revenues.
What rules Apple is preparing
Apple is already working on internal standards for AI agents. The company sees the entire industry moving in this direction: Google is developing agents for search, OpenAI has released the browser agent Operator, and young startups are building specialized agents for narrow tasks. Apple cannot fall behind, but it also cannot simply open the App Store without rules — the risks are too high.
The company's main focus when developing standards:
- User safety — the agent must not steal data, perform dangerous actions, or access sensitive information without explicit consent
- Preserving platform revenue — Apple still receives its commission, as it does for all apps in the App Store
- Transparency of operations — the user sees and controls what the agent does and can stop it at any moment
- Quality and reliability — not every AI agent will make it onto the platform; verification is needed before publication
- Compliance with Apple's policy — agents must comply with all existing rules for applications
Why falling behind is not an option
Falling behind in this area could cost Apple dearly. Google and OpenAI have already shown that AI agents are a working technology with practical applications. Users see the possibilities and are starting to expect them from their devices. If Apple doesn't move, developers will build agents for other platforms. Over time, the App Store risks becoming a platform for ordinary applications while all the innovative interest shifts toward agents.
At the same time, Apple cannot simply open the doors indiscriminately. Every AI agent is a potential source of problems. If an agent makes a serious mistake — sends money to the wrong recipient, deletes important files, or exposes private data — Apple could face legal liability.
Balance between innovation and responsibility
This is the central tension that Apple is trying to resolve. On the one hand, the company needs to be at the forefront of technology. On the other, the platform bears responsibility for everything that happens on it. Apple traditionally chooses the path of quality control: better to lag a year behind with a better product than to rush and face reputational risks.
Developing its own standards is an attempt to find that middle ground.
What this means
By the end of 2026 or early 2027, the first AI agents operating under Apple's control could appear in the App Store. For developers, this means a new distribution channel and access to an audience of hundreds of millions of users. For users, it means new kinds of assistance in work, shopping, and research. But it will all happen under Apple's watch, with safety guarantees and quality control. This is not a revolution, but an evolution of the App Store.