OpenAI added Codex to ChatGPT on iOS and Android for managing development remotely
OpenAI brought Codex to mobile ChatGPT on iOS and Android. Developers can now use a phone to monitor active tasks, view terminal output and diffs, approve commands, and launch new tasks.

OpenAI on May 14, 2026 began a preview rollout of Codex in the ChatGPT mobile app on iOS and Android. The coding agent can now be managed from a smartphone, not just launched from the desktop: you can see what it's doing, answer its questions, and confirm next steps.
How It Works
Mobile Codex connects to an already running environment — a laptop, devbox, dedicated Mac mini, or remote machine. On the phone, the user sees the live state of this environment almost as if sitting at the workstation: screenshots, terminal output, diffs, test results, and action confirmation requests. Meanwhile, the files themselves, credentials, permissions, and local configuration remain on the machine where the agent actually works. This is important for remote work and long-running tasks.
"From your phone, you can work with all threads, view results, confirm commands, and launch new tasks."
OpenAI emphasizes separately that this is not just a remote button for a single session. The state of active threads is synchronized between devices through a secure relay layer, and the machine with Codex is not directly exposed to the public internet. In other words, the phone becomes not a replacement for a work computer, but a control panel for long tasks that the agent runs in the background for hours, maintaining the same context, decision history, and work pace between steps.
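The relay pattern described above can be illustrated with a toy simulation. This is not Codex's actual protocol or API; all names (Relay, agent_loop, phone_loop) are hypothetical. The point it demonstrates is the topology: both endpoints dial out to the relay, messages flow through it, and the dev machine never accepts inbound connections from the public internet.

```python
# Hypothetical sketch of a relay-mediated approval flow.
# Both the dev machine (agent) and the phone connect OUT to a relay;
# queues stand in for those outbound connections.
import queue
import threading

class Relay:
    """Broker that both endpoints dial out to; holds in-flight messages."""
    def __init__(self):
        self.to_phone = queue.Queue()   # agent -> phone (questions, diffs)
        self.to_agent = queue.Queue()   # phone -> agent (approvals, new tasks)

def agent_loop(relay, results):
    # The agent hits a permission wall and asks the phone to confirm.
    relay.to_phone.put({"type": "approval_request", "cmd": "rm -rf build/"})
    reply = relay.to_agent.get(timeout=5)   # block until the phone answers
    results.append("ran" if reply["approved"] else "skipped")

def phone_loop(relay):
    # The phone surfaces the pending request and sends back a decision.
    msg = relay.to_phone.get(timeout=5)
    if msg["type"] == "approval_request":
        relay.to_agent.put({"approved": True})

relay = Relay()
results = []
t_agent = threading.Thread(target=agent_loop, args=(relay, results))
t_phone = threading.Thread(target=phone_loop, args=(relay,))
t_agent.start(); t_phone.start()
t_agent.join(); t_phone.join()
print(results[0])  # the approved command was run
```

In a real deployment the queues would be persistent connections to a hosted relay, which is what lets thread state survive the phone going offline and resync when it reconnects.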
What's New
The main idea behind the update is to let the developer stay in the loop even after stepping away from the laptop for half an hour or leaving town. Codex still performs the work in the main environment, but the phone becomes a point for quick decisions: you can check progress, remove a blocker, adjust direction, and move the task forward without waiting to get back to the desk.
- follow active threads and task status
- confirm commands when the agent hits a permission wall
- change the model and work direction on the fly
- launch a new bug report, refactoring, or investigation
OpenAI describes everyday scenarios: ask Codex to track down a bug while you're standing in line for coffee; choose between two refactoring options on your commute; compile an incident summary before a client call; drop a new idea into a separate thread while it's fresh.
The feature is already rolling out in preview on iOS and Android for all plans, including Free and Go, in all supported regions. Support for connecting to the Codex app on Windows will be added later, the company promised.
Betting on Developers
For OpenAI, this is not an isolated feature, but part of a broader race for a place in the daily development workflow. The company states that Codex is already used by more than 4 million people per week. In April, the agent was given the ability to run longer in the background on desktop, and in May a Chrome extension was added so it could act directly in live browser sessions. The mobile layer logically closes another gap: now long tasks won't stop just because a person stepped away from the computer.
Against this backdrop, competition with Anthropic is particularly notable: in February 2026, that company released Remote Control for Claude Code with a similar idea of remote monitoring and management. OpenAI is responding not only with a mobile interface but also with infrastructure updates for teams: Remote SSH became publicly available, programmatic access tokens appeared for CI and internal automations, and hooks can now be used for secret checks, logging, validators, and customizing agent behavior in specific repositories. For enterprise customers, this is no longer a toy but the foundation of a permanent development layer.
What It Means
The market for AI programming tools is rapidly shifting from the mode of "write me a function" to "run the work yourself, and I'll plug in when needed." If this approach takes hold, the smartphone will become for a developer not a place to code, but a point of control over agents that work continuously in a local or remote environment. For teams, this means fewer idle periods between steps and more convenient management of long-running tasks.