Google unveils AI laptops and agentic Gemini features ahead of I/O
Google made several major announcements ahead of the I/O 2026 conference. Among the new additions are the Googlebook AI laptops with built-in Gemini and expanded agentic capabilities for the assistant.

Google unveiled a comprehensive set of innovations at the Android Show ahead of the I/O 2026 conference. From new AI laptops to expanded Gemini features, the company is preparing to significantly transform its ecosystem of devices and services.
From Laptops to Ecosystem
Google introduced a new line of AI laptops called Googlebooks. These are the company's flagship devices, designed from the ground up for working with artificial intelligence. The Gemini assistant is not simply pre-installed on them but deeply integrated at the operating-system level and within all major applications. Thanks to this integration, Gemini has access to the context of your documents, emails, and files, which makes its assistance far more personalized. In parallel, Google expanded Gemini's availability: the assistant is now built into the Chrome browser, so millions of users worldwide can use it without installing a separate application. Gemini thus spans Google's entire ecosystem, from the new laptops and smartphones to the browser and the Android Auto automotive platform.
Gemini Becomes an Agent
The main technological update is that Gemini is gaining agentic capabilities: the assistant will be able not just to answer questions but to execute complex multi-step tasks in the browser. Imagine this scenario: you tell Gemini "order me a pizza for breakfast," and the assistant opens the delivery site on its own, browses the menu, adds dishes to the cart, specifies the delivery address, and confirms payment.
All the user has to do is wait for delivery. Such capabilities require a new approach to language-model architecture: Google had to add a layer that lets Gemini not just generate text but interact with web interfaces, understand site structure, and decide which actions to take.
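The observe-decide-act layer described above can be sketched as a simple agent loop. Everything here is hypothetical: the `PizzaSite` class is a toy stand-in for a delivery website's interface, and the rule-based `decide` policy is a placeholder for the model's reasoning, not Google's actual architecture.

```python
class PizzaSite:
    """Toy stand-in for a delivery website's state (hypothetical)."""

    def __init__(self):
        self.cart = []
        self.address = None
        self.confirmed = False

    def observe(self):
        # The "page" as the agent sees it: current state as structured data.
        return {"cart": list(self.cart), "address": self.address,
                "confirmed": self.confirmed}

    def act(self, action, value=None):
        # The actions a real agent would perform via UI interactions.
        if action == "add_to_cart":
            self.cart.append(value)
        elif action == "set_address":
            self.address = value
        elif action == "confirm":
            self.confirmed = True


def decide(observation, goal):
    """Rule-based policy: choose the next action toward the goal,
    standing in for the model's decision-making step."""
    if goal["item"] not in observation["cart"]:
        return ("add_to_cart", goal["item"])
    if observation["address"] is None:
        return ("set_address", goal["address"])
    if not observation["confirmed"]:
        return ("confirm", None)
    return None  # goal reached


def run_agent(site, goal, max_steps=10):
    """Observe-decide-act loop with a step budget."""
    for _ in range(max_steps):
        step = decide(site.observe(), goal)
        if step is None:
            break
        site.act(*step)
    return site.observe()


result = run_agent(PizzaSite(),
                   {"item": "margherita", "address": "221B Baker St"})
```

In a real system the policy would be the language model itself and the actions would be clicks and form fills in a live browser, but the control flow is the same: observe the page, pick an action, repeat until the task is done.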
This is a significant step toward more autonomous AI.
In practice, Gemini's new functions look like this:
- Automating purchases on marketplaces and food delivery
- Booking tickets for events and travel
- Comparing prices and finding the best offers
- Filling out tax forms and administrative documents
- Scheduling meetings and managing correspondence on behalf of the user
This is not just an incremental improvement to a chatbot but a transition to what the industry calls "agentic AI": assistants that act on the internet almost like humans. Google is not the first company working on such technology, but it is among the first to ship these functions in mass consumer products available to millions of users.
Android and Automobiles
Beyond the main focus on Gemini, Google also announced visual updates for the Android operating system. New widgets gain support for a technology the company calls "vibe-coding": the system analyzes the overall visual style of the user interface and automatically selects matching colors and styling for individual widgets. Simply put, if you use a dark theme with blue accents, all widgets will automatically be colored in corresponding tones and shades.

Google also announced a significant update to the Android Auto platform. A redesigned interface, tighter Gemini integration, and optimized touch gestures are meant to make in-car navigation safer and more intuitive.
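The widget-theming idea can be illustrated with standard color math: derive darker and lighter variants of the theme's accent color and assign them to widget roles. This is a generic tint/shade sketch, not Google's actual algorithm, and the `widget_palette` function and its role names are invented for illustration.

```python
def hex_to_rgb(color):
    """Parse '#rrggbb' into an (r, g, b) tuple."""
    color = color.lstrip("#")
    return tuple(int(color[i:i + 2], 16) for i in (0, 2, 4))


def rgb_to_hex(rgb):
    return "#%02x%02x%02x" % rgb


def tint(rgb, factor):
    """Blend toward white by `factor` (0 = unchanged, 1 = white)."""
    return tuple(round(c + (255 - c) * factor) for c in rgb)


def shade(rgb, factor):
    """Blend toward black by `factor` (0 = unchanged, 1 = black)."""
    return tuple(round(c * (1 - factor)) for c in rgb)


def widget_palette(accent_hex):
    """Derive widget colors from the theme's accent color (hypothetical roles)."""
    base = hex_to_rgb(accent_hex)
    return {
        "background": rgb_to_hex(shade(base, 0.6)),  # dark widget surface
        "accent": accent_hex,                        # theme color, unchanged
        "highlight": rgb_to_hex(tint(base, 0.4)),    # lighter touch/press color
    }


# For a dark theme with blue accents, as in the example above:
palette = widget_palette("#3366cc")
```

A production system would also check contrast ratios for readability, but the core idea is the same: one theme color in, a coherent family of widget colors out.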
What This Means
Google is clearly moving to a more strategic position around Gemini and AI applications in general. Where the assistant was previously mainly a tool for information retrieval and quick answers, it is now becoming a fully-fledged agent that can perform tasks on the user's behalf. This is not a marketing move or a capability demo; it is a genuine direction for the mobile industry's development over the coming years. The company is preparing for a future in which AI assistants are not merely helpers but working tools in their own right.