Google Tweaks Chrome Privacy Wording: On-Device AI Stays

If you have ever scrolled through Chrome’s settings and noticed a small shift in how a sentence is phrased, you are not alone. A subtle edit in the browser’s “On-device AI” description has sparked a wave of questions about what really happens to your data when you use AI features. The change removed the phrase “without sending your data to Google servers” from the disclosure. Privacy advocates immediately raised alarms, wondering whether this meant Google had altered how the technology works behind the scenes. But according to the company, the underlying architecture has not changed. The data still stays on your device. So why adjust the wording at all? Let’s dig into the details.


What Changed in Chrome’s On-Device AI Description?

The modification appeared in Chrome’s System settings, specifically under the “On-device AI” section. Previously, the message read: “To power features like scam detection, Chrome can use AI models that run directly on your device without sending your data to Google servers. When this is off, these features might not work.” The updated version dropped the clause “without sending your data to Google servers.” That single omission turned a clear privacy assurance into a more ambiguous statement.

This shift was first spotted by a Reddit user and later highlighted by privacy advocate Alexander Hanff, who publicly questioned Google’s motives. Hanff asked whether the previous wording had been inaccurate, whether the architecture had changed, or whether legal advice had prompted the edit because Google was unwilling to defend the original claim. His questions resonated with many users who rely on Chrome for sensitive tasks like banking, remote work, or managing personal documents.

Why the Old Wording Mattered

The original phrasing gave users a straightforward guarantee: no data leaves your machine. That kind of explicit promise builds trust, especially in an era where cloud-based AI services like ChatGPT or Google’s own Gemini cloud models process data on remote servers. When a browser says “without sending your data to Google servers,” it signals a clear boundary between local and remote processing. Removing that line naturally raises suspicion, even if the actual data flow remains unchanged.

Google’s Official Explanation: Why the Edit Was Made

A Google spokesperson responded to inquiries by stating plainly: “This doesn’t reflect a change to how we handle on-device AI for Chrome. The data that is passed to the model is processed solely on device.” So if nothing changed technically, why alter the text? According to Google, the reason is tied to the rollout of the Prompt API and how websites interact with the on-device model.

The Prompt API allows web pages to send prompts to the Gemini Nano model that lives inside Chrome. When a website calls the model, that site can see the inputs and outputs of the AI interaction. In such cases, the data handling falls under the privacy policy of the website, not Chrome’s settings disclosure. Google realized that the old wording could mislead users into thinking that no data ever reaches any server—even when they are using a Google site that triggers the on-device model. To avoid confusion and potential legal claims, the company removed the blanket statement and opted for a more nuanced description.

In other words, the edit was about clarifying that while Chrome itself does not send your data to Google servers, the websites you visit might still receive the prompts and responses if they use the API. It is a distinction that matters, but one that was not communicated clearly in the previous version.

Timing of the Change: Coincidence or Strategy?

The wording edit occurred in early April 2025, coinciding with the public rollout of the Prompt API and ongoing discussions about Chrome’s Gemini Nano model. The model itself has been quietly downloaded onto users’ devices since 2024, taking up about 4 GB of local storage. Critics noted that the timing made the change look suspicious—as if Google was preparing to shift data handling without users noticing. However, the company insists it was simply an effort to align the disclosure with the new capabilities.

It is worth noting that Google also introduced a way to disable and remove the Nano model in February 2025, giving users more control over the space and the AI features. That move suggests a willingness to offer transparency, even if the wording change initially seemed like a step backward.

How the Prompt API Changes the Privacy Picture

To understand the controversy, you need to grasp how the Prompt API works. Before this API, Chrome’s on-device AI features—like scam detection—were purely local. The browser ran the model, analyzed the content, and never exposed that analysis to any external server. The Prompt API changes that dynamic by giving web developers a way to send prompts directly to the model. When you visit a website that uses the API, that site sends a prompt to your local Nano model, gets a response, and can then do whatever it wants with that response—including sending it to its own servers.

For example, imagine a writing assistant website that uses the Prompt API to help you draft emails. You type a request, the site sends it to your local model, the model generates text, and the site displays it. The site now has both your original prompt and the model’s output. If that site happens to be owned by Google, the interaction could theoretically be logged under Google’s broader privacy policy. The old wording would have implied that no data ever left your device, but the reality is that the website itself is a separate party. Google decided to remove the phrase to avoid making a promise it could not fully guarantee in every scenario.
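The writing-assistant flow above can be sketched in page JavaScript. This is a hedged illustration, not Google's reference code: the global `LanguageModel` with `create()` and `prompt()` methods reflects the experimental API surface described in Chrome's origin-trial documentation, and the names may change between releases.

```javascript
// Sketch of a writing-assistant page calling Chrome's Prompt API.
// Assumption: the experimental API is exposed as a global `LanguageModel`;
// exact names and signatures may differ across Chrome versions.
async function draftReply(userText) {
  // Feature-detect: only Chrome builds with the API enabled expose it.
  if (typeof LanguageModel === "undefined") {
    return null; // No local model; the site must fall back or disable the feature.
  }
  const session = await LanguageModel.create();
  // Inference runs against the local Gemini Nano model, on-device...
  const reply = await session.prompt(`Draft a polite reply to: ${userText}`);
  // ...but this page now holds both userText and reply, and nothing in
  // Chrome prevents it from sending either one to its own servers.
  return reply;
}
```

The last comment is the whole privacy point: the model call itself stays local, but the page is a separate party whose handling of the prompt and response falls under its own privacy policy.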

Does This Mean Your Data Is Less Private?

Not necessarily. The core architecture remains the same: Gemini Nano processes everything locally, and Chrome does not send your prompts or responses to Google’s servers. What has changed is that third-party websites (including some Google properties) can now initiate these local AI interactions and see the results. Your privacy risk depends on which websites you allow to use the Prompt API. If you trust the site, the risk is minimal. If you visit an unknown site that abuses the API, your prompts could be captured and sent elsewhere.

This is a classic trade-off in modern browsers. APIs like WebUSB, WebBluetooth, and now the Prompt API give developers powerful tools but also introduce new surfaces for data exposure. Chrome’s wording change simply acknowledges that reality.

How to Verify That Chrome’s On-Device AI Still Processes Data Locally

If you are a privacy-conscious user, you do not have to take Google’s word for it. There are practical ways to confirm that your data stays on your machine when using on-device AI features.

First, you can check Chrome’s settings. Go to Settings > System > On-device AI. If the toggle is on, the model is active. You can also see the current wording there. Remember, the new text no longer includes the server phrase, but that does not mean data is being sent—it just means the wording is broader.


Second, you can monitor network activity. Use Chrome’s built-in DevTools (press F12, go to the Network tab) while using an on-device AI feature like scam detection. If no requests are made to Google’s servers, the processing is indeed local. For the Prompt API, you can check which websites are using it by looking at the console logs or using a browser extension that tracks API calls.

Third, you can disable the model entirely. As of February 2025, Chrome allows you to turn off and remove Gemini Nano. Go to Settings > System > On-device AI and toggle it off. The model will be uninstalled and will not download again until you re-enable it. This gives you full control over whether the AI runs at all.

What About the 4 GB Download?

Another concern that surfaced alongside the wording change is the size of the Gemini Nano model. Chrome has been downloading this model silently since 2024, which can consume significant disk space. Google says the model will automatically uninstall if your device is low on resources, but many users prefer to manage storage manually. The February update gave users a clear way to remove it. If you are worried about the model occupying space or running in the background, simply disable it in settings.

Why Google Might Have Removed a Privacy Promise It Could Not Fully Keep

Legal experts and privacy advocates have pointed out that the previous wording could have been seen as a binding representation. If a user relied on that statement and later discovered that a Google site accessed their prompts via the Prompt API, they might have grounds for a complaint. By removing the phrase, Google reduces its legal exposure. It is a defensive move, not a change in data handling.

However, this also means that users lose a clear, simple assurance. The trade-off is between legal precision and user trust. Google chose precision, but the timing and lack of upfront communication made it look like a cover-up. A more transparent approach would have been to explain the change alongside the Prompt API rollout, perhaps with a pop-up or a blog post. Instead, users discovered it through community vigilance.

What Should Privacy-Conscious Users Do Now?

If you are concerned about the wording change and what it might mean for your data, here are actionable steps:

  • Review your Chrome settings and decide whether you need on-device AI features. If you rarely use scam detection or other AI tools, consider disabling the model.
  • Be cautious about websites that request access to the Prompt API. Chrome will ask for permission before a site can use the API. Pay attention to these prompts and deny access unless you trust the site.
  • Use browser extensions that block unnecessary APIs or monitor network requests. Extensions like uBlock Origin or Privacy Badger can help you see what data is being sent.
  • Stay informed about changes to Chrome’s privacy disclosures. Follow reputable tech news sources or community forums where these subtle shifts are discussed.

The Bigger Picture: On-Device AI vs. Cloud AI in Browsers

The Chrome AI privacy wording debate is part of a larger trend. Browsers are increasingly embedding machine learning models directly into the client. Microsoft Edge has its own on-device AI features, and other browsers are experimenting with local models. The advantage is speed and privacy—no network latency, no server logs. But the line between on-device and cloud is blurring when APIs allow websites to interact with the local model.

Google’s decision to adjust the wording reflects this blur. The company wants to enable powerful web features without making promises that could be broken by third-party interactions. It is a pragmatic choice, but one that requires users to be more vigilant.

Will This Affect All Chrome Users?

The wording change is rolling out gradually: even on Chrome 148, not every user sees the updated text yet. If you are on an older version or have not received the update, you might still see the original phrasing. Eventually, everyone will get the new version. The underlying AI functionality remains the same across all versions.

For developers, the Prompt API is available in Chrome 131 and later, but it remains experimental and must be enabled via flags. The broader rollout will happen over the coming months.
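Developers experimenting behind the flags should feature-detect before calling the API, since the exposed surface has shifted between releases. A minimal sketch, assuming the `LanguageModel` global from the current experimental docs and the older `window.ai` entry point as a fallback:

```javascript
// Coarse feature detection for the experimental Prompt API.
// Both names here are assumptions about the moving API surface:
// newer builds expose a `LanguageModel` global, while some earlier
// builds exposed the API under window.ai instead.
function promptApiStatus() {
  if (typeof LanguageModel !== "undefined") return "language-model-global";
  if (typeof window !== "undefined" && window.ai) return "window.ai";
  return "unsupported";
}
```

A site should treat "unsupported" as the normal case and degrade gracefully, since most browsers (and most Chrome installs) will not have the flag enabled yet.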
