Starting with today’s release of Chrome (M121), we’re introducing experimental generative AI features to make it even easier and more efficient to browse — all while keeping your experience personalized to you.
You’ll be able to try out these new features in Chrome on Macs and Windows PCs over the next few days, starting in the U.S. Just sign into Chrome, select “Settings” from the three-dot menu and navigate to the “Experimental AI” page. Because these features are early public experiments, they’ll be disabled for enterprise and educational accounts for now.
Parisa Tabriz
Chrome will automatically suggest tab groups for you (a sorting algorithm, very advanced technology), you can generate themes (mashing other people’s real art together and picking a dominant colour from the result), and Chrome can generate text in text fields (spicy autocomplete). “AI” sure is changing the world as we know it.
Here comes another couple hundred megabytes of bloat to a web browser near you, because, when one of them implements this stuff, competitors “have to” do it too. I’m one of those people who runs Linux from USB sticks, so I prefer software to be as slim as possible. What’s wrong with just making extra functionality like this for a web browser into an optional add-on for those who want to install it?
kbd,
+1, this should be an optional add-on.
Because they want this to be the default and many people don’t install any extra stuff.
jgfenix,
Let people opt in, then. It doesn’t need to be complicated. Of course, the real source of friction here is that Google really wants to be more forceful. I don’t feel they are different from Microsoft in this regard: don’t ask, just do.
Google wants people to use it. The best way to do that is to make it there by default and require it to be removable if you don’t want it. However, it does end up as a U2 free album issue: people sometimes get upset when you do things like this. Microsoft and Cortana is a good example, or Google giving everyone Google Buzz or Google+ accounts. You have to make it as unobtrusive as possible for it to work. But marketing hates that, they want the press release, so it will be garish, bold, and ugly. It will be Clippy on a unicycle shooting off fireworks.
Thom, at this point it’s not funny any more. Please learn about AI. I would suggest the excellent AI Explained YT channel.
For the record, all these features are powered by exactly the same technology, and no, it’s not a “sorting algorithm.”
That’s correct. LLMs are much more of a giant hack than a sorting algorithm!
Please explain.
It’s kind of right. There is less logic to an LLM’s output. For example, you can’t ask an LLM to order a list alphabetically and assume it’s going to be correct. It may well be, but how it gets to the right answer is very different and much less efficient than a sorting algorithm. Of course the LLM can do other things, but its accuracy is just as questionable with those other things as it is with sorting things alphabetically.
Bill Shooter of Bul,
I don’t know that this is what CaptainN- means, but I agree with you that an LLM is not good at executing logic. It’s kind of like creating a calculator by taking (many) random mathematical data points and then interpolating between them. You might get the results you want, but the output is not really optimal or guaranteed.
That said, I actually think an offshoot of LLMs where the neural net isn’t used to generate the output directly, but is used to write an algorithm that generates the output, could prove to be very useful. So, for instance, these models can’t sort reliably, but they probably have little problem producing algorithms that can. This is really where I see the future of LLMs going: combining them with other techniques, like adversarial NN training. If successful, it could push AI’s problem-solving ability much further than an LLM alone can.
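To make the distinction concrete, here is a minimal sketch of the two approaches. This is purely illustrative: ask_llm() is a hypothetical stand-in for whatever model API you would call, not a real library function.

    # Sketch only: ask_llm() is a hypothetical function standing in for any LLM API.

    def sort_directly(items):
        # The model emits the "sorted" list as text; nothing guarantees it is correct.
        answer = ask_llm("Sort these lines alphabetically:\n" + "\n".join(items))
        return answer.splitlines()

    def sort_via_generated_code(items):
        # The model emits an algorithm instead; once it runs, the sorting itself is
        # ordinary deterministic code that can be tested, reused, and trusted.
        source = ask_llm("Write a Python function sort_items(items) that returns the list sorted alphabetically.")
        namespace = {}
        exec(source, namespace)  # a real system would sandbox and validate this first
        return namespace["sort_items"](items)

The second approach only has to get the code right once; after that the heavy lifting is done by a plain algorithm rather than by the model.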
I never use the “tab groups” feature; it just seems to add even more clutter to my tab bar.
Given a choice between local AI and running these remotely on a server, I would always choose the local one.
The question, of course, is how the “training data” is collected. Are they going to have people “opt in” (I mean, it would probably be some hidden text in a large acknowledgement dialog)? Are they going to “anonymize” the data (very difficult to do, given the nature of tabs)? Or are they just going to use an existing web corpus to build a similarity model? (The best option wrt. privacy, but quality might of course drop.)
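For what it’s worth, a crude tab-grouping “similarity model” doesn’t need any user browsing data at all; something like the toy sketch below (clustering tab titles locally with scikit-learn) already produces rough groups. I’m not claiming this is what Chrome actually does, it just shows the privacy-friendly end of the spectrum.

    # Toy illustration only, not Chrome's implementation: group open tabs
    # by title similarity, entirely on the local machine.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.cluster import KMeans

    tab_titles = [
        "Cheap flights to Lisbon", "Lisbon hotels - booking",
        "Rust borrow checker explained", "Flight comparison site",
        "Rust async tutorial", "Things to do in Lisbon",
    ]

    vectors = TfidfVectorizer().fit_transform(tab_titles)  # bag-of-words features per tab
    labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(vectors)

    for group in sorted(set(labels)):
        print(f"Group {group}:", [t for t, l in zip(tab_titles, labels) if l == group])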
As devices get better, we might see more of this moving local (like image processing on phones), but it will require some more push.
sukru,
Yes! I’m surprised to hear you say this. I thought you were more of an advocate for moving things to the data center. Regardless though, I agree with this completely, I strongly prefer local implementations to mitigate the privacy and control issues when running on someone else’s hardware.
Bad news for you…
https://www.androidcentral.com/phones/samsung-plans-to-charge-for-certain-galaxy-ai-features-after-two-years
I had the same gripe with Excel users being forced to run Python via a Microsoft service. I think we’re going to see more of this going forward, not necessarily because the software needs it, but because creating deeper dependencies gives providers much more control over us. Same reason car heaters are becoming subscription services. What a happy future to look forward to.
Alfman,
This is a bummer. I was recommending Samsung over Pixel, since they still support HDMI/DP output natively. It turns out Pixel has its own significant advantages as well:
https://store.google.com/intl/en/ideas/articles/google-tensor-pixel-smartphone/
The Tensor cores allow processing photos, voice, and other data locally. I think you can even disable cloud backups (photos.google.com), and there is a (mostly) offline album application: https://play.google.com/store/apps/details?id=com.google.android.apps.photosgo
I would say because this was practically not possible in the past. I am pro-AI, or at least pro the ethical uses of it, but until recently that required massive datacenter investments. Now we have powerful enough cores even on mobile devices (though mostly for inference only), hence there is no longer a need to share (most) information with the cloud.
(And this might not be entirely altruistic on their part. Running models locally means they don’t have to spend so much money powering all those servers, or rather they can give those servers other tasks.)