It seems that if you want to steer clear of having Facebook use your Facebook, WhatsApp, Instagram, etc. data for machine learning training, you might want to consider moving to the European Union.
Meta has apparently paused plans to process mounds of user data to bring new AI experiences to Europe.
The decision comes after data regulators rebuffed the tech giant’s claims that it had “legitimate interests” in processing European Union- and European Economic Area (EEA)-based Facebook and Instagram users’ data—including personal posts and pictures—to train future AI tools.
Ashley Belanger
These are just the opening salvos of the legal war that’s brewing here, so who knows how it’s going to turn out. For now, though, European Union Facebook users are safe from Facebook’s machine learning training.
They are also safe from having locally relevant content.
I can’t share exact numbers, but personalized vs. non-personalized recommendations have a significant difference in quality that cannot be disregarded.
What will happen is that the rest of the world will get increasingly better content, while European Facebook stays mediocre.
Though, like many other things in life, this is a trade-off. You can choose more privacy, or you can choose more relevant interactions.
For me, I do draw a moral distinction between using “public data” to train AI versus “private data.” Data that’s already explicitly made public carries no privacy expectations, but private data should not be used without our explicit permission. I wish more countries (not just the EU) would take our data privacy more seriously. They should legally mandate that all corporate use of private data (as in 100%) be opt-in. I cannot stand that companies including Google, Microsoft, credit card companies, banks, stores, etc. take the position that they are entitled to use our private data by default, without any compensation to us or legal repercussions. It is ethically wrong of them to do so, and frankly the laws should give us explicit ownership of our own data by default. All corporate uses of our private data should be explicit, with permission (and no “fine print” bullshit either).
The last time I was on Facebook, it was just a cesspit of hate and misinformation. I just can’t imagine an AI trained on that. Garbage in, garbage out.
dexterous,
You’re not thinking about the same applications that Facebook and other companies are thinking of. They want to use AI to help them expose people to addictive content and sell more ads. That is their goal, and to that end I believe that AI training on private data is likely to be extremely effective!
With that said, I still have no appetite for corporations using private data to train AI for such purposes.
I had to go and actually look for that sort of rubbish, *but* I don’t use Facebook for news feeds or anything like that. What I mostly get is about a 3:1 ratio of adverts to friends’ posts. Most of the ads are for utter rubbish like grounding sheets or EMF-protection pendants or some other snake oil/pseudo-science. About 1/10th of the posts I get fed by “THE ALGORITHM” (all hail the mighty algorithm, may we be forever blessed by its insightful wisdom) appear to be trying to generate some reaction from me – that is, it’s some sort of antagonistic rubbish about how XYZ is destroying the world – for the most part I just walk on by.
“””For now, though, European Union Facebook users are safe from Facebook’s machine learning training.”””
We’re safe as far as we know. They still own the content, and an ‘AI’ database is very hard to check. Who knows what they’re doing in the Meta basements. I wouldn’t be surprised if there’s an “oops, we forgot to uncheck European servers during the AI training, sorry we got caught” coming soon.
I’m sorry about the formatting of my last reply. Is there a guide on how to do proper formatting in the replies?
The only thing I would like AI to train on is public court records, so that as soon as possible the professional-liar class that is lawyers can be made redundant, or at least minimized in terms of their disproportionate power in the system and the fees they can charge.
NaGERST,
That’s an interesting idea, and I follow the logic. However, courts are so notoriously inconsistent, biased, and unreliable that I think an AI trained on them could be severely afflicted by garbage in, garbage out. Still, I suppose these AI lawyers might not be any worse than the ones we already have; my point is they’d be just as scummy, because that’s what the system rewards. IMHO, a fundamentally good justice system would require a huge overhaul.
It’s pretty insane to want to train off Facebook data. What would you get at the end – a racist AI that can’t use full stops?