When you Google someone from within the EU, you no longer see what the search giant thinks is the most important and relevant information about an individual. You see the most important information the target of your search is not trying to hide.
Stark evidence of this fact, the result of a European court ruling that individuals had the right to remove material about themselves from search engine results, arrived in the Guardian’s inbox this morning, in the form of an automated notification that six Guardian articles have been scrubbed from search results.
And then the EU wonders why support for even more ‘Europe’ is at an all-time low.
LOL to those of you in the EU who wanted a nanny state. You made your bed, now it’s time to lie in it.
And LOL to the ‘censorship is bad, mmmkay?’ rhetoric by liberals. Apparently it’s only bad when you’re not on the receiving end:
http://wtvr.com/2014/07/01/kendall-jones-hunting-pictures
Just a big FAIL all the way around
Oh dear, I suppose in your head you’re a free man living alone on an island. You certainly can’t be living in the West (all nanny states without exception), and as for the rest…
Humans have been around for circa 2.5 million years. The ideas of individual rights and privacy are about 250 years old. Universal suffrage is about 100 years old. The ‘inalienable rights’ in the Declaration of Independence originally applied only to rich white men.
I like the fact that I can tell a private company to not index my name since its flawed algorithm can defame me.
So, yes, I feel no pity for Google on this, because it is a private company that should not be trusted for information about a particular individual; if someone wants that kind of information, they shouldn’t use a search engine.
Now, people are extrapolating this as if the information is going to be deleted from the Internet. It is not; it is still accessible, just not accessible via a private company that collects your data w/o you knowing it. The options are:
a) Rephrase the search query w/o using the name of the person.
b) Go directly to the website w/o using a program of a private company.
So, I’m glad that this new law exists.
The internet tends to route around blocks….
I don’t see something that benefits me and you as an “internet block”, sorry.
I think the point is that:
1. History has shown that measures like this quickly get abused for censorship.
2. There’s no technical difference between “right to be forgotten” and “censorship” so any technical measure which affects the viability of one will affect the viability of the other.
3. There’s a strong impetus on the part of organizations like the RIAA and MPAA to use censorship in exactly this vein.
4. A similar dynamic already played out when centrally-indexed P2P file-sharing gave way to P2P systems with no single point that could be sued or blocked out of existence.
(Admittedly, there is some noise in that last example since junk-spamming on things like Gnutella led to moderated BitTorrent and, while modern BitTorrent itself has no central point of failure, a decentralized mechanism for moderated indexes like The Pirate Bay is still in development.)
The point is, given the flood of DMCA takedown requests to sites like Google, I predict we’ll see increasing interest in decentralized search (in the vein of YaCy) which, by design, can’t be censored.
As a result, that will also render the “right to be forgotten” unenforceable.
(Plus, of course, don’t forget the Streisand Effect.)
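The dynamic described above, a decentralized index with no single operator who can be compelled to forget, can be sketched in a few lines. This is a toy model for illustration only (it is not YaCy’s actual protocol; the class and function names are made up): each peer holds its own partial index, and a querier merges whatever reachable peers return, so removing an entry from one peer changes nothing globally.

```python
# Toy sketch of federated (decentralized) keyword search.
# No single peer is authoritative, so no single takedown is global.
class Peer:
    def __init__(self, name):
        self.name = name
        self.index = {}  # keyword -> set of URLs this peer knows about

    def add(self, keyword, url):
        self.index.setdefault(keyword, set()).add(url)

    def search(self, keyword):
        return self.index.get(keyword, set())

def federated_search(peers, keyword):
    """Merge results from every reachable peer."""
    results = set()
    for peer in peers:
        results |= peer.search(keyword)
    return results

a, b, c = Peer("a"), Peer("b"), Peer("c")
for peer in (a, b, c):
    peer.add("example", "http://example.org/article")

# A takedown applied to one peer removes nothing for the network:
a.index["example"].discard("http://example.org/article")

# Peers b and c still serve the result.
print(federated_search([a, b, c], "example"))
```

The same property that defeats a DMCA-style takedown here would also defeat a “right to be forgotten” delisting, which is the point being made.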
You forgot to add one thing: the Google search algorithm is not open source. There is no way to know if it is tainted, no way to know if it is designed to favor some and screw others. Google’s goal is profit, and there is no way to audit it, because if I say right now something like:
“ssokolow is a pedophile”
the Google algorithm won’t know whether it is true or not; it will just index it. Who knows, maybe it is already indexed by the moment you read this, and when someone googles “ssokolow” the first occurrence could be “ssokolow is a pedophile”. So, this is why this law is necessary, despite all the misinformation being spread.
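The point that an indexer has no notion of truth can be shown with a minimal sketch (illustrative only, nothing like Google’s actual pipeline): a naive inverted index records which documents mention a name, and a false claim and a true one are indexed identically.

```python
# Minimal inverted index: maps each token to the documents containing it.
# It has no concept of whether a statement is true.
from collections import defaultdict

def build_index(documents):
    """documents: dict of doc_id -> text. Returns token -> set of doc_ids."""
    index = defaultdict(set)
    for doc_id, text in documents.items():
        for token in text.lower().split():
            index[token.strip(".,!?")].add(doc_id)
    return index

documents = {
    "comment-1": "ssokolow is a pedophile",           # a false claim
    "comment-2": "ssokolow writes helpful comments",  # a true claim
}

index = build_index(documents)

# Both documents come back for the same query; the index cannot
# distinguish the defamatory claim from the accurate one.
print(sorted(index["ssokolow"]))  # → ['comment-1', 'comment-2']
```

Whether the right response to that blindness is delisting, as the commenter argues, or liability for the original publisher, as others argue below, is exactly what the thread disputes.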
I understand that, but I think the “right to be forgotten” is the wrong tool for that job.
It’s like using a pipe wrench as a hammer.
It may not be perfect, but it’s better than not having it.
Yeap!
I suppose you are indexed and filtered out by now…
In the U.S., the defamer would be punished, and the company that maintains the automated algorithm that indexes the internet would not be held accountable for content other people produced. For once, I find the U.S. legal system much saner than its European counterpart. The art of law is about balancing the rights of the people it governs. While this law may make it easier for some people to move on from their past transgressions, it does so at the cost of drastically reducing other people’s ability to warn others about your misdeeds. This could be considered sane if the speech being suppressed were libelous, but the law applies even to facts that are available in public records.
That’s not the point. Sure, you can prosecute someone for defamation, but that doesn’t mean the search algorithm will display the part where your name is cleared; it may show only the part where you get accused and not the part where you get exonerated, because its algorithm may not classify that as “relevant” information, and there is no way to force Google to change it. That’s the goal of this law: to put the power in your hands, not Google’s.
Then the ruling should force the parties actually publishing the libelous content to update all of that content that continues to be publicly available with a warning that the content was proven to be false. Again, why are you blaming the indexing algorithm and not the parties creating and disseminating the libel? And that still doesn’t justify why the law should censor people from reporting content that IS true.
I don’t think a private company whose goal is profit should even show what I do, be it true or false, and I want the right not to be followed by Google.
You want to be able to force remove every opinion and fact about you.
And you don’t see the problem with that?
No, I want the search engine of a private company not to index my name, because its flawed algorithm doesn’t know me and can defame me.
You will never be 100% represented as you want unless you pay. Censoring one place only makes you feel better.
Not censoring, just preventing an untrusted site from indexing my name.
So…Europe doesn’t have libel laws? I’m confused. If I put up on a website that you’re a pedophile and I have no proof, you sue me and the courts take it down once they’ve determined I don’t know what I’m talking about. At least in the United States.
We have a process that places a burden of proof on you. With this, there isn’t even a burden of proof on the PUBLISHER. I can just randomly remove search results. You’ve created a process that is much more easily abused for censorship by not only people, but corporations and governments. I’m not extrapolating at all. This is already happening.
Most European countries have a law of libel, but it only applies to information that isn’t true. The law of privacy developed by the ECJ, in contrast, applies to the publication of information that is true. The information in the Guardian’s articles – the subject of Thom’s original comment – was all entirely true. The point of the ECJ’s ruling was that European law in some cases gives persons the right to ask Google to stop linking to information about them even if the information is true.
That said, Google has gone well beyond the judgment with its deletions. The judgment makes it very clear that there’s a balance to be struck between the freedom of the press and the rights of the person, which Google seems to have missed.
This has always been the way that indexing services work. When you went to the library with their card indexing services, it didn’t know if the indexed books and articles were correct or not, only that they mention certain topics. It has always been up to the reader to decide on the correctness and usefulness of any information presented to them.
The difference now, is that the EU is asking indexing services to decide on the usefulness of information, on the readers behalf. Another word for that is censorship.
Personally, I agree with the idea that people who made mistakes in their youth (for example) but then cleaned themselves up should not have their past negatively affect their present and future. But, ideally, that should come through forgiveness, not forgetfulness. Unfortunately, a lot of people don’t know how to forgive, which is why there is the ability for old crimes to be wiped from public criminal records. I understand that this ruling aims for a similar goal, but it misses, quite widely.
The right to be forgotten was designed to protect people during situations such as job applications where you don’t want an employer to find out about something embarrassing you did while you were drunk at some party 10 years ago which would be irrelevant but still very influential (even subconsciously).
While I agree with the idea behind this law, I think there should be an easier way to undo a link removal. Perhaps something like this:
1) person asks to remove a link from the search results
2) search engine agrees, removes the link from the search results but adds the link to a public database
3) people searching for that person will no longer see the link. However everyone can still query the urls in the database by regexp’s (so not by key words like a normal search engine) and see which links are removed. If they object, they can request an “undo” (while providing their contact details) after which the link is restored. The link remains visible until a court decides otherwise. The original requester is notified and if he truly believes that the page is no longer relevant nor in the public’s interest then he can take it up in court and prove his case.
This way there is a mechanism to remove embarrassing pictures/stories from people who once did something stupid in their life, and there is plenty of room to challenge unjustified requests.
It’s a bit similar to the DMCA actually, except less harsh and it favours common folk as no-one will bother going to court over low-profile cases which are by definition not relevant nor in the public’s interest. However, high profile cases such as mentioned by the Guardian will simply attract attention and its requester should lose in court.
The Google search algorithm should be open source, IMHO, so it can be audited.
As a huge fan of open source projects, I have to say that you are completely insane. Google’s search algorithm is a proprietary technology that took years of development. Do you believe that Microsoft and Apple should have to open their source as well so that we can inspect it for security flaws? Should Coca-Cola and KFC have to divulge their secret recipes so that we can see if it contains things that we might not be comfortable drinking? I really don’t understand where this mindset comes from.
Then how do you know they are not hiding information or manipulating it?
You can’t, and that is why this law is necessary.
And people tell me that I’m suspicious and paranoid about corporations.
My trust is based on facts, not popularity, and Google does not have my trust.
Given that you admit that you do not have access to the Google indexing algorithm, what “facts” form the basis of your trust?
I have my reasons not to trust it, and they have nothing to do with their unknown algorithm. The question is: why should I trust it?
What resources would Google devote to deliberately misleading the public about the details of some otherwise unimportant, random person whose name is entered into a search field?
Since it is a billion-dollar company, I’m sure it has enough resources.
Awesome nickname! (why don’t you post more?)
organgtool,
I can see your intent is to build up counter examples, but honestly this actually doesn’t sound like such a bad thing. Arguably if all users who bought software had an automatic legal right to inspect (not necessarily to redistribute) the source code, maybe we’d be better off today.
It’s not exactly a recipe per se, but in the US they’re already required to publish the ingredients for exactly that reason. We have a legal right to know what’s in our food.
I don’t agree with the OP that a private company’s service algorithms need to be published. However, they should be legally responsible for taking down any unlawful content their internal algorithms caused to get posted to their website. Google’s “black box” nature shouldn’t exempt it from the same responsibilities other web hosts are under.
Trust me, as a software developer, I would love to be able to view their source code and even tweak it for my own personal use, but it would be insane to enact a law that would force a company to do so and that is what Hiev’s post suggested.
Not to quibble, but you do not know the proportions. Similarly, many people know the ingredients of Google’s algorithm, but no sane entity would force them to divulge their trade secrets.
The law does not state that the content is illegal which is why the publisher of the content is allowed to continue making that content available. The law states that the content is allowed to stand but Google must remove their link to that content.
organgtool,
Not quite the same. Hiev’s post was about the source code behind google’s service, your post was about the software sold by MS/Apple, which gets installed/run by the user himself.
Why would it be insane to entitle software buyers to inspect the source to their software? If anything, it’s only “insane” relative to where we are now. As is often the case with these kinds of ideas, the problem isn’t that the idea is bad, but that it’s difficult to achieve it now that we are here. However, imagine that code inspection was an intrinsic right from the very beginning of the software industry… not only would there be no “shock” to this idea, but it would even be appalling to think of a world where users do not have the right to inspect the software they buy. In other words, not having a right to inspect the source code would be considered “insane”.
Still, since food producers are compelled to disclose ingredients for public interests, it seems to me that the case for citing it as an example in defense of secrecy is significantly weakened.
IMHO google’s “formula” is probably not nearly as important as the quantity of data and computational power that they are able to throw at scouring the web. This gives them a tremendous advantage over would-be competitors.
I don’t see why this distinction makes a difference in whether or not a company should be forced to give up their source code.
You’re right, this is a progressive idea that is ahead of its time. Right now, software is dominated by money and secrecy. Companies keep their source protected to ensure future profits that can be used for further development. As I mentioned before, I would love for the source to be available for commercial software, but I do not feel that it is right to compel a company to release its secrets. Instead, I would recommend convincing companies of the benefits of open source rather than compelling them to go open source. After all, you catch more flies with honey than with vinegar.
There’s a big difference between having a right to know what you’re putting in your body and having a right to a corporation’s secrets to determine if they’re living out your paranoid scenario.
Perhaps, but that doesn’t mean it’s alright to force them to divulge their secrets. Now I feel like I need to take a shower after vehemently defending corporate IP rights. I’m just looking at this with an empathetic eye: if I had started Google and someone wanted to force me to divulge my hard-earned secrets, thereby giving up my competitive edge, just because they had some paranoid delusions that my algorithm was out to smear their name, I would be pretty upset about it.
organgtool,
In this hypothetical world, the difference would be that when software is sold to run on your own machine, you’d have a right to inspect the code, but you wouldn’t automatically get access to code running at service providers. I guess you could try to make a case for that code as well, however that would be far more radical considering that anyone operating any kind of service would be subject to source code requests. The difference is huge, at least to me.
Well, I actually think the idea is coming in too late in the game rather than ahead of its time. The software industry has settled into norms that are very hard to change now. It would have been far easier to achieve at the beginning than at any conceivable point in the future. It’s pretty much the same problem we face with software patents.
That was my point, that the example seemed a bit odd.
I wasn’t the one arguing that google should be forced to divulge their secrets. However I’d like to make the point that even if you or I had google’s code, we lack the global facilities and services that allow google to data mine everything – that’s what makes google special.
Food technologists have been able to make copycat products for Coke and KFC that are indistinguishable from the ‘real’ product for decades. The so called ‘secret’ recipes are just marketing gimmicks
> Do you believe that Microsoft and Apple should have to open their source as well so that we can inspect it for security flaws?
Yes. Operating systems and their bundled applications and hardware drivers should always be open source.
This is an unrealistic view that causes more harm than good towards the public image of open source.
Yes, I do believe Google, Microsoft, Apple, and all companies that affect the world should be open and revealed to the people they are affecting. Don’t you want to know how you are being gamed?
Yes, I do believe Coca-Cola’s and KFC’s ingredients should be known to those who eat them. Don’t you want to know what you are eating?
Is it really that hard for you to understand? :/
Why can’t you live with your flaws? You must have done nothing with your life for the past 10 years if a single time of stupid behavior defines your impact on the world.
Sometimes you simply want to close a chapter in your past. Things you did or things that happened to you and don’t want to have it follow/burden you all your life.
While I’m personally not affected, I can easily imagine it as a way of giving people a second chance or simply a chance to move on – on the condition that it’s no longer relevant.
A similar question can be asked about the main purpose of prisons: punishment or reintegration? Do you care more about the fact that something happened or about turning someone into a productive member of society again?
In Belgium, criminal records are private and employers can not look into them when hiring someone (though certain sectors can request a “declaration of good behaviour” from the DOJ which will determine if a candidate is trustworthy for that specific function). It’s built on the same ideology behind the right to be forgotten: people change.
– A politician is embarrassed by a dick pic; whoosh, erased from history, never happened
– A guy working for a non-profit steals all the money; whoosh, erased from history, never happened
– An employee keeps stealing; whoosh, erased from history, never happened
– A critical news item on a terrorist; whoosh, erased from history, never happened
Every piece of information that offends somebody in the world; whoosh, erased from history, never happened
You are extrapolating. It is not deleted from history; it is still there, just not reachable from the search engine of a private company whose algorithm may be flawed and defame you.
How does deleting it from a search engine make sure that nobody finds out about it? Is the purpose to make it difficult for normal people to find out about stuff? Employers can still find out without that search engine. It just costs them slightly more money.
It is not about deleting it from a search engine; it is just about not indexing it, because the search engine may surface information that is not relevant, and it doesn’t know whether the information is true or false and may defame me.
That was the first sentence. There was more which explained it.
It is not the job of a search engine to find the truth.
Then that is all the more reason not to let them index my name.
From your examples, it is clear you do not understand the concept of relevance and public interest.
And I agree that abuse is possible, which is why I suggested a solution to minimize that abuse, rather than throwing it all away. If you’re scared of anything that can be abused, you will never make progress as literally everything is open to abuse – including your so valued free speech.
The great enemy of free speech is not censorship, it’s an overload of information to the point where you no longer know what is true and what is not. Just have a look at all the spin and shills from companies and governments. They finally figured out how to shut people up: use free speech against them and drown them in noise.
So you agree that removing stuff from a search engine is stupid.
I’m not. It’s still censorship, no matter how you phrase it and, as things go, it *will* be abused. It’s not a question of “if,” it’s just a question of “when.”
Could privacy then be considered censorship?
Privacy would be about not obtaining personally-identifiable information in the first place, censorship is about denying access to information that is already out there.
Yes, but if something that was originally private was somehow made public against the owner’s wishes, are they enforcing censorship if they get it removed from the public? Is that denying access to information that is already out there?
What I’m getting at is I think we need to be more precise about where privacy ends and censorship begins and vice versa. Saying every instance of removing oneself from search is censorship is too broad, as is saying everyone should have a right to absolute privacy is also too broad. I think public interest should be one factor in this.
I’m still going to say “yes” to both. Wanting something to be private or not should not play any role in it after the information is already made public. I mean, of course you’d want to keep it all private and removed from search-engines if e.g. someone hacked your computer and revealed evidence of kiddie-porn or you murdering your wife. If we start censoring things based on “was it private before?” and “does the person(s) involved want it private again?” we’re doing the public at large a major disservice.
But those are not the criteria used in this law. It is: “is it still relevant and in the public’s interest?”
* An old fraud case from someone who is in a position of power to commit that same fraud again is relevant. If it is a public figure, then it’s even in the public’s interest.
* An old trauma that happened to you is not relevant and could be unindexed if you wish so. Note that that does not mean it completely disappears, but people who search you are just less likely to find it and hence you’re less likely to have to deal with it again. It’s a way of moving on by reducing the impact of the past.
Everyone here seems to be focused on big criminals, not on victims and people who made mistakes. And yes, your right on free speech ends where my privacy begins. As kwan_e said, the difficulty lies in defining where one starts and the other ends.
You didn’t get to the part where I said “what I’m getting at”.
The “was it private before” is pointing out the problems of defining something as public just because it was out there for even one second. I even stated in my second paragraph that public interest on both sides of the issue should be a factor, as opposed to having one all-encompassing principle* that everything public must forever remain public.
Do we get rid of search and seizure laws just because it MAY do the public a major service?
Or in your kiddie porn example, since those things are “already out there”, do we argue they must continue to be out there in perpetuity? Surely your reasoning must dictate the removal of kiddie porn from the internet constitutes censorship, which is supposedly absolutely bad in your view.
* Rigid definitions rooted in rigid principles don’t ever work. It doesn’t even work when you write software, so why would it work in something much more complex like a society?
Have enough data on any issue and you can infer the who, when, and where. In my view, privacy is still an unsolved problem. And yes, solving it will involve filtering things out.
I think a lot of people are misunderstanding how this thing works, at least when it comes to Google. They aren’t forcing the original information owner (the source website) to remove the information, they are forcing Google to stop indexing it. The information is still there, you just can’t find it via Google.
I do understand that that is a really big deal, since Google’s search engine is by far the most popular and (some say) the most thorough. But if the source of the information wants to keep it online, so far they haven’t been told not to. I’m sure it will eventually come to that under this law, but the current debate over Google’s indexing seems a bit off track.
The only reason these laws were introduced was to allow the rich and famous (mainly French politicians) to hide their real crimes and misdeeds. The laws are not designed to protect the general public from accidental defamation.
I’m sure you’ll be just as happy when you find out that a convicted pedophile is working at your kid’s kindergarten, and that said rules prevented the kindergarten administration from looking his name up on Google.
In case you wonder, something very close to this happened a couple of months ago, close to where I live. (The employee got thrown out when his name came up in a random Google search.)
Yet another case in which the common interest was thrown out of the window in favor of a twisted view of personal freedoms.
Glad I’m not living in Europe.
– Gilboa
I trust the kindergarten to run a background check when they hire their staff, and I mean a real background check, not a lookup on a search engine with a flawed algorithm.
At least here in Israel (and, as far as I know, in the U.S.), it is fairly close to illegal to ask an employee if he/she has a criminal record and/or was ever indicted, unless you have a specific and well-defined reason to do so. A dumb question might get you sued.
Companies use Google specifically to avoid asking tough questions during interviews.
– Gilboa
In England, employers can check the criminal record of anyone to whom they have made a job offer by running what is called a DBS check (formerly a CRB check). Enhanced checks are run for people who will work with children or as carers for adults. It is mandatory to run these checks before employing someone to work in a nursery or a school. There are slightly different procedures in Scotland and Northern Ireland, but the substance is the same. Nobody should be relying on a search engine.
If you take the “right to be forgotten” to its logical conclusion, not only would you force search engines to remove you from the index, but you might also force news sites to remove articles, or even small-time bloggers.
If you took “right to be forgotten” to its logical conclusion, then the government would modify your brain chemistry to make you forget about particular events. Before you write this off as completely insane, there have already been numerous programs dedicated to researching how to do just that to relieve PTSD.
Mental health professionals use cognitive therapies to gradually desensitise PTSD sufferers. However they do not want patients to forget the past. They want patients to take control of their memories to make them less traumatic.
If the EU economy wasn’t teetering, this would be more acceptable.
We seem to have had a few articles recently where your comments have shifted from “Is this a good or bad thing?” to “This is bad.”
Although I often agree with your sentiments, OSAlert is not a blog site for personal rants; it’s a place for discussion of (tech-centric) news among peers. If you frame the question in absolutes, I feel it diminishes that.
Probably if the site had more writers/submitters the news would be more balanced. I feel the rant is aligned enough with my view, and tech-centric enough, for me to keep following the news on this site.
Back to the subject: this is how a perfectly valid law, which I believed was also supposed to apply to banks/law enforcement/others that keep records on any citizen, gets perverted.
As far as I know, even in France there are some rotten politicians trying to use this right before their case has even been heard in court.
Is there any reason why you, specifically, get to decide what OSAlert is and is not?
Just curious.
No, but he is correct in that it diminishes the discussion as it sets a tone to the debate. In my opinion, the discussions are the only interesting part about OSAlert and using sensational triggers in a summary will skew the discussion towards an emotional fight rather than a reasonable debate. It’s why I often click away from OSAlert lately as I’m not interested in tabloids.
I’d personally prefer it if the summary were more neutral and any short personal statements by the editors were added as a comment so that they can be properly discussed and/or rated – separately from the actual article.
But like I said, it’s just my opinion. Do with it as you please, I’ll do the same, and life goes on.
You are, of course, the editor of OSAlert and as such dictate its direction.
I have been regularly returning for almost a decade now. Why do I keep coming back? Because, on the whole, you get a balanced, informed debate on a variety of tech-related topics. This debate comes from its users.
In the case of a couple of recent articles, the “story comments” haven’t helped foster that debate. Any debate needs two sides to form an argument. Without that, I think the site loses something.
That’s really what this is. Do search engines have a right to link to content that others find objectionable? Does your right to privacy extend to simply making less available information about you that is already public? That some “privacy advocates” think this a good law just proves the libertarians’ point more and more. Intentions mean nothing because stupid people write laws. Just stop trying to regulate the internet. Just… Stop…
Do I have the right not to be defamed by a private company whose profit comes from the data it gets from me w/o my knowledge? Yes.
Interesting.
Instead of going after the original publisher of the bad information, go after the library that indexes it.
Sounds reasonable to me. Not.
If there’s something out there about you that you object to, why do you not go after the publisher of the information?
The search engine did not publish the information, they just make it easy to find the information.
Big difference in my eyes.
But it doesn’t verify whether the published information is true or not and shows it anyway, and a flawed algorithm may not show the relevant information. So, if the search engine is going to show twisted information about me, then I prefer that it not publish anything at all.
I don’t think you got the point there. Why should the search engine have to verify that everything it lists is true? It’s not publishing the information, just finding it, so your problem is with who put the information in a publicly accessible place. If Google can find it, someone else can too. It just might take a bit longer.
Google may find the information, but what if it doesn’t find all the information? What if the information it shows is incomplete because of a flawed search algorithm? What if that incomplete information defames me? Then I should have the right to tell Google to remove the incomplete, and possibly distorted, information it is showing about me.
My concern with the implementation of this law isn’t so much the ability to delete your information; it’s more about how history is recorded.
You can’t keep accurate records if you redact the bits you don’t want/like. Where I do sympathise with it is where a teenager makes a “socially dubious” comment (like teenagers do) and then has to live with that comment affecting their life and employment decades later.
I’ve got news for you: the information is not being deleted, it’s just not reachable via the search engine of a private company. Why can’t people get this?
Yes it is. Read the ruling again.
The legal precedent isn’t unique to search engines; it applies to all companies.
I could just as easily get all my information removed from Amazon under the same precedent.
Edit: A friend of mine recently submitted one to Amazon. She had reviewed, under her actual name, a book about “how to cope with depression”. She feels that this has affected her job-seeking, as it comes up in a Google search of her name.
And why should a private company like Amazon give out information about me without my permission?
As with all these online companies, once it was online, the comment “belonged” to them.
Hold on… If you write a comment, review or whatever on a public forum under your own name, what do you expect Amazon to do? Write to you and ask if it’s ok to post your message publicly? Fair enough if you later don’t want it to be public, then request its removal, but if you didn’t want Amazon to display the public message you posted on their website, don’t post it. They can’t give out your information if they don’t have it. Saying something in public is public by its very nature – permission doesn’t even come into it.
When I write something in a public forum, I agree to the rules. If the rules say it can be indexed by Amazon, then that’s OK, because I agreed to that. But if I’m afraid that what I write in a forum may be misinterpreted because it relates to other information Amazon won’t index, then I can tell Amazon not to index anything at all, since it won’t be showing the accurate picture.
THERE IS NO EU LAW HERE!!!!
For the love of God. This has NOTHING to do with EU lawmaking.
The only way this ties into the EU in ANY way is that an EU court _declined_ to overrule existing national law (in this particular case, existing Spanish law). The source of the issue was a Spanish law, and all that was ever established was that existing bad laws also apply on the internet.
Legal precedent, which under UK common law is LAW.
Good thing the decision was not made by a UK court then.
That isn’t true. The ECJ expressly issued an interpretation of the 1995 Data Protection Directive and on the rights of individuals as well as the obligations of a search engine under that directive. That is very much a question of EU law.
You may find the Commission’s fact sheet on the case useful:
http://ec.europa.eu/justice/data-protection/files/factsheets/factsh…
Please remove me from the Internets. Thanx!
Just in the other thread about Google wanting medical data to be mined we got all these doomsday predictions about the loss of privacy.
Then here, privacy conveniently disappears by the “think of the children” argument.
Tell me, if medical data suddenly got leaked, should there not be a legal avenue to have that data removed from search engines?
Funny how people choose the examples that best play to their fears and then ignore it when a different one appears. It’s like we can never try to imagine what a possible resolution between two opposing forces could be. It’s always one or the other.
Programming languages have conditional statements, but apparently we’re too stupid to consider conditional/circumstantial treatment of issues when it comes to real world stuff.
Hah! Glad someone else remembered that. I guess privacy is only a good thing as long as it doesn’t make it harder for *me* to find what I want…
From the commission website:
http://ec.europa.eu/justice/newsroom/data-protection/news/140602_en…
“The Court also made clear that journalistic work must not be touched; it is to be protected.”
It seems to me that google is misinterpreting the court decision on purpose, letting the EU take the blame and hoping the backlash will change the legal situation.
The “right to be forgotten” was originally a law under discussion that would require social services such as Facebook to delete your data if you requested it. It was never supposed to affect any other services. It was never made into law and has nothing to do with what is now mislabeled the “right to be forgotten” by the anti-privacy media.
The new case has NOTHING, absolutely NOTHING, to do with EU lawmaking. This is where you are being trolled, Thom. All that was established is that existing NATIONAL censorship laws also apply to the Internet. You don’t get an exemption from bad laws just because you are Google. So please turn your anger to where it belongs: the countless NATIONAL laws that restrict what is allowed to be indexed or reprinted and what isn’t.
And Thom, I expected better from you. Please don’t bite on manufactured stories from the anti-EU and anti-privacy lobbies.