By: bert64
Don't know about anyone else, but I find the attached bottle caps extremely annoying.
They get in the way when you're trying to pour: pour too quickly and the stream hits the underside of the cap and gets deflected everywhere, making quite a mess.
They are even worse if you're trying to drink from the bottle because the cap gets in your way even more.
The politicians who came up with this probably never open their own bottles; they don't care because they have waiters to open the bottles for them, pour the contents into a glass, and serve it up on a silver platter. For everyone else it's an inconvenience.
By: rklrkl
The attached plastic screw cap situation is weird here in the UK - we're not part of the EU, but some manufacturers have switched to the attached caps (they may be importing from or exporting to the EU), while others haven't.
From an ergonomic point of view, I find the attached caps quite cumbersome myself. Firstly, if the bottle is small enough (e.g. 500ml or less) to be drunk from directly rather than pouring the contents into a glass, the attached cap can get in the way and smack you in the face! Secondly, I found the attached cap noticeably harder to screw back onto the bottle compared to unattached caps. My easiest solution ruins the whole endeavour - I snap the cap attachment off each time!
By: yoshi314@gmail.com
In reply to <a href="https://www.osnews.com/story/140053/apple-first-company-to-be-found-violating-dma/#comment-10441024">M.Onty</a>.
You must be new here. Thom is like that on certain topics; I just learned to filter it out.
By: Alfman
In reply to <a href="https://www.osnews.com/story/140053/apple-first-company-to-be-found-violating-dma/#comment-10441067">flypig</a>.
flypig,
<blockquote>That doesn’t tally with my naive understanding of this, which is that the substantiality argument applies in relation to the original work, not the derived work.</blockquote>
Here are the actual fair use guidelines published by the government.
https://www.copyright.gov/fair-use/
<blockquote>
Amount and substantiality of the portion used in relation to the copyrighted work as a whole: Under this factor, courts look at both the quantity and quality of the copyrighted material that was used. If the use includes a large portion of the copyrighted work, fair use is less likely to be found; if the use employs only a small amount of copyrighted material, fair use is more likely. That said, some courts have found use of an entire work to be fair under certain circumstances. And in other contexts, using even a small amount of a copyrighted work was determined not to be fair because the selection was an important part—or the “heart”—of the work.
</blockquote>
For standing, think of it as a set operation: the original work represents one set, and the derived work represents another. The intersection of the two sets is all that matters for copyright infringement; logically, the suing party doesn't have standing to sue over work outside of their set, because they don't own it.
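As a toy sketch of that set framing in Python (purely illustrative, not a legal test; the passage names are made up):

```python
# Model each work as a set of passages. Only material present in BOTH
# the original and the derived work can even be a candidate for an
# infringement claim; the plaintiff has no claim over the rest.
original = {"passage_a", "passage_b", "passage_c"}
derived = {"passage_c", "passage_x", "passage_y"}

overlap = original & derived   # shared material: candidate for a claim
outside = derived - original   # material the plaintiff doesn't own

print(overlap)  # {'passage_c'}
print(outside)
```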
<blockquote>To take the example of Shakespeare again, it would be hard to argue that the entire works of Shakespeare don’t represent “the heart of the matter” in relation to Shakespeare’s works.</blockquote>
This relates to my second point. The portions of subject matter that do overlap have to be substantially similar. Copyright precedent is to require similarity for infringement; otherwise, generalizations do not infringe. Assuming someone took the original copyrighted work and then generalized it in their own words, it would be fair use. It turns out that LLMs are extremely good at this. To declare that generalizations infringe would require a fundamental expansion of copyright law to protect not only similar copies, but generalizations too.
<blockquote>A key question for AI and copyright is whether the act of copying happens when the data is fed in to the training mechanism, or whether it happens when the AI generates some output. I get the feeling this underpins many of the differences in opinion around it.</blockquote>
We actually covered this before, but it's getting hard to keep track of, haha. A lot of technology creates transient copies of copyrighted works without permission, including DVD players, computers, web browsers, cable boxes, Rokus, and so on. As far as I know these copies have always been allowed as long as the final form using the intermediate copies does not infringe. An LLM that does not infringe (because it is not substantially similar) seems to be compliant with traditional copyright.
Of course I have to accept that some people want to change copyright laws in order to ban AI applications, which would change everything. But if we apply the same copyright standard to humans and AI, then I think LLMs should be allowed when we're both creating generalizations of an original work.
By: flypig
In reply to <a href="https://www.osnews.com/story/140053/apple-first-company-to-be-found-violating-dma/#comment-10441042">Alfman</a>.
<blockquote>I don’t think we could find any authors that would have a serious legal standing under #3 because the “substantiality” of their contribution to the NN would be absolutely minuscule, to the point of nearly non-existent.</blockquote>
That doesn't tally with my naive understanding of this, which is that the substantiality argument applies in relation to the original work, not the derived work. To take the example of Shakespeare again, it would be hard to argue that the entire works of Shakespeare don't represent "the heart of the matter" in relation to Shakespeare's works.
A key question for AI and copyright is whether the act of copying happens when the data is fed in to the training mechanism, or whether it happens when the AI generates some output. I get the feeling this underpins many of the differences in opinion around it.
By: torb
I wonder if the EU tech sector being somewhat weak in consumer tech is a blessing in disguise; how willing would the EU be to regulate if more of its own companies were on the docket?
By: Alfman
In reply to <a href="https://www.osnews.com/story/140053/apple-first-company-to-be-found-violating-dma/#comment-10441036">Book Squirrel</a>.
Book Squirrel,
<blockquote>1. the purpose and character of the use, including whether such use is of a commercial nature or is for nonprofit educational purposes;
...
3. the amount and substantiality of the portion used in relation to the copyrighted work as a whole; and
...
At a glance, it would seem to me that using the entire internet to train generative models for commercial use would fall afoul of both (1) and (3).
</blockquote>
#1 obviously depends on how the AI is used. AI content might be commercial, but it doesn't necessarily have to be. For example it could be used to create FOSS.
I don't think we could find any authors that would have a serious legal standing under #3 because the "substantiality" of their contribution to the NN would be absolutely minuscule, to the point of nearly non-existent.
I know Shakespeare's works are no longer under copyright, but I'm just using them as an illustration here because of his many well-known works. The collective works of Shakespeare total 884k words. ChatGPT was trained on a 300 billion word data set.
https://analyticsindiamag.com/behind-chatgpts-wisdom-300-bn-words-570-gb-data/
https://www.opensourceshakespeare.org/views/plays/plays_numwords.php
1) Shakespeare's 43 collective works, even combined, would only make up roughly 0.0003% of the training set, which is hardly substantial.
2) When humans generalize a copyrighted work <i>in our own words</i>, this is allowed and not considered substantially similar. How similar an AI's output is, versus a generalization, naturally depends on training. To the extent that the source gets incorporated into the NN as general knowledge about the source rather than verbatim copies of it, then to be consistent, copyright should not consider an AI work substantially similar if the NN only contains generalized knowledge of the source and not verbatim copies of it. Calling AI generalization "substantially similar" would be inconsistent with the traditional application of copyright.
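The arithmetic in point 1 can be sanity-checked in a couple of lines of Python, taking the figures cited in the links above (884k words for Shakespeare, 300 billion for the training set) at face value:

```python
# Back-of-the-envelope: Shakespeare's total word count as a share of
# ChatGPT's reported 300-billion-word training set.
shakespeare_words = 884_000
training_set_words = 300_000_000_000

share_pct = shakespeare_words / training_set_words * 100
print(f"{share_pct:.6f}%")  # ~0.000295% of the training set
```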
Some people have noted that ChatGPT will sometimes output copyrighted text verbatim, and just so we're clear about this: I do consider that blatant infringement. It's not infringement because it's AI, though; it's infringement because it output the original copyrighted text. I feel this is the most obvious and straightforward way to apply copyright to AI: the exact same way we'd apply it to a human, with the fact that it was generated by AI being completely irrelevant to the finding of infringement.
<blockquote>Buuuut, I guess that’s up to the courts to interpret?</blockquote>
Of course.
By: Book Squirrel
In reply to <a href="https://www.osnews.com/story/140053/apple-first-company-to-be-found-violating-dma/#comment-10441026">Alfman</a>.
Fair use is specific to the US and not really a thing within the EU. EU countries get to put limitations on copyright which can emulate some of the effects of fair use protections, but it's not quite the same.
I work at a public library for example, and public libraries have a specific exception to copyright laws that says we get to make reproductions for non-commercial use.
I do think copyright laws are often a bit too restrictive here, but I don't really have a strong opinion on whether the model itself is better. One can argue that fair use protections are too fuzzy and unclear, and it can be difficult to know if you fall within them until you've been taken to court over it.
With regards to the use of copyrighted data for training generative models though, I wouldn't be so sure that even fair use applies. Wikipedia tells me the following about what should be considered in a fair use case:
<blockquote>1. the purpose and character of the use, including whether such use is of a commercial nature or is for nonprofit educational purposes;
2. the nature of the copyrighted work;
3. the amount and substantiality of the portion used in relation to the copyrighted work as a whole; and
4. the effect of the use upon the potential market for or value of the copyrighted work.</blockquote>
At a glance, it would seem to me that using the entire internet to train generative models for commercial use would fall afoul of both (1) and (3). Buuuut, I guess that's up to the courts to interpret?
By: Geck
It was rather naive of Apple to decide against supporting user privacy in the EU by withholding the introduction of new features from EU Apple users. I know the Apple user base is one of the worst when it comes to defending Apple, but here Apple has already lost, and no amount of PR spin can change that. The only question is why it took regulators so long, and why Apple was able to abuse the market for decades before finally being stopped.
By: Alfman
Thom Holwerda,
<blockquote>Apple is in this mess and facing insane fines as high as 10% of their worldwide turnover because spoiled, rich, privileged brats like Tim Cook are not used to anyone ever saying “no”. Silicon Valley has shown, time and time again, from massive data collection for advertising purposes to scraping the entire web for machine learning, that they simply do not understand consent. Now that there’s finally someone big, strong, and powerful enough to not take Silicon Valley’s bullshit, they start throwing temper tantrums like toddlers.</blockquote>
Apple needs to comply with the local laws; they've gotten away with too much for too long. Full agreement there.
However, I'm not comfortable expanding the scope of copyright protection for AI training. AI companies should be covered by the same fair use rights that apply to everyone. If we start allowing publishers to nix fair use rights they don't like, there would be unsettling side effects for everyone, including OSNews, which regularly reproduces others' works in editorials without obtaining consent.
It's easy to say "I only want to restrict AI", but the technicalities get really murky, starting with the fact that copyright law has never needed to define AI. And it's really unintuitive to claim that AI works infringe even when the similarities are minuscule, or not recognizable at all, without fundamentally expanding the scope of copyright protection. Traditionally, a publisher could show the original alongside the copy, and boom, infringement becomes more or less self-evident. How does a publisher claim to have standing when the new works that are allegedly infringing do not resemble the originals at all? I fear those copyright cases could become absurd. If the answer is to lower the bar for infringement claims, then how do you stop publishers from abusing that and going after everyone? If you're on the defending side, how do you prove you didn't use an AI?
IMHO the KISS principle favors applying copyright laws consistently to AI and humans, not expanding copyright protection against AI. Publishers can seek compensation for any instance where the AI is found to be infringing an original work, as that is a legitimate copyright claim. But in the absence of any such finding, the mere existence of AI should not constitute copyright infringement.
By: M.Onty
Is it me or are these articles all coming with a bit more heat of late?
I don't like Apple very much, I'm glad they're getting taken down a peg and I like the bottle-cap analogy.
But still.
> "spoiled, rich, privileged brats like Tim Cook"
Certainly he's rich, but I just looked him up and he comes from a working class family. For what that's worth.
Anyway, if that's the tone this site's taking then so be it. It's not like I'll pretend I won't be coming back because of it, or anything immature like that. It just seems unnecessary.
I'm still glad there are more articles, regardless of the tone.
By: Titanius Anglesmith
In reply to <a href="https://www.osnews.com/story/140053/apple-first-company-to-be-found-violating-dma/#comment-10441020">Titanius Anglesmith</a>.
The bottle cap analogy is perfect, btw.
By: Titanius Anglesmith
Apple is acting like this ultimatum is a negotiation. Their response has been like a child throwing a hissy fit and their yes men are falling over themselves defending Apple and denouncing the EU. I hope they get fined into oblivion for every day they continue this malicious compliance act.
The DMA and similar international laws are coming to the US too, and Apple knows it; it's only a matter of time before these rules become the default. I'd like the EU to challenge Apple even further: either comply or leave the EU market. And I'm saying that as a long-time iPhone user.