When you launch a game on a Snapdragon on a Windows laptop, you might get an AI frame rate boost from Microsoft’s mysterious Auto Super Resolution (Auto SR) feature. But while Microsoft hasn’t fully explained how the feature works, The Verge can now confirm it’s not Qualcomm technology, not exclusive to Qualcomm’s new Snapdragon X chips, and not exclusive to specific games, either.
Sean Hollister at The Verge
These resolution enhancer technologies from NVIDIA, AMD, and apparently Microsoft are another great use of what we today call “AI” technologies. Of course, I wish we didn’t have to deal with several proprietary offerings but instead enjoyed several open source versions and possibly a standard to work off of, but give it some time, and we may still get there.
Like I’ve said before – there’s nothing inherently wrong with “AI” technologies, as long as they’re used in ways that make sense, run locally, and most importantly, aren’t based on the wholesale theft of artists’ and programmers’ works. Unsurprisingly, the tech bros at companies like OpenAI don’t really understand the concept of “consent”, and until they do, their offerings should be deemed illegal.
Thom Holwerda,
This “wholesale theft of artists’ and programmers’ works” is debatable though. You could make the case that generated works should be released under licenses that are compatible with the source works. That’s logical. But using this “wholesale theft” argument for FOSS code seems like a stretch; derivative works are allowed, and they’re largely the entire point of FOSS.
I’m not really trying to rebut your opinion, but I am definitely trying to rebut your semantics. There may well be developers who want to ban AI training use cases, but those wishes are not compatible with BSD/GPL/MPL/etc. They would have to create a new license, say a “GPL-no-AI”, in order to add new restrictions on top of the GPL. But the license is very explicit that such restrictions are not allowed under the current GPL.
Beyond the derivative works that are already explicitly allowed, copyright law itself does not block new expressions of the same ideas. A new expression that is sufficiently transformative is considered, under copyright, a new work in its own right. IMHO today’s AIs easily pass this bar, at least by any standard that has been applied to humans. Maybe you’d like to apply a different, more stringent copyright standard to AI than to humans. But if so, I’d love to hear a strong case for doing that, because so far I haven’t heard anybody make one. In the absence of a reason to apply different standards to AI, IMHO it makes the most sense to apply the law blindfolded – that is, treating infringement cases without regard to whether the infringement was committed by an AI or by a human.
There could be patent, trademark, and trade secret violations at play too, but those are distinct from copyright.
You’re entirely wrong about the open source licenses here. Go actually read them, then come back here. Good, you remember that clause about not removing the copyright notice? Yep. Exactly. AI generated code doesn’t include it. Meaning it violated the license. And yep, it does so for things that pass the bar of having copyright. Remember copilot reproducing the entire fast inverse square root function including the famous WTF comment? Yep, there was no attribution either. Which makes it a copyright violation.
js,
I just read GPL 3 and it confirms what I’ve said. In fact, in another post I just quoted, in GNU’s own words, how commercial derivations are allowed and additional use-case restrictions are explicitly not allowed. If you are talking about another FOSS license, then be specific and we can go over it.
There is no reason AI cannot do that and in fact I’ve been careful about explicitly covering this issue when I talk about AI including in the post you just responded to. Here I did so in the very first paragraph:
> You could make the case that generated works should be generated under licenses that are compatible with the source works.
My comment wasn’t about copilot specifically, but a lot of people seem to want to imply that training AI equates to theft. Like I said, that is debatable, and honestly I don’t think you’d even be violating the spirit of the GPL by using FOSS code for AI.
GPL:
> You may convey verbatim copies of the Program’s source code as you receive it, in any medium, provided that you conspicuously and appropriately **publish on each copy an appropriate copyright notice**
MIT:
> The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.
Each open source license has something like this. Each LLM that generates code and is trained on open source code violates this right now.
And no, your comment did not mention “maybe AI can include the appropriate copyright notice”. And it can’t: it doesn’t know which parts it took from where, so it has no way to do so.
So no, the GPL, MIT and other licenses are certainly not in the spirit of “let’s train an AI on it”, because they put explicit, non-optional requirements on copyright notices that LLMs just ignore. And reproducing several hundred lines of code verbatim is certainly a copyright violation if the license is not being followed – which it is not when the proper copyright notice is missing, even under MIT.
js,
For the record, nothing I’ve said is in contradiction to those.
It was never my claim that it’s not possible to create an AI that infringes copyrights. In fact, I’ll go on the record as saying it’s absolutely possible to train an AI that violates copyrights. My point was that using FOSS sources for AI training does not inherently violate copyright. The suggestion that GPL code cannot be used to train AI because doing so violates the license does not hold water. There simply is no restriction on AI training, and so long as generated code is licensed under compatible terms (which I’ve said all along), there is no substance behind allegations that the AI training violates copyright.
I accept that some developers may not want their code to go towards any AI training, but we need to get real… GPL does not restrict it and in fact such a restriction would go against the GPL.
Your criticism seems to be with copilot specifically, which is fine, but as I already said, I’m not talking about copilot specifically; I’m saying that in principle FOSS code can be used to train an AI without violating the license.
Edit: I only meant to bold part of that.
Your understanding of copyright is flawed, both as it applies to FOSS and as it applies to AI. With open source, yes, the license often allows for derivative works which can be redistributed – that’s really the point. It requires that the license be present and that the work be clearly labeled. Some of that is just copyright law, not even the license. The problem with AI is that it’s trained on a mix of various things with varying licenses. The AI doesn’t retain any kind of paper trail from what it was trained on to what the derived work is based on, and mostly seems incapable of tracing that back even when asked directly. This is going to cause massive problems and lead to a mountain of new legislation and lawsuits, not in that order. All expressions that boil this down to something simple really miss the point of all of this.
For what it’s worth, I agree on balance that what AI is doing represents wholesale theft. I’m not sure that’s going to matter in the end. As I said, there will be new legislation, and since legislation is written by lobbyists and “requested” on the backs of very large checks (especially in the US), I have little hope that we are going to protect the craft of art making any more than we protected the craft of shoe making during industrialization. AI represents the end of art. Get used to it.
CaptainN-,
That’s not a fair criticism of what I am saying. I feel like people are eager to disagree without actually understanding what I keep saying…
What I am NOT SAYING is that copilot complies.
What I AM SAYING is that in principle AI can be trained in a way that complies.
I’m sorry, but people who don’t want their FOSS works to be used to train AI should not be using the GPL, since the GPL explicitly denies them any right to add AI restrictions. People are apparently finding this upsetting, but it’s the truth.
Edit: “the GPL explicitly denies them any right to add any restrictions”.
If we look at the history of open source, it almost by definition defeats the idea of a single standard, so I doubt it would offer anything better as “a standard” to work off of.
Given there are so many ways to describe and construct an image, can the solutions for enhancing proprietary streams ever be anything but also proprietary?
But both XeSS and DLSS run locally, so I’m not sure I understand your concerns about them. Yes, they’re proprietary, but both are much better than the open source FSR.
And open source FSR will be offered to everyone, so it could even be a default option.