- cross-posted to:
- games@sh.itjust.works
- steam@lemmy.ml
Key points:
- new questions for devs submitting their games (about pre- and live-generated AI content, guardrails preventing generation of anything illegal)
- disclaimers on game’s store page
- new system for reporting illegal content straight from in-game overlay
I have a very strong gut feeling that some 98% of all games made “mostly with AI” will be shit, cheap shovelware.
So nothing will really change? Seriously though, I’m just glad this stuff will be properly marked so people can make an informed decision.
Isn’t that practically the same as all games in general?
Disappointing, but somehow inevitable.
"This will enable us to release the vast majority of games that use it."
So it sounds like the floodgates are opening and now it’ll be up to the users to sort out the flood of BS. None of this is truly surprising. While I’m not cynical enough to suggest their temporary stance was a quick way to score some easy points with the anti-AI crowd, we all kind of have to acknowledge that this technology is coming and Steam is too big to be left behind by it. It stands to reason.
I also understand the reasoning for splitting pre/live-generated AI content, but it’s all going to go in the same dumpster for me regardless.
I certainly think it’s possible to use pre-generated AI content in an ethical and reasonable way when you’re committed to having it reach a strong enough stylistic and artistic vision, with editors and artists doing sufficient passes over it. The thing is, the people already developing that way would continue to do so because of their own standards; they won’t be affected by this decision. The people wanting to use generative AI to pump out quick cash grabs are the ones that will latch onto it - I can’t think of any other base this really appeals to.
Valve seems to see the number of shovelware games released on Steam last year as rookie numbers
That’s a decent middle ground: the minority who take issue with it can avoid it, while devs get to use the latest tools.
I can think of one legitimate use: character portraits in RPGs. I strongly doubt that there are more.
Texture work. You can generate seamless repeating textures pretty quickly.
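Not anyone’s actual workflow, just an illustration of why diffusion models can produce tiling textures at all: the usual trick is switching the model’s convolution layers to circular (wrap-around) padding so the output behaves like a torus. numpy’s wrap padding shows the idea in miniature:

```python
import numpy as np

# Toy single-channel "texture": 3 rows x 4 columns of distinct values.
tex = np.arange(12, dtype=float).reshape(3, 4)

# "Circular" (wrap) padding: the right edge continues into the left edge
# and vice versa, so the image is treated as if it wraps around. This is
# the padding mode you switch a diffusion model's conv layers to when
# you want its output to tile seamlessly.
padded = np.pad(tex, pad_width=1, mode="wrap")

# The new left border is the texture's old right column (and vice versa),
# so any convolution sliding over `padded` never sees a seam.
assert np.array_equal(padded[1:-1, 0], tex[:, -1])
assert np.array_equal(padded[1:-1, -1], tex[:, 0])
```

In a PyTorch-based pipeline the same idea is applied by setting `padding_mode="circular"` on the UNet’s and VAE’s `Conv2d` layers; the exact way to hook that in varies by library and version.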
Dope. I can’t wait to have meaningful generative content in a game I play. Will it be perfect? Nah probably not for another 10 years to be honest. But it’s a start. It’ll be nice to not have to rely on a greedy publisher to make a sequel to a beloved game series especially when they’re taking in billions a year from microtransactions. Bring on the human-serving AI
Pretty cool. I almost had to start liking Epic Store for not having such a dumb stance. The disclaimer on games using generative content is weird, but it’s a solid step forward.
Is it really dumb?
AI-generated content has a lot of unanswered legal questions around it, which can lead to a lot of headaches with moderation and the possibility of illegal content showing up (remember that not only “well meaning” devs will use these tools). It seems reasonable for a company to try to minimize the risk.
As for the disclaimer, it will allow people to make an informed decision - not sure what’s wrong with that.
AI-generated content has a lot of unanswered legal questions around it, which can lead to a lot of headaches with moderation and the possibility of illegal content showing up (remember that not only “well meaning” devs will use these tools). It seems reasonable for a company to try to minimize the risk.
There were never any unanswered legal questions that would prevent you from using generated assets in a game. That’s why Valve’s old stance was so odd. I’m not sure what you mean by the possibility of illegal content, can you elaborate?
I’d like to mention that I’m not exactly up to date with AI related legislation so treat what I’m about to write as a genuine attempt to understand their worries rather than trying to be smart.
I remember there being a lot of uncertainty about what can and can’t legally be used to train models (especially models used for commercial purposes) - has that been settled in any way? I think there was also a case of AI-generated content being denied copyright due to a lack of human authorship (I’d have to look for an article on this one as it’s been a while) - this obviously won’t be a problem if generated assets are used as a base to be worked upon.
As for illegal content - Valve mentioned it in regards to live-generated stuff. I assume they’re worried about the possibility of plagiarism and of things going against their ToS, which is why they ask about the guardrails used in such systems. On a more general note, there have also been cases of AI-written articles inventing fake stories accusing real people of criminal behavior - this probably won’t be a problem with AI usage in games (I hope anyway), but it’s another sensitive topic devs using such tools have to keep in mind.
Again, I’m nowhere near knowledgeable enough to write this stuff from a position of confidence so feel free to correct me if any of this has been dealt with.
I remember there being a lot of uncertainty about what can and can’t legally be used to train models (especially models used for commercial purposes) - has that been settled in any way? I think there was also a case of AI-generated content being denied copyright due to a lack of human authorship (I’d have to look for an article on this one as it’s been a while) - this obviously won’t be a problem if generated assets are used as a base to be worked upon.
In the United States, the Authors Guild v. Google case established that Google’s use of copyrighted material in its books search constituted fair use. Most people agree this will apply to generative models as well since the nature of the use is highly transformative.
I recommend reading this article from April last year by Kit Walsh, a senior staff attorney at the EFF, if you haven’t already. The EFF is a digital rights group that recently won a historic case: border guards now need a warrant to search your phone.
Works involving the use of AI are copyrightable, but just like everything else, it depends. It’s also important to remember that Copyright Office guidance isn’t law. Their guidance reflects only the office’s interpretation based on its experience; it isn’t binding on the courts or other parties. Guidance from the office is not a substitute for legal advice, and it does not create any rights or obligations for anyone. They are the lowest rung on the ladder for deciding what the law means.
As for illegal content - Valve mentioned it in regards to live-generated stuff. I assume they’re worried about the possibility of plagiarism and of things going against their ToS, which is why they ask about the guardrails used in such systems. On a more general note, there have also been cases of AI-written articles inventing fake stories accusing real people of criminal behavior - this probably won’t be a problem with AI usage in games (I hope anyway), but it’s another sensitive topic devs using such tools have to keep in mind.
I agree live generated stuff could get developers in trouble. With pre-generated assets you can make sure ahead of time everything is above board, but that’s not really possible when you have users influencing what content appears in your game. If they were going to ban anything, the original ban should have been limited to just this.
Thanks for the links, that’s exactly why I wasn’t sure where things stand currently. While I am familiar with EFF, I wasn’t aware of that article so it was an interesting read.
The one I kind of remembered (even though only partially) was the Reuters article, which contains this quote I was referring to:
The office reiterated Wednesday that copyright protection depends on the amount of human creativity involved, and that the most popular AI systems likely do not create copyrightable work.
It’s obviously a bit more complicated than how I mentioned it initially so I’m glad I could read it again.
The original ban was always meant to be temporary as far as I understand; Valve simply wanted some time to decide rather than make a rash decision (it’s easier to open the floodgates later than to clean up after the fact). I’m sure things will change in the future as AI tools become more and more common anyway.
The one I kind of remembered (even though only partially) was the Reuters article, which contains this quote I was referring to:
The office reiterated Wednesday that copyright protection depends on the amount of human creativity involved, and that the most popular AI systems likely do not create copyrightable work.
This was likely in reference to Midjourney, which was the system in question in that ruling. Midjourney, even for its time, had very rudimentary user controls, well behind the open alternatives, which likely didn’t impress the registrar.
There’s also a spectrum of involvement depending on what tool you’re using. I know web-based interfaces don’t allow a lot of freedom, since providers want to keep users from generating things outside their terms of use, but with open-source models based on Stable Diffusion you can get a lot more involved and have a lot more freedom. We’re in a completely different world from March 2023 as far as generative tools go.
Take a look at the difference between a Midjourney prompt and a Stable Diffusion prompt.
a 80s hollywood sci-fi movie poster of a gigantic lemming attacking a city, with the title "Attack of the Lemmy!!" --ar 3:5 --v 6.0
sarasf, 1girl, solo, robe, long sleeves, white footwear, smile, wide sleeves, closed mouth, blush, looking at viewer, sitting, tree stump, forest, tree, sky, traditional media, 1990s \(style\), <lora:sarasf_V2-10:0.7>
Negative prompt: (worst quality, low quality:1.4), FastNegativeV2
Steps: 21, VAE: kl-f8-anime2.ckpt, Size: 512x768, Seed: 2303584416, Model: Based64mix-V3-Pruned, Version: v1.6.0, Sampler: DPM++ 2M Karras, VAE hash: df3c506e51, CFG scale: 6, Clip skip: 2, Model hash: 98a1428d4c, Hires steps: 16, "sarasf_V2-10: 1ca692d73fb1", Hires upscale: 2, Hires upscaler: 4x_foolhardy_Remacri, "FastNegativeV2: a7465e7cc2a2",
ADetailer model: face_yolov8n.pt, ADetailer version: 23.11.1, Denoising strength: 0.38, ADetailer mask blur: 4, ADetailer model 2nd: Eyes.pt, ADetailer confidence: 0.3, ADetailer dilate erode: 4, ADetailer mask blur 2nd: 4, ADetailer confidence 2nd: 0.3, ADetailer inpaint padding: 32, ADetailer dilate erode 2nd: 4, ADetailer denoising strength: 0.42, ADetailer inpaint only masked: True, ADetailer inpaint padding 2nd: 32, ADetailer denoising strength 2nd: 0.43, ADetailer inpaint only masked 2nd: True
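For reference, that settings blob is the comma-separated “Key: value” infotext that Automatic1111-style UIs attach to their output images. A naive parser sketch (a hypothetical helper, not part of any UI; it ignores quoted values containing commas, which the hash lists above would need a real parser for):

```python
def parse_infotext(line: str) -> dict:
    """Naively split an Automatic1111-style settings line into a dict.

    Assumes simple 'Key: value' pairs separated by commas. Quoted values
    that themselves contain commas are NOT handled here.
    """
    out = {}
    for chunk in line.split(","):
        if ":" in chunk:
            key, _, value = chunk.partition(":")
            out[key.strip()] = value.strip()
    return out

settings = parse_infotext("Steps: 21, Seed: 2303584416, CFG scale: 6, Clip skip: 2")
assert settings["Steps"] == "21"
assert settings["CFG scale"] == "6"
```

Handy if you want to reproduce someone’s generation settings from a shared image without retyping them by hand.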
To break down a bit of what’s going on, I’d like to explain some of the elements found here.

`sarasf` is the token for the LoRA of the character in this image, and `<lora:sarasf_V2-10:0.7>` is the character LoRA for Sarah from Shining Force II. LoRA are like supplementary models you use on top of a base model to capture a style or concept, like a patch. Some LoRA don’t have activation tokens, and some that have them can be used without their token to get different results.

The 0.7 in `<lora:sarasf_V2-10:0.7>` refers to the strength at which the weights from the LoRA are applied to the output. Lowering the number causes the concept to manifest more weakly in the output. You can blend styles this way with just the base model, or with multiple LoRA at the same time at different strengths. You can even take a monochrome LoRA and push the weight into the negative to get some crazy colors.

The Negative Prompt is where you include things you don’t want in your image. Here, `(worst quality, low quality:1.4)` has its attention set to 1.4; attention is sort of like weight, but for tokens. LoRA bring their own weights to add onto the model, whereas attention on tokens works completely inside the weights they’re given. In this negative prompt, `FastNegativeV2` is an embedding known as a Textual Inversion. It’s sort of like a crystallized collection of tokens that tells the model something precise you want without having to enter the tokens yourself or mess around with the attention manually. Embeddings you put in the negative prompt are known as negative embeddings.

In the next part, `Steps` is how many steps you want the model to take to resolve the starting noise into an image; more steps take longer. `VAE` is the name of the Variational Autoencoder used in this generation. The VAE is responsible for working with the weights to make each image unique; a mismatch of VAE and model can yield blurry and desaturated images, so some models opt to have their VAE baked in. `Size` is the dimensions in pixels the image will be generated at. `Seed` is the number representation of the starting noise for the image; you need this to be able to reproduce a specific image. `Model` is the name of the model used, and `Sampler` is the name of the algorithm that solves the noise into an image. There are a few different samplers, each with their own trade-offs for speed, quality, and memory usage. `CFG scale` is basically how closely you want the model to follow your prompt; some models can’t handle high CFG values and flip out, giving over-exposed or nonsense output. `Hires steps` is the number of steps to take on the second pass that upscales the output; this is necessary to get higher-resolution images without visual artifacts. `Hires upscaler` is the name of the model used during the upscaling step, and again there are a ton of those with their own trade-offs and use cases.

Everything after `ADetailer` is the parameters for ADetailer, an extension that does a post-processing pass to fix things like broken anatomy, faces, and hands. We’ll just leave it at that because I don’t feel like explaining all the different settings found there.

Damn, that’s a good chunk of info! Thanks for taking the time to go into details on how things work.
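One bit of math worth adding to the CFG part of that breakdown: classifier-free guidance just extrapolates between the model’s unconditioned and prompt-conditioned noise predictions at each denoising step. A toy numpy sketch with stand-in values (not a real model):

```python
import numpy as np

# Toy stand-ins for the model's two noise predictions at one denoising
# step: one conditioned on the prompt, one unconditioned (or conditioned
# on the negative prompt).
noise_uncond = np.array([0.1, 0.2, 0.3])
noise_cond = np.array([0.2, 0.1, 0.5])

def apply_cfg(uncond, cond, scale):
    """Classifier-free guidance: push the prediction toward the prompt.

    scale=1 keeps the conditioned prediction as-is; higher values
    extrapolate past it, which follows the prompt harder but can produce
    the over-exposed or nonsense output described above.
    """
    return uncond + scale * (cond - uncond)

# scale=1 reproduces the conditioned prediction exactly.
assert np.allclose(apply_cfg(noise_uncond, noise_cond, 1.0), noise_cond)

# scale=6 (like "CFG scale: 6" above) amplifies the prompt direction 6x.
guided = apply_cfg(noise_uncond, noise_cond, 6.0)
assert np.allclose(guided, noise_uncond + 6.0 * (noise_cond - noise_uncond))
```

This is why cranking CFG too high blows the image out: the extrapolation pushes the prediction well outside the range of values the model was trained to denoise.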
Is this your comfy workflow, or from someone else?