Yes, no, you said that. But since that is a meaningless statement I was expecting some clarification.
But nope, apparently we have now established that a device existing uses up more power than that device not existing.
Which is… accurate, I suppose, but also true of everything. Turns out, televisions? Also consume less power if they don’t exist. Refrigerators. Washing machines? Lots less power by not existing.
So I suppose you’re advocating a return-to-monke situation, but since I do appreciate having a telephone (which would, in fact, save power by not existing), we’re going to have to agree to disagree.
LLMs’ major use is mimicking human beings at the cost of incredible amounts of electricity. Last I checked, we have plenty of human beings, and we will all die if our power consumption keeps going up, so it’s absolutely not worth it. Comparing it to literally any useful technology is disingenuous.
And don’t go spouting some bullshit about it getting better over time, because the datacenters aren’t being built in the hypothetical future when it is better; they’re being built NOW.
Look, I can suggest you start this thread over and read it from the top, because the ways this doesn’t make much sense have been thoroughly explained.
Because this is a long one, and if you were going to do that you would have already, I’ll at least summarize the headlines: LLMs exist whether you like them or not. They can be quantized down to more reasonable power usage and are already running well locally on laptops and tablets, burning just a few watts for just a few seconds (NOW, as you put it); there’s a quick sketch of what that looks like below. They are just one application of ML tech, and they are not useless at all (fuzzy searches with few specific parameters, accessibility features, context-rich explanations of out-of-context images or text), even if their valid uses are misrepresented by both advocates and detractors. They are far from the only commonplace computing task that now uses a lot more power than the equivalent did a few years ago, which is a larger issue than just the popularity of ML apps. And granting that LLMs will exist in any case, running them in a data center is more efficient, and the issue isn’t just “power consumption” but also how the power is generated and how the waste products (in this case excess heat and used water) are reclaimed on the other end.
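For what it’s worth, this is roughly what that local setup looks like; a minimal sketch using the llama-cpp-python bindings, where the model path and prompt are placeholder examples, not anything specific from this thread:

```python
# Minimal sketch of local, quantized LLM inference. Assumes the
# llama-cpp-python package and a 4-bit quantized GGUF model file;
# the path below is a placeholder, not a recommendation.
from llama_cpp import Llama

# Q4 quantization stores weights in ~4 bits instead of 16 or 32,
# cutting memory traffic and, with it, the energy drawn per token.
llm = Llama(
    model_path="models/example-7b-instruct.Q4_K_M.gguf",  # placeholder path
    n_ctx=2048,  # context window; larger costs more memory and energy
)

# One short completion: a few seconds of compute on laptop hardware,
# drawing watts rather than the sustained load of a server rack.
result = llm("Explain quantization in one sentence.", max_tokens=64)
print(result["choices"][0]["text"])
```

The tradeoff is a little output quality for a power budget that fits in a laptop’s thermal envelope, which is exactly the kind of usage the blanket power-consumption argument glosses over.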
I genuinely would not recommend that we engage in a back-and-forth breaking that down because, again, that’s what this very long thread has already been about, and a) I have heard every argument the AI moral panic has put forth (and the ones the dumb techbro singularity peddlers have put forth, too), and b) we’d just go down a circular rabbit hole of repeating what we’ve already established here over and over again and certainly not convince each other of anything (because see point a).
They exist at the current scale because we’re not regulating them, not whether we like it or not.
Absolutely not true. Regulations are both in place and in development, and none of them seem like they would prevent any of the applications currently on the market. I know the fearmongering side keeps arguing that a copyright case will stop the development of these but, to be clear, that’s not going to happen. All it’ll take to mitigate one is an extra line in an EULA, or investing in the dataset of someone who already has that line in their EULA (Twitter and Reddit already do; more to come, for sure). The industry is actually quite fond of copyright-based training restrictions, as their main effect is most likely to close off open source alternatives and make it so that only Meta, Google, and MS/OpenAI can afford model training.
These are super not going away. Regulation is needed, but it’s not going to restrict or eliminate these applications in any way that would make a dent in the (also poorly understood) power consumption costs.
Regulating markets absolutely does prevent practices in those markets. Literally the point.
Yeah, who’s saying it doesn’t? It prevents the practices it prevents and allows the rest of the practices.
The regulation you’re going to see on this does not, in fact, prevent making LLMs or image generators, though. And it does not, in fact, prevent running them and selling them to people.
You guys have gotten it into your heads that training data permissions are going to be the roadblock here, and they’re absolutely not going to be. There will be common sense options, like opt-outs and opt-out defaults by mandate, just as there are on issues of data privacy under GDPR, but not absolute bans by any means.
So how much did opt-out defaults under GDPR stop social media and advertising companies from running social media and advertising data businesses?
Exactly.
What that will do is make it so you have to own a large set of accessible data, like social media companies do. They are positively salivating at the possibility that AI training will require paying them, since they’ll have a user agreement that demands allowing your data to be sold for training. Meanwhile, developers of open alternatives, who currently work from a combination of openly accessible online data and monetized datasets put together specifically for research, will face more cost to develop alternatives. Ideally, the large AI corporations hope, the cost pressure will be enough that open developers are bullied out of the market, or at least forced to lag several generations behind in quality.
That’s what’s currently happening regarding regulation, along with a bunch of more reasonable guardrails about what you should and should not generate and so on. You’ll notice I didn’t mention anything about power or specific applications there. LLMs and image generators are not going away and their power consumption is not going to be impacted.