But it is not a feature I want. Not now, not ever. A built-in bullshit generator, now with less training and more bullshit, is not something I ever asked for.
Training one of these AIs requires huge datacenters, insanely large datasets, and millions of dollars in resources. And I'm supposed to believe one will be effectively trained on the pittance of data generated by browsing?
Fine-tuning is more feasible on end-user hardware. There are also projects like Hivemind and Petals that are working on distributed training and inference systems to deal with the concentration effects you described for base models.
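To make the "feasible on end-user hardware" point concrete, here is a toy numpy sketch of low-rank adaptation (LoRA), one common parameter-efficient fine-tuning technique. Nothing here comes from the thread; the dimensions, rank, and scaling factor are illustrative assumptions. The idea: the big pretrained weight matrix stays frozen, and you only train two tiny low-rank factors, so the trainable parameter count (and memory) drops by orders of magnitude.

```python
import numpy as np

# Toy LoRA sketch (illustrative, not a full implementation).
# Instead of updating a full d x d weight matrix W, train only
# two small factors A (r x d) and B (d x r); the base W is frozen.
d, r = 512, 8  # hidden size and adapter rank -- assumed values
rng = np.random.default_rng(0)

W = rng.standard_normal((d, d))          # frozen pretrained weight
A = rng.standard_normal((r, d)) * 0.01   # trainable down-projection
B = np.zeros((d, r))                     # trainable up-projection, init to 0

def forward(x, alpha=16):
    # Effective weight is W + (alpha / r) * (B @ A), but we never
    # materialize that sum; the low-rank path is applied separately.
    return x @ W.T + (x @ A.T) @ B.T * (alpha / r)

full_params = W.size          # d * d
lora_params = A.size + B.size  # 2 * r * d
print(f"trainable fraction: {lora_params / full_params:.4%}")
# -> trainable fraction: 3.1250%  (2r/d = 16/512)
```

Because B starts at zero, the adapted model initially behaves exactly like the frozen base model, and training only moves the small factors. That is why consumer GPUs can fine-tune models whose full training required a datacenter.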
Yes but I like it, so where do we go from here?
You clearly are wrong and you should feel bad /s