• 0 Posts
  • 6 Comments
Joined 1 year ago
Cake day: June 19th, 2023

  • illiterate_coder@lemmy.world to Communism@lemmy.ml · Protestation
    9 months ago

    Not sure why you’re getting downvoted. The OP wasn’t explicitly about the US, but Bernie Sanders got 13M votes in the 2016 primary and he is very clearly in support of taxing the wealthy. That sure is a lot of “insane” people, isn’t it?

    What is unreasonable is assuming that taxing wealthy individuals is, on its own, enough to solve all these other social problems. There just aren’t enough billionaires for that to work.



  • I doubt anyone you are talking to is opposed to all human rights; that sounds very much like a straw man. Reasonable people can disagree about whether any particular right should be protected by law.

    The reason is simple: any legally protected right you have stands in direct opposition to some other right that I could have:

    • Your right to free speech is necessarily limited by my rights to, among other things, freedom from slander/libel, a fair trial, free and fair elections, protection from fraud, etc.
    • Your right to bodily autonomy can conflict with my right to health and safety when there is a global pandemic spreading and you refuse vaccination.
    • Your property rights are curtailed by rules against environmental harm, discrimination, insider trading, etc.

    No right is meant to be, or can be, absolute, and not all good government policy is based on rights. Turning a policy argument into one about human rights is not generally going to win the other person over; it’s akin to calling someone a racist because of their position on affirmative action. There’s no rational discussion to be had after that point.


  • I believe the answer is, unfortunately, no.

    Long answer: In the past, an ML researcher trying to do this would have used either manual labels (for example, a dictionary of parts of speech for each word) or multiple sub-models trained to solve each sub-problem before combining them into a full prediction model, and even then performance was not great.
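
    To make that concrete, here’s a toy sketch of that older, pipeline-style approach (the dictionary, rules, and function names are made up for illustration, not any real library’s API):

```python
# Toy sketch of the older pipeline approach: hand-made part-of-speech
# labels feed a crude, rule-based "sub-model". Everything here is a
# hypothetical illustration, not a real tagger.
POS_DICT = {
    "the": "DET", "cat": "NOUN", "sat": "VERB",
    "on": "ADP", "mat": "NOUN", "quickly": "ADV",
}

def pos_features(sentence):
    """Look up each word's part of speech; unknown words get 'UNK'."""
    return [POS_DICT.get(word.lower(), "UNK") for word in sentence.split()]

def guess_next_category(sentence):
    """A stand-in 'sub-model': guess the next word's category from the last tag."""
    last_tag = pos_features(sentence)[-1]
    # Hand-written rule instead of learned weights:
    return {"DET": "NOUN", "NOUN": "VERB", "VERB": "ADV"}.get(last_tag, "NOUN")

print(pos_features("The cat sat on the mat"))  # ['DET', 'NOUN', 'VERB', 'ADP', 'DET', 'NOUN']
print(guess_next_category("The cat"))          # 'VERB'
```

    Every piece of linguistic knowledge in that kind of pipeline has to be written or labeled by hand, which is exactly what the large models turned out not to need.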

    However, once the models grew to billions of parameters it turned out that none of this external linguistic knowledge is necessary and the model can learn it all on its own. But it takes billions to trillions of examples to learn all these weights, which means a double hit to the training time: each step is slower due to more parameters, and more steps are needed to train on the full dataset.
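
    To put rough numbers on that double hit, here’s a back-of-the-envelope sketch using the commonly cited approximation of roughly 6 FLOPs per parameter per training token (the model sizes and token counts below are illustrative assumptions, not real specs):

```python
# Back-of-the-envelope training-compute estimate using the common
# approximation: FLOPs ≈ 6 * parameters * training tokens.
# The sizes below are illustrative placeholders, not real model specs.
def train_flops(params, tokens):
    return 6 * params * tokens

toy   = train_flops(params=10e6, tokens=100e6)  # small toy model, small corpus
large = train_flops(params=7e9,  tokens=1e12)   # billions of params, ~a trillion tokens

print(f"toy:   {toy:.2e} FLOPs")      # ~6e15
print(f"large: {large:.2e} FLOPs")    # ~4e22
print(f"ratio: {large / toy:,.0f}x")  # roughly 7,000,000x more compute
```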

    None of these models are trainable without a cluster of GPUs, which massively parallelizes the training process.
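
    The usual way that parallelization works is data parallelism: every GPU keeps a copy of the model, each processes a slice of the batch, and gradients are averaged across all of them every step. Here is a minimal sketch of that pattern with PyTorch’s DistributedDataParallel (the tiny linear “model” and dummy objective are placeholders for a real LM):

```python
# Minimal data-parallel training sketch (PyTorch DistributedDataParallel).
# The model and loss are placeholders; the point is the parallelism pattern.
import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

def main():
    dist.init_process_group(backend="nccl")  # one process per GPU
    rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(rank)

    model = torch.nn.Linear(512, 512).to(f"cuda:{rank}")  # stand-in for a real LM
    model = DDP(model, device_ids=[rank])
    opt = torch.optim.AdamW(model.parameters(), lr=3e-4)

    for step in range(100):
        x = torch.randn(32, 512, device=f"cuda:{rank}")  # this process's share of the batch
        loss = model(x).pow(2).mean()                     # dummy objective
        opt.zero_grad()
        loss.backward()  # gradients are all-reduced (averaged) across GPUs here
        opt.step()

    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```

    Launched with something like `torchrun --nproc_per_node=8 train.py`, each process trains on its own slice of the data while the averaged gradients keep all copies of the model in sync.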

    That doesn’t mean you can’t try, but my results training a small toy model from scratch for 20-30 hours on a consumer GPU have been underwhelming. You get some nearly-grammatical sentences but also a lot of garbage, repetition, and incoherence.
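
    For anyone who wants to try the same experiment, here’s roughly what a from-scratch toy run looks like; everything below (the corpus, model size, and step count) is a placeholder, and a real attempt needs far more data and far more training time:

```python
# Minimal character-level language model trained from scratch (PyTorch).
# Corpus, sizes, and step count are toy placeholders.
import torch
import torch.nn as nn

text = "the quick brown fox jumps over the lazy dog " * 200
chars = sorted(set(text))
stoi = {c: i for i, c in enumerate(chars)}
data = torch.tensor([stoi[c] for c in text], dtype=torch.long)

class TinyCharLM(nn.Module):
    def __init__(self, vocab, dim=64):
        super().__init__()
        self.embed = nn.Embedding(vocab, dim)
        self.rnn = nn.GRU(dim, dim, batch_first=True)
        self.head = nn.Linear(dim, vocab)

    def forward(self, x):
        h, _ = self.rnn(self.embed(x))
        return self.head(h)  # logits for the next character at each position

device = "cuda" if torch.cuda.is_available() else "cpu"
model = TinyCharLM(len(chars)).to(device)
opt = torch.optim.AdamW(model.parameters(), lr=3e-3)

block, batch = 32, 16
for step in range(500):
    ix = torch.randint(len(data) - block - 1, (batch,))
    x = torch.stack([data[i:i + block] for i in ix]).to(device)          # input characters
    y = torch.stack([data[i + 1:i + block + 1] for i in ix]).to(device)  # targets, shifted by one
    logits = model(x)
    loss = nn.functional.cross_entropy(logits.reshape(-1, len(chars)), y.reshape(-1))
    opt.zero_grad()
    loss.backward()
    opt.step()

print(f"final loss: {loss.item():.3f}")
```

    Scaling a toy like this up to something that writes coherent paragraphs is exactly where the parameter counts, token counts, and GPU clusters described above come in.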