• frightful_hobgoblin@lemmy.ml · 2 months ago

    when you input something into an LLM and regenerate the responses a few times, it can come up with outputs of completely opposite (and equally incorrect) meaning

    Can you paste an example of this error?

    • booty [he/him]@hexbear.net · 2 months ago (edited)

      Have you ever used an LLM?

      Here’s a screenshot I took after spending literally 10 minutes with ChatGPT very confidently stating incorrect answers to a simple question, over and over (from this thread).

      Not only is it completely incapable of coming up with a correct answer to a very simple question, it is completely incapable of responding coherently to the fact that none of its answers are correct. Humans don’t behave this way. Nothing that understands what is being said would respond this way. It responds this way because it has no understanding of the meaning of anything being said to it. It is responding based on the statistical likelihoods of words and phrases following one another, like a Markov chain but slightly more advanced; a toy sketch of that analogy is below.
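
      For the curious, here is a minimal sketch of the Markov-chain side of that analogy (the toy corpus and function names are invented for illustration, not anyone’s real code). An LLM replaces this word-follow table with a learned neural distribution over tokens, but the generation loop is the same idea: sample the next token from statistics of what tends to come next, with no model of meaning anywhere.

      ```python
      import random
      from collections import defaultdict

      def build_bigram_model(text):
          # Record, for each word, every word observed immediately after it.
          words = text.split()
          model = defaultdict(list)
          for prev, nxt in zip(words, words[1:]):
              model[prev].append(nxt)
          return model

      def generate(model, start, length=12):
          # Each step samples the next word purely from follow-frequency:
          # no grammar, no facts, no notion of what the sentence means.
          out = [start]
          for _ in range(length):
              followers = model.get(out[-1])
              if not followers:
                  break
              out.append(random.choice(followers))
          return " ".join(out)

      corpus = "the cat sat on the mat and the dog sat on the rug"
      print(generate(build_bigram_model(corpus), "the"))
      ```

      Run it twice and you get two different (equally meaningless) outputs, because `random.choice` samples a fresh path each time, which is also why regenerating an LLM response can flip its answer entirely.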

      • UlyssesT [he/him]@hexbear.net · 2 months ago (edited)

        You were arguing with such an incredibly misanthropic piece of shit that of course they see a sufficient number of TI-88s bolted together as a direct analogue to self-aware, conscious human intelligence.

        Look at how that piece of shit treats other human beings: like the inferior “meat computers” that such a techbro mindset reduces them to.

        https://hexbear.net/comment/5438712