• Simmy@lemmygrad.ml · 3 hours ago

    Not a developer. I just wonder how AI hallucinations come about. Is it the ‘need’ to complete the task requested at the cost of being wrong?

    • send_me_your_ink@lemmynsfw.com · 36 minutes ago

      Full disclosure - my background is in operations (think IT), not AI research. So some of this might be wrong.

      What’s marketed as AI is something called a large language model. The distinction matters because “AI” implies intelligence, whereas an LLM is something else. At a high level, LLMs use “tokens” to break natural language apart into elements a machine can work with, and then recombine those tokens to “create” something new. When an LLM is producing output it does not know what it is saying - it only knows which token statistically comes after the token(s) it has already generated.
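
      To make that concrete, here is a rough toy sketch of the token idea in Python. It uses a made-up word-level scheme purely for illustration - real LLMs use subword tokenizers, and this is not how any of them actually work internally:

      ```python
      # Toy sketch: break text into tokens (numbers), then recombine them.
      text = "the cat sat on the mat"

      # Break the text into pieces and give each distinct piece a number.
      pieces = text.split()
      vocab = {word: i for i, word in enumerate(dict.fromkeys(pieces))}
      ids = [vocab[word] for word in pieces]
      print(ids)  # [0, 1, 2, 3, 0, 4] - this is what the model actually sees

      # "Recombine": map the numbers back to pieces to get readable text again.
      id_to_word = {i: word for word, i in vocab.items()}
      print(" ".join(id_to_word[i] for i in ids))  # "the cat sat on the mat"
      ```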

      So, to answer your question: an AI can hallucinate because it does not know the answer - it’s using advanced math to work out that the period goes at the end of the sentence and not in the middle.
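
      Here is an equally rough sketch of why that leads to confident nonsense, assuming a tiny made-up “training” text: the generator just picks whichever word most often followed the last word it saw, so it will happily state something false as long as it sounds statistically normal:

      ```python
      # A toy "model" trained on a made-up snippet of text. It has no idea
      # what is true - it only counts which word tends to follow which.
      from collections import Counter, defaultdict

      training_text = (
          "sydney is the largest city in australia . "
          "the busiest airport in australia is sydney . "
          "the capital of australia is canberra . "
          "the most famous harbour in australia is sydney ."
      )

      after = defaultdict(Counter)
      words = training_text.split()
      for current, nxt in zip(words, words[1:]):
          after[current][nxt] += 1

      def complete(prompt):
          # Append the statistically most common follower of the last word.
          next_word = after[prompt.split()[-1]].most_common(1)[0][0]
          return prompt + " " + next_word

      # Fluent and confident, but wrong: "sydney" followed "is" more often
      # than "canberra" did in the training text, so that's what comes out.
      print(complete("the capital of australia is"))
      # -> the capital of australia is sydney
      ```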