• cloaker@kbin.social · 8 points · 1 year ago

    Good. There’s no good way to detect whether plain text is AI-written. It’s a language model.

    • recycledbits@discuss.tchncs.de · +3/−4 · edited · 1 year ago

      “Is this AI-written?” is a difficult, maybe impossible question. “Did you write this?” is not. Running the language model over a text and recording its “amount of surprise per token” for every released GPT x.y variant is something they definitely can do.
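
      A rough sketch of that idea, using the open GPT-2 weights as a stand-in (the newer models’ weights aren’t public, so this illustrates the technique, not OpenAI’s actual pipeline):

      ```python
      # Toy surprisal check: how predictable does a language model find a text?
      # GPT-2 stands in here for the unreleased GPT x.y variants.
      import torch
      from transformers import GPT2LMHeadModel, GPT2TokenizerFast

      tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
      model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

      def mean_surprisal(text: str) -> float:
          """Average negative log-likelihood (nats per token) over the text."""
          ids = tokenizer(text, return_tensors="pt").input_ids
          with torch.no_grad():
              # Passing labels=ids makes the model return mean cross-entropy.
              return model(ids, labels=ids).loss.item()

      # Lower surprisal = more predictable to the model, which detectors read
      # as (weak) evidence that a similar model generated the text.
      print(mean_surprisal("The quick brown fox jumps over the lazy dog."))
      ```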

      • cynar@lemmy.world · 6 points · 1 year ago

        The issue is that AI detection and AI training are very similar tasks. Anything that can reliably detect an AI-written article can also be used to improve its training, and so quickly becomes obsolete.
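
        A hypothetical toy of that feedback loop (made-up tensors rather than real text, just to show why any differentiable detector doubles as a training signal):

        ```python
        # Toy adversarial loop: a differentiable detector becomes a loss
        # function for the very generator it was built to catch.
        import torch
        import torch.nn as nn

        detector = nn.Sequential(nn.Linear(8, 1), nn.Sigmoid())  # P("machine-written")
        generator = nn.Linear(4, 8)                              # noise -> fake "features"
        for p in detector.parameters():                          # detector held fixed
            p.requires_grad_(False)

        opt = torch.optim.Adam(generator.parameters(), lr=1e-2)
        for step in range(200):
            fake = generator(torch.randn(16, 4))
            loss = detector(fake).mean()   # train the generator to look "human"
            opt.zero_grad()
            loss.backward()
            opt.step()
        # Afterwards, the detector's own scores have been optimized away.
        ```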

        Meanwhile, a lot of people write in a manner that “looks” like an AI wrote it. That leads to the FAR more serious problem of false positives. Missing an AI-written paper at school or university level isn’t a big deal. A false positive, however, could ruin a young person’s life. It’s the same issue the justice system faces.
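
        The base-rate arithmetic makes the false-positive risk concrete (all numbers hypothetical):

        ```python
        # Even a decent detector flags mostly honest students when genuinely
        # AI-written submissions are the minority. Rates below are made up.
        fp_rate = 0.05    # honest essays wrongly flagged
        tp_rate = 0.90    # AI-written essays correctly flagged
        ai_share = 0.05   # fraction of submissions actually AI-written

        flagged_ai = ai_share * tp_rate             # 0.045
        flagged_honest = (1 - ai_share) * fp_rate   # 0.0475
        precision = flagged_ai / (flagged_ai + flagged_honest)
        print(f"P(actually AI | flagged) = {precision:.2f}")
        # ~0.49: about half of all flagged essays come from honest students.
        ```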

  • Spiracle@kbin.social · 8 points · 1 year ago

    Finally. I haven’t seen a single positive use of these detectors yet, given their poor performance. They’re only slightly more accurate than professors or lawyers asking ChatGPT whether something was written by ChatGPT.

  • RagnarokOnline@reddthat.com · 6 points · 1 year ago

    Sounds like they took it down to improve it?

    “We are working to incorporate feedback and are currently researching more effective provenance techniques for text, and have made a commitment to develop and deploy mechanisms that enable users to understand if audio or visual content is AI-generated,” OpenAI wrote.