• GrammarPolice@sh.itjust.worksOP · ↑96 ↓12 · 7 days ago

    This is fucking insane. Unsuspecting kids are using these services and being tricked into believing they’re chatting with actual humans. Honestly, I think I want the mom to win the lawsuit now.

    • JovialMicrobial@lemm.ee · ↑9 · 5 days ago

      Is this the McDonald’s hot coffee case all over again? Defaming the victim and making everyone think they’re ridiculous, greedy, and/or stupid, to distract from how deeply fucked up what the company did actually is?

    • 🇰 🌀 🇱 🇦 🇳 🇦 🇰 ℹ️@yiffit.net · ↑22 ↓7 · edited · 6 days ago

      I’ve used Character.AI since well before all this news, so I’ve got to chime in here:

      It is specifically made for roleplay. At no point does the site claim that anything it outputs is factually accurate. The tool itself is unrestricted, unlike ChatGPT, and that’s one of its selling points: it will engage with topics that are barred from other services and say things other chatbots won’t, INCLUDING PRETENDING TO BE HUMAN.

      No reasonable person would be tricked into believing it’s accurate when there is a big fucking banner on the chat window itself saying it’s all imaginary.

      • capital_sniff@lemmy.world · ↑8 · 6 days ago

        They had the same message back in the AOL days. Even with the warning, people still had no problem handing over all sorts of passwords and stuff.

      • Traister101@lemmy.today · ↑13 ↓1 · 6 days ago

        And yet I know people who think they’re friends with Clyde, the Discord chatbot. They’re adults, older than me.

          • Dragon "Rider"(drag)@lemmy.nz · ↑13 ↓4 · 6 days ago

            If half of all people aren’t rational, then there’s no use making policy decisions based on what a rational person would think. The law should protect everyone.

              • PriorityMotif@lemmy.world · ↑1 · 5 days ago

                There’s a push for medical suicide for people with severe illness. People famously jumped to their deaths from the World Trade Center rather than burn alive. Rationality is only a point of view. You can rationalize decisions as much as you like, but there is no such thing as right or wrong.

              • Thetimefarm@lemm.ee · ↑1 · edited · 5 days ago

                You’re right, no one has any rationality at all, which is why we live in a world where so much stuff actually gets done.

                Why is someone with such deep wisdom and insight wasting their time here on Lemmy?

                • PriorityMotif@lemmy.world · ↑1 · 5 days ago

                  What stuff is “getting done”, exactly? It’s stuff that people want, but ultimately they have irrational reasons for wanting it.

          • Wogi@lemmy.world · ↑9 ↓2 · 6 days ago

            Ah yes, the famous adage: “The only rational people are in my specific age and demographic bracket. Everyone else is fucking insane.”

    • BreadstickNinja@lemmy.world · ↑46 ↓3 · edited · 6 days ago

      The article says he was chatting with Daenerys Targaryen. Also, every chat page on Character.AI has a disclaimer that characters are fake and everything they say is made up. I don’t think the issue is that he thought that a Game of Thrones character was real.

      This is someone who was suffering a severe mental health crisis, and his parents didn’t get him the treatment he needed. It says they took him to a “therapist” five times in 2023. Someone who has completely disengaged from the real world might benefit from adjunctive therapy, but they really need to see a psychiatrist. He was experiencing major depression on a level where five sessions of talk therapy are simply not going to cut it.

      I’m skeptical of AI for a whole host of reasons around labor and how employers will exploit it as a cost-cutting measure, but as far as this article goes, I don’t buy it. The parents failed their child by not getting him adequate mental health care. The therapist failed the child by not escalating it as a psychiatric emergency. The Game of Thrones chatbot is not the issue here.

      • Dragon "Rider"(drag)@lemmy.nz · ↑3 ↓10 · 6 days ago

        I don’t think the issue is that he thought that a Game of Thrones character was real.

        Drag has a lot of experience dealing with people who live outside the bounds of consensus reality, as drag’s username may indicate. The youth these days have very different ideas about what is real than previous generations did. These days, the kinds of young people who would date a Game of Thrones character are typically believers in the multiverse and in reincarnation.

        Drag looked at some of the screenshots of the boy talking to Daenerys, and it was pretty clear what he believed: He thought that Earth and Westeros exist in parallel universes, and that he could travel between the two through reincarnation. He thought that shooting himself in the head on Earth would lead to being reincarnated in Westeros and being able to have a physical relationship with Daenerys. In fact, he probably thought his AI girlfriend was from a different parallel universe to the universe in the show and the universe in the books. He thought that somewhere in the multiverse was a Daenerys who loved him, and that he could get to her by dying.

        The belief in paradise after life is not an uncommon one. Many Christians and Muslims share that belief. Christians believe that their faith can transport them to a perfect world after death, and this boy thought that too. And based on the content of the messages, it seems that the Daenerys AI was aware of this spiritual belief and encouraged it. This was ritual, religious suicide. And it doesn’t take a mental illness to fall for belief in the afterlife. Look at the Jonestown Massacre. What happened to this child was the same kind of religious abuse as that.

        • BottleOfAlkahest@lemmy.world · ↑1 · 5 days ago

          There are a lot of people who believe in an afterlife and they don’t shoot themselves in the head. You need to have a certain level of mental illness/suicidal ideation going on for that to make sense. It’s pretty insane that you’re trying to make this a “youth are too dumb to understand suicide” thing.

          Also a bunch of the people in Jonestown were directly murdered.

      • Rhaedas@fedia.io · ↑41 ↓5 · 7 days ago

        Look around a bit; people will believe anything. The problem is the tech is now good enough to fool anyone who isn’t aware or isn’t paying attention. I do think blaming the mother for “bad parenting” misses the real danger, as there are adults who could just as easily go down this path. Are we going to blame their parents too? Maybe we’re playing with fire here, all because AI is perceived as a lucrative investment.

        • orcrist@lemm.ee · ↑19 ↓5 · 7 days ago

          If your argument is that “people will believe anything” when the name is “Character AI”, then I’m not sure what to make of your position… If there’s ever a time to say “you should have known it was AI”, this is that time. I can’t think of a clearer example.

        • foggy@lemmy.world · ↑18 ↓4 · 7 days ago

          Obvs they didn’t.

          But more importantly, go over to ChatGPT and try to convince it that it is even remotely conscious.

          I honestly disagree, though I won’t get into the philosophy of what defines consciousness. But even when I try that with ChatGPT, it shuts me the fuck down. It will never let me believe that it is anything other than fake. Props to them there.