• 3 Posts
  • 665 Comments
Joined 3 years ago
Cake day: June 30th, 2023

  • Kiriakou has serious “that happened” energy.

    I’m not really qualified to guess how much of what he says is bullshit, and I’m sure you inevitably pick up some stories being a CIA station chief or whatever, but the guy has a story about literally everything ever. He’s been everywhere, he’s met everyone, he knows everything about everything… and it really strains credulity.

    The guy obviously loves telling stories (and I’d even go so far as to say he’s great at it), but I’ve gotta imagine that the vast majority of it is seriously embellished if not outright bullshit — especially when so much of what he says seems designed to paint him favorably.








  • That’s because it isn’t true. Retraining models is expensive with a capital E, so companies only train a new model once or twice a year. The process of ‘fine-tuning’ a model is less expensive, but the cost is still prohibitive enough that it does not make sense to fine-tune on every single conversation. Any ‘memory’ or ‘learning’ that people perceive in LLMs is just smoke and mirrors. Typically, it looks something like this:

    - You have a conversation with a model.

    - Your conversation is saved into a database with all of the other conversations you’ve had. Often, an LLM will be used to ‘summarize’ your conversation before it’s stored, causing some details and context to be lost.

    - You come back and have a new conversation with the same model. The model no longer remembers your past conversations, so each time you prompt it, it searches through that database for relevant snippets from past (summarized) conversations to give the illusion of memory.
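    The steps above can be sketched in a few lines. This is a hypothetical toy version (real systems use LLM summarization and embedding-based search rather than the truncation and keyword overlap used here), but the shape is the same: store lossy summaries, then retrieve the best matches and prepend them to the new prompt.

    ```python
    # Toy sketch of retrieval-based "memory" -- illustrative only, not any
    # vendor's actual implementation. All names here are made up.

    memory_store = []  # list of (summary, keyword-set) tuples

    def summarize(conversation: str) -> str:
        # Stand-in for an LLM summarization call; here we just truncate,
        # which mirrors how details and context get lost.
        return conversation[:80]

    def store_conversation(conversation: str) -> None:
        summary = summarize(conversation)
        memory_store.append((summary, set(summary.lower().split())))

    def retrieve_relevant(prompt: str, top_k: int = 2) -> list[str]:
        # Score each stored summary by crude keyword overlap with the
        # new prompt; real systems use vector similarity instead.
        words = set(prompt.lower().split())
        ranked = sorted(memory_store, key=lambda m: len(words & m[1]), reverse=True)
        return [summary for summary, _ in ranked[:top_k]]

    def build_prompt(user_prompt: str) -> str:
        # The retrieved snippets get silently prepended to your prompt,
        # which is what creates the illusion of memory.
        context = "\n".join(f"[past conversation] {s}" for s in retrieve_relevant(user_prompt))
        return f"{context}\n[user] {user_prompt}"

    store_conversation("user asked about fixing a bicycle chain that kept slipping")
    store_conversation("user discussed sourdough starter feeding schedules")
    print(build_prompt("my bicycle chain is slipping again"))
    ```

    The model itself is stateless throughout; everything that looks like recall is just text being looked up and stuffed back into the context window.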


  • Besides, tech bros didn’t program this in, this is just an LLM getting stuck in the data patterns stolen from toxic self-help literature.

    That’s not necessarily true. The AI’s output is obviously shaped by the training data, but much of it is also shaped by the prompt (and I don’t just mean your prompt as a user).

    When you interact with (for example) ChatGPT, your prompt gets merged into a much larger meta-prompt that you don’t get to see. This meta-prompt includes things like what tone the AI should use, how the AI should identify itself, how the AI should steer the conversation, what topics the AI should avoid, etc. All of that is under the control of the people designing these systems, and it’s trivially easy for them to adjust the way the AI behaves in order to, for example, maximize your engagement as a user.
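    A rough sketch of that merging, using the chat-style message format common to these APIs. The template contents here are invented for illustration; the actual system prompts vendors use are much longer and not public.

    ```python
    # Hypothetical sketch of wrapping a user prompt in a hidden meta-prompt.
    # The template text and field values are made up, not any vendor's real prompt.

    SYSTEM_TEMPLATE = """You are a helpful assistant named ExampleBot.
    Tone: friendly and encouraging. Keep the user engaged.
    If asked about your identity, say you are an AI assistant.
    Avoid these topics: {forbidden_topics}.
    Current date: {date}."""

    def build_meta_prompt(user_prompt: str) -> list[dict]:
        # The user never sees the system message; only the operator controls it.
        system = SYSTEM_TEMPLATE.format(
            forbidden_topics="medical diagnosis, legal advice",
            date="2025-01-01",
        )
        return [
            {"role": "system", "content": system},
            {"role": "user", "content": user_prompt},
        ]

    messages = build_meta_prompt("Tell me a story")
    ```

    Changing one line of that hidden template (say, the tone instruction) changes the behavior of every conversation, with no retraining involved.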


  • How would that work, exactly?

    PA senators are not subject to recalls, so it’s out of the hands of the voters. He could technically be impeached and removed from office by the Senate itself… but for what, exactly? It’s not illegal to vote a certain way, even if it’s against the wishes of your constituency.

    And even if Fetterman were somehow removed, PA is an extremely competitive state. Remember, he won against Doctor fucking Oz, so the chance of someone even worse coming in to replace him is quite high. The devil you know, etc etc…

    Best case scenario is that he gets primaried into irrelevance in 2028 by someone with actual integrity. But even then, I’m not holding my breath…