• BlameThePeacock@lemmy.ca · 3 months ago

    He’s quite a well-known election forecaster. Until recently he was responsible for FiveThirtyEight, but it got sold and he left.

    He got the 2016 election wrong (71 Hillary, 28 Trump). He got the 2020 election right (89 Biden, 10 Trump).

    Right and wrong are the incorrect terms here, but you get what I mean.

    • Ghostalmedia@lemmy.world · 3 months ago

      He didn’t get it wrong. He said the Clinton–Trump election was a tight horse race, and that Trump’s chances were one face of a four-sided die, about 1 in 4.

      The state-by-state data wasn’t far off.

      Problem is, people don’t understand statistics.

      • FlowVoid@lemmy.world · 3 months ago

        If someone said Trump had over a 50% probability of winning in 2016, would that be wrong?

        • Ghostalmedia@lemmy.world · 3 months ago

          In statistical modeling you don’t really have right or wrong. You have a level of confidence in a model, a level of confidence in your data, and a statistical probability that an event will occur.

          • FlowVoid@lemmy.world · 3 months ago

            So if my model says RFK has a 98% probability of winning, then it is no more right or wrong than Silver’s model?

            If so, then probability would be useless. But it isn’t useless. Probability is useful because it can make predictions that can be tested against reality.

            In 2016, Silver’s model predicted that Clinton would win, which was wrong. He knew his model was wrong, because he adjusted it after 2016. Why change something that is working properly?

            • Lauchs@lemmy.world · 3 months ago

              You’re conflating things.

              Your model itself can be wrong, absolutely.

              But for the person above to say Silver got something wrong because a lower-probability event happened is a little silly. It’d be like flipping a coin heads-side-up twice in a row and saying you’ve disproved statistics, because heads twice in a row should only happen one time in four.
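
              Here’s a minimal Python sketch of that coin-flip point (just an illustration, not anyone’s actual model):

              ```python
              import random

              # Flip a fair coin twice, many times over, and count how often
              # both flips land heads.
              trials = 100_000
              double_heads = sum(
                  random.random() < 0.5 and random.random() < 0.5
                  for _ in range(trials)
              )

              # Prints roughly 0.25: the "unlikely" outcome shows up constantly,
              # exactly as often as the stated odds say it should.
              print(double_heads / trials)
              ```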

              • FlowVoid@lemmy.world · 3 months ago

                Silver made a prediction. That’s the deliverable. The prediction was wrong.

                Nobody is saying that statistical theory was disproved. But it’s impossible to tell whether Silver applied theory correctly, and it doesn’t even matter. When a Boeing airplane loses a door, that doesn’t disprove physics but it does mean that Boeing got something wrong.

                • MonkRome@lemmy.world · 3 months ago

                  but it does mean that Boeing got something wrong.

                  Comparing it to Boeing shows you still misunderstand probability. Suppose his model covers 4 separate elections, and in each one the underdog candidate has a 1-in-4 chance of winning. If exactly 1 of those 4 underdogs wins, the model is likely working, but when that candidate wins everyone will say “but he said it was only a 1 in 4 chance!” It’s as silly as people being surprised by rain when the forecast says a 25% chance of rain. As long as it only rains about 1/4 of the time that prediction is made, the model is working. Presidential elections are tricky because there are so few of them, so forecasters test their models against past data to verify they’re working. But it’s just probability: it’s not saying this WILL happen, it’s saying these are the odds at this snapshot in time.
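
                  To make the rain example concrete, a rough simulation sketch (assuming, purely for illustration, a perfectly calibrated forecast):

                  ```python
                  import random

                  # Simulate 10,000 days on which the forecast said "25% chance
                  # of rain", assuming the forecast is well calibrated.
                  days = 10_000
                  rainy = sum(random.random() < 0.25 for _ in range(days))

                  # Roughly a quarter of the days get rain. Any single rainy day
                  # proves nothing; rain on half of all such days would be the
                  # actual sign the model is broken.
                  print(rainy / days)
                  ```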

                  • FlowVoid@lemmy.world · 3 months ago

                    Presidential elections are tricky because there is only one prediction.

                    Suppose your model says Trump has a 28% chance of winning in 2024, and mine says Trump has a 72% chance of winning in 2024.

                    There will only be one 2024 election. And suppose Trump loses it.

                    If that outcome doesn’t tell us anything about the relative strength of our models, then what’s the point of using a model at all? You might as well write a single line of code that spits out “50% Trump”; it would be equally useful.

                    The point of a model is to make a testable prediction. When the TV predicts a 25% chance of rain, that means it should rain on a quarter of the days they make such a prediction. It doesn’t have to rain every time.

                    But Silver only makes a 2016 prediction once, and then he makes a new model for the next election. So he has exactly one chance to get it right.

                • Lauchs@lemmy.world · 3 months ago

                  Silver made a prediction. That’s the deliverable.

                  I see what you’re not getting! You are confusing giving the odds with making a prediction, and those are very different things.

                  Let’s go back to the coin flips; maybe it’ll make things clearer.

                  I or Silver might point out that there’s a 75% chance of anything besides two heads in a row happening (which is accurate: 1 − 1/2 × 1/2 = 3/4). If, as will happen 1 in 4 times, two heads in a row does come up, does that somehow mean the odds I gave were wrong?

                  Same with Silver and the 2016 election.

                  • FlowVoid@lemmy.world · 3 months ago

                    I or Silver might point out there’s a 75% chance anything besides two heads in a row happening (which is accurate.)

                    Is it?

                    Suppose I gave you two coins, which may or may not be weighted. You think they aren’t, and I think they are weighted 2:1 towards heads. Your model predicts one head, and mine predicts two heads.

                    We toss and get two heads. Does that mean the odds I gave are right? Does it mean the odds you gave are wrong?

                    In the real world, your odds will depend on your priors, which you can never prove or disprove. If we were working with coins, then we could repeat the experiment and possibly update our priors.

                    But suppose we only have one chance to toss them, after which they shatter. In that case, the model we use for the coins, weighted vs. unweighted, is just a means to arrive at a prediction. The prediction can be right or wrong, but the internal workings of a one-shot model (including its odds) are unfalsifiable. Same with Silver and the 2016 election.
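
                    The “update our priors” step can be written out. A minimal Bayes sketch for the two-coin setup (fair vs. weighted 2:1 toward heads, i.e. P(heads) = 2/3), after observing two heads:

                    ```python
                    # Two hypotheses about the coins, starting from even priors.
                    priors = {"fair": 0.5, "weighted": 0.5}

                    # Likelihood of observing two heads under each hypothesis.
                    likelihood = {"fair": 0.5 * 0.5, "weighted": (2 / 3) * (2 / 3)}

                    evidence = sum(priors[h] * likelihood[h] for h in priors)
                    posterior = {h: priors[h] * likelihood[h] / evidence for h in priors}

                    # posterior ≈ {"fair": 0.36, "weighted": 0.64}: two heads shifts
                    # belief toward the weighted hypothesis, but proves nothing.
                    print(posterior)
                    ```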

                • humorlessrepost@lemmy.world · 3 months ago

                  Silver made a prediction. That’s the deliverable. The prediction was wrong.

                  Would you mind restating the prediction?

                  • FlowVoid@lemmy.world · 3 months ago

                    He predicted Clinton would win. That’s the only reasonable prediction if her win probability was over 50%.

                • SpaceCowboy@lemmy.ca · 3 months ago

                  It’s forecasting, not a prediction. If the weather forecast said there was a 28% chance of rain tomorrow, and then tomorrow it rained, would you say the forecast was wrong? You could say that if you want, but the point isn’t to give a definitive prediction of the outcome (because that’s not possible); it’s to give you an idea of what to expect.

                  If there’s a 28% chance of rain, it doesn’t mean it’s not going to rain; it actually means you might want to consider taking an umbrella with you, because there’s a significant probability it will rain. If a batter with a .280 batting average comes to the plate with 2 outs in the bottom of the ninth, that doesn’t mean the game is over. If a politician has a 28% probability of winning an election, it’s not a statement that the politician will definitely lose.

                  • FlowVoid@lemmy.world · 3 months ago

                    If the weather forecast said there was a 28% chance of rain tomorrow and then tomorrow it rained would you say the forecast was wrong?

                    Is it possible for the forecast to be wrong?

                    I think so. If you look at all the times the forecast predicted a 28% chance of rain, then it should have rained on 28% of those days. If it rained, say, on half the days that the forecast gave a 28% chance of rain, then the forecast would be wrong.

                    With Silver, the same principle applies. Clinton should win at least 50% of the 2016 elections where she has at least a 50% chance of winning. She didn’t.

                    If Silver kept the same model over multiple elections, then we could look at his probabilities in finer detail. But he doesn’t.

                • Miphera@lemmy.world · 3 months ago

                  How about this:

                  Two people give the odds for the result of a flip of a fair, unweighted coin.

                  Person A: Heads = 50%, Tails = 50%

                  Person B: Heads = 75%, Tails = 25%

                  The result of the coin flip ends up being Heads. Which person had the more accurate model? Did Person A get something wrong?

                  • FlowVoid@lemmy.world · 3 months ago

                    Person B’s predicted outcome was closer to the truth.

                    Perhaps person A’s prediction would improve if multiple trials were allowed. Perhaps their underlying assumptions are wrong (i.e. the coin is not actually unweighted).
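
                    One standard way to make “closer to the truth” precise is a proper scoring rule such as the Brier score: the squared error of the stated probability against the 0/1 outcome. A minimal sketch:

                    ```python
                    def brier(prob_heads: float, outcome_heads: bool) -> float:
                        """Squared error of a probability against the actual outcome.
                        Lower is better."""
                        return (prob_heads - (1.0 if outcome_heads else 0.0)) ** 2

                    # The flip came up heads.
                    print(brier(0.50, True))  # Person A: 0.25
                    print(brier(0.75, True))  # Person B: 0.0625

                    # A single flip favors B, but over many flips of a truly fair
                    # coin, A's 50% forecast earns the better average score.
                    ```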

            • Logi@lemmy.world · 3 months ago

              Probability is useful because it can make predictions that can be tested against reality.

              Yes. But you’d have to run the test repeatedly and see if the outcome, i.e. Clinton winning, happens as often as the model predicts.

              But we only get to run an election once. And there is no guarantee that the most likely outcome will happen on the first try.

              • FlowVoid@lemmy.world · 3 months ago

                If you can only run an election once, then how do you determine which of these two results is better (given that Trump won in 2016):

                1. Clinton has a 72% probability of winning in 2016
                2. Trump has a 72% probability of winning in 2016
                • BreadstickNinja@lemmy.world · 3 months ago

                  You do it by comparing the state voting results to pre-election polling. If the pre-election polling said D+2 and your final result was R+1, then you have to look at your polls and individual polling firms and determine whether some bias is showing up in the results.

                  Is there selection bias or response bias? You might find that a set of polls is randomly wrong, or you might find that they’re consistently wrong, adding 2 or 3 points in the direction of one party but generally tracking with results across time or geography. In that case, you determine a “house effect”: either the people that firm is calling, or the people who will talk to them, lean 2 to 3 points more Democratic than the electorate.

                  All of this is explained on the website and it’s kind of a pain to type out on a cellphone while on the toilet.
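
                  As a sketch of the house-effect arithmetic (hypothetical numbers, not FiveThirtyEight’s actual method):

                  ```python
                  # Hypothetical polls from one firm, as (poll_margin, actual_margin)
                  # pairs, where positive numbers favor one party. Made-up data.
                  polls = [(+2.0, -1.0), (+4.0, +1.5), (+1.0, -1.0)]

                  # House effect: the firm's average signed miss against results.
                  house_effect = sum(p - a for p, a in polls) / len(polls)
                  print(house_effect)  # +2.5: firm leans ~2.5 points toward that party

                  # Subtract the lean to adjust the firm's next poll.
                  print(3.0 - house_effect)  # adjusted estimate: 0.5
                  ```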

                  • FlowVoid@lemmy.world · 3 months ago

                    You are describing how to evaluate polling methods. And I agree: you do this by comparing an actual election outcome (e.g. statewide vote totals) to the results of your polling method.

                    But I am not talking about polling methods, I am talking about Silver’s win probability. This is some proprietary method that takes other people’s polls as input (Silver is not a pollster) and outputs a number, like 28%. There are many possible ways to combine the poll results, giving different win probabilities. How do we evaluate Silver’s method, separately from the polls?

                    I think the answer is basically the same: we compare it to an actual election outcome. Silver said Trump had a 28% win probability in 2016, which means he should win 28% of the time. The actual election outcome is that Trump won 100% of his 2016 elections. So as best as we can tell, Silver’s win probability was quite inaccurate.

                    Now, if we could rerun the 2016 election, maybe his estimate would look better over multiple trials. But we can’t do that; all we can ever do is compare 28% to 100%.

        • machinin@lemmy.world · 3 months ago

          Just for other people reading this thread, the following comments are an excellent case study in how an individual (the above poster) can be so confidently mistaken, even when other posters try to patiently correct them.

          May we all be more respectful of our own ignorance.