Nate Cohn @Nate_Cohn
@gelliottmorris @On_Politike it is entirely possible that a well-calibrated model should have basically *0* state results outside of the CI in a given cycle, but many outside of the CI in 1:20 cycles with significant systematic error — PolitiTweet.org
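The claim above can be sketched with a toy Monte Carlo (all numbers here are assumptions for illustration, not anyone's actual model): when state errors share a large systematic component, most cycles see zero states outside their CIs, while the rare bad cycle sees many miss at once.

```python
# Hypothetical sketch: 50 states, each with an assumed 95% CI of +/-6 points,
# where polling error is dominated by a shared national component. In most
# cycles every state lands inside its CI; in the rare cycle with a large
# systematic error, many states miss simultaneously.
import random

random.seed(0)
N_STATES, N_CYCLES = 50, 10_000
CI = 6.0  # assumed half-width of each state's interval, in points

zero_misses = 0   # cycles where no state falls outside its CI
many_misses = 0   # cycles where 10+ states do
for _ in range(N_CYCLES):
    national = random.gauss(0, 2.5)  # shared systematic error (assumed sd)
    misses = sum(
        abs(national + random.gauss(0, 1.5)) > CI  # state noise on top
        for _ in range(N_STATES)
    )
    zero_misses += misses == 0
    many_misses += misses >= 10

print(f"cycles with zero misses: {zero_misses / N_CYCLES:.0%}")
print(f"cycles with 10+ misses:  {many_misses / N_CYCLES:.0%}")
```

The misses cluster because the national term moves every state together; counting each state's CI coverage independently would badly misjudge calibration.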
Nate Cohn @Nate_Cohn
@gelliottmorris @On_Politike you really can't due to state correlated error. — PolitiTweet.org
Nate Cohn @Nate_Cohn
@gelliottmorris @On_Politike it's not wasted knowledge, but it also risks overfitting. if your CI doesn't really allow for the kinds of errors that happened routinely last time--and the reason is because you designed a model that *would* have been better--then I'd say that's what we have here — PolitiTweet.org
Nate Cohn @Nate_Cohn
@gelliottmorris @On_Politike i'm sorry, but the fact that your sophisticated design would have gotten it right after the fact in *no way* shields you from the real source of forecasting errors at play here — PolitiTweet.org
Nate Cohn @Nate_Cohn
@gelliottmorris @On_Politike :( — PolitiTweet.org
Nate Cohn @Nate_Cohn
@gelliottmorris @On_Politike but if your CI doesn't really even allow for the kinds of errors that happened *often* in the last election, then maybe it doesn't? — PolitiTweet.org
Nate Cohn @Nate_Cohn
@On_Politike @gelliottmorris I am! These models aren't that good this far out, and so you need big confidence intervals! — PolitiTweet.org
Nate Cohn @Nate_Cohn
@gelliottmorris @On_Politike (and that 538 number wasn't nowcast, that was their public forecast at the time. i'm sure it did just fine on the training set) — PolitiTweet.org
Nate Cohn @Nate_Cohn
@gelliottmorris @On_Politike don't take this the wrong way, but it is 100% irrelevant to me that a model fit after the fact would have been fine. it is a grave mistake to believe you're immune to the kinds of errors that forecasters made in the past — PolitiTweet.org
Nate Cohn @Nate_Cohn
@gelliottmorris in all seriousness, getting it right after the fact just doesn't count. fivethirtyeight, without the benefit of hindsight, would have been off by *13* in OH at this point. — PolitiTweet.org
Nate Cohn @Nate_Cohn
@On_Politike @gelliottmorris the nowcast in ND was trump+14. he won by 36! these things happen; they just did! — PolitiTweet.org
Nate Cohn @Nate_Cohn
@On_Politike @gelliottmorris they did! the fivethirtyeight nowcast at this point in '16 had Clinton *ahead* in Iowa, which she lost by more than 9. It had Clinton up *4* in Ohio, which she lost by 8. — PolitiTweet.org
Nate Cohn @Nate_Cohn
@gelliottmorris and look, if we're talking subjective: do you really think your CI in IA/OH/UT, just to take a few I glanced at, is appropriate? — PolitiTweet.org
Nate Cohn @Nate_Cohn
@gelliottmorris i mean, you just saw in the last election many states--9 or 10--swing 10 points from '16/where you'd have forecast them right around now. given the small n, you really don't know just how far that is down the tail. — PolitiTweet.org
Nate Cohn @Nate_Cohn
@Evar_Galois @gelliottmorris more or less — PolitiTweet.org
Nate Cohn @Nate_Cohn
@gelliottmorris You can also think of this in terms of the demographic shocks. *You* know in '16 there's a risk of a Trump surge among white non-college voters, but maybe not other kinds. How does the model account for that risk but *not* the ones you *don't* find plausible, like a FL Hispanic surge? — PolitiTweet.org
Nate Cohn @Nate_Cohn
@gelliottmorris You know the demographic explanation for OH, so you nod and say it makes sense. But the model doesn't, and so it's got to allow for Trump+9 in FL as a very realistic possibility. And tbh, your model barely even allows for the '16 result in OH/IA at this point! — PolitiTweet.org
Nate Cohn @Nate_Cohn
@gelliottmorris now, it's true--having a model that's prepared for craziness in UT will also allow for some wild things in FL. But pretty crazy things happen! By the numbers, Trump+9 in OH just isn't very different, and it just happened in '16. — PolitiTweet.org
Nate Cohn @Nate_Cohn
@gelliottmorris look, it's easy to point out that tail risks probably won't happen. but like, they can and will! and it's hard to train a model to pick up on the risk of it. maybe utah is the IRL state where we should have fat tails, and now you're in very serious danger of being outside your CI — PolitiTweet.org
Nate Cohn @Nate_Cohn
@gelliottmorris no one knows what the tails look like, since we have like 18 elections or whatever! — PolitiTweet.org
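The point about unknown tails can be made concrete (with assumed numbers, not anyone's real model): a normal error model and a fat-tailed Student-t model with the same spread look similar in the bulk of the data, yet disagree by roughly an order of magnitude about a 10-point miss, and a couple dozen elections can't distinguish them.

```python
# Hypothetical illustration: compare the probability of a 10-point state
# polling miss under a Normal(0, 3) error model vs. a Student-t (df=4)
# rescaled to the same sd. The sd and df are assumptions for illustration.
from math import erf, gamma, pi, sqrt

SD = 3.0      # assumed sd of state polling error, in points
MISS = 10.0   # size of the miss in question

def normal_tail(x, sd):
    """P(error > x) under Normal(0, sd)."""
    return 0.5 * (1 - erf(x / (sd * sqrt(2))))

def t_tail(x, df, sd):
    """P(error > x) under a Student-t with df dof, rescaled to sd."""
    scale = sd * sqrt((df - 2) / df)  # match the variance to sd**2
    c = gamma((df + 1) / 2) / (sqrt(df * pi) * gamma(df / 2) * scale)
    # trapezoid-rule integration of the pdf over the upper tail
    step, hi = 0.01, x + 60 * sd
    ts = [x + i * step for i in range(int((hi - x) / step) + 1)]
    pdf = [c * (1 + (t / scale) ** 2 / df) ** (-(df + 1) / 2) for t in ts]
    return step * (sum(pdf) - 0.5 * (pdf[0] + pdf[-1]))

p_norm = normal_tail(MISS, SD)
p_fat = t_tail(MISS, df=4, sd=SD)
print(f"P(10-pt miss), normal model:  {p_norm:.5f}")
print(f"P(10-pt miss), fat-tailed t:  {p_fat:.5f}")
```

With so few elections observed, neither tail shape can be ruled out empirically, which is exactly why the size of the CI far from the center is a judgment call.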
Nate Cohn @Nate_Cohn
@gelliottmorris not if the next state on the list is trump+10! obviously it's an extreme example, but it's absolutely true and it's worth mulling it if the example didn't hit on first read — PolitiTweet.org
Nate Cohn @Nate_Cohn
@gelliottmorris to be a little more practical and less theoretical: serious opportunities in all of GA/TX/OH increase biden's mean EC victory v. '16 without really improving his p(victory) — PolitiTweet.org
Nate Cohn @Nate_Cohn
@gelliottmorris it's not. to take an extreme case, just to prove the principle if not in practice: a mean EC of biden at 290 with biden+10 in the tipping point state, but trump+10 in the next state, would be a higher p(victory) for biden than today's map — PolitiTweet.org
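The extreme case above can be simulated with a stylized map (made-up electoral-vote blocks and margins, not real states, under a uniform-national-swing assumption): a lopsided map whose tipping-point state is Biden+10 and whose next state is Trump+10 yields a higher p(victory) than a map with a *larger* mean EC but a close tipping point.

```python
# Stylized sketch of the "mean EC vs. p(victory)" distinction. States are
# (electoral votes, Biden margin in points); all numbers are invented.
import random

random.seed(1)

# Tipping-point state at Biden+10, next state Trump+10, mean EC near 290.
LOPSIDED = [(232, 25), (58, 10), (248, -10)]
# Tipping point in the low single digits, more EVs within reach on average.
CLOSE = [(232, 25), (20, 5), (38, 3), (248, -3)]

def simulate(states, trials=50_000, swing_sd=4.0):
    """Win prob and mean EV under a normal uniform national swing (assumed sd)."""
    wins, total_ev = 0, 0
    for _ in range(trials):
        swing = random.gauss(0, swing_sd)
        ev = sum(v for v, margin in states if margin + swing > 0)
        wins += ev >= 270
        total_ev += ev
    return wins / trials, total_ev / trials

p_lop, ev_lop = simulate(LOPSIDED)
p_close, ev_close = simulate(CLOSE)
print(f"lopsided map: mean EV {ev_lop:.0f}, p(victory) {p_lop:.2f}")
print(f"close map:    mean EV {ev_close:.0f}, p(victory) {p_close:.2f}")
```

The close map piles up expected electoral votes from states that flip on a good night, but its win probability hinges on a narrow tipping-point margin, so mean EC rises without p(victory) rising.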
Nate Cohn @Nate_Cohn
@gelliottmorris so, yes, it's clinton+4 v. biden+8 or whatever nationwide. but today we know about biden's challenge in the E.C., in terms of both the polls and our priors. this is a good thing, but it means you can't expect equiv national margins to yield equiv p(dem) — PolitiTweet.org
Nate Cohn @Nate_Cohn
@gelliottmorris here's PA 8/27. Nowcast '16: Clinton+5.3. Polls-plus forecast '16: Clinton+3.9 (73%). Today: poll average '20: Biden+5.3; polls-plus forecast: ~Biden+4, 52-48 (70%) — PolitiTweet.org
Nate Cohn @Nate_Cohn
@gelliottmorris you really need to do the ec tipping point poll avg/forecast (or maybe an average of the few closest), not the nation. the gop ec edge simply was not evident in the polls/priors at this point in '16 (and tbh never was) — PolitiTweet.org
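The tipping-point arithmetic behind the PA numbers can be sketched as a back-of-the-envelope conversion from a tipping-point margin to a win probability, using a normal CDF. The error sd below is an assumption chosen for illustration, not a figure from either model.

```python
# Treat the race as decided by the tipping-point state's margin plus a
# normally distributed forecast error (assumed sd; hypothetical helper).
from math import erf, sqrt

def p_win(tipping_point_margin, error_sd):
    """P(margin + error > 0) with error ~ Normal(0, error_sd)."""
    return 0.5 * (1 + erf(tipping_point_margin / (error_sd * sqrt(2))))

# With a roughly 7.5-point error sd this far out, a Biden+4 tipping point
# lands near the ~70% the thread quotes; the same national margin paired
# with a weaker tipping-point state is worth much less.
print(f"{p_win(4.0, 7.5):.0%}")
print(f"{p_win(1.0, 7.5):.0%}")
```

This is why the national average is the wrong input: p(dem) tracks the tipping-point margin, and a GOP Electoral College edge shrinks that margin relative to the national one.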
Nate Cohn @Nate_Cohn
RT @bryanwx: Kudos to @NHC_Atlantic for their forecasts for Hurricane Laura. The landfall area was first in the cone on Friday, August 21st… — PolitiTweet.org
Nate Cohn @Nate_Cohn
@gelliottmorris not exactly sure what the comparison is here; between 'nowcast' and forecast? i don't think any number--10 v 25-- is intuitively right or wrong, and the only thing i'm confident about is that we don't have anywhere near enough data to answer it with confidence — PolitiTweet.org
Nate Cohn @Nate_Cohn
That said, they didn't use approval in '16 and presidential approval is not exactly the same as the 'fundamentals,' either. It's so correlated with vote choice that you're getting close to modeling off of the dependent variable here — PolitiTweet.org
Nate Cohn @Nate_Cohn
I think the strongest substantive critique of the FiveThirtyEight model is that it's leaving info on the table by omitting Trump approval. If you don't know what the fundamentals are--and I don't think we do, empirically--then approval at -10 might help sort it out. — PolitiTweet.org