Deleted tweet detection is currently running at reduced capacity due to changes to the Twitter API. Some tweets that have been deleted by the tweet author may not be labeled as deleted in the PolitiTweet interface.

Showing page 261 of 910.


Nate Silver @NateSilver538

This seems like good news on medium-term immunity, as it may be the first study to have followed patients this long (6 months). But of course it's not getting nearly as much media attention as those ¡AnTiBoDiEs FaDe AfTeR 3 MoNThS! headlines. https://t.co/aPUkPM0aqL https://t.co/p7Dv8oLyeo — PolitiTweet.org

Posted Aug. 13, 2020 Hibernated

Nate Silver @NateSilver538

Thread and article on the awesome design behind our forecast. — PolitiTweet.org

anna wiederkehr @wiederkehra

They let me write for the site about how we made @NateSilver538's 2020 forecast model into pictures. You can read i… https://t.co/a5LX9ZQ9q5

Posted Aug. 13, 2020 Hibernated

Nate Silver @NateSilver538

RT @FiveThirtyEight: How we designed the look of our 2020 forecast —> https://t.co/ogr6OT6U20 — PolitiTweet.org

Posted Aug. 13, 2020 Retweet Hibernated

Nate Silver @NateSilver538

@BrendanNyhan It's a fairly broadly applicable critique and I'm not looking to pick fights with any individual people. It's kind of BS to say that unless I call out individuals I must have a "straw man". — PolitiTweet.org

Posted Aug. 13, 2020 Hibernated

Nate Silver @NateSilver538

It seems like a lot of contemporary data science splits the difference in the worst possible way by focusing on shiny new techniques, without either developing the mathematical intuitions behind them, or doing much to improve understanding of the real-world problems. — PolitiTweet.org

Posted Aug. 12, 2020 Hibernated

Nate Silver @NateSilver538

Statistical models are mathematical representations of the real world. So if you lose sight of either the real world *or what the math you're applying is actually doing*, your model will be prone to failure. — PolitiTweet.org

Posted Aug. 12, 2020 Hibernated

Nate Silver @NateSilver538

* It's not completely random, there are a few constraints, but close enough that very weird maps will come up once in a while. — PolitiTweet.org

Posted Aug. 12, 2020 Hibernated

Nate Silver @NateSilver538

We randomly* pick from among 40,000 simulations to show as the sample maps so there are going to be weird 1-in-10,000 or 1-in-40,000 outliers sometimes! That's part of the "fun" of our new interactive. — PolitiTweet.org

Jacob Smith @jacobfhsmith

In what world do LA and GA go blue as ME, NH, and NC go red? https://t.co/gPXMdr5F6T

Posted Aug. 12, 2020 Hibernated
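The sampling mechanism described in the tweet above can be sketched in a few lines of Python. This is an illustration only: the pool size matches the 40,000 simulations Silver mentions, but the count of "weird" outlier maps and the labels are hypothetical, not FiveThirtyEight's actual data or code.

```python
import random

# Hypothetical pool of 40,000 simulated outcomes, of which a handful
# (here 4) are "weird" 1-in-10,000-style maps.
simulations = ["typical"] * 39_996 + ["weird"] * 4

# The interactive picks one simulation uniformly at random to display,
# so an outlier map surfaces roughly once per 10,000 views.
sample_map = random.choice(simulations)

# Over many views, outliers appear at about their true frequency.
draws = [random.choice(simulations) for _ in range(100_000)]
weird_rate = draws.count("weird") / len(draws)  # expected around 0.0001
```

The point is that showing raw draws, rather than only the averaged forecast, guarantees that rare tail scenarios occasionally appear on screen.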

Nate Silver @NateSilver538

Note that **if** he holds his current lead, Biden's win probability will begin to increase in our forecast with the passage of time. But that will be a very gradual effect at first and then might become more noticeable around, say, mid-to-late September. — PolitiTweet.org

Posted Aug. 12, 2020 Hibernated

Nate Silver @NateSilver538

We've also run our forecast retroactively back to June. If you squint, it's shown a *very* modest improvement in Trump's chances over that period. But that's mostly because economic indicators have looked better lately, not the polls. https://t.co/ajG88SznSA https://t.co/ldK6pHTu4E — PolitiTweet.org

Posted Aug. 12, 2020 Hibernated

Nate Silver @NateSilver538

Not really much sign that the national polls are tightening, with Biden now back up +8.4, though I think you could argue that the recent round of state polls have been a bit better for Trump. https://t.co/cy51vc5isJ https://t.co/v5bGS2qUN9 — PolitiTweet.org

Posted Aug. 12, 2020 Hibernated

Nate Silver @NateSilver538

@gelliottmorris @SethS_D So if one poll weighted for education and 100 others did not, it would adjust the 100 polls toward that one poll? I'm not saying I dislike it, it's not that different than a house effects adjustment. Could be smart. But it does smack of fighting the last war a bit. — PolitiTweet.org

Posted Aug. 12, 2020 Hibernated

Nate Silver @NateSilver538

@gelliottmorris @SethS_D So if no polls used education weighting, you'd shift all of the polls toward Trump? — PolitiTweet.org

Posted Aug. 12, 2020 Hibernated

Nate Silver @NateSilver538

@gelliottmorris @SethS_D It's incorporated into overall uncertainty. The model basically assumes that polls are a pretty rough guide until you're clear of the conventions by ~30 days or so. — PolitiTweet.org

Posted Aug. 12, 2020 Hibernated

Nate Silver @NateSilver538

@gelliottmorris @SethS_D The convention bounce adjustments arguably are a bit hackish. But that's precisely why one needs to be wary of overconfident models! Models make lots of little hacks to improve in-sample fit that usually result in deterioration in out-of-sample performance! — PolitiTweet.org

Posted Aug. 12, 2020 Hibernated

Nate Silver @NateSilver538

@gelliottmorris @SethS_D Yes, I think it's extremely dangerous to think you know what direction the polls will be biased in, or that the biases will carry over in a hugely predictable way from election to election. — PolitiTweet.org

Posted Aug. 12, 2020 Hibernated

Nate Silver @NateSilver538

@gelliottmorris @SethS_D But the broader question is: Do you think your model describes the real-world probability that Biden wins? (Assuming the election is not stolen, both candidates make it to election day etc.) Or is it some sort of conditional prediction? — PolitiTweet.org

Posted Aug. 12, 2020 Hibernated

Nate Silver @NateSilver538

@gelliottmorris @SethS_D "Account[ing] for partisan non-response and weighting problems in polls" seems like a pretty hack-ish way to backfit the 2016 result better. It's exactly the sort of thing that could lead to overfitting out of sample. — PolitiTweet.org

Posted Aug. 12, 2020 Hibernated

Nate Silver @NateSilver538

@gelliottmorris @SethS_D I'm not a huge fan of betting markets. But you don't think there's *any* chance that your model is more than "5% too high" when betting markets only have Biden at 60%? And the only surviving model (ours) with an out-of-sample track record has him at 70%? — PolitiTweet.org

Posted Aug. 12, 2020 Hibernated

Nate Silver @NateSilver538

@gelliottmorris @SethS_D You may not make the same mistakes as models did in 2016 or as Wang did in 2004. But you may make different mistakes. Given the nature of election data (small n's, lots of covariance), it's way easier to make mistakes that result in overconfidence than under-confidence. — PolitiTweet.org

Posted Aug. 12, 2020 Hibernated
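The overconfidence mechanism Silver describes in the tweet above, correlated errors treated as independent, can be made concrete with a small Monte Carlo sketch. Every number here (a 5-point lead in 10 interchangeable states, a 4-point total polling error, a 3-point shared national component) is hypothetical, chosen only to show the direction of the effect, and this is not FiveThirtyEight's model.

```python
import random

random.seed(0)
N_SIMS = 20_000
STATES = 10
LEAD = 5.0      # hypothetical lead in every state, in points
TOTAL_SD = 4.0  # hypothetical total polling error per state, in points

def win_prob(shared_sd):
    """P(candidate carries a majority of the states), splitting each
    state's error into a shared national part and an independent part."""
    state_sd = (TOTAL_SD**2 - shared_sd**2) ** 0.5
    wins = 0
    for _ in range(N_SIMS):
        national = random.gauss(0, shared_sd)  # error common to all states
        margins = [LEAD + national + random.gauss(0, state_sd)
                   for _ in range(STATES)]
        wins += sum(m > 0 for m in margins) > STATES / 2
    return wins / N_SIMS

p_independent = win_prob(shared_sd=0.0)  # errors treated as independent
p_correlated = win_prob(shared_sd=3.0)   # most error shared nationally
# Treating the errors as independent yields the higher, overconfident number.
```

With these made-up inputs, ignoring the covariance between states pushes the win probability toward near-certainty, which is the sense in which small-n, high-covariance election data makes overconfidence the easier mistake.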

Nate Silver @NateSilver538

@gelliottmorris @SethS_D I would say the empirical track record of election models being overconfident, e.g. several models in 2016, Sam Wang with Kerry at 97% in 2004, the often disastrous performance of "fundamentals" models out of sample, is relevant here. https://t.co/uzhRzj9s2F — PolitiTweet.org

Posted Aug. 12, 2020 Hibernated

Nate Silver @NateSilver538

@gelliottmorris @SethS_D For context, 97% is what we get for Biden's popular vote win probability **if we run our model as a now-cast**. So your model is basically as confident on Aug. 12 as ours will be on Election Day. — PolitiTweet.org

Posted Aug. 12, 2020 Hibernated

Nate Silver @NateSilver538

@gelliottmorris Not really. A Biden win in a sample size of n=1 wouldn't provide all that much info to distinguish a "Biden is 90%" hypothesis from a "Biden is 70%" hypothesis. A Trump win would provide slightly more, but still not a ton. — PolitiTweet.org

Posted Aug. 12, 2020 Hibernated
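The n=1 point in the tweet above is just Bayes' rule. A quick sketch, with an illustrative 50/50 prior over the two hypotheses (the prior is an assumption, not anything Silver states), shows the asymmetry:

```python
# Two hypotheses about Biden's true win probability, equal prior weight.
prior = {"p90": 0.5, "p70": 0.5}
win_prob = {"p90": 0.90, "p70": 0.70}

def posterior(biden_wins):
    """Posterior over the two hypotheses after observing one election."""
    like = win_prob if biden_wins else {h: 1 - p for h, p in win_prob.items()}
    unnorm = {h: prior[h] * like[h] for h in prior}
    total = sum(unnorm.values())
    return {h: v / total for h, v in unnorm.items()}

after_biden_win = posterior(True)   # p90: 0.45 / 0.80 = 0.5625
after_trump_win = posterior(False)  # p90: 0.05 / 0.20 = 0.25
```

A Biden win nudges the 90% hypothesis from 0.50 to only 0.5625, while a Trump win is the more informative outcome, dropping it to 0.25, still far from settling the question.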

Nate Silver @NateSilver538

@gelliottmorris It's captured in head-to-head polls, though. And using a variable like approval ratings that is extremely collinear with head-to-head polls as a prior is likely to lead to some ugly model specifications that will be fragile out of sample. — PolitiTweet.org

Posted Aug. 12, 2020 Hibernated

Nate Silver @NateSilver538

@gelliottmorris Talk about begging the question? Approval ratings are the dependent variable, more or less. — PolitiTweet.org

Posted Aug. 12, 2020 Hibernated

Nate Silver @NateSilver538

@SethS_D @gelliottmorris Yeah, that's what gets me too. If you just say "let's just keep it simple and look at all the years where we have polling", you'd wind up with probabilities that look a lot like 538's. And I think that's an important sanity check. — PolitiTweet.org

Posted Aug. 12, 2020 Hibernated

Nate Silver @NateSilver538

@gelliottmorris It's somewhere around 11 points of vote share / 22 points of vote margin. It's high. — PolitiTweet.org

Posted Aug. 12, 2020 Hibernated

Nate Silver @NateSilver538

@gelliottmorris @SethS_D That's sort of begging the question, though. We don't know what the rest of the cycle will look like. Polls were EXTREMELY stable in this year's D primary before they started to become EXTREMELY volatile. — PolitiTweet.org

Posted Aug. 12, 2020 Hibernated

Nate Silver @NateSilver538

@gelliottmorris You're basically applying a lot of techniques that are (sometimes) appropriate for large data sets to a small data set and it's leading you down some weird paths. — PolitiTweet.org

Posted Aug. 12, 2020 Hibernated

Nate Silver @NateSilver538

@gelliottmorris You're still not getting it. Given the nature of election data, you don't want to do too much optimization; that's how you wind up with an overconfident model. I don't really care that much what the model would have said in elections where I already know the answer. — PolitiTweet.org

Posted Aug. 12, 2020 Hibernated