Nate Silver @NateSilver538
@gelliottmorris They do happen to improve performance in-sample! But in-sample doesn't matter very much when degrees of freedom far exceed the sample size. And there's no magic you can perform to turn in-sample into out-of-sample data, despite a lot of data science BS to the contrary. — PolitiTweet.org
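The degrees-of-freedom point above can be made concrete with a toy sketch (the polynomial-interpolation setup and all numbers here are illustrative, not anything 538 uses): a model with as many free parameters as data points fits the training data perfectly, yet can still miss held-out points badly.

```python
import random

random.seed(0)

def lagrange_fit(xs, ys):
    """Return the unique degree-(n-1) polynomial through n points."""
    def f(x):
        total = 0.0
        for i, (xi, yi) in enumerate(zip(xs, ys)):
            term = yi
            for j, xj in enumerate(xs):
                if j != i:
                    term *= (x - xj) / (xi - xj)
            total += term
        return total
    return f

true_f = lambda x: 2 * x                 # simple underlying relationship
xs = [i / 4 for i in range(8)]           # 8 observations
ys = [true_f(x) + random.gauss(0, 0.5) for x in xs]

model = lagrange_fit(xs, ys)             # 8 parameters for 8 data points

in_sample_err = max(abs(model(x) - y) for x, y in zip(xs, ys))
held_out = [x + 0.125 for x in xs[:-1]]  # new points between the old ones
out_sample_err = max(abs(model(x) - true_f(x)) for x in held_out)

print(in_sample_err)   # essentially zero: a "perfect" in-sample fit
print(out_sample_err)  # larger: the wiggly fit doesn't generalize
```

With 8 parameters and 8 points the in-sample error is zero by construction, which is exactly why in-sample fit tells you nothing here.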
Nate Silver @NateSilver538
@gelliottmorris I think you have this backward. 538's models actually have a good, well-calibrated out-of-sample track record since 2008. We're basically the only model being published this year that has any empirical validation at all, since all the other models from 2016 called it quits. — PolitiTweet.org
Nate Silver @NateSilver538
@gelliottmorris Given the parameters of election data and the priors researchers approach them with, there's not really any such thing as "out-of-sample data" except for unknown data, i.e. truly predicting the future. If you don't agree with me on that, these convos probably won't be productive. — PolitiTweet.org
Nate Silver @NateSilver538
@gelliottmorris @nataliemj10 And in cases with small sample sizes and high complexity/"researcher degrees of freedom", the only type of validation that matters anyway is in true prediction where you publish the forecasts ahead of time without knowing the results. — PolitiTweet.org
Nate Silver @NateSilver538
@gelliottmorris @nataliemj10 On the contrary, it's much better to decide on your hypothesis ahead of time, rather than to test multitudes of hypotheses on small, noisy datasets with high covariance. That will yield much more honest assessments of uncertainty that won't deteriorate much from in-sample to out-of-sample. — PolitiTweet.org
Nate Silver @NateSilver538
@gelliottmorris @nataliemj10 With small sample sizes and highly correlated variables, it is VERY hard to determine empirically which ones are most important, so it is usually a better strategy to create an index. — PolitiTweet.org
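The index idea above can be sketched in a few lines (the variable names and values are made up for illustration): standardize each of the correlated inputs and take an equal-weight average, rather than trying to estimate a separate weight for each variable from a tiny sample.

```python
from statistics import mean, pstdev

def zscores(xs):
    """Standardize a variable to mean 0, standard deviation 1."""
    mu, sd = mean(xs), pstdev(xs)
    return [(x - mu) / sd for x in xs]

# made-up, highly correlated "fundamentals" for six past elections
gdp_growth    = [1.2, 3.1, -0.5, 2.2, 0.8, 2.9]
income_growth = [0.9, 2.8, -0.2, 2.0, 0.7, 2.5]
jobs_growth   = [1.0, 3.0, -0.4, 1.9, 0.9, 2.7]

# equal-weight composite: no attempt to estimate per-variable weights
columns = [zscores(v) for v in (gdp_growth, income_growth, jobs_growth)]
index = [mean(obs) for obs in zip(*columns)]
print([round(v, 2) for v in index])
```

Because each column is standardized first, no single variable's units dominate, and the composite sidesteps the unstable coefficient estimates that regression on collinear inputs would produce.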
Nate Silver @NateSilver538
@gelliottmorris @nataliemj10 The strategy is that you decide in advance which variables to include based on their real-world significance. And then just see how well they do. That is a good way to avoid p-hacking or overfitting. — PolitiTweet.org
Nate Silver @NateSilver538
@nataliemj10 @gelliottmorris I know I can sound grumpy about this stuff at times. But I've also spent A LOT of time thinking about these issues over MANY election cycles and tried out a lot of strategies that did or didn't work. So it's coming from a place of practical experience. — PolitiTweet.org
Nate Silver @NateSilver538
@nataliemj10 @gelliottmorris I'm not taking them to task for making subjective decisions. I'm taking them to task for making decisions that I don't think are the right decisions. — PolitiTweet.org
Nate Silver @NateSilver538
@nataliemj10 I'm also not sure how subjective is being defined here or what the pushback is, exactly. Models (especially on small samples) require judgement and expertise, and I've long railed against the idea that one should assume it's obvious what "the data says". — PolitiTweet.org
Nate Silver @NateSilver538
@nataliemj10 I'm not sure what the problem is here. The NYT is saying she's a historically important VP selection. And if the model is saying a historic VP selection increases variance a teensy tiny bit relative to a generic white guy, that seems appropriate, at least in broad strokes? — PolitiTweet.org
Nate Silver @NateSilver538
@jbarro It would say very similar things about 2016 to what our 2016 model said. Might have been a bit less bouncy in places, but not a huge difference. — PolitiTweet.org
Nate Silver @NateSilver538
RT @FiveThirtyEight: Some notes about the 2020 forecast: Our model produces probabilistic forecasts, as opposed to hard-and-fast binary pre… — PolitiTweet.org
Nate Silver @NateSilver538
RT @laurabronner: We get to the topline eventually, but there, too, the emphasis is on showing how each outcome might look if it occurred.… — PolitiTweet.org
Nate Silver @NateSilver538
RT @laurabronner: The @FiveThirtyEight forecast is up! And I ❤️❤️❤️ the viz decisions here. Rather than a simple topline, it emphasizes the… — PolitiTweet.org
Nate Silver @NateSilver538
@jtlevy Oh no it's way better this way. It was bouncing around between 71 and 72 and I was really rooting for it to land on exactly 71. — PolitiTweet.org
Nate Silver @NateSilver538
Here's my summation of why the model thinks Trump still has decent chances, despite his current poll deficit. Longer thread later once I'm more awake/more people are awake. But for now go check out the VERY cool graphics and art by our team! https://t.co/mFdmS9p8wk — PolitiTweet.org
Nate Silver @NateSilver538
Coincidentally, these are the exact same odds as in our final forecast in 2016!!! (Clinton 71%, Trump 29%) As was also the case in 2016, our model gives Trump a MUCH higher chance than other statistical models. — PolitiTweet.org
Nate Silver @NateSilver538
Our forecast is up!!! It gives Joe Biden a 71% chance of winning and Donald Trump a 29% chance. https://t.co/ajG88SznSA — PolitiTweet.org
Nate Silver @NateSilver538
RT @AmandaBecker: Sarah Palin’s earnest and genuine advice to Kamala Harris tonight warms my cold dead political reporter heart. https://t.… — PolitiTweet.org
Nate Silver @NateSilver538
RT @baseballot: Amazingly, Harris is also the first person from west of the Rockies to appear on a Democratic presidential ticket—top or bo… — PolitiTweet.org
Nate Silver @NateSilver538
Tuesdays are often pretty bad but this is still a pretty unfavorable report even accounting for that, with deaths and cases up from last Tuesday. With that said, some of this may reflect catch-up after weather & reporting problems in many states last week: https://t.co/uB6TwqjIXk — PolitiTweet.org
The COVID Tracking Project @COVID19Tracking
Our daily update is published. States reported 739k tests and 56k cases, as well as 1,326 deaths. This week, we hop… https://t.co/Lgq5DjoCcM
Nate Silver @NateSilver538
US daily numbers via @COVID19Tracking:
Newly reported deaths: Today 1,326 | Yesterday 426 | One week ago (8/4) 1,176
Newly reported cases: T 56K | Y 42K | 8/4 52K
Newly reported tests: T 739K | Y 716K | 8/4 696K
Positive test rate: T 7.5% | Y 5.8% | 8/4 7.4% — PolitiTweet.org
Nate Silver @NateSilver538
Exactly the same thing would be said if the choice were Rice, Klobuchar, perhaps even Whitmer etc. And even Warren would have triggered objections from a small and non-representative but vocal faction of the left. — PolitiTweet.org
Zach Carter @zachdcarter
It's been clear for the past month that whoever Biden picked was going to disappoint the progressive wing of the pa… https://t.co/gb0lebAGft
Nate Silver @NateSilver538
So, one new thing about our model, which launches TOMORROW! ... after major events (e.g. debates, VP picks) it will hedge a bit if there is polling movement to see if that movement can be sustained for a week or two. This should improve accuracy and cut down on the noise some. — PolitiTweet.org
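A minimal sketch of that kind of hedging, assuming a simple linear phase-in over a two-week window (the weighting scheme, window length, and numbers are illustrative guesses, not the actual model's):

```python
def hedged_margin(raw_margin, pre_event_margin, days_since_event, window=14):
    """Phase in a post-event polling swing over `window` days.

    Right after the event the pre-event baseline dominates; once the
    movement has been sustained for the full window, the new polling
    is taken at face value.
    """
    if days_since_event >= window:
        return raw_margin
    trust = days_since_event / window
    return trust * raw_margin + (1 - trust) * pre_event_margin

print(hedged_margin(8.0, 5.0, 2))   # day 2: still close to the old baseline
print(hedged_margin(8.0, 5.0, 14))  # sustained for two weeks: 8.0
```

The effect is that a post-debate or post-VP-pick bounce moves the forecast only gradually, which is one way a model can "hedge" against movement that fades.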
Nate Silver @NateSilver538
In the end, Harris is a fairly obvious choice for the reasons I discussed in my segment this weekend. I never really bought the notion that Biden would pluck a name out of obscurity when he has reasons to be risk-averse and not give voters another unknown. https://t.co/yZgxDV4GqT — PolitiTweet.org
Nate Silver @NateSilver538
Amazing that @perrybaconjr was able to write this so quickly. https://t.co/1qHZyVW0Ij — PolitiTweet.org
Nate Silver @NateSilver538
I'm guessing that Biden will win California now with this Harris pick. — PolitiTweet.org
Nate Silver @NateSilver538
The reason I harp on this so much too is that you *usually* get away with overconfident forecasts, i.e. if your model says something has a 98% chance of happening when it's really 75%, you'll still be right 75% of the time and people will say "lol, what was Nate so worked up about?!?". — PolitiTweet.org
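The arithmetic in that tweet is easy to check with a short simulation, and a proper scoring rule such as the Brier score shows why the overconfidence still costs you in expectation (the 98% and 75% figures come from the tweet; everything else here is illustrative):

```python
import random

random.seed(42)

TRUE_P = 0.75     # the event's real probability
CLAIMED_P = 0.98  # what the overconfident model reports

trials = 10_000
hits = sum(random.random() < TRUE_P for _ in range(trials))
hit_rate = hits / trials
print(round(hit_rate, 3))  # ≈ 0.75: the model is "right" most of the time

# Expected Brier score (lower is better) for each forecaster: the
# overconfident one is punished hard on the ~25% of misses.
brier_overconfident = TRUE_P * (1 - CLAIMED_P) ** 2 + (1 - TRUE_P) * CLAIMED_P ** 2
brier_honest = TRUE_P * (1 - TRUE_P) ** 2 + (1 - TRUE_P) * TRUE_P ** 2
print(round(brier_overconfident, 4))  # 0.2404
print(round(brier_honest, 4))         # 0.1875
```

So the overconfident model does win three times out of four, yet scores measurably worse than an honest 75% forecast once the misses are priced in.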
Nate Silver @NateSilver538
The blindingly obvious lesson of 2016 for election modelers is "it's super easy to build an overconfident model, so think carefully about sources of uncertainty" but sometimes people are endlessly creative in finding ways to avoid the obvious lessons. — PolitiTweet.org