Nate Silver @NateSilver538
@BenjySarlin @joshtpm It's not clear to me we have fewer restrictions. Both the US Northeast and Europe would seem to demonstrate that a combination of *some* restrictions + mask-wearing + doing stuff outdoors in summer + 10-25% seroprevalence (depending on the metro) may be enough for suppression. — PolitiTweet.org
Nate Silver @NateSilver538
@joshtpm @BenjySarlin Hong Kong is having a resurgence in cases now! So are Japan and Australia. Israel, India and South Africa are having huge problems. Large parts of South America have more or less given up. — PolitiTweet.org
Nate Silver @NateSilver538
@BenjySarlin I'm not super persuaded by Europe, between some countries having a second wave (or a first wave, if they didn't really have one) and some of the ones that aren't possibly benefiting from comfortable outdoor summer weather and, in some cases, fairly high seroprevalence in key metros. — PolitiTweet.org
Nate Silver @NateSilver538
Without those technological improvements, either lockdowns just *postpone* the inevitable (since no matter how diligent your lockdown, cases will start increasing again once you relax it) *or* you're basically arguing for indefinite lockdowns. — PolitiTweet.org
Nate Silver @NateSilver538
I wish there were more explicit acknowledgment that one of the most persuasive rationales for continued distancing/lockdowns is indeed to buy time for technological improvements. Not just vaccines but also therapeutics, rapid tests, greater scientific understanding. — PolitiTweet.org
Nate Silver @NateSilver538
We try as much as possible to aim for *unconditional* forecasts, but of course there are some limits to this. We don't account for the chance that one candidate drops out, for instance. Our NBA playoff odds (pre-COVID) didn't account for the season being interrupted by COVID. — PolitiTweet.org
Nate Silver @NateSilver538
The other philosophical point here is that you have to decide whether your forecast is a conditional forecast (holds true given certain "reasonable" assumptions) or an unconditional one (holds true given "real world" uncertainties and perhaps even some "unknown unknowns"). — PolitiTweet.org
Nate Silver @NateSilver538
In terms of the magnitude, we're assuming that the additional error from bucket (2) will, on average, change the outcome in 1-2 states. But again, there will be occasional simulations where it will have a larger effect along with many others where it has little/no effect. — PolitiTweet.org
Nate Silver @NateSilver538
We'll also assume that this additional error could either affect all states systematically, or could impact states on a one-off basis. One upshot is that if I were a campaign, I might play a fairly broad map, in case things get weird in some state that's part of my path to 270. — PolitiTweet.org
Nate Silver @NateSilver538
Because the sample size for drawing this inference is basically n=0, we'll draw this additional error term from a distribution containing rather fat tails. What this means is that *most* of the time, we assume this will have little impact but there's SOME chance it has a BIG one. — PolitiTweet.org
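The idea in these two tweets (a fat-tailed error term that is usually small but occasionally huge, with both a systematic national component and one-off state components) can be sketched in a few lines. This is a toy illustration, assuming a Student's t with low degrees of freedom as the fat-tailed distribution; the actual distribution and scales 538 uses are not stated here.

```python
import numpy as np

rng = np.random.default_rng(538)

def covid_error(n_sims, n_states, df=3, national_scale=1.0, state_scale=1.0):
    """Fat-tailed 'COVID error' term: a shared national shock plus
    independent one-off state shocks, both drawn from a Student's t
    with few degrees of freedom. Low df means most draws are modest
    but occasional draws are extreme. (The t-distribution and the
    scales here are illustrative stand-ins, not 538's actual choices.)
    """
    national = national_scale * rng.standard_t(df, size=(n_sims, 1))
    one_off = state_scale * rng.standard_t(df, size=(n_sims, n_states))
    return national + one_off  # broadcasts to (n_sims, n_states)

err = covid_error(100_000, 50)
# Most draws are small; the heavy tails supply the occasional big shock.
print("share of |errors| beyond 6 units:", np.mean(np.abs(err) > 6))
# The shared national term makes errors correlate across states.
print("cross-state correlation:", np.corrcoef(err[:, 0], err[:, 1])[0, 1])
```

The shared `national` draw is what lets the error "affect all states systematically," while the independent `one_off` draws cover the one-off case.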
Nate Silver @NateSilver538
So we're going to try to account for both types of error. Bucket 2) involves a LOT of guesswork. Technically, it will be based on how much forecast error increases historically when there are big changes in turnout. But to be honest this is just an educated guess. — PolitiTweet.org
Nate Silver @NateSilver538
Bucket 2) is a lot trickier, I think, since there are no real solid precedents for COVID. (Maybe something like Hurricane Sandy, but that only affected a couple of states.) But there have been too many issues IMO with voting in these late-stage primaries to ignore it. — PolitiTweet.org
Nate Silver @NateSilver538
Bucket 1) is easy enough to handle empirically. You can look at whether election cycles with lots of major news developments tend to be associated with more polling volatility and/or polling error, and indeed they do, although note the usual issues with small sample sizes. — PolitiTweet.org
Nate Silver @NateSilver538
Basically, the additional uncertainty introduced into an election forecast by COVID-19 falls into two buckets: 1) It means there's a lot of *news* and economic volatility. 2) It could screw with the mechanics of voting, vote-counting or polling in unpredictable ways. — PolitiTweet.org
Nate Silver @NateSilver538
This report is actually ... not so bad by the (completely awful) standards of recent weeks. Cases down slightly week-over-week despite a lot of tests, making for the lowest positive test rate since July 5. Remains to be seen if this is the start of real improvement or a fluke. — PolitiTweet.org
Nate Silver @NateSilver538
US daily numbers via @COVID19Tracking:
Newly reported deaths: Today 558 | Yesterday 1,037 | One week ago (7/19) 526
Newly reported cases: T 62K | Y 65K | 7/19 65K
Newly reported tests: T 856K | Y 798K | 7/19 799K
Positive test rate: T 7.2% | Y 8.2% | 7/19 8.1%
— PolitiTweet.org
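The positive test rate quoted here is just newly reported cases divided by newly reported tests. A quick check of the figures (cases and tests are rounded to the nearest 1K in the tweet, so recomputed rates can differ from the quoted ones by about 0.1 point):

```python
# Positive test rate = newly reported cases / newly reported tests.
# Figures from the tweet, rounded to the nearest 1K.
days = {
    "Today":     {"cases": 62_000, "tests": 856_000},
    "Yesterday": {"cases": 65_000, "tests": 798_000},
    "7/19":      {"cases": 65_000, "tests": 799_000},
}

for label, d in days.items():
    rate = d["cases"] / d["tests"]
    print(f"{label}: {rate:.1%}")
```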
Nate Silver @NateSilver538
@BrendanNyhan LOL. Yeah, I don't know for sure but that zone is where our model will end up. (Also, The Economist is at like 92% for Biden in the Electoral College, I think, and 99% in the popular vote.) — PolitiTweet.org
Nate Silver @NateSilver538
@BrendanNyhan Too low on Biden. — PolitiTweet.org
Nate Silver @NateSilver538
@joshtpm @mattyglesias Yeah, if you have like 5 weeks of basically unchecked spread at R=2.5 or R=3.0 starting on ~February 10 and continuing until ~March 15, that's going to give you a LOT of cases ... somewhere on the order of 1000x whatever your original cluster was. — PolitiTweet.org
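The "order of 1000x" figure is a simple exponential-growth calculation. Assuming a serial interval of roughly 5 days (a common early-2020 estimate; the tweet doesn't specify one), 5 weeks is about 7 generations of transmission, and R to the 7th power brackets 1000 for R between 2.5 and 3:

```python
# Back-of-envelope check of the "~1000x" growth claim.
# The ~5-day serial interval is an assumption, not from the tweet.
SERIAL_INTERVAL_DAYS = 5
weeks = 5
generations = weeks * 7 / SERIAL_INTERVAL_DAYS  # 7 generations

for r in (2.5, 3.0):
    growth = r ** generations
    print(f"R={r}: about {growth:,.0f}x the original cluster")
```

R=2.5 gives roughly 610x and R=3.0 roughly 2,187x, so "somewhere on the order of 1000x" is right in the middle of that range.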
Nate Silver @NateSilver538
@BrendanNyhan More like 60-65% if you account for the vig. We are not done with our model yet but I think that is probably mispriced, yes. And the high-profile events are often the worst (most mispriced) ones. — PolitiTweet.org
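Accounting for the vig in a two-way market is a renormalization: convert both sides' prices to implied probabilities and rescale so they sum to 1. A sketch with hypothetical prices, since the tweet doesn't give the actual quotes:

```python
def devig(price_yes: float, price_no: float) -> float:
    """No-vig implied probability of 'yes' in a two-way market.

    Prices on the two sides typically sum to more than $1; the
    excess (the overround, or vig) is removed by renormalizing.
    """
    return price_yes / (price_yes + price_no)

# Hypothetical quotes -- the tweet doesn't give the actual prices:
# 'yes' trading at 62 cents, 'no' at 44 cents (6 cents of overround).
p = devig(0.62, 0.44)
print(f"no-vig probability: {p:.1%}")
```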
Nate Silver @NateSilver538
So although swing states come and go (Vermont was once a swing state!), one thing you see here is that the Electoral College really does create a bias toward people in smaller states, which have more EVs per voter. — PolitiTweet.org
Nate Silver @NateSilver538
Here are top states for average VPI since 1968 based on 538 backtested simulations:
1. New Hampshire 3.9
2. New Mexico 3.0
3. Nevada 2.8
4. Iowa 2.5
5. Ohio 2.5
6. Vermont 2.2
7. Maine 2.2
8. Colorado 2.1
9. Pennsylvania 2.1
10. Delaware 2.1
11. Wisconsin 2.0
12. Missouri 1.9
— PolitiTweet.org
Nate Silver @NateSilver538
Here's a calculation we call the Voter Power Index: the states where, in elections since 1968, a voter had the greatest chance of determining the winner of the Electoral College. For instance, a VPI of 2.5 means a voter there was 2.5 times as influential as the average American voter. — PolitiTweet.org
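A Voter Power Index of this kind can be illustrated with a small Monte Carlo sketch: simulate many elections, check how often flipping each state flips the Electoral College, weight by the chance the state is tied per voter, and normalize to the average. Everything below (state names, electoral votes, turnout, the margin model) is made up for illustration; it is not 538's methodology.

```python
import numpy as np

rng = np.random.default_rng(270)

# Toy map: EVs and turnout are invented, chosen so the small state
# has more EVs per voter (the Electoral College bias noted above).
STATES = {
    "Smallia": {"ev": 3,  "voters": 300_000},
    "Midvale": {"ev": 10, "voters": 2_000_000},
    "Bigland": {"ev": 12, "voters": 3_000_000},
}
TO_WIN = sum(s["ev"] for s in STATES.values()) // 2 + 1  # 13 of 25

def voter_power(n_sims=200_000, margin_sd=0.05):
    """Monte Carlo sketch of a Voter Power Index.

    A voter is pivotal when (a) their state would otherwise be tied
    and (b) flipping that state flips the Electoral College winner.
    (b) is estimated by simulation; (a) is approximated by the margin
    density at zero divided by turnout. Toy illustration only.
    """
    names = list(STATES)
    evs = np.array([STATES[n]["ev"] for n in names])
    margins = rng.normal(0.0, margin_sd, size=(n_sims, len(names)))
    wins = margins > 0
    dem_ev = wins.astype(int) @ evs
    density_at_zero = 1.0 / (margin_sd * np.sqrt(2.0 * np.pi))
    power = {}
    for i, name in enumerate(names):
        # (b): does flipping state i change who reaches TO_WIN?
        flipped = dem_ev + np.where(wins[:, i], -evs[i], evs[i])
        decisive = (dem_ev >= TO_WIN) != (flipped >= TO_WIN)
        # (a): rough chance one ballot breaks a tie in this state.
        p_tie_per_voter = density_at_zero / STATES[name]["voters"]
        power[name] = decisive.mean() * p_tie_per_voter
    avg = np.mean(list(power.values()))
    return {n: p / avg for n, p in power.items()}  # 1.0 = average voter

vpi = voter_power()
for name, v in sorted(vpi.items(), key=lambda kv: -kv[1]):
    print(f"{name}: {v:.2f}x")
```

Even in this toy map the small state comes out on top, because its voters share fewer ballots per electoral vote.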
Nate Silver @NateSilver538
@thecity2 Yeah. Like, even if you badly misprice things such that you have a -15% ROI, that's really not *that* expensive if you're only making bets once every 4 years. — PolitiTweet.org
Nate Silver @NateSilver538
IMO this reflects a combination of: * Modeling being fairly challenging (so most traders don't have good priors to anchor to) * Traders are emotionally invested in political outcomes * Herd mentality very strong in politics * Markets not super liquid — PolitiTweet.org
Nate Silver @NateSilver538
I don't think people realize how dumb and sometimes even irrational the prices are at political betting markets as compared to almost every other type of market (which is not to say other markets are always rational, either). — PolitiTweet.org
Nate Silver @NateSilver538
@kmedved Some of the key parameters are actually calibrated off of elections since 1936 (error associated with final polling averages) or 1880 (economic + incumbency priors). But we can only run the robust version of the model (with lots of state polling) for more recent elections. — PolitiTweet.org
Nate Silver @NateSilver538
@kmedved Everything is in-sample, more or less, but there are a handful of decently meaningful corrections we make to reflect deterioration in out-of-sample performance, and the odds listed here reflect those. — PolitiTweet.org
Nate Silver @NateSilver538
1968: Nixon 73% to win
1972: Nixon 99%
1976: Carter 55%
1980: Reagan 89%
1984: Reagan 96%
1988: Dukakis 61%
1992: Clinton 69%
1996: Clinton 93%
2000: Bush 73%
2004: Bush 55%
2008: Obama 67%
2012: Obama 68%
2016: Clinton 58%
— PolitiTweet.org
Nate Silver @NateSilver538
Backtesting isn't super informative for any number of reasons. But just for "fun", here are the Electoral College odds the current build of the 538 model would have given 100 days out (see next tweet). — PolitiTweet.org