Deleted tweet detection is currently running at reduced
capacity due to changes to the Twitter API. Some tweets that have been
deleted by the tweet author may not be labeled as deleted in the PolitiTweet
interface.
Nate Silver @NateSilver538
"Descents look much slower than ascents" is an important observation in terms of what to expect going forward. Death tolls may decline (hopefully). But we probably won't get a steep, bell-curve-type decline. — PolitiTweet.org
John Burn-Murdoch @jburnmurdoch
NEW: Fri 24 April update of coronavirus trajectories Daily deaths • Still too early to say if US has peaked • Look… https://t.co/kQ3auvtfNC
Nate Silver @NateSilver538
At the same time, I don't know that it's going to be as simple as a neat and tidy fall from a peak. In a number of states, cases are still rising (even after adjusting for increases in test volume). And restrictions are being relaxed in GA and perhaps in other places soon. — PolitiTweet.org
Nate Silver @NateSilver538
Most encouraging numbers in a while. Deaths are still very high, but this is now 3 straight days where deaths were less than the same day a week earlier. Meanwhile, the testing logjam finally seems to have been broken a bit as we're suddenly at ~200K tests per day. — PolitiTweet.org
Nate Silver @NateSilver538
US daily numbers via @COVID19Tracking: Newly-reported deaths: Today: 1,772 Yesterday: 1,886 One week ago (4/17): 2,069 Newly-reported cases: T: 31K Y: 31K 4/17: 31K Newly-reported tests: T: 224K Y: 191K 4/17: 156K Share of tests positive: T: 14% Y: 16% 4/17: 20% — PolitiTweet.org
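The "share of tests positive" figures in the tweet above follow directly from the quoted cases and tests. A quick sketch of that arithmetic, using the rounded numbers as given:

```python
# Check the positivity-rate arithmetic using the rounded figures quoted
# from @COVID19Tracking (cases and tests per reporting day).
days = {
    "today":     {"cases": 31_000, "tests": 224_000},
    "yesterday": {"cases": 31_000, "tests": 191_000},
    "4/17":      {"cases": 31_000, "tests": 156_000},
}
positivity = {day: d["cases"] / d["tests"] for day, d in days.items()}
# Rounded to whole percents, these reproduce the quoted 14% / 16% / 20%.
```

Note that with flat case counts, the falling positivity is driven entirely by the rising test volume.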
Nate Silver @NateSilver538
Or get the @MonmouthPoll people on it. Or @jaselzer. Or @YouGovUS. People who know something about selection and sampling issues and who understand when you put something out there in public, you need to engage in a lot of transparency about methods. — PolitiTweet.org
Nate Silver @NateSilver538
Honestly there should just be an @UpshotNYT / Siena seroprevalence study. — PolitiTweet.org
Nate Silver @NateSilver538
This is separate from the false positives issue which renders the study ~unusable on its own. The study is a mess, overall. You'd think Stanford profs would know better. Still, it's not obvious in which direction the biases from their sampling strategy would run. — PolitiTweet.org
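The false-positives problem mentioned above comes down to mixture arithmetic: the apparent positive rate blends true positives with false positives from the much larger uninfected group. A sketch with invented numbers (not the Santa Clara study's actual figures):

```python
# Illustrative (made-up) numbers showing why false positives can swamp a
# low-prevalence antibody survey.
def apparent_rate(prevalence, sensitivity, specificity):
    # Apparent positives = true positives + false positives.
    return prevalence * sensitivity + (1 - prevalence) * (1 - specificity)

# Assume 1% true prevalence, 80% sensitivity, 98.5% specificity
# (i.e., a 1.5% false positive rate). All values hypothetical.
p, sens, spec = 0.01, 0.80, 0.985
rate = apparent_rate(p, sens, spec)
false_share = (1 - p) * (1 - spec) / rate   # share of positives that are false
```

Under these assumptions, false positives make up more than half of all apparent positives, which is why a small specificity error can render a low-prevalence estimate unusable.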
Nate Silver @NateSilver538
If the Santa Clara study was asking for healthy volunteers and recruiting in wealthy, well-connected neighborhoods, that's definitely going to bias the results, but I'm not sure if it biases them toward overestimation (possibly underestimation instead). https://t.co/QXpat6wUxf https://t.co/qkrH8gMj6v — PolitiTweet.org
Nate Silver @NateSilver538
I was also recently on Talking Politics, my favorite podcast about UK politics. Got very in the weeds here about cross-country comparisons on coronavirus—and how the US compares—and I think a lot of you will like it! https://t.co/yxUBJrmMaO — PolitiTweet.org
Nate Silver @NateSilver538
@kmedved @JeremyTColes The low rates of detection Upstate also put some bounds on what the false positive rate can be. — PolitiTweet.org
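The bounding logic in the tweet above: since apparent positives = p·sens + (1−p)·fpr ≥ (1−p)·fpr, the false positive rate can't exceed the raw positive rate by much when true prevalence is near zero. A minimal sketch, with a hypothetical raw rate:

```python
# A region's raw positive rate caps the test's false positive rate:
# apparent = p*sens + (1-p)*fpr >= (1-p)*fpr, so fpr <= apparent / (1 - p).
def fpr_upper_bound(raw_positive_rate, prevalence_lower_bound=0.0):
    return raw_positive_rate / (1 - prevalence_lower_bound)

# Hypothetical figure: a 2% raw positive rate in a low-prevalence region
# caps the false positive rate at roughly 2%.
bound = fpr_upper_bound(0.02)
```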
Nate Silver @NateSilver538
@TheStalwart The day we have a Twitter argument about seat-reclining is the day we'll truly know we've beat this thing. — PolitiTweet.org
Nate Silver @NateSilver538
(Note: deleted a tweet in this thread referring to "specificity" since my brain always confuses "sensitivity" and "specificity". I'll stick with "false positive rate" and "false negative rate" going forward.) https://t.co/3pfaxkX3gq — PolitiTweet.org
Nate Silver @NateSilver538
In short: these antibody tests are valuable exercises. But they could wind up reasonably far from the mark *in either direction* from the true rates of infection. Read the fine print. The better studies may get you in the ballpark but don't assume they're especially precise. — PolitiTweet.org
Nate Silver @NateSilver538
So this gets tricky. If you hear an antibody test finds that 20% of people had COVID-19, that could be an underestimate because of lags. But if you hear it implies a 0.7% fatality rate, that may *not* be an underestimate since both antibody tests *and* deaths lag infection. — PolitiTweet.org
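The lag-cancellation point above can be made concrete: if seroconversion and death each lag infection by roughly the same ~2 weeks, the ratio of deaths to seropositives can land near the true fatality rate even though each level reflects the epidemic two weeks ago. A toy sketch with invented numbers:

```python
# Toy illustration (all numbers assumed) of why lags can cancel in a
# fatality-rate ratio even though each level is stale.
true_ifr = 0.007
infections_2wk_ago = 1_000_000

seropositives_now = infections_2wk_ago        # antibodies lag ~2 weeks
deaths_now = true_ifr * infections_2wk_ago    # deaths also lag ~2 weeks

implied_ifr = deaths_now / seropositives_now  # the common lag cancels
```

If the two lags differed substantially, the cancellation would be imperfect, but the point stands that the ratio is far less lag-biased than either level on its own.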
Nate Silver @NateSilver538
Of course, *all* COVID-19 data is lagging to various degrees. Notably, deaths lag infections also because it takes some time to pass away + maybe a couple of additional days for that death to officially show up in the state's statistics. — PolitiTweet.org
Nate Silver @NateSilver538
Also, there can be a lag between when these tests are *conducted* and when they're *reported* to the media. This is not a very big issue in the NY study (their program started quite recently). But always check the dates. — PolitiTweet.org
Nate Silver @NateSilver538
So what you're really measuring is not "how many people have had COVID-19 as of now?" but more something like "how many people had COVID-19 as of a couple of weeks ago?" This can bias the numbers downward, especially if you're on the upswing of the epidemic or near the peak. — PolitiTweet.org
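The downward bias described above is easy to quantify under assumed exponential growth: a measurement reflecting two weeks ago captures only a fraction of current cumulative infections. A sketch with invented numbers:

```python
# Toy illustration (assumed numbers): a serosurvey measures infections as of
# ~2 weeks ago, understating the current cumulative total on the upswing.
weekly_growth = 1.5        # assumed weekly growth factor of cumulative infections
measured = 0.10            # seroprevalence reflecting the state 2 weeks ago
current = measured * weekly_growth ** 2   # implied current cumulative rate
undercount = measured / current           # fraction of current infections captured
```

With these assumptions the survey captures under half of current infections; near or past the peak, growth slows and the bias shrinks.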
Nate Silver @NateSilver538
The last issue—lags—is more straightforward, but has been a bit overlooked. It takes some time for antibodies to develop from initial onset of symptoms, apparently a couple of weeks. https://t.co/9WBcYiJVXz — PolitiTweet.org
Alex Washburne @Alex_Washburne
Definitely an underestimate given the lag from onset to seroconversion. Will need to know the test used to know th… https://t.co/zlt8yJuzFZ
Nate Silver @NateSilver538
I'm speculating here. Maybe the selection effects run in the opposite directions than I'm guessing. But these are nontrivial issues and not super easy to correct for. Although, weighting based on demographics may help a bit. New York *is* trying to do that, as best as I can tell. — PolitiTweet.org
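The demographic weighting mentioned above is essentially post-stratification: reweight each demographic cell's positive rate by its population share rather than its share of the sample. A minimal sketch with invented cells and rates:

```python
# Minimal post-stratification sketch (all numbers invented): correct for a
# sample whose age mix differs from the population's.
sample = {                     # cell: (share of sample, positive rate in sample)
    "under_45": (0.30, 0.12),
    "45_plus":  (0.70, 0.20),
}
population_share = {"under_45": 0.55, "45_plus": 0.45}

raw_estimate = sum(s * r for s, r in sample.values())
weighted_estimate = sum(population_share[c] * r for c, (_, r) in sample.items())
```

Weighting fixes imbalance on *observed* traits like age; it cannot correct self-selection on unobserved traits (e.g., signing up because you suspect you were infected).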
Nate Silver @NateSilver538
In studies where people apply in advance to get a test, as in the Santa Clara Co. study, selection bias could be a bigger issue. If someone was trying to get a regular COVID-19 test and couldn't get one, maybe they'd be eager to sign up for these antibody tests as a substitute. — PolitiTweet.org
Nate Silver @NateSilver538
NY does *disclose* results of tests to people who want them, which I think is a mistake and may create more selection bias. OTOH it's not like you could plan your day around getting a test in NY since they set these up on short notice (reportedly sometimes annoying store owners). — PolitiTweet.org
Nate Silver @NateSilver538
On the other hand, I would imagine there is some self-selection at work: You see a station set up in the produce aisle, and maybe you're more curious to get a test if you think you had COVID-19 at some point. — PolitiTweet.org
Nate Silver @NateSilver538
Having a station in a public place like a grocery store is probably selecting for relatively healthy people, which could lead to an underestimate of the infection rate. At a minimum, anybody who is *currently* experiencing COVID-19 symptoms hopefully will not be grocery shopping. — PolitiTweet.org
Nate Silver @NateSilver538
Also, it's hard to do a TRULY random study. For instance, NY's process seems to be something like: they set up stations at randomly-selected grocery stores. But what happens *within* those stores is less clear and there is probably some degree of self-selection. — PolitiTweet.org
Nate Silver @NateSilver538
Another one: some studies are only conducted among purportedly asymptomatic people. If, say, 10% of *asymptomatic* people in a given population test positive, the overall rate of disease (symptomatic + asymptomatic people) will be higher. — PolitiTweet.org
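The asymptomatic-only point above is a weighted-average argument: the population-wide rate mixes the surveyed asymptomatic group with the (more heavily infected) symptomatic group. A sketch with invented numbers:

```python
# Invented numbers: a survey of asymptomatic people understates the
# population-wide rate, which also includes symptomatic people.
asymptomatic_rate = 0.10
symptomatic_rate = 0.60     # assumed: symptomatic people test positive far more often
symptomatic_share = 0.05    # assumed share of the population currently symptomatic

overall_rate = (symptomatic_share * symptomatic_rate
                + (1 - symptomatic_share) * asymptomatic_rate)
```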
Nate Silver @NateSilver538
One thing right off the bat is that many of these studies don't include minors. Since children are less likely to have COVID-19, excluding them will tend to bias the reported infection rate upward from the true rate throughout the population. — PolitiTweet.org
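The same weighted-average logic applies to excluding minors, but in the opposite direction: if children are infected (or detected) at lower rates, an adults-only sample overstates the population-wide rate. A sketch with invented rates:

```python
# Invented numbers: adults-only sampling overstates the population-wide rate
# when children have lower infection (or detection) rates.
adult_rate, child_rate = 0.14, 0.06   # assumed rates
child_share = 0.22                    # assumed share of minors in the population
population_rate = (1 - child_share) * adult_rate + child_share * child_rate
```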
Nate Silver @NateSilver538
The next big issue is sampling. Is the study even purporting to capture a random sample of the population? If so, how successful is its strategy for doing that? And what are the likely biases? — PolitiTweet.org
Nate Silver @NateSilver538
I've also seen some tests that produce a third, "borderline" category instead of just positive and negative. It just goes to show that these tests are not exact & to some extent you face a trade-off between false positives and false negatives. https://t.co/hr3TBdaBAi — PolitiTweet.org
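The trade-off noted above comes from where the test draws its positivity cutoff on an underlying continuous signal: lowering the cutoff catches more true infections but misclassifies more uninfected people, and vice versa. A sketch with invented score distributions:

```python
# Threshold trade-off sketch (scores invented): raising the positivity
# cutoff trades false positives for false negatives.
negatives = [0.1, 0.2, 0.3, 0.45, 0.55]   # assumed scores, truly uninfected
positives = [0.4, 0.6, 0.7, 0.8, 0.9]     # assumed scores, truly infected

def rates(threshold):
    fp = sum(s >= threshold for s in negatives) / len(negatives)
    fn = sum(s < threshold for s in positives) / len(positives)
    return fp, fn

fp_low, fn_low = rates(0.35)    # lenient cutoff: more false positives
fp_high, fn_high = rates(0.65)  # strict cutoff: more false negatives
```

A "borderline" category amounts to using two cutoffs and refusing to classify the scores between them.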
Nate Silver @NateSilver538
The test used by NYC claims to have 93% sensitivity = a 7% false negative rate. That's a nontrivial problem. It may partly counteract the issue with false positives or even outweigh it, depending on the false positive rate and the true rate of infection. https://t.co/MUE08IzKT1 — PolitiTweet.org
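Both error rates can be corrected for jointly with the standard Rogan-Gladen adjustment, which backs out true prevalence from the apparent positive rate given the test's sensitivity and specificity. A sketch, with an assumed apparent rate and specificity (only the 93%/7% figure comes from the tweet above):

```python
# Standard Rogan-Gladen correction: recover true prevalence from an
# apparent positive rate given test sensitivity and specificity.
def rogan_gladen(apparent, sensitivity, specificity):
    return (apparent + specificity - 1) / (sensitivity + specificity - 1)

# Assumed example: 20% apparent positive, 93% sensitivity (7% false
# negatives), 97% specificity (3% false positives).
true_prev = rogan_gladen(0.20, 0.93, 0.97)
```

Here the false negatives outweigh the false positives, so the corrected prevalence comes out slightly above the apparent 20%; at low apparent rates the correction runs the other way.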
Nate Silver @NateSilver538
However, this should be less of a problem in places that have had a medium-to-bad epidemic; say, Belgium or London—or certainly NYC. Studies in those places should be more reliable, therefore. Further, there is *also* the issue of false negatives. — PolitiTweet.org