Face masks effectively limit the probability of SARS-CoV-2 transmission

This page summarizes the projects mentioned and recommended in the original post on news.ycombinator.com

  • ptti

    Population-wide Testing, Tracing and Isolation Models

  • Error bars would be nice. They're MIA in large swathes of COVID-related research. I've read a lot of COVID papers in the past year, and this paper is typical of the field. Things you should expect to see when reading epidemiology literature:

    1. Statistical uncertainty is normally ignored. They can and will tell politicians to adopt major policy changes on the back of a single dataset with 20 people in it. In the rare cases when they bother to include error bars at all, they are usually so wide as to be useless. In many other fields researchers debate P-hacking and what threshold of certainty should count as a significant finding. Many people observe that the standard of P=0.05 in e.g. psychology is too lenient, because it means 1 in 20 studies will yield significant-but-untrue findings by chance alone. Compared to those debates epidemiology is in the stone age: any claim that can be read into any data is considered significant. (A toy illustration of both issues appears after this list.)

    2. Rampant confusion between models and reality. The top-rated comment on this thread observes that the paper doesn't seem to test its model's predictions against reality, yet it makes factual claims about the world. No surprise there; public health papers do that all the time. No one except out-of-field skeptics actually judges epidemiological models by their predictive power. Epidemiologists admit this problem exists, but public health has become so corrupt that they argue being able to correctly predict things is not a fair way to judge a public health model [1], while governments should still implement whatever policies the models say are required. It's hard to get more unscientific than culturally rejecting the idea that science is about predicting the natural world, but multiple published papers in this field have argued exactly that. A common trick is "validating" a model against other models [2]; a sketch of what an actual predictive test looks like appears after the references below.

    3. Inability to do maths. Setting up a model with reasonable assumptions is one thing, but do they actually solve the equations correctly? The Ferguson model from Imperial College, a group we're widely assured is one of the world's top teams of epidemiologists, was written in C and filled with race conditions and out-of-bounds reads that caused the model to totally change its predictions due to timing differences in thread scheduling, different CPUs/compilers, etc. These differences were large, e.g. a difference of 80,000 deaths predicted by May for the UK [3]. Nobody in academia saw any problem with this and, worse, some researchers argued that such errors didn't matter because they just ran the model a bunch of times and averaged the results. This confuses the act of predicting the behaviour of the world with the act of measuring it; see point 2. (A toy demonstration of this class of bug follows the concluding paragraph below.)

    4. Major logic errors. Assuming correlation implies causation is totally normal. Other fields use sophisticated approaches to try to control for confounding variables; epidemiology doesn't. Circular logic is a lot more common than normal, for some reason.
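
    To make the arithmetic in point 1 concrete, here is a small stdlib-only Python sketch. The 8-of-20 study and the simulated two-group trials are hypothetical numbers invented for illustration, not data from the paper:

      import math
      import random

      # (a) How wide is a 95% confidence interval from a 20-person study?
      # Suppose 8 of 20 subjects show the effect of interest (hypothetical numbers).
      n, k = 20, 8
      p_hat = k / n
      half_width = 1.96 * math.sqrt(p_hat * (1 - p_hat) / n)  # normal approximation
      print(f"estimate {p_hat:.2f}, 95% CI [{p_hat - half_width:.2f}, {p_hat + half_width:.2f}]")
      # -> roughly [0.19, 0.61]: consistent with anything from a rare effect to a majority one.

      # (b) With a significance threshold of p = 0.05, studies of a truly null
      # effect still come out "significant" about 1 time in 20.
      random.seed(0)
      trials, false_positives = 10_000, 0
      for _ in range(trials):
          # Two groups of 10 drawn from the same distribution: the treatment does nothing.
          a = [random.gauss(0, 1) for _ in range(10)]
          b = [random.gauss(0, 1) for _ in range(10)]
          diff = sum(a) / 10 - sum(b) / 10
          se = math.sqrt(1 / 10 + 1 / 10)  # known unit variance, so a z-test is exact
          if abs(diff / se) > 1.96:
              false_positives += 1
      print(f"false-positive rate: {false_positives / trials:.3f}")  # ~0.05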

    None of these problems stops papers from being published by supposedly reputable institutions in supposedly reputable journals. After reading or scan-reading about 50 epidemiology papers, including some older papers from 10 years ago, I concluded that not a single thing from this field can be trusted. Life is too short to examine literally every paper making every claim, but if you take a sample and nearly all of them contain basic errors or what is clearly actual fraud, then it seems fair to conclude the field has no real standards.
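
    Point 3 describes a scheduling-dependent bug class. The stdlib-only Python toy below shows the same class of bug, an unsynchronized read-modify-write shared across threads; it is a generic illustration, not code from covid-sim, the thread and iteration counts are arbitrary, and whether the total actually comes up short depends on the interpreter version and scheduler:

      import sys
      import threading

      sys.setswitchinterval(1e-6)  # switch threads aggressively to surface the race

      counter = 0

      def work(iterations):
          global counter
          for _ in range(iterations):
              counter += 1  # load, add, store: another thread can run between the steps

      threads = [threading.Thread(target=work, args=(100_000,)) for _ in range(4)]
      for t in threads:
          t.start()
      for t in threads:
          t.join()

      # Expected 400000. Depending on the CPython version and scheduling, the
      # total may come up short, by a different amount on each run. Averaging
      # many such runs produces a stable-looking number, but it is the average
      # of a bug, not a measurement of the world (point 2).
      print(counter)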

    [1] "few models in healthcare could ever be validated for predictive use. This, however, does not disqualify such models from being used as aids to decision making ... Philips et al state that since a decision-analytic model is an aid to decision making at a particular point in time, there is no empirical test of predictive validity. From a similar premise, Sculpher et al argue that prediction is not an appropriate test of validity for such model" https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3001435/

    [2] https://github.com/ptti/ptti/blob/master/README.md

    [3] https://github.com/mrc-ide/covid-sim/issues/30 https://github.com/mrc-ide/covid-sim/commit/581ca0d8a12cddbd... https://github.com/mrc-ide/covid-sim/commit/3d4e9a4ee633764c...
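
    For contrast with the model-against-model "validation" of [2], here is a minimal sketch of what judging a model by its predictive power can look like: fit on early data, predict the held-out tail, and score the prediction. The case counts and the one-parameter growth model are synthetic stand-ins, not the paper's model or any real series:

      import math

      # Hypothetical daily case counts: noisy exponential growth (synthetic data).
      observed = [100, 123, 148, 185, 226, 271, 335, 405, 492, 601]
      train, test = observed[:5], observed[5:]

      # Fit a constant daily growth rate on the training window (log-linear least squares).
      logs = [math.log(c) for c in train]
      xs = list(range(len(logs)))
      x_mean = sum(xs) / len(xs)
      y_mean = sum(logs) / len(logs)
      rate = (sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, logs))
              / sum((x - x_mean) ** 2 for x in xs))

      # Predict the held-out days and score the predictions against observation.
      predictions = [train[-1] * math.exp(rate * (i + 1)) for i in range(len(test))]
      errors = [abs(p - o) / o for p, o in zip(predictions, test)]
      print(f"fitted daily growth rate: {rate:.3f}")
      print(f"mean absolute error on held-out days: {100 * sum(errors) / len(errors):.1f}%")
      # A model is only as good as this number on data it has not seen;
      # agreement with other models tests nothing about the world.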

  • covid-sim

    This is the COVID-19 CovidSim microsimulation model developed by the MRC Centre for Global Infectious Disease Analysis hosted at Imperial College, London.


Related posts

  • ESP32 Drum Synth Machine

    1 project | news.ycombinator.com | 9 May 2024
  • Yes, Ruby is fast, but…

    4 projects | dev.to | 9 May 2024
  • Opening Windows in Linux with sockets, bare hands and 200 lines of C

    2 projects | news.ycombinator.com | 9 May 2024
  • Falsehoods Programmers Believe About Phone Numbers

    1 project | news.ycombinator.com | 9 May 2024
  • Discord-compatible messaging client targeting new and old Windows

    1 project | news.ycombinator.com | 8 May 2024