covid-sim VS ptti

Compare covid-sim vs ptti and see how they differ.

covid-sim

This is the COVID-19 CovidSim microsimulation model developed by the MRC Centre for Global Infectious Disease Analysis hosted at Imperial College, London. (by mrc-ide)

ptti

Population-wide Testing, Tracing and Isolation Models (by ptti)
              covid-sim                              ptti
Mentions      13                                     3
Stars         1,223                                  12
Growth        0.0%                                   -
Activity      0.0                                    0.0
Last commit   about 1 year ago                       over 3 years ago
Language      C++                                    Jupyter Notebook
License       GNU General Public License v3.0 only   GNU General Public License v3.0 only
The number of mentions indicates the total number of mentions that we've tracked plus the number of user suggested alternatives.
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.

covid-sim

Posts with mentions or reviews of covid-sim. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2023-04-24.
  • Tips for analyzing code
    1 project | /r/programacion | 24 Apr 2023
  • Covid-sim: Remove 23 people from Alaska (2020)
    1 project | news.ycombinator.com | 2 Aug 2022
  • Article published 15/01/2022 - Ivermectin study in Itajaí with 200,000 participants.
    2 projects | /r/coronabr | 17 Jan 2022
  • Ask HN: Covid Network Simulation?
    1 project | news.ycombinator.com | 13 Dec 2021
  • Ontario COVID-19 science table member resigns after alleging withheld data projects ‘grim fall’
    1 project | /r/LockdownSkepticism | 24 Aug 2021
  • Today's Comments (2021-08-21)
    1 project | /r/LockdownSceptics | 21 Aug 2021
    I have a copy of his code; you can find it here: https://github.com/mrc-ide/covid-sim.git
  • Face masks effectively limit the probability of SARS-CoV-2 transmission
    2 projects | news.ycombinator.com | 21 May 2021
    Error bars would be nice. They're MIA in large swathes of COVID related research. I've read a lot of COVID papers in the past year and this paper is typical of the field. Things you should expect to see when reading epidemiology literature:

    1. Statistical uncertainty is normally ignored. They can and will tell politicians to adopt major policy changes on the back of a single dataset with 20 people in it. In the rare cases when they bother to include error bars at all, they are usually so wide as to be useless. In many other fields researchers debate P-hacking and what threshold of certainty should count as a significant finding. Many people observe that the standard of P=0.05 in e.g. psychology is too high because it means 1 in 20 studies will result in significant-but-untrue findings by chance alone. Compared to those debates epidemiology is in the stone age: any claim that can be read into any data is considered significant.

    2. Rampant confusion between models and reality. The top rated comment on this thread observes that the paper doesn't seem to test its model predictions against reality yet makes factual claims about the world. No surprises there; public health papers do that all the time. No-one except out-of-field skeptics actually judges epidemiological models by their predictive power. Epidemiologists admit this problem exists, but public health has become so corrupt that they argue being able to correctly predict things is not a fair way to judge a public health model [1], yet governments should still implement whatever policies the models say are required. It's hard to get more unscientific than culturally rejecting the idea that science is about predicting the natural world, but multiple published papers in this field have argued exactly that. A common trick is "validating" a model against other models [2].

    3. Inability to do maths. Setting up a model with reasonable assumptions is one thing, but do they actually solve the equations correctly? The Ferguson model from Imperial College, whose group we're widely assured is one of the world's top teams of epidemiologists, was written in C and filled with race conditions and out-of-bounds reads that caused the model to change its predictions completely due to timing differences in thread scheduling, different CPUs/compilers, etc. (a minimal illustration of this class of bug follows this list). These differences were large, e.g. a difference of 80,000 deaths predicted by May for the UK [3]. Nobody in academia saw any problem with this and, worse, some researchers argued that such errors didn't matter because they just ran the model a bunch of times and averaged the results. This confuses the act of predicting the behaviour of the world with the act of measuring it; see point (2).

    4. Major logic errors. Assuming correlation implies causation is totally normal. Other fields use sophisticated approaches to try to control for confounding variables; epidemiology doesn't. Circular logic is a lot more common than normal, for some reason.
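
    As a minimal illustration of the race-condition problem in point 3 (a hypothetical sketch, not code from covid-sim), the snippet below updates one shared counter from several threads without synchronization; updates get lost depending on thread scheduling, so the total differs from run to run:

        // Hypothetical illustration of a data race, not code from covid-sim.
        #include <cstdio>
        #include <thread>
        #include <vector>

        long long total_infections = 0;   // shared and unprotected

        void simulate_region(int infections_per_step, int steps) {
            for (int i = 0; i < steps; ++i)
                total_infections += infections_per_step;   // non-atomic read-modify-write: updates can be lost
        }

        int main() {
            std::vector<std::thread> workers;
            for (int r = 0; r < 8; ++r)
                workers.emplace_back(simulate_region, 1, 1000000);
            for (auto& t : workers) t.join();
            // Expected 8,000,000, but an unsynchronized build typically prints a
            // different, smaller number on each run; std::atomic<long long> or a
            // std::mutex around the update removes the race.
            std::printf("%lld\n", total_infections);
            return 0;
        }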

    None of these problems stop papers being published by supposedly reputable institutions in supposedly reputable journals. After reading or scan-reading about 50 epidemiology papers, including some older papers from 10 years ago, I concluded that not a single thing from this field can be trusted. Life is too short to examine literally every paper making every claim but if you take a sample and nearly all of them contain basic errors or what is clearly actual fraud, then it seems fair to conclude the field has no real standards.

    [1] "few models in healthcare could ever be validated for predictive use. This, however, does not disqualify such models from being used as aids to decision making ... Philips et al state that since a decision-analytic model is an aid to decision making at a particular point in time, there is no empirical test of predictive validity. From a similar premise, Sculpher et al argue that prediction is not an appropriate test of validity for such model" https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3001435/

    [2] https://github.com/ptti/ptti/blob/master/README.md

    [3] https://github.com/mrc-ide/covid-sim/issues/30 https://github.com/mrc-ide/covid-sim/commit/581ca0d8a12cddbd... https://github.com/mrc-ide/covid-sim/commit/3d4e9a4ee633764c...

  • TUESDAY, MAY 11, 2021 Alberta Totals: 211,836(+1,449) Active: 24,998(-440) In Hospital: 705(+15) ICU: 163(+5) Recovered: 184,719(+1,887) Deaths: 2,119(+2) Positivity Rate: 12.93% R Value (95% CI): 1.00 (0.99-1.02)
    1 project | /r/Calgary | 12 May 2021
    EDIT: Here's a well known attempt at what you're talking about, but with far fewer variables at play: https://github.com/mrc-ide/covid-sim/tree/7282c948b940c8bd90d6afaa1575afb3848aa8b5/src Maybe the AHS can just share this dude's repo with the public and call it a day lmao
  • So this is how you format mission critical code, eh?
    1 project | /r/programminghorror | 10 May 2021
  • Early lockdown skepticism
    1 project | /r/LockdownSkepticism | 7 Mar 2021
    Not the original.

ptti

Posts with mentions or reviews of ptti. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2022-11-19.
  • Software engineers: consider working on genomics
    6 projects | news.ycombinator.com | 19 Nov 2022
    I recall once hearing from a VC about why they hardly invest in biotech (or it might have been reading it somewhere, memory is fuzzy). It boiled down to: way too much non-replicable research, often with suspicions of fraud by the original labs. It can easily be the case that a biotech startup burns through millions setting up a lab from scratch, then attempting to replicate some academic paper that they thought they could commercialize, only to discover that the effect doesn't really exist. This problem doesn't affect the software industry, so that's where the money goes.

    Why so few tooling companies - is there actually a market for good software in science? For there to be such a market most scientists would have to care about the correctness of their results, and care enough to spend grant money on improvements. They all claim to care, but observation of actual working practices points to the opposite too much of the time (of course there are some good apples!).

    In 2020 I got interested in research about COVID, so over the next couple of years I read a lot of papers and source code coming out of the health world. I also talked to some scientists and a coder who worked alongside scientists. He'd worked on malaria research, before deciding to change field because it was so corrupt. He also told me about an attempt to recruit a coder who'd worked on climate models who turned out to be quitting science entirely, for the same reason. The same anti-patterns would crop up repeatedly:

    - Programs would turn out to contain serious bugs that totally altered their output when fixed, but it would be ignored because nobody wants to retract papers. Instead scientists would lie or BS about the nature of the errors e.g. claiming huge result changes were actually small and irrelevant.

    - Validation is often non-existent or based on circular reasoning. As a consequence there are either no tests or the tests are meaningless.

    - Code is often write-once, run-once. Journals happily accept papers that propose an entirely ad-hoc and situation specific hypothesis that doesn't generalize at all, so very similar code is constantly being written then thrown away by hundreds of different isolated and competing groups.

    These issues will sooner or later cause honest programmers to doubt their role. What's the point in fixing bugs if nobody cares about incorrect results? How do you know your refactoring was correct if there are no unit tests and nobody can even tell you how to write them? How do you get people to use tools with better error checking if the only thing users care about is convenience of development? How do you create widely adopted abstractions beyond trivial data wrangling if the scientists are effectively being paid by LOC written?

    The validation issue is especially neuralgic. Scientists will check if a program they wrote works by simply eyeballing the output and deciding that it looks right. How do they know it looks right? Based on their expertise; you wouldn't understand, it's far too complicated for a non-scientist. Where does that expertise come from? By reading papers with graphs in them. Where do those graphs come from? More unvalidated programs. Missing in a disturbing number of cases - real world data, or acceptance that real data takes precedence over predicted data. Example from [1]: "we believe in checking models against each other, as it's the best way to understand which models work best in what circumstances". Another [2]: "There is agreement in the literature that comparing the results of different models provides important evidence of validity and increases model credibility".
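
    As a minimal sketch of the kind of check being described here (hypothetical numbers, not ptti's or any real model's code), comparing predictions against observed data is a small amount of code; the point is that the reference series is a measurement, not another model's output:

        // Hypothetical example: score predicted daily cases against observed daily
        // cases with a simple error metric, rather than against another model.
        #include <cmath>
        #include <cstddef>
        #include <cstdio>
        #include <vector>

        double rmse(const std::vector<double>& predicted, const std::vector<double>& observed) {
            double sum = 0.0;
            for (std::size_t i = 0; i < predicted.size(); ++i) {
                double d = predicted[i] - observed[i];
                sum += d * d;
            }
            return std::sqrt(sum / predicted.size());
        }

        int main() {
            std::vector<double> predicted = {120, 150, 190, 240, 300};  // model output (made up)
            std::vector<double> observed  = {118, 160, 170, 210, 260};  // reported cases (made up)
            std::printf("RMSE against observations: %.1f\n", rmse(predicted, observed));
            return 0;
        }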

    There are a bunch of people in this thread saying things like, oh, I'd love to help humanity but don't want to take the pay cut. To anyone thinking of going into science I'd strongly suggest you start by taking a few days to download papers from the lab you're thinking of joining and carefully checking them for mistakes, logical inconsistencies, absurd assumptions or assertions etc. Check the citations, ensure they actually support the claim being made. That sort of thing. If they have code on github go read it. Otherwise you might end up taking a huge pay cut only to discover that the lab or even whole field you've joined has simply become a self-reinforcing exercise in grant application, in which the software exists mostly for show.

    [1] https://github.com/ptti/ptti/blob/master/README.md

    [2] https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3001435/

  • Face masks effectively limit the probability of SARS-CoV-2 transmission
    2 projects | news.ycombinator.com | 21 May 2021
    This is the same comment quoted in full under covid-sim above.

  • The Gaslighting of Science – Insight
    1 project | news.ycombinator.com | 11 Apr 2021
    I've read a lot of public health papers in the past year, probably more than 50. Here are some of the problems that cropped up in papers that were used by governments to support policy decisions. Many of those policy decisions later turned out to be incorrect, like lockdowns and mask mandates, both of which easily count as drastic and neither of which was textbook. In fact, pre-COVID pandemic control plans by the WHO explicitly recommended against lockdowns.

    By the way, be careful to separate the question of mask mandates from masks themselves. Mask mandates don't work: just look at any case curve when mandates were introduced or removed and observe the lack of any inflection points. If they worked people would have hundreds of examples by now of case curves which obviously inflected right after a mask mandate was changed, but no such graphs are ever cited because those inflections don't happen. Texas provides a recent example (mask mandate removed, curve continues prior trend) but this problem was obvious from within a week of the first mask mandates being introduced. Look at [1] for some case graphs with mandate change dates drawn on them to see the problem.

    Anyway, errors seen in public health papers:

    1. Circular reasoning.

    2. Invalid citations.

    3. Programming errors in models.

    4. Use of extremely out of date numbers.

    5. Absurd or obviously invalid assumptions/results from models being ignored.

    That's not a comprehensive list by any means. Unfortunately these aren't rare problems. Virtually every public health paper I've read has had at least one of these issues, often multiple.

    Circular logic in particular is mind-numbingly common, to an extent I've never seen before. For example, a common "validation" technique for models is to compare them to other models and declare their outputs to be similar (e.g. [2] or [3]). It's almost unheard of to compare model outputs to actual observed data, probably because doing validation right would invalidate virtually all public health models (this problem was admitted in a 2012 paper [4]).

    For a specific example of these problems see the paper by Flaxman et al from Imperial College London (ICL crop up frequently in these discussions because they seem to have one of the worst epidemiology teams out there, in that they routinely do terrible work yet everyone in the field appears to think they're awesome). This paper argued that lockdowns work using a statistical model, but they actually don't work, so to get this result required a combination of:

    1. Circular logic: the paper concluded government interventions worked, but the model took as a starting assumption that case curves could only be changed by government intervention, i.e. could never change naturally (a schematic of this kind of parameterization follows this list). In other words the paper encoded its own conclusions in its assumptions.

    2. Absurd assumptions: the assumption epidemics can only be affected by lockdowns has no rational basis given the long history of epidemics starting and ending naturally.

    3. Deceptive tactics:

    3.1: The paper included Sweden in its data set, which attracted attention because Sweden appears to prove that the models generating the counterfactual were wrong. It managed to conclude lockdowns worked despite this because it treated Sweden as a freak coincidence with only a 1 in 2000 chance of existing at all; in the graph that showed the different per-country fudge factors the model was allowed to use, Sweden was simply hidden, obscuring what had happened. The truth was discovered later by people who studied the tables of prior probabilities uploaded to GitHub.

    3.2: The paper admitted half way through that its scenario was "illustrative only" and that "in reality" the results would be different.
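
    As a schematic of the parameterization described in point 1 (a hypothetical sketch, not the actual Flaxman et al. code; parameter names are invented), the reproduction number below is allowed to vary only through intervention indicators, so any fitted decline in cases has to be attributed to the interventions:

        #include <cmath>
        #include <cstddef>
        #include <vector>

        // Schematic only: R_t depends solely on which interventions are active at time t.
        // With no other time-varying term, a fall in observed cases forces the fit to
        // assign a large effect (alpha) to whatever intervention was in force at the time.
        double reproduction_number(double R0,
                                   const std::vector<double>& alpha,   // fitted intervention effects
                                   const std::vector<int>& active) {   // 1 if intervention k is active
            double s = 0.0;
            for (std::size_t k = 0; k < alpha.size(); ++k)
                s += alpha[k] * active[k];
            return R0 * std::exp(-s);
        }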

    There were other problems too, but none of them stopped the authors telling the press that lockdowns had "saved millions of lives" (i.e. the drastic policy pushed for by that very same research team). Nor did it stop international press agencies citing this paper in "fact checks".

    After reading so many papers with really basic and blatant problems, it's hard not to conclude that open access is going to fundamentally destroy academia's credibility. For anyone to be able to just download and read the output of academic scientists is a very new thing, and one of the few highlights of the time I've spent reading COVID research is that open access is real now: I've hardly ever hit paywalls. Only for old papers.

    Unfortunately open access is a double edged sword. Now we can all read what we're paying for and be astonished at the dangerously low quality. The outcome is easy to predict: the 2020s are going to be defined by a battle between "science believers" and "science skeptics" which will split down ideological lines. Up until now that has been mostly restricted to debates on climatology, but now I think it will widen considerably. There's just no way to read the literature and retain your confidence in academic science when so much of it is entirely un-scientific.

    [1] https://rationalground.com/mask-charts/

    [2] https://github.com/ptti/ptti/blob/master/README.md (see the paragraph starting with "Formalism-agnostic").

    [3] https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3001435/ ("There is agreement in the literature that comparing the results of different models provides important evidence of validity and increases model credibility").

What are some alternatives?

When comparing covid-sim and ptti you can also consider the following projects:

bioconda-recipes - Conda recipes for the bioconda channel.