brms vs tests-as-linear

|               | brms                                 | tests-as-linear |
|---------------|--------------------------------------|-----------------|
| Mentions      | 9                                    | 27              |
| Stars         | 1,274                                | 481             |
| Activity      | 9.2                                  | 0.0             |
| Latest commit | 5 days ago                           | 8 months ago    |
| Language      | R                                    | JavaScript      |
| License       | GNU General Public License v3.0 only |                 |
Stars: the number of stars that a project has on GitHub. Growth: month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed; recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
brms

Bayesian Structural Equation Modeling using blavaan
[2] https://paulbuerkner.github.io/brms/

[Q] Correlated multivariate Beta model
Maybe something like the Logistic Normal? (e.g. see this issue from brms). If that fits what you are looking for, you can use brms to generate the Stan code for you (brms::make_stancode()) and work from that.
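For reference, a minimal sketch of that workflow (assuming brms is installed; the formula and data below are placeholders, not a logistic-normal model themselves):

```r
# Sketch: have brms emit the Stan program for a model, then edit it by hand.
library(brms)

d <- data.frame(y = rnorm(10), x = rnorm(10))

# make_stancode() returns the Stan program brms would compile;
# save it, modify the likelihood, and fit with rstan or cmdstanr.
stan_code <- make_stancode(y ~ x, data = d)
cat(stan_code)
```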

Step-by-step example of Bayesian t-test?
Okay so first off, I recommend that you read [this](https://link.springer.com/article/10.3758/s13423-016-1221-4) article about "The Bayesian New Statistics", which highlights estimation rather than hypothesis testing from a Bayesian perspective (see Fig. 1, second row, second column). Instead of a t-test, then, we can *estimate the difference* between two groups/variables. If you want to go deeper than JASP etc., I recommend that you use [brms](https://paulbuerkner.github.io/brms/), or, if you want to go even deeper, [Stan](https://mc-stan.org/) (brms is a front-end to Stan).
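A minimal sketch of that estimation approach in brms (assuming brms is installed; the data are simulated placeholders):

```r
# Sketch: estimate the group difference directly rather than running a t-test.
library(brms)

d <- data.frame(
  y     = c(rnorm(30, mean = 0), rnorm(30, mean = 0.5)),
  group = rep(c("a", "b"), each = 30)
)

# The posterior for the groupb coefficient is the estimated difference
# between groups; inspect it instead of a point hypothesis decision.
fit <- brm(y ~ group, data = d)
summary(fit)
```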

[R] Are there methods for ridge and lasso regression that allow the introduction of weights to give more importance to some observations?
I think the brms package (https://github.com/paulbuerkner/brms) or the blavaan package (http://ecmerkle.github.io/blavaan/) have support for SEM. I've never done it myself, so I unfortunately can't give you any direction for that in particular. However, I have used Stan in multilevel meta-analysis regression (combining multiple CRISPRa experiments to find determinants of CRISPRa activity, see https://github.com/timydaley/CRISPRasgRNAdeterminants/blob/master/metaAnalysis/NeuronAndSelfRenewalMetaMixtureRegression.Rmd) and had some success.

Package for: Generalized Mixed Effects Models for Zero-Inflated Negative Binomial distributions?
brms baby
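For the record, a sketch of what that looks like in brms (assuming brms is installed; the variable names are placeholders):

```r
# Sketch: a zero-inflated negative binomial mixed-effects model in brms.
library(brms)

fit <- brm(
  count ~ treatment + (1 | site),        # fixed effect plus a random intercept
  family = zero_inflated_negbinomial(),  # brms's ZINB family
  data = d                               # placeholder data frame
)
summary(fit)
```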

Multiple observers
Could also be done using brms and the gr() term. See this for the motivation behind this syntax.

I have a small sample size time series with potentially lagged predictor values which are also time series. What could be potential methods to analyse these data?
Anyway, I found I can include weights in the brm() function and use gr(RE, by = var) to deal with the heterogeneous variance; according to the brms reference manual, it should automatically assume that each observation within a group is correlated.
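A sketch of that formula (assuming brms is installed; `RE` and `var` mirror the names above, the rest are placeholders):

```r
# Sketch: observation weights plus group-specific random-effect SDs
# via gr(..., by = ...), one SD per level of var.
library(brms)

fit <- brm(
  y | weights(w) ~ time + (1 | gr(RE, by = var)),
  data = d  # placeholder data frame with columns y, w, time, RE, var
)
```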

Brms: adding a nonlinear component to a working MLM model
This is what actually should work; I must be declaring my variables incorrectly. The issue I'm having is that what you refer to as lin, I tried calling a few things, from b to LinPred (which worked in the link here: brms issue 47). When I've tried doing this, I receive errors that say "The following variables are missing from the dataset... [insert variable used to symbolize the linear part of the model]". But I believe your code is on the right path for what needs to be done; I'll try altering my syntax to be sure it resembles yours and let you know if it works.
tests-as-linear

The Truth About Linear Regression
1) All common statistical tests are linear models: https://lindeloev.github.io/tests-as-linear/
 Common statistical tests are linear models (or: how to teach stats)
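The core of that claim can be checked in a few lines of base R (simulated data): an equal-variance two-sample t-test and a linear model with a group indicator produce the identical t statistic and p-value.

```r
# Sketch: a two-sample t-test IS a linear model with a dummy-coded group.
set.seed(1)
y <- c(rnorm(20, mean = 0), rnorm(20, mean = 1))
g <- rep(c("a", "b"), each = 20)

t.test(y ~ g, var.equal = TRUE)  # classical two-sample t-test
summary(lm(y ~ g))               # same t statistic and p-value for the slope
```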

Everything Is a Linear Model
I knew the link in the article, https://lindeloev.github.io/tests-as-linear/, which is also great. A bit meta on the widespread use of linear models: "Transcending General Linear Reality" by Andrew Abbott, DOI: 10.2307/202114

Bayesians Moving from Defense to Offense
Maybe you would find it useful to read a textbook on Bayesian stats for inspiration. I can recommend Richard McElreath's "Statistical Rethinking", which makes it very clear how inflexible it is to just know recipes like t-tests or ANOVAs.
The canonical approach is to build a generative model with a parameter (or multiple, for ~ANOVA) that codes for the difference between groups and do inference on that parameter of interest. Most of the recipes taught in statistics classes can be modelled as a regression of some kind (this counts for frequentist stats too, see https://lindeloev.github.io/tests-as-linear/). Some advocate doing that inference with Bayes factors. Others, as discussed elsewhere in this thread, advocate combining the resulting posterior with a cost/value function, but either way the lesson is that there is less focus on "t-test vs. ANOVA" because they're the same thing anyway.
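The "same thing anyway" point holds on the frequentist side too, and is easy to verify in base R (simulated data): a one-way ANOVA is just an F test on a regression with a factor predictor.

```r
# Sketch: one-way ANOVA and regression on a factor give the same F test.
set.seed(1)
y <- rnorm(60)
g <- factor(rep(c("a", "b", "c"), each = 20))

anova(lm(y ~ g))     # F test from the regression...
summary(aov(y ~ g))  # ...matches the classical one-way ANOVA
```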
 How to cheat stats: common statistical tests are linear models

Introduction to Modern Statistics
I understand where you're coming from, and I like the idea for a certain kind of people: those who are very good at handling abstractions. Software engineers do have this skill, but the majority of statistics users do not. Trying to explain the similarities between these linear methods and how all is one [1] to a social scientist who doesn't like numbers or formulas to begin with would only lead to more confusion.
But if you ever do a randomized test with a suitable linear model to estimate the efficacy of these two methods, do let us know, that would be 10/10 :)
[1]: https://lindeloev.github.io/tests-as-linear/#41_one_sample_t...
 [Statistics and Probability] Common statistical tests are linear models (or: how to teach stats)

[Q] Critique of a flowchart I made?
My main critique is that these classical tests are often better explained and introduced in the context of a regression framework. The fact that you even need a flowchart demonstrates how confusing and unintuitive the classical approach to teaching statistics is. If you learn regression, everything else becomes a special case of this much more expressive way of thinking about how to measure variation. This point is made convincingly in this post: https://lindeloev.github.io/tests-as-linear/
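One concrete special case, checkable in base R (simulated data): the classical Pearson correlation test is the regression slope test in disguise.

```r
# Sketch: correlation test == slope test in a regression on
# standardized variables.
set.seed(1)
x <- rnorm(30)
y <- 0.5 * x + rnorm(30)

cor.test(x, y)                    # classical Pearson correlation test
summary(lm(scale(y) ~ scale(x)))  # same t statistic for the slope
```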

[Q] Two questions concerning the relationship between nonparametric tools and normal distribution
Most parametric tests don’t assume normality. If you feel that assuming normality is not viable, you are free to choose any other distribution. This may not be immediately obvious, since most intro courses teach inference as a bunch of disjointed formulas, but it will make more sense once one learns about the generalized linear model framework and realizes that common statistical tests are all linear models. There is no need to jump straight to nonparametric tests just because something isn’t normal, as cool as they are. (Also a pedantic nitpick: Mann-Whitney and co. test the difference in average ranks, not the difference in means, so they are not really a nonparametric equivalent to t-tests.)
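That rank-based reading can be made concrete in base R (simulated data): the Mann-Whitney U test is closely approximated by a linear model on the ranked outcome (the equivalence is asymptotic, not exact).

```r
# Sketch: Mann-Whitney is (approximately) a linear model on ranks.
set.seed(1)
y <- c(rexp(25), rexp(25) + 0.5)
g <- rep(c("a", "b"), each = 25)

wilcox.test(y ~ g)        # classical Mann-Whitney U test
summary(lm(rank(y) ~ g))  # rank-based linear model, similar p-value
```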
What are some alternatives?
rstan  RStan, the R interface to Stan
handson-ml2  A series of Jupyter notebooks that walk you through the fundamentals of Machine Learning and Deep Learning in Python using Scikit-Learn, Keras and TensorFlow 2.
stan  Stan development repository. The master branch contains the current release. The develop branch contains the latest stable development. See the Developer Process Wiki for details.
ims  📚 Introduction to Modern Statistics  A college-level open-source textbook with a modern approach highlighting multivariable relationships and simulation-based inference. For v1, see https://openintro-ims.netlify.app.
tinytex  A lightweight, cross-platform, portable, and easy-to-maintain LaTeX distribution based on TeX Live
bambi  BAyesian Model-Building Interface (Bambi) in Python.
textbook  The textbook Computational and Inferential Thinking: The Foundations of Data Science
stat_rethinking_2020  Statistical Rethinking Course Winter 2020/2021
MLflow  Open source platform for the machine learning lifecycle
rBAPS  R implementation of the BAPS software for Bayesian Analysis of Population Structure
CRISPRasgRNAdeterminants