causalglm
Interpretable and model-robust causal inference for heterogeneous treatment effects using generalized linear working models with targeted machine-learning
This package, https://github.com/tlverse/causalglm, was recently developed to fill the gap between fully black-box causal learning methods for heterogeneous treatment effects and fully parametric generalized linear model approaches. It allows both semiparametric and nonparametric robust causal inference for user-defined "working parametric models" for the estimands of interest. It is still black-box in that non-relevant features of the data distribution are estimated with machine learning, but the relevant conditional parameters are modeled fully parametrically (with nonparametrically robust inference even when the working model is misspecified). It is very new, so use it with caution.
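As an illustrative sketch only (the function name `spglm` and its arguments are taken from the package README at the time of writing and may have changed; the simulated data is made up), fitting a linear working model for the CATE might look like:

```r
library(causalglm)  # devtools::install_github("tlverse/causalglm")

# Simulated data: baseline covariate W, binary treatment A, outcome Y
set.seed(1)
n <- 500
W <- runif(n, -1, 1)
A <- rbinom(n, 1, plogis(W))
Y <- rnorm(n, mean = A * (1 + W) + sin(4 * W), sd = 0.3)
data <- data.frame(W, A, Y)

# Semiparametric inference for a linear working model for the CATE:
#   E[Y | A = 1, W] - E[Y | A = 0, W] ~ beta_0 + beta_1 * W
# Nuisance parts of the data distribution are fit with machine learning.
fit <- spglm(
  formula = ~ 1 + W,
  data = data,
  W = "W", A = "A", Y = "Y",
  estimand = "CATE"
)
summary(fit)  # coefficients with robust confidence intervals
```

Here the true CATE is 1 + W, so the working model happens to be correctly specified; the point of the package is that the inference remains valid (as a projection) even when it is not.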
The tlverse/sl3 super learner library is much better integrated and a lot more powerful (a bit more complicated at first, but once you understand it, it's great). LMTP has a separate branch that uses sl3: https://github.com/nt-williams/lmtp/tree/sl3-devel. To specify formulas in sl3, you just do Lrnr_glmnet$new(formula = ~ 1 + W + A + A*W), but make sure to install the "devel" version: devtools::install_github("tlverse/sl3", ref = "devel").
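A minimal end-to-end sketch of that workflow, assuming the devel branch of sl3 (where `Lrnr_glmnet` accepts the `formula` argument mentioned above; the toy data is invented):

```r
# devtools::install_github("tlverse/sl3", ref = "devel")
library(sl3)

# Toy data with covariate W, treatment A, outcome Y
set.seed(1)
n <- 200
d <- data.frame(W = runif(n), A = rbinom(n, 1, 0.5))
d$Y <- d$A * d$W + rnorm(n, sd = 0.1)

# An sl3 task: declare which columns are covariates and which is the outcome
task <- sl3_Task$new(d, covariates = c("W", "A"), outcome = "Y")

# A glmnet learner with an explicit design formula (devel-branch feature)
lrnr <- Lrnr_glmnet$new(formula = ~ 1 + W + A + A * W)

fit <- lrnr$train(task)
preds <- fit$predict(task)
head(preds)
```

The same learner object can then be dropped into an sl3 super learner or passed to packages (like the lmtp branch above) that accept sl3 learners.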
Another approach is to write your own SuperLearner learner. It turns out to be less difficult than it may seem. You still pass the same character string to the SuperLearner functions (e.g. "SL.customlearner"), and they will look up the function SL.customlearner in your R environment. Here is one example: https://github.com/tlverse/hal9001/blob/devel/R/sl_hal9001.R
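A minimal sketch of the standard wrapper convention (here just wrapping glm for illustration; "SL.customlearner" is a made-up name, and the data is simulated):

```r
library(SuperLearner)

# A custom learner is a function with this signature that returns
# list(pred = predictions on newX, fit = an object SuperLearner can
# later call predict() on).
SL.customlearner <- function(Y, X, newX, family, obsWeights, ...) {
  fit.glm <- glm(Y ~ ., data = X, family = family, weights = obsWeights)
  pred <- predict(fit.glm, newdata = newX, type = "response")
  fit <- list(object = fit.glm)
  class(fit) <- "SL.customlearner"
  list(pred = pred, fit = fit)
}

# Companion predict method, dispatched via the class set above
predict.SL.customlearner <- function(object, newdata, ...) {
  predict(object$object, newdata = newdata, type = "response")
}

# Pass the learner by name; SuperLearner finds it in your environment
set.seed(1)
n <- 200
X <- data.frame(x1 = rnorm(n), x2 = rnorm(n))
Y <- X$x1 + rnorm(n)
sl <- SuperLearner(Y = Y, X = X,
                   SL.library = c("SL.mean", "SL.customlearner"))
sl$coef  # ensemble weights for each library entry
```

The pair of functions (the fit wrapper plus `predict.SL.customlearner`) is the whole contract; anything that returns predictions in that shape can join the library.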