Copulas vs gretel-synthetics

| | Copulas | gretel-synthetics |
|---|---|---|
| Mentions | 1 | 4 |
| Stars | 556 | 599 |
| Growth | 0.4% | 2.0% |
| Activity | 8.2 | 7.1 |
| Latest commit | 19 days ago | 16 days ago |
| Language | Python | Python |
| License | GNU General Public License v3.0 or later | GNU General Public License v3.0 or later |
Stars: the number of stars a project has on GitHub. Growth: month-over-month growth in stars.
Activity: a relative measure of how actively a project is being developed; recent commits carry more weight than older ones. For example, an activity of 9.0 indicates that a project is among the top 10% of the most actively developed projects we track.
Copulas
Posts mentioning Copulas:
- [D] Has anyone used "copulas" before?
  "A nice Python library for modeling with copulas that I've worked with: https://github.com/sdv-dev/Copulas"
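For context, this is roughly what fitting and sampling with that library looks like: a minimal sketch using its GaussianMultivariate model, with a made-up toy DataFrame standing in for real data.

```python
import numpy as np
import pandas as pd
from copulas.multivariate import GaussianMultivariate

# Toy correlated data standing in for a real dataset (illustrative only)
rng = np.random.default_rng(0)
x = rng.normal(size=1_000)
data = pd.DataFrame({
    "x": x,
    "y": 2.0 * x + rng.normal(scale=0.5, size=1_000),
})

# Fit a Gaussian copula to the joint distribution, then draw synthetic rows
model = GaussianMultivariate()
model.fit(data)
synthetic = model.sample(500)
print(synthetic.describe())
```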
gretel-synthetics
Posts mentioning gretel-synthetics:
- Ask HN: If we train an LLM with "data" instead of "language" tokens
  "Hey there! Co-founder of Gretel.ai here, and I think I can provide some insight on this topic.
  Firstly, the concept you're hinting at is not purely traditional ML. In traditional machine learning, we often prioritize feature extraction and engineering specific to a given problem space before training.
  What you're describing, and what we've been working on at Gretel.ai, is leveraging the power of models like Large Language Models (LLMs) to understand and extrapolate from vast amounts of diverse data without time-consuming feature engineering. Here's a link to our open-source library for synthetic data generation (currently supporting GAN- and RNN-based language models), https://github.com/gretelai/gretel-synthetics, and our recent announcement of a Tabular LLM we're training to help people build with data: https://gretel.ai/tabular-llm
  A few areas where we've found tabular or Large Data Models to be really useful are: …"
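As a rough sketch of what training the open-source library on a tabular file looks like, here is an example based on the config/train/generate entry points shown in older gretel-synthetics documentation; exact class and function names may differ between releases, and the file paths are placeholders.

```python
from gretel_synthetics.config import LocalConfig
from gretel_synthetics.train import train_rnn
from gretel_synthetics.generate import generate_text

# Placeholder paths; point these at a real CSV and a writable checkpoint directory
config = LocalConfig(
    input_data_path="train.csv",
    checkpoint_dir="checkpoints",
    field_delimiter=",",
    epochs=15,
    gen_lines=1000,
)

train_rnn(config)  # trains the character-level RNN language model

for line in generate_text(config):
    print(line.text)  # each result carries one synthetic record
```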
- Libraries for synthetic data?
  "You can try QuantGAN (https://github.com/PakAndrey/QuantGANforRisk), and also DoppelGANger, which ships inside gretel-synthetics: https://github.com/gretelai/gretel-synthetics/tree/master/src/gretel_synthetics/timeseries_dgan"
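A minimal sketch of the DoppelGANger (DGAN) time-series path linked above, assuming the DGANConfig/DGAN classes and the train_numpy/generate_numpy helpers in the gretel_synthetics.timeseries_dgan module; the random feature array is a stand-in for real sequences.

```python
import numpy as np
from gretel_synthetics.timeseries_dgan.config import DGANConfig
from gretel_synthetics.timeseries_dgan.dgan import DGAN

# Stand-in training data: 100 sequences, 20 time steps, 2 features each
features = np.random.rand(100, 20, 2)

config = DGANConfig(
    max_sequence_len=20,
    sample_len=5,     # time steps emitted per generator RNN step (must divide max_sequence_len)
    batch_size=32,
    epochs=10,
)

model = DGAN(config)
model.train_numpy(features=features)

# Returns (attributes, features); attributes are None when none were supplied
attributes, synthetic = model.generate_numpy(1000)
```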
- Which open source tool for generating synthetic data sets?
- Gretel-synthetics: open-source library to create synthetic datasets
What are some alternatives?
CTGAN - Conditional GAN for generating synthetic tabular data (a minimal usage sketch follows this list).
rex-gym - OpenAI Gym environments for an open-source quadruped robot (SpotMicro)
Made-With-ML - Learn how to design, develop, deploy and iterate on production-grade ML applications.
gretel-python-client - The Gretel Python Client allows you to interact with the Gretel REST API.
SDV - Synthetic data generation for tabular data
adversarial-robustness-toolbox - Adversarial Robustness Toolbox (ART) - Python Library for Machine Learning Security - Evasion, Poisoning, Extraction, Inference - Red and Blue Teams
ydata-synthetic - Synthetic data generators for tabular and time-series data
RobustVideoMatting - Robust Video Matting in PyTorch, TensorFlow, TensorFlow.js, ONNX, CoreML!
genalog - Genalog is an open source, cross-platform python package allowing generation of synthetic document images with custom degradations and text alignment capabilities.
AI-basketball-analysis - AI web app and API to analyze basketball shots and shooting pose.
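Hedged usage sketch for the CTGAN alternative mentioned above, using the fit/sample interface of the standalone ctgan package (the class has been named CTGANSynthesizer in older releases); the mixed-type table is invented for illustration.

```python
import numpy as np
import pandas as pd
from ctgan import CTGAN

# Invented mixed-type table for illustration
rng = np.random.default_rng(0)
n = 1_000
data = pd.DataFrame({
    "age": rng.integers(18, 70, size=n),
    "income": rng.normal(60_000, 15_000, size=n).round(0),
    "employed": rng.choice(["yes", "no"], size=n, p=[0.8, 0.2]),
})

# Declare which columns are categorical, train, then sample synthetic rows
model = CTGAN(epochs=10)
model.fit(data, discrete_columns=["employed"])
synthetic = model.sample(200)
print(synthetic.head())
```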