machine-learning-for-trading vs gretel-synthetics
| | machine-learning-for-trading | gretel-synthetics |
|---|---|---|
| Mentions | 224 | 4 |
| Stars | 11,797 | 533 |
| Growth | - | 4.9% |
| Activity | 1.1 | 7.3 |
| Latest commit | 10 months ago | 19 days ago |
| Language | Jupyter Notebook | Python |
| License | - | GNU General Public License v3.0 or later |
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
machine-learning-for-trading
- Machine Learning for Trading: Notebooks, resources, and references accompanying the book Machine Learning for Algorithmic Trading (10,678 stars at time of posting)
- How to backtest common strategies/filters in bulk?
- Machine Learning for Trading: Notebooks, resources, and references accompanying the book Machine Learning for Algorithmic Trading (10,565 stars at time of posting)
gretel-synthetics
- Ask HN: If we train an LLM with “data” instead of “language” tokens
Hey there! Co-founder of Gretel.ai here, and I think I can provide some insights on this topic.
Firstly, the concept you're hinting at is not purely traditional ML. In traditional machine learning, we often prioritize feature extraction and engineering specific to a given problem space before training.
What you're describing, and what we've been working on at Gretel.ai, is leveraging the power of models like Large Language Models (LLMs) to understand and extrapolate from vast amounts of diverse data without the need for time-consuming feature engineering. Here's a link to our open-source library for synthetic data generation (currently supporting GAN- and RNN-based language models): https://github.com/gretelai/gretel-synthetics. See also our recent announcement of a Tabular LLM we're training to help people build with data: https://gretel.ai/tabular-llm
A few areas where we've found tabular or Large Data Models to be really useful are:
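Libraries like gretel-synthetics use GANs or language models to capture cross-column dependencies when generating synthetic tables. As a toy illustration of the underlying workflow (fit statistics on a real table, then sample new rows), here is a minimal stdlib-only sketch that samples each numeric column's fitted Gaussian independently; every function name here is illustrative and not part of the gretel-synthetics API.

```python
import random
import statistics

def fit_marginals(rows):
    """Estimate a (mean, stdev) pair for each numeric column of the real data."""
    columns = list(zip(*rows))
    return [(statistics.mean(col), statistics.stdev(col)) for col in columns]

def sample_synthetic(marginals, n, seed=0):
    """Draw n synthetic rows by sampling each column's fitted Gaussian
    independently. Real synthesizers (GANs, copulas, tabular LLMs) also
    model dependencies *between* columns, which this sketch ignores."""
    rng = random.Random(seed)
    return [[rng.gauss(mu, sd) for mu, sd in marginals] for _ in range(n)]

# Tiny "real" table: (age, income) pairs.
real = [[30, 50000], [40, 62000], [25, 41000], [52, 88000]]
synthetic = sample_synthetic(fit_marginals(real), n=100)
```

The synthetic rows match each column's marginal distribution but leak no individual record, which is the basic privacy motivation behind synthetic data.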
- Libraries for synthetic data?
You can try QuantGAN (https://github.com/PakAndrey/QuantGANforRisk), or DoppelGANger (https://github.com/gretelai/gretel-synthetics/tree/master/src/gretel_synthetics/timeseries_dgan).
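The tools linked above learn temporal dynamics with GANs. A far simpler baseline for synthetic time series, useful for sanity-checking, is a block bootstrap: resample contiguous chunks of the real series so short-range structure is preserved. The sketch below is illustrative only and unrelated to the DoppelGANger API.

```python
import random

def block_bootstrap(series, block_len=5, n_samples=3, seed=42):
    """Generate synthetic series by stitching together randomly chosen
    contiguous blocks of the original. Preserves short-range temporal
    structure; GAN-based models like DoppelGANger learn richer,
    longer-range dynamics."""
    rng = random.Random(seed)
    samples = []
    for _ in range(n_samples):
        synth = []
        while len(synth) < len(series):
            start = rng.randrange(0, len(series) - block_len + 1)
            synth.extend(series[start:start + block_len])
        samples.append(synth[:len(series)])
    return samples

prices = [100, 101, 103, 102, 105, 107, 106, 108, 110, 109]
samples = block_bootstrap(prices)
```

Because every output value is copied from the input, a block bootstrap offers no privacy; it is a structural baseline, not a substitute for a learned generator.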
- Which open source tool for generating synthetic data sets?
- Gretel-synthetics: open-source library to create synthetic datasets
What are some alternatives?
SDV - Synthetic data generation for tabular data
Copulas - A library to model multivariate data using copulas.
Mad-Money-Backtesting - Backtesting recommendations from Mad Money and "The Cramer Effect/Bounce"
gretel-python-client - The Gretel Python Client allows you to interact with the Gretel REST API.
documentation - This repository contains the documentation for the current Quantiacs project. Check it out at: https://quantiacs.com/documentation/en/
rex-gym - OpenAI Gym environments for an open-source quadruped robot (SpotMicro)
zero-to-mastery-ml - All course materials for the Zero to Mastery Machine Learning and Data Science course.
adversarial-robustness-toolbox - Adversarial Robustness Toolbox (ART) - Python Library for Machine Learning Security - Evasion, Poisoning, Extraction, Inference - Red and Blue Teams
cryptocurrency-price-prediction - Cryptocurrency Price Prediction Using LSTM neural network
CTGAN - Conditional GAN for generating synthetic tabular data.
TTM - Stock squeeze detection based on Bollinger bands and Keltner channels; detects which stocks are coming out of a squeeze (i.e., after the Bollinger bands have moved inside the Keltner channel).
AI-basketball-analysis - AI web app and API to analyze basketball shots and shooting pose.