Auto-GPT-MetaTrader-Plugin vs promptfoo

| | Auto-GPT-MetaTrader-Plugin | promptfoo |
|---|---|---|
| Mentions | 10 | 20 |
| Stars | 437 | 2,757 |
| Growth | - | 19.2% |
| Activity | 7.0 | 9.9 |
| Last Commit | 6 months ago | 7 days ago |
| Language | Python | TypeScript |
| License | MIT License | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
Auto-GPT-MetaTrader-Plugin
- Weekly Megathread
  https://github.com/isaiahbjork/Auto-GPT-MetaTrader-Plugin - a software tool that enables traders to connect their MetaTrader 4 or 5 trading account to Auto-GPT.
- isaiahbjork/Auto-GPT-MetaTrader-Plugin: The AutoGPT MetaTrader Plugin is a software tool that enables traders to connect their MetaTrader 4 or 5 trading account to Auto-GPT.
- AutoGPT MetaTrader Plugin
- Auto-GPT MetaTrader Plugin (GPT-4 or GPT-3.5 turbo)
promptfoo
- Google CodeGemma: Open Code Models Based on Gemma [pdf]
- AI Infrastructure Landscape
- Promptfoo – Testing and Evaluation for LLMs
- Show HN: Prompt-Engineering Tool: AI-to-AI Testing for LLM
  Super interesting. We've been experimenting with [promptfoo](https://github.com/promptfoo/promptfoo) at my work, and this looks very similar.
- GitHub – promptfoo/promptfoo: Test your prompts
- I asked 60 LLMs a set of 20 questions
  In case anyone's interested in running their own benchmark across many LLMs, I've built a generic harness for this at https://github.com/promptfoo/promptfoo.
  I encourage people considering LLM applications to test the models on their _own data and examples_ rather than extrapolating general benchmarks.
  This library supports OpenAI, Anthropic, Google, Llama and CodeLlama, any model on Replicate, and any model on Ollama out of the box. As an example, I wrote up a benchmark comparing GPT model censorship with Llama models here: https://promptfoo.dev/docs/guides/llama2-uncensored-benchmar.... Hope this helps someone. (A minimal usage sketch follows this list.)
- Ask HN: Prompt Manager for Developers
- DeepEval – Unit Testing for LLMs
- Show HN: Knit – A Better LLM Playground
- Show HN: CLI for testing and evaluating LLM outputs
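The harness described in the comment above can be driven from a YAML config via the promptfoo CLI or programmatically through its Node package. Below is a minimal sketch of the programmatic route; the provider IDs, assertion type, and printed result shape are assumptions based on promptfoo's documented Node API and may differ between versions.

```typescript
// Minimal sketch of a promptfoo evaluation run via its Node API.
// Assumes `npm install promptfoo` and an OPENAI_API_KEY in the environment.
// Provider IDs and assertion types are illustrative; check the promptfoo docs
// for the exact schema supported by the version you have installed.
import promptfoo from 'promptfoo';

async function main() {
  const results = await promptfoo.evaluate({
    // Prompts are templates; {{question}} is filled from each test case's vars.
    prompts: ['Answer concisely: {{question}}'],
    // The same prompt/test matrix is run against every listed provider.
    providers: ['openai:gpt-4', 'openai:gpt-3.5-turbo'],
    tests: [
      {
        vars: { question: 'What is the capital of France?' },
        // A deterministic string check; model-graded assertions are also available.
        assert: [{ type: 'icontains', value: 'paris' }],
      },
      {
        vars: { question: 'Name a prime number greater than 10.' },
      },
    ],
  });

  // Dump the full evaluation summary (per-test outputs plus pass/fail stats).
  console.log(JSON.stringify(results, null, 2));
}

main().catch(console.error);
```

The CLI route is equivalent: describe the same prompts, providers, and tests in a promptfooconfig.yaml and run `promptfoo eval` to get the side-by-side comparison matrix referenced in the comment above.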
What are some alternatives?
shap-e - Generate 3D objects conditioned on text or images
Account-Protector - Automate emergency position closing and autotrading termination using a multi-setting expert advisor.
prompt-engineering - Tips and tricks for working with Large Language Models like OpenAI's GPT-4.
PositionSizer - Calculate your position size based on the risk and account size and execute your trades with this free MetaTrader expert advisor.
WizardLM - Family of instruction-following LLMs powered by Evol-Instruct: WizardLM, WizardCoder and WizardMath
Auto-GPT-Notion - Auto-GPT Notion Plugin
chat-ui - Open source codebase powering the HuggingChat app
backtesting.py - :mag_right: :chart_with_upwards_trend: :snake: :moneybag: Backtest trading strategies in Python.
litellm - Call all LLM APIs using the OpenAI format. Use Bedrock, Azure, OpenAI, Cohere, Anthropic, Ollama, Sagemaker, HuggingFace, Replicate (100+ LLMs)
nsfw-prompt-detection-sd - NSFW Prompt Detection for Stable Diffusion
ChainForge - An open-source visual programming environment for battle-testing prompts to LLMs.