| | reflexion | nebuly |
|---|---|---|
| Mentions | 6 | 105 |
| Stars | 2,068 | 8,361 |
| Growth | 4.4% | -0.1% |
| Activity | 8.5 | 8.4 |
| Last commit | 7 months ago | 8 months ago |
| Language | Python | Python |
| License | MIT License | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
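The exact scoring formula isn't given here, but the description (recent commits count for more than older ones) amounts to a recency-weighted commit count. A toy Python sketch of that kind of score, with a made-up half-life parameter, purely for illustration and not the site's actual formula:

```python
def activity_score(commit_ages_days, half_life_days=30.0):
    """Toy recency-weighted activity: each commit's weight halves every
    `half_life_days` days. Illustrative only -- not the site's actual formula."""
    return sum(0.5 ** (age / half_life_days) for age in commit_ages_days)

# Three recent commits outweigh ten commits from last year.
print(activity_score([1, 5, 10]) > activity_score([300] * 10))  # True
```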
reflexion
- What is a belief people have about AI that you hate?
- ‘Overemployed’ Hustlers Exploit ChatGPT To Take On Even More Full-Time Jobs
In terms of judging whether it is a lot better than GPT-3.5 (and its different models), I recommend reading the technical report: https://arxiv.org/pdf/2303.08774.pdf, or the paper on “self-reflection”, which is quite interesting and allowed it to perform quite well on some benchmarks: https://arxiv.org/abs/2303.11366, or the code for the HumanEval test: https://github.com/GammaTauAI/reflexion-human-eval
- GPT4 Learning from Reflection
- AI-enhanced development makes me more ambitious with my projects
One of the things I've found is that, at least with GPT-4, you can automate a lot of infrastructure. This can make it much easier to get started with a new project where you need to set up various servers, containers, VMs, etc. I haven't fully automated it yet, but I've done it enough manually, through copy/paste between ChatGPT and the terminal, to see that it would work if it were hooked up to LangChain. You can create prompts to make it set up VMs and Docker containers and then install and configure all kinds of software.
To make it work in most cases, it should follow the same approach as this implementation of a Reflexion agent for SOTA HumanEval Python results, but applied to infrastructure. GPT-4 can generally figure out how to create tests from the documentation that let it know when the task is finished correctly.
https://github.com/GammaTauAI/reflexion-human-eval
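The comment above boils down to a Reflexion-style loop applied to infrastructure: have GPT-4 propose setup commands, run a test that defines "done", and feed any failure back as a reflection before retrying. A minimal sketch of that idea, assuming the openai>=1.0 Python client; the prompts and the `provision_with_reflexion` helper are illustrative, not part of the linked repo:

```python
import subprocess
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def ask_gpt4(prompt: str) -> str:
    resp = client.chat.completions.create(
        model="gpt-4", messages=[{"role": "user", "content": prompt}]
    )
    return resp.choices[0].message.content

def provision_with_reflexion(task: str, test_cmd: str, max_tries: int = 3) -> bool:
    """Propose shell commands, run a test, and feed failures back as reflections."""
    reflections: list[str] = []
    for _ in range(max_tries):
        commands = ask_gpt4(
            f"Task: {task}\nPrevious reflections: {reflections}\n"
            "Reply with only the shell commands to run, nothing else."
        )
        subprocess.run(commands, shell=True)  # apply the proposed setup
        test = subprocess.run(test_cmd, shell=True, capture_output=True, text=True)
        if test.returncode == 0:  # the test defines "task finished correctly"
            return True
        # Ask the model to reflect on the failure before the next attempt.
        reflections.append(ask_gpt4(
            f"The test `{test_cmd}` failed with:\n{test.stderr}\n"
            "In one sentence, what should be done differently next time?"
        ))
    return False
```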
- [Discussion] IsItBS: asking GPT to reflect x times will create a feedback loop that causes it to scrutinize itself x times?
Spin-off project based on Reflexion; apparently GPT-4 gets a 20% improvement on coding tasks: https://github.com/GammaTauAI/reflexion-human-eval
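The "reflect x times" idea from the thread can be sketched as a simple critique-and-revise loop; this is a rough illustration (prompts and helper names are made up, and it again assumes the openai>=1.0 Python client), not the actual Reflexion implementation:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def ask(prompt: str) -> str:
    resp = client.chat.completions.create(
        model="gpt-4", messages=[{"role": "user", "content": prompt}]
    )
    return resp.choices[0].message.content

def reflect(question: str, rounds: int = 3) -> str:
    """Answer once, then have the model critique and revise its own answer
    `rounds` times -- the feedback loop being asked about."""
    answer = ask(question)
    for _ in range(rounds):
        critique = ask(f"Question: {question}\nAnswer: {answer}\n"
                       "List any mistakes or weaknesses in this answer.")
        answer = ask(f"Question: {question}\nAnswer: {answer}\n"
                     f"Critique: {critique}\nWrite an improved answer.")
    return answer
```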
- Here's how to make Bing think more like a human. Before and after.
You might also be interested in the "Reflexion" paper that's been doing the rounds in the past few days: https://github.com/GammaTauAI/reflexion-human-eval
nebuly
- Nebuly – The LLM Analytics Platform
- Ask HN: Any tools or frameworks to monitor the usage of OpenAI API keys?
- What are you building with LLMs? I'm writing an article about what people are building with LLMs
Hi everyone. I’m the creator of ChatLLaMA https://github.com/nebuly-ai/nebullvm/tree/main/apps/accelerate/chatllama, an open-source framework to train LLMs with limited resources. There’s been amazing usage of LLMs these days, from chatbots that retrieve information about a company’s products, to cooking assistants for traditional dishes, and much more. And you? What are you building, or what would you love to build, with LLMs? Let me know and I’ll share the article about your stories soon. https://qpvirevo4tz.typeform.com/to/T3PruEuE Cheers
- Show HN: ChatLLaMA – A ChatGPT style chatbot for Facebook's LLaMA
How does it differentiate from the original ChatLLaMA? https://github.com/nebuly-ai/nebullvm/tree/main/apps/acceler...
- 🤖🌟 Unlock the Power of Personal AI: Introducing ChatLLaMA, Your Custom Personal Assistant! 🚀💬
Was this made with the ChatLLaMA library? https://github.com/nebuly-ai/nebullvm/tree/main/apps/accelerate/chatllama
- Meta LLM LLaMA leaked, all over the internet as we speak
- Meta LLM LLAMA leaked, it's all over the internet as we speak.
- Meta LLM LLAMMA leaked, it's all over the internet as we speak.
- Plug and play modules to optimize the performances of your AI systems
Some of the available modules include:
Speedster: Automatically apply the best set of SOTA optimization techniques to achieve the maximum inference speed-up on your hardware. https://github.com/nebuly-ai/nebullvm/blob/main/apps/acceler...
Nos: Automatically maximize the utilization of GPU resources in a Kubernetes cluster through real-time dynamic partitioning and elastic quotas. https://github.com/nebuly-ai/nos
ChatLLaMA: Build a faster and cheaper ChatGPT-like training process based on LLaMA architectures. https://github.com/nebuly-ai/nebullvm/tree/main/apps/acceler...
OpenAlphaTensor: Increase the computational performance of an AI model with custom-generated matrix multiplication algorithms fine-tuned for your specific hardware. https://github.com/nebuly-ai/nebullvm/tree/main/apps/acceler...
Forward-Forward: The Forward-Forward algorithm is a method for training deep neural networks that replaces backpropagation's forward and backward passes with two forward passes. https://github.com/nebuly-ai/nebullvm/tree/main/apps/acceler...
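The Forward-Forward item above describes the training rule in one line; here is a minimal, illustrative PyTorch sketch of a single layer trained that way (a local "goodness" objective on positive vs. negative data, with no gradients flowing between layers). It follows Hinton's paper in spirit and is not nebullvm's implementation; all names and the threshold value are placeholders:

```python
import torch
import torch.nn as nn

class FFLayer(nn.Module):
    """One Forward-Forward layer: trained with a local objective; later layers
    only ever see its detached output, so no backward pass crosses layers."""
    def __init__(self, d_in: int, d_out: int, threshold: float = 2.0, lr: float = 0.03):
        super().__init__()
        self.linear = nn.Linear(d_in, d_out)
        self.act = nn.ReLU()
        self.threshold = threshold
        self.opt = torch.optim.Adam(self.parameters(), lr=lr)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Normalize so only the direction of the input carries information forward.
        x = x / (x.norm(dim=1, keepdim=True) + 1e-8)
        return self.act(self.linear(x))

    def train_step(self, x_pos: torch.Tensor, x_neg: torch.Tensor) -> torch.Tensor:
        # "Goodness" = sum of squared activations; push it above the threshold
        # for positive (real) data and below it for negative (corrupted) data.
        g_pos = self.forward(x_pos).pow(2).sum(dim=1)
        g_neg = self.forward(x_neg).pow(2).sum(dim=1)
        loss = torch.nn.functional.softplus(
            torch.cat([self.threshold - g_pos, g_neg - self.threshold])
        ).mean()
        self.opt.zero_grad()
        loss.backward()  # gradients stay within this layer's parameters
        self.opt.step()
        return loss.detach()

# Layers are trained one at a time; each layer's detached output becomes the
# next layer's positive/negative input, replacing the backward pass entirely.
```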
- Open source implementation for LLaMA-based ChatGPT
What are some alternatives?
stanford_alpaca - Code and documentation to train Stanford's Alpaca models, and generate the data.
tvm - Open deep learning compiler stack for cpu, gpu and specialized accelerators
AITemplate - AITemplate is a Python framework which renders neural networks into high-performance CUDA/HIP C++ code. Specialized for FP16 TensorCore (NVIDIA GPU) and MatrixCore (AMD GPU) inference.
text-generation-webui - A Gradio web UI for Large Language Models. Supports transformers, GPTQ, AWQ, EXL2, llama.cpp (GGUF), Llama models.
alpaca-lora - Instruct-tune LLaMA on consumer hardware
deepsparse - Sparsity-aware deep learning inference runtime for CPUs
TensorRT - NVIDIA® TensorRT™ is an SDK for high-performance deep learning inference on NVIDIA GPUs. This repository contains the open source components of TensorRT.
openvino - OpenVINO™ is an open-source toolkit for optimizing and deploying AI inference
dlcompiler-comparison - The quantitative performance comparison among DL compilers on CNN models.
llama - Inference code for Llama models
tflite-micro - Infrastructure to enable deployment of ML models to low-power resource-constrained embedded targets (including microcontrollers and digital signal processors).
til - Today I Learned