| | FastLoRAChat | hyde |
|---|---|---|
| Mentions | 2 | 2 |
| Stars | 119 | 362 |
| Growth | - | 10.5% |
| Activity | 7.2 | 10.0 |
| Latest commit | about 1 year ago | over 1 year ago |
| Language | Jupyter Notebook | Jupyter Notebook |
| License | Apache License 2.0 | - |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
FastLoRAChat
- [P] FastLoRAChat Instruct-tune LLaMA on consumer hardware with shareGPT data
- Announcing FastLoRAChat, training ChatGPT without an A100.
- FastLoRAChat – LoRA-finetuned LLM with ChatGPT capability
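The FastLoRAChat posts above are about LoRA fine-tuning, which trains only a low-rank update on top of frozen pretrained weights. A minimal NumPy sketch of that update (the dimensions, names, and scaling are illustrative, not taken from the FastLoRAChat code):

```python
import numpy as np

rng = np.random.default_rng(0)

d, k, r = 64, 64, 4          # layer dims and LoRA rank (r << d)
W = rng.normal(size=(d, k))  # frozen pretrained weight, never updated

# LoRA trains only the low-rank factors A and B;
# the effective weight is W + (alpha / r) * B @ A.
A = rng.normal(scale=0.01, size=(r, k))
B = np.zeros((d, r))         # B starts at zero, so training starts from W
alpha = 8

def lora_forward(x):
    # x: (batch, k) input; frozen base path plus low-rank adapter path
    return x @ W.T + (alpha / r) * (x @ A.T @ B.T)

x = rng.normal(size=(2, k))
y = lora_forward(x)

# With B = 0 the adapter contributes nothing, so the output
# equals the frozen layer's output.
assert np.allclose(y, x @ W.T)

# Only r*(d+k) parameters are trained instead of d*k.
print(r * (d + k), "trainable vs", d * k, "frozen")
```

This parameter count (here 512 trainable vs 4096 frozen per layer) is what makes instruct-tuning on consumer hardware feasible.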
hyde
- Show HN: Hacker Search – A semantic search engine for Hacker News
HyDE apparently means “Hypothetical Document Embeddings”, which seems to be a kind of generative query expansion/pre-processing
https://arxiv.org/abs/2212.10496
https://github.com/texttron/hyde
From the abstract:
Given a query, HyDE first zero-shot instructs an instruction-following language model (e.g. InstructGPT) to generate a hypothetical document. The document captures relevance patterns but is unreal and may contain false details. Then, an unsupervised contrastively learned encoder (e.g. Contriever) encodes the document into an embedding vector. This vector identifies a neighborhood in the corpus embedding space, where similar real documents are retrieved based on vector similarity. This second step grounds the generated document to the actual corpus, with the encoder's dense bottleneck filtering out the incorrect details.
- Meet HyDE: An Effective Fully Zero-Shot Dense Retrieval System That Requires No Relevance Supervision, Works Out-of-the-Box, And Generalizes Across Tasks
Quick Read: https://www.marktechpost.com/2023/01/23/meet-hyde-an-effective-fully-zero-shot-dense-retrieval-systems-that-require-no-relevance-supervision-works-out-of-box-and-generalize-across-tasks/
Paper: https://arxiv.org/pdf/2212.10496.pdf
Github: https://github.com/texttron/hyde
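The two steps in the abstract quoted above can be sketched as follows. The `generate_hypothetical_doc` and `encode` functions are toy stand-ins for InstructGPT and Contriever (a hash-based bag-of-words embedding instead of a real dense encoder), so this illustrates the pipeline shape, not the paper's actual models:

```python
import zlib
import numpy as np

corpus = [
    "LoRA fine-tunes large language models with low-rank adapters.",
    "HyDE retrieves documents via hypothetical document embeddings.",
    "Contriever is an unsupervised, contrastively trained dense encoder.",
]

def encode(text):
    # Stand-in for a dense encoder such as Contriever: a deterministic
    # bag-of-words hash embedding, L2-normalized.
    vec = np.zeros(64)
    for tok in text.lower().split():
        tok = tok.strip(".,?!:'\"")
        vec[zlib.crc32(tok.encode()) % 64] += 1.0
    return vec / (np.linalg.norm(vec) + 1e-9)

def generate_hypothetical_doc(query):
    # Stand-in for the instruction-following LLM: HyDE zero-shot
    # prompts a model to write a passage answering the query, which
    # may contain false details.
    return (f"A passage answering '{query}': "
            "hypothetical document embeddings retrieval")

def hyde_search(query, k=1):
    # Step 1: generate a hypothetical (possibly inaccurate) document.
    hypo = generate_hypothetical_doc(query)
    # Step 2: embed it and retrieve the nearest *real* documents,
    # grounding the generation in the actual corpus.
    q_vec = encode(hypo)
    doc_vecs = np.stack([encode(d) for d in corpus])
    scores = doc_vecs @ q_vec
    return [corpus[i] for i in np.argsort(-scores)[:k]]

print(hyde_search("what are hypothetical document embeddings?"))
```

The key point is that the hypothetical document is never returned to the user; it only serves as a richer query vector, and the retrieved neighbors are always real corpus documents.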
What are some alternatives?
ragas - Evaluation framework for your Retrieval Augmented Generation (RAG) pipelines
ReAct - [ICLR 2023] ReAct: Synergizing Reasoning and Acting in Language Models
lora-instruct - Finetune Falcon, LLaMA, MPT, and RedPajama on consumer hardware using PEFT LoRA
DeepLearningExamples - State-of-the-Art Deep Learning scripts organized by models - easy to train and deploy with reproducible accuracy and performance on enterprise-grade infrastructure.
jupyter-notebook-chatcompletion - Jupyter Notebook ChatCompletion is a VSCode extension that brings the power of OpenAI's ChatCompletion API to your Jupyter Notebooks!
llama2-haystack - Using Llama2 with Haystack, the NLP/LLM framework.
FinGPT - FinGPT: Open-Source Financial Large Language Models! Revolutionize 🔥 We release the trained model on HuggingFace.
gpt-j-fine-tuning-example - Fine-tuning 6-Billion GPT-J (& other models) with LoRA and 8-bit compression
llm-search - Querying local documents, powered by LLM
alpaca-lora - Instruct-tune LLaMA on consumer hardware
Anima - 33B Chinese LLM, DPO QLORA, 100K context, AirLLM 70B inference with single 4GB GPU