| | DeepKE | llama_farm |
|---|---|---|
| Mentions | 2 | 17 |
| Stars | 2,973 | 141 |
| Growth | 4.6% | - |
| Activity | 9.5 | 6.7 |
| Latest commit | 14 days ago | 17 days ago |
| Language | Python | Hy |
| License | MIT License | GNU Affero General Public License v3.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
DeepKE
- Would this method work to increase the model's memory: saving summaries generated by a second model and injecting them depending on the current topic?
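A minimal sketch of the idea in the question above: keep summaries produced by a second model keyed by topic, and inject the most relevant one into the prompt. The class and method names (`TopicMemory`, `inject`) and the word-overlap matching are illustrative assumptions, not how any particular tool implements it.

```python
class TopicMemory:
    """Stores per-topic summaries and injects the best match into a prompt."""

    def __init__(self):
        self.summaries = {}  # topic -> summary text

    def save(self, topic, summary):
        self.summaries[topic] = summary

    def best_match(self, message):
        """Pick the stored topic with the most word overlap with the message."""
        words = set(message.lower().split())
        scored = [
            (len(words & set(topic.lower().split())), topic)
            for topic in self.summaries
        ]
        score, topic = max(scored, default=(0, None))
        return self.summaries.get(topic) if score > 0 else None

    def inject(self, message):
        """Prepend the relevant stored summary (if any) to the user message."""
        summary = self.best_match(message)
        if summary is None:
            return message
        return f"Context from earlier conversation:\n{summary}\n\n{message}"
```

A real implementation would match topics with embeddings rather than word overlap, but the control flow (save, retrieve, prepend) is the same.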
-
How to Unleash the Power of Large Language Models for Few-shot Relation Extraction?
Scaling language models has revolutionized a wide range of NLP tasks, yet few studies have comprehensively explored few-shot relation extraction with large language models. In this paper, we investigate two principal methodologies, in-context learning and data generation, for few-shot relation extraction via GPT-3.5 through exhaustive experiments. To enhance few-shot performance, we further propose task-related instructions and schema-constrained data generation. We observe that in-context learning can achieve performance on par with previous prompt-learning approaches, and that data generation with the large language model can boost previous solutions to obtain new state-of-the-art few-shot results on four widely studied relation extraction datasets. We hope our work can inspire future research on the capabilities of large language models in few-shot relation extraction. Code is available at https://github.com/zjunlp/DeepKE/tree/main/example/llm.
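A minimal sketch of in-context learning for relation extraction in the spirit of the abstract above: a task instruction plus a few labeled demonstrations, ending with the test instance for the model to complete. The instruction wording and the example data are illustrative, not taken from the paper.

```python
def build_re_prompt(demos, head, tail, sentence):
    """Build a few-shot relation-extraction prompt.

    demos: list of (sentence, head, tail, relation) tuples used as
    in-context demonstrations.
    """
    lines = [
        "Determine the relation between the head and tail entities "
        "in each sentence."
    ]
    for s, h, t, rel in demos:
        lines.append(f"Sentence: {s}\nHead: {h}\nTail: {t}\nRelation: {rel}")
    # The test instance ends with an open "Relation:" slot for the LLM to fill.
    lines.append(f"Sentence: {sentence}\nHead: {head}\nTail: {tail}\nRelation:")
    return "\n\n".join(lines)


prompt = build_re_prompt(
    demos=[("Paris is the capital of France.", "Paris", "France",
            "capital_of")],
    head="Berlin",
    tail="Germany",
    sentence="Berlin is the capital of Germany.",
)
```

The resulting string would be sent to an LLM such as GPT-3.5; the model's completion after the final "Relation:" is the predicted label.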
llama_farm
-
How to overcome the ~4,000-token-per-input limit when summarizing documents?
I do it recursively: https://github.com/atisharma/llama_farm/blob/main/llama_farm/summaries.hy
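A sketch of the recursive approach mentioned above: split the text into chunks that fit the context window, summarize each chunk, join the partial summaries, and recurse until the result fits. The real implementation in llama_farm's summaries.hy calls an LLM; here `summarize` is a stand-in that keeps the first sentence of its input, just to show the control flow.

```python
def summarize(text):
    """Stand-in for an LLM summarizer: keep the first sentence."""
    return text.split(". ")[0].rstrip(".") + "."


def chunk(text, size):
    """Split text into fixed-size character chunks."""
    return [text[i:i + size] for i in range(0, len(text), size)]


def recursive_summarize(text, limit=200, chunk_size=100):
    """Summarize chunks, join the partial summaries, and recurse
    until the result fits under `limit` characters."""
    if len(text) <= limit:
        return text
    partials = [summarize(c) for c in chunk(text, chunk_size)]
    combined = " ".join(partials)
    if len(combined) >= len(text):  # summarizer made no progress; stop
        return combined[:limit]
    return recursive_summarize(combined, limit, chunk_size)
```

With a real LLM, `limit` and `chunk_size` would be set in tokens (below the model's context window) rather than characters, and chunking would respect sentence or paragraph boundaries.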
-
Ask HN: Is SICP/HtDP still worth reading in 2023? Any alternatives?
It's funny that you asked that and then someone posted an app that's written almost entirely in the Hy language. I'm just sharing it so you have one example:
https://github.com/atisharma/llama_farm/tree/main
AIs have limited ability to handle large documents or track long conversations. This tool is an attempt to solve that problem. It works with OpenAI and open-source models.
-
Langchain Youtube Summarizer with Oooba api Custom LLM wrapper (and kobold)
Then you might like https://github.com/atisharma/llama_farm
-
What is the best way to create a knowledge-base-specific LLM chatbot?
I use this.
-
Is anyone doing always-on voice to text with a local llama at home?
Bark and another one I forgot. See this for an example implementation.
- Request for comment / contribution - local AI tool (Hy)
-
Anything like ChatGPT that we can run ourselves and train with our own data, so we can use it as a personal assistant that knows us better than we know ourselves?
This is what I use.
-
balacoon_tts: Fastest neural TTS on Raspberry
It's now incorporated in llama-farm.
- A local model for summarizing articles
- Story writing concept
What are some alternatives?
GoLLIE - Guideline following Large Language Model for Information Extraction
h2ogpt - Private chat with local GPT with document, images, video, etc. 100% private, Apache 2.0. Supports oLLaMa, Mixtral, llama.cpp, and more. Demo: https://gpt.h2o.ai/ https://codellama.h2o.ai/
OpenNRE - An Open-Source Package for Neural Relation Extraction (NRE)
SillyTavern-Extras - Extensions API for SillyTavern.
zshot - Zero and Few shot named entity & relationships recognition
vllm - A high-throughput and memory-efficient inference and serving engine for LLMs
NaLLM - Repository for the NaLLM project
gorilla - Gorilla: An API store for LLMs
ARElight - Granular Viewer of Sentiments Between Entities in Massively Large Documents and Collections of Texts, powered by AREkit
ue5-llama-lora - A proof-of-concept project that showcases the potential for using small, locally trainable LLMs to create next-generation documentation tools.
VLDet - [ICLR 2023] PyTorch implementation of VLDet (https://arxiv.org/abs/2211.14843)
talk - Let's make sand talk