| | DeepKE | NaLLM |
|---|---|---|
| Mentions | 2 | 2 |
| Stars | 2,973 | 972 |
| Growth | 4.6% | 6.8% |
| Activity | 9.5 | 6.3 |
| Latest commit | 12 days ago | 7 days ago |
| Language | Python | TypeScript |
| License | MIT License | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
DeepKE
- Would this method work to increase the memory of the model? Saving summaries generated by a 2nd model and injecting them depending on the current topic.
- How to Unleash the Power of Large Language Models for Few-shot Relation Extraction?
Scaling language models has revolutionized widespread NLP tasks, yet few works have comprehensively explored few-shot relation extraction with large language models. In this paper, we investigate the principal methodologies, in-context learning and data generation, for few-shot relation extraction via GPT-3.5 through exhaustive experiments. To enhance few-shot performance, we further propose task-related instructions and schema-constrained data generation. We observe that in-context learning can achieve performance on par with previous prompt learning approaches, and that data generation with the large language model can boost previous solutions to obtain new state-of-the-art few-shot results on four widely studied relation extraction datasets. We hope our work can inspire future research on the capabilities of large language models in few-shot relation extraction. Code is available at https://github.com/zjunlp/DeepKE/tree/main/example/llm.
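The in-context learning setup the abstract describes — a task-related instruction plus a few labeled demonstrations, then the query — can be illustrated with a minimal prompt-construction sketch. The relation labels, demonstrations, and marker format below are illustrative assumptions, not taken from the paper; the actual prompts live in the linked repository.

```python
# Sketch of in-context learning for few-shot relation extraction:
# a task-related instruction, a few labeled demonstrations, then the query.
# Relation schema and examples are hypothetical, for illustration only.

RELATIONS = ["founded_by", "headquartered_in", "no_relation"]

INSTRUCTION = (
    "Classify the relation between the two marked entities. "
    f"Choose one of: {', '.join(RELATIONS)}."
)

DEMOS = [
    ("[E1] Apple [/E1] was founded by [E2] Steve Jobs [/E2].", "founded_by"),
    ("[E1] Mozilla [/E1] is based in [E2] San Francisco [/E2].", "headquartered_in"),
]

def build_prompt(query_sentence: str) -> str:
    """Assemble instruction + demonstrations + query into one prompt string."""
    lines = [INSTRUCTION, ""]
    for sentence, label in DEMOS:
        lines.append(f"Sentence: {sentence}\nRelation: {label}\n")
    lines.append(f"Sentence: {query_sentence}\nRelation:")
    return "\n".join(lines)

prompt = build_prompt("[E1] Amazon [/E1] was started by [E2] Jeff Bezos [/E2].")
print(prompt)
```

The resulting string would be sent to GPT-3.5 (or another LLM), and the completion parsed as the predicted relation label.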
NaLLM
- RAG Using Unstructured Data and Role of Knowledge Graphs
The article is a good summary of RAG in the enterprise. It shed some light for me on the quality of building knowledge graphs with LLMs, an approach Neo4j has recently been proposing [0].
According to the article, it is either costly (if using OpenAI) or slow (if using open-source models), and in both cases it is hard to predict the quality of the generated KG.
[0] https://github.com/neo4j/NaLLM
- Would this method work to increase the memory of the model? Saving summaries generated by a 2nd model and injecting them depending on the current topic.
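The summary-injection idea in the comment above can be sketched as a small topic-keyed memory store: a second model writes summaries, and before each turn the summaries whose topics overlap the current message are injected into the prompt. Everything here is an assumption for illustration — the naive word-overlap retrieval stands in for whatever similarity measure (e.g. embeddings) a real system would use.

```python
# Sketch of topic-conditioned memory: store summaries produced by a second
# model, then inject the relevant ones based on the current message.
# Retrieval is naive word overlap; a real system might use embeddings.

class SummaryMemory:
    def __init__(self):
        self.store = {}  # topic -> summary text

    def save(self, topic: str, summary: str) -> None:
        self.store[topic] = summary

    def relevant(self, message: str):
        """Return summaries whose topic shares at least one word with the message."""
        words = set(message.lower().split())
        return [s for topic, s in self.store.items()
                if set(topic.lower().split()) & words]

    def inject(self, message: str) -> str:
        """Prepend matching summaries to the user message; pass through if none match."""
        context = self.relevant(message)
        if not context:
            return message
        return "Relevant context:\n" + "\n".join(context) + "\n\nUser: " + message

memory = SummaryMemory()
# Summaries would come from a second summarizer model; hard-coded here.
memory.save("knowledge graphs", "We discussed building KGs from text with LLMs.")
memory.save("cooking", "User prefers vegetarian recipes.")

prompt = memory.inject("Can we continue on knowledge graphs?")
print(prompt)
```

Unrelated summaries (the cooking note) stay out of the prompt, which is the point of conditioning injection on the current topic rather than replaying the whole history.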
What are some alternatives?
llama_farm - Use local llama LLM or openai to chat, discuss/summarize your documents, youtube videos, and so on.
ai_llm_kb_sandbox - Investigating the use of LLMs to populate knowledge graphs (KGs) and then use the KGs with predictive models
GoLLIE - Guideline following Large Language Model for Information Extraction
OpenNRE - An Open-Source Package for Neural Relation Extraction (NRE)
txtai - 💡 All-in-one open-source embeddings database for semantic search, LLM orchestration and language model workflows
zshot - Zero and Few shot named entity & relationships recognition
llm-experiments - Experiments using ChatGPT, Jupyter, and rdflib for distributed knowledge graph construction
ARElight - Granular Viewer of Sentiments Between Entities in Massively Large Documents and Collections of Texts, powered by AREkit
SillyTavern-Extras - Extensions API for SillyTavern.
VLDet - [ICLR 2023] PyTorch implementation of VLDet (https://arxiv.org/abs/2211.14843)