| | DeepKE | VLDet |
|---|---|---|
| Mentions | 2 | 1 |
| Stars | 2,973 | 170 |
| Growth | 4.6% | - |
| Activity | 9.5 | 3.1 |
| Latest commit | 13 days ago | about 2 months ago |
| Language | Python | Python |
| License | MIT License | GNU General Public License v3.0 or later |
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
DeepKE
- Would this method work to increase the memory of the model? Saving summaries generated by a 2nd model and injecting them depending on the current topic.
- How to Unleash the Power of Large Language Models for Few-shot Relation Extraction?
Scaling language models has revolutionized a wide range of NLP tasks, yet few-shot relation extraction with large language models has received little comprehensive exploration. In this paper, we investigate the principal methodologies, in-context learning and data generation, for few-shot relation extraction via GPT-3.5 through exhaustive experiments. To enhance few-shot performance, we further propose task-related instructions and schema-constrained data generation. We observe that in-context learning can achieve performance on par with previous prompt learning approaches, and that data generation with the large language model can boost previous solutions to obtain new state-of-the-art few-shot results on four widely studied relation extraction datasets. We hope our work can inspire future research into the capabilities of large language models for few-shot relation extraction. Code is available at \url{https://github.com/zjunlp/DeepKE/tree/main/example/llm}.
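The in-context learning setup the abstract describes can be sketched as a prompt builder: a task instruction, a handful of labeled demonstrations, and the query instance. This is a minimal illustration only; the instruction wording, example triples, and prompt layout below are assumptions, not the authors' exact prompts.

```python
# Sketch of in-context learning for few-shot relation extraction:
# assemble a task instruction, k labeled demonstrations, and the query.
# All wording and example data here are illustrative assumptions.

def build_re_prompt(demos, query, relations):
    """Build a few-shot relation-extraction prompt for an LLM."""
    lines = [
        "Task: given a sentence and an entity pair, choose the relation "
        f"between them from: {', '.join(relations)}.",
        "",
    ]
    # Labeled demonstrations (the "few shots").
    for sent, head, tail, rel in demos:
        lines += [
            f"Sentence: {sent}",
            f"Entities: ({head}, {tail})",
            f"Relation: {rel}",
            "",
        ]
    # Query instance: the model completes the final "Relation:" line.
    sent, head, tail = query
    lines += [
        f"Sentence: {sent}",
        f"Entities: ({head}, {tail})",
        "Relation:",
    ]
    return "\n".join(lines)

demos = [
    ("Marie Curie was born in Warsaw.", "Marie Curie", "Warsaw", "place_of_birth"),
    ("Tim Cook is the CEO of Apple.", "Tim Cook", "Apple", "employer"),
]
prompt = build_re_prompt(
    demos,
    ("Ada Lovelace worked with Charles Babbage.", "Ada Lovelace", "Charles Babbage"),
    ["place_of_birth", "employer", "colleague", "no_relation"],
)
print(prompt)
```

The resulting string would then be sent to the LLM (e.g. GPT-3.5), whose completion of the final `Relation:` line is taken as the prediction.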
VLDet
- [R] [ICLR'2023🌟]: Vision-and-Language Framework for Open-Vocabulary Object Detection
We're excited to share our latest work, "Learning Object-Language Alignments for Open-Vocabulary Object Detection", which was accepted to ICLR 2023. Here are some resources:

arxiv paper: https://arxiv.org/abs/2211.14843
github: https://github.com/clin1223/VLDet

The proposed method, **VLDet**, is a simple yet effective vision-and-language framework for open-vocabulary object detection. Our key efforts are:

🔥 We introduce an open-vocabulary object detection method that learns object-language alignments directly from image-text pair data.
🔥 We formulate region-word alignments as a set-matching problem and solve it efficiently with the Hungarian algorithm.
🔥 We use all nouns from image-text pairs as our object vocabulary, strictly following the open-vocabulary setting. Extensive experiments on two benchmark datasets, COCO and LVIS, demonstrate our superior performance.
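The set-matching step the post describes can be sketched with SciPy's Hungarian-algorithm solver: build a region-word similarity matrix and find the one-to-one assignment that maximizes total similarity. The random embeddings below are stand-ins; in VLDet they would come from the detector's region features and the text encoder's noun embeddings.

```python
# Sketch of region-word set matching via the Hungarian algorithm.
# Random features stand in for real region/word embeddings.
import numpy as np
from scipy.optimize import linear_sum_assignment

rng = np.random.default_rng(0)
num_regions, num_words, dim = 5, 3, 16

regions = rng.normal(size=(num_regions, dim))  # stand-in region features
words = rng.normal(size=(num_words, dim))      # stand-in noun embeddings

# Similarity matrix between every region and every word.
sim = regions @ words.T

# linear_sum_assignment minimizes cost, so negate similarity to maximize it.
row_idx, col_idx = linear_sum_assignment(-sim)

for r, w in zip(row_idx, col_idx):
    print(f"region {r} <-> word {w} (score {sim[r, w]:.2f})")
```

Each caption noun ends up paired with exactly one region, which is the one-to-one bipartite matching the paper's set-matching formulation requires.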
What are some alternatives?
llama_farm - Use local llama LLM or openai to chat, discuss/summarize your documents, youtube videos, and so on.
CLIP-Caption-Reward - PyTorch code for "Fine-grained Image Captioning with CLIP Reward" (Findings of NAACL 2022)
GoLLIE - Guideline following Large Language Model for Information Extraction
robo-vln - PyTorch code for ICRA'21 paper: "Hierarchical Cross-Modal Agent for Robotics Vision-and-Language Navigation"
OpenNRE - An Open-Source Package for Neural Relation Extraction (NRE)
VL_adapter - PyTorch code for "VL-Adapter: Parameter-Efficient Transfer Learning for Vision-and-Language Tasks" (CVPR2022)
zshot - Zero- and few-shot named entity & relationship recognition
DDNM - [ICLR 2023 Oral] Zero-Shot Image Restoration Using Denoising Diffusion Null-Space Model
NaLLM - Repository for the NaLLM project
OASIS - Official implementation of the paper "You Only Need Adversarial Supervision for Semantic Image Synthesis" (ICLR 2021)
ARElight - Granular Viewer of Sentiments Between Entities in Massively Large Documents and Collections of Texts, powered by AREkit
mmdetection - OpenMMLab Detection Toolbox and Benchmark