hierarchical-domain-adaptation vs LLM-Adapters

| | hierarchical-domain-adaptation | LLM-Adapters |
|---|---|---|
| Mentions | 1 | 2 |
| Stars | 32 | 950 |
| Growth (stars/month) | - | 3.1% |
| Activity | 3.0 | 7.3 |
| Latest commit | 8 months ago | 2 months ago |
| Language | Python | Python |
| License | - | Apache License 2.0 |
Stars - the number of stars a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
hierarchical-domain-adaptation
- AI2 Introduces Efficient Hierarchical Domain Adaptation for Pretrained Language Models
  Github: https://github.com/alexandra-chron/hierarchical-domain-adaptation
LLM-Adapters
- Google DeepMind CEO Says Some Form of AGI Possible in a Few Years
  That is not true; you can, for example, fine-tune with an additional adapter, which costs about $50 and an hour of compute. https://github.com/AGI-Edgerunners/LLM-Adapters
- LLM-Adapters: An Adapter Family for Parameter-Efficient Fine-Tuning of LLMs
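The "$50 and an hour" claim rests on how few parameters an adapter actually trains. A back-of-the-envelope sketch of that arithmetic, assuming illustrative LLaMA-7B-like dimensions (hidden size 4096, 32 layers) and a LoRA-style rank-8 adapter on two weight matrices per layer — these figures are my own assumptions, not numbers from the LLM-Adapters repo:

```python
# Back-of-the-envelope: trainable parameters for a LoRA-style adapter
# versus full fine-tuning of a ~7B-parameter model (illustrative figures).

def lora_params(d_model: int, n_layers: int, rank: int,
                n_target_matrices: int = 2) -> int:
    """Each adapted d x d weight matrix gains two low-rank factors,
    A (d x r) and B (r x d), i.e. 2 * d * r extra parameters."""
    return n_layers * n_target_matrices * 2 * d_model * rank

full_model = 7_000_000_000                        # ~7B params, full fine-tuning
adapter = lora_params(d_model=4096, n_layers=32, rank=8)

print(f"adapter params: {adapter:,}")             # a few million parameters
print(f"fraction of full model: {adapter / full_model:.4%}")  # well under 0.1%
```

Training only this sliver of parameters (with the base model frozen) is what makes a single consumer GPU and an hour of compute plausible, which is the point the comment above is making.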
What are some alternatives?
pytorch-adapt - Domain adaptation made easy. Fully featured, modular, and customizable.
TencentPretrain - Tencent Pre-training framework in PyTorch & Pre-trained Model Zoo
pykale - Knowledge-Aware machine LEarning (KALE): accessible machine learning from multiple sources for interdisciplinary research, part of the 🔥PyTorch ecosystem. ⭐ Star to support our work!
discus - A data-centric AI package for ML/AI. Get the best high-quality data for the best results. Discord: https://discord.gg/t6ADqBKrdZ
custom-diffusion - Custom Diffusion: Multi-Concept Customization of Text-to-Image Diffusion (CVPR 2023)
AGIEval
adapters - A Unified Library for Parameter-Efficient and Modular Transfer Learning
LLM-Finetuning-Hub - Toolkit for fine-tuning, ablating and unit-testing open-source LLMs. [Moved to: https://github.com/georgian-io/LLM-Finetuning-Toolkit]
trankit - Trankit is a Light-Weight Transformer-based Python Toolkit for Multilingual Natural Language Processing
finetuner - :dart: Task-oriented embedding tuning for BERT, CLIP, etc.
VL_adapter - PyTorch code for "VL-Adapter: Parameter-Efficient Transfer Learning for Vision-and-Language Tasks" (CVPR2022)
xTuring - Build, customize and control your own LLMs. From data pre-processing to fine-tuning, xTuring provides an easy way to personalize open-source LLMs. Join our discord community: https://discord.gg/TgHXuSJEk6