Python few-shot-learning

Open-source Python projects categorized as few-shot-learning

Top 16 Python few-shot-learning Projects

  • transferlearning

    Transfer learning / domain adaptation / domain generalization / multi-task learning, etc. Papers, code, datasets, applications, and tutorials. (迁移学习: transfer learning)

    Project mention: [D] Medium Article: Adaptive Learning for Time Series Forecasting | /r/MachineLearning | 2022-10-02

    The source is available at https://github.com/jindongwang/transferlearning. I'll also publish a post on how to code the model for time series.

  • Awesome-Prompt-Engineering

    This repository contains hand-curated resources for prompt engineering, with a focus on Generative Pre-trained Transformer (GPT), ChatGPT, PaLM, etc.

    Project mention: AI lessons | /r/ChatGPT | 2023-05-09

    Yes, there are a lot of different resources online, especially for generative AI. The Awesome Prompt Engineering github is probably a good place to start https://github.com/promptslab/Awesome-Prompt-Engineering. If you're focusing directly on OpenAI's models then the OpenAI Prompt Engineering Guide would be my recommendation https://help.openai.com/en/articles/6654000-best-practices-for-prompt-engineering-with-openai-api.
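
    Few-shot prompting is the simplest form of few-shot learning with large language models: a handful of worked examples is placed in the prompt ahead of the new input. A minimal sketch of that pattern follows; the sentiment examples and the build_few_shot_prompt helper are purely illustrative and not taken from any of the resources above.

      # Build a k-shot prompt by prepending worked examples to the new query.
      # The sentiment examples and helper name are purely illustrative.
      EXAMPLES = [
          ("I loved this film, watched it twice!", "positive"),
          ("The plot was dull and the acting worse.", "negative"),
          ("An absolute masterpiece of modern cinema.", "positive"),
      ]

      def build_few_shot_prompt(query: str, examples=EXAMPLES) -> str:
          shots = "\n\n".join(
              f"Review: {text}\nSentiment: {label}" for text, label in examples
          )
          return f"{shots}\n\nReview: {query}\nSentiment:"

      if __name__ == "__main__":
          print(build_few_shot_prompt("Not my cup of tea, sadly."))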

  • FSL-Mate

    FSL-Mate: A collection of resources for few-shot learning (FSL).

  • finetuner

    🎯 Task-oriented finetuning for better embeddings on neural search

    Project mention: How do you think search will change with technology like ChatGPT, Bing’s new AI search engine and the upcoming Google Bard? | /r/singularity | 2023-02-21

    And all of that has something to do with finetuners. Finetuner basically fine-tunes AI models for specific use cases; with it, a team can create a custom search experience tailored to its specific needs. I also wonder how this is going to be integrated into SEO tools soon, since those tools cater to traditional search engines.
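
    The core idea behind task-oriented finetuning of embeddings is to pull matching query/document pairs together in vector space so retrieval improves for one specific application. The sketch below illustrates that idea with the sentence-transformers library rather than Finetuner's own API, and the training pairs are toy data.

      # Illustrative embedding fine-tuning with sentence-transformers
      # (a generic sketch of the concept, not Finetuner's API).
      from torch.utils.data import DataLoader
      from sentence_transformers import SentenceTransformer, InputExample, losses

      model = SentenceTransformer("all-MiniLM-L6-v2")

      # Toy query/relevant-document pairs; a real run needs domain-specific data.
      train_examples = [
          InputExample(texts=["how to reset my password", "Password reset instructions"]),
          InputExample(texts=["refund policy", "Returns and refunds FAQ"]),
      ]
      train_loader = DataLoader(train_examples, shuffle=True, batch_size=2)

      # In-batch negatives: each query is pulled toward its paired document
      # and pushed away from the other documents in the batch.
      loss = losses.MultipleNegativesRankingLoss(model)

      model.fit(train_objectives=[(train_loader, loss)], epochs=1, warmup_steps=10)
      model.save("finetuned-search-embeddings")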

  • mmfewshot

    OpenMMLab FewShot Learning Toolbox and Benchmark

    Project mention: MMDeploy: Deploy All the Algorithms of OpenMMLab | /r/u_Allent_pjlab | 2022-11-21

    MMFewShot: OpenMMLab fewshot learning toolbox and benchmark.

  • test

    Measuring Massive Multitask Language Understanding | ICLR 2021 (by hendrycks)

    Project mention: [Colab Notebook] Launch quantized MPT-30B-Chat on Vast.ai using text-generation-inference, integrated with ConversationChain | /r/LangChain | 2023-07-09

    One method for comparison is the MMLU https://arxiv.org/abs/2009.03300.
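
    MMLU evaluates a model by prepending a few worked multiple-choice questions and asking it to answer a new one; the prediction is whichever choice letter the model scores highest. A rough sketch of that protocol follows, with a hypothetical choice_logprob function standing in for an actual model or API call.

      # MMLU-style k-shot multiple-choice evaluation (sketch).
      # `choice_logprob` is a hypothetical stand-in for a model/API call that
      # returns the log-probability of `answer` continuing `prompt`.
      CHOICES = ["A", "B", "C", "D"]

      def format_question(q: dict, with_answer: bool) -> str:
          lines = [q["question"]]
          lines += [f"{letter}. {opt}" for letter, opt in zip(CHOICES, q["options"])]
          lines.append(f"Answer: {q['answer'] if with_answer else ''}".rstrip())
          return "\n".join(lines)

      def evaluate(dev_shots: list, test_questions: list, choice_logprob) -> float:
          header = "\n\n".join(format_question(q, with_answer=True) for q in dev_shots)
          correct = 0
          for q in test_questions:
              prompt = f"{header}\n\n{format_question(q, with_answer=False)}"
              pred = max(CHOICES, key=lambda letter: choice_logprob(prompt, " " + letter))
              correct += pred == q["answer"]
          return correct / len(test_questions)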

  • pal

    PaL: Program-Aided Language Models (ICML 2023) (by reasoning-machines)

    Project mention: Prompt Engineering Guide: Guides, papers, and resources for prompt engineering | news.ycombinator.com | 2023-02-21

    Using the terminology that I'm working with, this is an example of a second-order analytic augmentation!

    Here's another approach to second-order analytic augmentation, PAL: https://reasonwithpal.com

    And third-order, Toolformer: https://arxiv.org/abs/2302.04761

    The difference isn't in what is going on but rather with framing the approach within the analytic-synthetic distinction developed by Kant and the analytic philosophers who were influenced by his work. There's a dash of functional programming thrown in for good measure!

    I have scribbled on a print-out of the article on my desk:

      Nth Order
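
    For context, the mechanism behind PAL is straightforward: rather than asking the model for a final answer, the few-shot prompt asks it to emit a short Python program, and an interpreter runs that program to produce the answer. A minimal sketch is below; the generate argument is a hypothetical stand-in for an LLM completion call, and the fake_generate stub only demonstrates the control flow.

      # Program-aided reasoning (PAL-style) sketch.
      # `generate` is a hypothetical stand-in for an LLM completion call that
      # returns Python code ending with a variable named `answer`.
      FEW_SHOT_PROMPT = """Q: Alice has 3 apples and buys 4 more. How many apples does she have?
      # solution in Python
      apples = 3 + 4
      answer = apples

      Q: {question}
      # solution in Python
      """

      def solve_with_pal(question: str, generate) -> object:
          code = generate(FEW_SHOT_PROMPT.format(question=question))
          namespace: dict = {}
          exec(code, namespace)          # run the model-written program
          return namespace["answer"]     # the program is expected to set `answer`

      if __name__ == "__main__":
          # Fake "model" that always returns the same program, just to show the flow.
          fake_generate = lambda prompt: "answer = (12 - 5) * 2"
          print(solve_with_pal("Bob had 12 pens, lost 5, then doubled what was left.", fake_generate))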

  • HugNLP

    HugNLP is a unified and comprehensive NLP library based on HuggingFace Transformers. Start hugging for NLP now! 😊 (by HugAILab)

    Project mention: HugNLP: A Unified and Comprehensive Open-Source Library for NLP | news.ycombinator.com | 2023-05-03

  • self-refine

    LLMs can generate feedback on their work, use it to improve the output, and repeat this process iteratively.

    Project mention: ChemCrow: Augmenting large-language models with chemistry tools | news.ycombinator.com | 2023-04-17

    >the systems operation are well understood

    That's like saying human behavior is well understood because we know how neurons communicate signals. It's too low level to be useful, hence psychology.

    >They don't have the ability to reason or reflect.

    Yes they do

    https://selfrefine.info/

    https://arxiv.org/abs/2303.11366
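
    Self-Refine boils down to a generate-critique-revise loop: the model drafts an output, writes feedback on its own draft, then rewrites the draft conditioned on that feedback, repeating until the feedback signals it is done. A minimal sketch, with a hypothetical llm callable standing in for the model:

      # Self-refine loop (sketch). `llm` is a hypothetical prompt -> text callable.
      def self_refine(task: str, llm, max_iters: int = 3) -> str:
          output = llm(f"Complete the task:\n{task}")
          for _ in range(max_iters):
              feedback = llm(
                  f"Task:\n{task}\n\nDraft:\n{output}\n\n"
                  "Give concrete feedback, or reply DONE if the draft is already good."
              )
              if "DONE" in feedback:
                  break
              output = llm(
                  f"Task:\n{task}\n\nDraft:\n{output}\n\n"
                  f"Feedback:\n{feedback}\n\nRewrite the draft applying the feedback."
              )
          return output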

  • memprompt

    A method to fix GPT-3 after deployment with user feedback, without re-training.

    Project mention: Allen Institute for Artificial Intelligence Introduces MemPrompt: A New Method to “fix” GPT-3 After Deployment with User Interaction | /r/machinelearningnews | 2022-12-18

    Quick Read: https://www.marktechpost.com/2022/12/18/allen-institute-for-artificial-intelligence-introduces-memprompt-a-new-method-to-fix-gpt-3-after-deployment-with-user-interaction/
    Paper: https://arxiv.org/abs/2201.06009
    Code: https://github.com/madaan/memprompt
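
    The mechanism is a growing memory of user feedback: when a user corrects a misunderstanding, the correction is stored, and for later queries any relevant stored feedback is retrieved and prepended to the prompt, so behaviour improves without retraining. The sketch below uses naive keyword-overlap retrieval and a hypothetical llm callable; the class and helper names are illustrative, not the paper's code.

      # MemPrompt-style feedback memory (sketch). Retrieval here is naive keyword
      # overlap; the paper uses a learned retriever. `llm` is a hypothetical callable.
      class FeedbackMemory:
          def __init__(self):
              self.entries: list[tuple[str, str]] = []  # (query, user clarification)

          def add(self, query: str, clarification: str) -> None:
              self.entries.append((query, clarification))

          def retrieve(self, query: str) -> list[str]:
              words = set(query.lower().split())
              return [
                  clarification
                  for past_query, clarification in self.entries
                  if words & set(past_query.lower().split())
              ]

      def answer(query: str, memory: FeedbackMemory, llm) -> str:
          hints = memory.retrieve(query)
          prefix = "".join(f"Clarification: {h}\n" for h in hints)
          return llm(f"{prefix}Question: {query}\nAnswer:")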

  • zshot

    Zero- and few-shot named entity & relationship recognition

    Project mention: A transformer-based method for zero and few-shot biomedical NER | news.ycombinator.com | 2023-05-12

  • HugNLP

    HugNLP is a unified and comprehensive NLP library based on HuggingFace Transformers. Start hugging for NLP now! 😊 HugNLP will be released to @HugAILab

    Project mention: HugNLP: A Unified and Comprehensive Library for Natural Language Processing | news.ycombinator.com | 2023-04-13

  • deep-kernel-transfer

    Official pytorch implementation of the paper "Bayesian Meta-Learning for the Few-Shot Setting via Deep Kernels" (NeurIPS 2020)

    Project mention: What approach to take predicting a simple data stream? | /r/neuralnetworks | 2022-10-03

    Interesting approach to small datasets. Here is an implementation I'll look at: https://github.com/BayesWatch/deep-kernel-transfer
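
    Deep kernel transfer pairs a small neural feature extractor with a Gaussian-process kernel, letting the GP handle uncertainty on top of learned features, which is what makes it attractive for small datasets. Below is a condensed, generic illustration of a deep-kernel GP regressor in GPyTorch on toy data; it is not the repository's training code.

      # Deep-kernel GP regression with GPyTorch on toy data (generic illustration).
      import torch
      import gpytorch

      train_x = torch.linspace(0, 1, 30).unsqueeze(-1)
      train_y = torch.sin(train_x * 6.28).squeeze() + 0.05 * torch.randn(30)

      class DeepKernelGP(gpytorch.models.ExactGP):
          def __init__(self, x, y, likelihood):
              super().__init__(x, y, likelihood)
              # Small NN maps inputs to a latent space where the RBF kernel operates.
              self.extractor = torch.nn.Sequential(
                  torch.nn.Linear(1, 32), torch.nn.ReLU(), torch.nn.Linear(32, 2)
              )
              self.mean_module = gpytorch.means.ConstantMean()
              self.covar_module = gpytorch.kernels.ScaleKernel(gpytorch.kernels.RBFKernel())

          def forward(self, x):
              z = self.extractor(x)
              return gpytorch.distributions.MultivariateNormal(
                  self.mean_module(z), self.covar_module(z)
              )

      likelihood = gpytorch.likelihoods.GaussianLikelihood()
      model = DeepKernelGP(train_x, train_y, likelihood)
      model.train()
      likelihood.train()
      optimizer = torch.optim.Adam(model.parameters(), lr=0.01)
      mll = gpytorch.mlls.ExactMarginalLogLikelihood(likelihood, model)

      for _ in range(200):  # jointly fit NN weights and GP hyperparameters
          optimizer.zero_grad()
          loss = -mll(model(train_x), train_y)
          loss.backward()
          optimizer.step()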

  • TaiChi

    Open-source library for few-shot NLP (by salesforce)

  • ORBIT-Dataset

    The ORBIT dataset is a collection of videos of objects in clean and cluttered scenes, recorded on mobile phones by people who are blind or low-vision. The dataset is presented with a teachable object recognition benchmark task that aims to drive few-shot learning on challenging real-world data.

  • prompt-lib

    A set of utilities for running few-shot prompting experiments on large-language models

    Project mention: Using Da-Vinci-003 in a Jupyter Notebook | /r/OpenAI | 2022-12-02

    While it's a bit of overkill, prompt-lib provides a notebook to do this: https://github.com/reasoning-machines/prompt-lib/blob/main/notebooks/QueryOpenAI.ipynb
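
    For reference, the kind of request such a notebook wraps is a plain completion call against text-davinci-003. The snippet below assumes the legacy pre-1.0 openai Python client and an OPENAI_API_KEY environment variable; newer client versions expose a different interface.

      # Minimal few-shot completion request (assumes openai<1.0 and OPENAI_API_KEY set).
      import os
      import openai

      openai.api_key = os.environ["OPENAI_API_KEY"]

      prompt = (
          "Translate English to French.\n\n"
          "sea otter => loutre de mer\n"
          "cheese => fromage\n"
          "peppermint =>"
      )

      response = openai.Completion.create(
          model="text-davinci-003",
          prompt=prompt,
          max_tokens=16,
          temperature=0.0,
      )
      print(response["choices"][0]["text"].strip())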

  • Sonar

    Write Clean Python Code. Always.. Sonar helps you commit clean code every time. With over 225 unique rules to find Python bugs, code smells & vulnerabilities, Sonar finds the issues while you focus on the work.

NOTE: The open-source projects on this list are ordered by number of GitHub stars. The number of mentions indicates repo mentions in the last 12 months or since we started tracking (Dec 2020). The latest post mention was on 2023-07-09.

Index

What are some of the best open-source few-shot-learning projects in Python? This list will help you:

 #  Project                      Stars
 1  transferlearning            11,956
 2  Awesome-Prompt-Engineering   2,140
 3  FSL-Mate                     1,550
 4  finetuner                    1,187
 5  mmfewshot                      593
 6  test                           538
 7  pal                            368
 8  HugNLP                         332
 9  self-refine                    320
10  memprompt                      310
11  zshot                          262
12  HugNLP                         240
13  deep-kernel-transfer           183
14  TaiChi                          79
15  ORBIT-Dataset                   78
16  prompt-lib                      71