| | lm-scorer | LMOps |
|---|---|---|
| Mentions | 4 | 6 |
| Stars | 294 | 3,186 |
| Growth | - | 3.0% |
| Activity | 0.0 | 8.1 |
| Last commit | about 2 years ago | 17 days ago |
| Language | Python | Python |
| License | MIT License | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
lm-scorer
- How to obtain probability for entire sequence (Huggingface transformers)
- MLM vs CLM for actual language modeling
I've tried this once and found the CLM score to be a better indicator than BERT log prob for my use-case. For CLM, I had used lm-scorer.
- "simonepri/lm-scorer: Language Model based sentences scoring library" ("This package provides a simple programming interface to score sentences using different ML language models.")
- Whole sentence rather than word frequency nltk?
As in, how generally would a sentence make sense in the totality of English? You could look into language models that give probability of a sentence. You can try a library called lm-scorer.
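The CLM-style scoring that lm-scorer performs boils down to the chain rule: a sentence's log probability is the sum of each token's conditional log probability given the preceding tokens. The sketch below illustrates that arithmetic with made-up toy distributions standing in for a real causal model such as GPT-2; the token names and probability values are purely illustrative, not lm-scorer's actual API.

```python
import math

# Toy next-token distributions standing in for a causal LM such as GPT-2.
# In practice these probabilities come from the model; the values here are
# invented solely to illustrate the chain-rule arithmetic.
next_token_probs = {
    ("<s>",): {"the": 0.4, "dog": 0.1},
    ("<s>", "the"): {"dog": 0.3, "cat": 0.2},
    ("<s>", "the", "dog"): {"barks": 0.25, "runs": 0.15},
}

def sentence_log_prob(tokens):
    """Score a sentence via the chain rule: sum of log P(token | prefix)."""
    prefix = ("<s>",)
    total = 0.0
    for token in tokens:
        total += math.log(next_token_probs[prefix][token])
        prefix = prefix + (token,)
    return total

score = sentence_log_prob(["the", "dog", "barks"])
print(round(score, 4))  # log(0.4 * 0.3 * 0.25) ≈ -3.5066
```

A library like lm-scorer wraps exactly this computation around a pretrained model's real per-token probabilities, typically exposing the result as a summed (or averaged) log probability per sentence.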
LMOps
- Has anyone found a way to use Microsoft's Promptist in Stable Diffusion?
Microsoft released an open-sourced prompt optimizer. Has anyone used https://github.com/microsoft/LMOps for prompt optimization? They have it in this demo: https://huggingface.co/spaces/microsoft/Promptist, I'm not sure how to use it for 1.5.
- Microsoft Open Source LMOps: An AI Prompt Optimization Toolkit For Generative AI Models
Quick Read: https://www.marktechpost.com/2023/02/14/microsoft-open-source-lmops-an-ai-prompt-optimization-toolkit-for-generative-ai-models/ Paper: https://arxiv.org/pdf/2212.09611.pdf Github: https://github.com/microsoft/LMOps
- General technology for enabling AI capabilities with LLMs and Generative models
- microsoft/LMOps: General technology for enabling AI capabilities w/ LLMs and Generative AI models
- Microsoft Promptist: Optimising Stable Diffusion prompts via a language model fine-tuned with reinforcement learning
Project Page: https://github.com/microsoft/LMOps/tree/main/promptist
What are some alternatives?
transformers - 🤗 Transformers: State-of-the-art Machine Learning for PyTorch, TensorFlow, and JAX.
langchain - ⚡ Building applications with LLMs through composability ⚡ [Moved to: https://github.com/langchain-ai/langchain]
penney - Penney's Game
langchain - 🦜🔗 Build context-aware reasoning applications
Sentence-Adder-Anki-Addon - Add sentences to Anki editor window in one click
Awesome-Efficient-LLM - A curated list for Efficient Large Language Models
Tyche - A library for probabilistic reasoning and belief modelling in Python.
lora-instruct - Finetune Falcon, LLaMA, MPT, and RedPajama on consumer hardware using PEFT LoRA
ModuleFormer - ModuleFormer is a MoE-based architecture that includes two different types of experts: stick-breaking attention heads and feedforward experts. We released a collection of ModuleFormer-based Language Models (MoLM) ranging in scale from 4 billion to 8 billion parameters.
CUPCAKEAGI - Welcome to CupcakeAGI, where we bake up some sweet and creamy AGI goodness!
ImageNet21K - Official Pytorch Implementation of: "ImageNet-21K Pretraining for the Masses"(NeurIPS, 2021) paper
Locomotive - Toolkit for training/converting LibreTranslate compatible language models