| | mage | peft |
|---|---|---|
| Mentions | 117 | 26 |
| Stars | 1,771 | 13,877 |
| Growth | 1.5% | 4.1% |
| Activity | 10.0 | 9.7 |
| Last commit | 1 day ago | 6 days ago |
| Language | Java | Python |
| License | MIT License | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
mage
- Open source rules engine for Magic: The Gathering
- Fine Tuning Mistral 7B on Magic the Gathering Draft
Yeah, that surprised me too, given that https://github.com/magefree/mage is open source and pretty actively developed.
- I Hacked Magic the Gathering: Arena for a 100% Win Rate
The project you want is http://xmage.today/
All cards are available, and it has a full rules engine.
- Find Legal Moves in Brass Birmingham with Datalog
My guess is that fully modeling MtG's rules in Datalog would take a lot more work, because MtG's rules are a lot more complex. Looking through the source for one of the freely-available game engines like https://github.com/magefree/mage would probably be more informative, though.
- Draft Time Spiral, Lorwyn, and other older sets online for free with the XMage Draft Historical Society!
There's a draft every day, with events at different starting times to accommodate players from around the world, plus asynchronous side events including Rich Draft, Team Sealed, and Rotisserie Draft. We play on XMage which has full rules enforcement like MTGO but is 100% free software.
- Drafting Help
If you can tolerate some of its shortcomings, you can draft against bots on Xmage.
- Bugged cards (xmage beta 1.4.51)
https://github.com/magefree/mage/issues is a more reliable place to report - this might not be a trivial fix
- Xmage beta server update
Update: My problem happened because the launcher had reverted to the non-beta XMage home address on its own. You probably have the same problem. Go to settings, set the branch to custom, and set the XMage home address to http://xmage.today/
- Seriously. Just woke up one morning and it made so much sense.
Real life example: The open source implementation of MTG card game multiplayer
- Firkraag not working
peft
- LoftQ: LoRA-fine-tuning-aware Quantization
- Fine Tuning Mistral 7B on Magic the Gathering Draft
There is not a lot of great content out there making this clear, but basically all that matters for basic fine-tuning is how much VRAM you have -- since the 3090 / 4090 have 24GB of VRAM, they're both pretty decent fine-tuning cards. I think you could probably fine-tune a model up to ~13B parameters on one of them with PEFT (https://github.com/huggingface/peft).
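The comment describes the usual recipe: quantize the frozen base model so it fits in 24GB of VRAM, then train small LoRA adapters with PEFT. A minimal sketch of that setup follows; the model name and hyperparameters are illustrative, not taken from the post.

```python
# Minimal PEFT LoRA setup for a single 24GB GPU (illustrative sketch).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

model_id = "mistralai/Mistral-7B-v0.1"  # example model, not specified in the post

# Quantize the frozen base weights to 4-bit so the model, adapters, and
# optimizer state fit comfortably in 24GB of VRAM.
bnb_config = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_compute_dtype=torch.bfloat16)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, quantization_config=bnb_config, device_map="auto"
)
model = prepare_model_for_kbit_training(model)

# LoRA trains small low-rank adapter matrices instead of the full weights.
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # typically well under 1% of total parameters
```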
- Whisper prompt tuning
Hi everyone. Recently I've been looking into the PEFT library (https://github.com/huggingface/peft) and I was wondering if it would be possible to do prompt tuning with OpenAI's Whisper model. They have an example notebook for tuning Whisper with LoRA (https://colab.research.google.com/drive/1vhF8yueFqha3Y3CpTHN6q9EVcII9EYzs?usp=sharing) but I'm not sure how to go about changing it to use prompt tuning instead.
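For reference, the general PEFT prompt-tuning pattern looks like the sketch below. It is shown on a small text seq2seq model; whether the same config carries over cleanly to Whisper's audio encoder-decoder is exactly the open question in the comment.

```python
# General PEFT prompt-tuning pattern (sketch; t5-small is a stand-in, not Whisper).
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
from peft import PromptTuningConfig, PromptTuningInit, TaskType, get_peft_model

model_id = "t5-small"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

# Prompt tuning learns a handful of virtual token embeddings that are
# prepended to the input; the base model stays frozen.
peft_config = PromptTuningConfig(
    task_type=TaskType.SEQ_2_SEQ_LM,
    prompt_tuning_init=PromptTuningInit.TEXT,
    prompt_tuning_init_text="Transcribe the following audio:",  # illustrative init text
    num_virtual_tokens=20,
    tokenizer_name_or_path=model_id,
)
model = get_peft_model(model, peft_config)
model.print_trainable_parameters()  # only the virtual token embeddings are trainable
```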
- Code Llama - The Hugging Face Edition
In the coming days, we'll work on sharing scripts to train models, optimizations for on-device inference, even nicer demos (and for more powerful models), and more. Feel free to like our GitHub repos (transformers, peft, accelerate). Enjoy!
- PEFT 0.5 supports fine-tuning GPTQ models
- Exploding loss when trying to train OpenOrca-Platypus2-13B
- [D] Is there a difference between p-tuning and prefix tuning?
I discussed part of this here: https://github.com/huggingface/peft/issues/123
- How does using QLoRAs when running Llama on CPU work?
It seems like the merge_and_unload function in this PEFT script might be what they are referring to: https://github.com/huggingface/peft/blob/main/src/peft/tuners/lora.py
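merge_and_unload folds the LoRA deltas into the base weights and returns a plain model that no longer needs the PEFT runtime, which is what lets CPU inference stacks load it. A rough sketch of the call, with placeholder paths:

```python
# Fold LoRA weights into the base model and save a plain checkpoint (sketch).
from transformers import AutoModelForCausalLM
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained("path/to/base-llama")   # placeholder path
model = PeftModel.from_pretrained(base, "path/to/lora-adapter")     # placeholder path

# For each adapted layer, W <- W + (alpha / r) * B @ A; the adapters are then removed.
merged = model.merge_and_unload()
merged.save_pretrained("path/to/merged-model")                      # ordinary HF checkpoint
```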
- How to merge the two weights into a single weight?
To obtain the original llama model, one may refer to this doc. To merge a lora model with a base model, one may refer to PEFT or use the merge script provided by LMFlow.
- [D] [LoRA + weight merge every N step] for pre-training?
You could use a callback, like the one shown here: https://github.com/huggingface/peft/issues/286, and call the code to merge them from the callback.
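One way to wire that up is a transformers TrainerCallback that triggers every N optimizer steps. The sketch below only writes out a merged snapshot from a copy of the model rather than folding the delta into the live model in place (a ReLoRA-style in-place merge would also need PEFT's internal merge state handled); the interval and output path are placeholders.

```python
# Hedged sketch: save a merged LoRA snapshot every N training steps via a callback.
import copy
from transformers import TrainerCallback


class MergeAndSaveEveryNSteps(TrainerCallback):
    def __init__(self, every_n_steps=1000, output_dir="merged-snapshots"):
        self.every_n_steps = every_n_steps
        self.output_dir = output_dir

    def on_step_end(self, args, state, control, model=None, **kwargs):
        if model is None or state.global_step == 0:
            return
        if state.global_step % self.every_n_steps != 0:
            return
        # Merge on a deep copy so the live PEFT model and optimizer keep training.
        merged = copy.deepcopy(model).merge_and_unload()
        merged.save_pretrained(f"{self.output_dir}/step-{state.global_step}")


# Usage: Trainer(..., callbacks=[MergeAndSaveEveryNSteps(every_n_steps=1000)])
```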
What are some alternatives?
Cockatrice - A cross-platform virtual tabletop for multiplayer card games
lora - Using Low-rank adaptation to quickly fine-tune diffusion models.
docker-mtgo - Docker image with ready-to-play MTGO (Magic Online) for Linux and macOS
LoRA - Code for loralib, an implementation of "LoRA: Low-Rank Adaptation of Large Language Models"
magarena - Magarena is a single-player fantasy card game played against a computer opponent.
alpaca-lora - Instruct-tune LLaMA on consumer hardware
HattrickOrganizer - Assistant for Hattrick online football manager
dalai - The simplest way to run LLaMA on your local machine
PacketProxy - A local proxy written in Java
mlc-llm - Enable everyone to develop, optimize and deploy AI models natively on everyone's devices.
mtg-deck-builder - Magic: The Gathering (MTG) deck builder that considers inventory (but not others decks... yet)
minLoRA - minLoRA: a minimal PyTorch library that allows you to apply LoRA to any PyTorch model.