| | maid | MixtralKit |
|---|---|---|
| Mentions | 5 | 4 |
| Stars | 801 | 758 |
| Growth | 33.8% | 2.2% |
| Activity | 9.9 | 8.1 |
| Latest commit | 5 days ago | 5 months ago |
| Language | Dart | Python |
| License | MIT License | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
maid
- Phi-3 Technical Report: A Highly Capable Language Model Locally on Your Phone
I've been trying this app but haven't had any luck getting it to actually generate text yet:
https://github.com/Mobile-Artificial-Intelligence/maid
The UI looks nice and includes a native compilation of llama.cpp.
My main phone's screen broke, so I'm on an old Pixel 4 until it's repaired, but I've had no luck getting 2-3GB models to run so far.
- Maid: Cross-platform multi-API (local or remote) AI chat
- Running Wizard 7B (Q2) on an 8GB Android Phone
Try this: https://github.com/MaidFoundation/maid
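Whether a Q2-quantized 7B model plausibly fits on an 8GB phone comes down to simple arithmetic. A rough sketch (assuming about 2.6 bits per weight for a Q2-style quantization — an approximation, since real GGUF files keep some tensors at higher precision and also need room for the KV cache):

```python
# Back-of-the-envelope size estimate for a quantized model.
# 2.6 bits/weight is an assumed figure for Q2-style quantization,
# not an exact llama.cpp constant; actual files run slightly larger.

def quantized_size_gb(n_params: float, bits_per_weight: float) -> float:
    """Approximate in-RAM size of a quantized model in GiB."""
    return n_params * bits_per_weight / 8 / (1024 ** 3)

print(f"~{quantized_size_gb(7e9, 2.6):.1f} GiB")  # → ~2.1 GiB
```

At roughly 2 GiB of weights, a Q2 7B model leaves headroom on an 8GB device but is a tight squeeze on older 4-6GB phones like a Pixel 4, which matches the experience described above.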
- Mixtral 8x7B is a scaled-down GPT-4
- AI on an Android phone?
This one uses the CPU, so it makes less heat: https://github.com/MaidFoundation/maid
MixtralKit
- Paris-Based Startup and OpenAI Competitor Mistral AI Valued at $2B
> Mistral's latest just released model is well below GPT-3 out of the box
The early information I see implies it is above. Mind you, that is mostly because GPT-3 was comparatively low: for instance its 5-shot MMLU score was 43.9%, while Llama2 70B 5-shot was 68.9%[0]. Early benchmarks[1] give Mixtral scores above Llama2 70B on MMLU (and other benchmarks), thus transitively, it seems likely to be above GPT-3.
Of course, GPT-3.5 has a 5-shot score of 70, and it is unclear yet whether Mixtral is above or below, and clearly it is below GPT-4’s 86.5. The dust needs to settle, and the official inference code needs to be released, before there is certainty on its exact strength.
[0]: https://paperswithcode.com/sota/multi-task-language-understa...
[1]: https://github.com/open-compass/MixtralKit#comparison-with-o...
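The transitive argument above can be written out numerically using the 5-shot MMLU scores quoted in the comment (Mixtral's own score is deliberately left out, since only a lower bound relative to Llama2 70B is claimed):

```python
# 5-shot MMLU scores cited in the comment above.
scores = {
    "GPT-3":      43.9,
    "GPT-3.5":    70.0,
    "Llama2-70B": 68.9,
    "GPT-4":      86.5,
}

# Early benchmarks put Mixtral above Llama2 70B on MMLU, so by
# transitivity its score exceeds GPT-3's as well.
mixtral_lower_bound = scores["Llama2-70B"]
assert mixtral_lower_bound > scores["GPT-3"]   # clearly above GPT-3
assert mixtral_lower_bound < scores["GPT-4"]   # clearly below GPT-4
# Whether it clears GPT-3.5's 70.0 remains unsettled: the lower
# bound of 68.9 sits just under it.
```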
- Benchmarks results reveal Mixtral-8x7B BEATS LLaMA-2-70b
- Inference and Evaluation of Mistral AI's MoE Model (Mixtral-8x7b-32kseqlen)
- Mixtral 8x7B is a scaled-down GPT-4
Inference code: https://github.com/open-compass/MixtralKit. Evaluation results will be updated soon.
What are some alternatives?
mlc-llm - Enable everyone to develop, optimize and deploy AI models natively on everyone's devices.