Feature request for the Mistral API maintainers: the https://api.mistral.ai/v1/models endpoint returns all of the language models and mistral-embed as well, but there's currently nothing in the JSON to distinguish the embedding model from the others: https://github.com/simonw/llm-mistral/issues/5#issuecomment-...
It would be useful if there was an indication of which models are embedding models.
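In the meantime, clients have to guess. Here's a minimal sketch of a client-side workaround, assuming the endpoint returns an OpenAI-style `{"data": [{"id": ...}]}` list; the name-based `is_embedding_model` heuristic is my own assumption, not an official part of the API:

```python
# Sketch of a workaround for /v1/models not flagging embedding models.
# Assumption: an OpenAI-style response shape ({"data": [{"id": ...}]})
# and that embedding model ids contain the substring "embed".
import json
from urllib.request import Request, urlopen


def is_embedding_model(model_id: str) -> bool:
    """Heuristic (assumption): Mistral's embedding model ids contain 'embed'."""
    return "embed" in model_id.lower()


def list_language_models(api_key: str) -> list[str]:
    """Fetch /v1/models and drop anything that looks like an embedding model."""
    req = Request(
        "https://api.mistral.ai/v1/models",
        headers={"Authorization": f"Bearer {api_key}"},
    )
    with urlopen(req) as resp:
        payload = json.load(resp)
    return [m["id"] for m in payload["data"] if not is_embedding_model(m["id"])]
```

A proper fix would be a field in the JSON itself (e.g. a capability or type attribute), so clients don't have to rely on naming conventions like this.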
So how long until we can do an open source Mistral Large?
Perhaps we could make a start on Petals [0] or some other open-source distributed training/inference cluster?
[0] https://petals.dev/
Look more closely at the software.
It does not just magically conjure LLM model files out of thin air.
Where do those models come from?
https://github.com/ollama/ollama/issues/2390