BigDL
Accelerate local LLM inference and finetuning (LLaMA, Mistral, ChatGLM, Qwen, Baichuan, Mixtral, Gemma, etc.) on Intel CPU and GPU (e.g., local PC with iGPU, discrete GPU such as Arc, Flex and Max). A PyTorch LLM library that seamlessly integrates with llama.cpp, HuggingFace, LangChain, LlamaIndex, DeepSpeed, vLLM, FastChat, ModelScope, etc.
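As a rough illustration of the HuggingFace-style integration described above, here is a minimal sketch of loading a model through bigdl-llm's drop-in `transformers` API with low-bit quantization. The model path is a placeholder and exact import paths and options may vary between releases, so treat this as an assumption and check the BigDL documentation.

```python
# Minimal sketch (assumed usage): bigdl-llm provides a drop-in replacement for
# the HuggingFace transformers loader that quantizes weights to low-bit (INT4)
# for faster local inference on Intel CPUs and GPUs.
from bigdl.llm.transformers import AutoModelForCausalLM
from transformers import AutoTokenizer

model_path = "meta-llama/Llama-2-7b-chat-hf"  # placeholder checkpoint

# load_in_4bit=True requests INT4 quantization at load time
model = AutoModelForCausalLM.from_pretrained(model_path, load_in_4bit=True)
tokenizer = AutoTokenizer.from_pretrained(model_path)

inputs = tokenizer("What does BigDL accelerate?", return_tensors="pt")
output_ids = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```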
NOTE:
The number of mentions on this list reflects mentions in common posts plus user-suggested alternatives, so a higher number indicates a more popular project.
Related posts
- PyTorch Library for Running LLM on Intel CPU and GPU
- Fast, distributed, secure AI for Big Data
- Help Needed: Converting PlantNet-300k Pretrained Model Weights from Tar to h5 Format
- Can You Achieve GPU Performance When Running CNNs on a CPU?
- [D] DeepSparse: 1,000X CPU Performance Boost & 92% Power Reduction with Sparsified Models in MLPerf™ Inference v3.0