Researchers have released LLaVA-1.5. LLaVA (Large Language-and-Vision Assistant) is an open-source large multimodal model that combines a vision encoder with Vicuna for general-purpose visual and language understanding. LLaVA-1.5 achieves state-of-the-art results on 11 benchmarks with only simple modifications to the original LLaVA, and its training completes in about one day on a single node of 8 A100s [Demo | Paper | GitHub].
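The design described above — a vision encoder whose patch features are projected into the language model's embedding space and fed to the LLM alongside the text tokens — can be sketched schematically. This is a toy NumPy sketch with made-up tiny dimensions, not the real model; all names and sizes here are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy dimensions, far smaller than the real model.
NUM_PATCHES, VISION_DIM = 16, 32   # vision encoder output: 16 patch features
LLM_DIM = 64                       # language model embedding size
NUM_TEXT_TOKENS = 8                # embedded text prompt tokens

def vision_encoder(image):
    """Stand-in for the vision encoder: maps an image to patch features."""
    return rng.standard_normal((NUM_PATCHES, VISION_DIM))

def project(features, W):
    """Projection from vision-feature space into the LLM embedding space.
    (The original LLaVA used a single linear layer here; per the LLaVA-1.5
    paper, 1.5 swaps it for a small MLP.)"""
    return features @ W

W_proj = rng.standard_normal((VISION_DIM, LLM_DIM))
text_embeddings = rng.standard_normal((NUM_TEXT_TOKENS, LLM_DIM))

image = None  # placeholder input for the stand-in encoder
visual_tokens = project(vision_encoder(image), W_proj)

# The LLM then attends over the visual tokens together with the text tokens.
llm_input = np.concatenate([visual_tokens, text_embeddings], axis=0)
print(llm_input.shape)  # 16 visual + 8 text tokens, each of width LLM_DIM
```

The key point the sketch illustrates is that the projection makes image features look like ordinary token embeddings to the language model, so Vicuna needs no architectural change to consume them.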
slowllama: Finetune llama2-70b and codellama on a MacBook Air without quantization [Link].
Related posts
- Show HN: I Remade the Fake Google Gemini Demo, Except Using GPT-4 and It's Real
- Llamafile lets you distribute and run LLMs with a single file
- LLaVA: Visual Instruction Tuning: Large Language-and-Vision Assistant
- SlowLlama: Finetune llama2-70B and codellama on MacBook Air without quantization