text-generation-webui
A Gradio web UI for Large Language Models. Supports transformers, GPTQ, AWQ, EXL2, llama.cpp (GGUF), Llama models.
Do you by chance have any details on how to run oobabooga on the Orin? I keep running into this issue seemingly related to bitsandbytes.
How did you compile llama.cpp? Did you just apply the patches from this thread: https://github.com/ggerganov/llama.cpp/issues/1455
I'm not sure what to expect with this. Does the LLaMA part auto-start on boot, or do I need to start each component individually? Should it be as easy as, once I get it figured out, going to http://agx.fqdn:7860 and getting a chat window in the browser (like what the GitHub page shows)?
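On the auto-start question: text-generation-webui does not start on boot by default; you launch `server.py` yourself, and a systemd unit is one common way to make it start automatically. A minimal sketch follows, assuming a Linux install with systemd; the paths, user name, and virtualenv layout are hypothetical and will differ per machine (`--listen` and `--listen-port` are the webui's flags for serving on the LAN at a given port):

```
[Unit]
Description=text-generation-webui
After=network-online.target

[Service]
# Hypothetical paths -- adjust to your checkout and Python environment
WorkingDirectory=/home/user/text-generation-webui
ExecStart=/home/user/text-generation-webui/venv/bin/python server.py --listen --listen-port 7860
Restart=on-failure
User=user

[Install]
WantedBy=multi-user.target
```

Saved as e.g. `/etc/systemd/system/textgen.service` and enabled with `sudo systemctl enable --now textgen.service`, this would bring the UI up on boot, after which browsing to http://agx.fqdn:7860 should show the chat interface.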
Related posts
- I started a list of fetch utilities, like Neofetch, by the community
- susfetch: sus and fast fetch utility made in C
- Awesome Fetch | awesome-fetch – command-line fetch tools for system information. Operating system, kernel, CPU, GPU, memory info …
- Flexfetch, a fast and generic fetch program