Gorilla Alternatives
Similar projects and alternatives to gorilla
- text-generation-webui: A Gradio web UI for Large Language Models with support for multiple inference backends.
- LocalAI: The free, open-source alternative to OpenAI, Claude and others. Self-hosted and local-first; a drop-in replacement for OpenAI that runs on consumer-grade hardware with no GPU required. Runs gguf, transformers, diffusers and many other model architectures. Features: text, audio, video and image generation, voice cloning, and distributed/P2P inference.
- RWKV-LM: RWKV (pronounced RwaKuv) is an RNN with strong LLM performance that can also be trained directly like a GPT transformer (parallelizable). Now at RWKV-7 "Goose", it combines the best of RNNs and transformers: strong performance, linear time, constant space (no KV cache), fast training, unlimited context length, and free sentence embeddings.
- open_llama: OpenLLaMA, a permissively licensed open-source reproduction of Meta AI's LLaMA 7B trained on the RedPajama dataset.
- DB-GPT: AI-native data app development framework with AWEL (Agentic Workflow Expression Language) and agents.
gorilla discussion
gorilla reviews and mentions
- Gorilla: Bridging LLMs and the Real World
- Raft: Sailing Llama towards better domain-specific RAG
Retrieval-Augmented Fine-Tuning is a really promising technique.
From the article:
> Tianjun and Shishir were looking to improve these deficiencies of RAG. They hypothesized that a student who studies the textbooks before the open-book exam would be more likely to perform better than a student who references the textbook only during the exam. Translating that back to LLMs, if a model “studied” the documents beforehand, could that improve its RAG performance?
Incidentally, the team who wrote the paper released some nice code to generate domain-specific fine-tuning datasets: https://github.com/ShishirPatil/gorilla/tree/main/raft
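The repository's raft/ directory contains the full pipeline; as a rough illustration of the idea only (not the project's actual code), here is a minimal sketch of how a RAFT-style training example might be assembled, pairing each question with its oracle document plus sampled distractor documents. The `generate_cot_answer` teacher step is a hypothetical placeholder.

```python
# Illustrative sketch of RAFT-style dataset construction (assumptions, not the
# actual gorilla/raft pipeline). Each training example pairs a question with
# its "oracle" document plus a few distractors, so the fine-tuned model learns
# to answer from relevant context while ignoring noise.
import json
import random

def generate_cot_answer(question: str, oracle_doc: str) -> str:
    """Hypothetical teacher step: in a real pipeline a strong LLM would write
    a chain-of-thought answer grounded in the oracle document."""
    return f"Reasoning over the cited document... Final answer for: {question}"

def build_raft_example(question, oracle_doc, corpus, num_distractors=3):
    distractors = random.sample([d for d in corpus if d != oracle_doc], num_distractors)
    context = distractors + [oracle_doc]
    random.shuffle(context)  # the model must locate the oracle itself
    return {
        "question": question,
        "context": context,
        "oracle": oracle_doc,
        "cot_answer": generate_cot_answer(question, oracle_doc),
    }

corpus = [
    "Doc about auth token refresh.",
    "Doc about rate limits.",
    "Doc about webhooks.",
    "Doc about pagination.",
]
example = build_raft_example("How do I refresh an auth token?", corpus[0], corpus)

with open("raft_train.jsonl", "w") as f:
    f.write(json.dumps(example) + "\n")
```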
- Launch HN: Nango (YC W23) – Open-Source Unified API
Do you leverage https://gorilla.cs.berkeley.edu/ at all? If not, perhaps consider if it would solve some pain for you.
- Autonomous LLM agents with human-out-of-loop
- Show HN: I made a script to scrape your Facebook group
- Pushing ChatGPT's Structured Data Support to Its Limits
* Gorilla [https://github.com/ShishirPatil/gorilla]
Could be interesting to try some of these exercises with these models.
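As a hedged sketch of that kind of exercise (not code from the article), one could ask a model for JSON-only output and validate it locally; `call_model` below is a placeholder for whichever model or endpoint is being tested, and the schema is an assumption for the example.

```python
# Validate a model's structured output against a simple expected schema.
# `call_model` is a stand-in for your actual client (OpenAI, a local Gorilla
# checkpoint, etc.); its hard-coded return value is illustrative only.
import json

SCHEMA_KEYS = {"api_name": str, "arguments": dict}

def call_model(prompt: str) -> str:
    # Placeholder: swap in a real model call. The response is assumed to be
    # a single JSON object and nothing else.
    return '{"api_name": "translate", "arguments": {"text": "hola", "to": "en"}}'

def validate(raw: str) -> dict:
    obj = json.loads(raw)  # raises if the model emitted non-JSON
    for key, expected_type in SCHEMA_KEYS.items():
        if not isinstance(obj.get(key), expected_type):
            raise ValueError(f"field '{key}' missing or wrong type")
    return obj

result = validate(call_model("Translate 'hola' to English. Reply with JSON only."))
print(result["api_name"], result["arguments"])
```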
- Guidance for selecting a function-calling library?
gorilla
- Gorilla: An API Store for LLMs
- Show HN: OpenAPI DevTools – Chrome ext. that generates an API spec as you browse
Nice, this made me go back and check up on the Gorilla LLM project [1] to see what they are doing with APIs and whether they have applied their fine-tuning to any of the newer foundation models. It looks like things have slowed down since they launched (?), or maybe development is happening elsewhere on some invisible Discord channel, but I hope the intersection of API calling and LLMs as a logic-processing function keeps getting focus; it's an important direction for interop across the web.
[1] https://github.com/ShishirPatil/gorilla
- RestGPT
"Gorilla: Large Language Model Connected with Massive APIs" (2023) https://gorilla.cs.berkeley.edu/ :
> Gorilla enables LLMs to use tools by invoking APIs. Given a natural language query, Gorilla comes up with the semantically- and syntactically- correct API to invoke. With Gorilla, we are the first to demonstrate how to use LLMs to invoke 1,600+ (and growing) API calls accurately while reducing hallucination. We also release APIBench, the largest collection of APIs, curated and easy to be trained on! Join us, as we try to expand the largest API store and teach LLMs how to write them!
eval/:
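As a rough, illustrative sketch of the retrieve-then-generate pattern the quote describes (not Gorilla's actual inference code), one might retrieve candidate API docs for a natural-language query and prompt an API-tuned model to emit a single concrete call. The toy lexical retriever and the `API_DOCS` list below are assumptions made for the example.

```python
# Sketch of retrieve-then-generate API invocation (illustrative assumptions,
# not Gorilla's real retriever or prompt format): shortlist candidate API docs
# for a query, then ask a model for one syntactically valid call.
from difflib import SequenceMatcher

API_DOCS = [
    "torchhub.load('ultralytics/yolov5', 'yolov5s')  # object detection",
    "transformers.pipeline('translation_en_to_fr')   # text translation",
    "transformers.pipeline('automatic-speech-recognition')  # speech to text",
]

def retrieve(query: str, docs, k: int = 2):
    # Toy lexical similarity standing in for a real API retriever.
    scored = sorted(docs, key=lambda d: SequenceMatcher(None, query, d).ratio(), reverse=True)
    return scored[:k]

def build_prompt(query: str) -> str:
    context = "\n".join(retrieve(query, API_DOCS))
    return (
        "You can call exactly one of these APIs:\n"
        f"{context}\n"
        f"User request: {query}\n"
        "Respond with a single line of Python that invokes the best-matching API."
    )

print(build_prompt("detect objects in a street photo"))
# The resulting prompt would then be sent to an API-tuned model such as Gorilla.
```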
Stats
ShishirPatil/gorilla is an open-source project licensed under the Apache License 2.0, an OSI-approved license.
The primary programming language of gorilla is Python.
Review ★★★★☆ 8/10