| | modelfusion | ollama |
|---|---|---|
| Mentions | 18 | 224 |
| Stars | 984 | 71,334 |
| Growth | 8.9% | 12.2% |
| Activity | 9.9 | 9.9 |
| Latest commit | 5 days ago | 7 days ago |
| Language | TypeScript | Go |
| License | MIT License | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
modelfusion
-
Next.js and GPT-4: A Guide to Streaming Generated Content as UI Components
ModelFusion is an AI integration library that I am developing. It enables you to integrate AI models into your JavaScript and TypeScript applications. You can install it with the following command:
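ModelFusion is published on npm, so the command in question is the standard npm install into an existing Node.js/TypeScript project:

```shell
# Install the ModelFusion library from npm
npm install modelfusion
```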
-
Effortlessly Generate Structured Information with Ollama, Zod, and ModelFusion
ModelFusion is an open-source library I'm developing to integrate AI models seamlessly into TypeScript projects. It provides an Ollama client and a generateStructure function.
-
Create Your Own Local Chatbot with Next.js, Ollama, and ModelFusion
ModelFusion: ModelFusion is a library for building multi-modal AI applications that I've been working on. It provides a streamText function that calls AI models and returns a streaming response. ModelFusion also contains an Ollama integration that we will use to access the OpenHermes 2.5 Mistral model.
-
PDF Chat with Node.js, OpenAI and ModelFusion
You can find the complete code for the chatbot here: github.com/lgrammel/modelfusion/examples/pdf-chat-terminal
-
Ask HN: Tell us about your project that's not done yet but you want feedback on
I’m working on ModelFusion, a TypeScript library for working with AI models (LLMs, image models, etc.)
https://github.com/lgrammel/modelfusion
It is only getting limited traction, so I’m wondering if I’m missing something fundamental in the approach I’m taking.
-
LangChain Agent Simulation – Multi-Player Dungeons and Dragons
If you work with JS or TS, check out this alternative that I've been working on:
https://github.com/lgrammel/modelfusion
It lets you stay in full control over the prompts and control flow while making a lot of things easier and more convenient.
-
Introducing ModelFusion: Build AI apps with JavaScript and TypeScript.
The response also contains additional information, such as the metadata and the full raw response. The ModelFusion documentation contains many examples and demo apps.
- Show HN: AI-utils.js – TypeScript-first lib for AI apps, chatbots, and agents
-
ai-utils.js VS langchainjs - a user suggested alternative
2 projects | 26 Jul 2023
- ai-utils.js: TypeScript-first library for building AI apps, chatbots, and agents.
ollama
-
Ollama 0.1.42
`file://*` URLs are now allowed => Ollama works with simple HTML files now
https://github.com/ollama/ollama/commit/1a29e9a879433fc55cf1...
-
How to setup a free, self-hosted AI model for use with VS Code
This guide assumes you have a supported NVIDIA GPU and have installed Ubuntu 22.04 on the machine that will host the Ollama Docker image. AMD GPUs are now supported by Ollama, but this guide does not cover that type of setup.
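As a sketch of the Docker portion of that setup (assuming the NVIDIA Container Toolkit is already installed on the host), the invocation from Ollama's own Docker instructions looks like:

```shell
# Run the official Ollama image with GPU access,
# persisting downloaded models in a named volume and
# exposing the REST API on the default port 11434
docker run -d --gpus=all \
  -v ollama:/root/.ollama \
  -p 11434:11434 \
  --name ollama \
  ollama/ollama
```

VS Code extensions that speak the Ollama REST API can then be pointed at `http://<host>:11434`.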
-
beginner guide to fully local RAG on entry-level machines
Nowadays, running powerful LLMs locally is ridiculously easy with tools such as Ollama. Just follow the installation instructions for your OS. From here on, we'll assume bash on Ubuntu.
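Under those assumptions (bash on Ubuntu), the official install script plus a first model pull is typically all it takes — the model tag is an example and the exact set of available tags changes over time:

```shell
# Install Ollama via the official install script (Linux)
curl -fsSL https://ollama.com/install.sh | sh

# Download a model and chat with it interactively
ollama run llama3
```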
- Codestral: Mistral's Code Model
- AIM Weekly 27 May 2024
-
Devoxx Genie Plugin : an Update
I focused on supporting Ollama, GPT4All, and LMStudio, all of which run smoothly on a Mac computer. Many of these tools are user-friendly wrappers around Llama.cpp, allowing easy model downloads and providing a REST interface to query the available models. Last week, I also added "👋🏼 Jan" support because HuggingFace has endorsed this provider out-of-the-box.
- Ask HN: Are companies self hosting LLMs?
- Ollama v0.1.39 Pre-release. Support Phi-3 Medium
-
Ask HN: Which LLMs can run locally on most consumer computers
I was able to successfully run Llama 3 8B, Mistral 7B, Phi, and other 7B models using Ollama [1] on my M1 MacBook Air.
[1] https://ollama.com
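For anyone trying the same thing, pulling and querying those models is a couple of commands — the model tags shown are the common defaults at the time of writing and may change:

```shell
# Pull a few 7B–8B models that fit on typical consumer hardware
ollama pull llama3      # Llama 3 8B
ollama pull mistral     # Mistral 7B
ollama pull phi3        # Phi-3

# One-off prompt without an interactive session
ollama run mistral "Explain what a Bloom filter is in one sentence."
```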
What are some alternatives?
langchainjs - 🦜🔗 Build context-aware reasoning applications 🦜🔗
llama.cpp - LLM inference in C/C++
langroid - Harness LLMs with Multi-Agent Programming
gpt4all - gpt4all: run open-source LLMs anywhere
aipl - Array-Inspired Pipeline Language
text-generation-webui - A Gradio web UI for Large Language Models. Supports transformers, GPTQ, AWQ, EXL2, llama.cpp (GGUF), Llama models.
hamilton - Hamilton helps data scientists and engineers define testable, modular, self-documenting dataflows, that encode lineage and metadata. Runs and scales everywhere python does.
private-gpt - Interact with your documents using the power of GPT, 100% privately, no data leaks
async-interval-job - ✨ setInterval for promises and async/sync functions. Support graceful shutdown and prevent multiple executions from overlapping in time.
LocalAI - :robot: The free, Open Source OpenAI alternative. Self-hosted, community-driven and local-first. Drop-in replacement for OpenAI running on consumer-grade hardware. No GPU required. Runs gguf, transformers, diffusers and many more models architectures. It allows to generate Text, Audio, Video, Images. Also with voice cloning capabilities.
chatflow - Leveraging LLM to build Conversational UIs
llama - Inference code for Llama models