We are going to use the ollama Docker image to host AI models that have been pre-trained to assist with coding tasks, and the Continue extension to integrate them with VS Code. If VS Code runs on the same machine that hosts ollama, you could also try CodeGPT, but I could not get it to work against an ollama instance self-hosted on a machine remote from where I was running VS Code (at least not without modifying the extension files). There are open GitHub issues against CodeGPT about this, so the problem may have been fixed by now.
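Continue is configured through `~/.continue/config.json` (newer releases also accept a `config.yaml`). A minimal sketch pointing it at a remote ollama host might look like the following; the IP address and model names are placeholders you would replace with your own:

```json
{
  "models": [
    {
      "title": "Qwen2.5 Coder (remote ollama)",
      "provider": "ollama",
      "model": "qwen2.5-coder:7b",
      "apiBase": "http://192.168.1.50:11434"
    }
  ],
  "tabAutocompleteModel": {
    "title": "Qwen2.5 Coder autocomplete",
    "provider": "ollama",
    "model": "qwen2.5-coder:1.5b",
    "apiBase": "http://192.168.1.50:11434"
  }
}
```

Setting `apiBase` is what allows the extension to talk to ollama on a remote host instead of `localhost`; a smaller model is typically used for tab autocomplete to keep latency down.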
ollama
This guide assumes the machine that will host the ollama Docker image has a supported NVIDIA GPU and is running Ubuntu 22.04. ollama now supports AMD GPUs as well, but this guide does not cover that type of setup.
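Once the NVIDIA Container Toolkit is installed on the host, the container can be started with GPU access. This is a sketch based on ollama's published Docker instructions; the volume name, container name, and pulled model are just example choices:

```shell
# Run the ollama image with access to all NVIDIA GPUs.
# -v persists downloaded models across container restarts;
# -p exposes ollama's default API port (11434) so remote clients can connect.
docker run -d --gpus=all \
  -v ollama:/root/.ollama \
  -p 11434:11434 \
  --name ollama \
  ollama/ollama

# Pull a coding model into the running container (model name is an example).
docker exec -it ollama ollama pull qwen2.5-coder:7b
```

Exposing port 11434 on the host is what lets a VS Code extension on another machine reach the ollama API.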
Note: you should select the NVIDIA Docker image that matches your CUDA driver version. If your driver version is older, look in the unsupported (legacy) image list instead.
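To find out which image to pick, you can check the installed driver with `nvidia-smi`; the header of its output also reports the highest CUDA version that driver supports:

```shell
# Full status output; the header shows driver version and max supported CUDA version.
nvidia-smi

# Query just the driver version.
nvidia-smi --query-gpu=driver_version --format=csv,noheader
```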