clip-interrogator vs dspy

| | clip-interrogator | dspy |
|---|---|---|
| Mentions | 27 | 22 |
| Stars | 2,491 | 10,820 |
| Growth | - | 17.5% |
| Activity | 4.8 | 9.9 |
| Latest commit | 3 months ago | 6 days ago |
| Language | Python | Python |
| License | MIT License | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
clip-interrogator
-
AI Horde’s AGPL3 hordelib receives DMCA take-down from hlky
It's image -> words, the inverse of stable diffusion.
see: https://github.com/pharmapsychotic/clip-interrogator
-
What are the "fastest" image classifiers I can use?
I have been using this on a CPU: https://github.com/pharmapsychotic/clip-interrogator. I tried a lot of pre-trained model combinations; all are slow.
- New Monthly Event!
-
I keep trying to recreate this scene as a painting. But the AI doesn't get it. How do I describe that the man is reaching behind to stab a lion in the head, as the lion has pounced and is biting the rear of the horse. The AI always redraws this without the lion or not how it is shown here.
In addition to ControlNet, try the CLIP Interrogator to see how CLIP would describe the image, and then use that language in your prompt. You can try the whole image or cropped portions. There is a Colab available if you don't want to run it locally.
-
For LoRA training, isn’t there a good AI that describes the pictures you want to use for training?
In my current process, I use CLIP Interrogator to produce a high level caption and wd14 tagger for more granular booru tags. Typically in that order, because you can append the results from the latter to the former. Both tools perform with greater accuracy than the standard interrogators in img2img and give you more flexibility and features as well. You still have to do some manual adjustments, but I generally prefer this process over starting from scratch.
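The "append the results from the latter to the former" step above can be sketched in plain Python. The caption and tags below are hypothetical placeholders standing in for CLIP Interrogator and wd14 tagger output:

```python
# Sketch of the captioning workflow described above: a high-level caption
# (as CLIP Interrogator would produce) with granular booru-style tags
# (as wd14 tagger would produce) appended to form one training caption.

def build_training_caption(caption: str, tags: list[str]) -> str:
    """Append comma-separated tags to a base caption, skipping duplicates."""
    seen = {part.strip().lower() for part in caption.split(",")}
    extra = [t for t in tags if t.strip().lower() not in seen]
    return ", ".join([caption.strip()] + extra)

caption = "a portrait of a woman in a red dress, oil painting"
tags = ["1girl", "red dress", "oil painting", "looking at viewer"]
print(build_training_caption(caption, tags))
# the duplicate "oil painting" tag is dropped; the rest are appended
```

The manual-adjustment pass the commenter mentions would then happen on the merged string.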
- Midjourney Image2text
-
Tech pioneers call for six-month pause of "out-of-control" AI development
If you are interested in this, definitely see if you can get some of the OSS models running and get a feel for how to interrogate them. Maybe see if you can get some mileage out of the CLIP-Interrogator
-
ChatGPT 3.5 vs 4 & Stable Diffusion
Next, I used the lists of artists, flavors, mediums, movements, and negatives that are used for the clip-interrogator, pasted these into the chat, and told the bot to categorize them accordingly. You can only paste up to a certain number of characters in a single message (4-5K in 3.5 and 6-8K in 4), so the lists had to be split up.
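Splitting those lists under a per-message character limit is straightforward to automate. The 4000-character default below is an assumption matching the ~4-5K limit mentioned for GPT-3.5:

```python
# Group a long list of lines (artists, flavors, mediums, ...) into chat
# messages that each stay under a character limit.

def chunk_lines(lines, max_chars=4000):
    """Group lines into messages, each at most max_chars characters."""
    chunks, current, size = [], [], 0
    for line in lines:
        # +1 accounts for the newline joining lines within a message
        if current and size + len(line) + 1 > max_chars:
            chunks.append("\n".join(current))
            current, size = [], 0
        current.append(line)
        size += len(line) + 1
    if current:
        chunks.append("\n".join(current))
    return chunks

artists = [f"artist {i}" for i in range(1000)]  # placeholder list
messages = chunk_lines(artists, max_chars=4000)
assert all(len(m) <= 4000 for m in messages)
```

Each resulting chunk can then be pasted as its own message.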
-
Any idea of what type of prompt has been used to make this?
Here’s the specific one I’m using (runs in browser)
-
CLIP Interrogator 2 locally
I really enjoy using the CLIP Interrogator on Hugging Face Spaces, but it is often super slow and sometimes straight up breaks. It is now possible to install it locally (https://github.com/pharmapsychotic/clip-interrogator), but I don't know if it's viable to run on a laptop with a 6 GB video card anyway.
dspy
-
Computer Vision Meetup: Develop a Legal Search Application from Scratch using Milvus and DSPy!
Legal practitioners often need to find specific cases and clauses across thousands of dense documents. While traditional keyword-based search techniques are useful, they fail to fully capture semantic content of queries and case files. Vector search engines and large language models provide an intriguing alternative. In this talk, I will show you how to build a legal search application using the DSPy framework and the Milvus vector search engine.
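The core idea behind such a search application, stripped of any framework, is ranking documents by vector similarity. Below is a toy illustration with made-up 3-dimensional "embeddings"; a real system would use a text embedding model and a vector database such as Milvus:

```python
# Toy vector search: embed documents and queries as vectors and rank by
# cosine similarity. The vectors here are hypothetical placeholders, not
# real embeddings.
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

docs = {
    "case_a": [0.9, 0.1, 0.0],  # hypothetical embedding of a contract case
    "case_b": [0.1, 0.8, 0.3],  # hypothetical embedding of a tort case
}
query = [0.85, 0.2, 0.05]  # hypothetical embedding of a contract query

best = max(docs, key=lambda name: cosine(query, docs[name]))
```

This is what lets semantic search match a query to a case file even when they share no keywords: nearby vectors, not shared tokens.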
-
Pydantic Logfire
I’ve observed that Pydantic - which we’ve used for years in our API stack - has become very popular in LLM applications, for its type-adjacent features. It serves as a foundational technology for prompting libraries like [DSPy](https://github.com/stanfordnlp/dspy) which are abstracting “up the stack” of LLM apps. (some opinions there)
Operating AI apps reveals a big challenge, in that debugging probabilistic code paths requires more than the usual introspective abilities, and in an environment where function calls can have very real monetary impact we have to be able to see what’s happening in the runtime. See LangChain’s hosted solution (can’t recall the name) that allows an operator to see prompts and responses “on the wire”. (It just occurred to me that Langchain and Pydantic have a lot in common here, in approach.)
Having a coupling between Pydantic - which is *just about* the data layer itself - and an observability tool seems very interesting to me, and having this come from the folks who built it does not seem unreasonable. WRT open source and monetization, I would be lying if I said I wasn’t a little worried - given the recent few months - but I am choosing to see this in a positive light, given this team’s “believability weight” (to overuse Dalio) and history of delivering solid and really useful tooling.
- Ask HN: Most efficient way to fine-tune an LLM in 2024?
-
Princeton group open sources "SWE-agent", with 12.3% fix rate for GitHub issues
DSPy is the best tool for optimizing prompts [0]: https://github.com/stanfordnlp/dspy
Think of it as a meta-prompt optimizer: it uses an LLM to optimize the prompts you send to your LLM.
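The meta-optimization loop can be sketched in a few lines. This is a toy illustration of the idea, not DSPy's actual API; `fake_llm` and `score` are stand-ins for a real model call and a real dev-set metric:

```python
# Toy sketch of "use an LLM to optimize your prompts": an optimizer model
# proposes candidate prompt variants, each is scored against a dev set,
# and the best-scoring prompt wins.

def fake_llm(prompt: str) -> list[str]:
    # Hypothetical stand-in: a real optimizer would ask a model to
    # propose rewrites of the prompt.
    return [prompt + " Think step by step.",
            prompt + " Answer with a single word."]

def score(prompt: str, dev_set) -> float:
    # Hypothetical metric: a real one would run the task LLM on dev
    # examples with this prompt and measure accuracy.
    return 1.0 if "single word" in prompt else 0.0

def optimize_prompt(seed: str, dev_set) -> str:
    """Keep the best-scoring candidate among the seed and its variants."""
    candidates = [seed] + fake_llm(seed)
    return max(candidates, key=lambda p: score(p, dev_set))

best = optimize_prompt("Classify the sentiment of the review.", dev_set=[])
```

DSPy's compilers do a much more sophisticated version of this loop (bootstrapping demonstrations, tuning instructions), but the search-over-prompts-scored-on-data shape is the same.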
-
Winner of the SF Mistral AI Hackathon: Automated Test Driven Prompting
Isn’t this just a very naive implementation of what DSPy does?
https://github.com/stanfordnlp/dspy
I don’t understand what is exceptional here.
-
Show HN: Fructose, LLM calls as strongly typed functions
Have you done any comparison with DSPy? (https://github.com/stanfordnlp/dspy)
Feels very similar to DSPy, except you don't have optimizations yet. But I like your API and the programming model you are enforcing through this.
-
AI Prompt Engineering Is Dead
I'm interested in hearing if anyone has used DSPy (https://github.com/stanfordnlp/dspy) just for prompt optimization for GPT-3.5 or GPT-4. Was it worth the effort and much better than manual prompt iteration? Was the optimized prompt some weird incantation? Any other insights?
-
Ask HN: Are you using a GPT to prompt-engineer another GPT?
You should check out x.com/lateinteraction's DSPy — which is like an optimizer for prompts — https://github.com/stanfordnlp/dspy
- SuperDuperDB - how to use it to talk to your documents locally using llama 7B or Mistral 7B?
- FLaNK Stack Weekly for 12 September 2023
What are some alternatives?
stable-diffusion-webui-wd14-tagger - Labeling extension for Automatic1111's Web UI
semantic-kernel - Integrate cutting-edge LLM technology quickly and easily into your apps
laion-datasets - Description and pointers of laion datasets
open-interpreter - A natural language interface for computers
dalle-2-preview
playground - Play with neural networks!
stable-diffusion-artists - Curated list of artists for Stable Diffusion prompts
MLflow - Open source platform for the machine learning lifecycle
hordelib - A wrapper around ComfyUI to allow use by the AI Horde. [UnavailableForLegalReasons - Repository access blocked]
FastMJPG - FastMJPG is a command line tool for capturing, sending, receiving, rendering, piping, and recording MJPG video with extremely low latency. It is optimized for running on constrained hardware and battery powered devices.
prompt-engine-py - A utility library for creating and maintaining prompts for Large Language Models