| | twitter-archive-parser | open_clip |
|---|---|---|
| Mentions | 12 | 28 |
| Stars | 2,382 | 8,452 |
| Growth | - | 3.4% |
| Activity | 10.0 | 8.2 |
| Last commit | over 1 year ago | 22 days ago |
| Language | Python | Jupyter Notebook |
| License | GNU General Public License v3.0 only | GNU General Public License v3.0 or later |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
twitter-archive-parser
- FLiP Stack Weekly 19-dec-2022
https://pulsar-edit.dev/
- 5 Awesome Python Projects People Don't Know About
- It sure seems like Elon Musk is purging left-leaning Twitter accounts
You're also gonna want this: https://github.com/timhutton/twitter-archive-parser unless you don't mind losing all the full-size image content, DM references etc.
- Preserving the Tweets
- Caffè Italia * 21/11/22
- Apple Executive Phil Schiller Deactivates Twitter Account
Also there are several open source scripts to parse your archive to make it more useful to you, for example, here's one: https://github.com/timhutton/twitter-archive-parser
- GitHub - timhutton/twitter-archive-parser: Python code to parse a Twitter archive and output in various ways
- Backup twitter now! Multiple critical infra teams have resigned
- Is there a way to automate downloading copies of all of my twitter bookmarks / likes?
Then, this script lets you download the remaining pieces: https://github.com/timhutton/twitter-archive-parser
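For context on what these parsers do under the hood, here is a minimal sketch, assuming the standard Twitter archive layout. It is not the project's actual code; the file path and field names come from the archive format itself, not from the parser's API:

```python
import json
from pathlib import Path

# Hedged sketch, not timhutton's actual code: newer archives store tweets in
# data/tweets.js (older ones use tweet.js) as a JSON array hidden behind a
# JavaScript assignment such as `window.YTD.tweets.part0 = [...]`.
raw = Path("data/tweets.js").read_text(encoding="utf-8")
payload = raw[raw.index("["):]  # strip the `window.YTD... = ` prefix
tweets = json.loads(payload)

for entry in tweets[:5]:
    tweet = entry["tweet"]
    print(tweet["created_at"], tweet["full_text"][:80])
```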
open_clip
- FLaNK AI Weekly for 29 April 2024
- A History of CLIP Model Training Data Advances
While OpenAI's CLIP model has garnered a lot of attention, it is far from the only game in town, and far from the best! On the OpenCLIP leaderboard, for instance, the largest and most capable CLIP model from OpenAI ranks just 41st(!) in its average zero-shot accuracy across 38 datasets.
- How to Build a Semantic Search Engine for Emojis
Whenever I'm working on semantic search applications that connect images and text, I start with a family of models known as contrastive language image pre-training (CLIP). These models are trained on image-text pairs to generate similar vector representations or embeddings for images and their captions, and dissimilar vectors when images are paired with other text strings. There are multiple CLIP-style models, including OpenCLIP and MetaCLIP, but for simplicity we'll focus on the original CLIP model from OpenAI. No model is perfect, and at a fundamental level there is no right way to compare images and text, but CLIP certainly provides a good starting point.
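As a concrete illustration of that embedding behavior, here is a minimal sketch using open_clip to load the original OpenAI ViT-B/32 weights and score candidate captions against an image (the image file name is a placeholder):

```python
import torch
from PIL import Image
import open_clip

# Load the original OpenAI ViT-B/32 weights through open_clip.
model, _, preprocess = open_clip.create_model_and_transforms("ViT-B-32", pretrained="openai")
tokenizer = open_clip.get_tokenizer("ViT-B-32")
model.eval()

image = preprocess(Image.open("photo.jpg")).unsqueeze(0)  # placeholder file name
texts = tokenizer(["a photo of a cat", "a diagram of a circuit"])

with torch.no_grad():
    img_emb = model.encode_image(image)
    txt_emb = model.encode_text(texts)
    # Normalize so that cosine similarity reduces to a dot product.
    img_emb = img_emb / img_emb.norm(dim=-1, keepdim=True)
    txt_emb = txt_emb / txt_emb.norm(dim=-1, keepdim=True)
    sims = img_emb @ txt_emb.T  # the matching caption should score highest

print(sims)
```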
- Database of 16,000 Artists Used to Train Midjourney AI Goes Viral
It is a misconception that Adobe's models have not been trained on copyrighted work. Nobody should be repeating their marketing claims.
Adobe has not shown how they train the text encoders in Firefly, or what images were used for the text-based conditioning (i.e. "text to image") part of their image generation model. They are almost certainly using CLIP or T5, which are trained on LAION2b, an image dataset with the very problems they are trying to address, C4 (a text dataset similarly encumbered) and similar.
I welcome anyone who works at Adobe to simply answer this question of how they trained the text encoders for text conditioning and put it to rest. There is absolutely nothing sensitive about the issue, unless it exposes them in a lie.
So no chance. I think it's a big fat lie. They'd have to have made some other scientific breakthrough, which they didn't.
Using information from https://openai.com/research/clip and https://github.com/mlfoundations/open_clip, it's possible to investigate the likelihood that they could make a working text encoder using just their stock image dataset.
It's certainly not impossible, but it's impracticable. Trained on 248m images (roughly the size of Adobe Stock), CLIP gets 37% on ImageNet; trained on the 2,000m images from LAION, it reaches 71-80%. And even with 2,000m images, CLIP performs substantially worse than the approach Imagen uses for "text comprehension," which relies on many billions more images and text tokens.
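One way to ground that comparison yourself: open_clip encodes the training set of each published checkpoint in its pretrained tag, so you can enumerate which architectures have LAION-trained weights. A small sketch (the filter string is just an example):

```python
import open_clip

# Each entry pairs an architecture with a pretrained tag that names the
# training set (e.g. "openai", "laion400m_e32", "laion2b_s34b_b79k").
for model_name, pretrained in open_clip.list_pretrained():
    if "laion2b" in pretrained:
        print(model_name, pretrained)
```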
- MetaCLIP - Meta AI Research
https://github.com/mlfoundations/open_clip/blob/main/docs/op...
- COMFYUI SDXL WORKFLOW INBOUND! Q&A NOW OPEN! (WIP EARLY ACCESS WORKFLOW INCLUDED!)
In the model card it says: pretrained text encoders (OpenCLIP-ViT/G and CLIP-ViT/L).
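For reference, "OpenCLIP-ViT/G" corresponds to open_clip's ViT-bigG-14 architecture. A hedged sketch of loading its text tower; the laion2b_s39b_b160k tag is, to the best of my knowledge, the matching LAION-2B checkpoint, and the weights are a multi-gigabyte download:

```python
import torch
import open_clip

# ViT-bigG-14 is open_clip's name for the "OpenCLIP-ViT/G" encoder;
# the pretrained tag below is assumed to be the LAION-2B checkpoint.
model, _, _ = open_clip.create_model_and_transforms(
    "ViT-bigG-14", pretrained="laion2b_s39b_b160k"
)
tokenizer = open_clip.get_tokenizer("ViT-bigG-14")

tokens = tokenizer(["a cinematic photo of an astronaut riding a horse"])
with torch.no_grad():
    text_emb = model.encode_text(tokens)
print(text_emb.shape)  # the width of the bigG text tower
```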
- Is Nicholas Renotte a good guide for a person who knows nothing about ML?
also, if you describe your task a bit more, we might be able to direct you to a fairly out-of-the-box solution, e.g. you might be able to use one of the pretrained models supported by https://github.com/mlfoundations/open_clip without any additional training
- Generate Image from Vector Embedding
It says on the Stable Diffusion GitHub repo that it uses the "OpenCLIP-ViT/H" https://github.com/mlfoundations/open_clip model as a text encoder, and from my prior experience with CLIP, I have found that it is very easy to generate image and text embeddings (because CLIP is a multimodal model).
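A minimal sketch of getting both embedding types from that ViT-H/14 model, assuming the LAION-2B checkpoint (the image path is a placeholder):

```python
import torch
from PIL import Image
import open_clip

# laion2b_s32b_b79k is assumed to be the LAION-2B ViT-H/14 checkpoint
# that the Stable Diffusion 2.x text encoder is based on.
model, _, preprocess = open_clip.create_model_and_transforms("ViT-H-14", pretrained="laion2b_s32b_b79k")
tokenizer = open_clip.get_tokenizer("ViT-H-14")
model.eval()

image = preprocess(Image.open("photo.jpg")).unsqueeze(0)  # placeholder file
text = tokenizer(["a watercolor painting of a fox"])

with torch.no_grad():
    img_emb = model.encode_image(image)  # image and text land in the
    txt_emb = model.encode_text(text)    # same embedding space
print(img_emb.shape, txt_emb.shape)
```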
- What's up in the Python community? - April 2023
https://replicate.com/pharmapsychotic/clip-interrogator
using:
cfg.apply_low_vram_defaults()
interrogate_fast()
I tried lighter models like vit32/laion400 and others, but all are very slow to load or use (model list: https://github.com/mlfoundations/open_clip).
I'm desperately looking for something more modest and light.
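For what it's worth, a minimal sketch of the low-VRAM clip-interrogator setup those two calls refer to (the image file and the ViT-L backbone choice are assumptions):

```python
from PIL import Image
from clip_interrogator import Config, Interrogator

# Low-VRAM setup: a smaller CLIP backbone plus apply_low_vram_defaults()
# trades some caption quality for a much smaller memory footprint.
cfg = Config(clip_model_name="ViT-L-14/openai")
cfg.apply_low_vram_defaults()
ci = Interrogator(cfg)

image = Image.open("photo.jpg").convert("RGB")  # placeholder file name
# interrogate_fast() skips the slower exhaustive prompt search.
print(ci.interrogate_fast(image))
```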
- Low accuracy on my CNN model.
A library that is very useful for this kind of application is timm. You may also find the feature representation provided by a CLIP model particularly powerful.
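To make that suggestion concrete, here is a sketch of using a frozen CLIP image tower from open_clip as a feature extractor with a small trainable linear head; the class count and the dummy batch are placeholders:

```python
import torch
import open_clip

# Sketch: freeze a pretrained CLIP image tower and train only a small linear
# head on top, instead of training a CNN from scratch on limited data.
model, _, preprocess = open_clip.create_model_and_transforms(
    "ViT-B-32", pretrained="laion2b_s34b_b79k"
)
model.eval()

num_classes = 10  # assumption: replace with your dataset's class count
head = torch.nn.Linear(512, num_classes)  # ViT-B-32 embeddings are 512-d

def clip_features(batch: torch.Tensor) -> torch.Tensor:
    """Return L2-normalized CLIP features for a preprocessed image batch."""
    with torch.no_grad():
        feats = model.encode_image(batch)
    return feats / feats.norm(dim=-1, keepdim=True)

# Dummy batch just to check shapes; real inputs come from `preprocess`.
logits = head(clip_features(torch.randn(4, 3, 224, 224)))
print(logits.shape)  # torch.Size([4, 10])
```

Only `head` has trainable parameters here, so a standard cross-entropy training loop over it is cheap even on CPU.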
What are some alternatives?
skypilot - SkyPilot: Run LLMs, AI, and Batch jobs on any cloud. Get maximum savings, highest GPU availability, and managed execution, all with a simple interface.
CLIP - CLIP (Contrastive Language-Image Pretraining), Predict the most relevant text snippet given an image
textual-markdown
DALLE-pytorch - Implementation / replication of DALL-E, OpenAI's Text to Image Transformer, in Pytorch
pulsar-admintool - Apache Pulsar - simple admin tool for schemas
taming-transformers - Taming Transformers for High-Resolution Image Synthesis
spring-pulsar - Spring Friendly Abstractions for Apache Pulsar
Dreambooth-Stable-Diffusion - Implementation of Dreambooth (https://arxiv.org/abs/2208.12242) with Stable Diffusion
terminal-copilot - A smart terminal assistant that helps you find the right command.
bitsandbytes - Accessible large language models via k-bit quantization for PyTorch.
tiktoken - tiktoken is a fast BPE tokeniser for use with OpenAI's models.
clip-retrieval - Easily compute clip embeddings and build a clip retrieval system with them