| | tiktoken | bootcamp |
|---|---|---|
| Mentions | 32 | 24 |
| Stars | 9,980 | 1,634 |
| Growth | 6.4% | 3.2% |
| Activity | 6.7 | 9.1 |
| Latest commit | about 1 month ago | 6 days ago |
| Language | Python | HTML |
| License | MIT License | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
tiktoken
- FLaNK AI - 01 April 2024
- GPT-3.5 crashes when it thinks about useRalativeImagePath too much
Their tokenizer is open source: https://github.com/openai/tiktoken
Data files that contain vocabulary are listed here: https://github.com/openai/tiktoken/blob/9e79899bc248d5313c7d...
- How fast is JS tiktoken?
OpenAI's reference tokenizer - https://github.com/openai/tiktoken
- Anthropic announces Claude 2.1 – 200k context, less refusals
ChatGPT presumably adds them as special tokens to the cl100k_base tokenizer, as they demo in the tiktoken documentation: https://github.com/openai/tiktoken#extending-tiktoken
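A minimal sketch of the mechanism (toy token ids and a stub base encoder, not tiktoken's actual API): special tokens are matched before the ordinary BPE pass, so each one always maps to a single reserved id instead of being split into pieces.

```python
import re

# Hypothetical special tokens with ids placed above the base vocabulary range.
SPECIAL_TOKENS = {"<|im_start|>": 100264, "<|im_end|>": 100265}

def encode_with_specials(text, base_encode):
    """Split on special tokens first, then run the base encoder on the rest."""
    pattern = "(" + "|".join(re.escape(t) for t in SPECIAL_TOKENS) + ")"
    ids = []
    for piece in re.split(pattern, text):
        if piece in SPECIAL_TOKENS:
            ids.append(SPECIAL_TOKENS[piece])   # one reserved id, never split
        elif piece:
            ids.extend(base_encode(piece))      # normal BPE path
    return ids

# Stub base encoder: one fake id per word (a real one would run BPE merges).
stub = lambda s: [hash(w) % 100000 for w in s.split()]
print(encode_with_specials("<|im_start|>hello world<|im_end|>", stub))
```

The real extension mechanism, via `tiktoken.Encoding(..., special_tokens=...)`, is documented in the linked README section.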
- What is the best way to get an approximate number of tokens for a piece of text?
I want to measure the approximate number of tokens in a piece of text to understand if I will need to modify it before passing it into the context of an OpenAI API call. Tiktoken can do this, but I'm not sure if it's overkill to use that library just for this simple task. I don't need to actually tokenize the text, I just need an approximate count (e.g. within like 1% of the text's actual token length for text that represents the visible text on a webpage).
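A common rule of thumb for English text is roughly 4 characters per token. That heuristic is nowhere near 1% accuracy (for that you do need the real tokenizer), but it is enough for coarse context-budget checks. A minimal sketch (function name is mine):

```python
def approx_token_count(text: str) -> int:
    """Rough token estimate: ~4 characters per token for English text.

    This is a heuristic, not a tokenizer; expect errors of 10% or more,
    worse on code or non-English text. Use tiktoken when accuracy matters.
    """
    return max(1, round(len(text) / 4))

print(approx_token_count("The quick brown fox jumps over the lazy dog."))
```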
- Show HN: LLaMA tokenizer that runs in browser
https://platform.openai.com/tokenizer or the official python library tiktoken https://github.com/openai/tiktoken or this JS port of tiktoken https://github.com/dqbd/tiktoken
- Made a GPT-3.5-Turbo and GPT-4 Tokenizer
It's built on top of the tiktoken library and is basically just a lambda function in the backend.
- AiPrice - an API for calculating OpenAI tokens and pricing
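The arithmetic behind such a pricing API is just token counts times per-token rates. A sketch with made-up example rates (real prices change; check OpenAI's pricing page):

```python
# Hypothetical per-1K-token rates in USD, for illustration only.
PRICES_PER_1K = {
    "gpt-3.5-turbo": {"input": 0.0005, "output": 0.0015},
    "gpt-4": {"input": 0.03, "output": 0.06},
}

def estimate_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Estimate a request's cost in USD from its token counts."""
    p = PRICES_PER_1K[model]
    return (input_tokens * p["input"] + output_tokens * p["output"]) / 1000

print(f"${estimate_cost('gpt-4', 1500, 500):.4f}")
```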
- Anyone able to explain what happened here?
"All" is a single token in OpenAI's tiktoken Tokenizer, unrelated to the token for capital "A". Even lowercase "all" is a distinct token from "All" or "ALL."
- Which lib is the tokenizer page using to calculate the tokens?
check tiktoken
bootcamp
- FLaNK AI - 01 April 2024
- FLaNK Stack Weekly 22 January 2024
- Milvus Adventures Jan 5, 2023
Metadata Filtering with Zilliz Cloud Pipelines: this tutorial discusses scalar (metadata) filtering and how to perform it in Zilliz Cloud. It continues the previous post, Getting started with RAG in just 5 minutes. You can find its code in this notebook; scroll down to Cell #27.
- Build a search engine, not a vector DB
Partially agree.
Vector DBs are critical components in retrieval systems. What most applications need are retrieval systems, rather than building blocks of retrieval systems. That doesn't mean the building blocks are not important.
As someone working on a vector DB, I see many users struggle to build their own retrieval systems from building blocks such as an embedding service (OpenAI, Cohere), a logic orchestration framework (LangChain/LlamaIndex), and a vector database, some even with reranker models. Putting them together is not as easy as it looks; it is fairly challenging systems work, let alone the quality tuning and devops.
The struggle is no surprise to me, as the tech companies who are experts at this (Google, Meta) all have dedicated teams working on retrieval systems alone, making tons of optimizations and developing a whole feedback loop for evaluating and improving quality. Most developers don't get access to such resources.
No one size fits all. I think there should be a service that democratizes AI-powered retrieval: in simple words, the know-how of using embeddings plus a vector DB, and a bunch of tricks, to achieve SOTA retrieval quality.
With this idea I built a Retrieval-as-a-service solution, and here is its demo:
https://github.com/milvus-io/bootcamp/blob/master/bootcamp/R...
Curious to learn your thoughts.
- Vector Database in a Jupyter Notebook
Although it's common to use vector databases in conjunction with LLMs, I like to talk about vector databases in the context of unstructured data, i.e. any data that you can vectorize with (or without) an ML model. Yes, this includes text, but it also includes things like visual data, molecular structures, and geospatial data.
For folks who want to learn a bit more, there are examples of vector database use cases beyond semantic text search in our bootcamp: https://github.com/milvus-io/bootcamp
- Beginner-ish resources for choosing a vector database?
Easy to get started: Here are some tutorials for Milvus in a Jupyter Notebook that I wrote - reverse image search, semantic text search
- Semantic Similarity Search
I think you can just store your vector embeddings in the vector store somewhere and then query with your second document. I created a short tutorial on this that shows how to get the top 2 vector embeddings from a text query
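The retrieval step described here is nearest-neighbor search over stored embedding vectors. A brute-force sketch in plain Python (names are mine; a vector DB like Milvus does the same thing at scale with ANN indexes):

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def top_k(query, store, k=2):
    """Return the k stored (id, score) pairs most similar to the query vector."""
    scored = [(doc_id, cosine(query, vec)) for doc_id, vec in store.items()]
    return sorted(scored, key=lambda t: t[1], reverse=True)[:k]

# Toy 3-d "embeddings"; real ones come from an embedding model.
store = {"doc_a": [1.0, 0.0, 0.0], "doc_b": [0.9, 0.1, 0.0], "doc_c": [0.0, 1.0, 0.0]}
print(top_k([1.0, 0.05, 0.0], store, k=2))
```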
- [D] Looking for open source projects to contribute
For more beginner tasks associated with the Milvus vector database, you can contribute to the Bootcamp project( https://github.com/milvus-io/bootcamp), where we build a lot of data-driven solutions using ML and Milvus vector database, including reverse image search, recommender systems, etc.
- I built an image similarity search system... Suggestions needed: what are some fun image datasets or scenarios I can use with this? :)
Source code here: https://github.com/milvus-io/bootcamp/tree/master/solutions/reverse_image_search
- Faiss: Facebook's open source vector search library
What are some alternatives?
tokenizer - Pure Go implementation of OpenAI's tiktoken tokenizer
Milvus - A cloud-native vector database, storage for next generation AI applications
daath-ai-parser - Daath AI Parser is an open-source application that uses OpenAI to parse visible text of HTML elements.
google-research - Google Research
CLIP - CLIP (Contrastive Language-Image Pretraining), Predict the most relevant text snippet given an image
docarray - Represent, send, store and search multimodal data
skypilot - SkyPilot: Run LLMs, AI, and Batch jobs on any cloud. Get maximum savings, highest GPU availability, and managed execution—all with a simple interface.
es-clip-image-search - Sample implementation of natural language image search with OpenAI's CLIP and Elasticsearch or Opensearch.
bricks - Open-source natural language enrichments at your fingertips.
habitat-sim - A flexible, high-performance 3D simulator for Embodied AI research.
terminal-copilot - A smart terminal assistant that helps you find the right command.
annoy - Approximate Nearest Neighbors in C++/Python optimized for memory usage and loading/saving to disk