| | MyScaleDB | tiktoken |
| --- | --- | --- |
| Mentions | 4 | 32 |
| Stars | 680 | 10,380 |
| Growth | 86.6% | 10.0% |
| Activity | 9.0 | 6.7 |
| Latest commit | 14 days ago | 7 days ago |
| Language | C++ | Python |
| License | Apache License 2.0 | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
MyScaleDB
- MyScaleDB: Open-source SQL vector database to build AI apps using SQL
- FLaNK AI - 01 April 2024
  Vector DB built on ClickHouse: https://github.com/myscale/myscaledb (a minimal usage sketch follows this list)
- Show HN: High-Performance SQL Vector Database MyScaleDB Goes Open Source
- Show HN: MyScaleDB open-sourced: a SQL vector database to build AI apps with SQL
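As a minimal sketch of the SQL-first workflow these mentions describe: create a table with a vector column, add a vector index, and run a k-NN query. The snippet uses Python with the clickhouse-connect driver; the table name, the 128-dimension vectors, and the SCANN index type are illustrative assumptions based on the MyScaleDB quickstart, not a definitive setup.

```python
import clickhouse_connect  # pip install clickhouse-connect

# Assumption: a local MyScaleDB server on the default ClickHouse HTTP port.
client = clickhouse_connect.get_client(host="localhost", port=8123)

# Vector columns are plain Array(Float32) with a length constraint.
client.command("""
    CREATE TABLE IF NOT EXISTS default.docs (
        id UInt32,
        body String,
        vector Array(Float32),
        CONSTRAINT check_length CHECK length(vector) = 128
    ) ENGINE = MergeTree ORDER BY id
""")

# Index type is an assumption; MyScaleDB's docs list several (e.g. SCANN, FLAT).
client.command("ALTER TABLE default.docs ADD VECTOR INDEX vec_idx vector TYPE SCANN")

# k-NN search: order by distance() against the query vector.
query_vector = [0.1] * 128
rows = client.query(f"""
    SELECT id, body, distance(vector, {query_vector}) AS dist
    FROM default.docs
    ORDER BY dist
    LIMIT 5
""").result_rows
```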
tiktoken
- FLaNK AI - 01 April 2024
- GPT-3.5 crashes when it thinks about useRalativeImagePath too much
  Their tokenizer is open source: https://github.com/openai/tiktoken
  Data files that contain the vocabulary are listed here: https://github.com/openai/tiktoken/blob/9e79899bc248d5313c7d...
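  Since the tokenizer is pip-installable, round-tripping text through it takes only a few lines. A minimal sketch using tiktoken's documented API (the encoding and model names are just the common defaults):

  ```python
  import tiktoken  # pip install tiktoken

  # get_encoding() downloads and caches the BPE data file for cl100k_base,
  # the encoding used by gpt-3.5-turbo and gpt-4.
  enc = tiktoken.get_encoding("cl100k_base")

  tokens = enc.encode("hello world")
  assert enc.decode(tokens) == "hello world"

  # Or look the encoding up by model name:
  enc = tiktoken.encoding_for_model("gpt-3.5-turbo")
  ```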
- How fast is JS tiktoken?
  OpenAI's reference tokenizer: https://github.com/openai/tiktoken
- Anthropic announces Claude 2.1 – 200k context, less refusals
  ChatGPT presumably adds them as special tokens to the cl100k_base tokenizer, as demonstrated in the tiktoken documentation: https://github.com/openai/tiktoken#extending-tiktoken
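  The "Extending tiktoken" section linked above shows exactly this pattern: build a new Encoding that reuses cl100k_base's merge ranks and registers extra special tokens. The sketch below follows that example; the `<|im_start|>`/`<|im_end|>` names and token IDs are the ones used in tiktoken's README.

  ```python
  import tiktoken

  cl100k_base = tiktoken.get_encoding("cl100k_base")

  # New encoding = existing BPE ranks + extra special tokens.
  enc = tiktoken.Encoding(
      name="cl100k_im",
      pat_str=cl100k_base._pat_str,
      mergeable_ranks=cl100k_base._mergeable_ranks,
      special_tokens={
          **cl100k_base._special_tokens,
          "<|im_start|>": 100264,
          "<|im_end|>": 100265,
      },
  )

  # Special tokens are rejected unless explicitly allowed.
  ids = enc.encode("<|im_start|>user", allowed_special="all")
  ```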
- What is the best way to get an approximate number of tokens for a piece of text?
  I want to measure the approximate number of tokens in a piece of text to understand whether I will need to modify it before passing it into the context of an OpenAI API call. tiktoken can do this, but I'm not sure if it's overkill to use that library just for this simple task. I don't need to actually tokenize the text; I just need an approximate count (e.g., within about 1% of the text's actual token length for text that represents the visible text of a webpage).
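  A small sketch of both options: the exact count via tiktoken, and a zero-dependency heuristic. The ~4-characters-per-token rule of thumb comes from OpenAI's guidance for common English text; it can drift well past 1% on code or non-English input, so the exact path is the safer choice at that tolerance.

  ```python
  import tiktoken

  def exact_token_count(text: str, model: str = "gpt-3.5-turbo") -> int:
      # Exact count using the model's own encoding.
      enc = tiktoken.encoding_for_model(model)
      return len(enc.encode(text))

  def approx_token_count(text: str) -> int:
      # Rough heuristic: ~4 characters per token for typical English text.
      return len(text) // 4
  ```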
- Show HN: LLaMA tokenizer that runs in browser
  https://platform.openai.com/tokenizer, the official Python library tiktoken (https://github.com/openai/tiktoken), or this JS port of tiktoken (https://github.com/dqbd/tiktoken)
- Made a GPT-3.5-Turbo and GPT-4 Tokenizer
  It's built on top of the tiktoken library and is basically just a lambda function on the backend.
- AiPrice - an API for calculating OpenAI tokens and pricing
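  Token-and-pricing calculators like these reduce to "count tokens, multiply by a rate". A hedged sketch: the per-1K-token prices below are placeholders, not AiPrice's actual rates; check OpenAI's pricing page for current numbers.

  ```python
  import tiktoken

  # Placeholder input prices per 1K tokens; real rates change over time.
  PRICE_PER_1K_INPUT = {"gpt-3.5-turbo": 0.0005, "gpt-4": 0.03}

  def estimate_input_cost(text: str, model: str) -> float:
      enc = tiktoken.encoding_for_model(model)
      n_tokens = len(enc.encode(text))
      return n_tokens / 1000 * PRICE_PER_1K_INPUT[model]

  print(estimate_input_cost("How many tokens is this?", "gpt-4"))
  ```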
- Anyone able to explain what happened here?
  "All" is a single token in OpenAI's tiktoken tokenizer, unrelated to the token for a capital "A". Even lowercase "all" is a distinct token from "All" or "ALL".
- Which lib is the tokenizer page using to calculate the tokens?
  Check tiktoken: https://github.com/openai/tiktoken
What are some alternatives?
bootcamp - Dealing with all kinds of unstructured data, such as reverse image search, audio search, molecular search, video analysis, question answering systems, NLP, etc.
tokenizer - Pure Go implementation of OpenAI's tiktoken tokenizer
CML_AMP_Deploy-Mistral7B-CML-Native-Model - Deploy the Mistral 7B model in CML using the built-in, native CML Models capability
daath-ai-parser - Daath AI Parser is an open-source application that uses OpenAI to parse visible text of HTML elements.
tracecat - 😼 The open source alternative to Tines / Splunk SOAR. Build AI-assisted workflows, orchestrate alerts, and close cases fast.
CLIP - CLIP (Contrastive Language-Image Pretraining): predict the most relevant text snippet given an image
zenml - ZenML 🙏: Build portable, production-ready MLOps pipelines. https://zenml.io.
skypilot - SkyPilot: Run LLMs, AI, and Batch jobs on any cloud. Get maximum savings, highest GPU availability, and managed execution—all with a simple interface.
bricks - Open-source natural language enrichments at your fingertips.
terminal-copilot - A smart terminal assistant that helps you find the right command.
jupyter-scheduler - Run Jupyter notebooks as jobs
twitter-archive-parser - Python code to parse a Twitter archive and output in various ways