TypeScript JS

Open-source TypeScript projects categorized as JS

Top 23 TypeScript JS Projects

  • tiptap

    The headless rich text editor framework for web artisans.

    Project mention: Shadcn UI: Must-Have Tools & Resources | dev.to | 2024-06-14

    novel - Novel is a Notion-style WYSIWYG editor with AI-powered autocompletion. Built with Tiptap + Vercel AI SDK.

  • docz

    ✍ It has never been so easy to document your things!

  • lexical

    Lexical is an extensible text editor framework that provides excellent reliability, accessibility and performance.

    Project mention: Ask HN: Who wants to be hired? (July 2024) | news.ycombinator.com | 2024-07-01

    - Best way to play settlers of catan online: https://colonist.io

    I also contribute to open source when I get the chance to. You can find my github at https://github.com/meronogbai.

    Here's some of my open source PRs:

    - Facebook's lexical: https://github.com/facebook/lexical/pull/6271

  • face-api.js

    JavaScript API for face detection and face recognition in the browser and nodejs with tensorflow.js

  • ky

    🌳 Tiny & elegant JavaScript HTTP client based on the browser Fetch API

    Project mention: Ky: Tiny and elegant JavaScript HTTP client based on the browser Fetch API | news.ycombinator.com | 2024-04-24
  • tsParticles

    tsParticles - Easily create highly customizable JavaScript particles effects, confetti explosions and fireworks animations and use them as animated backgrounds for your website. Ready to use components available for React.js, Vue.js (2.x and 3.x), Angular, Svelte, jQuery, Preact, Inferno, Solid, Riot and Web Components.

    Project mention: Pride Month | dev.to | 2024-06-06

    To add the confetti effect I used an open-source library called tsparticles; to see its code you can check here: https://github.com/tsparticles/tsparticles

  • notion-sdk-js

    Official Notion JavaScript Client

    Project mention: Show HN: Pages CMS – A CMS for GitHub | news.ycombinator.com | 2024-02-22
  • hackernews-react-graphql

    Hacker News clone rewritten with universal JavaScript, using React and GraphQL.

  • metaplex

    A directory of what the Metaplex Foundation works on!

  • supabase-js

    An isomorphic Javascript client for Supabase. Query your Supabase database, subscribe to realtime events, upload and download files, browse typescript examples, invoke postgres functions via rpc, invoke supabase edge functions, query pgvector.

    Project mention: Managing environment variables in Angular apps | dev.to | 2024-07-02

    In such cases Supabase is a great open-source alternative to setting up a custom backend, and integrating it into an Angular app is fairly simple, given the existing supabase dependencies like @supabase/supabase-js. The only prerequisite to make it work is initialising a supabase project and generating an API key.

  • node-casbin

    An authorization library that supports access control models like ACL, RBAC, ABAC in Node.js and Browser

  • Pipcook

    Machine learning platform for Web developers

  • typegoose

    Typegoose - Define Mongoose models using TypeScript classes.

  • JSONForms

    Customizable JSON Schema-based forms with React, Angular and Vue support out of the box.

  • solana-web3.js

    Solana JavaScript SDK

  • composio

    Composio equips agents with well-crafted tools empowering them to tackle complex tasks

    Project mention: I built an AI Agent to validate my PR without actually doing it myself 🚀⚡ | dev.to | 2024-07-15

    In Composio, we review tens of pull requests every week.

  • chibisafe

    Blazing fast file vault written in TypeScript! 🚀

    Project mention: How do I host Chibisafe on different port? | /r/selfhosted | 2023-11-14

    I want to migrate from lolisafe to chibisafe, but I can't find any option to change the default port it uses, which is problematic as I already have a wordpress server using port 8000. I tried changing the port in a few of the files, but that seemed too hacky and didn't work anyway. In lolisafe you have your config.js and you can change your port to whatever you want, here, not so much. Did anyone here face the same issue and have a solution? I'm using yarn, so no docker.

  • ollama-js

    Ollama JavaScript library

    Project mention: Codestral Mamba | news.ycombinator.com | 2024-07-16

    I can give a summary of what's happened the past couple of years and what tools are out there.

    After ChatGPT released, there was a lot of hype in the space but open source was far behind. Iirc the best open foundation LLM that existed was GPT-2 but it was two generations behind.

    A while later Meta released LLaMA[1], a well-trained base foundation model, which brought an explosion to open source. It was soon implemented in the Hugging Face Transformers library[2] and the weights were spread across the Hugging Face website for anyone to use.

    At first, it was difficult to run locally. Few developers had the system or money to run it. It required too much RAM and, iirc, Meta's original implementation didn't support running on the CPU, but developers soon came up with methods to make it smaller via quantization. The biggest project for this was Llama.cpp[3], which probably is still the biggest open source project today for running LLMs locally. Hugging Face Transformers also added quantization support through bitsandbytes[4].

    Over the next months there was rapid development in open source. Quantization techniques improved, which meant LLaMA was able to run with less and less RAM, with greater and greater accuracy, on more and more systems. Tools came out that were capable of finetuning LLaMA, and hundreds of LLaMA finetunes appeared that tuned it on instruction-following, RLHF, and chat datasets, which drastically increased accuracy even further. During this time, Stanford's Alpaca, Lmsys's Vicuna, Microsoft's Wizard, 01ai's Yi, Mistral, and a few others made their way onto the open LLM scene with some very good LLaMA finetunes.

    A new inference engine (software for running LLMs like Llama.cpp, Transformers, etc) called vLLM[5] came out which was capable of running LLMs in a more efficient way than was previously possible in open source. Soon it would even get good AMD support, making it possible for those with AMD GPUs to run open LLMs locally and with relative efficiency.

    Then Meta released Llama 2[6]. Llama 2 was by far the best open LLM of its time, released with RLHF instruction finetunes for chat and with human evaluation data that put its open LLM leadership beyond doubt. Existing tools like Llama.cpp and Hugging Face Transformers quickly added support, and users had access to the best LLM open source had to offer.

    At this point in time, despite all the advancements, it was still difficult to run LLMs. Llama.cpp and Transformers were great engines for running LLMs, but the setup process was difficult and required a lot of time. You had to find the best LLM, quantize it in the best way for your computer (or figure out how to identify and download one from Hugging Face), set up whatever engine you wanted, figure out how to use your quantized LLM with the engine, fix any bugs you made along the way, and finally figure out how to prompt your specific LLM in a chat-like format.

    However, tools started coming out to make this process significantly easier. The first one of these that I remember was GPT4All[7]. GPT4All was a wrapper around Llama.cpp which made it easy to install, easy to select the LLM that you want (pre-quantized options for easy download from a download manager), and a chat UI which made LLMs easy to use. This significantly reduced the barrier to entry for those who were interested in using LLMs.

    The second project that I remember was Ollama[8]. Also a wrapper around Llama.cpp, Ollama gave most of what GPT4All had to offer but in an even simpler way. Today, I believe Ollama is bigger than GPT4All although I think it's missing some of the higher-level features of GPT4All.

    Another important tool that came out during this time is called Exllama[9]. Exllama is an inference engine with a focus on modern consumer Nvidia GPUs and advanced quantization support based on GPTQ. It is probably the best inference engine for squeezing performance out of consumer Nvidia GPUs.

    Months later, Nvidia came out with another new inference engine called TensorRT-LLM[10]. TensorRT-LLM is capable of running most LLMs and does so with extreme efficiency. It is the most efficient open source inference engine that exists for Nvidia GPUs. However, it also has the most difficult setup process of any inference engine and is made primarily for production use cases and Nvidia AI GPUs, so don't expect it to work on your personal computer.

    With the rumors of GPT-4 being a Mixture of Experts LLM, research breakthroughs in MoE, and some small MoE LLMs coming out, interest in MoE LLMs was at an all-time high. The company Mistral, which had proven itself in the past with very impressive LLaMA finetunes, capitalized on this interest by releasing Mixtral 8x7b[11], the best accuracy-for-its-size LLM that the local LLM community had seen to date. Eventually MoE support was added to all inference engines, and it became a very popular mid-to-large sized LLM.

    Cohere released their own LLM as well, called Command R+[12], built specifically for RAG-related tasks with a context length of 128k. It's quite large and doesn't have notable performance on many metrics, but it has some interesting RAG features no other LLM has.

    More recently, Llama 3[13] was released, which, like previous Llama releases, blew every other open LLM out of the water. The smallest version of Llama 3 (Llama 3 8b) has the greatest accuracy for its size of any open LLM, and the largest version released so far (Llama 3 70b) beats every other open LLM on almost every metric.

    Less than a month ago, Google released Gemma 2[14], the largest of which performs very well under human evaluation despite being less than half the size of Llama 3 70b, but only decently on automated benchmarks.

    If you're looking for a tool to get started running LLMs locally, I'd go with either Ollama or GPT4All. They make the process about as painless as possible. I believe GPT4All has more features like using your local documents for RAG, but you can also use something like PrivateGPT with Ollama to get the same functionality.

    If you want to get into the weeds a bit and extract some more performance out of your machine, I'd go with using Llama.cpp, Exllama, or vLLM depending upon your system. If you have a normal, consumer Nvidia GPU, I'd go with Exllama. If you have an AMD GPU that supports ROCm 5.7 or 6.0, I'd go with vLLM. For anything else, including just running it on your CPU, I'd go with Llama.cpp. TensorRT-LLM only makes sense if you have an AI Nvidia GPU like the A100, V100, A10, H100, etc.

    [1] https://ai.meta.com/blog/large-language-model-llama-meta-ai/

    [2] https://github.com/huggingface/transformers

    [3] https://github.com/ggerganov/llama.cpp

    [4] https://github.com/bitsandbytes-foundation/bitsandbytes

    [5] https://github.com/vllm-project/vllm

    [6] https://ai.meta.com/blog/llama-2/

    [7] https://www.nomic.ai/gpt4all

    [8] http://ollama.ai/

    [9] https://github.com/turboderp/exllamav2

    [10] https://github.com/NVIDIA/TensorRT-LLM

    [11] https://mistral.ai/news/mixtral-of-experts/

    [12] https://cohere.com/blog/command-r-plus-microsoft-azure

    [13] https://ai.meta.com/blog/meta-llama-3/

    [14] https://blog.google/technology/developers/google-gemma-2/

  • modelfusion

    The TypeScript library for building AI applications.

    Project mention: Next.js and GPT-4: A Guide to Streaming Generated Content as UI Components | dev.to | 2024-01-25

    ModelFusion is an AI integration library that I am developing. It enables you to integrate AI models into your JavaScript and TypeScript applications. You can install it with the following command:

  • js-dos

    The best API for running DOS programs in the browser

    Project mention: Web-Based Turbo Pascal Compiler | news.ycombinator.com | 2024-04-15
  • fzf-for-js

    Do fuzzy matching using FZF algorithm in JavaScript

  • materialize

    Materialize, a web framework based on Material Design (by materializecss)

NOTE: The open source projects on this list are ordered by number of GitHub stars. The number of mentions indicates repo mentions in the last 12 months or since we started tracking (Dec 2020).


TypeScript JS related posts

  • Send Web Push messages with Deno

    2 projects | dev.to | 14 Jul 2024
  • Ask HN: Recommended front end stack for complete beginner?

    2 projects | news.ycombinator.com | 4 Jul 2024
  • Understanding Array Data Structures

    1 project | dev.to | 26 Jun 2024
  • Node Boost: Clusters & Threads

    1 project | dev.to | 19 Jun 2024
  • Are Sync Engines The Future of Web Applications?

    6 projects | dev.to | 17 Jun 2024
  • What is Software Testing

    8 projects | dev.to | 16 May 2024
  • ChatCrafters - Chat with AI powered personas

    3 projects | dev.to | 12 Apr 2024


What are some of the best open-source JS projects in TypeScript? This list will help you:

Project Stars
1 tiptap 25,150
2 docz 23,551
3 lexical 18,391
4 face-api.js 16,308
5 ky 11,896
6 tsParticles 7,338
7 notion-sdk-js 4,718
8 hackernews-react-graphql 4,422
9 metaplex 3,304
10 supabase-js 2,987
11 mmenu 2,582
12 node-casbin 2,538
13 Pipcook 2,519
14 typegoose 2,166
15 JSONForms 2,034
16 solana-web3.js 1,977
17 composio 1,810
18 chibisafe 1,651
19 ollama-js 1,607
20 modelfusion 1,038
21 js-dos 1,003
22 fzf-for-js 880
23 materialize 867


Did you know that TypeScript is the 2nd most popular programming language based on number of mentions?