cursor VS RWKV-LM

Compare cursor vs RWKV-LM and see how they differ.

RWKV-LM

RWKV is an RNN with transformer-level LLM performance. It can be directly trained like a GPT (parallelizable). So it's combining the best of RNN and transformer - great performance, fast inference, saves VRAM, fast training, "infinite" ctx_len, and free sentence embedding. (by BlinkDL)
|             | cursor       | RWKV-LM            |
|-------------|--------------|--------------------|
| Mentions    | 13           | 84                 |
| Stars       | 20,218       | 11,704             |
| Growth      | 2.1%         | -                  |
| Activity    | 7.7          | 8.8                |
| Last commit | 7 months ago | 13 days ago        |
| Language    | TypeScript   | Python             |
| License     | MIT License  | Apache License 2.0 |
The number of mentions indicates the total number of mentions we've tracked, plus the number of user-suggested alternatives.
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.

cursor

Posts with mentions or reviews of cursor. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2023-10-10.
  • GitHub Copilot Loses an Average of $20 per User per Month
    3 projects | news.ycombinator.com | 10 Oct 2023
  • Show HN: Tall Sandwiches
    2 projects | news.ycombinator.com | 18 Sep 2023
    Dumb weekend project made entirely with AI.

    Code: [cursor.so](https://cursor.so)

  • Money Is Pouring into AI. Skeptics Say It’s a ‘Grift Shift.’
    1 project | news.ycombinator.com | 30 Aug 2023
    AI investment is actually down recently; it looks like the hype is wearing off, since most of the companies funded were just wrapping OpenAI APIs. I will copy-paste a post I submitted before regarding a similar issue.

    https://twitter.com/0xSamHogan/status/1680725207898816512

    Nitter: https://nitter.net/0xSamHogan/status/1680725207898816512#m

    ---

    6 months ago it looked like AI / LLMs were going to bring a much needed revival to the venture startup ecosystem after a few tough years.

    With companies like Jasper starting to slow down, it’s looking like this may not be the case.

    Right now there are 2 clear winners, a handful of losers, and a small group of moonshots that seem promising.

    Let’s start with the losers.

    Companies like Jasper and the VCs that back them are the biggest losers right now. Jasper raised >$100M at a 10-figure valuation for what is essentially a generic, thin wrapper around OpenAI. Their UX and brand are good, but not great, and competition from companies building differentiated products specifically for high-value niches is making it very hard to grow with such a generic product. I’m not sure how this pans out, but VCs will likely lose their money.

    The other category of losers are the VC-backed teams building at the application layer that raised $250K-25M in Dec - March on the back of the chatbot craze with the expectation that they would be able to sell to later-stage and enterprise companies. These startups typically have products that are more focused than something very generic like Jasper, but still don't have a real technology moat; the products are easy to copy.

    Executives at enterprise companies are excited about AI, and have been vocal about this from the beginning. This led a lot of founders and VCs to believe these companies would make good first customers. What the startups building for these companies failed to realize is just how aligned and savvy executives and the engineers they manage would be at quickly getting AI into production using open-source tools. An engineering leader would rather spin up their own @LangChainAI and @trychroma infrastructure for free and build tech themselves than buy something from a new, unproven startup (and maybe pick up a promotion along the way).

    In short, large companies are opting to write their own AI success stories rather than being a part of the growth metrics a new AI startup needs to raise their next round.

    (This is part of an ongoing shift in the way technology is adopted; I'll discuss this in a post next week.)

    This brings us to our first group of winners — established companies and market incumbents. Most of them had little trouble adding AI into their products or hacking together some sort of "chat-your-docs" application internally for employee use. This came as a surprise to me. Most of these companies seemed to be asleep at the wheel for years. They somehow woke up and have been able to successfully navigate the LLM craze with ample dexterity.

    There are two causes for this:

    1. Getting AI right is a life or death proposition for many of these companies and their executives; failure here would mean a slow death over the next several years. They can't risk putting their future in the hands of a new startup that could fail and would rather lead projects internally to make absolutely sure things go as intended.

    2. There is a certain amount of kick-ass wafting through halls of the C-Suite right now. Ambitious projects are being green-lit and supported in ways they weren't a few years ago. I think we owe this in part to @elonmusk reminding us of what is possible when a small group of smart people are highly motivated to get things done. Reduce red-tape, increase personal responsibility, and watch the magic happen.

    Our second group of winners lives on the opposite side of this spectrum: indie devs and solopreneurs. These small, often one-man outfits do not raise outside capital or build big teams. Their advantage is their small size and ability to move very quickly with low overhead. They build niche products for niche markets, which they often dominate. The goal is to build a SaaS product (or multiple) that generates ~$10k/mo in relatively passive income. This is sometimes called "micro-SaaS."

    These are the @levelsio's and @dannypostmaa's of the world. They are part software devs, part content marketers, and full-time modern internet businessmen. They answer to no one except the markets and their own intuition.

    This is the biggest group of winners right now. Unconstrained by the need for a $1B+ exit or the goal of $100MM ARR, they build and launch products in rapid-fire fashion, iterating until PMF and cashflow, and moving on to the next. They ruthlessly shut down products that are not performing.

    LLMs and text-to-image models a la Stable Diffusion have been a boon for these entrepreneurs, and I personally know of dozens of successful (keeping in mind their definition of successful) apps that were started less than 6 months ago. The lifestyle and freedom these endeavors afford to those that perform well is also quite enticing.

    I think we will continue to see the number of successful micro-saas AI apps grow in the next 12 months. This could possibly become one of the biggest cohorts creating real value with this technology.

    The last group I want to talk about are the AI Moonshots — companies that are fundamentally re-imagining an entire industry from the ground up. Generally, these companies are VC-backed and building products that have the potential to redefine how a small group of highly-skilled humans interact with and are assisted by technology. It's too early to tell if they'll be successful or not; early prototypes have been compelling. This is certainly the most exciting segment to watch.

    A few companies I would put in this group are:

    1. https://cursor.so - an AI-first code editor that could very well change how software is written.

    2. https://harvey.ai - AI for legal practices

    3. https://runwayml.com - an AI-powered video editor

    This is an incomplete list, but overall I think the Moonshot category needs to grow massively if we're going to see the AI-powered future we've all been hoping for.

    If you're a founder in the $250K-25M raised category and are having a hard time finding PMF for your chatbot or LLMOps company, it may be time to consider pivoting to something more ambitious.

    Let's recap:

    1. VC-backed companies are having a hard time. The more money a company raised, the more pain they're feeling.

    2. Incumbents and market leaders are quickly becoming adept at deploying cutting-edge AI using internal teams and open-source, off-the-shelf technology, cutting out what seemed to be good opportunities for VC-backed startups.

    3. Indie devs are building small, cash-flowing businesses by quickly shipping niche AI-powered products in niche markets.

    4. A small number of promising Moonshot companies with unproven technology hold the most potential for VC-sized returns.

    It's still early. This landscape will continue to change as new foundational models are released and toolchains improve. I'm sure you can find counterexamples to everything I've written about here. Put them in the comments for others to see.

    And just to be upfront about this, I fall squarely into the "raised $250K-25M without PMF" category.

  • Imminent Death of ChatGPT [and Generative AI] Is Greatly Exaggerated
    1 project | news.ycombinator.com | 25 Aug 2023
    I'm gonna copy paste a post I submitted before regarding a similar issue.

    https://twitter.com/0xSamHogan/status/1680725207898816512

    Nitter: https://nitter.net/0xSamHogan/status/1680725207898816512#m


  • Show HN: Semi-Autonomous LLM with a dev workstation
    1 project | news.ycombinator.com | 19 Aug 2023
    This feels scammy and low quality. Compare this site with something like https://cursor.so that targets a similar idea.
  • Cursor.sh – Fork of VSCode with AI Built-In
    1 project | news.ycombinator.com | 17 Aug 2023
    You seem to have a word, "closed source fork" https://github.com/getcursor/cursor#oss

    I don't know what kind of world you live in, but submitting a closed source editor to HN with a comment in the readme of "send us email if you want the source opened" is some ... welcome, I hope you enjoy your stay here

  • Check cursor.so: Build Software. Fast. Write, edit, and chat about your code with a powerful AI
    1 project | /r/ChatGPTPro | 5 Apr 2023
    Just stumbled upon cursor.so and I think y'all might like it - check https://cursor.so
  • Cursor: An editor made for programming with AI
    1 project | news.ycombinator.com | 3 Apr 2023
  • cursor - An editor made for programming with AI
    1 project | /r/LocalGPT | 3 Apr 2023
  • AI plugin overview
    18 projects | /r/neovim | 3 Apr 2023
    the new https://cursor.so editor demonstrates how editing with AI is the future, and really powerful. Now I love neovim, but only because it makes me productive. I don't want to leave neovim, but without solid AI integration like cursor's, it seems obvious that editors without strong AI integration will never be as productive as those with it. So I went out to scour the current neovim AI plugin landscape, and to hear what others have found to be the best AI integration.

RWKV-LM

Posts with mentions or reviews of RWKV-LM. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2023-12-09.
  • Do LLMs need a context window?
    1 project | news.ycombinator.com | 25 Dec 2023
    https://github.com/BlinkDL/RWKV-LM#rwkv-discord-httpsdiscord... lists a number of implementations of various versions of RWKV.

    https://github.com/BlinkDL/RWKV-LM#rwkv-parallelizable-rnn-w... :

    > RWKV: Parallelizable RNN with Transformer-level LLM Performance (pronounced as "RwaKuv", from 4 major params: R W K V)

    > RWKV is an RNN with Transformer-level LLM performance, which can also be directly trained like a GPT transformer (parallelizable). And it's 100% attention-free. You only need the hidden state at position t to compute the state at position t+1. You can use the "GPT" mode to quickly compute the hidden state for the "RNN" mode.

    > So it's combining the best of RNN and transformer - great performance, fast inference, saves VRAM, fast training, "infinite" ctx_len, and free sentence embedding (using the final hidden state).

    > Our latest version is RWKV-6.
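
    To make the quoted claim concrete — that the state at position t+1 needs only the hidden state at position t — here is a minimal sketch of a WKV-style recurrence in "RNN mode". It is a simplification for illustration, not the repository's API: real RWKV adds token/channel mixing and numerical-stability tricks, and the names `wkv_step`, `w`, and `u` are ours.

```python
import math

def wkv_step(k, v, state, w=0.5, u=0.0):
    """One channel of a simplified WKV recurrence (RWKV-v4 style).
    `state` holds a running weighted sum (num) and normalizer (den)
    over all past tokens, so memory is O(1) in sequence length."""
    num, den = state
    # Output mixes the decayed past with a bonus term `u` for the current token.
    out = (num + math.exp(u + k) * v) / (den + math.exp(u + k))
    # Decay the past by e^{-w} per step, then fold in the current token.
    num = math.exp(-w) * num + math.exp(k) * v
    den = math.exp(-w) * den + math.exp(k)
    return out, (num, den)

state = (0.0, 0.0)  # the only memory carried between tokens
for k, v in [(0.1, 1.0), (-0.3, 2.0), (0.2, -1.0), (0.0, 0.5)]:
    out, state = wkv_step(k, v, state)  # O(1) work and memory per token
```

    Because each step reads and writes only `state`, generation never re-attends over the prefix — which is what makes the "infinite" ctx_len and constant-memory inference claims possible.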

  • People who've used RWKV, whats your wishlist for it?
    9 projects | /r/LocalLLaMA | 9 Dec 2023
  • Paving the way to efficient architectures: StripedHyena-7B
    1 project | news.ycombinator.com | 8 Dec 2023
  • Understanding Deep Learning
    1 project | news.ycombinator.com | 26 Nov 2023
    That is not true. There are RNNs with transformer/LLM-like performance. See https://github.com/BlinkDL/RWKV-LM.
  • Q-Transformer: Scalable Reinforcement Learning via Autoregressive Q-Functions
    3 projects | news.ycombinator.com | 19 Sep 2023
    This is what RWKV (https://github.com/BlinkDL/RWKV-LM) was made for, and what it will be good at.

    Wow. Pretty darn cool! <3 :'))))

  • Personal GPT: A tiny AI Chatbot that runs fully offline on your iPhone
    14 projects | /r/ChatGPT | 30 Jun 2023
    Thanks for the support! Two weeks ago, I'd have said longer contexts on small on-device LLMs are at least a year away, but developments from last week seem to indicate that it's well within reach. Once the low-hanging product features are done, I think it's a worthy problem to spend a couple of weeks or perhaps even months on. Speaking of context lengths, recurrent models like RWKV technically have infinite context lengths, but in practice the context slowly fades away after a few thousand tokens.
  • "If you see a startup claiming to possess top-secret results leading to human level AI, they're lying or delusional. Don't believe them!" - Yann LeCun, on the conspiracy theories of "X company has reached AGI in secret"
    1 project | /r/singularity | 26 Jun 2023
    This is the reason there are only a few AI labs, and they show little of the theoretical and scientific understanding you believe is required. Go check their code; there's nothing there. Even the transformer with its heads and other architectural elements turns out to not do anything and it is less efficient than RNNs. (see https://github.com/BlinkDL/RWKV-LM)
  • The Secret Sauce behind 100K context window in LLMs: all tricks in one place
    3 projects | news.ycombinator.com | 17 Jun 2023
    I've been pondering the same thing, as simply extending the context window in a straightforward manner would lead to a significant increase in computational resources. I've had the opportunity to experiment with Anthropics' 100k model, and it's evident that they're employing some clever techniques to make it work, albeit with some imperfections. One interesting observation is that their prompt guide recommends placing instructions after the reference text when inputting lengthy text bodies. I noticed that the model often disregarded the instructions if placed beforehand. It's clear that the model doesn't allocate the same level of "attention" to all parts of the input across the entire context window.

    Moreover, the inability to cache transformers makes the use of large context windows quite costly, as all previous messages must be sent with each call. In this context, the RWKV-LM project on GitHub (https://github.com/BlinkDL/RWKV-LM) might offer a solution. They claim to achieve performance comparable to transformers using an RNN, which could potentially handle a 100-page document and cache it, thereby eliminating the need to process the entire document with each subsequent query. However, I suspect RWKV might fall short in handling complex tasks that require maintaining multiple variables in memory, such as mathematical computations, but it should suffice for many scenarios.
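
    The caching idea in that comment can be sketched with a toy recurrent model: because the entire "memory" is one fixed-size state, you can snapshot it after ingesting a long document once, then branch a cheap copy per query instead of resending the document each call. `TinyRNN` is a hypothetical stand-in for illustration, not RWKV's actual interface:

```python
import copy

class TinyRNN:
    """Toy stand-in for a recurrent LM: after reading a document,
    the whole memory is one fixed-size state that can be snapshotted."""
    def __init__(self):
        self.h = 0.0

    def feed(self, tokens):
        for tok in tokens:  # O(1) memory per step, any sequence length
            self.h = 0.9 * self.h + sum(map(ord, tok)) / 1000.0
        return self.h

model = TinyRNN()
model.feed(["page"] * 100_000)   # pay for the long document exactly once
cached = copy.deepcopy(model)    # snapshot the post-document state

# Each follow-up question resumes from the snapshot instead of
# re-processing the document — the cost a stateless transformer
# call pays on every query.
answers = []
for query in (["query", "one"], ["query", "two"]):
    m = copy.deepcopy(cached)
    answers.append(m.feed(query))
```

    The design choice is the same one the comment attributes to RWKV: a transformer's "state" is the whole KV history and grows with the context, while a recurrent model's state is constant-size and therefore cheap to store and fork.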

    On a related note, I believe Anthropics' Claude is somewhat underappreciated. In some instances, it outperforms GPT4, and I'd rank it somewhere between GPT4 and Bard overall.

  • Meta's plan to offer free commercial AI models puts pressure on Google, OpenAI
    1 project | news.ycombinator.com | 16 Jun 2023
    > The only reason open-source LLMs have a heartbeat is they’re standing on Meta’s weights.

    Not necessarily.

    RWKV, for example, is a different architecture that wasn't based on Facebook's weights whatsoever. I don't know where BlinkDL (the author) got the training data, but they seem to have done everything mostly independently otherwise.

    https://github.com/BlinkDL/RWKV-LM

    disclaimer: I've been doing a lot of work lately on an implementation of CPU inference for this model, so I'm obviously somewhat biased since this is the model I have the most experience in.

  • Eliezer Yudkowsky - open letter on AI
    1 project | /r/HPMOR | 15 Jun 2023
    I think the main concern is that, due to the resources put into LLM research for finding new ways to refine and improve them, that work can then be used by projects that do go the extra mile and create things that are more than just LLMs. For example, RWKV is similar to an LLM but actually updates its recurrent state after every processed token, thus letting it remember things longer-term without the use of 'context tokens'.

What are some alternatives?

When comparing cursor and RWKV-LM you can also consider the following projects:

codeium.nvim - A native neovim extension for Codeium

llama - Inference code for Llama models

copilot.lua - Fully featured & enhanced replacement for copilot.vim complete with API for interacting with Github Copilot

alpaca-lora - Instruct-tune LLaMA on consumer hardware

CodeGPT.nvim - CodeGPT is a plugin for neovim that provides commands to interact with ChatGPT.

flash-attention - Fast and memory-efficient exact attention

ai.vim - Generate and edit text in Neovim using OpenAI and GPT.

koboldcpp - A simple one-file way to run various GGML and GGUF models with KoboldAI's UI

chatgpt.nvim - Query ChatGPT in Neovim

gpt4all - gpt4all: run open-source LLMs anywhere

vim_codex - Supercharge your Vim editor with AI-powered code completion using OpenAI Codex. Boost productivity and save time with intelligent suggestions.

RWKV-CUDA - The CUDA version of the RWKV language model ( https://github.com/BlinkDL/RWKV-LM )