playground vs tree-sitter
| | playground | tree-sitter |
|---|---|---|
| Mentions | 16 | 62 |
| Stars | 11,662 | 16,380 |
| Growth | 1.0% | 5.3% |
| Activity | 0.0 | 9.8 |
| Latest commit | 3 months ago | 3 days ago |
| Language | TypeScript | Rust |
| License | Apache License 2.0 | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
playground
-
Why do tree-based models still outperform deep learning on tabular data? (2022)
Not the parent, but NNs typically work better when you can't linearize your data. For classification, that means a space in which hyperplanes separate classes, and for regression a space in which a linear approximation is good.
For example, take the circle dataset here: https://playground.tensorflow.org
That doesn't look immediately linearly separable, but since it is 2D we have the insight that parameterizing by radius would do the trick. Now try doing that in 1000 dimensions. Sometimes you can, sometimes you can't or don't want to bother.
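A minimal sketch of that radius trick, assuming scikit-learn and its synthetic circles dataset (the dataset parameters here are illustrative, not from the linked playground): a linear classifier sits at chance on the raw (x, y) coordinates but separates the rings once a radius feature is added.

```python
# The "parameterize by radius" idea: a linear model can't separate
# concentric circles in raw (x, y), but adding r = sqrt(x^2 + y^2)
# as a feature makes the classes linearly separable.
import numpy as np
from sklearn.datasets import make_circles
from sklearn.linear_model import LogisticRegression

X, y = make_circles(n_samples=500, factor=0.3, noise=0.05, random_state=0)

# Raw 2D coordinates: no hyperplane separates the two rings.
raw = LogisticRegression().fit(X, y)
print("raw (x, y) accuracy:", raw.score(X, y))      # ~0.5, chance level

# Add the radius as a third feature: a hyperplane (a threshold on r) now works.
r = np.sqrt((X ** 2).sum(axis=1, keepdims=True))
X_aug = np.hstack([X, r])
aug = LogisticRegression().fit(X_aug, y)
print("with radius feature:", aug.score(X_aug, y))  # ~1.0
```

The point is not the exact numbers but that the same linear model succeeds once the data is mapped into a space where a hyperplane works.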
-
Introduction to TensorFlow for Deep Learning
For visualisation and some fun: http://playground.tensorflow.org/
- TensorFlow Playground – Tinker with a NN in the Browser
- Visualization of Common Algorithms
-
Stanford A.I. Courses
There’s an interactive neural network you can train here, which can give some intuition on wider vs. deeper networks:
https://mlu-explain.github.io/neural-networks/
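To make the wider-vs-deeper intuition concrete, here is a hypothetical sketch using scikit-learn's MLPClassifier on a toy dataset; the layer sizes are arbitrary choices, not taken from the course.

```python
# Rough width-vs-depth comparison on a toy dataset. The specific layer
# sizes are arbitrary; the point is that networks of similar size but
# different shape can behave differently on the same data.
from sklearn.datasets import make_moons
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = make_moons(n_samples=1000, noise=0.25, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

for name, layers in [("wide", (64,)), ("deep", (16, 16, 16))]:
    clf = MLPClassifier(hidden_layer_sizes=layers, max_iter=2000,
                        random_state=0).fit(X_tr, y_tr)
    print(f"{name} {layers}: test accuracy = {clf.score(X_te, y_te):.3f}")
```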
-
Let's revolutionize the CPU together!
This site is worth playing around with to get a feel for neural networks, and to some extent for ML in general. There are lots of strategies for statistical learning, and neural nets are only one of them, but they essentially all boil down to figuring out how to build a “classifier” that assigns data points to whatever category they best belong in.
-
Curious about Inputs for neural network
I don’t know how much experimenting you’ve done, but many repeated small-scale experiments might at least give you better intuition. I highly recommend this online tool for playing with the different variables, even if you’re comfortable coding up your own experiments: http://playground.tensorflow.org
-
Intel Announces Aurora genAI, Generative AI Model With 1 Trillion Parameters
Even if you can’t code, play around with this tool: https://playground.tensorflow.org — you can adjust the shape of the NN and watch how well it classifies the data. Model size obviously matters.
-
Where have all the hackers gone?
I don't think so. You can easily play around in the browser, using JavaScript, or on https://processing.org/, https://playground.tensorflow.org/, https://scratch.mit.edu/, etc.
If anything, the problem is that today's kids have too many options. And sure, some are commercial.
-
[Discussion] Questions about linear regression, polynomial features and multilayer NN.
Well, there is no point in using a multilayer linear neural network, because a cascade of linear transformations reduces to a single linear transformation, so you can only approximate linear functions. However, if you have prior knowledge about the non-linearity of your data (say, you know it is a linear combination of polynomials up to a certain degree), you can expand your input space by explicitly applying a non-linear transformation.

For instance, a 1D linear regression can be modeled by 2 input neurons and 1 output neuron whose activation is the identity. The input neuron x0 takes a constant input of 1, and the second input neuron x1 takes your data x. The output neuron computes y = w_0 * 1 + w_1 * x, which is just y = w_0 + w_1 * x. If your data follows a polynomial form, the idea is to add input neurons and expand your input to, say, X = [1, x, x^2]; you then have 3 input neurons, where the third is an explicit non-linear function of the input, so y = w_0 + w_1 * x + w_2 * x^2.

The general idea is to find a space where the problem becomes linear. In real-life examples these spaces are non-trivial; the power of neural networks is that they can find such a space by optimization, without the non-linearities being encoded explicitly. Try playing around with https://playground.tensorflow.org/ to get an intuition about your question.
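A minimal sketch of that expansion trick, assuming numpy and quadratic toy data (the coefficients and variable names are illustrative): a purely linear solver recovers the polynomial once the input is expanded to [1, x, x^2].

```python
# Fit y = w0 + w1*x + w2*x^2 with a purely linear solver by expanding
# the input to the design matrix [1, x, x^2], as described above.
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(-2, 2, size=200)
y = 1.0 - 3.0 * x + 0.5 * x**2 + rng.normal(scale=0.1, size=x.shape)

X = np.column_stack([np.ones_like(x), x, x**2])   # the expanded input
w, *_ = np.linalg.lstsq(X, y, rcond=None)
print("recovered weights:", w)   # ~ [1.0, -3.0, 0.5]
```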
tree-sitter
-
Lezer: A Parsing System for CodeMirror, Inspired by Tree-Sitter
I learned from a Google search that these days upstream tree-sitter provides WebAssembly bindings.
Source: https://github.com/tree-sitter/tree-sitter/tree/master/lib/b...
NPM: https://www.npmjs.com/package/web-tree-sitter
Download from the latest GitHub release: the JS file (https://github.com/tree-sitter/tree-sitter/releases/download...) and the Wasm file (https://github.com/tree-sitter/tree-sitter/releases/download...)
-
Difftastic, a structural diff tool that understands syntax
Tree-sitter optimizes for performance (for use in editors), not for correctness. In fact, even TS's core developers advocate not worrying too much about the correctness of grammars[1]. I imagine this constraint would be a deal-breaker for GitHub or anyone else in their position.
[1] https://github.com/tree-sitter/tree-sitter/issues/130#issuec...
-
Effective Neovim Setup. A Beginner’s Guide
This is a plugin that provides a simple way to use tree-sitter in Neovim, and also provides functionality such as highlighting.
- An incremental parsing system for programming tools
-
Topiary: A code formatting engine leveraging Tree-sitter
From the tree-sitter side, I am tracking https://github.com/tree-sitter/tree-sitter/issues/1942
-
Shiki Syntax Highlighter
Is tree-sitter really slower than TextMate grammars? Some benchmarks indicate that this isn't really the case [1]. On the other hand, broken parse trees are a real issue, because the error recovery in tree-sitter is pretty rudimentary [2][3], but as you said, it's not an issue for Shiki.
Several TextMate grammars suffer from inaccuracy bugs and maintainability issues. Perhaps the biggest hindrance to the adoption of tree-sitter is that the most popular editor, VS Code, still doesn't support it.
[1]: https://github.com/microsoft/vscode/pull/161479
-
It seems that some BIG improvements of Treesitter on BIG FILEs have been merged into Nightly! (minutes ago!)
u/lewis6991 I think the biggest performance gain was made by tree-sitter itself: https://github.com/tree-sitter/tree-sitter/pull/2085
-
Looking for Tree-sitter query documentations and guides
I asked on the repo's discussions, but responses are limited and not explanatory (I'm not shaming anyone here; discussions aren't the place for detailed how-tos and documentation anyway).
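Not documentation, but for anyone landing here with the same question, here is a small sketch of the query syntax via the py-tree-sitter bindings. The package names and the capture name are illustrative, and the exact shape returned by captures() varies between py-tree-sitter versions, so treat this as a hedged example rather than a reference.

```python
# Minimal tree-sitter query example using the Python bindings
# (assumes `pip install tree-sitter tree-sitter-python`).
import tree_sitter_python
from tree_sitter import Language, Parser

PY_LANGUAGE = Language(tree_sitter_python.language())
parser = Parser(PY_LANGUAGE)
tree = parser.parse(b"def greet(name):\n    return 'hi ' + name\n")

# Queries are S-expressions: a node pattern, with parts tagged as @captures.
# The capture name @func.name is an arbitrary label chosen here.
query = PY_LANGUAGE.query("(function_definition name: (identifier) @func.name)")
for capture_name, nodes in query.captures(tree.root_node).items():
    for node in nodes:
        print(capture_name, "->", node.text.decode())
```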
-
Will Treesitter ever be stable on big files?
See the discussion here. TS queries cannot be incremental, which is why I regard it as a design fault.
-
Detailed syntax highlighting
Hi, so I've recently decided to give Neovim yet another try, this time using some predefined plugins with kickstart.nvim; for syntax highlighting it uses tree-sitter.
What are some alternatives?
clip-interrogator - Image to prompt with BLIP and CLIP
nvim-treesitter - Nvim Treesitter configurations and abstraction layer
Visual Studio Code - Visual Studio Code
dspy - DSPy: The framework for programming—not prompting—foundation models
indent-blankline.nvim - Indent guides for Neovim
pyllama - LLaMA: Open and Efficient Foundation Language Models
doom-emacs - An Emacs framework for the stubborn martian hacker [Moved to: https://github.com/doomemacs/doomemacs]
lake.nvim - A simplified ocean color scheme with treesitter support
language-server-protocol - Defines a common protocol for language servers.
developer - the first library to let you embed a developer agent in your own app!
coc-explorer - 📁 Explorer for coc.nvim