| | grammars-v4 | guidance |
|---|---|---|
| Mentions | 29 | 89 |
| Stars | 9,803 | 12,248 |
| Growth | 0.8% | - |
| Activity | 9.6 | 9.5 |
| Last commit | 3 days ago | 9 months ago |
| Language | ANTLR | Jupyter Notebook |
| License | MIT License | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
grammars-v4
- Addition and subtraction operators
-
Visual Basic for Applications Language Specification [pdf]
Perhaps the one from ANTLR's collection [0] is a good start (there are also other ANTLR VB6 grammars documented elsewhere). It does require knowing ANTLR, but that should be less effort for someone already familiar with language implementation, particularly the visitor pattern (my favorite reference: [1]); there's a small sketch after the links.
[0] https://github.com/antlr/grammars-v4/tree/master/vb6
[1] https://craftinginterpreters.com/representing-code.html
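As a rough sketch of what the visitor side looks like with ANTLR's Python runtime: the generated class names, the `module` start rule, and the `visitSubStmt` method below are assumptions based on the vb6 grammar (after something like `antlr4 -Dlanguage=Python3 -visitor VisualBasic6.g4`), not verified against it.

```python
# Sketch: counting Sub definitions in a VB6 file with an ANTLR visitor.
# Generated class/rule names are assumptions; check your generated code.
from antlr4 import CommonTokenStream, FileStream
from VisualBasic6Lexer import VisualBasic6Lexer
from VisualBasic6Parser import VisualBasic6Parser
from VisualBasic6Visitor import VisualBasic6Visitor

class SubCounter(VisualBasic6Visitor):
    def __init__(self):
        self.count = 0
    def visitSubStmt(self, ctx):  # assumed rule name for Sub blocks
        self.count += 1
        return self.visitChildren(ctx)

tokens = CommonTokenStream(VisualBasic6Lexer(FileStream("Form1.frm")))
parser = VisualBasic6Parser(tokens)
tree = parser.module()  # assumed start rule
counter = SubCounter()
counter.visit(tree)
print("Subs found:", counter.count)
```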
-
Postgres Language Server: Implementing the Parser
Where is the SQLite test suite, please? I'd be very interested.
There are already SQL grammars; check https://github.com/antlr/grammars-v4, specifically https://github.com/antlr/grammars-v4/tree/master/sql I think. I contributed to one of them, and I wrote my own for some personal work. Be warned: it's very involved, very complex, and MSSQL is rather ill-defined.
Names (bracket identifiers) in SQL are bloody awful. Sometimes square brackets are even compulsory, and while you can usually replace [...] with the SQL-standard "..." quoting, not always! Trust me, it gets worse.
I don't find ANTLR grammars brittle, and while they can lose out on performance (by how much I don't know, perhaps quite considerably), they are very easy to maintain, and I am very fortunate to have ANTLR to work with.
-
Llama: Add Grammar-Based Sampling
This grammar "library" was cited as an example of what the format could look like:
https://github.com/antlr/grammars-v4
There is everything from assembly and C++ to GLSL and scripting languages, plus arithmetic, games, and other weird formats.
-
Structured Output from LLMs (Without Reprompting!)
> Which brings me to the other approach: steering the LLM's output __as it is generating tokens__
A relevant PR:
https://github.com/ggerganov/llama.cpp/pull/1773
The plan is to support arbitrary grammar files to constrain tokens as they are generated, like the ones here:
https://github.com/antlr/grammars-v4
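The grammar format from that PR (GBNF) is also exposed through llama-cpp-python; here is a minimal sketch, where the model path and the toy grammar are placeholders of mine, not from the thread:

```python
# Sketch: grammar-constrained sampling with llama-cpp-python.
# The model path and the tiny GBNF grammar are illustrative placeholders.
from llama_cpp import Llama, LlamaGrammar

# GBNF grammar forcing the output to be exactly "yes" or "no".
grammar = LlamaGrammar.from_string('root ::= "yes" | "no"')

llm = Llama(model_path="model.gguf")
out = llm("Is JSON a context-free language? Answer yes or no: ",
          grammar=grammar, max_tokens=4)
print(out["choices"][0]["text"])  # constrained to "yes" or "no"
```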
-
SQL-Parsing
Have a look at jOOQ - I know it has been used to rewrite SQL from one dialect to another, so it must have a full SQL parser under the hood. Look here. Otherwise, you might want to look into writing your own parser. ANTLR has a T-SQL grammar here.
-
How should I prepare for AI-driven changes in the industry as a Software Engineering Manager
Find a Perl grammar file for ANTLR, like https://github.com/antlr/grammars-v4/tree/master/perl. Save the grammar file as Perl.g4 in your project. Now you can create the Kotlin program:

```kotlin
import org.antlr.v4.runtime.*
import org.antlr.v4.runtime.tree.ParseTree
import java.io.File
```
- Can you create a cpp file in a program like you could a txt file?
-
DELD: An experimental HTTP-Client
ANTLR is another option. You could generate a parser using the JSON ANTLR grammar; a sketch follows.
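A minimal sketch of that with ANTLR's Python target, assuming the JSON grammar from grammars-v4 and a run of `antlr4 -Dlanguage=Python3 JSON.g4` (generated names follow ANTLR's usual convention):

```python
# Sketch: parsing a string with the ANTLR-generated JSON parser.
from antlr4 import CommonTokenStream, InputStream
from JSONLexer import JSONLexer
from JSONParser import JSONParser

tokens = CommonTokenStream(JSONLexer(InputStream('{"verb": "GET"}')))
parser = JSONParser(tokens)
tree = parser.json()  # `json` is the grammar's start rule
print(tree.toStringTree(recog=parser))  # parse tree, LISP-style
```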
- Are there any resources available to convert code from Basic to C++? I need to do this for an assignment; anything will be helpful.
guidance
-
Guidance: A guidance language for controlling large language models
This IS Microsoft Guidance; they seem to have spun off a separate GitHub organization for it.
https://github.com/microsoft/guidance redirects to https://github.com/guidance-ai/guidance now.
- LangChain Agent Simulation – Multi-Player Dungeons and Dragons
-
Llama: Add Grammar-Based Sampling
... and it sets the value of "armor" to "leather" so that you can use that value later in your code if you wish to. Guidance is pretty powerful, but I find the grammar hard to work with. I think the idea of being able to upload a bit of code or a context-free grammar to guide the model is super smart.
https://github.com/microsoft/guidance/blob/d2c5e3cbb730e337b...
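For context, the "armor"/"leather" bit refers to the kind of template in the old guidance README; here is a condensed sketch of that pre-0.1 handlebars-style API (the model choice and exact template are my assumptions):

```python
# Sketch of the old (pre-0.1) guidance handlebars-style API.
# The model and the exact template are illustrative assumptions.
import guidance

guidance.llm = guidance.llms.Transformers("gpt2")

program = guidance('''A character profile in JSON:
{
    "name": "{{gen 'name' stop='"'}}",
    "armor": "{{#select 'armor'}}leather{{or}}chainmail{{or}}plate{{/select}}"
}''')

result = program()
print(result["armor"])  # e.g. "leather", reusable later in your code
```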
-
Introducing TypeChat from Microsoft
Here's one thing I don't get.
Why all the rigamarole of hoping you get a valid response, adding last-mile validators to detect invalid responses, trying to beg the model to pretty please give me the syntax I'm asking for...
...when you can guarantee a valid JSON syntax by only sampling tokens that are valid? Instead of greedily picking the highest-scoring token every time, you select the highest-scoring token that conforms to the requested format.
This is what Guidance does already, also from Microsoft: https://github.com/microsoft/guidance
But OpenAI apparently does not expose the full scores of all tokens, it only exposes the highest-scoring token. Which is so odd, because if you run models locally, using Guidance is trivial, and you can guarantee your json is correct every time. It's faster to generate, too!
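The change being described is tiny in pseudocode terms; here is a toy sketch (the score table and validity check are stand-ins of mine, not any real API):

```python
# Toy sketch of constrained greedy decoding: pick the best-scoring
# token that keeps the output a valid prefix of the target format.
def constrained_greedy_step(scores, output, is_valid_prefix):
    # Walk candidates from highest to lowest score...
    for token in sorted(scores, key=scores.get, reverse=True):
        # ...and keep the first one that doesn't break the format.
        if is_valid_prefix(output + token):
            return token
    raise ValueError("no token keeps the output valid")

# With a toy "must look like JSON" check, the top-scoring token
# 'hello' is rejected and '{"' wins instead.
print(constrained_greedy_step(
    {"hello": 0.95, '{"': 0.90, "}": 0.10},
    output="",
    is_valid_prefix=lambda s: s.startswith("{"),
))  # -> '{"'
```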
-
Accessing Llama 2 from the command-line with the LLM-replicate plugin
Perhaps something as simple as stating it was first built around OpenAI models and later expanded to local via plugins?
I've been meaning to ask you, have you seen/used MS Guidance[0] 'language' at all? I don't know if it's the right abstraction to interface as a plugin with what you've got in llm cli but there's a lot about Guidance that seems incredibly useful to local inference [token healing and acceleration especially].
[0] https://github.com/microsoft/guidance
-
AutoChain, lightweight and testable alternative to LangChain
LangChain is just too much. Personal solutions are great until you need to compare metrics or methodologies of prompt generation; then the onus is on the n parties sharing their resources to ensure that all of them used the same templates, generated the same way, with the only diff being the models the prompts were run on.
So maybe a simpler library like Microsoft's Guidance (https://github.com/microsoft/guidance)? It does this really well.
-
Structured Output from LLMs (Without Reprompting!)
I am unclear on the status of the project, but here is the conversation that seems to be tracking it: https://github.com/microsoft/guidance/discussions/201
-
/r/guidance is now a subreddit for Guidance, Microsoft's template language for controlling language models!
Let's have a subreddit about Guidance!
- Is there a UI that can limit LLM tokens to a preset list?
-
Any suggestions for an open source model for parsing real estate listings?
You should look at guidance for an LLM to fill out a template. Define the output data structure and provide the real estate listing in the context (see the JSON template example here: https://github.com/microsoft/guidance). A rough sketch follows.
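Roughly what that template-filling looks like with the old pre-0.1 guidance syntax; the field names and the `pattern` regex constraints below are my assumptions, not from the comment:

```python
# Sketch: extracting listing fields into fixed JSON slots with guidance
# (old handlebars-style API; fields and patterns are assumptions).
import guidance

guidance.llm = guidance.llms.Transformers("gpt2")

extract = guidance('''Listing: {{listing}}
Extracted as JSON:
{
    "address": "{{gen 'address' stop='"'}}",
    "price_usd": {{gen 'price' pattern='[0-9]+'}},
    "bedrooms": {{gen 'bedrooms' pattern='[0-9]+'}}
}''')

result = extract(listing="Sunny 2BR at 41 Elm St, $450,000.")
print(result["price"], result["bedrooms"])
```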
What are some alternatives?
ANTLR - ANTLR (ANother Tool for Language Recognition) is a powerful parser generator for reading, processing, executing, or translating structured text or binary files.
semantic-kernel - Integrate cutting-edge LLM technology quickly and easily into your apps
tree-sitter-sql - SQL grammar for tree-sitter
lmql - A language for constraint-guided and efficient LLM programming.
lezer-snowsql
langchain - ⚡ Building applications with LLMs through composability ⚡ [Moved to: https://github.com/langchain-ai/langchain]
rewrite - Automated mass refactoring of source code.
NeMo-Guardrails - NeMo Guardrails is an open-source toolkit for easily adding programmable guardrails to LLM-based conversational systems.
tree-sitter-sql - SQL syntax highlighting for tree-sitter
llama-cpp-python - Python bindings for llama.cpp
go-mysql-server - A MySQL-compatible relational database with a storage agnostic query engine. Implemented in pure Go.
langchainrb - Build LLM-powered applications in Ruby