unix-permissions vs guardrails
| | unix-permissions | guardrails |
|---|---|---|
| Mentions | 0 | 13 |
| Stars | 127 | 3,147 |
| Growth | - | 12.2% |
| Activity | 7.9 | 9.9 |
| Latest commit | about 1 month ago | 3 days ago |
| Language | JavaScript | Python |
| License | Apache License 2.0 | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
unix-permissions
We haven't tracked posts mentioning unix-permissions yet.
Tracking mentions began in Dec 2020.
guardrails
- Is there a UI that can limit LLM tokens to a preset list?
- A minimal design pattern for LLM-powered microservices with FastAPI & LangChain
You're absolutely correct, and I agree that there's potentially a risk of quality loss. But likewise, since these are all intrinsically linked, it may be possible to leverage strength by combining these tasks. I'm unaware of a paper reviewing the reliability and/or performance of LLMs in this specific scenario. If you find any, do share :) With regards to generating JSON responses - there are simple ways to nudge the model and even validate it, using libraries such as https://github.com/promptslab/Promptify, https://github.com/eyurtsev/kor and https://github.com/ShreyaR/guardrails
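The "nudge and validate" pattern this comment describes can be sketched without committing to any of those libraries: parse the model's reply as JSON, check it against the expected shape, and on failure feed the error message back into a retry prompt. The function names and `EXPECTED_KEYS` schema below are illustrative, not any library's actual API:

```python
import json

EXPECTED_KEYS = {"name", "age"}  # illustrative schema, not from the source


def validate_reply(raw: str):
    """Parse a model reply as JSON and check for required keys.

    Returns (data, None) on success, or (None, error_message) so the
    caller can feed the error back into a retry prompt.
    """
    try:
        data = json.loads(raw)
    except json.JSONDecodeError as exc:
        return None, f"invalid JSON: {exc}"
    missing = EXPECTED_KEYS - data.keys()
    if missing:
        return None, f"missing keys: {sorted(missing)}"
    return data, None


def ask_until_valid(llm, prompt, max_tries=3):
    """Re-prompt the model, appending the validation error each round."""
    for _ in range(max_tries):
        reply = llm(prompt)
        data, err = validate_reply(reply)
        if err is None:
            return data
        prompt += f"\nYour last answer was rejected ({err}); reply with JSON only."
    raise ValueError("no valid JSON after retries")
```

Libraries like guardrails wrap this loop up with richer schemas and automatic re-asking, but the core idea is the same validate-then-retry cycle.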
- Ask HN: People who were laid off or quit recently, how are you doing?
- Ask HN: AI to study my DSL and then output it?
There are a couple different approaches:
- Use multi-shot prompting with something like guardrails to try prompting a commercial model until it works. [1]
- Use a local model with a final layer that steers token selection toward syntactically valid tokens [2]
[1] https://github.com/ShreyaR/guardrails
[2] "Structural Alignment: Modifying Transformers (like GPT) to Follow a JSON Schema" @ https://github.com/newhouseb/clownfish.
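The second approach — steering token selection at the final layer — boils down to masking the model's logits so that only grammar-valid tokens can be chosen at each step. This is a minimal illustrative sketch of that masking step, not clownfish's actual implementation; the logits dict and `allowed` set stand in for a real model head and JSON-schema state machine:

```python
import math


def mask_logits(logits, allowed):
    """Set every grammar-invalid token's score to -inf.

    `logits` maps token -> score; `allowed` is the set of tokens that
    would keep the output syntactically valid at this decoding step.
    """
    return {tok: (score if tok in allowed else -math.inf)
            for tok, score in logits.items()}


def pick_token(logits, allowed):
    """Greedy selection restricted to grammar-valid tokens."""
    masked = mask_logits(logits, allowed)
    return max(masked, key=masked.get)
```

For example, immediately after an opening `{` in JSON, a schema-driven grammar would only allow `"` (start of a key) or `}` (empty object), so even a model that strongly prefers some other token is forced to stay syntactically valid.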
- Introducing 🤖 Megabots - State-of-the-art, production-ready full-stack LLM apps made mega-easy with LangChain and FastAPI
👍 validate and correct the outputs of LLMs using guardrails
- [D] Is all the talk about what GPT can do on Twitter and Reddit exaggerated or fairly accurate?
not vouching for it, but I know this is at least a thing that exists and I like the general idea: https://github.com/shreyar/guardrails
- Introducing Agents in Haystack: Make LLMs resolve complex tasks
What are some alternatives?
rate-limiter-flexible - Atomic counters and rate limiting tools. Limit resource access at any scale.
snyk - Snyk CLI scans and monitors your projects for security vulnerabilities. [Moved to: https://github.com/snyk/cli]
Auto SNI - 🔐 Free, automated HTTPS for NodeJS made easy.
lmql - A language for constraint-guided and efficient LLM programming.
RegEx-DoS - 👮 👊 RegEx Denial of Service (ReDoS) Scanner
crypto-hash - Tiny hashing module that uses the native crypto API in Node.js and the browser
💀 SimpleDDoS - [UNMAINTAINED AND UNPUBLISHED] 💀 Multi-threaded DDoS script
GPTCache - Semantic cache for LLMs. Fully integrated with LangChain and llama_index.
nsp
jose-simple - Jose-Simple allows the encryption and decryption of data using the JOSE (JSON Object Signing and Encryption) standard.
credential-plus - 🔒Unified API for password hashing algorithms
is-website-vulnerable - finds publicly known security vulnerabilities in a website's frontend JavaScript libraries