|  | Propan | jsonformer |
|---|---|---|
| Mentions | 16 | 25 |
| Stars | 466 | 3,793 |
| Growth | - | - |
| Activity | 8.8 | 5.4 |
| Latest commit | about 1 month ago | 2 months ago |
| Language | Python | Jupyter Notebook |
| License | MIT License | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
Propan
- FastStream: Python's framework for Efficient Message Queue Handling
Later, we discovered Propan, a library created by Nikita Pastukhov, which solved similar problems but for RabbitMQ. Recognizing the potential for collaboration, we joined forces with Nikita to build a unified library that could work seamlessly with both Kafka and RabbitMQ. And that's how FastStream came to be—a solution born out of the need for simplicity and efficiency in microservices development.
- How we deprecated two successful projects and joined forces to create an even more successful one
The next step was to figure out what to do next. We posted questions on a few relevant subreddits and got quite a few feature requests, mostly around supporting other protocols, encoding schemas, etc. But we also got a message from the developer of a similar framework, Propan, which was released at about the same time and was gaining a lot of traction in the RabbitMQ community. That developer was Nikita Pastukhov, and he made an intriguing proposal: let's join our efforts and create one framework with the best features of both.

Both projects were growing at roughly the same speed but targeted different communities, so the potential for double growth was there. After a quick consideration, we realized there was not much to lose and a lot to gain. Of course, we would lose absolute control over the project, but losing control to the community is the only way for an open-source project to succeed. On the positive side, we would gain a very skilled maintainer who had single-handedly created a similar framework. The frameworks were conceptually very similar, so we concluded there would not be much friction of ideas and we should be able to reach consensus on the most important design issues.
- Introducing FastStream: the easiest way to write microservices for Apache Kafka and RabbitMQ in Python
FastStream simplifies the process of writing producers and consumers for message queues, handling all the parsing, networking, and documentation generation automatically. It is a new package based on the ideas and experience gained from FastKafka and Propan. By joining forces, we took the best from both packages and created a unified way to write services capable of processing streamed data regardless of the underlying protocol. We'll continue to maintain both packages, but new development will happen in this project.
- FastStream: the easiest way to add Kafka and RabbitMQ support to FastAPI services
FastStream (https://github.com/airtai/faststream) is a new Python framework born from the collaboration of the Propan and FastKafka teams (both projects are now deprecated). It dramatically simplifies event-driven system development, handling all the parsing, networking, and documentation generation automatically. FastStream currently supports RabbitMQ and Kafka, but the list of supported brokers is constantly growing (NATS and Redis are coming soon). FastStream itself is a great tool for building event-driven services, and it also has a native FastAPI integration. Just create a StreamRouter (very close to APIRouter) and register event handlers the same way you would regular HTTP endpoints:
- Propan – Python Framework for building messaging services has a big update
Hello everyone!
Two months ago I told you about Propan, a Python framework for building messaging services on top of any message broker. A lot has changed since then, and I'd like to tell you about the updates.
First, we added Kafka, Redis Pub/Sub, SQS, and NATS JetStream support (in addition to RabbitMQ and regular NATS). Now you can interact with all of these brokers through the same Propan interfaces.
We also added AsyncAPI schema autogeneration, so if you are using Propan, you already have documentation for your services.
And last (but not least): Pydantic v2 support! You can use both v1 and v2, but v2 is much faster, so it is the preferred way to write new services.
By the way, we have a draft of a new Propan major version, so if you want to participate in the discussion and suggest a new feature, now is the time to join our Discord and tell us about it!
Propan: https://github.com/Lancetnik/Propan
- Looking for Python contributors to a new Messaging Framework
- Help wanted: support for PR
It is also important for my own Propan package, which implements some custom routers.
- FLaNK Stack Weekly 29 may 2023
- Propan is the best way to interact with SQS from Python
As you may know, I am developing the Propan framework to interact with various message brokers in a single, unified way. When I published a post about the framework, users immediately asked, "When can we expect SQS support?" The answer: now!
- Propan 0.1.2 – a new way to interact with Kafka from Python
A couple of days ago I wrote about the release of my framework for working with various message brokers - Propan!
jsonformer
- Forcing AI to Follow a Specific Answer Pattern Using GBNF Grammar
- Refact LLM: New 1.6B code model reaches 32% HumanEval and is SOTA for the size
- Tools like jsonformer https://github.com/1rgs/jsonformer are not possible with OpenAI's API.
- Show HN: LLMs can generate valid JSON 100% of the time
How does this compare in terms of latency, cost, and effectiveness to jsonformer? https://github.com/1rgs/jsonformer
- Ask HN: Explain how size of input changes ChatGPT performance
You're correct in your interpretation of how the model works with respect to returning tokens one at a time. The model returns one token, and the entire context window gets shifted by one to account for it when generating the next one.
As for model performance at different context sizes, it seems a bit complicated. From what I understand, even if models are tweaked (for example, using the SuperHOT RoPE hack or sparse attention) to be able to use longer contexts, they still have to be fine-tuned on input of this increased length to actually utilize it, and performance seems to degrade regardless as input length increases.
For your question about fine-tuning models to respond with only "yes" or "no", I recommend looking into how the jsonformer library works: https://github.com/1rgs/jsonformer . Essentially, you still let the model score many candidate tokens for the next position, but only accept the ones that satisfy certain criteria (such as the token for "yes" and the token for "no").
You can do this with the OpenAI API too, using tiktoken: https://twitter.com/AAAzzam/status/1669753722828730378?t=d_W... . Be careful, though, as results will differ depending on which tokens you select, since "YES", "Yes", "yes", etc. are all different tokens, to the best of my knowledge.
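The accept-only-allowed-tokens idea described above can be sketched without any LLM at all. Here the `logits` dict is a made-up stand-in for the scores a model assigns to its vocabulary at one decoding step; constrained decoding simply masks out every token outside the allowed set before picking the winner:

```python
def constrained_next_token(logits: dict[str, float], allowed: set[str]) -> str:
    """Pick the highest-scoring token, considering only the allowed set."""
    candidates = {tok: score for tok, score in logits.items() if tok in allowed}
    return max(candidates, key=candidates.get)

# Toy scores the "model" assigned to its vocabulary for the next position
logits = {"yes": 1.2, "no": 2.7, "maybe": 5.0, "the": 4.1}

# Unconstrained decoding would pick "maybe"; constrained to yes/no it picks "no"
print(constrained_next_token(logits, {"yes", "no"}))  # -> no
```

This is the same mechanism regardless of scale: the model still produces a full score distribution, and the constraint is applied purely at selection time.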
- A framework to securely use LLMs in companies – Part 1: Overview of Risks
- LLMs for Schema Augmentation
From here, we just need to continue generating tokens until we get to a closing quote. This approach was borrowed from Jsonformer which uses a similar approach to induce LLMs to generate structured output. Continuing to do so for each property using Replit's code LLM gives the following output:
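That stop-at-the-closing-quote loop can be sketched with a stand-in for the decoder. Here `next_token` is just an iterator over pre-baked tokens rather than a real model call; the names and tokens are illustrative assumptions:

```python
def generate_string_value(next_token) -> str:
    """Accumulate tokens for a JSON string value, stopping at the closing quote.

    `next_token` stands in for one-token-at-a-time LLM decoding; here it is
    simply an iterator over pre-baked tokens for illustration.
    """
    parts = []
    for token in next_token:
        if '"' in token:  # closing quote reached: keep only the text before it
            parts.append(token.split('"', 1)[0])
            break
        parts.append(token)
    return "".join(parts)

# Pretend the model emits these tokens for the value of a "city" property
tokens = iter(["San", " Fran", "cisco", '",'])
print(generate_string_value(tokens))  # -> San Francisco
```

Numbers, booleans, and punctuation can be handled the same way by changing the stopping condition, which is essentially what schema-guided generators do property by property.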
- Doesn't a 4090 massively overpower a 3090 for running local LLMs?
https://github.com/1rgs/jsonformer or https://github.com/microsoft/guidance may help get better results, but I ended up with a bit more of a custom solution.
- "Sam altman won't tell you that GPT-4 has 220B parameters and is 16-way mixture model with 8 sets of weights"
I think function calling is just JSONformer idk: https://github.com/1rgs/jsonformer
- Inference Speed vs. Quality Hacks?
- Best bet for parseable output?
jsonformer: https://github.com/1rgs/jsonformer
What are some alternatives?
DB-GPT - AI Native Data App Development framework with AWEL(Agentic Workflow Expression Language) and Agents
mlc-llm - Enable everyone to develop, optimize and deploy AI models natively on everyone's devices.
faststream - FastStream is a powerful and easy-to-use Python framework for building asynchronous services interacting with event streams such as Apache Kafka, RabbitMQ, NATS and Redis.
aider - aider is AI pair programming in your terminal
kafka-native - Kafka broker compiled to native using Quarkus and GraalVM.
clownfish - Constrained Decoding for LLMs against JSON Schema
fastgron - High-performance JSON to GRON (greppable, flattened JSON) converter
outlines - Structured Text Generation
bunny-storm - RabbitMQ asynchronous connector library for Python with built in RPC support
gpt-json - Structured and typehinted GPT responses in Python
FastDepends - FastDepends - FastAPI Dependency Injection system extracted from FastAPI and cleared of all HTTP logic. Async and sync modes are both supported.
jikkou - The Open source Resource as Code framework for Apache Kafka