temporal-large-payload-codec

HTTP service and accompanying Temporal Payload Codec which allows Temporal clients to automatically persist large payloads outside of workflow histories. (by DataDog)


temporal-large-payload-codec reviews and mentions

Posts with mentions or reviews of temporal-large-payload-codec. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2024-05-09.
  • Temporal Python – A Durable, Distributed Asyncio Event Loop
    2 projects | news.ycombinator.com | 9 May 2024
    We migrated from an in-house redis queuing system.

    Temporal has its own way of doing things; there are rules about what you can and can't do in workflows, what has to live in activities, etc. It's generally quite easy to adapt existing code to work with it. We use TypeScript.

    The worst part for us has been error/anomaly handling. Workflows can sometimes hit a state where the status reads "In Progress" and errors aren't reported anywhere except buried in the event log. That surfaces fine in the UI, but we still haven't figured out how to programmatically respond to this condition.

    A good example: we use a home-grown version of this [1] to proxy large payloads to S3. However, if those payloads get REALLY large, they can take some time to upload and download, and if that "some time" is longer than 5 seconds, the control plane will believe the worker has died, it won't reschedule, and the workflow just sits in "In Progress". There's always a beautiful error on the Temporal dashboard, and we can manually terminate/retry, but the world just seems to die when this happens, and we can't do error-level cleanup like alerting the user that the thing they were doing didn't finish.

    Temporal is also challenging to get support for. It's new and open source, we don't pay for Temporal Cloud, and there's not a ton of resources or people using it. The documentation is quite bad (if you like 500,000-word pages, codegen'd library sites with no comments, and one example for each feature, you'll like their documentation). Since we run our own Temporal cluster, we've also had pretty large challenges in the self-hosting world. We work through them, usually after deep-diving into the Temporal server code itself, but there's startlingly little documentation on self-hosting, and even less community support.

    Overall, we don't regret adopting it, but if we had a time machine we wouldn't do it again. I feel it makes a series of sacrifices in order to create a system that has extremely high standards for processing, like financial/bank/healthcare-level stuff. Not only are we not building that, but the system has never behaved in a way that makes me think I'd even want to use it if I worked in those industries. Obviously I feel like I'm the one in the wrong here, and I'm sure it's just a matter of "we screwed up something somewhere", but that leads back to: bad documentation, no way to get professional support without being on their cloud, and a lack of community support.

    [1] https://github.com/DataDog/temporal-large-payload-codec

Stats

Basic temporal-large-payload-codec repo stats
  • Mentions: 1
  • Stars: 26
  • Activity: 3.5
  • Last commit: 4 months ago

DataDog/temporal-large-payload-codec is an open source project licensed under the MIT License, which is an OSI-approved license.

The primary programming language of temporal-large-payload-codec is Go.

