Secai Alternatives
Similar projects and alternatives to secai
- 12-factor-agents: What are the principles we can use to build LLM-powered software that is actually good enough to put in the hands of production customers?
- mochi: Mochi is a small, fast, embeddable programming language designed for agents, data, and AI. It combines functional syntax, stream-first semantics, and native support for datasets, graphs, and simulation.
- agents.erl: Agents are distributed systems, and in this repository they are treated as such. [email protected] for projects / employment opportunities.
secai reviews and mentions
- We hit a wall testing AI agents, agents simulations works better
Your question is "how to test opaque, nondeterministic databases". I test my agents deterministically, because I use inversion of control (IoC). Check out this code [0] and follow the usage. In the remaining cases, assert with embeddings. Good luck.
[0] https://github.com/pancsta/secai/blob/74d79ad449c0f60a57b600...
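For context, here is a minimal sketch of the IoC idea referenced in that comment, assuming the agent receives its LLM client as an injected interface: a fake client makes the test fully deterministic, while comparing embeddings would be the fallback against a real model. The `LLM`, `Agent`, and `fakeLLM` names are hypothetical illustrations, not secai's actual API (the linked code shows the real thing), and the test function would live in a `_test.go` file.

```go
package agent

import (
	"context"
	"strings"
	"testing"
)

// LLM is a hypothetical interface the agent depends on; injecting it
// (inversion of control) lets tests swap the real model for a fake one.
type LLM interface {
	Complete(ctx context.Context, prompt string) (string, error)
}

// Agent is a toy agent that delegates text generation to the injected LLM.
type Agent struct {
	llm LLM
}

func (a *Agent) Summarize(ctx context.Context, doc string) (string, error) {
	return a.llm.Complete(ctx, "Summarize: "+doc)
}

// fakeLLM returns a canned response, making the test deterministic.
type fakeLLM struct{ reply string }

func (f fakeLLM) Complete(ctx context.Context, prompt string) (string, error) {
	return f.reply, nil
}

func TestSummarizeDeterministic(t *testing.T) {
	a := &Agent{llm: fakeLLM{reply: "secai is a Go agent framework"}}

	out, err := a.Summarize(context.Background(), "some long document")
	if err != nil {
		t.Fatal(err)
	}
	// Exact string assertions are possible because the LLM is faked;
	// against a real model you would compare embeddings instead.
	if !strings.Contains(out, "agent framework") {
		t.Fatalf("unexpected summary: %q", out)
	}
}
```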
- Show HN: Agents.erl (AI Agents in Erlang)
It's nice to see that BEAM is still alive. If you're into actor-model / state-machine agents, I can recommend secai, which is written in Go [0]. It does have a form of goroutine cancellation. Do you happen to have any screenshots of your dev flow on BEAM? How do you debug?
[0] https://github.com/pancsta/secai
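The "form of goroutine cancellation" mentioned above isn't spelled out in the comment; as a rough, hypothetical illustration of the standard context-based pattern such cancellation typically builds on (the step names here are made up, not secai's):

```go
package main

import (
	"context"
	"fmt"
	"time"
)

// runStep stands in for an agent step; it stops as soon as its
// context is cancelled instead of running to completion.
func runStep(ctx context.Context, name string) {
	for i := 0; ; i++ {
		select {
		case <-ctx.Done():
			fmt.Println(name, "cancelled:", ctx.Err())
			return
		case <-time.After(200 * time.Millisecond):
			fmt.Println(name, "tick", i)
		}
	}
}

func main() {
	ctx, cancel := context.WithCancel(context.Background())

	go runStep(ctx, "scrape")
	go runStep(ctx, "summarize")

	// Cancel both goroutines after half a second, e.g. when a state
	// change makes their work obsolete.
	time.Sleep(500 * time.Millisecond)
	cancel()
	time.Sleep(100 * time.Millisecond)
}
```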
- 12-factor Agents: Patterns of reliable LLM applications
Thanks. The terminal UI is an important design choice: it's fast, cheap, and runs everywhere (e.g. on the web via WASM or SSH, or on iPhones with touch). The LLM layer is still fresh, and I personally use it for web scraping, but the underlying workflow engine is quite mature and has already been used for sync engines, UIs, daemons, and network services. It shines when facing complexity, nondeterminism, and retry logic: the more chaotic the flow, the bigger the gains.
The approach is to shape behavior out of chaos by exclusion, instead of defining all possible transitions. With LLMs, this process could be automated, so that an agent effectively creates itself dynamically using a DSL (a state schema and predefined states). The great thing about LLMs is being charged per token instead of per request, so we can interrogate them about every detail separately and build a flow graph with transparent (and debuggable) reasoning. I also have API sketches for proactive scenarios (originally made for an ML prototype) [0].
[0] https://github.com/pancsta/secai/blob/474433796c5ffbc7ec5744...
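As a toy illustration of "shaping behavior by exclusion": each state declares which states it removes when activated, so invalid combinations are simply excluded instead of every allowed transition being enumerated. The `State` and `Machine` types below are a made-up sketch, not secai's or its workflow engine's actual API.

```go
package main

import "fmt"

// State declares which other states it excludes; activating it removes
// them, so behavior is shaped by exclusion rather than by listing every
// allowed transition.
type State struct {
	Remove []string
}

// Machine holds a schema of named states and the set currently active.
type Machine struct {
	schema map[string]State
	active map[string]bool
}

func New(schema map[string]State) *Machine {
	return &Machine{schema: schema, active: map[string]bool{}}
}

// Add activates a state and drops everything it excludes.
func (m *Machine) Add(name string) {
	for _, excluded := range m.schema[name].Remove {
		delete(m.active, excluded)
	}
	m.active[name] = true
}

func (m *Machine) Active() []string {
	out := []string{}
	for s := range m.active {
		out = append(out, s)
	}
	return out
}

func main() {
	// A hypothetical scraping agent: it can't be Downloading and Parsing
	// at the same time, and any Error cancels both.
	m := New(map[string]State{
		"Downloading": {Remove: []string{"Parsing", "Error"}},
		"Parsing":     {Remove: []string{"Downloading", "Error"}},
		"Error":       {Remove: []string{"Downloading", "Parsing"}},
	})

	m.Add("Downloading")
	m.Add("Parsing") // excludes Downloading
	fmt.Println(m.Active())

	m.Add("Error") // excludes everything else
	fmt.Println(m.Active())
}
```

An LLM could emit such a schema state by state (the DSL mentioned above), with each exclusion individually inspectable and debuggable.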
Stats
pancsta/secai is an open-source project licensed under the MIT License, an OSI-approved license.
The primary programming language of secai is Go.