vector vs Toshi
| | vector | Toshi |
|---|---|---|
| Mentions | 95 | 12 |
| Stars | 16,366 | 4,108 |
| Growth | 4.8% | 0.5% |
| Activity | 9.9 | 6.1 |
| Latest commit | 6 days ago | 3 months ago |
| Language | Rust | Rust |
| License | Mozilla Public License 2.0 | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
vector
- FLaNK AI Weekly 18 March 2024
- Vector: A high-performance observability data pipeline
- Hacks to reduce cloud spend
We are doing something similar with OTel, but we are looking at using https://vector.dev/
- About reading logs
We don't pull logs, we forward logs to a centralized logging service.
- Self-hosted log parser
OpenSearch - Amazon's fork of Elasticsearch: https://opensearch.org/docs/latest. If you do this and have distributed log sources you'd normally use Logstash for, bin off Logstash and use Vector (https://vector.dev/); it's better out of the box for SaaS stuff.
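To make the Logstash-to-Vector swap concrete, here is a minimal Vector config sketch that tails log files and ships them to an OpenSearch node via Vector's Elasticsearch-compatible sink. The file paths, endpoint address, and component names are illustrative assumptions; field names can vary between Vector versions, so check the sink reference for yours.

```toml
# /etc/vector/vector.toml -- minimal sketch, not a production config

[sources.app_logs]
type = "file"
include = ["/var/log/app/*.log"]   # assumed log location

[sinks.search]
type = "elasticsearch"             # Vector's Elasticsearch sink also talks to OpenSearch
inputs = ["app_logs"]
endpoints = ["http://localhost:9200"]  # older Vector releases use a singular `endpoint` key
```

Validating with `vector validate /etc/vector/vector.toml` before deploying catches most schema mistakes.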
- Creating a centralized syslog server with Elasticsearch
I have done something similar in the past: you can send the logs through a centralized syslog server (I suggest syslog-ng) and from there ingest into ELK. For parsing, I advise using something like Vector; it is a lot faster than Logstash. Once your logs are ingested correctly, you can create your own dashboards in Kibana. If this fits your requirements, there is no need to install nginx (unless you want it as a reverse proxy for Kibana), PHP, or MySQL.
- Show HN: Homelab Monitoring Setup with Grafana
I think there's nothing currently that combines both logging and metrics into one easy package and visualizes it, but it's also something I would love to have.
Vector[1] would work as the agent, being able to collect both logs and metrics. But the issue would then be storing it. I'm assuming the Elastic Stack might be able to do both, but it's just too heavy to deal with in a small setup.
A couple of months ago I took a brief look at that when setting up logging for my own homelab (https://pv.wtf/posts/logging-and-the-homelab). Mostly looking at the memory usage to fit it on my synology. Quickwit[2] and Log-Store[3] both come with built in web interfaces that reduce the need for grafana, but neither of them do metrics.
- [1] https://vector.dev
- Retaining logs generated by a service running in a pod
Log to stdout/stderr and collect your logs with a tool like vector (vector.dev) and send it to something like Grafana Loki.
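The stdout-to-Loki pipeline described above can be sketched as a Vector config: collect pod stdout/stderr with the `kubernetes_logs` source and forward it with the `loki` sink. The endpoint address and the `app` label are assumptions for illustration; the exact option names (especially around `encoding` and label templating) differ across Vector versions, so verify against the docs.

```toml
# Sketch of a Vector agent config for shipping pod logs to Loki

[sources.pod_logs]
type = "kubernetes_logs"          # tails container stdout/stderr on each node

[sinks.loki]
type = "loki"
inputs = ["pod_logs"]
endpoint = "http://loki:3100"     # assumed in-cluster Loki address
encoding.codec = "json"
labels.app = "{{ kubernetes.pod_labels.app }}"  # example Loki label from pod metadata
```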
- Lightweight logging on RPi?
I would recommend that you run vector as a systemd service so you don't have to worry about managing it. Here is a basic config to do that - https://github.com/vectordotdev/vector/blob/master/distribution/systemd/vector.service .
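Assuming the linked unit file is installed (the official packages ship it), managing Vector under systemd is just a few commands; these are standard systemd invocations, not Vector-specific:

```shell
sudo systemctl enable --now vector   # start immediately and on every boot
systemctl status vector              # confirm it is running
journalctl -u vector -f              # follow Vector's own logs
```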
- Monitoring traefik access logs easily
You could have a look at Grafana Loki, it's easy to run (single binary for a small setup). Shipping your logs can be done by Promtail or something like Vector. They're both lightweight log shippers with support for Loki.
Toshi
- Tantivy 0.20 is released: Schemaless column store, Schemaless aggregations, Phrase prefix queries, Percentiles, and more...
I don't think there is an active project that addresses all those use cases. There was an attempt in Rust with Toshi, built on top of tantivy, but the project seems to have stalled.
- An alternative to Elasticsearch that runs on a few MBs of RAM
- Postgres Full Text Search vs. the Rest
I wish we had an extension like ZomboDB but using a lighter search engine like https://github.com/quickwit-oss/quickwit, https://github.com/toshi-search/Toshi and https://github.com/mosuka/bayard
Here I'm listing engines based on https://github.com/quickwit-oss/tantivy - tantivy is comparable to Lucene in its scope - but I'm sure there are other engines that could tackle ElasticSearch.
Another thing that could happen is maybe directly embed tantivy in Postgres using an extension, perhaps this could be an option too.
- Ask HN: Does anybody still use bookmarking services?
I do something similar, though I index the page myself via a little browser extension I wrote. I click a button, the content gets POSTed to a server that throws it in Toshi[1]. I hacked it together on a Saturday, and it's been pretty handy; as you describe, much more useful than any bookmarking approach I've tried before.
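A browser-extension backend like the one described boils down to one HTTP call. Below is a hedged Python sketch of the server side: it builds a payload in the shape shown in Toshi's README (document fields nested under `"document"`, with `"commit": true` to flush the write) and PUTs it to the index path. The base URL, index name, and document fields are assumptions; verify the exact endpoint shape against the Toshi README for your version.

```python
import json
import urllib.request

TOSHI_URL = "http://localhost:8080"   # assumed Toshi address; adjust to your deployment
INDEX = "bookmarks"                   # hypothetical index name

def make_payload(url: str, title: str, body: str) -> dict:
    # Document fields go under "document"; "commit": True persists immediately.
    return {
        "options": {"commit": True},
        "document": {"url": url, "title": title, "body": body},
    }

def index_page(url: str, title: str, body: str) -> int:
    # Add the document with a PUT to the index path (per Toshi's README examples).
    data = json.dumps(make_payload(url, title, body)).encode("utf-8")
    req = urllib.request.Request(
        f"{TOSHI_URL}/{INDEX}",
        data=data,
        headers={"Content-Type": "application/json"},
        method="PUT",
    )
    with urllib.request.urlopen(req) as resp:
        return resp.status
```

The payload builder is separated from the network call so the extension's POST handler can reuse it and so the shape is easy to adapt if the API differs.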
-
*set Edge as default browser*
There is some incredible work being done in the web department, frameworks like rocket.rs and actix.rs are amazing. To get the latest info on web development in Rust, check arewewebyet.org. It doesn't list Toshi though, which is weird.
- Zinc Search engine. A lightweight alternative to elasticsearch that requires minimal resources, written in Go.
- Zinc Search engine. A lightweight alternative to Elasticsearch written in Go
- AWS releases forked Elasticsearch code. Announces new name: OpenSearch
What are some alternatives?
graylog - Free and open log management
elasticsearch-rs - Official Elasticsearch Rust Client
Fluentd - Fluentd: Unified Logging Layer (project under CNCF)
MeiliSearch - A lightning-fast search API that fits effortlessly into your apps, websites, and workflow
agent - Vendor-neutral programmable observability pipelines.
narg - A tool to generate LC/AP formulas for a given seed in Noita.
syslog-ng - syslog-ng is an enhanced log daemon, supporting a wide range of input and output methods: syslog, unstructured text, queueing, SQL & NoSQL.
sonic - 🦔 Fast, lightweight & schema-less search backend. An alternative to Elasticsearch that runs on a few MBs of RAM.
OpenSearch - 🔎 Open source distributed and RESTful search engine.
lnx - ⚡ Insanely fast, 🌟 Feature-rich searching. lnx is the adaptable, typo-tolerant deployment of the tantivy search engine.
tracing - Application level tracing for Rust.