vector vs cue
| | vector | cue |
|---|---|---|
| Mentions | 96 | 28 |
| Stars | 16,366 | 3,181 |
| Growth | 4.8% | - |
| Last commit | 7 days ago | almost 3 years ago |
| Activity | 9.9 | 9.1 |
| Language | Rust | Go |
| License | Mozilla Public License 2.0 | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
vector
- Docker Log Observability: Analyzing Container Logs in HashiCorp Nomad with Vector, Loki, and Grafana
```hcl
job "vector" {
  datacenters = ["dc1"]
  # system job, runs on all nodes
  type = "system"

  group "vector" {
    count = 1

    network {
      port "api" {
        to = 8686
      }
    }

    ephemeral_disk {
      size   = 500
      sticky = true
    }

    task "vector" {
      driver = "docker"

      config {
        image   = "timberio/vector:0.30.0-debian"
        ports   = ["api"]
        volumes = ["/var/run/docker.sock:/var/run/docker.sock"]
      }

      env {
        VECTOR_CONFIG          = "local/vector.toml"
        VECTOR_REQUIRE_HEALTHY = "false"
      }

      resources {
        cpu    = 100 # 100 MHz
        memory = 100 # 100MB
      }

      # template with Vector's configuration
      template {
        destination   = "local/vector.toml"
        change_mode   = "signal"
        change_signal = "SIGHUP"
        # overriding the delimiters to [[ ]] to avoid conflicts with
        # Vector's native templating, which also uses {{ }}
        left_delimiter  = "[["
        right_delimiter = "]]"
        data = <
```
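The template body is cut off at the end. As a rough idea of the kind of `vector.toml` such a template might render for this Docker-logs-to-Loki setup, here is a minimal sketch; the Loki endpoint and label values are placeholder assumptions, not values from the original template:

```toml
# Collect logs from the local Docker daemon
# (this is why the job mounts /var/run/docker.sock)
[sources.docker]
type = "docker_logs"

# Ship them to a Loki instance; the endpoint and labels below
# are placeholders for illustration only
[sinks.loki]
type           = "loki"
inputs         = ["docker"]
endpoint       = "http://loki:3100"
encoding.codec = "json"
labels.job     = "docker"
```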
- FLaNK AI Weekly 18 March 2024
- Vector: A high-performance observability data pipeline
- Hacks to reduce cloud spend
We are doing something similar with OTel, but we are looking at using https://vector.dev/
- About reading logs
We don't pull logs, we forward logs to a centralized logging service.
- Self-hosted log parser
OpenSearch - Amazon's fork of Elasticsearch: https://opensearch.org/docs/latest. If you do this and have the kind of distributed log sources you'd use Logstash for, bin off Logstash and use Vector (https://vector.dev/) - it's better out of the box for SaaS stuff.
- Creating a centralized syslog server with Elasticsearch
I have done something similar in the past: you can send the logs through a centralized syslog server (I suggest syslog-ng) and from there ingest them into ELK. For parsing, I'd advise using something like Vector; it is a lot faster than Logstash. Once your logs are ingested correctly, you can create your own dashboards in Kibana. If this fits your requirements, there's no need to install nginx (unless you want to use it as a reverse proxy for Kibana), PHP, or MySQL.
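As a concrete sketch of the syslog-to-Elasticsearch leg of that pipeline with Vector, assuming its `syslog` source and `elasticsearch` sink; the listen address and index name are placeholders:

```toml
# Receive logs forwarded by the centralized syslog server (e.g. syslog-ng)
[sources.syslog_in]
type    = "syslog"
address = "0.0.0.0:514"
mode    = "tcp"

# Index them into Elasticsearch for viewing in Kibana;
# the endpoint and index pattern are placeholder values
[sinks.elastic]
type       = "elasticsearch"
inputs     = ["syslog_in"]
endpoints  = ["http://elasticsearch:9200"]
bulk.index = "vector-%Y-%m-%d"
```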
- Show HN: Homelab Monitoring Setup with Grafana
I think there's nothing currently that combines both logging and metrics into one easy package and visualizes it, but it's also something I would love to have.
Vector[1] would work as the agent, being able to collect both logs and metrics. But the issue would then be storing it. I'm assuming the Elastic Stack might now be able to do both, but it's just too heavy to deal with in a small setup.
A couple of months ago I took a brief look at that when setting up logging for my own homelab (https://pv.wtf/posts/logging-and-the-homelab), mostly looking at the memory usage to fit it on my Synology. Quickwit[2] and Log-Store[3] both come with built-in web interfaces that reduce the need for Grafana, but neither of them does metrics.
[1] https://vector.dev
- Retaining logs generated by a service running in a pod
Log to stdout/stderr and collect your logs with a tool like vector (vector.dev) and send it to something like Grafana Loki.
- Lightweight logging on RPi?
I would recommend that you run Vector as a systemd service so you don't have to worry about managing it. Here is a basic config to do that: https://github.com/vectordotdev/vector/blob/master/distribution/systemd/vector.service
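For a lightweight setup like an RPi, the `vector.toml` that the service reads can stay very small. This is only a sketch, assuming Vector's `journald` source and `file` sink; the output path is a placeholder:

```toml
# Read everything from the systemd journal
[sources.journal]
type = "journald"

# Write to local, date-stamped files as a lightweight destination;
# the path is an example, adjust to taste
[sinks.local_file]
type           = "file"
inputs         = ["journal"]
path           = "/var/log/vector/%Y-%m-%d.log"
encoding.codec = "json"
```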
cue
- The Perfect Configuration Format? Try TypeScript
- YAML: It's Time to Move On
- Ask HN: What you up to? (Who doesn't want to be hired?)
I'm continuing to work on https://concise-encoding.org which is a new security-conscious ad-hoc encoding format to replace JSON/XML and friends. I've been at it for 3 years so far and am close to a release.
In a nutshell:
- Edit in text, transmit in binary. One can be seamlessly converted to the other, but binary is far more efficient for processing, storage and transmission, while text is better for humans to read and edit (which happens far less often than the other things).
- Secure by design: Everything is tightly specced and accounted for so that there aren't differences between implementations that can be exploited to compromise your system. https://github.com/kstenerud/concise-encoding/blob/master/ce...
- Real type support because coercing everything into strings sucks (and is another security risk and source of incompatibilities).
XML had a good run but was replaced by JSON which was a big improvement. JSON also had a good run but it's time for it to retire now that the landscape has changed even further: Security and efficiency are the desires of today, and JSON provides neither.
I've got the spec nailed down and can finally see the light at the end of the tunnel for the reference implementation in golang. I still need to come up with a system for schemas, but I'm hoping that https://cuelang.org will fit the bill.
- No YAML
Has anyone taken a look at Cue and can share any experiences?
It's mentioned on the site as an alternative to Yaml. Recently watched (~half of) this intro to it: https://youtu.be/fR_yApIf6jU
- Ask HN: Is there a good way to run integration tests on Kubernetes?
- Cue: A new language for data validation
the most interesting summary explanation of cue lang and its differences is from a bug filing - https://github.com/cuelang/cue/issues/33
>CUE is a bit different from the languages used in linguistics and more tailored to the general configuration issue as we've seen it at Google. But under the hood it adheres strictly to the concepts and principles of these approaches and we have been careful not to make the same mistakes made in BCL (which then were copied in all its offshoots). It also means that CUE can benefit from 30 years of research on this topic. For instance, under the hood, CUE uses a first-order unification algorithm, allowing us to build template extractors based on anti-unification (see issue #7 and #15), something that is not very meaningful or even possible with languages like BCL and Jsonnet.
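A tiny illustration of the unification model described in that quote: in CUE, a type constraint and a concrete value are both just values, and they unify into one definition.

```cue
// A constraint and a concrete value unify:
a: int // type constraint
a: 3   // concrete value; int & 3 unifies to 3

// Conflicting concrete values would fail to unify:
// b: 3
// b: 4  // error: conflicting values 3 and 4
```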
- CMake proposal: Unified way of describing dependencies of a project
I agree with you. Personally, I think Cue is much better than either YAML, TOML or JSON because it adds the concept of types to the idea of describing configuration.
- Cloud Infrastructure as SQL
true, but the tooling and workflow remains the same.
Not sure of any tool that could abstract the details sufficiently to be widely adopted. There is just too much nuance in cloud config.
I'm exploring using CUE (https://cuelang.org) to define TF resources, exporting as JSON for TF. So far it's much nicer
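A minimal sketch of what that workflow can look like; the resource shape and field names below are hypothetical, not taken from the comment:

```cue
// A reusable schema for a TF-style resource, with a defaulted field
#Instance: {
	ami:           string
	instance_type: *"t3.micro" | string // default, overridable
}

// A concrete resource that must satisfy the schema
resource: aws_instance: web: #Instance & {
	ami: "ami-0abcdef1234567890"
}
```

Running `cue export config.cue --out json` then emits plain JSON that Terraform can consume.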
What are some alternatives?
graylog - Free and open log management
terraform - Terraform enables you to safely and predictably create, change, and improve infrastructure. It is a source-available tool that codifies APIs into declarative configuration files that can be shared amongst team members, treated as code, edited, reviewed, and versioned.
Fluentd - Fluentd: Unified Logging Layer (project under CNCF)
dhall-lang - Maintainable configuration files
agent - Vendor-neutral programmable observability pipelines.
jsonnet - Jsonnet - The data templating language
syslog-ng - syslog-ng is an enhanced log daemon, supporting a wide range of input and output methods: syslog, unstructured text, queueing, SQL & NoSQL.
Pulumi - Pulumi - Infrastructure as Code in any programming language. Build infrastructure intuitively on any cloud using familiar languages 🚀
OpenSearch - 🔎 Open source distributed and RESTful search engine.
ytt - YAML templating tool that works on YAML structure instead of text
tracing - Application level tracing for Rust.
starlark-rust - A Rust implementation of the Starlark language