| | RRDReST | Collectd |
|---|---|---|
| Mentions | 2 | 8 |
| Stars | 28 | 3,211 |
| Growth | - | 0.9% |
| Activity | 0.0 | 8.4 |
| Latest commit | about 3 years ago | 2 months ago |
| Language | Python | C |
| License | - | GNU General Public License v3.0 or later |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
RRDReST
-
How to (almost) natively integrate LibreNMS and Grafana
RRDRest - https://github.com/tbotnz/RRDReST - Includes instructions on setup
- RRDReST (read RRDs from a REST endpoint)
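Once an RRDReST instance is running, Grafana can pull datapoints from it over plain HTTP. A minimal sketch of building such a query, assuming the default port and the `rrd_path`/`epoch_start`/`epoch_end` query parameters (verify against the project README for your version; host and RRD path below are made up):

```shell
# Build the query URL for a hypothetical RRDReST instance serving LibreNMS RRDs.
RRD="/opt/librenms/rrd/router1/port-id1.rrd"
URL="http://localhost:9000/?rrd_path=${RRD}&epoch_start=1700000000&epoch_end=1700003600"
echo "$URL"
# Fetch it with: curl -s "$URL"
```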
Collectd
-
Collect Logs and Metrics from non-AWS Server using CloudWatch Agent
CloudWatch Agent uses the CollectD service to collect metrics. If CollectD is not installed on your system, the Agent will fail to start. If you are not sure, here is how to check whether CollectD is installed and active:
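The check itself can look like this on a systemd-based host (a sketch; the binary location and unit name may vary by distro):

```shell
# Is the collectd binary on PATH?
if command -v collectd >/dev/null 2>&1; then
    echo "collectd installed at: $(command -v collectd)"
else
    echo "collectd is not installed"
fi

# Is the systemd service running? Prints "active" when it is.
systemctl is-active collectd 2>/dev/null || echo "collectd service is not active"
```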
-
μMon: Stupid simple monitoring
https://collectd.org/ does the gathering (and, if you so desire, writing to an RRDtool database) part very well. Many plugins, and it's easy to add more (just return one line of text).
You still need an RRD viewer, but that's not a huge stack.
And it scales all the way to hundreds of hosts: on top of network send/receive of stats, it supports a few other write formats aside from plain RRD files.
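The "just return one line of text" part refers to collectd's Exec plugin, which reads `PUTVAL` lines from a script's stdout. A minimal sketch (the metric name is made up, and a real Exec script loops forever, emitting one line per interval; a single emission is shown for clarity):

```shell
#!/bin/sh
# Sketch of a collectd Exec-plugin script. The Exec plugin expects lines like:
#   PUTVAL "<host>/<plugin>-<instance>/<type>" interval=<seconds> N:<value>
# collectd exports COLLECTD_HOSTNAME and COLLECTD_INTERVAL to the script.

HOST="${COLLECTD_HOSTNAME:-$(hostname)}"
INTERVAL="${COLLECTD_INTERVAL:-60}"

# Example metric (an assumption for illustration): count of logged-in users.
USERS=$(who | wc -l)
echo "PUTVAL \"$HOST/exec-users/gauge\" interval=$INTERVAL N:$USERS"
```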
-
Post Mortem on Mastodon Outage with 30k users
Then you will have the same problems, but now you can bother the manufacturer about them!
Also, unless there is something horribly wrong with how often data is written, that SSD should run for ages.
We ran consumer SSDs (as a test) in a busy ES cluster and they still lasted about 2 years just fine.
The whole setup was a bit overcomplicated too. RAID10 with 5+1 or 7+1 (yes, Linux can do 7-drive RAID10) with a hot spare would've been entirely fine, easier, and most likely faster. You need backups anyway, so ZFS doesn't give you much here, just extra CPU usage.
Either way, monitor I/O wait per drive (an easy way is to just plug collectd [1] into your monitoring stack; it is light and can monitor A TON of different metrics).
* [1] https://collectd.org/
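A sketch of the collectd.conf fragment that enables per-drive I/O metrics (including wait time) via the disk plugin; the drive regex is an example, so match it to your device names:

```
LoadPlugin disk

<Plugin disk>
  # Monitor whole drives matching this regex (sd*, nvme*), skip everything else.
  Disk "/^(sd[a-z]|nvme[0-9]+n[0-9]+)$/"
  IgnoreSelected false
</Plugin>
```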
-
IT Pro Tuesday #217 - Python Frameworks, Logging Tutorial, Android Terminal & More
Collectd pulls metrics from the OS, applications, logfiles and external devices for use in monitoring systems, finding performance bottlenecks and capacity planning. hombre_sabio explains, "Collectd is a tiny daemon that gathers information from a system. It enables mechanisms to collect and observe the values in different techniques. It is an open-source monitoring tool to retrieve and manage SNMP master agents."
- PHP7.4 Installation Fail
-
CPU Performance of a docker minecraft java server on Raspberry Pi 4
For metrics storage I'm using a Graphite database, and the graph UI itself is Grafana. To get these I'm using the Debian packages they supply, with mostly off-the-shelf configs. For collecting metrics from the Pi to send to Graphite I use collectd. It has a lot of off-the-shelf plugins you can use to grab metrics like CPU usage and load average, network in/out, memory stats, etc. The Minecraft-specific stuff you can get by configuring collectd plugins as well; for example, for the tick-lag graph I use the "tail" plugin to follow and parse the server log.
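A sketch of what that tail-plugin configuration can look like (the log path and regex are assumptions; match them to your server's log location and format):

```
LoadPlugin tail

<Plugin tail>
  <File "/opt/minecraft/logs/latest.log">
    Instance "minecraft"
    <Match>
      # Count the server's "Can't keep up!" warnings as a tick-lag counter.
      Regex "Can't keep up!"
      DSType "CounterInc"
      Type "counter"
      Instance "tick_lag"
    </Match>
  </File>
</Plugin>
```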
-
Lightweight alternative to Grafana
For monitoring, personally I use collectd and Collectd Graph Panel (sadly the latter is abandoned, but it still works fine)