| opentelemetry-collector-contrib | gitlab-runner |
|---|---|
| 10 | 47 |
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
opentelemetry-collector-contrib
-
All you need is Wide Events, not "Metrics, Logs and Traces"
The OpenTelemetry Collector does just that. https://github.com/open-telemetry/opentelemetry-collector-co...
-
Migrating to OpenTelemetry
If you are using the Prometheus exporter, you can use the transform processor to get specific resource attributes into metric labels.
The advantage is that you get only the specific attributes you want, thus avoiding a cardinality explosion.
https://github.com/open-telemetry/opentelemetry-collector-co...
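As a rough sketch of that approach (the attribute names and pipeline components here are illustrative, not from the comment), a transform-processor statement in the datapoint context copies a single resource attribute onto metric datapoints, so the Prometheus exporter emits it as a label:

```yaml
processors:
  transform:
    metric_statements:
      - context: datapoint
        statements:
          # Copy only the one resource attribute we want as a label;
          # everything else stays off the metric, keeping cardinality bounded.
          - set(attributes["service_name"], resource.attributes["service.name"])

service:
  pipelines:
    metrics:
      receivers: [otlp]
      processors: [transform]
      exporters: [prometheus]
```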
-
Vendor lock-in is in the small details
The article seems to suggest https://github.com/open-telemetry/opentelemetry-collector-co... was silently killed, yet it appears to have been merged in January; am I missing something?
-
Ask HN: What's Your Opinion on Opentelemetry?
OpenTelemetry is a large suite of software that supports many use cases. I think you got what you wanted but didn't realise it!
The dedicated executable that you are after is called the OpenTelemetry Collector.
The OpenTelemetry SDK for your language of choice should include many exporters, which describe the format and transport mechanism for the traces. The OpenTelemetry Collector can then use an appropriate receiver to ingest those traces.
Here is a file based receiver for the collector:
https://github.com/open-telemetry/opentelemetry-collector-co...
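For illustration, a minimal collector config for that pattern might look like the following. This is a sketch assuming the contrib otlpjsonfile receiver and an SDK exporter that writes OTLP JSON to disk; the paths are made up:

```yaml
receivers:
  otlpjsonfile:
    include:
      - /var/log/traces/*.json   # files written by the SDK's file exporter

exporters:
  debug: {}   # print ingested spans; swap in a real backend exporter

service:
  pipelines:
    traces:
      receivers: [otlpjsonfile]
      exporters: [debug]
```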
-
OpenTelemetry at Scale: Using Kafka to handle bursty traffic
This architecture is how the big players do it at scale (e.g. Datadog, New Relic: the second data passes their edge it lands in a Kafka queue). Also, OTel components lack rate limiting (1), meaning it's super easy to overload your backend storage (S3).
Grafana has some posts on how they softened the S3 blow with memcached (2, 3).
1. https://github.com/open-telemetry/opentelemetry-collector-co...
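The buffering pattern described above can be sketched with the contrib Kafka components (broker and topic names are made up): an edge collector produces to Kafka, and a separate backend collector consumes at whatever rate the storage tier can absorb.

```yaml
# Edge collector: accept telemetry and hand it off to Kafka immediately.
exporters:
  kafka:
    brokers: [kafka:9092]
    topic: otlp_spans

# Backend collector (a separate config/deployment): drain the topic at its own pace.
receivers:
  kafka:
    brokers: [kafka:9092]
    topic: otlp_spans
```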
-
Show HN: HyperDX – open-source dev-friendly Datadog alternative
Ah yeah, the easiest way is probably using the OpenTelemetry Collector to set up a process that pulls your logs out of journald and sends them via OTel logs to HyperDX (or anywhere else that speaks OTel). The docs might be a bit tricky to navigate depending on your familiarity with OpenTelemetry, but this is what you'd be looking for:
https://github.com/open-telemetry/opentelemetry-collector-co...
Happy to dive more into the discord too if you'd like!
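As a rough sketch (the endpoint and auth header below are placeholders, not HyperDX's documented values), the contrib journald receiver paired with an OTLP exporter would look like:

```yaml
receivers:
  journald:
    directory: /var/log/journal
    units: [my-app.service]            # optional: limit to specific units

exporters:
  otlphttp:
    endpoint: https://otel.example.com # placeholder OTLP/HTTP ingestion endpoint
    headers:
      authorization: ${HYPERDX_API_KEY} # placeholder auth header

service:
  pipelines:
    logs:
      receivers: [journald]
      exporters: [otlphttp]
```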
-
DataDog asked OpenTelemetry contributor to kill pull request
Link to exact comment: https://github.com/open-telemetry/opentelemetry-collector-co...
-
Elastic, Loki and SigNoz – A Perf Benchmark of Open-Source Logging Platforms
What schema does SigNoz use with ClickHouse? The OpenTelemetry Collector uses this schema https://github.com/open-telemetry/opentelemetry-collector-co... and I found that accessing map attributes is much slower (10-50x) than regular columns. I expected some slowdown, but this is too much.
-
Podman: A tool for managing OCI containers and pods
Podman supports the Docker API, so you can use something like the OpenTelemetry Collector to fetch metrics via the Docker API and forward them to Prometheus.
Collector: https://github.com/open-telemetry/opentelemetry-collector-co...
Docker receiver: https://github.com/open-telemetry/opentelemetry-collector-co...
Prometheus exporters: https://github.com/open-telemetry/opentelemetry-collector-co... and https://github.com/open-telemetry/opentelemetry-collector-co...
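Putting those pieces together, a minimal sketch (the Podman socket path is the common default, but verify it on your system):

```yaml
receivers:
  docker_stats:
    endpoint: unix:///run/podman/podman.sock  # Podman's Docker-compatible API socket
    collection_interval: 10s

exporters:
  prometheus:
    endpoint: "0.0.0.0:8889"  # Prometheus scrapes this collector port

service:
  pipelines:
    metrics:
      receivers: [docker_stats]
      exporters: [prometheus]
```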
gitlab-runner
-
🦊 GitLab CI: Deploy a Majestic Single Server Runner on AWS
#!/bin/bash
#
### Script to initialize a GitLab runner on an existing AWS EC2 instance with NVME disk(s)
#
# - script is not interactive (can be run as user_data)
# - will reboot at the end to perform NVME mounting
# - first NVME disk will be used for GitLab custom cache
# - last NVME disk will be used for Docker data (if only one NVME, the same will be used without problem)
# - robust: on each reboot and stop/start, disks are mounted again (but data may be lost if stop and then start after a few minutes)
# - runner is tagged with multiple instance data (public dns, IP, instance type...)
# - works with a single spot instance
# - should work even with multiple ones in a fleet, with same user_data (not tested for now)
#
# /!\ There is no prerequisite, except these needed variables:

MAINTAINER=zenika
RUNNER_NAME="majestic-runner"
GITLAB_URL=https://gitlab.com/
GITLAB_TOKEN=XXXX

# prepare docker (re)install
sudo apt-get -y install apt-transport-https ca-certificates curl gnupg lsb-release sysstat
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg
echo "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list >/dev/null
sudo apt-get update # needed to use the docker.list

# install gitlab runner
curl -L "https://packages.gitlab.com/install/repositories/runner/gitlab-runner/script.deb.sh" | sudo bash
sudo apt-get -y install gitlab-runner

# create NVME initializer script
cat <<EOF >/home/ubuntu/nvme-initializer.sh
#!/bin/bash
#
# To be run on each fresh start, since NVME disks are ephemeral
# so first start, start after stop, but not on reboot
# inspired by https://stackoverflow.com/questions/45167717/mounting-a-nvme-disk-on-aws-ec2
#
date | tee -a /home/ubuntu/nvme-initializer.log

### Handle NVME disks

# get NVME disks bigger than 100Go (some small size disk may be there for root, depending on server type)
NVME_DISK_LIST=\$(lsblk -b --output=NAME,SIZE | grep "^nvme" | awk '{if(\$2>100000000000)print\$1}' | sort)
echo "NVME disks are: \$NVME_DISK_LIST" | tee -a /home/ubuntu/nvme-initializer.log

# there may be 1 or 2 NVME disks, then we split (or not) the mounts between GitLab custom cache and Docker data
export NVME_GITLAB=\$(echo "\$NVME_DISK_LIST" | head -n 1)
export NVME_DOCKER=\$(echo "\$NVME_DISK_LIST" | tail -n 1)
echo "NVME_GITLAB=\$NVME_GITLAB and NVME_DOCKER=\$NVME_DOCKER" | tee -a /home/ubuntu/nvme-initializer.log

# format disks if not already formatted
sudo mkfs -t xfs /dev/\$NVME_GITLAB | tee -a /home/ubuntu/nvme-initializer.log || echo "\$NVME_GITLAB already formatted" # this may already be done
sudo mkfs -t xfs /dev/\$NVME_DOCKER | tee -a /home/ubuntu/nvme-initializer.log || echo "\$NVME_DOCKER already formatted" # disk may be the same, then already formatted by previous command

# mount on /gitlab-host/ and /var/lib/docker/
sudo mkdir -p /gitlab
sudo mount /dev/\$NVME_GITLAB /gitlab | tee -a /home/ubuntu/nvme-initializer.log
sudo mkdir -p /gitlab/custom-cache
sudo mkdir -p /var/lib/docker
sudo mount /dev/\$NVME_DOCKER /var/lib/docker | tee -a /home/ubuntu/nvme-initializer.log

### reinstall Docker (whose data may have been wiped out)

# docker (re)install
sudo apt-get -y reinstall docker-ce docker-ce-cli containerd.io docker-compose-plugin | tee -a /home/ubuntu/nvme-initializer.log

echo "NVME initialization successful" | tee -a /home/ubuntu/nvme-initializer.log
EOF

# set NVME initializer script as startup script
sudo tee /etc/systemd/system/nvme-initializer.service >/dev/null <
-
Atlassian prepares to abandon on-prem server products
GitLab team member here, thanks for sharing.
> Still not a big fan of how stiff Yaml pipelines feel in Gitlab CI
Maybe the pipeline editor in "Build > Pipeline editor" can help with live linting, or more advanced features such as parent-child pipelines or merge trains.
If you need tips for optimizing CI/CD pipelines, I suggest following the tips in the docs https://docs.gitlab.com/ee/ci/pipelines/pipeline_efficiency.... or a few more in my recent talk "Efficient DevSecOps pipelines in cloud-native world" (slides from Chemnitz Linux Days 2023): https://docs.google.com/presentation/d/1_kyGo_cWi5dKyxi3BfYj...
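For example, a parent-child pipeline (one of the features mentioned above) is just a trigger job in .gitlab-ci.yml; the job and child file names here are illustrative:

```yaml
generate-and-run:
  trigger:
    include: child-pipeline.yml  # child pipeline definition in the same repo
    strategy: depend             # parent job mirrors the child pipeline's status
```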
> and that tickets for what seems like a simple feature [1] hang around for years, but it is nice.
Thanks for sharing. (FYI for everyone) The linked issue suggests a Docker cache cleanup script, which might be helpful. https://gitlab.com/gitlab-org/gitlab-runner/-/issues/27332#n... -> https://docs.gitlab.com/runner/executors/docker.html#clear-t...
-
GitHub Actions could be so much better
If only competitors could do better...
https://gitlab.com/gitlab-org/gitlab-runner/-/issues/2797
-
Gitlab runner in-depth - communication and CI_JOB_TOKEN
-
Caching of GitLab CI is too slow for Rust builds.
GitLab MR for the CACHE_COMPRESSION_LEVEL implementation
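For reference, CACHE_COMPRESSION_LEVEL can be set per job in .gitlab-ci.yml to trade compression ratio for speed on large Rust target/ caches; the job and cache key names below are illustrative:

```yaml
build:
  variables:
    CACHE_COMPRESSION_LEVEL: "fastest"  # less compression, much faster cache archiving
  cache:
    key: cargo-$CI_COMMIT_REF_SLUG
    paths:
      - target/
```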
-
The GMP library's website is under attack by a single GitHub user
And in general, just making caching easier. I feel it is unnecessarily complicated, for example, to cache apt-get downloads in GitLab, which I assume makes most people not do it.
https://gitlab.com/gitlab-org/gitlab-runner/-/issues/991#not...
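One common workaround (an illustrative sketch, not taken from the linked issue): keep apt's downloaded archives in a cached project directory so later jobs skip the downloads.

```yaml
variables:
  APT_CACHE_DIR: $CI_PROJECT_DIR/.apt-cache

build:
  cache:
    key: apt-cache
    paths:
      - .apt-cache/
  before_script:
    - mkdir -p $APT_CACHE_DIR
    - apt-get update
    # Tell apt to store downloaded .deb archives in the cached directory.
    - apt-get -o dir::cache::archives=$APT_CACHE_DIR install -y build-essential
```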
What are some alternatives?
podman-compose - a script to run docker-compose.yml using podman
woodpecker - Woodpecker is a simple yet powerful CI/CD engine with great extensibility.
cockpit-podman - Cockpit UI for podman containers
kaniko - Build Container Images In Kubernetes
traefik - The Cloud Native Application Proxy
singularity - Singularity has been renamed to Apptainer as part of us moving the project to the Linux Foundation. This repo has been persisted as a snapshot right before the changes.
logs-benchmark - Logs performance benchmark repo: Comparing Elastic, Loki and SigNoz
onedev - Git Server with CI/CD, Kanban, and Packages. Seamless integration. Unparalleled experience.
dd-trace-py - Datadog Python APM Client
opentelemetry-collector-contrib - Contrib repository for the OpenTelemetry Collector
machine