ubicloud vs cli

| | ubicloud | cli |
|---|---|---|
| Mentions | 16 | 67 |
| Stars | 3,065 | 111 |
| Growth | 3.9% | 8.1% |
| Activity | 9.9 | 9.3 |
| Last commit | 5 days ago | 7 days ago |
| Language | Ruby | Go |
| License | GNU Affero General Public License v3.0 | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
ubicloud
- FLaNK AI for 11 March 2024
-
Show HN: Open-source x64 and Arm GitHub runners. Reduces GitHub Actions bill 10x
The docs still say the Elastic License is used, but looking at https://github.com/ubicloud/ubicloud/blob/main/LICENSE it looks like the project may have switched to the GNU Affero General Public License v3.0 in the last day.
- GitHub - ubicloud/ubicloud: Open, free, and portable cloud. Elastic compute, block storage (non replicated), and virtual networking services in public alpha.
-
Ask HN: How does your company balance test coverage and deploy speed?
At Ubicloud, we have 100% line and branch coverage that is mandated on every PR (https://github.com/ubicloud/ubicloud). We also have an E2E test suite that we run periodically and with every commit. We didn't really feel like our tests were slowing us down; they actually make us faster, since we have higher trust in each change, and many manual checks that would otherwise be needed can safely be skipped.
-
Ubicloud – open, free and portable cloud
> Taken from here: https://ubicloud.com/
Am I the only one getting a certificate error browsing there?
-
Ask HN: Thoughts about Elastic V2, SSPL, or mixed software licenses?
Link to our project: https://github.com/ubicloud/ubicloud
We’re choosing Elastic V2 for three reasons: (1) We’re planning to monetize through a managed service and we’d like the license to support that, (2) Later if we change our mind, we think it’s easier on our users if we go from a restrictive license to a more permissive one, and (3) The Elastic V2 license is much simpler than its cousin, Server Side Public License (SSPL).
That said, Elastic V2 is a new license and doesn't seem to be as popular as SSPL. Also, some projects out there mix and match multiple licenses in their repo to be able to call themselves open source.
Any insights / feedback on Elastic V2 or software licenses in general?
- Attribute-Based Access Control (ABAC) Implementation in 130 Lines of Code
cli
-
Show HN: Managed GitHub Actions Runners for AWS
Hey HN! I'm Jacob, one of the founders of Depot (https://depot.dev), a build service for Docker images, and I'm excited to show what we’ve been working on for the past few months: run GitHub Actions jobs in AWS, orchestrated by Depot!
Here's a video demo: https://www.youtube.com/watch?v=VX5Z-k1mGc8, and here’s our blog post: https://depot.dev/blog/depot-github-actions-runners.
While GitHub Actions is one of the most prevalent CI providers, Actions is slow, for a few reasons: GitHub uses underpowered CPUs, network throughput for cache and the internet at large is capped at 1 Gbps, and total cache storage is limited to 10GB per repo. It is also rather expensive for runners with more than 2 CPUs, and larger runners frequently take a long time to start running jobs.
Depot-managed runners solve this! Rather than your CI jobs running on GitHub's slow compute, Depot routes those same jobs to fast EC2 instances. And not only is this faster, it’s also 1/2 the cost of GitHub Actions!
We do this by launching a dedicated instance for each job, registering that instance as a self-hosted Actions runner in your GitHub organization, then terminating the instance when the job is finished. Using AWS as the compute provider has a few advantages:
- CPUs are typically 30%+ more performant than alternatives (the m7a instance type).
- Each instance has high-throughput networking of up to 12.5 Gbps, hosted in us-east-1, so interacting with artifacts, cache, container registries, or the internet at large is quick.
- Each instance has a public IPv4 address, so it does not share rate limits with anyone else.
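The per-job lifecycle described above (launch a dedicated instance, register it as a self-hosted runner, terminate it when the job finishes) can be sketched as a tiny state machine. Class and phase names are illustrative, not Depot's actual code; the GitHub Actions runner agent does have an `--ephemeral` registration mode that matches this one-job-per-instance model.

```python
from enum import Enum


class Phase(Enum):
    PENDING = "pending"
    REGISTERED = "registered"
    DONE = "terminated"


class EphemeralRunner:
    """One dedicated instance per job: register, run exactly one job,
    then terminate. Illustrative sketch, not Depot's implementation."""

    def __init__(self, job_id: str):
        self.job_id = job_id
        self.phase = Phase.PENDING

    def register(self) -> None:
        # In a real system: request a registration token from GitHub's
        # org runners API, then start the runner agent with --ephemeral.
        assert self.phase is Phase.PENDING
        self.phase = Phase.REGISTERED

    def finish(self) -> None:
        # Job complete: the instance is terminated and never reused,
        # so no job ever sees state left behind by a previous job.
        assert self.phase is Phase.REGISTERED
        self.phase = Phase.DONE
```

The one-way `PENDING → REGISTERED → DONE` transitions are the point: an instance can never go back to serving another job.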
We integrated the runners with the distributed cache system (backed by S3 and Ceph) that we use for Docker build cache, so jobs automatically save and restore cache there, with speeds of up to 1 GB/s and without the default 10 GB per-repo limit.
Building this was a fun challenge; some matrix workflows start 40+ jobs at once, requiring 40+ EC2 instances to launch simultaneously.
We've gotten very good at starting EC2 instances quickly with a "warm pool" system: we prepare many EC2 instances to run a job, stop them, then resize and start them when an actual job request arrives, keeping job queue times around 5 seconds. We're using a homegrown orchestration system, as alternatives like autoscaling groups or Kubernetes weren't fast or secure enough.
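The warm-pool idea reads as: pay the slow preparation cost ahead of time, keep instances stopped (cheap), and on a job request only do the fast part (resize and start). A minimal sketch, assuming nothing about Depot's real orchestrator:

```python
from collections import deque


class WarmPool:
    """Sketch of a warm pool: pre-prepared, stopped instances are
    resized and started on demand. All names are illustrative."""

    def __init__(self, prewarm_count: int):
        # Instances were already launched, provisioned (AMI, runner
        # agent), and then stopped -- the slow work is done up front.
        self._stopped = deque(f"i-{n:04d}" for n in range(prewarm_count))
        self._running: dict[str, str] = {}  # instance id -> size

    def acquire(self, size: str) -> str:
        """On a job request: take a stopped instance, resize it to the
        requested shape, and start it (fast relative to a cold launch)."""
        if not self._stopped:
            raise RuntimeError("pool exhausted; fall back to cold launch")
        instance = self._stopped.popleft()
        self._running[instance] = size  # resize happens while stopped
        return instance

    def release(self, instance: str) -> None:
        """Each job gets a fresh instance, so a released instance is
        terminated rather than returned to the pool."""
        self._running.pop(instance)
```

A background process would keep `_stopped` topped up as jobs drain it; that refill loop is omitted here.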
There are three alternatives to our managed runners currently:
1. GitHub offers larger runners: these have more CPUs, but still have slow network and cache. Depot runners are also 1/2 the cost per minute of GitHub's runners.
2. You can self-host the Actions runner on your own compute: this requires ongoing maintenance, and it can be difficult to ensure that the runner image or container matches GitHub's.
3. There are other companies offering hosted GitHub Actions runners, though they frequently use cheaper compute hosting providers that are bottlenecked on network throughput or geography.
Any feedback is very welcome! You can sign up at https://depot.dev/sign-up for a free trial if you'd like to try it out on your own workflows. We aren't able to offer a trial without a signup gate, both because using it requires installing a GitHub app, and we're offering build compute, so we need some way to keep out the cryptominers :)
-
Show HN: Open-source x64 and Arm GitHub runners. Reduces GitHub Actions bill 10x
Depot [0] founder here. Thanks for the mention. We're also planning on bringing a bit of a different take to GitHub Action runners that's not tied to Hetzner directly. It will be entirely open-source as well, so you can take it and run it on your own instances if you'd like. Similar to how Depot supports self-hosted builders in your own AWS account [1].
[0] https://depot.dev/
[1] https://depot.dev/docs/self-hosted/architecture
-
Dive: A tool for exploring a Docker image, layer contents and more
Dive is an amazing tool in the container/Docker space. It makes life so much easier to debug what is actually in your container. When we were first getting started with Depot [0], we often got asked how to reduce image size as well as make builds faster. So we wrote up a quick blog post that shows how to use Dive to help with that problem [1]. It might be a bit dated now, but in case it helps a future person.
Dive also inspired us to make it easier to surface what is actually in your build context, on every build. So we shipped that as a feature in Depot a few weeks back.
[0] https://depot.dev
-
Build Docker images faster using build cache
If you want to learn more about how Depot can help you optimize your Docker image builds, sign up for our free trial.
-
Show HN: WarpBuild – x86-64 and arm GitHub Action runners for 30% faster builds
We have this with https://depot.dev out of the box. You connect to a native BuildKit and run your Docker image build on native Intel and Arm CPUs with fast persistent SSD cache orchestrated across builds. It’s immediately there on the next build without having to save/load it over the network.
-
Launch HN: Loops (YC W22) – Email for SaaS Companies
We use Loops to power the core of our email things for Depot [0] and it's been quite a breeze to use.
I think there are some logic things to get right at the API level, like: should I use events or contact properties to trigger loops? We're working through some of that and wish the guidance were a bit clearer. At the moment, any properties you send with an event get added to the contact, so it seems like contact properties are the way to go.
My last request would be to support array properties on contacts as a given contact could be in multiple "things".
[0] https://depot.dev/
-
Show HN: An OIDC issuer for GitHub Actions pull_request workflows
We encountered a specific GitHub Actions restriction at Depot[0]: for pull_request workflows that originate from open-source forks, Actions disables access to all repository secrets and to the Actions OIDC issuer, as a security mechanism to deny untrusted code access to those secrets.
But we needed a way to authenticate our CLI within those public workflows. This OIDC issuer is the result of that need, and works like so:
1. The pull_request workflow makes a "claim request" to the OIDC issuer, claiming certain details about the workflow like the ID, run ID, repository, etc.
2. The OIDC issuer responds with a "challenge code" that the workflow must periodically print to its logs
3. The OIDC issuer connects to the GitHub Actions websocket endpoint for log streaming, validates that the challenge code is being printed, then returns a new OIDC token to the workflow
This is working well for us, and lets us acquire an OIDC token similar to the GitHub Actions native OIDC token. The issuer itself runs as a Cloudflare Worker.
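The three steps above can be sketched as follows. The HMAC-based challenge derivation is an assumption on my part, chosen to illustrate the property the protocol relies on: only a workflow that can actually write to the claimed run's logs can echo the issuer's challenge back.

```python
import hashlib
import hmac
import secrets

# Issuer-held signing key (illustrative; a real issuer would persist this).
SECRET = secrets.token_bytes(32)


def issue_challenge(run_id: str) -> str:
    """Step 2: derive an unguessable challenge code bound to this run.
    Keying it to run_id means a code for one run is useless for another."""
    return hmac.new(SECRET, run_id.encode(), hashlib.sha256).hexdigest()[:16]


def verify_log_stream(run_id: str, log_lines: list[str]) -> bool:
    """Step 3: the issuer watches the run's log stream. Seeing the
    expected code proves the claimant controls the claimed workflow run,
    so an OIDC token may be minted; otherwise the claim is rejected."""
    expected = issue_challenge(run_id)
    return any(expected in line for line in log_lines)
```

In the real flow the issuer reads the logs over GitHub's log-streaming websocket rather than receiving them as a list, but the trust argument is the same.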
Happy to answer questions and I'd love any feedback you may have!
[0] https://depot.dev
-
Show HN: depot.ai – easily embed ML / AI models in your Dockerfile
To optimize build speed, cache hits, and registry storage, we're building each image reproducibly and indexing the contents with eStargz[0]. The image is stored on Cloudflare R2, and served via a Cloudflare Worker. Everything is open source[1]!
Compared to alternatives like `git lfs clone` or downloading your model at runtime, embedding it with `COPY` produces layers that are cache-stable, with identical hash digests across rebuilds. This means they can be fully cached, even if your base image or source code changes.
And for Docker builders that enable eStargz, copying single files from the image will download only the requested files. eStargz can be enabled in a variety of image builders[2], and we’ve enabled it by default on Depot[3].
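The cache-stability claim comes down to content addressing: a layer's digest is a hash of its bytes, so a reproducibly built layer hashes identically on every rebuild, regardless of changes elsewhere in the image. A simplified sketch of that property (real OCI layers hash a tar stream, not a sorted dict, but the principle is the same):

```python
import hashlib


def layer_digest(files: dict[str, bytes]) -> str:
    """Content-addressed digest over a layer's file set. Identical
    bytes in, identical digest out -- which is what lets a COPY'd
    model layer stay cached across base-image and source changes."""
    h = hashlib.sha256()
    for path in sorted(files):  # deterministic ordering
        h.update(path.encode())
        h.update(files[path])
    return "sha256:" + h.hexdigest()
```

Non-reproducible inputs (timestamps, random file ordering in the tar) are exactly what breaks this, which is why the images are built reproducibly.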
Here’s an announcement post with more details: https://depot.dev/blog/depot-ai.
We’d love to hear any feedback you may have!
[0] https://github.com/containerd/stargz-snapshotter/blob/main/docs/estargz.md
[1] https://github.com/depot/depot.ai
[2] https://github.com/containerd/stargz-snapshotter/blob/main/docs/integration.md#image-builders
[3] https://depot.dev
-
Launch HN: Resend (YC W23) – Email API for Developers Using React
We use Resend for our transactional email at https://depot.dev after migrating away from Postmark following their acquisition. It's been awesome so far, and because our app is Remix under the hood, it was delightfully easy to get our emails exactly how we wanted them.
The visibility into which emails have been sent, to whom, and what the content was is also incredibly helpful when we're talking about transactional emails. Double bonus for being able to share that email as well.
- Docker layer cache is better when shared with your team.
What are some alternatives?
manageiq - ManageIQ Open-Source Management Platform
lime - New standard library and runtime for the D programming language
fog-azure-rm - Fog for Azure Resource Manager
plane - A distributed system for running WebSocket services at scale.
cloudfront-signer - Ruby gem for signing AWS CloudFront private content URLs and streaming paths.
windmill - Open-source developer platform to turn scripts into workflows and UIs. Fastest workflow engine (5x vs Airflow). Open-source alternative to Airplane and Retool.
AWS SDK for Ruby - The official AWS SDK for Ruby.
fasten-onprem - Fasten is an open-source, self-hosted, personal/family electronic medical record aggregator, designed to integrate with 100,000's of insurances/hospitals/clinics
forem - For empowering community 🌱
resend-node - Resend's Node.js SDK
homebrew-portable-ruby - đźš— Versions of Ruby that can be installed and run from anywhere on the filesystem.
resend-java - Resend's Java SDK