Top 23 Go cloud-native Projects
TiDB is an open-source, cloud-native, distributed, MySQL-compatible database for elastic scale and real-time analytics. Try AI-powered Chat2Query free at https://tidbcloud.com/free-trial
Project mention: Ask HN: Who is hiring? (January 2023) | news.ycombinator.com | 2023-01-02
PingCAP | https://www.pingcap.com | Database Engineer, Product Manager, Developer Advocate and more | Remote in California | Full-time
We work on a MySQL-compatible distributed database called TiDB (https://github.com/pingcap/tidb/) and a key-value store called TiKV.
TiDB is written in Go and TiKV is written in Rust.
More roles and locations are available on https://www.pingcap.com/careers/
A cloud-native Go microservices framework with a CLI tool for productivity.
Your ultimate Go microservices framework for the cloud-native era.
https://github.com/go-kratos/kratos has good examples for project layout
An open source trusted cloud native registry project that stores, signs, and scans content.
Project mention: Open source/free registry with HA | reddit.com/r/devops | 2023-01-26
Does it HAVE to be those types of packages? Have you thought of using containers instead, thus opening up more storage options, like https://goharbor.io/?
Vector database for scalable similarity search and AI applications.
Project mention: Show HN: Turning books into chatbots with GPT-3 | news.ycombinator.com | 2023-01-24
If you sprinkle in a bit of infrastructure, I think we're already there. The ability to distill a variety of content into vectors and perform approximate nearest neighbor search (shameless plug: https://milvus.io) across all of them can really help power a lot of these applications. With the retrieved vectors, you could match questions with answers or create a reverse index to the original content to perform summarization.
With that being said, one of the main challenges ahead will be multimodal learning. We're sort-of there combining text with visual data, but there are many other modalities out there as well.
OpenID Certified™ OpenID Connect and OAuth Provider written in Go - cloud native, security-first, open source API security for your infrastructure. SDKs for any language. Works with Hardware Security Modules. Compatible with MITREid.
Project mention: how to implement oauth2 for API security | reddit.com/r/golang | 2023-01-23
We used hydra (https://github.com/ory/hydra) to build our OAuth provider
Workflow engine for Kubernetes
Project mention: Which build system do you use? | reddit.com/r/golang | 2023-02-02
go-git has a lot of bugs and is not actively maintained. One of those bugs even affects Argo Workflows, and caused our data pipeline to fail unexpectedly (reference: https://github.com/argoproj/argo-workflows/issues/10091)
High-Performance server for NATS.io, the cloud and edge native messaging system.
Project mention: Show HN: DriftDB is an open source WebSocket back end for real-time apps | news.ycombinator.com | 2023-02-03
Nice, have you come across NATS? https://nats.io. The server natively supports WebSockets. There are many clients including Deno, Node, WebSockets, Rust, Go, C, Python, etc.
In addition to stateless messaging, it supports durable streams, and optimized API layers on top like key-value, and object storage.
The server also natively supports MQTT 3.1.1.
The container platform tailored for Kubernetes multi-cloud, datacenter, and edge management ⎈ 🖥 ☁️
Project mention: How to Provision and Manage Amazon EKS with Ease | dev.to | 2022-05-05
```
#####################################################
###           Welcome to KubeSphere!              ###
#####################################################
Account: admin
Password: P@88w0rd
NOTES:
1. After logging into the console, please check the monitoring status of service components in the "Cluster Management". If any service is not ready, please wait patiently until all components are ready.
2. Please modify the default password after login.
#####################################################
https://kubesphere.io             2020-xx-xx xx:xx:xx
```
Storage Orchestration for Kubernetes
Project mention: How to Deploy and Scale Strapi on a Kubernetes Cluster 2/2 | dev.to | 2023-02-03
Rook (this is a nice article for Rook NFS)
An open source, general-purpose policy engine.
Project mention: What are well-developed web applications in Golang? | reddit.com/r/golang | 2023-01-28
Fast and Simple Serverless Functions for Kubernetes
Project mention: Why would someone need serverless infrastructure? | reddit.com/r/homelab | 2023-01-08
JuiceFS is a distributed POSIX file system built on top of Redis and S3.
Project mention: Migrating ClickHouse's warm & cold data to object storage with JuiceFS | dev.to | 2023-01-30
```
$ ls -l /var/lib/clickhouse/data//
drwxr-xr-x 2 test test 64B Aug 8 13:46 202208_1_3_0
drwxr-xr-x 2 test test 64B Aug 8 13:46 202208_4_6_1
drwxr-xr-x 2 test test 64B Sep 8 13:46 202209_1_1_0
drwxr-xr-x 2 test test 64B Sep 8 13:46 202209_4_4_0
```

In the rightmost column above, each subdirectory name is prefixed by a time, e.g. 202208, which is also the partition name. Partitions can be defined by the user but are usually named by time. Each partition usually consists of multiple parts; for example, partition 202208 above has two subdirectories (i.e., parts). When writing data to ClickHouse, data is written to memory first and then persisted to disk according to the in-memory data structure. If the data in a partition grows too large, it becomes many parts on disk. ClickHouse discourages creating too many parts under one table, and it merges parts to reduce their total number; this is one of the reasons the engine is called MergeTree.

Another example helps to understand what a "part" is in ClickHouse. A part contains many small files, some of which hold meta-information, such as index information that speeds up lookups:

```
$ ls -l /var/lib/clickhouse/data///202208_1_3_0
-rw-r--r-- 1 test test ?? Aug 8 14:06 ColumnA.bin
-rw-r--r-- 1 test test ?? Aug 8 14:06 ColumnA.mrk
-rw-r--r-- 1 test test ?? Aug 8 14:06 ColumnB.bin
-rw-r--r-- 1 test test ?? Aug 8 14:06 ColumnB.mrk
-rw-r--r-- 1 test test ?? Aug 8 14:06 checksums.txt
-rw-r--r-- 1 test test ?? Aug 8 14:06 columns.txt
-rw-r--r-- 1 test test ?? Aug 8 14:06 count.txt
-rw-r--r-- 1 test test ?? Aug 8 14:06 minmax_ColumnC.idx
-rw-r--r-- 1 test test ?? Aug 8 14:06 partition.dat
-rw-r--r-- 1 test test ?? Aug 8 14:06 primary.idx
```

In the rightmost column of this example, the files prefixed with Column are the actual data files, which are relatively large compared to the meta-information. This example has only two columns, A and B; a real table may consist of many columns. All these files, including the meta and index information, together help users jump between files and look them up quickly.

ClickHouse storage policy

To tier hot and cold data in ClickHouse, you use a lifecycle mechanism similar to the one mentioned for ES, called a storage policy in ClickHouse. Slightly differently from ES, ClickHouse does not divide data into hot, warm, and cold stages. Instead, it provides rules and configuration options with which users build their own tiering policy.

Each ClickHouse node supports configuring multiple disks at once, and the storage media can vary. For example, users usually give a ClickHouse node an SSD for better performance, and store warm and cold data on a lower-cost medium such as a mechanical disk. ClickHouse users are not aware of the underlying storage medium.

Similarly to ES, ClickHouse users create a storage policy based on data characteristics, such as the size of each part or the proportion of free space left on the disk. The policy is triggered when one of these characteristics occurs, and it migrates a part from one disk to another. Multiple disks configured on the same node have priorities, and by default data lands on the highest-priority disk; this is what enables moving a part from one storage medium to another. Data migration can also be triggered manually through SQL commands such as MOVE PARTITION/PART, and users can use these commands for functional validation as well.
There are also cases where you explicitly need to move a part from its current storage medium to another one by hand.

ClickHouse also supports a time-based migration policy, which is independent of the storage policy. After data is written, ClickHouse triggers migration of the data on disk according to the TTL property set on each table. For example, if the TTL is set to 7 days, ClickHouse rewrites the data in the table that is older than 7 days from the current disk (e.g., the default SSD) to another, lower-priority disk (e.g., JuiceFS).

What is JuiceFS?

JuiceFS is a high-performance, open-source, distributed POSIX file system that can be built on top of any object storage. For more details: https://github.com/juicedata/juicefs

Integration of ClickHouse + JuiceFS

Step 1: Mount the JuiceFS file system on all ClickHouse nodes. Any path works, because a ClickHouse configuration file will point at the mount point.

Step 2: Modify the ClickHouse configuration to add a new JuiceFS disk. Add the JuiceFS mount point you just created so that ClickHouse recognizes the new disk.

Step 3: Add a new storage policy and set the rules for sinking data. The storage policy automatically sinks data from the default disk to the specified store, such as JuiceFS, according to your rules.

Step 4: Set the storage policy and TTL for a specific table. Once the storage policy is defined, apply it to a table. For the pre-testing and validation phases, a relatively large table is recommended; to sink data based on the time dimension, set the TTL on the table at the same time. The whole sinking process is automatic; you can check which parts are currently being migrated, and the migration progress, through ClickHouse's system tables.

Step 5: Manually move a part for validation.
You can verify whether the current configuration or storage policy is in effect by manually executing the MOVE PARTITION command. In the example below, ClickHouse's storage_configuration item contains the disks configuration, where JuiceFS is added as a disk named "jfs" (the name is arbitrary) whose mount point is the /jfs directory:

```xml
<storage_configuration>
    <disks>
        <jfs>
            <path>/jfs/</path>
        </jfs>
    </disks>
    <policies>
        <hot_and_cold>
            <volumes>
                <hot>
                    <disk>default</disk>
                    <max_data_part_size_bytes>1073741824</max_data_part_size_bytes>
                </hot>
                <cold>
                    <disk>jfs</disk>
                </cold>
            </volumes>
            <move_factor>0.1</move_factor>
        </hot_and_cold>
    </policies>
</storage_configuration>
```

Further down are the policies configuration items, where a storage policy called hot_and_cold is defined. The user defines the rules, such as prioritizing the volumes hot first and then cold: data first lands on the first disk in the hot volume, the default ClickHouse disk (usually the local SSD). The max_data_part_size_bytes setting in a volume means that when the size of a part exceeds the configured value, the storage policy is triggered and the part sinks to the next volume, i.e., the cold volume; in this example, JuiceFS is the cold volume. The move_factor setting at the bottom means ClickHouse also triggers the policy based on the proportion of disk space remaining.

```sql
CREATE TABLE test (
    d DateTime,
    ...
) ENGINE = MergeTree
...
TTL d + INTERVAL 1 DAY TO DISK 'jfs'
SETTINGS storage_policy = 'hot_and_cold';
```

As the snippet shows, you set storage_policy to the previously defined hot_and_cold policy in SETTINGS when creating a table or modifying its schema. The TTL clause in the second-to-last line is the time-based tiering rule mentioned above: the table has a DateTime column d, and with INTERVAL 1 DAY, data is transferred to JuiceFS once it has been written for more than one day. From JuiceFS/Juicedata.
Cloud Native Control Planes
Project mention: Automated provisioning for data resources | reddit.com/r/devops | 2022-12-13
In the overall scheme of things, look at services like backstage.io, crossplane.io, and opslevel.com to get ideas (this is not necessarily an endorsement of those services). If all you want is to handle cloud resources and that's it, Terraform can be enough, together with whatever flavor of web technologies you and your team are comfortable with and can support along the way. It doesn't take much to create a JS-based website that collects data from a form, or to use other means of collecting data, as long as it's recorded and transparent for accountability.
concurrent, cache-efficient, and Dockerfile-agnostic builder toolkit
Project mention: Rails on Docker · Fly | news.ycombinator.com | 2023-01-26
I know those questions are probably rhetorical, but to answer them anyway:
> > Nice syntax
> Is it though?
The most common alternative is to use a backslash at the end of each line, to create a line continuation. This swallows the newline, so you also need a semicolon. Forgetting the semicolon leads to weird errors. Also, while Docker supports comments interspersed with line continuations, sh doesn't, so if such a command contains comments it can't be copied into sh.
The heredoc syntax has none of these issues; I think it is infinitely better.
(There is also JSON-style syntax, but it requires all backslashes to be doubled and is less popular.)
*In practice "&&" is normally used rather than ";" in order to stop the build if any command fails (otherwise sh only propagates the exit status of the last command). This is actually a small footgun with the heredoc syntax, because it is tempting to just use a newline (equivalent to a semicolon). The programmer must remember to type "&&" after each command, or use `set -e` at the start of the RUN command, or use `SHELL ["/bin/sh", "-e", "-c"]` at the top of the Dockerfile. Sigh...
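The difference between `;`, `&&`, and `set -e` is easy to demonstrate directly in sh (a quick illustration, not from the original post):

```shell
# ';' (or a newline) keeps going after a failure, and the overall exit
# status is that of the LAST command, so the failure is swallowed:
sh -c 'false; true'; echo "with ;      -> exit $?"

# '&&' stops at the first failing command and propagates its status:
sh -c 'false && true'; echo "with &&     -> exit $?"

# 'set -e' makes newline-separated commands stop on the first failure too:
sh -c 'set -e
false
true'; echo "with set -e -> exit $?"
```

The first form exits 0 despite the failure; the other two exit 1, which is what you want in a RUN step.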
> Are the line breaks semantic, or is it all a multiline string?
The line breaks are preserved ("what you see is what you get").
> Is EOF a special end-of-file token
You can choose which token to use (EOF is a common convention, but any token can be used). The text right after the "<<" indicates which token you've chosen, and the heredoc is terminated by the first line that contains just that token.
This allows you to easily create a heredoc containing other heredocs. Can you think of any other quoting syntax that allows that? (Lisp's quote form comes to mind.)
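For example, in plain sh (the same rules the Dockerfile syntax borrows):

```shell
# The token after '<<' is arbitrary; the heredoc ends at the first line
# consisting solely of that token. Choosing a distinct outer token lets a
# heredoc contain another heredoc verbatim:
cat <<'OUTER'
cat <<EOF
hello from the inner heredoc
EOF
OUTER
```

This prints the three inner lines, `EOF` included, exactly as written.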
> Where is it documented?
The introduction blog post has already been linked. The reference documentation (https://github.com/moby/buildkit/blob/master/frontend/docker...) mentions the syntax but doesn't give a formal specification (unfortunately this is a wider problem for Dockerfiles, see https://supercontainers.github.io/containers-wg/ideas/docker...); instead it links to the sh syntax (https://pubs.opengroup.org/onlinepubs/9699919799/utilities/V...), on which the Dockerfile heredoc syntax is based.
(Good luck looking up this syntax if you don't know what it's called. But that's the same for most punctuation-based syntax.)
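To make the comparison concrete, here is a minimal sketch of the two RUN styles discussed above (the base image and packages are illustrative, not from the original post):

```dockerfile
# syntax=docker/dockerfile:1
FROM debian:bookworm-slim

# Backslash continuations: the newline is swallowed, so '&&' (or ';') is
# required between commands, and failures propagate only via '&&'.
RUN apt-get update && \
    apt-get install -y curl

# BuildKit heredoc: newlines are preserved; 'set -e' stops the build on
# the first failing command.
RUN <<EOF
set -e
apt-get update
apt-get install -y curl
EOF
```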
Dragonfly is an intelligent P2P based image and file distribution system. (by dragonflyoss)
Project mention: MinIO passes 1B cumulative Docker Pulls | news.ycombinator.com | 2022-09-21
Kubernetes Native Edge Computing Framework (project under CNCF)
Project mention: Best Four IoT Platforms | reddit.com/r/kubernetes | 2022-12-16
Introduction: KubeEdge is an open source edge computing platform. Based on the native container orchestration and scheduling abilities of Kubernetes (K8s), it realizes cloud-edge collaboration, compute sinking, massive edge device management, edge autonomy, and more. It is completely open, scalable, easy to develop and maintain, and supports offline mode and cross-platform use.
GitHub: https://github.com/kubeedge
KubeEdge: https://kubeedge.io
Architecture diagram of KubeEdge: https://kubeedge.io/en/docs/kubeedge/#architecture
A Chaos Engineering Platform for Kubernetes.
Project mention: Elon Musk is disconnecting random Twitter-servers just to see what happens | news.ycombinator.com | 2022-12-24
bootstrap K3s over SSH in < 60s 🚀
Project mention: Deploy a Kubernetes cluster in seconds with k3sup | dev.to | 2023-01-09
```
$ curl -sLS https://get.k3sup.dev | sh
x86_64
Downloading package https://github.com/alexellis/k3sup/releases/download/0.12.12/k3sup as /home/ec2-user/k3sup
Download complete.
============================================================
The script was run as a user who is unable to write to /usr/local/bin.
To complete the installation the following commands may need to be run manually.
============================================================
sudo cp k3sup /usr/local/bin/k3sup
================================================================
alexellis's work on k3sup needs your support
https://github.com/sponsors/alexellis
================================================================
```

This returns nothing further, but we can run the following to check whether k3sup was actually installed:
A Cloud Native traffic orchestration system
Project mention: Kubernetes Ingress: Nginx Ingress Edition | dev.to | 2022-05-04
Easegress IngressController is an Easegress-based API gateway that can run as an ingress controller.
An Efficient Enterprise-class Container Engine
Golang client for NATS, the cloud native messaging system.
Project mention: Asyncapi with Go | reddit.com/r/golang | 2022-12-09
Go cloud-native related posts
How to Deploy and Scale Strapi on a Kubernetes Cluster 2/2
18 projects | dev.to | 3 Feb 2023
LoxiLB - An open-source cloud-native load-balancer
8 projects | reddit.com/r/ipv6 | 30 Jan 2023
Running 2 web apps in one application using Go Routines
2 projects | reddit.com/r/golang | 30 Jan 2023
Migrating ClickHouse's warm & cold data to object storage with JuiceFS
1 project | dev.to | 30 Jan 2023
What course for learning microservices in Golang?
1 project | reddit.com/r/golang | 22 Jan 2023
JuiceFS: A distributed Posix file system built on top of Redis and S3
1 project | news.ycombinator.com | 17 Jan 2023
Newsletter #1 - 9th January 2023
4 projects | dev.to | 17 Jan 2023
What are some of the best open-source cloud-native projects in Go? This list will help you:
| # | Project | Stars |
|---|---------|-------|
| 12 | OPA (Open Policy Agent) | 7,606 |