Top 23 Go cloud-native Projects
-
tidb
TiDB is an open-source, cloud-native, distributed, MySQL-compatible database for elastic scale and real-time analytics. Try AI-powered Chat2Query free at: https://tidbcloud.com/free-trial
PingCAP | https://www.pingcap.com | Database Engineer, Product Manager, Developer Advocate and more | Remote in California | Full-time
We work on a MySQL-compatible distributed database called TiDB https://github.com/pingcap/tidb/ and a key-value store called TiKV.
TiDB is written in Go and TiKV is written in Rust.
More roles and locations are available on https://www.pingcap.com/careers/
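Because TiDB speaks the MySQL wire protocol, a plain Go program can talk to it with the standard database/sql package and the go-sql-driver/mysql driver. A minimal sketch, assuming a local default deployment (the address, port 4000, user and database name are illustrative):

package main

import (
	"database/sql"
	"fmt"
	"log"

	_ "github.com/go-sql-driver/mysql" // TiDB is MySQL-compatible, so the stock driver works
)

func main() {
	// DSN is an assumption about a local test cluster; 4000 is TiDB's default port.
	db, err := sql.Open("mysql", "root:@tcp(127.0.0.1:4000)/test")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	var version string
	if err := db.QueryRow("SELECT VERSION()").Scan(&version); err != nil {
		log.Fatal(err)
	}
	fmt.Println("connected to:", version) // e.g. a "...-TiDB-v..." version string
}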
-
-
SonarQube
Static code analysis for 29 languages. Your projects are multi-language. So is SonarQube analysis. Find Bugs, Vulnerabilities, Security Hotspots, and Code Smells so you can release quality code every time. Get started analyzing your projects today for free.
-
https://github.com/go-kratos/kratos has good examples for project layout
-
Does it HAVE to be those types of packages? Have you thought of using containers instead, and thus opening up options for more types of storage like https://goharbor.io/?
-
Project mention: Show HN: Turning books into chatbots with GPT-3 | news.ycombinator.com | 2023-01-24
If you sprinkle in a bit of infrastructure, I think we're already there. The ability to distill a variety of content into vectors and perform approximate nearest neighbor search (shameless plug: https://milvus.io) across all of them can really help power a lot of these applications. With the retrieved vectors, you could match questions with answers or create a reverse index to the original content to perform summarization.
With that being said, one of the main challenges ahead will be multimodal learning. We're sort-of there combining text with visual data, but there are many other modalities out there as well.
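To make the "distill content into vectors and search" idea concrete, here is a toy Go sketch of the underlying operation: exact nearest-neighbour retrieval by cosine similarity over a few embeddings. This is the brute-force baseline that a vector database like Milvus replaces with approximate nearest neighbor indexes at scale; the documents, embeddings, and query vector are made up for illustration.

package main

import (
	"fmt"
	"math"
	"sort"
)

// cosine returns the cosine similarity of two equal-length vectors.
func cosine(a, b []float32) float64 {
	var dot, na, nb float64
	for i := range a {
		dot += float64(a[i]) * float64(b[i])
		na += float64(a[i]) * float64(a[i])
		nb += float64(b[i]) * float64(b[i])
	}
	return dot / (math.Sqrt(na)*math.Sqrt(nb) + 1e-12)
}

type doc struct {
	Text string
	Vec  []float32 // embedding produced elsewhere, e.g. by an embedding model
}

// topK is the exhaustive version of what an ANN index answers approximately.
func topK(query []float32, docs []doc, k int) []doc {
	sort.Slice(docs, func(i, j int) bool {
		return cosine(query, docs[i].Vec) > cosine(query, docs[j].Vec)
	})
	if k > len(docs) {
		k = len(docs)
	}
	return docs[:k]
}

func main() {
	corpus := []doc{
		{"chapter on sailing", []float32{0.9, 0.1, 0.0}},
		{"chapter on cooking", []float32{0.1, 0.8, 0.1}},
		{"chapter on knots", []float32{0.7, 0.2, 0.1}},
	}
	query := []float32{0.8, 0.15, 0.05} // embedding of "how do I tie a bowline?"
	for _, d := range topK(query, corpus, 2) {
		fmt.Println(d.Text)
	}
}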
-
Ory Hydra
OpenID Certified™ OpenID Connect and OAuth Provider written in Go - cloud native, security-first, open source API security for your infrastructure. SDKs for any language. Works with Hardware Security Modules. Compatible with MITREid.
We used hydra (https://github.com/ory/hydra) to build our OAuth provider
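For a sense of what building on Hydra looks like from the client side, here is a minimal Go sketch of the OAuth2 client-credentials flow using golang.org/x/oauth2. The client ID/secret, scope, and token URL are assumptions about a local test setup (Hydra's public endpoint defaults to port 4444 and serves tokens at /oauth2/token); the client itself would be registered in Hydra beforehand via the CLI or admin API.

package main

import (
	"context"
	"fmt"
	"log"

	"golang.org/x/oauth2/clientcredentials"
)

func main() {
	// Hypothetical client credentials registered with Hydra.
	conf := &clientcredentials.Config{
		ClientID:     "my-service",
		ClientSecret: "my-secret",
		TokenURL:     "http://127.0.0.1:4444/oauth2/token",
		Scopes:       []string{"read"},
	}

	tok, err := conf.Token(context.Background())
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("access token expires at:", tok.Expiry)

	// conf.Client returns an *http.Client that attaches (and refreshes)
	// the token on every request to your protected APIs.
	_ = conf.Client(context.Background())
}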
-
go-git has a lot of bugs and is not actively maintained. One of the bugs even affects Argo Workflows, which caused our data pipeline to fail unexpectedly (reference: https://github.com/argoproj/argo-workflows/issues/10091)
-
InfluxDB
Build time-series-based applications quickly and at scale. InfluxDB is the Time Series Platform where developers build real-time applications for analytics, IoT and cloud-native services. Easy to start, it is available in the cloud or on-premises.
-
Project mention: Show HN: DriftDB is an open source WebSocket back end for real-time apps | news.ycombinator.com | 2023-02-03
Nice, have you come across NATS? https://nats.io. The server natively supports WebSockets. There are many clients including Deno, Node, WebSockets, Rust, Go, C, Python, etc.
In addition to stateless messaging, it supports durable streams and optimized API layers on top, such as key-value and object storage.
The server also natively supports MQTT 3.1.1.
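A short Go sketch of the durable-stream and key-value layers mentioned above, using the nats.go client against a JetStream-enabled server; the server URL, stream name, subjects, and bucket name are placeholders for the example.

package main

import (
	"log"

	"github.com/nats-io/nats.go"
)

func main() {
	// Assumes a local server started with JetStream enabled (nats-server -js).
	nc, err := nats.Connect(nats.DefaultURL)
	if err != nil {
		log.Fatal(err)
	}
	defer nc.Drain()

	js, err := nc.JetStream()
	if err != nil {
		log.Fatal(err)
	}

	// Durable stream: messages published to "orders.>" are persisted by the server.
	if _, err := js.AddStream(&nats.StreamConfig{
		Name:     "ORDERS",
		Subjects: []string{"orders.>"},
	}); err != nil {
		log.Fatal(err)
	}
	if _, err := js.Publish("orders.created", []byte(`{"id":42}`)); err != nil {
		log.Fatal(err)
	}

	// Key-value layer built on top of JetStream.
	kv, err := js.CreateKeyValue(&nats.KeyValueConfig{Bucket: "settings"})
	if err != nil {
		log.Fatal(err)
	}
	if _, err := kv.Put("theme", []byte("dark")); err != nil {
		log.Fatal(err)
	}
	entry, _ := kv.Get("theme")
	log.Printf("theme=%s", entry.Value())
}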
-
kubesphere
The container platform tailored for Kubernetes multi-cloud, datacenter, and edge management ⎈ 🖥 ☁️
#####################################################
###           Welcome to KubeSphere!             ###
#####################################################
Account: admin
Password: P@88w0rd
NOTES:
1. After logging into the console, please check the monitoring status of service components in the "Cluster Management". If any service is not ready, please wait patiently until all components are ready.
2. Please modify the default password after login.
#####################################################
https://kubesphere.io     2020-xx-xx xx:xx:xx
-
Rook (this is a nice article about Rook NFS)
-
Open-source API gateway (Apache APISIX, Traefik) and service mesh (Istio, Linkerd) solutions are capable of splitting traffic and implementing functionality such as canary releases and blue-green deployments. With canary testing, you can critically evaluate a new release of an API by exposing it to only a small portion of your user base. We will cover canary releases in the next section.
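What these gateways and meshes do for traffic splitting can be illustrated with a tiny Go sketch: a reverse proxy that routes a configurable fraction of requests to a canary backend. The backend URLs and the 10% weight are made up for the example; real gateways do this declaratively and add sticky sessions, metrics, and automated rollback on top.

package main

import (
	"log"
	"math/rand"
	"net/http"
	"net/http/httputil"
	"net/url"
)

func proxyTo(raw string) *httputil.ReverseProxy {
	u, err := url.Parse(raw)
	if err != nil {
		log.Fatal(err)
	}
	return httputil.NewSingleHostReverseProxy(u)
}

func main() {
	stable := proxyTo("http://stable-svc:8080") // hypothetical stable backend
	canary := proxyTo("http://canary-svc:8080") // hypothetical canary backend
	canaryWeight := 10                          // percent of traffic sent to the canary

	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		if rand.Intn(100) < canaryWeight {
			canary.ServeHTTP(w, r)
			return
		}
		stable.ServeHTTP(w, r)
	})
	log.Fatal(http.ListenAndServe(":8000", nil))
}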
-
Project mention: What are well-developed web applications in Golang? | reddit.com/r/golang | 2023-01-28
-
Project mention: Why would someone need serverless infrastructure? | reddit.com/r/homelab | 2023-01-08
-
Project mention: Migrating ClickHouse’s warm & cold data to object storage with JuiceFS | dev.to | 2023-01-30
$ ls -l /var/lib/clickhouse/data//
drwxr-xr-x 2 test test 64B Aug 8 13:46 202208_1_3_0
drwxr-xr-x 2 test test 64B Aug 8 13:46 202208_4_6_1
drwxr-xr-x 2 test test 64B Sep 8 13:46 202209_1_1_0
drwxr-xr-x 2 test test 64B Sep 8 13:46 202209_4_4_0

In the rightmost column of the example above, each subdirectory name starts with a time, e.g. 202208. 202208 is also the partition name; partitions can be defined by the user but are usually named by time. The partition 202208 here has two subdirectories (i.e. parts), and each partition usually consists of multiple parts. When data is written to ClickHouse, it is first written to memory and then persisted to disk according to the in-memory data structure. If the data in a partition is too large, the partition ends up as many parts on disk. ClickHouse doesn't recommend having too many parts under one table, so it also merges parts to reduce their total number; this is one of the reasons the engine is called MergeTree.

Another example helps to understand what a "part" is in ClickHouse. A part contains many small files, some of which hold meta information, such as index information that speeds up lookups.

$ ls -l /var/lib/clickhouse/data///202208_1_3_0
-rw-r--r-- 1 test test ?? Aug 8 14:06 ColumnA.bin
-rw-r--r-- 1 test test ?? Aug 8 14:06 ColumnA.mrk
-rw-r--r-- 1 test test ?? Aug 8 14:06 ColumnB.bin
-rw-r--r-- 1 test test ?? Aug 8 14:06 ColumnB.mrk
-rw-r--r-- 1 test test ?? Aug 8 14:06 checksums.txt
-rw-r--r-- 1 test test ?? Aug 8 14:06 columns.txt
-rw-r--r-- 1 test test ?? Aug 8 14:06 count.txt
-rw-r--r-- 1 test test ?? Aug 8 14:06 minmax_ColumnC.idx
-rw-r--r-- 1 test test ?? Aug 8 14:06 partition.dat
-rw-r--r-- 1 test test ?? Aug 8 14:06 primary.idx

In the rightmost column of this example, the files prefixed with Column are the actual data files, which are relatively large compared with the meta information. There are only two columns here, A and B; a real table may consist of many columns. All of these files, including the meta and index information, together help users quickly navigate between files and look things up.

ClickHouse storage policy

If you want to tier hot and cold data in ClickHouse, you use something similar to the lifecycle policy mentioned for ES, which in ClickHouse is called a storage policy. Slightly differently from ES, ClickHouse does not divide data into stages (hot, warm, cold). Instead, it provides rules and configuration methods, and users design their own data-tiering policy.

Each ClickHouse node supports configuring multiple disks at the same time, and the storage media can vary. For example, users usually configure a ClickHouse node with an SSD for better performance; for warm and cold data, they can store the data on a cheaper medium, such as a mechanical disk. ClickHouse users are not aware of the underlying storage medium.

Similar to ES, ClickHouse users need to create a storage policy based on data characteristics, such as the size of each subdirectory in a part or the proportion of space left on the whole disk. The storage policy is executed when a certain data characteristic occurs, and it migrates a part from one disk to another. Multiple disks configured on the same node have priorities, and by default data lands on the highest-priority disk. This enables transferring a part from one storage medium to another.

Data migration can also be triggered manually through SQL commands in ClickHouse, such as MOVE PARTITION/PART, and users can use these commands for function validation. There may also be cases where a part explicitly needs to be moved from the current storage medium to another one by hand.

ClickHouse also supports a time-based migration policy, which is independent of the storage policy. After data is written, ClickHouse triggers migration of data on disk according to the TTL property set on each table. For example, if the TTL is set to 7 days, ClickHouse will rewrite the data in the table that is older than 7 days from the current disk (e.g. the default SSD) to another, lower-priority disk (e.g. JuiceFS).

What is JuiceFS? JuiceFS is a high-performance, open-source, distributed POSIX file system that can be built on top of any object storage. For more details: https://github.com/juicedata/juicefs

Integration of ClickHouse + JuiceFS

Step 1: Mount the JuiceFS file system on all ClickHouse nodes. Any path works, because ClickHouse has a configuration file that points to the mount point.

Step 2: Modify the ClickHouse configuration to add a new JuiceFS disk. Add the JuiceFS mount point you just created so that ClickHouse can recognize the new disk.

Step 3: Add a new storage policy and set the rules for sinking data. This storage policy automatically sinks data from the default disk to the specified store, such as JuiceFS, according to the user's rules.

Step 4: Set the storage policy and TTL for a specific table. Once the storage policy is defined, you need to apply it to a table. In the pre-testing and validation phases it is recommended to use a relatively large table, and if you want data sinking based on the time dimension, you also need to set a TTL on the table. The whole sinking process is automatic; you can check which parts are currently being migrated, and the migration progress, through ClickHouse's system tables.

Step 5: Manually move a part for validation. You can verify whether the current configuration or storage policy is in effect by manually executing the MOVE PARTITION command.

In the example below, ClickHouse has a configuration item called storage_configuration, which contains the disks configuration. JuiceFS is added as a disk named "jfs" (the name is arbitrary) with the mount point /jfs:

<storage_configuration>
    <disks>
        <jfs>
            <path>/jfs/</path>
        </jfs>
    </disks>
    <policies>
        <hot_and_cold>
            <volumes>
                <hot>
                    <disk>default</disk>
                    <max_data_part_size_bytes>1073741824</max_data_part_size_bytes>
                </hot>
                <cold>
                    <disk>jfs</disk>
                </cold>
            </volumes>
            <move_factor>0.1</move_factor>
        </hot_and_cold>
    </policies>
</storage_configuration>

Further down are the policies configuration items, where a storage policy called hot_and_cold is defined. The user needs to define specific rules, such as ordering the volumes hot first and then cold, with data first landing on the first disk in the hot volume, the default ClickHouse disk (usually the local SSD). The max_data_part_size_bytes setting in volumes means that when the size of a part exceeds the configured size, the storage policy is triggered and the corresponding part sinks to the next volume, i.e. the cold volume; in the example above, JuiceFS is the cold volume. The move_factor setting at the bottom means ClickHouse will also trigger the storage policy based on the fraction of disk space remaining.

CREATE TABLE test
(
    d DateTime,
    ...
)
ENGINE = MergeTree
...
TTL d + INTERVAL 1 DAY TO DISK 'jfs'
SETTINGS storage_policy = 'hot_and_cold';

As the code snippet above shows, you can set storage_policy to the previously defined hot_and_cold storage policy in SETTINGS when you create a table or modify its schema. The TTL in the second-to-last line is the time-based tiering rule mentioned above. In this example we use a column called d, of type DateTime; with INTERVAL 1 DAY, that line says the data will be transferred to JuiceFS once newly written data is more than one day old. From JuiceFS/Juicedata.
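For Step 5, here is a rough Go sketch of what manual validation could look like with the clickhouse-go/v2 client: move one partition to the 'jfs' disk, then confirm via system.parts which disk each active part now lives on. The address, table name, partition ID and disk name are taken from (or assumed for) the example above.

package main

import (
	"context"
	"log"

	"github.com/ClickHouse/clickhouse-go/v2"
)

func main() {
	ctx := context.Background()
	conn, err := clickhouse.Open(&clickhouse.Options{
		Addr: []string{"127.0.0.1:9000"}, // assumed local native-protocol endpoint
	})
	if err != nil {
		log.Fatal(err)
	}

	// Manually trigger migration of one partition to the JuiceFS-backed disk.
	if err := conn.Exec(ctx, "ALTER TABLE test MOVE PARTITION ID '202208' TO DISK 'jfs'"); err != nil {
		log.Fatal(err)
	}

	// Check which disk each active part of the table now lives on.
	rows, err := conn.Query(ctx,
		"SELECT name, disk_name FROM system.parts WHERE table = 'test' AND active")
	if err != nil {
		log.Fatal(err)
	}
	defer rows.Close()
	for rows.Next() {
		var part, disk string
		if err := rows.Scan(&part, &disk); err != nil {
			log.Fatal(err)
		}
		log.Printf("part %s is on disk %s", part, disk)
	}
}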
-
In the overall scheme of things, look at services like backstage.io, crossplane.io and opslevel.com to get ideas. This is not necessarily an endorsement of those services. If all you want is to handle cloud resources and that's it, Terraform can be enough, with whatever flavor of web technologies you and your team are comfortable with and can support along the way. It doesn't take much to create a JS-based website to collect data from a form, or to use other means of collecting data, as long as it's recorded and transparent for accountability.
-
I know those questions are probably rhetorical, but to answer them anyway:
> > Nice syntax
> Is it though?
The most common alternative is to use a backslash at the end of each line, to create a line continuation. This swallows the newline, so you also need a semicolon. Forgetting the semicolon leads to weird errors. Also, while Docker supports comments interspersed with line continuations, sh doesn't, so if such a command contains comments it can't be copied into sh.
The heredoc syntax doesn't have any of these issues; I think it is infinitely better.
(There is also JSON-style syntax, but it requires all backslashes to be doubled and is less popular.)
*In practice "&&" is normally used rather than ";" in order to stop the build if any command fails (otherwise sh only propagates the exit status of the last command). This is actually a small footgun with the heredoc syntax, because it is tempting to just use a newline (equivalent to a semicolon). The programmer must remember to type "&&" after each command, or use `set -e` at the start of the RUN command, or use `SHELL ["/bin/sh", "-e", "-c"]` at the top of the Dockerfile. Sigh...
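To make the comparison concrete, a small Dockerfile sketch of both styles; the base image and packages are placeholders, and heredocs in RUN require BuildKit with a sufficiently recent dockerfile syntax version.

# syntax=docker/dockerfile:1.4
FROM debian:bullseye-slim

# Backslash continuations: the newline is swallowed, so "&&" (or ";") is
# needed, and an interspersed comment would break copy-pasting into sh.
RUN apt-get update && \
    apt-get install -y --no-install-recommends curl && \
    rm -rf /var/lib/apt/lists/*

# Heredoc: line breaks are preserved; "set -e" stops the build on the first
# failing command, otherwise only the last command's exit status counts.
RUN <<EOF
set -e
apt-get update
apt-get install -y --no-install-recommends curl
rm -rf /var/lib/apt/lists/*
EOF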
> Are the line breaks semantic, or is it all a multiline string?
The line breaks are preserved ("what you see is what you get").
> Is EOF a special end-of-file token
You can choose which token to use (EOF is a common convention, but any token can be used). The text right after the "<<" indicates which token you've chosen, and the heredoc is terminated by the first line that contains just that token.
This allows you to easily create a heredoc containing other heredocs. Can you think of any other quoting syntax that allows that? (Lisp's quote form comes to mind.)
> Where is it documented?
The introduction blog post has already been linked. The reference documentation (https://github.com/moby/buildkit/blob/master/frontend/docker...) mentions it but doesn't give a formal specification (unfortunately this is a wider problem for Dockerfiles, see https://supercontainers.github.io/containers-wg/ideas/docker...); instead it links to the sh syntax (https://pubs.opengroup.org/onlinepubs/9699919799/utilities/V...), on which the Dockerfile heredoc syntax is based.
(Good luck looking up this syntax if you don't know what it's called. But that's the same for most punctuation-based syntax.)
-
Dragonfly
Dragonfly is an intelligent P2P-based image and file distribution system. (by dragonflyoss)
-
Introduction: KubeEdge is an open source edge computing platform. Built on the native container orchestration and scheduling capabilities of Kubernetes (K8s), it provides cloud-edge collaboration, sinking compute to the edge, management of massive numbers of edge devices, edge autonomy, and more. It is completely open, scalable, easy to develop and maintain, and supports offline mode and cross-platform use.
GitHub: https://github.com/kubeedge
KubeEdge: https://kubeedge.io
Architecture diagram of KubeEdge: https://kubeedge.io/en/docs/kubeedge/#architecture
Features:
-
Project mention: Elon Musk is disconnecting random Twitter-servers just to see what happens | news.ycombinator.com | 2022-12-24
-
$ curl -sLS https://get.k3sup.dev | sh
x86_64
Downloading package https://github.com/alexellis/k3sup/releases/download/0.12.12/k3sup as /home/ec2-user/k3sup
Download complete.
============================================================
The script was run as a user who is unable to write to /usr/local/bin. To complete the installation the following commands may need to be run manually.
============================================================
sudo cp k3sup /usr/local/bin/k3sup
================================================================
alexellis's work on k3sup needs your support
https://github.com/sponsors/alexellis
================================================================
That won't return anything, but we can run the following to check whether k3sup was actually installed:
-
Easegress IngressController is an Easegress-based API gateway that can run as an ingress controller.
-
-
-
SaaSHub
SaaSHub - Software Alternatives and Reviews. SaaSHub helps you find the best software and product alternatives
Go cloud-native related posts
- How to Deploy and Scale Strapi on a Kubernetes Cluster 2/2
- LoxiLB - An open-source cloud-native load-balancer
- Running 2 web apps in one application using Go Routines
- Migrating ClickHouse’s warm & cold data to object storage with JuiceFS
- What course for learning microservices in Golang?
- JuiceFS: A distributed Posix file system built on top of Redis and S3
- Newsletter #1 - 9th January 2023
-
A note from our sponsor - SaaSHub
www.saashub.com | 4 Feb 2023
Index
What are some of the best open-source cloud-native projects in Go? This list will help you:
# | Project | Stars |
---|---|---|
1 | tidb | 33,288 |
2 | go-zero | 22,405 |
3 | kratos | 19,736 |
4 | Harbor | 19,278 |
5 | milvus | 14,836 |
6 | Ory Hydra | 13,684 |
7 | argo | 12,436 |
8 | NATS | 12,169 |
9 | kubesphere | 11,904 |
10 | rook | 10,676 |
11 | conduit | 9,259 |
12 | OPA (Open Policy Agent) | 7,606 |
13 | fission | 7,476 |
14 | juicefs | 7,313 |
15 | crossplane | 6,528 |
16 | buildkit | 6,335 |
17 | Dragonfly | 6,025 |
18 | kubeedge | 5,596 |
19 | chaos-mesh | 5,477 |
20 | k3sup | 5,032 |
21 | easegress | 5,008 |
22 | pouch | 4,574 |
23 | NATS | 4,335 |