Ruby Cloud

Open-source Ruby projects categorized as Cloud

Top 12 Ruby Cloud Projects

  • Fog

    The Ruby cloud services library.
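
    To give a flavor of the API, here is a minimal sketch that uploads a file to S3 through fog's provider-agnostic storage interface (the bucket name and environment-variable credentials are placeholders, not from the project docs):

      require 'fog/aws'

      # Connect to S3 via fog's generic storage interface.
      storage = Fog::Storage.new(
        provider:              'AWS',
        aws_access_key_id:     ENV['AWS_ACCESS_KEY_ID'],
        aws_secret_access_key: ENV['AWS_SECRET_ACCESS_KEY']
      )

      # Create (or fetch) a bucket, then upload a file into it.
      directory = storage.directories.create(key: 'my-example-bucket')
      directory.files.create(key: 'hello.txt', body: 'hello from fog', public: false)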

  • AWS SDK for Ruby

    The official AWS SDK for Ruby.

    Project mention: Fix a pod install error "undefined method 'exist' for File:Class" React Native | dev.to | 2023-01-27

    In Ruby 3.2, the deprecated alias File.exists? was removed. Homebrew automatically installs the most recent version of Ruby when you run brew install cocoapods, which is why you encounter this problem.
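
    The fix is to call the surviving predicate instead; for illustration, on Ruby 3.2+:

      # Ruby 3.2 removed the long-deprecated aliases File.exists? and Dir.exists?.
      File.exists?("Podfile")  # => NoMethodError on Ruby 3.2+
      File.exist?("Podfile")   # works on all supported Ruby versions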

  • foreman

    An application that automates the lifecycle of servers (by theforeman)

    Project mention: Deploying 100+ windows 10 devices per week. Need to automate. | reddit.com/r/sysadmin | 2023-05-13

    In case you're unable to use Intune, a free approach might be https://theforeman.org/. That works well for provisioning bare-metal Windows (with a discovery image or PXE boot) once you've set it up. It supports scripted access as well as a nice hierarchy for configurations. But it's really not as well documented as it should be.
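
    The "scripted access" mentioned above refers to Foreman's REST API; as a rough sketch (hostname and credentials below are placeholders), listing managed hosts from Ruby might look like:

      require 'net/http'
      require 'json'

      # Query Foreman's v2 REST API for hosts; URL and credentials are placeholders.
      uri = URI('https://foreman.example.com/api/v2/hosts')
      request = Net::HTTP::Get.new(uri)
      request.basic_auth('admin', 'changeme')
      request['Accept'] = 'application/json'

      response = Net::HTTP.start(uri.host, uri.port, use_ssl: true) do |http|
        http.request(request)
      end

      # The API wraps the host list in a "results" array.
      JSON.parse(response.body)['results'].each { |host| puts host['name'] }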

  • manageiq

    ManageIQ Open-Source Management Platform

    Project mention: Rancher in 2023 | reddit.com/r/kubernetes | 2023-02-17

    I mean, I get that deprecating products can be annoying. But at the same time, we're talking about a for-profit company; there does need to be a level of customer interest to make the maths work. Just because Red Hat doesn't have a productised version doesn't mean you can't use it, though. There is always an upstream version of Red Hat products; in the case of CloudForms, that community is ManageIQ: https://www.manageiq.org

  • click-to-deploy

    Source for Google Click to Deploy solutions listed on Google Cloud Marketplace.

  • browse-everything

    Rails engine providing access to files in cloud storage
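
    As a hedged sketch of how such a Rails engine is typically wired up (the mount path is arbitrary):

      # config/routes.rb -- expose the engine's endpoints under /browse.
      Rails.application.routes.draw do
        mount BrowseEverything::Engine => '/browse'
      end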

  • Docker-Provider

    Azure Monitor for Containers

    Project mention: Enable annotation based scraping in Azure Monitor managed service for Prometheus | dev.to | 2023-02-20

    kind: ConfigMap
    apiVersion: v1
    data:
      schema-version:
        # string: used by the agent to parse the config. Supported versions are {v1};
        # configs with other schema versions will be rejected by the agent.
        v1
      config-version:
        # string: used by the customer to track this config file's version in their
        # source control/repository (max 10 chars; other chars will be truncated).
        ver1
      log-data-collection-settings: |-
        # Log data collection settings
        # Any errors related to config map settings can be found in the KubeMonAgentEvents
        # table in the Log Analytics workspace the cluster is sending data to.
        [log_collection_settings]
          [log_collection_settings.stdout]
            # In the absence of this configmap, the default for enabled is true.
            enabled = true
            # exclude_namespaces applies only when enabled is true. kube-system and
            # gatekeeper-system log collection are disabled by default; remove them
            # from this array to enable them, or add any other namespace you want to
            # disable log collection for.
            exclude_namespaces = ["kube-system","gatekeeper-system"]
          [log_collection_settings.stderr]
            # Default for enabled is true.
            enabled = true
            # Same semantics as the stdout exclude_namespaces setting above.
            exclude_namespaces = ["kube-system","gatekeeper-system"]
          [log_collection_settings.env_var]
            # Default for enabled is true.
            enabled = true
          [log_collection_settings.enrich_container_logs]
            # Default is false. When true, every container log entry (stdout and stderr)
            # is enriched with the container name and container image.
            enabled = false
          [log_collection_settings.collect_all_kube_events]
            # Default is false: only kube events with a !normal event type are collected.
            # When true, all kube events, including normal ones, are collected.
            enabled = false
          #[log_collection_settings.schema]
            # Default containerlog_schema_version is "v1"; supported values are "v1","v2".
            # See https://aka.ms/ContainerLogv2 for the benefits of v2 over v1 before
            # opting for the "v2" schema.
            # containerlog_schema_version = "v2"
          #[log_collection_settings.enable_multiline_logs]
            # fluent-bit based multiline log collection for go and dotnet stack traces;
            # also stitches container logs split by docker/cri size limits (16KB per line).
            # enabled = "false"
      prometheus-data-collection-settings: |-
        # Custom Prometheus metrics data collection settings
        [prometheus_data_collection_settings.cluster]
          # Cluster-level scrape endpoints, scraped from the agent's replicaset (singleton).
          # Scraping errors can be found in the KubeMonAgentEvents table.
          # Scrape interval: an integer plus a time unit (ns, us/µs, ms, s, m, h).
          interval = "1m"
          ## Uncomment with valid string arrays for prometheus scraping:
          #fieldpass = ["metric_to_pass1", "metric_to_pass12"]
          #fielddrop = ["metric_to_drop"]
          # urls = ["http://myurl:9101/metrics"]
          # kubernetes_services = ["http://my-service-dns.my-namespace:9102/metrics"]
          # When true, the replicaset scrapes pods carrying these prometheus.io annotations:
          #   prometheus.io/scrape  - enable scraping for this pod
          #   prometheus.io/scheme  - set to `https` (and most likely tls config) if secured
          #   prometheus.io/path    - metrics path if not /metrics
          #   prometheus.io/port    - port if not 9102
          monitor_kubernetes_pods = true
          ## Restrict annotation-based scraping to specific namespaces
          ## (effective when monitor_kubernetes_pods is true):
          # monitor_kubernetes_pods_namespaces = ["default1", "default2", "default3"]
          ## Label selector targeting pods with the given label; see
          ## https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/#label-selectors
          # kubernetes_label_selector = "env=dev,app=nginx"
          ## Field selector, e.g. to scrape pods on a specific node; see
          ## https://kubernetes.io/docs/concepts/overview/working-with-objects/field-selectors/
          # kubernetes_field_selector = "spec.nodeName=$HOSTNAME"
        [prometheus_data_collection_settings.node]
          # Node-level scrape endpoints, scraped from the agent's daemonset on every node.
          interval = "1m"
          # $NODE_IP (all upper case) substitutes the running node's IP address:
          # urls = ["http://$NODE_IP:9103/metrics"]
          #fieldpass = ["metric_to_pass1", "metric_to_pass12"]
          #fielddrop = ["metric_to_drop"]
      metric_collection_settings: |-
        # Metrics collection settings for metrics sent to Log Analytics and MDM
        [metric_collection_settings.collect_kube_system_pv_metrics]
          # Default is false: only persistent volume metrics outside the kube-system
          # namespace are collected. When true, kube-system PV metrics are included.
          enabled = false
      alertable-metrics-configuration-settings: |-
        # Alertable metrics configuration settings for container resource utilization
        [alertable_metrics_configuration_settings.container_resource_utilization_thresholds]
          # Thresholds (Float, rounded to 2 decimal places); a metric is sent only when
          # utilization meets or exceeds the given percentage.
          container_cpu_threshold_percentage = 95.0
          container_memory_rss_threshold_percentage = 95.0
          container_memory_working_set_threshold_percentage = 95.0
        [alertable_metrics_configuration_settings.pv_utilization_thresholds]
          # Persistent volume usage threshold, in percent.
          pv_usage_threshold_percentage = 60.0
        [alertable_metrics_configuration_settings.job_completion_threshold]
          # Metric sent only for jobs completed earlier than this threshold.
          job_completion_threshold_time_minutes = 360
      integrations: |-
        [integrations.azure_network_policy_manager]
          collect_basic_metrics = false
          collect_advanced_metrics = false
        [integrations.azure_subnet_ip_usage]
          enabled = false
      # Doc - https://github.com/microsoft/Docker-Provider/blob/ci_prod/Documentation/AgentSettings/ReadMe.md
      agent-settings: |-
        # Prometheus scrape fluent-bit settings for high scale. Buffer size should be
        # greater than or equal to chunk size, else it is set to chunk size.
        # Scoped to the prometheus sidecar container; all values in MB.
        [agent_settings.prometheus_fbit_settings]
          tcp_listener_chunk_size = 10
          tcp_listener_buffer_size = 10
          tcp_listener_mem_buf_limit = 200
        # Same settings scoped to the daemonset container:
        # [agent_settings.node_prometheus_fbit_settings]
        #   tcp_listener_chunk_size = 1
        #   tcp_listener_buffer_size = 1
        #   tcp_listener_mem_buf_limit = 10
        # Same settings scoped to the replicaset container:
        # [agent_settings.cluster_prometheus_fbit_settings]
        #   tcp_listener_chunk_size = 1
        #   tcp_listener_buffer_size = 1
        #   tcp_listener_mem_buf_limit = 10
        # The following settings are "undocumented"; we don't recommend uncommenting
        # them unless directed by Microsoft. They raise the maximum stdout/stderr log
        # collection rate but also cause higher cpu/memory usage.
        # Ref for Ignore_Older: https://docs.fluentbit.io/manual/v/1.7/pipeline/inputs/tail
        # [agent_settings.fbit_config]
        #   log_flush_interval_secs = "1"        # default 15
        #   tail_mem_buf_limit_megabytes = "10"  # default 10
        #   tail_buf_chunksize_megabytes = "1"   # default 32kb (comment out for default)
        #   tail_buf_maxsize_megabytes = "1"     # default 32kb (comment out for default)
        #   tail_ignore_older = "5m"             # default same as fluent-bit, i.e. 0m
        # On both AKS and Arc K8s environments, if the cluster is configured with a
        # forward proxy, the proxy settings are automatically applied and used by the
        # agent. In certain configurations (e.g. AMPLS + proxy) the proxy config should
        # be ignored:
        # [agent_settings.proxy_config]
        #   ignore_proxy_settings = "true"       # default false
    metadata:
      name: container-azm-ms-agentconfig
      namespace: kube-system
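
    Once edited, the ConfigMap is applied to the cluster with kubectl apply -f <file>; as the comments above note, any errors in parsing the settings surface in the KubeMonAgentEvents table of the connected Log Analytics workspace.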

  • linux-for-pirates

    A book we're writing about Linux, in the theme of Pirates!

    Project mention: Upskilling as a Linux Engineer | reddit.com/r/linuxadmin | 2022-12-29

  • aws-foundations-cis-baseline

    InSpec profile to validate your VPC to the standards of the CIS Amazon Web Services Foundations Benchmark v1.1.0
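
    InSpec profiles like this one are written in a Ruby DSL; a hypothetical control in the same style (the control ID and rule are illustrative, not taken from the actual profile):

      # Hypothetical control in the style of the CIS AWS Foundations checks.
      control 'cis-aws-foundations-example' do
        impact 1.0
        title 'Ensure CloudTrail is enabled in all regions'
        describe aws_cloudtrail_trails do
          it { should exist }
        end
      end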

  • cloudfront-signer

    Ruby gem for signing AWS CloudFront private content URLs and streaming paths.
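
    A minimal sketch of the gem's configure-then-sign pattern (key path, key pair ID, and URL are placeholders, and method names may differ between gem versions):

      require 'cloudfront-signer'

      # One-time configuration with your CloudFront key pair (placeholders).
      Aws::CF::Signer.configure do |config|
        config.key_path    = '/path/to/private-key.pem'
        config.key_pair_id = 'APKAEXAMPLE'
      end

      # Sign a private-content URL that expires in ten minutes.
      Aws::CF::Signer.sign_url('https://d111111abcdef8.cloudfront.net/video.mp4',
                               expires: Time.now + 600)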

  • cloud_wordpress

    Manage multiple WordPress installations

  • fog-azure-rm

    Fog for Azure Resource Manager

    Project mention: Is there a way to use CarrierWave with Azure these days? | reddit.com/r/rails | 2023-03-14

    [1]: https://github.com/fog/fog-azure-rm
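
    One way to approach the question in the linked thread: CarrierWave can talk to Azure through this gem. A hedged sketch of the initializer (storage account, key, and container names are placeholders):

      # config/initializers/carrierwave.rb
      CarrierWave.configure do |config|
        config.fog_provider    = 'fog/azurerm'
        config.fog_credentials = {
          provider:                   'AzureRM',
          azure_storage_account_name: ENV['AZURE_STORAGE_ACCOUNT'],
          azure_storage_access_key:   ENV['AZURE_STORAGE_KEY'],
          environment:                'AzureCloud'
        }
        config.fog_directory = 'uploads' # Azure blob container
      end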

NOTE: The open source projects on this list are ordered by number of GitHub stars. The number of mentions indicates repo mentions in the last 12 months or since we started tracking (Dec 2020). The latest post mention was on 2023-05-13.

Index

What are some of the best open-source Cloud projects in Ruby? This list will help you:

#   Project                       Stars
1   Fog                           4,317
2   AWS SDK for Ruby              3,444
3   foreman                       2,342
4   manageiq                      1,272
5   click-to-deploy                 669
6   browse-everything               109
7   Docker-Provider                 102
8   linux-for-pirates                98
9   aws-foundations-cis-baseline     71
10  cloudfront-signer                42
11  cloud_wordpress                  36
12  fog-azure-rm                     35