beam vs Redash

| | beam | Redash |
|---|---|---|
| Mentions | 30 | 38 |
| Stars | 7,508 | 24,948 |
| Growth | 1.5% | 1.0% |
| Activity | 10.0 | 9.5 |
| Latest commit | 5 days ago | about 12 hours ago |
| Language | Java | Python |
| License | Apache License 2.0 | BSD 2-clause "Simplified" License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
beam
-
Ask HN: Does (or why does) anyone use MapReduce anymore?
The "streaming systems" book answers your question and more: https://www.oreilly.com/library/view/streaming-systems/97814.... It gives you a history of how batch processing started with MapReduce, and how attempts at scaling by moving towards streaming systems gave us all the subsequent frameworks (Spark, Beam, etc.).
As for the framework called MapReduce, it isn't used much, but its descendant https://beam.apache.org very much is. Nowadays people often use "map reduce" as a shorthand for whatever batch processing system they're building on top of.
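To make the "map reduce as shorthand" point concrete, here is a minimal pure-Python sketch of the map/shuffle/reduce pattern that MapReduce introduced and that frameworks like Beam generalize. The function names (`map_phase`, `shuffle_phase`, `reduce_phase`) are illustrative, not part of any framework's API:

```python
from collections import defaultdict

def map_phase(records, mapper):
    """Apply the mapper to every record, yielding (key, value) pairs."""
    for record in records:
        yield from mapper(record)

def shuffle_phase(pairs):
    """Group values by key, as a framework's shuffle step would."""
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups, reducer):
    """Collapse each key's list of values into a single result."""
    return {key: reducer(values) for key, values in groups.items()}

# The classic word count, expressed as map -> shuffle -> reduce.
lines = ["the quick brown fox", "the lazy dog"]
pairs = map_phase(lines, lambda line: ((word, 1) for word in line.split()))
counts = reduce_phase(shuffle_phase(pairs), sum)
# e.g. counts["the"] == 2
```

In Beam the same shape appears as a `Map`/`FlatMap` transform followed by `GroupByKey` and a combiner, with the runner handling distribution.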
-
beam VS quix-streams - a user suggested alternative
2 projects | 7 Dec 2023
-
How do Streaming Aggregation Pipelines work?
Apache Beam is one of many tools that you can use
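As a rough illustration of what a streaming aggregation pipeline does under the hood, here is a pure-Python sketch of tumbling-window summation: each timestamped event is assigned to a fixed-size window, and values are summed per window, analogous to a windowed group-and-combine in Beam. This is a simplified model, not Beam code:

```python
from collections import defaultdict

def tumbling_window_sum(events, window_seconds):
    """Assign each (timestamp, value) event to a fixed-size tumbling
    window and sum the values within each window."""
    windows = defaultdict(int)
    for timestamp, value in events:
        # Windows start at multiples of window_seconds.
        window_start = timestamp - (timestamp % window_seconds)
        windows[window_start] += value
    return dict(windows)

# Events as (timestamp_in_seconds, value) pairs.
events = [(0, 1), (5, 2), (12, 3), (14, 4), (21, 5)]
result = tumbling_window_sum(events, 10)
# result: {0: 3, 10: 7, 20: 5}
```

Real streaming engines add the hard parts this sketch omits: out-of-order events, watermarks that decide when a window is complete, and triggers for emitting early or late results.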
-
Releasing Temporian, a Python library for processing temporal data, built together with Google
Flexible runtime ☁️: Temporian programs can run seamlessly in-process in Python, or on large datasets using Apache Beam.
-
Kafka cluster loses or duplicates messages
To perform the tests I'm using a Kafka cluster on Kubernetes from the Beam repo (here).
- Apache Beam
-
Real Time Data Infra Stack
Apache Beam: Streaming framework that can be run on several runners, such as Apache Flink and GCP Dataflow
-
Google Cloud Reference
Apache Beam: Batch/streaming data processing 🔗Link
-
Composer out of resources - "INFO Task exited with return code Negsignal.SIGKILL"
What you are looking for is Dataflow. It can be a bit tricky to wrap your head around at first, but I highly suggest leaning into this technology for most of your data engineering needs. It's based on the open-source Apache Beam framework that originated at Google. We use an internal version of this system at Google for virtually all of our pipeline tasks, from a few GB to exabyte-scale systems -- it can do it all.
-
Pub/Sub parallel processing best practices
That being said, there is a learning curve in understanding how Apache Beam works. Take a look at the Beam website for more information.
Redash
- Redash: Connect to data source, easily visualize, dashboard and share your data
- FLaNK Stack 26 February 2024
- Contributing to Open Source projects
-
Auto reloading Odoo with Docker
It seems like there may be an issue with Watchdog on Apple Silicon.
-
Tool or service for querying and exposing database through API
I am looking for a service or tool similar to Metabase or Redash that allows me to add a data source (for example, a Postgres connection) and create raw SQL queries that can be shared or exposed through an API. Instead of keeping raw SQL code somewhere, my other service would call this tool, e.g. http://microservice/query=1?param1=xx&page=2, and get the results from the DB. These calls are internal only and part of ETL processes, but of course authentication would be required.
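The stored-query pattern described above can be sketched in a few lines: a registry maps a query id to parameterized SQL, so callers pass an id plus parameters rather than raw SQL. This is a minimal stdlib sketch using SQLite; the `QUERIES` registry and `run_query` helper are hypothetical names, and a real service would wrap this in an authenticated HTTP endpoint:

```python
import sqlite3

# Hypothetical registry: query id -> parameterized SQL.
QUERIES = {
    1: "SELECT name, price FROM products WHERE price >= :param1 LIMIT :page_size",
}

def run_query(conn, query_id, **params):
    """Look up a stored query by id and execute it with bound parameters.
    Binding via named placeholders avoids SQL injection from callers."""
    sql = QUERIES[query_id]
    return conn.execute(sql, params).fetchall()

# Demo against an in-memory database.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE products (name TEXT, price REAL)")
conn.executemany("INSERT INTO products VALUES (?, ?)",
                 [("pen", 2.0), ("book", 12.5), ("desk", 80.0)])
rows = run_query(conn, 1, param1=10, page_size=10)
# rows contains ("book", 12.5) and ("desk", 80.0)
```

Tools like Redash essentially productize this idea: saved queries get stable ids and parameter forms, and results are available over an authenticated API.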
-
A PostgreSQL Docker container that automatically upgrades PostgreSQL
Yeah, a lot of the time I'd agree with you.
This container came about for the Redash project (https://github.com/getredash/redash), which has been stuck on PostgreSQL 9.5 (!) for years.
Moving to a new PostgreSQL container version is easy enough for new installations, but rolling that kind of change out to an existing userbase isn't so pretty.
For people familiar with the command line, PostgreSQL, and Docker, it's no problem.
But a large number of Redash deployments seem to have been done by people not skilled in those things. "We deployed it from the Digital Ocean droplet / AWS image / etc!"
For those situations, something that takes care of the database upgrade process automatically is the better approach. :)
-
Did anyone try Openblocks for multi-tenant client reporting?
I have tried Metabase and Redash before (both self-hosted open source versions), and from my experience I find Metabase a bit easier to work with.
-
Best apps for transitioning from Spreadsheets to SQLite?
Regarding visualization tools, sqliteviz has proven to be the best I've found so far. Their web app runs locally but has some trackers, so I run it locally via a simple, static HTTP server. Falcon and Redash seem like overkill for my needs.
-
Chartbrew – create live reporting dashboards from APIs, MongoDB, Firestore, etc.
Redash seems to be dead or at least in hibernation. There hasn't been a release in over a year.
https://github.com/getredash/redash/issues/5891
-
Real Time Data Infra Stack
redash
What are some alternatives?
Apache Arrow - Apache Arrow is a multi-language toolbox for accelerated data interchange and in-memory processing
Apache Superset - Apache Superset is a Data Visualization and Data Exploration Platform [Moved to: https://github.com/apache/superset]
Apache Hadoop - Apache Hadoop
Metabase - The simplest, fastest way to get business intelligence and analytics to everyone in your company :yum:
Scio - A Scala API for Apache Beam and Google Cloud Dataflow.
plotly - The interactive graphing library for Python :sparkles: This project now includes Plotly Express!
Apache Spark - Apache Spark - A unified analytics engine for large-scale data processing
cube.js - 📊 Cube — The Semantic Layer for Building Data Applications
Airflow - Apache Airflow - A platform to programmatically author, schedule, and monitor workflows
bokeh - Interactive Data Visualization in the browser, from Python
Apache Hive - Apache Hive
Druid - Apache Druid: a high performance real-time analytics database.