druid-datasets
Druid
| | druid-datasets | Druid |
|---|---|---|
| Mentions | 1 | 22 |
| Stars | 0 | 12,855 |
| Growth | - | 0.9% |
| Activity | 10.0 | 9.8 |
| Last commit | 7 months ago | 5 days ago |
| Language | Java | Java |
| License | Apache License 2.0 | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
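The recency weighting described above can be sketched in a few lines. The half-life decay below is an assumption chosen for illustration, not necessarily the tracker's actual formula:

```python
from datetime import datetime, timedelta, timezone

def activity_score(commit_dates, now, half_life_days=30.0):
    """Recency-weighted commit count: each commit contributes
    2 ** (-age_in_days / half_life_days), so newer commits count more."""
    score = 0.0
    for d in commit_dates:
        age_days = (now - d).total_seconds() / 86400.0
        score += 2.0 ** (-age_days / half_life_days)
    return score

now = datetime(2024, 1, 31, tzinfo=timezone.utc)
# A commit today contributes 1.0; one from 30 days ago contributes 0.5.
recent = activity_score([now, now - timedelta(days=30)], now=now)  # 1.5
```

Under this scheme a project with a burst of commits last week outscores one with the same number of commits spread over the past year, matching the "recent commits have higher weight" description.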
druid-datasets
-
Analysing Github Stars - Extracting and analyzing data from Github using Apache NiFi®, Apache Kafka® and Apache Druid®
Apache NiFi supports powerful and scalable directed graphs of data routing, transformation, and system mediation logic. NiFi is very useful when data needs to be loaded from different sources. In this case, I will use NiFi to access the GitHub API, since it makes it easy to issue repeated calls to an HTTP endpoint and collect data from multiple pages. You can see what I did by downloading NiFi yourself and then adding my template from the Druid Datasets repo: https://github.com/implydata/druid-datasets/blob/main/githubstars/github_stars.xml
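Outside NiFi, the paged-call pattern the flow implements can be sketched in Python. The URL shape matches GitHub's real stargazers endpoint; the stop-on-empty-page loop is a simplified stand-in for what the NiFi template does with an InvokeHTTP processor:

```python
def github_page_url(repo, page, per_page=100):
    """Build the paged stargazers URL that gets called repeatedly."""
    return (f"https://api.github.com/repos/{repo}/stargazers"
            f"?per_page={per_page}&page={page}")

def fetch_all_pages(fetch_page, max_pages=50):
    """Request page 1, 2, 3, ... until a page comes back empty."""
    records = []
    for page in range(1, max_pages + 1):
        batch = fetch_page(page)
        if not batch:  # past the last page of results
            break
        records.extend(batch)
    return records
```

`fetch_page` would wrap an actual HTTP GET; sending the `Accept: application/vnd.github.star+json` media type makes GitHub include a `starred_at` timestamp with each record, which is what makes the data useful for time-series analysis in Druid.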
Druid
-
Show HN: The simplest tiny analytics tool – storywise
https://github.com/apache/druid
It's always a question of tradeoffs.
The awesome-selfhosted project has a nice list of open-source analytics projects. It's really good inspiration to dig into these projects and find out about the technology choices that other open-source tools in the space have made.
-
Analysing Github Stars - Extracting and analyzing data from Github using Apache NiFi®, Apache Kafka® and Apache Druid®
As part of the developer relations team at Imply, I thought it would be interesting to extract data about users who had starred the apache/druid repository. Stars don't just help us understand how many people find Druid interesting, they also give insight into what other repositories people find interesting. And that is really important to me as an advocate – I can work out what topics people might be interested in knowing more about in my articles and at Druid meetups.
Spencer Kimball (now CEO at CockroachDB) wrote an interesting article on this topic in 2021, creating spencerkimball/stargazers based on a Python script. So I started thinking: could I create a data pipeline using NiFi and Kafka (two OSS tools often used with Druid) to get the API data into Druid, and then use SQL to do the analytics? The answer was yes! And I have documented the outcome below. Here's my analytical pipeline for GitHub stars data using NiFi, Kafka, and Druid.
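Once the stars data lands in Druid, the "use SQL to do the analytics" step goes through Druid's SQL-over-HTTP endpoint (`/druid/v2/sql`). A minimal sketch follows; the `github_stars` datasource name and column names are hypothetical placeholders for whatever the pipeline ingests:

```python
import json
import urllib.request

DRUID_SQL = "http://localhost:8888/druid/v2/sql"  # Router's SQL endpoint

def sql_payload(query):
    """JSON request body for Druid's SQL-over-HTTP API."""
    return json.dumps({"query": query}).encode("utf-8")

def run_query(query, endpoint=DRUID_SQL):
    """POST a SQL statement to Druid and return the result rows."""
    req = urllib.request.Request(
        endpoint,
        data=sql_payload(query),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

# Which repos do apache/druid stargazers also star?
# ("github_stars" is a hypothetical datasource name for this pipeline.)
TOP_REPOS = """
SELECT repo, COUNT(*) AS stargazers
FROM github_stars
GROUP BY repo
ORDER BY stargazers DESC
LIMIT 10
"""
```

By default Druid returns results as a JSON array of row objects, so the output drops straight into further analysis or a dashboard.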
-
Real Time Data Infra Stack
Apache Druid
-
When you should use columnar databases and not Postgres, MySQL, or MongoDB
But then you realize there are other databases out there focused specifically on analytical use cases with lots of data and complex queries. Newcomers like ClickHouse, Pinot, and Druid (all open source) respond to a new class of problem: the need to build applications on endpoints that expose analytical queries previously confined to the data warehouse and BI tools.
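A toy illustration of why column orientation suits these workloads: an aggregate over one column only needs to touch that column's values, while a row store must walk every field of every record. This is illustrative only, not how ClickHouse, Pinot, or Druid are actually implemented:

```python
# Row store: each record is kept together, so an aggregate visits whole rows.
rows = [
    {"ts": 1, "country": "US", "revenue": 10.0},
    {"ts": 2, "country": "DE", "revenue": 7.5},
    {"ts": 3, "country": "US", "revenue": 4.0},
]

# Column store: one contiguous array per column, so SUM(revenue) reads
# a single array and never deserializes ts or country.
columns = {
    "ts": [1, 2, 3],
    "country": ["US", "DE", "US"],
    "revenue": [10.0, 7.5, 4.0],
}

row_total = sum(r["revenue"] for r in rows)  # scans every field of every row
col_total = sum(columns["revenue"])          # scans one column
assert row_total == col_total == 21.5
```

On billions of rows that difference in bytes scanned, plus per-column compression, is what lets columnar engines serve the "complex queries over lots of data" case that trips up Postgres, MySQL, or MongoDB.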
-
Druids by Datadog
Datadog's product is a bit too close to Apache Druid to have named their design system so similarly.
From https://druid.apache.org/ :
> Druid unlocks new types of queries and workflows for clickstream, APM, supply chain, network telemetry, digital marketing, risk/fraud, and many other types of data. Druid is purpose built for rapid, ad-hoc queries on both real-time and historical data.
-
Mom at 54 is thinking about coding and a complete career shift. Thoughts?
Maybe rare for someone to be seeking their first coding job at that age. But plenty of us are in our 50s or older and still coding up a storm. And not necessarily ancient tech or anything. My current project exposes analytics data from Apache Druid and Cassandra via Go microservices hosted in K8s.
-
Building an arm64 container for Apache Druid for your Apple Silicon
Fortunately, it is super easy to build your own by leveraging the binary distribution and the existing docker.sh.
-
Apache ShardingSphere Enterprise User Case — Energy Monster
The sharding configuration resolves to 108 actual tables in one database. With maxConnectionsSizePerQuery=50, ShardingSphere-JDBC falls into its connection-limit mode, splits the routed statements into three groups, and merges the results in memory, so a single query needs 36 database connections. But maxActive on the Druid connection pool is set to 20, resulting in a deadlock.
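The arithmetic behind the deadlock can be sketched as follows. This is a simplified model of the connection-limit mode described in the post, not ShardingSphere's actual implementation:

```python
import math

def connections_needed(actual_tables, max_connections_per_query):
    """Connection-limit mode: batch routed statements onto shared
    connections so a query never exceeds max_connections_per_query."""
    # 108 tables with a cap of 50 means each connection must
    # execute ceil(108 / 50) = 3 statements...
    tables_per_connection = math.ceil(actual_tables / max_connections_per_query)
    # ...which still demands ceil(108 / 3) = 36 connections at once.
    return math.ceil(actual_tables / tables_per_connection)

needed = connections_needed(108, 50)  # 36 connections per query
max_active = 20                       # Druid pool limit from the post
assert needed > max_active            # the query can never acquire enough
```

Since one query demands 36 connections but the pool will only ever hand out 20, the query blocks waiting for connections that can never be freed, which is exactly the deadlock described.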
-
how do I store and process user interactions and footprint via DRF
Personally I'd use Google Analytics or something else purpose-built for this, and avoid trying to reinvent the wheel. I don't know much about it, but Apache Druid is supposed to be for analytics data: https://druid.apache.org/
What are some alternatives?
iced - A cross-platform GUI library for Rust, inspired by Elm
Apache Cassandra - Mirror of Apache Cassandra
Apache HBase - Apache HBase
cube.js - 📊 Cube — The Semantic Layer for Building Data Applications
egui - egui: an easy-to-use immediate mode GUI in Rust that runs on both web and native
Scylla - NoSQL data store using the seastar framework, compatible with Apache Cassandra
Redash - Make Your Company Data Driven. Connect to any data source, easily visualize, dashboard and share your data.
tauri - Build smaller, faster, and more secure desktop applications with a web frontend.
Snowplow - The enterprise-grade behavioral data engine (web, mobile, server-side, webhooks), running cloud-natively on AWS and GCP
OpenTSDB - A scalable, distributed Time Series Database.
Metabase - The simplest, fastest way to get business intelligence and analytics to everyone in your company :yum:
Apache Superset - Apache Superset is a Data Visualization and Data Exploration Platform [Moved to: https://github.com/apache/superset]