| Apache Spark | Apache Calcite |
|---|---|
| 6 days ago | 1 day ago |
| Apache License 2.0 | Apache License 2.0 |
Stars: the number of stars a project has on GitHub. Growth: month-over-month growth in stars.
Activity: a relative number indicating how actively a project is being developed; recent commits carry more weight than older ones. For example, an activity of 9.0 indicates that a project is among the top 10% of the most actively developed projects we track.
What do I need to know about distributed algorithms and systems?
1 project | reddit.com/r/AskProgramming | 22 May 2022
You generally want to keep your data in memory, rather than disk, to keep things reasonably fast. A system like Apache Spark tries to do this for you, spilling to disk when needed. In general, I'd recommend researching Spark, since it will cover a lot of the concepts you care about.
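The memory-first-with-spill idea is easy to sketch in plain Python. The following is a toy cache, not Spark's actual mechanism: hot values live in a dict capped at a fixed size, and evicted entries are pickled to temporary files and read back on demand.

```python
import os
import pickle
import tempfile

class SpillCache:
    """Toy memory-first cache that spills evicted entries to disk."""

    def __init__(self, max_in_memory=2):
        self.max_in_memory = max_in_memory
        self.memory = {}    # hot entries kept in RAM
        self.spilled = {}   # key -> file path on disk
        self.dir = tempfile.mkdtemp()

    def put(self, key, value):
        if len(self.memory) >= self.max_in_memory:
            # Evict an entry to disk to stay under the memory cap.
            old_key, old_val = self.memory.popitem()
            path = os.path.join(self.dir, f"{old_key}.pkl")
            with open(path, "wb") as f:
                pickle.dump(old_val, f)
            self.spilled[old_key] = path
        self.memory[key] = value

    def get(self, key):
        if key in self.memory:          # fast path: RAM
            return self.memory[key]
        with open(self.spilled[key], "rb") as f:  # slow path: disk
            return pickle.load(f)

cache = SpillCache(max_in_memory=2)
for i in range(4):
    cache.put(i, i * i)
print([cache.get(i) for i in range(4)])  # [0, 1, 4, 9]
```

Every value stays retrievable, but only two live in memory at any time; the rest cost a disk read, which is exactly the trade-off Spark manages for you automatically.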
How to use Spark and Pandas to prepare big data
3 projects | dev.to | 10 May 2022
Apache Spark is one of the most actively developed open-source projects in big data. The following code examples require that you have Spark set up and can execute Python code using the PySpark library. The examples also require that you have your data in Amazon S3 (Simple Storage Service). All this is set up on AWS EMR (Elastic MapReduce).
AWS Glue: what is it and how does it work?
1 project | dev.to | 5 May 2022
With Glue, Apache Spark runs in the background. But if this is the first time you’ve heard of the popular open-source analytics engine, it may take you a while to familiarize yourself with the cloud software.
Real-time Open Source Indexes: Databases, Headless CMSs and Static Site Generators
7 projects | dev.to | 4 May 2022
Spark SQL (302 active contributors).
Top Responsibilities of a Data Engineering Manager
1 project | reddit.com/r/dataengineering | 2 May 2022
What’s more, the technology landscape is always evolving. New tools come out all the time, often with functionality that existing tools lack, so it’s important to stay up to date on what technologies are available and their latest features. For example, structured streaming in Apache Spark was a relative newcomer only a few years ago, but today it is quickly becoming a de facto standard for stream processing.
Apache Spark, Hive, and Spring Boot — Testing Guide
6 projects | dev.to | 22 Apr 2022
In this article, I show you how to create a Spring Boot app that loads data from Apache Hive via Apache Spark into the Aerospike database. More than that, I give you a recipe for writing integration tests for such scenarios that can run either locally or as part of a CI pipeline. The code examples are taken from this repository.
Cannot find col function in pyspark
1 project | reddit.com/r/codehunter | 22 Apr 2022
I can run from pyspark.sql.functions import col, but when I try to look it up in the GitHub source code I find no col function in the functions.py file. How can Python import a function that doesn't exist?
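The answer, for PySpark versions of that era, is that functions.py generated col and many similar functions at import time rather than defining them statically, so a text search for "def col" finds nothing. Here is a minimal standalone sketch of the same pattern; the names and the string-returning body are hypothetical stand-ins, not PySpark's actual code:

```python
# Sketch of a module that creates its public functions at import time,
# so a static search for "def col" in the source finds nothing.

_UNARY_NAMES = ["col", "upper", "lower"]  # hypothetical function list

def _create_function(name):
    def _(arg):
        # Stand-in for building a Column expression in real PySpark.
        return f"{name}({arg})"
    _.__name__ = name
    return _

# Inject each generated function into the module's namespace.
for _name in _UNARY_NAMES:
    globals()[_name] = _create_function(_name)

# "col" now exists as a module-level attribute even though no
# "def col" appears anywhere in the file.
print(col("age"))     # col(age)
print(upper("name"))  # upper(name)
```

Because `import` resolves names against the module's namespace at runtime, anything placed into `globals()` during import is importable, whether or not it was written out as a `def`.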
How To Start Your Next Data Engineering Project
6 projects | dev.to | 16 Apr 2022
Big Data Processing, EMR with Spark and Hadoop | Python, PySpark
2 projects | dev.to | 27 Mar 2022
Apache Spark is an open-source, distributed processing system used for big data workloads. Wanna dig deeper?
1 project | reddit.com/r/196 | 24 Mar 2022
CITIC Industrial Cloud — Apache ShardingSphere Enterprise Applications
1 project | dev.to | 14 Apr 2022
The SQL Federation engine comprises stages such as SQL Parser, SQL Binder, SQL Optimizer, Data Fetcher, and Operator Calculator, and is suited to correlated queries and subqueries that span multiple database instances. At the underlying layer, it uses Calcite to implement an RBO (Rule-Based Optimizer) and a CBO (Cost-Based Optimizer) based on relational algebra, and retrieves results through the optimal execution plan.
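At its simplest, a rule-based optimizer pattern-matches rules against a relational-algebra tree and rewrites it, for example pushing a filter beneath a projection so less data flows upward. The following toy sketch illustrates that idea; it is not ShardingSphere or Calcite code, and the tuple-based plan representation is invented for illustration:

```python
# Toy relational-algebra nodes as nested tuples:
#   ("scan", table) | ("project", cols, child) | ("filter", pred_col, child)

def push_filter_below_project(node):
    """RBO-style rule: Filter(Project(x)) -> Project(Filter(x)), valid when
    the filter only references a column the projection keeps."""
    if node[0] == "filter":
        pred_col, child = node[1], node[2]
        if child[0] == "project" and pred_col in child[1]:
            cols, grandchild = child[1], child[2]
            return ("project", cols, ("filter", pred_col, grandchild))
    return node

def apply_rules(node, rules):
    # Bottom-up traversal: rewrite children first, then try rules here.
    if node[0] in ("project", "filter"):
        node = (*node[:-1], apply_rules(node[-1], rules))
    for rule in rules:
        node = rule(node)
    return node

plan = ("filter", "dept", ("project", ["dept", "salary"], ("scan", "emp")))
optimized = apply_rules(plan, [push_filter_below_project])
print(optimized)
# ('project', ['dept', 'salary'], ('filter', 'dept', ('scan', 'emp')))
```

A real RBO fires many such rules to a fixed point; a CBO additionally attaches cost estimates to alternative plans and keeps the cheapest one.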
Postgres wire compatible SQLite proxy
14 projects | news.ycombinator.com | 31 Mar 2022
Awesome to see work in the DB wire compatible space. On the MySQL side, there was MySQL Proxy (https://github.com/mysql/mysql-proxy), which was scriptable with Lua, with which you could create your own MySQL wire compatible connections. Unfortunately it appears to have been abandoned by Oracle and IIRC doesn't work with 5.7 and beyond. I used it in the past to hack together a MySQL wire adapter for Interana (https://scuba.io/).
I guess these days the best approach for connecting arbitrary data sources to existing drivers, at least for OLAP, is Apache Calcite (https://calcite.apache.org/). Unfortunately that feels a little more involved.
Launch HN: Hydra (YC W22) – Query Any Database via Postgres
4 projects | news.ycombinator.com | 23 Feb 2022
For anyone interested, Apache Calcite is an open-source data management framework that seems to do many of the same things Hydra claims to do, but with a different approach. Operating as a Java library, Calcite ships "adapters" for many different data sources, ranging from existing JDBC connectors to Elasticsearch and Cassandra, and all of these sources can be joined together as desired. Calcite also has its own optimizer, which can push relevant parts of a query down to the individual data sources. You still get full SQL on data sources that don't support it, with Calcite executing the remaining bits itself.
Unfortunately, I would not be too surprised if Calcite turned out to be less performance-optimized than Hydra. That said, users of Calcite at Google, Uber, Spotify, and elsewhere have made great use of various parts of the framework.
Anyone know of any software that can help in designing then outputting to various database
1 project | reddit.com/r/DatabaseHelp | 21 Nov 2021
Abstraction Layer - You can use something like Calcite to abstract out your data storage. https://calcite.apache.org/
Open Source SQL Parsers
17 projects | dev.to | 8 Oct 2021
Multiple projects maintain parsers for popular open-source databases like MySQL and Postgres. For other open-source databases, the grammar can be extracted from the project itself. For commercial databases, the only option is to reverse engineer the complete grammar. SQL parser/optimizer platforms like Apache Calcite help reduce the effort of implementing the SQL dialect of your choice.
Introduction to the Join Ordering Problem
1 project | dev.to | 26 Sep 2021
In this post, we took a sneak peek at the join ordering problem and got a bird's-eye view of its complexity. In further posts, we will explore the complexity of join order planning for different graph topologies, dive into details of concrete enumeration techniques, and analyze existing and potential strategies of join planning in Apache Calcite. Stay tuned!
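The combinatorics behind that complexity are easy to compute: there are n! left-deep join orders over n relations, and n! · C(n−1) bushy trees, where C is the Catalan number, which is why exhaustive enumeration breaks down quickly. A quick illustration:

```python
from math import comb, factorial

def catalan(n):
    # Catalan number C(n) = (2n choose n) / (n + 1)
    return comb(2 * n, n) // (n + 1)

def left_deep_plans(n):
    # One linear join chain per permutation of the n relations.
    return factorial(n)

def bushy_plans(n):
    # n! leaf orderings times Catalan(n - 1) binary tree shapes.
    return factorial(n) * catalan(n - 1)

for n in (2, 4, 8, 12):
    print(n, left_deep_plans(n), bushy_plans(n))
```

Already at 12 relations the bushy space runs into the tens of trillions, which is why practical optimizers rely on dynamic programming, heuristics, or randomized search rather than brute force.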
Does Java have an open source package that can execute SQL on txt/csv?
3 projects | reddit.com/r/programming | 22 Sep 2021
Yes. Apache Calcite can do that.
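Calcite's CSV adapter does exactly this in Java. As a lighter-weight illustration of the same idea in Python, you can load a CSV into an in-memory SQLite table and query it with plain SQL; the sample data below is hypothetical:

```python
import csv
import io
import sqlite3

# Sample CSV contents standing in for a file on disk (hypothetical data).
data = "name,dept,salary\nalice,eng,100\nbob,sales,80\ncarol,eng,120\n"

rows = list(csv.reader(io.StringIO(data)))
header, body = rows[0], rows[1:]

conn = sqlite3.connect(":memory:")
conn.execute(f"CREATE TABLE emp ({', '.join(header)})")
placeholders = ",".join("?" * len(header))
conn.executemany(f"INSERT INTO emp VALUES ({placeholders})", body)

# Plain SQL over what started life as CSV.
result = conn.execute(
    "SELECT dept, COUNT(*) FROM emp GROUP BY dept ORDER BY dept"
).fetchall()
print(result)  # [('eng', 2), ('sales', 1)]
```

Calcite goes further by querying the file in place through an adapter, with no load step, but the user-facing experience is the same: standard SQL over a CSV.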
High-performance, columnar, in-memory store with bitmap indexing in Go
1 project | news.ycombinator.com | 21 Jun 2021
Memoization in Cost-based Optimizers
2 projects | dev.to | 9 Jun 2021
You may find a similar design in many production-grade heuristic optimizers. In our previous blog post about Presto, we discussed the Memo class that manages such references. In Apache Calcite, the heuristic optimizer HepPlanner models node references through the class HepRelVertex.
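The memo idea can be sketched minimally: equivalent expressions are collected into groups, and parent operators reference child groups rather than concrete expressions, so every alternative for a child is automatically shared by every parent. The following toy sketch is not Presto's or Calcite's actual classes:

```python
class Memo:
    """Toy memo: groups of equivalent expressions, referenced by group id."""

    def __init__(self):
        self.groups = {}   # group id -> list of equivalent expressions
        self.next_id = 0

    def new_group(self, expr):
        gid = self.next_id
        self.next_id += 1
        self.groups[gid] = [expr]
        return gid

    def add_equivalent(self, gid, expr):
        # A transformation rule produced another expression for this group.
        self.groups[gid].append(expr)

memo = Memo()
a = memo.new_group(("scan", "A"))
b = memo.new_group(("scan", "B"))
# The parent references the child *groups*, not concrete expressions.
join = memo.new_group(("join", a, b))
# A commutativity rule adds an alternative to the same group.
memo.add_equivalent(join, ("join", b, a))
print(memo.groups[join])  # [('join', 0, 1), ('join', 1, 0)]
```

Because the join node stores group ids, any later alternative added to group `a` or `b` is instantly visible to both join variants, which is the deduplication that makes cost-based search over large plan spaces tractable.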
Superintendent: Load multiple CSV files and write SQL
2 projects | reddit.com/r/programming | 7 Jun 2021
Apache Calcite? Its tutorial is about querying CSV files.
What are some alternatives?
Trino - Official repository of Trino, the distributed SQL query engine for big data, formerly known as PrestoSQL (https://trino.io)
Scalding - A Scala API for Cascading
Presto - The official home of the Presto distributed SQL query engine for big data
mrjob - Run MapReduce jobs on Hadoop or Amazon Web Services
ANTLR - ANTLR (ANother Tool for Language Recognition) is a powerful parser generator for reading, processing, executing, or translating structured text or binary files.
luigi - Luigi is a Python module that helps you build complex pipelines of batch jobs. It handles dependency resolution, workflow management, visualization etc. It also comes with Hadoop support built in.
Pytorch - Tensors and Dynamic neural networks in Python with strong GPU acceleration
Smile - Statistical Machine Intelligence & Learning Engine
Airflow - Apache Airflow - A platform to programmatically author, schedule, and monitor workflows
Apache Arrow - Apache Arrow is a multi-language toolbox for accelerated data interchange and in-memory processing