spark-snowflake
databricks-nutter-repos-demo
| | spark-snowflake | databricks-nutter-repos-demo |
|---|---|---|
| Mentions | 1 | 2 |
| Stars | 196 | 144 |
| Growth | -0.5% | - |
| Activity | 5.6 | 4.3 |
| Latest commit | 2 months ago | 3 months ago |
| Language | Scala | Python |
| License | Apache License 2.0 | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
spark-snowflake
-
Why Databricks Is Winning
Snowflake and Databricks are different, sometimes complementary technologies. You can store data in Snowflake & query it with Databricks for example: https://github.com/snowflakedb/spark-snowflake
Snowflake predicate pushdown filtering seems quite promising: https://www.snowflake.com/blog/snowflake-spark-part-2-pushin...
Think both these companies can win.
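As a hedged sketch of what "store in Snowflake, query from Databricks" looks like with the spark-snowflake connector: the option names (`sfURL`, `sfUser`, etc.) are the connector's documented settings, but the `build_sf_options` helper and all connection values here are hypothetical placeholders, not a working account.

```python
# Sketch: reading a Snowflake table from Spark via the spark-snowflake
# connector. The helper below only assembles the connector's option map;
# every connection value is a placeholder.

def build_sf_options(account_url, user, password, database, schema, warehouse):
    """Assemble the option map the spark-snowflake connector expects."""
    return {
        "sfURL": account_url,        # e.g. "<account>.snowflakecomputing.com"
        "sfUser": user,
        "sfPassword": password,
        "sfDatabase": database,
        "sfSchema": schema,
        "sfWarehouse": warehouse,
    }

opts = build_sf_options("myaccount.snowflakecomputing.com",
                        "spark_user", "***", "ANALYTICS", "PUBLIC", "WH")

# On a Databricks/Spark cluster with the connector installed, the read
# itself would then look roughly like this (commented out, since it
# needs a live Snowflake account):
#
#   df = (spark.read
#         .format("snowflake")                 # name registered by the connector
#         .options(**opts)
#         .option("dbtable", "EVENTS")
#         .load())
#
# Filters and projections applied to `df` can then be pushed down to
# Snowflake, which is the pushdown behaviour the linked blog post describes.
```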
databricks-nutter-repos-demo
-
Ask HN: Tips for software engineering sanity with Databricks notebooks?
You can use Databricks Repos (https://docs.databricks.com/repos/index.html), specifically the "files in repos" functionality (https://docs.databricks.com/repos/work-with-notebooks-other-...), which allows you to use Python files (not notebooks!) as Python modules.
Another alternative is to split notebooks into "library notebooks" that just define transformations, and "orchestration notebooks" that use the library notebooks to execute the business logic.
In both approaches you can do code testing, etc.
P.S. I have a demo of both approaches here: https://github.com/alexott/databricks-nutter-repos-demo
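A minimal sketch of the "library" side of that split (the module and function names here are hypothetical, not taken from the demo repo): keep transformations as plain functions in a Python file so they can be imported and unit-tested without a notebook or a cluster. Plain Python rows are used below to keep the sketch self-contained; in practice the same idea applies to DataFrame transformations.

```python
# transformations.py - a "library" module: pure functions only, no job wiring.
# With files-in-repos, a notebook imports this with an ordinary
# `import transformations` and a test suite can do the same.

def filter_active(rows):
    """Keep only rows whose 'status' field is 'active'."""
    return [r for r in rows if r.get("status") == "active"]

def add_full_name(rows):
    """Derive a 'full_name' field from 'first' and 'last'."""
    return [{**r, "full_name": f"{r['first']} {r['last']}"} for r in rows]

# An "orchestration" notebook (or job) then just wires the pieces together:
def run_pipeline(rows):
    return add_full_name(filter_active(rows))

result = run_pipeline([
    {"first": "Ada", "last": "Lovelace", "status": "active"},
    {"first": "Alan", "last": "Turing", "status": "inactive"},
])
# result holds the single active row, with full_name "Ada Lovelace"
```

Because the transformations never touch notebook state, they can be exercised by an ordinary test runner in CI as well as by Nutter on a cluster.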
-
Why Databricks Is Winning
I’m sorry for the delay, will fix ASAP...
My point is that you can do that even without jars/wheels - you can do version control and tests of notebooks. For example, https://github.com/alexott/databricks-nutter-projects-demo
What are some alternatives?
flintrock - A command-line tool for launching Apache Spark clusters.
Apache Spark - A unified analytics engine for large-scale data processing
chispa - PySpark test helper methods with beautiful error messages
dask-gateway - A multi-tenant server for securely deploying and managing Dask clusters.
delta - An open-source storage framework that enables building a Lakehouse architecture with compute engines including Spark, PrestoDB, Flink, Trino, and Hive and APIs