sqlfluff vs Apache Spark

| | sqlfluff | Apache Spark |
|---|---|---|
| Mentions | 35 | 101 |
| Stars | 7,219 | 38,378 |
| Growth | 1.2% | 0.6% |
| Activity | 9.6 | 10.0 |
| Latest commit | 4 days ago | 5 days ago |
| Language | Python | Scala |
| License | MIT License | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
sqlfluff
-
23 issues to grow yourself as an exceptional open-source Python expert
Repo : https://github.com/sqlfluff/sqlfluff
-
SQL Reserved Words β The Empirical List
I'm surprised sqlfluff hasn't been mentioned yet. Perhaps not a comprehensive list, but it's worked for everything I've thrown at it. There's an ANSI keyword list [0], and then dialect-specific lists for everything from DB2 [1] to Snowflake [2].
[0]: https://github.com/sqlfluff/sqlfluff/blob/main/src/sqlfluff/...
-
Show HN: Postgres Language Server
It has tons of annoying quirks, but I couldn't imagine running a DBT project without it: https://github.com/sqlfluff/sqlfluff
-
Front page news headline scraping data engineering project
Move SQL queries to sql files and read from files (Use sqlfluff to lint the code https://github.com/sqlfluff/sqlfluff)
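The suggestion above (keep SQL in `.sql` files rather than inline strings) can be sketched in a few lines of Python. The directory layout, file name, and `load_query` helper here are illustrative, not part of any project mentioned on this page:

```python
from pathlib import Path

def load_query(name: str, sql_dir: str = "sql") -> str:
    """Read a named SQL query from a .sql file instead of embedding it in Python."""
    return Path(sql_dir, f"{name}.sql").read_text()

# Illustration: create a query file, then load it back as application code would.
Path("sql").mkdir(exist_ok=True)
Path("sql", "top_headlines.sql").write_text(
    "SELECT title, source\n"
    "FROM headlines\n"
    "ORDER BY scraped_at DESC\n"
    "LIMIT 10;\n"
)
print(load_query("top_headlines"))
```

Once queries live in standalone files, a linter such as sqlfluff can be pointed at the `sql/` directory as part of CI.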
-
Anything like SQLFluff written in Rust?
-
Code autoformatter for SQL in VSCode that plays nicely with dbt
SQLFluff is a good CLI tool for this and includes support for jinja and dbt. I don't think there's a VSCode plugin for it yet.
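The jinja/dbt support mentioned above is driven by a `.sqlfluff` config file in the project root. A minimal sketch enabling the dbt templater might look like this (the dialect and paths are placeholders for your own setup; see the sqlfluff configuration docs):

```ini
[sqlfluff]
templater = dbt
dialect = snowflake

[sqlfluff:templater:dbt]
project_dir = ./
profiles_dir = ~/.dbt
```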
-
Ask HN: How do you test SQL?
This linter can really enforce some best practices https://github.com/sqlfluff/sqlfluff
A list of best practices:
-
What is something you would learn at college but not a bootcamp (hard skills)
BigQuery SQL and SQLFluff
-
Is the knowledge on how Compilers work applicable to the role of a Data Engineer?
There's a SQL parser/linter called SQLFluff that my team uses for our CI/CD. I've made a few pull requests to fix the parser for the particular SQL dialect we used, and my college compiler classes definitely helped.
-
sqlfluff VS ANTLR - a user suggested alternative
2 projects | 12 Dec 2022
Apache Spark
-
"xAI will open source Grok"
-
Groovy Cheat Sheet - 01 Say "Hello" from Groovy
Recently I had to revisit the "JVM languages universe" again. Yes, language(s), plural! Java isn't the only language that uses the JVM. I previously used Scala, which is a JVM language, to use Apache Spark for Data Engineering workloads, but this is for another post.
-
Smartcity garbage reporting automation w/ ollama
Consume data into third-party software (then let OpenSearch, Apache Spark, or Apache Pinot handle it) for analysis/data science, GIS systems (so you can put reports on a map), or any ticket management system
-
Go concurrency simplified. Part 4: Post office as a data pipeline
also, this knowledge applies to learning more about data engineering, as this field of software engineering relies heavily on the event-driven approach via tools like Spark, Flink, Kafka, etc.
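The source/transform/sink pipeline idea behind tools like Spark, Flink, and Kafka can be sketched framework-free with Python generators. This is a minimal stand-in for what those engines do at scale; all names and data here are illustrative:

```python
def source(events):
    # Source stage: emit raw events one at a time.
    for e in events:
        yield e

def transform(stream):
    # Transform stage: drop empty/missing events and normalize the rest,
    # analogous to a filter + map operator in a streaming engine.
    for e in stream:
        if e:
            yield e.strip().lower()

def sink(stream):
    # Sink stage: collect results (a real pipeline would write to storage or a topic).
    return list(stream)

events = ["  Parcel-1 ", "", "PARCEL-2", None]
result = sink(transform(source(events)))
print(result)  # ['parcel-1', 'parcel-2']
```

Because each stage only pulls from the one before it, stages stay decoupled, which is the same property the event-driven tools above provide across machines.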
-
Five Apache projects you probably didn't know about
Apache SeaTunnel is a data integration platform that offers the three pillars of data pipelines: sources, transforms, and sinks. It offers an abstract API over three possible engines: SeaTunnel's own Zeta engine, or wrappers around Apache Spark or Apache Flink. Be careful, as each engine comes with its own set of features.
-
Apache Spark VS quix-streams - a user suggested alternative
2 projects | 7 Dec 2023
-
Integrate Pyspark Structured Streaming with confluent-kafka
Apache Spark - https://spark.apache.org/
-
Spark β A micro framework for creating web applications in Kotlin and Java
A JVM based framework named "Spark", when https://spark.apache.org exists?
-
Rest in Peas: The Unrecognized Death of Speech Recognition (2010)
-
PySpark SparkSession Builder with Kubernetes Master
I recently saw a pull request that was merged to the Apache/Spark repository that apparently adds initial Python bindings for PySpark on K8s. I posted a comment to the PR asking a question about how to use spark-on-k8s in a Python Jupyter notebook, and was told to ask my question here.
What are some alternatives?
vscode-sqlfluff - An extension to use the sqlfluff linter in vscode.
Trino - Official repository of Trino, the distributed SQL query engine for big data, formerly known as PrestoSQL (https://trino.io)
sqlparse - A non-validating SQL parser module for Python
Pytorch - Tensors and Dynamic neural networks in Python with strong GPU acceleration
dbt-utils - Utility functions for dbt projects.
Airflow - Apache Airflow - A platform to programmatically author, schedule, and monitor workflows
ale - Check syntax in Vim/Neovim asynchronously and fix files, with Language Server Protocol (LSP) support
Scalding - A Scala API for Cascading
soda-sql - Data profiling, testing, and monitoring for SQL-accessible data.
mrjob - Run MapReduce jobs on Hadoop or Amazon Web Services
Metabase - The simplest, fastest way to get business intelligence and analytics to everyone in your company
luigi - Luigi is a Python module that helps you build complex pipelines of batch jobs. It handles dependency resolution, workflow management, visualization etc. It also comes with Hadoop support built in.