| | astro-sdk | gnu-parallel |
|---|---|---|
| Mentions | 7 | 22 |
| Stars | 317 | 25 |
| Growth | 0.9% | - |
| Activity | 8.5 | 10.0 |
| Latest commit | 5 days ago | about 9 years ago |
| Language | Python | Perl |
| License | Apache License 2.0 | GNU General Public License v3.0 only |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
astro-sdk
- Orchestration: Thoughts on Dagster, Airflow and Prefect?
Have you tried the Astro SDK? https://github.com/astronomer/astro-sdk
- Airflow as near real time scheduler
One interesting point about putting the data into S3: if the data is in an S3 file, then OP can use the Astro SDK to pretty easily load that data into a table or a dataframe (there's even an S3 dynamic task function in the SDK that might fit the use case well here).
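For illustration, a minimal sketch of that pattern using the SDK's `load_file` operator; the bucket path, table name, and connection IDs are placeholders, and the dynamic-mapping helper the commenter alludes to would build on the same pieces:

```python
# Hedged sketch: load an S3 file into a warehouse table with the Astro SDK.
# The conn_ids, bucket path, and table name below are placeholders.
from airflow.decorators import dag
from pendulum import datetime

from astro import sql as aql
from astro.files import File
from astro.sql.table import Table


@dag(start_date=datetime(2023, 1, 1), schedule=None, catchup=False)
def s3_to_table():
    # load_file infers the file type from the extension and writes it to the
    # target table using placeholder Airflow connections.
    aql.load_file(
        input_file=File(path="s3://my-bucket/raw/data.csv", conn_id="aws_default"),
        output_table=Table(name="raw_data", conn_id="warehouse_db"),
    )


s3_to_table()
```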
- Most ideal Airflow task structure?
I think you should take a look at the Astro SDK. It's an open source Python package that removes the complexity of writing DAGs, particularly in the context of Extract, Load, Transform (ELT) use cases. Look at the docs here, especially aql.transform, aql.run_raw_sql, etc. That will definitely help you.
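As a rough sketch of what those two decorators look like (the parameter names, table contents, and SQL are made up for illustration):

```python
# Hedged sketch of aql.transform and aql.run_raw_sql; tables and SQL are placeholders.
from astro import sql as aql
from astro.sql.table import Table


@aql.transform
def customer_totals(orders: Table):
    # The returned SELECT is materialized as a new table by the SDK;
    # {{ orders }} is templated with the table passed in at call time.
    return "SELECT customer_id, SUM(amount) AS total FROM {{ orders }} GROUP BY customer_id"


@aql.run_raw_sql
def drop_staging(staging: Table):
    # run_raw_sql executes SQL for its side effects and returns no table.
    return "DROP TABLE IF EXISTS {{ staging }}"
```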
- ELT pipeline using airflow
- Astro SDK: Made for folks who are doing their ETL in Airflow and want to simplify movement between DBs and Pandas
- After Airflow. Where next for DE?
More of a general principle, but when you don't have design patterns, you get varying levels of results, right? I think what Astro is doing to introduce "strong defaults" through projects like the astro-sdk or the Cloud IDE is an interesting experiment: removing some of the busy work of common DAGs (load from S3, do something, push to a database) will help reduce the cognitive load of really common, simple actions and give them a single, better pattern to optimize on. I don't think those efforts reduce the optionality of true power users at all, who may want to custom-code their S3 log sink with some unique implementation, while at the same time maybe solving some of the fragmentation in very frequently performed operations.
- Airflow - Passing large data volumes between tasks
Have you looked into the astro python SDK? My team and I built this out over the last year to do exactly this :). You can use the `@dataframe` decorator to pull the API data into a dataframe, store it in GCS and then access it in future steps. Lemme know if you have any questions!
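A minimal sketch of that `@dataframe` pattern, assuming a hypothetical JSON endpoint; intermediary storage such as GCS is, as I understand it, handled by the SDK's configuration rather than in the task code:

```python
# Hedged sketch: an aql.dataframe task that fetches API data and returns it as a
# pandas DataFrame for downstream tasks. The URL is a placeholder.
import pandas as pd
import requests

from astro import sql as aql


@aql.dataframe
def fetch_records() -> pd.DataFrame:
    # Pull JSON from a (placeholder) API and return it as a dataframe.
    resp = requests.get("https://example.com/api/records", timeout=30)
    resp.raise_for_status()
    return pd.DataFrame(resp.json())
```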
- What's the best tool to build pipelines from REST APIs?
I have an example here using COVID data. Basically, you just write a Python function that reads the API and returns a dataframe (or any number of dataframes), and downstream tasks can then read the output as either a dataframe or a SQL table.
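And a hedged sketch of the downstream half: an `@aql.transform` task that, I believe, can reference the dataframe returned by an upstream task as if it were a SQL table (`fetch_covid_frame` and the connection ID are hypothetical names for illustration):

```python
# Hedged sketch: consuming an upstream dataframe as a SQL table via aql.transform.
from astro import sql as aql
from astro.sql.table import Table


@aql.transform
def peak_by_country(covid: Table):
    # The dataframe produced upstream is referenced here like any other table.
    return "SELECT country, MAX(cases) AS peak_cases FROM {{ covid }} GROUP BY country"


# Inside the DAG body (sketch):
#   raw = fetch_covid_frame()  # a hypothetical @aql.dataframe task reading the API
#   peak_by_country(raw, output_table=Table(conn_id="postgres_default"))
```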
gnu-parallel
- SQL query execution idea
You can use GNU Parallel (https://www.gnu.org/software/parallel/) to run command-line clients with all of those queries. You can set an upper limit on the number of clients running at once, and it will handle the parallelism automatically.
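For example, a sketch along those lines using `psql` as the client (the connection flags, database, and directory layout are placeholders):

```sh
# Hedged sketch: feed every .sql file to a command-line client, at most 8 at a time.
ls queries/*.sql | parallel -j 8 'psql -h db.example.com -U report -d analytics -f {}'
```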
- Parallel – shell tool for executing jobs in parallel using one or more computers
- Distcc: A fast, free distributed C/C++ compiler
Some other multi-machine options that have worked well for me, well beyond just compilation of C/C++ on multiple machines with multiple cores:
1) set up passwordless ssh, and
2) use GNU parallel: https://www.gnu.org/software/parallel/
GNU parallel is super flexible and very useful; see the sketch below.
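A hedged sketch of what that multi-machine setup can look like once passwordless ssh is in place; the host names, per-host job caps, and the transcoding command are all placeholders:

```sh
# Hedged sketch: run jobs on the local machine (":") and two remote hosts reachable
# over passwordless ssh; "8/host" caps the jobs per host. --trc transfers each input
# file, returns the named output, and cleans up the remote copies afterwards.
parallel --sshlogin :,8/worker1.example.com,8/worker2.example.com \
  --trc {.}.small.mp4 \
  'ffmpeg -i {} -crf 28 {.}.small.mp4' ::: *.mp4
```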
- Peplum: F/OSS distributed parallel computing and supercomputing at Home with Ruby infrastructure
How does this stack up against GNU parallel? If you just wanna parallelize CLI workloads (like nmap), parallel should be easier, I guess.
- Search in your Jupyter notebooks from the CLI, fast.
It requires jq for JSON processing and GNU parallel for concurrent searches in the notebooks.
- Is there a way to use all CPU cores while using RIBlast?
- Can cuda help me here?
Since you've got lots of images, you could use GNU Parallel to spread the job across multiple CPUs.
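For instance, a sketch with ImageMagick's `convert` (the file layout and resize settings are placeholders):

```sh
# Hedged sketch: resize every JPEG, one job per CPU core (GNU parallel's default).
# {/} expands to the input file's basename, so outputs land in resized/.
mkdir -p resized
parallel 'convert {} -resize 50% resized/{/}' ::: images/*.jpg
```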
- 5 great Perl scripts to keep in your sysadmin toolbox
Gnu parallel
- Is there a .deb package for installing GNU parallel?
- Modern SPAs without bundlers, CDNs, or Node.js
You could easily use something like GNU Parallel:
https://www.gnu.org/software/parallel/
What are some alternatives?
Mage - The modern replacement for Airflow. Mage is an open-source data pipeline tool for transforming and integrating data. https://github.com/mage-ai/mage-ai
Parallel
quadratic - Quadratic | Data Science Spreadsheet with Python & SQL
bazel-buildfarm - Bazel remote caching and execution service
astro - Astro SDK allows rapid and clean development of {Extract, Load, Transform} workflows using Python and SQL, powered by Apache Airflow. [Moved to: https://github.com/astronomer/astro-sdk]
lolcate-rs - Lolcate -- A comically fast way of indexing and querying your filesystem. Replaces locate / mlocate / updatedb. Written in Rust.
starthinker - Reference framework for building data workflows provided by Google. Accelerates authentication, logging, scheduling, and deployment of solutions using GCP. To borrow a tagline.. "The framework for professionals with deadlines."
xidel - Command line tool to download and extract data from HTML/XML pages or JSON-APIs, using CSS, XPath 3.0, XQuery 3.0, JSONiq or pattern matching. It can also create new or transformed XML/HTML/JSON documents.
astronomer-cosmos - Run your dbt Core projects as Apache Airflow DAGs and Task Groups with a few lines of code
jc - CLI tool and python library that converts the output of popular command-line tools, file-types, and common strings to JSON, YAML, or Dictionaries. This allows piping of output to tools like jq and simplifying automation scripts.
awesome-pipeline - A curated list of awesome pipeline toolkits inspired by Awesome Sysadmin
ripgrep - ripgrep recursively searches directories for a regex pattern while respecting your gitignore