ImportError: No module named requests
reddit.com/r/codehunter | 2021-10-20
Whenever I try to import requests, I get an error saying No module named requests.
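The usual fix for that error is installing the package into the same interpreter that runs the script (using `python -m pip` avoids installing into a different Python than the one you invoke):

```shell
# install requests for the exact Python you are running
python -m pip install requests
```

If several Pythons are installed, replace `python` with the interpreter you actually use (e.g. `python3`).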
What are the differences between the urllib, urllib2, urllib3 and requests module?
reddit.com/r/codehunter | 2021-10-15
In Python, what are the differences between the urllib, urllib2, urllib3 and requests modules? Why are there three? They seem to do the same thing...
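One practical difference the question is getting at: `urllib` is in the standard library and fairly low-level, while `requests` offers a friendlier API. A small sketch (the httpbin URL is only illustrative; nothing is sent over the network here):

```python
import urllib.request

import requests

url = "https://httpbin.org/get"

# urllib (stdlib): build a Request object by hand; headers are a plain dict,
# and actually reading the response would perform the network call
stdlib_req = urllib.request.Request(url, headers={"Accept": "application/json"})

# requests (third party): the same request, prepared but never sent, so we
# can inspect the final URL (query-string encoding is handled for us)
prepared = requests.Request("GET", url, params={"q": "python"}).prepare()
print(prepared.url)  # https://httpbin.org/get?q=python
```

(`urllib2` and `urllib3` only matter on Python 2 and as requests' internal dependency, respectively.)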
Where can I read quality Python code
reddit.com/r/learnprogramming | 2021-09-29
Keep in mind that there's a lot of clutter in Requests' repository. The entire module is actually just these files.
reddit.com/r/learnprogramming | 2021-09-29
Starting with an incredibly complex project like a machine learning platform probably isn't a good idea. Try finding something easier. Maybe something like requests.
Why is the PSF ignoring its own CoC?
reddit.com/r/Python | 2021-09-02
https://github.com/psf/requests/pull/5923 adds back a logo that people find offensive because someone got it tattooed on their body? Doesn't the logo violate the Code of Conduct?
3 Ways to Unit Test REST APIs in Python
dev.to | 2021-07-22
To retrieve the weather data, we'll use requests. We can create a function that receives a city name as a parameter and returns JSON. The JSON will contain the temperature, weather description, sunset and sunrise times, and so on.
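A minimal sketch of such a function, assuming an OpenWeatherMap-style endpoint (the URL, parameter names, and the injectable `session` argument are illustrative, not the article's exact code):

```python
import requests

API_URL = "https://api.openweathermap.org/data/2.5/weather"  # assumed endpoint

def get_weather(city, api_key, session=requests):
    # `session` defaults to the requests module itself, but accepting it as a
    # parameter lets tests substitute a fake object instead of hitting the API
    response = session.get(API_URL, params={"q": city, "appid": api_key})
    response.raise_for_status()  # surface HTTP errors instead of bad JSON
    return response.json()       # dict with temperature, description, sunrise, ...
```

In a unit test, `session` can be replaced with a stub whose `get` returns canned JSON, which is exactly the kind of seam the article's testing approaches rely on.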
reader 2.0 released – a Python feed reader library
reddit.com/r/Python | 2021-07-19
want to change the way feeds are retrieved by using Requests?
How To Write Clean Code in Python
dev.to | 2021-07-14
Explore other well-written code bases. If you want some examples of well-written, clean, and Pythonic code, check out the Python requests library.
Everything to know about Requests v2.26.0
dev.to | 2021-07-13
Requests v2.26.0 is a large release which changes and removes many features and dependencies that you should know about when upgrading.
Multiple payloads with one POST request?
reddit.com/r/learnpython | 2021-07-13
Are you talking about multipart/mixed or some other multipart type? If so, look here: https://github.com/psf/requests/issues/1736. The parameter to pass is, very confusingly, named files. But it seems to be going in the direction you want.
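A sketch of what that confusingly named `files` parameter does. Note that requests encodes `multipart/form-data`, not `multipart/mixed` (which is what the linked issue discusses); the URL and part names here are made up. Preparing the request without sending it lets us inspect the encoded body offline:

```python
import requests

# Two payload parts in one POST; each value is (filename, content, content-type)
parts = {
    "metadata": ("meta.json", '{"kind": "report"}', "application/json"),
    "payload": ("data.csv", "a,b\n1,2\n", "text/csv"),
}

# Build and encode the request without sending it
prepared = requests.Request(
    "POST", "https://example.com/upload", files=parts
).prepare()

print(prepared.headers["Content-Type"])  # multipart/form-data; boundary=...
```

If you genuinely need `multipart/mixed`, you would have to build the body yourself (e.g. with the stdlib `email` package) and pass it as `data=` with an explicit `Content-Type` header.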
Noobie who is trying to use K8s needs confirmation to know if this is the way or he is overestimating Kubernetes.
reddit.com/r/kubernetes | 2021-10-20
The Data Engineer Roadmap 🗺
dev.to | 2021-10-19
Anything Comparable to power automate or flow for Linux?
reddit.com/r/sysadmin | 2021-10-17
I never used Power Automate, but it looks like a workflow orchestrator. So check out https://airflow.apache.org/
Airflow with different conda environments
reddit.com/r/dataengineering | 2021-10-13
If Airflow is the way to go, then try DockerOperators (https://github.com/apache/airflow/blob/main/airflow/providers/docker/example_dags/example_docker.py). It's not the easiest setup, but it will do what you want, from what I understand of your question.
Databricks jobs and Airflow on Kubernetes
reddit.com/r/dataengineering | 2021-10-02
I have not used Databricks, but it is something we are looking into integrating into our infrastructure in the future. Since Databricks is a service that does not run locally, I would use the Databricks operators/hooks that come with Airflow rather than trying to build out anything of my own. https://github.com/apache/airflow/blob/main/airflow/providers/databricks/hooks/databricks.py
what do you think about airflow?
reddit.com/r/dataengineering | 2021-10-02
I think one of the main design problems I have with Airflow is that it tends to tightly couple processing/transform code with data-movement code, which makes debugging tricky. The way I have solved this is by building a command-line interface to all the processing code, so I can debug the processing code outside of any Airflow infrastructure (which can be painful to get running locally if one does not use Airflow Breeze).
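The decoupling described above can be sketched with a plain `argparse` entry point; all names here are hypothetical. The point is that `transform` is importable both from this CLI (for local debugging) and from a DAG task:

```python
import argparse

def transform(rows):
    # the actual processing logic, kept free of any Airflow imports
    return [row.upper() for row in rows]

def main(argv=None):
    # thin CLI wrapper, so the same logic runs without an Airflow deployment
    parser = argparse.ArgumentParser(description="Run the transform outside Airflow")
    parser.add_argument("rows", nargs="+", help="input rows to transform")
    args = parser.parse_args(argv)
    for out in transform(args.rows):
        print(out)

if __name__ == "__main__":
    main()
```

An Airflow task would then just import and call `transform`, keeping orchestration and processing concerns separate.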
BigQuery vs Relational Databases
reddit.com/r/bigquery | 2021-09-08
However, my typical go-to is to utilize something like [DBT](https://www.getdbt.com/) or [Airflow](https://airflow.apache.org/) to orchestrate sets of related queries. There are a lot of powerful patterns you can adopt by using these kind of orchestration services in conjunction with BigQuery.
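The core of "orchestrating sets of related queries" is just running each query after the queries it depends on. A toy sketch of that idea in plain Python (query names and the `run_query` stub are hypothetical; in practice you would hand this dependency graph to Airflow or dbt rather than walking it yourself):

```python
# hypothetical dependency graph: query -> list of queries it depends on
QUERIES = {
    "staging_orders": [],
    "staging_users": [],
    "daily_revenue": ["staging_orders"],
    "user_report": ["staging_users", "daily_revenue"],
}

def run_query(name):
    print(f"running {name}")  # stand-in for submitting a BigQuery job

def run_in_order(queries):
    # depth-first walk so dependencies run before dependents
    # (no cycle detection; orchestrators handle that for you)
    done, order = set(), []
    def visit(name):
        if name in done:
            return
        done.add(name)
        for dep in queries[name]:
            visit(dep)
        run_query(name)
        order.append(name)
    for name in queries:
        visit(name)
    return order

order = run_in_order(QUERIES)
```

Airflow and dbt add scheduling, retries, and backfills on top of exactly this kind of dependency ordering.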
Airflow, Spark, other tool ?
reddit.com/r/dataengineering | 2021-09-08
I actually used this example from Airflow: https://github.com/apache/airflow/blob/main/airflow/example_dags/tutorial_taskflow_api_etl.py
Ask HN: What is a good Python project for a mid lv engineer to contribute to?
news.ycombinator.com | 2021-08-26
Come help out with Apache Airflow! It's a great project to get involved with because it has a ton of users and real world problems, but it's still early enough that there's low hanging fruit in terms of adding functionality.
A helpful place to start is the provider packages, since Airflow has integrations with so many 3rd party providers, and if you have knowledge in any of them it can be a good jumping off point.
Serverless data-engineering pipeline suggestions
reddit.com/r/dataengineering | 2021-08-17
What are some alternatives?
urllib3 - Python HTTP library with thread-safe connection pooling, file post support, user friendly, and more.
httplib2 - Small, fast HTTP client library for Python. Features persistent connections, cache, and Google App Engine support. Originally written by Joe Gregorio, now supported by community.
Kedro - A Python framework for creating reproducible, maintainable and modular data science code.
grequests - Requests + Gevent = <3
luigi - Luigi is a Python module that helps you build complex pipelines of batch jobs. It handles dependency resolution, workflow management, visualization etc. It also comes with Hadoop support built in.
Dask - Parallel computing with task scheduling
dagster - A data orchestrator for machine learning, analytics, and ETL.
Pandas - Flexible and powerful data analysis / manipulation library for Python, providing labeled data structures similar to R data.frame objects, statistical functions, and much more
treq - Python requests like API built on top of Twisted's HTTP client.
Apache Camel - Apache Camel is an open source integration framework that empowers you to quickly and easily integrate various systems consuming or producing data.
Numba - NumPy aware dynamic Python compiler using LLVM