| | data_engineering_on_gcp_book | scraper |
|---|---|---|
| Mentions | 12 | 12 |
| Stars | 116 | 98 |
| Growth | - | - |
| Activity | 2.6 | 0.0 |
| Latest Commit | about 3 years ago | about 1 year ago |
| Language | - | TypeScript |
| License | - | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
data_engineering_on_gcp_book
-
How feasible is it for a beginner to establish pipelines, a data warehouse, and a visualization solution as a team of one?
This book will walk you through setting up a complete data engineering stack on GCP: https://github.com/Nunie123/data_engineering_on_gcp_book
-
Python & SQL knowledge needed for ETL?
As for resources, this book covers a lot of these: https://github.com/Nunie123/data_engineering_on_gcp_book. However, it covers the 'how', not the 'why'. The only way I know to understand the 'why' is experience, whether from work or from personal projects.
-
Learning Python and SQL: What should be my next step?
Here's a good book to follow along to introduce you to common tooling and design patterns: https://github.com/Nunie123/data_engineering_on_gcp_book
-
GitHub Repo with All Data Transformation, Cleaning, Validation
I'm not sure if this is exactly what you're looking for, but here's a book on GitHub that talks about the tools and steps for building data pipelines into a data warehouse: https://github.com/Nunie123/data_engineering_on_gcp_book
-
What is the low hanging fruit for a brand new GCP data engineer to learn?
Check out this book: https://github.com/Nunie123/data_engineering_on_gcp_book
-
Unsure about overall process of data engineering
If you're interested in an example of how to build a complete data engineering infrastructure, you should check out this book: https://github.com/Nunie123/data_engineering_on_gcp_book
-
[HELP] Airflow reverse proxy + load balancer + Docker
If you want to try Airflow without the setup headache, you can try Composer on GCP, which is a hosted version of Airflow. I wrote some info on how to do that here: https://github.com/Nunie123/data_engineering_on_gcp_book/blob/master/ch_2_orchestration.md
-
Transition from a Quality Engineer to a Data Engineer
This book might be a good resource for you: https://github.com/Nunie123/data_engineering_on_gcp_book
-
Accepted a data engineer intern role at a Big N company - how do I learn as much as possible?
If you want a place to start on personal projects you can check out this book, https://github.com/Nunie123/data_engineering_on_gcp_book, which will walk you through the basics of setting up a full data engineering stack.
-
What tools, software, programming languages, etc. does a data engineer need in 2021?
If you are interested in tooling, here's a free book on setting up a basic data engineering tech stack on GCP: https://github.com/Nunie123/data_engineering_on_gcp_book
scraper
-
Most Used Individual JavaScript Libraries - jQuery still leads
-
Most Used JavaScript Libraries (percentage) - June 2022 [OC]
Additional info and source code for generating the dataset, summarizing it and rendering the chart are available at https://github.com/get-set-fetch/scraper/tree/main/datasets/javascript-libs-from-top-1mm-sites
-
How to collaborate on web scraping?
Store the scrape progress (to-be-scraped / in-progress / scraped / in-error URLs) in a database shared by all participants, and scrape in parallel with as many machines as the db load permits. Got a connection timeout, or an IP blocked on one machine? Update the scrape status for the corresponding URL and let another machine retry it. https://github.com/get-set-fetch/scraper (written in TypeScript) follows this idea. Using Terraform and a simple config file, you can adjust the number of scraper instances deployed in the cloud at startup and during the scraping process. In benchmarks, a PostgreSQL server running on a DigitalOcean VM with 4 vCPUs and 8 GB of memory allows ~2,000 URLs to be scraped per second (synthetic data, no external traffic). In my experience the database is almost never the bottleneck; obeying a robots.txt crawl-delay will surely keep you under this limit. Disclaimer: I'm the npm package author.
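A minimal sketch of that shared-queue idea, using the pg client against a hypothetical `resources` table (the table and column names here are illustrative, not get-set-fetch's actual schema). `FOR UPDATE SKIP LOCKED` lets parallel workers claim different URLs without blocking each other:

```typescript
import { Pool } from 'pg';

const pool = new Pool({ connectionString: process.env.DATABASE_URL });

// Atomically claim one to-be-scraped URL; SKIP LOCKED makes other
// machines grab different rows instead of waiting on this one.
async function claimNextUrl(): Promise<string | null> {
  const { rows } = await pool.query(
    `UPDATE resources SET scrape_status = 'in-progress'
     WHERE id = (
       SELECT id FROM resources
       WHERE scrape_status = 'to-be-scraped'
       ORDER BY id
       FOR UPDATE SKIP LOCKED
       LIMIT 1
     )
     RETURNING url`,
  );
  return rows.length ? rows[0].url : null;
}

// On a timeout or IP block, put the URL back so another machine retries it.
async function releaseUrl(url: string): Promise<void> {
  await pool.query(
    `UPDATE resources SET scrape_status = 'to-be-scraped' WHERE url = $1`,
    [url],
  );
}
```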
-
How to serve scraped data?
Written in TypeScript, https://github.com/get-set-fetch/scraper stores scraped content directly in a database (SQLite, MySQL, PostgreSQL). Each URL represents a Resource. You can implement your own IResourceStorage and define the exact database columns you need.
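For illustration, a hypothetical sketch of what that Resource/storage split might look like; the real IResourceStorage contract in get-set-fetch may differ in detail:

```typescript
// Hypothetical Resource model: one row per URL.
interface Resource {
  url: string;
  contentType: string | null; // e.g. 'text/html'
  content: Buffer | null;     // raw scraped payload
}

interface IResourceStorage {
  save(resource: Resource): Promise<void>;
  get(url: string): Promise<Resource | null>;
}

// In-memory stand-in for a real sqlite/mysql/postgresql backend,
// just to show the shape a custom implementation has to satisfy.
class MemoryStorage implements IResourceStorage {
  private rows = new Map<string, Resource>();

  async save(resource: Resource): Promise<void> {
    this.rows.set(resource.url, resource);
  }

  async get(url: string): Promise<Resource | null> {
    return this.rows.get(url) ?? null;
  }
}
```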
-
How to scrape entire blogs with content?
You can use https://github.com/get-set-fetch/scraper with a custom plugin based on mozilla/readability, as detailed in https://getsetfetch.org/node/custom-plugins.html (extracting news article content). I think it's a close match for your use case.
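A rough sketch of the readability extraction step, assuming the jsdom and @mozilla/readability packages; the wiring into get-set-fetch's plugin system is covered by the linked docs and omitted here:

```typescript
import { JSDOM } from 'jsdom';
import { Readability } from '@mozilla/readability';

// Reduce a raw blog page to its article title and text content.
function extractArticle(html: string, url: string) {
  const dom = new JSDOM(html, { url }); // url helps resolve relative links
  const article = new Readability(dom.window.document).parse();
  return article
    ? { title: article.title, text: article.textContent }
    : null; // page did not look like an article
}
```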
-
A simple solution to rotate proxies or how to spin up your own rotation proxy server with Puppeteer and only a few lines of JS code
I'm currently implementing concurrency conditions at the project/proxy/domain/session level in https://github.com/get-set-fetch/scraper. At each level you can define the maximum number of requests and the delay between two consecutive requests.
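A toy illustration of one such condition at the domain level; the field names are made up for this sketch, not the library's actual options:

```typescript
interface ConcurrencyOptions {
  maxRequests: number; // maximum in-flight requests at this level
  delay: number;       // ms between two consecutive requests
}

class DomainThrottle {
  private inFlight = 0;
  private lastRequestAt = 0;

  constructor(private opts: ConcurrencyOptions) {}

  // Resolves once a request to this domain is allowed to start.
  async acquire(): Promise<void> {
    while (
      this.inFlight >= this.opts.maxRequests ||
      Date.now() - this.lastRequestAt < this.opts.delay
    ) {
      await new Promise((r) => setTimeout(r, 50)); // poll until a slot frees up
    }
    this.inFlight += 1;
    this.lastRequestAt = Date.now();
  }

  release(): void {
    this.inFlight -= 1;
  }
}
```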
-
Web scraping content into postgresql? Scheduling web scrapers into a pipeline with airflow?
If you're familiar with Node.js, give https://github.com/get-set-fetch/scraper a try. Scraped content can be stored in SQLite, MySQL, or PostgreSQL. It also supports Puppeteer, Playwright, cheerio, or jsdom for the actual content extraction. No scheduler, though.
-
Web Scraping 101 with Python
I'm using this exact strategy to scrape content directly from the DOM using APIs like document.querySelectorAll. You can use the same code in both headless-browser clients like Puppeteer or Playwright and DOM clients like cheerio or jsdom (assuming you have a wrapper over the document API). Depending on the way a web page was fetched (opened in a browser tab or fetched via Node.js http/https requests), ExtractHtmlContentPlugin and ExtractUrlsPlugin use different DOM wrappers (native, cheerio, jsdom) to scrape the content.
[1] https://github.com/get-set-fetch/scraper/blob/main/src/plugi...
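A sketch of that wrapper idea: keep the extraction logic a pure function of a DOM Document, so the identical code runs inside page.evaluate in a headless browser or against a jsdom document in plain Node.js. The selector and return shape are illustrative, not the plugins' actual output:

```typescript
// Pure function of a Document: no closure over Node.js state, so it can
// be serialized into a browser page or called directly against jsdom.
function extractLinks(doc: Document): { href: string; text: string }[] {
  return Array.from(doc.querySelectorAll('a[href]')).map((a) => ({
    href: (a as HTMLAnchorElement).href,
    text: a.textContent?.trim() ?? '',
  }));
}

// Headless-browser client (Puppeteer/Playwright): serialize it into the page.
//   const links = await page.evaluate(`(${extractLinks.toString()})(document)`);
// DOM client (jsdom): call it directly.
//   const links = extractLinks(new JSDOM(html).window.document);
```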
-
What is your “I don't care if this succeeds” project?
https://github.com/get-set-fetch/scraper - I've been working (intermittently :) ) on a Node.js / browser-extension scraper for the last 3 years; see the other projects under the get-set-fetch umbrella. I've been putting a lot more effort in lately, as I really want to do those Alexa top-1-million analyses, like top JS libraries, certificate authorities, and so on. A few weeks back I posted it on Show HN, since you can already do basic (intermediate?) scraping with it.
It's not capable of handling 1M+ pages yet, as it's still limited to Puppeteer or Playwright. I'm working on adding cheerio/jsdom support right now.
What are some alternatives?
shotcaller - A moddable RTS/MOBA game made with bracket-lib and minigene.
puppeteer-cluster - Puppeteer Pool, run a cluster of instances in parallel
FactGraph - FactGraph monorepo (backend + frontend + landing page + blog)
playwright-recaptcha-solver - ReCaptcha V2 solver for Playwright
beubo - Beubo is a free, simple, and minimal CMS with unlimited extensibility using plugins
playwright-python - Python version of the Playwright testing and automation library.
distribyted - Torrent client with HTTP, fuse, and WebDAV interfaces. Start exploring your torrent files right away, even zip, rar, or 7zip archive contents!
pyppeteer - Headless chrome/chromium automation library (unofficial port of puppeteer)
go-plugin - Golang plugin system over RPC.
Twitch-Drops-Bot - A Node.js bot that will automatically watch Twitch streams and claim drop rewards.
dali - Indie assembler/linker for Dalvik VM .dex & .apk files (Work In Progress)
vopono - Run applications through VPN tunnels with temporary network namespaces