| | open-gov-crawlers | auto-backup |
|---|---|---|
| Mentions | 13 | 6 |
| Stars | 61 | 1 |
| Growth | - | - |
| Activity | 6.8 | 0.0 |
| Last commit | 27 days ago | over 1 year ago |
| Language | Python | Python |
| License | - | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is among the top 10% of the most actively developed projects that we track.
open-gov-crawlers
- What are the best repos that are a display of clean code and good programming practices that I can learn from?
  I get feedback occasionally that this is the cleanest web scraping code someone’s seen: https://github.com/public-law/open-gov-crawlers
- Sunday Daily Thread: What's everyone working on this week?
  Writing more scrapers for legal glossaries of many country governments: adding Australia and the UK: https://github.com/public-law/open-gov-crawlers
- Just a little custom coding to auto-generate spider info in a repo
  Here's the repo's README.md with the table. I made this to help onboard new open-source developers, and to help people understand what's there.
- Why and how to use conda?
  I find it very agnostic. I use it for app development, not packages. E.g.: https://github.com/public-law/open-gov-crawlers
- Wanted: contractor who can complete this HTML scrape
  The original PDF: https://github.com/public-law/open-gov-crawlers/blob/rome-statute-english/docs/Rome-Statute.pdf
- Is there anything a webdev can do to help Ukraine right now?
- I want to make International Law easy to read and search: how many versions of Chinese do I need to publish?
  For techies, here's the GitHub repo: https://github.com/public-law/open-gov-crawlers/discussions/70
- Project in support of Ukraine: International Criminal Law parsers/crawlers
- Scrapy project in support of Ukraine: International Criminal Law (war crimes and the crime of aggression)
- New project in support of Ukraine: International Criminal Law parsers/crawlers
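The legal-glossary scrapers mentioned above come down to extracting term/definition pairs from government pages. Here is a minimal stdlib sketch of that idea — a hypothetical illustration, not the repo's actual code, which uses Scrapy and differs per source:

```python
from html.parser import HTMLParser


class GlossaryParser(HTMLParser):
    """Collect (term, definition) pairs from a <dl> glossary list.

    Hypothetical sketch: the real open-gov-crawlers spiders are
    Scrapy-based and handle each government site's markup separately.
    """

    def __init__(self):
        super().__init__()
        self.entries = []   # accumulated (term, definition) tuples
        self._tag = None    # the <dt>/<dd> tag we are currently inside
        self._term = None   # the most recent term, awaiting its definition

    def handle_starttag(self, tag, attrs):
        if tag in ("dt", "dd"):
            self._tag = tag

    def handle_data(self, data):
        text = data.strip()
        if not text or self._tag is None:
            return
        if self._tag == "dt":
            self._term = text
        elif self._tag == "dd" and self._term is not None:
            self.entries.append((self._term, text))
            self._term = None

    def handle_endtag(self, tag):
        if tag in ("dt", "dd"):
            self._tag = None


html = "<dl><dt>Tort</dt><dd>A civil wrong.</dd></dl>"
parser = GlossaryParser()
parser.feed(html)
# parser.entries == [("Tort", "A civil wrong.")]
```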
auto-backup
- One step deeper - software development basics - for beginners
  In my little auto-backup project, I set up a pipeline which checks the linting with flake8.
- Sunday Daily Thread: What's everyone working on this week?
  The goal is also to create cronjobs via the CLI in an easy way. https://github.com/grumpyp/auto-backup
- I made a tool to upload files and folders to your desired remote storage via CLI
- Auto backup package - contributions and feedback wanted
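A lint pipeline like the one described above can be sketched as a minimal GitHub Actions workflow. This is a hypothetical configuration; the actual CI setup in auto-backup may differ:

```yaml
# .github/workflows/lint.yml — illustrative only, not the repo's actual config
name: lint
on: [push, pull_request]
jobs:
  flake8:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.11"
      - run: pip install flake8
      - run: flake8 .   # fails the job if any style violations are found
```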
What are some alternatives?
scrapy-playwright - 🎭 Playwright integration for Scrapy
mciwb - Minecraft Interactive world builder
clean-code-python - :bathtub: Clean Code concepts adapted for Python
datasets - World legal info: scraped, organized, and permissively licensed under Creative Commons.
burplist - Web crawler for Burplist, a search engine for craft beers in Singapore
subscriptable-path - A subclass of Python's pathlib.PurePath that allows subscripting (`p[2]` returns the 2nd item in the path).
hltv-scraping - Scraping data from hltv.org
python_rm_duplicates - Pure Python script to remove duplicate files from one or more directories.
clean-code-typescript - Clean Code concepts adapted for TypeScript
programming-principles - Categorized overview of programming principles & design patterns
clean-code-dotnet - :bathtub: Clean Code concepts and tools adapted for .NET
miniforge - A conda-forge distribution.
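The subscripting idea behind subscriptable-path can be sketched in a few lines of stdlib Python. This is a hypothetical re-implementation for illustration, not the package's actual code:

```python
from pathlib import PurePosixPath


class SubscriptablePath(PurePosixPath):
    """A PurePath variant whose components can be indexed directly.

    Hypothetical sketch: delegates indexing (and slicing) to the
    tuple returned by PurePath.parts.
    """

    def __getitem__(self, index):
        return self.parts[index]


p = SubscriptablePath("/usr/local/bin/python")
# p.parts == ('/', 'usr', 'local', 'bin', 'python')
# p[2] == 'local'; p[-1] == 'python'
```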