| | goskyr | croncert-config |
|---|---|---|
| Mentions | 2 | 3 |
| Stars | 32 | 10 |
| Growth | - | - |
| Activity | 8.7 | 9.3 |
| Last Commit | 2 days ago | 6 days ago |
| Language | Go | - |
| License | GNU General Public License v3.0 only | - |
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
goskyr
-
No-code command line webscraper
I am currently building a webscraper called goskyr that runs from the command line and is meant to be easily configurable: instead of writing code to scrape a website, you just write a configuration snippet and run the scraper. I realize there are a number of GUI-based scraping services that make it extremely easy to set up a scraping process for any website, so for people with no coding experience whatsoever those are probably the easiest solution. I'm trying to come close to those GUI-based solutions in terms of functionality by providing a 'smart' way of finding potentially interesting data/fields and letting the user select a subset in a terminal-based UI. The date extraction & parsing and the newly added machine learning capabilities are probably also worth mentioning. Still, those other, GUI-based solutions are really awesome, e.g. Octoparse or ScrapeStorm.
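To illustrate the configuration-over-code idea, a scraper definition for a hypothetical venue site might look roughly like the following. Note that the keys, selectors, and structure here are illustrative assumptions, not goskyr's exact schema — the repository's documentation is the authoritative reference:

```yaml
# Sketch of a declarative scraper configuration (illustrative only;
# key names are assumptions, not goskyr's actual schema).
scrapers:
  - name: example-venue            # hypothetical site
    url: "https://example-venue.com/events"
    item: ".event-list .event"     # CSS selector matching one record
    fields:
      - name: title
        location:
          selector: ".event-title"
      - name: date
        type: date                 # parsed into a structured date
        location:
          selector: ".event-date"
```

The point is that adding a new website becomes a matter of describing where the data lives, rather than writing and maintaining scraping code.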
-
Crowdsourced concert scraping project
I am currently working on a configurable command line webscraper called goskyr, and my first use case is collecting as much concert data as possible for a website idea I had, croncert.ch. I am hoping that people other than me are willing to contribute to the scraper configuration file in this repository, https://github.com/jakopako/croncert-config, which also contains a GitHub Action to regularly run the scraper. What do you think? Could this work? How should I spread the word?
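As a rough sketch of the "GitHub Action that regularly runs the scraper" idea — not the actual workflow in croncert-config, and with placeholder build/run commands — a scheduled scrape could be wired up like this:

```yaml
# .github/workflows/scrape.yml — hypothetical example; the install and
# run commands below are assumptions, not croncert-config's real workflow.
name: scheduled-scrape
on:
  schedule:
    - cron: "0 3 * * *"      # every day at 03:00 UTC
  workflow_dispatch:          # also allow manual runs
jobs:
  scrape:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-go@v5
        with:
          go-version: "1.22"
      - name: Run scraper
        run: |
          go install github.com/jakopako/goskyr@latest
          goskyr -config config.yml   # flag name is an assumption
```

The appeal of this setup is that contributors only touch the shared configuration file; the scheduled workflow picks up their changes automatically on the next run.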
croncert-config
-
No-code command line webscraper
I actually started this scraping project because of an idea I wanted to try: scraping concert data from as many websites as possible with as little effort as possible, see https://github.com/jakopako/croncert-config. This seems to work better and better. Still, I am wondering whether there are any other valid use cases for such a terminal-based scraper, or if it's rather niche. What do you think?
-
New concert website
croncert.ch is a website that lists concerts worldwide (although, for now, 'worldwide' is more of an aspiration), focusing on smaller venues. An automated process regularly scrapes the underlying concert data. The idea is that anyone can contribute by extending the scraper configuration with new concert venues. Feel free to check out https://github.com/jakopako/croncert-config for more details!
What are some alternatives?
colly - Elegant Scraper and Crawler Framework for Golang
requests-html - Pythonic HTML Parsing for Humans™
fitter - A new way to collect information from APIs and websites
soup - Web Scraper in Go, similar to BeautifulSoup
open-dictionary - 🦄 An initiative to create a dictionary which is free for everyone 🚀
lux - 👾 Fast and simple video download library and CLI tool written in Go
Ferret - Declarative web scraping
Rendora - Dynamic server-side rendering using headless Chrome to effortlessly solve the SEO problem for modern JavaScript websites
osdg-data - The OSDG Community Dataset (OSDG-CD) is a public dataset of thousands of text excerpts, validated by OSDG Community Platform (OSDG-CP) citizen scientists with respect to the Sustainable Development Goals (SDGs). The dataset is updated every quarter and published on Zenodo.
Geziyor - Geziyor, blazing fast web crawling & scraping framework for Go. Supports JS rendering.
Crawly - Crawly, a high-level web crawling & scraping framework for Elixir.