| | curlconverter | awesome-web-archiving |
|---|---|---|
| Mentions | 6 | 13 |
| Stars | 7,174 | 1,842 |
| Growth | 1.3% | 3.4% |
| Activity | 7.6 | 5.2 |
| Last commit | about 2 months ago | 16 days ago |
| Language | TypeScript | - |
| License | MIT License | Creative Commons Zero v1.0 Universal |
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
curlconverter
-
Convert Curl Commands to Code
The simple way would be to add a “curl” option; it looks like you’d just need to write a method that maps this Request interface [0] to curl command substrings you mash together.
The problem, of course, is that all the headers and options are going to be included. You could make it organize them better, though, maybe by indenting and grouping like options together so it’s easier to remove stuff.
[0] https://github.com/curlconverter/curlconverter/blob/e4b6fb74...
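As a rough sketch of that idea (the function and its parameters are hypothetical, not curlconverter's actual Request interface):

```python
# Hypothetical sketch: serialize a request-like object into a curl
# command, one option per line with headers grouped together, so
# unwanted options are easy to delete by hand.
def to_curl(method, url, headers=None, data=None):
    parts = [f"curl -X {method}"]
    for name in sorted(headers or {}):  # group and sort headers
        parts.append(f"-H '{name}: {headers[name]}'")
    if data is not None:
        parts.append(f"-d '{data}'")
    parts.append(f"'{url}'")
    return " \\\n  ".join(parts)
```

Putting each option on its own continuation line is what makes the "easier to remove stuff" part work: deleting a header is deleting a line.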
-
Show HN: OpenAPI DevTools – Chrome ext. that generates an API spec as you browse
I made a fork of the Chrome DevTools that adds exactly this. You can tell Chrome to use a different version of the DevTools if you start it from the command line.
https://github.com/curlconverter/curlconverter/issues/64#iss...
- Program that converts a curl call into python code?
-
Ask HN: What companies are embracing “HTML over the wire”?
Someone added a ColdFusion Markup Language generator to https://curlconverter.com/cfml/ last year. After a few months I decided to remove it, since I'd never heard of it and figured nobody could possibly be using it. The next day, the guy who added support for it and three other people complained, so it seems like they're out there.
https://github.com/curlconverter/curlconverter.github.io/pul...
-
I absolutely love web scraping.
Relevant tools:
- Browser dev tools and front-end tooling to debug JS and reconstruct requests in your code
- grep.app and SourceGraph to check open-source parsers for some URLs (often, there are such repositories)
- curlconverter to quickly draft a script from the cURL command
- Regex and regex playgrounds to extract data from inline JavaScript
- GraphQL introspection tools
- Optionally, Fiddler or Wireshark to intercept and debug network requests (I don't use them, but my teammate does)
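The regex step can be as simple as pulling a JSON blob out of a script tag; the markup below is invented for illustration:

```python
# Extract a JSON payload embedded in inline JavaScript and parse it.
# The page markup here is made up for the example.
import json
import re

html = """
<script>window.__DATA__ = {"items": [{"id": 1, "name": "camera"}]};</script>
"""

# Non-greedy match up to the closing "};" of the assignment.
match = re.search(r"window\.__DATA__\s*=\s*(\{.*?\});", html, re.DOTALL)
data = json.loads(match.group(1))
print(data["items"][0]["name"])  # camera
```

Once the blob is valid JSON, `json.loads` is far more robust than trying to regex out individual fields.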
-
Convert curl commands to code in several languages
Original author here. Many smart people have contributed code over the years, but one warrants special mention.
About a year ago, verhovsky showed up out of nowhere. He rewrote the core of the application and raised the level of professionalism across the board: a dedicated domain, GitHub Pages hosting, a UI refresh, privacy improvements, and much more.
The tree-sitter PR is a monster achievement: https://github.com/curlconverter/curlconverter/pull/278
Search for parseAnsiCString in there. I don't think that had ever been implemented in JavaScript before.
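For context, ANSI-C quoting is bash's $'...' syntax, in which backslash escapes are interpreted. A minimal Python sketch of the idea (not curlconverter's actual implementation, which is in the PR above) might look like:

```python
# Minimal illustrative decoder for bash ANSI-C quoted strings like
# $'hello\nworld'. Handles a few common escapes only; the real parser
# covers many more cases.
import re

SIMPLE_ESCAPES = {"n": "\n", "t": "\t", "r": "\r", "\\": "\\", "'": "'"}

def parse_ansi_c_string(s):
    if not (s.startswith("$'") and s.endswith("'")):
        raise ValueError("not an ANSI-C quoted string")
    body = s[2:-1]
    out = []
    i = 0
    while i < len(body):
        ch = body[i]
        if ch == "\\" and i + 1 < len(body):
            nxt = body[i + 1]
            if nxt == "x":  # \xHH hex escape
                m = re.match(r"x([0-9a-fA-F]{1,2})", body[i + 1:])
                if m:
                    out.append(chr(int(m.group(1), 16)))
                    i += 2 + len(m.group(1))
                    continue
            if nxt in SIMPLE_ESCAPES:
                out.append(SIMPLE_ESCAPES[nxt])
                i += 2
                continue
        out.append(ch)
        i += 1
    return "".join(out)
```

The tricky part in a real converter is doing this byte-accurately for every escape form bash supports, which is what makes the PR such an achievement.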
For you, verhovsky, 10x engineer might be an understatement. Thank you!
awesome-web-archiving
-
Show HN: OpenAPI DevTools – Chrome ext. that generates an API spec as you browse
https://github.com/iipc/awesome-web-archiving/blob/main/READ...
-
DPReview.com is going down effective April 10.
People have pasted this around: https://github.com/iipc/awesome-web-archiving. You could probably do it with wget if you had enough time?
- DPReview.com to close on April 10 after 25 years of operation
-
This Layoff Does Not Exist: tech layoff announcements but weird
Maybe something on this list can help you https://github.com/iipc/awesome-web-archiving
-
Software to keep Website pages "alive"?
Awesome Web Archiving has a longer list of tools and software
-
How to Download All of Wikipedia onto a USB Flash Drive
Not related to the OP topic or zim but I was looking into archiving my bookmarks and other content like documentation sites and wikis. I'll list some of the things I ended up using.
ArchiveBox[1]: Pretty much a self-hosted Wayback Machine. It can save websites as plain HTML, screenshots, text, and some other formats. I have my bookmarks archived in it and use a bookmarklet to easily add new websites. If you use the docker-compose setup, you can enable a full-text search backend for easy searching.
WebRecorder[2]: A browser extension that creates WACZ archives directly in the browser, capturing exactly the content you load. I use it on sites with annoying dynamic content that tools like the Wayback Machine and ArchiveBox wouldn't be able to copy.
ReplayWeb[3]: An interface to browse archive types like WARC, WACZ, and HAR. The interface is just like browsing through your browser. It can be self-hosted as well for the full offline experience.
browsertrix-crawler[4]: A CLI tool to scrape websites and output to WACZ. It's super easy to run with Docker, and I use it to scrape entire blogs and docs for offline use. It uses Chrome to load webpages and has some extra features like custom browser profiles, interactive login, and autoscroll/autoplay. I use the `--generateWACZ` parameter so I can use ReplayWeb to easily browse through the final output.
For bookmarks and miscellaneous webpage archiving, ArchiveBox should be more than enough. Check out this repo for an amazing list of tools and resources: https://github.com/iipc/awesome-web-archiving
[1] https://github.com/ArchiveBox/ArchiveBox
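To script the browsertrix-crawler run described above, one option is to build the Docker invocation programmatically. The image name `webrecorder/browsertrix-crawler` is an assumption here; `--generateWACZ` is the flag mentioned in the comment:

```python
# Build the Docker command line for a browsertrix-crawler run.
# Assumptions: the image name "webrecorder/browsertrix-crawler" and the
# /crawls/ mount point; --generateWACZ is the flag named in the comment.
import subprocess

def build_crawl_command(url, out_dir="./crawls"):
    return [
        "docker", "run",
        "-v", f"{out_dir}:/crawls/",  # crawler output lands here
        "webrecorder/browsertrix-crawler", "crawl",
        "--url", url,
        "--generateWACZ",
    ]

# To actually run it (requires Docker):
# subprocess.run(build_crawl_command("https://example.com/"), check=True)
```

The resulting WACZ file in the output directory can then be opened in ReplayWeb for browsing, as described above.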
- Self Hosted Roundup #14
- SingleFile: Save a Complete Web Page into a Single HTML File
- [HELP] Starting Out for a Beginner
- Reflections as the Internet Archive turns 25
What are some alternatives?
curl-to-php - Convert curl commands to PHP code in your browser
SingleFileZ - Web Extension to save a faithful copy of an entire web page in a self-extracting ZIP file
NSwag - The Swagger/OpenAPI toolchain for .NET, ASP.NET Core and TypeScript.
ArchiveBox - 🗃 Open source self-hosted web archiving. Takes URLs/browser history/bookmarks/Pocket/Pinboard/etc., saves HTML, JS, PDFs, media, and more...
curl-to-go - Convert curl commands to Go code in your browser
obelisk - Go package and CLI tool for saving web page as single HTML file
blog
SingleFile-MV3 - SingleFile version compatible with Manifest V3. The future, right now!
rosso - Data parsers and formatters
firefox-scrapbook - ScrapBook X – a legacy Firefox add-on that captures web pages to local device for future retrieval, organization, annotation, and edit.
playwright_stealth - playwright stealth
youtube-dl - Command-line program to download videos from YouTube.com and other video sites