|  | metascraper | TWINT |
|---|---|---|
| Mentions | 6 | 77 |
| Stars | 2,238 | 13,272 |
| Growth | 0.9% | - |
| Activity | 8.9 | 0.0 |
| Latest Commit | 10 days ago | almost 2 years ago |
| Language | HTML | Python |
| License | MIT License | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
metascraper
- Show HN: I made a tool to clean and convert any webpage to Markdown
- Show HN: AboutIdeasNow – search /about, /ideas, /now pages of 7k+ personal sites
Yep, but there is a fallback to metascraper [0], which does check the HTML tags. However, the fallback didn't work when GPT returned a 1970 date -- I just fixed this! [1]
I think you can now remove the date from your post content and it should still work. If you submit your website again it should do a re-scrape if you changed the content text. Thanks for catching this :)
[0] https://metascraper.js.org/#/
[1] https://github.com/lindylearn/aboutideasnow/commit/8b0ea5b46...
- [Question] fetched data having "Promise<Any>" when it prints as a regular JSON object
200 {description: 'easily scrape metadata from an article on the web.', publisher: null, title: 'metascraper, easily scrape metadata from an article on the web.', url: 'https://metascraper.js.org'} [[Prototype]]: Object
- 9gag metadata scrapper
I am using this library https://github.com/microlinkhq/metascraper but it doesn't catch it.
- Creating a serverless function to scrape web pages metadata
First of all, we'll use the got npm package to fetch the website content (feel free to use any other fetching library), and the metascraper npm package to extract the metadata:
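The extraction step that metascraper performs can be sketched without any dependencies. The following is a hypothetical, regex-based stand-in for what its rules do (the real library uses a proper HTML parser and a layered rule system), with the got fetch replaced by a hardcoded page so the snippet is self-contained:

```javascript
// A hardcoded page stands in for HTML you would normally fetch with got.
const html = `
<html><head>
  <title>Fallback title</title>
  <meta property="og:title" content="Example Article">
  <meta name="description" content="An example description.">
  <meta property="og:url" content="https://example.com/article">
</head><body></body></html>`;

// Naive lookup of a <meta> tag's content attribute by property/name.
// Assumes the attribute appears before content, as in the sample above.
function metaContent(html, attr, name) {
  const re = new RegExp(
    `<meta[^>]*${attr}=["']${name}["'][^>]*content=["']([^"']*)["']`, 'i');
  const m = html.match(re);
  return m ? m[1] : null;
}

const metadata = {
  title: metaContent(html, 'property', 'og:title')
      ?? (html.match(/<title>([^<]*)<\/title>/i) || [])[1]
      ?? null,
  description: metaContent(html, 'name', 'description'),
  url: metaContent(html, 'property', 'og:url'),
};

console.log(metadata);
```

metascraper's value is that it layers many such rules (Open Graph, Twitter cards, JSON-LD, plain tags) with sensible fallbacks per field, which is why the sketch above should not be used in place of it.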
- Show HN: Link Preview (Unfurl/Expand) API
> After that, pricing starts at $25 per month for up to 15,000 requests.
This is very expensive for any decent usage. I have used tools like metascraper for this purpose and it worked pretty well. Setup just requires throwing a tiny nodejs app on a raspberry pi or $5 server and that can handle tons of requests.
https://github.com/microlinkhq/metascraper
TWINT
- Twitter will be purging accounts with no activity for several years soon. We need to archive as many as we can. Any ideas on Methods
twint is a project that can scrape Twitter data via the web pages rather than the Twitter API, which means it can get more than the last 3,200 tweets of an account. Unfortunately, the repo was archived and is no longer in development, so I'm not sure whether it even still works. It's also a bit heavy on dependencies and is written in Python, neither of which makes it easier to install and use.
- How Do I Use Twint?
- NYC's transport authority will no longer post service alerts on Twitter
- New OSINT tool
The tool doesn't work anymore since Twitter changed its APIs, but a good example is twint. Most people in OSINT are not highly technical and don't know their way around a CLI. On the other hand, a CLI tool is one of the quickest, lowest (dev) cost ways to release a tool to the public, and many developers who build tools for the OSINT community do so for free (open source).
- Show HN: Twitter API Reverse Engineered
- What’s currently the best method to archive a twitter account?
You can try twint, which is extensive and should be able to do that. Another option is this twitter downloader, though it might require multiple runs depending on what you want to archive.
- Gbf.life will be gone at the end of April
They do have examples that don't specify a username, such as number 3 on this page or this one on the main page: `twint -g="48.880048,2.385939,1km" -o file.csv --csv` - "Scrape Tweets from a radius of 1km around a place in Paris and export them to a csv file."
- Do I have to pay now for the Twitter API if I want to use it for data analysis?
- Twitter’s $42,000-per-Month API Prices Out Nearly Everyone | Tiers will start at $500,000 a year for access to 0.3 percent of the company’s tweets. Researchers say that’s too much for too little data
This will motivate researchers to web-scrape to circumvent these restrictions. Twint can scrape tweets, it supports proxies, and it can be multi-threaded. A huge hassle, though, and it's prone to breaking when the site changes.
- Basically the current state of granblue
The comment I saw said they used this: https://github.com/twintproject/twint
What are some alternatives?
vercel - Develop. Preview. Ship.
snscrape - A social networking service scraper in Python
bbob - ⚡️Blazing fast js bbcode parser, that transforms and parses bbcode to AST with plugin support in pure javascript, no dependencies
Scweet - A simple and unlimited twitter scraper : scrape tweets, likes, retweets, following, followers, user info, images...
url-metadata-scraper - Tiny Vercel serverless function to scrape metadata from a URL
newspaper - newspaper3k is a news, full-text, and article metadata extraction library in Python 3
icecast-parser - Node.js module for getting and parsing metadata from SHOUTcast/Icecast radio streams
twitterscraper - Scrape Twitter for Tweets
patch-package - Fix broken node modules instantly 🏃🏽♀️💨
gallery-dl - Command-line program to download image galleries and collections from several image hosting sites
is-video - Check if a filepath is a video file
trafilatura - Python & command-line tool to gather text on the Web: web crawling/scraping, extraction of text, metadata, comments