xurlfind3r VS gogetcrawl

Compare xurlfind3r vs gogetcrawl and see how they differ.

xurlfind3r

A command-line (CLI) passive URL discovery utility. It is designed to efficiently identify known URLs of given domains by tapping into a multitude of curated passive online sources. (by hueristiq)

gogetcrawl

Extract web archive data using the Wayback Machine and Common Crawl (by karust)
                 xurlfind3r      gogetcrawl
Mentions         1               1
Stars            524             126
Growth           1.3%            -
Activity         8.0             5.2
Latest commit    3 months ago    11 months ago
Language         Go              Go
License          MIT License     MIT License
Mentions - the total number of mentions we have tracked plus the number of user-suggested alternatives.
Stars - the number of stars a project has on GitHub. Growth - month-over-month growth in stars.
Activity - a relative number indicating how actively a project is being developed; recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is among the top 10% of the most actively developed projects we track.

xurlfind3r

Posts with mentions or reviews of xurlfind3r. We have used some of these posts to build our list of alternatives and similar projects.

gogetcrawl

Posts with mentions or reviews of gogetcrawl. We have used some of these posts to build our list of alternatives and similar projects.
  • A tool/package for Web Archive data extraction
    1 project | /r/golang | 31 May 2023
    I've developed yet another solution that can help you extract data from web archives :) You can use it as a separate tool, or import it into your Go project. Github: https://github.com/karust/gogetcrawl
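
Both tools ultimately query public web-archive indexes such as the Wayback Machine's CDX API (Common Crawl exposes a similar index). As a rough illustration of what that lookup involves, here is a minimal Go sketch against the public CDX endpoint; it is not code from either project, and the query parameters are taken from the CDX API documentation:

package main

import (
	"bufio"
	"fmt"
	"net/http"
	"net/url"
)

func main() {
	// Ask the Wayback Machine CDX index for up to 25 archived URLs under
	// example.com, keeping only HTTP 200 captures and collapsing duplicates.
	q := url.Values{}
	q.Set("url", "example.com/*")
	q.Set("fl", "original") // return only the original URL field
	q.Set("filter", "statuscode:200")
	q.Set("collapse", "urlkey")
	q.Set("limit", "25")

	resp, err := http.Get("https://web.archive.org/cdx/search/cdx?" + q.Encode())
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	// The default output format is plain text, one capture per line;
	// with fl=original each line is a single known URL.
	scanner := bufio.NewScanner(resp.Body)
	for scanner.Scan() {
		fmt.Println(scanner.Text())
	}
}

Tools like xurlfind3r and gogetcrawl wrap queries of this kind (plus other passive sources) with pagination, retries, and deduplication.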

What are some alternatives?

When comparing xurlfind3r and gogetcrawl you can also consider the following projects:

gau - Fetch known URLs from AlienVault's Open Threat Exchange, the Wayback Machine, and Common Crawl.

ghost - Use ghost for passive recon: get the Wayback Machine history for a URL, search for term(s) or regular expression matches, save all archived links, save an archived robots.txt and sitemap.xml, run a whois lookup, and get IP addresses, all without touching the target.

uncover - Quickly discover exposed hosts on the internet using multiple search engines.

colly - Elegant Scraper and Crawler Framework for Golang

SatIntel - SatIntel is an OSINT tool for Satellites 🛰. Extract satellite telemetry, receive orbital predictions, and parse TLEs 🔭

Ferret - Declarative web scraping

TeamsUserEnum - User enumeration with Microsoft Teams API

lux - 👾 Fast and simple video download library and CLI tool written in Go

goSCF - Session Cookie Finder

Rendora - Dynamic server-side rendering using headless Chrome to effortlessly solve the SEO problem for modern JavaScript websites

udon - A simple tool that helps find assets/domains based on a Google Analytics ID.