ipfs-search vs. phalanx

| | ipfs-search | phalanx |
|---|---|---|
| Mentions | 16 | 13 |
| Stars | 842 | 341 |
| Growth | 0.7% | - |
| Activity | 4.1 | 0.0 |
| Latest commit | 6 months ago | about 1 year ago |
| Language | Go | Go |
| License | GNU Affero General Public License v3.0 | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
ipfs-search
-
admarus alternatives - ipfs-search and Yacy
3 projects | 9 Aug 2023
Admarus is a decentralized alternative to ipfs-search
-
where to find ipfs websites?
You can also use this search engine: https://ipfs-search.com
-
Official CIDs for Tor project and I2P downloads on IPFS if they exist.
ipfs-search.com doesn't allow user submissions, but you can use tools like public-gateway-cacher to increase the chances that their DHT listener hears of it.
-
Decentralised Search Engines
IPFS Search https://ipfs-search.com
-
Hello guys I'm new to IPFS and have some questions
There's no main search portal that I know of. I mean, there's https://github.com/ipfs-search/ipfs-search / https://ipfs-search.com/, which AFAIK looks at DHT traffic or similar. I'm not great at getting useful results from it; maybe you'll have better luck. r/IPFS_Hashes is a place for people to post things they're hosting.
-
Is there a way to search for newly added IPFS files?
There is already ipfs-search.com, which uses an open source byte analyzer to figure out what type of file it runs into.
-
Handshake vs. ENS
IPFS, though, has a search engine, https://ipfs-search.com/#/, which seems to work pretty well.
ENS (.eth, .sol, .luna), Unstoppable Domains, Handshake/Namebase, IPFS websites, onion/i2p/zeronet/freenet/lokinet and the others will need to get indexed and made easily accessible through a search engine for normal users. Until then, all of these attempts to counter ICANN are pretty much useless.
-
I traced that new Satoshi post back to a wallet that has made 83,000+ transactions TODAY
I didn't look into the wallet's transactions yet, but I searched the NFT names on this IPFS search engine https://ipfs-search.com/ and it came up empty. I don't know all the specifics of which NFTs end up on IPFS, but the dumb ones my little brother and I minted out of curiosity on Solana were on there. I don't know if the search engine is comprehensive, but it's something, another data point.
-
How to find content on ipfs?
https://github.com/ipfs-search/ipfs-search#building
-
Questions about what an idle node is doing
Nodes announce the hashes they know about, so you could sniff this gossip and then build a search engine on top of it; that is how ipfs-search works.
phalanx
-
An alternative to Elasticsearch that runs on a few MBs of RAM
Somewhat related, this guy: https://github.com/mosuka/ seems to be very passionate about search services.
He built two distributed search services:
- https://github.com/mosuka/phalanx, written in Go.
- https://github.com/mosuka/bayard, written in Rust.
-
What is the coolest Go open source projects you have seen?
Don’t forget about Phalanx if you like Bleve/Bluge.
- Cloud-native distributed search engine written in Go
-
I want to dive into how to make search engines
I've never worked on a project that encompasses as many computer science algorithms as a search engine. There are a lot of topics you can look up in "Information Storage and Retrieval":
- Tries (patricia, radix, etc...)
- Trees (b-trees, b+trees, merkle trees, log-structured merge-tree, etc..)
- Consensus (raft, paxos, etc..)
- Block storage (disk block size optimizations, mmap files, delta storage, etc..)
- Probabilistic filters (hyperloglog, bloom filters, etc...)
- Binary Search (sstables, sorted inverted indexes, roaring bitmaps)
- Ranking (pagerank, tf/idf, bm25, etc...)
- NLP (stemming, POS tagging, subject identification, sentiment analysis etc...)
- HTML (document parsing/lexing)
- Images (exif extraction, removal, resizing / proxying, etc...)
- Queues (SQS, NATS, Apollo, etc...)
- Clustering (k-means, density, hierarchical, gaussian distributions, etc...)
- Rate limiting (leaky bucket, windowed, etc...)
- Compression
- Applied linear algebra
- Text processing (unicode-normalization, slugify, sanitation, lossless and lossy hashing like metaphone and document fingerprinting)
- etc...
I'm sure there's plenty more I've missed. There are lots of generic structures involved, like hashes, linked lists, skip lists, heaps, and priority queues, and this is just to reach 2000s-level basic tech.
- https://github.com/quickwit-oss/tantivy
- https://github.com/valeriansaliou/sonic
- https://github.com/mosuka/phalanx
- https://github.com/meilisearch/MeiliSearch
- https://github.com/blevesearch/bleve
- https://github.com/thomasjungblut/go-sstables
A lot of people new to this space mistakenly think you can just throw Elasticsearch or Postgres full-text search in front of terabytes of records and have something decent. The problem is that search with good rankings often requires custom storage, so calculations can be sharded among multiple nodes and you can do layered ranking without passing huge blobs of results between systems.
-
Why Writing Your Own Search Engine Is Hard (2004)
For those curious, I'm on my 3rd search engine as I keep discovering new methods of compactly and efficiently processing and querying results.
There isn't a one-size-fits-all approach, but I've never worked on a project that encompasses as many computer science algorithms as a search engine.
- Tries (patricia, radix, etc...)
- Trees (b-trees, b+trees, merkle trees, log-structured merge-tree, etc..)
- Consensus (raft, paxos, etc..)
- Block storage (disk block size optimizations, mmap files, delta storage, etc..)
- Probabilistic filters (hyperloglog, bloom filters, etc...)
- Binary Search (sstables, sorted inverted indexes)
- Ranking (pagerank, tf/idf, bm25, etc...)
- NLP (stemming, POS tagging, subject identification, etc...)
- HTML (document parsing/lexing)
- Images (exif extraction, removal, resizing / proxying, etc...)
- Queues (SQS, NATS, Apollo, etc...)
- Clustering (k-means, density, hierarchical, gaussian distributions, etc...)
- Rate limiting (leaky bucket, windowed, etc...)
- Text processing (unicode-normalization, slugify, sanitation, lossless and lossy hashing like metaphone and document fingerprinting)
- etc...
I'm sure there's plenty more I've missed. There are lots of generic structures involved, like hashes, linked lists, skip lists, heaps, and priority queues, and this is just to reach 2000s-level basic tech.
- https://github.com/quickwit-oss/tantivy
- https://github.com/valeriansaliou/sonic
- https://github.com/mosuka/phalanx
- https://github.com/meilisearch/MeiliSearch
- https://github.com/blevesearch/bleve
A lot of people new to this space mistakenly think you can just throw Elasticsearch or Postgres full-text search in front of terabytes of records and have something decent. That might work for something small, like a curated collection of a few hundred sites.
-
Show HN: I built a self hosted recommendation feed to escape Google's algorithm
Is there a tool that automatically forwards every URL + HTML of the page you visit to a webhook so you could write an endpoint that would index everything?
If not, I would love to see this add a "forward to webhook" option. I would be happy to write up a real backend that parsed the content and indexed it.
Actually, there are lots of open-source projects for this: https://github.com/quickwit-oss/tantivy, https://github.com/valeriansaliou/sonic, https://github.com/mosuka/phalanx, https://github.com/meilisearch/MeiliSearch, etc...
- Phalanx is a cloud-native distributed search engine with REST API written in Go
- Phalanx v0.3.0, a distributed search engine written in Go, has been released
- Phalanx 0.2.0, a distributed search engine written in Go, has been released
- Phalanx - A cloud-native full-text search and indexing server written in Go built on top of Bluge
What are some alternatives?
rabbit-hole - RabbitMQ HTTP API client in Go
tantivy - Tantivy is a full-text search engine library inspired by Apache Lucene and written in Rust
watermill - Building event-driven applications the easy way in Go.
MeiliSearch - A lightning-fast search API that fits effortlessly into your apps, websites, and workflow
superhighway84 - USENET-inspired, uncensorable, decentralized internet discussion system running on IPFS & OrbitDB
markov - Materials for book: "Markov Chains for programmers"
machinery - Machinery is an asynchronous task queue/job queue based on distributed message passing.
go-sstables - Go library for protobuf compatible sstables, a skiplist, a recordio format and other database building blocks like a write-ahead log. Ships now with an embedded key-value store.
nebula - 🌌 A network agnostic DHT crawler, monitor, and measurement tool that exposes timely information about DHT networks.
search-engines - Reviewing alternative search engines
Go IPFS - IPFS implementation in Go [Moved to: https://github.com/ipfs/kubo]
grub-2.0 - Grub is an AI powered Web crawler.