phalanx vs grub-2.0

| | phalanx | grub-2.0 |
|---|---|---|
| Mentions | 13 | 4 |
| Stars | 341 | 19 |
| Activity | - | - |
| Growth | 0.0 | 0.0 |
| Last commit | about 1 year ago | over 1 year ago |
| Language | Go | Python |
| License | Apache License 2.0 | - |
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
phalanx
-
An alternative to Elasticsearch that runs on a few MBs of RAM
Somewhat related, this guy (https://github.com/mosuka/) seems to be very passionate about search services.
He built two distributed search services:
- https://github.com/mosuka/phalanx, written in Go.
- https://github.com/mosuka/bayard, written in Rust.
-
What are the coolest Go open source projects you have seen?
Don’t forget about Phalanx if you like Bleve/Bluge.
- Cloud-native distributed search engine written in Go
-
I want to dive into how to make search engines
I've never worked on a project that encompasses as many computer science algorithms as a search engine. There are a lot of topics you can look up in "Information Storage and Retrieval":
- Tries (patricia, radix, etc...)
- Trees (b-trees, b+trees, merkle trees, log-structured merge-tree, etc..)
- Consensus (raft, paxos, etc..)
- Block storage (disk block size optimizations, mmap files, delta storage, etc..)
- Probabilistic filters (HyperLogLog, bloom filters, etc...)
- Binary Search (sstables, sorted inverted indexes, roaring bitmaps)
- Ranking (pagerank, tf/idf, bm25, etc...; see the BM25 sketch below)
- NLP (stemming, POS tagging, subject identification, sentiment analysis, etc...)
- HTML (document parsing/lexing)
- Images (exif extraction, removal, resizing / proxying, etc...)
- Queues (SQS, NATS, Apollo, etc...)
- Clustering (k-means, density, hierarchical, gaussian distributions, etc...)
- Rate limiting (leaky bucket, windowed, etc...)
- Compression
- Applied linear algebra
- Text processing (Unicode normalization, slugify, sanitization, lossless and lossy hashing like Metaphone and document fingerprinting)
- etc...
I'm sure there is plenty more I've missed. There are also lots of generic structures involved (hashes, linked lists, skip lists, heaps, priority queues), and that's just to reach 2000s-level basic tech.
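To make the ranking bullet concrete, here is a minimal BM25 sketch in Python over a toy corpus. It is a textbook illustration with hypothetical helper names, not how any of the engines linked below actually implement scoring:

```python
import math
from collections import Counter

def bm25_score(query, doc, doc_freqs, num_docs, avg_len, k1=1.5, b=0.75):
    """Score one tokenized document against a query with BM25."""
    tf = Counter(doc)
    score = 0.0
    for term in query:
        if term not in tf:
            continue
        n = doc_freqs[term]  # how many documents contain this term
        # Inverse document frequency: rare terms weigh more.
        idf = math.log(1 + (num_docs - n + 0.5) / (n + 0.5))
        # Term frequency, saturated by k1 and length-normalized via b.
        norm_tf = tf[term] * (k1 + 1) / (
            tf[term] + k1 * (1 - b + b * len(doc) / avg_len))
        score += idf * norm_tf
    return score

docs = [d.split() for d in ("the quick brown fox",
                            "a lazy brown dog",
                            "the dog chased the fox")]
doc_freqs = Counter(t for d in docs for t in set(d))
avg_len = sum(len(d) for d in docs) / len(docs)
for d in docs:
    print(d, bm25_score(["brown", "fox"], d, doc_freqs, len(docs), avg_len))
```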
- https://github.com/quickwit-oss/tantivy
- https://github.com/valeriansaliou/sonic
- https://github.com/mosuka/phalanx
- https://github.com/meilisearch/MeiliSearch
- https://github.com/blevesearch/bleve
- https://github.com/thomasjungblut/go-sstables
A lot of people new to this space mistakenly think you can just throw Elasticsearch or Postgres full-text search in front of terabytes of records and have something decent. The problem is that search with good ranking often requires custom storage, so calculations can be sharded across multiple nodes and layered ranking can happen without passing huge blobs of results between systems.
-
Why Writing Your Own Search Engine Is Hard (2004)
For those curious, I'm on my 3rd search engine as I keep discovering new methods of compactly and efficiently processing and querying results.
There isn't a one-size-fits-all approach, but I've never worked on a project that encompasses as many computer science algorithms as a search engine.
- Tries (patricia, radix, etc...)
- Trees (b-trees, b+trees, merkle trees, log-structured merge-tree, etc..)
- Consensus (raft, paxos, etc..)
- Block storage (disk block size optimizations, mmap files, delta storage, etc..)
- Probabilistic filters (HyperLogLog, bloom filters, etc...)
- Binary Search (sstables, sorted inverted indexes; see the posting-list sketch below)
- Ranking (pagerank, tf/idf, bm25, etc...)
- NLP (stemming, POS tagging, subject identification, etc...)
- HTML (document parsing/lexing)
- Images (exif extraction, removal, resizing / proxying, etc...)
- Queues (SQS, NATS, Apollo, etc...)
- Clustering (k-means, density, hierarchical, gaussian distributions, etc...)
- Rate limiting (leaky bucket, windowed, etc...)
- Text processing (Unicode normalization, slugify, sanitization, lossless and lossy hashing like Metaphone and document fingerprinting)
- etc...
I'm sure there is plenty more I've missed. There are also lots of generic structures involved (hashes, linked lists, skip lists, heaps, priority queues), and that's just to reach 2000s-level basic tech.
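As a companion to the binary-search bullet, here is a toy sketch of AND-ing two query terms by binary-searching sorted posting lists. Plain Python lists of doc ids stand in here; real engines layer skip pointers or roaring bitmaps on top:

```python
from bisect import bisect_left

def intersect_postings(short, long):
    """Intersect two sorted posting lists of doc ids.

    Binary-search the longer list for each id in the shorter one,
    advancing a lower bound so every probe scans a smaller suffix.
    """
    result, lo = [], 0
    for doc_id in short:
        i = bisect_left(long, doc_id, lo)
        if i < len(long) and long[i] == doc_id:
            result.append(doc_id)
        lo = i  # both lists are sorted, so earlier entries can be skipped
    return result

# Toy posting lists for two terms; docs 2 and 5 contain both.
print(intersect_postings([2, 5, 9], [1, 2, 3, 5, 8, 13, 21]))  # -> [2, 5]
```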
- https://github.com/quickwit-oss/tantivy
- https://github.com/valeriansaliou/sonic
- https://github.com/mosuka/phalanx
- https://github.com/meilisearch/MeiliSearch
- https://github.com/blevesearch/bleve
A lot of people new to this space mistakenly think you can just throw Elasticsearch or Postgres full-text search in front of terabytes of records and have something decent. That might work for something small like a curated collection of a few hundred sites.
-
Show HN: I built a self hosted recommendation feed to escape Google's algorithm
Is there a tool that automatically forwards every URL + HTML of the page you visit to a webhook so you could write an endpoint that would index everything?
If not, I would love to see this add a "forward to webhook" option. I would be happy to write up a real backend that parsed the content and indexed it.
Actually, there are lots of open-source projects for this: https://github.com/quickwit-oss/tantivy, https://github.com/valeriansaliou/sonic, https://github.com/mosuka/phalanx, https://github.com/meilisearch/MeiliSearch, etc...
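A minimal sketch of the webhook receiver proposed above, using only the Python standard library. The {"url": ..., "html": ...} payload shape and the index_page stub are assumptions for illustration, not an API any of these projects define:

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

def index_page(url, html):
    # Hypothetical stub: tokenize the HTML and push it into your search engine.
    print(f"indexing {url} ({len(html)} bytes)")

class IndexHook(BaseHTTPRequestHandler):
    def do_POST(self):
        # Expect a JSON body like {"url": "...", "html": "..."} from the browser side.
        length = int(self.headers.get("Content-Length", 0))
        page = json.loads(self.rfile.read(length))
        index_page(page["url"], page["html"])
        self.send_response(204)  # no response body needed; just acknowledge
        self.end_headers()

if __name__ == "__main__":
    HTTPServer(("127.0.0.1", 8080), IndexHook).serve_forever()
```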
- Phalanx is a cloud-native distributed search engine with REST API written in Go
- Phalanx v0.3.0, a distributed search engine written in Go, has been released
- Phalanx 0.2.0, a distributed search engine written in Go, has been released
- Phalanx - A cloud-native full-text search and indexing server written in Go built on top of Bluge
grub-2.0
-
I want to dive into how to make search engines
Not finished, but the Selenium-based crawler works pretty well to get past most blocks: https://github.com/kordless/grub-2.0
For IP blocks, try this: https://github.com/kordless/mitta-screenshot
-
Ask HN: Decent, open source search engine?
I started https://mitta.us as this, but am pivoting to prompt management for GPT-3. I've open-sourced the code for the crawler here: https://github.com/kordless/grub-2.0. The entire system uses Google Vision for extracting text. I dislike fiddling with the DOM...
If you are interested in using Solr for this, I can provide instructions to you. I'm kordless at the gmails ... com.
-
How to Scrape and Extract Hyperlink Networks with BeautifulSoup and NetworkX
Depending on the use case, you might try imaging the page and then sending the image to an ML model for full-text extraction before indexing. If you need links extracted, Selenium also supports parsing the assembled DOM: https://github.com/kordless/grub-2.0/tree/main/aperture
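Not the aperture code itself, but the general Selenium pattern for reading the assembled DOM looks roughly like this (assumes Chrome and a recent Selenium 4 install):

```python
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()  # Selenium 4 can fetch chromedriver itself
driver.get("https://example.com")
# find_elements runs against the DOM *after* JavaScript has executed,
# which is exactly what a raw HTTP fetch + BeautifulSoup would miss.
links = [a.get_attribute("href") for a in driver.find_elements(By.TAG_NAME, "a")]
driver.quit()
print(links)
```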
-
Mastering Web Scraping in Python: Crawling from Scratch
I’ve found that imaging the page and doing OCR on the image is quite good for text extraction. Many pages on the Internet render with JavaScript, which means BeautifulSoup may not see the text in the DOM.
Here is the code to do some of that: https://github.com/kordless/grub-2.0
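grub-2.0 itself sends images to Google Vision; as a self-contained stand-in, here is the same image-then-OCR idea sketched with Selenium plus Tesseract (via pytesseract):

```python
import pytesseract
from PIL import Image
from selenium import webdriver

driver = webdriver.Chrome()
driver.get("https://example.com")
driver.save_screenshot("page.png")  # the rendered viewport, JavaScript included
driver.quit()

# OCR the screenshot back into indexable text.
print(pytesseract.image_to_string(Image.open("page.png")))
```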
What are some alternatives?
tantivy - Tantivy is a full-text search engine library inspired by Apache Lucene and written in Rust
ChromeController - Comprehensive wrapper and execution manager for the Chrome browser using the Chrome Debugging Protocol.
ipfs-search - Search engine for the Interplanetary Filesystem.
skyscraper - Structural scraping for the rest of us.
MeiliSearch - A lightning-fast search API that fits effortlessly into your apps, websites, and workflow
mitta-screenshot - Mitta's Chrome extension for saving the current view of a website.
markov - Materials for book: "Markov Chains for programmers"
rod - A Devtools driver for web automation and scraping
go-sstables - Go library for protobuf compatible sstables, a skiplist, a recordio format and other database building blocks like a write-ahead log. Ships now with an embedded key-value store.
search-engines - Reviewing alternative search engines
colly - Elegant Scraper and Crawler Framework for Golang