Apache Solr
phalanx
| | Apache Solr | phalanx |
|---|---|---|
| Mentions | 31 | 13 |
| Stars | 4,365 | 341 |
| Growth | 0.0% | - |
| Activity | 0.0 | 0.0 |
| Last commit | 2 months ago | about 1 year ago |
| Language | Java | Go |
| License | Apache License 2.0 | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
Apache Solr
- Getting started with Elasticsearch: basic concepts
- YaCy, a distributed Web Search Engine, based on a peer-to-peer network
There are already many projects about search:
- https://www.marginalia.nu/
- https://searchmysite.net/
- https://lucene.apache.org/
- Elasticsearch
- https://presearch.com/
- https://stract.com/
- https://wiby.me/
I think all these projects are fun. I would like to see one succeed at reaching a mainstream level of attention.
I have also been gathering link metadata for some time. Maybe I will use it to feed an eventual self-hosted search engine, or a language model, if I decide to experiment with that.
- seed domains: https://github.com/rumca-js/Internet-Places-Database
- seed bookmarks: https://github.com/rumca-js/RSS-Link-Database
- links per year: https://github.com/rumca-js/RSS-Link-Database-2024
- Getting started with Elasticsearch + Python
Elasticsearch is based on Lucene and is used by various companies and developers across the world to build custom search solutions.
- Tools to use to query and index data?
Elasticsearch is kinda heavyweight infra for a small project. It's built on top of Apache Lucene (https://lucene.apache.org), which you can use directly.
- Top metrics for Elasticsearch monitoring with Prometheus
Elasticsearch is based on Lucene, which is built in Java. This means that monitoring the Java Virtual Machine (JVM) memory is crucial to understand the current usage of the whole system.
- Cross data type search that wasn’t supported well using Elasticsearch
Apache Lucene, which seems to have a lot more features than Elasticsearch
- How to find closest keyphrase match in text?
Generally with term vectors and a tf-idf index. Lucene is a good place to start.
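The term-vector approach above can be sketched in plain Python: slide a window over the text, weight each window's terms by tf-idf, and pick the window whose vector is closest to the query by cosine similarity. This is a minimal illustration, not Lucene's implementation; all function names and the window-of-words representation are my own.

```python
import math
from collections import Counter

def tf_idf_vectors(docs):
    """Build a tf-idf weight dict for each tokenized document."""
    n = len(docs)
    df = Counter(term for doc in docs for term in set(doc))
    vectors = []
    for doc in docs:
        tf = Counter(doc)
        vectors.append({t: (c / len(doc)) * math.log(n / df[t])
                        for t, c in tf.items()})
    return vectors

def cosine(a, b):
    """Cosine similarity between two sparse weight dicts."""
    dot = sum(w * b.get(t, 0.0) for t, w in a.items())
    na = math.sqrt(sum(w * w for w in a.values()))
    nb = math.sqrt(sum(w * w for w in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def closest_window(text, phrase, size=3):
    """Return the word window in `text` most similar to `phrase`."""
    words = text.lower().split()
    windows = [words[i:i + size] for i in range(len(words) - size + 1)]
    query = phrase.lower().split()
    # Compute idf over the windows plus the query itself.
    vecs = tf_idf_vectors(windows + [query])
    qvec = vecs[-1]
    score, i = max((cosine(w, qvec), i) for i, w in enumerate(vecs[:-1]))
    return " ".join(windows[i]), score
```

A real index would precompute postings instead of re-vectorizing per query, but the scoring idea is the same.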
- Java Library to perform string search
Try Elasticsearch or Solr; behind the scenes they both use https://lucene.apache.org/, which you can use directly if you don't want basically a full NoSQL database service. But I'd just slap Solr up and call it a day.
- Top 8 Open-Source Observability & Testing Tools
OpenSearch is an open-source database to ingest, search, visualize, and analyze data. It’s built on top of Apache Lucene, a FOSS library for indexing and search, which OpenSearch leverages for more advanced analytics capabilities, like anomaly detection, machine learning, full-text search, and more.
- grep like search with preprocessing
Lucene is the thing you think you need. Elasticsearch is a nice wrapper for it. But these are Java, so maybe you want Sphinx Search (C++) or MeiliSearch (Rust).
phalanx
- An alternative to Elasticsearch that runs on a few MBs of RAM
Somewhat related, this guy, https://github.com/mosuka/, seems to be very passionate about search services.
He built two distributed search services:
- https://github.com/mosuka/phalanx, written in Go.
- https://github.com/mosuka/bayard, written in Rust.
- What is the coolest Go open source projects you have seen?
Don’t forget about Phalanx if you like Bleve/Bluge.
- Cloud-native distributed search engine written in Go
- I want to dive into how to make search engines
I've never worked on a project that encompasses as many computer science algorithms as a search engine. There are a lot of topics you can look up in "Information Storage and Retrieval":
- Tries (patricia, radix, etc...)
- Trees (b-trees, b+trees, merkle trees, log-structured merge-tree, etc..)
- Consensus (raft, paxos, etc..)
- Block storage (disk block size optimizations, mmap files, delta storage, etc..)
- Probabilistic filters (HyperLogLog, bloom filters, etc...)
- Binary Search (sstables, sorted inverted indexes, roaring bitmaps)
- Ranking (pagerank, tf/idf, bm25, etc...)
- NLP (stemming, POS tagging, subject identification, sentiment analysis etc...)
- HTML (document parsing/lexing)
- Images (exif extraction, removal, resizing / proxying, etc...)
- Queues (SQS, NATS, Apollo, etc...)
- Clustering (k-means, density, hierarchical, gaussian distributions, etc...)
- Rate limiting (leaky bucket, windowed, etc...)
- Compression
- Applied linear algebra
- Text processing (unicode-normalization, slugify, sanitation, lossless and lossy hashing like metaphone and document fingerprinting)
- etc...
I'm sure there is plenty more I've missed. There are lots of generic structures involved like hashes, linked-lists, skip-lists, heaps and priority queues and this is just to get 2000's level basic tech.
- https://github.com/quickwit-oss/tantivy
- https://github.com/valeriansaliou/sonic
- https://github.com/mosuka/phalanx
- https://github.com/meilisearch/MeiliSearch
- https://github.com/blevesearch/bleve
- https://github.com/thomasjungblut/go-sstables
A lot of people new to this space mistakenly think you can just throw Elasticsearch or Postgres full-text search in front of terabytes of records and have something decent. The problem is that search with good rankings often requires custom storage, so calculations can be sharded among multiple nodes and you can do layered ranking without passing huge blobs of results between systems.
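The "Ranking (pagerank, tf/idf, bm25, etc...)" item above is the piece most easily shown in code. Below is a minimal, self-contained Okapi BM25 scorer over tokenized documents; the function name and the `k1`/`b` defaults are conventional choices of mine, not taken from any of the linked projects.

```python
import math
from collections import Counter

def bm25_scores(query, docs, k1=1.5, b=0.75):
    """Score each tokenized document in `docs` against `query` with Okapi BM25.

    k1 controls term-frequency saturation; b controls length normalization.
    """
    n = len(docs)
    avgdl = sum(len(d) for d in docs) / n          # average document length
    df = Counter(t for d in docs for t in set(d))  # document frequency per term
    scores = []
    for doc in docs:
        tf = Counter(doc)
        s = 0.0
        for t in query:
            if t not in tf:
                continue
            # Smoothed idf, kept non-negative.
            idf = math.log(1 + (n - df[t] + 0.5) / (df[t] + 0.5))
            s += idf * tf[t] * (k1 + 1) / (
                tf[t] + k1 * (1 - b + b * len(doc) / avgdl))
        scores.append(s)
    return scores
```

In a sharded setup, each node would run exactly this local scoring over its own postings and return only its top-k (score, doc-id) pairs for merging, which is the "layered ranking" point made above.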
- Why Writing Your Own Search Engine Is Hard (2004)
For those curious, I'm on my 3rd search engine as I keep discovering new methods of compactly and efficiently processing and querying results.
There isn't a one-size-fits-all approach, but I've never worked on a project that encompasses as many computer science algorithms as a search engine.
- Tries (patricia, radix, etc...)
- Trees (b-trees, b+trees, merkle trees, log-structured merge-tree, etc..)
- Consensus (raft, paxos, etc..)
- Block storage (disk block size optimizations, mmap files, delta storage, etc..)
- Probabilistic filters (HyperLogLog, bloom filters, etc...)
- Binary Search (sstables, sorted inverted indexes)
- Ranking (pagerank, tf/idf, bm25, etc...)
- NLP (stemming, POS tagging, subject identification, etc...)
- HTML (document parsing/lexing)
- Images (exif extraction, removal, resizing / proxying, etc...)
- Queues (SQS, NATS, Apollo, etc...)
- Clustering (k-means, density, hierarchical, gaussian distributions, etc...)
- Rate limiting (leaky bucket, windowed, etc...)
- Text processing (unicode-normalization, slugify, sanitation, lossless and lossy hashing like metaphone and document fingerprinting)
- etc...
I'm sure there is plenty more I've missed. There are lots of generic structures involved like hashes, linked-lists, skip-lists, heaps and priority queues and this is just to get 2000's level basic tech.
- https://github.com/quickwit-oss/tantivy
- https://github.com/valeriansaliou/sonic
- https://github.com/mosuka/phalanx
- https://github.com/meilisearch/MeiliSearch
- https://github.com/blevesearch/bleve
A lot of people new to this space mistakenly think you can just throw Elasticsearch or Postgres full-text search in front of terabytes of records and have something decent. That might work for something small, like a curated collection of a few hundred sites.
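Another item from the list above, probabilistic filters, is small enough to sketch in full. A Bloom filter answers "definitely not present" or "probably present" using k hash probes into an m-bit array, which is why crawlers and storage engines use one to skip disk lookups for keys that were never indexed. This is a toy illustration; the class name and the SHA-256-based probe scheme are my own choices.

```python
import hashlib

class BloomFilter:
    """Tiny Bloom filter: k hash probes into an m-bit array.

    May report false positives; never reports false negatives.
    """
    def __init__(self, m=1024, k=3):
        self.m, self.k = m, k
        self.bits = bytearray(m // 8 + 1)

    def _probes(self, item):
        # Derive k independent bit positions by salting the hash input.
        for i in range(self.k):
            h = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(h[:8], "big") % self.m

    def add(self, item):
        for p in self._probes(item):
            self.bits[p // 8] |= 1 << (p % 8)

    def __contains__(self, item):
        return all(self.bits[p // 8] & (1 << (p % 8))
                   for p in self._probes(item))
```

Production systems size m and k from the expected element count and target false-positive rate rather than hard-coding them as here.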
- Show HN: I built a self hosted recommendation feed to escape Google's algorithm
Is there a tool that automatically forwards every URL + HTML of the page you visit to a webhook so you could write an endpoint that would index everything?
If not, I would love to see this add a "forward to webhook" option. I would be happy to write up a real backend that parsed the content and indexed it.
Actually, there are lots of open-source projects for this: https://github.com/quickwit-oss/tantivy, https://github.com/valeriansaliou/sonic, https://github.com/mosuka/phalanx, https://github.com/meilisearch/MeiliSearch, etc...
- Phalanx is a cloud-native distributed search engine with REST API written in Go
- Phalanx v0.3.0, a distributed search engine written in Go, has been released
- Phalanx 0.2.0, a distributed search engine written in Go, has been released
- Phalanx - A cloud-native full-text search and indexing server written in Go built on top of Bluge
What are some alternatives?
OpenSearch - 🔎 Open source distributed and RESTful search engine.
tantivy - Tantivy is a full-text search engine library inspired by Apache Lucene and written in Rust
Typesense - Open Source alternative to Algolia + Pinecone and an Easier-to-Use alternative to ElasticSearch ⚡ 🔍 ✨ Fast, typo tolerant, in-memory fuzzy Search Engine for building delightful search experiences
ipfs-search - Search engine for the Interplanetary Filesystem.
MeiliSearch - A lightning-fast search API that fits effortlessly into your apps, websites, and workflow
Elasticsearch - Free and Open, Distributed, RESTful Search Engine
markov - Materials for book: "Markov Chains for programmers"
loki - Like Prometheus, but for logs.
go-sstables - Go library for protobuf compatible sstables, a skiplist, a recordio format and other database building blocks like a write-ahead log. Ships now with an embedded key-value store.
Apache Lucene - Apache Lucene.NET
search-engines - Reviewing alternative search engines