Self-Hosted SaaS Alternatives

This page summarizes the projects mentioned and recommended in the original post on news.ycombinator.com.

  • awesome-selfhosted

    A list of Free Software network services and web applications which can be hosted on your own servers

  • At least once per month I check out https://github.com/awesome-selfhosted/awesome-selfhosted to see what folks have been adding.

    One of my favorites from that list is Focalboard. I used to use a combination of Todoist, Trello, and Notion, but found that moving to FB helped me collapse that all into one tool. The open source and self-hosted aspects were a big bonus, of course.

  • ordiri

  • I self-host a huge amount of stuff on top of a custom cloud platform I built, using a Kubernetes cluster deployed as a tenant of my cloud.

    I run a few servers that each have an "ordlet" installed, akin to the kubelet. It configures network namespaces for isolated tenant networking and boots virtual machines, which use an EC2-style metadata server to fetch their boot script. For this purpose, the boot script configures an HA Kubernetes cluster that then uses ArgoCD to fetch all the manifests from my git repo using an AppSet.

    It's so incredibly overcomplicated and overengineered; it's a lot of fun :)

    https://github.com/ordiri/ordiri
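
    As a rough illustration of the metadata step described above (not ordiri's actual code): an EC2-style metadata service serves the boot script over plain HTTP at a fixed link-local address, so a freshly booted VM can fetch it with no credentials. The function names here are illustrative.

```python
import urllib.request

# EC2-style metadata services live at a fixed link-local address;
# a VM can reach it from inside the instance without any credentials.
METADATA_BASE = "http://169.254.169.254/latest"

def user_data_url(base: str = METADATA_BASE) -> str:
    """Build the standard user-data (boot script) endpoint URL."""
    return f"{base}/user-data"

def fetch_boot_script(base: str = METADATA_BASE, timeout: float = 2.0) -> str:
    """Fetch the instance's boot script from the metadata server."""
    with urllib.request.urlopen(user_data_url(base), timeout=timeout) as resp:
        return resp.read().decode()
```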

  • Compose-Examples

    Various Docker Compose examples of selfhosted FOSS and proprietary projects.

  • selfhosted

    Ansible framework for self-hosted infrastructure, based on Rocky Linux and FreeIPA (by sacredheartsc)

  • I self-host literally everything (email, calendar/contacts, VoIP, XMPP, you name it) from my basement with used 1U servers from eBay and a cable internet connection.

    It was probably more hassle than most people would want to bother with to get it set up. But, with everything up and running, there's very little maintenance. I probably spend a few hours a month tinkering still, just because I enjoy it.

    I use a stack of Proxmox VMs, FreeIPA for authn/authz, and Rocky Linux for all servers and workstations. My phone runs GrapheneOS with a Wireguard VPN back to the house. I don't expose anything to the public internet unless absolutely necessary.

    I recently anonymized and Ansibilized my entire setup so that others might get some use out of it:

    https://github.com/sacredheartsc/selfhosted

  • coolify

    An open-source & self-hostable Heroku / Netlify / Vercel alternative.

  • I posted about this before, but I would recommend Coolify for self-hosting applications. It's an open-source Heroku alternative with one-click installation of services like Plausible, NextCloud, etc. It works with Herokuish buildpacks as well as Docker and Docker Compose (with Kubernetes support coming soon).

    I personally use a $5 Hetzner server in Northern Virginia, which works great; it's cheaper and faster than the equivalent at DigitalOcean.

    https://coolify.io

  • core

    OPNsense GUI, API and systems backend (by opnsense)

  • I plug my cable modem into a server running the OPNsense firewall [0], which has a WireGuard plugin.

    I set up a WireGuard VPN in OPNsense.

    Then I downloaded the WireGuard app from F-Droid and pasted the credentials from the app into the WireGuard config on the firewall.

    I set the VPN in GrapheneOS to "always on," so from my phone's perspective it always has access to my internal network, even on LTE. As a result, all my phone's internet traffic ends up going through my home internet connection.

    [0] https://opnsense.org/
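
    For reference, a phone-side WireGuard config for this kind of always-on setup looks roughly like the following. All keys, addresses, and the endpoint are placeholders; the real values come from the key pair generated in the app and the tunnel defined in the OPNsense WireGuard plugin.

```ini
[Interface]
# The phone's keypair; the *public* half is what gets registered on the firewall.
PrivateKey = <phone-private-key>
Address = 10.10.10.2/32
DNS = 10.0.0.1            ; resolve via the home network

[Peer]
# The OPNsense end of the tunnel.
PublicKey = <firewall-public-key>
Endpoint = vpn.example.com:51820
# 0.0.0.0/0 routes *all* phone traffic through the tunnel ("always on").
AllowedIPs = 0.0.0.0/0
PersistentKeepalive = 25
```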

  • lldap

    Light LDAP implementation

  • Not quite yet, but I'm working on a feature that will enable that: https://github.com/nitnelave/lldap/issues/67

  • matrix-docker-ansible-deploy

    šŸ³ Matrix (An open network for secure, decentralized communication) server setup using Ansible and Docker

  • I love self-hosting. Here's my home setup.

    Hardware:

    Mini-ATX tower, 8 TB usable storage, Debian, AMD processor, 8 GB memory

    pfSense firewall (Tailscale exit node)

    Plume Wi-Fi (would like to replace it; owned by Comcast now)

    Solution stack:

    Portainer + Docker Compose to manage everything

    Nextcloud

    PhotoPrism

    Tailscale (remote WireGuard-based access from all my devices; integrates well with pfSense)

    Home Assistant (amazing platform for home automation and more). I love the new voice control features and mission!

    I used to self-host email with the Helm hardware company (not k8s Helm), but they went out of business. Self-hosting email is annoying thanks to the big email providers and their control over the spam-filtering world.

    Matrix chat server bridging all the chat interfaces I use. This is managed by an awesome open source Ansible playbook https://github.com/spantaleev/matrix-docker-ansible-deploy

    Pihole

  • home-ops

    Wife approved HomeOps driven by Kubernetes and GitOps using Flux

  • I'm fully on board with the general idea as a target.

    Right now it's for early, early adopters. Hosting stuff is still a pain, but we are getting better at it: finding stable patterns, paving the path. Hint: it's not doing less, and it's not simpler options; it's adopting and making our own industrial-scale tooling. https://github.com/onedr0p/home-ops is a great early and still strong demonstration. The up-front cost of learning is high, but there's the biggest ecosystem of support you can imagine, and once you recognize the patterns you can get into flow states and make stuff happen with extreme leverage, far beyond where humanity has ever been.

    Building the empowered individual is happening, and we're using stable, good patterns that mean the individual isn't so off on their own doing ops; they'll have a lot more accrued human experience at their back. Their running of services isn't as simple to understand from the start, but it goes much, much further and is far more mature and well supported in the long run.

  • gnize

    A distributed data fingerprinting system. Cognize what you find now, so that others can recognize it later.

  • https://github.com/MatrixManAtYrService/gnize

    Python was giving me trouble though, so I'm switching to Nim for performance reasons, and so I can compile it to C, Objective-C, and JavaScript for better client portability. It's just an empty shell right now, but the project will end up here, hopefully soon: https://github.com/gnize

  • ndup

    Near-Duplicate File Detection

  • You both might be interested in this little Nim program "framed" to frame & digest text for near-duplicate detection:

        https://github.com/c-blake/ndup/blob/main/framed.nim
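
    The framing idea (split content at positions chosen by a checksum of a small sliding window, then digest each frame) can be sketched in Python. This is a toy stand-in for framed.nim, not a port of it; the window, mask, and minimum-frame-size parameters are arbitrary choices.

```python
import hashlib
import zlib

def frames(data: bytes, window: int = 4, mask: int = 15, min_len: int = 8) -> list:
    """Split data into variable-size frames. A boundary is declared where the
    CRC of the last `window` bytes matches a fixed bit pattern, so boundaries
    depend only on local content and resynchronize after insertions/deletions."""
    out, start = [], 0
    for i in range(len(data)):
        if i + 1 - start < min_len:
            continue  # enforce a minimum frame size
        if zlib.crc32(data[i + 1 - window : i + 1]) & mask == 0:
            out.append(data[start : i + 1])
            start = i + 1
    if start < len(data):
        out.append(data[start:])  # trailing frame
    return out

def similarity(a: bytes, b: bytes) -> float:
    """Jaccard similarity over the sets of frame digests."""
    fa = {hashlib.sha256(f).digest() for f in frames(a)}
    fb = {hashlib.sha256(f).digest() for f in frames(b)}
    return len(fa & fb) / len(fa | fb) if (fa or fb) else 1.0
```

    Because boundaries depend only on nearby bytes, an insertion or deletion perturbs only the frames around the edit, so near-duplicate files still share most of their frame digests.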

  • nimsearch

    A nascent tutorial/intro to search engine ideas in Nim

  • You are welcome. Thanks are too rarely offered. :-)

    You may also be interested in word stemming (such as used by the snowball stemmer in https://github.com/c-blake/nimsearch) or other NLP techniques. I don't know how internationalized/multi-lingual that stuff is, but conceptually you might want a "series of stemmed words" to be the content fragments of interest.

    Similarity scores have many applications. Weights on a graph of cancelled downloads ranked by size might be one. :)

    Of course, for your specific "truncation" problem, you might also be able to just do an edit distance against the much smaller filenames and compare data prefixes in files, or use a SHA256 of a content-based first slice. (There are edit-distance algos in Nim in https://github.com/c-blake/cligen/blob/master/cligen/textUt.... as well as in https://github.com/c-blake/suggest.)

    Or you could do a little program like ndup/sh/ndup to create a "mirrored file tree" of such content-based slices; then you could use any true duplicate-file finder (like https://github.com/c-blake/bu/blob/main/dups.nim) on the little signature system to identify duplicates and go from path suffixes in those clusters back to the main filesystem. Of course, a single KV store within one or two files would be more efficient than thousands of tiny files. There are many possibilities.
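
    The filename edit distance and content-prefix hash suggestions can be sketched in Python (this is not the Nim code linked above, and the 64 KiB slice size in `prefix_digest` is an arbitrary choice):

```python
import hashlib

def edit_distance(a: str, b: str) -> int:
    """Levenshtein distance via the classic two-row dynamic program."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                  # deletion
                           cur[j - 1] + 1,               # insertion
                           prev[j - 1] + (ca != cb)))    # substitution
        prev = cur
    return prev[-1]

def prefix_digest(path: str, n: int = 1 << 16) -> str:
    """SHA-256 of the first n bytes of a file: a cheap content-based
    signature for grouping files that share a common prefix."""
    with open(path, "rb") as f:
        return hashlib.sha256(f.read(n)).hexdigest()
```

    Two truncated copies of the same download would then have a small filename edit distance and an identical prefix digest, while unrelated files would match on neither.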

  • cligen

    Nim library to infer/generate command-line-interfaces / option / argument parsing; Docs at

  • suggest

    An mmap-persistent Wolfe Garbe's SymSpell spell checking algorithm in Nim
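
    The core SymSpell trick is to index dictionary words by their delete-only variants, so lookup never has to generate alphabet-wide insertions or substitutions. A toy Python sketch of the idea (real SymSpell, like the mmap-persistent Nim version above, adds edit-distance verification and frequency ranking):

```python
def deletes(word: str, d: int = 1) -> set:
    """All strings reachable from `word` by deleting up to d characters."""
    out = {word}
    for _ in range(d):
        out |= {w[:i] + w[i + 1:] for w in out for i in range(len(w))}
    return out

def build_index(dictionary, d: int = 1) -> dict:
    """Map every delete-variant of every dictionary word back to the word."""
    index = {}
    for word in dictionary:
        for key in deletes(word, d):
            index.setdefault(key, set()).add(word)
    return index

def lookup(index: dict, query: str, d: int = 1) -> set:
    """Candidate corrections: any dictionary word sharing a delete-variant."""
    hits = set()
    for key in deletes(query, d):
        hits |= index.get(key, set())
    return hits
```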

  • bu

    B)asic|But-For U)tility Code/Programs (in Nim & Often Unix/POSIX/Linux Context)

