Internet-Places-Database VS sunburn.nvim

Compare Internet-Places-Database vs sunburn.nvim and see what their differences are.

sunburn.nvim

A Neovim colorscheme emphasizing readability above all else. (by loganswartz)

              Internet-Places-Database                sunburn.nvim
Mentions      11                                      1
Stars         21                                      10
Growth        -                                       -
Activity      9.3                                     5.6
Last commit   2 days ago                              21 days ago
Language      -                                       Lua
License       GNU General Public License v3.0 only    MIT License
The number of mentions indicates the total number of mentions that we've tracked plus the number of user-suggested alternatives.
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.

Internet-Places-Database

Posts with mentions or reviews of Internet-Places-Database. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2024-04-22.
  • Google Search results polluted by buggy AI-written code frustrate coders
    1 project | news.ycombinator.com | 1 May 2024
    I started gathering domains to see for myself the state of the Internet

    https://github.com/rumca-js/Internet-Places-Database

    I have many observations.

    One is that I cannot see any useful Amiga links. I had to search for them manually for some time. Some parts of the old internet exist, but they are buried.

    Second is that spam sites are everywhere, and not only AI-generated ones.

    Next is that personal sites exist, but they are often boring. 'CV sites' are also a waste of time for me. I wonder how many of them are fake.

    Many sites have poorly set up HTML meta fields, titles, and descriptions. How is anybody supposed to find them?

    I prefer reading a passionate personal site about programming tips to reading content farms. It is difficult to find such sites.

  • Show HN: OpenOrb, a curated search engine for Atom and RSS feeds
    7 projects | news.ycombinator.com | 22 Apr 2024
    You can find many RSS feeds and links in my repository

    https://github.com/rumca-js/Internet-Places-Database/tree/ma...

    It also contains domain lists with a tag indicating whether each site is personal or not.

  • We Need to Rewild the Internet
    2 projects | news.ycombinator.com | 16 Apr 2024
    I have been running my personal web crawler since September 2022. I gather internet domains and assign them meta information. My data comes from various sources. I assign a "personal" tag to any personal website and a "self-host" tag to any self-hosted program I find.

    I have fewer than 30k personal websites.

    The data are in the repository.

    https://github.com/rumca-js/Internet-Places-Database

    I still rely on Google, or Kagi, for many things. It is interesting to see what my crawler finds next. It is always a surprise to come across a new blog, or a forgotten forum of sorts.

    This is how I discover genuinely new content on the Internet. Certainly not through Google, which seems to find only the BBC or TechCrunch.

  • The internet is slipping out of our reach
    1 project | news.ycombinator.com | 12 Mar 2024
    Google will not be interested in fixing search. It also may not be possible because of AI spam. They would rather invest in DeepMind/Bard/Gemini than fix technology that will be obsolete in a few years.

    I have started scanning domains to see how many different places there are on the internet. Spoiler: not many.

    We could try to create curated open databases for links, forums, and places, but in the AI era it will always be a niche.

    Having said that, I think it is a good thing. If it is a niche, it will not be spoiled by ordinary users expecting simple behavior, or by corporations trying to control the output.

    Start your blog.

    Start your curated lists of links.

    Control your data. Share your data.

    Link https://github.com/rumca-js/Internet-Places-Database

  • YaCy, a distributed Web Search Engine, based on a peer-to-peer network
    9 projects | news.ycombinator.com | 5 Mar 2024
    There are already many projects about search:

    - https://www.marginalia.nu/

    - https://searchmysite.net/

    - https://lucene.apache.org/

    - Elasticsearch

    - https://presearch.com/

    - https://stract.com/

    - https://wiby.me/

    I think all of these projects are fun. I would like to see one of them succeed at reaching a mainstream level of attention.

    I have also been gathering link metadata for some time. Maybe I will use it to feed an eventual self-hosted search engine, or a language model, if I decide to experiment with that.

    - domains for seed https://github.com/rumca-js/Internet-Places-Database

    - bookmarks seed https://github.com/rumca-js/RSS-Link-Database

    - links for year https://github.com/rumca-js/RSS-Link-Database-2024

  • A search engine in 80 lines of Python
    6 projects | news.ycombinator.com | 7 Feb 2024
    I have dabbled a little bit in that subject myself. Some of my notes:

    - some RSS feeds are protected by Cloudflare. It is true, however, that this is not an issue for the majority of blogs. If you want to do more, Selenium is one way to deal with Cloudflare-protected links

    - sometimes even headless Selenium is not enough, and a full-blown browser driven by Selenium is necessary to fool the protection

    - sometimes even that is not enough

    - then I started to wonder why some RSS feeds are so well protected by Cloudflare, but who am I to judge?

    - sometimes it is beneficial to change the user agent. I feel bad about setting my user agent to Chrome, but again, why are RSS feeds so well protected?

    - you cannot parse or read the entire Internet, so you always need to think about compromises. For example, in one of my projects I have narrowed my searches to domains only. Now I can find most of the common domains and sort them by their "importance"

    - RSS links do change. There needs to be an automated way to disable inactive feeds so that dead domains are not checked over and over

    - I do not see any configurable timeout for reading a page, but I am not familiar with aiohttp. Some pages might waste your time

    - I hate that some RSS feeds are not configured properly. Some sites do not provide a valid meta "link" with "application/rss+xml". Some RSS feeds have naive titles like "Home", or no title at all. Such a waste of opportunity (a small fetch-and-autodiscovery sketch follows this list)
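
    The notes above about timeouts, user agents, and feed autodiscovery translate into very little code. The sketch below is not taken from the projects discussed here; it is a minimal illustration using aiohttp and BeautifulSoup, with an example URL, an explicit per-request timeout, a browser-like User-Agent, and a scan of the page's <link rel="alternate"> tags.

        # Minimal illustration (not from the projects above): fetch a page with an
        # explicit timeout and a browser-like User-Agent, then look for RSS/Atom
        # autodiscovery links. The URL is an example.
        import asyncio
        import aiohttp
        from bs4 import BeautifulSoup

        HEADERS = {"User-Agent": "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36"}

        async def find_feeds(url):
            timeout = aiohttp.ClientTimeout(total=15)  # don't let one slow page stall the crawl
            async with aiohttp.ClientSession(timeout=timeout, headers=HEADERS) as session:
                async with session.get(url) as resp:
                    html = await resp.text()

            soup = BeautifulSoup(html, "html.parser")
            feeds = []
            for link in soup.find_all("link"):
                rel = link.get("rel") or []
                if "alternate" in rel and link.get("type") in (
                    "application/rss+xml", "application/atom+xml"
                ):
                    feeds.append(link.get("href"))
            return feeds

        print(asyncio.run(find_feeds("https://example.com/")))

    For Cloudflare-protected pages, the fallback the comment mentions is Selenium; a rough headless version, assuming Selenium 4 with a Chrome driver available, might look like this:

        # Rough headless fallback for pages that block plain HTTP clients.
        from selenium import webdriver
        from selenium.webdriver.chrome.options import Options

        options = Options()
        options.add_argument("--headless=new")  # drop this if headless is still blocked
        driver = webdriver.Chrome(options=options)
        driver.get("https://example.com/")
        html = driver.page_source
        driver.quit()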

    My RSS feed parser, link archiver, and web crawler: https://github.com/rumca-js/Django-link-archive. The file rsshistory/webtools.py could be especially interesting. It is not advanced programming craft, but it gets the job done.

    Additionally, in another project I have collected around 2378 personal sites. I collect domains in https://github.com/rumca-js/Internet-Places-Database/tree/ma... . These files are JSON. All personal sites have the tag "personal" (a filtering sketch follows this comment).

    Most of the things are collected from:

    https://nownownow.com/

    https://searchmysite.net/

    I also wanted to process domains from https://downloads.marginalia.nu/, but I haven't had time to study the structure of the files
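
    The exact schema of those JSON files is not spelled out in this comment, so the field names below ("link", "tags") and the assumption that each file holds a list of entries are guesses, not details taken from the repository. The sketch only illustrates the idea of loading the exported files and keeping the entries tagged "personal".

        # Hypothetical sketch: filter a domain dump for entries tagged "personal".
        import json
        from pathlib import Path

        personal = []
        for path in Path("Internet-Places-Database").rglob("*.json"):
            with open(path, encoding="utf-8") as f:
                entries = json.load(f)
            if not isinstance(entries, list):
                continue
            for entry in entries:
                if "personal" in entry.get("tags", []):
                    personal.append(entry.get("link"))

        print(len(personal), "personal sites")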

  • Is Google Getting Worse? A Longitudinal Investigation of SEO Spam in Search [pdf]
    6 projects | news.ycombinator.com | 16 Jan 2024
    On the other hand, it is not 1995. Time has moved on. I wrote a simple RSS feed reader that also serves as a search engine for bookmarks.

    I am able to run it in the attic on a Raspberry Pi. We do not have to rely so heavily on Google.

    https://github.com/rumca-js/Django-link-archive

    It is true that it does not serve as a Google or Kagi replacement for me. It is a very nice addition, though.

    With a little bit of determination I do not have to be so dependent on Google.

    Here is also a dump of known domains. Some are personal.

    https://github.com/rumca-js/Internet-Places-Database

    ...and my bookmarks

    https://github.com/rumca-js/RSS-Link-Database

    A few more years, and Google can go to hell.

  • Ask HN: What apps have you created for your own use?
    212 projects | news.ycombinator.com | 12 Dec 2023
    [4] https://github.com/rumca-js/Django-link-archive

    These are then exported to GitHub repositories:

    [5] https://github.com/rumca-js/RSS-Link-Database - bookmarks

    [6] https://github.com/rumca-js/RSS-Link-Database-2023 - 2023 year news headlines

    [7] https://github.com/rumca-js/Internet-Places-Database - all domains known to me, and RSS feeds

  • The Small Website Discoverability Crisis
    14 projects | news.ycombinator.com | 15 Nov 2023
    My own repositories:

    - bookmarked entries https://github.com/rumca-js/RSS-Link-Database

    - mostly domains https://github.com/rumca-js/Internet-Places-Database

    - all 'news' from 2023 https://github.com/rumca-js/RSS-Link-Database-2023

    I am using my own Django program to capture and manage links https://github.com/rumca-js/Django-link-archive.

  • Show HN: List of Internet Domains
    1 project | news.ycombinator.com | 30 Oct 2023

sunburn.nvim

Posts with mentions or reviews of sunburn.nvim. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2023-12-12.
  • Ask HN: What apps have you created for your own use?
    212 projects | news.ycombinator.com | 12 Dec 2023
    A while back I read about the Oklab color space, and long story short I decided I wanted to create my own Neovim colorscheme. That led to sunburn.nvim[1], which aims to take advantage of the hue and brightness uniformity that Oklab provides.

    At first I was using lush.nvim to build sunburn.nvim, but it quickly became a hassle to only be able to specify colors via RGB or HSL. My initial thought was a PR to add Oklab support to lush, but that framework does so much that it was hard to see where to start. So I ended up writing polychrome.nvim[2], which is a dead-simple micro-framework in comparison to lush.nvim, but does enough to take care of all the boilerplate, and supports a bunch of color spaces (which are converted to RGB on the fly, as sketched below).
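
    The "converted to RGB on the fly" step is standard color math rather than anything specific to polychrome.nvim (which is written in Lua); a Python transcription of the published Oklab-to-sRGB reference formulas looks roughly like this:

        # Not polychrome.nvim's actual code; the published Oklab -> sRGB reference
        # math, transcribed into Python for illustration.
        def oklab_to_srgb(L, a, b):
            """Convert an Oklab color to a gamma-encoded sRGB tuple in 0-255."""
            # Oklab -> non-linear LMS
            l_ = L + 0.3963377774 * a + 0.2158037573 * b
            m_ = L - 0.1055613458 * a - 0.0638541728 * b
            s_ = L - 0.0894841775 * a - 1.2914855480 * b

            # cube to get linear LMS, then LMS -> linear sRGB
            l, m, s = l_ ** 3, m_ ** 3, s_ ** 3
            r = +4.0767416621 * l - 3.3077115913 * m + 0.2309699292 * s
            g = -1.2684380046 * l + 2.6097574011 * m - 0.3413193965 * s
            b2 = -0.0041960863 * l - 0.7034186147 * m + 1.7076147010 * s

            def encode(x):
                # clamp out-of-gamut values, then apply the sRGB transfer curve
                x = min(max(x, 0.0), 1.0)
                return 12.92 * x if x <= 0.0031308 else 1.055 * x ** (1 / 2.4) - 0.055

            return tuple(round(encode(c) * 255) for c in (r, g, b2))

        # pick a color by lightness/chroma coordinates, emit a hex value for Neovim
        print("#%02x%02x%02x" % oklab_to_srgb(0.75, 0.05, 0.10))

    Defining palette colors by Oklab lightness and hue and only converting to hex at the end is what makes it easy to keep perceived brightness uniform across a colorscheme.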

    I also wanted push notifications for when certain RSS feeds I follow were updated, because I suck at remembering to check in on things or open an RSS feed app. But I didn't want to pay for IFTTT or other bespoke solutions, so I wrote notifeed[3]. It's designed to run as a service on a server, check all your feeds at predetermined intervals, and send the necessary webhooks based on your configuration. Feeds and clients are configured via the CLI and stored in a SQLite DB for simplicity (a rough sketch of that polling loop follows below).

    [1] https://github.com/loganswartz/sunburn.nvim
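
    The polling loop described above (check feeds at an interval, remember what has been seen, fire a webhook for anything new) fits in a few lines. This is not notifeed's actual implementation; it is a generic sketch that assumes the feedparser and requests packages, with example feed and webhook URLs.

        # Generic poll-and-notify loop, not notifeed's actual code.
        import sqlite3
        import time
        import feedparser
        import requests

        FEEDS = {
            # feed URL -> webhook URL to POST to when new entries appear (example values)
            "https://example.com/feed.xml": "https://hooks.example.com/notify",
        }

        db = sqlite3.connect("notifeed.sqlite3")
        db.execute("CREATE TABLE IF NOT EXISTS seen (entry_id TEXT PRIMARY KEY)")

        def check_once():
            for feed_url, webhook in FEEDS.items():
                parsed = feedparser.parse(feed_url)
                for entry in parsed.entries:
                    entry_id = entry.get("id") or entry.get("link")
                    if not entry_id:
                        continue
                    # rowcount is 1 only when this entry has not been seen before
                    cur = db.execute("INSERT OR IGNORE INTO seen VALUES (?)", (entry_id,))
                    if cur.rowcount:
                        requests.post(webhook, json={
                            "feed": feed_url,
                            "title": entry.get("title", ""),
                            "link": entry.get("link", ""),
                        })
                db.commit()

        while True:
            check_once()
            time.sleep(15 * 60)  # check at a fixed interval, e.g. every 15 minutes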

What are some alternatives?

When comparing Internet-Places-Database and sunburn.nvim you can also consider the following projects:

polychrome.nvim - A colorscheme creation micro-framework for Neovim

full-text-tabs-forever - Full text search all your browsing history

webring - Make yourself a website

simplecd - Simple Continuous Delivery system running in your bash shell

RSS-Link-Database - Bookmarked archived links

company-org-block

notifeed - Watch RSS/Atom feeds and send push notifications/webhooks when new content is detected

srgn - A code surgeon for precise text and code transplantation. A marriage of `tr`/`sed`, `rg` and `tree-sitter`.

webpub - Give me a website, I'll make you an epub.

Filestash - 🦄 A modern web client for SFTP, S3, FTP, WebDAV, Git, Minio, LDAP, CalDAV, CardDAV, Mysql, Backblaze, ...

clipzoomfx - Side-project for extracting highlights from (mostly sports) videos

motion - Motion, a software motion detector. Home page: https://motion-project.github.io/