| | RSS-Link-Database-2023 | youtube-cue |
|---|---|---|
| Mentions | 6 | 3 |
| Stars | 2 | 14 |
| Growth | - | - |
| Activity | 9.4 | 6.4 |
| Latest commit | 5 months ago | 5 months ago |
| Language | HTML | JavaScript |
| License | GNU General Public License v3.0 only | - |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
RSS-Link-Database-2023
- Show HN: Link metadata. Complete year 2023
-
Ask HN: What apps have you created for your own use?
[4] https://github.com/rumca-js/Django-link-archive
These are then exported to GitHub repositories:
[5] https://github.com/rumca-js/RSS-Link-Database - bookmarks
[6] https://github.com/rumca-js/RSS-Link-Database-2023 - 2023 year news headlines
[7] https://github.com/rumca-js/Internet-Places-Database - all domains and RSS feeds known to me
-
The Small Website Discoverability Crisis
My own repositories:
- bookmarked entries https://github.com/rumca-js/RSS-Link-Database
- mostly domains https://github.com/rumca-js/Internet-Places-Database
- all 'news' from 2023 https://github.com/rumca-js/RSS-Link-Database-2023
I use my own Django program, https://github.com/rumca-js/Django-link-archive, to capture and manage links.
-
What gets to the front page of Hacker News?
Hi, I am collecting links from various places, even from Hacker News. I have links going back to the start of the year [1]. Maybe someone will find them useful. You should look at files named like [2].
[1] https://github.com/rumca-js/RSS-Link-Database-2023
[2] https.hnrss.orgfrontpage_entries.json
-
Google No Longer Automatically Indexes Websites – WTF?
That is why I wrote [1] for myself. It stores links in a database that I can query. Everything is later exported, as in [2] and [3]. I can browse the history and find useful data. I am not saying it has replaced Google for me; it is a nice addition that helps me gather data I encounter on the Internet.
It is a link database. At first glance it resembles a Reddit clone, but my focus is on building a link database, not on providing a social media experience.
Links:
[1] https://github.com/rumca-js/Django-link-archive
[2] https://github.com/rumca-js/RSS-Link-Database
[3] https://github.com/rumca-js/RSS-Link-Database-2023
-
Link Archive – 03.2023 Update
- https://github.com/rumca-js/RSS-Link-Database-2023 - all captured links in 2023
youtube-cue
-
Ask HN: What apps have you created for your own use?
> CLI: I wanted to download songs from YouTube, but they were often stitched together as complete albums - so I wrote youtube-cue, a generator that produces cuesheets which can then be used to split and tag the yt-dlp-downloaded audio file. (https://github.com/captn3m0/youtube-cue)
Thanks for this! I need to do some testing; this might automate the last manual step of my own script for converting YT mixes into distinct tracks. The problem I faced is that the timestamps are often not in the description but in a comment, and sometimes not even the pinned/top-voted one. That is why I paste them in via stdin for now.
As this fits the thread topic, here is a short description of that script. I enjoy YT mixes and wanted to listen to them in my car, which can play media files and playlists from a USB stick and displays them decently on the infotainment system. The script takes a YT URL (or anything supported by yt-dlp), downloads and converts it to mp3, splits the mp3 based on a list of timestamps, recognizes the songs (or tries to) via SongRec [0], tags and names the files correctly, and finally generates an M3U playlist in the format my car recognizes. I use song recognition instead of parsing names out of the timestamped list because the Artist - Title format is nearly always slightly different. It was easier to use SongRec and get everything I need for tagging, with a >90% hit rate.
The heavy lifting is done by shelling out to yt-dlp, ffmpeg, and SongRec; I just glued them together with Python. I like your do-one-thing-well approach and might add youtube-cue to the toolset.
[0] https://github.com/marin-m/SongRec
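The splitting step described above can be sketched roughly as follows. This is not the commenter's actual script; it is a minimal illustration assuming timestamps arrive as "MM:SS Title" or "H:MM:SS Title" lines pasted via stdin, and the function names and output naming scheme are hypothetical.

```python
import re

# Matches "3:45 Title" or "1:02:10 Title" (optional hours group).
TS = re.compile(r"^(?:(\d+):)?(\d{1,2}):(\d{2})\s+(.*)$")

def parse_tracklist(text):
    """Parse pasted timestamp lines into (start_seconds, title) tuples."""
    tracks = []
    for line in text.strip().splitlines():
        m = TS.match(line.strip())
        if not m:
            continue  # skip lines that are not timestamped entries
        h, minutes, seconds, title = m.groups()
        start = int(h or 0) * 3600 + int(minutes) * 60 + int(seconds)
        tracks.append((start, title.strip()))
    return tracks

def ffmpeg_split_args(src, tracks, total_seconds=None):
    """Yield one ffmpeg argument list per track; each track ends where
    the next begins (the last one runs to total_seconds, if known)."""
    for i, (start, title) in enumerate(tracks):
        end = tracks[i + 1][0] if i + 1 < len(tracks) else total_seconds
        args = ["ffmpeg", "-i", src, "-ss", str(start)]
        if end is not None:
            args += ["-to", str(end)]
        args += ["-c", "copy", f"{i + 1:02d} - {title}.mp3"]
        yield args
```

Each argument list can then be passed to `subprocess.run`; tagging and M3U generation would follow on the resulting files.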
-
Beets is the media library management system for obsessive music geeks
Beets is amazing and comes with great defaults. I recently wrote code to generate CUE sheets from YouTube mixes [0], and beets imports the result nicely and easily.
[0]: https://github.com/captn3m0/youtube-cue There is a bash snippet in the README showing the Beets integration.
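For context, a CUE sheet of the kind youtube-cue emits is just a small text format. The sketch below is not youtube-cue's code; it is a hypothetical illustration of how (start_seconds, title) pairs map onto that format.

```python
def to_cue(tracks, audio_file="mix.mp3", album="YouTube Mix"):
    """Render (start_seconds, title) pairs as a minimal CUE sheet.
    INDEX times use MM:SS:FF, where FF counts frames at 75 per second."""
    lines = [f'TITLE "{album}"', f'FILE "{audio_file}" MP3']
    for n, (start, title) in enumerate(tracks, start=1):
        minutes, seconds = divmod(int(start), 60)
        lines += [
            f"  TRACK {n:02d} AUDIO",
            f'    TITLE "{title}"',
            f"    INDEX 01 {minutes:02d}:{seconds:02d}:00",
        ]
    return "\n".join(lines) + "\n"
```

A sheet like this, placed next to the single downloaded audio file, is what importers and splitters consume to treat the mix as separate tracks.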
What are some alternatives?
Django-rss-feed - Link archive for a NAS drive [Moved to: https://github.com/rumca-js/Django-link-archive]
picard - A cross-platform music tagger powered by the MusicBrainz database. Picard organizes your music collection by updating your tags, renaming your files, and sorting them into a folder structure, exactly the way you want it.
full-text-tabs-forever - Full text search all your browsing history
stag - public domain utf8 curses based audio file tagger
Django-link-archive - Link archive for a NAS drive
spotprice - Quickly get AWS spot instance pricing
catwiki_p3 - CatWiki (using Python 3)
stag - STag: A Stable Fiducial Marker System
webring - Make yourself a website
BeetsPluginStructuredComments