note-keeper vs dedupe
| | note-keeper | dedupe |
|---|---|---|
| Mentions | 5 | 3 |
| Stars | 62 | 3 |
| Growth | - | - |
| Activity | 0.0 | 0.0 |
| Latest commit | over 2 years ago | about 6 years ago |
| Language | Shell | Python |
| License | MIT License | BSD 2-clause "Simplified" License |
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
note-keeper
- Best self hosted notes app?
note-keeper shell script within a folder that is synced via nextcloud
- What tools / utilities have you written that you use regularly?
Notekeeper - I wrote a simple tool for taking notes really quickly on the command line.
- Introducing Note Keeper - A simple but powerful note taking tool written in bash.
- Notekeeper 1.0 - A tiny bash script for taking notes.
Notekeeper is a ~230-line bash script that uses tools you already have installed to write and manage notes quickly and easily. Use your favorite editor and just start writing. Check out the GitHub repo for more details and a complete list of features.
dedupe
- fdupes alternatives?
I wrote https://github.com/Gumnos/dedupe, which sounds like it might be useful to you. It's faster than several of the alternatives I've found: many run a checksum across the whole of every file, while this one uses the file size as a first-line discriminator and only goes to the trouble of checksumming files that are the same size. I designed it for creating hard links in my media collection, but in --dry-run mode it emits the file names, so you can pass them to xargs to remove them if it looks copacetic.
- File Management via CLI
You can use my dedupe.py script with the dry-run flag (-n) to find all the duplicates on your drive. If you run it without the dry-run flag, it attempts to create hard links so that each file's content exists only once on the drive, with multiple hard links pointing to it. It should be pretty fast, since it only needs to checksum file content when files have the same size (several other deduplication tools checksum every file on the drive, which can be slow).
- What tools / utilities have you written that you use regularly?
A file-deduplication utility that hard-links duplicate files to save space (our family photo gallery puts pics in multiple albums for various audiences, so this cuts down on a lot of duplication).
What are some alternatives?
cli - A tiny CLI for HedgeDoc
ripgrep-all - rga: ripgrep, but also search in PDFs, E-Books, Office documents, zip, tar.gz, etc.
idgit - /ˈɪdʒɪt/ - A rolodex for your git config. Never push your work email to your personal repo again!
xonsh - :shell: Python-powered, cross-platform, Unix-gazing shell.
vids - search for videos to play from youtube.com and other platforms...
tawk - Like awk, but using tcl as the scripting language.
dotfiles - My personal dotfiles
file-arranger - Simple & capable Directory arranger/cleaner
EgyBestCLI - A Command-Line Interface Wrapper For EgyBest
mpd_what - An mpd album art and info getter
notes - :pencil: Simple delightful note taking, with more unix and less lock-in.
ledger - Double-entry accounting system with a command-line reporting interface