jdupes
fd
| | jdupes | fd |
|---|---|---|
| Mentions | 44 | 172 |
| Stars | 1,681 | 31,581 |
| Growth | - | - |
| Activity | 0.0 | 8.8 |
| Latest commit | 7 months ago | 12 days ago |
| Language | C | Rust |
| License | MIT License | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
jdupes
-
File Servers... how are you handling duplicates
I recommend the use of jdupes, a fork of the well-known fdupes, to find duplicate files.
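For finding duplicates as the comment describes, a typical jdupes invocation looks something like the sketch below (flags inherited from the fdupes interface; verify against `jdupes --help` for your version):

```shell
# Recursively scan a directory and print sets of duplicate files
jdupes -r ~/photos

# Same scan, but also show file sizes in the output
jdupes -rS ~/photos

# Interactively choose which copies to delete from each set
jdupes -rd ~/photos
```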
-
fdupes: Identify or Delete Duplicate Files
200 lines of Nim [1] seems to run about 9X faster than the 8000 lines of C in fdupes on a little test dir I have. If you need C, I think jdupes [2] is faster as @TacticalCoder points out a couple of times here. In my testing, `dups` is usually faster than `jdupes`, though.
[1] https://github.com/c-blake/bu/blob/main/dups.nim
[2] https://github.com/jbruchon/jdupes
-
I'm amazed how I find anything & why I have so many dupes!
There's always the well-respected tool, Czkawka. Or, if the CLI is your thing, jdupes is a good option.
- Anyone know of any good file deduplication tools?
-
Johnny Decimal
My research into this many years ago turned out that jdupes was the right / best solution I could find for my usecase.
https://github.com/jbruchon/jdupes
Though that works fine from a scripting perspective, I'd like a more interactive way of sorting directories etc. Identifying is just the first step; jdupes helps with linking the files (both soft and hard links come with caveats, though!), but that is mostly to save space, not to help with reorganisation.
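The linking the comment mentions can be sketched as follows (flag names from memory; hard links only work within one filesystem, and editing any linked copy edits them all, which is the caveat alluded to above):

```shell
# Replace duplicates with hard links to a single copy (saves space,
# but all names now point at the same inode)
jdupes -r -L ~/archive

# Newer jdupes releases can create symlinks instead; the exact flag
# may vary by version, so check `jdupes --help`
jdupes -r -l ~/archive
```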
- Jdupes: A powerful duplicate file finder
-
Does jdupes do a 'dry run' if you just specify directories and no other options?
I can work it out by looking at https://github.com/jbruchon/jdupes.
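To the question above: with no action flags, jdupes only reports and changes nothing, which effectively is the dry run. A minimal sketch (the `-m` summary flag is inherited from fdupes; confirm with `--help`):

```shell
# Report only: duplicate sets are printed as groups separated by
# blank lines; no files are touched
jdupes -r dir1 dir2

# Print just a summary (number of duplicates and wasted space)
jdupes -rm dir1 dir2
```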
-
replace duplicates with hard links - I think jdupes is the answer, or maybe fclones (I have questions)
I have looked at a few alternatives and think jdupes is the one for me. Then I found out it is not multi-threaded, so I will give it a go, but the developer of jdupes recommended fclones (https://github.com/jbruchon/jdupes/issues/186) if you are dealing with large file systems and want multi-threading. But as I am using an HDD, it may not be necessary.
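For comparison, fclones splits the work into a multi-threaded scan stage and a separate action stage that consumes the scan's report. A rough sketch based on the project's documented workflow (verify subcommand names against your installed version):

```shell
# Stage 1: find duplicate groups in parallel and save the report
fclones group ~/data > dupes.txt

# Stage 2: act on the report, e.g. replace duplicates with soft links
fclones link --soft < dupes.txt
```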
-
De-Duping a file server
jdupes is a fork of the old standby fdupes, but it has a Win32 release as well as supporting POSIX.
-
Any good duplicate file finder for windows?
jdupes is a tuned fork of the well-known fdupes, and has Win32 releases.
fd
-
Level Up Your Dev Workflow: Conquer Web Development with a Blazing Fast Neovim Setup (Part 1)
ripgrep: A super-fast file searcher. You can install it using your system's package manager (e.g., brew install ripgrep on macOS). fd: Another blazing-fast file finder. Installation instructions can be found here: https://github.com/sharkdp/fd
-
Hyperfine: A command-line benchmarking tool
hyperfine is such a great tool that it's one of the first I reach for when doing any sort of benchmarking.
I encourage anyone who's tried hyperfine and enjoyed it to also look at sharkdp's other utilities, they're all amazing in their own right with fd[1] being the one that perhaps get the most daily use for me and has totally replaced my use of find(1).
[1]: https://github.com/sharkdp/fd
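The two tools in this comment combine naturally: hyperfine can benchmark fd against classic find on the same query. A small example (query and extension are illustrative, not from the comment):

```shell
# --warmup primes the filesystem cache so both commands
# compete on equal footing across the measured runs
hyperfine --warmup 3 \
  "fd -e jpg" \
  "find . -name '*.jpg'"
```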
-
Z – Jump Around
You call it with `n` and get an interactive fuzzy search for your directories. If you pass an argument instead, it starts the search with that argument already filled in (and if there's only one match, jumps to it directly). The `ls` is optional, but I find I like having the contents visible as soon as I change directory.
I’m also including iCloud Drive but excluding the Library directory as that is too noisy. I have a separate `nl` function which searches just inside `~/Library` for when I need it, as well as other specialised `n` functions that search inside specific places that I need a lot.
¹ https://github.com/sharkdp/fd
² https://github.com/junegunn/fzf
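A hypothetical reconstruction of the `n` helper described above, wiring fd and fzf together (the function name and the `~/Library` exclusion come from the comment; the body is an assumption):

```shell
# Fuzzy-jump to a directory under $HOME, skipping the noisy ~/Library.
# Any argument pre-fills the fzf query; --select-1 jumps straight to a
# sole match. On arrival, list the contents.
n() {
    local dir
    dir=$(fd --type d --exclude Library . "$HOME" \
          | fzf --query="$*" --select-1) || return
    cd "$dir" && ls
}
```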
-
Unix as IDE: Introduction (2012)
Many (most?) of them have been overhauled with success. For find there is fd[1]. There's bat (cat), exa (ls), ripgrep, fzf, atuin (history), delta (diff) and many more.
Most are both backwards compatible and fresh and friendly. Your hard-won muscle memory is still of good use, but there are sane flags and defaults too. They're faster, more colorful (if you wish), and integrate better with one another (e.g. exa/eza being aware of git modifications). And, in my case, they often have features I never knew I needed (atuin sync!, ripgrep using gitignore).
1 https://github.com/sharkdp/fd
- Tell HN: My Favorite Tools
-
Supercharging Your Linux Experience: Meet the Rust Tools for Efficient Development
Learn more about fd at: https://github.com/sharkdp/fd
-
Making Hard Things Easy
AFAIK there is a find replacement with sane defaults: https://github.com/sharkdp/fd , a lot of people I know love it.
However, I already have this in my muscle memory:
-
🐚🦀 Shell commands rewritten in Rust
fd
-
Oils 0.17.0 – YSH Is Becoming Real
> without zsh globs I have to remember find syntax
My "solution" to this is using https://github.com/sharkdp/fd (even when in zsh and having glob support). I'm not sure if using a tool that's not present by default would be suitable for your use cases, but if you're considering alternate shells, I suspect you might be
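Some fd equivalents of the globs and find syntax the thread is discussing (assumed examples, not from the comment itself):

```shell
# All Go files anywhere below the current directory (zsh: **/*.go)
fd -e go

# Glob mode instead of fd's default regex matching
fd -g '*.test.ts' src

# Regex match on filenames, including hidden files (-H)
fd -t f -H 'rc$'
```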
-
Bfs 3.0: The Fastest Find Yet
Nice to see other alternatives to find. I personally use fd (https://github.com/sharkdp/fd) a lot, as I find the UX much better. There is one thing that I think could be better, around the difference between "wanting to list all files that follow a certain pattern" and "wanting to find one or a few specific files". Technically, those are the same, but an issue I'll often run into is wanting to search something in dotfiles (for example the Go tools), use the unrestricted mode, and it'll find the few files I'm looking for, alongside hundreds of files coming from some cache/backup directory somewhere. This happens even more with rg, as it'll look through the files contents.
I'm not sure if this is me not using the tool how I should, me not using Linux how I should, me using the wrong tool for this job, something missing from the tool or something else entirely. I wonder if other people have this similar "double usage issue", and I'm interested in ways to avoid it.
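One way to tame the "double usage" problem above is to keep unrestricted mode for dotfiles but carve out the noisy trees with excludes, either per invocation or persistently (paths below are illustrative):

```shell
# Unrestricted search, but skip known cache/backup directories
fd -u 'config' -E '.cache' -E '*backup*'

# fd also honours .fdignore files and a global ignore file,
# both using gitignore syntax
echo '.cache/' >> ~/.config/fd/ignore
```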
What are some alternatives?
fdupes - FDUPES is a program for identifying or deleting duplicate files residing within specified directories.
telescope.nvim - Find, Filter, Preview, Pick. All lua, all the time.
dupeguru - Find duplicate files
ripgrep - ripgrep recursively searches directories for a regex pattern while respecting your gitignore
rmlint - Extremely fast tool to remove duplicates and other lint from your filesystem
fzf - :cherry_blossom: A command-line fuzzy finder
rdfind - find duplicate files utility
exa - A modern replacement for ‘ls’.
czkawka - Multi functional app to find duplicates, empty folders, similar images etc.
skim - Fuzzy Finder in rust!
duperemove - Tools for deduping file systems
vim-grepper - :space_invader: Helps you win at grep.