Find And Remove Duplicate Files With fdupes

Today’s article has you cleaning up your storage space as you learn to find and remove duplicate files with fdupes. This isn’t something you need to do often and it’s something that could theoretically break your system. If you’re going to remove duplicate files, it’s a good idea to exercise some caution.

I’ve previously shared another way to remove duplicate files:

Find And Remove Duplicate Files With rdfind

I suppose it’s pretty obvious why one might want to remove duplicate files. You do so to keep your storage tidy and to make space when space is limited. There are all sorts of ways to free up space, and removing duplicate files is just one of them.

The tool we’ll be using this time around is known as ‘fdupes’ and the man page describes it like this:

fdupes – finds duplicate files in a given set of directories

It’s an easy enough application to install, and this article shouldn’t be all that long. There’s a good chance you’ll find fdupes in your default repositories, which makes installing it simple.

Installing fdupes:

You can install fdupes with a GUI and your software manager, but you can just as easily install it via the terminal. We’ll cover the latter, as it’s the most universal (and, I find, quickest) method. Of course, you’ll need an open terminal. In most cases you can just press CTRL + ALT + T and your default terminal should open.

With your terminal now open, let’s go ahead and install fdupes:
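Depending on your distro, one of these should do the trick (the package should simply be named ‘fdupes’ in the default repositories):

Debian/Ubuntu/Mint:
sudo apt install fdupes

Fedora/RHEL:
sudo dnf install fdupes

Arch/Manjaro:
sudo pacman -S fdupes

openSUSE:
sudo zypper install fdupes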

And More! (Just search your default repositories for ‘fdupes’ and it’ll almost certainly be there.)

As you can see, you’ll find that fdupes is available for pretty much every Linux system out there. Not only is it available, it’s already packaged for you and easy enough to install via the terminal. On top of that, it doesn’t take much space to install fdupes, a mere 110 kB or so.

NOTE: I do not have anything against GUI tools. While I have terminals open at all times, I do the majority of my computer interaction in a browser – specifically a GUI browser. I suggest and write about the terminal because it’s more universal. It’s also often faster, assuming you can type (or at least copy and paste), than mucking about with a GUI software installer.

Anyhow, it’s nice and easy to remove duplicate files with fdupes. This article is going to show you how – and the article shouldn’t even be that long! It’s pretty simple.

Remove Duplicate Files With fdupes:

I’m going to assume that you left your terminal open after installing fdupes. If you didn’t, you’ll need to open it again. The only way to run the fdupes application is in the terminal. So, even if you installed it with a GUI, it’s a CLI tool and you’ll need the command line to use it.

The basic syntax is pretty easy, and not entirely unlike rdfind. For example, if you want to find duplicate files, you simply run this command:
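It should look something like this, with the directory path swapped for whatever directory you actually want to check:

fdupes <directory>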

So, if you wanted to find duplicates in your home directory, you’d run this:
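fdupes ~/

(That checks only your home directory itself; it won’t descend into the folders inside it unless you ask it to.)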

Don’t worry, you’re safe running that command. That won’t delete anything at all. That fdupes command will simply show you the duplicate files that it found.

If you want to run the fdupes command recursively, that is, to check all the folders within the directory, you’d run the command like this:
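fdupes -r ~/

(The -r flag tells fdupes to recurse into all the subdirectories it finds.)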

If you want to see the size of the duplicate files that would be removed, the command is just this:
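fdupes -rS ~/

(The -S flag shows the size of the duplicate files it finds; I’ve tacked on -r so it checks the subdirectories as well, though you can leave that off.)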

A summary is also available with this command:
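fdupes -rm ~/

(The -m flag prints a summary: how many duplicate files were found, in how many sets, and roughly how much space they occupy.)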

You can also search multiple directories for duplicates. That’d be something like this:
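fdupes -r ~/Documents ~/Downloads

(Those two directories are just examples; list whichever directories you’d like checked against each other.)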

Of course, you can run those commands together to get quite a bit of customization. They’re all reasonably harmless and will simply point out the duplicates as well as some meta information. You can then remove the duplicates by hand if you want.

You can also tell fdupes to remove the duplicates that it found. You’d never want to run this command without knowing what exactly is going to be removed, so don’t do that. Always check to make sure you’re not removing anything of value before automatically removing duplicates.

Fortunately, there’s a bit of a safeguard. You can run the following command and fdupes will ask for confirmation before removing the files:
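fdupes -d ~/Documents

(With the -d flag, fdupes asks which file in each set of duplicates you want to keep and then deletes the rest. The ~/Documents path is just an example; add -r if you want it to recurse.)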

If you want to go whole-hog and remove every duplicate found, and remove the files without any confirmation, you can run this command:
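fdupes -dN ~/Documents

(Adding -N to -d tells fdupes to keep the first file in each set and delete the others without asking. Again, ~/Documents is only an example, so point it wherever you actually want cleaned up, and only after you’ve checked what it’s going to remove.)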

Of course, that’s just the basics. If you want to know more about fdupes, simply check the man page (man fdupes) for more information.


Hmm… I think I need to clean or replace this keyboard. The colon key is sticking on me. It’s a bit of a pain in the butt.

Anyhow, if you’ve ever wanted to remove duplicate files with fdupes, you now have directions to do so. This being Linux, you have all sorts of options when it comes to removing duplicate files, though I again urge caution when doing so. If you were to run this on the root directory, you’d likely find a lot of duplicates, and removing them might break your system. So, be careful with these tools, as they’re pretty powerful.

I’m not yet out of ideas for articles, but it’d be great if folks would suggest something they’d like to read about. You never know, it might be something I know about. As it is, I have to search this site before writing an article, or else I’d end up with even more duplicate articles. I don’t want that and you don’t want that. It’s not all that easy to keep up this pace, writing a new article every other day. I’ve managed so far, but I’m eventually going to miss a day or two. It’s going to happen.

As always…

Thanks for reading! If you want to help, or if the site has helped you, you can donate, register to help, write an article, or buy inexpensive hosting to start your site. If you scroll down, you can sign up for the newsletter, vote for the article, and comment.

Find And Remove Duplicate Files With rdfind

In today’s article, we’re going to find and remove duplicate files with rdfind. We’ll try to make this as safe as possible. I’d suggest newer users not actually worry about duplicate files. Allocate enough space to your OS and don’t worry about it. Disk space is cheap these days.

Warning: Blindly removing duplicate files can be a risky operation. It can break things. You have been warned. Exercise caution!

If you’re interested in removing duplicate files, then the rdfind application is one solution you can try. There are others, but we’ll be using rdfind. We may cover other choices in the future.

You don’t have to run rdfind with it automatically deleting the duplicate files, and that’s what I’m going to suggest you avoid – at least at first. It’s good to see what’ll be deleted before it is actually deleted.

If you check the rdfind man page, you’ll see it’s described as:

rdfind – finds duplicate files

It does what it says on the tin. It finds duplicate files. You can run the command in a manner that automatically removes the found duplicates, but that’s not something to take lightly.

Again, and I can’t stress this enough, some duplicates are there for a reason – they belong there. So, don’t run this on the root directory and expect a good outcome. Running this on the root directory and automatically removing duplicates is going to break stuff. Feel free to do so, ’cause it’s your computer. Just don’t blame me when it breaks.

There… I feel you’re safely and properly informed! Let’s get this article started…

Install rdfind:

We’ll just use the terminal to install rdfind. To open your default terminal emulator, press CTRL + ALT + T and your default terminal should open. You might as well leave it open, as rdfind also runs in the terminal and you’ll need an open terminal in the next step.
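If you’re using a Debian or Ubuntu derivative, this should do it:

sudo apt install rdfind

Users of other distros can substitute their own package manager (dnf, pacman, zypper, and so on); the package should simply be named ‘rdfind’ in the default repositories.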


Now that you have installed rdfind, you should probably consult the man page. That’s an easy command:
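man rdfind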

With that knowledge fresh in your memory and rdfind installed, we can just jump into the article!

Find And Remove Duplicate Files With rdfind:

Your terminal should still be open from the previous step. If not, go ahead and open it now. You’ll need a terminal open to find and remove duplicate files with rdfind. It is not a graphical application.

So, I suppose you can start with this command:
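rdfind ~/Documents

(The ~/Documents path is just an example; point it at whatever directory you want checked.)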

That may look dangerous, but it’s not. If you run that command, it simply finds the duplicate files and then creates a text file (results.txt) for you. You then review the text file and manually remove the duplicate files. This is probably for the best. You’ll get much the same result if you do a dry run, like so:
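rdfind -dryrun true ~/Documents

(With -dryrun true, rdfind just reports what it would do without changing anything on disk.)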

You can also have rdfind replace the duplicate files with hard links to the first file it finds. While not recommended by me, it’s at least safer than deleting them outright.
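rdfind -makehardlinks true ~/Documents

(With -makehardlinks true, the duplicate files are replaced with hard links to the first copy found, so the file names stay in place but they no longer take up extra space.)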

Finally, you can go right ahead and have rdfind find and remove duplicate files automatically! This is safer if you have a recent backup and you’ve already run one of the first two commands, because then you’ll know what’s going to be deleted.
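rdfind -deleteduplicates true ~/Documents

(With -deleteduplicates true, rdfind keeps the first file in each set and deletes the rest, so make sure you’ve done the dry run and have a backup before running it.)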

Just don’t run rdfind on your root directory, and probably don’t run it directly on your home directory, and you should be more or less okay. Feel free to run it on your Downloads folder, on your Documents folder, or even your Pictures folder.

Running rdfind that way, on those types of directories, will be fine and at least should not break things. Rdfind is pretty good at finding just duplicates, or I’d not recommend it. Be sure to back up first and give it a dry run before you start automatically removing stuff! Seriously, do not run this on your root directory.


And there you have it… You have yet another article! This time, we’ve learned how to find and remove duplicate files with rdfind. You were given a clear warning, but you’re gonna do what you’re gonna do. Man, I really need to write that article about backing up properly!

Thanks for reading! If you want to help, or if the site has helped you, you can donate, register to help, write an article, or buy inexpensive hosting to start your own site. If you scroll down, you can sign up for the newsletter, vote for the article, and comment.
