Which is the fastest method to delete files in Linux

Sarath Pillai

Creating, deleting, and modifying files are among the most common tasks a user performs on any operating system; they are part of a user's day-to-day work. Deleting a single file, or a handful of files, is a fast and seamless operation on Linux or any other operating system. But when the number of files is very large, the deletion takes quite a long time to complete.

What happens when you delete a file in Linux depends on the file system on which the file resides; there are operational differences in how deletion works across file system types. When we talk about files in Linux, it is really all about inodes rather than files, so understanding how an inode gets modified during deletion is important.
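As a quick illustration, the stat command shows the inode behind a file (a minimal sketch; testfile is just an example name):

[root@myvm1 ~]# stat testfile

In the output, "Inode:" is the inode number and "Links:" is the hard-link count. Deleting a file really just removes a directory entry pointing to the inode; the inode and its data blocks are freed only once the link count drops to zero and no process still holds the file open.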

Inodes are the building blocks of a Linux file system. If you are interested in understanding inodes, I would recommend reading the below post before going ahead, as we will not be discussing inode-related details in this post.

Read: What is an inode in Linux

I am writing this post to find out the fastest method to delete a large number of files in Linux. We will begin this tutorial with some simple file-deletion methods, and then compare the speed with which each method completes the task. Another major reason for writing this post is the time I spent on one of our crawler servers deleting millions of very small files (a few KB each).

As I said, deleting a handful of files is fast; deleting a huge number of very small files is not. Let's begin with some simple commands used to delete files in Linux.

 

Commands to delete files in Linux and their example usage

To delete files in Linux, the most commonly used command is the rm command. Let's see some examples of rm.

[root@myvm1 ~]# rm -f testfile

The -f option used in the above command deletes the file forcibly, without asking for confirmation.

[root@myvm1 ~]# rm -rf testdirectory

The above command deletes the directory named "testdirectory" as well as all the contents inside that directory (the -r option deletes recursively).

[root@myvm1 ~]# rmdir testdirectory

The rmdir command above will only delete the directory if it is empty.

 

Let's now have a look at some different methods of deleting files in Linux. One of my favorites is to use the find command. find is a very handy tool that can search for files by type, size, creation date, modification date, and many other criteria. To find out more about this wonderful search tool in Linux, read the below post.

Read: Usage examples of find command in Linux

[root@myvm1 /]# find /test -type f -exec rm {} \;

The above command deletes all the files inside the /test directory. First, find looks for every file inside the directory, and then it executes an rm for each result.

Let's see a few more criteria that can be used with the find command to delete files.

[root@myvm1 /]# find /test -mtime +7 -exec rm {} \;

In the above example, find searches for all files inside the /test directory that were modified more than 7 days ago, and then deletes each of them.

[root@myvm1 /]# find /test -size +7M -exec rm {} \;

The above example searches for all files in the /test directory that are larger than 7 MB, and then deletes each of them.

In all of the above find examples, the rm command is invoked once for each and every file in the result list. For example, in the last find command shown above, if the result is 50 files bigger than 7 MB, then 50 separate rm processes are spawned to delete them, which takes much longer.
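As an aside, GNU find can batch results into far fewer rm invocations by terminating -exec with + instead of \; (a variant worth knowing, though the tests below stick to the commands shown above):

[root@myvm1 /]# find /test -type f -exec rm -f {} +

With the + terminator, find appends as many file names as fit onto each rm command line, much like xargs does.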

Instead of calling rm through find's -exec argument, there is a better alternative. We will look at it and then compare the speed of each approach.

As I said before, the whole point of measuring deletion speed is the case where you are deleting a large number of files. So let's first create half a million files with a simple bash for loop. After creating them, we will try deleting them with the rm command, then with find using -exec, and then we will look at a better find alternative.

[root@myvm1 test]# for i in $(seq 1 500000); do echo testing >> $i.txt; done

The above command creates 500,000 files (half a million) in the current working directory, named 1.txt through 500000.txt, each containing the text "testing". The contents are only a few bytes per file, though each file still occupies at least one file system block on disk. Let's now test the speed of deleting this many files with different commands: first plain rm, then find with -exec, and then find with the -delete option, measuring the time taken in each case.
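Before timing anything, it is worth a quick sanity check that all the files were actually created (a trivial sketch):

[root@myvm1 test]# ls | wc -l
500000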

[root@myvm1 test]# time rm -f *
-bash: /bin/rm: Argument list too long
real    0m11.126s
user    0m9.673s
sys     0m1.278s

If you look at the rm command I ran on the test directory containing half a million files, it failed with the message /bin/rm: Argument list too long. The shell expanded * into a list of file names too long to pass to a single rm process, so the deletion never even started; rm didn't stand the test because it gave up. Don't pay attention to the time displayed here, because time reports its output regardless of whether the command succeeded.
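The limit rm ran into is the kernel's maximum argument-list size, which you can inspect with getconf. The classic workaround is to feed file names to rm in batches through xargs (a sketch; we won't time it here, the point is just that a plain shell glob cannot even start on a directory this large):

[root@myvm1 test]# getconf ARG_MAX
[root@myvm1 test]# find ./ -maxdepth 1 -type f -print0 | xargs -0 rm -f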

Now let's use our previously seen find command with -exec argument.

[root@myvm1 test]# time find ./ -type f -exec rm {} \;
real    14m51.735s
user    2m24.330s
sys     9m48.743s

From the output of the time command, it is clear that it took 14 minutes and 51 seconds to delete 500,000 files from a single directory. This is quite a long time, because a separate rm command is executed for each file until the complete list of files is deleted.

Now let's test the time consumed using find's -delete option.

[root@myvm1 test]# time find ./ -type f -delete
real    5m11.937s
user    0m1.259s
sys     0m28.441s
[root@myvm1 test]#

Wow, look at that result! The -delete option took only 5 minutes and 11 seconds. That's a wonderful improvement in speed when you are deleting a huge number of files in Linux.
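Note that -delete combines with all the find filters we used earlier, so the slower -exec examples can be rewritten the same way (a sketch reusing the earlier criteria):

[root@myvm1 /]# find /test -type f -mtime +7 -delete
[root@myvm1 /]# find /test -type f -size +7M -delete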

Let's now have a look at how deleting files with the Perl language works, and how its speed compares to the other options we saw earlier.

[root@myvm1 test]# time perl -e 'for(<*>){((stat)[9]<(unlink))}'
real    1m0.488s
user    0m7.023s
sys     0m27.403s

That's insanely fast compared to the find and rm options we saw earlier, and so far the best method for deleting all the files in a directory: Perl took only around 1 minute to delete half a million files. That's a remarkable speed for file deletion in Linux.

But if you want to do anything more selective while using Perl, you will need some good hands-on experience with Perl and its regular expressions.
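For readers who don't know Perl, here is the same one-liner spread out with comments (a sketch of what it does; the stat call is incidental, and the unlink is what actually removes each file):

perl -e '
    for (<*>) {                  # glob every non-hidden name in the current directory
        ((stat)[9] < (unlink));  # unlink deletes the file; the comparison result is simply discarded
    }
'

Also note that the <*> glob skips names starting with a dot, so hidden files are left behind, and unlink does not remove directories.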

There is one more lesser-used and lesser-known method for deleting a large number of files inside a folder. It is none other than our famous tool rsync, normally used for transferring and synchronizing files between local and remote locations in Linux.

Let's have a look at that method of deleting all files inside a folder with the help of rsync. The logic behind it builds on exactly what rsync is designed for: making a destination directory identical to a source directory.

This is achieved by simply synchronizing the target directory containing the large number of files with an empty directory. In our case the test directory has half a million files, so let's create a directory called blanktest, which will be kept empty purely for the synchronization. Along with this we will use rsync's --delete option, which deletes any file in the destination that is not present in the source (and since our source is an empty directory, every file in the destination directory will be deleted).

Empty Directory: /home/blanktest

Directory to be emptied: /test
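One small setup detail (a sketch; blanktest just has to exist and stay empty):

[root@myvm1 home]# mkdir -p blanktest

Also note the trailing slashes in the command below: to rsync, "blanktest/" means "the contents of blanktest", which is exactly what we want mirrored into test/.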

[root@myvm1 home]# time rsync -a --delete blanktest/ test/
real    2m52.502s
user    0m2.772s
sys     0m32.649s

The results are pretty impressive: 2 minutes and 52 seconds. So rsync is a much better option than find if you want to empty a directory containing millions of files.

The table below summarizes the time taken to delete half a million files in Linux using each of the methods above.

COMMAND                      TIME TAKEN
rm command                   Not capable of deleting a large number of files
find command with -exec      14 minutes 51 seconds for half a million files
find command with -delete    5 minutes 11 seconds for half a million files
Perl                         1 minute for half a million files
rsync with --delete          2 minutes 52 seconds for half a million files

 


Comments

The previous commands are good.
rm -rf directory/ is also fast for billions of files in one folder. I tried that.

We have an Ab Initio application that connects to a database. Before it connects, it creates a lock file; the naming convention is pset.xxxx.log.lock.
When it disconnects it deletes this lock file, and when it connects again it creates the same lock file.

However, because this application makes such frequent connections, more than one connection within a few seconds, it gets hung creating or removing the lock file.

I suspect it doesn't even give the Linux OS enough time to remove the lock file and create another.

I wonder how fast the lock file can be created, removed, and then created again?
It is Red Hat Linux 2.6.32-358.55.1.el6.x86_64 #1 SMP x86_64 x86_64 x86_64 GNU/Linux.
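One rough way to get a number is to hammer the file system with a create/delete loop and time it (a sketch; the lock file name here is just illustrative):

$ time for i in $(seq 1 10000); do touch pset.test.log.lock; rm -f pset.test.log.lock; done

The create and unlink system calls themselves take only microseconds on a local file system; most of what a loop like this measures is the process-spawn overhead of touch and rm rather than the file system itself.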

Nice article. It inspired me to check the results for find -delete, rsync, and perl. I got a different winner: on my PC the leader is find. Linux 4.2, Ubuntu 14.04, Intel i5 4 cores, Intel SSD 5xx series, EncFS encryption.

$ time for i in $(seq 1 500000); do echo testing >> $i.txt; done

real 1m13.263s
user 0m7.756s
sys 0m57.268s

The file-creation step was repeated before each test, with similar timings.

$ time rsync --delete -av ../empty/ ./

real 4m5.197s
user 0m4.308s
sys 1m43.400s

$ time find ./ -delete

real 2m19.819s
user 0m1.044s
sys 0m59.100s

$ time perl -e 'unlink for ( <*> ) '

real 3m17.482s
user 0m2.524s
sys 1m29.196s

My perl code is a little more efficient than yours, because yours does an unneeded stat call. Anyway, perl is slower than find here.

rm fails because * is expanded by the shell into a huge argument list. Nobody really does it that way, since files are packed in folders, so:

$ time rm -rf $(pwd)

$(pwd) expands to the path of the current directory, so rm works on the directory itself instead of a huge list of file names.

Hi,

Indeed, perl -e 'for(<*>){((stat)[9]<(unlink))}' is very fast, but it *did not* delete all subdirectories: those starting with a . are left, and for the others I could not figure out the reason. But I don't know Perl, so I don't know the meaning of the syntax.

As said by tomator, rm just needs to be used right:

$ time rm -rf teste/

real 0m9,815s
user 0m3,210s
sys 0m5,986s

$ time perl -e 'unlink for ( <*> ) '

real 0m20,237s
user 0m5,847s
sys 0m10,195s

I used the same bash expression to create the files (for i in $(seq 1 500000); do echo testing >> $i.txt; done)

rm is in a class of its own.
