Thu 23 Jun 2016 10:29:01 AM UTC, original submission:
The performance of 'find' is significantly worse when evaluating simple conditions (-name ...) on large directories (>1000 files).
We are using 'find' to scan large trees (>100K files spread over ~300 directories). The directories are populated incrementally over time. Once any single directory grows past roughly 1000 files, performance becomes very slow.
The command is: find . -name 'prefix.*'
We were expecting performance to be linear in the number of files. The slowdown once we crossed the threshold was about 5X, and it kept degrading as the directories grew.
Linear performance was expected because the "getdents" call already returns each entry's type along with its name, indicating which entries are directories. As a result, the whole query can be resolved by reading the directory entries, without having to fetch attributes for each and every file.
After running strace, we noticed that once a single directory holds a lot of files (somewhere between 700 and 1000; we could not identify the exact point), find starts to issue an "fstat" call for each file.
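For anyone who wants to observe this, here is a rough reproduction sketch (the directory name testdir, the prefix. naming and the file counts are placeholders chosen for illustration, not our exact setup): populate a directory first below and then above the suspected threshold, and compare the syscall summaries.

    mkdir testdir
    seq 1 500 | xargs -I{} touch testdir/prefix.{}      # below the threshold
    strace -c find testdir -name 'prefix.*' > /dev/null
    seq 501 2000 | xargs -I{} touch testdir/prefix.{}   # well above it
    strace -c find testdir -name 'prefix.*' > /dev/null

If the behaviour we are seeing is general, the second summary should show an fstat (or newfstatat) count close to the number of entries, while the first should show essentially only getdents/open/close.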
In theory, the fstat call is only needed if the "find" command contains a test on file attributes, e.g. -type, -newer, ... If the tests apply only to the file name, no fstat should be required.
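To illustrate the point (again just a rough sketch using a plain directory listing, not what find does internally): filtering the same placeholder directory by name with nothing but the directory entries should produce no per-file stat traffic in the strace summary, only getdents.

    strace -cf sh -c 'ls -f testdir | grep "^prefix\."' > /dev/null

By contrast, adding an attribute test such as -newer or -size to the find command above would obviously justify one stat call per candidate file.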
Looking for other people's advice/comments on this issue. Is this a find bug, or could it be related to the system configuration?