The command below may take minutes depending on the file size. Is there a more efficient method?
sed -i 1d large_file
Try
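The command this answer suggests is presumably tail, the usual sed alternative for dropping leading lines; a minimal sketch (the stand-in data line is only there to make the snippet self-contained):

```shell
printf 'line1\nline2\nline3\n' > large_file   # stand-in data
# Print everything from line 2 onward; the file itself is untouched:
tail -n +2 large_file
```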
If that “large” means about 10 million lines or more, better use
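A sketch of what that could look like, assuming the intent is tail's streaming copy followed by an atomic rename (needs room for a second copy on disk; the stand-in data line just makes the snippet runnable):

```shell
printf 'line1\nline2\nline3\n' > large_file   # stand-in data
# Stream past the first line into a temporary copy, then atomically
# replace the original. Data is read and written exactly once.
tail -n +2 large_file > large_file.tmp && mv large_file.tmp large_file
```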
Edit to show some time differences: sed runs its editing machinery over every line of the file, while tail only has to find the first newline and then copy bytes, so tail comes out well ahead on large files.
There is no way to efficiently remove things from the start of a file: removing data from the beginning requires rewriting the whole file.

Truncating from the end of a file can be very quick, though: the OS only has to adjust the file size information, possibly freeing the now-unused blocks. This is not generally possible when you remove from the head of a file. It could theoretically be "fast" if you removed a whole block/extent exactly, but there is no portable system call for that; Linux's fallocate(2) does offer a FALLOC_FL_COLLAPSE_RANGE flag for exactly this, but only for block-aligned ranges and only on certain filesystems (e.g. ext4, XFS). Alternatively the filesystem could keep some form of offset inside the first block/extent to mark the real start of the file, I guess; I've never heard of one that does.
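The asymmetry is easy to demonstrate with GNU coreutils' truncate; a sketch (the file and the 100-byte figures are arbitrary):

```shell
head -c 300 /dev/zero > big_file   # stand-in for a large file
# Dropping the tail is a metadata-only operation, instant at any size:
truncate -s -100 big_file
# Dropping the head has no such shortcut; the data must be rewritten:
tail -c +101 big_file > big_file.tmp && mv big_file.tmp big_file
```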
The most efficient method: don't do it! If you do, you will in any case need twice the 'large' space on disk, and you waste I/O. If you are stuck with a large file that you want to read without the first line, wait until you need to read it before removing that line. If you need to feed the file to a program on its stdin, use tail to do it:
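A sketch (`program` and the data file are stand-ins for whatever consumer and file you have):

```shell
printf 'header\ndata\n' > large_file   # stand-in data
program() { cat; }                     # stand-in consumer
# tail streams the file from line 2 on; nothing is rewritten on disk:
tail -n +2 large_file | program
```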
When you need to read the file, you can take the opportunity to remove the first line, but only if you have the needed space on disk:
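One way to combine the read with the cleanup is tee, sketched here (`program` is again a placeholder, and a second copy must fit on disk):

```shell
printf 'header\nrow1\nrow2\n' > large_file   # stand-in data
program() { cat > /dev/null; }               # stand-in consumer
# Feed the program and write the trimmed copy in a single pass,
# then atomically swap the copy into place:
tail -n +2 large_file | tee large_file.new | program
mv large_file.new large_file
```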
If you can't read from stdin, use a fifo:
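A fifo sketch (all names are placeholders; a fifo holds no file data, so no extra disk space is needed):

```shell
printf 'header\ndata\n' > large_file   # stand-in data
program() { cat; }                     # stand-in consumer
mkfifo headless                        # named pipe, no data on disk
tail -n +2 large_file > headless &     # writer blocks until program reads
program < headless
rm headless
```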
or even better, if you are using bash, take advantage of process substitution:
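A sketch; bash substitutes a /dev/fd path for the parenthesised pipeline, so no temp file is needed (run through `bash -c` here only so the snippet also works from non-bash shells; `cat` stands in for your program):

```shell
printf 'header\ndata\n' > large_file   # stand-in data
# In bash you would simply write:  program < <(tail -n +2 large_file)
bash -c 'cat < <(tail -n +2 large_file)'
```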
If you need to seek in the file, I do not see a better solution than not getting stuck with the file in the first place. If this file was generated on stdout:
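then trim it at creation time, so the unwanted line never reaches the disk; a sketch with a hypothetical `generator`:

```shell
generator() { printf 'header\ndata\n'; }   # stand-in for the real producer
# Drop the first line before the file is ever written:
generator | tail -n +2 > large_file
```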
Otherwise, there is always the fifo or process substitution solution.
This is just theorizing, but... A custom filesystem (implemented using FUSE or a similar mechanism) could expose a directory whose contents are exactly the same as an already existing directory somewhere else, but with files truncated as you wish. The filesystem would translate all the file offsets. Then you wouldn't have to do a time-consuming rewriting of a file. But given that this idea is very non-trivial, unless you've got tens of terabytes of such files, implementing such a filesystem would be too expensive/time-consuming to be practical.
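The offset translation itself is the easy part; a toy sketch with dd (the file, the `HDR` length and the `read_at` helper are all made up), reading the underlying file as if it began after the hidden first line:

```shell
printf 'header\nhello world\n' > large_file   # "header\n" is 7 bytes
HDR=7                       # length of the hidden first line
read_at() {                 # usage: read_at OFFSET COUNT
  # dd's skip= performs the offset translation described above
  dd if=large_file bs=1 skip=$(( $1 + HDR )) count="$2" 2>/dev/null
}
read_at 0 5                 # reads "hello"
```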