Why did Compress and Uncompress disappear? #131
Comments
Hmm, I can't find those functions in the commit history -- do you have a link to them to refresh my memory? In any case, I think they must have been removed because the use case for buffering entire archive files in memory as byte slices was limited. It's much more recommended to stream instead. (How would you upload large objects to the cloud otherwise? That API sounds terrible...) There are examples of how to do this in the godocs: https://godoc.org/github.com/mholt/archiver#example-Zip--StreamingWrite
@mholt sorry for the confusion... Those methods come from our wrapper interface. :D The ones I meant were https://github.com/mholt/archiver/blob/v2.1/targz.go#L53. I found the example you linked to, but it is very verbose compared to the old API, where all you needed to do was give Write a list of file paths.
Hm, okay -- I'll tag this as a feature request; contributions welcomed.
My idea is to use full I/O to speed up compression, with an API like:

```go
func Zip(srcFile string, destZip string, thNums string) error {
}
```
I've put up PRs #199 and #200 that would mostly do this. Previously, there was:

```go
type Archiver interface {
	...
	// Write writes an archive to a Writer.
	Write(output io.Writer, sources []string) error
	// Read reads an archive from a Reader.
	Read(input io.Reader, destination string) error
}
```

Those PRs would introduce:

```go
type ReaderUnarchiver interface {
	ReaderUnarchive(source io.Reader, size int64, destination string) error
}

type WriterArchiver interface {
	WriterArchive(sources []string, destination io.Writer) error
}
```

Besides the argument order being swapped and the names being different, the other primary difference is that the read function takes a `size` parameter. This is not required for tar or rar archives, but it is required for zip archives: removing the `size` parameter would force the implementation to fully read the archive into memory first, which is not efficient.

However, consumers should be able to deal with this trivially with a wrapper interface. If you know that you're not going to deal with zip archives, you can just pass `0` for the size parameter; and if you do need zip support, you can check for it and then do the work of reading the bytes into memory and getting the size yourself.

I think this approach (exposing the size parameter as part of the API to allow more efficiency) is better than having the library omit size and read into memory for zip (which is what it did previously), since it allows for more efficiency while still giving clients the flexibility to work around it with their own API if needed.
Thanks for all the work on this, I will get back to this sometime after Caddy 2 is in a more polished state. |
Howdy, and thanks for an awesome library!
We have a use case where we tar up a path and upload it to Google Cloud Storage, which takes a []byte as the object you wish to upload. Later on we might download the tar, which again gives us a []byte, and untar it to some path.
In v2.0.1 we had some super helpful functions for this. We noticed that we had issues with symlinks and that later versions of archiver fix this, but those helpful functions are now gone.
Any reason why there are no methods to archive into a []byte and unarchive from a []byte anymore?
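For what it's worth, the []byte convenience can be rebuilt on top of a streaming API in a few lines of wrapper code. A sketch, where only the `Archiver` interface shape comes from this thread, and `ArchiveBytes`, `UnarchiveBytes`, and `fakeArchiver` are made-up names for illustration:

```go
package main

import (
	"bytes"
	"fmt"
	"io"
)

// Archiver is the v2-style streaming interface quoted in this thread.
type Archiver interface {
	Write(output io.Writer, sources []string) error
	Read(input io.Reader, destination string) error
}

// ArchiveBytes recovers the old "just give me a []byte" convenience on
// top of a streaming Archiver -- handy for clients like the GCS API
// that want the whole object in memory anyway.
func ArchiveBytes(a Archiver, sources []string) ([]byte, error) {
	var buf bytes.Buffer
	if err := a.Write(&buf, sources); err != nil {
		return nil, err
	}
	return buf.Bytes(), nil
}

// UnarchiveBytes is the matching download-side helper.
func UnarchiveBytes(a Archiver, data []byte, destination string) error {
	return a.Read(bytes.NewReader(data), destination)
}

// fakeArchiver is a stand-in implementation used only for this demo.
type fakeArchiver struct{}

func (fakeArchiver) Write(w io.Writer, sources []string) error {
	_, err := io.WriteString(w, "archive-of:"+sources[0])
	return err
}

func (fakeArchiver) Read(r io.Reader, destination string) error {
	_, err := io.ReadAll(r) // a real implementation would extract to destination
	return err
}

func main() {
	b, err := ArchiveBytes(fakeArchiver{}, []string{"some/path"})
	if err != nil {
		panic(err)
	}
	fmt.Println(string(b)) // → archive-of:some/path
}
```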