From 047a4764f22340667ea8c3b11a56ed54011aded9 Mon Sep 17 00:00:00 2001 From: sacha Date: Sat, 7 Aug 2010 13:19:53 +0000 Subject: and another one... --- bin/ICSharpCode.SharpZipLib.xml | 9055 +++++++++++++++++++++++++++++++++++++++ bin/hdfsdll.dll.config | 3 + bin/hdfsdll.so | Bin 0 -> 6557 bytes bin/xmrhelpers.so | Bin 0 -> 18357 bytes 4 files changed, 9058 insertions(+) create mode 100644 bin/ICSharpCode.SharpZipLib.xml create mode 100644 bin/hdfsdll.dll.config create mode 100755 bin/hdfsdll.so create mode 100755 bin/xmrhelpers.so (limited to 'bin') diff --git a/bin/ICSharpCode.SharpZipLib.xml b/bin/ICSharpCode.SharpZipLib.xml new file mode 100644 index 0000000..98cb51e --- /dev/null +++ b/bin/ICSharpCode.SharpZipLib.xml @@ -0,0 +1,9055 @@ + + + + ICSharpCode.SharpZipLib + + + + + FastZipEvents supports all events applicable to FastZip operations. + + + + + Delegate to invoke when processing directories. + + + + + Delegate to invoke when processing files. + + + + + Delegate to invoke during processing of files. + + + + + Delegate to invoke when processing for a file has been completed. + + + + + Delegate to invoke when processing directory failures. + + + + + Delegate to invoke when processing file failures. + + + + + Raise the directory failure event. + + The directory causing the failure. + The exception for this event. + A boolean indicating if execution should continue or not. + + + + Raises the file failure delegate. + + The file causing the failure. + The exception for this failure. + A boolean indicating if execution should continue or not. + + + + Fires the Process File delegate. + + The file being processed. + A boolean indicating if execution should continue or not. + + + + Fires the CompletedFile delegate + + The file whose processing has been completed. + A boolean indicating if execution should continue or not. + + + + Fires the process directory delegate. + + The directory being processed. + Flag indicating if the directory has matching files as determined by the current filter. + A of true if the operation should continue; false otherwise. + + + + The minimum timespan between events. + + The minimum period of time between events. + + + + + FastZip provides facilities for creating and extracting zip files. + + + + + Initialise a default instance of . + + + + + Initialise a new instance of + + The events to use during operations. + + + + Create a zip file. + + The name of the zip file to create. + The directory to source files from. + True to recurse directories, false for no recursion. + The file filter to apply. + The directory filter to apply. + + + + Create a zip file/archive. + + The name of the zip file to create. + The directory to obtain files and directories from. + True to recurse directories, false for no recursion. + The file filter to apply. + + + + Create a zip archive sending output to the passed. + + The stream to write archive data to. + The directory to source files from. + True to recurse directories, false for no recursion. + The file filter to apply. + The directory filter to apply. + + + + Extract the contents of a zip file. + + The zip file to extract from. + The directory to save extracted information in. + A filter to apply to files. + + + + Extract the contents of a zip file. + + The zip file to extract from. + The directory to save extracted information in. + The style of overwriting to apply. + A delegate to invoke when confirming overwriting. + A filter to apply to files. + A filter to apply to directories. 
+ Flag indicating wether to restore the date and time for extracted files. + + + + Get/set a value indicating wether empty directories should be created. + + + + + Get / set the password value. + + + + + Get or set the active when creating Zip files. + + + + + + Get or set the active when creating Zip files. + + + + + Get/set a value indicating wether file dates and times should + be restored when extracting files from an archive. + + The default value is false. + + + + Get/set a value indicating wether file attributes should + be restored during extract operations + + + + + Defines the desired handling when overwriting files during extraction. + + + + + Prompt the user to confirm overwriting + + + + + Never overwrite files. + + + + + Always overwrite files. + + + + + Delegate called when confirming overwriting of files. + + + + + NameFilter is a string matching class which allows for both positive and negative + matching. + A filter is a sequence of independant regular expressions separated by semi-colons ';' + Each expression can be prefixed by a plus '+' sign or a minus '-' sign to denote the expression + is intended to include or exclude names. If neither a plus or minus sign is found include is the default + A given name is tested for inclusion before checking exclusions. Only names matching an include spec + and not matching an exclude spec are deemed to match the filter. + An empty filter matches any name. + + The following expression includes all name ending in '.dat' with the exception of 'dummy.dat' + "+\.dat$;-^dummy\.dat$" + + + + + Scanning filters support filtering of names. + + + + + Test a name to see if it 'matches' the filter. + + The name to test. + Returns true if the name matches the filter, false if it does not match. + + + + Construct an instance based on the filter expression passed + + The filter expression. + + + + Test a string to see if it is a valid regular expression. + + The expression to test. + True if expression is a valid false otherwise. + + + + Test an expression to see if it is valid as a filter. + + The filter expression to test. + True if the expression is valid, false otherwise. + + + + Convert this filter to its string equivalent. + + The string equivalent for this filter. + + + + Test a value to see if it is included by the filter. + + The value to test. + True if the value is included, false otherwise. + + + + Test a value to see if it is excluded by the filter. + + The value to test. + True if the value is excluded, false otherwise. + + + + Test a value to see if it matches the filter. + + The value to test. + True if the value matches, false otherwise. + + + + Compile this filter. + + + + + Huffman tree used for inflation + + + + + Literal length tree + + + + + Distance tree + + + + + Constructs a Huffman tree from the array of code lengths. + + + the array of code lengths + + + + + Reads the next symbol from input. The symbol is encoded using the + huffman tree. + + + input the input source. + + + the next symbol, or -1 if not enough input is available. + + + + + This class is general purpose class for writing data to a buffer. 
+ + It allows you to write bits as well as bytes + Based on DeflaterPending.java + + author of the original java version : Jochen Hoenicke + + + + + Internal work buffer + + + + + construct instance using default buffer size of 4096 + + + + + construct instance using specified buffer size + + + size to use for internal buffer + + + + + Clear internal state/buffers + + + + + Write a byte to buffer + + + The value to write + + + + + Write a short value to buffer LSB first + + + The value to write. + + + + + write an integer LSB first + + The value to write. + + + + Write a block of data to buffer + + data to write + offset of first byte to write + number of bytes to write + + + + Align internal buffer on a byte boundary + + + + + Write bits to internal buffer + + source of bits + number of bits to write + + + + Write a short value to internal buffer most significant byte first + + value to write + + + + Flushes the pending buffer into the given output array. If the + output array is to small, only a partial flush is done. + + The output array. + The offset into output array. + The maximum number of bytes to store. + The number of bytes flushed. + + + + Convert internal buffer to byte array. + Buffer is empty on completion + + + The internal buffer contents converted to a byte array. + + + + + The number of bits written to the buffer + + + + + Indicates if buffer has been flushed + + + + + Used to advise clients of 'events' while processing archives + + + + + The TarArchive class implements the concept of a + 'Tape Archive'. A tar archive is a series of entries, each of + which represents a file system object. Each entry in + the archive consists of a header block followed by 0 or more data blocks. + Directory entries consist only of the header block, and are followed by entries + for the directory's contents. File entries consist of a + header followed by the number of blocks needed to + contain the file's contents. All entries are written on + block boundaries. Blocks are 512 bytes long. + + TarArchives are instantiated in either read or write mode, + based upon whether they are instantiated with an InputStream + or an OutputStream. Once instantiated TarArchives read/write + mode can not be changed. + + There is currently no support for random access to tar archives. + However, it seems that subclassing TarArchive, and using the + TarBuffer.CurrentRecord and TarBuffer.CurrentBlock + properties, this would be rather trivial. + + + + + Raises the ProgressMessage event + + The TarEntry for this event + message for this event. Null is no message + + + + Constructor for a default . + + + + + Initalise a TarArchive for input. + + The to use for input. + + + + Initialise a TarArchive for output. + + The to use for output. + + + + The InputStream based constructors create a TarArchive for the + purposes of extracting or listing a tar archive. Thus, use + these constructors when you wish to extract files from or list + the contents of an existing tar archive. + + The stream to retrieve archive data from. + Returns a new suitable for reading from. + + + + Create TarArchive for reading setting block factor + + Stream for tar archive contents + The blocking factor to apply + Returns a suitable for reading. + + + + Create a TarArchive for writing to, using the default blocking factor + + The to write to + Returns a suitable for writing. + + + + Create a TarArchive for writing to + + The stream to write to + The blocking factor to use for buffering. + Returns a suitable for writing. 
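The TarArchive factory methods documented just above are normally driven through a short read or write loop. As a minimal sketch, assuming the usual SharpZipLib member names (TarArchive.CreateInputTarArchive, ExtractContents, Close) and placeholder file paths, extraction could look like this:

using System.IO;
using ICSharpCode.SharpZipLib.Tar;

class TarExtractExample
{
    // Extract every entry of a tar archive into targetDirectory.
    public static void ExtractTar(string tarFileName, string targetDirectory)
    {
        using (Stream inStream = File.OpenRead(tarFileName))
        {
            // CreateInputTarArchive wraps the stream for reading, as described above.
            TarArchive archive = TarArchive.CreateInputTarArchive(inStream);
            try
            {
                archive.ExtractContents(targetDirectory);
            }
            finally
            {
                archive.Close();
            }
        }
    }
}

Writing an archive follows the same pattern with CreateOutputTarArchive and WriteEntry(entry, true) for recursive directory entries.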
+ + + + Set the flag that determines whether existing files are + kept, or overwritten during extraction. + + + If true, do not overwrite existing files. + + + + + Set the ascii file translation flag. + + + If true, translate ascii text files. + + + + + Set user and group information that will be used to fill in the + tar archive's entry headers. This information based on that available + for the linux operating system, which is not always available on other + operating systems. TarArchive allows the programmer to specify values + to be used in their place. + is set to true by this call. + + + The user id to use in the headers. + + + The user name to use in the headers. + + + The group id to use in the headers. + + + The group name to use in the headers. + + + + + Close the archive. + + + + + Perform the "list" command for the archive contents. + + NOTE That this method uses the progress event to actually list + the contents. If the progress display event is not set, nothing will be listed! + + + + + Perform the "extract" command and extract the contents of the archive. + + + The destination directory into which to extract. + + + + + Extract an entry from the archive. This method assumes that the + tarIn stream has been properly set with a call to GetNextEntry(). + + + The destination directory into which to extract. + + + The TarEntry returned by tarIn.GetNextEntry(). + + + + + Write an entry to the archive. This method will call the putNextEntry + and then write the contents of the entry, and finally call closeEntry() + for entries that are files. For directories, it will call putNextEntry(), + and then, if the recurse flag is true, process each entry that is a + child of the directory. + + + The TarEntry representing the entry to write to the archive. + + + If true, process the children of directory entries. + + + + + Write an entry to the archive. This method will call the putNextEntry + and then write the contents of the entry, and finally call closeEntry() + for entries that are files. For directories, it will call putNextEntry(), + and then, if the recurse flag is true, process each entry that is a + child of the directory. + + + The TarEntry representing the entry to write to the archive. + + + If true, process the children of directory entries. + + + + + Releases the unmanaged resources used by the FileStream and optionally releases the managed resources. + + true to release both managed and unmanaged resources; + false to release only unmanaged resources. + + + + Closes the archive and releases any associated resources. + + + + + Ensures that resources are freed and other cleanup operations are performed + when the garbage collector reclaims the . + + + + + Client hook allowing detailed information to be reported during processing + + + + + Get/set the ascii file translation flag. If ascii file translation + is true, then the file is checked to see if it a binary file or not. + If the flag is true and the test indicates it is ascii text + file, it will be translated. The translation converts the local + operating system's concept of line ends into the UNIX line end, + '\n', which is the defacto standard for a TAR archive. This makes + text files compatible with UNIX. + + + + + PathPrefix is added to entry names as they are written if the value is not null. + A slash character is appended after PathPrefix + + + + + RootPath is removed from entry names if it is found at the + beginning of the name. 
+ + + + + Get or set a value indicating if overrides defined by SetUserInfo should be applied. + + If overrides are not applied then the values as set in each header will be used. + + + + Get the archive user id. + See ApplyUserInfoOverrides for detail + on how to allow setting values on a per entry basis. + + + The current user id. + + + + + Get the archive user name. + See ApplyUserInfoOverrides for detail + on how to allow setting values on a per entry basis. + + + The current user name. + + + + + Get the archive group id. + See ApplyUserInfoOverrides for detail + on how to allow setting values on a per entry basis. + + + The current group id. + + + + + Get the archive group name. + See ApplyUserInfoOverrides for detail + on how to allow setting values on a per entry basis. + + + The current group name. + + + + + Get the archive's record size. Tar archives are composed of + a series of RECORDS each containing a number of BLOCKS. + This allowed tar archives to match the IO characteristics of + the physical device being used. Archives are expected + to be properly "blocked". + + + The record size this archive is using. + + + + + An output stream that compresses into the BZip2 format + including file header chars into another stream. + + + + + Construct a default output stream with maximum block size + + The stream to write BZip data onto. + + + + Initialise a new instance of the + for the specified stream, using the given blocksize. + + The stream to write compressed data to. + The block size to use. + + Valid block sizes are in the range 1..9, with 1 giving + the lowest compression and 9 the highest. + + + + + Ensures that resources are freed and other cleanup operations + are performed when the garbage collector reclaims the BZip2OutputStream. + + + + + Sets the current position of this stream to the given value. + + The point relative to the offset from which to being seeking. + The reference point from which to begin seeking. + The new position in the stream. + + + + Sets the length of this stream to the given value. + + The new stream length. + + + + Read a byte from the stream advancing the position. + + The byte read cast to an int; -1 if end of stream. + + + + Read a block of bytes + + The buffer to read into. + The offset in the buffer to start storing data at. + The maximum number of bytes to read. + The total number of bytes read. This might be less than the number of bytes + requested if that number of bytes are not currently available, or zero + if the end of the stream is reached. + + + + Write a block of bytes to the stream + + The buffer containing data to write. + The offset of the first byte to write. + The number of bytes to write. + + + + Write a byte to the stream. + + The byte to write to the stream. + + + + End the current block and end compression. + Close the stream and free any resources + + + + + Get the number of bytes written to output. + + + + + Releases the unmanaged resources used by the and optionally releases the managed resources. + + true to release both managed and unmanaged resources; false to release only unmanaged resources. + + + + Flush output buffers + + + + + Get/set flag indicating ownership of underlying stream. + When the flag is true will close the underlying stream also. 
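Putting the BZip2OutputStream constructor and its block-size remarks together, a small compression sketch (block size 9 for best compression; file paths are placeholders, and StreamUtils.Copy from the Core namespace is assumed to be available as in the other samples in this file) could be:

using System.IO;
using ICSharpCode.SharpZipLib.BZip2;
using ICSharpCode.SharpZipLib.Core;

class BZip2Example
{
    // Compress inputFile to outputFile using the largest (9) block size.
    public static void Compress(string inputFile, string outputFile)
    {
        using (Stream bzipOut = new BZip2OutputStream(File.Create(outputFile), 9))
        using (Stream source = File.OpenRead(inputFile))
        {
            // Copy the source through the compressing stream in 4 KB chunks.
            StreamUtils.Copy(source, bzipOut, new byte[4096]);
        }
    }
}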
+ + + + + Gets a value indicating whether the current stream supports reading + + + + + Gets a value indicating whether the current stream supports seeking + + + + + Gets a value indicating whether the current stream supports writing + + + + + Gets the length in bytes of the stream + + + + + Gets or sets the current position of this stream. + + + + + Get the number of bytes written to the output. + + + + + Represents exception conditions specific to Zip archive handling + + + + + SharpZipBaseException is the base exception class for the SharpZipLibrary. + All library exceptions are derived from this. + + NOTE: Not all exceptions thrown will be derived from this class. + A variety of other exceptions are possible for example + + + + Deserialization constructor + + for this constructor + for this constructor + + + + Initializes a new instance of the SharpZipBaseException class. + + + + + Initializes a new instance of the SharpZipBaseException class with a specified error message. + + A message describing the exception. + + + + Initializes a new instance of the SharpZipBaseException class with a specified + error message and a reference to the inner exception that is the cause of this exception. + + A message describing the exception. + The inner exception + + + + Deserialization constructor + + for this constructor + for this constructor + + + + Initializes a new instance of the ZipException class. + + + + + Initializes a new instance of the ZipException class with a specified error message. + + The error message that explains the reason for the exception. + + + + Initialise a new instance of ZipException. + + A message describing the error. + The exception that is the cause of the current exception. + + + + A helper class to simplify compressing and decompressing streams. + + + + + Decompress input writing + decompressed data to the output stream + + The stream containing data to decompress. + The stream to write decompressed data to. + Both streams are closed on completion + + + + Compress input stream sending + result to output stream + + The stream to compress. + The stream to write compressed data to. + The block size to use. + Both streams are closed on completion + + + + Initialise a default instance of this class. + + + + + Determines how entries are tested to see if they should use Zip64 extensions or not. + + + + + Zip64 will not be forced on entries during processing. + + An entry can have this overridden if required + + + + Zip64 should always be used. + + + + + #ZipLib will determine use based on entry values when added to archive. + + + + + The kind of compression used for an entry in an archive + + + + + A direct copy of the file contents is held in the archive + + + + + Common Zip compression method using a sliding dictionary + of up to 32KB and secondary compression from Huffman/Shannon-Fano trees + + + + + An extension to deflate with a 64KB window. Not supported by #Zip currently + + + + + Not supported by #Zip currently + + + + + WinZip special for AES encryption, Not supported by #Zip + + + + + Identifies the encryption algorithm used for an entry + + + + + No encryption has been used. + + + + + Encrypted using PKZIP 2.0 or 'classic' encryption. + + + + + DES encryption has been used. + + + + + RCS encryption has been used for encryption. + + + + + Triple DES encryption with 168 bit keys has been used for this entry. + + + + + Triple DES with 112 bit keys has been used for this entry. + + + + + AES 128 has been used for encryption. 
+ + + + + AES 192 has been used for encryption. + + + + + AES 256 has been used for encryption. + + + + + RC2 corrected has been used for encryption. + + + + + Blowfish has been used for encryption. + + + + + Twofish has been used for encryption. + + + + + RCS has been used for encryption. + + + + + An unknown algorithm has been used for encryption. + + + + + Defines the contents of the general bit flags field for an archive entry. + + + + + Bit 0 if set indicates that the file is encrypted + + + + + Bits 1 and 2 - Two bits defining the compression method (only for Method 6 Imploding and 8,9 Deflating) + + + + + Bit 3 if set indicates a trailing data desciptor is appended to the entry data + + + + + Bit 4 is reserved for use with method 8 for enhanced deflation + + + + + Bit 5 if set indicates the file contains Pkzip compressed patched data. + Requires version 2.7 or greater. + + + + + Bit 6 if set strong encryption has been used for this entry. + + + + + Bit 7 is currently unused + + + + + Bit 8 is currently unused + + + + + Bit 9 is currently unused + + + + + Bit 10 is currently unused + + + + + Bit 11 if set indicates the filename and + comment fields for this file must be encoded using UTF-8. + + + + + Bit 12 is documented as being reserved by PKware for enhanced compression. + + + + + Bit 13 if set indicates that values in the local header are masked to hide + their actual values, and the central directory is encrypted. + + + Used when encrypting the central directory contents. + + + + + Bit 14 is documented as being reserved for use by PKware + + + + + Bit 15 is documented as being reserved for use by PKware + + + + + This class contains constants used for Zip format files + + + + + The version made by field for entries in the central header when created by this library + + + This is also the Zip version for the library when comparing against the version required to extract + for an entry. See . + + + + + The version made by field for entries in the central header when created by this library + + + This is also the Zip version for the library when comparing against the version required to extract + for an entry. See ZipInputStream.CanDecompressEntry. + + + + + The minimum version required to support strong encryption + + + + + The minimum version required to support strong encryption + + + + + The version required for Zip64 extensions + + + + + Size of local entry header (excluding variable length fields at end) + + + + + Size of local entry header (excluding variable length fields at end) + + + + + Size of Zip64 data descriptor + + + + + Size of data descriptor + + + + + Size of data descriptor + + + + + Size of central header entry (excluding variable fields) + + + + + Size of central header entry + + + + + Size of end of central record (excluding variable fields) + + + + + Size of end of central record (excluding variable fields) + + + + + Size of 'classic' cryptographic header stored before any entry data + + + + + Size of cryptographic header stored before entry data + + + + + Signature for local entry header + + + + + Signature for local entry header + + + + + Signature for spanning entry + + + + + Signature for spanning entry + + + + + Signature for temporary spanning entry + + + + + Signature for temporary spanning entry + + + + + Signature for data descriptor + + + This is only used where the length, Crc, or compressed size isnt known when the + entry is created and the output stream doesnt support seeking. 
+ The local entry cannot be 'patched' with the correct values in this case + so the values are recorded after the data prefixed by this header, as well as in the central directory. + + + + + Signature for data descriptor + + + This is only used where the length, Crc, or compressed size isnt known when the + entry is created and the output stream doesnt support seeking. + The local entry cannot be 'patched' with the correct values in this case + so the values are recorded after the data prefixed by this header, as well as in the central directory. + + + + + Signature for central header + + + + + Signature for central header + + + + + Signature for Zip64 central file header + + + + + Signature for Zip64 central file header + + + + + Signature for Zip64 central directory locator + + + + + Signature for archive extra data signature (were headers are encrypted). + + + + + Central header digitial signature + + + + + Central header digitial signature + + + + + End of central directory record signature + + + + + End of central directory record signature + + + + + Convert a portion of a byte array to a string. + + + Data to convert to string + + + Number of bytes to convert starting from index 0 + + + data[0]..data[length - 1] converted to a string + + + + + Convert a byte array to string + + + Byte array to convert + + + dataconverted to a string + + + + + Convert a byte array to string + + The applicable general purpose bits flags + + Byte array to convert + + The number of bytes to convert. + + dataconverted to a string + + + + + Convert a byte array to string + + + Byte array to convert + + The applicable general purpose bits flags + + dataconverted to a string + + + + + Convert a string to a byte array + + + String to convert to an array + + Converted array + + + + Convert a string to a byte array + + The applicable general purpose bits flags + + String to convert to an array + + Converted array + + + + Initialise default instance of ZipConstants + + + Private to prevent instances being created. + + + + + Default encoding used for string conversion. 0 gives the default system OEM code page. + Dont use unicode encodings if you want to be Zip compatible! + Using the default code page isnt the full solution neccessarily + there are many variable factors, codepage 850 is often a good choice for + European users, however be careful about compatability. + + + + + This is a DeflaterOutputStream that writes the files into a zip + archive one after another. It has a special method to start a new + zip entry. The zip entries contains information about the file name + size, compressed size, CRC, etc. + + It includes support for Stored and Deflated entries. + This class is not thread safe. +
+
Author of the original java version : Jochen Hoenicke +
+ This sample shows how to create a zip file + + using System; + using System.IO; + + using ICSharpCode.SharpZipLib.Core; + using ICSharpCode.SharpZipLib.Zip; + + class MainClass + { + public static void Main(string[] args) + { + string[] filenames = Directory.GetFiles(args[0]); + byte[] buffer = new byte[4096]; + + using ( ZipOutputStream s = new ZipOutputStream(File.Create(args[1])) ) { + + s.SetLevel(9); // 0 - store only to 9 - means best compression + + foreach (string file in filenames) { + ZipEntry entry = new ZipEntry(file); + s.PutNextEntry(entry); + + using (FileStream fs = File.OpenRead(file)) { + StreamUtils.Copy(fs, s, buffer); + } + } + } + } + } + + +
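For symmetry with the creation sample above, such an archive is normally read back with the companion ZipInputStream class. The following sketch uses placeholder paths and flattens entry names with Path.GetFileName purely to keep the example short:

using System.IO;
using ICSharpCode.SharpZipLib.Zip;

class ReadZipExample
{
    // Extract every file entry of zipFileName into targetDirectory.
    public static void ExtractAll(string zipFileName, string targetDirectory)
    {
        byte[] buffer = new byte[4096];
        using (ZipInputStream s = new ZipInputStream(File.OpenRead(zipFileName)))
        {
            ZipEntry entry;
            while ((entry = s.GetNextEntry()) != null)
            {
                if (!entry.IsFile)
                {
                    continue; // directory entries carry no data
                }
                string outPath = Path.Combine(targetDirectory, Path.GetFileName(entry.Name));
                using (FileStream fs = File.Create(outPath))
                {
                    int count;
                    while ((count = s.Read(buffer, 0, buffer.Length)) > 0)
                    {
                        fs.Write(buffer, 0, count);
                    }
                }
            }
        }
    }
}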
+ + + A special stream deflating or compressing the bytes that are + written to it. It uses a Deflater to perform actual deflating.
+ Authors of the original java version : Tom Tromey, Jochen Hoenicke +
+
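As a sketch of using the class directly, rather than through ZipOutputStream or GZipOutputStream, the following compresses one file into a deflate stream. The namespace ICSharpCode.SharpZipLib.Zip.Compression.Streams and the file paths are assumptions based on the library layout rather than taken from this documentation:

using System.IO;
using ICSharpCode.SharpZipLib.Core;
using ICSharpCode.SharpZipLib.Zip.Compression.Streams;

class DeflateExample
{
    // Write a deflate-compressed copy of inputFile to outputFile.
    public static void DeflateFile(string inputFile, string outputFile)
    {
        using (Stream deflated = new DeflaterOutputStream(File.Create(outputFile)))
        using (Stream source = File.OpenRead(inputFile))
        {
            // The default Deflater and buffer size from the first constructor below are used.
            StreamUtils.Copy(source, deflated, new byte[4096]);
        }
    }
}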
+ + + Creates a new DeflaterOutputStream with a default Deflater and default buffer size. + + + the output stream where deflated output should be written. + + + + + Creates a new DeflaterOutputStream with the given Deflater and + default buffer size. + + + the output stream where deflated output should be written. + + + the underlying deflater. + + + + + Creates a new DeflaterOutputStream with the given Deflater and + buffer size. + + + The output stream where deflated output is written. + + + The underlying deflater to use + + + The buffer size to use when deflating + + + bufsize is less than or equal to zero. + + + baseOutputStream does not support writing + + + deflater instance is null + + + + + Finishes the stream by calling finish() on the deflater. + + + Not all input is deflated + + + + + Encrypt a block of data + + + Data to encrypt. NOTE the original contents of the buffer are lost + + + Offset of first byte in buffer to encrypt + + + Number of bytes in buffer to encrypt + + + + + Initializes encryption keys based on given password + + The password. + + + + Deflates everything in the input buffers. This will call + def.deflate() until all bytes from the input buffers + are processed. + + + + + Sets the current position of this stream to the given value. Not supported by this class! + + The offset relative to the to seek. + The to seek from. + The new position in the stream. + Any access + + + + Sets the length of this stream to the given value. Not supported by this class! + + The new stream length. + Any access + + + + Read a byte from stream advancing position by one + + The byte read cast to an int. THe value is -1 if at the end of the stream. + Any access + + + + Read a block of bytes from stream + + The buffer to store read data in. + The offset to start storing at. + The maximum number of bytes to read. + The actual number of bytes read. Zero if end of stream is detected. + Any access + + + + Asynchronous reads are not supported a NotSupportedException is always thrown + + The buffer to read into. + The offset to start storing data at. + The number of bytes to read + The async callback to use. + The state to use. + Returns an + Any access + + + + Asynchronous writes arent supported, a NotSupportedException is always thrown + + The buffer to write. + The offset to begin writing at. + The number of bytes to write. + The to use. + The state object. + Returns an IAsyncResult. + Any access + + + + Flushes the stream by calling Flush on the deflater and then + on the underlying stream. This ensures that all bytes are flushed. + + + + + Calls and closes the underlying + stream when is true. + + + + + Writes a single byte to the compressed output stream. + + + The byte value. + + + + + Writes bytes from an array to the compressed stream. + + + The byte array + + + The offset into the byte array where to start. + + + The number of bytes to write. + + + + + This buffer is used temporarily to retrieve the bytes from the + deflater and write them to the underlying output stream. + + + + + The deflater which is used to deflate the stream. + + + + + Base stream the deflater depends on. + + + + + Get/set flag indicating ownership of the underlying stream. + When the flag is true will close the underlying stream also. + + + + + Allows client to determine if an entry can be patched after its added + + + + + Get/set the password used for encryption. 
+ + When set to null or if the password is empty no encryption is performed + + + + Gets value indicating stream can be read from + + + + + Gets a value indicating if seeking is supported for this stream + This property always returns false + + + + + Get value indicating if this stream supports writing + + + + + Get current length of stream + + + + + Gets the current position within the stream. + + Any attempt to set position + + + + Creates a new Zip output stream, writing a zip archive. + + + The output stream to which the archive contents are written. + + + + + Set the zip file comment. + + + The comment text for the entire archive. + + + The converted comment is longer than 0xffff bytes. + + + + + Sets the compression level. The new level will be activated + immediately. + + The new compression level (1 to 9). + + Level specified is not supported. + + + + + + Get the current deflater compression level + + The current compression level + + + + Write an unsigned short in little endian byte order. + + + + + Write an int in little endian byte order. + + + + + Write an int in little endian byte order. + + + + + Starts a new Zip entry. It automatically closes the previous + entry if present. + All entry elements bar name are optional, but must be correct if present. + If the compression method is stored and the output is not patchable + the compression for that entry is automatically changed to deflate level 0 + + + the entry. + + + if entry passed is null. + + + if an I/O error occured. + + + if stream was finished + + + Too many entries in the Zip file
+ Entry name is too long
+ Finish has already been called
+
+
+ + + Closes the current entry, updating header and footer information as required + + + An I/O error occurs. + + + No entry is active. + + + + + Writes the given buffer to the current entry. + + The buffer containing data to write. + The offset of the first byte to write. + The number of bytes to write. + Archive size is invalid + No entry is active. + + + + Finishes the stream. This will write the central directory at the + end of the zip file and flush the stream. + + + This is automatically called when the stream is closed. + + + An I/O error occurs. + + + Comment exceeds the maximum length
+ Entry name exceeds the maximum length +
+
+ + + The entries for the archive. + + + + + Used to track the crc of data added to entries. + + + + + The current entry being added. + + + + + Used to track the size of data for an entry during writing. + + + + + Offset to be recorded for each entry in the central header. + + + + + Comment for the entire archive recorded in central header. + + + + + Flag indicating that header patching is required for the current entry. + + + + + Position to patch crc + + + + + Position to patch size. + + + + + Gets a flag value of true if the central header has been added for this archive; false if it has not been added. + + No further entries can be added once this has been done. + + + + Get / set a value indicating how Zip64 Extension usage is determined when adding entries. + + Older archivers may not understand Zip64 extensions. + If backwards compatability is an issue be careful when adding entries to an archive. + Setting this property to off is workable but less desirable as in those circumstances adding a file + larger then 4GB will fail. + + + + INameTransform defines how file system names are transformed for use with archives. + + + + + Given a file name determine the transformed value. + + The name to transform. + The transformed file name. + + + + Given a directory name determine the transformed value. + + The name to transform. + The transformed directory name + + + + This class contains constants used for deflation. + + + + + Set to true to enable debugging + + + + + Written to Zip file to identify a stored block + + + + + Identifies static tree in Zip file + + + + + Identifies dynamic tree in Zip file + + + + + Header flag indicating a preset dictionary for deflation + + + + + Sets internal buffer sizes for Huffman encoding + + + + + Internal compression engine constant + + + + + Internal compression engine constant + + + + + Internal compression engine constant + + + + + Internal compression engine constant + + + + + Internal compression engine constant + + + + + Internal compression engine constant + + + + + Internal compression engine constant + + + + + Internal compression engine constant + + + + + Internal compression engine constant + + + + + Internal compression engine constant + + + + + Internal compression engine constant + + + + + Internal compression engine constant + + + + + Internal compression engine constant + + + + + Internal compression engine constant + + + + + Internal compression engine constant + + + + + Internal compression engine constant + + + + + Internal compression engine constant + + + + + Internal compression engine constant + + + + + Internal compression engine constant + + + + + Internal compression engine constant + + + + + Internal compression engine constant + + + + + This class stores the pending output of the Deflater. + + author of the original java version : Jochen Hoenicke + + + + + Construct instance with default buffer size + + + + + PathFilter filters directories and files using a form of regular expressions + by full path name. + See NameFilter for more detail on filtering. + + + + + Initialise a new instance of . + + The filter expression to apply. + + + + Test a name to see if it matches the filter. + + The name to test. + True if the name matches, false otherwise. + + + + ExtendedPathFilter filters based on name, file size, and the last write time of the file. + + Provides an example of how to customise filtering. + + + + Initialise a new instance of ExtendedPathFilter. + + The filter to apply. + The minimum file size to include. 
+ The maximum file size to include. + + + + Initialise a new instance of ExtendedPathFilter. + + The filter to apply. + The minimum to include. + The maximum to include. + + + + Initialise a new instance of ExtendedPathFilter. + + The filter to apply. + The minimum file size to include. + The maximum file size to include. + The minimum to include. + The maximum to include. + + + + Test a filename to see if it matches the filter. + + The filename to test. + True if the filter matches, false otherwise. + + + + Get/set the minimum size for a file that will match this filter. + + + + + Get/set the maximum size for a file that will match this filter. + + + + + Get/set the minimum value that will match for this filter. + + Files with a LastWrite time less than this value are excluded by the filter. + + + + Get/set the maximum value that will match for this filter. + + Files with a LastWrite time greater than this value are excluded by the filter. + + + + NameAndSizeFilter filters based on name and file size. + + A sample showing how filters might be extended. + + + + Initialise a new instance of NameAndSizeFilter. + + The filter to apply. + The minimum file size to include. + The maximum file size to include. + + + + Test a filename to see if it matches the filter. + + The filename to test. + True if the filter matches, false otherwise. + + + + Get/set the minimum size for a file that will match this filter. + + + + + Get/set the maximum size for a file that will match this filter. + + + + + BZip2Exception represents exceptions specific to Bzip2 algorithm + + + + + Deserialization constructor + + for this constructor + for this constructor + + + + Initialise a new instance of BZip2Exception. + + + + + Initialise a new instance of BZip2Exception with its message set to message. + + The message describing the error. + + + + Initialise an instance of BZip2Exception + + A message describing the error. + The exception that is the cause of the current exception. + + + + GZipException represents a Gzip specific exception + + + + + Deserialization constructor + + for this constructor + for this constructor + + + + Initialise a new instance of GZipException + + + + + Initialise a new instance of GZipException with its message string. + + A that describes the error. + + + + Initialise a new instance of . + + A that describes the error. + The that caused this exception. + + + + Contains the output from the Inflation process. + We need to have a window so that we can refer backwards into the output stream + to repeat stuff.
+ Author of the original java version : John Leuner +
+
+ + + Write a byte to this output window + + value to write + + if window is full + + + + + Append a byte pattern already in the window itself + + length of pattern to copy + distance from end of window pattern occurs + + If the repeated data overflows the window + + + + + Copy from input manipulator to internal window + + source of data + length of data to copy + the number of bytes copied + + + + Copy dictionary to window + + source dictionary + offset of start in source dictionary + length of dictionary + + If window isnt empty + + + + + Get remaining unfilled space in window + + Number of bytes left in window + + + + Get bytes available for output in window + + Number of bytes filled + + + + Copy contents of window to output + + buffer to copy to + offset to start at + number of bytes to count + The number of bytes copied + + If a window underflow occurs + + + + + Reset by clearing window so GetAvailable returns 0 + + + + + Defines internal values for both compression and decompression + + + + + When multiplied by compression parameter (1-9) gives the block size for compression + 9 gives the best compresssion but uses the most memory. + + + + + Backend constant + + + + + Backend constant + + + + + Backend constant + + + + + Backend constant + + + + + Backend constant + + + + + Backend constant + + + + + Backend constant + + + + + Backend constant + + + + + Backend constant + + + + + Random numbers used to randomise repetitive blocks + + + + + This filter stream is used to compress a stream into a "GZIP" stream. + The "GZIP" format is described in RFC 1952. + + author of the original java version : John Leuner + + This sample shows how to gzip a file + + using System; + using System.IO; + + using ICSharpCode.SharpZipLib.GZip; + using ICSharpCode.SharpZipLib.Core; + + class MainClass + { + public static void Main(string[] args) + { + using (Stream s = new GZipOutputStream(File.Create(args[0] + ".gz"))) + using (FileStream fs = File.OpenRead(args[0])) { + byte[] writeData = new byte[4096]; + Streamutils.Copy(s, fs, writeData); + } + } + } + } + + + + + + CRC-32 value for uncompressed data + + + + + Creates a GzipOutputStream with the default buffer size + + + The stream to read data (to be compressed) from + + + + + Creates a GZipOutputStream with the specified buffer size + + + The stream to read data (to be compressed) from + + + Size of the buffer to use + + + + + Sets the active compression level (1-9). The new level will be activated + immediately. + + The compression level to set. + + Level specified is not supported. + + + + + + Get the current compression level. + + The current compression level. + + + + Write given buffer to output updating crc + + Buffer to write + Offset of first byte in buf to write + Number of bytes to write + + + + Writes remaining compressed output data to the output stream + and closes it. + + + + + Finish compression and write any footer information required to stream + + + + + Arguments used with KeysRequiredEvent + + + + + Initialise a new instance of + + The name of the file for which keys are required. + + + + Initialise a new instance of + + The name of the file for which keys are required. + The current key value. + + + + Get the name of the file for which keys are required. + + + + + Get/set the key value + + + + + The strategy to apply to testing. + + + + + Find the first error only. + + + + + Find all possible errors. + + + + + The operation in progress reported by a during testing. + + TestArchive + + + + Setting up testing. 
+ + + + + Testing an individual entries header + + + + + Testing an individual entries data + + + + + Testing an individual entry has completed. + + + + + Running miscellaneous tests + + + + + Testing is complete + + + + + Status returned returned by during testing. + + TestArchive + + + + Initialise a new instance of + + The this status applies to. + + + + Get the current in progress. + + + + + Get the this status is applicable to. + + + + + Get the current/last entry tested. + + + + + Get the number of errors detected so far. + + + + + Get the number of bytes tested so far for the current entry. + + + + + Get a value indicating wether the last entry test was valid. + + + + + Delegate invoked during testing if supplied indicating current progress and status. + + If the message is non-null an error has occured. If the message is null + the operation as found in status has started. + + + + The possible ways of applying updates to an archive. + + + + + Perform all updates on temporary files ensuring that the original file is saved. + + + + + Update the archive directly, which is faster but less safe. + + + + + This class represents a Zip archive. You can ask for the contained + entries, or get an input stream for a file entry. The entry is + automatically decompressed. + + You can also update the archive adding or deleting entries. + + This class is thread safe for input: You can open input streams for arbitrary + entries in different threads. +
+
Author of the original java version : Jochen Hoenicke +
+ + + using System; + using System.Text; + using System.Collections; + using System.IO; + + using ICSharpCode.SharpZipLib.Zip; + + class MainClass + { + static public void Main(string[] args) + { + using (ZipFile zFile = new ZipFile(args[0])) { + Console.WriteLine("Listing of : " + zFile.Name); + Console.WriteLine(""); + Console.WriteLine("Raw Size Size Date Time Name"); + Console.WriteLine("-------- -------- -------- ------ ---------"); + foreach (ZipEntry e in zFile) { + if ( e.IsFile ) { + DateTime d = e.DateTime; + Console.WriteLine("{0, -10}{1, -10}{2} {3} {4}", e.Size, e.CompressedSize, + d.ToString("dd-MM-yy"), d.ToString("HH:mm"), + e.Name); + } + } + } + } + } + + +
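Besides listing entries as in the sample above, the class supports in-place updates. A minimal sketch using the update members documented further down (BeginUpdate, Add, SetComment and CommitUpdate), with placeholder file names:

using ICSharpCode.SharpZipLib.Zip;

class UpdateZipExample
{
    // Add one file to an existing archive and record an archive comment.
    public static void AddFile(string zipFileName, string fileToAdd)
    {
        using (ZipFile zipFile = new ZipFile(zipFileName))
        {
            zipFile.BeginUpdate();
            zipFile.Add(fileToAdd);            // queue the file as a new entry
            zipFile.SetComment("Updated");     // recorded when the update is committed
            zipFile.CommitUpdate();            // write the changes back to the archive
        }
    }
}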
+ + + Event handler for handling encryption keys. + + + + + Handles getting of encryption keys when required. + + The file for which encryption keys are required. + + + + Opens a Zip file with the given name for reading. + + The name of the file to open. + + An i/o error occurs + + + The file doesn't contain a valid zip archive. + + + + + Opens a Zip file reading the given . + + The to read archive data from. + + An i/o error occurs. + + + The file doesn't contain a valid zip archive. + + + + + Opens a Zip file reading the given . + + The to read archive data from. + + An i/o error occurs + + + The file doesn't contain a valid zip archive.
The stream provided cannot seek.
+
+
+ + + Initialises a default instance with no entries and no file storage. + + + + + Finalize this instance. + + + + + Closes the ZipFile. If the stream is owned then this also closes the underlying input stream. + Once closed, no further instance methods should be called. + + + An i/o error occurs. + + + + + Create a new whose data will be stored in a file. + + The name of the archive to create. + Returns the newly created + + + + Create a new whose data will be stored on a stream. + + The stream providing data storage. + Returns the newly created + + + + Gets an enumerator for the Zip entries in this Zip file. + + Returns an for this archive. + + The Zip file has been closed. + + + + + Return the index of the entry with a matching name + + Entry name to find + If true the comparison is case insensitive + The index position of the matching entry or -1 if not found + + The Zip file has been closed. + + + + + Searches for a zip entry in this archive with the given name. + String comparisons are case insensitive + + + The name to find. May contain directory components separated by slashes ('/'). + + + A clone of the zip entry, or null if no entry with that name exists. + + + The Zip file has been closed. + + + + + Gets an input stream for reading the given zip entry data in an uncompressed form. + Normally the should be an entry returned by GetEntry(). + + The to obtain a data for + An input containing data for this + + The ZipFile has already been closed + + + The compression method for the entry is unknown + + + The entry is not found in the ZipFile + + + + + Creates an input stream reading a zip entry + + The index of the entry to obtain an input stream for. + + An input containing data for this + + + The ZipFile has already been closed + + + The compression method for the entry is unknown + + + The entry is not found in the ZipFile + + + + + Test an archive for integrity/validity + + Perform low level data Crc check + true if all tests pass, false otherwise + Testing will terminate on the first error found. + + + + Test an archive for integrity/validity + + Perform low level data Crc check + The to apply. + The handler to call during testing. + true if all tests pass, false otherwise + + + + Test a local header against that provided from the central directory + + + The entry to test against + + The type of tests to carry out. + The offset of the entries data in the file + + + + Begin updating this archive. + + The archive storage for use during the update. + The data source to utilise during updating. + + + + Begin updating to this archive. + + The storage to use during the update. + + + + Begin updating this archive. + + + + + + + + Commit current updates, updating this archive. + + + + + + + Abort updating leaving the archive unchanged. + + + + + + + Set the file comment to be recorded when the current update is commited. + + The comment to record. + + + + Add a new entry to the archive. + + The name of the file to add. + The compression method to use. + Ensure Unicode text is used for name and comment for this entry. + + + + Add a new entry to the archive. + + The name of the file to add. + The compression method to use. + + + + Add a file to the archive. + + The name of the file to add. + + + + Add a file entry with data. + + The source of the data for this entry. + The name to give to the entry. + + + + Add a file entry with data. + + The source of the data for this entry. + The name to give to the entry. + The compression method to use. + + + + Add a file entry with data. 
+ + The source of the data for this entry. + The name to give to the entry. + The compression method to use. + Ensure Unicode text is used for name and comments for this entry. + + + + Add a that contains no data. + + The entry to add. + This can be used to add directories, volume labels, or empty file entries. + + + + Add a directory entry to the archive. + + The directory to add. + + + + Delete an entry by name + + The filename to delete + True if the entry was found and deleted; false otherwise. + + + + Delete a from the archive. + + The entry to delete. + + + + Write an unsigned short in little endian byte order. + + + + + Write an int in little endian byte order. + + + + + Write an unsigned int in little endian byte order. + + + + + Write a long in little endian byte order. + + + + + Get a raw memory buffer. + + Returns a raw memory buffer. + + + + Get the size of the source descriptor for a . + + The update to get the size for. + The descriptor size, zero if there isnt one. + + + + Get an output stream for the specified + + The entry to get an output stream for. + The output stream obtained for the entry. + + + + Releases the unmanaged resources used by the this instance and optionally releases the managed resources. + + true to release both managed and unmanaged resources; + false to release only unmanaged resources. + + + + Read an unsigned short in little endian byte order. + + Returns the value read. + + The stream ends prematurely + + + + + Read a uint in little endian byte order. + + Returns the value read. + + An i/o error occurs. + + + The file ends prematurely + + + + + Search for and read the central directory of a zip file filling the entries array. + + + An i/o error occurs. + + + The central directory is malformed or cannot be found + + + + + Locate the data for a given entry. + + + The start offset of the data. + + + The stream ends prematurely + + + The local header signature is invalid, the entry and central header file name lengths are different + or the local and entry compression methods dont match + + + + + Get/set the encryption key value. + + + + + Password to be used for encrypting/decrypting files. + + Set to null if no password is required. + + + + Get a value indicating wether encryption keys are currently available. + + + + + Get/set a flag indicating if the underlying stream is owned by the ZipFile instance. + If the flag is true then the stream will be closed when Close is called. + + + The default value is true in all cases. + + + + + Get a value indicating wether + this archive is embedded in another file or not. + + + + + Get a value indicating that this archive is a new one. + + + + + Gets the comment for the zip file. + + + + + Gets the name of this zip file. + + + + + Gets the number of entries in this zip file. + + + The Zip file has been closed. + + + + + Get the number of entries contained in this . + + + + + Indexer property for ZipEntries + + + + + Get / set the to apply to names when updating. + + + + + Get/set the used to generate values + during updates. + + + + + Get /set the buffer size to be used when updating this zip file. + + + + + Get a value indicating an update has been started. + + + + + Get / set a value indicating how Zip64 Extension usage is determined when adding entries. + + + + + Delegate for handling keys/password setting during compresion/decompression. + + + + + The kind of update to apply. + + + + + Class used to sort updates. 
+ + + + + Compares two objects and returns a value indicating whether one is + less than, equal to or greater than the other. + + First object to compare + Second object to compare. + Compare result. + + + + Represents a pending update to a Zip file. + + + + + Copy an existing entry. + + The existing entry to copy. + + + + Get the for this update. + + This is the source or original entry. + + + + Get the that will be written to the updated/new file. + + + + + Get the command for this update. + + + + + Get the filename if any for this update. Null if none exists. + + + + + Get/set the location of the size patch for this update. + + + + + Get /set the location of the crc patch for this update. + + + + + Represents a string from a which is stored as an array of bytes. + + + + + Initialise a with a string. + + The textual string form. + + + + Initialise a using a string in its binary 'raw' form. + + + + + + Reset the comment to its initial state. + + + + + Implicit conversion of comment to a string. + + The to convert to a string. + The textual equivalent for the input value. + + + + Get a value indicating the original source of data for this instance. + True if the source was a string; false if the source was binary data. + + + + + Get the length of the comment when represented as raw bytes. + + + + + Get the comment in its 'raw' form as plain bytes. + + + + + An enumerator for Zip entries + + + + + An is a stream that you can write uncompressed data + to and flush, but cannot read, seek or do anything else to. + + + + + Close this stream instance. + + + + + Write any buffered data to underlying storage. + + + + + Gets a value indicating whether the current stream supports reading. + + + + + Gets a value indicating whether the current stream supports writing. + + + + + Gets a value indicating whether the current stream supports seeking. + + + + + Get the length in bytes of the stream. + + + + + Gets or sets the position within the current stream. + + + + + A is an + whose data is only a part or subsection of a file. + + + + + This filter stream is used to decompress data compressed using the "deflate" + format. The "deflate" format is described in RFC 1951. + + This stream may form the basis for other decompression filters, such + as the GZipInputStream. + + Author of the original java version : John Leuner. + + + + + Create an InflaterInputStream with the default decompressor + and a default buffer size of 4KB. + + + The InputStream to read bytes from + + + + + Create an InflaterInputStream with the specified decompressor + and a default buffer size of 4KB. + + + The source of input data + + + The decompressor used to decompress data read from baseInputStream + + + + + Create an InflaterInputStream with the specified decompressor + and the specified buffer size. + + + The InputStream to read bytes from + + + The decompressor to use + + + Size of the buffer to use + + + + + Skip specified number of bytes of uncompressed data + + + Number of bytes to skip + + + The number of bytes skipped, zero if the end of + stream has been reached + + + Number of bytes to skip is less than zero + + + + + Clear any cryptographic state. + + + + + Fills the buffer with more data to decompress. + + + Stream ends early + + + + + Flushes the baseInputStream + + + + + Sets the position within the current stream + Always throws a NotSupportedException + + The relative offset to seek to. + The defining where to seek from. + The new position in the stream. 
+ Any access + + + + Set the length of the current stream + Always throws a NotSupportedException + + The new length value for the stream. + Any access + + + + Writes a sequence of bytes to stream and advances the current position + This method always throws a NotSupportedException + + Thew buffer containing data to write. + The offset of the first byte to write. + The number of bytes to write. + Any access + + + + Writes one byte to the current stream and advances the current position + Always throws a NotSupportedException + + The byte to write. + Any access + + + + Entry point to begin an asynchronous write. Always throws a NotSupportedException. + + The buffer to write data from + Offset of first byte to write + The maximum number of bytes to write + The method to be called when the asynchronous write operation is completed + A user-provided object that distinguishes this particular asynchronous write request from other requests + An IAsyncResult that references the asynchronous write + Any access + + + + Closes the input stream. When + is true the underlying stream is also closed. + + + + + Reads decompressed data into the provided buffer byte array + + + The array to read and decompress data into + + + The offset indicating where the data should be placed + + + The number of bytes to decompress + + The number of bytes read. Zero signals the end of stream + + Inflater needs a dictionary + + + + + Decompressor for this stream + + + + + Input buffer for this stream. + + + + + Base stream the inflater reads from. + + + + + The compressed size + + + + + Flag indicating wether this instance has been closed or not. + + + + + Flag indicating wether this instance is designated the stream owner. + When closing if this flag is true the underlying stream is closed. + + + + + Get/set flag indicating ownership of underlying stream. + When the flag is true will close the underlying stream also. + + + The default value is true. + + + + + Returns 0 once the end of the stream (EOF) has been reached. + Otherwise returns 1. + + + + + Gets a value indicating whether the current stream supports reading + + + + + Gets a value of false indicating seeking is not supported for this stream. + + + + + Gets a value of false indicating that this stream is not writeable. + + + + + A value representing the length of the stream in bytes. + + + + + The current position within the stream. + Throws a NotSupportedException when attempting to set the position + + Attempting to set the position + + + + Initialise a new instance of the class. + + The underlying stream to use for IO. + The start of the partial data. + The length of the partial data. + + + + Skip the specified number of input bytes. + + The maximum number of input bytes to skip. + The actuial number of input bytes skipped. + + + + Read a byte from this stream. + + Returns the byte read or -1 on end of stream. + + + + Close this partial input stream. + + + The underlying stream is not closed. Close the parent ZipFile class to do that. + + + + + Provides a static way to obtain a source of data for an entry. + + + + + Get a source of data by creating a new stream. + + Returns a to use for compression input. + Ideally a new stream is created and opened to achieve this, to avoid locking problems. + + + + Represents a source of data that can dynamically provide + multiple data sources based on the parameters passed. + + + + + Get a data source. + + The to get a source for. + The name for data if known. + Returns a to use for compression input. 
+ Ideally a new stream is created and opened to achieve this, to avoid locking problems. + + + + Default implementation of a for use with files stored on disk. + + + + + Initialise a new instnace of + + The name of the file to obtain data from. + + + + Get a providing data. + + Returns a provising data. + + + + Default implementation of for files stored on disk. + + + + + Initialise a default instance of . + + + + + Get a providing data for an entry. + + The entry to provide data for. + The file name for data if known. + Returns a stream providing data; or null if not available + + + + Defines facilities for data storage when updating Zip Archives. + + + + + Get an empty that can be used for temporary output. + + Returns a temporary output + + + + + Convert a temporary output stream to a final stream. + + The resulting final + + + + + Make a temporary copy of the original stream. + + The to copy. + Returns a temporary output that is a copy of the input. + + + + Return a stream suitable for performing direct updates on the original source. + + The current stream. + Returns a stream suitable for direct updating. + This may be the current stream passed. + + + + Dispose of this instance. + + + + + Get the to apply during updates. + + + + + An abstract suitable for extension by inheritance. + + + + + Initializes a new instance of the class. + + The update mode. + + + + Gets a temporary output + + Returns the temporary output stream. + + + + + Converts the temporary to its final form. + + Returns a that can be used to read + the final storage for the archive. + + + + + Make a temporary copy of a . + + The to make a copy of. + Returns a temporary output that is a copy of the input. + + + + Return a stream suitable for performing direct updates on the original source. + + The to open for direct update. + Returns a stream suitable for direct updating. + + + + Disposes this instance. + + + + + Gets the update mode applicable. + + The update mode. + + + + An implementation suitable for hard disks. + + + + + Initializes a new instance of the class. + + The file. + The update mode. + + + + Initializes a new instance of the class. + + The file. + + + + Gets a temporary output for performing updates on. + + Returns the temporary output stream. + + + + Converts a temporary to its final form. + + Returns a that can be used to read + the final storage for the archive. + + + + Make a temporary copy of a stream. + + The to copy. + Returns a temporary output that is a copy of the input. + + + + Return a stream suitable for performing direct updates on the original source. + + The current stream. + Returns a stream suitable for direct updating. + If the stream is not null this is used as is. + + + + Disposes this instance. + + + + + An implementation suitable for in memory streams. + + + + + Initializes a new instance of the class. + + + + + Initializes a new instance of the class. + + The to use + This constructor is for testing as memory streams dont really require safe mode. + + + + Gets the temporary output + + Returns the temporary output stream. + + + + Converts the temporary to its final form. + + Returns a that can be used to read + the final storage for the archive. + + + + Make a temporary copy of the original stream. + + The to copy. + Returns a temporary output that is a copy of the input. + + + + Return a stream suitable for performing direct updates on the original source. + + The original source stream + Returns a stream suitable for direct updating. 
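+
+ As an illustration only, a static data source can be implemented by returning a
+ fresh stream from GetSource. ByteArrayDataSource is an invented name; such an
+ instance would typically be handed to ZipFile.Add when updating an archive.
+
+ using System.IO;
+ using ICSharpCode.SharpZipLib.Zip;
+
+ class ByteArrayDataSource : IStaticDataSource
+ {
+     byte[] data_;
+
+     public ByteArrayDataSource(byte[] data)
+     {
+         data_ = data;
+     }
+
+     // Return a new stream each call to avoid locking and positioning problems.
+     public Stream GetSource()
+     {
+         return new MemoryStream(data_);
+     }
+ }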
+ If the passed is not null this is used; + otherwise a new is returned. + + + + Disposes this instance. + + + + + Get the stream returned by if this was in fact called. + + + + + Generate a table for a byte-wise 32-bit CRC calculation on the polynomial: + x^32+x^26+x^23+x^22+x^16+x^12+x^11+x^10+x^8+x^7+x^5+x^4+x^2+x+1. + + Polynomials over GF(2) are represented in binary, one bit per coefficient, + with the lowest powers in the most significant bit. Then adding polynomials + is just exclusive-or, and multiplying a polynomial by x is a right shift by + one. If we call the above polynomial p, and represent a byte as the + polynomial q, also with the lowest power in the most significant bit (so the + byte 0xb1 is the polynomial x^7+x^3+x+1), then the CRC is (q*x^32) mod p, + where a mod b means the remainder after dividing a by b. + + This calculation is done using the shift-register method of multiplying and + taking the remainder. The register is initialized to zero, and for each + incoming bit, x^32 is added mod p to the register if the bit is a one (where + x^32 mod p is p+x^32 = x^26+...+1), and the register is multiplied mod p by + x (which is shifting right by one and adding x^32 mod p if the bit shifted + out is a one). We start with the highest power (least significant bit) of + q and repeat for all eight bits of q. + + The table is simply the CRC of all possible eight bit values. This is all + the information needed to generate CRC's on data a byte at a time for all + combinations of CRC register values and incoming bytes. + + + + + Interface to compute a data checksum used by checked input/output streams. + A data checksum can be updated by one byte or with a byte array. After each + update the value of the current checksum can be returned by calling + getValue. The complete checksum object can also be reset + so it can be used again with new data. + + + + + Resets the data checksum as if no update was ever called. + + + + + Adds one byte to the data checksum. + + + the data value to add. The high byte of the int is ignored. + + + + + Updates the data checksum with the bytes taken from the array. + + + buffer an array of bytes + + + + + Adds the byte array to the data checksum. + + + The buffer which contains the data + + + The offset in the buffer where the data starts + + + the number of data bytes to add. + + + + + Returns the data checksum computed so far. + + + + + The crc data checksum so far. + + + + + Resets the CRC32 data checksum as if no update was ever called. + + + + + Updates the checksum with the int bval. + + + the byte is taken as the lower 8 bits of value + + + + + Updates the checksum with the bytes taken from the array. + + + buffer an array of bytes + + + + + Adds the byte array to the data checksum. + + + The buffer which contains the data + + + The offset in the buffer where the data starts + + + The number of data bytes to update the CRC with. + + + + + Returns the CRC32 data checksum computed so far. + + + + + Basic implementation of + + + + + Defines factory methods for creating new values. + + + + + Create a for a file given its name + + The name of the file to create an entry for. + Returns a file entry based on the passed. + + + + Create a for a file given its name + + The name of the file to create an entry for. + If true get details from the file system if the file exists. + Returns a file entry based on the passed. + + + + Create a for a directory given its name + + The name of the directory to create an entry for. 
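+
+ A short sketch of computing a CRC-32 with the checksum classes described above;
+ the input text and the class name CrcExample are illustrative.
+
+ using ICSharpCode.SharpZipLib.Checksums;
+
+ class CrcExample
+ {
+     public static void Main()
+     {
+         byte[] data = System.Text.Encoding.ASCII.GetBytes("hello world");
+
+         Crc32 crc = new Crc32();
+         crc.Reset();            // start from the initial state
+         crc.Update(data);       // feed the whole buffer
+
+         System.Console.WriteLine("CRC-32: {0:X8}", crc.Value);
+     }
+ }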
+ Returns a directory entry based on the passed. + + + + Create a for a directory given its name + + The name of the directory to create an entry for. + If true get details from the file system for this directory if it exists. + Returns a directory entry based on the passed. + + + + Get/set the applicable. + + + + + Initialise a new instance of the class. + + A default , and the LastWriteTime for files is used. + + + + Initialise a new instance of using the specified + + The time setting to use when creating Zip entries. + + + + Initialise a new instance of using the specified + + The time to set all values to. + + + + Make a new for a file. + + The name of the file to create a new entry for. + Returns a new based on the . + + + + Make a new from a name. + + The name of the file to create a new entry for. + If true entry detail is retrieved from the file system if the file exists. + Returns a new based on the . + + + + Make a new for a directory. + + The raw untransformed name for the new directory + Returns a new representing a directory. + + + + Make a new for a directory. + + The raw untransformed name for the new directory + If true entry detail is retrieved from the file system if the file exists. + Returns a new representing a directory. + + + + Get / set the to be used when creating new values. + + + Setting this property to null will cause a default name transform to be used. + + + + + Get / set the in use. + + + + + Get / set the value to use when is set to + + + + + A bitmask defining the attributes to be retrieved from the actual file. + + The default is to get all possible attributes from the actual file. + + + + A bitmask defining which attributes are to be set on. + + By default no attributes are set on. + + + + Get set a value indicating wether unidoce text should be set on. + + + + + Defines the possible values to be used for the . + + + + + Use the recorded LastWriteTime value for the file. + + + + + Use the recorded LastWriteTimeUtc value for the file + + + + + Use the recorded CreateTime value for the file. + + + + + Use the recorded CreateTimeUtc value for the file. + + + + + Use the recorded LastAccessTime value for the file. + + + + + Use the recorded LastAccessTimeUtc value for the file. + + + + + Use a fixed value. + + The actual value used can be + specified via the constructor or + using the with the setting set + to which will use the when this class was constructed. + The property can also be used to set this value. + + + + PkzipClassic embodies the classic or original encryption facilities used in Pkzip archives. + While it has been superceded by more recent and more powerful algorithms, its still in use and + is viable for preventing casual snooping + + + + + Generates new encryption keys based on given seed + + The seed value to initialise keys with. + A new key value. + + + + PkzipClassicCryptoBase provides the low level facilities for encryption + and decryption using the PkzipClassic algorithm. + + + + + Transform a single byte + + + The transformed value + + + + + Set the key schedule for encryption/decryption. + + The data use to set the keys from. + + + + Update encryption keys + + + + + Reset the internal state. + + + + + PkzipClassic CryptoTransform for encryption. + + + + + Initialise a new instance of + + The key block to use. + + + + Transforms the specified region of the specified byte array. + + The input for which to compute the transform. + The offset into the byte array from which to begin using data. 
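+
+ A hedged sketch of creating entries through the factory described above. The
+ file and directory names are placeholders; detail is only read from the file
+ system when the named items actually exist.
+
+ using ICSharpCode.SharpZipLib.Zip;
+
+ class EntryFactoryExample
+ {
+     public static void Main()
+     {
+         // The default factory uses LastWriteTime for entry dates.
+         ZipEntryFactory factory = new ZipEntryFactory();
+
+         ZipEntry fileEntry = factory.MakeFileEntry("readme.txt");
+         ZipEntry dirEntry = factory.MakeDirectoryEntry("docs");
+
+         System.Console.WriteLine(fileEntry.Name);
+         System.Console.WriteLine(dirEntry.Name);
+     }
+ }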
+ The number of bytes in the byte array to use as data. + The computed transform. + + + + Transforms the specified region of the input byte array and copies + the resulting transform to the specified region of the output byte array. + + The input for which to compute the transform. + The offset into the input byte array from which to begin using data. + The number of bytes in the input byte array to use as data. + The output to which to write the transform. + The offset into the output byte array from which to begin writing data. + The number of bytes written. + + + + Cleanup internal state. + + + + + Gets a value indicating whether the current transform can be reused. + + + + + Gets the size of the input data blocks in bytes. + + + + + Gets the size of the output data blocks in bytes. + + + + + Gets a value indicating whether multiple blocks can be transformed. + + + + + PkzipClassic CryptoTransform for decryption. + + + + + Initialise a new instance of . + + The key block to decrypt with. + + + + Transforms the specified region of the specified byte array. + + The input for which to compute the transform. + The offset into the byte array from which to begin using data. + The number of bytes in the byte array to use as data. + The computed transform. + + + + Transforms the specified region of the input byte array and copies + the resulting transform to the specified region of the output byte array. + + The input for which to compute the transform. + The offset into the input byte array from which to begin using data. + The number of bytes in the input byte array to use as data. + The output to which to write the transform. + The offset into the output byte array from which to begin writing data. + The number of bytes written. + + + + Cleanup internal state. + + + + + Gets a value indicating whether the current transform can be reused. + + + + + Gets the size of the input data blocks in bytes. + + + + + Gets the size of the output data blocks in bytes. + + + + + Gets a value indicating whether multiple blocks can be transformed. + + + + + Defines a wrapper object to access the Pkzip algorithm. + This class cannot be inherited. + + + + + Generate an initial vector. + + + + + Generate a new random key. + + + + + Create an encryptor. + + The key to use for this encryptor. + Initialisation vector for the new encryptor. + Returns a new PkzipClassic encryptor + + + + Create a decryptor. + + Keys to use for this new decryptor. + Initialisation vector for the new decryptor. + Returns a new decryptor. + + + + Get / set the applicable block size in bits. + + The only valid block size is 8. + + + + Get an array of legal key sizes. + + + + + Get an array of legal block sizes. + + + + + Get / set the key value applicable. + + + + + This class represents an entry in a Tar archive. It consists + of the entry's header, as well as the entry's File. Entries + can be instantiated in one of three ways, depending on how + they are to be used. +

+ TarEntries that are created from the header bytes read from + an archive are instantiated with the TarEntry(byte[]) + constructor. These entries are used when extracting from + or listing the contents of an archive. They have their + header filled in using the header bytes, and they set the File + to null, since they reference an archive entry, not a file.

+

+ TarEntries that are created from files that are to be written + into an archive are instantiated with the CreateEntryFromFile(string) + pseudo constructor. These entries have their header filled in using + the File's information. They also keep a reference to the File + for convenience when writing entries.

+

+ Finally, TarEntries can be constructed from nothing but a name. + This allows the programmer to construct the entry by hand, for + instance when only an InputStream is available for writing to + the archive, and the header information is constructed from + other information. In this case the header fields are set to + defaults and the File is set to null.

+ +
+
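+
+ A sketch of two of the construction routes described above (entries built from
+ header bytes use the TarEntry(byte[]) constructor instead). The file and entry
+ names are placeholders.
+
+ using ICSharpCode.SharpZipLib.Tar;
+
+ class TarEntryExample
+ {
+     public static void Main()
+     {
+         // From a file on disk: the header is filled in from the file's information.
+         TarEntry fromFile = TarEntry.CreateEntryFromFile("report.txt");
+
+         // From a name only: header fields are set to defaults and File is null.
+         TarEntry byName = TarEntry.CreateTarEntry("data/report.txt");
+
+         System.Console.WriteLine(fromFile.Name);
+         System.Console.WriteLine(byName.Name);
+     }
+ }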
+ + + Initialise a default instance of . + + + + + Construct an entry from an archive's header bytes. File is set + to null. + + + The header bytes from a tar archive entry. + + + + + Construct a TarEntry using the header provided + + Header details for entry + + + + Clone this tar entry. + + Returns a clone of this entry. + + + + Construct an entry with only a name. + This allows the programmer to construct the entry's header "by hand". + + The name to use for the entry + Returns the newly created + + + + Construct an entry for a file. File is set to file, and the + header is constructed from information from the file. + + The file name that the entry represents. + Returns the newly created + + + + Determine if the two entries are equal. Equality is determined + by the header names being equal. + + The to compare with the current Object. + + True if the entries are equal; false if not. + + + + + Derive a Hash value for the current + + A Hash code for the current + + + + Determine if the given entry is a descendant of this entry. + Descendancy is determined by the name of the descendant + starting with this entry's name. + + + Entry to be checked as a descendent of this. + + + True if entry is a descendant of this. + + + + + Convenience method to set this entry's group and user ids. + + + This entry's new user id. + + + This entry's new group id. + + + + + Convenience method to set this entry's group and user names. + + + This entry's new user name. + + + This entry's new group name. + + + + + Fill in a TarHeader with information from a File. + + + The TarHeader to fill in. + + + The file from which to get the header information. + + + + + Get entries for all files present in this entries directory. + If this entry doesnt represent a directory zero entries are returned. + + + An array of TarEntry's for this entry's children. + + + + + Write an entry's header information to a header buffer. + + + The tar entry header buffer to fill in. + + + + + Convenience method that will modify an entry's name directly + in place in an entry header buffer byte array. + + + The buffer containing the entry header to modify. + + + The new name to place into the header buffer. + + + + + Fill in a TarHeader given only the entry's name. + + + The TarHeader to fill in. + + + The tar entry name. + + + + + The name of the file this entry represents or null if the entry is not based on a file. + + + + + The entry's header information. + + + + + Get this entry's header. + + + This entry's TarHeader. + + + + + Get/Set this entry's name. + + + + + Get/set this entry's user id. + + + + + Get/set this entry's group id. + + + + + Get/set this entry's user name. + + + + + Get/set this entry's group name. + + + + + Get/Set the modification time for this entry + + + + + Get this entry's file. + + + This entry's file. + + + + + Get/set this entry's recorded file size. + + + + + Return true if this entry represents a directory, false otherwise + + + True if this entry is a directory. + + + + + This filter stream is used to decompress a "GZIP" format stream. + The "GZIP" format is described baseInputStream RFC 1952. 
+ + author of the original java version : John Leuner + + This sample shows how to unzip a gzipped file + + using System; + using System.IO; + + using ICSharpCode.SharpZipLib.Core; + using ICSharpCode.SharpZipLib.GZip; + + class MainClass + { + public static void Main(string[] args) + { + using (Stream inStream = new GZipInputStream(File.OpenRead(args[0]))) + using (FileStream outStream = File.Create(Path.GetFileNameWithoutExtension(args[0]))) { + byte[] buffer = new byte[4096]; + StreamUtils.Copy(inStream, outStream, buffer); + } + } + } + + + + + + CRC-32 value for uncompressed data + + + + + Indicates end of stream + + + + + Creates a GZipInputStream with the default buffer size + + + The stream to read compressed data from (baseInputStream GZIP format) + + + + + Creates a GZIPInputStream with the specified buffer size + + + The stream to read compressed data from (baseInputStream GZIP format) + + + Size of the buffer to use + + + + + Reads uncompressed data into an array of bytes + + + The buffer to read uncompressed data into + + + The offset indicating where the data should be placed + + + The number of uncompressed bytes to be read + + Returns the number of bytes actually read. + + + + Provides simple " utilities. + + + + + Read from a ensuring all the required data is read. + + The stream to read. + The buffer to fill. + + + + Read from a " ensuring all the required data is read. + + The stream to read data from. + The buffer to store data in. + The offset at which to begin storing data. + The number of bytes of data to store. + + + + Copy the contents of one to another. + + The stream to source data from. + The stream to write data to. + The buffer to use during copying. + The progress handler delegate to use. + The minimum between progress updates. + The source for this event. + The name to use with the event. + + + + Copy the contents of one to another. + + The stream to source data from. + The stream to write data to. + The buffer to use during copying. + + + + Initialise an instance of + + + + + This is the DeflaterHuffman class. + + This class is not thread safe. This is inherent in the API, due + to the split of Deflate and SetInput. + + author of the original java version : Jochen Hoenicke + + + + + Pending buffer to use + + + + + Construct instance with pending buffer + + Pending buffer to use + + + + Reset internal state + + + + + Write all trees to pending buffer + + The number/rank of treecodes to send. + + + + Compress current buffer writing data to pending buffer + + + + + Flush block to output with no compression + + Data to write + Index of first byte to write + Count of bytes to write + True if this is the last block + + + + Flush block to output with compression + + Data to flush + Index of first byte to flush + Count of bytes to flush + True if this is the last block + + + + Get value indicating if internal buffer is full + + true if buffer is full + + + + Add literal to buffer + + Literal value to add to buffer. + Value indicating internal buffer is full + + + + Add distance code and length to literal and distance trees + + Distance code + Length + Value indicating if internal buffer is full + + + + Reverse the bits of a 16 bit value. 
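+
+ A small sketch of the StreamUtils copy helper described above; memory streams
+ and the class name CopyExample are used purely for illustration.
+
+ using System.IO;
+ using ICSharpCode.SharpZipLib.Core;
+
+ class CopyExample
+ {
+     public static void Main()
+     {
+         byte[] original = new byte[10000];
+
+         using (MemoryStream source = new MemoryStream(original))
+         using (MemoryStream destination = new MemoryStream()) {
+             // The caller supplies the buffer, which is reused for every chunk copied.
+             byte[] buffer = new byte[4096];
+             StreamUtils.Copy(source, destination, buffer);
+         }
+     }
+ }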
+ + Value to reverse bits + Value with bits reversed + + + + Resets the internal state of the tree + + + + + Check that all frequencies are zero + + + At least one frequency is non-zero + + + + + Set static codes and length + + new codes + length for new codes + + + + Build dynamic codes and lengths + + + + + Get encoded length + + Encoded length, the sum of frequencies * lengths + + + + Scan a literal or distance tree to determine the frequencies of the codes + in the bit length tree. + + + + + Write tree values + + Tree to write + + + + This class contains constants used for gzip. + + + + + Magic number found at start of GZIP header + + + + + Flag bit mask for text + + + + + Flag bitmask for Crc + + + + + Flag bit mask for extra + + + + + flag bitmask for name + + + + + flag bit mask indicating comment is present + + + + + Initialise default instance. + + Constructor is private to prevent instances being created. + + + + TarExceptions are used for exceptions specific to tar classes and code. + + + + + Deserialization constructor + + for this constructor + for this constructor + + + + Initialises a new instance of the TarException class. + + + + + Initialises a new instance of the TarException class with a specified message. + + The message that describes the error. + + + + + + A message describing the error. + The exception that is the cause of the current exception. + + + + Bzip2 checksum algorithm + + + + + Initialise a default instance of + + + + + Reset the state of Crc. + + + + + Update the Crc value. + + data update is based on + + + + Update Crc based on a block of data + + The buffer containing data to update the crc with. + + + + Update Crc based on a portion of a block of data + + block of data + index of first byte to use + number of bytes to use + + + + Get the current Crc value. + + + + + This is an InflaterInputStream that reads the files baseInputStream an zip archive + one after another. It has a special method to get the zip entry of + the next file. The zip entry contains information about the file name + size, compressed size, Crc, etc. + It includes support for Stored and Deflated entries. +
+
Author of the original Java version: Jochen Hoenicke
+ + This sample shows how to read a zip file + + using System; + using System.Text; + using System.IO; + + using ICSharpCode.SharpZipLib.Zip; + + class MainClass + { + public static void Main(string[] args) + { + using ( ZipInputStream s = new ZipInputStream(File.OpenRead(args[0]))) { + + ZipEntry theEntry; + while ((theEntry = s.GetNextEntry()) != null) { + int size = 2048; + byte[] data = new byte[2048]; + + Console.Write("Show contents (y/n) ?"); + if (Console.ReadLine() == "y") { + while (true) { + size = s.Read(data, 0, data.Length); + if (size > 0) { + Console.Write(new ASCIIEncoding().GetString(data, 0, size)); + } else { + break; + } + } + } + } + } + } + } + + +
+ + + The current reader this instance. + + + + + Creates a new Zip input stream, for reading a zip archive. + + The underlying providing data. + + + + Advances to the next entry in the archive + + + The next entry in the archive or null if there are no more entries. + + + If the previous entry is still open CloseEntry is called. + + + Input stream is closed + + + Password is not set, password is invalid, compression method is invalid, + version required to extract is not supported + + + + + Read data descriptor at the end of compressed data. + + + + + Complete cleanup as the final part of closing. + + True if the crc value should be tested + + + + Closes the current zip entry and moves to the next one. + + + The stream is closed + + + The Zip stream ends early + + + + + Reads a byte from the current zip entry. + + + The byte or -1 if end of stream is reached. + + + + + Handle attempts to read by throwing an . + + The destination array to store data in. + The offset at which data read should be stored. + The maximum number of bytes to read. + Returns the number of bytes actually read. + + + + Handle attempts to read from this entry by throwing an exception + + + + + Perform the initial read on an entry which may include + reading encryption headers and setting up inflation. + + The destination to fill with data read. + The offset to start reading at. + The maximum number of bytes to read. + The actual number of bytes read. + + + + Read a block of bytes from the stream. + + The destination for the bytes. + The index to start storing data. + The number of bytes to attempt to read. + Returns the number of bytes read. + Zero bytes read means end of stream. + + + + Reads a block of bytes from the current zip entry. + + + The number of bytes read (this may be less than the length requested, even before the end of stream), or 0 on end of stream. + + + An i/o error occured. + + + The deflated stream is corrupted. + + + The stream is not open. + + + + + Closes the zip input stream + + + + + Optional password used for encryption when non-null + + A password for all encrypted entries in this + + + + Gets a value indicating if there is a current entry and it can be decompressed + + + The entry can only be decompressed if the library supports the zip features required to extract it. + See the ZipEntry Version property for more details. + + + + + Returns 1 if there is an entry available + Otherwise returns 0. + + + + + Returns the current size that can be read from the current entry if available + + Thrown if the entry size is not known. + Thrown if no entry is currently available. + + + + ZipNameTransform transforms names as per the Zip file naming convention. + + The use of absolute names is supported although its use is not valid + according to Zip naming conventions, and should not be used if maximum compatability is desired. + + + + Initialize a new instance of + + + + + Initialize a new instance of + + The string to trim from front of paths if found. + + + + Static constructor. + + + + + Transform a directory name according to the Zip file naming conventions. + + The directory name to transform. + The transformed name. + + + + Transform a windows file name according to the Zip file naming conventions. + + The file name to transform. + The transformed name. + + + + Force a name to be valid by replacing invalid characters with a fixed value + + The name to force valid + The replacement character to use. + Returns a valid name + + + + Test a name to see if it is a valid name for a zip entry. 
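+
+ A sketch of how the name transform behaves; the Windows-style paths and the
+ trim prefix are illustrative, and the expected output is an assumption rather
+ than a guarantee.
+
+ using ICSharpCode.SharpZipLib.Zip;
+
+ class NameTransformExample
+ {
+     public static void Main()
+     {
+         // Trim the source root from names before conversion.
+         ZipNameTransform transform = new ZipNameTransform(@"C:\source\");
+
+         // Windows file name -> relative zip name with forward slashes.
+         string entryName = transform.TransformFile(@"C:\source\docs\readme.txt");
+         System.Console.WriteLine(entryName);   // expected to be something like "docs/readme.txt"
+
+         System.Console.WriteLine(ZipNameTransform.IsValidName(entryName));
+     }
+ }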
+ + The name to test. + If true checking is relaxed about windows file names and absolute paths. + Returns true if the name is a valid zip name; false otherwise. + Zip path names are actually in Unix format, and should only contain relative paths. + This means that any path stored should not contain a drive or + device letter, or a leading slash. All slashes should forward slashes '/'. + An empty name is valid for a file where the input comes from standard input. + A null name is not considered valid. + + + + + Test a name to see if it is a valid name for a zip entry. + + The name to test. + Returns true if the name is a valid zip name; false otherwise. + Zip path names are actually in unix format, + and should only contain relative paths if a path is present. + This means that the path stored should not contain a drive or + device letter, or a leading slash. All slashes should forward slashes '/'. + An empty name is valid where the input comes from standard input. + A null name is not considered valid. + + + + + Get/set the path prefix to be trimmed from paths if present. + + The prefix is trimmed before any conversion from + a windows path is done. + + + + This exception is used to indicate that there is a problem + with a TAR archive header. + + + + + Deserialization constructor + + for this constructor + for this constructor + + + + Initialise a new instance of the InvalidHeaderException class. + + + + + Initialises a new instance of the InvalidHeaderException class with a specified message. + + Message describing the exception cause. + + + + Initialise a new instance of InvalidHeaderException + + Message describing the problem. + The exception that is the cause of the current exception. + + + + This class allows us to retrieve a specified number of bits from + the input buffer, as well as copy big byte blocks. + + It uses an int buffer to store up to 31 bits for direct + manipulation. This guarantees that we can get at least 16 bits, + but we only need at most 15, so this is all safe. + + There are some optimizations in this class, for example, you must + never peek more than 8 bits more than needed, and you must first + peek bits before you may drop them. This is not a general purpose + class but optimized for the behaviour of the Inflater. + + authors of the original java version : John Leuner, Jochen Hoenicke + + + + + Constructs a default StreamManipulator with all buffers empty + + + + + Get the next sequence of bits but don't increase input pointer. bitCount must be + less or equal 16 and if this call succeeds, you must drop + at least n - 8 bits in the next call. + + The number of bits to peek. + + the value of the bits, or -1 if not enough bits available. */ + + + + + Drops the next n bits from the input. You should have called PeekBits + with a bigger or equal n before, to make sure that enough bits are in + the bit buffer. + + The number of bits to drop. + + + + Gets the next n bits and increases input pointer. This is equivalent + to followed by , except for correct error handling. + + The number of bits to retrieve. + + the value of the bits, or -1 if not enough bits available. + + + + + Skips to the next byte boundary. + + + + + Copies bytes from input buffer to output buffer starting + at output[offset]. You have to make sure, that the buffer is + byte aligned. If not enough bytes are available, copies fewer + bytes. + + + The buffer to copy bytes to. + + + The offset in the buffer at which copying starts + + + The length to copy, 0 is allowed. 
+ + + The number of bytes copied, 0 if no bytes were available. + + + Length is less than zero + + + Bit buffer isnt byte aligned + + + + + Resets state and empties internal buffers + + + + + Add more input for consumption. + Only call when IsNeedingInput returns true + + data to be input + offset of first byte of input + number of bytes of input to add. + + + + Gets the number of bits available in the bit buffer. This must be + only called when a previous PeekBits() returned -1. + + + the number of bits available. + + + + + Gets the number of bytes available. + + + The number of bytes available. + + + + + Returns true when SetInput can be called + + + + + Strategies for deflater + + + + + The default strategy + + + + + This strategy will only allow longer string repetitions. It is + useful for random data with a small character set. + + + + + This strategy will not look for string repetitions at all. It + only encodes with Huffman trees (which means, that more common + characters get a smaller encoding. + + + + + Low level compression engine for deflate algorithm which uses a 32K sliding window + with secondary compression from Huffman/Shannon-Fano codes. + + + + + Construct instance with pending buffer + + + Pending buffer to use + > + + + + Deflate drives actual compression of data + + True to flush input buffers + Finish deflation with the current input. + Returns true if progress has been made. + + + + Sets input data to be deflated. Should only be called when NeedsInput() + returns true + + The buffer containing input data. + The offset of the first byte of data. + The number of bytes of data to use as input. + + + + Determines if more input is needed. + + Return true if input is needed via SetInput + + + + Set compression dictionary + + The buffer containing the dictionary data + The offset in the buffer for the first byte of data + The length of the dictionary data. + + + + Reset internal state + + + + + Reset Adler checksum + + + + + Set the deflate level (0-9) + + The value to set the level to. + + + + Fill the window + + + + + Inserts the current string in the head hash and returns the previous + value for this hash. + + The previous hash value + + + + Find the best (longest) string in the window matching the + string starting at strstart. + + Preconditions: + + strstart + MAX_MATCH <= window.length. + + + True if a match greater than the minimum length is found + + + + Hashtable, hashing three characters to an index for window, so + that window[index]..window[index+2] have this hash code. + Note that the array should really be unsigned short, so you need + to and the values with 0xffff. + + + + + prev[index & WMASK] points to the previous index that has the + same hash code as the string starting at index. This way + entries with the same hash code are in a linked list. + Note that the array should really be unsigned short, so you need + to and the values with 0xffff. + + + + + Points to the current character in the window. + + + + + lookahead is the number of characters starting at strstart in + window that are valid. + So window[strstart] until window[strstart+lookahead-1] are valid + characters. + + + + + This array contains the part of the uncompressed stream that + is of relevance. The current character is indexed by strstart. + + + + + The current compression function. + + + + + The input data for compression. + + + + + The total bytes of input read. + + + + + The offset into inputBuf, where input data starts. + + + + + The end offset of the input data. 
+ + + + + The adler checksum + + + + + Get current value of Adler checksum + + + + + Total data processed + + + + + Get/set the deflate strategy + + + + + The TarBuffer class implements the tar archive concept + of a buffered input stream. This concept goes back to the + days of blocked tape drives and special io devices. In the + C# universe, the only real function that this class + performs is to ensure that files have the correct "record" + size, or other tars will complain. +

+ You should never need to access this class directly. + TarBuffers are created by the tar input and output streams.

+
+
+ + + The size of a block in a tar archive in bytes. + + This is 512 bytes. + + + + The number of blocks in a default record. + + + The default value is 20 blocks per record. + + + + + The size in bytes of a default record. + + + The default size is 10KB. + + + + + Get the TAR Buffer's record size. + + The record size in bytes. + This is equal to the multiplied by the + + + + Get the TAR Buffer's block factor + + The block factor; the number of blocks per record. + + + + Construct a default TarBuffer + + + + + Create TarBuffer for reading with default BlockFactor + + Stream to buffer + A new suitable for input. + + + + Construct TarBuffer for reading inputStream setting BlockFactor + + Stream to buffer + Blocking factor to apply + A new suitable for input. + + + + Construct TarBuffer for writing with default BlockFactor + + output stream for buffer + A new suitable for output. + + + + Construct TarBuffer for writing Tar output to streams. + + Output stream to write to. + Blocking factor to apply + A new suitable for output. + + + + Initialization common to all constructors. + + + + + Determine if an archive block indicates End of Archive. End of + archive is indicated by a block that consists entirely of null bytes. + All remaining blocks for the record should also be null's + However some older tars only do a couple of null blocks (Old GNU tar for one) + and also partial records + + The data block to check. + Returns true if the block is an EOF block; false otherwise. + + + + Skip over a block on the input stream. + + + + + Read a block from the input stream. + + + The block of data read. + + + + + Read a record from data stream. + + + false if End-Of-File, else true. + + + + + Get the current block number, within the current record, zero based. + + + The current zero based block number. + + + The absolute block number = (record number * block factor) + block number. + + + + + Get the current record number. + + + The current zero based record number. + + + + + Write a block of data to the archive. + + + The data to write to the archive. + + + + + Write an archive record to the archive, where the record may be + inside of a larger array buffer. The buffer must be "offset plus + record size" long. + + + The buffer containing the record data to write. + + + The offset of the record data within buffer. + + + + + Write a TarBuffer record to the archive. + + + + + Flush the current record if it has any data in it. + + + + + Close the TarBuffer. If this is an output buffer, also flush the + current block before closing. + + + + + Get the record size for this buffer + + The record size in bytes. + This is equal to the multiplied by the + + + + Get the Blocking factor for the buffer + + This is the number of block in each record. + + + + Get the current block number, within the current record, zero based. + + + + + Get the current record number. + + + The current zero based record number. + + + + + This class encapsulates the Tar Entry Header used in Tar Archives. + The class also holds a number of tar constants, used mostly in headers. + + + + + The length of the name field in a header buffer. + + + + + The length of the mode field in a header buffer. + + + + + The length of the user id field in a header buffer. + + + + + The length of the group id field in a header buffer. + + + + + The length of the checksum field in a header buffer. + + + + + Offset of checksum in a header buffer. + + + + + The length of the size field in a header buffer. 
+ + + + + The length of the magic field in a header buffer. + + + + + The length of the version field in a header buffer. + + + + + The length of the modification time field in a header buffer. + + + + + The length of the user name field in a header buffer. + + + + + The length of the group name field in a header buffer. + + + + + The length of the devices field in a header buffer. + + + + + The "old way" of indicating a normal file. + + + + + Normal file type. + + + + + Link file type. + + + + + Symbolic link file type. + + + + + Character device file type. + + + + + Block device file type. + + + + + Directory file type. + + + + + FIFO (pipe) file type. + + + + + Contiguous file type. + + + + + Posix.1 2001 global extended header + + + + + Posix.1 2001 extended header + + + + + Solaris access control list file type + + + + + GNU dir dump file type + This is a dir entry that contains the names of files that were in the + dir at the time the dump was made + + + + + Solaris Extended Attribute File + + + + + Inode (metadata only) no file content + + + + + Identifies the next file on the tape as having a long link name + + + + + Identifies the next file on the tape as having a long name + + + + + Continuation of a file that began on another volume + + + + + For storing filenames that dont fit in the main header (old GNU) + + + + + GNU Sparse file + + + + + GNU Tape/volume header ignore on extraction + + + + + The magic tag representing a POSIX tar archive. (includes trailing NULL) + + + + + The magic tag representing an old GNU tar archive where version is included in magic and overwrites it + + + + + Initialise a default TarHeader instance + + + + + Get the name of this entry. + + The entry's name. + + + + Create a new that is a copy of the current instance. + + A new that is a copy of the current instance. + + + + Parse TarHeader information from a header buffer. + + + The tar entry header buffer to get information from. + + + + + 'Write' header information to buffer provided, updating the check sum. + + output buffer for header information + + + + Get a hash code for the current object. + + A hash code for the current object. + + + + Determines if this instance is equal to the specified object. + + The object to compare with. + true if the objects are equal, false otherwise. + + + + Set defaults for values used when constructing a TarHeader instance. + + Value to apply as a default for userId. + Value to apply as a default for userName. + Value to apply as a default for groupId. + Value to apply as a default for groupName. + + + + Parse an octal string from a header buffer. + + The header buffer from which to parse. + The offset into the buffer from which to parse. + The number of header bytes to parse. + The long equivalent of the octal string. + + + + Parse a name from a header buffer. + + + The header buffer from which to parse. + + + The offset into the buffer from which to parse. + + + The number of header bytes to parse. + + + The name parsed. 
+ + + + + Add name to the buffer as a collection of bytes + + The name to add + The offset of the first character + The buffer to add to + The index of the first byte to add + The number of characters/bytes to add + The next free index in the buffer + + + + Add name to the buffer as a collection of bytes + + The name to add + The offset of the first character + The buffer to add to + The index of the first byte to add + The number of characters/bytes to add + The next free index in the buffer + + + + Add an entry name to the buffer + + + The name to add + + + The buffer to add to + + + The offset into the buffer from which to start adding + + + The number of header bytes to add + + + The index of the next free byte in the buffer + + + + + Add an entry name to the buffer + + The name to add + The buffer to add to + The offset into the buffer from which to start adding + The number of header bytes to add + The index of the next free byte in the buffer + + + + Add a string to a buffer as a collection of ascii bytes. + + The string to add + The offset of the first character to add. + The buffer to add to. + The offset to start adding at. + The number of ascii characters to add. + The next free index in the buffer. + + + + Put an octal representation of a value into a buffer + + + the value to be converted to octal + + + buffer to store the octal string + + + The offset into the buffer where the value starts + + + The length of the octal string to create + + + The offset of the character next byte after the octal string + + + + + Put an octal representation of a value into a buffer + + Value to be convert to octal + The buffer to update + The offset into the buffer to store the value + The length of the octal string + Index of next byte + + + + Add the checksum integer to header buffer. + + + The header buffer to set the checksum for + The offset into the buffer for the checksum + The number of header bytes to update. + It's formatted differently from the other fields: it has 6 digits, a + null, then a space -- rather than digits, a space, then a null. + The final space is already there, from checksumming + + The modified buffer offset + + + + Compute the checksum for a tar entry header. + The checksum field must be all spaces prior to this happening + + The tar entry's header buffer. + The computed checksum. + + + + Make a checksum for a tar entry ignoring the checksum contents. + + The tar entry's header buffer. + The checksum for the buffer + + + + Get/set the name for this tar entry. + + Thrown when attempting to set the property to null. + + + + Get/set the entry's Unix style permission mode. + + + + + The entry's user id. + + + This is only directly relevant to unix systems. + The default is zero. + + + + + Get/set the entry's group id. + + + This is only directly relevant to linux/unix systems. + The default value is zero. + + + + + Get/set the entry's size. + + Thrown when setting the size to less than zero. + + + + Get/set the entry's modification time. + + + The modification time is only accurate to within a second. + + Thrown when setting the date time to less than 1/1/1970. + + + + Get the entry's checksum. This is only valid/updated after writing or reading an entry. + + + + + Get value of true if the header checksum is valid, false otherwise. + + + + + Get/set the entry's type flag. + + + + + The entry's link name. + + Thrown when attempting to set LinkName to null. + + + + Get/set the entry's magic tag. + + Thrown when attempting to set Magic to null. + + + + The entry's version. 
+ + Thrown when attempting to set Version to null. + + + + The entry's user name. + + + + + Get/set the entry's group name. + + + This is only directly relevant to unix systems. + + + + + Get/set the entry's major device number. + + + + + Get/set the entry's minor device number. + + + + + Event arguments for scanning. + + + + + Initialise a new instance of + + The file or directory name. + + + + The fie or directory name for this event. + + + + + Get set a value indicating if scanning should continue or not. + + + + + Event arguments during processing of a single file or directory. + + + + + Initialise a new instance of + + The file or directory name if known. + The number of bytes processed so far + The total number of bytes to process, 0 if not known + + + + The name for this event if known. + + + + + Get set a value indicating wether scanning should continue or not. + + + + + Get a percentage representing how much of the has been processed + + 0.0 to 100.0 percent; 0 if target is not known. + + + + The number of bytes processed so far + + + + + The number of bytes to process. + + Target may be 0 or negative if the value isnt known. + + + + Event arguments for directories. + + + + + Initialize an instance of . + + The name for this directory. + Flag value indicating if any matching files are contained in this directory. + + + + Get a value indicating if the directory contains any matching files or not. + + + + + Arguments passed when scan failures are detected. + + + + + Initialise a new instance of + + The name to apply. + The exception to use. + + + + The applicable name. + + + + + The applicable exception. + + + + + Get / set a value indicating wether scanning should continue. + + + + + Delegate invoked before starting to process a directory. + + + + + Delegate invoked before starting to process a file. + + The source of the event + The event arguments. + + + + Delegate invoked during processing of a file or directory + + The source of the event + The event arguments. + + + + Delegate invoked when a file has been completely processed. + + The source of the event + The event arguments. + + + + Delegate invoked when a directory failure is detected. + + The source of the event + The event arguments. + + + + Delegate invoked when a file failure is detected. + + The source of the event + The event arguments. + + + + FileSystemScanner provides facilities scanning of files and directories. + + + + + Initialise a new instance of + + The file filter to apply when scanning. + + + + Initialise a new instance of + + The file filter to apply. + The directory filter to apply. + + + + Initialise a new instance of + + The file filter to apply. + + + + Initialise a new instance of + + The file filter to apply. + The directory filter to apply. + + + + Delegate to invoke when a directory is processed. + + + + + Delegate to invoke when a file is processed. + + + + + Delegate to invoke when processing for a file has finished. + + + + + Delegate to invoke when a directory failure is detected. + + + + + Delegate to invoke when a file failure is detected. + + + + + Raise the DirectoryFailure event. + + The directory name. + The exception detected. + + + + Raise the FileFailure event. + + The file name. + The exception detected. + + + + Raise the ProcessFile event. + + The file name. + + + + Raise the complete file event + + The file name + + + + Raise the ProcessDirectory event. + + The directory name. + Flag indicating if the directory has matching files. + + + + Scan a directory. 
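+
+ A sketch of wiring up the scanner described above; the filter expression, base
+ directory and handler body are illustrative.
+
+ using ICSharpCode.SharpZipLib.Core;
+
+ class ScanExample
+ {
+     static void OnProcessFile(object sender, ScanEventArgs e)
+     {
+         System.Console.WriteLine(e.Name);
+     }
+
+     public static void Main()
+     {
+         // Include only .cs files (a NameFilter expression).
+         FileSystemScanner scanner = new FileSystemScanner(@"+\.cs$");
+         scanner.ProcessFile += new ProcessFileHandler(OnProcessFile);
+
+         // Scan recursively from the given base directory.
+         scanner.Scan(@"C:\project", true);
+     }
+ }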
+ + The base directory to scan. + True to recurse subdirectories, false to scan a single directory. + + + + The file filter currently in use. + + + + + The directory filter currently in use. + + + + + Flag indicating if scanning should continue running. + + + + + An input buffer customised for use by + + + The buffer supports decryption of incoming data. + + + + + Initialise a new instance of with a default buffer size + + The stream to buffer. + + + + Initialise a new instance of + + The stream to buffer. + The size to use for the buffer + A minimum buffer size of 1KB is permitted. Lower sizes are treated as 1KB. + + + + Call passing the current clear text buffer contents. + + The inflater to set input for. + + + + Fill the buffer from the underlying input stream. + + + + + Read a buffer directly from the input stream + + The buffer to fill + Returns the number of bytes read. + + + + Read a buffer directly from the input stream + + The buffer to read into + The offset to start reading data into. + The number of bytes to read. + Returns the number of bytes read. + + + + Read clear text data from the input stream. + + The buffer to add data to. + The offset to start adding data at. + The number of bytes to read. + Returns the number of bytes actually read. + + + + Read a from the input stream. + + Returns the byte read. + + + + Read an in little endian byte order. + + The short value read case to an int. + + + + Read an in little endian byte order. + + The int value read. + + + + Read a in little endian byte order. + + The long value read. + + + + Get the length of bytes bytes in the + + + + + Get the contents of the raw data buffer. + + This may contain encrypted data. + + + + Get the number of useable bytes in + + + + + Get the contents of the clear text buffer. + + + + + Get/set the number of bytes available + + + + + Get/set the to apply to any data. + + Set this value to null to have no transform applied. + + + + The TarOutputStream writes a UNIX tar archive as an OutputStream. + Methods are provided to put entries, and then write their contents + by writing to this stream using write(). + + public + + + + Construct TarOutputStream using default block factor + + stream to write to + + + + Construct TarOutputStream with user specified block factor + + stream to write to + blocking factor + + + + set the position within the current stream + + The offset relative to the to seek to + The to seek from. + The new position in the stream. + + + + Set the length of the current stream + + The new stream length. + + + + Read a byte from the stream and advance the position within the stream + by one byte or returns -1 if at the end of the stream. + + The byte value or -1 if at end of stream + + + + read bytes from the current stream and advance the position within the + stream by the number of bytes read. + + The buffer to store read bytes in. + The index into the buffer to being storing bytes at. + The desired number of bytes to read. + The total number of bytes read, or zero if at the end of the stream. + The number of bytes may be less than the count + requested if data is not avialable. + + + + All buffered data is written to destination + + + + + Ends the TAR archive without closing the underlying OutputStream. + The result is that the EOF block of nulls is written. + + + + + Ends the TAR archive and closes the underlying OutputStream. + + This means that Finish() is called followed by calling the + TarBuffer's Close(). + + + + Get the record size being used by this stream's TarBuffer. 
+ + + The TarBuffer record size. + + + + + Put an entry on the output stream. This writes the entry's + header and positions the output stream for writing + the contents of the entry. Once this method is called, the + stream is ready for calls to write() to write the entry's + contents. Once the contents are written, closeEntry() + MUST be called to ensure that all buffered data + is completely written to the output stream. + + + The TarEntry to be written to the archive. + + + + + Close an entry. This method MUST be called for all file + entries that contain data. The reason is that we must + buffer data written to the stream in order to satisfy + the buffer's block based writes. Thus, there may be + data fragments still being assembled that must be written + to the output stream before this entry is closed and the + next entry written. + + + + + Writes a byte to the current tar archive entry. + This method simply calls Write(byte[], int, int). + + + The byte to be written. + + + + + Writes bytes to the current tar archive entry. This method + is aware of the current entry and will throw an exception if + you attempt to write bytes past the length specified for the + current entry. The method is also (painfully) aware of the + record buffering required by TarBuffer, and manages buffers + that are not a multiple of recordsize in length, including + assembling records from small buffers. + + + The buffer to write to the archive. + + + The offset in the buffer from which to get bytes. + + + The number of bytes to write. + + + + + Write an EOF (end of archive) block to the tar archive. + An EOF block consists of all zeros. + + + + + bytes written for this entry so far + + + + + current 'Assembly' buffer length + + + + + Flag indicating wether this instance has been closed or not. + + + + + Size for the current entry + + + + + single block working buffer + + + + + 'Assembly' buffer used to assemble data before writing + + + + + TarBuffer used to provide correct blocking factor + + + + + the destination stream for the archive contents + + + + + true if the stream supports reading; otherwise, false. + + + + + true if the stream supports seeking; otherwise, false. + + + + + true if stream supports writing; otherwise, false. + + + + + length of stream in bytes + + + + + gets or sets the position within the current stream. + + + + + Get the record size being used by this stream's TarBuffer. + + + + + Get a value indicating wether an entry is open, requiring more data to be written. + + + + + This is the Deflater class. The deflater class compresses input + with the deflate algorithm described in RFC 1951. It has several + compression levels and three different strategies described below. + + This class is not thread safe. This is inherent in the API, due + to the split of deflate and setInput. + + author of the original java version : Jochen Hoenicke + + + + + The best and slowest compression level. This tries to find very + long and distant string repetitions. + + + + + The worst but fastest compression level. + + + + + The default compression level. + + + + + This level won't compress at all but output uncompressed blocks. + + + + + The compression method. This is the only method supported so far. + There is no need to use this constant at all. + + + + + Creates a new deflater with default compression level. + + + + + Creates a new deflater with given compression level. + + + the compression level, a value between NO_COMPRESSION + and BEST_COMPRESSION, or DEFAULT_COMPRESSION. 
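+
+ A minimal sketch of writing one file into a tar archive with the stream
+ described above; archive and file names are placeholders.
+
+ using System.IO;
+ using ICSharpCode.SharpZipLib.Tar;
+
+ class TarWriteExample
+ {
+     public static void Main()
+     {
+         using (TarOutputStream tarOut = new TarOutputStream(File.Create("archive.tar"))) {
+             // The header, including the entry size, is taken from the file itself.
+             TarEntry entry = TarEntry.CreateEntryFromFile("report.txt");
+             tarOut.PutNextEntry(entry);
+
+             using (FileStream file = File.OpenRead("report.txt")) {
+                 byte[] buffer = new byte[4096];
+                 int read;
+                 while ((read = file.Read(buffer, 0, buffer.Length)) > 0) {
+                     tarOut.Write(buffer, 0, read);
+                 }
+             }
+
+             // Must be called for entries that contain data.
+             tarOut.CloseEntry();
+         }   // Disposing the stream finishes the archive and writes the EOF block.
+     }
+ }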
+ + if lvl is out of range. + + + + Creates a new deflater with given compression level. + + + the compression level, a value between NO_COMPRESSION + and BEST_COMPRESSION. + + + true, if we should suppress the Zlib/RFC1950 header at the + beginning and the adler checksum at the end of the output. This is + useful for the GZIP/PKZIP formats. + + if lvl is out of range. + + + + Resets the deflater. The deflater acts afterwards as if it was + just created with the same compression level and strategy as it + had before. + + + + + Flushes the current input block. Further calls to deflate() will + produce enough output to inflate everything in the current input + block. This is not part of Sun's JDK so I have made it package + private. It is used by DeflaterOutputStream to implement + flush(). + + + + + Finishes the deflater with the current input block. It is an error + to give more input after this method was called. This method must + be called to force all bytes to be flushed. + + + + + Sets the data which should be compressed next. This should be only + called when needsInput indicates that more input is needed. + If you call setInput when needsInput() returns false, the + previous input that is still pending will be thrown away. + The given byte array should not be changed, before needsInput() returns + true again. + This call is equivalent to setInput(input, 0, input.length). + + + the buffer containing the input data. + + + if the buffer was finished() or ended(). + + + + + Sets the data which should be compressed next. This should be + only called when needsInput indicates that more input is needed. + The given byte array should not be changed, before needsInput() returns + true again. + + + the buffer containing the input data. + + + the start of the data. + + + the number of data bytes of input. + + + if the buffer was Finish()ed or if previous input is still pending. + + + + + Sets the compression level. There is no guarantee of the exact + position of the change, but if you call this when needsInput is + true the change of compression level will occur somewhere near + before the end of the so far given input. + + + the new compression level. + + + + + Get current compression level + + Returns the current compression level + + + + Sets the compression strategy. Strategy is one of + DEFAULT_STRATEGY, HUFFMAN_ONLY and FILTERED. For the exact + position where the strategy is changed, the same as for + SetLevel() applies. + + + The new compression strategy. + + + + + Deflates the current input block with to the given array. + + + The buffer where compressed data is stored + + + The number of compressed bytes added to the output, or 0 if either + IsNeedingInput() or IsFinished returns true or length is zero. + + + + + Deflates the current input block to the given array. + + + Buffer to store the compressed data. + + + Offset into the output array. + + + The maximum number of bytes that may be stored. + + + The number of compressed bytes added to the output, or 0 if either + needsInput() or finished() returns true or length is zero. + + + If Finish() was previously called. + + + If offset or length don't match the array length. + + + + + Sets the dictionary which should be used in the deflate process. + This call is equivalent to setDictionary(dict, 0, dict.Length). + + + the dictionary. + + + if SetInput () or Deflate () were already called or another dictionary was already set. + + + + + Sets the dictionary which should be used in the deflate process. 
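+
+ A sketch of driving the deflater directly as described above; the input text is
+ illustrative and the output buffer is sized generously so the loop terminates.
+
+ using ICSharpCode.SharpZipLib.Zip.Compression;
+
+ class DeflateExample
+ {
+     public static void Main()
+     {
+         byte[] input = System.Text.Encoding.ASCII.GetBytes("hello hello hello hello");
+
+         Deflater deflater = new Deflater(Deflater.BEST_COMPRESSION);
+         deflater.SetInput(input);
+         deflater.Finish();                  // no further input will be supplied
+
+         byte[] output = new byte[1024];     // generous for this tiny input
+         int total = 0;
+         while (!deflater.IsFinished) {
+             total += deflater.Deflate(output, total, output.Length - total);
+         }
+
+         System.Console.WriteLine("Compressed {0} bytes to {1}", input.Length, total);
+     }
+ }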
+ The dictionary is a byte array containing strings that are + likely to occur in the data which should be compressed. The + dictionary is not stored in the compressed output, only a + checksum. To decompress the output you need to supply the same + dictionary again. + + + The dictionary data + + + The index where dictionary information commences. + + + The number of bytes in the dictionary. + + + If SetInput () or Deflate() were already called or another dictionary was already set. + + + + + Compression level. + + + + + If true no Zlib/RFC1950 headers or footers are generated + + + + + The current state. + + + + + The total bytes of output written. + + + + + The pending output. + + + + + The deflater engine. + + + + + Gets the current adler checksum of the data that was processed so far. + + + + + Gets the number of input bytes processed so far. + + + + + Gets the number of output bytes so far. + + + + + Returns true if the stream was finished and no more output bytes + are available. + + + + + Returns true, if the input buffer is empty. + You should then call setInput(). + NOTE: This method can also return true when the stream + was finished. + + + + + The TarInputStream reads a UNIX tar archive as an InputStream. + methods are provided to position at each successive entry in + the archive, and the read each entry as a normal input stream + using read(). + + + + + Construct a TarInputStream with default block factor + + stream to source data from + + + + Construct a TarInputStream with user specified block factor + + stream to source data from + block factor to apply to archive + + + + Flushes the baseInputStream + + + + + Set the streams position. This operation is not supported and will throw a NotSupportedException + + The offset relative to the origin to seek to. + The to start seeking from. + The new position in the stream. + Any access + + + + Sets the length of the stream + This operation is not supported and will throw a NotSupportedException + + The new stream length. + Any access + + + + Writes a block of bytes to this stream using data from a buffer. + This operation is not supported and will throw a NotSupportedException + + The buffer containing bytes to write. + The offset in the buffer of the frist byte to write. + The number of bytes to write. + Any access + + + + Writes a byte to the current position in the file stream. + This operation is not supported and will throw a NotSupportedException + + The byte value to write. + Any access + + + + Reads a byte from the current tar archive entry. + + A byte cast to an int; -1 if the at the end of the stream. + + + + Reads bytes from the current tar archive entry. + + This method is aware of the boundaries of the current + entry in the archive and will deal with them appropriately + + + The buffer into which to place bytes read. + + + The offset at which to place bytes read. + + + The number of bytes to read. + + + The number of bytes read, or 0 at end of stream/EOF. + + + + + Closes this stream. Calls the TarBuffer's close() method. + The underlying stream is closed by the TarBuffer. + + + + + Set the entry factory for this instance. + + The factory for creating new entries + + + + Get the record size being used by this stream's TarBuffer. + + + TarBuffer record size. + + + + + Skip bytes in the input buffer. This skips bytes in the + current entry's data, not the entire archive, and will + stop at the end of the current entry's data if the number + to skip extends beyond that point. + + + The number of bytes to skip. 
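To make the reading pattern described in the TarInputStream summary above concrete, here is a minimal illustrative sketch (not part of the library documentation) assuming the 2010-era SharpZipLib API surface described here; the archive path and the flattened output names are placeholders.

    using System;
    using System.IO;
    using ICSharpCode.SharpZipLib.Tar;

    class TarListing
    {
        static void Main()
        {
            // "archive.tar" is a placeholder path.
            using (Stream source = File.OpenRead("archive.tar"))
            {
                TarInputStream tarIn = new TarInputStream(source);

                TarEntry entry;
                while ((entry = tarIn.GetNextEntry()) != null)   // null marks the end of the archive
                {
                    Console.WriteLine("{0} ({1} bytes)", entry.Name, entry.Size);

                    if (!entry.IsDirectory)
                    {
                        using (Stream target = File.Create(Path.GetFileName(entry.Name)))
                        {
                            // Copies only the current entry's data, honouring entry boundaries.
                            tarIn.CopyEntryContents(target);
                        }
                    }
                }

                tarIn.Close();   // the TarBuffer closes the underlying stream as well
            }
        }
    }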
+ + + + + Since we do not support marking just yet, we do nothing. + + + The limit to mark. + + + + + Since we do not support marking just yet, we do nothing. + + + + + Get the next entry in this tar archive. This will skip + over any remaining data in the current entry, if there + is one, and place the input stream at the header of the + next entry, and read the header and instantiate a new + TarEntry from the header bytes and return that entry. + If there are no more entries in the archive, null will + be returned to indicate that the end of the archive has + been reached. + + + The next TarEntry in the archive, or null. + + + + + Copies the contents of the current tar archive entry directly into + an output stream. + + + The OutputStream into which to write the entry's data. + + + + + Flag set when last block has been read + + + + + Size of this entry as recorded in header + + + + + Number of bytes read for this entry so far + + + + + Buffer used with calls to Read() + + + + + Working buffer + + + + + Current entry being read + + + + + Factory used to create TarEntry or descendant class instance + + + + + Stream used as the source of input data. + + + + + Gets a value indicating whether the current stream supports reading + + + + + Gets a value indicating whether the current stream supports seeking + This property always returns false. + + + + + Gets a value indicating if the stream supports writing. + This property always returns false. + + + + + The length in bytes of the stream + + + + + Gets or sets the position within the stream. + Setting the Position is not supported and throws a NotSupportedExceptionNotSupportedException + + Any attempt to set position + + + + Get the record size being used by this stream's TarBuffer. + + + + + Get the available data that can be read from the current + entry in the archive. This does not indicate how much data + is left in the entire archive, only in the current entry. + This value is determined from the entry's size header field + and the amount of data already read from the current entry. + + + The number of available bytes for the current entry. + + + + + Return a value of true if marking is supported; false otherwise. + + Currently marking is not supported, the return value is always false. + + + + This interface is provided, along with the method , to allow + the programmer to have their own subclass instantiated for the + entries return from . + + + + + Create an entry based on name alone + + + Name of the new EntryPointNotFoundException to create + + created TarEntry or descendant class + + + + Create an instance based on an actual file + + + Name of file to represent in the entry + + + Created TarEntry or descendant class + + + + + Create a tar entry based on the header information passed + + + Buffer containing header information to base entry on + + + Created TarEntry or descendant class + + + + + Standard entry factory class creating instances of the class TarEntry + + + + + Create a based on named + + The name to use for the entry + A new + + + + Create a tar entry with details obtained from file + + The name of the file to retrieve details from. + A new + + + + Create an entry based on details in header + + The buffer containing entry details. + A new + + + + Defines known values for the property. 
+ + + + + Host system = MSDOS + + + + + Host system = Amiga + + + + + Host system = Open VMS + + + + + Host system = Unix + + + + + Host system = VMCms + + + + + Host system = Atari ST + + + + + Host system = OS2 + + + + + Host system = Macintosh + + + + + Host system = ZSystem + + + + + Host system = Cpm + + + + + Host system = Windows NT + + + + + Host system = MVS + + + + + Host system = VSE + + + + + Host system = Acorn RISC + + + + + Host system = VFAT + + + + + Host system = Alternate MVS + + + + + Host system = BEOS + + + + + Host system = Tandem + + + + + Host system = OS400 + + + + + Host system = OSX + + + + + Host system = WinZIP AES + + + + + This class represents an entry in a zip archive. This can be a file + or a directory + ZipFile and ZipInputStream will give you instances of this class as + information about the members in an archive. ZipOutputStream + uses an instance of this class when creating an entry in a Zip file. +
+
Author of the original java version : Jochen Hoenicke +
+
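For orientation, a minimal sketch of how a ZipEntry is typically paired with ZipOutputStream when creating an archive. This is illustrative only: the archive path, entry name and content are placeholders, and the ZipOutputStream members used (SetLevel, PutNextEntry, CloseEntry, Finish) are assumed from the rest of this library's API.

    using System.IO;
    using System.Text;
    using ICSharpCode.SharpZipLib.Zip;

    class ZipEntryExample
    {
        static void Main()
        {
            byte[] data = Encoding.UTF8.GetBytes("hello zip");   // placeholder content

            using (ZipOutputStream zipOut = new ZipOutputStream(File.Create("example.zip")))
            {
                zipOut.SetLevel(6);   // deflate level 0 (store) .. 9 (best compression)

                // 'unix' style relative name: forward slashes, no device name.
                ZipEntry entry = new ZipEntry("docs/hello.txt");
                entry.DateTime = System.DateTime.Now;
                entry.Size = data.Length;   // a known size helps archivers without Zip64 support

                zipOut.PutNextEntry(entry);
                zipOut.Write(data, 0, data.Length);
                zipOut.CloseEntry();
                zipOut.Finish();
            }
        }
    }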
+ + + Creates a zip entry with the given name. + + + The name for this entry. Can include directory components. + The convention for names is 'unix' style paths with relative names only. + There are with no device names and path elements are separated by '/' characters. + + + The name passed is null + + + + + Creates a zip entry with the given name and version required to extract + + + The name for this entry. Can include directory components. + The convention for names is 'unix' style paths with no device names and + path elements separated by '/' characters. This is not enforced see CleanName + on how to ensure names are valid if this is desired. + + + The minimum 'feature version' required this entry + + + The name passed is null + + + + + Initializes an entry with the given name and made by information + + Name for this entry + Version and HostSystem Information + Minimum required zip feature version required to extract this entry + Compression method for this entry. + + The name passed is null + + + versionRequiredToExtract should be 0 (auto-calculate) or > 10 + + + This constructor is used by the ZipFile class when reading from the central header + It is not generally useful, use the constructor specifying the name only. + + + + + Creates a deep copy of the given zip entry. + + + The entry to copy. + + + + + Test the external attributes for this to + see if the external attributes are Dos based (including WINNT and variants) + and match the values + + The attributes to test. + Returns true if the external attributes are known to be DOS/Windows + based and have the same attributes set as the value passed. + + + + Force this entry to be recorded using Zip64 extensions. + + + + + Get a value indicating wether Zip64 extensions were forced. + + A value of true if Zip64 extensions have been forced on; false if not. + + + + Process extra data fields updating the entry based on the contents. + + True if the extra data fields should be handled + for a local header, rather than for a central header. + + + + + Test entry to see if data can be extracted. + + Returns true if data can be extracted for this entry; false otherwise. + + + + Creates a copy of this zip entry. + + An that is a copy of the current instance. + + + + Gets a string representation of this ZipEntry. + + A readable textual representation of this + + + + Test a compression method to see if this library + supports extracting data compressed with that method + + The compression method to test. + Returns true if the compression method is supported; false otherwise + + + + Cleans a name making it conform to Zip file conventions. + Devices names ('c:\') and UNC share names ('\\server\share') are removed + and forward slashes ('\') are converted to back slashes ('/'). + Names are made relative by trimming leading slashes which is compatible + with the ZIP naming convention. + + The name to clean + The 'cleaned' name. + + The Zip name transform class is more flexible. + + + + + Get a value indicating wether the entry has a CRC value available. + + + + + Get/Set flag indicating if entry is encrypted. + A simple helper routine to aid interpretation of flags + + This is an assistant that interprets the flags property. + + + + Get / set a flag indicating wether entry name and comment text are + encoded in unicode UTF8. + + This is an assistant that interprets the flags property. + + + + Value used during password checking for PKZIP 2.0 / 'classic' encryption. 
+ + + + + Get/Set general purpose bit flag for entry + + + General purpose bit flag
+
+ Bit 0: If set, indicates the file is encrypted
+ Bit 1-2: Only used for compression methods 6 (Imploding) and 8 or 9 (Deflating)
+ Imploding:
+ Bit 1: if set, indicates an 8K sliding dictionary was used; if clear, a 4K dictionary was used
+ Bit 2: if set, indicates 3 Shannon-Fano trees were used to encode the sliding dictionary, 2 otherwise
+
+ Deflating:
+ Bit 2 Bit 1
+ 0 0 Normal compression was used
+ 0 1 Maximum compression was used
+ 1 0 Fast compression was used
+ 1 1 Super fast compression was used
+
+ Bit 3: If set, the crc-32, compressed size and uncompressed size fields could not be written during zip file creation;
+ the correct values are held in a data descriptor immediately following the compressed data.
+ Bit 4: Reserved for use by PKZIP for enhanced deflating
+ Bit 5: If set, indicates the file contains compressed patch data
+ Bit 6: If set, indicates strong encryption was used
+ Bit 7-10: Unused or reserved
+ Bit 11: If set, the name and comment for this entry are in Unicode (UTF-8)
+ Bit 12-15: Unused or reserved
+
+ + +
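The bit table above can be interrogated directly through this Flags property, or via the helper properties described earlier for the common cases. A hedged sketch, with the masks written out for clarity (the helper property names IsCrypted and IsUnicodeText are assumed from the entry documentation above):

    using ICSharpCode.SharpZipLib.Zip;

    static class EntryFlagInspector
    {
        public static void Describe(ZipEntry entry)
        {
            int flags = entry.Flags;

            bool encrypted     = (flags & 0x0001) != 0;   // bit 0  - also exposed as entry.IsCrypted
            bool hasDescriptor = (flags & 0x0008) != 0;   // bit 3  - crc/sizes follow the compressed data
            bool unicodeText   = (flags & 0x0800) != 0;   // bit 11 - also exposed as entry.IsUnicodeText

            System.Console.WriteLine("{0}: encrypted={1} descriptor={2} unicode={3}",
                entry.Name, encrypted, hasDescriptor, unicodeText);
        }
    }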
+ + + Get/Set index of this entry in Zip file + + This is only valid when the entry is part of a + + + + Get/set offset for use in central header + + + + + Get/Set external file attributes as an integer. + The values of this are operating system dependant see + HostSystem for details + + + + + Get the version made by for this entry or zero if unknown. + The value / 10 indicates the major version number, and + the value mod 10 is the minor version number + + + + + Get a value indicating this entry is for a DOS/Windows system. + + + + + Gets the compatability information for the external file attribute + If the external file attributes are compatible with MS-DOS and can be read + by PKZIP for DOS version 2.04g then this value will be zero. Otherwise the value + will be non-zero and identify the host system on which the attributes are compatible. + + + + The values for this as defined in the Zip File format and by others are shown below. The values are somewhat + misleading in some cases as they are not all used as shown. You should consult the relevant documentation + to obtain up to date and correct information. The modified appnote by the infozip group is + particularly helpful as it documents a lot of peculiarities. The document is however a little dated. + + 0 - MS-DOS and OS/2 (FAT / VFAT / FAT32 file systems) + 1 - Amiga + 2 - OpenVMS + 3 - Unix + 4 - VM/CMS + 5 - Atari ST + 6 - OS/2 HPFS + 7 - Macintosh + 8 - Z-System + 9 - CP/M + 10 - Windows NTFS + 11 - MVS (OS/390 - Z/OS) + 12 - VSE + 13 - Acorn Risc + 14 - VFAT + 15 - Alternate MVS + 16 - BeOS + 17 - Tandem + 18 - OS/400 + 19 - OS/X (Darwin) + 99 - WinZip AES + remainder - unused + + + + + + Get minimum Zip feature version required to extract this entry + + + Minimum features are defined as:
+ 1.0 - Default value
+ 1.1 - File is a volume label
+ 2.0 - File is a folder/directory
+ 2.0 - File is compressed using Deflate compression
+ 2.0 - File is encrypted using traditional encryption
+ 2.1 - File is compressed using Deflate64
+ 2.5 - File is compressed using PKWARE DCL Implode
+ 2.7 - File is a patch data set
+ 4.5 - File uses Zip64 format extensions
+ 4.6 - File is compressed using BZIP2 compression
+ 5.0 - File is encrypted using DES
+ 5.0 - File is encrypted using 3DES
+ 5.0 - File is encrypted using original RC2 encryption
+ 5.0 - File is encrypted using RC4 encryption
+ 5.1 - File is encrypted using AES encryption
+ 5.1 - File is encrypted using corrected RC2 encryption
+ 5.1 - File is encrypted using corrected RC2-64 encryption
+ 6.1 - File is encrypted using non-OAEP key wrapping
+ 6.2 - Central directory encryption (not confirmed yet)
+ 6.3 - File is compressed using LZMA
+ 6.3 - File is compressed using PPMD+
+ 6.3 - File is encrypted using Blowfish
+ 6.3 - File is encrypted using Twofish
+
+ +
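As a small illustration of the feature-version table above: the Version value is ten times the 'major.minor' version needed (20 = 2.0, 45 = 4.5), and the CanDecompress property described next combines this with the compression method check. A hedged sketch only:

    using ICSharpCode.SharpZipLib.Zip;

    static class VersionCheck
    {
        // Returns false for entries whose required feature version or
        // compression method this library does not support.
        public static bool LooksExtractable(ZipEntry entry)
        {
            if (!entry.CanDecompress)
            {
                System.Console.WriteLine("{0}: needs feature version {1}.{2} - skipping",
                    entry.Name, entry.Version / 10, entry.Version % 10);
                return false;
            }
            return true;
        }
    }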
+ + + Get a value indicating wether this entry can be decompressed by the library. + + This is based on the and + wether the compression method is supported. + + + + Gets a value indicating if the entry requires Zip64 extensions + to store the full entry values. + + A value of true if a local header requires Zip64 extensions; false if not. + + + + Get a value indicating wether the central directory entry requires Zip64 extensions to be stored. + + + + + Get/Set DosTime value. + + + The MS-DOS date format can only represent dates between 1/1/1980 and 12/31/2107. + + + + + Gets/Sets the time of last modification of the entry. + + + The property is updated to match this as far as possible. + + + + + Returns the entry name. + + + The unix naming convention is followed. + Path components in the entry should always separated by forward slashes ('/'). + Dos device names like C: should also be removed. + See the class, or + + + + + Gets/Sets the size of the uncompressed data. + + + The size or -1 if unknown. + + Setting the size before adding an entry to an archive can help + avoid compatability problems with some archivers which dont understand Zip64 extensions. + + + + Gets/Sets the size of the compressed data. + + + The compressed entry size or -1 if unknown. + + + + + Gets/Sets the crc of the uncompressed data. + + + Crc is not in the range 0..0xffffffffL + + + The crc value or -1 if unknown. + + + + + Gets/Sets the compression method. Only Deflated and Stored are supported. + + + The compression method for this entry + + + + + + + Gets/Sets the extra data. + + + Extra data is longer than 64KB (0xffff) bytes. + + + Extra data or null if not set. + + + + + Gets/Sets the entry comment. + + + If comment is longer than 0xffff. + + + The comment or null if not set. + + + A comment is only available for entries when read via the class. + The class doesnt have the comment data available. + + + + + Gets a value indicating if the entry is a directory. + however. + + + A directory is determined by an entry name with a trailing slash '/'. + The external file attributes can also indicate an entry is for a directory. + Currently only dos/windows attributes are tested in this manner. + The trailing slash convention should always be followed. + + + + + Get a value of true if the entry appears to be a file; false otherwise + + + This only takes account of DOS/Windows attributes. Other operating systems are ignored. + For linux and others the result may be incorrect. + + + + + ExtraData tagged value interface. + + + + + Set the contents of this instance from the data passed. + + The data to extract contents from. + The offset to begin extracting data from. + The number of bytes to extract. + + + + Get the data representing this instance. + + Returns the data for this instance. + + + + Get the ID for this tagged data value. + + + + + A raw binary tagged value + + + + + Initialise a new instance. + + The tag ID. + + + + Set the data from the raw values provided. + + The raw data to extract values from. + The index to start extracting values from. + The number of bytes available. + + + + Get the binary data representing this instance. + + The raw binary data representing this instance. + + + + The tag ID for this instance. + + + + + Get the ID for this tagged data value. + + + + + Get /set the binary data representing this instance. + + The raw binary data representing this instance. + + + + Class representing extended unix date time values. + + + + + Set the data from the raw values provided. 
+ + The raw data to extract values from. + The index to start extracting values from. + The number of bytes available. + + + + Get the binary data representing this instance. + + The raw binary data representing this instance. + + + + Test a value to see if is valid and can be represented here. + + The value to test. + Returns true if the value is valid and can be represented; false if not. + The standard Unix time is a signed integer data type, directly encoding the Unix time number, + which is the number of seconds since 1970-01-01. + Being 32 bits means the values here cover a range of about 136 years. + The minimum representable time is 1901-12-13 20:45:52, + and the maximum representable time is 2038-01-19 03:14:07. + + + + + Get the ID + + + + + Get /set the Modification Time + + + + + + + Get / set the Access Time + + + + + + + Get / Set the Create Time + + + + + + + Get/set the values to include. + + + + + Flags indicate which values are included in this instance. + + + + + The modification time is included + + + + + The access time is included + + + + + The create time is included. + + + + + Class handling NT date time values. + + + + + Set the data from the raw values provided. + + The raw data to extract values from. + The index to start extracting values from. + The number of bytes available. + + + + Get the binary data representing this instance. + + The raw binary data representing this instance. + + + + Test a valuie to see if is valid and can be represented here. + + The value to test. + Returns true if the value is valid and can be represented; false if not. + + NTFS filetimes are 64-bit unsigned integers, stored in Intel + (least significant byte first) byte order. They determine the + number of 1.0E-07 seconds (1/10th microseconds!) past WinNT "epoch", + which is "01-Jan-1601 00:00:00 UTC". 28 May 60056 is the upper limit + + + + + Get the ID for this tagged data value. + + + + + Get/set the last modification time. + + + + + Get /set the create time + + + + + Get /set the last access time. + + + + + A factory that creates tagged data instances. + + + + + Get data for a specific tag value. + + The tag ID to find. + The data to search. + The offset to begin extracting data from. + The number of bytes to extract. + The located value found, or null if not found. + + + + + A class to handle the extra data field for Zip entries + + + Extra data contains 0 or more values each prefixed by a header tag and length. + They contain zero or more bytes of actual data. + The data is held internally using a copy on write strategy. This is more efficient but + means that for extra data created by passing in data can have the values modified by the caller + in some circumstances. + + + + + Initialise a default instance. + + + + + Initialise with known extra data. + + The extra data. + + + + Get the raw extra data value + + Returns the raw byte[] extra data this instance represents. + + + + Clear the stored data. + + + + + Get a read-only for the associated tag. + + The tag to locate data for. + Returns a containing tag data or null if no tag was found. + + + + Get the tagged data for a tag. + + The tag to search for. + Returns a tagged value or null if none found. + + + + Find an extra data value + + The identifier for the value to find. + Returns true if the value was found; false otherwise. + + + + Add a new entry to extra data. + + The value to add. + + + + Add a new entry to extra data + + The ID for this entry. + The data to add. + If the ID already exists its contents are replaced. 
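To show how this copy-on-write extra data helper is typically driven, here is a hedged sketch. The 0x5455 tag is the conventional 'extended timestamp' field mentioned above; 0x4444 is a made-up private tag used purely for illustration, and the member signatures are assumed from the surrounding documentation.

    using ICSharpCode.SharpZipLib.Zip;

    static class ExtraDataExample
    {
        public static void Rewrite(ZipEntry entry)
        {
            ZipExtraData extra = new ZipExtraData(entry.ExtraData);

            if (extra.Find(0x5455))   // extended timestamp field present?
            {
                System.Console.WriteLine("extended timestamp value is {0} bytes",
                    extra.ValueLength);
            }

            // Add a private field; an existing field with the same ID is overwritten.
            extra.AddEntry(0x4444, new byte[] { 1, 2, 3, 4 });

            entry.ExtraData = extra.GetEntryData();
        }
    }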
+ + + + Start adding a new entry. + + Add data using , , , or . + The new entry is completed and actually added by calling + + + + + Add entry data added since using the ID passed. + + The identifier to use for this entry. + + + + Add a byte of data to the pending new entry. + + The byte to add. + + + + + Add data to a pending new entry. + + The data to add. + + + + + Add a short value in little endian order to the pending new entry. + + The data to add. + + + + + Add an integer value in little endian order to the pending new entry. + + The data to add. + + + + + Add a long value in little endian order to the pending new entry. + + The data to add. + + + + + Delete an extra data field. + + The identifier of the field to delete. + Returns true if the field was found and deleted. + + + + Read a long in little endian form from the last found data value + + Returns the long value read. + + + + Read an integer in little endian form from the last found data value. + + Returns the integer read. + + + + Read a short value in little endian form from the last found data value. + + Returns the short value read. + + + + Read a byte from an extra data + + The byte value read or -1 if the end of data has been reached. + + + + Skip data during reading. + + The number of bytes to skip. + + + + Internal form of that reads data at any location. + + Returns the short value read. + + + + Dispose of this instance. + + + + + Gets the current extra data length. + + + + + Get the length of the last value found by + + This is only value if has previsouly returned true. + + + + Get the index for the current read value. + + This is only valid if has previously returned true. + Initially it will be the index of the first byte of actual data. The value is updated after calls to + , and . + + + + Get the number of bytes remaining to be read for the current value; + + + + + Holds data pertinent to a data descriptor. + + + + + Get /set the compressed size of data. + + + + + Get / set the uncompressed size of data + + + + + Get /set the crc value. + + + + + This class assists with writing/reading from Zip files. + + + + + Initialise an instance of this class. + + The name of the file to open. + + + + Initialise a new instance of . + + The stream to use. + + + + Close the stream. + + + The underlying stream is closed only if is true. + + + + + Locates a block with the desired . + + The signature to find. + Location, marking the end of block. + Minimum size of the block. + The maximum variable data. + Eeturns the offset of the first byte after the signature; -1 if not found + + + + Write Zip64 end of central directory records (File header and locator). + + The number of entries in the central directory. + The size of entries in the central directory. + The offset of the dentral directory. + + + + Write the required records to end the central directory. + + The number of entries in the directory. + The size of the entries in the directory. + The start of the central directory. + The archive comment. (This can be null). + + + + Read an unsigned short in little endian byte order. + + Returns the value read. + + An i/o error occurs. + + + The file ends prematurely + + + + + Read an int in little endian byte order. + + Returns the value read. + + An i/o error occurs. + + + The file ends prematurely + + + + + Read a long in little endian byte order. + + The value read. + + + + Write an unsigned short in little endian byte order. + + The value to write. + + + + Write a ushort in little endian byte order. + + The value to write. 
+ + + + Write an int in little endian byte order. + + The value to write. + + + + Write a uint in little endian byte order. + + The value to write. + + + + Write a long in little endian byte order. + + The value to write. + + + + Write a ulong in little endian byte order. + + The value to write. + + + + Write a data descriptor. + + The entry to write a descriptor for. + Returns the number of descriptor bytes written. + + + + Read data descriptor at the end of compressed data. + + if set to true [zip64]. + The data to fill in. + Returns the number of bytes read in the descriptor. + + + + Get / set a value indicating wether the the underlying stream is owned or not. + + If the stream is owned it is closed when this instance is closed. + + + + Inflater is used to decompress data that has been compressed according + to the "deflate" standard described in rfc1951. + + By default Zlib (rfc1950) headers and footers are expected in the input. + You can use constructor public Inflater(bool noHeader) passing true + if there is no Zlib header information + + The usage is as following. First you have to set some input with + SetInput(), then Inflate() it. If inflate doesn't + inflate any bytes there may be three reasons: +
    +
+   • IsNeedingInput() returns true because the input buffer is empty.
+     You have to provide more input with SetInput().
+     NOTE: IsNeedingInput() also returns true when the stream is finished.
+   • IsNeedingDictionary() returns true; you have to provide a preset
+     dictionary with SetDictionary().
+   • IsFinished returns true; the inflater has finished.
+ Once the first output byte is produced, a dictionary will not be + needed at a later stage. + + author of the original java version : John Leuner, Jochen Hoenicke +
+
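Putting the recipe above into code, a hedged round-trip sketch using the Deflater documented earlier and this Inflater, both with the default zlib/RFC1950 framing. Buffer sizes and the payload are placeholders chosen to keep the loops simple.

    using System;
    using System.Text;
    using ICSharpCode.SharpZipLib.Zip.Compression;

    class RoundTrip
    {
        static void Main()
        {
            byte[] original = Encoding.UTF8.GetBytes("an example payload, an example payload");

            // Compress: SetInput, Finish, then Deflate until IsFinished.
            Deflater deflater = new Deflater(Deflater.BEST_COMPRESSION);
            deflater.SetInput(original);
            deflater.Finish();

            byte[] compressed = new byte[original.Length + 64];   // ample for this small input
            int compressedLength = 0;
            while (!deflater.IsFinished)
            {
                int produced = deflater.Deflate(compressed, compressedLength,
                                                compressed.Length - compressedLength);
                if (produced <= 0)
                {
                    break;   // output buffer full - would need to grow it
                }
                compressedLength += produced;
            }

            // Decompress: SetInput, then Inflate until IsFinished.
            Inflater inflater = new Inflater();
            inflater.SetInput(compressed, 0, compressedLength);

            byte[] restored = new byte[original.Length];
            int restoredLength = 0;
            while (!inflater.IsFinished)
            {
                int produced = inflater.Inflate(restored, restoredLength,
                                                restored.Length - restoredLength);
                if (produced <= 0)
                {
                    break;   // would need more input or a larger output buffer
                }
                restoredLength += produced;
            }

            Console.WriteLine(Encoding.UTF8.GetString(restored, 0, restoredLength));
        }
    }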
+ + + These are the possible states for an inflater + + + + + Copy lengths for literal codes 257..285 + + + + + Extra bits for literal codes 257..285 + + + + + Copy offsets for distance codes 0..29 + + + + + Extra bits for distance codes + + + + + This variable contains the current state. + + + + + The adler checksum of the dictionary or of the decompressed + stream, as it is written in the header resp. footer of the + compressed stream. + Only valid if mode is DECODE_DICT or DECODE_CHKSUM. + + + + + The number of bits needed to complete the current state. This + is valid, if mode is DECODE_DICT, DECODE_CHKSUM, + DECODE_HUFFMAN_LENBITS or DECODE_HUFFMAN_DISTBITS. + + + + + True, if the last block flag was set in the last block of the + inflated stream. This means that the stream ends after the + current block. + + + + + The total number of inflated bytes. + + + + + The total number of bytes set with setInput(). This is not the + value returned by the TotalIn property, since this also includes the + unprocessed input. + + + + + This variable stores the noHeader flag that was given to the constructor. + True means, that the inflated stream doesn't contain a Zlib header or + footer. + + + + + Creates a new inflater or RFC1951 decompressor + RFC1950/Zlib headers and footers will be expected in the input data + + + + + Creates a new inflater. + + + True if no RFC1950/Zlib header and footer fields are expected in the input data + + This is used for GZIPed/Zipped input. + + For compatibility with + Sun JDK you should provide one byte of input more than needed in + this case. + + + + + Resets the inflater so that a new stream can be decompressed. All + pending input and output will be discarded. + + + + + Decodes a zlib/RFC1950 header. + + + False if more input is needed. + + + The header is invalid. + + + + + Decodes the dictionary checksum after the deflate header. + + + False if more input is needed. + + + + + Decodes the huffman encoded symbols in the input stream. + + + false if more input is needed, true if output window is + full or the current block ends. + + + if deflated stream is invalid. + + + + + Decodes the adler checksum after the deflate stream. + + + false if more input is needed. + + + If checksum doesn't match. + + + + + Decodes the deflated stream. + + + false if more input is needed, or if finished. + + + if deflated stream is invalid. + + + + + Sets the preset dictionary. This should only be called, if + needsDictionary() returns true and it should set the same + dictionary, that was used for deflating. The getAdler() + function returns the checksum of the dictionary needed. + + + The dictionary. + + + + + Sets the preset dictionary. This should only be called, if + needsDictionary() returns true and it should set the same + dictionary, that was used for deflating. The getAdler() + function returns the checksum of the dictionary needed. + + + The dictionary. + + + The index into buffer where the dictionary starts. + + + The number of bytes in the dictionary. + + + No dictionary is needed. + + + The adler checksum for the buffer is invalid + + + + + Sets the input. This should only be called, if needsInput() + returns true. + + + the input. + + + + + Sets the input. This should only be called, if needsInput() + returns true. + + + The source of input data + + + The index into buffer where the input starts. + + + The number of bytes of input to use. + + + No input is needed. + + + The index and/or count are wrong. + + + + + Inflates the compressed stream to the output buffer. 
If this + returns 0, you should check, whether IsNeedingDictionary(), + IsNeedingInput() or IsFinished() returns true, to determine why no + further output is produced. + + + the output buffer. + + + The number of bytes written to the buffer, 0 if no further + output can be produced. + + + if buffer has length 0. + + + if deflated stream is invalid. + + + + + Inflates the compressed stream to the output buffer. If this + returns 0, you should check, whether needsDictionary(), + needsInput() or finished() returns true, to determine why no + further output is produced. + + + the output buffer. + + + the offset in buffer where storing starts. + + + the maximum number of bytes to output. + + + the number of bytes written to the buffer, 0 if no further output can be produced. + + + if count is less than 0. + + + if the index and / or count are wrong. + + + if deflated stream is invalid. + + + + + Returns true, if the input buffer is empty. + You should then call setInput(). + NOTE: This method also returns true when the stream is finished. + + + + + Returns true, if a preset dictionary is needed to inflate the input. + + + + + Returns true, if the inflater has finished. This means, that no + input is needed and no output can be produced. + + + + + Gets the adler checksum. This is either the checksum of all + uncompressed bytes returned by inflate(), or if needsDictionary() + returns true (and thus no output was yet produced) this is the + adler checksum of the expected dictionary. + + + the adler checksum. + + + + + Gets the total number of output bytes returned by Inflate(). + + + the total number of output bytes. + + + + + Gets the total number of processed compressed input bytes. + + + The total number of bytes of processed input bytes. + + + + + Gets the number of unprocessed input bytes. Useful, if the end of the + stream is reached and you want to further process the bytes after + the deflate stream. + + + The number of bytes of the input which have not been processed. + + + + + An input stream that decompresses files in the BZip2 format + + + + + Construct instance for reading from stream + + Data source + + + + Flushes the stream. + + + + + Set the streams position. This operation is not supported and will throw a NotSupportedException + + A byte offset relative to the parameter. + A value of type indicating the reference point used to obtain the new position. + The new position of the stream. + Any access + + + + Sets the length of this stream to the given value. + This operation is not supported and will throw a NotSupportedExceptionortedException + + The new length for the stream. + Any access + + + + Writes a block of bytes to this stream using data from a buffer. + This operation is not supported and will throw a NotSupportedException + + The buffer to source data from. + The offset to start obtaining data from. + The number of bytes of data to write. + Any access + + + + Writes a byte to the current position in the file stream. + This operation is not supported and will throw a NotSupportedException + + The value to write. + Any access + + + + Read a sequence of bytes and advances the read position by one byte. + + Array of bytes to store values in + Offset in array to begin storing data + The maximum number of bytes to read + The total number of bytes read into the buffer. This might be less + than the number of bytes requested if that number of bytes are not + currently available or zero if the end of the stream is reached. 
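For completeness, a hedged sketch of driving this stream directly: the file names are placeholders, and the single-argument BZip2InputStream constructor is assumed from the constructor documented above.

    using System.IO;
    using ICSharpCode.SharpZipLib.BZip2;

    class BZip2ReadExample
    {
        static void Main()
        {
            using (Stream compressed = File.OpenRead("data.txt.bz2"))
            using (BZip2InputStream bzIn = new BZip2InputStream(compressed))
            using (Stream target = File.Create("data.txt"))
            {
                byte[] buffer = new byte[4096];
                int read;
                while ((read = bzIn.Read(buffer, 0, buffer.Length)) > 0)
                {
                    target.Write(buffer, 0, read);
                }
            }
        }
    }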
+ + + + + Closes the stream, releasing any associated resources. + + + + + Read a byte from stream advancing position + + byte read or -1 on end of stream + + + + Get/set flag indicating ownership of underlying stream. + When the flag is true will close the underlying stream also. + + + + + Gets a value indicating if the stream supports reading + + + + + Gets a value indicating whether the current stream supports seeking. + + + + + Gets a value indicating whether the current stream supports writing. + This property always returns false + + + + + Gets the length in bytes of the stream. + + + + + Gets or sets the streams position. + Setting the position is not supported and will throw a NotSupportException + + Any attempt to set the position + + + + Computes Adler32 checksum for a stream of data. An Adler32 + checksum is not as reliable as a CRC32 checksum, but a lot faster to + compute. + + The specification for Adler32 may be found in RFC 1950. + ZLIB Compressed Data Format Specification version 3.3) + + + From that document: + + "ADLER32 (Adler-32 checksum) + This contains a checksum value of the uncompressed data + (excluding any dictionary data) computed according to Adler-32 + algorithm. This algorithm is a 32-bit extension and improvement + of the Fletcher algorithm, used in the ITU-T X.224 / ISO 8073 + standard. + + Adler-32 is composed of two sums accumulated per byte: s1 is + the sum of all bytes, s2 is the sum of all s1 values. Both sums + are done modulo 65521. s1 is initialized to 1, s2 to zero. The + Adler-32 checksum is stored as s2*65536 + s1 in most- + significant-byte first (network) order." + + "8.2. The Adler-32 algorithm + + The Adler-32 algorithm is much faster than the CRC32 algorithm yet + still provides an extremely low probability of undetected errors. + + The modulo on unsigned long accumulators can be delayed for 5552 + bytes, so the modulo operation time is negligible. If the bytes + are a, b, c, the second sum is 3a + 2b + c + 3, and so is position + and order sensitive, unlike the first sum, which is just a + checksum. That 65521 is prime is important to avoid a possible + large class of two-byte errors that leave the check unchanged. + (The Fletcher checksum uses 255, which is not prime and which also + makes the Fletcher check insensitive to single byte changes 0 - + 255.) + + The sum s1 is initialized to 1 instead of zero to make the length + of the sequence part of s2, so that the length does not have to be + checked separately. (Any sequence of zeroes has a Fletcher + checksum of zero.)" + + + + + + + largest prime smaller than 65536 + + + + + Creates a new instance of the Adler32 class. + The checksum starts off with a value of 1. + + + + + Resets the Adler32 checksum to the initial value. + + + + + Updates the checksum with a byte value. + + + The data value to add. The high byte of the int is ignored. + + + + + Updates the checksum with an array of bytes. + + + The source of the data to update with. + + + + + Updates the checksum with the bytes taken from the array. + + + an array of bytes + + + the start of the data used for this update + + + the number of bytes to use for this update + + + + + Returns the Adler32 data checksum computed so far. + + +
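A short hedged sketch of the checksum class described above. The namespace (ICSharpCode.SharpZipLib.Checksums) is assumed from the library layout of this era; the "Wikipedia" input is the standard published Adler-32 test vector.

    using System;
    using System.Text;
    using ICSharpCode.SharpZipLib.Checksums;

    class Adler32Example
    {
        static void Main()
        {
            Adler32 adler = new Adler32();   // checksum starts at 1, per RFC 1950

            byte[] data = Encoding.ASCII.GetBytes("Wikipedia");
            adler.Update(data);

            // Prints 11E60398 - the published Adler-32 value for "Wikipedia".
            Console.WriteLine("adler32 = {0:X8}", adler.Value);
        }
    }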
+
diff --git a/bin/hdfsdll.dll.config b/bin/hdfsdll.dll.config
new file mode 100644
index 0000000..f1805fd
--- /dev/null
+++ b/bin/hdfsdll.dll.config
@@ -0,0 +1,3 @@
+
+
+
diff --git a/bin/hdfsdll.so b/bin/hdfsdll.so
new file mode 100755
index 0000000..cf79ffc
Binary files /dev/null and b/bin/hdfsdll.so differ
diff --git a/bin/xmrhelpers.so b/bin/xmrhelpers.so
new file mode 100755
index 0000000..6074da2
Binary files /dev/null and b/bin/xmrhelpers.so differ
--
cgit v1.1