diff --git a/ChangeLog.txt b/ChangeLog.txt
index e366c6db..984b91d1 100644
--- a/ChangeLog.txt
+++ b/ChangeLog.txt
@@ -44,6 +44,18 @@
 for such images are ignored by the decompressor.  However, the TurboJPEG API
 was being too rigid and was expecting the sampling factors to be equal to 1
 before it treated the image as a grayscale JPEG.
 
+[9] Referring to [5] above, another extremely rare circumstance was discovered
+under which the Huffman encoder's local buffer can be overrun when a buffered
+destination manager is being used and an extremely-high-frequency block
+(basically junk image data) is being encoded.  Even though the Huffman local
+buffer was increased from 128 bytes to 136 bytes to address the previous
+issue, the new issue caused even the larger buffer to be overrun.  Further
+analysis reveals that, in the absolute worst case (such as setting alternating
+AC coefficients to 32767 and -32768 in the JPEG scanning order), the Huffman
+encoder can produce encoded blocks that approach double the size of the
+unencoded blocks.  Thus, the Huffman local buffer was increased to 256 bytes,
+which should prevent any such issue from recurring in the future.
+
 1.3.1
 =====
diff --git a/jchuff.c b/jchuff.c
index fe5b7f7f..1880cc27 100644
--- a/jchuff.c
+++ b/jchuff.c
@@ -391,7 +391,16 @@ dump_buffer (working_state * state)
 
 #endif
 
-#define BUFSIZE (DCTSIZE2 * 2) + 8
+/* Although it is exceedingly rare, it is possible for a Huffman-encoded
+ * coefficient block to be larger than the 128-byte unencoded block.  For
+ * each of the 64 coefficients, PUT_BITS is invoked twice, and each
+ * invocation can theoretically store 16 bits (for a maximum of 2048 bits,
+ * or 256 bytes, per encoded block).  If, for instance, one artificially
+ * sets the AC coefficients to alternating values of 32767 and -32768
+ * (using the JPEG scanning order: 1, 8, 16, etc.), then this will produce
+ * an encoded block larger than 200 bytes.
+ */
+#define BUFSIZE (DCTSIZE2 * 4)
 
 #define LOAD_BUFFER() { \
   if (state->free_in_buffer < BUFSIZE) { \