When using the in-memory destination manager, it is necessary to
explicitly call the destination manager's term_destination() method if
an error occurs. That method is called by jpeg_finish_compress() but
not by jpeg_abort_compress().
This fixes a potential double free() that could occur if tjCompress*()
or tjTransform() returned an error and the calling application tried to
clean up a JPEG buffer that was dynamically re-allocated by one of those
functions.
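A minimal sketch of the pattern described above, assuming the standard
libjpeg API names (jpeg_mem_dest(), jpeg_abort_compress()) and the usual
setjmp()-based error handler; the error-manager setup and compression
loop are elided:

  #include <setjmp.h>
  #include <jpeglib.h>

  static jmp_buf env;  /* longjmp() target installed in the error manager */

  void compress_or_abort(struct jpeg_compress_struct *cinfo,
                         unsigned char **outbuf, unsigned long *outsize)
  {
    jpeg_mem_dest(cinfo, outbuf, outsize);
    if (setjmp(env)) {
      /* Error path: jpeg_abort_compress() does not call
         term_destination(), so invoke it explicitly first. */
      (*cinfo->dest->term_destination) (cinfo);
      jpeg_abort_compress(cinfo);
      return;
    }
    /* ... jpeg_start_compress(), jpeg_write_scanlines(), ... */
    jpeg_finish_compress(cinfo);  /* normal path calls term_destination() */
  }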
- UBSan complained that entropy->restarts_to_go was underflowing an
unsigned integer when it was decremented while
cinfo->restart_interval == 0. That was, of course, completely
innocuous behavior, since the result of the underflowing computation
was never used. (A guarded decrement, sketched after this list, is one
way to silence the warning.)
- d3a3a73f64 and
7bc9fca430 silenced a UBSan signed
integer overflow error, but unfortunately other malformed JPEG images
have been discovered that cause unsigned integer overflow in the same
computation. Since, to the best of our understanding, this behavior
is innocuous, this commit reverts the commits listed above, suppresses
the UBSan errors, and adds code comments to document the issue.
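A minimal sketch of a guarded decrement for the first item above; the
entropy state and the re-arming logic are illustrative, not the actual
decoder code:

  #include <jpeglib.h>

  struct entropy_state { unsigned int restarts_to_go; };

  static void per_mcu_restart(j_decompress_ptr cinfo,
                              struct entropy_state *entropy)
  {
    if (cinfo->restart_interval) {
      if (entropy->restarts_to_go == 0)
        entropy->restarts_to_go = cinfo->restart_interval;  /* re-arm */
      /* Decrementing only when restarts are in use keeps the unsigned
         counter from wrapping below zero. */
      entropy->restarts_to_go--;
    }
  }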
Referring to https://bugzilla.mozilla.org/show_bug.cgi?id=1050342,
there are certain very rare circumstances under which a malformed JPEG
image can cause different Huffman decoder output to be produced,
depending on the size of the source manager's I/O buffer. (More
specifically, the fast Huffman decoder didn't handle invalid codes in
the same manner as the slow decoder, and since the fast decoder requires
all data to be memory-resident, the buffering strategy determines
whether or not the fast decoder can be used on a particular MCU block.)
After extensive experimentation, the Mozilla and Chrome developers and I
determined that this truly was an innocuous issue. The patch that both
browsers adopted as a workaround caused a performance regression with
32-bit code, which is why it was not accepted into libjpeg-turbo. This
commit fixes the problem in a less disruptive way with no performance
regression.
After the completion of the start_input() method, it's too late to check
the image size, because the image readers may have already tried to
allocate memory for the image. If the width and height are excessively
large, then attempting to allocate memory for the image could slow
performance or lead to out-of-memory errors prior to the fuzz target
checking the image size.
NOTE: Specifically, the aforementioned OOM errors and slow units were
observed with the compression fuzz targets when using MSan.
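A hedged sketch of the reordered check, assuming the 1-Mpixel cap
mentioned below and the FUZZING_BUILD_MODE_UNSAFE_FOR_PRODUCTION macro
that OSS-Fuzz defines in fuzzer builds; the helper name and error code
are illustrative:

  #include <jpeglib.h>
  #include <jerror.h>

  /* Called as soon as the reader has parsed the header fields, before
     any image memory is allocated: */
  static void check_dimensions(j_compress_ptr cinfo,
                               unsigned int width, unsigned int height)
  {
  #ifdef FUZZING_BUILD_MODE_UNSAFE_FOR_PRODUCTION
    if ((unsigned long long)width * height > 1048576ULL)
      ERREXIT(cinfo, JERR_WIDTH_OVERFLOW);
  #endif
  }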
Certain rare malformed input images can cause the Huffman encoder to
generate a value for nbits that corresponds to an uninitialized member
of the DC code table. The ramifications of this are minimal and would
basically amount to a different bogus JPEG image being generated from a
particular bogus input image.
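The message above describes the bug rather than the fix, but one
plausible mitigation, sketched here as an assumption, is to zero the
derived code table (ehufco[]/ehufsi[] in jchuff's c_derived_tbl) before
filling in the defined entries:

  #include <string.h>

  /* A zeroed table means a bogus nbits index reads a zero code/length
     (bogus but deterministic output) instead of uninitialized memory. */
  memset(dtbl->ehufco, 0, sizeof(dtbl->ehufco));
  memset(dtbl->ehufsi, 0, sizeof(dtbl->ehufsi));
  /* ... then fill in the entries defined by the Huffman table ... */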
- Referring to 3311fc0001, we need to use
unsigned intermediate math in order to make UBSan happy, even though
(JDIMENSION)(A * B) is effectively the same as
(JDIMENSION)A * (JDIMENSION)B, regardless of intermediate overflow.
- Because of the previous commit, it is now possible for bfOffBits to be
INT_MIN, which would cause the initial computation of bPad to
underflow a signed integer. Thus, we need to check for that
possibility as soon as we know the values of bfOffBits and headerSize.
The worst case from this regression is that bPad could wrap around to
a large positive value, which would cause a "Premature end of input
file" error in the subsequent read_byte() loop. Thus, this issue was
effectively innocuous as well, since it resulted in catching the same
error later and in a different way. Also, the issue was very
well-contained, since it was both introduced and fixed as part of the
ongoing OSS-Fuzz integration project.
- rdbmp.c: Because of 8fb37b8171,
bfOffBits, biClrUsed, and headerSize were made into unsigned ints.
Thus, if bPad would eventually be negative due to a malformed header,
UBSan complained about unsigned math being used in the intermediate
computations. It was unnecessary to make those variables unsigned,
since they are only meant to hold small values, so this commit makes
them signed again. The UBSan error was innocuous, because it is
effectively (if not officially) the case that
(int)((unsigned int)a - (unsigned int)b) == (int)a - (int)b.
- rdbmp.c: If (biWidth * source->bits_per_pixel / 8) would overflow an
unsigned int, then UBSan complained at the point at which row_width
was set in start_input_bmp(), even though the overflow would have been
detected later in the function. This commit adds overflow checks
prior to setting row_width (see the first sketch after this list).
- rdppm.c: read_pbm_integer() now bounds-checks the intermediate
value computations in order to catch integer overflow caused by a
malformed text PPM. It's possible, though extremely unlikely, that
the intermediate value computations could have wrapped around to a
value smaller than maxval, but the worst case is that this would have
generated a bogus pixel in the uncompressed image rather than throwing
an error. (The overflow check is sketched in the second example after
this list.)
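A minimal sketch of the pre-multiplication check described in the
second rdbmp.c item above:

  #include <limits.h>

  /* (biWidth * bits_per_pixel) overflows an unsigned int iff
     biWidth > UINT_MAX / bits_per_pixel */
  static int row_width_would_overflow(unsigned int biWidth,
                                      unsigned int bits_per_pixel)
  {
    return bits_per_pixel != 0 && biWidth > UINT_MAX / bits_per_pixel;
  }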
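And a sketch of the overflow-safe digit accumulation described in the
rdppm.c item; the names and the rejection mechanism are illustrative:

  #include <limits.h>

  /* val * 10 + digit overflows iff val > (UINT_MAX - digit) / 10 */
  static unsigned int accumulate_digit(unsigned int val, int ch,
                                       int *overflow)
  {
    unsigned int digit = (unsigned int)(ch - '0');
    if (val > (UINT_MAX - digit) / 10)
      *overflow = 1;  /* caller rejects the malformed text PPM */
    return val * 10 + digit;
  }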
A fuzzing test case that was effectively a 1-pixel PGM file with a
maximum value of 1 and an actual value of 8 caused an uninitialized
member of the rescale[] array to be accessed in get_gray_rgb_row() or
get_gray_cmyk_row(). Since, for performance reasons, those functions do
not perform bounds checking on the PPM values, we need to ensure that
unused members of the rescale[] array are initialized.
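A hedged sketch of that initialization, borrowing the rescale formula
from rdppm.c; the table size and the choice to clamp unused entries are
illustrative:

  #include <jpeglib.h>

  /* Fill the defined entries, then give every unused entry a defined
     value so an out-of-range sample (e.g. 8 when maxval == 1) reads
     valid, if bogus, data. */
  static void init_rescale(JSAMPLE *rescale, unsigned int rescale_size,
                           unsigned int maxval)
  {
    unsigned int val;
    for (val = 0; val <= maxval; val++)
      rescale[val] = (JSAMPLE)((val * MAXJSAMPLE + maxval / 2) / maxval);
    for (; val < rescale_size; val++)
      rescale[val] = rescale[maxval];
  }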
A fuzzing test case with an image width of 838860946 triggered a UBSan
error:
rdbmp.c:633:34: runtime error: signed integer overflow:
838860946 * 3 cannot be represented in type 'int'
Because the result is cast to an unsigned int (JDIMENSION), this error
is irrelevant:
(unsigned int)((int)838860946 * (int)3) ==
(unsigned int)838860946 * (unsigned int)3
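A minimal illustration, assuming JDIMENSION is unsigned int as defined
in jmorecfg.h:

  int biWidth = 838860946;
  /* (JDIMENSION)(biWidth * 3) would overflow a signed int (undefined
     behavior, and the UBSan error above), so cast the operands first;
     the unsigned wrap is well-defined and yields the same bit pattern: */
  JDIMENSION row_width = (JDIMENSION)biWidth * (JDIMENSION)3;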
This limits the tjLoadImage() behavioral changes to the scope of the
compress_fuzzer target. Otherwise, TJBench in fuzzer builds would
refuse to load images larger than 1 Mpixel.
- The PPM reader now throws an error rather than segfaulting (due to a
buffer overrun) if an application attempts to load a 16-bit PPM file
into a grayscale uncompressed image buffer. No known applications
allowed that (not even the test applications in libjpeg-turbo),
because that mode of operation was never expected to work and did not
work under any circumstances. (In fact, it was necessary to modify
TJBench in order to reproduce the issue outside of a fuzzing
environment.) This was purely a matter of making the library bow out
gracefully rather than crash if an application tries to do something
really stupid.
- The PPM reader now throws an error rather than generating incorrect
pixels if an application attempts to load a 16-bit PGM file into an
RGB uncompressed image buffer.
- The PPM reader now correctly loads 16-bit PPM files into extended
RGB uncompressed image buffers. (Previously it generated incorrect
pixels unless the input colorspace was JCS_RGB or JCS_EXT_RGB.)
The only way that users could have potentially encountered these issues
was through the tjLoadImage() function. cjpeg and TJBench were
unaffected.
The non-default options were not being tested because of a pixel format
comparison buglet. This commit also changes the code in both
decompression fuzz targets such that non-default options are tested
based on the pixel format index rather than the pixel format value,
which is a bit more idiot-proof.
Otherwise, the targets will require libstdc++, the i386 version of which
is not available in the OSS-Fuzz runtime environment. The OSS-Fuzz
build environment passes -stdlib=libc++ in the CXXFLAGS environment
variable in order to mitigate this issue, since the runtime environment
has the i386 version of libc++, but using that compiler flag requires
using the C++ compiler.
This commit integrates OSS-Fuzz targets directly into the libjpeg-turbo
source tree, thus obsoleting Google's OSS-Fuzz target for libjpeg-turbo
(previously available here: https://github.com/google/oss-fuzz) and
improving code coverage relative to it.
I hope to eventually create fuzz targets for the BMP, GIF, and PPM
readers as well, which would allow for fuzz-testing compression, but
since those readers all require an input file, it is unclear how to
build an efficient fuzzer around them. It doesn't make sense to
fuzz-test compression in isolation, because compression can't accept
arbitrary input data.
Define compiler-independent byte-swap macros and use them instead of
executing 'rev' via inline assembly code with GCC-compatible compilers
or a slow shift-store sequence with Visual C++.
* This produces identical assembly code with:
- 64-bit GCC 8.4.0 (Linux)
- 64-bit GCC 9.3.0 (Linux)
- 64-bit Clang 10.0.0 (Linux)
- 64-bit Clang 10.0.0 (MinGW)
- 64-bit Clang 12.0.0 (Xcode 12.2, macOS)
- 64-bit Clang 12.0.0 (Xcode 12.2, iOS)
* This produces different assembly code with:
- 64-bit GCC 4.9.1 (Linux)
- 32-bit GCC 4.8.2 (Linux)
- 32-bit GCC 8.4.0 (Linux)
- 32-bit GCC 9.3.0 (Linux)
Since the intrinsics implementation of Huffman encoding is not used
by default with these compilers, this is not a concern.
- 32-bit Clang 10.0.0 (Linux)
Verified performance neutrality
Closes #507
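A minimal sketch of the kind of compiler-independent byte-swap macro
described above, assuming the GCC/Clang __builtin_bswap64() builtin and
the Visual C++ _byteswap_uint64() intrinsic; the macro name is
illustrative:

  #if defined(_MSC_VER) && !defined(__clang__)
  #include <stdlib.h>
  #define BSWAP64(x)  _byteswap_uint64(x)
  #else
  #define BSWAP64(x)  __builtin_bswap64(x)
  #endif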
(regression introduced by 16bd984557)
This implements the same fix for
jsimd_encode_mcu_AC_refine_prepare_sse2() that
a81a8c137b implemented for
jsimd_encode_mcu_AC_first_prepare_sse2().
Based on:
1a59587397eb176a91d8
Fixes #509
Closes #510
Referring to https://bugzilla.redhat.com/show_bug.cgi?id=1937385#c2,
it is my opinion that the severity of this bug was grossly overstated
and that a CVE never should have been assigned to it, but since one was
assigned, users need to know which version of libjpeg-turbo contains
the fix.
Dear security community, please learn what "DoS" actually means and stop
misusing that term for dramatic effect. Thanks.
We don't officially support i386 or PowerPC Mac builds of libjpeg-turbo
anymore, but they still work (bearing in mind that PowerPC builds
require GCC v4.0 in Xcode 3.2.6, and i386 builds require Xcode 9.x or
earlier). Referring to #495, apparently MacPorts needs this
functionality.
The GNU builtin function __builtin_clzl() accepts an unsigned long
argument, which is 8 bytes wide on LP64 systems (most Un*x systems,
including Mac) but 4 bytes wide on LLP64 systems (Windows). This caused
the Neon intrinsics implementation of Huffman encoding to produce
mathematically incorrect results when compiled using Visual Studio with
Clang.
This commit changes all invocations of __builtin_clzl() in the Neon SIMD
extensions to __builtin_clzll(), which accepts an unsigned long long
argument that is guaranteed to be 8 bytes wide on all systems.
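A minimal illustration of the width difference (the function name is
ours, not libjpeg-turbo's):

  #include <stdint.h>

  /* __builtin_clzll() takes unsigned long long, which is 64 bits on
     every ABI; __builtin_clzl() takes unsigned long, which is only
     32 bits on LLP64 Windows and thus silently truncates a 64-bit
     argument. */
  static int nbits64(uint64_t bits)
  {
    /* bits must be nonzero: __builtin_clz*(0) is undefined */
    return 64 - __builtin_clzll(bits);
  }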
Fixes #480
Closes #490
The __builtin_clz() compiler intrinsic was already used in the C Huffman
encoders when building libjpeg-turbo for Arm CPUs using a GCC-compatible
compiler. This commit modifies the C Huffman encoders so that they also
use __builtin_clz() when building for Arm CPUs using Visual Studio +
Clang, as well as the equivalent _CountLeadingZeros() compiler intrinsic
when building for Arm CPUs using Visual C++.
In addition to making the C Huffman encoders faster on Windows/Arm, this
also prevents jpeg_nbits_table from being included in Windows/Arm builds,
thus saving 128 KB of memory.
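A hedged sketch of that intrinsic selection; the macro names are
illustrative, not the actual libjpeg-turbo code:

  #if defined(_MSC_VER) && !defined(__clang__)
  #include <intrin.h>
  #define CLZ32(x)  _CountLeadingZeros(x)  /* Visual C++ for Arm */
  #else
  #define CLZ32(x)  __builtin_clz(x)       /* GCC-compatible compilers */
  #endif

  /* Number of bits needed to represent a nonzero value: */
  #define NBITS_NONZERO(x)  (32 - CLZ32(x))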
When configuring a Visual Studio IDE build and passing -A arm64 to
CMake, CMAKE_SYSTEM_PROCESSOR will be AMD64, so we should set CPU_TYPE
based on the value of CMAKE_GENERATOR_PLATFORM rather than the value of
CMAKE_SYSTEM_PROCESSOR.
Note that this removes our ability to regression test the Armv8 and
PowerPC SIMD extensions, effectively reverting
a524b9b06b and
02227e48a9, but at the moment, there is no
other way.