Regression introduced by 16bd984557 and
5b177b3cab
The pre-computed absolute values used in encode_mcu_AC_first() and
encode_mcu_AC_refine() were stored in a JCOEF (signed short) array.
When attempting to losslessly transform a specially-crafted malformed
12-bit JPEG image with a coefficient value of -32768 into a progressive
12-bit JPEG image, the progressive Huffman encoder attempted to store
the absolute value of -32768 in the JCOEF array, thus overflowing the
16-bit signed data type. Therefore, at this point in the code:
8c5e78ce29/jcphuff.c (L889)
the absolute value was read as -32768, which caused the test at
8c5e78ce29/jcphuff.c (L896)
to fail, falling through to
8c5e78ce29/jcphuff.c (L908)
with an overly large value of r (46) that, when shifted left four
places, incremented, and passed to emit_symbol(), exceeded the maximum
index (255) for the derived code tables. Fortunately, the buffer
overrun was fully contained within phuff_entropy_encoder, so the issue
did not generate a segfault or other user-visible errant behavior, but
it did cause a UBSan failure that was detected by OSS-Fuzz.
This commit introduces an unsigned JCOEF (UJCOEF) data type and uses it
to store the absolute values of DCT coefficients computed by the
AC_first_prepare() and AC_refine_prepare() methods.
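As an illustration of the overflow (a standalone sketch, not the actual
libjpeg-turbo code; the typedefs mirror jmorecfg.h):

  #include <stdio.h>

  typedef short JCOEF;            /* signed 16-bit, as in jmorecfg.h */
  typedef unsigned short UJCOEF;  /* the new unsigned counterpart */

  int main(void)
  {
    int coef = -32768;             /* coefficient from the malformed image */
    JCOEF bad = (JCOEF)(-coef);    /* 32768 does not fit; typically wraps to
                                      -32768 (implementation-defined) */
    UJCOEF good = (UJCOEF)(-coef); /* 32768 is representable */
    printf("JCOEF: %d  UJCOEF: %u\n", (int)bad, (unsigned)good);
    return 0;
  }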
Note that the changes to the Arm Neon progressive Huffman encoder
extensions cause signed 16-bit instructions to be replaced with
equivalent unsigned 16-bit instructions, so the changes should be
performance-neutral.
Based on:
bbf61c0382
Closes #628
- Because of b5a9ef64ea, "by default" is
no longer applicable. (12-bit-per-component JPEG support is now part
of the core libjpeg-turbo functionality and cannot be disabled.)
- Change awkward "can be used to enable the creation of" to less awkward
"can be used to create".
In libjpeg-turbo 2.1.x and prior, the WITH_12BIT CMake variable was used
to enable 12-bit JPEG support at compile time, because the libjpeg API
library could not handle multiple JPEG data precisions at run time. The
initial approach to handling multiple JPEG data precisions at run time
(7fec5074f9) created a whole new API,
library, and applications for 12-bit data precision, so it made sense to
repurpose WITH_12BIT to allow 12-bit data precision to be disabled.
e8b40f3c2b made it so that the libjpeg API
library can handle multiple JPEG data precisions at run time via a
handful of straightforward API extensions. Referring to
6c2bc901e2, it hasn't been possible to
build libjpeg-turbo with both forward and backward libjpeg API/ABI
compatibility since libjpeg-turbo 1.4.x. Thus, whereas we retain full
backward API/ABI compatibility with libjpeg v6b-v8, forward libjpeg
API/ABI compatibility ceased being realistic years ago, so it no longer
makes sense to provide compile-time options that give a false sense of
forward API/ABI compatibility by allowing some (but not all) of our
libjpeg API extensions to be disabled. Such options are difficult to
maintain and clutter the code with #ifdefs.
The Gordian knot that 7fec5074f9 attempted
to unravel was caused by the fact that there are several
data-precision-dependent (JSAMPLE-dependent) fields and methods in the
exposed libjpeg API structures, and if you change the exposed libjpeg
API structures, then you have to change the whole API. If you change
the whole API, then you have to provide a whole new library to support
the new API, and that makes it difficult to support multiple data
precisions in the same application. (It is not impossible, as example.c
demonstrated, but using data-precision-dependent libjpeg API structures
would have made the cjpeg, djpeg, and jpegtran source code hard to read,
so it made more sense to build, install, and package 12-bit-specific
versions of those applications.)
Unfortunately, the result of that initial integration effort was an
unreadable and unmaintainable mess, which is a problem for a library
that is an ISO/ITU-T reference implementation. Also, as I dug into the
problem of lossless JPEG support, I realized that 16-bit lossless JPEG
images are a thing, and supporting yet another version of the libjpeg
API just for those images is untenable.
In fact, however, the touch points for JSAMPLE in the exposed libjpeg
API structures are minimal:
- The colormap and sample_range_limit fields in jpeg_decompress_struct
- The alloc_sarray() and access_virt_sarray() methods in
jpeg_memory_mgr
- jpeg_write_scanlines() and jpeg_write_raw_data()
- jpeg_read_scanlines() and jpeg_read_raw_data()
- jpeg_skip_scanlines() and jpeg_crop_scanline()
(This is subtle, but both of those functions use JSAMPLE-dependent
opaque structures behind the scenes.)
It is much more readable and maintainable to provide 12-bit-specific
versions of those six top-level API functions and to document that the
aforementioned methods and fields must be type-cast when using 12-bit
samples. Since that eliminates the need to provide a 12-bit-specific
version of the exposed libjpeg API structures, we can:
- Compile only the precision-dependent libjpeg modules (the
coefficient buffer controllers, the colorspace converters, the
DCT/IDCT managers, the main buffer controllers, the preprocessing
and postprocessing controller, the downsampler and upsamplers, the
quantizers, the integer DCT methods, and the IDCT methods) for
multiple data precisions.
- Introduce 12-bit-specific methods into the various internal
structures defined in jpegint.h.
- Create precision-independent data type, macro, method, field, and
function names that are prefixed by an underscore, and use an
internal header to convert those into precision-dependent data
type, macro, method, field, and function names, based on the value
of BITS_IN_JSAMPLE, when compiling the precision-dependent libjpeg
modules.
- Expose precision-dependent jinit*() functions for each of the
precision-dependent libjpeg modules.
- Abstract the precision-dependent libjpeg modules by calling the
appropriate precision-dependent jinit*() function, based on the
value of cinfo->data_precision, from top-level libjpeg API
functions.
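A minimal sketch of the name-mangling and dispatch pattern described above
(the header layout and the 12-bit identifiers shown here are illustrative;
the actual macro and function names may differ):

  /* Internal header included by each precision-dependent module: */
  #if BITS_IN_JSAMPLE == 12
  #define _JSAMPLE                J12SAMPLE
  #define _jinit_color_converter  j12init_color_converter
  #else
  #define _JSAMPLE                JSAMPLE
  #define _jinit_color_converter  jinit_color_converter
  #endif

  /* Top-level libjpeg API function dispatching on run-time precision: */
  if (cinfo->data_precision == 12)
    j12init_color_converter(cinfo);
  else
    jinit_color_converter(cinfo);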
- Fix an issue whereby a build with ENABLE_SHARED=0 could not be
installed when using the Ninja Multi-Config CMake generator.
- Fix an issue whereby a Windows installer could not be built when using
the Ninja Multi-Config CMake generator.
- Fix an issue whereby the Java regression tests failed when using the
Ninja Multi-Config CMake generator.
Based on:
4f169deeb0
Closes #626
This commit reverts 4dbc293125 and
9f8f683e74 (the previous two commits) and
fixes #613 the correct way. The crux of the issue wasn't the size of
the whole_image virtual array but rather that, since last_iMCU_row is
unsigned, (last_iMCU_row - 1) wrapped around to 0xFFFFFFFF when
last_iMCU_row was 0. This caused the interblock smoothing algorithm
introduced in 6d91e950c8 to erroneously
try to access the next two iMCU rows, neither of which existed. The
first attempt at a fix (4dbc293125)
exposed a NULL dereference, detected by OSS-Fuzz, that occurred when
attempting to decompress a specially-crafted malformed JPEG image to a
YUV buffer using tjDecompressToYUV*() with 1/4 IDCT scaling.
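The underlying wraparound can be sketched as follows (illustrative; not the
actual decompress_smooth_data() code):

  JDIMENSION iMCU_row = 0, last_iMCU_row = 0;  /* image with one iMCU row */
  /* Unsigned arithmetic: last_iMCU_row - 1 wraps to 0xFFFFFFFF, so a test
     like this erroneously passes and looks ahead into nonexistent rows: */
  if (iMCU_row < last_iMCU_row - 1) { /* wrong */ }
  /* Guarding the subtraction (one possible form of the fix): */
  if (last_iMCU_row > 1 && iMCU_row < last_iMCU_row - 1) { /* safe */ }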
Fixes #613 (again)
Also fixes https://bugs.chromium.org/p/oss-fuzz/issues/detail?id=49898
Regression introduced by 6d91e950c8
Because we're now using a 5x5 smoothing window when decompressing
progressive JPEG images, we need to ensure that the whole_image virtual
array contains at least five rows. Previously that was not always the
case unless the progressive JPEG image being decompressed had at least
five iMCU rows. Since an iMCU has a height of (8 * the vertical
sampling factor), attempting to decompress 4:2:2 and 4:4:4 images <= 32
pixels in height or 4:2:0 images <= 64 pixels in height triggered a
JERR_BAD_VIRTUAL_ACCESS error in decompress_smooth_data(), because
access_rows exceeded the number of rows in the virtual array.
Fixes #613
libjpeg-turbo's AltiVec SIMD extensions previously assumed that AltiVec
instructions were available on all Power Macs that supported OS X 10.4
"Tiger" (the earliest version of OS X that libjpeg-turbo has ever
supported), but Tiger can actually run on PowerPC G3 processors, which
lack AltiVec instructions. This commit enables run-time detection of
AltiVec instructions on OS X/PowerPC systems if AltiVec instructions are
not force-enabled at compile time (using -maltivec). This allows the
same build of libjpeg-turbo to support G3, G4, and G5 Power Macs.
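On OS X/PowerPC, the run-time check amounts to querying sysctl (a sketch of
the general approach, not necessarily the exact dispatcher code):

  #include <stddef.h>
  #include <sys/sysctl.h>

  static int altivec_available(void)
  {
    int has_altivec = 0;
    size_t len = sizeof(has_altivec);

    /* "hw.optional.altivec" is 1 on G4/G5 CPUs and 0 or absent on a G3 */
    if (sysctlbyname("hw.optional.altivec", &has_altivec, &len, NULL, 0) != 0)
      return 0;
    return has_altivec;
  }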
Closes #609
(broken by 607b668ff9)
- Visual Studio 2010 apparently doesn't have the snprintf() inline
function, so restore the macro that emulates that function using
_snprintf_s(). (See the sketch after this list.)
- Explicitly include errno.h in strtest.c, since jinclude.h doesn't
include it when building with Visual Studio.
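The emulation macro mentioned in the first item might look roughly like this
(a sketch; the actual jinclude.h definition may differ):

  #if defined(_MSC_VER) && _MSC_VER < 1900
  /* Visual Studio 2010 lacks snprintf(); _snprintf_s() with _TRUNCATE gives
     equivalent bounded, NUL-terminated formatting. */
  #define SNPRINTF(str, n, ...)  _snprintf_s(str, n, _TRUNCATE, __VA_ARGS__)
  #else
  #define SNPRINTF(str, n, ...)  snprintf(str, n, __VA_ARGS__)
  #endif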
The h2v2 (4:2:0) merged upsampler uses a spare row buffer so that it can
upsample two rows at a time but return only one row to the application,
if necessary. merged_2v_upsample() copies from this spare row buffer
into the application-supplied output buffer, using the out_row_width
field in the my_merged_upsampler struct to determine how many samples to
copy. out_row_width is set in jinit_merged_upsampler(), which is called
within the body of jpeg_start_decompress(). Since jpeg_crop_scanline()
must be called after jpeg_start_decompress(), jpeg_crop_scanline() must
modify the value of out_row_width if the h2v2 merged upsampler will be
used. Otherwise, merged_2v_upsample() can overflow the output buffer if
the number of bytes between the current output buffer position and the
end of the buffer is less than the number of bytes required to represent
an uncropped scanline of the output image. All of the destination
managers used by djpeg allocate either a whole image buffer or a
scanline buffer based on the uncropped output image width, so this issue
is not reproducible using djpeg.
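The affected call sequence looks roughly like this (hypothetical application
code; the row buffer is sized for the cropped width):

  jpeg_start_decompress(&cinfo);

  JDIMENSION xoffset = 16, width = 64;
  jpeg_crop_scanline(&cinfo, &xoffset, &width);   /* must now also shrink
                                                     out_row_width when the
                                                     h2v2 merged upsampler is
                                                     used */

  /* Buffer sized for the cropped width; previously, merged_2v_upsample()
     could copy a full uncropped row's worth of samples into it. */
  JSAMPROW row = malloc(cinfo.output_width * cinfo.output_components);
  while (cinfo.output_scanline < cinfo.output_height)
    jpeg_read_scanlines(&cinfo, &row, 1);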
Fixes #574
- Use check_c_source_compiles() rather than check_symbol_exists() to
detect the presence of vld1_s16_x3(), vld1_u16_x2(), and
vld1q_u8_x4(). check_symbol_exists() is unreliable for detecting
intrinsics, and in practice, it did not detect the presence of the
aforementioned intrinsics in versions of GCC that support them.
- Set DEFAULT_NEON_INTRINSICS=0 for GCC < 12, even if the aforementioned
intrinsics are available. The AArch64 back end in GCC 10 and 11
supports the necessary intrinsics, but the GAS implementation is still
faster when using those compilers.
Fixes #547
(regression introduced by aa7459050d)
cjpeg sets cinfo.in_color_space to JCS_RGB as an "arbitrary guess."
Since tjLoadImage() never uses JCS_RGB, the PGM reader should treat
JCS_RGB the same as JCS_UNKNOWN.
Fixes #566
Recent FreeBSD/PowerPC compilers, such as Clang 11.0.x on FreeBSD 13, do
the equivalent of passing -maltivec to the compiler by default, so
run-time AltiVec detection is unnecessary. However, it becomes
necessary when using other compilers or when passing -mno-altivec to the
compiler.
Closes #552
When building for 32-bit Arm platforms, test whether basic Neon
intrinsics will compile with the specified compiler and C flags. This
prevents the build system from enabling the Neon SIMD extensions when
targeting Armv6 and other legacy architectures that do not support Neon
instructions.
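The compile test amounts to feeding the compiler a small program along these
lines (a sketch of the idea, not the literal CMake check):

  #include <arm_neon.h>

  int main(void)
  {
    uint8x16_t a = vdupq_n_u8(1), b = vdupq_n_u8(2);
    uint8x16_t c = vaddq_u8(a, b);
    return (int)vgetq_lane_u8(c, 0);  /* fails to compile when targeting
                                         Armv6 and older architectures */
  }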
Regression introduced by bbd8089297.
(Checking whether gas-preprocessor.pl was needed for 32-bit Arm builds
had the effect of checking whether Neon instructions were supported.)
Fixes #553
Attempting to losslessly transform certain malformed JPEG images can
cause the nbits table index in the Huffman encoder to exceed 32768, so
we need to pad the SSE2 implementation of that table to 65536 entries as
we do with the C implementation.
Regression introduced by 087c29e07f
Fixes #543
Although sizeof(void *) == sizeof(size_t) for all architectures that are
currently supported by libjpeg-turbo, such is not guaranteed by the C
standard. Specifically, CHERI-enabled architectures (e.g. CHERI-RISC-V
or Arm's Morello) use capability pointers that are twice the size of
size_t (128 bits for Morello and RV64), so casting to size_t strips the
upper bits of the pointer (including the validity bit) and makes it
non-dereferenceable, as indicated by the following compiler warning:
warning: cast from provenance-free integer type to pointer type will
give pointer that can not be dereferenced
[-Werror,-Wcheri-capability-misuse]
cvalue = values = (JCOEF *)PAD((size_t)values_unaligned, 16);
Ignoring this warning results in a run-time crash. Casting pointers to
uintptr_t, if it is available, avoids this problem, since uintptr_t is
defined as an unsigned integer type that can hold a pointer value.
Since C89 compatibility is still necessary in libjpeg-turbo, this commit
introduces a new typedef for pointer-to-integer casts that uses a
GNU-specific extension available in GCC 4.6+ and Clang 3.0+ and falls
back to using size_t if the extension is unavailable. The only other
options would require C99 or Clang-specific builtins.
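Concretely, the typedef amounts to something like this (a sketch; the
identifier is illustrative):

  #ifdef __UINTPTR_TYPE__
  typedef __UINTPTR_TYPE__ JUINTPTR;  /* GNU extension (GCC 4.6+, Clang 3.0+):
                                         unsigned integer type wide enough to
                                         hold a pointer */
  #else
  typedef size_t JUINTPTR;            /* C89 fallback */
  #endif

  /* The problematic cast then becomes: */
  cvalue = values = (JCOEF *)PAD((JUINTPTR)values_unaligned, 16);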
Closes #538
Referring to the C standard
(http://www.open-std.org/jtc1/sc22/WG14/www/docs/n1256.pdf,
J.2 Undefined behavior), the behavior of the compiler is undefined if
"conversion between two pointer types produces a result that is
incorrectly aligned." Thus, the behavior of this code
*((uint32_t *)buffer) = BUILTIN_BSWAP32(put_buffer);
in the AArch32 version of the FLUSH() macro is undefined unless 'buffer'
is 32-bit-aligned. Referring to
https://bugs.llvm.org/show_bug.cgi?id=50785, certain versions of Clang,
when generating Thumb (T32) instructions, miscompile that code into an
assembly instruction (stm) that requires the destination to be
32-bit-aligned. Since such alignment cannot be guaranteed within the
Huffman encoder, this reportedly led to crashes (SIGBUS: illegal
alignment) with AArch32/Thumb builds of libjpeg-turbo running on Android
devices, although thus far I have been unable to reproduce those crashes
with a plain Linux/Arm system.
The miscompilation is visible with the Compiler Explorer:
https://godbolt.org/z/rv1ccx1Pb
However, it goes away when removing the return statement from the
function. Thus, it seems that Clang's behavior in this regard is
somewhat variable, which may explain why the crashes are only
reproducible on certain platforms.
The suggested workaround is to use memcpy(), but whereas Clang and
recent GCC releases are smart enough to compile a 4-byte memcpy() call
into a str instruction, GCC < 6 is not. Referring to
https://godbolt.org/z/ae7Wje3P6, the only way to consistently produce
the desired str instruction across all supported compilers is to use
inline assembly. Visual C++ presumably does not miscompile the code in
question, since no issues have been reported with it, but since the code
relies on undefined compiler behavior, prudence dictates that
e4ec23d7ae should be reverted for Visual
C++, which this commit does. The performance impact of
e4ec23d7ae for Visual C++/Arm builds is
unknown (I have no ability to test such builds), but regardless, this
commit reverts the Visual C++/Arm performance to that of libjpeg-turbo
2.1 beta1.
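For reference, the two alternatives discussed above look roughly like this
(a sketch, not the literal jchuff changes):

  /* memcpy() form: no alignment assumption, but GCC < 6 emits a real call */
  uint32_t tmp = BUILTIN_BSWAP32(put_buffer);
  memcpy(buffer, &tmp, sizeof(tmp));

  /* Inline-assembly form: forces a plain str, which tolerates unaligned
     addresses on Armv7, unlike the stm that Clang may otherwise emit */
  __asm__ volatile("str %1, [%0]" : : "r"(buffer), "r"(tmp) : "memory");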
Closes #529
Arm compilers have three floating point ABI options:
'soft' compiles floating point operations as function calls into a
software floating point library, which emulates floating point
operations using integer operations. Floating point function arguments
are passed using integer registers.
'softfp' also compiles floating point operations as function calls into
a floating point library and passes floating point function arguments
using integer registers, but the floating point library functions can
use FPU instructions if the CPU supports them.
'hard' compiles floating point operations into inline FPU instructions,
similarly to x86 and other architectures, and passes floating point
function arguments using FPU registers.
Not all AArch32 CPUs have FPUs or support Neon instructions, so on Linux
and Android platforms, the AArch32 SIMD dispatcher in libjpeg-turbo only
enables the Neon SIMD extensions at run time if /proc/cpuinfo indicates
that the CPU supports Neon instructions or if Neon instructions are
explicitly enabled (e.g. by passing -mfpu=neon to the compiler.) In
order to support all AArch32 CPUs using the same code base, i.e. to
support run-time FPU and Neon auto-detection, it is necessary to compile
the scalar C source code using -mfloat-abi=soft. However, the 'soft'
floating point ABI cannot be used when compiling Neon intrinsics, so the
intrinsics implementation of the Neon SIMD extensions must be compiled
using -mfloat-abi=softfp if the scalar C source code is compiled using
-mfloat-abi=soft.
This commit modifies the build system so that it detects whether
-mfloat-abi=softfp must be explicitly added to the compiler flags when
building the intrinsics implementation of the Neon SIMD extensions.
This will be necessary if the build is using the 'soft' floating
point ABI along with run-time auto-detection of Neon instructions.
Fixes #523
Referring to #527, the security community did not assign this CVE ID
until more than 8 months after the fix for the issue was released. By
the time they assigned the ID, libjpeg-turbo already had two production
releases containing the fix. This calls into question the usefulness of
assigning a CVE ID to the issue, particularly given that the buffer
overrun in question was fully contained in the stack, not detectable
with valgrind, and confined to lossless transformation (it did not
affect JPEG compression or decompression.)
https://vuldb.com/?id.176175
says that "the exploitability is told to be easy" but provides no
clarification, and given that the author of that page does not seem to
be aware that a fix for the issue has been available since early
December of 2019, it calls into question the accuracy of everything else
on the page.
It would really be nice if the security community approached me about
these things before wasting my time, but I guess it's my lot in life to
modify a change log entry from 2019 to include a CVE ID from 2020.
So it goes...
When using the in-memory destination manager, it is necessary to
explicitly call the destination manager's term_destination() method if
an error occurs. That method is called by jpeg_finish_compress() but
not by jpeg_abort_compress().
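In application terms, the error path needs to do something along these lines
(a hypothetical error-handling sketch using the libjpeg API, not the actual
TurboJPEG change; jerr_jmp_buffer is a placeholder setjmp buffer):

  if (setjmp(jerr_jmp_buffer)) {
    /* A libjpeg error occurred.  jpeg_abort_compress() will not call
       term_destination(), so invoke it explicitly so that the caller's
       buffer pointer and size reflect any reallocation done by the
       in-memory destination manager. */
    if (cinfo.dest)
      (*cinfo.dest->term_destination)(&cinfo);
    jpeg_abort_compress(&cinfo);
    return -1;
  }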
This fixes a potential double free() that could occur if tjCompress*()
or tjTransform() returned an error and the calling application tried to
clean up a JPEG buffer that was dynamically re-allocated by one of those
functions.
- The PPM reader now throws an error rather than segfaulting (due to a
buffer overrun) if an application attempts to load a 16-bit PPM file
into a grayscale uncompressed image buffer. No known applications
allowed that (not even the test applications in libjpeg-turbo),
because that mode of operation was never expected to work and did not
work under any circumstances. (In fact, it was necessary to modify
TJBench in order to reproduce the issue outside of a fuzzing
environment.) This was purely a matter of making the library bow out
gracefully rather than crash if an application tries to do something
really stupid.
- The PPM reader now throws an error rather than generating incorrect
pixels if an application attempts to load a 16-bit PGM file into an
RGB uncompressed image buffer.
- The PPM reader now correctly loads 16-bit PPM files into extended
RGB uncompressed image buffers. (Previously it generated incorrect
pixels unless the input colorspace was JCS_RGB or JCS_EXT_RGB.)
The only way that users could have potentially encountered these issues
was through the tjLoadImage() function. cjpeg and TJBench were
unaffected.
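For context, the affected code path is reached only through tjLoadImage(),
e.g. (usage sketch, TurboJPEG 2.x API):

  int width, height, pixelFormat = TJPF_GRAY;  /* grayscale destination */
  unsigned char *buf =
    tjLoadImage("16bit.ppm", &width, 1, &height, &pixelFormat, 0);
  if (buf == NULL)
    printf("tjLoadImage() failed: %s\n", tjGetErrorStr2(NULL));
  /* ...which is now the outcome for a 16-bit PPM + grayscale buffer, rather
     than a buffer overrun */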
(regression introduced by 16bd984557)
This implements the same fix for
jsimd_encode_mcu_AC_refine_prepare_sse2() that
a81a8c137b implemented for
jsimd_encode_mcu_AC_first_prepare_sse2().
Based on:
1a59587397eb176a91d8
Fixes #509
Closes #510
Referring to https://bugzilla.redhat.com/show_bug.cgi?id=1937385#c2,
it is my opinion that the severity of this bug was grossly overstated
and that a CVE never should have been assigned to it, but since one was
assigned, users need to know which version of libjpeg-turbo contains
the fix.
Dear security community, please learn what "DoS" actually means and stop
misusing that term for dramatic effect. Thanks.