- Due to an oversight, a113506d17
(libjpeg-turbo 1.4 beta1) effectively made the call to
std_huff_tables() in jpeg_set_defaults() a no-op if the Huffman tables
were previously defined, which made it impossible to disable Huffman
table optimization or progressive mode if they were previously enabled
in the same API instance. std_huff_tables() retains its previous
behavior for decompression instances, but it now force-enables the
standard (baseline) Huffman tables for compression instances.
- Due to another oversight, there was no way to disable lossless mode
if it was previously enabled in a particular API instance.
jpeg_set_defaults() now accomplishes this, which makes
TJ*PARAM_LOSSLESS behave as intended/documented.
- Due to yet another oversight, setCompDefaults() in the TurboJPEG API
library permanently modified the value of TJ*PARAM_SUBSAMP when
generating a lossless JPEG image, which affected subsequent lossy
compression operations. This issue was hidden by the issue above and
thus does not need to be publicly documented.
Fixes #792
The target data precision isn't known at the time that the calling
program sets TJPARAM_LOSSLESSPT, so tj3Set() needs to allow all possible
values (from 0 to 15.) jpeg_enable_lossless(), which is called within
the body of tj3Compress*(), will throw an error if the point transform
value is greater than {data precision} - 1.
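A minimal caller-side sketch of the resulting division of labor (handle
setup shown; buffer declarations omitted, and the parameter values are
illustrative):

  tjhandle handle = tj3Init(TJINIT_COMPRESS);
  tj3Set(handle, TJPARAM_LOSSLESS, 1);
  tj3Set(handle, TJPARAM_LOSSLESSPSV, 1);
  /* Accepted: the target data precision isn't known yet, so tj3Set()
     allows any point transform value from 0 to 15. */
  tj3Set(handle, TJPARAM_LOSSLESSPT, 10);
  /* tj3Compress8() implies 8-bit data precision, so jpeg_enable_lossless(),
     called within its body, rejects 10 > 8 - 1, and the function fails. */
  if (tj3Compress8(handle, srcBuf, width, 0, height, TJPF_RGB,
                   &jpegBuf, &jpegSize) < 0)
    fprintf(stderr, "%s\n", tj3GetErrorStr(handle));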
This just improves code readability by emphasizing that we don't care
about the destination image's level of subsampling unless
TJPARAM_NOREALLOC is set or lossless cropping will be performed.
With respect to tj3Transform(), this addresses an oversight from
bb1d540a80.
Note to self: A convenience function/method for computing the worst-case
transformed JPEG size for a particular transform would be nice.
tj*Transform() relied upon the underlying transupp API to check the
cropping region. However, transupp uses unsigned integers for the
cropping region, whereas the tjregion structure uses signed integers.
Thus, casting negative values from a tjregion structure produced very
large unsigned values.  In the case of the left and upper boundaries, this
was innocuous, because jtransform_request_workspace() rejected the
values as being out of bounds. However, jtransform_request_workspace()
did not always reject very large width and height values, because it
supports expanding the destination image by specifying a cropping region
larger than the source image. In certain cases, it allowed those
values, and the libjpeg memory manager subsequently ran out of memory.
NOTE: Prior to this commit, image expansion technically worked with
tj*Transform() as long as the cropping width and height were valid and
automatic JPEG buffer (re)allocation was used. However, that behavior
is not a documented feature of the TurboJPEG API, nor do we have any way
of testing it at the moment. Official support for image expansion can
be added later, if there is sufficient demand for it.
Similarly, this commit modifies tj3SetCroppingRegion() so that it
explicitly checks for left boundary values exactly equal to the scaled
image width and upper boundary values exactly equal to the scaled image
height. If the specified cropping width or height was 0 (which is
interpreted as {scaled image width} - {left boundary} or
{scaled image height} - {upper boundary}), then such values caused a
cropping width or height of 0 to be passed to the libjpeg API. In the
case of the width, this was innocuous, because jpeg_crop_scanline()
rejected the value. In the case of the height, however, this caused
unexpected and hard-to-diagnose errors farther down the pipeline.
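A sketch of the added validation (scaledWidth/scaledHeight are
hypothetical local names for the scaled image dimensions):

  /* Reject left/upper boundaries that coincide with the scaled image
     dimensions.  A cropping width or height of 0 means "extend to the
     right/bottom edge", so such boundary values would otherwise turn into
     an actual cropping width or height of 0 by the time they reach the
     libjpeg API. */
  if (croppingRegion.x >= scaledWidth || croppingRegion.y >= scaledHeight)
    return -1;  /* error: cropping region exceeds the scaled image bounds */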
Put all general functions at the top of the list, and ensure that all
functions are defined before they are mentioned. Also consistify the
function ordering between turbojpeg.h and turbojpeg.c.
Lossless cropping is performed after other lossless transform
operations, so the cropping region must be specified relative to the
destination image dimensions and level of chrominance subsampling, not
the source image dimensions and level of chrominance subsampling.
More specifically, if the lossless transform operation swaps the X and Y
axes, or if the image is converted to grayscale, then that changes the
cropping region requirements.
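For example (a sketch; the dimensions are illustrative), a 640x480 4:2:2
source rotated 90 degrees becomes a 480x640 4:4:0 destination, so the
cropping region must align to the destination's iMCU grid:

  tjtransform xform = { 0 };
  xform.op = TJXOP_ROT90;        /* swaps the X and Y axes */
  xform.options = TJXOPT_CROP;
  /* 4:2:2 (16x8 iMCUs) becomes 4:4:0 (8x16 iMCUs) after rotation, so x
     must be a multiple of 8 and y a multiple of 16, and the region must
     fit within the 480x640 destination image. */
  xform.r.x = 8;  xform.r.y = 16;  xform.r.w = 320;  xform.r.h = 240;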
The JPEG-1 spec never uses the term "MCU block". That term is rarely
used in other literature to describe the equivalent of an MCU in an
interleaved JPEG image, but the libjpeg documentation uses "iMCU" to
describe the same thing. "iMCU" is a better term, since the equivalent
of an interleaved MCU can contain multiple DCT blocks (or samples in
lossless mode) that are only grouped together if the image is
interleaved.
In the case of restart markers, "MCU block" was used in the libjpeg
documentation instead of "MCU", but "MCU" is more accurate and less
confusing. (The restart interval is literally in MCUs, where one MCU
is one data unit in a non-interleaved JPEG image and multiple data units
in a multi-component interleaved JPEG image.)
In the case of 9b704f96b2, the issue was
actually with progressive JPEG images exactly two DCT blocks wide, not
two MCU blocks wide.
This commit also defines "MCU" and "MCU row" in the description of the
various restart marker options/parameters. Although an MCU row is
technically always a row of samples in lossless mode, "sample row" was
confusing, since it is used in other places to describe a row of samples
for a single component (whereas an MCU row in a typical lossless JPEG
image consists of a row of interleaved samples for all components.)
Referring to
https://sourceforge.net/p/libjpeg-turbo/bugs/48,
https://sourceforge.net/p/libjpeg-turbo/bugs/82,
#15, #238, #253, and #619,
valgrind and MSan have failed to properly detect data initialization by
libjpeg-turbo's x86 SIMD extensions for the entire 14 years that
libjpeg-turbo has been a project, resulting in false positives unless
libjpeg-turbo is built with WITH_SIMD=0 or run with JSIMD_FORCENONE=1.
This commit introduces a new C preprocessor macro (ZERO_BUFFERS) that,
if set, causes libjpeg-turbo to zero certain buffers in order to work
around the specific valgrind/MSan test failures caused by the
aforementioned false positives. This allows us to more closely
approximate the production configuration of libjpeg-turbo when testing
with valgrind or MSan.
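In code, the workaround takes this form (a sketch; the buffer names are
illustrative, and the actual sites are the specific buffers that trip the
false positives):

  #ifdef ZERO_BUFFERS
    /* Zero the buffer that the SIMD extensions will write to, so that
       valgrind/MSan regard its contents as initialized.  Production
       builds (without ZERO_BUFFERS) skip this. */
    memset(buffer, 0, bufferSize);
  #endif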
Closes #781
Several TurboJPEG functions store their return value in an unsigned
long long intermediate and compare it against the maximum value of
unsigned long or size_t in order to avoid integer overflow. However,
such comparisons are tautological (always true, i.e. redundant) unless
the size of unsigned long or size_t is less than the size of unsigned
long long. Explicitly guarding the comparisons with #if avoids compiler
warnings with -Wtautological-constant-in-range-compare in Clang and also
makes it clear to the reader that the comparisons are only intended for
32-bit code.
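The guard pattern looks like this (a sketch; tjBufSizeExample() and its
argument are hypothetical, and the actual code compares against ULONG_MAX
or SIZE_MAX depending on the function's return type):

  #include <limits.h>

  unsigned long tjBufSizeExample(unsigned long long computed)
  {
  #if ULLONG_MAX > ULONG_MAX
    /* Meaningful only when unsigned long is narrower than unsigned long
       long (e.g. 32-bit code); otherwise this comparison can never be
       true, and Clang warns about it. */
    if (computed > (unsigned long long)ULONG_MAX)
      return 0;  /* error return for the buffer size functions */
  #endif
    return (unsigned long)computed;
  }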
Refer to #752
(regression introduced by e8b40f3c2b)
The documented behavior of the libjpeg API is to compute optimal Huffman
tables when generating 12-bit lossy Huffman-coded JPEG images, unless
the calling application supplies its own Huffman tables. However,
e8b40f3c2b and
96bc40c1b3 modified
jinit_c_master_control() so that it always set cinfo->optimize_coding to
TRUE when generating 12-bit lossy Huffman-coded JPEG images, which
prevented calling applications from supplying custom Huffman tables for
such images.
This commit modifies jinit_c_master_control() so that it only overrides
cinfo->optimize_coding when generating 12-bit lossy Huffman-coded JPEG
images if all Huffman table slots are empty or all slots contain default
Huffman tables. Determining whether the latter is true requires using
memcmp() to compare the allocated Huffman tables with the default
Huffman tables, because:
- The documented behavior of jpeg_set_defaults() is to initialize any
empty Huffman table slot with the default Huffman table corresponding
to that slot, regardless of the data precision. There is also no
requirement that the data precision be specified prior to calling
jpeg_set_defaults(). Thus, there is no reliable way to prevent
jpeg_set_defaults() from initializing empty Huffman table slots with
default Huffman tables, which are useless for 12-bit data precision.
- There is no requirement that custom Huffman tables be defined prior to
calling jpeg_set_defaults(). A calling application could call
jpeg_set_defaults() and modify the values in the default Huffman
tables rather than allocating new tables. Thus, there is no reliable
way to detect whether the allocated Huffman tables contain default
values without comparing the tables with the default Huffman tables.
Fortunately, comparing the allocated Huffman tables with the default
Huffman tables is the last stop on the logic train, so it won't happen
unless cinfo->data_precision == 12, cinfo->arith_code == FALSE,
cinfo->optimize_coding == FALSE, and one or more Huffman tables are
allocated. (If the compressor object is reused, this ensures that the
full comparison will be performed at most once.) Custom Huffman tables
will be flagged as non-default when the first non-default value is
encountered, and the worst case (comparing 400 bytes) is very fast on
modern CPUs anyhow.
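A sketch of the final comparison, with hypothetical names for the helper
and for the default table data (JHUFF_TBL stores a table as bits[17] plus
huffval[]):

  #include <string.h>

  /* Returns TRUE if the slot is empty or the allocated table matches the
     corresponding default Huffman table.  (std_bits/std_val are
     hypothetical names for the default table definitions; nsymbols is the
     number of symbols in the default table.) */
  static boolean table_is_default(const JHUFF_TBL *tbl, const UINT8 *std_bits,
                                  const UINT8 *std_val, int nsymbols)
  {
    if (tbl == NULL)
      return TRUE;
    return memcmp(tbl->bits, std_bits, sizeof(tbl->bits)) == 0 &&
           memcmp(tbl->huffval, std_val, nsymbols) == 0;
  }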
Fixes #751
TJPARAM_MAXPIXELS was previously hidden and used only for fuzz testing,
but it is potentially useful for calling applications as well,
particularly if they want to guard against excessive memory consumption
by the tj3LoadImage*() functions. The parameter has also been extended
to decompression and lossless transformation functions/methods, mainly
as a convenience. (It was already possible for calling applications to
impose their own JPEG image size limits by reading the JPEG header prior
to decompressing or transforming the image.)
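For example (a sketch; the filename and limit are illustrative):

  tjhandle handle = tj3Init(TJINIT_DECOMPRESS);
  /* Refuse to load images with more than 100 million pixels. */
  tj3Set(handle, TJPARAM_MAXPIXELS, 100000000);
  int width, height, pixelFormat = TJPF_RGB;
  unsigned char *pixels =
    tj3LoadImage8(handle, "untrusted.ppm", &width, 1, &height, &pixelFormat);
  if (!pixels)
    fprintf(stderr, "%s\n", tj3GetErrorStr(handle));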
Because of the TurboJPEG 3 API overhaul, the legacy decompression and
lossless transformation functions now wrap the new TurboJPEG 3
functions. For performance reasons, we don't want to read the JPEG
header more than once during the same operation, so the wrapped
functions do not read the header if it has already been read by a
wrapper function. Initially the TurboJPEG 3 functions used a state
variable to track whether the header had already been read, but
b94041390c made this more robust by using
the libjpeg global decompression state instead. If a wrapper function
has already read the JPEG header successfully, then the global
decompression state will be DSTATE_READY, and the logic introduced in
b94041390c will prevent the header from
being read again.
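Schematically (this is a sketch of the logic, not the literal code):

  /* Only attach the source manager and read the header if a wrapper
     function hasn't already done so. */
  if (dinfo->global_state == DSTATE_START) {
    jpeg_mem_src_tj(dinfo, jpegBuf, jpegSize);
    if (jpeg_read_header(dinfo, TRUE) != JPEG_HEADER_OK)
      return -1;
  }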
A subtle issue arises because tj3DecompressHeader() does not call
jpeg_abort_decompress() if jpeg_read_header() fails. (That is arguably
a bug, but it has existed since the very first implementation of the
function.) Depending on the nature of the failure, this can cause
tj3DecompressHeader() to return an error code and leave the libjpeg
global decompression state set to DSTATE_INHEADER. If a misbehaved
application ignored the error and subsequently called a TurboJPEG
decompression or lossless transformation function, then the function
would fail to read the JPEG header because the global decompression
state was greater than DSTATE_START. In the case of the decompression
functions, this was innocuous, because jpeg_calc_output_dimensions()
and jpeg_start_decompress() both sanity check the global decompression
state. However, it was possible for a misbehaved application to call
tj3DecompressHeader() with junk data, ignore the return value, and pass
the same junk data into tj3Transform(). Because tj3DecompressHeader()
left the global decompression state set to DSTATE_INHEADER,
tj3Transform() failed to detect the junk data (because it didn't try to
read the JPEG header), and it called jtransform_request_workspace() with
dinfo->image_width and dinfo->image_height still initialized to 0.
Because jtransform_request_workspace() does not sanity check the
decompression state, a division-by-zero error occurred with certain
combinations of transform options in which TJXOPT_TRIM or TJXOPT_CROP
was specified. However, it should be noted that TJXOPT_TRIM and
TJXOPT_CROP cannot be expected to work properly without foreknowledge of
the JPEG source image dimensions, which cannot be gained except by
calling tj3DecompressHeader() successfully. Thus, a calling application
is inviting trouble if it does not check the return value of
tj3DecompressHeader() and sanity check the JPEG source image dimensions
before calling tj3Transform(). This commit softens the failure, but the
failure is still due to improper API usage.
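Proper usage therefore looks like this (a sketch):

  if (tj3DecompressHeader(handle, jpegBuf, jpegSize) < 0) {
    fprintf(stderr, "%s\n", tj3GetErrorStr(handle));
    return -1;  /* do not pass this data to tj3Transform() */
  }
  int width = tj3Get(handle, TJPARAM_JPEGWIDTH);
  int height = tj3Get(handle, TJPARAM_JPEGHEIGHT);
  if (width < 1 || height < 1)
    return -1;  /* sanity check before using TJXOPT_TRIM or TJXOPT_CROP */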
This corresponds to max_memory_to_use in the jpeg_memory_mgr struct in
the libjpeg API, except that the TurboJPEG parameter is specified in
megabytes. Because this is 2023 and computers with less than 1 MB of
memory are not a thing (at least not within the scope of libjpeg-turbo
support), it isn't useful to allow a limit less than 1 MB to be
specified. Furthermore, because TurboJPEG parameters are signed
integers, if we allowed the memory limit to be specified in bytes, then
it would be impossible to specify a limit larger than 2 GB on 64-bit
machines. Because max_memory_to_use is a long signed integer,
effectively we can specify a limit of up to 2 petabytes on 64-bit
machines if the TurboJPEG parameter is specified in megabytes. (2 PB
should be enough for anybody, right?)
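Usage is straightforward (the limit is illustrative):

  /* Limit the libjpeg memory manager to 1 GB for this instance.
     (The value is specified in megabytes, as described above.) */
  tj3Set(handle, TJPARAM_MAXMEMORY, 1024);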
This commit also bumps the TurboJPEG API version to 3.0.1. Since the
TurboJPEG API version no longer tracks the libjpeg-turbo version, it
makes sense to increment the API revision number when adding constants,
to increment the minor version number when adding functions, and to
increment the major version number for a complete overhaul.
This commit also removes the vestigial TJ_NUMPARAM macro, which was
never defined because it proved unnecessary.
Partially implements #735
This is very subtle, but if a user specifies a libjpeg virtual array
memory limit via the JPEGMEM environment variable and one of the
tj3Compress*() functions hits that limit, the libjpeg error handler
will be invoked in jpeg_start_compress() (more specifically in
realize_virt_arrays() in jinit_compress_master()) before the libjpeg
global compression state can be incremented. Thus,
jpeg_abort_compress() will not be called before the tj3Compress*()
function exits, the unrealized virtual arrays will not be freed, and if
the TurboJPEG compression instance is reused, those unrealized virtual
arrays will count against the specified memory limit. This could cause
subsequent compression operations that require smaller virtual arrays
(or even no virtual arrays at all) to fail when they would otherwise
succeed. In reality, the vast majority of calling programs would abort
and free the TurboJPEG compression instance if one of the tj3Compress*()
functions failed, but TJBench is a rare exception. This issue does not
bear documenting because of its subtlety and rarity and because JPEGMEM
is not a documented feature of the TurboJPEG API.
Note that the issue does not exist in the tj3Encode*() and tj3Decode*()
functions, because realize_virt_arrays() is never called in the body of
those functions. The issue also does not exist in the tj3Decompress*()
and tj3Transform() functions, because those functions ensure that the
JPEG header is read (and thus the libjpeg global decompression state is
incremented) prior to calling a function that calls
realize_virt_arrays() (i.e. jpeg_start_decompress() or
jpeg_read_coefficients().) If realize_virt_arrays() failed in the body
of jpeg_write_coefficients(), then tj3Transform() would abort without
calling jpeg_abort_compress(). However, since jpeg_start_compress() is
never called in the body of tj3Transform(), no virtual arrays are ever
requested from the compression object, so failing to call
jpeg_abort_compress() would be innocuous.
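For reference, the caller-side pattern that avoids the issue entirely (and
that most programs already follow) is simply (a sketch; buffer setup
omitted):

  if (tj3Compress8(handle, srcBuf, width, 0, height, TJPF_RGB,
                   &jpegBuf, &jpegSize) < 0) {
    fprintf(stderr, "%s\n", tj3GetErrorStr(handle));
    /* Destroy rather than reuse the instance after a failure, so any
       unrealized virtual arrays are freed along with it. */
    tj3Destroy(handle);
    handle = NULL;
  }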
If the align parameter was set to an unreasonably large value, such as
0x2000000, strides[0] * ph0 and strides[1] * ph1 could have overflowed
the int datatype and wrapped around when computing (src|dst)Planes[1]
and (src|dst)Planes[2] (respectively.) This would have caused
(src|dst)Planes[1] and (src|dst)Planes[2] to point to lower addresses in
the YUV buffer than expected, so the worst case would have been a
visually incorrect output image, not a buffer overrun or other
exploitable issue.
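One way to avoid the wraparound (a sketch, not necessarily verbatim from
the fix) is to promote the stride arithmetic before it can overflow:

  /* Promote to size_t so that strides[0] * ph0 and strides[1] * ph1 are
     computed in a wider type instead of wrapping around in an int. */
  srcPlanes[1] = srcPlanes[0] + (size_t)strides[0] * ph0;
  srcPlanes[2] = srcPlanes[1] + (size_t)strides[1] * ph1;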
When used with TJPARAM_NOREALLOC and with TJXOP_TRANSPOSE,
TJXOP_TRANSVERSE, TJXOP_ROT90, or TJXOP_ROT270, tj3Transform()
incorrectly based the destination buffer size for a transform on the
source image dimensions rather than the transformed image dimensions.
This was apparently a long-standing bug that had existed in the
tj*Transform() function since its inception. As initially implemented
in the evolving libjpeg-turbo v1.2 code base, tjTransform() required
dstSizes[i] to be set regardless of whether TJFLAG_NOREALLOC (the
predecessor to TJPARAM_NOREALLOC) was set.
ff78e37595, which was introduced later in
the evolving libjpeg-turbo v1.2 code base, removed that requirement and
planted the seed for the bug. However, the bug was not activated until
9b49f0e4c7 was introduced still later in
the evolving libjpeg-turbo v1.2 code base, adding a subsampling type
argument to the (new at the time) tjBufSize() function and thus making
the width and height arguments no longer commutative.
The bug opened up the possibility that a JPEG source image could cause
tj3Transform() to overflow the destination buffer for a transform if all
of the following were true:
- The JPEG source image used 4:2:2, 4:4:0, 4:1:1, or 4:4:1 subsampling.
(These are the only subsampling types for which the width and height
arguments to tj3JPEGBufSize() are not commutative.)
- The width and height of the JPEG source image were such that
tj3JPEGBufSize(height, width, subsamplingType) returned a smaller
value than tj3JPEGBufSize(width, height, subsamplingType).
- The JPEG source image contained enough metadata that the size of the
transformed image was larger than
tj3JPEGBufSize(height, width, subsamplingType).
- TJPARAM_NOREALLOC was set.
- TJXOP_TRANSPOSE, TJXOP_TRANSVERSE, TJXOP_ROT90, or TJXOP_ROT270 was
used.
- TJXOPT_COPYNONE was not set.
- TJXOPT_CROP was not set.
- The calling program allocated
tj3JPEGBufSize(height, width, subsamplingType) bytes for the
destination buffer, as the API documentation instructs.
The API documentation cautions that JPEG source images containing a
large amount of extraneous metadata (EXIF, IPTC, ICC, etc.) cannot
reliably be transformed if TJPARAM_NOREALLOC is set and TJXOPT_COPYNONE
is not set. Irrespective of the bug, there are still cases in which a
JPEG source image with a large amount of metadata can, when transformed,
exceed the worst-case transformed JPEG image size. For instance, if you
try to losslessly crop a JPEG image with 3 kB of EXIF data to 16x16
pixels, then you are guaranteed to exceed the worst-case 16x16 JPEG
image size unless you discard the EXIF data.
Even without the bug, tj3Transform() will still fail with "Buffer passed
to JPEG library is too small" when attempting to transform JPEG source
images that meet the aforementioned criteria. The bug is that the
function segfaults rather than failing gracefully, but the chances of
that occurring in a real-world application are very slim. Any
real-world application developers who attempted to transform arbitrary
JPEG source images with TJPARAM_NOREALLOC set would very quickly realize
that they cannot reliably do that without also setting TJXOPT_COPYNONE.
Thus, I posit that the actual risk posed by this bug is low.
Applications such as web browsers that are the most exposed to security
risks from arbitrary JPEG source images do not use the TurboJPEG
lossless transform feature. (None of those applications even use the
TurboJPEG API, to the best of my knowledge, and the public libjpeg API
has no equivalent transform function.) Our only command-line interface
to the tj3Transform() function, TJBench, was not exposed to the bug
because it had a compatible bug whereby it allocated the JPEG
destination buffer to the same size that tj3Transform() erroneously
expected. The TurboJPEG Java API was also not exposed to the bug
because of a similar compatible bug in the
Java_org_libjpegturbo_turbojpeg_TJTransformer_transform() JNI function.
(This commit fixes both compatible bugs.)
In short, best practices for tj3Transform() are to use TJPARAM_NOREALLOC
only with JPEG source images that are known to be free of metadata (such
as images generated by tj3Compress*()) or to use TJXOPT_COPYNONE along
with TJPARAM_NOREALLOC. Still, however, the function shouldn't segfault
as long as the calling program allocates the suggested amount of space
for the JPEG destination buffer.
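A sketch of those best practices (assuming the source image's dimensions
and subsampling type were previously obtained via tj3DecompressHeader()
and tj3Get()):

  tjhandle handle = tj3Init(TJINIT_TRANSFORM);
  tjtransform xform = { 0 };
  xform.op = TJXOP_ROT90;
  xform.options = TJXOPT_COPYNONE;  /* drop metadata, so the worst case holds */
  tj3Set(handle, TJPARAM_NOREALLOC, 1);
  /* Worst-case destination size for the *transformed* dimensions */
  size_t dstSize = tj3JPEGBufSize(height, width, subsamp);
  unsigned char *dstBuf = (unsigned char *)tj3Alloc(dstSize);
  if (tj3Transform(handle, jpegBuf, jpegSize, 1, &dstBuf, &dstSize,
                   &xform) < 0)
    fprintf(stderr, "%s\n", tj3GetErrorStr(handle));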
Usability notes:
tj3Transform() could hypothetically require dstSizes[i] to be set
regardless of the value of TJPARAM_NOREALLOC, but there are usability
pitfalls either way. The main pitfall I sought to avoid with
ff78e37595 was a calling program failing
to set dstSizes[i] at all, thus leaving its value undefined. It could
be argued that requiring dstSizes[i] to be set in all cases is more
consistent, but it could also be argued that not requiring it to be set
when TJPARAM_NOREALLOC is set is more user-proof. tj3Transform() could
also hypothetically set TJXOPT_COPYNONE automatically when
TJPARAM_NOREALLOC is set, but that could lead to user confusion.
Ultimately, I would like to address these issues in TurboJPEG v4 by
using managed buffer objects, but that would be an extensive overhaul.
In decompression and transform functions, use the libjpeg API state
rather than a TurboJPEG instance variable to determine whether
jpeg_mem_src_tj() and jpeg_read_header() have already been called by a
wrapper function.
This actually works and apparently always has worked. It only failed
because the libjpeg code, which did not originally support arithmetic
coding, assumed that optimize_coding should always be TRUE for 12-bit
data precision.
(ChangeLog update forthcoming)
- Prefix all function names with "tj3" and remove version suffixes from
function names. (Future API overhauls will increment the prefix to
"tj4", etc., thus retaining backward API/ABI compatibility without
versioning each individual function.)  A brief usage sketch of the new
API follows this list.
- Replace stateless boolean flags (including TJ*FLAG_ARITHMETIC and
TJ*FLAG_LOSSLESS, which were never released) with stateful integer
parameters, the value of which persists between function calls.
* Use parameters for the JPEG quality and subsampling as well, in
order to eliminate the awkwardness of specifying function arguments
that weren't relevant for lossless compression.
* tj3DecompressHeader() now stores all relevant information about the
JPEG image, including the width, height, subsampling type, entropy
coding type, etc. in parameters rather than returning that
information in its arguments.
* TJ*FLAG_LIMITSCANS has been reimplemented as an integer parameter
(TJ*PARAM_SCANLIMIT) that allows the number of scans to be
specified.
- Use the const keyword for all pointer arguments to unmodified
buffers, as well as for both dimensions of 2D pointers. Addresses
#395.
- Use size_t rather than unsigned long to represent buffer sizes, since
unsigned long is a 32-bit type on Windows. Addresses #24.
- Return 0 from all buffer size functions if an error occurs, rather
than awkwardly trying to return -1 in an unsigned data type.
- Implement 12-bit and 16-bit data precision using dedicated
compression, decompression, and image I/O functions/methods.
* Suffix the names of all data-precision-specific functions with 8,
12, or 16.
* Because the YUV functions are intended to be used for video, they
are currently only implemented with 8-bit data precision, but they
can be expanded to 12-bit data precision in the future, if
necessary.
* Extend TJUnitTest and TJBench to test 12-bit and 16-bit data
precision, using a new -precision option.
* Add appropriate regression tests for all of the above to the 'test'
target.
* Extend tjbenchtest to test 12-bit and 16-bit data precision, and
add separate 'tjtest12' and 'tjtest16' targets.
* BufferedImage I/O in the Java API is currently limited to 8-bit
data precision, since the BufferedImage class does not
straightforwardly support higher data precisions.
* Extend the PPM reader to convert 12-bit and 16-bit PBMPLUS files
to grayscale or CMYK pixels, as it already does for 8-bit files.
- Properly accommodate lossless JPEG using dedicated parameters
(TJ*PARAM_LOSSLESS, TJ*PARAM_LOSSLESSPSV, and TJ*PARAM_LOSSLESSPT),
rather than using a flag and awkwardly repurposing the JPEG quality.
Update TJBench to properly reflect whether a JPEG image is lossless.
- Re-organize the TJBench usage screen.
- Update the Java docs using Java 11, to improve the formatting and
eliminate HTML frames.
- Use the accurate integer DCT algorithm by default for both
compression and decompression, since the "fast" algorithm is a legacy
feature, it does not pass the ISO compliance tests, and it is not
actually faster on modern x86 CPUs.
* Remove the -accuratedct option from TJBench and TJExample.
- Re-implement the 'tjtest' target using a CMake script that enables
the appropriate tests, depending on the data precision and whether or
not the Java API is part of the build.
- Consolidate the C and Java versions of tjbenchtest into one script.
- Consolidate the C and Java versions of tjexampletest into one script.
- Combine all initialization functions into a single function
(tj3Init()) that accepts an integer parameter specifying the
subsystems to initialize.
- Enable decompression scaling explicitly, using a new function/method
(tj3SetScalingFactor()/TJDecompressor.setScalingFactor()), rather
than implicitly using awkward "desired width"/"desired height"
parameters.
- Introduce a new macro/constant (TJUNSCALED/TJ.UNSCALED) that maps to
a scaling factor of 1/1.
- Implement partial image decompression, using a new function/method
(tj3SetCroppingRegion()/TJDecompressor.setCroppingRegion()) and
TJBench option (-crop). Extend tjbenchtest to test the new feature.
Addresses #1.
- Allow the JPEG colorspace to be specified explicitly when
compressing, using a new parameter (TJ*PARAM_COLORSPACE). This
allows JPEG images with the RGB and CMYK colorspaces to be created.
- Remove the error/difference image feature from TJBench. Identical
images to the ones that TJBench created can be generated using
ImageMagick with
'magick composite <original_image> <output_image> -compose difference <diff_image>'
- Handle JPEG images with unknown subsampling types. TJ*PARAM_SUBSAMP
is set to TJ*SAMP_UNKNOWN (== -1) for such images, but they can still
be decompressed fully into packed-pixel images or losslessly
transformed (with the exception of lossless cropping.) They cannot
be partially decompressed or decompressed into planar YUV images.
Note also that TJBench, due to its lack of support for imperfect
transforms, requires that the subsampling type be known when
rotating, flipping, or transversely transposing an image. Addresses
#436.
- The Java version of TJBench now has identical functionality to the C
version. This was accomplished by (somewhat hackishly) calling the
TurboJPEG C image I/O functions through JNI and copying the pixels
between the C heap and the Java heap.
- Add parameters (TJ*PARAM_RESTARTROWS and TJ*PARAM_RESTARTBLOCKS) and
a TJBench option (-restart) to allow the restart marker interval to
be specified when compressing. Eliminate the undocumented TJ_RESTART
environment variable.
- Add a parameter (TJ*PARAM_OPTIMIZE), a transform option
(TJ*OPT_OPTIMIZE), and a TJBench option (-optimize) to allow
optimized baseline Huffman coding to be specified when compressing.
Eliminate the undocumented TJ_OPTIMIZE environment variable.
- Add parameters (TJ*PARAM_XDENSITY, TJ*PARAM_YDENSITY, and
TJ*PARAM_DENSITYUNITS) to allow the pixel density to be specified when
compressing or saving a Windows BMP image and to be queried when
decompressing or loading a Windows BMP image. Addresses #77.
- Refactor the fuzz targets to use the new API.
* Extend decompression coverage to 12-bit and 16-bit data precision.
* Replace the awkward cjpeg12 and cjpeg16 targets with proper
TurboJPEG-based compress12, compress12-lossless, and
compress16-lossless targets.
- Fix innocuous UBSan warnings uncovered by the new fuzzers.
- Implement previous versions of the TurboJPEG API by wrapping the new
functions (tested by running the 2.1.x versions of TJBench, via
tjbenchtest, and TJUnitTest against the new implementation.)
* Remove all JNI functions for deprecated Java methods and implement
the deprecated methods using pure Java wrappers. It should be
understood that backward API compatibility in Java applies only to
the Java classes and that one cannot mix and match a JAR file from
one version of libjpeg-turbo with a JNI library from another
version.
- tj3Destroy() now silently accepts a NULL handle.
- tj3Alloc() and tj3Free() now return/accept void pointers, as malloc()
and free() do.
- The image I/O functions now accept a TurboJPEG instance handle, which
is used to transmit/receive parameters and to receive error
information.
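A brief usage sketch of the new API (8-bit compression; pixel buffer setup
and error handling abbreviated):

  #include <turbojpeg.h>

  tjhandle handle = tj3Init(TJINIT_COMPRESS);
  tj3Set(handle, TJPARAM_QUALITY, 90);          /* parameter, not argument */
  tj3Set(handle, TJPARAM_SUBSAMP, TJSAMP_420);
  unsigned char *jpegBuf = NULL;  /* tj3Compress8() will (re)allocate it */
  size_t jpegSize = 0;            /* size_t, not unsigned long */
  if (tj3Compress8(handle, pixels, width, 0, height, TJPF_RGB,
                   &jpegBuf, &jpegSize) < 0)
    fprintf(stderr, "%s\n", tj3GetErrorStr(handle));
  tj3Free(jpegBuf);    /* void pointers, like free() */
  tj3Destroy(handle);  /* silently accepts NULL */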
Closes #517
tjPlaneWidth() and tjPlaneHeight() could overflow a signed int and
return a negative value if passed a width/height argument of INT_MAX and
a subsampling type for which the MCU block size is larger than 8x8.
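For example (assuming the PAD() macro and plane-width computation from
turbojpeg.c): for TJSAMP_420, the luminance plane width is padded to a
multiple of the horizontal subsampling factor, so
tjPlaneWidth(0, INT_MAX, TJSAMP_420) effectively computed
PAD(INT_MAX, 2) = (INT_MAX + 1) & ~1, and INT_MAX + 1 overflows a signed
int, producing a negative result.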
- TJBench/TJUnitTest: Wordsmith command-line output
- Java: "decompress operations"="decompression operations"
- tjLoadImage(): Error message tweak
- Don't mention compression performance in the description of
TJXOPT_PROGRESSIVE/TJTransform.OPT_PROGRESSIVE, because the image has
already been compressed at that point.
(Oversights from 9a146f0f23)
The documented behavior of the function is to use decompression scaling
to generate the largest possible image that will fit within the desired
image dimensions. Thus, if the desired image dimensions are larger than
the scaled image dimensions, then tjDecompressToYUV2() should use the
scaled image dimensions when computing the plane pointers and strides to
pass to tjDecompressToYUVPlanes().
Note that this bug was not previously detected, because tjunittest and
tjbench always passed the scaled image dimensions to
tjDecompressToYUV2().
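A sketch of the corrected computation (the scaling factor is
illustrative):

  tjscalingfactor sf = { 1, 2 };  /* assume 1/2 scaling was selected */
  int scaledWidth = TJSCALED(jpegWidth, sf);
  int scaledHeight = TJSCALED(jpegHeight, sf);
  /* Plane pointers and strides must be derived from the scaled
     dimensions, not from the desired dimensions. */
  int stride = tjPlaneWidth(0, scaledWidth, jpegSubsamp);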
- Wordsmithing, formatting, and grammar tweaks
- Various clarifications and corrections, including specifying whether
a particular buffer or image is used as a source or destination
- Accommodate/mention features that were introduced since the API
documentation was created.
- For clarity, use "packed-pixel" to describe uncompressed
source/destination images that are not planar YUV.
- Use "row" rather than "line" to refer to a single horizontal group of
pixels or component values, for consistency with the libjpeg API
documentation. (libjpeg also uses "scanline", which is a more archaic
term.)
- Use "alignment" rather than "padding" to refer to the number of bytes
by which a row's width is evenly divisible. This consistifies the
documentation of the YUV functions and tjLoadImage().  ("Padding"
typically refers to the number of bytes added to each row, which is
not the same thing.)
- Remove all references to "the underlying codec." Although the
TurboJPEG API originated as a cross-platform wrapper for the Intel
Integrated Performance Primitives, Sun mediaLib, QuickTime, and
libjpeg, none of those TurboJPEG implementations has been maintained
since 2009. Nothing would prevent someone from implementing the
TurboJPEG API without libjpeg-turbo, but such an implementation would
not necessarily have an "underlying codec." (It could be fully
self-contained.)
- Use "destination image" rather than "output image", for consistency,
or describe the type of image that will be output.
- Avoid the term "image buffer" and instead use "byte buffer" to
refer to buffers that will hold JPEG images, or describe the type of
image that will be contained in the buffer. (The Java documentation
doesn't use "byte buffer", because the buffer arrays literally have
"byte" in front of them, and since Java doesn't have pointers, it is
not possible for mere mortals to store any other type of data in those
arrays.)
- C: Use "unified" to describe YUV images stored in a single buffer, for
consistency with the Java documentation.
- Use "planar YUV" rather than "YUV planar". Is is our convention to
describe images using {component layout} {colorspace/pixel format}
{image function}, e.g. "packed-pixel RGB source image" or "planar YUV
destination image."
- C: Document the TurboJPEG API version in which a particular function
or macro was introduced, and reorder the backward compatibility
function stubs in turbojpeg.h alphabetically by API version.
- C: Use Markdown rather than HTML tags, where possible, in the Doxygen
comments.
Macros from older versions of the TurboJPEG API are supported but not
documented, so using the current version of those macros makes the code
more readable.
Because the PAD() macro can only handle powers of 2, this is a necessary
restriction (and a documented one, except in the case of
tjCompressFromYUV()-- oops.) Failing to check the 'pad' argument
caused tjBufSizeYUV2() to return bogus results if 'pad' was less than 1
or otherwise not a power of 2. tjEncodeYUV3() and tjDecodeYUV()
effectively treated a 'pad' value of 0 as unpadded, but that was subtle
and undocumented behavior. tjCompressFromYUV() did not check whether
'pad' was a power of 2, so the strides passed to
tjCompressFromYUVPlanes() would have been incorrect if 'pad' was not a
power of 2. That would not have caused tjCompressFromYUV() to overrun
the source buffer, as long as the calling application allocated the
buffer based on the return value of tjBufSizeYUV2() (which computes the
strides in the same manner as tjCompressFromYUV().) However, if the
calling application attempted to initialize the source buffer using
correctly-computed strides, then it could have overrun its own
buffer in certain cases or produced incorrect JPEG images in others.
Realistically, there is no reason why an application would want to pass
a non-power-of-2 'pad' value to a TurboJPEG API function, so this commit
is about user-proofing the API rather than fixing any known issue.
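The added check is cheap (a sketch; IS_POW2() is a hypothetical helper
name, and PAD() is a macro of the style used in the TurboJPEG source):

  #define PAD(v, p)    (((v) + (p) - 1) & ~((p) - 1))
  #define IS_POW2(x)   ((x) >= 1 && ((x) & ((x) - 1)) == 0)

  /* PAD() only works when p is a power of 2, so validate 'pad' up front. */
  if (!IS_POW2(pad))
    return -1;  /* error: 'pad' must be a power of 2 >= 1 */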
TJFLAG_LOSSLESS is irrelevant to planar YUV encoding, and setting the
flag caused tjEncode*() to fail with "Invalid lossless parameters"
because tjEncodeYUVPlanes() passes a JPEG quality value of -1 to
setCompDefaults(). This commit modifies setCompDefaults() so that it
takes no action related to the jpegQual parameter unless jpegQual >= 0.
Add a new TurboJPEG C API function (tjDecompressHeader4()) and Java API
method (TJDecompressor.getFlags()) that return the bitwise OR of any
flags that are relevant to the JPEG image being decompressed (currently
TJFLAG_PROGRESSIVE, TJFLAG_ARITHMETIC, TJFLAG_LOSSLESS, and their Java
equivalents.) This allows a calling program to determine whether the
image being decompressed is a lossless JPEG image, which means that the
decompression scaling feature will not be available and that a
full-sized destination buffer should be allocated.
More specifically, this fixes a buffer overrun in TJBench, TJExample,
and the decompress* fuzz targets that occurred when attempting (in vain)
to decompress a lossless JPEG image with decompression scaling enabled.
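A sketch of the intended usage, assuming tjDecompressHeader4() extends the
tjDecompressHeader3() signature with a flags out-parameter (the exact
signature is not shown here):

  int width, height, jpegSubsamp, jpegColorspace, flags;
  if (tjDecompressHeader4(handle, jpegBuf, jpegSize, &width, &height,
                          &jpegSubsamp, &jpegColorspace, &flags) == 0 &&
      (flags & TJFLAG_LOSSLESS)) {
    /* Decompression scaling is unavailable for lossless JPEG images, so
       allocate a full-sized destination buffer. */
  }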
The Gordian knot that 7fec5074f9 attempted
to unravel was caused by the fact that there are several
data-precision-dependent (JSAMPLE-dependent) fields and methods in the
exposed libjpeg API structures, and if you change the exposed libjpeg
API structures, then you have to change the whole API. If you change
the whole API, then you have to provide a whole new library to support
the new API, and that makes it difficult to support multiple data
precisions in the same application. (It is not impossible, as example.c
demonstrated, but using data-precision-dependent libjpeg API structures
would have made the cjpeg, djpeg, and jpegtran source code hard to read,
so it made more sense to build, install, and package 12-bit-specific
versions of those applications.)
Unfortunately, the result of that initial integration effort was an
unreadable and unmaintainable mess, which is a problem for a library
that is an ISO/ITU-T reference implementation. Also, as I dug into the
problem of lossless JPEG support, I realized that 16-bit lossless JPEG
images are a thing, and supporting yet another version of the libjpeg
API just for those images is untenable.
In fact, however, the touch points for JSAMPLE in the exposed libjpeg
API structures are minimal:
- The colormap and sample_range_limit fields in jpeg_decompress_struct
- The alloc_sarray() and access_virt_sarray() methods in
jpeg_memory_mgr
- jpeg_write_scanlines() and jpeg_write_raw_data()
- jpeg_read_scanlines() and jpeg_read_raw_data()
- jpeg_skip_scanlines() and jpeg_crop_scanline()
(This is subtle, but both of those functions use JSAMPLE-dependent
opaque structures behind the scenes.)
It is much more readable and maintainable to provide 12-bit-specific
versions of those six top-level API functions and to document that the
aforementioned methods and fields must be type-cast when using 12-bit
samples.  (A brief usage sketch follows the list below.)  Since that
eliminates the need to provide a 12-bit-specific
version of the exposed libjpeg API structures, we can:
- Compile only the precision-dependent libjpeg modules (the
coefficient buffer controllers, the colorspace converters, the
DCT/IDCT managers, the main buffer controllers, the preprocessing
and postprocessing controller, the downsampler and upsamplers, the
quantizers, the integer DCT methods, and the IDCT methods) for
multiple data precisions.
- Introduce 12-bit-specific methods into the various internal
structures defined in jpegint.h.
- Create precision-independent data type, macro, method, field, and
function names that are prefixed by an underscore, and use an
internal header to convert those into precision-dependent data
type, macro, method, field, and function names, based on the value
of BITS_IN_JSAMPLE, when compiling the precision-dependent libjpeg
modules.
- Expose precision-dependent jinit*() functions for each of the
precision-dependent libjpeg modules.
- Abstract the precision-dependent libjpeg modules by calling the
appropriate precision-dependent jinit*() function, based on the
value of cinfo->data_precision, from top-level libjpeg API
functions.
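A brief usage sketch of what this looks like to a calling application
(12-bit compression; buffer setup and error handling abbreviated, and
'samples' is assumed to be a J12SAMPLE array holding 12-bit RGB pixels):

  struct jpeg_compress_struct cinfo;
  struct jpeg_error_mgr jerr;

  cinfo.err = jpeg_std_error(&jerr);
  jpeg_create_compress(&cinfo);
  cinfo.image_width = width;
  cinfo.image_height = height;
  cinfo.input_components = 3;
  cinfo.in_color_space = JCS_RGB;
  jpeg_set_defaults(&cinfo);
  cinfo.data_precision = 12;    /* selects the 12-bit code paths */
  jpeg_mem_dest(&cinfo, &jpegBuf, &jpegSize);
  jpeg_start_compress(&cinfo, TRUE);
  while (cinfo.next_scanline < cinfo.image_height) {
    /* J12SAMPROW rather than JSAMPROW, and jpeg12_write_scanlines()
       rather than jpeg_write_scanlines(); 12-bit samples are stored in
       shorts. */
    J12SAMPROW row = &samples[cinfo.next_scanline * width * 3];
    jpeg12_write_scanlines(&cinfo, &row, 1);
  }
  jpeg_finish_compress(&cinfo);
  jpeg_destroy_compress(&cinfo);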