Best JPEG Image compression quality

I'm encoding a bitmapData to JPG. What is a good compression rate that brings the file size down without losing the overall image quality? (I'm looking for a solution that works for any image.)


So, I'll try to answer your question in the most theoretical way possible, and explain why it's difficult, if not impossible, to recommend a single compression rate that will work well in all cases.

First, you need at least a vague understanding of the difference between lossy and lossless compression.

Lossy vs Lossless

A lossless compression algorithm takes a set of data and transforms it into another, smaller set of data. Reversing the process produces exactly the same data as the pre-compressed input. Lossy compression, on the other hand, does not.

With lossy compression, the algorithm is allowed to throw away information it deems unnecessary for the reconstruction of the original message, and it does not guarantee the reconstruction of the exact same message.
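To make the guarantee concrete, here is a minimal sketch in Python using the standard library's zlib, a lossless codec. The round trip is exact by definition, which is precisely the promise a lossy codec gives up:

```python
import zlib

message = b"Bob has gone to the store for milk."

# Lossless: compress, then decompress.
compressed = zlib.compress(message, 9)
restored = zlib.decompress(compressed)

# The round trip is guaranteed to be byte-for-byte exact --
# that is the definition of lossless compression.
print(restored == message)  # True
```

A lossy codec like JPEG has no equivalent of that final equality check; the best it can offer is "close enough."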

Let's look at a hypothetical example to drive home the point. Let's say I have come up with an algorithmic way to compress English text. I have a lossless codec called A and a lossy codec called B. Let's say I want to compress the following phrase:

Bob has gone to the store for milk.

Running the phrase through codec A and then decompressing it, I would get:

Bob has gone to the store for milk.

However, running the same message through codec B with a compression rate of 10, I might get the following phrase back out:

Bob went to store for milk.

Notice that the result isn't the same, but it's pretty close. The integrity of the message remains intact, but it isn't the same information that I put into the system.

Now let's run the source message through codec B with a compression rate of 5. This time I might get the following when I try to decode the compressed message:

Bob went to store for food.

Notice that even more information is missing, but the implied intent of the message is still present. However, I have no idea what kind of food Bob went to the store for.

Finally, let's run the source message through codec B with a compression rate of 1. This time I might get this back:

Bob isn't here.

This time the algorithm has decided that where Bob has gone isn't important, only that Bob isn't in the current location. The core intent of the original message is still preserved, but the rest of the context has been lost.

The same theory applies to images. JPEG compression works by throwing away data the algorithm doesn't think is necessary to reconstruct the image.

So how does JPEG actually work?

The process by which JPEG actually does its work is full of complicated math, but at a high level it's pretty easy to understand. It breaks the image down into small 8x8-pixel blocks, transforms each block from a collection of pixels into a collection of frequency coefficients (via the DCT-II, if you are interested), and then analyzes those coefficients to see which ones it can omit based on the compression rate given.
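To see what that transform actually produces, here is a toy, pure-Python sketch of the forward DCT-II on an 8x8 block (the function name is my own; real encoders use fast factorizations, but the coefficients come out the same):

```python
import math

def dct_2d(block):
    """Naive orthonormal 2-D DCT-II of an NxN block (N = 8 for JPEG).

    O(N^4) reference implementation, for clarity rather than speed.
    """
    n = len(block)

    def alpha(k):
        # Orthonormal scaling factor for each coefficient.
        return math.sqrt(1.0 / n) if k == 0 else math.sqrt(2.0 / n)

    out = [[0.0] * n for _ in range(n)]
    for u in range(n):
        for v in range(n):
            s = 0.0
            for x in range(n):
                for y in range(n):
                    s += (block[x][y]
                          * math.cos(math.pi * (2 * x + 1) * u / (2 * n))
                          * math.cos(math.pi * (2 * y + 1) * v / (2 * n)))
            out[u][v] = alpha(u) * alpha(v) * s
    return out

# A flat block of mid-gray pixels: all the energy lands in the single
# "DC" coefficient at [0][0]; every other coefficient is (nearly) zero.
flat = [[128] * 8 for _ in range(8)]
coeffs = dct_2d(flat)
print(round(coeffs[0][0]))  # 1024  (= 8 * 128)
```

The flatter or smoother the block, the fewer coefficients carry any real energy, and that is exactly what the compressor exploits.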

There is a great visual example of the math involved over at Wikipedia, in the article on the DCT.

Notice how it builds up the image of the letter "A" by mixing together a collection of simple patterned blocks (generated from cosine functions).

Now, if you watch that image closely enough, you'll notice that there are a bunch of coefficients that are very close to zero, values like +0.006 and +0.021. And if you look closely, you'll see that their impact on the resulting image on the left is fairly minimal. The most simplistic explanation is this: JPEG compression works by throwing away these small values, effectively not counting them. So when it reconstructs the image by reversing the process (the inverse DCT, or DCT-III), it does not add or subtract these subtle changes to the block. It only stores and uses the coefficients that have the most effect on the final block.

The lower the compression rate (quality setting), the more of these it tries to throw away; the higher the rate, the more it tries to keep.

Now, there is a lot more subtle math that goes on during this phase, but that's the best simple explanation.
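The throw-away-and-reconstruct idea can be sketched end to end (again a naive pure-Python toy with invented names, not how a real encoder is structured; real JPEG quantizes with a quality-scaled table rather than the flat threshold used here):

```python
import math

def _alpha(k, n):
    # Orthonormal DCT scaling factor.
    return math.sqrt(1.0 / n) if k == 0 else math.sqrt(2.0 / n)

def dct_2d(block):
    """Naive orthonormal 2-D DCT-II (pixels -> frequency coefficients)."""
    n = len(block)
    return [[_alpha(u, n) * _alpha(v, n) * sum(
        block[x][y]
        * math.cos(math.pi * (2 * x + 1) * u / (2 * n))
        * math.cos(math.pi * (2 * y + 1) * v / (2 * n))
        for x in range(n) for y in range(n))
        for v in range(n)] for u in range(n)]

def idct_2d(coeffs):
    """Naive orthonormal 2-D inverse DCT (coefficients -> pixels)."""
    n = len(coeffs)
    return [[sum(
        _alpha(u, n) * _alpha(v, n) * coeffs[u][v]
        * math.cos(math.pi * (2 * x + 1) * u / (2 * n))
        * math.cos(math.pi * (2 * y + 1) * v / (2 * n))
        for u in range(n) for v in range(n))
        for y in range(n)] for x in range(n)]

# A smooth 8x8 gradient block of pixel values.
block = [[16 * x + 2 * y for y in range(8)] for x in range(8)]
coeffs = dct_2d(block)

# "Compress" by zeroing every coefficient whose magnitude is small.
# This flat threshold stands in for JPEG's quantization step.
threshold = 5.0
kept = [[c if abs(c) >= threshold else 0.0 for c in row] for row in coeffs]

restored = idct_2d(kept)

# The reconstruction is close to, but not exactly, the original.
# That lost difference is precisely the "lossy" part.
max_error = max(abs(restored[x][y] - block[x][y])
                for x in range(8) for y in range(8))
print(f"max reconstruction error: {max_error:.3f}")
```

Raise the threshold and the file gets smaller while the reconstruction drifts further from the original, which is exactly the trade-off a quality setting controls.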

What does this all mean?!

This means that which information the compressor tries to get rid of depends directly on the image you are trying to compress. Some images can be compressed further than others without showing visual artifacts, simply due to the structure of the blocks and the information they contain.

Also consider that the effect of compression depends on who is looking at it. I have worked with images a lot (both in college and professionally) and have spent a lot of time studying the underlying compression mechanics of JPEG/MPEG, so it's pretty easy for me to spot compression artifacts because I know what I'm looking for. But those same artifacts might not be picked up by a less discerning eye. (Just as some people cannot listen to MP3-compressed files, because they can literally hear the compression algorithm at work.)

So your mileage may vary, depending on what you are trying to do and the images you are trying to compress. If you had a good understanding of the underlying mathematics, you might be able to predict what would give you the best bang for the buck in terms of compression ratio, based on the image you are trying to compress. But most of the time, it's simply a product of experimentation.
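That experimentation is easy to automate. Here is a sketch in Python using the Pillow library (assuming it is installed; the same loop-over-quality-settings idea applies to whatever JPEG encoder you are using) that compares file size across quality settings for an image:

```python
import io
import random

from PIL import Image  # Pillow; third-party, pip install Pillow

# A stand-in image: random noise is a worst case for JPEG, but any
# Image object (e.g. one loaded with Image.open) works the same way.
random.seed(0)
img = Image.new("L", (64, 64))
img.putdata([random.randrange(256) for _ in range(64 * 64)])

sizes = {}
for quality in (20, 50, 80, 95):
    buf = io.BytesIO()
    img.save(buf, format="JPEG", quality=quality)
    sizes[quality] = buf.tell()

# Lower quality -> smaller file; where the artifacts become
# objectionable is something you judge by eye, per image.
for quality, size in sorted(sizes.items()):
    print(quality, size)
```

Run this over a representative sample of your actual images and inspect the results at each setting; that tells you far more than any universal number could.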


A compression rate between 60% and 80% usually provides a decent reduction in size without introducing too many noticeable visual artifacts.
