I recently got it into my head to compare the popular video codecs, in an effort to better understand how AV1 performs and looks compared to x264 and x265. I also had ideas of using an Intel video card to compress a home video security setup, and wanted to know what level of compression I would need to get good results.

The Setup
I used the 4K, 6.3 GB Blender project Tears of Steel as a source. I downscaled the video to 1080p with all three codecs, and then compared the results at various CRF levels.
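
To give a concrete idea, the encode sweep can be done with ffmpeg along these lines (a minimal sketch; the input file name, the SVT-AV1 preset, and dropping the audio are my assumptions, not necessarily the exact settings used):

```bash
# Downscale the 4K source to 1080p and sweep CRF values (SVT-AV1 shown; swap in
# -c:v libx265 or -c:v libx264 for the other codecs, which top out at CRF 51).
for crf in 18 21 24 27 30 33 36 39 42 45 48 51 54 57 60 63; do
  ffmpeg -i tears_of_steel_4k.mov -vf scale=1920:-2 \
         -c:v libsvtav1 -crf "$crf" -preset 6 -an "tos_av1_crf${crf}.mkv"
done
```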

To compare the results I used imgsli, FFMetrics, and my own picture viewer to try to spot the differences.
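
For an objective check alongside the eyeballing, ffmpeg's libvmaf filter can score an encode against a reference; this is only a sketch and assumes an ffmpeg build with libvmaf plus a lossless 1080p reference made from the same downscale:

```bash
# Make a lossless 1080p reference once (libx264 in lossless mode, audio dropped).
ffmpeg -i tears_of_steel_4k.mov -vf scale=1920:-2 -c:v libx264 -qp 0 -an reference_1080p.mkv

# Score an encode against it; the first input is the distorted file, the second is the reference.
ffmpeg -i tos_av1_crf36.mkv -i reference_1080p.mkv -lavfi libvmaf -f null -
```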

The Results

| CRF | AV1 (KB) | x265 (KB) | x264 (KB) |
|-----|----------|-----------|-----------|
| 18 | 419,261 | 632,079 | 685,217 (visually lossless) |
| 21 | 352,337 | 390,358 (visually lossless) | 411,439 |
| 24 | 301,517 (VMAF visually lossless) | 250,426 | 263,524 (good enough) |
| 27 | 245,685 | 165,079 (good enough) | 176,919 |
| 30 | 205,008 | 110,062 | 122,458 |
| 33 | 168,192 | 73,528 | 86,899 |
| 36 | 139,379 (my visually lossless) | 48,516 | 63,214 |
| 39 | 116,096 | 31,670 | 47,161 |
| 42 | 97,365 (my good enough) | 20,636 | 35,801 |
| 45 | 81,805 | 13,598 | 27,484 |
| 48 | 69,044 | 9,726 | 20,823 |
| 51 | 58,316 | 8,586 (worst possible) | 16,120 (worst possible) |
| 54 | 48,681 | - | - |
| 57 | 39,113 | - | - |
| 60 | 29,062 | - | - |
| 63 | 16,533 (worst possible) | - | - |

Here is AV1 at CRF 36 vs CRF 24.

I go into more detail on the hows and whys of my choices, and how I came to these conclusions, in my journal-style blog post. But in essence, if you want to lose practically no visual information, CRF 24 through 36 for AV1, CRF 21 for x265, and CRF 18 for x264 will do the job.
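
As a starting point, those picks translate to commands like these (the presets and file names are placeholders, not my exact settings):

```bash
ffmpeg -i source.mkv -c:v libsvtav1 -crf 30 -preset 6    -c:a copy out_av1.mkv   # anywhere in the 24-36 range
ffmpeg -i source.mkv -c:v libx265   -crf 21 -preset slow -c:a copy out_x265.mkv
ffmpeg -i source.mkv -c:v libx264   -crf 18 -preset slow -c:a copy out_x264.mkv
```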

If you are low on space, using my ‘good enough’ choices will get you practically the same visual results while using less space, depending on the codec.

  • UndercoverUlrikHD@programming.dev · 10 months ago

    Frankly, for anything other than real-time encoding, I don’t actually consider encoding time to be a huge deal. None of my encodes were slower than 3 fps on my 5800X3D, which is plenty for running on my media server as an overnight job. For real-time encoding, I would just grab an Intel Arc card and redo the whole thing, since the bitrates will be different anyway.

    Encoding speed heavily depends on your preset. Veryslow will give you better compression than medium or fast, but at a heavy cost in encoding speed. You’re not gonna re-encode a movie overnight on a slow preset. GPU encoding will also give you worse results than a CPU encode, so that’s something one would have to take into consideration. It’s not a big deal when you’re streaming, but for video files I’d much prefer using the CPU.
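
    For example, with x265 the preset is just a flag: same CRF, very different encode time and file size (rough sketch, not exact settings):

    ```bash
    # Same quality target, different speed/size trade-off.
    ffmpeg -i movie.mkv -c:v libx265 -crf 27 -preset medium   -c:a copy out_medium.mkv
    ffmpeg -i movie.mkv -c:v libx265 -crf 27 -preset veryslow -c:a copy out_veryslow.mkv
    ```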

    I consider the ‘good enough’ level to be: if I didn’t pixel peep, I couldn’t tell the difference. The visually lossless levels were the first CRF levels where I couldn’t tell a quality difference even when pixel peeping with imgsli. I also included VMAF results, which suggest the quality-loss levels are all about the same at a pixel level.

    I was mostly talking about how you organised your table by using CRF values as the rows. It implies that one should compare the results in each row, but that comparison doesn’t make much sense. E.g. looking at row 24, one might think that AV1 is less effective than h.264/h.265 due to the greater file size, but the video quality is vastly different. A more informative way to present the data might have been to organise the rows by their VMAF score.

    Hopefully I don’t come across as too cross or argumentative; I just want to give some feedback on how to present the data in a clearer way for people who aren’t familiar with how encoding works.

      • glizzyguzzler@lemmy.blahaj.zone · 10 months ago

        GPU encoding uses (relatively) simpler fixed function encoders that do it much faster than the CPU which uses its general purpose transistors to run an encoding algorithm. End result is GPU encoding is speedy at the cost of visual quality per bitrate; the file size is bigger for same visual quality as a CPU encode. Importantly for storing your videos - CPU encoding, while much slower, will get your file size smaller at the same visual quality threshold you desire, so you can save more videos per drive!