MSU Video Codecs Comparison 2021 Part 4: Cloud
Sixteenth Annual Video-Codec Comparison by MSU
Anastasia Antsiferova,
Egor Sklyarov,
Nickolay Safonov,
Alexander Gushin,
Nikita Alutis
Moscow State University (MSU), Graphics and Media Lab
Dubna International State University
Institute for Information Transmission Problems, Russian Academy of Sciences
News
- 25.05.2022 Comparison update: the leader in the 1080p HEVC MS-SSIM category has changed
- 17.05.2022 Release of the comparison
Navigation
- Objective results
- Subjective results
- Time deviation
- Download
- Participated services
- Subjectify
- Features
- Thanks
- Contact information
- Subscribe to updates
Objective Results
- The results below are based solely on quality scores and do not take encoding speed into account
- Services whose scores differ by less than 1% share the same place
[Leaderboard charts for each metric: YUV-SSIM 6:1:1, YUV-MS-SSIM 6:1:1, YUV-PSNR (avg. MSE) 6:1:1, Y-VMAF 0.6.1, and Y-VMAF NEG 0.6.1. Leaders in the charts include Alibaba Update Account and Tencent Media Processing Service.]
* - YUV-VMAF was calculated as VMAF over all colour planes (Y, U, V), following the same methodology as YUV-SSIM, YUV-PSNR, and the other metrics.
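Assuming the usual 6:1:1 Y:U:V weighting with mean normalization (the exact normalization is our assumption and is not stated on this page), the per-plane combination can be sketched as:

```python
def combine_planes_611(score_y: float, score_u: float, score_v: float) -> float:
    """Combine per-plane metric scores with a 6:1:1 Y:U:V weighting,
    as used for YUV-SSIM, YUV-PSNR and, here, YUV-VMAF.

    Assumption: a normalized weighted mean; the report does not state
    the normalization on this page."""
    return (6 * score_y + score_u + score_v) / 8

# Example: luma scored higher than chroma; the result leans toward Y
print(combine_planes_611(0.96, 0.92, 0.90))  # ≈ 0.9475
```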
The winners vary for different objective quality metrics. The participants were rated using BSQ-rate (enhanced BD-rate) scores [1].
[1] A. Zvezdakova, D. Kulikov, S. Zvezdakov, D. Vatolin, "BSQ-rate: a new approach for video-codec performance comparison and drawbacks of current solutions," 2020.
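BSQ-rate enhances the classic Bjøntegaard-delta (BD-rate) measure. As background, here is a minimal sketch of the standard cubic-fit BD-rate between two RD curves (function and variable names are ours, not from the report; the comparison itself used BSQ-rate, not this):

```python
import numpy as np

def bd_rate(rate_ref, q_ref, rate_test, q_test):
    """Classic BD-rate: average bitrate difference (%) between two RD
    curves over their overlapping quality range. Integrates log-bitrate
    over quality using a cubic polynomial fit (standard Bjontegaard)."""
    lr_ref, lr_test = np.log(rate_ref), np.log(rate_test)
    p_ref = np.polyfit(q_ref, lr_ref, 3)    # log-rate as a function of quality
    p_test = np.polyfit(q_test, lr_test, 3)
    lo = max(min(q_ref), min(q_test))       # overlapping quality interval
    hi = min(max(q_ref), max(q_test))
    int_ref, int_test = np.polyint(p_ref), np.polyint(p_test)
    avg_diff = ((np.polyval(int_test, hi) - np.polyval(int_test, lo))
                - (np.polyval(int_ref, hi) - np.polyval(int_ref, lo))) / (hi - lo)
    return (np.exp(avg_diff) - 1) * 100     # percent bitrate change vs. reference
```

A negative result means the test encoder needs less bitrate than the reference for the same quality; BSQ-rate addresses known drawbacks of this scheme, as described in [1].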
Subjective Results
- The results below are based solely on quality scores and do not take encoding speed into account
- Services whose scores differ by less than 1% share the same place
YUV-Subjective
For subjective quality measurement we used the Subjectify.us crowdsourcing platform, involving more than 10,800 participants.
Encoding time deviation
We encoded each sequence three times, on different days and at different times of day. The chart below shows the deviation of encoding time across all videos for each iteration. Large delays can be caused by heavy load on a service's resources (long queues) or by slow access to our videos in storage (the Participated Services table shows which storage each service used).
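The deviation computation can be sketched as follows, with made-up timing data (the real measurements come from the comparison itself):

```python
# Hypothetical encoding times (seconds) for three videos, measured in
# three iterations on different days; real data is in the report.
runs = {
    "iteration_1": [42.0, 118.0, 65.0],
    "iteration_2": [44.5, 130.0, 63.0],
    "iteration_3": [41.0, 310.0, 66.5],  # a queue delay inflates one video
}

def deviation_per_video(runs):
    """Relative spread of encoding time for each video across iterations."""
    per_video = list(zip(*runs.values()))  # group times by video
    return [(max(t) - min(t)) / min(t) for t in per_video]

print(deviation_per_video(runs))  # the second video shows a large deviation
```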
Download and buy report
Objective and subjective comparison of cloud encoding services
Released on May 16
Full version for free
Alibaba Public Account, Alibaba Update Account, AWS Elemental MediaConvert, Coconut, Qencode, Tencent Media Processing Service, Zencoder
1080p, 720p, 480p
VMAF, SSIM, MS-SSIM, PSNR of different variants
5400+ interactive charts
Participated services
- Alibaba Public Account | H264, HEVC | Yes | Alibaba Cloud OSS
- Alibaba Update Account | H264, HEVC | Yes | Alibaba Cloud OSS
- AWS Elemental MediaConvert | H264, HEVC, AV1 | Partial | Amazon S3 (us-east-1)
- Coconut | H264, HEVC | Partial | Amazon S3 (us-east-1)
- Qencode | H264, HEVC, AV1 | Partial | Amazon S3 (us-east-1)
- Tencent Media Processing Service | H264, HEVC, AV1 | Partial | Tencent Cloud COS
- Zencoder | H264, HEVC, AV1 | Yes | Amazon S3 (us-east-1)
Subjective Comparison Methodology
For subjective quality measurement we used the Subjectify.us crowdsourcing platform, involving 10,800+ participants. After removing replies from bots, we obtained 529,171 pairwise answers. The Bradley-Terry model was used to compute the global ranking.
To conduct an online crowdsourced comparison, we uploaded the encoded streams to Subjectify.us. For better browser compatibility, we transcoded them with x264 at CRF=16.
The platform hired study participants and showed them the uploaded streams in pairs. Each pair consisted of two variants of the same test video sequence encoded by different codecs at different bitrates. The videos in each pair were presented to the participant sequentially (i.e., one after another) in full-screen mode. After viewing each pair, participants were asked to choose the video with the better visual quality; they could also replay the videos or indicate that the two had equal visual quality. We assigned each participant 12 pairs, including 2 hidden quality-control pairs, and each participant received a monetary reward after successfully completing the task. The quality-control pairs consisted of test videos compressed by the x264 encoder at 1 Mbps and 4 Mbps. Responses from participants who failed to choose the 4 Mbps sequence for one or more quality-control questions were excluded from further consideration.
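The quality-control filtering described above can be sketched as follows (the data structures and names are hypothetical; Subjectify.us performs this filtering internally):

```python
def filter_responses(responses, qc_answers):
    """Drop all answers from participants who failed any hidden
    quality-control pair.

    responses:  {participant_id: {pair_id: chosen_variant}}  (hypothetical)
    qc_answers: {pair_id: correct_variant}, i.e. the 4 Mbps stream
    """
    valid = {}
    for pid, answers in responses.items():
        # Keep the participant only if every control pair was answered correctly
        if all(answers.get(pair) == good for pair, good in qc_answers.items()):
            valid[pid] = answers
    return valid
```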
In total, we collected 529,171 valid answers from 10,800+ unique participants. To convert the collected pairwise results into subjective scores, we used the Bradley-Terry model, so each codec run received a quality score. We then linearly interpolated these scores to obtain continuous rate-distortion (RD) curves, which show the relationship between the real bitrate (i.e., the actual bitrate of the encoded stream) and the quality score. The "RD Curves" section shows these curves.
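As a sketch of how pairwise answers become a global ranking, here is a minimal Bradley-Terry fit using the classic MM (minorize-maximize) update; the exact implementation used in the study is not published on this page, and the names below are ours:

```python
import numpy as np

def bradley_terry(wins, iters=200):
    """Fit Bradley-Terry strengths from a pairwise win-count matrix.

    wins[i][j] = number of times variant i was preferred over variant j.
    Returns strengths normalized to sum to 1."""
    wins = np.asarray(wins, dtype=float)
    n = wins.shape[0]
    p = np.ones(n)                                  # initial strengths
    for _ in range(iters):
        total = wins + wins.T                       # comparisons per pair
        w = wins.sum(axis=1)                        # total wins of each item
        denom = (total / (p[:, None] + p[None, :])).sum(axis=1)
        p = w / denom                               # MM update
        p /= p.sum()                                # fix the overall scale
    return p

# Toy example: variant 0 preferred 8 of 10 times over variant 1
print(bradley_terry([[0, 8], [2, 0]]))  # → approximately [0.8, 0.2]
```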
We obtained the subjective scores for this study using Subjectify.us. This platform enables researchers and developers to conduct subjective comparisons of image and video processing methods (e.g., compression, inpainting, denoising, matting, etc.) and carry out studies of human quality perception.
To conduct a study, researchers must apply the methods under comparison to a set of test videos (images), upload the results to Subjectify.us and write a task description for study participants. Subjectify.us handles all the laborious steps of a crowdsourced study: it recruits participants, presents uploaded content in a pairwise fashion, filters out responses from participants who cheat or are careless, analyzes collected results, and generates a study report with interactive plots. Thanks to the pairwise presentation, researchers need not invent a quality scale, as study participants just select the best option of the two.
The platform is optimized for comparing large video files: it prefetches all videos assigned to a study participant and loads them onto the participant's device before asking the first question. Thus, even participants with a slow Internet connection won't experience buffering events that might affect quality perception.
To try the platform in your research project, visit www.subjectify.us. This demo video gives an overview of the Subjectify.us workflow.
Codec Analysis and Tuning for Codec Developers and Codec Users
Computer Graphics and Multimedia Laboratory of Moscow State University:
- 17+ years of experience in video-codec analysis and tuning using objective quality metrics and subjective comparisons.
- 30+ video-codec comparison and analysis reports (H.265, H.264, AV1, VP9, MPEG-4, MPEG-2, decoders' error recovery).
- Development of methods and algorithms for codec comparison and analysis, including analysis of individual codec features and options.
Strong and Weak Points of Your Codec
- In-depth analysis of encoder components (motion estimation, GOP-level rate control, mode decision, etc.).
- Strong and weak points of your encoder, plus complete information about encoding quality on different content types.
- Encoding-quality improvement via pre- and post-filtering (including technology licensing).
Independent Evaluation of Your Codec Against Other Codecs for Different Use Cases
- Comparative analysis of your encoder against other encoders.
- We have direct contacts with many codec developers.
- You will learn where your encoder stands among the newest well-known encoders (encoding quality, speed, bitrate handling, etc.).
Encoder Features Implementation Optimality Analysis
We analyze the effectiveness of encoder features (their speed/quality trade-off), which can yield up to a 30% improvement in your codec's speed/quality characteristics. We can also help you tune your codec and find the best encoding parameters.
Thanks
Special thanks to the following contributors to our previous comparisons
Contact Information
Subscribe to report updates
Other Materials
Video resources:
Projects on 3D and stereo video processing and analysis
- MSU S3D-video analysis reports
- MSU 3D Devices Testing
- 3D Displays Video Generation
- 3D Displays Video Capturing
- Stereo Video Depth Map Generation
- SAVAM Saliency-Aware Video Compression & Dataset
- Video Matting Benchmark
- Video Inpainting Benchmark
MSU Video Quality Measurement tools
Programs implementing various objective and subjective video quality metrics
- MSU Video Quality Measurement Tool - objective metrics for codecs and filters comparison
- MSU Human Perceptual Quality Metric - several metrics for exact visual tests
Objective and subjective quality evaluation
tests for video and image codecs
- MSU Video Codecs Comparison 2025
- MSU Video Codecs Comparison 2023-2024
- MSU Video Codecs Comparison 2022
- MSU Video Codecs Comparison 2021
- MSU Video Codecs Comparison 2020
- MSU Cloud Benchmark 2020
- Cloud Encoding Services Comparison 2019
- HEVC/AV1 Codec Comparison 2019
- HEVC/AV1 Codec Comparison 2018
- HEVC/AV1 Codec Comparison 2017
- HEVC Codec Comparison 2016
- HEVC Codec Comparison 2015
- 9th MPEG4-AVC/H.264 Comparison
- 8th MPEG4-AVC/H.264 Comparison
- 7th MPEG4-AVC/H.264 Comparison
- 6th MPEG4-AVC/H.264 Comparison
VirtualDub and AviSynth filters are available here. For a given type of digital video filtering we typically develop a family of algorithms and implementations. There are usually also versions optimized for PC and for hardware (ASIC/FPGA/DSP); these optimized versions can be licensed to companies. Please contact us for details via video(at)graphics.cs.msu_ru.
- MSU Cartoon Restore
- MSU Noise Estimation
- MSU Frame Rate Conversion
- MSU Image Restoration
- MSU Denoising
- MSU Old Cinema
- MSU Deblocking
- MSU Smart Brightness and Contrast
- MSU Smart Sharpen
- MSU Noise generation
- MSU Noise estimation
- MSU Motion Estimation Information
- MSU Subtitles removal
- MSU Logo removal
- MSU Deflicker
- MSU Field Shift Fixer AviSynth plug-in
- MSU StegoVideo
- MSU Cartoonizer
- MSU SmartDeblocking
- MSU Color Enhancement
- MSU Old Color Restoration
- MSU TV Commercial Detector
- MSU filters FAQ
- MSU filters statistics
We work with Intel, Samsung, RealNetworks, and other companies on adapting our filters and other video processing algorithms to specific video streams, applications, and hardware such as TV sets, graphics cards, etc. Some of these projects are non-exclusive, and we also conduct internal research. Please let us know via video(at)graphics.cs.msu_ru if you are interested in licensing such filters or in a custom R&D project on video processing, compression, or computer vision.
- 3D Displays Video Generation
- 3D Displays Video Capturing
- Stereo Video Depth Map Generation
- Automatic Objects Segmentation
- Semiautomatic Objects Segmentation
- New Frame Rate Conversion
- New Deinterlacer
- MSU-Samsung Deinterlacing Project
- Digital TV Signal Enhancement
- Old Film Recovery
- Tuner TV Restore
- Panorama
- Video2Photo
- SuperResolution
- SuperPrecision
- MSU-Samsung image and video resampling
- MSU-Samsung Frame Rate Conversion
- Motion Phase filter
- Deshaker (video stabilization)
- Film Grain/Degrain filter
- Deblurring filter
- Video Content Search
Different research and development
projects on video codecs
- MSU Lossless Video Codec (Top!)
- MSU Screen Capture Lossless Codec (Top!)
- MSU MPEG-2 Video Codec
- x264 Codec Improvement
Other information
- Crazy gallery (filters screams :)
- License for commercial usage of MSU VideoGroup Public Software (please be careful: some software, such as the metrics, has a different license!)
Server size: 8069 files, 1215 MB (server statistics)
Project updated by
Server Team and
MSU Video Group
Project sponsored by YUVsoft Corp.
Project supported by MSU Graphics & Media Lab