Answer a few questions to get personalized recommendations based on benchmark trials.
Select your source media type
What type of content is this?
Choose your priority
Where will you play this?
Where every codec stands today. Click any badge to filter. Dimensions are independent; combine freely.
Approximate playback support across TVs, phones, browsers, receivers, and media players.
Click any entry to expand. From MPEG-2 DVD originals through the AV1/H.266 frontier.
This guide doesn't try to benchmark every codec or cover every edge case. The goal is simpler: find sensible defaults for the formats hobbyists actually rip, and back those recommendations with real data instead of forum wisdom.
Short clips are extracted from real disc rips as lossless FFV1, mathematically identical to the source. These pristine references become the ground truth that every encode is measured against. Clips are chosen to cover the kinds of content you'd actually encounter: fast action, slow pans, fine detail, dark scenes, and mixed content.
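An extraction step like this boils down to a single ffmpeg invocation. Here is a minimal sketch that builds such a command; the helper name, file paths, and trim points are hypothetical, not the project's actual tooling:

```python
import shlex

def ffv1_reference_cmd(source, start_s, duration_s, out):
    """Build an ffmpeg command that cuts a lossless FFV1 reference clip.

    Illustrative only: real extraction may also pin pixel format,
    color metadata, and stream selection.
    """
    return [
        "ffmpeg",
        "-ss", str(start_s),   # seek to the clip start
        "-i", source,          # the disc-rip source
        "-t", str(duration_s), # clip length in seconds
        "-an",                 # video only; audio is out of scope here
        "-c:v", "ffv1",        # FFV1 is lossless, so the clip is bit-exact
        "-level", "3",         # FFV1 version 3 (multithreaded, per-slice CRCs)
        out,
    ]

cmd = ffv1_reference_cmd("rip.mkv", 120, 10, "ref_clip.mkv")
print(shlex.join(cmd))
```

Because FFV1 is lossless, decoding `ref_clip.mkv` yields frames identical to the source, which is what makes it usable as ground truth.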
We're not testing every possible setting; we're trying to find the sweet spot. Testing starts broad with a sweep across CRF values, encoder presets, and tuning profiles to map the landscape. The benchmark framework encodes each clip against every combination in that matrix, recording settings, file size, and encode duration. Then, informed by those results, we drill down: isolating individual encoder parameters to figure out which knobs actually improve quality versus which ones just burn CPU time for no gain.
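A sweep like this is just a Cartesian product over the parameter axes. A minimal sketch, with hypothetical grid values (the real benchmark's ranges may differ) and x264 as an example encoder:

```python
from itertools import product

# Hypothetical sweep values; the actual grid depends on the encoder under test.
crfs = [18, 22, 26, 30]
presets = ["fast", "medium", "slow"]
tunes = [None, "film", "grain"]  # None = encoder default, no -tune flag

matrix = [
    {"crf": crf, "preset": preset, "tune": tune}
    for crf, preset, tune in product(crfs, presets, tunes)
]

def encode_args(combo):
    """Translate one matrix entry into illustrative libx264 ffmpeg arguments."""
    args = ["-c:v", "libx264", "-crf", str(combo["crf"]), "-preset", combo["preset"]]
    if combo["tune"]:
        args += ["-tune", combo["tune"]]
    return args

print(len(matrix))  # 4 CRFs x 3 presets x 3 tunes = 36 encodes per clip
```

Even a modest grid multiplies quickly, which is why the broad sweep is followed by targeted one-parameter-at-a-time drilling rather than a finer full grid.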
Each encode is compared against its lossless reference in four independent FFmpeg passes. No single metric tells the whole story, so we triangulate:
Results flow into a SQLite database where we look for patterns: which settings give the best quality-per-byte? Which parameters actually help versus just slowing things down? The best-performing settings are then validated on entirely new content that wasn't part of the original test: a sanity check that our recommendations work broadly, not just on the clips we happened to test.
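A quality-per-byte query against such a database can be sketched in a few lines. The schema and rows below are a minimal stand-in, not the project's actual tables:

```python
import sqlite3

# Toy stand-in for the benchmark database; the real schema has more columns.
con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE encodes (
    clip TEXT, crf INTEGER, preset TEXT,
    vmaf REAL, size_bytes INTEGER, encode_s REAL)""")
con.executemany("INSERT INTO encodes VALUES (?,?,?,?,?,?)", [
    ("action", 22, "slow", 95.1, 40_000_000, 310.0),
    ("action", 26, "slow", 91.8, 24_000_000, 290.0),
    ("action", 22, "fast", 94.0, 46_000_000,  90.0),
])

# Rank settings by VMAF per megabyte: one crude "quality-per-byte" view.
best = con.execute("""
    SELECT crf, preset, vmaf / (size_bytes / 1e6) AS vmaf_per_mb
    FROM encodes
    WHERE clip = 'action'
    ORDER BY vmaf_per_mb DESC
    LIMIT 1
""").fetchone()
print(best)
```

In this toy data the CRF 26 encode wins: it gives up a few VMAF points but nearly halves the file size, which is exactly the kind of trade-off the pattern-hunting is meant to surface.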
VMAF (Video Multimethod Assessment Fusion) is the gold standard for perceptual video quality measurement. Developed by Netflix in collaboration with the University of Southern California, it uses machine learning trained on thousands of subjective quality ratings to predict how a human viewer would rate video quality on a 0-100 scale.
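In practice VMAF is computed via FFmpeg's `libvmaf` filter, feeding it the encode and the lossless reference. A sketch of that invocation, built as a Python command list; file names are placeholders, and the real benchmark's flags (model, threading) will vary by setup:

```python
def vmaf_cmd(distorted, reference, log="vmaf.json"):
    """Build an ffmpeg command that scores `distorted` against `reference`
    with the libvmaf filter. Illustrative; paths are placeholders."""
    return [
        "ffmpeg",
        "-i", distorted,    # the encode under test
        "-i", reference,    # the pristine FFV1 clip
        "-lavfi", f"libvmaf=log_fmt=json:log_path={log}",
        "-f", "null", "-",  # discard decoded frames; only the score matters
    ]

cmd = vmaf_cmd("encode.mkv", "ref_clip.mkv")
print(" ".join(cmd))
```

The JSON log contains per-frame scores plus a pooled mean, which is what ends up in the results database.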
This is a hobbyist project, not a lab. Benchmarks run on a single machine for consistency, and only test the codecs and formats most relevant to disc ripping today, not every codec under the sun. Recommendations will evolve as we add more source formats and encoders. Want to see the raw data? Download the SQLite database.