Visual Acuity, Vernier Acuity, Anti-Aliasing, and You

Note: This is a lightly edited post written in 2012. My core argument at the time was that FXAA and other post AA algorithms are “Good Enough”, so there isn’t much room for MSAA or other AA techniques that impose a significant burden on the renderer (like an MSAA depth pass). Since then, I’ve changed my mind, which will be part of another post in the future. But for now, you have my predictions from 2012, which didn’t go as I had expected.

I don’t actually have any inside information about next-gen consoles, but let’s assume that next-gen consoles will output at 1080p and have about as much processing power as a top-end GPU.

The argument essentially boils down to this: is FXAA, which costs less than 1 ms, enough, or do you want (or NEED) better AA? The first thing you have to understand is the difference between Visual Acuity and Vernier Acuity, which unfortunately is one of those things that no one teaches you. So let’s try.

You have all heard about visual acuity before. If you need a refresher you can talk to your good friend Wikipedia: http://en.wikipedia.org/wiki/Visual_acuity. Essentially, as things get really small, the human eye has trouble distinguishing them (duh). So if text is too small and too far away then you can’t read it.

What you probably don’t know about is Vernier Acuity. Wikipedia is of course a great resource: en.wikipedia.org/wiki/Vernier_scale. Vernier acuity should make sense to anyone who has worked in games and seen aliasing. I’ve talked about this issue before when I argued that Apple is lying to you when they call the iPhone 4 a “Retina Display” (http://filmicgames.com/archives/698), as well as in an earlier post about the difference between 720p and 1080p. But it should make intuitive sense to you. The human eye has an incredible ability to tell if two lines aren’t exactly aligned with each other. That’s how a vernier caliper works. You can think of vernier acuity as the official term for our eye’s ability to see aliasing.
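If you want to put rough numbers on the difference, here is a small back-of-the-envelope sketch in Python. The thresholds are ballpark figures commonly cited in the vision literature (about 1 arcminute for 20/20 visual acuity, about 5 arcseconds for vernier acuity); real eyes vary, so treat this as an illustration rather than gospel.

```python
import math

# Ballpark thresholds (individual eyes vary):
#   visual acuity  ~ 1 arcminute   (resolving fine detail at 20/20)
#   vernier acuity ~ 5 arcseconds  (spotting misaligned edges)
VISUAL_ACUITY_ARCMIN = 1.0
VERNIER_ACUITY_ARCMIN = 5.0 / 60.0

def arcmin_per_pixel(diagonal_in, distance_in, h_pixels=1920, aspect=16.0 / 9.0):
    """Visual angle subtended by one pixel, in arcminutes."""
    width_in = diagonal_in * aspect / math.hypot(aspect, 1.0)  # screen width in inches
    pixel_pitch_in = width_in / h_pixels                       # one pixel, in inches
    angle_rad = 2.0 * math.atan(pixel_pitch_in / (2.0 * distance_in))
    return math.degrees(angle_rad) * 60.0
```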

But when it comes to choosing what AA technique you need for a given resolution, Visual Acuity vs Vernier Acuity is incredibly important. There are three cases (with a small classification sketch in code after the list):

1: Visual Acuity < Vernier Acuity < Resolution
If you are rendering at a resolution that is finer than Vernier Acuity, then AA is worthless (except for crazy extreme cases which I’ll get to in a moment). If you rendered to a screen with 10,000 DPI there would be no need for AA because the resolution is so fine that your eye can’t make out the aliased edges.

2: Resolution < Visual Acuity < Vernier Acuity
On the other hand, if you are rendering at a resolution coarser than Visual Acuity, then your eye can clearly see blurriness in your original image. In this case any AA technique that increases the blurriness of your image will be easy to see. Also, in this situation techniques like MSAA have a definite quality advantage over post AA techniques (like FXAA), but are usually more expensive.

3: Visual Acuity < Resolution < Vernier Acuity
If your resolution is in between Visual Acuity and Vernier Acuity then you are in a strange land. Your eye can’t pick out individual details, but it can tell if there are artifacts like aliasing. In this case, FXAA should be good enough 99% of the time. Sure, a more expensive option like MSAA might look a teeny bit better. But if your AA technique reduces sharpness a little bit around the edges, your eye can’t tell because the resolution is beyond your Visual Acuity.
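Here is a minimal sketch of that three-way classification, meant to pair with the arcmin_per_pixel helper above (same ballpark thresholds, same caveats):

```python
def acuity_case(arcmin_per_px,
                visual_acuity_arcmin=1.0,
                vernier_acuity_arcmin=5.0 / 60.0):
    """Classify a display setup into the three cases above."""
    if arcmin_per_px < vernier_acuity_arcmin:
        return 1  # finer than vernier acuity: AA buys you nothing
    if arcmin_per_px > visual_acuity_arcmin:
        return 2  # coarser than visual acuity: blur shows, MSAA has an edge
    return 3      # in between: post AA like FXAA should be good enough
```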

My belief is that on next-gen consoles, most users will be in category 3. Most people who actually work in games (myself included when I was at Naughty Dog) think that aliasing is much more of an issue than it actually is. Partially, it’s because we are trained to look for it. But also, the median viewer sees the screen at much lower effective resolution than we do. When I was at Naughty Dog I sat about 4 feet away from a 32-inch screen, but the average user sits about 10 feet away. And if resolution goes from 720p to 1080p, aliasing becomes even less of an issue for the average user.
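Plugging those two setups into the sketches above (and assuming a 32-inch 1080p panel in both cases, which is my own simplification) makes the point:

```python
dev_desk    = arcmin_per_pixel(32, 48)   # 32-inch 1080p at 4 ft  -> ~1.04 arcmin
living_room = arcmin_per_pixel(32, 120)  # same panel at 10 ft    -> ~0.42 arcmin
print(acuity_case(dev_desk), acuity_case(living_room))  # 2 (borderline) and 3
```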

So that’s about where I stand. Once the next-gen consoles come around and we move to 1080p, there is a negligible difference between what a cheap solution like FXAA gives you and a theoretical “perfect” solution. The original comment that started this discussion was about using tiling to get MSAA, and I find it pretty hard to believe that the quality difference of MSAA (with TILING!) would justify the cost on next-gen hardware.

Side Note #1
There are always exceptions. If you are making Flower 2 and you are willing to dedicate 60% of your budget to rendering grass, then MSAA might be worth it. But for “standard” games like Skyrim, Halo, Modern Warfare, Uncharted, etc., I see FXAA as the best solution.

Side Note #2
For me, the #1 problem in video games is Shading and the #2 problem is Lighting. Everything has that “weird video game look” to it. If you compare games to other games, you would think that games look pretty good. But the comparison breaks down when you put games next to either VFX or real life on the same monitor. Every time a commercial comes on TV showing in-game footage I cringe a little bit (even for games that I worked on). We can’t even make the easy surfaces like concrete/wood/tile look right, and we are REALLY far away from the harder surfaces like skin/cloth/hair/marble. That’s where I would rather put cycles if I were planning a next-gen game budget.

Side Note #3
So what about thin objects that cause flickering because they are so small? Simple: don’t have them. Use alpha cards for things like a million blades of grass instead of individual triangles. The hardware isn’t very efficient with lots of 1-pixel triangles, so you’re better off using alpha cards anyway. For thin objects that are harder to replace, there are more interesting options for blending them with geometry shaders if you really need to. And if your game is a true outlier case (like you absolutely must render a field) then see Side Note #1.

Side Note #4
Still think I’m completely wrong? This is easy enough to test. Just load one of the post AA comparison samples onto a PC, hook it up to a 42-inch display at 1080p, and invite random people off the street to sit 12 feet away. Then switch between no AA, 4x MSAA, and FXAA randomly and ask them to rate the quality. That’s the real way to settle this issue, although the results would be a bit biased because people are more likely to notice aliasing when asked to look for it than when just playing at home.
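If you wanted to script that test, a minimal sketch might look like the following. The get_rating callback is hypothetical; it stands in for “switch the renderer’s AA mode and ask the viewer for a score”:

```python
import random
from statistics import mean

CONDITIONS = ["no AA", "4x MSAA", "FXAA"]

def run_session(get_rating, trials_per_condition=5):
    """Present each AA mode several times in random order, unlabeled."""
    schedule = CONDITIONS * trials_per_condition
    random.shuffle(schedule)  # the viewer never learns which mode is active
    ratings = {c: [] for c in CONDITIONS}
    for condition in schedule:
        ratings[condition].append(get_rating(condition))
    return {c: mean(scores) for c, scores in ratings.items()}
```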

Side Note #5
Users and video game reviewers know virtually nothing about why an image looks good or bad. That’s why you always hear them say “texture resolution”. If a game looks good because it has good lighting, nice animation, and high-quality lighting models, most forum posters will say “Wow, this game has such detailed textures!”. But when a game has bad animation and gamma-space lighting they will say “Those textures look low-res!”. I don’t remember hearing a game reviewer say “Game A has a great anisotropic reflectance model on its brushed metal shaders!”. Instead, they’ll say “Game A has really crisp textures”, as if the developer magically found an extra 100 megs of texture RAM inside the console.

The same thing tends to happen with anti-aliasing. Often you’ll see two games, let’s call them Game A and Game B, where A looks much better than B. The reviews will say “Game A has detailed textures and clean edges” whereas “Game B has low-res textures and lots of jaggies”, even when they both use the exact same technique. Game reviewers don’t know what to say other than texture detail and jaggies, so they use those terms to justify what they already believe.
