Real-Time Anti-Aliasing
Matt Pickering


Feb 24, 2016

Transcript

Real-Time Anti-Aliasing

Matt Pickering - 100017537
Advanced 3D Graphics Programming

Introduction

- Anti-aliasing: something you've probably encountered before
- Why would we want to use anti-aliasing?
- How is it done?

AA is something you've probably heard of, especially if you're a PC gamer.

Normally we just crank the graphics up to the highest setting our machine can handle, perhaps without really understanding what they do.

So before anything else, I'm going to talk about what aliasing actually is, then I'll discuss the ways in which we can combat it.

Overview

- What is aliasing?
- What causes it?
- Using anti-aliasing to solve the problem
- The techniques we can use
- Pros and cons
- Code examples
- Texture filtering
- The future of anti-aliasing
- Summary
- Resources

Format:
- What I'm going to say
- What I'm saying
- What I've said

Background

Aliasing can affect any kind of signal processing.

Signal processing is a field of electrical engineering and applied mathematics

Not just images:
- Audio
- Sensor data
- Radio transmissions
- Etc.

It's worth noting that aliasing doesn't just happen to images; it can happen to any kind of signal that we might send to a device, including sound, sensor data (like from medical equipment), and radio transmissions.

There's a whole field of study regarding how aliasing can affect signals of various kinds, relevant to electrical engineering and applied mathematics.

Since we're interested in graphics programming, I'll stick with the aliasing effects that we see in image processing.

Context

- We want to view a digital image (the rendered frame of our game)
- The image gets reconstructed by the display device
- Bottleneck

- Our eyes and brain are also part of the reconstruction process

When we view a digital image, its quality is bottlenecked by the capabilities of the device on which we are viewing it (monitor, printer, whatever).

The device is reconstructing or interpolating the original image (the rendered frame from the GPU).

Description

Bottleneck:
- Reconstruction resolution is too low
- Detail is lost or distortion occurs

- The distorted/low-resolution image is an alias of the original

If the resolution of a reconstructed image is too low, it differs from the original, so what we actually end up with is an alias. Hence the name.

Because the resolution is too low, the quality of the alias is poor, so there are going to be undesirable artefacts in our final image.

Description

Pre-aliasing:
- Aliasing that happens during sampling

Post-aliasing:
- Aliasing that happens during reconstruction

The image has to be sampled before it can be reconstructed. If the sampling is not thorough enough, aliasing can occur at this point. This is called pre-aliasing.

Aliasing that happens due to the limitations of our display device's reconstruction capability is called post-aliasing.

Spatial Aliasing

- Display device limitations

What we want

What we get

So why is the resolution in our reconstructed image not good enough?

Pixels on the screen are arranged in a grid, meaning that although our device is good at drawing lines that are perfectly horizontal or vertical, it struggles with representing angles, because pixels can't be half one colour, half another colour.

Rasterization simply decides what colour the pixel should be based on whether or not the shape covers the centre of the pixel.
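That centre test can be sketched with edge functions; this is an illustrative sign test (names and layout are my own) for a point, such as a pixel centre, against a counter-clockwise triangle:

```cpp
// Rasterization-style coverage test: a pixel centre is inside a
// counter-clockwise triangle iff it lies on the left of all three edges.
struct Vec2 { double x, y; };

// Signed cross product (b - a) x (p - a): positive when p is left of a->b.
double edge(Vec2 a, Vec2 b, Vec2 p) {
    return (b.x - a.x) * (p.y - a.y) - (b.y - a.y) * (p.x - a.x);
}

bool centreCovered(Vec2 a, Vec2 b, Vec2 c, Vec2 centre) {
    return edge(a, b, centre) >= 0 &&
           edge(b, c, centre) >= 0 &&
           edge(c, a, centre) >= 0;
}
```

Whole pixels are coloured or not coloured by this binary test, which is exactly what produces the stair-stepped edges described below.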

This is what creates the infamous "jaggies", i.e. the unappealing blocky lines around the edges of the objects in our scene.

Spatial Aliasing

- Limited resolution

What we want

What we get

The size of the pixels is also a problem. Objects that are far away cannot be drawn with enough detail, and get distorted. This distortion is called a Moiré pattern.

Although advances in display technology allow us to have more pixels that are closer together (smaller dot pitch), this does not solve the problem completely.

Temporal Aliasing

- We can only show so many frames per second
- If objects in our scene are being transformed (moving/rotating) faster than our sample rate (frames per second)
- We get temporal aliasing
- Objects appear to jump or stutter instead of moving smoothly

One kind of aliasing that is commonly encountered is called temporal aliasing.

Because we can only render and display a limited number of frames per second, objects in our scene that are moving faster than we can show are going to appear to jump in position instead of moving smoothly.

Temporal Aliasing

Example:

Spoked wheel/propeller

Wagon-wheel effect

Appears to rotate much more slowly than it really is, or even appears to rotate backwards.

Consider a wheel or propeller with 5 spokes or blades. With every multiple of 72 degrees of rotation, the wheel looks exactly the same as it did before, because of its rotational symmetry.

In a simple example, if it rotates clockwise by 70 degrees in the time that it takes us to render a frame, it looks exactly the same as if it rotated anti-clockwise by 2 degrees.
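The arithmetic above can be sketched as a small helper; this is an illustrative sketch (function name and layout are my own), assuming a wheel with n-fold rotational symmetry sampled once per frame:

```cpp
#include <cmath>

// Apparent per-frame rotation of a wheel with n-fold rotational symmetry.
// The true rotation per frame is folded into (-period/2, period/2], which is
// all a viewer can distinguish between two discrete frames.
double apparentRotation(double degreesPerFrame, int spokes) {
    double period = 360.0 / spokes;            // 72 degrees for 5 spokes
    double r = std::fmod(degreesPerFrame, period);
    if (r > period / 2.0)   r -= period;       // e.g. 70 -> -2 (looks anticlockwise)
    if (r <= -period / 2.0) r += period;
    return r;
}
```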

This is called the wagon-wheel effect. There are some good examples of it in my references.

Aliasing

- Insufficient resolution

Insufficient sample rate

So now we see that aliasing occurs whenever we try to represent an image that has a higher resolution or frequency than our device can reconstruct.

Anti-Aliasing

We want a way to fool the viewer into thinking the jagged edges and distortions in our scene are actually nice and smooth.

How?

Anti-aliasing refers to techniques for minimizing the distortion effects in our reconstructed image.

Techniques

- FSAA
- Super-sampling
- Multi-sampling

Basic principle:

There are several techniques, each with many variations. Most are described as mathematical algorithms rather than source code examples, and are just different approaches to achieving the same results. The two main approaches are called super-sampling and multi-sampling.

FSAA = Full-Scene Anti-Aliasing, a term that often goes hand in hand with super-sampling and multi-sampling in relation to games, because we want to remove all the jaggies in our scene.

Regardless of which technique we choose, what we want to achieve is something like the image on the right. The pixels around the character are subtly blurred with the colour of the surrounding pixels. These images are zoomed in for illustration purposes; at the default level of zoom the anti-aliasing on the right-hand image is very effective.

Super-sampling

- Brute force
- 4x super-sampling @ 800 x 600 = 1600 x 1200
- So for every pixel on the 800 x 600 screen
- We're effectively drawing 4 sub-pixel samples
- Those 4 sub-pixels are then combined
- We get the final pixel colour for the 800 x 600 screen

Super-sampling, aka over-sampling: we effectively draw the image a power of 2 larger than the final resolution, then combine colours back down into the final image.

This is why you see anti-aliasing settings like 2x, 4x, 8x, 16x etc.

Technically it's not correct to say we draw the image larger than the final image, but we do have to create and store the sub-pixel information, which is a lot of extra work.

Super-sampling

Visualising super-sampling:

We want to draw the red objects

The small squares are our pixels

The pixels that are part red, part white get averaged into a shade of orange.

- Sub-pixels: 100% red, final colour is red
- Sub-pixels: 50% red, 50% white, final colour is averaged into a pale red/orange colour
- Sub-pixels: 100% white, final colour is white

Super-sampling

Pros:
- We get higher sampling resolution for the entire image

Cons:
- It's extremely expensive to make the image much larger than we actually want it to be, then scale it down by combining samples

Consider 16x SSAA @ 1920 x 1080: that's 7680 x 4320 samples, over 33 million per frame!

Advantages: Super-sampling is good because it anti-aliases the entire image, not just the edges of the objects, so every pixel ends up with an accurate colour at its location. This reduces texture artefacts like shimmering and gives an overall improved appearance. For this reason, super-sampling is often used in ray-tracing applications.

Disadvantages: The costs of using super-sampling are huge.

At each sub-pixel the graphics card has to sample texture data, perform shader operations, do z-buffer checks and then do the combination calculations. We need a big back-buffer and therefore have a large fill cost. Overall the process is horribly slow, and if our display device is updating faster than we can draw, we're going to get frame tearing.
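The combine step can be sketched as a simple box filter. This is an illustrative sketch (names assumed, single-channel greyscale for brevity), not code from the slides:

```cpp
#include <cstddef>
#include <vector>

// Downsample a 2x-per-axis super-sampled greyscale image (4x SSAA) by
// averaging each 2 x 2 block of sub-pixels into one final pixel.
std::vector<float> resolve4xSSAA(const std::vector<float>& big,
                                 std::size_t outW, std::size_t outH) {
    std::vector<float> out(outW * outH);
    std::size_t bigW = outW * 2;   // the big image is twice as wide and tall
    for (std::size_t y = 0; y < outH; ++y) {
        for (std::size_t x = 0; x < outW; ++x) {
            float sum = big[(2 * y)     * bigW + 2 * x]
                      + big[(2 * y)     * bigW + 2 * x + 1]
                      + big[(2 * y + 1) * bigW + 2 * x]
                      + big[(2 * y + 1) * bigW + 2 * x + 1];
            out[y * outW + x] = sum / 4.0f;   // combine the 4 sub-pixel samples
        }
    }
    return out;
}
```

A pixel whose 2 x 2 block straddles a red/white edge averages to the pale red/orange shade described on the previous slide.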

Multi-sampling

- Optimisation over super-sampling
- For 4x MSAA we still create just as many sub-pixels
- But the pixel pipeline isn't run for each sample
- Instead, we first do the same calculation as rasterization: if the polygon covers the centre of a pixel, that pixel's colour is the polygon's colour
- Then the percentage of the sampling points covered by the polygon is multiplied by that colour
- This gives us the final colour for the pixel

One technique I read about for multi-sampling involves comparing the Z-depth of each sub-pixel: if the Z-depth for each sample is the same, you know you're not at the edge of the object, and you can skip any further calculations and render as normal.

Nvidia like to use something called Coverage Sampling Anti-Aliasing (CSAA). Coverage is their name for averaging sub-pixels into a final pixel colour, but they take separate samples from the Z-buffer. You see settings like 8x CSAA and 8xQ CSAA among Nvidia's anti-aliasing types; the Q stands for Quality, and just means that twice as many Z-buffer samples are taken.

Multi-sampling

Visualising multi-sampling:

- Middle sample: same as rasterization

- Sub-pixel samples: 2 sub-pixels are covered
- Final colour = 50% of poly colour

We set the colour of a pixel based on whether or not the centre point of that pixel is covered by the polygon.

Then we multiply that colour by the percentage of sub-pixel samples that are covered.

Multi-sampling

Pros:
- Much lower overheads

Cons:
- Multi-sampling extends the rasterized area to include all pixels in which at least some sampling points are covered
- Even if the pixel centre is not covered!
- This means that if a polygon edge covers some sub-pixel samples, but not the pixel centre, a sample gets taken from beyond the UV boundaries of the polygon

Advantages: Although at first it appears to be just as costly as super-sampling, you're only sampling colour information once, for the centre sample. Then you just multiply the centre sample colour by the percentage of samples covered by the polygon. This decreases your overheads a lot.
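As a sketch of that advantage, the per-pixel resolve only needs one shaded colour plus a coverage count. The names and the 4-sample layout here are my own assumptions:

```cpp
#include <array>

// Illustrative MSAA resolve for one pixel: the colour is shaded once (at the
// centre), then weighted by the fraction of sub-pixel sample points that the
// polygon covers.
float msaaPixel(float centreColour, const std::array<bool, 4>& sampleCovered) {
    int covered = 0;
    for (bool c : sampleCovered)
        if (c) ++covered;
    // e.g. 2 of 4 samples covered -> final colour = 50% of the poly colour
    return centreColour * (static_cast<float>(covered) / sampleCovered.size());
}
```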

Disadvantages: The pattern of sample points varies by graphics card manufacturer, meaning you get different results on different hardware.

Older games didn't bother writing translucent pixels to the framebuffer, which prevents multi-sampling from working correctly.

Multi-sampling also has problems when the edge of a polygon doesn't pass through the centre of a pixel but does pass through some of the sub-pixel samples. If you have a texture that is larger than the polygon on which it is being displayed, the centre sample point will end up sampling texels beyond the UV boundaries of the polygon, which causes artefacts.

Centroid Multi-sampling

We can avoid these artefacts with centroid multi-sampling.

This simply moves the centre sampling point to be between the sample points that are covered by the polygon.

This ensures that the centre sample point is always taken from within the UV boundaries of the polygon.

Anti-Aliasing in DirectX

I hope this code is readable

Enabling multi-sampling in DirectX 11 is very simple.

When you create a struct to describe your swap-chain parameters, you can specify how to perform MSAA. The higher the number, the more samples are taken. Direct3D 11 cards are guaranteed to support up to 4 samples, but the minimum we can specify is 1.
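A minimal sketch of those swap-chain fields, assuming D3D11, an existing device pointer, and 4x MSAA. Only the sample-related fields are shown, and querying support first avoids requesting an unsupported mode:

```cpp
// Sketch: requesting 4x MSAA in the D3D11 swap-chain description.
// Assumes an existing ID3D11Device* d3dDevice; error checking omitted.
UINT qualityLevels = 0;
d3dDevice->CheckMultisampleQualityLevels(
    DXGI_FORMAT_R8G8B8A8_UNORM, 4, &qualityLevels);

DXGI_SWAP_CHAIN_DESC scd = {};
scd.SampleDesc.Count   = 4;                 // samples per pixel; 1 = no MSAA
scd.SampleDesc.Quality = qualityLevels - 1; // highest supported quality level
// ...the remaining buffer/format/window fields are filled in as normal...
```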

In DirectX 9, multi-sampling can be enabled when you create your struct of presentation parameters.

Anti-Aliasing in Shaders

HLSL:

Assembly shader:

In Pixel Shader 2.0 and above, centroid sampling is automatically enabled for each pixel shader input that has a colour semantic. Alternatively, you can use _centroid as a modifier on your TEXCOORDs to enable centroid sampling on any pixel shader input. Refer to the Week 2 presentation on Pixel Shaders.
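As an illustration of the above (the struct and field names are hypothetical), a pixel shader input might look like this:

```hlsl
// Hypothetical pixel shader input. Colour semantics are centroid-sampled
// automatically in ps_2_0+; for texture coordinates, the _centroid suffix
// (or the 'centroid' modifier in Shader Model 4+) opts in explicitly.
struct PSInput
{
    float4 diffuse : COLOR0;              // centroid-sampled automatically
    float2 uv      : TEXCOORD0_centroid;  // centroid sampling requested
};
```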

Texture Filtering

- It's not just our polygons that have trouble being represented on a screen at an angle
- The texture on the polygon is a grid-like arrangement of texels
- If the texture is rotated or scaled, it won't map to our screen pixels accurately, and we'll get aliasing

Solution: mip-mapping

I only want to touch on this briefly, as I feel it may overlap with one of the presentations still to come.

Scaling up isn't such a big deal; the texture gets stretched, but it doesn't look too bad. Scaling down is more of a problem: if we have a 512 x 512 texture on an object that's really far away, we'll be trying to apply a 512 x 512 texture to an object which is only a few pixels big on our screen.

Not only is this inefficient, it can cause the texture to shimmer or crawl with distortion effects and noise. The solution is to pre-create multiple versions of our texture that we've shrunk down with a high-quality filter, in true Blue Peter style (here's one I made earlier). Then we can swap the texture for one that's a more appropriate size based on how far away it is. These different-sized textures are called mip-maps.

Bilinear/Trilinear Filtering

Bilinear Filtering:
- We can have multiple mip-maps being drawn on one large object (like terrain)
- Problem: we're only blending pixels from within each mip-map
- Very obvious seam between mip-map levels

Trilinear Filtering:
- Solution: blend with the neighbouring mip-map levels as well

Great. We fix one thing and get another problem.

Anisotropic Filtering

- AA makes object edges look better
- AF makes object interiors look better
- Surfaces at oblique viewing angles

- IDirect3DDevice9::SetSamplerState()
- Texture index
- Mag/Min/Mip filter
- D3DTEXF_ANISOTROPIC
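A hedged sketch of those calls, assuming an initialised device pointer and hardware caps that report anisotropic min/mag filtering support:

```cpp
// Sketch: enabling anisotropic filtering on sampler stage 0 in Direct3D 9.
// Assumes an initialised IDirect3DDevice9* device; error checking omitted.
device->SetSamplerState(0, D3DSAMP_MINFILTER, D3DTEXF_ANISOTROPIC);
device->SetSamplerState(0, D3DSAMP_MAGFILTER, D3DTEXF_ANISOTROPIC);
device->SetSamplerState(0, D3DSAMP_MAXANISOTROPY, 8); // degree of anisotropy
```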

Anisotropic means directionally dependent.

Basically, what AF does is change the number of samples we take from a texture based on what angle it's at to the camera.

The more angled the texture, the more samples we take. If the texture is facing us directly, we only need to take a few samples because the likelihood is that the entire texture is at the same mip-level.

The reason we need to take more samples on a texture that is at a steep angle is that one screen pixel may span many texels, meaning that too few samples would make the texture blurry.

Temporal Anti-aliasing

- Remember we talked about stuff moving/rotating too fast?
- How do we fix that?
- Sample at least 2x as fast as the fastest-moving object (the Nyquist frequency)
- So we either need very high framerates or lower-detail objects (not really an option)
- Cheat: store frames we've already drawn
- Blend them with the current frame (post-processing)
- Basic motion blur; can cause ghosting
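The "cheat" above can be sketched as a per-pixel blend between the current frame and a stored previous frame. The names and single-channel format are illustrative, not from the slides:

```cpp
#include <cstddef>
#include <vector>

// Cheap "motion blur": blend the freshly rendered frame with the previous
// frame kept from last time. blend = 0 shows only the new frame; larger
// values leave longer (and ghostier) trails.
std::vector<float> blendFrames(const std::vector<float>& current,
                               const std::vector<float>& previous,
                               float blend) {
    std::vector<float> out(current.size());
    for (std::size_t i = 0; i < current.size(); ++i)
        out[i] = (1.0f - blend) * current[i] + blend * previous[i];
    return out;
}
```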

True motion blur is complicated and expensive to create, so we compromise by reusing old frames because it's cheaper. The effect is convincing enough in most cases, but if you look closely at a fast-moving object you can see ghosts of it, i.e. faint trails of its position in previous frames.

Future: MLAA

- Still experimental, complicated, being developed by Intel
- Post-processed anti-aliasing
- Uses shape recognition to identify edges
- Breaks edges down into L-shapes
- Connects the midpoint of the secondary edge with the end point of the primary edge
- This gives us an area
- Calculates blending weights: C_new = (1 - area) * C_old + area * C_opposite
- This is an approximation of MLAA
- Blend each pixel with its neighbours using the calculated weights

Morphological Anti-aliasing: because it's done during post-processing, it's cheaper and faster than MSAA, and produces similar-quality results to SSAA.

- C_new = new colour
- C_old = old colour
- C_opposite = colour of the pixel on the opposite side of the primary edge
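The blending formula translates directly into code; the function name is my own, and area is the fraction of the pixel on the far side of the reconstructed edge:

```cpp
// MLAA per-pixel blend: C_new = (1 - area) * C_old + area * C_opposite,
// where area is the fraction of the pixel covered beyond the primary edge.
float mlaaBlend(float cOld, float cOpposite, float area) {
    return (1.0f - area) * cOld + area * cOpposite;
}
```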

The formula shown here is for binary (black and white) images; full-colour MLAA is a bit more complicated. You either have to check for big changes in colour or store some additional information (like material type).

Morphological Anti-Aliasing

Saboteur: the CELL processor of the PS3 is particularly well suited to MLAA.

Progress is being made on getting all MLAA calculations done on the GPU, and tools are emerging to use it with the Xbox 360, so console systems can finally benefit from anti-aliasing where it was previously too expensive.

Summary

We get aliasing when:
- We can't represent an image with a high enough resolution
- We can't sample fast enough

There are 2 main strategies for dealing with this:
- Super-sampling: draw a big image, then downsample by taking average colours
- Multi-sampling: only sample colour once per pixel, then multiply that colour by the % of covered sub-pixels

Also covered:
- Texture filtering: mip-mapping, bilinear/trilinear, anisotropic filtering
- Motion blur
- MLAA

References

MSAA in DirectX:
- SDK Sample Browser includes an anti-aliasing sample + documentation
- http://www.directxtutorial.com/Tutorial11/B-A/BA2.aspx
- http://msdn.microsoft.com/en-us/library/bb173422(VS.85).aspx
- http://msdn.microsoft.com/en-us/library/bb206250(VS.85).aspx
- http://www.chadvernon.com/blog/resources/managed-directx-2/texture-compression-filters-and-transformations/

AA in general:
- http://www.extremetech.com/article2/0,2845,2136956,00.asp
- http://www.pantherproducts.co.uk/Articles/Graphics/anti_aliasing.shtml
- http://www.bit-tech.net/hardware/2005/07/04/aliasing_filtering/1

Moiré pattern:
- http://en.wikipedia.org/wiki/Moir%C3%A9_pattern

MSAA:
- http://alt.3dcenter.org/artikel/multisampling_anti-aliasing/index7_e.php

MLAA:
- http://www.eurogamer.net/articles/digitalfoundry-saboteur-aa-blog-entry
- http://www.realtimerendering.com/blog/morphological-antialiasing/
- http://igm.univ-mlv.fr/~biri/mlaa-gpu/
- http://www.youtube.com/watch?v=Z8UG7g8NRcw
- http://visual-computing.intel-research.net/publications/papers/2009/mlaa/mlaa.pdf

References

Wagon-wheel effect (this is cool):
- http://www.youtube.com/watch?v=jHS9JGkEOmA
- http://www.youtube.com/watch?v=rVSh-au_9aM
- http://www.youtube.com/watch?v=LVwmtwZLG88
- http://www.youtube.com/watch?v=T055cp-JFUA
- http://www.youtube.com/watch?v=oqUNd5wPGbU

Motion blur/Temporal AA:
- http://blogs.msdn.com/b/shawnhar/archive/2007/08/21/motion-blur.aspx
- http://www.eurogamer.net/articles/digitalfoundry-halo-reach-tech-analysis-article?page=3

Texture filtering:
- http://blogs.msdn.com/b/shawnhar/archive/2009/09/08/texture-filtering.aspx

Nyquist frequency:
- http://www.youtube.com/watch?v=Fy9dJgGCWZI

Basic summaries of AA/AF:
- http://www.youtube.com/watch?v=OLf03IMLsLI&NR
- http://www.youtube.com/watch?v=YM3ieQHRYOc

Oldskool article on AF:
- http://www.nvnews.net/previews/geforce3/anisotropic.shtml

Personally, I find that seeing something in action, rather than trying to read and digest a lengthy paper, helps me understand it a lot quicker, so I've included some video links here.

Questions?