Performance of vector graphics versus bitmap or raster graphics


Sometimes I use vector graphics simply because they look slightly nicer in some cases, and other times I use bitmap/raster graphics.

I was wondering, are there any significant performance differences between these two options?

Ethan Bierlein

Posted 2015-08-04T18:13:47.903

Reputation: 247


Really, it depends on many factors. NVIDIA provides hardware acceleration for vector graphics. Have you seen it? https://developer.nvidia.com/nv-path-rendering-videos

– TheBuzzSaw – 2015-08-04T18:18:45.080

Answers


As TheBuzzSaw said, it depends on lots of things, including how the raster graphics and the vector graphics are each implemented.

Here are a couple of high-performance vector graphics methods that are rendered using traditional rasterization techniques.

Loop and Blinn show how to render a vector graphics quadratic Bézier curve by rendering a single triangle and using the texture coordinates in a pixel shader to decide whether each pixel is above or below the curve: http://www.msr-waypoint.net/en-us/um/people/cloop/LoopBlinn05.pdf

The basic idea is that you set the triangle's corner positions to the three control point positions, and you set the texture coordinates at each corner to (0,0), (0.5,0), and (1,1) respectively. In your shader, if the interpolated texture coordinate satisfies x*x - y < 0, the pixel is underneath the curve; otherwise it's above the curve.
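That per-pixel test is simple enough to sketch on the CPU. Here's a hedged Python illustration of the implicit function the pixel shader evaluates (the function name is my own, not from the paper; the GPU would get the (u,v) values interpolated across the triangle for free):

```python
def inside_quadratic_bezier(u, v):
    """Loop-Blinn implicit test: with texture coordinates interpolated
    from (0,0), (0.5,0), (1,1) at the triangle corners, a pixel where
    u*u - v < 0 lies underneath the curve."""
    return u * u - v < 0

# The curve itself is where u*u == v, e.g. (0.5, 0.25) sits exactly on it.
print(inside_quadratic_bezier(0.5, 0.3))   # underneath the curve: True
print(inside_quadratic_bezier(0.5, 0.2))   # above the curve: False
```

Because the test is an exact evaluation of the curve's implicit equation, the edge stays sharp no matter how far you zoom in on the triangle.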

You can see a faux implementation of it on shadertoy here: https://www.shadertoy.com/view/4tj3Dy

As for the second method, here's a technique from Valve, where distances to the shape's edge are stored in a texture instead of pixel data, allowing vector graphics to be drawn using texture sampling. The decoding is so simple that it could be implemented even on fixed-function hardware using only an alpha test! http://www.valvesoftware.com/publications/2007/SIGGRAPH2007_AlphaTestedMagnification.pdf

To give you an idea of how well the second technique works, this 1024x768 mustache image was generated from a 64x32 source image that had a single color channel! (aka 2KB uncompressed)

[Image: the 1024x768 mustache rendered from the 64x32 single-channel distance field]

I also wrote some stuff about it on my blog: http://blog.demofox.org/2014/06/30/distance-field-textures/

Here is some sample OpenCL code to show just how simple it is:

// Sample the distance value stored in the texture's alpha channel,
// then threshold at 0.5 -- that's the entire "decode" step.
float alpha = read_imagef(tex3dIn, g_textureSampler, textureCoords).w;
float3 color = (alpha < 0.5f) ? (float3)(1.0f) : (float3)(0.0f);
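To make the whole round trip concrete, here's a hedged CPU-side sketch in Python: building a small distance field from a binary image (brute force, for clarity), then "rendering" it at a higher resolution with a bilinear sample plus the same 0.5 threshold the snippet above uses. All function names here are my own illustration, not from the paper.

```python
def make_distance_field(bitmap, spread=2.0):
    """Encode a binary bitmap as a distance field: each texel stores
    0.5 + signed_distance / (2 * spread), clamped to [0, 1], where the
    distance is to the nearest texel of the opposite value (brute force,
    fine for a tiny source image like the 64x32 one mentioned above)."""
    h, w = len(bitmap), len(bitmap[0])
    field = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            inside = bitmap[y][x]
            nearest = min(
                (((x - x2) ** 2 + (y - y2) ** 2) ** 0.5
                 for y2 in range(h) for x2 in range(w)
                 if bitmap[y2][x2] != inside),
                default=spread)
            d = nearest if inside else -nearest
            field[y][x] = min(1.0, max(0.0, 0.5 + d / (2 * spread)))
    return field

def sample_bilinear(field, u, v):
    """Bilinearly sample the field at normalized coordinates in [0, 1],
    mimicking what the GPU's texture sampler does for free."""
    h, w = len(field), len(field[0])
    fx, fy = u * (w - 1), v * (h - 1)
    x0, y0 = int(fx), int(fy)
    x1, y1 = min(x0 + 1, w - 1), min(y0 + 1, h - 1)
    tx, ty = fx - x0, fy - y0
    top = field[y0][x0] * (1 - tx) + field[y0][x1] * tx
    bot = field[y1][x0] * (1 - tx) + field[y1][x1] * tx
    return top * (1 - ty) + bot * ty

def covered(field, u, v):
    """The 'alpha test': a magnified pixel is inside the shape when the
    sampled value crosses the 0.5 threshold."""
    return sample_bilinear(field, u, v) >= 0.5

# A 4x4 source with a 2x2 filled block; the thresholded samples recover
# the shape at any magnification.
shape = [[0, 0, 0, 0], [0, 1, 1, 0], [0, 1, 1, 0], [0, 0, 0, 0]]
field = make_distance_field(shape)
print(covered(field, 0.5, 0.5))  # center of the block: True
print(covered(field, 0.0, 0.0))  # corner, well outside: False
```

The interesting part is that the bilinear filter interpolates *distances*, which vary smoothly across the edge, rather than colors, which don't; that's why the thresholded result stays crisp where a magnified bitmap would blur.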

Both of these techniques are super fast, and blur the line between vector and rasterized graphics a bit. They are rendered using rasterization techniques, but have zooming / scaling properties like vector graphics techniques.

Alan Wolfe

Posted 2015-08-04T18:13:47.903

Reputation: 4 256


See also http://www.valvesoftware.com/publications/2007/SIGGRAPH2007_AlphaTestedMagnification.pdf

– teh internets is made of catz – 2015-08-04T19:00:48.043

Yeah, that's a very cool technique; it's the second technique I mentioned, and I link to that same PDF above as well. – Alan Wolfe – 2015-08-04T19:02:10.423

Woops, missed it, sorry. – teh internets is made of catz – 2015-08-04T19:03:04.390


An adaptive solution for the edge anti-aliasing of distance fields can be found here: http://www.essentialmath.com/blog/?p=151.

– Jim Van Verth – 2015-08-04T19:35:15.710


There might be.

Less technical answer:

If you're building a website or another application where you have nothing to do with the graphics programming, then the answer is probably yes. The underlying APIs will try to guess how to render and cache the graphics efficiently. However, as your application runs, the API may sometimes guess incorrectly and have to re-render things, which affects performance.

More technical:

Keep in mind that unless you're using one of the newest GPUs with a library that draws the vector paths directly on the GPU, it's all bitmap textures being rendered by the GPU anyway.

I'll consider the typical case where vector graphics are rendered to textures. Here the performance depends on your toolchain, on whether your application dynamically creates textures from the vector assets, and on whether the graphics are viewed at various zoom levels. There are two issues involved: resources and texture generation. If you're only displaying the graphics at a static size, then I would say there is no difference, and perhaps your toolchain can convert the assets into bitmap graphics before runtime. However, if they are displayed at various sizes or in a 3D world, then you'll need mipmapped textures, which take more memory. They'll take a lot of memory if you really want to preserve their fidelity up close, since that requires a larger base texture. Having your application dynamically create the larger textures only when necessary saves memory, but is costly at runtime and will affect performance.
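As a back-of-the-envelope illustration of that memory cost (my own arithmetic, not tied to any particular API): a full mip chain is a geometric series, so the mips themselves only add about a third on top of the base level; the real cost is that the base level must be large to look good up close.

```python
def mip_chain_bytes(width, height, bytes_per_texel=4):
    """Total memory for a texture plus a full mip chain down to 1x1:
    each level is roughly a quarter of the previous one."""
    total, w, h = 0, width, height
    while True:
        total += w * h * bytes_per_texel
        if w == 1 and h == 1:
            break
        w, h = max(1, w // 2), max(1, h // 2)
    return total

base = 1024 * 1024 * 4                 # 4 MiB base level, RGBA8
full = mip_chain_bytes(1024, 1024)
print(full / base)                     # ~1.333: the chain adds ~33%
```

So doubling the displayed resolution quadruples the base-level memory, which is why generating those larger textures lazily at runtime is tempting despite the performance hit.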

I hope this helps.

ShaneC

Posted 2015-08-04T18:13:47.903

Reputation: 80


There are a few ways of rendering vector graphics. As TheBuzzSaw mentions, NVIDIA has an extension that can render general paths quite quickly (but of course it only works on NVIDIA GPUs). And Alan Wolfe mentions the implicit surface methods (Loop-Blinn/distance fields), which define a function that says whether you're inside or outside a shape, and color the pixels based on that function. Another method is stencil-and-cover, where you render the path into a stencil buffer and use the even-odd count to determine whether the path covers a pixel.
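To illustrate the even-odd rule that the stencil pass relies on, here's a small CPU sketch in Python (my own illustration, not NVIDIA's implementation): cast a ray from the pixel and count how many path edges it crosses; an odd count means the pixel is covered. Stencil-and-cover performs the same count per pixel in the stencil buffer.

```python
def even_odd_inside(point, polygon):
    """Even-odd rule: cast a horizontal ray from the point to the right
    and count edge crossings; an odd count means the point is covered."""
    px, py = point
    inside = False
    n = len(polygon)
    for i in range(n):
        (x1, y1), (x2, y2) = polygon[i], polygon[(i + 1) % n]
        # Does this edge cross the horizontal line at py?
        if (y1 > py) != (y2 > py):
            x_cross = x1 + (py - y1) * (x2 - x1) / (y2 - y1)
            if x_cross > px:          # crossing lies to the right of the point
                inside = not inside
    return inside

square = [(0, 0), (4, 0), (4, 4), (0, 4)]
print(even_odd_inside((2, 2), square))   # inside: True
print(even_odd_inside((5, 2), square))   # outside: False
```

The GPU version flattens curves into edges and increments stencil values instead of toggling a boolean, but the coverage logic is the same.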

In general, however, the tradeoff is that rendering raster graphics will be faster, but more susceptible to aliasing (even distance fields break down at very low and very high scales). Rendering paths requires a lot of setup, but in theory can be scaled to any resolution.

Jim Van Verth

Posted 2015-08-04T18:13:47.903

Reputation: 61