How can I debug GLSL shaders?

27

6

When writing non-trivial shaders (just as when writing any other piece of non-trivial code), people make mistakes.[citation needed] However, I can't debug a shader like any other code: you can't attach gdb or the Visual Studio debugger, and you can't even do printf debugging, because there is no form of console output. What I usually do is render the data I want to look at as colour, but that is a very rudimentary and amateurish solution. I'm sure people have come up with better solutions.

So how can I actually debug a shader? Is there a way to step through a shader? Can I look at the execution of the shader on a specific vertex/primitive/fragment?

(This question is specifically about how to debug shader code akin to how one would debug "normal" code, not about debugging things like state changes.)

Martin Ender

Posted 2015-08-07T08:24:33.827

Reputation: 1 164

Have you looked into gDEBugger? Quoting the site: "gDEBugger is an advanced OpenGL and OpenCL Debugger, Profiler and Memory Analyzer. gDEBugger does what no other tool can - lets you trace application activity on top of the OpenGL and OpenCL APIs and see what is happening within the system implementation." Granted, no VS-style debugging/stepping through code, but it might give you some insight into what your shader does (or should do). Crytek released a similar tool for DirectX shader "debugging" called RenderDoc (free, but strictly for HLSL shaders, so maybe not relevant for you). – Bert – 2015-08-07T11:06:15.930

@Bert Hm yeah, I guess gDEBugger is the OpenGL equivalent to WebGL-Inspector? I've used the latter. It's immensely useful, but it's definitely more debugging OpenGL calls and state changes than shader execution. – Martin Ender – 2015-08-07T11:47:56.203

I've never done any WebGL programming and hence I'm not familiar with WebGL- Inspector. With gDEBugger you can at least inspect the entire state of your shader pipeline including texture memory, vertex data, etc. Still, no actual stepping through code afaik. – Bert – 2015-08-07T11:57:21.997

gDEBugger is extremely old and has not been supported for a while. If you are looking for frame and GPU state analysis, this other question is strongly related: http://computergraphics.stackexchange.com/questions/23/how-can-i-debug-what-is-being-rendered-to-a-frame-buffer-object-in-opengl/25#25

– cifz – 2015-08-07T12:00:52.567

Answers

22

As far as I know, there are no tools that let you step through code in a shader (and even then, you would have to be able to select just the pixel/vertex you want to "debug"; execution is likely to vary depending on that).

What I personally do is very hacky "colourful debugging": I sprinkle the code with a bunch of dynamic branches guarded by #if DEBUG / #endif that basically say

#if DEBUG
if( condition ) 
    outDebugColour = aColorSignal;
#endif

// ... rest of code ...

// Last line of the pixel shader
#if DEBUG
OutColor = outDebugColour;
#endif

So you can "observe" debug info this way. I usually do tricks like lerping or blending between different "colour codes" to test more complex events or non-binary values.

In this "framework" I also find it useful to have a set of fixed conventions for common cases, so that I don't have to constantly go back and check which colour I associated with what. The important thing is to have good support for hot-reloading of shader code, so you can almost interactively change the tracked data/event and easily switch the debug visualization on and off.
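To illustrate the "colour codes" idea, here is a minimal sketch (in Python rather than GLSL, so the arithmetic can be checked on the CPU; all names are made up) of mapping a debug scalar onto a blue-to-red ramp, the same mix()/lerp you would write in the shader:

```python
def lerp(a, b, t):
    """Linear interpolation, the same mix() you would use in GLSL."""
    return a + (b - a) * t

def debug_colour(value):
    """Map a debug scalar to an RGB "colour code".

    0.0 renders pure blue, 1.0 pure red, values in between blend.
    In a shader this would be mix(vec3(0,0,1), vec3(1,0,0), value).
    """
    t = min(max(value, 0.0), 1.0)  # clamp to [0, 1], like GLSL clamp()
    return (lerp(0.0, 1.0, t), 0.0, lerp(1.0, 0.0, t))
```

Looking at the rendered ramp then tells you roughly where the tracked value lies, without needing exact readback.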

If you need to debug something that you cannot easily display on screen, you can always do the same and use a frame analyser tool to inspect your results. I've listed a couple of them in an answer to this other question.

It goes without saying that if I am not "debugging" a pixel or compute shader, I pass this "debugColour" info through the rest of the pipeline without interpolating it (in GLSL, with the flat keyword).

Again, this is very hacky and far from proper debugging, but it is what I am stuck with, not knowing any proper alternative.

cifz

Posted 2015-08-07T08:24:33.827

Reputation: 1 953

When they are available, you can use SSBOs to get a more flexible output format where you don't need to encode values into colours. However, the big drawback of this approach is that it alters the code, which can hide/alter bugs, especially when UB is involved. +1 Nevertheless, it is the most direct method available. – Nobody – 2016-01-22T15:10:21.610

9

There is also GLSL-Debugger, the debugger formerly known as "GLSL Devil".

The debugger itself is super handy not only for GLSL code, but for OpenGL itself as well. You get the ability to jump between draw calls and break on shader switches. It also shows you the error messages that OpenGL communicates back to the application.

Sepehr

Posted 2015-08-07T08:24:33.827

Reputation: 221

7

There are several offerings by GPU vendors like AMD's CodeXL or NVIDIA's nSight/Linux GFX Debugger which allow stepping through shaders but are tied to the respective vendor's hardware.

Let me note that, although they are available under Linux, I always had very little success with using them there. I can't comment on the situation under Windows.

The option which I have come to use recently, is to modularize my shader code via #includes and restrict the included code to a common subset of GLSL and C++&glm.

When I hit a problem I try to reproduce it on another device to see if the problem is the same, which would hint at a logic error (instead of a driver problem/undefined behaviour). There is also the chance of passing wrong data to the GPU (e.g. by incorrectly bound buffers etc.), which I usually rule out either by output debugging as in cifz's answer or by inspecting the data via apitrace.

When it is a logic error I try to rebuild the situation from the GPU on CPU by calling the included code on CPU with the same data. Then I can step through it on CPU.

Building upon the modularity of the code, you can also try to write unit tests for it and compare the results between a GPU run and a CPU run. However, you have to be aware that there are corner cases where C++ might behave differently than GLSL, giving you false positives in these comparisons.
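Such a GPU-vs-CPU comparison can be sketched like this (a Python stand-in with illustrative names, not part of the workflow above): comparing with a relative tolerance avoids flagging mere precision differences between the two runs, while NaNs, one common corner case, are checked separately:

```python
import math

def results_match(gpu_out, cpu_out, rel_tol=1e-5, abs_tol=1e-6):
    """Compare per-element outputs of a GPU run and a CPU reference run.

    Returns the list of indices that differ beyond tolerance, so a failing
    "unit test" points you at the exact elements to inspect.
    """
    bad = []
    for i, (g, c) in enumerate(zip(gpu_out, cpu_out)):
        if math.isnan(g) or math.isnan(c):
            if math.isnan(g) != math.isnan(c):  # NaN on one side only
                bad.append(i)
        elif not math.isclose(g, c, rel_tol=rel_tol, abs_tol=abs_tol):
            bad.append(i)
    return bad
```

The tolerances are a judgment call; too tight and legitimate float precision differences fail the test, too loose and real logic errors slip through.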

Finally, when you can't reproduce the problem on another device, you can only start to dig down to where the difference comes from. Unit tests might help you narrow down where that happens, but in the end you will probably need to write out additional debug information from the shader, as in cifz's answer.

And to give you an overview, here is a flow chart of my debugging process:

[Flow chart of the procedure described in the text]

To round this off, here is a list of random pros and cons:

pro

  • step through with usual debugger
  • additional (often better) compiler diagnostics

con

Nobody

Posted 2015-08-07T08:24:33.827

Reputation: 171

This is a great idea, and probably the closest you can get to single-stepping shader code. I wonder if running through a software renderer (Mesa?) would have similar benefits? – None – 2016-06-29T06:03:38.983

@racarate: I thought about that as well but did not have the time to try yet. I am no expert on mesa but I think it might be hard to debug the shader as the shader debug information has to somehow reach the debugger. Then again, maybe the folks at mesa already have an interface for that to debug mesa itself :) – Nobody – 2016-06-29T10:21:35.590

5

While it doesn't seem to be possible to actually step through an OpenGL shader, it is possible to get the compilation results and to check for errors at runtime.
The following error-checking loop is taken from the Android Cardboard Sample.

while ((error = GLES20.glGetError()) != GLES20.GL_NO_ERROR) {
    Log.e(TAG, label + ": glError " + error);
    throw new RuntimeException(label + ": glError " + error);
}
If your code compiles properly, then you have little choice but to try a different way of communicating the program's state to you. You could signal that a part of the code was reached by, for example, changing the colour of a vertex or using a different texture. This is awkward, but it seems to be the only way for now.

EDIT: For WebGL, I am looking at this project, but I've only just found it... can't vouch for it.

S.L. Barth

Posted 2015-08-07T08:24:33.827

Reputation: 256

3Hm yeah, I'm aware that I can get compiler errors. I was hoping for better runtime debugging. I've also used WebGL inspector in the past, but I believe it only shows you state changes, but you can't look into a shader invocation. I guess this could have been clearer in the question. – Martin Ender – 2015-08-07T09:42:23.417

2

This is a copy-paste of my answer to the same question at StackOverflow.


At the bottom of this answer is an example of GLSL code which lets you output the full float value as a colour, encoding it as IEEE 754 binary32. I use it as follows (this snippet outputs the yy component of the modelview matrix):

vec4 xAsColor=toColor(gl_ModelViewMatrix[1][1]);
if(bool(1)) // put 0 here to get lowest byte instead of three highest
    gl_FrontColor=vec4(xAsColor.rgb,1);
else
    gl_FrontColor=vec4(xAsColor.a,0,0,1);

After you get this on screen, you can just take any color picker, format the color as HTML (appending 00 to the rgb value if you don't need higher precision, and doing a second pass to get the lower byte if you do), and you get the hexadecimal representation of the float as IEEE 754 binary32.
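The colour-picker step can also be done programmatically; assuming you have the eight hex digits (highest byte first, as the shader packs them), standard IEEE 754 binary32 decoding recovers the float. A Python sketch, with a made-up function name:

```python
import struct

def colour_hex_to_float(hex_digits):
    """Decode the 8 hex digits picked from the rendered colour back to a float.

    The shader packs the float big-endian (highest byte in the red channel),
    so the '>f' format recovers the original IEEE 754 binary32 value.
    """
    return struct.unpack('>f', bytes.fromhex(hex_digits))[0]
```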

Here's the actual implementation of toColor():

#version 120

const int emax=127;
// Input: x>=0
// Output: base 2 exponent of x if (x!=0 && !isnan(x) && !isinf(x))
//         -emax if x==0
//         emax+1 otherwise
int floorLog2(float x)
{
    if(x==0) return -emax;
    // NOTE: there exist values of x, for which floor(log2(x)) will give wrong
    // (off by one) result as compared to the one calculated with infinite precision.
    // Thus we do it in a brute-force way.
    for(int e=emax;e>=1-emax;--e)
        if(x>=exp2(float(e))) return e;
    // If we are here, x must be infinity or NaN
    return emax+1;
}

// Input: any x
// Output: IEEE 754 biased exponent with bias=emax
int biasedExp(float x) { return emax+floorLog2(abs(x)); }

// Input: any x such that (!isnan(x) && !isinf(x))
// Output: significand AKA mantissa of x if !isnan(x) && !isinf(x)
//         undefined otherwise
float significand(float x)
{
    // converting int to float so that exp2(genType) gets correctly-typed value
    float expo=floorLog2(abs(x));
    return abs(x)/exp2(expo);
}

// Input: x\in[0,1)
//        N>=0
// Output: Nth byte as counted from the highest byte in the fraction
int part(float x,int N)
{
    // All comments about exactness here assume that underflow and overflow don't occur
    const int byteShift=256;
    // Multiplication is exact since it's just an increase of exponent by 8
    for(int n=0;n<N;++n)
        x*=byteShift;

    // Cut higher bits away.
    // $q \in [0,1) \cap \mathbb Q'.$
    float q=fract(x);

    // Shift and cut lower bits away. Cutting lower bits prevents potentially unexpected
    // results of rounding by the GPU later in the pipeline when transforming to TrueColor
    // the resulting subpixel value.
    // $c \in [0,255] \cap \mathbb Z.$
    // Multiplication is exact since it's just an increase of exponent by 8
    float c=floor(byteShift*q);
    return int(c);
}

// Input: any x acceptable to significand()
// Output: significand of x split to (8,8,8)-bit data vector
ivec3 significandAsIVec3(float x)
{
    ivec3 result;
    float sig=significand(x)/2; // shift all bits to fractional part
    result.x=part(sig,0);
    result.y=part(sig,1);
    result.z=part(sig,2);
    return result;
}

// Input: any x such that !isnan(x)
// Output: IEEE 754 defined binary32 number, packed as ivec4(byte3,byte2,byte1,byte0)
ivec4 packIEEE754binary32(float x)
{
    int e = biasedExp(x);
    // sign to bit 7
    int s = x<0 ? 128 : 0;

    ivec4 binary32;
    binary32.yzw=significandAsIVec3(x);
    // clear the implicit integer bit of significand
    if(binary32.y>=128) binary32.y-=128;
    // put lowest bit of exponent into its position, replacing just cleared integer bit
    binary32.y+=128*int(mod(e,2));
    // prepare high bits of exponent for fitting into their positions
    e/=2;
    // pack highest byte
    binary32.x=e+s;

    return binary32;
}

vec4 toColor(float x)
{
    ivec4 binary32=packIEEE754binary32(x);
    // Transform color components to [0,1] range.
    // Division is inexact, but works reliably for all integers from 0 to 255 if
    // the transformation to TrueColor by GPU uses rounding to nearest or upwards.
    // The result will be multiplied by 255 back when transformed
    // to TrueColor subpixel value by OpenGL.
    return binary32/255.;
}
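Since the packing is plain IEEE 754 binary32, a CPU-side reference is easy to build for sanity-checking the shader's output. Here is a Python sketch (the function names mirror the GLSL above, but the implementation just delegates to the standard library):

```python
import struct

def pack_ieee754_binary32(x):
    """CPU reference for the shader's packIEEE754binary32():
    the four bytes (byte3, byte2, byte1, byte0) of x as IEEE 754 binary32."""
    return tuple(struct.pack('>f', x))

def to_colour(x):
    """CPU reference for toColor(): the RGBA colour the shader should output,
    each component in [0, 1] (multiply by 255 to get the picked subpixel values)."""
    return tuple(b / 255.0 for b in pack_ieee754_binary32(x))
```

Rendering a known constant and comparing the picked colour against this reference is a quick way to verify the whole encode/pick/decode round trip.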

Ruslan

Posted 2015-08-07T08:24:33.827

Reputation: 174

0

The solution that worked for me is compilation of shader code to C++, as mentioned by Nobody. It proved very efficient when working on complex code, even though it requires a bit of setup.

I have been working mostly with HLSL Compute Shaders for which I have developed a proof-of-concept library available here:

https://github.com/cezbloch/shaderator

It demonstrates, on a compute shader from the DirectX SDK samples, how to enable C++-style HLSL debugging and how to set up unit tests.

Compiling a GLSL compute shader to C++ looks easier than HLSL, mainly due to syntax constructs in HLSL. I have added a trivial example of an executable unit test on a GLSL ray-tracer compute shader, which you can also find in the Shaderator project's sources under the link above.

SpaceKees

Posted 2015-08-07T08:24:33.827

Reputation: 1