*This is a copy-paste of my answer to the same question at StackOverflow*.

At the bottom of this answer is an example of GLSL code which allows outputting the full `float` value as a color, encoding it as IEEE 754 `binary32`. I use it as follows (this snippet outputs the `yy` component of the modelview matrix):

```
vec4 xAsColor=toColor(gl_ModelViewMatrix[1][1]);
if(bool(1)) // put 0 here to get lowest byte instead of three highest
    gl_FrontColor=vec4(xAsColor.rgb,1);
else
    gl_FrontColor=vec4(xAsColor.a,0,0,1);
```

After you get this on screen, you can just take any color picker, format the color as HTML (appending `00` to the `rgb` value if you don't need higher precision, and doing a second pass to get the lower byte if you do), and you get the hexadecimal representation of the `float` as IEEE 754 `binary32`.
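For the off-screen half of that workflow, the picked hex digits can be turned back into a number with a few lines of host-side code. This decoder is my addition (not part of the original answer); it simply reinterprets the eight hex digits as a big-endian `binary32`:

```python
import struct

def color_to_float(rgb_hex, low_byte_hex="00"):
    """Decode a picked color back into the encoded float.

    rgb_hex:      six hex digits from the color picker (the three highest bytes)
    low_byte_hex: two hex digits from the second pass, or "00" if skipped
    """
    raw = bytes.fromhex(rgb_hex + low_byte_hex)
    return struct.unpack(">f", raw)[0]  # big-endian IEEE 754 binary32

print(color_to_float("3F8000"))  # 1.0
print(color_to_float("C0A000"))  # -5.0
```

Skipping the second pass costs you the lowest 8 bits of the significand, i.e. you still recover about 4–5 significant decimal digits.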

Here's the actual implementation of `toColor()`:

```
#version 120

const int emax=127;
// Input: x>=0
// Output: base 2 exponent of x if (x!=0 && !isnan(x) && !isinf(x))
//         -emax if x==0
//         emax+1 otherwise
int floorLog2(float x)
{
    if(x==0) return -emax;
    // NOTE: there exist values of x, for which floor(log2(x)) will give wrong
    // (off by one) result as compared to the one calculated with infinite precision.
    // Thus we do it in a brute-force way.
    for(int e=emax;e>=1-emax;--e)
        if(x>=exp2(float(e))) return e;
    // If we are here, x must be infinity or NaN
    return emax+1;
}

// Input: any x
// Output: IEEE 754 biased exponent with bias=emax
int biasedExp(float x) { return emax+floorLog2(abs(x)); }

// Input: any x such that (!isnan(x) && !isinf(x))
// Output: significand AKA mantissa of x if !isnan(x) && !isinf(x)
//         undefined otherwise
float significand(float x)
{
    // converting int to float so that exp2(genType) gets correctly-typed value
    float expo=floorLog2(abs(x));
    return abs(x)/exp2(expo);
}

// Input: x\in[0,1)
//        N>=0
// Output: Nth byte as counted from the highest byte in the fraction
int part(float x,int N)
{
    // All comments about exactness here assume that underflow and overflow don't occur
    const int byteShift=256;
    // Multiplication is exact since it's just an increase of exponent by 8
    for(int n=0;n<N;++n)
        x*=byteShift;
    // Cut higher bits away.
    // $q \in [0,1) \cap \mathbb Q'.$
    float q=fract(x);
    // Shift and cut lower bits away. Cutting lower bits prevents potentially unexpected
    // results of rounding by the GPU later in the pipeline when transforming to TrueColor
    // the resulting subpixel value.
    // $c \in [0,255] \cap \mathbb Z.$
    // Multiplication is exact since it's just an increase of exponent by 8
    float c=floor(byteShift*q);
    return int(c);
}

// Input: any x acceptable to significand()
// Output: significand of x split to (8,8,8)-bit data vector
ivec3 significandAsIVec3(float x)
{
    ivec3 result;
    float sig=significand(x)/2; // shift all bits to fractional part
    result.x=part(sig,0);
    result.y=part(sig,1);
    result.z=part(sig,2);
    return result;
}

// Input: any x such that !isnan(x)
// Output: IEEE 754 defined binary32 number, packed as ivec4(byte3,byte2,byte1,byte0)
ivec4 packIEEE754binary32(float x)
{
    int e = biasedExp(x);
    // sign to bit 7
    int s = x<0 ? 128 : 0;

    ivec4 binary32;
    binary32.yzw=significandAsIVec3(x);
    // clear the implicit integer bit of significand
    if(binary32.y>=128) binary32.y-=128;
    // put lowest bit of exponent into its position, replacing just cleared integer bit
    binary32.y+=128*int(mod(e,2));
    // prepare high bits of exponent for fitting into their positions
    e/=2;
    // pack highest byte
    binary32.x=e+s;
    return binary32;
}

vec4 toColor(float x)
{
    ivec4 binary32=packIEEE754binary32(x);
    // Transform color components to [0,1] range.
    // Division is inexact, but works reliably for all integers from 0 to 255 if
    // the transformation to TrueColor by GPU uses rounding to nearest or upwards.
    // The result will be multiplied by 255 back when transformed
    // to TrueColor subpixel value by OpenGL.
    return binary32/255.;
}
```
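As a sanity check (my addition, not part of the original answer), the packing scheme can be mirrored on the CPU and compared byte for byte against `struct.pack`. This Python sketch ports `floorLog2()`, `part()` and `packIEEE754binary32()` almost literally; Python doubles represent every `binary32` value exactly, so the arithmetic matches:

```python
import math
import struct

EMAX = 127

def floor_log2(x):
    """Brute-force floor(log2(x)), like the shader's floorLog2()."""
    if x == 0:
        return -EMAX
    for e in range(EMAX, -EMAX, -1):  # e = 127 .. -126, as in the GLSL loop
        if x >= 2.0 ** e:
            return e
    return EMAX + 1                   # infinity or NaN

def part(x, n):
    """n-th byte, counted from the highest, of the fraction x in [0,1)."""
    for _ in range(n):
        x *= 256
    q = x - math.floor(x)             # fract()
    return int(math.floor(256 * q))

def pack_binary32(x):
    """Port of packIEEE754binary32(); returns (byte3, byte2, byte1, byte0)."""
    e = EMAX + floor_log2(abs(x))     # biased exponent
    s = 128 if x < 0 else 0           # sign goes to bit 7 of the top byte
    sig = abs(x) / 2.0 ** floor_log2(abs(x)) / 2  # significand shifted into [0.5, 1)
    b1, b2, b3 = part(sig, 0), part(sig, 1), part(sig, 2)
    if b1 >= 128:                     # clear the implicit integer bit
        b1 -= 128
    b1 += 128 * (e % 2)               # lowest exponent bit takes its place
    return (e // 2 + s, b1, b2, b3)

# Compare against the reference big-endian binary32 encoding
for v in (0.0, 1.0, -5.0, 0.15625, 118.625):
    assert pack_binary32(v) == tuple(struct.pack(">f", v))
```

Byte 3 here corresponds to `binary32.x` in the shader, i.e. the red channel of the first pass.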

Have you looked into gDEBugger? Quoting the site: "gDEBugger is an advanced OpenGL and OpenCL Debugger, Profiler and Memory Analyzer. gDEBugger does what no other tool can - lets you trace application activity on top of the OpenGL and OpenCL APIs and see what is happening within the system implementation." Granted, no VS style debugging/stepping through code, but it might give you some insight in what your shader does (or should do). Crytek released a similar tool for DirectX shader "debugging" called RenderDoc (free, but strictly for HLSL shaders, so maybe not relevant for you). – Bert – 2015-08-07T11:06:15.930

@Bert Hm yeah, I guess gDEBugger is the OpenGL equivalent to WebGL-Inspector? I've used the latter. It's immensely useful, but it's definitely more debugging OpenGL calls and state changes than shader execution. – Martin Ender – 2015-08-07T11:47:56.203

I've never done any WebGL programming and hence I'm not familiar with WebGL-Inspector. With gDEBugger you can at least inspect the entire state of your shader pipeline including texture memory, vertex data, etc. Still, no actual stepping through code afaik. – Bert – 2015-08-07T11:57:21.997

gDEBugger is extremely old and hasn't been supported in a while. If you are looking for frame and GPU state analysis, this other question is strongly related: http://computergraphics.stackexchange.com/questions/23/how-can-i-debug-what-is-being-rendered-to-a-frame-buffer-object-in-opengl/25#25 – cifz – 2015-08-07T12:00:52.567