pouët.net

Strange GLSL parser problem - please help

category: code [glöplog]
 
Hello, I have a nasty and annoying problem with loading GLSL shaders from a file, and after being stuck with it for a while, I'd like to ask for help; if this is something wrong with my code, I'm sure someone has encountered it before. I'm not sure if I'm doing something subtly wrong or if it's a driver issue, but I can reproduce it on two different computers with different drivers and gfx cards (both NVIDIA, though) and am at a loss on how to proceed.

I have the following GLSL shaders, for simple directional lighting:

Code:
//fragment shader
uniform sampler2D tex;
varying vec2 texcoord;

void main(void)
{
//  vec4 texel = texture2D(tex, texcoord);
//  gl_FragColor = lightvalue * texel;
    gl_FragColor = gl_Color;
}

Code:
//vertex shader
uniform vec3 lightDirection;
uniform vec4 lightColor;
uniform vec4 ambientLight;
varying vec2 texcoord;

void main(void)
{
    vec3 normal = normalize(gl_NormalMatrix * gl_Normal);
    vec3 lightDir = normalize(lightDirection); //normalize into a local; assigning to a uniform isn't valid GLSL
    float angleDotProduct = max(dot(normal, lightDir), 0.0);
    vec4 diffuse = gl_Color * lightColor;
    //gl_FrontColor goes to the fragment shader
    gl_FrontColor = angleDotProduct * diffuse + gl_Color * ambientLight;
    gl_Position = ftransform();
}


(Yes, the fragment shader doesn't really do what it should, but let's ignore that for now.) The issue is that if I include the "uniform sampler2D tex;" line in the fragment shader, my glGetUniformLocation call on the sampler fails (returns -1). Furthermore, if I actually dare to read the texel in that shader, glGetUniformLocation() also fails for all the uniforms in the vertex shader, even though they shouldn't be related at all (I think). The same thing happens if I make a small typo, say write "uniform Sampler2D tex;" instead (capital S). Both shaders still compile and link without errors.
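The query itself is nothing special; simplified, it looks like this ("program" being my linked program object):

Code:
GLint texLoc = glGetUniformLocation(program, "tex");
if (texLoc == -1)
    g_debug << "glGetUniformLocation failed for 'tex'" << endl;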

Here's my shader source loading method:
Code:
char *System::loadShaderSource(string filename)
{
    //read in the shader file and pad it with a zero
    ifstream file(filename.c_str(), ios::in|ios::binary|ios::ate);
    if (file.is_open())
    {
        char *data = 0;
        int size = file.tellg();
        data = new char[size + 1]; //one byte extra for the zero in the end
        file.seekg(0, ios::beg);
        file.read(data, size);
        file.close();
        data[size] = 0; //add a zero terminator so OpenGL doesn't fuck up
        for (int i = 0; i < size + 1; i++)
        {
            int number = data[i];
            g_debug << "data[" << i << "] = " << number << ", in char value '" << data[i] << "'" << endl;
        }
        return data;
    }
    else
    {
        g_debug << "Cannot open shader source file " << filename << endl;
        return 0;
    }
}
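(The returned string then goes to glShaderSource with a null length pointer, so GL treats it as zero-terminated — that's what the padding is for. Roughly, with the filename just for illustration:)

Code:
const char *src = loadShaderSource("light.frag");
glShaderSource(shader, 1, &src, 0); //null lengths => strings must be zero-terminated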


As far as I can tell, this is a bug either in the GLSL parser or in my loading method, but is there any way I might be able to bypass it? I probed a bit further and noticed that I can also trigger this by taking out either the ios::binary or the ios::ate flag from the file stream, which strengthens my suspicion that it has something to do with the text parsing. For the record, the following blur shaders work just fine:

Code:
//blur vertex shader
varying vec2 offs0;
varying vec2 offs1;
varying vec2 offs2;
varying vec2 offs3;
varying vec2 offs4;
uniform float blurscale;

void main(void)
{
    //take 5 texels
    vec2 dx = vec2(blurscale, 0.0);
    vec2 dx2 = vec2(2.0 * blurscale, 0.0);
    offs0 = gl_MultiTexCoord0.st - dx2;
    offs1 = gl_MultiTexCoord0.st - dx;
    offs2 = gl_MultiTexCoord0.st;
    offs3 = gl_MultiTexCoord0.st + dx;
    offs4 = gl_MultiTexCoord0.st + dx2;
    gl_Position = ftransform();
}

//blur fragment shader
uniform sampler2D tex;
varying vec2 offs0;
varying vec2 offs1;
varying vec2 offs2;
varying vec2 offs3;
varying vec2 offs4;
uniform float bluralpha;

void main()
{
    vec4 texel = vec4(0.0, 0.0, 0.0, 0.0);
    texel += texture2D(tex, offs0) * 0.1;
    texel += texture2D(tex, offs1) * 0.25;
    texel += texture2D(tex, offs2) * 0.5;
    texel += texture2D(tex, offs3) * 0.25;
    texel += texture2D(tex, offs4) * 0.1;
    gl_FragColor = texel * bluralpha;
}


Any ideas? I tried different ways of passing the name string to glGetUniformLocation (with and without a padding zero, as char*, std::string, etc.), but nothing seems to work. Without the texturing, the shader works as expected.

EDIT:

(I'm writing this edit offline, on a computer that's not on the Net...)

By some miracle I managed to get it working, just by changing stuff around (?). Now it looks like this:
Code:
//passed from the main program
uniform vec3 lightDirection;
uniform vec4 lightColor;
uniform vec4 ambientLight;
varying vec2 texcoord;
varying vec4 lightvalue;

void main(void)
{
    vec3 normal = normalize(gl_NormalMatrix * gl_Normal);
    vec3 lightDir = normalize(lightDirection); //into a local again, not the uniform
    float angleDotProduct = max(dot(normal, lightDir), 0.0);
    vec4 diffuse = gl_Color * lightColor;
    texcoord = gl_MultiTexCoord0.st;
    //gl_FrontColor goes to the fragment shader
    //gl_FrontColor = angleDotProduct * diffuse + gl_Color * ambientLight;
    lightvalue = angleDotProduct * diffuse + gl_Color * ambientLight;
    gl_Position = ftransform();
}


Code:
uniform sampler2D tex;
varying vec2 texcoord;
varying vec4 lightvalue;

void main(void)
{
    vec4 texel = texture2D(tex, texcoord);
    gl_FragColor = texel * lightvalue;
//  gl_FragColor = gl_Color;
}


I don't see any real difference to the non-working version above...? Could it perhaps be some sort of UNIX/Windows end-of-line/tabulator/whatever encoding issue?
added on the 2009-02-08 17:11:13 by Preacher
I don't think the GLSL compilers have line-ending problems. My shaders are currently all on one single line, but I've used just \n in the past and it worked.

Have you tried using GLIntercept to see what the shader compiler says?
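Or fetch the info log yourself — drivers often put warnings there even when compilation "succeeds". A rough sketch, with shader being your compiled shader object:

Code:
GLint len = 0;
glGetShaderiv(shader, GL_INFO_LOG_LENGTH, &len);
if (len > 1)
{
    std::vector<char> log(len);
    glGetShaderInfoLog(shader, len, 0, &log[0]);
    fprintf(stderr, "shader log: %s\n", &log[0]);
}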
added on the 2009-02-08 17:28:04 by xernobyl
is there something missing in your file load code?
i think you're missing a file.seekg(0, ios::end) before the file.tellg() call.
check http://www.cplusplus.com/reference/iostream/istream/tellg.html
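i.e. something like:

Code:
file.seekg(0, ios::end); //make sure we're at the end before asking for the position
int size = file.tellg();
file.seekg(0, ios::beg);

(with ios::ate the stream should already start out at the end, but this makes it explicit)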
added on the 2009-02-08 17:28:53 by bartman
The issue you describe is pretty much your daily bread with the nvidia GLSL compiler.

It starts from gl_FragColor and walks back through all the expressions, instructions and variables that affect it (it recursively builds a tree that only contains what actually influences the result). So if you declare a uniform but don't use it in a way that influences the final color, it's as if the declaration were ignored: the uniform is optimized away and you can't query its location.
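So when querying, you pretty much have to treat -1 as "stripped by the optimizer" rather than as a hard error. A quick sketch of a helper (the name is made up):

Code:
GLint getUniformSafe(GLuint program, const char *name)
{
    GLint loc = glGetUniformLocation(program, name);
    if (loc == -1)
        fprintf(stderr, "uniform '%s' not found (optimized away?)\n", name);
    return loc; //glUniform* calls with location -1 are silently ignored anyway
}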
added on the 2009-02-08 17:30:08 by nystep
On the other hand, you wouldn't like it if your shader compiler let unused variables take up resources on the GPU, would you? :)

I do think the optimisation is a bit aggressive here, yes.
added on the 2009-02-08 17:32:44 by nystep
it's not only aggressive, it violates the GLSL and OpenGL specifications.
added on the 2009-02-08 18:16:16 by kusma
well, that's nvidia's story with glsl, isn't it? I always add "#version 120" at the beginning of the shader to force nvidia's driver to be a bit more legal...
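like this, as the very first line of the source:

Code:
#version 120
//...rest of the shader unchanged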
added on the 2009-02-08 19:02:00 by iq
why not "#version 130"? Much more modern ;)
added on the 2009-02-08 20:03:30 by xernobyl
Thanks for the help, I can sleep better now :) I have to say it never occurred to me that this would be a feature and not a bug. And yes, I would like my shader compiler to do exactly as I say, or at least make its optimizations in a way that doesn't confuse me :)
added on the 2009-02-08 22:32:25 by Preacher
it *is* a bug, a compliant glsl implementation isn't supposed to do this.
added on the 2009-02-08 22:39:17 by ryg
