Best way to send time to a shader in 4k/1k?

category: code [glöplog]
I see in IQ's 4k framework that he uses glColor3f to send time data to the shaders. Doesn't seem to work for me. Any other clever hacks to send the current time to a shader in 4 kilobytes or less?
added on the 2011-09-13 11:05:28 by Mewler Mewler
Why doesn't it seem to work? What are the symptoms?
glColor3f to send time data to the shaders??

How is it possible to get the color from OpenGL into the shader? Using the background color? Or just writing a pixel somewhere on the screen?
added on the 2011-09-13 11:15:50 by rez rez
added on the 2011-09-13 11:16:26 by raer raer
Check it out in the Accio demo found here: http://www.iquilezles.org/www/material/isystem1k4k/isystem1k4k.htm

Basically he's doing glColor3f(t,sinf(.25f*t),0.0f); every frame and then using gl_Color.x in the fragment shader to grab the data. Saves quite a few bytes not having to set up the necessary extensions to do it the proper way.

Symptoms of it not working for me? It's always 0 in the shader xD
added on the 2011-09-13 11:26:34 by Mewler Mewler
You're doing something wrong. What happens if you pass in glColor3f(0.5, 0.5, 0.5) and try drawing with it in the shader?
added on the 2011-09-13 11:34:56 by Preacher Preacher
Best way to use shaders in OGL 4k, thanks to ARB:

/* "shaders" is an array of shader code strings; in my case an include shader with many useful functions is the first array element, and the actual shader code with the main function is the second. This makes small multi-shader intros with code-reuse possible :) */
__forceinline unsigned int createProgram(const char** shaders)
{
    return ((PFNGLCREATESHADERPROGRAMVPROC)wglGetProcAddress("glCreateShaderProgramv"))(GL_FRAGMENT_SHADER, 2, shaders);
}

Then somewhere in your C++ main function:

((PFNGLUSEPROGRAMPROC)wglGetProcAddress("glUseProgram"))(shaderProgram);
((PFNGLUNIFORM4FPROC)wglGetProcAddress("glUniform4f"))(0, width, height, introTime, get_Envelope(1));
glRects(-1, -1, 1, 1);

You don't need a vertex shader, just pass width&height into your uniform vec4 of the shader, then do the math there (gl_FragCoord.xy/U.xy). That's how I also get the intro time in U.z and sync data from one 4klang instrument in U.w.
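The fragment shader side of that setup might look like this minimal sketch (the uniform name U and the output math are assumptions; only the U.xy/U.z/U.w layout comes from the post above):

```glsl
// Sketch of the single-uniform setup; U.xy = resolution,
// U.z = intro time, U.w = 4klang envelope/sync value.
uniform vec4 U;

void main() {
    vec2 uv = gl_FragCoord.xy / U.xy;   // screen coords, no vertex shader needed
    float pulse = 0.5 + 0.5 * sin(U.z); // time-driven animation
    gl_FragColor = vec4(uv * pulse * U.w, 0.0, 1.0);
}
```

Note that hard-coding location 0 in the glUniform4f call relies on the driver assigning the one active uniform location 0; it works in practice, but strictly you'd query it with glGetUniformLocation.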
added on the 2011-09-13 11:41:53 by xTr1m xTr1m
Hey thanks xTr1m and Preacher. Managed to get it working, just needed to copy gl_Color to a varying vec3 in the vertex shader. But I'm probably just going to use xTr1m's method, looks neat xD
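For reference, the pair described here might look like this minimal sketch (old-style GLSL; the varying name is made up):

```glsl
// vertex shader: copy the packed gl_Color attribute into a varying
varying vec3 data;
void main() {
    data = gl_Color.rgb;      // t in .x, sinf(.25f*t) in .y
    gl_Position = gl_Vertex;  // fullscreen quad, no transform
}

// fragment shader: read the time back out
varying vec3 data;
void main() {
    float t = data.x;
    gl_FragColor = vec4(vec3(0.5 + 0.5 * sin(t)), 1.0);
}
```

One likely reason to copy the gl_Color attribute into your own varying, rather than routing it through gl_FrontColor, is that fixed-function vertex colors are clamped to [0,1] by default, which would destroy a raw time value.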
added on the 2011-09-13 11:49:40 by Mewler Mewler
what do you think about using the lightsource? shouldn't that be equal small...
added on the 2011-09-13 13:32:44 by FeN FeN
Putting a constant variable through a vertex interpolator (which is what you're doing when using glColor3f()) doesn't seem smart, just use the constant/uniform instead, it's made for that. That reminds me; there used to be some obvious precision loss in shader calculations, I wonder how that is now and how that is for vertex interpolators. Google ahoy!
added on the 2011-09-13 14:04:56 by superplek superplek
(doesn't seem smart -> you're doing extra work, perhaps suffer precision loss --then again the code to do it vs. uniform setup might just be smaller! i'm not into pc ogl)
added on the 2011-09-13 14:07:11 by superplek superplek
i use something like glRectf(time/5000, time/5000, -time/5000, -time/5000); in this piece of shame: http://www.pouet.net/prod.php?which=57718

it is like 20-30 bytes more convenient to just use glRecti(t,t,-t,-t), but for whatever reason nvidia proprietary drivers lose float precision too soon (noticeable after ~10sec). this doesn't happen with opensource/mesa (ati, intel) drivers, but who cares about opensource these days.
added on the 2011-09-13 14:09:45 by provod provod
Hm, can't find anything that would claim you can't just depend on the fully specified precision these days. Good.
added on the 2011-09-13 14:10:17 by superplek superplek
w23: hm. so it's the driver that screws over the precision. i wonder where, how and why.
added on the 2011-09-13 14:11:50 by superplek superplek
If you have the capability, use callbacks from the synth for perfect time-sync
added on the 2011-09-13 15:15:04 by MeteoriK MeteoriK
My synth has something like this function:

void registerCallback( CALLBACK_T*, void* arg)

where "CALLBACK_T" is a typedef to the function pointer shape of blarrrrghh(int note, int channel, void* arg). Then you pass in your function and anything you like as "arg", and it gets called every time a note gets pressed or released and passes "arg" back.
added on the 2011-09-13 16:00:31 by MeteoriK MeteoriK
Auto-triggered events on instruments are by no means "perfect time-sync".
added on the 2011-09-13 16:27:46 by kusma kusma
ok, "time-sync to within accuracy of buffer size, latency, and speed of routine that processes the event" for the pedantic.
added on the 2011-09-13 17:14:20 by MeteoriK MeteoriK
You missed my point completely: auto-triggering stuff on instrument-events is super-highway to boring-ass, predictable sync.
added on the 2011-09-13 17:15:37 by kusma kusma
sorry. I'll try to throw in a few curveball events next time just to surprise you.
added on the 2011-09-13 17:25:19 by MeteoriK MeteoriK
just measure your time in e.g. quarter notes instead of seconds (so that 1.0 == 1 beat) and suddenly it's easy to do procedural sync stuff, e.g. "float strobo=1.-mod(2.*time,1.);", or do synced camera cuts, synced motions (sin and cos with time*pi*some_power_of_two to the rescue), etc. Or do more complicated stuff either on CPU or as texture lookup / shader constant array / both.
added on the 2011-09-13 17:27:47 by kb_ kb_
beatsync ftw
added on the 2011-09-13 18:33:05 by superplek superplek