pouët.net

SDF to fragColor projection

category: code [glöplog]
I have not looked at shaders for years and have forgotten everything about them. I just want the basics behind projecting SDFs. OpenGL takes care of it automatically, but I want a description of how this projection onto a pixel array actually works.

In ShaderToy this is done automatically for GLSL, but what if I want to make my own?
So I think this boils down to: how is fragColor projected?
This should be simple, but I have something wrong in my code.

Code:
fvec3 scale = fvec3(10.0, 10.0, 10.0);
float angle = M_PI * 0.5;
float fov = 30.0;
float s = sin(angle);
float c = cos(angle);
fvec3 eye = fvec3(c, c, s) * scale;
fvec3 target = fvec3(0.0, 0.0, 0.0);
fvec3 up = fvec3(0.0, 1.0, 0.0);
for (int y = 0; y < HEIGHT; y++) {
    int yaddr = y * WIDTH;
    for (int x = 0; x < WIDTH; x++) {
        fvec2 p = fvec2((x - HALFWIDTH) / (float)WIDTH, (y - HALFHEIGHT) / (float)HEIGHT);
        fvec3 direction = fvec3(x - HALFWIDTH, y - HALFHEIGHT, 64);
        BYTE col = 0;
        fvec3 ray = getRay(p, fov, eye, target, up);
        float dist = raymarch(eye, ray);
        fvec3 normal = getNormal(eye + dist*ray);
        if (dist < 0) col = 0; else col = 15;
        pixel[yaddr + x] = col;
    }
}

I borrowed the raymarching algorithm from http://jamie-wong.com/2016/07/15/ray-marching-signed-distance-functions/ where my function is defined as float raymarch(..).

I more or less know how some of the signed distance functions work, but I am missing the part where you project all this onto the 2D screen. I tried googling for "fragcolor projection filetype:pdf" and so on without finding anything useful about this.

Some pointers or links to documents describing what GLSL does behind the curtain would be appreciated. Or simple example code.

Does fragColor use homogeneous coordinates, or what is it?
added on the 2018-04-08 03:46:53 by rudi rudi
fragColor, or rather gl_FragColor, is exactly what it says: the color of the fragment (read: pixel) you're currently shading.

I don't know what you think OpenGL or Shadertoy takes care of / does automatically for you, since neither of them takes care of anything raymarching-related. You're given pixel coordinates (0..width, 0..height => gl_FragCoord) and are supposed to come up with a color for that pixel (and write it to fragColor/gl_FragColor), end of story. The projection happens when you set up your ray origin and ray direction.
added on the 2018-04-08 04:34:30 by LJ LJ
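For illustration, here is a minimal C++ sketch of what such a ray setup can look like: build a camera basis from eye/target/up and scale the forward axis by the field of view. The Vec3 helpers and the exact getRay signature are assumptions made for this sketch, not the code used in the thread.
Code:
#include <cmath>

struct Vec3 { float x, y, z; };

static Vec3 sub(Vec3 a, Vec3 b)  { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static Vec3 mul(Vec3 a, float s) { return {a.x * s, a.y * s, a.z * s}; }
static Vec3 add3(Vec3 a, Vec3 b, Vec3 c) { return {a.x + b.x + c.x, a.y + b.y + c.y, a.z + b.z + c.z}; }
static Vec3 cross(Vec3 a, Vec3 b)
{
    return {a.y * b.z - a.z * b.y, a.z * b.x - a.x * b.z, a.x * b.y - a.y * b.x};
}
static Vec3 normalize(Vec3 a)
{
    float len = std::sqrt(a.x * a.x + a.y * a.y + a.z * a.z);
    return mul(a, 1.0f / len);
}

// px/py are the pixel mapped to roughly [-0.5, 0.5] (as in the OP's fvec2 p),
// fovDeg is the vertical field of view in degrees.
Vec3 getRay(float px, float py, float fovDeg, Vec3 eye, Vec3 target, Vec3 up)
{
    // Camera basis: forward points from the eye to the target,
    // right and trueUp span the image plane.
    Vec3 forward = normalize(sub(target, eye));
    Vec3 right   = normalize(cross(forward, up));
    Vec3 trueUp  = cross(right, forward);

    // For an image plane of height 1.0, the eye sits 0.5 / tan(fov / 2) in front of it.
    float planeDist = 0.5f / std::tan(fovDeg * 0.5f * 3.14159265f / 180.0f);

    // Walk right/up across the image plane, then forward onto it, and normalize.
    return normalize(add3(mul(right, px), mul(trueUp, py), mul(forward, planeDist)));
}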
First of all you shouldn't use a BYTE for your color (unless you have an array of colors you want to use later on).
So, exchange your line
Code:BYTE col = 0;

with
Code: vec4 col = vec4(0.0, 0.0, 0.0, 1.0);


Then exchange your lines
Code: if (dist < 0) col = 0; else col = 15;

with
Code: if (dist < 0) col = vec4(1.0, 1.0, 1.0, 1.0); else col = vec4(0.0, 0.0, 0.0, 0.0);


Lastly exchange your returning-color-line here
Code: pixel[yaddr + x] = col;

with this
Code: gl_FragColor = col;


To clarify:
the fragment shader (aka pixel shader) wants you to write something into gl_FragColor for each pixel (that value becomes the pixel's color once the shader finishes for this pixel).
gl_FragColor is a vec4, which is just 4 floats, one for RED, one for GREEN, one for BLUE and one for ALPHA -> vec4(RED, GREEN, BLUE, ALPHA)
As these are floats you may use values from 0.0 to 1.0 for each of these.

As LJ said already: if this doesn't work, try "fragColor" instead of "gl_FragColor".
(I am into DirectX, so I don't know which is correct for the newest specifications of GL.)

Here's a simple plasma effect in Shadertoy; you can easily see how it works in there:
https://www.shadertoy.com/view/MdXGDH


Additionally, your code won't work like that I guess. As LJ also said, you should use "gl_FragCoord" to determine which pixel you are working on, instead of your "HEIGHT", "WIDTH", etc.
Your code also won't work because you should call raymarch() until you either hit something or your ray has travelled too far without hitting anything; you need to put some exit condition there.
You should have something like
Code:
while (currentAccumulatedDist < maxDist) {
    dist = raymarch(eye, ray);
    currentAccumulatedDist += dist;
    if (dist < 0.001) {
        normal = getNormal(eye + dist*ray);
        gl_FragColor = setColor(normal);
        break; // done: the pixel color is set, so stop marching for this pixel
    }
}


You want to remove your two loops as well, for(int x) and for(int y), as your fragment shader gets called for every pixel, so there is no need to iterate over all pixels in your pixel shader... as said, gl_FragCoord gives you a pixel position.
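To make that concrete: on the CPU you keep the two loops, but it helps to structure things the same way the GPU does, with one function that shades a single pixel and an outer loop that plays the role of the rasterizer. A rough sketch; shadePixel and the BYTE buffer layout are assumptions that just mirror the code from the first post:
Code:
typedef unsigned char BYTE;

// Hypothetical per-pixel function: the CPU counterpart of a fragment shader's main().
// It receives a pixel coordinate (like gl_FragCoord) and returns that pixel's color.
// The raymarching would go in here; the XOR pattern is just a placeholder.
BYTE shadePixel(int x, int y, int width, int height)
{
    return (BYTE)(((x * 256) / width) ^ ((y * 256) / height));
}

void renderFrame(BYTE* pixels, int width, int height)
{
    // This double loop is what the GPU's rasterizer does for you:
    // it invokes the shading function once for every pixel.
    for (int y = 0; y < height; ++y)
        for (int x = 0; x < width; ++x)
            pixels[y * width + x] = shadePixel(x, y, width, height);
}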


That whole thing of yours is a mess; you had better throw it away and have a look at this instead, it explains it all:
https://www.shadertoy.com/view/Xl2XWt
Just so you don't get overwhelmed and stop trying because that shader seems to be sooo long, here's a minified version without all the tutorial comments; you'll see it's very small in the end... but please read through the tutorial one, no questions will be left afterwards! (But if there still should be questions, you can return to this thread of yours; we'll answer to the best of our knowledge! :) Signed distance fields are pure fun, so have it!)
minified Version without Tutorial-Comments
Hmm, it's early in the morning here, I just woke up, so maybe I got you completely wrong here? You just wanted to know the data structure of fragColor, right?
Well, then I accidentally answered your question anyway! :)
Quote:

gl_FragColor is a vec4, which is just 4 floats, one for RED, one for GREEN, one for BLUE and one for ALPHA -> vec4(RED, GREEN, BLUE, ALPHA)

So, in C this would be
Code: float fragColor[4] = {RED, GREEN, BLUE, ALPHA};
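And if the target is an 8-bit pixel buffer like in the first post rather than a vec4, the last step is just a quantization. A rough sketch of that conversion; the 0..15 range mirrors the col = 15 in the original code, and the luminance weights are an assumption:
Code:
struct Color { float r, g, b, a; }; // same layout as the four floats above

// Quantize a float color to a 4-bit palette index (0..15), as in the OP's buffer.
unsigned char toPaletteIndex(Color c)
{
    float lum = 0.299f * c.r + 0.587f * c.g + 0.114f * c.b;
    if (lum < 0.0f) lum = 0.0f;   // clamp, since float math may leave [0, 1]
    if (lum > 1.0f) lum = 1.0f;
    return (unsigned char)(lum * 15.0f + 0.5f);
}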


Maybe this is helpful to someone else reading this and getting into SDFs, then! ;)
Yes, you got it "completely" wrong Hardy. :P
My question is how those fragments get their screen coordinates, since fragColor is just the color. I think LJ answered that.
added on the 2018-04-08 12:17:49 by rudi rudi
Seeing as you wrote a for loop for width and height I guess you should know that the fragment shader code gets automatically evaluated for every pixel. You don't have to loop over pixels yourself!

OpenGL rasterizes a triangle for you and then calls the fragment shader for every pixel in that triangle. The fragment shader in turn knows which pixel it's shading from gl_FragCoord.

This really helped a lot of people I know understand basic full-screen fragment shaders:
https://thebookofshaders.com/01/
It is how it is done that I am after: how it is done automatically.

My "triangle" is a quad representing a screenbuffer where i want to project the pixels.
Code:BYTE pixels[WIDTH*HEIGHT];

So, the color shading is an 8-bit number. It doesn't really matter what datatype my color is at the moment.

In case I didn't manage to point out exactly what I wanted in my opening post: I am not after OpenGL or GLSL calls, but rather how it is done internally. But as said, I think LJ pointed that out. I need to project to 2D screen coordinates before setting the fragColor.
added on the 2018-04-08 12:38:55 by rudi rudi
Say I am not using OpenGL or D3D at all. I am writing my own raymarching and projection code for a software renderer, or a CPU that doesn't care about the GPU.
added on the 2018-04-08 12:40:53 by rudi rudi
I guess what you're looking for is how to write a rasterizer.
added on the 2018-04-08 13:29:05 by xTr1m xTr1m
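To tie this back to the "homogeneous coordinates" question from the first post: fragColor itself is only a color. The homogeneous coordinates live on the vertex side, where a clip-space position goes through the perspective divide and the viewport transform before the rasterizer hands the fragment shader its pixel coordinate (gl_FragCoord). A condensed C++ sketch of those two fixed-function steps, assuming the clip-space position is already given:
Code:
struct Vec4 { float x, y, z, w; };
struct Vec2 { float x, y; };

// What happens between gl_Position (clip space, homogeneous) and gl_FragCoord.xy
// (window/pixel space). On a GPU this is fixed-function; you never write it in GLSL.
Vec2 clipToWindow(Vec4 clip, int width, int height)
{
    // 1. Perspective divide: homogeneous clip coordinates -> normalized device
    //    coordinates in [-1, 1].
    float ndcX = clip.x / clip.w;
    float ndcY = clip.y / clip.w;

    // 2. Viewport transform: NDC -> pixel coordinates.
    Vec2 window;
    window.x = (ndcX * 0.5f + 0.5f) * (float)width;
    window.y = (ndcY * 0.5f + 0.5f) * (float)height;
    return window;
}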
Maybe I get you right this time:

There's nothing done automatically at all by OpenGL... it just iterates over all pixels and gives you a fragCoord for each, which is conceptually either
"float fragCoord[2] = {xPos, yPos};" or
"float fragCoord[2] = {xPos/ScreenWidth, yPos/ScreenHeight};" <- normalized, so between 0 and 1.
That's the same as what you do with your x/y for-loops! -> You always know which pixel you are working on, right?!

Your Raymarcher however has this line:
"fvec3 direction = fvec3(x - HALFWIDTH, y - HALFHEIGHT, 64);"
...this is what lets it decide which direction to travel, and you may want to normalize this vector I think! ;) Then everything should be fine!
"fvec3((x-HALFWIDTH)/WIDTH, (y-HALFHEIGHT)/WIDTH, 1.0f)" should work very well; this includes the aspect ratio already, as you divide both x and y by WIDTH.
The third operator is how long your direction vector is in the z-direction, of course, so you can use it kinda like FOV.
"operator"? come on, hardy, wake up already! ;) I should read what i write!
BB Image
added on the 2018-04-08 18:32:47 by kusma kusma
kusma BB Image
added on the 2018-04-09 02:02:06 by rudi rudi
Found something which might be of use:
A nice example of CPU based raymarching.
added on the 2018-04-09 08:49:52 by mudlord mudlord
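For reference, the core loop behind such a CPU marcher is small. Below is a minimal sphere-tracing sketch against a single sphere SDF; sceneSDF, the step limits and the epsilon are placeholders, and a negative return value signals a miss, matching the dist < 0 check in the first post:
Code:
#include <cmath>

struct V3 { float x, y, z; };

static float length3(V3 v) { return std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z); }

// Scene: a unit sphere at the origin. Swap in any other SDF here.
static float sceneSDF(V3 p) { return length3(p) - 1.0f; }

// Sphere tracing: repeatedly step along the ray by the distance the SDF
// guarantees to be free of geometry. Returns the distance travelled on a hit,
// or -1.0f on a miss.
float raymarch(V3 origin, V3 dir, int maxSteps = 128, float maxDist = 100.0f, float eps = 0.001f)
{
    float t = 0.0f;
    for (int i = 0; i < maxSteps; ++i) {
        V3 p = { origin.x + dir.x * t, origin.y + dir.y * t, origin.z + dir.z * t };
        float d = sceneSDF(p);
        if (d < eps)
            return t;          // close enough: hit
        t += d;
        if (t > maxDist)
            break;             // marched past the far limit
    }
    return -1.0f;              // miss
}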
nice
added on the 2018-04-09 16:38:51 by rudi rudi
Well, I could have posted you my textmode sphere marcher as well, I just thought it's too full of textmode-specific stuff. Let me clean it up a bit (removing stuff that's different for each part) and you have another example:
Code: //°°°°°°°°°°°°°°°°°°° //°°°°° Raymarcher //void rayMarch() void rayMarch(int xxx, int yyy) { float xx = float(xxx) - 40.0f; float yy = float(yyy) - 25.0f; // init for raymarching ! ( if nothing HIT ! ) -> ClearScreen ! consoleBuffer[xxx + 80 * yyy].Char.AsciiChar = (int(fabs(length(float2(xx * 0.5f + rayPositionStart.x * 10.0f, yy * 0.5f + rayPositionStart.y * 10.0f)) - t * 55.0f + rand() * 0.00015f)) % 10 > 4) ? 0xfe : 0x08; consoleBuffer[xxx + 80 * yyy].Attributes = 0; // set Ray-Direction-Vector float3 rayDirection; rayDirection.x = (float)xx/40.0f; rayDirection.y = (float)yy/25.0f; rayDirection.z = 1.0f; normalize(rayDirection); // rotate Ray-Direction-Vector rayDirection = RotateXAxis(rayDirection, worldRotation.x); rayDirection = RotateYAxis(rayDirection, worldRotation.y); rayDirection = RotateZAxis(rayDirection, worldRotation.z); normalize(rayDirection); // set Light-Vector float3 light; light.x = -0.5f; light.y = -0.7f; light.z = -1.0f; normalize(light); // rotate Light-Vector light = RotateXAxis(light, worldRotation.x); light = RotateYAxis(light, worldRotation.y); light = RotateZAxis(light, worldRotation.z); float3 // set initial Ray-Position rayPosition = rayPositionStart, normal; float // depth = 96.0f, rayLength = -1.0f, d=0.0f, h=0.0f; int iterationCount = 0, rayBounces = 0; // offset Ray away from the Viewers-Eye // rayPosition += rayDirection * 5.0f; do { if(iterationCount++ > maxIterations) break; d = trace(rayPosition); rayPosition.x += rayDirection.x * d * distanceFraction; rayPosition.y += rayDirection.y * d * distanceFraction; rayPosition.z += rayDirection.z * d * distanceFraction; // rayPosition += (rayDirection * (d * distanceFraction)); // keep track of distance marched so far for bailing out at far distance and coloring // rayLength = length3(rayPosTemp); rayLength += d * distanceFraction; // HIT ! if (d < epsilon) { rayPosition.x += rayDirection.x * d * (1.0f - distanceFraction); rayPosition.y += rayDirection.y * d * (1.0f - distanceFraction); rayPosition.z += rayDirection.z * d * (1.0f - distanceFraction); normal = getNormal(rayPosition); // calculate Light / Color float h = clamp(fabs(dot(normal, light)), 0.0f, 1.0f); // float s = pow(clamp(dot(normal,normalize(normalize(rayPosition) - light)), 0.0f, 1.0f), 99.0f); // float s = pow(max(dot(Reflect(rayDirection, normal), light), 0.), 99.0f); float s = max(dot(Reflect(rayDirection, normal), light), 0.); // set Color-Set if(part == 7) { // colorSetAdder = int(rayPosition.x / 3.0f) + int(rayPosition.y / 3.0f) + 11; colorSetAdder = 1 + colchange;// int(rayPosition.x / 3.0f) + int(rayPosition.y / 3.0f) + 11; } else if(part == 6) { colorSetAdder = 4 + colchange;// int(rayPosition.x / 3.0f) + int(rayPosition.y / 3.0f) + 11; } else if(part == 5) { colorSetAdder = 1 + colchange;// int(rayPosition.x / 3.0f) + int(rayPosition.y / 3.0f) + 11; } else if(part == 8) { // adder gets written inside the part3()-function ! ;) // colorSetAdder = (colorSetAdder + colchange) % 5; } else if (part == 3) { // adder gets written inside the part3()-function ! 
;) colorSetAdder = (colorSetAdder + colchange) % 5; } else if(part == 1) { colorSetAdder = (int)(fabs(rayPosition.x) / 100.0f); colorSetAdder += (int)(fabs(rayPosition.y) / 100.0f); colorSetAdder += (int)(fabs(rayPosition.z) / 100.0f); } else { colorSetAdder = 0; } // choose ASCII based on calculated Light and Distance-To-Viewers-Eye form = (int)((h * 3.5f) * (1.0f - (rayLength / depth))); form = min(form, 3); // choose Color based on calculated Light and Type-of-Ray (Found-Object / Reflection / Refraction / etc.) if(rayBounces == 0) { // on first HIT set Foreground-Color color = (int)((h * 3.0f) * (1.0f - (rayLength / depth * 0.7f))); color = min(color, 2); consoleBuffer[xxx + 80 * yyy].Attributes = palette_rmForeground[(color + colorSetAdder * 3) % 15]; // set CHAR consoleBuffer[xxx + 80 * yyy].Char.AsciiChar = charset_rm4[form % 4]; } else { // on reflected HIT set Background-Color color = (int)((h * 2.0f) * (1.0f - (rayLength / depth * 0.7f))); color = min(color, 1); // set CHAR consoleBuffer[xxx + 80 * yyy].Attributes += palette_rmBackground[(color + colorSetAdder * 2) % 10]; } // break out of loop once all ray-bounces found a HIT if (++rayBounces > maxRayBounces) break; // Reflect Ray rayDirection = Reflect(rayDirection, normal); rayPosition.x += rayDirection.x * epsilon * 2.0f; rayPosition.y += rayDirection.y * epsilon * 2.0f; rayPosition.z += rayDirection.z * epsilon * 2.0f; } //if(GetAsyncKeyState(VK_ESCAPE)) break; // safety! -> if loop doesn´t terminate! // keep trying to HIT something until Ray travelled far away from Viewers-Eye } while ( rayLength < depth && depth > 0.0f); }


Oh well, there's still a lot of textmode-specific stuff in there, and the normal() function is not included. Don't wonder about the triple lines for adding the same stuff: I had added operators for "+=" but didn't rewrite these lines accordingly, as the deadline approached very fast! ;) It's still readable as is! ;)

I used math.h and all the math used should be in there; if not, feel free to ask here, I'll add the missing functions then! ;)
Oh, I forgot to remove a big block of part-specific stuff! :/
Remove it and you see how small a SphereMarcher can be! It's really easy stuff in the end!
And after I checked, I found there are a lot of functions missing that don't come with math.h, so have them as well here:
Code: //°°°°°°°°°°°°° //°°°°° Helper #define saturate(x) clamp(x, 0.0f, 1.0f) #define lerp(x,y,z) x*(1.0f-z)+y*z // Rotations #define W(p,a) float2(p.x,p.y)*cos(a)+float2(-p.y,p.x)*sin(a); float3 abs(float3 p) { p.x = fabs(p.x); p.y = fabs(p.y); p.z = fabs(p.z); return p; } float3 floor(float3 p) { p.x = floorf(p.x); p.y = floorf(p.y); p.z = floorf(p.z); return p; } float clamp(float x, float lower, float upper) { return min(upper, max(x, lower)); } float length(float x, float y) { return sqrtf(x*x+y*y); } float length(float2 vector) { return sqrtf(vector.x*vector.x+vector.y*vector.y); } float length(float3 vector) { return sqrtf(vector.x*vector.x + vector.y*vector.y + vector.z*vector.z); } float dot(float3 vector1,float3 vector2) { return vector1.x * vector2.x + vector1.y * vector2.y + vector1.z * vector2.z; } float3 normalize(float3 vector) { float factor = (float)sqrt(vector.x*vector.x+vector.y*vector.y+vector.z*vector.z); if(factor==0.0f) return vector; factor = 1.0f / factor; vector.x *= factor; vector.y *= factor; vector.z *= factor; return vector; } // Reflection float3 Reflect(float3 rayDirection, float3 normal) { float dotVector = 2.0f * dot(rayDirection, normal); float3 reflectionVector = (rayDirection - dotVector) * normal; /* float3 reflectionVector; reflectionVector.x = rayDirection.x - dotVector * normal.x; reflectionVector.y = rayDirection.y - dotVector * normal.y; reflectionVector.z = rayDirection.z - dotVector * normal.z; */ return reflectionVector; } //°°°°°°°°°°°°°°° // °°°°°Rotation float3 RotateXAxis(float3 p, float angle) { float3 pRot; pRot.x = p.x; pRot.y = p.y * cos(angle) + p.z * sin(angle); pRot.z = p.z * cos(angle) - p.y * sin(angle); return pRot; } float3 RotateYAxis(float3 p, float angle) { float3 pRot; pRot.x = p.x * cos(angle) + p.z * sin(angle); pRot.y = p.y; pRot.z = p.z * cos(angle) - p.x * sin(angle); return pRot; } float3 RotateZAxis(float3 p, float angle) { float3 pRot; pRot.x = p.x * cos(angle) + p.y * sin(angle); pRot.y = p.y * cos(angle) - p.x * sin(angle); pRot.z = p.z; return pRot; } float3 Rotate(float3 p, float3 angle) { p = RotateXAxis(p, angle.x); p = RotateYAxis(p, angle.y); p = RotateZAxis(p, angle.z); return p; } //°°°°°°°°°°°°°°° //°°°°° Geometry //___plAnE___ float plane(float3 p) { return abs(p.y); // UNSIGNED !!!! } float sphere(float3 p, float r) { return length(p) - r; } float cube(float3 p, float r) { return max(max(fabs(p.x) - r, fabs(p.y) - r), fabs(p.z) - r); } //___rOUndEd_cUbE___ float rcube(float3 p, float3 x, float y) { //return length(max(abs(p) - x, 0.0f)) - y; // UNSIGNED !!!! //return length(max(abs(p) - x, float3(0.0f, 0.0f, 0.0f))) - y; // UNSIGNED !!!! return max(length(abs(p) - x), 0.0f) - y; // UNSIGNED !!!! 
} //___rIng___ float ring(float3 p, float x, float y, float z) { return max(abs(length(p.x, p.y) - x) - y, abs(p.z) - z); } //___OctAhEdrOn___ float octahedron(float3 p, float x) { p = abs(p); return (p.x + p.y + p.z - x) / 3; } float cylinder(float3 p, float x, float y) { return max(fabs(p.z) - y, length(p.x, p.y) - x); //return max(fabs(p.x) - y, length(p.y, p.z) - x); //return max(fabs(p.y) - y, length(p.x, p.z) - x); } //___hExAgOn___ float hexagon(float3 p, float x, float y) { p = abs(p); //return max(p.y-y,max(p.z+p.x*.5,p.x)-x); return max(p.z - y, max(p.x + p.y*.5, p.y) - x); } //___tOrUz___ float torus(float3 p, float x, float y) { // return length(float2(length(float2(p.x, p.z)) - x, p.y)) - y; return length(float2(length(float2(p.x, p.y)) - x, p.z)) - y; // return length(float2(length(float2(p.y, p.z)) - x, p.x)) - y; } //___cApsUlE___ float capsule(float3 p, float x, float y) { p = abs(p); return min(max(p.y - x, length(p.x, p.z) - y), length(p - float3(0., x, 0.)) - y); } //___prIsm___ float prism(float3 p, float x, float y) { return max(fabs(p.z) - y, max(fabs(p.x)*.9 + p.y*.5, -p.y) - x * .5); } //___OctAgOn___ float octagon(float3 p, float x, float y) { p = abs(p); //return max(p.y-y,max(p.z+p.x*.5,p.x+p.z*.5)-x); return max(p.z - y, max(p.y + p.x*.5, p.x + p.y*.5) - x); } //___pEntAgOn___ float pentagonZ(float3 p, float x, float y) { float3 q = abs(p); return max(q.z - y, max(max(q.x*1.176 + p.y*0.385, q.x*0.727 - p.y), p.y*1.237) - x); //return max(q.z-y,max(max(q.x*1.176-p.y*0.385, q.x*0.727+p.y), -p.y*1.237)-x); } //°°°°°°°°°°°°°°°°° //°°°°° Get Normal float3 getNormal(float3 p) { float epsilon = 0.001f; float3 pX = p, pY = p, pZ = p; pX.x += epsilon; pY.y += epsilon; pZ.z += epsilon; float3 normal; normal.x = trace(pX); normal.y = trace(pY); normal.z = trace(pZ); //return normal; return normalize(normal); }


Seems I didn't rewrite most of those to use "+=" operators either!
If you want to make sense of all this, you'll also need these global declarations I guess:
Code: struct float2 { float x,y; float2() {} float2(float x, float y) : x(x), y(y) {} // float2(const float2 &p) : x(p.x), y(p.y) {} float2 operator + (const float2 &p) const { return float2(x + p.x, y + p.y); } float2 operator - (const float2 &p) const { return float2(x - p.x, y - p.y); } float2 operator - (const float &p) const { return float2(x - p, y - p); } float2 operator * (const float2 &p) const { return float2(x * p.x, y * p.y); } float2 operator * (double c) const { return float2(x * c, y * c); } float2 operator * (float c) const { return float2(x * c, y * c); } float2 operator / (double c) const { return float2(x / c, y / c); } float2 operator += (const float2 &p) const { return float2(x + p.x, y + p.y); } float2 operator -= (const float2 &p) const { return float2(x - p.x, y - p.y); } }; struct float3 { float x, y, z; float3() {} float3(float x, float y, float z) : x(x), y(y), z(z) {} // float3(const float3 &p) : x(p.x), y(p.y), z(z) {} float3 operator + (const float3 &p) const { return float3(x + p.x, y + p.y, z + p.z); } float3 operator + (const float &c) const { return float3(x + c, y + c, z + c); } float3 operator - (const float3 &p) const { return float3(x - p.x, y - p.y, z - p.z); } float3 operator - (const float &c) const { return float3(x - c, y - c, z - c); } float3 operator * (const float3 &p) const { return float3(x * p.x, y * p.y, z * p.z); } float3 operator * (double c) const { return float3(x * c, y * c, z * c); } float3 operator * (float c) const { return float3(x * c, y * c, z * c); } float3 operator / (double c) const { return float3(x / c, y / c, z / c); } float3 operator += (const float3 &p) const { return float3(x + p.x, y + p.y, z + p.z); } float3 operator -= (const float3 &p) const { return float3(x - p.x, y - p.y, z - p.z); } float3 operator *= (const float3 &p) const { return float3(x * p.x, y * p.y, z * p.z); } }; struct float4 { float x, y, z, w; float4() {} float4(float x, float y, float z, float w) : x(x), y(y), z(z), w(w) {} // float4(const float4 &p) : x(p.x), y(p.y), z(z), w(w) {} float4 operator + (const float4 &p) const { return float4(x + p.x, y + p.y, z + p.z, w + p.w); } float4 operator + (const float &p) const { return float4(x + p, y + p, z + p, w + p); } float4 operator - (const float4 &p) const { return float4(x - p.x, y - p.y, z - p.z, w - p.w); } float4 operator - (const float &p) const { return float4(x - p, y - p, z - p, w - p); } float4 operator * (const float4 &p) const { return float4(x * p.x, y * p.y, z * p.z, w * p.w); } float4 operator * (double c) const { return float4(x * c, y * c, z * c, w * c); } float4 operator / (double c) const { return float4(x / c, y / c, z / c, w / c); } float4 operator += (const float4 &p) const { return float4(x + p.x, y + p.y, z + p.z, w + p.w); } float4 operator -= (const float4 &p) const { return float4(x - p.x, y - p.y, z - p.z, w - p.w); } }; float3 rayPositionStart, worldRotation;


Enough of this, I just wanted to share my example of a simple SphereMarcher for CPUs.
It's super basic, but does what I wanted it to do! ;)
BB Image
added on the 2018-04-10 03:12:02 by rudi rudi
Phong illumination model:

BB Image
added on the 2018-04-19 22:59:51 by rudi rudi
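Since the screenshot only shows the result, here is a minimal sketch of the Phong model itself: ambient plus diffuse plus specular, with the specular term built from the reflected light direction. The material weights, the shininess value and the grayscale simplification are assumptions:
Code:
#include <cmath>
#include <algorithm>

struct F3 { float x, y, z; };

static float dot3(F3 a, F3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// Classic Phong, reduced to a single grayscale value.
// n = surface normal, l = direction towards the light, v = direction towards the eye,
// all assumed to be unit length.
float phong(F3 n, F3 l, F3 v, float shininess = 32.0f)
{
    float ambient = 0.1f;

    float ndotl   = dot3(n, l);
    float diffuse = std::max(ndotl, 0.0f);

    // Reflect the light direction about the normal: r = 2*(n.l)*n - l.
    F3 r = { 2.0f * ndotl * n.x - l.x,
             2.0f * ndotl * n.y - l.y,
             2.0f * ndotl * n.z - l.z };

    // Only add specular when the light actually hits the front side.
    float specular = (ndotl > 0.0f)
        ? std::pow(std::max(dot3(r, v), 0.0f), shininess)
        : 0.0f;

    return ambient + 0.7f * diffuse + 0.3f * specular;
}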
hardy: you should make them use "+=" operators. You could then also make the code more compact and readable by not doing every component computation separately, like this:
Code: rayPosition += rayDirection * d * distanceFraction;
Instead of writing out each component, you could do that inside the structs...
added on the 2018-04-25 16:50:00 by rudi rudi
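For example, with compound assignment defined inside the struct, each per-component block collapses to a single line. A sketch of just the relevant operators, not Hardy's full float3; note that they modify *this and return a reference:
Code:
struct float3 {
    float x, y, z;

    // Scale by a scalar; returns a new vector.
    float3 operator*(float s) const { return { x * s, y * s, z * s }; }

    // Compound assignment: modify this vector in place and return a reference,
    // so the result can be chained or ignored.
    float3& operator+=(const float3& p) { x += p.x; y += p.y; z += p.z; return *this; }
    float3& operator-=(const float3& p) { x -= p.x; y -= p.y; z -= p.z; return *this; }
};
With that, "rayPosition += rayDirection * d * distanceFraction;" does the whole update in one statement, exactly as suggested above.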
This is also preferable:
Code:float3 rayDirection = {(float)xx/40.0f, (float)yy/25.0f, 1.0f};

setting the values in one line.
added on the 2018-04-25 16:51:37 by rudi rudi
And this one:
Code:light = RotateAllAxises(light, worldRotation);
added on the 2018-04-25 16:53:05 by rudi rudi
Yes, sure!
As said: I implemented it at the last minute and just had no time left to make use of it everywhere, as the deadline approached very fast! ;) The parts of the code you refer to are from 2010 iirc... I simply didn't update them after implementing the new operators.
If I ever need this code again, I'll make use of it of course, hehe! :)
