Raymarching Beginners' Thread
category: code [glöplog]
Good article noby, there is more to overstepping than I thought there was. Time to try it out :)
I have always overstepped by a constant factor and "fixed" the last iteration step by triangulating the distance from the previous two steps to generate nicer normals...
ollj: The last page you linked to was marked as malware and removed :\
Also, 32kb for a commented WebGL base seems very large. You know there are WebGL demos under 512 bytes, right? Make it 1kb for a decent scene. And there isn't much to comment on, really.
If you want something simpler than ShaderToy, there is the good ol' GLSL sandbox.
ts: the over-relaxation technique is good, but be careful with domain repetition.
In that case the distances aren't uniform (i.e. the distance function can change depending on the position on the ray), which can break the marching algorithm. You're then back to reducing the stepping distance, but you have a much more complex bit of code checking if the ray has stepped too far, and you end up with worse performance.
(Looks great if you're not using domain repetition though!)
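For reference, a minimal sketch of the over-relaxation idea (along the lines of the "enhanced sphere tracing" paper by Keinert et al.), assuming a scene SDF float f(vec3 p); the names and constants here are illustrative, not from the article:
Code:
// over-relaxed sphere tracing: step by omega * radius, and fall back to a plain
// step when the current and previous unbounding spheres stop overlapping.
float f(vec3 p) { return length(p) - 1.0; } // stand-in scene: a unit sphere
float march(vec3 ro, vec3 rd)
{
    float omega = 1.6;      // relaxation factor, keep below 2.0; 1.0 = classic sphere tracing
    float t = 0.0;
    float prevRadius = 0.0;
    float stepLength = 0.0;
    for (int i = 0; i < 128; ++i) {
        float radius = abs(f(ro + rd * t));
        // the relaxed step was only safe if the two unbounding spheres overlap:
        bool fail = omega > 1.0 && (radius + prevRadius) < stepLength;
        if (fail) {
            stepLength -= omega * stepLength; // step back past the unsafe region...
            omega = 1.0;                      // ...and continue conservatively
        } else {
            stepLength = radius * omega;
        }
        prevRadius = radius;
        if (!fail && radius < 0.001) return t; // close enough: report a hit
        t += stepLength;
        if (t > 100.0) break;                  // give up past the far plane
    }
    return -1.0; // miss
}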
In my experience, the discontinuities caused by having non-matching distance functions in neighbouring domain repetition cells can be avoided best by using what I call a "guard object":
The problem is caused by the distance function in the current cell returning a value which is correct for geometry from within that cell, but incorrect for geometry inside the neighbouring cell. The extreme case would be the current cell being empty, so we get inf distance and completely overstep the neighbour. The idea now is to place an object inside the current cell, but outside that cell (confused yet?), simulating an object in the neighbour cell as a kind of cheap impostor. For example, if our cell is a unit cube, we might use a negated cube of size 1.2 centered at the cell origin as a guard object. That way, the distance inside the cell can get as high as 0.6, which is decent, and will drop to 0.1 directly at the border. This guarantees that in all neighbouring cells leaving 0.1 units of free space to the border is enough and the distance bound will be correct in all cases. No fudging around with the marching loop needed. (I think we have kept our loop constant for almost two years now, only adding improvements that help the general case. I remember the times of hand-tweaked loops for different scenes: It was a mess, I don't want to go back. Now we define a distance function, #include our loop, and that's it. Build systems, not special cases :) )
In the domain-operator example code at http://mercury.sexy/hg_sdf/, I used a guard object (search the page for "guard").
If almost all of your cells are empty, adding guard objects this way is obviously a performance killer. In this case you'll have to actually sit down and think about approximating the shape of all the filled cells in your grid by some function to remove the unnecessary cells by intersecting or increasing the size of the guard object. Also, this becomes a bit fiddly sometimes when using non-infinite domain repetition, like the pModInterval1 operator in our lib: In this case, the outermost cells only need to guard against overstepping at their inner border. You'll figure it out :)
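To make the guard idea concrete, here is a minimal self-contained sketch (the fBox/pMod3 helpers mimic the hg_sdf operators of the same names; the per-cell sphere and the hash are just placeholders):
Code:
float fBox(vec3 p, vec3 b) { // distance to an axis-aligned box with half-extents b
    vec3 d = abs(p) - b;
    return length(max(d, vec3(0.0))) + min(max(d.x, max(d.y, d.z)), 0.0);
}
vec3 pMod3(inout vec3 p, vec3 size) { // repeat space along all 3 axes, return the cell index
    vec3 c = floor((p + size * 0.5) / size);
    p = mod(p + size * 0.5, size) - size * 0.5;
    return c;
}
float scene(vec3 p) {
    vec3 cell = pMod3(p, vec3(1.0)); // unit-cube cells
    // per-cell geometry; some cells end up almost empty, which is exactly
    // when the plain distance bound gets too optimistic about the neighbours
    float r = 0.1 + 0.3 * fract(sin(dot(cell, vec3(12.99, 78.23, 37.72))) * 43758.5453);
    float d = length(p) - r;
    // guard object: a negated box of size 1.2 around the unit cell. Its surface
    // lies outside the cell, so it is never visible; it only caps the returned
    // distance at 0.6 in the cell center and 0.1 at the cell border.
    float guard = -fBox(p, vec3(0.6));
    return min(d, guard);
}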
cupe: All good points :)
Re. the 'write it once, properly' thing though: I do this only when I know *how* to write it properly, which is usually after having hacked it together a couple of times. So I tend to hack it, learn, hack it better, then write something reusable.
Otherwise, I spend ages figuring out how to write it properly and it ends up shit :D
The guard object is a great idea. Easy to use with a standard "mod(pos, constant)" repeat, but how about more complex setups? Since the guard object is ideally the same shape as the cell, there could perhaps be a good way to handle this.
I'll have to experiment (especially as I'm using very uncubic repetition!)
Still learning how to get my Lipschitz constants for 3D transformations to not overstep where that matters. A sum of 2 cosines already gets your L up to 2, and similar sums are classic newbie errors.
Gotten to a point where I would rather array many spheres than make a spiral. A simple fractal array of spheres may be worth it for approximating a spiral? Some fractals can look pretty decent after only 2 iterations.
If I define a transform that only increases surface distances for points above the transformed surface, that should automatically be Lipschitz <= 1?
Then I could shrink-wrap a cone into a bowling pin or muscular limb or spike, without any overstepping and without scaling the step distance?
Games that could look and play better when rendered by ray-marching through signed distance fields:
Extending gameplay by a 3D perspective and smarter usage of repeating geometry:
That list comes down to a list of tile-based games with not too many different tile variations: Sim City 2000, Rollercoaster Tycoon 2, Theme Hospital, Sim Earth, Civilization 1, Factorio, Harvest Moon, Zelda 3.
Games with small tile sets. You'd just need to implement occlusion, near-infinite reflections, and warping of space into game mechanics.
Games that are just as good in resolutions below 800x600, which is the default res for Diablo 2.
The game "Stardew Valley" succeeded in proving that Harvest Moon can be ported to the PC master race.
#define "lipschitz" "local lipschitz constant of f()"
#define "lip()" "function f() that also returns its own lipschitz"
More lipschitz:
While using and extending the SDF for a cone, I realized that the SDF for a cone contains the SDF for a triangular wave:
Let y(x, halfwavelength) = abs(x) % abs(halfwavelength);
The Lipschitz constant of the above function y(x, halfwavelength) is 1, as it's a simple triangle wave with a simple first derivative of abs(±1) = 1.
I finally realized that lip(f(x)) = max(abs(f'(x))) over all possible x.
That can be estimated nicely for most differentiable functions within relevant boundaries.
Composite functions inherit the Lipschitz constants of the functions they build on.
Extending the above function with one multiplication for amplitude:
Let ycone(x, coneheight, conewidth) = y(x, coneheight) * conewidth;
ycone() has a Lipschitz constant of conewidth, inheriting the Lipschitz constant of y() = 1.0.
ycone is for a 2D cone. How you introduce a 3rd dimension is up to you. Commonly you just define x = length(p) for a rotation around the x axis. But other 3D-to-2D projections can make more sense.
Thinking about n-dimensional SDFs as projections onto 2D linear functions seems to be a key to writing compact, fast, efficient, clear, reusable code.
If your function is a 3D function, it is likely a composite of 2D functions.
I'm not counting % as much of a function that adds a dimension, but reddit.com/r/badmathematics or r/fractals may easily disagree here.
The Lipschitz constants that a function inherits from the functions it uses must go through the same operations; they have to be summed, multiplied, squared or divided, depending on how you combine functions into larger functions. At least I think so, I may be overgeneralizing.
In the end, the step factor that will never overstep is = 1/max("all the Lipschitz constants returned by the top-level SDF of the distance field").
Having an SDF return a vec2 of (distance, lipschitz) may just be worth it.
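For illustration, a minimal sketch of that vec2 convention: a rippled plane whose Lipschitz bound is easy to derive, and a marcher that divides the step by it (all parameter values are made up):
Code:
// .x = distance (here only a bound, a displaced plane is not a true SDF),
// .y = Lipschitz bound of the whole expression
vec2 fScene(vec3 p) {
    float amplitude = 0.5, frequency = 3.0;
    float h = amplitude * cos(frequency * length(p.xz));
    // |gradient| <= sqrt(1 + (amplitude*frequency)^2) <= 1 + amplitude*frequency
    return vec2(p.y - h, 1.0 + amplitude * frequency);
}
float march(vec3 ro, vec3 rd) {
    float t = 0.0;
    for (int i = 0; i < 128; ++i) {
        vec2 dl = fScene(ro + rd * t);
        if (dl.x < 0.001) return t;
        t += dl.x / max(1.0, dl.y); // shrink the step only when the bound exceeds 1
    }
    return -1.0;
}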
on the performance of signed distance functions that are 100% displacement maps of planes (seamed to a plane, torus or sphere):
Playing Second Life, I thought about SDFs of displacement maps again, but with more experience.
Second Life uses "sculpted primitive objects", which are just a 33x33-vertex plane that gets a displacement map applied, a 64x64 JPEG 2000 image. It's double size to make up for precision loss due to JPEG compression. The RGB values of the texture are a displacement map for 33x33 voxels that define a plane, torus or sphere and are smooth-shaded in Second Life as such.
The advantage of this system is that it comes with free (bad) LOD due to mipmapping the displacement map.
The disadvantage is that the displacement is a 2D matrix of int3 {0..255, 0..255, 0..255}, so it does not allow for a lot of detail.
And still, it allows one routine to create a lot of detailed geometry, and people have mastered it as an art form, creating a whole piano from a displacement map of a plane:
http://wiki.secondlife.com/wiki/Sculpted_prim
Another disadvantage within SL is that it doesn't allow much flexibility in defining different texture faces, but that's more a memory issue within Second Life....
How efficient would it be to make a raymarcher that mostly utilizes such displacement maps of planes? I think it would work nicely within OpenCL.
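As a starting point, a minimal GLSL sketch of marching such a displaced plane (uHeightMap, uHeightScale and the slope fudge factor are assumptions; an OpenCL port would be mechanical):
Code:
uniform sampler2D uHeightMap;  // the displacement map, e.g. a sculpt-style texture
uniform float uHeightScale;

float fHeightfield(vec3 p) {
    float h = texture(uHeightMap, p.xz).r * uHeightScale;
    return p.y - h; // vertical gap to the surface; an overestimate on steep slopes
}
float marchHeightfield(vec3 ro, vec3 rd) {
    const float SLOPE_GUARD = 0.35; // shrink steps; tune to the map's steepest gradient
    float t = 0.0;
    for (int i = 0; i < 256; ++i) {
        float d = fHeightfield(ro + rd * t);
        if (d < 0.001) return t;
        t += d * SLOPE_GUARD;
        if (t > 100.0) break;
    }
    return -1.0;
}
// the free LOD mentioned above falls out of sampling with textureLod() and a
// mip level that grows with t.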
Have a way to do general extrusion. The method is to get the distance to a bezier arc, then translate any other primitive to that point. Say you extrude a cube along an arc, you get something like this:
https://onedrive.live.com/redir?resid=165EE8545EC28F90!1557&authkey=!AKxycPhbL6eNTxo&v=3&ithint=photo%2cPNG
The code for this looks like this: (opencl c)
Code:
float3 FindPtOnBezierArc( float3 a, float3 b, float3 c, float t)
{
float it = 1.0f - t;
float3 p = (it * it *a) + (2.0f * it * t * b) + (t*t*c);
return p;
}
inline float3 swTranslate(float3 p, float3 t)
{
return p-t;
}
float2 extrude(float3 query)
{
float3 ptA = (float3)(-20.000f,0.000f,0.000f);
float3 ptB = (float3)(0.000f,40.000f,0.000f);
float3 ptC = (float3)(20.000f,0.000f,0.000f);
float2 bcd = FindBezierClosestDist(ptA,ptB,ptC,query);
float bzt = bcd.y; // get out t param
float3 bp0 = FindPtOnBezierArc(ptA,ptB,ptC,bzt);
float3 xlq = swTranslate(query,bp0);
float2 rv = sdBox( xlq );
return rv;
}
Finding closest points on the bezier is covered here: https://www.shadertoy.com/view/ldj3Wh
I guess if there's one flaw to this, it's that the extruded shape should probably rotate into the curve. That would be something along these lines, where you get the next point on the arc and rotate to look at it before translating:
Code:
inline float3 swLookAt(float3 p, float3 origin, float3 target, float roll)
{
float3 rr = (float3) (sin(roll), cos(roll), 0.0f);
float3 ww = normalize(target-origin);
float3 uu = normalize(cross(ww,rr));
float3 vv = normalize(cross(uu,ww));
float16 m = CreateTransposeMatrix(uu,vv,ww);
float3 rv = MultiplyMatrixVector3(m, p);
return rv;
}
...
float3 ptA = (float3)(-20.000f,0.000f,0.000f);
float3 ptB = (float3)(0.000f,40.000f,0.000f);
float3 ptC = (float3)(20.000f,0.000f,0.000f);
float2 bcd = FindBezierClosestDist(ptA,ptB,ptC,query);
float bzt = bcd.y; // get out t param
float bztN = bzt + 0.1f;
float3 bp0 = FindPtOnBezierArc(ptA,ptB,ptC,bzt);
float3 bp1 = FindPtOnBezierArc(ptA,ptB,ptC,bztN);
float3 rotq = swLookAt(query, bp0,bp1,0.0f);
float3 xlq = swTranslate(rotq,bp0);
float2 rv = sdCube( rotq );
Although strangely after trying this I just get this sad rotated cube at the top of the arc:
https://onedrive.live.com/redir?resid=165EE8545EC28F90!1558&authkey=!ACUZOlDe6t5DZ2A&v=3&ithint=photo%2cPNG
If anyone can see how I goofed that up, appreciate the second look.
shouldn't your last code-line
Code:
float2 rv = sdCube( rotq );
look like this ?
Code:
float2 rv = sdCube( xlq );
That is correct, thanks for spotting that. However, even passing through xlq results in just the top of the arch. If I reverse the order of transforms and do translate then rotate, I get the arch back, but the rotation twists around and doesn't make much sense either.
Guess there's some error in my thinking about how to transform the query point to mix the rotation and translation.
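For what it's worth, one common convention that avoids mixing frames (sketched in GLSL, the OpenCL version is analogous; the up-vector choice is an assumption): translate the query point into the curve point's local frame first, then apply the inverse (world-to-local) rotation about that local origin. Rotating the raw query point around the world origin and translating afterwards applies the two transforms in different frames, which would explain the arch collapsing to one cube. The leftover twisting you see with translate-then-rotate is usually the fixed up vector flipping as the tangent passes it; a rolling (parallel transport) frame would fix that.
Code:
vec3 toLocalFrame(vec3 query, vec3 bp0, vec3 bp1) {
    vec3 xlq = query - bp0;           // translate FIRST: the curve point becomes the origin
    vec3 ww  = normalize(bp1 - bp0);  // local "forward" along the arc
    vec3 uu  = normalize(cross(ww, vec3(0.0, 1.0, 0.0))); // degenerate if ww is vertical
    vec3 vv  = cross(uu, ww);
    // inverse rotation = multiply by the transpose of the local basis
    return vec3(dot(xlq, uu), dot(xlq, vv), dot(xlq, ww));
}
// usage: rv = sdCube(toLocalFrame(query, bp0, bp1));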
on overstepping (with smoothstep() and sinusoid() as examples):
every function has a [lipschitz_constant], relevant for more accurate (and advanced) raymarching:
[lipschitz_constant] == max(abs([all tangents of f(x)]))
[lipschitz_constant] == max(largest local maximum of f'(x), -smallest local minimum of f'(x))
//there is more to [lipschitz_constant]s, but raymarching doesn't care for most of that extended stuff.
for raymarching, the [lipschitz_constant] can explicitly be defined as "const" of a function,...
... to be calculated (at least as a reasonably accurate upper bound) and returned for [higher quality raymarching];
if ([lipschitz_constant]>1.0) then
- you are likely overstepping, leading to surfaces appearing as if they were invisible, because you raymarched through them,...
- ...depending on parameters such as "rotation of a spiral over time" and "viewing angle".
- to not overstep, ...
- ... you must divide [stepDistance] by the (largest) [lipschitz_constant] (of all [top-level distance function]s).
you can see your distanceField as one [top-level distance function],...
... or see it as containing multiple other [top-level distance function]s,
... all returning their [lipschitz_constant] = max(1, all sub-lipschitz constants) to a higher function.
The fast (and bad) approach is to ignore all [lipschitz_constant]s, or to assume they are all below the same upper bound,
with the default upper bound being = 1.0;
The precise approach is to calculate lipschitz constants and use that value to get higher precision for less performance.
this is EXACTLY what "hg_sdf" generally does NOT want you to do;
- using functions with [lipschitz_constant>1] and then doing smaller raymarch steps...
- ... by dividing steplength by the largest top-level lipschitz constant...
- ... by having functions with parameter-dependent LOCAL lipschitz constants return their relevant lipschitz constant.
because it's bad for performance, and functions whose lipschitz constant is ALWAYS <=1 are just better for raymarching!
But sometimes there is just no such function for your desired effect with your desired accuracy.
for example smoothstep() more often has a [lipschitz_constant] >1.0 than <1.0, but that depends on its 3 parameters.
the text below is overly general, to be extended by including other functions that return their lipschitz constant,...
... to have ANY function with a knowable lipschitz constant be raymarched without overstepping;
general example case: smoothstep() without overstepping:
we extend smoothstep() to also return its [lipschitz_constant] as [.y] of a vec2;
smoothstep() is a wrapper of other functions;
h(x)=(3.0-2.0*x)*x*x;//hermite interpolation, like lerp() but with a "nicer" first (and second) derivative, smoother curve.
smoothstep(e0,e1,x){if (x<e0)return 0;if (x>e1)return 1;//clamped hermite interpolation
return h(t);}//with the normalized parameter t=(x-e0)/(e1-e0)
lerp(a,b,c)= a+(b-a)*c;
lerp(0,1,c)= c; //so t is just the inverse lerp of x over [e0..e1], remapping it to [0..1];
smoothstep(e0,e1,x){if (x<e0)return 0;if (x>e1)return 1;
return h((x-e0)/(e1-e0));}//hermite interpolation
now getting the [lipschitz_constant] of the hermite interpolation (and of the other branches in the function):
h  (x)= 3*x*x- 2*x*x*x = (3.0-2.0*x)*x*x
h' (x)= 6*x - 6*x*x = (6.0-6.0*x)*x
h''(x)= 6 -12*x //and it's beloved because its h''(x) is still linear
the lipschitz_constant of h is simple, it is the tangent at x=0.5: h'(0.5)=3/2=1.5;
we only care for h'(0.5), because that point always has the largest abs tangent for this function:
h'(0.5)=6*0.5 - 6*0.5*0.5
h'(0.5)=3.0 - 1.5
h'(0.5)=1.5
this is a factor for our [lipschitz_constant] of smoothstep(e0,e1,x); [e0<x<e1]:
the chain rule adds a factor of 1/(e1-e0) from the normalization, so the constant is 1.5/(e1-e0),
multiplying steplength*=(e1-e0)*2.0/3.0;
but for [x<e0] and [x>e1] we don't need to shorten the step distance, so we extend float smoothstep() to vec2 smoothstepLip():
///smoothstep() extension, returns... [.x]=smoothstep(e0,e1,x); [.y]=local lipschitz constant
vec2 smoothstepLip(float e0, float e1, float x){
if (x<e0)return vec2(0.0,0.0);//LOCAL lipschitz constant for this case is =0;
if (x>e1)return vec2(1.0,0.0);//LOCAL lipschitz constant for this case is =0;
float e=max(e1-e0,0.0000000001);//lazy way of avoiding division by 0.
float t=(x-e0)/e;//normalize x to [0..1]
return vec2((3.0-2.0*t)*t*t, 1.5/e);} //LOCAL lipschitz constant for this case is =1.5/(e1-e0);
//defining fractions like this is better for compilers for integrated systems (AND for pretty much any other compiler).
//todo, this makes me think; can I (sometimes) just return f'(x) as the second component? if (that is a faster option)...
//example use;
float d0=distanceToCircle0(p);
float d1=distanceToCircle1(p);
float slider=0.5;//interpolation slider, interesting range is [0..1], outside ranges are linear.
vec2 ds = smoothstepLip(d0,d1,slider);
nextDistance=ds.x/max(1.0,ds.y);
// dividing by max(1.0,ds.y) makes sure to only take smaller steps if ANY function has a lipschitz constant >1.0;
---------
[lipschitz_constant] on sinusoids:
//the classic thing you likely tried once is to calculate a distance to round water ripples with;
//return distance to round water ripples on a plane (that has [.y] as its normal).
float distRipples(vec3 p) {return cos(length(p.xz)+time)+p.y-2.0;}
...
float dist=distRipples(p);
time+=0.01;
...
//instead of one cosine, we use 4 cosines.
//and instead of making a circle, we make 2x 4 cosines (in 2 dimensions) (showing addition of [lipschitz_constant]s)
//
//watch "engineerguy harmonic analyzer" to get the basics on sinusoids and their basic potential.
//I dare you to include [fast fourier transform] in shadertoy demos.
//I dare you to include signed distance functions,
//...based on 3d [parametric curve functions] that define surfaces (of the volume inside of them) and not just 3d lines.
//... that then also react to (fft of) audio input!
//... or just use fft for any compression of any curved shapes that define signed distance functions.
//I dare you to see "wolfram alpha twilight sparkle like curve" and read the 3 weblog entries that extend it to 2d sdf();
define an array of wavelength[length];
- it makes sense to use a linear or inverse-square sequence of wavelengths; this translates nicely to other identities!
- it makes sense to discard all even wavelengths, only keeping odd entries.
- most famous for this is the basic wavelength sequence: [1 , 1/3 , 1/9 , 1/27 , 1/81 ... ]
define an array of amplitude[length];
- for each wavelength define 1 amplitude; wavelength.length == amplitude.length;
your sinusoid(t) function is the sum of all cosines of both arrays;
float sinusoid(float t){
float sum=0.0;
for (int n=0; n<amplitude.length; ++n){sum+=cos(t/wavelength[n])*amplitude[n];}
return sum;}
the [lipschitz_constant] of cos(t/wavelength)*amplitude is = amplitude/wavelength; //[t] is irrelevant here.
vec2 sinusoidLip(float t, float amplitude[4], float wavelength[4]){
float sum=0.0;
float lip=0.0;
for (int n=0; n<4; ++n){
sum+=cos(t/wavelength[n])*amplitude[n];
lip+=amplitude[n]/max(wavelength[n],0.000000001);}//better to use a planck_length than to divide by 0.
return vec2(sum,lip);}
float wavelengths[4]=float[](1.0, 1.0/3.0, 1.0/9.0, 1.0/27.0);//shorter and shorter wavelengths with no interferences between the entries.
//shorter wavelengths and higher amplitudes are more problematic for performance and accuracy.
float Xamplitudes[4]=float[](0.7, 1.0, 0.4, 3.0);//set waveform for water ripples in x-dimension
float Zamplitudes[4]=float[](0.5, 1.5, 0.2, 2.0);//set waveform for water ripples in z-dimension
//this is NOT round ripples, but the sum of square ripples: independent dimensions, without distance(p);
vec2 d=sinusoidLip(p.x,Xamplitudes,wavelengths)+sinusoidLip(p.z,Zamplitudes,wavelengths);
//the above line sums 2 waveforms over 2 dimensions.
nextDistance=d.x/max(1.0,d.y);
//I don't think that a LOCAL lipschitz constant is generally smaller than a global one for sinusoids,
//but you can test some local cases to see if you get any significant performance boost.
//the best (counter)example cases are ANY sinusoids that approximate a square wave <- very large lipschitz constant.
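putting the pieces above together into one compilable GLSL fragment (values are the ones from this post; the marcher and its epsilon are illustrative):
Code:
const float WL[4] = float[](1.0, 1.0/3.0, 1.0/9.0, 1.0/27.0);
const float AX[4] = float[](0.7, 1.0, 0.4, 3.0);
const float AZ[4] = float[](0.5, 1.5, 0.2, 2.0);

vec2 sinusoidLip(float t, float a[4], float wl[4]) {
    float sum = 0.0, lip = 0.0;
    for (int n = 0; n < 4; ++n) {
        sum += cos(t / wl[n]) * a[n];
        lip += a[n] / wl[n]; // each term's slope bound just adds up
    }
    return vec2(sum, lip);
}
vec2 fRipples(vec3 p) { // square (per-axis) ripples displacing a plane
    vec2 wx = sinusoidLip(p.x, AX, WL);
    vec2 wz = sinusoidLip(p.z, AZ, WL);
    return vec2(p.y - (wx.x + wz.x), 1.0 + wx.y + wz.y);
}
float march(vec3 ro, vec3 rd) {
    float t = 0.0;
    for (int i = 0; i < 256; ++i) {
        vec2 dl = fRipples(ro + rd * t);
        if (dl.x < 0.01) return t;
        t += dl.x / max(1.0, dl.y); // smaller steps only when the bound exceeds 1
    }
    return -1.0;
}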
The rotation of a cube that path-follows a curve is rotated by f'(x) of that curve for each dimension, via a rotation matrix or quaternion multiplication. GLSL likely prefers matrices in the long run.
Approximating f'(x) as the slope of a line through 2 points can be simpler and faster, but it gets tricky near the start and end points of your curve.
Parametric sums of cosines are likely easier to use in GLSL, easier than bezier curves or NURBS curves.
Parametric sums of cosines are easier to parallelize and easier to transform into many identities for sum approximations, for compression and estimation of many boundaries.
Fourier analysis in Shadertoy may become useful.
Somehow that actually made sense to me, ollj. Pouet IQ must be rubbing off on my brain.
Twitter shows you that Japan approaches raymarching differently, preferring Unity over Unreal Engine.
Japan loves its anime-shaped meshes for MikuMikuDance and "dating" sims too much to not include meshes in raymarching; therefore they mix the z-buffer of all meshes with the z-buffer of the signed distance field.
It's achieved by 3 projection matrices from object-space vertex coordinates to screen space.
Smart z-buffering definitely increases render performance if you manage to combine the best of both methods.
Some people tried a-buffering and OBB-buffering on signed distance functions to increase the number of individual unique objects in a scene.
In some ways "object space raymarching" is just half-assed, but in most ways it gives you many more options to tweak the look and performance of a scene by mixing the z-buffers from multiple sources.
One approach is to just quickly estimate bounds of a more precise z-buffer, allowing you to skip a lot of partial sums.
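A hedged sketch of the z-buffer mixing (the uniforms and the depth convention are assumptions; f() is the scene SDF and shadeSdfHit() a hypothetical shading helper):
Code:
uniform sampler2D uMeshDepth;      // depth buffer from the raster pass
uniform float uNear, uFar;
uniform vec3 uCamForward;

float linearizeDepth(float z01) {  // standard perspective depth, convention-dependent!
    return 2.0 * uNear * uFar / (uFar + uNear - (2.0 * z01 - 1.0) * (uFar - uNear));
}
vec4 shade(vec3 ro, vec3 rd, vec2 uv) {
    // eye-space depth of the mesh under this pixel, converted to a ray length
    float tMesh = linearizeDepth(texture(uMeshDepth, uv).r) / dot(rd, uCamForward);
    float t = 0.0;
    for (int i = 0; i < 128; ++i) {
        float d = f(ro + rd * t);
        if (d < 0.001) return vec4(shadeSdfHit(ro + rd * t), 1.0); // SDF is closer
        t += d;
        if (t > tMesh) return vec4(0.0); // mesh is closer: let the raster pass win
    }
    return vec4(0.0);
}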
Hi people!
I have a question about raytracing. Sorry if it's noob but...
To make an sdf of an arbitrary, complex object, we need to discretize the space or else it's too slow (am I right?), but making sdfs of simple shapes like cubes and boxes is simple, I guess.
What if we set a box around each object using its largest dimensions which fully encompasses the object, use that box as a grid (subdivision into lots of smaller cells and listing triangles in them) and simplify the scene by making an sdf of the boxes? When a ray traversing the sdf hits a box, we use the incidence spot and continue the ray's traversal through the box's (or object's) grid for tracing triangles. To find out which box is for which object, we can use something like an object id for boxes.
Does this work? Has it been done before? If not, can we combine sdf with grids to have the best of both? Because sdf is relatively fast, so we can have very high-res grids for objects, or a balance between them.
"sdf is relatively fast"
that sentence is relatively wrong.
Space folding and domain repetition with ONE box allows you to have infinite boxes, and that is fast.
Having thousands of box sdfs is a terrible idea.
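For scale, the whole "infinite boxes from ONE box" trick is a few lines (a generic sketch, not anyone's exact code):
Code:
float fBox(vec3 p, vec3 b) {
    vec3 d = abs(p) - b;
    return length(max(d, vec3(0.0))) + min(max(d.x, max(d.y, d.z)), 0.0);
}
float scene(vec3 p) {
    vec3 cellSize = vec3(4.0);
    p = mod(p + 0.5 * cellSize, cellSize) - 0.5 * cellSize; // fold space into one cell
    return fBox(p, vec3(1.0)); // one box evaluation, infinitely many boxes on screen
}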
xTr1m: thanks for replying! What about the rest?
Also thousands of boxes is too much of course, but I don't think a demo needs that many objects in one scene anyway. Does it?
Stop your thought right there, and think about what "raymarching a box sdf" means, in comparison to "raytracing an AABB". Imagine both algorithms and try to understand what the GPU does in both cases, and now try to convince me, why for this simple example the raymarched box sdf is faster.
What you mentioned earlier has been done already lots of times: you're describing a bounding volume hierarchy, either as an octree or as a k-d tree. This only makes sense for accelerating the process of raytracing. Raymarching is a completely different algorithm.
"why for this simple example the raymarched box sdf is faster." <-- just to clear up any confusion, my point is that raymarching cannot be the faster algorithm in this case.
Maybe I can't say what's in my mind very well. I know about k-d tree and bvh, believe me, I'm describing something else!
And I posted it in raymarching thread because I didn't want to open another thread. I know what raymarching and raytracing is to some extent.
I mean in bvh, boxes are grouped together and surrounded by another box. I'm saying we can set the boxes for each object separately and use those boxes as objects for the sdf.
Quote:
we set a box around each object using its largest dimensions which fully encompasses the object, use that box as a grid (subdivision into lots of smaller cells and listing triangles in them)
Quote:
in bvh, boxes are grouped together and surrounded by another box
What exactly is the difference? :)
It's obvious! The 'integration' part
In a bvh, boxes are included in each other, but what I'm saying is the boxes are completely separate: they surround and encompass only the object inside them, not each other.
Why I believe this idea is cool is that we can generate and traverse an accurate sdf out of the boxes pretty fast, and when a ray hits a box, use the grid cells inside it for tracing triangles.