Raymarching Beginners' Thread
category: code [glöplog]
While safety factors ( like the 0.25 in hardy's "dist*0.25" example ) can make not-quite-distance-functions safe to use as a distance function, they also slow raymarching down a lot. Think of Zeno's paradox: if you only march 50% of the remaining distance every time, you'll never reach your destination. Luckily with raymarching we just need to get "close enough", but it can still take lots of steps to get that close.
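To make that cost concrete, here's a minimal sphere-tracing loop sketch (map(), the step count and the bailout distance are placeholders, not from any post here): each step advances by the returned distance, so scaling that distance by 0.25 means roughly four times as many steps to cross the same stretch of empty space.
Code:
float march(vec3 rayOrigin, vec3 rayDirection)
{
    float t = 0.0;
    for (int i = 0; i < 128; i++)                        // max step count, placeholder
    {
        float d = map(rayOrigin + t * rayDirection);     // scene distance estimate
        if (d < 0.001) return t;                         // close enough: call it a hit
        t += d;                                          // stepping by 0.25*d here would need ~4x the steps
        if (t > 100.0) break;                            // bailout distance, placeholder
    }
    return -1.0;                                         // miss
}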
So:
- use the largest safety factor you can. Don't use 0.25 if your scene still looks fine with 0.30 or higher. The exact value depends on the slope of your distortion: if you want needle-like spikes to protrude from your object, you'll need a crazy small factor like 0.01, but for smooth bumps 0.75 could be enough.
- Only use that safety factor for the object that needs it. Don't write:
d = 0.25 * distance_to_whole_scene(p);
Instead, use the safety factor *inside* distance_to_whole_scene() :
float distance_to_whole_scene(vec3 p)"
{ float d = distance_to_safe_object1(p);
d = min(d, distance_to_safe_object2(p);
d = min(d, 0.25 * distance_to_unsafe_object(p);
return d;
}
- you can use bounding objects to march quickly to somewhere close to your unsafe object, and then march with the safety factor to the distorted object:
d = distance_to_boundingbox(p);
if ( d < .5) // change bounding box margin to your liking
d = .25 * distance_to_unsafe_object(p);
You can also use a bounding sphere. But keep in mind that bounding objects can cause artifacts with ambient occlusion, because the AO ends up sampling the distance to the bounding object instead of the real surface.
- there's also cone marching, which limits the cost of the safety factor by marching close to the object at low resolution and doing the final approach at high resolution. But it's harder to implement (multiple passes etc.). See http://www.fulcrum-demo.org/wp-content/uploads/2012/04/Cone_Marching_Mandelbox_by_Seven_Fulcrum_LongVersion.pdf
I have so much love for you guys right now XD
I've implemented a couple of the optimisations you have mentioned and it's almost doubled my frame rate!
I have also cut down my lighting a little, which gave me a few more precious frames. Just going to trawl through the thread and hopefully find a fast noise algorithm, and maybe experiment with some metaballs if I can get my head around them.
Thanks again!
I did a 1k webgl version of my 704 demo.
This version is very similar but uses webgl and adds reflections, and some fake soft shadows.
http://www.pouet.net/prod.php?which=62822
Seven: good tips there :)
I'll add: tracing can be a lot faster. If you can split your scene into bounding spheres, planes, maybe boxes, consider whether it'll be faster to trace to the nearest surface before marching. If it can get you a good distance into the scene it might pay off; if not, it probably won't.
Where I've found that particularly helpful: ray marching a terrain. If the camera is often above the top of the terrain and usually looking parallel to it, that's really the worst case and it can be really slow. But trace to a top bounding plane, and suddenly half the rays (at the top of the screen) very cheaply miss the terrain, and the other half move a long way forward very quickly.
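A rough sketch of that plane trick (terrainMaxHeight and the surrounding names are assumptions, not from the post): analytically intersect the ray with the plane y = terrainMaxHeight first, and only start marching from there.
Code:
// skip the empty space above the terrain before marching
float t = 0.0;
bool hitPossible = true;
if (rayOrigin.y > terrainMaxHeight)
{
    if (rayDirection.y >= 0.0)
        hitPossible = false;     // looking up or level: this ray can never reach the terrain
    else
        t = (terrainMaxHeight - rayOrigin.y) / rayDirection.y;  // ray/plane intersection with y = terrainMaxHeight
}
// if (hitPossible) { march the terrain starting at distance t instead of 0 }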
psonice: you can extend that algorithm to a quadtree, where each non-leaf node represents the max height of the child nodes. It works well on the GPU too because you can represent the quadtree as mipmaps.
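If anyone wants to try that, the mipmap trick boils down to building "maximum mipmaps": each coarser level stores the max of the four texels below it instead of their average. A sketch of one reduction pass (all names assumed), run once per mip level:
Code:
// one reduction pass: read four texels of the finer level, write their max to the coarser level
uniform sampler2D finerLevel;   // previous (finer) mip level of the heightfield
uniform vec2 texelSize;         // 1.0 / resolution of the finer level

void main()
{
    vec2 base = (gl_FragCoord.xy - 0.5) * 2.0;   // lower-left texel of the 2x2 block in the finer level
    vec2 uv = (base + 0.5) * texelSize;
    float h0 = texture2D(finerLevel, uv).r;
    float h1 = texture2D(finerLevel, uv + vec2(texelSize.x, 0.0)).r;
    float h2 = texture2D(finerLevel, uv + vec2(0.0, texelSize.y)).r;
    float h3 = texture2D(finerLevel, uv + texelSize).r;
    gl_FragColor = vec4(max(max(h0, h1), max(h2, h3)));  // max instead of average
}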
Concerning speed-up-tricks (just some more basic concepts):
--Choice of loop-code:
I f.e. have some
do{test ray for intersection}...while( length(rayPosition - rayOriginalPosition) < bailoutDistance )
loop...
(do/while is the best choice for a raymarcher i think; i tested it versus for{} and while{} some years ago and it yielded the best framerate. It must have to do with the shader compiler optimization, and maybe only applies to HLSL!)
p.s.: The "rayOriginalPosition" is the initial camera-position before you start travelling on your ray! (bad naming, i know!)
--Reflections (multiple passes in general) -> lower your bailout-distance...
...once i have an intersection/HIT! i calculate the color for the pixel and set it...then i prepare the reflection-pass:
Reset the rayOriginalPosition to the actual position of the ray, reflect the rayDirection, and travel my EPSILON along that new ray with my new rayDirection (else your initial first steps would be really tiny, and most of the time i didn't even get any reflections at all without doing so!). I also reduce the bailout-distance, as said, by simply halving it! (may look bad on some scenes, needs tweaking per scene!):
Code:
// change parameters for next stage
rayOriginalPosition = rayPosition;
rayDirection = reflect(rayDirection,normal);
rayPosition += rayDirection * EPSILON;
// reduce the depth the ray travels for next stage
bailoutDistance*=0.5;
Reflections don't need to travel too far, as they aren't too visible in a moving environment, so this won't meet the eye if you cleverly PostFX your way out! ;)
--Raise the safety factor (0.25 * d)...
...the farther away from the eye your ray is looking for intersections. If something is far away you won't see the artefacts anyway! ;)
Something like "0.25+raiseFactor*travelledDistance/bailoutDistance * d" should do!
--Raising the EPSILON (the intersection-close-enough-trigger -> if(d<EPSILON) HIT! )
This one is tricky and needs some clever tweaking per scene to work out, but if you have a really nice scene, already optimized to the max, still going at like 15-20fps, this may be your last resort! It's artefacts vs. beauty, you need to find a setting which works for your scene!
It works like the one about the safety factor above!
Hope you have fun tweaking and twiddling your scenes with this knowledge, i had...and had not! ;)
phew, almost had to retype it all, as i clicked on a link in the real window accidentally when i wanted to close the PREVIEW-window...luckily i am aware of my luck and copied it all to clipboard before i previewed it! :D
Now for what i came here:
I have been working on optimizing my basecode; the last thing i did was optimizing the distance-estimation functions...and i think i have two nice things to share now, not too sure, though:
--the Cube-DE:
IQ has two versions on his DE page
Code:
Box - signed
float sdBox( vec3 p, vec3 b )
{
vec3 d = abs(p) - b;
return min(max(d.x,max(d.y,d.z)),0.0) +
length(max(d,0.0));
}
Box - unsigned
float udBox( vec3 p, vec3 b )
{
return length(max(abs(p)-b,0.0));
}
I guess we all know using signed DEs is the way to go, so i had a look into my DE, it looked like this:
Code:
float cube(float3 p,float3 x)
{
return max(max(abs(p.x)-x.x,abs(p.y)-x.y),abs(p.z)-x.z); // many asciis
//return length(max(abs(p)-x,0.)); // unsigned, artefacts
}
So here goes my problem, i am absolutely unsure by now (overthinking! bad coder-disease!) if the uncommented version (many asciis) is signed or not! Please tell me it is! ;)
Anyway, this version has 56 ascii-letters (inside the curly braces), so i had to come up with sth smaller for 4ks ofcoz! ;) This here:
Code:
float cube(float3 p,float3 x)
{
p=abs(p)-x;
return max(max(p.x,p.y),p.z);
}
Those are only 42! asciis! And works the same!
And even though i guess i am not the first one coming up with this smaller approach, i am even more confused about whether it's still signed or not now! ;) Please tell me it is! ;)
While at it, i thought i should also optimize
--the roundedCube-DE:
IQ has only this one version:
Code:
Round Box - unsigned
float udRoundBox( vec3 p, vec3 b, float r )
{
return length(max(abs(p)-b,0.0))-r;
}
while mine looked like this:
Code:
// UNSIGNED !!!!
float rcube(float3 p,float3 x,float y )
{
return length(max(abs(p)-x+y,0.))-y;
}
I have absolutely no idea what my brain was thinking when coding this (the "+y"), but i used it like this several times and i never checked back after i had it going!
So now that i finally found my stupidity, i wanted to come up with a signed version atleast, so here it is:
Code:
DAMN! After i thought i had figured it and wrote all of this i realized my stupidity once again...my version does NOT work! As i had tested IQs version just before i altered my code...i had two lines of "return rcube" and "return srcube" (my version), so i forgot to comment out iqs return-line (which is above my return-line) and really really thought i had the solution! :&
Ok, after this setback imma rest and refrain from code for a day or two in shame! :(
Let me know if you came up (a long time ago already) with a signed-roundedCube-DE! ;)
CRRRRRRRMMBLLL!
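For what it's worth, one common way to get a signed rounded box is to take the fully signed box DE quoted above and simply subtract the radius; treat this as a sketch to verify rather than the poster's (abandoned) version:
Code:
// signed rounded box: the fully signed box DE, inflated by r
float srcube(vec3 p, vec3 b, float r)
{
    vec3 q = abs(p) - b;
    return length(max(q, 0.0)) + min(max(q.x, max(q.y, q.z)), 0.0) - r;
}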
Quote:
"0.25+raiseFactor*travelledDistance/bailoutDistance * d"
should have been:
"(0.25+raiseFactor*travelledDistance/bailoutDistance) * d"
;)
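As a sketch, that corrected distance-dependent factor dropped into a marching loop could look like this (raiseFactor, bailoutDistance and EPSILON are the earlier post's names; scene(), the loop scaffolding and the distance-scaled epsilon are assumptions), growing both the step relaxation and the hit threshold with travelled distance:
Code:
float travelledDistance = 0.0;
while (travelledDistance < bailoutDistance)
{
    float d = scene(rayPosition);
    // get sloppier (bigger steps, bigger hit threshold) the further the ray has travelled
    float relax = 0.25 + raiseFactor * travelledDistance / bailoutDistance;
    if (d < EPSILON * (1.0 + travelledDistance)) break;   // HIT, with a distance-scaled epsilon
    rayPosition       += rayDirection * relax * d;
    travelledDistance += relax * d;
}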
someone asked if there is an editor ... well, now there is a very basic one for raymarched models/scenes: http://www.thrill-project.com/blog/?p=182 usability and features will improve in the next couple of weeks. I keep a list of feature requests if you've got one.
linked link ;)
movAX13h: not sure if you're aware of it, but there is this paper about an SDF modelling tool. Might give some inspiration.
Let's put a one-direction pipeline to that other thread from here; maybe someone puts one back to here in some years again!
Raymarching Tutorial
(page 6 for relevance, page 1 for hilarity!)
Hi,
I'm interested in using ray marching on a shader (GLSL) to reproduce 3D polygon models. I saw this technique using 3D textures to create SDFs..
http://www.pouet.net/topic.php?which=7920&page=51
This is just what I'm looking for - but does anyone have any source code for this?
Also, how would I create the 3D texture from a model in the first place? I have Cinema 4D & Blender.
Thanks for any help..
[raymarching signed distance function] starting points for beginners are:
defining signed distance fields, basically functions that just return the distance of a point to a surface:
http://iquilezles.org/www/articles/distfunctions/distfunctions.htm (also defines a signed distance field for a quad/polygon, but that function is not too efficient, too many dotproducts for a simple face, you may instead want to replace your polygon model by unions of simpler signed distance field functions, as the distance of a point to a sphere/cube/cone is defined with much less algebra)
your reference bunny image, to process less, intersects a mesh with multiple planes, calculates the signed distance to those slices, and interpolates between them.
it's a solution to rendering meshes, but it's backwards.
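In other words (a tiny sketch, not from the linked page, all names assumed): a "scene" is just a function that unions a few primitive distances with min(), which is usually far cheaper than a per-triangle polygon SDF.
Code:
float sdSphere(vec3 p, float r) { return length(p) - r; }

float sdBox(vec3 p, vec3 b)
{
    vec3 d = abs(p) - b;
    return min(max(d.x, max(d.y, d.z)), 0.0) + length(max(d, 0.0));
}

// the whole "model" is just the min() (union) of a few cheap primitives
float map(vec3 p)
{
    float d = sdSphere(p - vec3(0.0, 1.0, 0.0), 1.0);                   // body
    d = min(d, sdBox(p - vec3(0.0, -0.5, 0.0), vec3(2.0, 0.1, 2.0)));   // base slab
    return d;
}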
glsl shaders to render stuff:
https://github.com/nicoptere/raymarching-for-THREE (has open-source functions, for GLSL framed by WebGL in js, using THREE.js; it has some nice functions easy to translate to other frameworks. You can just use the GLSL code, it's not too complex.)
youtube tutorials can teach you how to implement glsl to make you an executable for your hardware.
the more i think of sdf, the more i realize that meshes should have died, gone out of favor 10 years ago.
---
3d textures:
once a ray is marched to a surface, determining the color of that surface is pretty much the same process. imagine a room full of spheres with different colors; your point is in one of these spheres. it will have the color of whatever it is inside of. you just need to define colors for rooms/surfaces, relatively simple uv mapping.
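A sketch of that coloring idea, again with made-up names and primitives: have the scene function return the distance plus a material id, then after the march pick the color from whatever the hit point belongs to.
Code:
// x = distance to the closest object, y = its material id
vec2 mapWithMaterial(vec3 p)
{
    vec2 res = vec2(length(p - vec3(0.0, 1.0, 0.0)) - 1.0, 1.0);  // material 1: a sphere
    float dFloor = p.y;                                            // material 2: a ground plane
    if (dFloor < res.x) res = vec2(dFloor, 2.0);
    return res;
}

// after the march has hit, turn the material id into a color
vec3 materialColor(float id)
{
    if (id < 1.5) return vec3(1.0, 0.2, 0.2);   // sphere: red
    return vec3(0.8, 0.8, 0.8);                 // floor: grey
}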
i know, you love marching in spheres, but i am pretty sure you don't have to.
you don't need to calculate euclidean distance, one square root for each iteration of each ray, till you hit a surface, distance 0...
instead of a signed distance function returning euclidean distance, it should return a vector, one distance for each dimension.
that returned vector defines a bounding box that you can freely march around in, and of course you want to march to its border on each iteration; this comes down not to calculating the intersection of a ray with a box, but to calculating the signed distance vector of a box that is itself defined by a signed distance field vector.
only if one dimension of this box is negative do you need to calculate euclidean distance.
this may add more iterations along diagonals, but it causes fewer iterations along nearly parallel surfaces, the more they are aligned to the coordinate system's axes.
rotating the coordinate system to move less diagonally may be worth it.
and even if it doubles the iterations, many iterations will have one square root less than using euclidean distance. even if your processor is great and energy efficient at calculating the length of vectors (which i doubt in general), it still comes with its accumulated inaccuracies.
You Are pathethic for hanging on to outdated paradigms such as distance fog and a linear decrement of itterations...
I am a noob but i found this to be awesome:
float distpool = 1000000.0f;  // total distance budget for the ray
float itterpool = 1000.0f;    // iteration budget
float eps = 0.1f;
// our epsilon is non-static.
// we define max iterations and distance as floats that we decrement while both are > 0.
while (distpool > 0.0f && itterpool > 0.0f)
{
    float dist = shortestdistancetoallsurfaces(rayposition);
    if (dist < eps)
    {
        // you hit a surface. Calculate normal. Calculate light. Calculate refraction. Calculate texture. Sum up to the screenspace pixel color...
        break;
    }
    rayposition += raydirection * dist;  // advance the ray (implied in the original)
    eps *= 1.01f;
    // the more iterations, the larger eps gets. Exponentially.
    itterpool -= eps;
    // yes, we decrement the iteration budget by an incrementing epsilon. It's awesome.
    distpool -= dist / eps;
    // this allows much larger render distances with raymarching while preserving high accuracy
    // for many iterations with short distances to surfaces.
}
Result is 10x fps and a nearly linear level-of-detail decrease along flat surfaces, like near a horizon.
A side effect is a lot of moire patterning becoming visible in the distance.
Also near infinite detail and distance possible.
Anyone, please tweak my constants to converge more linearly.
In General. My constants are to be set scene specific.
An exterior scene should increment eps more exponentially for each iteration.
An interior or forest scene should start with lower epsilons than a desert landscape.
This can easily be estimated by doing some statistics on some random raycasts to learn if we are in a small detailed room or in a wide open desert with very little ambient occlusion but a lot of parallel floor and horizon in our view.
ollj: ok, I see what you are doing here. You are ray-marching without a fixed maximum iteration count; instead you use some heuristic based on an exponentially growing epsilon and traversed distances. Moreover, you are laughing at people that are not doing so ;-)
I totally agree with you that if you go with a relatively small iteration count (say max. 200), then you need to deploy some ugly tricks like the "distance fog" that you mention to "fade out" areas where the ray does not converge to a surface but is "stuck in the air" because the iteration count reached the maximum.
It's indeed an important problem, that was also addressed by various publications, including Enhanced Sphere Tracing by the very same people who were writing in this thread.
I think, in the general case, you would need to know even more about the field than just traversed distances; for example, you may have a concave mirror. But this is kind of what you said as well:
Quote:
In General. My constants are to be set scene specific.
However, I think you are ignored at large, mostly because of statements like this:
Quote:
You Are pathethic for hanging on to outdated paradigms such as distance fog and a linear decrement of itterations...
Maybe it's just a non-native speaker language mistake, but you must realize that calling people "pathethic" is not the best opening statement in a technical discussion.
Quote:
Eps *= 1.01f;
Why? If you think about it, you want a bigger epsilon the further away an object is, as you need less detail the further away. So, your line would be way more efficient if you'd take the calculated distance to the closest object into it like this:
Code:
Eps *= 1.0 + 0.01*dist;
Would also yield a consistent look all over once you tweaked the values correctly. If the closest object is 5 units away, with your code you would still try to get as close as 1.01*epsilon in your next step, while with mine you'd only need to get as close as 1.05*epsilon. ;) For a far object like 100 meters your approach would be 1.01*epsilon for the second step again, while mine would be 2.0*epsilon.
*meters = units
Yes tomkh makes sense. I was trying to tweak it while using as few mults as possible.
I didn't care for bent space. I barely even care for Lipschitz continuity over longer total distances.
Performance over accuracy while bending space to a fractal mess is the fun part of raymarching.
I also tried itterpool -= dist*eps; it adds a mult for more exponential LOD over shorter distances.
Better image but performance loss was barely worth that.
I even went as far as doing
Position.z -= 0.01
For each step, disregarding step distance. It's dirt cheap scattering and more often looks better than worse.
One extra add resulting in fake scattering is worth it.
Recently I came to like the idea of having commented, uncompressed html, webgl, glsl, raymarch, distanceField and all shaders in a single plain ascii html file that is still <32 kilobytes, to be edited and executed in a web browser.
Makes for great templates with null includes.
Can build up to a library of sets, scenes and concepts.
this webgl can exhaust a browser a bit too much
I find this simpler than shadertoy, more basic, easier to archive. Less dependent.
Well, you have a lot of artifacts in the shadows, probably because you have distance gradients larger than one (discontinuities at repetition edges?), and the performance isn't exactly stellar either... Looking at your code it's barely legible, and the ridiculous comments only make it harder to read. Your marching loop seems quite inefficient. Probably a better idea to just implement the patterns described here instead of doing dirty hacks that are likely to break your rendering.
No noby this guy has totally reinvented raymarching and:
Quote:
You Are pathethic for hanging on to outdated paradigms such as distance fog and a linear decrement of itterations...