pouët.net

Raymarching Beginners' Thread

category: code [glöplog]
raer: he's talking about tiled rendering. You march at low res (on the CPU I guess?) to get a list of objects in each tile, then render each tile separately. Because the tiles are small, the number of objects is quite limited so the shader never gets too heavy, and you can have tons of more complex objects in the scene as a whole. You might have say 200 objects in the distance function, which would kill performance totally, but if there's only 10 in each tile it's no problem at all.

He did an intro a while back using this method (can't remember the name :( but it was an invite..) with raymarched text and stuff, which would otherwise be a total bitch to raymarch.
added on the 2011-08-10 11:46:23 by psonice
Are we really bound to "low res" approximations? I don't think so.

Maybe one should move away from using SDFs for modelling as Decipher more or less already pointed out.

I still think you can make something pretty cool with a single shader approach - but it all depends on what you want to do.
And using SDFs in order to model something is a pretty cool approach - at least from my point of view :).

I guess you can combine all the techniques and pointers mentioned in the last three pages into something really cool.
added on the 2011-08-10 11:53:45 by las
Aha. Now. He's precomputing stuff with DX compute shaders and then storing the results somewhere usable in the real shader.
I've got fuck-all idea about compute shaders. Stuff like this is probably doable in OpenCL/OpenGL too though...

Then there's the question of space. Do you need two different shaders for that? Are they built on the fly? And how does the real shader work? Branching? Different custom-compiled shaders?
added on the 2011-08-10 11:59:28 by raer
psonice, you do not march at a low res because you would miss objects. You do a bounding volume check against the tile. No marching...
Pete, decipher: ok, I'm with it now. For some reason I'd totally neglected that "shade" word in decipher's post, and thought we were talking about the actual rendering too. Sorry :D

For shading, a 3d texture would be enough in most cases, yeah. Slightly related, this looks cool as hell: http://artis.imag.fr/Publications/2011/CNSGE11a/ (not actually using distance fields but there are some interesting ideas in there).
added on the 2011-08-10 12:03:22 by psonice
las: indeed, SDFs might be utterly useless for a lot of stuff, but it's a ton of fun finding out what can be done with them :) The twisting cube might have had its day, perhaps. I thought that about regular cubes too though, back in the early 90s ;)

pete: if you used a cone and counted all objects the cone intersects instead of just taking the first one it should work I think. Probably the least efficient way to do it though.
added on the 2011-08-10 12:11:47 by psonice
But better if you wanna save space, as you probably don't need different shaders...
added on the 2011-08-10 12:15:01 by raer
raer: here's the method used in traskogen (I have some ideas for the more general case, which would hopefully put some of smash's words above to shame ;), but more on that when it's done, and it's definitely not 4k stuff):
For SIMD reasons it will only work well on a list of a few different primitives (in this case parameterized curves, boxes, spheres), not on a big complex function (*).

It's done in a single compute shader pass. You could do without shader model 5 by having two separate passes and some cumbersome way to store a variable-length list of primitives per tile between them.
It runs in thread groups (/tiles) of 16x16 pixels (in SM5 you can share data on-chip and synchronize within a group). Say we have 256 primitives: each thread (/pixel) will raymarch a single primitive from the camera through the center of the tile and record the minimum distance. If this is within half the tile frustum width, the primitive is added to a shared active-list for the tile. At the same time I also find the shared minimum value for scene intersection.
So, each pixel will first raymarch a single primitive, then synchronize the group, and then every pixel will raymarch all primitives on the active-list from the minimum distance.
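
As a rough sketch of those two phases (GLSL 4.3 compute rather than DirectCompute; every name, constant and stand-in primitive below is a placeholder of mine, not Psycho's actual code):

Code:
#version 430
layout(local_size_x = 16, local_size_y = 16) in;      // one 16x16 tile per thread group

layout(rgba8, binding = 0) uniform writeonly image2D outImage;
uniform vec3 camPos;
uniform vec2 resolution;

const uint NUM_PRIMS = 256u;                          // one primitive per thread

shared uint activeList[NUM_PRIMS];                    // primitives that can touch this tile
shared uint activeCount;
shared uint minDistBits;                              // conservative tile entry distance, as uint bits

// Stand-in primitive set: a row of spheres. In the real thing this would be the
// parameterized curves / boxes / spheres mentioned above.
float primDistance(uint id, vec3 p)
{
    return length(p - vec3(float(id) * 2.0 - 256.0, 0.0, 60.0)) - 0.8;
}

// Pinhole ray through a pixel coordinate.
vec3 rayDir(vec2 pixel)
{
    vec2 uv = (pixel - 0.5 * resolution) / resolution.y;
    return normalize(vec3(uv, 1.0));
}

// Half width of the tile frustum at distance t (a tile is 16 pixels wide on the image plane).
float tileHalfWidth(float t)
{
    return 8.0 * t / resolution.y;
}

void main()
{
    uint prim = gl_LocalInvocationIndex;
    if (prim == 0u) { activeCount = 0u; minDistBits = floatBitsToUint(1e9); }
    memoryBarrierShared(); barrier();

    // Phase 1: each thread marches ONE primitive along the ray through the tile centre.
    vec3 centerDir = rayDir(vec2(gl_WorkGroupID.xy * 16u + 8u));
    float t = 0.0;
    bool near = false;
    for (int i = 0; i < 64 && !near; ++i)
    {
        float d = primDistance(prim, camPos + centerDir * t);
        if (d < tileHalfWidth(t) + 0.01)                  // primitive may overlap the tile frustum
        {
            near = true;
            atomicMin(minDistBits, floatBitsToUint(t));   // shared minimum for the whole tile
        }
        t += d;
    }
    if (near) activeList[atomicAdd(activeCount, 1u)] = prim;
    memoryBarrierShared(); barrier();

    // Phase 2: every pixel marches only the active primitives, starting near the shared minimum.
    vec3 dir = rayDir(vec2(gl_GlobalInvocationID.xy) + 0.5);
    t = max(uintBitsToFloat(minDistBits) - 0.1, 0.0);
    float d = 1e9;
    for (int i = 0; i < 96 && d > 0.001; ++i)
    {
        d = 1e9;
        for (uint a = 0u; a < activeCount; ++a)
            d = min(d, primDistance(activeList[a], camPos + dir * t));
        t += d;
    }
    imageStore(outImage, ivec2(gl_GlobalInvocationID.xy), vec4(vec3(exp(-t * 0.02)), 1.0));
}

(The uint-bits trick for the shared minimum works because non-negative floats keep their ordering when reinterpreted as unsigned ints.)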

AO is pretty straightforward: just add those primitives that are within the max AO influence distance (or better, do it on a separate list to avoid raymarching them). Shadows are harder but can be done reasonably well in a second pass, just like reflections (with a much lower speedup) - more about that at a later date ;)

The nice thing about the SDF raymarching is that, by looking at the estimated distances, you can be sure not to miss any potential primitive for the tile, unlike other low/adaptive-res raytracing schemes.

(*) If each thread in a wavefront/warp is raymarching its own part of a big function, performance will be the same as everyone raymarching everything...
added on the 2011-08-10 14:04:42 by Psycho
Regarding what decipher is writing about (funny, I didn't know about that paper even though I know the guys), we're using SDF 3D textures for AO on some big architectural models. I can't post an interactive link to the case yet, but here's a small shot: BB Image
I've got a nice GPU implementation of the distance calculation which will make a 512x128x512 texture for this ~5M polygon building in around 3 minutes on a GTX 460.
But of course you can't capture small details with volume texture SDFs - I think the grid size in this case is about 40cm, which limits our AO somewhat. Still, with 9 AO samples per pixel and real Euclidean distances (instead of what is usually done with analytical SDFs) it works reasonably well.
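
The sampling side of this is tiny. Here's a minimal sketch of AO against a baked SDF volume, assuming the texture stores real world-space distances and that volMin/volSize describe the baked bounding box (both names, and the 2m radius, are assumptions of mine):

Code:
uniform sampler3D sdfTex;          // baked signed distances, world-space units
uniform vec3 volMin;               // lower corner of the baked volume
uniform vec3 volSize;              // extents of the baked volume

float sampleSDF(vec3 p)
{
    return texture(sdfTex, (p - volMin) / volSize).r;
}

// The usual SDF AO trick: step along the normal and compare how far the field
// says the nearest surface is against how far we actually stepped.
float volumeAO(vec3 pos, vec3 normal)
{
    const int   SAMPLES = 9;       // as in the post
    const float RADIUS  = 2.0;     // AO influence distance in metres (a guess)
    float occ = 0.0;
    for (int i = 1; i <= SAMPLES; ++i)
    {
        float h = RADIUS * float(i) / float(SAMPLES);
        float d = sampleSDF(pos + normal * h);
        occ += clamp((h - d) / h, 0.0, 1.0);
    }
    return 1.0 - occ / float(SAMPLES);
}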
added on the 2011-08-10 14:29:57 by Psycho
Quote:
I mean using SDFs to shade rasterised geometry is not something to come. It is already in use. Even within the demoscene. :)


yep.. :) we've been using "meshes in sdf volume textures" for some years now. it's great. :)
btw about the res, it's really not as bad as you think. they interpolate well and you can get away with a lot, much better than e.g. voxels.
added on the 2011-08-10 20:10:28 by smash
All right, so, I didn't have time in the morning but now I do. And, I decided to share some of my own personal techniques and ways of doing things.

First of all, I don't ever put the whole scene into a single 3D volume texture. So far, I have found out that having an independent SDF representation of each entity in the scene-graph is a better idea. It allows for other very interesting techniques to be used (e.g. constant-time modification of the total SDF representation of the scene -- hint: think of some sort of a linked list)*.

Secondly, the SDF doesn't need to be used for ray-marching. I think when speaking to Las on Skype, we somehow coined the term ray-shading, and I believe that is a better way of putting it. If you are trying to shade rasterised geometry, then you can always use things like the depth buffer or the per-pixel interpolated vertex normal to avoid actually marching until you get to the same point. Using those two, for example, you can have an initial position and a direction vector for your looks-like-ass™ AO calculation within your scene's cumulative SDF. Obviously for the AO step itself, you have to march on a ray for a little, but it shouldn't hurt (5 - 10 samples per visible fragment vs. truck loads of them until you get to the same point).
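
To make the ray-shading idea concrete, here's a minimal sketch of the rasterised-geometry side: reconstruct the start point from the depth buffer, take the interpolated normal, and only then touch the SDF for a handful of AO samples. The textures, invViewProj and the stand-in sceneSDF are assumptions of mine, not decipher's code:

Code:
uniform sampler2D depthTex;        // hardware depth buffer
uniform sampler2D normalTex;       // per-pixel interpolated normals, packed to [0,1]
uniform mat4 invViewProj;          // inverse of the view-projection matrix

// Stand-in for the scene's cumulative SDF (the per-entity fields combined);
// a sphere here, just so the snippet is self-contained.
float sceneSDF(vec3 p) { return length(p) - 1.0; }

float rayShadeAO(vec2 uv)
{
    // No marching to the surface: the rasteriser already did that for us.
    float depth = texture(depthTex, uv).r;
    vec4  clip  = vec4(uv * 2.0 - 1.0, depth * 2.0 - 1.0, 1.0);
    vec4  world = invViewProj * clip;
    vec3  pos   = world.xyz / world.w;
    vec3  n     = normalize(texture(normalTex, uv).xyz * 2.0 - 1.0);

    // Now march only "a little": a handful of samples instead of truckloads.
    float occ = 0.0;
    for (int i = 1; i <= 6; ++i)
    {
        float h = 0.15 * float(i);
        occ += clamp((h - sceneSDF(pos + n * h)) / h, 0.0, 1.0);
    }
    return 1.0 - occ / 6.0;
}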

There you go, just a little glimpse of what I have been working on for some time.

As you can see, it all boils down to SDFs. It doesn't necessarily need to be ray-marched SDFs, but SDFs nonetheless.

*: Here's how you can organise your scene-graph to have a linked-list data structure for ray-marching the entire SDF of the scene. You can store the closest two nodes (each of which are SDFs of other objects) to a third node with the two nodes' radii (or some sort of a bounding-box description). One of these nodes will be towards the virtual far-plane while the other one is towards your virtual screen (if you have two axis-parallel or overlapping nodes, then simply jump from one to the other and then towards the far- / near- plane, this requires a bit of sanity checking and maintenance but shouldn't be such a tough guy to handle):

Code:
head   -> a = NULL   -> b = second
second -> a = head   -> b = third
third  -> a = second -> b = fourth
…


So, it's basically a doubly-linked list. At this point, while actually ray-marching (and not ray-shading) if you're marching towards the far-plane you simply query for the next node, jump the distance between the two nodes towards your direction vector. If your vector ends up within the volume of another object you simply march as if it's a regular SDF (you know, the classical deal). It's very similar to the adaptive ray-marching idea (actually, this is just a different interpretation of it).
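
As a guess at what a node of that list might look like on the GPU side, with indices standing in for pointers (the field names and the fixed array size are mine, not decipher's):

Code:
struct SDFNode
{
    int   prev;      // neighbour towards the near plane ("a" above), -1 for the head
    int   next;      // neighbour towards the far plane ("b" above), -1 for the tail
    vec3  center;    // bounding-sphere centre of this object's SDF
    float radius;    // bounding-sphere radius (or swap in a box description)
};
uniform SDFNode nodes[64];   // small fixed-size scene graph, enough for a sketch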

Well, I am pretty tired right now, but I'll try to supply you all with some visual representation of the idea. But the point is, you simply create your scene-graph as a doubly-linked list and march through that. Hopefully some people got what I tried to share…

This idea is still in development and might have some bugs. If you have any suggestions or if you'd like to ask questions, feel free. :)

I have some other things I am currently playing with, such as animated meshes and their dynamic distance fields, but to quote Psycho: more about that at a later date. :)
added on the 2011-08-10 21:24:30 by decipher
http://blog.hvidtfeldts.net/index.php/2011/08/distance-estimated-3d-fractals-iii-folding-space/

Recursive Tetrahedrons!
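
For reference, the fold-based distance estimator the article builds up to looks roughly like this (the usual scale-2, offset (1,1,1) variant; a sketch, not copied from the post):

Code:
float sierpinskiDE(vec3 p)
{
    const float scale = 2.0;
    const int   iterations = 10;
    for (int i = 0; i < iterations; ++i)
    {
        // fold space across the three symmetry planes of the tetrahedron
        if (p.x + p.y < 0.0) p.xy = -p.yx;
        if (p.x + p.z < 0.0) p.xz = -p.zx;
        if (p.y + p.z < 0.0) p.yz = -p.zy;
        p = p * scale - vec3(1.0) * (scale - 1.0);   // scale about the vertex at (1,1,1)
    }
    return length(p) * pow(scale, -float(iterations));
}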
added on the 2011-08-13 04:32:09 by Mewler
Quote:

Mewler :
http://blog.hvidtfeldts.net/

that's a very good article, thanks. I already knew about raymarching techniques and had played with raymarched fractals before reading this, and I have to admit things are really well explained.
added on the 2011-08-13 10:16:33 by Tigrou
Well, I'm quite amazed that so many people actually use these techniques in the end. :) I thought they were much less widespread. The folding tutorial is awesome! Thanks for that too. I'll also show where I'm standing with this by now...

BB Image

Raymarched Stanford bunny... from the SDF stored in a volume texture. Nothing special, like everybody else in the end. :)

Here's a visualisation of the signed distance function (coder colors powah):

BB Image

The SDF is calculated on the GPU at startup, in 200 milliseconds for 128*128*128.

The issue I'm stuck with now is instancing. What would be the best (fastest) approach to calculate the signed distance function to a set of instances, each with its own 4*3 transform (rotation + translation)?
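
The brute-force answer would be to keep the object's SDF in its own local space and, per instance, transform the query point by the inverse of that instance's transform before sampling, taking the minimum over all instances. A sketch with made-up names (instanceInv holds the precomputed inverse transforms); since the transforms are rotation + translation only, the sampled distances stay valid:

Code:
uniform sampler3D objectSDF;       // the 128^3 distance volume, local space mapped to [0,1]^3
uniform mat4 instanceInv[16];      // precomputed inverses of the per-instance transforms
uniform int  instanceCount;

float objectLocal(vec3 p)
{
    // the baked field only covers [0,1]^3; outside that, fall back to the distance
    // to the volume itself so the estimate stays conservative
    vec3 c = clamp(p, 0.0, 1.0);
    return max(texture(objectSDF, c).r, length(p - c));
}

float instancedScene(vec3 p)
{
    float d = 1e9;
    for (int i = 0; i < instanceCount; ++i)
    {
        vec3 local = (instanceInv[i] * vec4(p, 1.0)).xyz;   // world -> this instance's space
        d = min(d, objectLocal(local));
    }
    return d;
}

Obviously this gets expensive quickly as the instance count grows, which is where the repetition tricks discussed below come in.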

added on the 2011-08-13 13:24:53 by nystep
just use modulo on the texture coordinates and use floor to figure out in which cell you are, straight domain repetition.
added on the 2011-08-13 14:02:56 by las
Actually, no, because then there's a discontinuity in the function. And what about random translations and rotations?
added on the 2011-08-13 14:23:34 by nystep
Hmm shouldn't that work anyways? Did you try it? If not - do so ;)
I'm currently suffering a bit - evoke!
added on the 2011-08-13 14:29:06 by las
make sure your modulo function repeats outside of the object and you're ok (i.e. if the object is size 0.5,0.5,0.5 move it to centre 0.5,0.5,0.5 and use mod(position, 1.) to ensure the objects are not discontinuous).

For random rotations I've just done a rotation dependent on the current position (i.e. if you're doing mod(p, 1.) you do rotate(ray, floor(p)) or similar). For translation, you can translate freely inside the current mod() tile, so long as you keep the object entirely inside the tile volume. Beyond that, no idea :)
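
Roughly what that looks like in code (cell size 1.0, object centred in the cell and kept well inside it; the hash and the stand-in treeSDF are just examples):

Code:
float treeSDF(vec3 p) { return length(p) - 0.3; }   // stand-in: something small, centred at the origin

mat2 rot2(float a)
{
    float c = cos(a), s = sin(a);
    return mat2(c, -s, s, c);
}

float repeatedForest(vec3 p)
{
    vec3 cell = floor(p);                       // which tile we are in
    vec3 q = mod(p, 1.0) - 0.5;                 // local coordinates, object at the tile centre
    // rotation depends only on the cell, as suggested above
    float a = 6.2831 * fract(sin(dot(cell.xz, vec2(12.9898, 78.233))) * 43758.5453);
    q.xz = rot2(a) * q.xz;
    return treeSDF(q);
}

Note that once the per-cell rotation is in there, the result is no longer a strict lower bound on the true distance near cell borders (a rotated neighbour can end up closer than the local estimate suggests), which is one likely source of the artefacts discussed next.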
added on the 2011-08-13 14:31:51 by psonice
las, I tried last year, yes, and it produces artifacts... ;)

BB Image

Though I have to say I've chosen the angle where they are the least visible in this case... ;p
added on the 2011-08-13 14:42:44 by nystep
The only way to get rid of the artifacts is to evaluate the distance to the neighbours and take the minimum, so for the 3D case you have 8 meshes to evaluate...
added on the 2011-08-13 14:44:02 by nystep
Or to be more precise, we don't get rid of the artifacts; the minimum is in fact getting rid of the discontinuity in the function :)
added on the 2011-08-13 14:44:39 by nystep
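
A sketch of that neighbour-minimum evaluation: instead of trusting only the local cell, take the minimum over the 2x2x2 block of nearest instances, so the field stays a continuous, valid distance bound across tile borders. Eight evaluations instead of one, as nystep says; treeSDF is again a stand-in for the per-instance distance:

Code:
float treeSDF(vec3 p) { return length(p) - 0.3; }   // stand-in for the instanced object's SDF

float repeatedMin(vec3 p)
{
    float d = 1e9;
    vec3 base = floor(p - 0.5);                     // lower corner of the 2x2x2 neighbourhood
    for (int i = 0; i < 8; ++i)
    {
        // cell centres sit at integer + 0.5
        vec3 cell = base + vec3(float(i & 1), float((i >> 1) & 1), float((i >> 2) & 1));
        d = min(d, treeSDF(p - (cell + 0.5)));      // per-cell rotation/offset would go in here too
    }
    return d;
}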
Are you using a bounding box to march to the first object in that volume? And are you using signed distance fields?

You also need to ensure you're not stepping deep into the next 'box', or the ray might pass some geometry (which might be what caused these artefacts).
added on the 2011-08-13 14:56:12 by psonice
There's only one bounding box for all the instances and then modulo inside
added on the 2011-08-13 14:57:41 by nystep
Check you're moving slightly inside the bounding box, and not just to the surface (or it may not be registering the objects on the next step). Then make sure it's stepping no further than the next modulo boundary (and again slightly more, so it's definitely inside the volume).

Oh, and expect major fuckery if you try AO with mod repetition ;)
added on the 2011-08-13 19:28:30 by psonice
mod repetition is not really what I *want*, I was showing las that it doesn't work exactly right without artifacts :) but anyway, yes, I'm trying to make a forest... :)
added on the 2011-08-13 20:32:04 by nystep
