pouët.net

Thanks for the help with sphere tracing...

category: code [glöplog]
paniq: nice new project! just don't bite too hard if it feels like the project isn't getting enough acceptance. ;)
i know this will be superb once finished, but you know: not everyone is as lucky as notch was!

one small thing, i guess you know it already:
use the DirectX9 March 2008 SDK, so you get the faster (even though older and not that bullet-proof) shader compiler DLL. if you don't do so yet, compare your render times for your volume texture again ;) (although the 1/60 sounds reasonable! i had strange results when i filled a volume texture init-wise, so only once: way faster with the old shader compiler, no idea why. i guess some of the optimization passes in the new shader compilers are just plain stupid and produce stupid bytecode this way! even at optimization stage 1 it's always bugging around with one or two instructions, taking ages to compile. if you compare the bytecode of the two shader compilers you end up with strange stuff, the older one always producing faster code, at least here!)
...take 2

Quote:
MC meshes look horrible

try surface nets
added on the 2012-11-06 17:20:13 by xernobyl
what am i talking about here?
don't we all know paniq normally uses OpenGL? ;)
sorry, love ya!
xernobyl: stop it ;) i'm done with this topic for now. we might do characters with it, but it's ill-suited for landscapes. you can try your luck if you like :) but thank you for trying to help. i'm just a bit grumpy when it comes to MC. i expected a bit more.
added on the 2012-11-06 18:06:39 by paniq
paniq: Are you sure we are talking about the same cone marching technique, as described for example by nystep at the end of this post? You shouldn't need a 32^2 prestep, nor have any artifacts, using this technique. Instead of the fixed epsilon of the regular sphere tracing technique, you grow the epsilon the further the ray has marched; the factor to apply to the epsilon is a function of the vertical precision of your screen output, something like deltapp1 = tan(fov/2)/(height_in_pixels/2); so that convergence is faster.
added on the 2012-11-07 03:49:08 by xoofx
Oh, by "vertical precision of screen output", nystep employs a more valid explanation "the distance to the neighbour ray on the unit sphere from the camera eye is pretty good, and scaling this with the current ray z". So basically, you need less espsilon precision the more you raymarch in z for a same pixel, pretty clever trick.
added on the 2012-11-07 04:24:54 by xoofx
xoofx, yeah, the code is already doing something like that. The 32^2 pixels-per-block prestep was using cones as well - giant cones, essentially. I tried different sizes. At sizes where the artifacts were gone, the performance advantage was negligible.

If you think about it, it's not surprising - large cones terminate relatively early in an SDF, as soon as the ray steps become small enough. The actual per-pixel pass then steps the rest of the distance, which is now very small - but once the cones have passed edges close to the camera, the per-pixel rays still need to travel a large distance.

The Heaven Seven technique works particularly well with scenes like the example scene, where large parts of the screen are not used and curvature is not very crinkly and detailed. I'll give it a try but I'm not expecting much.
added on the 2012-11-07 08:34:34 by paniq
Quote:
i'm just a bit grumpy when it comes to MC. i expected a bit more.

you're doing it wrong. :)
added on the 2012-11-07 09:29:30 by smash
smash, i've read through the HPMC papers and did the calculations. it's not fast enough for this use case.
added on the 2012-11-07 15:59:10 by paniq
paniq: well, it worked for us.. (and that was dx9 .. the dx11 implementation is a lot better all round)
added on the 2012-11-07 16:05:57 by smash
It also worked well for us.
added on the 2012-11-07 16:10:37 by xTr1m
Both productions are certainly nice, but either the surface resolution is too low, the volumes you've been dealing with are tiny, or the scene is entirely static. If you want all three:

1. detail at a large scale,
2. detail at a small scale,
3. completely dynamic geometry,

then you have to abandon triangulation and trace the scene directly. This is now becoming feasible with sphere tracing and SDF fields. MC was a compromise.

To compare: polygonizing a volume texture and raymarching a scene are very much related; it's just that in the latter case:

1. your "cube" or rather screen space 3D trapezoid can be rastered with a massive resolution of up to 1920*1080*(2^32)

2. your volume is implicitly defined, so effectively resolution independent

3. you're iterating through the trapezoid in y,x,z order, not z,y,x as with 3D textures; the z-ray is the innermost iteration loop, equivalent to running along the x coordinate in a cube.

4. you sample the signed distance field at each step to skip empty space, instead of walking fixed steps as with MC, covering more ground with the same number of iterations.

5. you don't have to sample neighbors to be able to rasterize.

6. you're not generating any mesh data in the process.

7. you terminate at the first filled voxel instead of walking through the whole volume.

8. thanks to perspective projection, your ray is a cone, allowing you to increase the threshold at which you decide the ray has reached its goal, effectively terminating even earlier.

9. all this is massively parallelized in the pixel shader, no compute shader or OpenCL required, very little memory usage, works with older hardware and drivers, can be scaled to various hardware capabilities on the fly.

10. while contouring can run at 1/2 or 1/4 of the screen resolution (you only write the position to the color buffer, no depth buffer required), material shading can be deferred and done in a separate, full resolution step, in which contours can also be smoothed in screen space.

11. normals can also be calculated in screen space. I'm using dFdx()/dFdy() on the sampled position at the moment (nicely bi-lerped by the sampler), but i have a more expensive method set aside that gives even better results.
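
To illustrate point 11, the normal reconstruction in the deferred pass is roughly this (untested sketch; positionTex and uv are made-up names):

Code:
vec3 worldPos = texture2D(positionTex, uv).xyz;            // position buffer, bi-lerped by the sampler
vec3 n = normalize(cross(dFdx(worldPos), dFdy(worldPos))); // screen-space face normal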

So there is no reason to go back to lower resolutions, 3D texture caches, geometry uploads and a whole lot of general red tape now, in 2012. And the situation is only going to get better in the coming years.
added on the 2012-11-07 18:04:44 by paniq
I forgot

12. aborting the iteration is relatively graceful - you can fill in the missing information with black, which gives you aesthetically pleasing "toon shader" outlines, or shade the last sampled position, which, depending on shading, gives a watery, hazy or slimy look to your scene. These are desirable artifacts.
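
In shader terms the fallback is tiny (untested sketch; shade() and the hit flag are made-up names):

Code:
vec3 color = hit ? shade(worldPos) // converged: shade as usual
                 : vec3(0.0);      // aborted: black "toon shader" outline
// ...or shade the last sampled position in both cases for the watery/hazy look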
added on the 2012-11-07 18:08:04 by paniq
Or alternatively,
1. You can work independently of screen res and not scale linearly with it, even for unnecessary internal rays
3,4. You walk the surface contour instead of searching for it. So when you fuck up Euclidean distances or have rays that just miss, it doesn't matter
7. You can cache bits that don't change frame to frame
8. Marching cubes works on perspective-warped grids too, hey
10. Deferred lighting and re-rendering the same cached geometry for shadow maps work. No light rays that cost almost as much as the camera ray... or more if divergent
11. You can subdivide and smooth, use hw tessellation cheaply, add post displacement. MC is only the first step.
added on the 2012-11-07 22:06:00 by smash
I like this battle of rendering wits!
added on the 2012-11-07 22:37:34 by gloom
Maybe they should settle it with an arm wrestle!
added on the 2012-11-07 23:03:52 by fizzer
6. you can have an infinite world too; you only need to dynamically MC-ify and cache the areas around the camera as you move.
8. thanks to perspective projection, you can do LOD and give distant grids less resolution
14. you can have HW antialiasing!
15. you can easily author artist-made models (ask your artists to write an SDF)

i personally love and hate raymarching. it really sucks for high quality renders and "real stuff", but it's so easy to quickly have something up and running (full of artifacts, of course), perfect for 4k intros and stuff (and little more, imho)
added on the 2012-11-07 23:26:10 by iq
iq: you're right. raymarching is really great for the simple case, but there's a problem with scale there: scale in terms of screen res, scale in terms of adding complexity to the distance function, scale in terms of adding lights and having to march a ray for each light (or more), scale like adding essentials for offline rendering such as additional fsaa/motionblur multipasses. the tipping point for something serious comes quite early.
with meshing you have to get past the initial weighty headache of getting enough res on the meshing, but after that things scale like any other rasterised geometry. 3d gfx hw is still very, very good at dealing with rasterised geometry.

also i suppose my bias towards mc comes from wanting to deal with particles and fluid dynamics, where post-processing the mesh is essential.
added on the 2012-11-08 10:35:27 by smash
Also, you can apply arbitrary deformations... and many other things... to your MC mesh. After all, it's just a regular mesh once the isosurface extraction is done. You can still do toon shading too.

Anyway, regardless of implementation the game is looking pretty cool!
added on the 2012-11-08 11:01:03 by fizzer
Sphere tracing is only a simple kind of numerical intersection test - nothing more, nothing less. And the "SDF" representation comes in handy for some shading things.

Another limiting factor I'm currently dealing with is the shader instruction limit.
added on the 2012-11-08 11:14:10 by las
Smash, IQ: Your critical points are all valid. The reason this works for me is that I work within the limitations that raymarching SDF fields has (including and extending some of your points):

1. No "digital artist"-style, code-free authoring. That kind of person does not, nor will ever be part of our project - not that I don't think this way of working makes sense, but we don't have the resources. If you have reasonable mathematical experience, SDF fields can be a powerful creative form of expression. It's not a workflow for Dali's, but for M.C.Eschers.

2. What you may have to give up in screen resolution and SDF function complexity, you get back in terms of being able to influence the function on a macroscopic and microscopic level, to make the world look less like a set piece, and to give a smooth, curvy look to surfaces close to the eye. The shader performance provides a natural limit of how far I can go, and challenges me to get the most out of that boundary. I like it.

Volumetric SDF texture brushes allow caching and re-using functional geometry at a lower penalty.

LOD techniques like adding complexity to a function depending on ray distance and camera location not only greatly improve framerates and scene complexity, they can also be blended seamlessly with a simple mix() function.
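
For example (untested sketch; sdDetail, sdCoarse and the blend distances are made up):

Code:
// blend a detailed near-field SDF into a cheaper far-field one by ray length t
float sdTerrain(vec3 p, float t) {
    float k = clamp((t - 50.0) / 200.0, 0.0, 1.0); // 0 near the camera, 1 far away
    return mix(sdDetail(p), sdCoarse(p), k);       // seamless blend, no popping
}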

3. "High quality renders" and "real stuff" is costly with ray marching, and not advised for real time visualization, but it was never available to real time code anyway. Almost all "realism" effects can be done in a deferred shading step, including lighting, classical shadow mapping, depth of field, fogging, and all the other junk. As deferred shading gets increasingly common, it almost doesn't matter anymore how your geometry was rasterized.

4. HW antialiasing is only possible at a hefty price, but that goes for classical triangle rasterization as well. Many games avoid it these days and prefer a less perfect post-processing smoothing step like FXAA, or the newer and better SMAA. We can't afford to be purist here.

5. Triangulation helps with caching scenery that does not change much, but our game makes use of rapidly morphing landscapes. This is probably attainable with classical meshing, but at the cost of increased implementation time and less flexibility when new features are needed. The ray marching shader and the surrounding infrastructure are ridiculously thin, and allow an agile, creative way of working that I prefer.

6. The arbitrary deformations you can apply to a mesh can look pale and rigid compared to the crazy CSG you can do with distance fields. It has its own laws and its own aesthetics, of course, and were we not doing something completely out of the ordinary, I would probably revert to traditional techniques. Alas, it is not so.

Smash: Your approach of doing fast marching cubes to visualize fluid dynamics is definitely genius, and I enjoyed reading your paper and presentation. But we can't fit our world into a 128^3 cube. My graphics card has 1 GB of RAM, which gives about 1024^3 voxels at one byte each - still ridiculously small. Sure, on average a polygonization of a 3D volume turns only about 15-30% of the volume into triangles - but as memory scales with the third power of the axial resolution, doubling that resolution costs eight times the storage, so you only get a minimal increase in axial resolution.

Generally, I didn't want to give the impression that I hate meshing or MC per se - we're going to use meshes for foliage and fauna, and those meshes will also be based on pre-contoured SDFs (I will probably be using meshed point clouds or Voronoi contouring here).

It's just that I was speaking with all the frustration of someone who spent three weeks trying to blast through boundaries with an ill-fitting technique, because he believed what he was told: that ray tracing is too slow to be used in a game. I'm married to this new technique and I want to see how deep the rabbit hole goes.

Therefore, suggesting MC to me when I have performance issues is a bit like asking a depressed atheist whether he has considered letting Christ into his heart ;-)

You know, everything is in flux, the game changes all the time, and an exception to a rule can become its own rule.
added on the 2012-11-08 12:53:07 by paniq
Quote:
Therefore, suggesting MC to me when I have performance issues is a bit like asking a depressed atheist whether he has considered letting Christ into his heart ;-)

What's wrong with suggesting that you've been doing it wrong all along? ;)
added on the 2012-11-08 12:59:23 by Gargaj
Addendum: it is not unthinkable that once the game nears completion (and we are able to spend more time on optimization), the features we made use of will allow me to pre-cache local geometry as meshes, and implement HPMC or some other technique to do that.
added on the 2012-11-08 13:02:25 by paniq
Gargaj :PPP
added on the 2012-11-08 13:03:30 by paniq
paniq: Using MC doesn't force you to store your iso-values in a grid any more than ray marching does. So the memory argument is pretty much moot in this context.
added on the 2012-11-08 16:53:23 by kusma