
Displacement Mapping

category: general [glöplog]
 
Why is drawing a low-res model with displacement mapping faster than drawing the full-resolution model?
added on the 2010-02-05 02:10:43 by AND1 AND1
Uhm. Less polygon data, less setup. Pixel shader / fixed functions are fast and textures resident. There is a break-even point obviously.
added on the 2010-02-05 02:18:22 by raer raer
So reading the huge amount of polygon data is the bottleneck?
added on the 2010-02-05 02:21:26 by AND1 AND1
Reading, storing, transforming, sorting, drawing and generally handling the huge amount of polygons adds up to a bottleneck.
added on the 2010-02-05 06:39:14 by booster booster
With DM in particular, if you do the tessellation in realtime, you can make it viewport-dependent and just stop adding polygons once they're smaller than one pixel. Compare a static mesh, where you need to decide on a tessellation early and stick with it.
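A minimal sketch of that per-edge decision (plain CPU-side C++ rather than shader code; the function name, the Vec2 struct and the [1, 64] clamp are all just placeholder assumptions):

#include <algorithm>
#include <cmath>

// Rough sketch: pick a tessellation factor from the edge's projected
// screen-space length so generated triangles come out around one pixel wide.
struct Vec2 { float x, y; };

float edgeTessFactor(Vec2 p0, Vec2 p1, float targetPixelsPerTri = 1.0f)
{
    float dx = p1.x - p0.x, dy = p1.y - p0.y;
    float screenLen = std::sqrt(dx * dx + dy * dy);  // edge length in pixels
    // One subdivision per targetPixelsPerTri pixels; once segments would drop
    // below a pixel we simply stop adding geometry (factor clamps to 1).
    return std::clamp(screenLen / targetPixelsPerTri, 1.0f, 64.0f);
}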
added on the 2010-02-05 08:19:18 by ryg ryg
It's not that dumb a question after all!

Yes, my first thought is: transforming the vertices.
But on the other hand you've got to raytrace the displacement map in the shader. It doesn't work exactly the same way, though.
Then you've got the bandwidth of the vertex data. But you also have the bandwidth of the displacement map.

Bottom line is, the displacement map is much more compact than vertices: like a terrain heightmap, the spatial frame of reference is adapted to the local context. It's a kind of data compression after all.

Then, with vertices you've got to store UV coords, maybe normals, etc. With displacement mapping these are computed on the fly. Then you've got the polygon data, which are pointers into the vertex data, plus maybe per-polygon attributes, I don't know, a texture handle or so.
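To put some (completely made-up) numbers on the compression argument, a quick comparison one could run; the mesh and map sizes are invented purely for illustration:

#include <cstddef>
#include <cstdio>

// Back-of-the-envelope comparison: full-resolution vertex + index data vs.
// a coarse control mesh plus an 8-bit displacement map.
int main()
{
    const std::size_t hiResVerts   = 1u << 20;              // ~1M vertices
    const std::size_t bytesPerVert = (3 + 3 + 2) * 4;       // pos + normal + uv, floats
    const std::size_t hiResIndices = hiResVerts * 2 * 3;    // ~2 triangles per vertex
    const std::size_t fullMesh     = hiResVerts * bytesPerVert + hiResIndices * 4;

    const std::size_t loResVerts   = 10000;                 // coarse control mesh
    const std::size_t mapBytes     = 1024 * 1024;           // 8-bit displacement map
    const std::size_t displaced    = loResVerts * bytesPerVert + mapBytes;

    std::printf("full-res mesh      : %zu bytes\n", fullMesh);
    std::printf("low-res mesh + map : %zu bytes\n", displaced);
    return 0;
}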

(Just a thought: with vertices you've got the problem of LOD when a polygon approaches pixel size; with displacement mapping, on the other hand, you've got the problem of mipmaps... I saw a paper once that dealt with the problem of mipmaps for bump maps, maybe not trivial, with anisotropy etc.)

By the way, I'm not an expert, so I have been wondering if the technique of displacement mapping has ever been implemented with little "cubes" (six quads, maybe three could do) instead of a single quad? Does anyone understand what I mean? :) You draw the quads of the cube and in the shader you raytrace the heightfield; that way you've got a displacement mapping that really works when your view is parallel to the heightfield, so you really see the mountains in all cases.

any idea?
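To make the "raytrace the heightfield" part concrete, this is roughly the trace, written as standalone C++ rather than shader code; heightAt() is only a stand-in for the displacement-map fetch and the linear march has no refinement step:

#include <cmath>

// The rasterized face of the box gives an entry and exit point through the
// heightfield volume (local space: z = 0 is the base plane, z = 1 the maximum
// displacement).
struct Vec3 { float x, y, z; };

float heightAt(float u, float v)              // placeholder for a texture fetch
{
    return 0.5f + 0.25f * std::sin(10.0f * u) * std::cos(10.0f * v);
}

// Naive linear march: step from the entry point towards the exit point and
// report the first sample that falls below the heightfield surface.
bool traceHeightfield(Vec3 entry, Vec3 exit, int steps, Vec3* hit)
{
    for (int i = 0; i <= steps; ++i) {
        float t = static_cast<float>(i) / steps;
        Vec3 p = { entry.x + t * (exit.x - entry.x),
                   entry.y + t * (exit.y - entry.y),
                   entry.z + t * (exit.z - entry.z) };
        if (p.z <= heightAt(p.x, p.y)) { *hit = p; return true; }
    }
    return false;   // the ray left the box without touching the surface
}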

ps: fuuuuuuuck! I just discovered you can drag and enlarge the box that lets you type your post in the pouet bbs. Neat.
ryg: Aaaaaaaaaaaaaaaah that's very clever!

I kind of see this as a hybrid of rasterization and raytracing.

But you might still need LOD on the low-poly mesh if things are far away.

Maybe we could generalize this, think recursively...
Every model is just a cube with displacement mapping on its faces, which gives you more polygons. Then on some of these polygons (the ones forming the "mountains" of the six heightfields) you add the possibility of adding a new level of displacement mapping... Mmmh... I'm starting to wonder if this can work :)

The idea is to sort of consider each big object in the world and to "raytrace" its bounding box or bounding sphere, provided we have designed a clever, adapted structure that describes it...
Maybe recursively nested bounding boxes in a tree...
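Writing the data side of that down, it might look something like this (every name here is invented; it only pins the idea down):

#include <cstdint>
#include <vector>

// Sketch of "recursively nested bounding boxes in a tree": each node is a box
// whose faces carry a displacement map, and child boxes add a finer level of
// displacement on top of the parent's "mountains".
struct DisplacedBox
{
    float                     center[3];
    float                     halfExtent[3];
    std::uint32_t             faceHeightmap[6];   // one heightfield per face
    std::vector<DisplacedBox> children;           // finer-scale detail inside
};

// Traversal idea: trace the parent's faces first, and only descend into a
// child when the ray (or the hit point) actually enters the child's bounds.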

Got to think more.
Hey hey, now that we've got insane amounts of shader MIPS, could we replace that big displacement map with some NURBS coefficients and trace it on the fly? Or maybe DCT coefficients for the displacement map? For texture data too...
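For the DCT variant, evaluating the height straight from a small coefficient block instead of a stored texel could look like this toy 8x8 version (no basis caching; purely illustrative):

#include <cmath>

// Reconstruct the height at (u, v) in [0,1]^2 directly from an 8x8 block of
// DCT-II coefficients instead of fetching a stored displacement texel.
float heightFromDCT(const float coeff[8][8], float u, float v)
{
    const float pi = 3.14159265f;
    float h = 0.0f;
    for (int ky = 0; ky < 8; ++ky)
        for (int kx = 0; kx < 8; ++kx) {
            const float cx = (kx == 0) ? 0.70710678f : 1.0f;  // 1/sqrt(2) for DC
            const float cy = (ky == 0) ? 0.70710678f : 1.0f;
            h += cx * cy * coeff[ky][kx]
               * std::cos(pi * kx * (u + 0.5f / 8.0f))        // continuous (u, v)
               * std::cos(pi * ky * (v + 0.5f / 8.0f));       // analogue of the IDCT
        }
    return h * (2.0f / 8.0f) * (2.0f / 8.0f);                 // 2/N scaling, N = 8
}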
That drag-and-enlarge-the-textarea thingy is a safari/webkit feature :)
added on the 2010-02-05 09:33:23 by booster booster
HelloWorld: Displacement Mapping doesn't necessarily mean using ray tracing:
- you can upload the vertices of a low-poly mesh as vertex shader constants and then just send barycentric coordinates in your vertex buffer to render the hi-res model (OK, this is effectively more a vertex compression scheme than DM and is restricted by the constant count, but well); see the sketch after this list
- you can use progressive meshes (incl. LOD)
- or you write a REYES implementation (which is more or less the optimal architecture for DM). This would be SW rendering or CUDA/Cell of course.
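A sketch of what the decode for that first point amounts to, in plain C++ (struct layouts and names are made up for illustration):

// The low-poly control mesh lives in "constants", each hi-res vertex is just
// (triangle index, barycentric coords, displacement), and the vertex shader
// rebuilds the position on the fly.
struct Vec3 { float x, y, z; };

static Vec3 lerpBary(Vec3 a, Vec3 b, Vec3 c, float u, float v)
{
    const float w = 1.0f - u - v;
    return { w * a.x + u * b.x + v * c.x,
             w * a.y + u * b.y + v * c.y,
             w * a.z + u * b.z + v * c.z };
}

struct PackedVertex { int tri; float u, v, disp; };   // what the vertex buffer holds

// What the vertex shader would compute: interpolate position and normal over
// the control triangle, then displace along the interpolated normal.
Vec3 decode(const Vec3* ctrlPos, const Vec3* ctrlNrm, const int (*tris)[3],
            PackedVertex pv)
{
    const int* t = tris[pv.tri];
    const Vec3 p = lerpBary(ctrlPos[t[0]], ctrlPos[t[1]], ctrlPos[t[2]], pv.u, pv.v);
    const Vec3 n = lerpBary(ctrlNrm[t[0]], ctrlNrm[t[1]], ctrlNrm[t[2]], pv.u, pv.v);
    return { p.x + pv.disp * n.x, p.y + pv.disp * n.y, p.z + pv.disp * n.z };
}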
added on the 2010-02-05 10:06:16 by arm1n arm1n
-> for the first two points see Tom Forsyth's stuff
added on the 2010-02-05 10:08:38 by arm1n arm1n
Mmh... Maybe I need to read more. I was referring to displacement mapping as a texture accessed in the pixel shader just like a bump map, not generating real vertices in the pipeline; I guess I meant parallax mapping. When I talk of ray tracing, it's just that in that case you're kind of tracing a ray in your pixel shader to intersect with the heightfield.
Hardware tessellation is the future/present!
added on the 2010-02-05 11:55:35 by xernobyl xernobyl
BB Image
Displacement Mapping With Tessellator?
added on the 2010-02-05 12:07:39 by the_Ye-Ti the_Ye-Ti
BB Image
Mmmmhh... seems neat for that time.
and still widely in use..
added on the 2010-02-06 12:43:17 by toxie toxie
Does anyone know a scientific paper I can cite?
added on the 2010-02-06 17:17:15 by AND1 AND1
Not really, also I'm not sure if you're referring to creating new vertices on the fly or doing pixel shader parallax/occlusion/relief etc...
Wow, I understand the subtleties better now

BB Image

with correct Z occlusion

BB Image

That's pretty fucking ace imho.

from http://www.inf.ufrgs.br/~oliveira/pubs_files/Policarpo_Oliveira_Comba_RTRM_I3D_2005.pdf
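The "correct Z occlusion" part boils down to writing the depth of the ray/heightfield hit point instead of the flat polygon's depth; in plain C++ the conversion is roughly this (a D3D-style [0, 1] depth range is assumed, which may not match a given setup):

#include <algorithm>

// After the heightfield trace, output the depth of the actual hit point rather
// than the depth of the flat polygon, so the displaced surface occludes (and is
// occluded by) real geometry correctly.
float depthBufferValue(float viewZ, float nearPlane, float farPlane)
{
    const float z = std::clamp(viewZ, nearPlane, farPlane);   // view-space distance
    return (farPlane / (farPlane - nearPlane)) * (1.0f - nearPlane / z);
}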

and1, using subd+displacement mapping to compactly store highres meshes:
http://research.microsoft.com/en-us/um/people/hoppe/dss.pdf

you can pretty much work through the references from there.
added on the 2010-02-06 21:32:06 by ryg ryg
I have a plan for a deferred-shading-like technique: render color and 3D normal stuff into 2 render targets using a fast approximate screen-space displacement mapper, plus fake HDR stuff and fake focal BS in a second pass, on low-level shader model 2.0 hardware.

It shouldn't look really bad, but is this a bad idea? Cause it might work for the cause. Even bullshitting 1280+ hardware, smh?
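For what it's worth, the two-target layout could be as simple as this (the channel assignments are just a guess at one workable packing, not something specified above):

#include <cstdint>

// Guess at a 2-render-target layout (SM 2.0-friendly 8-bit targets):
//   RT0: albedo RGB + a fake-HDR luminance scale in alpha
//   RT1: view-space normal XYZ packed to [0, 255] + linear depth in alpha
//        (8 bits of depth is coarse; only for illustration)
struct GBufferPixel
{
    std::uint8_t rt0[4];   // r, g, b, lumScale
    std::uint8_t rt1[4];   // nx, ny, nz, depth
};

static std::uint8_t packUnit(float v)          // [-1, 1] -> [0, 255]
{
    return static_cast<std::uint8_t>((v * 0.5f + 0.5f) * 255.0f + 0.5f);
}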
added on the 2010-02-07 04:48:56 by yumeji yumeji
