Unlimited Detail Technology

category: offtopic [glöplog]
I also just realized that if the guy manages to map an entire volumetric scene via some sort of fractal octree system, it might very well be possible to iterate over this octree on the GPU and finally implement a proper adaptive direction- and distance-field (not just distance) based raymarcher. I have been looking into direction fields for some time, and this might at least provide a starting point.
added on the 2010-04-15 12:00:07 by decipher
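For reference, the distance-only baseline that decipher wants to extend with direction fields - a plain sphere-tracing raymarcher - fits in a few lines. A minimal Python sketch with a made-up one-sphere scene (sphere_sdf, sphere_trace and all parameters are illustrative, not from any real engine):

```python
import math

def sphere_sdf(p, center=(0.0, 0.0, 5.0), radius=1.0):
    """Signed distance from point p to a hypothetical sphere."""
    dx, dy, dz = p[0] - center[0], p[1] - center[1], p[2] - center[2]
    return math.sqrt(dx * dx + dy * dy + dz * dz) - radius

def sphere_trace(origin, direction, max_steps=64, eps=1e-4, max_dist=100.0):
    """March along the ray, stepping by the distance-field value each time.
    Returns the hit distance along the ray, or None on a miss."""
    t = 0.0
    for _ in range(max_steps):
        p = (origin[0] + direction[0] * t,
             origin[1] + direction[1] * t,
             origin[2] + direction[2] * t)
        d = sphere_sdf(p)
        if d < eps:
            return t          # close enough to the surface: hit
        t += d                # safe step: no surface is nearer than d
        if t > max_dist:
            break
    return None
```

Sphere tracing steps by the distance-field value, so empty space is skipped in large jumps; a direction field would additionally tell the marcher which way the nearest surface lies.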
decipher talking of 4k ofcoz :D
-> i saw the compostudio
added on the 2010-04-15 12:05:02 by hArDy.
This guy isn't using fractals, since that would imply at least approximate self-similarity and (theoretically) unlimited detail, which doesn't seem to be the case; there are also a few frames in their videos where the camera gets just a bit too close to some detail, showing that the object's true nature is just a soup of axis-aligned colored squares.

I guess what he's doing is simply re-using octree(?) nodes in multiple places in the same scene. This also points at another big problem with his technology, which is apparent in the videos too: to avoid an artificial look, level designers normally use alternate textures or add faults to repetitive things such as stairs on old buildings. Doing similar things with Unlimited Detail would eat lots of memory.
added on the 2010-04-15 12:29:39 by Kabuto
yeah I think it's simple tree reuse in two ways

1) u can search for volume patterns in the tree, but only if u separate colour and volume (i.e. have some kind of key/hash system for the colours)

2) instancing - (the pyramids of creatures smacks of simply pointing a node to a "creature root node") - it kinda sucks because the instancing can only be tree grid aligned.
added on the 2010-04-15 12:36:09 by ZJ
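The key/hash idea in (1) is essentially hash-consing: build nodes bottom-up and intern each one in a table, so identical subtrees collapse into one shared object. A toy Python sketch (the 8-tuple node encoding and intern_node are made up for illustration):

```python
def intern_node(children, table):
    """children: an 8-tuple of already-interned child nodes or leaf values.
    Returns the one canonical copy, creating it only on first sight."""
    return table.setdefault(children, children)

# two identical "staircase" octants end up as a single shared node
table = {}
solid, empty = "X", "."
step_a = intern_node((solid, solid, solid, solid, empty, empty, empty, empty), table)
step_b = intern_node((solid, solid, solid, solid, empty, empty, empty, empty), table)
assert step_a is step_b   # same object: stored once, referenced twice
parent = intern_node((step_a, step_b, empty, empty, empty, empty, empty, empty), table)
```

Instancing as in (2) then falls out for free: pointing several parent slots at the same interned node is exactly the "creature root node" trick - grid-aligned and all.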
exactly what I'm thinking of!
this is a hoax and won't come true; every idiot investing should lose all his/her money instantly, to learn even faster than in 16 months!
added on the 2010-04-15 12:44:04 by hArDy.
The only question is: can he get the same fps with the same number of voxels without using impostors, i.e. with either a massive number of different models or the same model duplicated in memory? If not, it's neat, but it can't be used for real-world virtual worlds :)

Same goes for animation: what can differ between impostors - translation, rotation, scaling, skeletal pose, textures?

Especially since he didn't seem to know about caches, since that's exactly what this is about, and what has arguably been helping him behind his back...

but let's see.
The real question is, will we be able to use lots of bloom shaders on this?
bloom shaders are so 2001, nowadays we all use Crease Darkening.
added on the 2010-04-15 17:06:02 by kb_
Where can I get a tech demo? Otherwise this is rubbish...
see it to believe it.
added on the 2010-04-15 17:06:23 by AMNESTY
the Phantom console has this unlimited detail engine built-in.... i've seen it!! GOTTA INVEST|#|@!·"
added on the 2010-04-15 17:41:12 by Jcl
anyone know how to draw a 3d cube? lol - brain not working

3d cube from 2d screen space points specifically

I got it kinda working last night - but the perspective foreshortening is not 100% correct.

I'm doing this: calc the 2d screen points of the initial octree's 8 corner points
and use them in the recursive loop, subdividing each time. But it's not correct - any ideas?

added on the 2010-04-15 23:04:31 by ZJ
ZJ: we wants a demo exe orz we wonts believe u!!!8!!
u can have a .exe no problem - but I have no sparse data yet so it sucks ass big time :)
added on the 2010-04-15 23:45:16 by ZJ
ZJ: Uhm, it's not correct, no. You have to subdivide the initial cube in world space. Look up some of the old tricks for perspective-correct texture mapping in software rasterisers - the problem is about the same. You can't escape the division.

The problem is even worse if the camera is inside the cube. Try it, you'll see. ;)
added on the 2010-04-16 00:20:39 by doomdoom
yeah thanks doom.

I'm certain the divide can be dropped - I just need two midpoints instead of the one I'm using now

not worried about the camera inside the cube yet - I'll just always project the cube in front and apply some voodoo to get around that.
added on the 2010-04-16 00:23:38 by ZJ
1. Brag about having an Unlimited Detail rendering engine
2. ...
added on the 2010-04-16 04:33:24 by Jcl
it's kinda funny. I agree with everyone saying that he needs a better showcase. If you take a look at the Crytek engine showcases, they put all gamers in awe and thus got popular (I'm sure it was good).
Yeah, it became popular and earned respect for looking awesome. And it was actually almost never adopted for games...
added on the 2010-04-16 08:54:56 by rpfr
ZJ: The problem is that you can't interpolate across z=0 at all. But that's just a particularly nasty case of the fact that you can't interpolate in screen space, period. A line in world space projects to a line in screen space, but the midpoints of the two lines are not the same point.

World-space midpoint in screen space (perspective correct): ( x1 + x2 ) / ( z1 + z2 )

Screen-space midpoint in screen space: ( p1 + p2 ) / 2, where pn = xn / zn

If z1 = z2 the two are equivalent, but when z1 != z2 they're not. You can work out the correct midpoint from screen-space coordinates by doing a weighted average, but then you still need a 1 / ( z1 + z2 ) term. So it only increases the complexity of the whole thing, as you have to treat situations like z1 < 0 < z2 as special cases.
added on the 2010-04-16 09:56:54 by doomdoom
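The mismatch is easy to check numerically; a small Python sketch with made-up points and focal length 1:

```python
def project(p):
    """Perspective projection of a camera-space point (x, y, z), focal length 1."""
    x, y, z = p
    return (x / z, y / z)

p1, p2 = (0.0, 0.0, 1.0), (2.0, 0.0, 3.0)

# project the world-space midpoint: ((x1+x2)/2) / ((z1+z2)/2) = (x1+x2)/(z1+z2)
world_mid = tuple((a + b) / 2.0 for a, b in zip(p1, p2))
correct = project(world_mid)                               # x = 2/4 = 0.5

# average the projected endpoints instead: (p1' + p2') / 2
s1, s2 = project(p1), project(p2)
naive = ((s1[0] + s2[0]) / 2.0, (s1[1] + s2[1]) / 2.0)     # x = (0 + 2/3)/2 = 1/3

assert correct != naive   # subdividing in screen space drifts off the true edge
```

With these points the perspective-correct midpoint lands at x = 0.5 while the screen-space average lands at x = 1/3 - exactly the ( x1 + x2 ) / ( z1 + z2 ) versus ( p1 + p2 ) / 2 gap described above.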
You can approximate the divisions when a tree node is so far away that its perspective distortion is much less than a pixel on screen. When you notice that, simply project the node's x, y, z step vectors onto the screen (from the perspective of the current node) and use those projected deltas when subdividing the node further.

AFAIK modern CPUs need about 20 cycles per floating-point or 64-bit integer division. This is not a problem per se - just make sure the CPU can parallelize things nicely, and insert some other code between the division and the first use of its result.

I've read an article about the Quake 1 development where the developers had exactly this division problem: doing the perspective divide per pixel was too expensive, but not doing perspective correction at all gave ugly distortions (as seen in many other old software-rendered 3D games - I'm not talking about 2.5D games such as Doom, which due to their simple inner workings don't have this problem). Their solution was to do a division once per (IIRC) 16 horizontal pixels and to parallelize things nicely, so the CPU computes the next perspective-correct texture position in the background while pixels are being rendered. This way distortion was only visible when viewing walls from a very acute angle.
added on the 2010-04-16 11:55:35 by Kabuto
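That once-per-16-pixels scheme can be sketched as follows - Python with a made-up wall, whereas the real thing ran in fixed point with the divide overlapped against pixel output:

```python
def exact_u(x, w, uz0, uz1, iz0, iz1):
    """Perspective-correct texture coordinate at pixel x of a w-pixel span.
    u/z and 1/z interpolate linearly in screen space; u itself does not."""
    t = x / (w - 1)
    return (uz0 + (uz1 - uz0) * t) / (iz0 + (iz1 - iz0) * t)

def quake_span(w, uz0, uz1, iz0, iz1, step=16):
    """One true divide every `step` pixels, cheap linear interpolation between."""
    u = [0.0] * w
    x = 0
    while x < w - 1:
        x2 = min(x + step, w - 1)
        ua = exact_u(x, w, uz0, uz1, iz0, iz1)    # the expensive divides...
        ub = exact_u(x2, w, uz0, uz1, iz0, iz1)   # ...happen only at anchors
        for i in range(x, x2 + 1):
            u[i] = ua + (ub - ua) * (i - x) / (x2 - x)
        x = x2
    return u

# a wall receding from z=2 to z=4, texture u running 0..1 across 128 pixels
w = 128
uz0, uz1, iz0, iz1 = 0.0 / 2.0, 1.0 / 4.0, 1.0 / 2.0, 1.0 / 4.0
approx = quake_span(w, uz0, uz1, iz0, iz1)
worst = max(abs(approx[x] - exact_u(x, w, uz0, uz1, iz0, iz1)) for x in range(w))
```

For this 2:1 depth range the worst-case deviation stays below about 1% of the texture coordinate range, which is consistent with the seams only showing up at very acute viewing angles.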
Kabuto: The SSE RCPPS instruction lets you do four 1.0/x approximations in ~2 clocks on modern CPUs. 12 bits of precision should be plenty for perspective divides.
added on the 2010-04-16 12:14:38 by kusma
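And if ~12 bits ever isn't enough, one Newton-Raphson step on the RCPPS output roughly doubles the precision. A Python sketch that fakes the 12-bit seed by truncating a float32 mantissa (rcp_seed is only a stand-in for the actual instruction):

```python
import struct

def rcp_seed(x):
    """Stand-in for RCPPS: the exact reciprocal truncated to ~12 significand bits."""
    bits = struct.unpack('<I', struct.pack('<f', 1.0 / x))[0]
    bits &= ~((1 << 11) - 1)          # drop the low 11 of the 23 stored mantissa bits
    return struct.unpack('<f', struct.pack('<I', bits))[0]

def rcp_refined(x):
    """One Newton-Raphson iteration for 1/x: relative error goes from ~2^-12 to ~2^-24."""
    y = rcp_seed(x)
    return y * (2.0 - x * y)
```

The refinement y * (2 - x*y) squares the relative error, so ~2^-12 becomes ~2^-24 - essentially full single precision - for two extra multiplies and a subtract.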
12 bits of precision should be enough for everybody!
added on the 2010-04-16 13:06:04 by sol_hsa
pi = 3.14160

Suck it, Archimedes!
added on the 2010-04-16 13:41:52 by doomdoom
yeah, suck it, and then here's your projected cube, sir

BB Image
btw imageshack seems to have been down for some time now...
is this the end? :( :( :(