Effects in demos that you don't know how they work

category: code [glöplog]
rasmus: Rename the .dat file to .avi, open it in a media player and see what happens. 8)
added on the 2011-02-15 10:35:14 by doomdoom doomdoom
added on the 2011-02-17 00:01:56 by las las
las: No, that's rasterized polygons.
added on the 2011-02-17 00:06:11 by kusma kusma
I guess instancing or sth like that?
added on the 2011-02-17 00:16:23 by las las
powly, that was pretty much not what was discussed. gpu->gpu textures are fast and cpu->gpu are not. the latter was in question.
But you asked how they are done, it's by rendering directly to textures. It's just another shader for those textures to move the particles.
added on the 2011-02-17 07:27:01 by msqrt msqrt
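(The texture-based particle scheme msqrt describes can be mimicked on the CPU to show the idea: particle state lives in "textures" — plain arrays here — and the "shader" is a per-texel pass that reads the old position texture plus a velocity texture and writes a new position texture, ping-pong style, since a GPU pass can't read and write the same texture. A rough sketch only; every name and size below is invented:

```c
#include <assert.h>

#define N 4  /* a tiny 2x2 "texture" worth of particles */

/* one update pass: read the old position and velocity "textures",
 * write the moved positions into a second position "texture" */
static void update_pass(const float *pos_in, const float *vel,
                        float *pos_out, float dt)
{
    for (int i = 0; i < N; i++)      /* one "fragment" per particle */
        pos_out[i] = pos_in[i] + vel[i] * dt;
}
```

On an actual GPU this pass would be a fragment shader rendering into the second position texture, with the two textures swapped every frame.)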
ok, so they are raytraced dots from what i can tell
No, they are point sprites.
added on the 2011-02-17 08:44:21 by pommak pommak
rasmus: http://directtovideo.wordpress.com/2009/10/06/a-thoroughly-modern-particle-system/
added on the 2011-02-17 09:44:32 by smash smash
added on the 2011-02-17 09:45:01 by smash smash
pommak, ok i just scanned the fs files and saw the word "march" :-)
rasmus: wrong shader :) particles aren't marched.
added on the 2011-02-18 07:54:23 by pommak pommak
About voxels: There is a brief description of the voxel technique in my diary article about Luminagia. In particular, read the entry for January 13, 2008.
added on the 2011-02-18 10:20:38 by Blueberry Blueberry
nice Blueberry, gotta read it
added on the 2011-02-18 14:33:15 by rudi rudi
Well then help me, please! :) I recently sat down to do some voxel effects and so far I've rehashed the twister and landscape. I've even applied a polar map transform to the twister and that yielded a neat torus-like twister, like the one seen in 'live evil'.

But then my old arch nemesis: the tunnel and the ball. I tried mapping a landscape along the Y-axis of a buffer and doing a polar transform, for a tunnel. This came close but still looked a bit off. Any pointers?

Then the ball. I recently rewatched non-stop ibiza and it's obvious (is it?) that there are some objects in there that are essentially a heightmap wrapped around a sphere. Without any transforms I figured that a way to do this might be the following:

Iterate over each angle of a full circle (360) and for each, cast a ray outwards from the middle of the screen and (at least for starters) the middle of the heightmap, into the correct corner (like a good radial blur). Then, for each step along the ray, do the project-height-and-render-spans-front-to-back thing. Two serious issues with this. Firstly, how to project/scale the sampled height? I guess an option would be to go the twister route and imagine the ray is a 2D half-circle slice (so a 0-1-0 sine curve, 180 degrees), but that might need another thought. Secondly, it's obviously not very memory efficient to read and write pixels that way; I'm guessing that won't really fly at full framerate on an Amiga? :)

So in the end this is probably also solved by a map transform of sorts. I just fail to see how/where right now :)
added on the 2011-09-10 11:22:37 by superplek superplek
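(The "project-height-and-render-spans-front-to-back thing" mentioned above can be sketched like this: march along one ray, project each heightmap sample by its distance, and only paint the part of the span that rises above everything painted so far. A toy C sketch with made-up names, sizes and projection math, not anyone's actual engine code:

```c
#include <assert.h>

#define RAY_STEPS 8     /* heightmap samples along one ray */
#define SCREEN_H  16    /* rows available in one screen column */

/* crude perspective: projected span height shrinks with distance
 * (the 16 is an arbitrary, invented scale factor) */
static int project_height(int sample, int dist)
{
    return (sample * 16) / (dist + 1);
}

/* Render one ray front to back into column[]; a running y-max means
 * every pixel is painted at most once (no overdraw).
 * Returns the number of pixels painted. */
static int render_ray(const int *heightmap, unsigned char *column)
{
    int painted = 0;
    int ymax = 0;                         /* top of everything drawn so far */
    for (int d = 0; d < RAY_STEPS; d++) { /* front to back */
        int h = project_height(heightmap[d], d);
        if (h > SCREEN_H) h = SCREEN_H;
        for (int y = ymax; y < h; y++) {  /* only the newly exposed span */
            column[y] = (unsigned char)(d + 1);  /* shade by distance */
            painted++;
        }
        if (h > ymax) ymax = h;
    }
    return painted;
}
```
)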
(also shit I didn't see the link to blueberry's Luminagia doc... kind of confirms a few things but,

For instance, to produce a voxel blob, shoot the rays out in all directions from one point, scale the height by a sine function (half a period) and map polar-wrapped.

so hmm, if I were to, for each angle of the sphere, do what I said above (and what the doc says) -- but instead of directly rendering the actual voxels radially, I render them vertically to a buffer and then do a polar transform blit with that buffer?)
added on the 2011-09-10 11:26:57 by superplek superplek
I guess the latter must be pretty much it. In fact I figured one would get a shitload of ugly overdraw issues when drawing the fan spans directly on screen. Okay, I'll test this approach.
added on the 2011-09-10 12:15:22 by superplek superplek
You can do it in 1-pass as well. You accomplish that by, for each "slice", having a precomputed list of (screen offset, voxel height) pairs, sorted on the voxel height values.
Thus, when you paint all the (screen offset) locations for a slice, you effectively paint a ray on-screen that starts in the screen-center and works its way outward.

What's good about this approach is that you paint each pixel exactly once. (With the 2-pass method, most of the information that you are painting close to the center of the image will never be used.) On the other hand, the per-pixel work is more convoluted, and you cannot easily gain performance by scaling back rendering quality in the same way that you can in the 2-pass approach.

I don't know which method is faster for the same quality level.
added on the 2011-09-10 12:37:17 by Kalms Kalms
I think that just might be smart. Doing the second pass isn't that cheap either: there's filtering (really necessary in 640x480 or higher) and there's poor locality when traversing the transform map pixel-by-pixel (so you'd end up with 8x8 tiling or something to that effect).

So, if I read this right, the list consists of a projected height-on-screen transformed to an actual coordinate, for each height, and this for each angle. So you'd end up with new_height = table[angle][map_height] and use a line algorithm to draw the span from the previous to the new height?
added on the 2011-09-10 13:46:43 by superplek superplek
I'm not reading it right. But I'll "think" out loud: I cast a ray for a certain angle (or slice), start traversing it and at that point I know 3 things: the fan angle (or the direction vector of the ray both in-map and on-screen), the sampled height and how far along the ray I am (which I figure has an effect on the projection/scaling). Now I guess I am to project said height by the appropriate sinecurve (= voxel height?) and that + the fan angle I'm currently doing gives me a screen offset to draw the span to?
added on the 2011-09-10 13:55:57 by superplek superplek

During precomputation, you precompute the inverse of a tunnel table.
That is, you build a bunch of lists of target pixels. The interpretation of list number X is, "if I paint all the pixels in list X, then I draw all the pixels which lie at an angle of X degrees as measured from the screen origin". In addition to this, you should also store the distance from the screen centre for each pixel (this is the "height" value being compared later) - and sort them in ascending order within each list.

Then, render time.

At the beginning of each ray, you choose which list of target pixels you should be using. This is determined entirely by the fan angle.

Then, when you are progressing along a ray...
You have the current fan angle and how far along the ray that you have travelled so far.
In addition to this, you also track how far along you currently are in the list of target pixels.
So you sample from the heightmap (using fan angle + distance to pick location) and apply sinecurve. The value that you have is the height value.
Now, it's time to paint zero or more pixels. Check the height value against the next target pixel's height in the list. If the target pixel's height value is lower -- paint at that target pixel's screen offset, advance in list, and redo the test. Keep on going until you've reached a target pixel whose height value is too high. At that point, it is time to advance along the ray and perform a new heightmap sample.
added on the 2011-09-10 14:53:33 by Kalms Kalms
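(The render-time loop walked through above, as a toy C sketch: the sine scaling and real 2-D heightmap sampling are omitted, and TargetPixel plus all sizes are made up. One slice's list, sorted ascending on the threshold, is consumed while advancing along the ray:

```c
#include <assert.h>

typedef struct {
    int offset;   /* linear screen offset, precomputed per slice */
    int height;   /* threshold, ascending within the slice's list */
} TargetPixel;

/* Walk outward along one ray. After each heightmap sample, paint every
 * remaining target pixel whose threshold is below the sampled value,
 * then stop at the first one that is too high and take another sample.
 * Returns how many target pixels got painted. */
static int render_slice(const int *heightmap, int steps,
                        const TargetPixel *targets, int ntargets,
                        unsigned char *screen)
{
    int t = 0;                          /* progress through the target list */
    for (int d = 0; d < steps; d++) {   /* advance along the ray */
        int value = heightmap[d];       /* sample (sine scaling omitted) */
        while (t < ntargets && targets[t].height < value) {
            screen[targets[t].offset] = (unsigned char)(d + 1);
            t++;                        /* each pixel painted exactly once */
        }
    }
    return t;
}
```
)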
added on the 2011-09-10 14:57:38 by ferris ferris
Awesome Kalms, thank you. Can't be much clearer than this.
added on the 2011-09-10 14:58:33 by superplek superplek
and ferris, cram a sock in it and get back to rm'ing cubes with spherical holes in them within the comfort of a pixelshader that solves all actual work for you :)

(i'll get back to d3d as soon as this runs, for good measure)
added on the 2011-09-10 15:03:11 by superplek superplek