
New rendering way?

category: code [glöplog]
texel - guess it's quite close to REYES if you pick your sample points by uniformly subdividing your parametric space. I haven't heard of the exact method you are describing before, though. Guess it bears resemblance to Loonies - Michigan as well, in that it picks random points on an implicit surface and projects them (assuming it's still using a z-buffer for visibility).
added on the 2011-05-31 20:23:29 by hornet hornet
As I read it, you are doing some sort of randomized forward (point) rendering of explicit parametric surfaces. Those surfaces could be polygons for that matter...

So, it's kind of useful in cases where you have analytic parametric surfaces that are hard to make implicit (raymarch) or intersect (raytrace). However, you could most often make micropolygons (REYES-like, as Hornet mentions) out of them and thereby be sure to fill the whole surface efficiently.
As I see it, you'll have a very hard time determining (automatically - though that may not be needed for procedural pictures, of course) when you have covered the whole surface. On the other hand, simple regular point sampling would have a similar problem on complex surfaces. Still, it would take extremely many samples to converge. You could splat larger particles, for the artistic effect if nothing else ;)
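A minimal sketch of that kind of randomized forward point rendering, assuming an explicit parametric torus as the example surface (the surface and all constants are illustrative, not from the thread):

Code:
// Sketch: pick random (u, v) in parameter space, evaluate an explicit
// parametric surface (a torus here), project, and z-buffer the splat.
// buffer is an ImageData.data array, zbuffer a depth array of W*H entries.
function splatParametricSurface(buffer, zbuffer, W, H, samples) {
  for (var i = 0; i < samples; i++) {
    var u = Math.random() * 2 * Math.PI;   // random point in parameter space
    var v = Math.random() * 2 * Math.PI;
    var x = (1 + 0.35 * Math.cos(v)) * Math.cos(u); // torus, major radius 1,
    var y = (1 + 0.35 * Math.cos(v)) * Math.sin(u); // minor radius 0.35
    var z = 0.35 * Math.sin(v) + 3;        // pushed in front of the camera
    var xp = Math.floor(x / z * H) + (W >> 1); // simple perspective projection
    var yp = Math.floor(y / z * H) + (H >> 1);
    if (xp >= 0 && xp < W && yp >= 0 && yp < H) {
      var pos = yp * W + xp;
      if (z < zbuffer[pos]) {              // keep only the nearest sample
        zbuffer[pos] = z;
        var shade = Math.max(0, Math.min(255, Math.floor(180 * (3.5 - z))));
        buffer[pos * 4] = buffer[pos * 4 + 1] = buffer[pos * 4 + 2] = shade;
        buffer[pos * 4 + 3] = 255;
      }
    }
  }
}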

A GPU implementation using compute shaders would be quite straightforward and decently efficient (given the method, of course). I would just recommend (for SIMD reasons) having a whole thread group work on the same primitive instead of taking completely random samples.

And yes, doing any global effects like shadows seems hard to fit in.

hornet: michigan splats particles onto the scene from the camera in a randomized grid, using raymarching to find the scene intersection - "screen space emitting".
added on the 2011-05-31 20:42:21 by Psycho Psycho
texel, this is how IFS fractals and such were rendered back in the 80s. It is also how Buddhabrot fractals are rendered.
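For reference, that 80s IFS rendering is the "chaos game": iteratively apply a randomly chosen contraction map and plot every iterate. A minimal sketch using the three Sierpinski triangle maps (my choice of example):

Code:
// Chaos game: render an IFS by plotting the orbit of randomly chosen maps.
// The three "average with a triangle corner" maps give the Sierpinski
// triangle; buffer is an ImageData.data array of a W x H canvas.
function chaosGame(buffer, W, H, iterations) {
  var corners = [[0, 0], [W - 1, 0], [W >> 1, H - 1]];
  var x = Math.random() * W, y = Math.random() * H;
  for (var i = 0; i < iterations; i++) {
    var c = corners[Math.floor(Math.random() * 3)]; // pick one map at random
    x = (x + c[0]) / 2;                             // contract toward the corner
    y = (y + c[1]) / 2;
    if (i > 20) {                                   // skip the initial transient
      var pos = (Math.floor(y) * W + Math.floor(x)) * 4;
      buffer[pos] = buffer[pos + 1] = buffer[pos + 2] = buffer[pos + 3] = 255;
    }
  }
}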

to improve your render time, if that is a concern, you would indeed have to do your sampling proportional to the surface area of the objects. also, you might want to use the Metropolis algorithm.
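One way to read "sampling proportional to the surface area", sketched below; the primitives array and its area/samplePoint interface are assumptions for illustration:

Code:
// Sketch: pick which primitive to sample with probability proportional to its
// surface area, via a prefix-sum (CDF) over the areas. The primitives array
// and its { area, samplePoint() } shape are hypothetical.
function makeAreaSampler(primitives) {
  var cdf = [], total = 0;
  for (var i = 0; i < primitives.length; i++) {
    total += primitives[i].area;
    cdf.push(total);
  }
  return function () {
    var r = Math.random() * total;
    var lo = 0, hi = cdf.length - 1;
    while (lo < hi) {                    // binary search for the first cdf >= r
      var mid = (lo + hi) >> 1;
      if (cdf[mid] < r) lo = mid + 1; else hi = mid;
    }
    return primitives[lo].samplePoint(); // random point on the chosen primitive
  };
}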

btw, you can accumulate samples in a clever way as points fall into the color buffer pixels and get some antialiasing for free. plus, add time as a random parameter and get motion blur for free too. but again, the parameter space is so big that for real scenes i think you really need some clever sampling scheme.
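A sketch of that accumulation idea; sampleScene(u, v, t) is a hypothetical scene function returning a projected point {x, y, brightness} at time t:

Code:
// Sketch: average every sample that lands in a pixel (antialiasing for free)
// and randomize a time parameter per sample (motion blur for free).
var W = 512, H = 512;
var accum = new Float32Array(W * H);  // summed brightness per pixel
var count = new Uint32Array(W * H);   // number of samples per pixel
function accumulate(samples, shutter) {
  for (var i = 0; i < samples; i++) {
    var t = Math.random() * shutter;  // random time inside the exposure
    var p = sampleScene(Math.random(), Math.random(), t);
    var xp = Math.floor(p.x), yp = Math.floor(p.y);
    if (xp >= 0 && xp < W && yp >= 0 && yp < H) {
      var pos = yp * W + xp;
      accum[pos] += p.brightness;
      count[pos]++;  // display accum[pos] / count[pos] when presenting
    }
  }
}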

but, all that being said, for 4k intros, this is a great idea.
added on the 2011-05-31 23:30:10 by iq iq
psycho - ach, got the wrong end of the stick then, thanks! :)
added on the 2011-06-01 00:31:22 by hornet hornet
I like this idea. It is slow but much simpler than raytracing or rasterizing.

Quote:
Hey, hey, hey... it works!!!

http://www.romancortes.com/ficheros/rendertest.html

Could you please help me find out if I invented this rendering way!? If not, I'm enjoying it as if I did... :P


Thank you very much for sharing your code.
I modified your rendertest.html to render a fractal object.
It renders an object defined as a function from a 3D point to a boolean, in the following way:

1. Generate a random sampling point in 3D.
2. Check whether the sampling point is part of the object. If not, move the point so that it is.
3. Scale the sampling point's coordinates, project it to the screen, and write its color into the color buffer.
4. Go to 1.

If I instead discard sampling points that are not part of the object, too many of them get discarded and rendering takes longer.

Here is a captured image:
http://twitpic.com/55edeq

Here is the source:
Code:
<!DOCTYPE html>
<html>
<head>
<script type="text/javascript">
function init() {
  // Warps a random point in the unit cube onto the fractal: each axis is
  // pushed out of the removed middle thirds (Cantor-style), then remapped
  // with 4t^3 and pushed in front of the camera. The last element is a shade.
  function frac(x, y, z) {
    function fraccore(x) {
      var ox = x, scale = 1, flx, i;
      for (i = 0; i < 6; i++) {
        x *= 3;
        scale *= 3;
        flx = Math.floor(x);
        x -= flx;
        // point landed in a removed middle third: shift it onto the set
        if (!(flx == 0 || flx == 2)) {
          if (x < 1.5) { ox -= 1 / scale; } else { ox += 1 / scale; }
        }
      }
      return ox;
    }
    x = fraccore(x) - 0.5;
    y = fraccore(y) - 0.5;
    z = fraccore(z) - 0.5;
    return [x * 4 * x * x, y * 4 * y * y, z * 4 * z * z + 1.5, (z * 4 * z * z + 0.5) * 255];
  }

  var canvas = document.getElementById("canvas"),
      context = canvas.getContext("2d"),
      imageData = context.getImageData(0, 0, 512, 512),
      buffer = imageData.data,
      zbuffer = new Array(512 * 512),
      interval,
      steps = 0,
      i,
      render = function () {
        var point, i, x, y, z, xp, yp, pos;
        for (i = 0; i < 50000; i++) {
          // 1. random sampling point in 3D
          x = Math.random();
          y = Math.random();
          z = Math.random();
          // 2. move it onto the object
          point = frac(x, y, z);
          // 3. perspective-project to the screen
          xp = Math.floor((point[0] * 512) / point[2]) + 256;
          yp = Math.floor((point[1] * 512) / point[2]) + 256;
          // splat if on screen and nearer than what is already there
          if (xp >= 0 && xp < 512 && yp >= 0 && yp < 512 &&
              point[2] < zbuffer[pos = yp * 512 + xp]) {
            zbuffer[pos] = point[2];
            buffer[pos * 4 + 1] = point[3]; // green channel carries the shade
            buffer[pos * 4 + 3] = 255;      // opaque alpha
          }
        }
        context.putImageData(imageData, 0, 0);
        steps++;
        if (steps > 1024) {
          window.clearInterval(interval);
        }
      };

  // clear the z-buffer to "infinitely far"
  for (i = 0; i < 512 * 512; i++) {
    zbuffer[i] = 10000000000;
  }
  interval = window.setInterval(render, 0);
}
</script>
</head>
<body onload="init()">
<canvas id="canvas" width="512" height="512"></canvas>
</body>
</html>
added on the 2011-06-01 13:14:22 by tomohiro tomohiro
i admit i haven't been following the thread too much, but if i get this right, this is basically implicit surface bruteforcing, right? cos if it's so, what's the point of the randomness? you could just step through the surface parameters at an arbitrary density (even optimizing adaptively by changing the density based on previous results) and get a much quicker result.
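That deterministic alternative, sketched; plot(u, v) is a hypothetical callback that evaluates the surface and splats the projected point:

Code:
// Sketch: walk the (u, v) parameter domain on a regular grid
// instead of sampling it at random.
function sweepParameters(plot, density) {
  for (var i = 0; i < density; i++)
    for (var j = 0; j < density; j++)
      plot(i / density, j / density);
}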
added on the 2011-06-01 13:41:20 by Gargaj Gargaj
Mmmmh, could this be used for pointillism rendering, or even paint-stroke rendering?
added on the 2011-06-01 14:16:53 by flure flure
gargaj: i bet you'd optimise a tv noise demo until it's a linear gradient too! ;) But tomohiro's example shows why you're right - the centre of the image gets drawn in a couple of seconds, but the lower right takes forever.
added on the 2011-06-01 14:23:21 by psonice psonice
or instead of using random/fixed step sizes, use a simple quasi-Monte Carlo sequence like Halton.. this way you should get the best tradeoff, as it fills up your screen "evenly" (given that your surface parametrization is also "somehow" uniform over the screen; otherwise you have to use Metropolis, like iq pointed out)..
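For reference, the Halton sequence is only a few lines - the radical inverse of the sample index in a prime base:

Code:
// Halton low-discrepancy sequence: the radical inverse of i in base b.
// Pairing base 2 and base 3 gives well-spread 2D parameter samples.
function halton(i, base) {
  var f = 1, r = 0;
  while (i > 0) {
    f /= base;
    r += f * (i % base);
    i = Math.floor(i / base);
  }
  return r;
}
// usage: the n-th sample is (halton(n, 2), halton(n, 3))
// instead of (Math.random(), Math.random())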
added on the 2011-06-01 16:49:52 by toxie toxie
To test tomohiro's code: copy-paste this http://pastebin.com/Bj2Fyz4s
into http://htmledit.squarefree.com

It works ;)
wouldn't some kind of importance sampling in those implicit models' parameter space be the way to go? I just wonder how to sample those parameters, but that is another problem. I don't really get the point of using Metropolis here.
added on the 2011-06-01 23:05:28 by maq maq
Gargaj / psonice: I dare say the randomness *does* make sense once you have 3 or more parameters (as tomohiro's code does), and it's not really feasible to systematically cover the whole parameter space. Of course, in that case the random version is only going to give you a point cloud rather than a proper solid, so... yeah.

But for surfaces - yep, I agree.
added on the 2011-06-01 23:35:18 by gasman gasman
It might be somewhat new in terms of academic papers, but it's certainly not new. Though perhaps you could combine it with some other techniques and/or quality standards, give it a snappy name, and lead its renaissance :) To me, it even seems like the most intuitive/naive way to view any procedural shape. I used to do this all the time to experiment with functions for generating shapes, such as deformed toruses in 3D.. but I never intended to release any of it.
added on the 2011-06-02 02:01:30 by guesser guesser
gasman: so how about subdividing the parameter space (i.e. doing multiple runs through the function domain, halving the step size with each run) and gradually going down to an arbitrary density - that gives a good result early on and is much more deterministic (especially when it comes to deciding the aforementioned "statistical completeness")
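A sketch of that progressive scheme; plot(u, v) is again a hypothetical surface-splatting callback:

Code:
// Sketch: repeated passes over the parameter domain, halving the grid spacing
// each pass and offsetting by half a step so every pass hits points the
// earlier passes missed.
function progressiveSweep(plot, passes) {
  for (var pass = 0; pass < passes; pass++) {
    var n = 2 << pass;                 // 2, 4, 8, ... cells per axis
    for (var i = 0; i < n; i++)
      for (var j = 0; j < n; j++)
        plot((i + 0.5) / n, (j + 0.5) / n);
  }
}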
added on the 2011-06-02 15:12:43 by Gargaj Gargaj
