Question for PC hw accel demo coders

category: general [glöplog]
 
I want to start doing some 3D coding again, and so I need to decide whether to learn all the hw accel stuff. I know very little about the specifics of how modern hardware works, hence this thread of questions :)

The last time I wrote a 3D engine was round about the GeForce2 era, and it was entirely software rendered. Since the 3D hardware was fixed function I found the idea of using it thoroughly uninteresting.

I know it's come a long way since then, but is it really as flexible as software rendering? As a demo coder do you feel you're free to create whatever effects you want? (As long as it fits within the triangle rasterisation paradigm...)

I have some specific questions. They're quite long, but hopefully the answers are only one or two sentences :)

I guess I understand what pixel shaders are - they basically let you use whatever algorithm you want to choose the colour of a pixel on a triangle, as long as the fragment program is within a given size, right? What data sources do you have available to you as a fragment program? (e.g. surrounding geometry, the z-buffer, existing frame-buffer data from this and the last frame, etc.) What is *not* available? Can you create your own buffers of per-pixel data for arbitrary use?

Is there any way yet to shade outside of triangle edges? E.g. say I wanted to draw a polygon with dithered edges to create a sketchy effect, is this possible now? I've seen demos try to do this with what looks like stuff overlaid on top, but it always looks like shit.

I read some slides by a game dev about something called 'deferred rendering' a couple of months ago. It didn't fit with how I thought the hardware worked. If they wrote all this metadata to a buffer while rasterising the triangles, how did they then go back and process it to create the final frame afterwards? I didn't think you could process the frame buffer in a traditional left-to-right, top-to-bottom manner on hardware.

If I do want to do something entirely impossible or impractical on the 3D card for a particular scene, are there any problems with just rendering on the CPU and uploading the result as a texture every frame? Would there be any way to upload a z-buffer in this way, so I can have hw-rendered polygons intersecting with externally rendered stuff?

Is there anything you can do in Direct3D that you can't do in OpenGL + extensions? I'd really rather use OpenGL since I haven't used Windows much since about 1998 and it'd be a bitch to go back now :). I also despise COM programming so if DirectX is anything like that I think I would die.

Thanks for any insight, sorry for being a bit tl;dr :)
added on the 2008-03-24 23:54:47 by nagato^
wait a few more years and Tim Sweeney foretells that it will be all software rendering again (!) with the proliferation of cores...
added on the 2008-03-25 00:50:24 by Zest
put your softrender on a ps3 cell, and it will fly ;)
added on the 2008-03-25 00:56:58 by winden
I'm too lazy to answer all of it, but if you think about it, most of your questions can be answered by this single sentence:

Yes, you can have the card render or copy whatever you want into textures, and you can then read from these textures arbitrarily in your fragment shaders.

You should be able to figure out how to use this fact to do almost everything you ask for in this post.
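
For example, a minimal render-to-texture sketch in C (assuming an existing GL context and the EXT_framebuffer_object extension, which is standard on current cards; drawScene() and drawFullscreenQuad() are made-up placeholder helpers):

/* create an empty texture to render into */
GLuint tex, fbo;
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_2D, tex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, 512, 512, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, NULL);      /* no initial data */
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
                        /* no mipmaps, or the FBO won't be complete */

/* attach it as the colour target of a framebuffer object */
glGenFramebuffersEXT(1, &fbo);
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, fbo);
glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT,
                          GL_TEXTURE_2D, tex, 0);

drawScene();            /* everything drawn now lands in tex
                           (remember glViewport for the texture size) */

glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, 0);
glBindTexture(GL_TEXTURE_2D, tex);
drawFullscreenQuad();   /* a later pass reads tex arbitrarily in its shader */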
added on the 2008-03-25 01:00:35 by ector
What Ector said. You can put anything you want in textures, including the contents of the framebuffer, precalculated tables, or whatever. The main inputs to a pixel shader are external constants and interpolated values from the vertex shader. You can set the constants per draw call, but not per-vertex or per-pixel. Constants are usually used for things like the current lighting state, the current time, or whatever. Modern graphics cards support a small number of interpolators between the vertex shader and the pixel shader; something like a dozen 4-component vectors seems about normal.

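To make that concrete, here's a toy fragment shader in GLSL, written as a C string the way you'd embed it in a demo (all the names here are invented). In GLSL terms the constants are "uniforms" and the interpolators are "varyings":

const char *fragSrc =
    "uniform float time;           /* per-draw-call constant          */\n"
    "uniform sampler2D lastFrame;  /* a texture, i.e. arbitrary data  */\n"
    "varying vec2 uv;              /* interpolated from vertex shader */\n"
    "void main() {\n"
    "    vec4 prev = texture2D(lastFrame, uv);\n"
    "    gl_FragColor = prev * (0.5 + 0.5 * sin(time));\n"
    "}\n";
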
Why would you want to shade outside the triangle's edges? Just make the triangle bigger.
added on the 2008-03-25 01:22:40 by s_tec
Ector: That is, IF you are able to find your source code!
added on the 2008-03-25 01:23:11 by Hatikvah
By the way, the best way to see what is possible is to read the docs. The assembly-language shader reference will give you a good sense of the limits, although I would not want to actually code a shader in assembly:

http://msdn2.microsoft.com/en-us/library/bb219844(VS.85).aspx
added on the 2008-03-25 01:34:37 by s_tec
The GPU can gather, but not scatter, data in the pixel shader ("fragment program" is the OpenGL word for it?).
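
In other words, a shader can read from any texel it likes, but it can only write to its own output pixel. A made-up GLSL sketch of the distinction (again as a C string):

const char *gatherSrc =
    "uniform sampler2D src;\n"
    "varying vec2 uv;\n"
    "void main() {\n"
    "    /* gather: read from any computed location you like... */\n"
    "    vec4 c = texture2D(src, uv + vec2(0.1, 0.0));\n"
    "    /* ...but the write always goes to this fragment's own pixel */\n"
    "    gl_FragColor = c;\n"
    "}\n";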
added on the 2008-03-25 01:35:27 by imbusy
