pouët.net

Strange behaviour of ATI 5xxx with regard to multisampling

category: general [glöplog]
Hi all,

I have a strange problem with my new ATI 5xxx which I can't fix.

I use FBOs like this:

// multisampled colour renderbuffer
glGenRenderbuffersEXT(1, &FBOId2[num]);
glBindRenderbufferEXT(GL_RENDERBUFFER_EXT, FBOId2[num]);
glRenderbufferStorageMultisampleEXT(GL_RENDERBUFFER_EXT, multisample, GL_RGBA8, sizex, sizey);

// multisampled depth renderbuffer
glGenRenderbuffersEXT(1, &FBODepthBuffer[num]);
glBindRenderbufferEXT(GL_RENDERBUFFER_EXT, FBODepthBuffer[num]);
glRenderbufferStorageMultisampleEXT(GL_RENDERBUFFER_EXT, multisample, GL_DEPTH_COMPONENT32, sizex, sizey);

and everything works fine when multisample is 0. As soon as I set multisample > 0 (2, 4, etc.) I get weird artifacts on anything involving depth-buffer calculations (shadow mapping etc.). I can describe the artifacts as some sort of pixelization. The higher the multisample count, the worse the effect (with multisample == 8 I just see 4x4 pixel blocks). I have also tried this with GL_DEPTH_COMPONENT24 and 16; the result is the same.

This is not an issue with ATI 4xxx or my NVIDIA 8-series card. I tried multisampling just the RGBA8 renderbuffer (so that the depth buffer has no multisampling), but that doesn't work (nothing shows on screen).

Any ideas? It is very weird that the behaviour is so different to the old series ATIs.
added on the 2010-05-26 14:21:36 by Navis Navis
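For reference, the snippet above only allocates the storage. A minimal sketch of the attach-and-resolve steps such a setup implies, assuming EXT_framebuffer_blit is available; fbo and resolveFbo are illustrative names, not from the original post:

// Hypothetical completion of the setup above: attach both renderbuffers
// to an FBO. Note that EXT_framebuffer_multisample requires all attachments
// to have the same sample count, otherwise the FBO is incomplete.
GLuint fbo;
glGenFramebuffersEXT(1, &fbo);
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, fbo);
glFramebufferRenderbufferEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT,
                             GL_RENDERBUFFER_EXT, FBOId2[num]);
glFramebufferRenderbufferEXT(GL_FRAMEBUFFER_EXT, GL_DEPTH_ATTACHMENT_EXT,
                             GL_RENDERBUFFER_EXT, FBODepthBuffer[num]);
if (glCheckFramebufferStatusEXT(GL_FRAMEBUFFER_EXT) != GL_FRAMEBUFFER_COMPLETE_EXT)
    ; // incomplete, e.g. mismatched sample counts between attachments

// ... render the scene into fbo ...

// Resolve the multisampled buffers into a single-sampled FBO
// (resolveFbo is assumed to hold non-multisampled storage of the same size).
glBindFramebufferEXT(GL_READ_FRAMEBUFFER_EXT, fbo);
glBindFramebufferEXT(GL_DRAW_FRAMEBUFFER_EXT, resolveFbo);
glBlitFramebufferEXT(0, 0, sizex, sizey, 0, 0, sizex, sizey,
                     GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT, GL_NEAREST);
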
And this is the depth map that I get with multisampling = 0 and = 8. No other processing is involved in between.


BB Image
added on the 2010-05-26 14:56:15 by Navis Navis
My machine does not get through to asd.gr
Looks like an issue with depth-buffer compression. Have you tried allocating a stencil buffer as well? AMD drivers have previously packed depth and stencil together, so that could potentially be a workaround.
added on the 2010-05-26 16:03:09 by hornet hornet
When I read about it, it is usually called the z/stencil buffer... I think the traditional stencil buffer is "obsolete". (Whatever happened to w-buffering? I remember the DX7 SDK sample.)

If AA works the way I think it does, it could be including an outlier (the stencil value) in the mean (of depth samples). I dunno, I've never done 3D stuff. I need to start coding someday.
added on the 2010-05-26 16:24:05 by QUINTIX QUINTIX
I don't allocate a stencil buffer.
added on the 2010-05-26 16:35:13 by Navis Navis
Navis - no, but try allocating a stencil buffer as well and see if anything changes. In all likelihood it's just a driver bug, but possibly you can work around it.
added on the 2010-05-27 01:04:53 by hornet hornet
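A minimal sketch of that workaround, assuming EXT_packed_depth_stencil is supported and reusing the variable names from the first post:

// Hypothetical workaround: allocate a packed depth/stencil renderbuffer
// instead of a pure depth format, and attach it to both attachment points.
glGenRenderbuffersEXT(1, &FBODepthBuffer[num]);
glBindRenderbufferEXT(GL_RENDERBUFFER_EXT, FBODepthBuffer[num]);
glRenderbufferStorageMultisampleEXT(GL_RENDERBUFFER_EXT, multisample,
                                    GL_DEPTH24_STENCIL8_EXT, sizex, sizey);

glFramebufferRenderbufferEXT(GL_FRAMEBUFFER_EXT, GL_DEPTH_ATTACHMENT_EXT,
                             GL_RENDERBUFFER_EXT, FBODepthBuffer[num]);
glFramebufferRenderbufferEXT(GL_FRAMEBUFFER_EXT, GL_STENCIL_ATTACHMENT_EXT,
                             GL_RENDERBUFFER_EXT, FBODepthBuffer[num]);
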
not that it fixes your problem but:

Quote:
It is very weird that the behaviour is so different to the old series ATIs.


actually that's quite normal with ati cards/drivers :)
added on the 2010-05-27 09:12:53 by gopher gopher
Looks like a depth compression / tiling problem to me, or maybe something to do with zcull? You should check the specifics on zcull, ROP tiling, and compression of render targets in VRAM. Maybe the resolve is in the incorrect format? I can't comment on GL implementations though tbh, it's been a while since I used the GL API.
added on the 2010-05-27 10:16:58 by dv$ dv$
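One way to sanity-check what the driver actually allocated, sketched against the renderbuffers from the first post; GL_RENDERBUFFER_SAMPLES_EXT comes from EXT_framebuffer_multisample:

// Query the real sample count and internal format of the depth renderbuffer;
// drivers are allowed to round the requested sample count up.
GLint samples = 0, format = 0;
glBindRenderbufferEXT(GL_RENDERBUFFER_EXT, FBODepthBuffer[num]);
glGetRenderbufferParameterivEXT(GL_RENDERBUFFER_EXT,
                                GL_RENDERBUFFER_SAMPLES_EXT, &samples);
glGetRenderbufferParameterivEXT(GL_RENDERBUFFER_EXT,
                                GL_RENDERBUFFER_INTERNAL_FORMAT_EXT, &format);
// samples/format now hold what was actually allocated, which may differ
// from what was requested.
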
navis: why the hell are you casting the shadow of a chicken??!?!??!
BB Image
added on the 2010-05-27 14:05:15 by maali maali
Mostly what gopher said. Old ATI cards sometimes generate lots of problems, even in the simplest renderings.
added on the 2010-05-27 14:05:35 by Defiance Defiance
Defiance, I find calling HD5XXX 'old' hardcorely hi-fi.
added on the 2010-05-27 14:43:28 by msqrt msqrt
Maali, :D
added on the 2010-05-27 15:14:08 by xoofx xoofx
actually i forgot to draw the chin-thing that chickens have, i was already wondering 'wtf am i forgetting?' when i drew it :DDD
added on the 2010-05-27 15:37:17 by maali maali
It's not a chicken, but good drawing anyway.

I've discovered that the size of the "tiles" is the same regardless of the size of the rendering window. It is 6x6 pixels!?!
added on the 2010-05-27 15:38:20 by Navis Navis
@lx: ok my bad :P

Could this be of some help?
added on the 2010-05-27 15:43:58 by Defiance Defiance
lol i meant msqrt.
added on the 2010-05-27 15:44:41 by Defiance Defiance
It's also mentioned in the opengl wiki.
added on the 2010-05-27 16:04:43 by msqrt msqrt
Oh, renderbuffer versus render-to-texture, my bad :) I've been wondering what renderbuffers are good for for some time anyway, time to google ->
added on the 2010-05-27 16:07:44 by msqrt msqrt
If it's not a chicken, then what is it?

BB Image
added on the 2010-05-27 16:22:48 by doomdoom doomdoom
To me it looks like a guy sniffing into the nose of a very big sleeping dog...
BB Image
added on the 2010-05-27 16:59:24 by xTr1m xTr1m
fail...
BB Image
added on the 2010-05-27 16:59:50 by xTr1m xTr1m
Wtf ARE Renderbuffers anyway? Why not use an FBO with a texture?
added on the 2010-05-27 17:56:15 by raer raer
Renderbuffers are your best choice when you're not going to read the result as a texture, because they don't have to live by the texture-mapping hardware's rules. This means they can sometimes get a more efficient memory representation. For instance, some hardware only supports writing to 24-bit packed RGB, but requires 32-bit XRGB if it's going to be read as a texture. I think depth-buffer compression is also such a case on some hardware. At worst they are the same as a texture, so you don't really lose anything. A typical use case is depth/stencil buffers for scene rendering. They can also be used for some purposes that textures can't, like render-to-vertex-arrays.
added on the 2010-05-27 18:28:03 by kusma kusma
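To make the contrast concrete, a minimal sketch of both paths for a depth attachment; w, h and a currently bound FBO are assumed, and none of this is from kusma's post:

// Renderbuffer path: write-only storage, so the driver is free to pick
// an optimal internal layout (compression, tiling, etc.).
GLuint depthRb;
glGenRenderbuffersEXT(1, &depthRb);
glBindRenderbufferEXT(GL_RENDERBUFFER_EXT, depthRb);
glRenderbufferStorageEXT(GL_RENDERBUFFER_EXT, GL_DEPTH_COMPONENT24, w, h);
glFramebufferRenderbufferEXT(GL_FRAMEBUFFER_EXT, GL_DEPTH_ATTACHMENT_EXT,
                             GL_RENDERBUFFER_EXT, depthRb);

// Texture path: same attachment point, but the result can be sampled later,
// so it must live by the texture hardware's layout rules.
GLuint depthTex;
glGenTextures(1, &depthTex);
glBindTexture(GL_TEXTURE_2D, depthTex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT24, w, h, 0,
             GL_DEPTH_COMPONENT, GL_UNSIGNED_INT, NULL);
glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_DEPTH_ATTACHMENT_EXT,
                          GL_TEXTURE_2D, depthTex, 0);
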
Quote:
added on the 2010-05-27 17:56:15 by RareWtFailWhale

They're exactly the same thing, only different.
added on the 2010-05-27 18:41:39 by xernobyl xernobyl
