pouët.net

nVidia RTX innovation or marketing bullshit?

category: code [glöplog]
So, what's your opinion on nVidia's new announcement of its "10 years in the making" accelerated realtime raytracing cards?
added on the 2018-08-21 23:15:46 by xernobyl xernobyl
Innovation and marketing bullshit aren't mutually exclusive.
added on the 2018-08-21 23:38:31 by Gargaj Gargaj
At face value it's cool as heck, but I'll save the excitement for after I see actual performance numbers. The "10GRays/s" sounds like a somewhat cherry-picked figure since the memory bandwidth of the device would give a bound of about 62 bytes per ray, enough to read maybe one triangle and a single BVH node.
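For the curious, the arithmetic behind that figure (the ~616 GB/s is my assumption, roughly the quoted memory bandwidth of the top announced card):
[code]
// sanity check of the "10 GRays/s" claim against memory bandwidth.
// the 616 GB/s is an assumption (roughly the quoted figure for the top
// announced card); the ray rate is NVIDIA's marketing number.
#include <cstdio>

int main() {
    const double bytesPerSecond = 616e9;  // assumed memory bandwidth
    const double raysPerSecond  = 10e9;   // claimed ray throughput
    // ~61.6 bytes of traffic per ray -- roughly one 36-byte triangle
    // (3 vertices * 3 floats) plus a single BVH node, if nothing caches.
    std::printf("bytes per ray: %.1f\n", bytesPerSecond / raysPerSecond);
    return 0;
}
[/code]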

I'm very excited about a future where raytracing is finally practical for all kinds of use, but a) I'd like to have a single trace() function at shader level instead of a billion new hoops to jump through, and b) I'd like to see it on all vendors' hardware.
added on the 2018-08-21 23:49:40 by msqrt msqrt
@gargaj xor should be a word
added on the 2018-08-22 00:15:08 by xernobyl xernobyl
that "trace" function would always have to be somewhat async since it's run time isn't constant
added on the 2018-08-22 00:18:23 by xernobyl xernobyl
well, people are complaining that they're fucking expensive for just a 15% performance boost on existing, conventional benchmarks/games, but that's comparing apples with pears. looking forward to the 2170, when the RT tech is hopefully more established in modern-day gaming :) it looks bloody sexy in this: https://www.youtube.com/watch?v=KJRZTkttgLw - but a day later they released a Shadow of the Tomb Raider showcase, apparently with RTX shaders, and to be fair you hardly see any lighting difference/improvement over more conventional non-RT techniques, other than magically disappearing beer glasses (i assume they forgot to port the glass shader or smth :P)
xernobyl: that property also holds e.g. for texture fetches, which can easily take a hundred cycles, but will be much faster if the data is in some cache. The GPU swaps a warp that is waiting on such a request (memory access, texture access, instruction fetch, ...) for a different warp that is not blocked. This "latency hiding" makes it appear, from the perspective of the individual warp, as if the request didn't take any time, because the warp simply wasn't running in the meantime. The rescheduling is super lightweight (but it only works if the workload is sufficiently heterogeneous between warps; if all eligible warps are waiting for something, you get an actual stall and your GPU is underutilized).
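something like this toy CPU-side sketch of the idea (purely illustrative, real hardware does this in the warp scheduler, not in software):
[code]
// toy model of latency hiding: each cycle, issue from the first "warp"
// that isn't waiting on a memory request. with enough ready warps, the
// fetch latency never shows up as an idle cycle.
#include <cstdio>
#include <vector>

struct Warp {
    int workLeft  = 4;  // instructions still to issue
    int stallLeft = 0;  // cycles until an outstanding request completes
};

int main() {
    std::vector<Warp> warps(4);
    warps[1].stallLeft = 3;  // warp 1 issued a slow memory fetch
    warps[3].stallLeft = 6;  // so did warp 3

    for (int cycle = 0; cycle < 16; ++cycle) {
        int issued = -1;
        for (int i = 0; i < (int)warps.size(); ++i) {
            Warp& w = warps[i];
            if (w.stallLeft > 0) { --w.stallLeft; continue; }  // blocked
            if (w.workLeft > 0 && issued < 0) { --w.workLeft; issued = i; }
        }
        if (issued >= 0)
            std::printf("cycle %2d: warp %d runs\n", cycle, issued);
        else
            std::printf("cycle %2d: stall, every warp is blocked\n", cycle);
    }
    return 0;
}
[/code]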
added on the 2018-08-22 00:58:08 by cupe cupe
It just works... you open up your commercial engine, add a cube, add a light source, turn on RTX, submit to Revision... it just works!
added on the 2018-08-22 05:12:59 by bloodnok bloodnok
Does anyone have any details on how they've implemented it? Like how the scene description is delivered to the GPU...
added on the 2018-08-22 05:19:13 by bloodnok bloodnok
it's all possible because nvidia licensed sega blast processing technology
added on the 2018-08-22 10:00:10 by arm1n arm1n
NVidia does what AMDon't
It's probably a bit of both, yes.

Knowing nvidia, the numbers are probably real, but only for some synthetic test case that has nothing to do with real-world performance.

That doesn't mean that the cards aren't absolute beasts, though.
added on the 2018-08-22 10:38:55 by sol_hsa sol_hsa
and just after he says "all the shadow mapping artifacts are gone" there's some kind of a depth biasing/post-filtering problem
BB Image
added on the 2018-08-22 12:31:12 by msqrt msqrt
It's actually pretty interesting: RTX technology seems to combine rasterization with raytracing where required (lighting, reflections and such). It's good to see graphics become more photorealistic with hardware support, without faking too much stuff, that is. :)

NVIDIA GeForce RTX - Official Launch Event
added on the 2018-08-22 16:13:37 by Defiance Defiance
We're not gonna fake it
No! We ain't gonna fake it
We're not gonna fake it
Anymooooore


Or maybe we will, cuz a cubemap is cheaper than 16k rays. :)
I foresee a transition period where tracing rays will only be viable for small and super sharp surfaces. And not with PBR, which would really be awesome.
added on the 2018-08-23 05:13:20 by BarZoule BarZoule
Thinking of it, it really is about complexity (polygons) vs fidelity.
And complexity is hard to give up.
So it only gets interesting once you no longer need more polygons.
added on the 2018-08-23 05:20:53 by BarZoule BarZoule
we gotta take these lies
and make them true
somehow
added on the 2018-08-23 06:32:06 by sol_hsa sol_hsa
BB Image
added on the 2018-08-23 21:45:03 by Zplex Zplex
Funny how people talk about raytracing as not faking it, because obviously calculating rays bouncing off polygons is a 100% accurate representation of reality.
added on the 2018-08-23 22:35:04 by sauli sauli
@Zplex, this made my day!
added on the 2018-08-23 23:23:12 by dex46... dex46...
Zplex: awesome, thank you! :-D
added on the 2018-08-23 23:36:10 by wertstahl wertstahl
sauli: and, mostly, treating light as particles instead of waves
added on the 2018-08-24 00:50:10 by xernobyl xernobyl
