
Mantle - a new low-level API for AMD GPUs

category: code [glöplog]
Understanding AMD’s Mantle: A Low-Level Graphics API For GCN


I am not a coder (nothing newer than 1995 technology, anyway :), but this could be interesting. It seems AMD is leveraging its presence in both the PS4 and XBO by introducing a new low-level API, presumably based on the XBO API, for current/next-gen Radeon cards. This could make porting games to the PC easier, but of course it couldn't replace traditional D3D or OpenGL versions for NVidia GPU users.

Will demo coders pick up Mantle? Will it mean going back to limited hardware, "GUS-only"-era compatibility (at least more so than today)? I guess we'll have to wait a bit longer to find out just how much of an improvement it can make.
added on the 2013-09-26 20:00:37 by phoenix
I feel it's a bit too early to pass judgement, but I guess if it's open enough, then there's no reason why demos couldn't use it - apart from the whole ATI-only thing.
added on the 2013-09-26 20:03:14 by Gargaj
Sure, it should come in handy for the next-gen Rob is Jarig port.
added on the 2013-09-26 20:19:19 by visy
it'll make a massive difference to something that's limited by the amount of cpu time spent in d3d - i.e. something that pushes a lot of draw calls. might enable, say, 9000 draw calls in a 60hz frame instead of 1000.

the problem is that (from personal experience) most demos probably don't need very many draw calls and are gpu-limited, so it won't be much use to them.
added on the 2013-09-26 20:20:34 by smash
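(For the non-coders: a minimal sketch of the per-object loop smash is describing, assuming D3D11 and made-up Object/scene types. Every call below crosses into the D3D runtime and the driver, so CPU time grows linearly with the number of draws - that per-call cost is exactly what Mantle claims to cut.)

Code:
#include <d3d11.h>

// Hypothetical per-object data, for illustration only.
struct Object {
    ID3D11Buffer*             vb;
    ID3D11Buffer*             ib;
    ID3D11Buffer*             transformCB;
    ID3D11ShaderResourceView* diffuseSRV;
    UINT                      indexCount;
};

void drawScene(ID3D11DeviceContext* ctx, const Object* objects, size_t count)
{
    const UINT stride = 32, offset = 0; // assumed vertex layout
    for (size_t i = 0; i < count; ++i) {
        const Object& obj = objects[i];
        // Each of these calls goes through the runtime and driver;
        // with ~1000 per frame you hit the practical D3D11 CPU budget.
        ctx->IASetVertexBuffers(0, 1, &obj.vb, &stride, &offset);
        ctx->IASetIndexBuffer(obj.ib, DXGI_FORMAT_R16_UINT, 0);
        ctx->VSSetConstantBuffers(0, 1, &obj.transformCB);
        ctx->PSSetShaderResources(0, 1, &obj.diffuseSRV);
        ctx->DrawIndexed(obj.indexCount, 0, 0);
    }
}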
So perfect for those 3DS Max flyby demos!
added on the 2013-09-26 20:24:38 by visy
From the company that brought you Stream SDK!

Anyway, this is probably intended to entice Xbox One/PS4 developers to port their stuff over to PC with Mantle, and for that reason it might work (kinda). But do we really need to fragment the demoscene some more?

On the other hand, people already do this by writing single-vendor GLSL shaders I guess.
added on the 2013-09-26 21:38:48 by sagacity
And Amiga demos that don't work on an Atari ST. Not a big deal, I guess, if someone puts it to good use. Few of us can watch console or oldschool demos in real time either.
added on the 2013-09-26 21:48:01 by Preacher
Mantle - A very risky move from AMD.

So that is what they did while not properly supporting high-level APIs...
It took them almost a year to get an OpenGL 4.3 beta driver ready; has anyone tested that yet?

We have to wait for the specifications.
Plus: the rumors say that their new flagship hardware doesn't properly compete with NV's, so I will just wait for the numbers.

I suspect the following will happen:
Instead of going AMD-only, I will go NV-only.
As long as the NV hardware is superior for certain tasks, especially tasks involving control flow that is not completely non-divergent...

AMD is for gamers.
:P
added on the 2013-09-26 22:17:02 by las
Well played, yet another vendor-specific API. Yay!
added on the 2013-09-26 23:00:25 by kbi
We already have "GUS only". I mean, who in their right mind has an AMD card anyway, let alone tries to run demos on one. It's Nvidia all the way down, baby.
added on the 2013-09-26 23:12:29 by yzi
MaNtLe!
added on the 2013-09-26 23:59:08 by pantaloon
las: AMD themselves say that their new hardware will "Blow NV out of the water". They probably wouldn't have said this unless they had tests to back it up. :)
added on the 2013-09-27 00:02:40 by gloom
RIP NVIDIA.

Maybe Intel will buy their tech. Probably not. They've already advanced so much in the field on their own.

Sell your stock now (just not to me).

Quote:
AMD is for gamers.

That's a pretty good place to be for a GPU manufacturer.
Quote:
I mean, who in their right mind has an AMD card anyway, let alone tries to run demos on one.

Looks like NVidia putting a tweaked Cg compiler in their drivers and letting everyone think it was proper GLSL was a wise anti-competitive step. Now everybody writes broken shaders which NVidia happily accepts and which, for some unknown reason, don't work on other hardware with "obviously broken" drivers. ;)
added on the 2013-09-27 01:24:10 by KK
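(A sketch of the kind of thing KK means, assuming a GLSL 1.10 context and GLEW for loading: strict GLSL 1.10 has no implicit int-to-float conversion, so the shader below should be rejected, but the lenient Cg-derived compiler was known to accept constructs like it - which then "mysteriously" failed on stricter drivers.)

Code:
#include <GL/glew.h>
#include <cstdio>

// Invalid under strict GLSL 1.10: "float brightness = 1;" assigns an int
// literal to a float, and 1.10 defines no implicit conversions.
static const char* kFragSrc =
    "#version 110\n"
    "void main() {\n"
    "    float brightness = 1; // illegal in strict 1.10\n"
    "    gl_FragColor = vec4(vec3(0.5) * brightness, 1.0);\n"
    "}\n";

// Compile and print the info log, so the lenient-vs-strict discrepancy
// is visible. Assumes a current GL context and GLEW already initialized.
void checkShader()
{
    GLuint fs = glCreateShader(GL_FRAGMENT_SHADER);
    glShaderSource(fs, 1, &kFragSrc, NULL);
    glCompileShader(fs);

    GLint ok = GL_FALSE;
    glGetShaderiv(fs, GL_COMPILE_STATUS, &ok);
    char log[1024];
    glGetShaderInfoLog(fs, sizeof(log), NULL, log);
    std::printf("compiled: %s\nlog: %s\n", ok ? "yes" : "no", log);
    glDeleteShader(fs);
}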
looks like a good thing. removes some overhead. more work for coders. but it's getting a lil mean. how many render backends do game developers have already? how many incompatible "efficient" ways to render some polygons, on how many platforms and apis?
added on the 2013-09-27 01:38:21 by yumeji
oh, welcome back, BSODs \0/
added on the 2013-09-27 04:37:26 by ton
yzi: me, and it's fucking awesome.
added on the 2013-09-27 09:28:03 by smash
yzi: AMD bought gDebugger and made it free. They try to be strict with their drivers. They introduced ARB_debug_output. Granted, they have a history of bad drivers, but that can be mitigated with a bit of care when choosing (e.g. Steam has recommendations). Also, they push for actual CPU/GPU shared memory addressing, which is very relevant for those of us not working in video games.

Meanwhile, NVIDIA deliberately cripples OpenCL performance to promote CUDA.

Guess what, I prefer the former entity.
added on the 2013-09-27 09:57:53 by ponce
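(Since ARB_debug_output came up: a minimal sketch of hooking it up, assuming GLEW and an already-created GL context, ideally one created with the debug flag. The callback signature follows the ARB spec; exact typedefs vary slightly between loaders. The driver then reports errors and warnings through the callback instead of failing silently.)

Code:
#include <GL/glew.h>
#include <cstdio>

// Called by the driver for every error/warning/performance message.
static void GLAPIENTRY onGlDebug(GLenum source, GLenum type, GLuint id,
                                 GLenum severity, GLsizei length,
                                 const GLchar* message, const GLvoid* userParam)
{
    std::fprintf(stderr, "GL debug [%u]: %s\n", id, message);
}

void enableGlDebugOutput()
{
    if (GLEW_ARB_debug_output) {
        // Synchronous mode: the callback fires inside the offending GL call,
        // so a breakpoint in onGlDebug lands right at the broken call site.
        glEnable(GL_DEBUG_OUTPUT_SYNCHRONOUS_ARB);
        glDebugMessageCallbackARB(onGlDebug, NULL);
    }
}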
in my years of trying different ati/nv cards, the nv cards never caught fire, unlike certain cards that definitely need a mantle!
I think that's my cue to continue work on my PowerSGL code and make a demo with it.
added on the 2013-09-27 10:36:52 by Scali
Quote:
las: AMD themselves say that their new hardware will "Blow NV out of the water". They probably wouldn't have said this unless they had tests to back it up. :)


Prolly they just wrote the benchmarks with Mantle... as it targets their own internal architecture, it should be faster. Also, Apple said Touch ID uses subcutaneous layers; they probably wouldn't have said this unless they had tests to back it up. :) eh?

on topic: yay, a hardware-specific api. i suspect amd just wants something to point developers at when their code doesn't perform as well as on nvidia (like it has up to now). i guess smash is one of the few guys in the world who manages to make his demos run better on amd cards than on nvidia ones ;) how do you do that?!
added on the 2013-09-27 11:20:43 by skomp
before a bunch of pouet smartarses chime in (oh wait, it's too late), can i just try and cut this thread off here: "if you work on the renderer of a (pc & modern console) game, you probably know why you might want this and can decide whether the extra work of supporting another platform is worth it for the cpu performance gain in your case. if you don't work on the renderer of a (pc & modern console) game, this is probably not for you. nothing to see here, move along."

:)
added on the 2013-09-27 11:47:25 by smash
Yeah. The real gains from this will be in high-end game rendering engines. And of course I support anything that competes with DX, really.

Especially a multiplatform approach.

Strangely, I'm more open to vendor-locked APIs (which, as some whispers on the Internet have shown, isn't even strictly true, because Mantle could reasonably be implemented on NV hw at some point) than OS-locked APIs.
added on the 2013-09-27 12:16:26 by visy
Hmm, factoring in everything, the only thing I can see that would be able to give them that much of an increase in latency would be that they are mmapping some kind of command buffer on the GPU. NVidia/Intel should potentially be able to support something similar by creating an emulation layer (might not be trivial, but not too bad). But if the GPU is on the same die as the CPU, then it might be hard for NVidia to keep up over a bus (compared to on-die).

My biggest question is actually how security is handled if they expose that much.
added on the 2013-09-27 12:17:58 by whizzter
decrease in latency, of course
added on the 2013-09-27 12:19:10 by whizzter
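(To illustrate whizzter's guess - and it is only a guess, since Mantle's interface isn't public: the idea would be user-space code writing command packets straight into a mapped ring buffer and submitting with a single doorbell write, instead of one driver transition per command. Every name below is invented.)

Code:
#include <cstdint>
#include <cstring>

// Purely speculative sketch of a user-space command ring.
struct CmdRing {
    uint32_t*          base;     // command buffer mapped into our address space
    size_t             sizeDw;   // ring size in dwords
    size_t             headDw;   // CPU write position
    volatile uint32_t* doorbell; // mapped GPU register: "read up to here"
};

// Append a packet. A real ring would also handle wrap-around and check
// how far the GPU has consumed before overwriting older packets.
void emit(CmdRing& r, const uint32_t* packet, size_t dwords)
{
    std::memcpy(r.base + r.headDw, packet, dwords * sizeof(uint32_t));
    r.headDw = (r.headDw + dwords) % r.sizeDw;
}

// One MMIO write submits everything queued so far - this is where the
// per-call driver overhead would disappear.
void kick(CmdRing& r)
{
    *r.doorbell = static_cast<uint32_t>(r.headDw);
}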
