software rendering using gpu?

category: code [glöplog]
Quote:
is it feasible to softsynth audio on the GPU?

Afaik the recent intros from 0b5vr and Nusan have GPU synths; iirc Shadertoy also supports generating audio from shader code by now.
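
For reference, a minimal sketch of such a sound shader, following Shadertoy's sound-shader template (the entry point returns one stereo sample per call; the 440 Hz tone is just a placeholder, not from any of those intros):

// Shadertoy-style sound shader sketch: one stereo sample per call.
// Signature follows Shadertoy's sound-shader template; the tone is a
// placeholder, not anyone's actual synth.
vec2 mainSound(int samp, float time)
{
    float s = sin(6.2831 * 440.0 * time) * exp(-3.0 * time);
    return vec2(s, s);
}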
added on the 2023-11-13 15:21:18 by LJ LJ
Quote:
CPUs are so crazy fast that it's hard to think of an audio case that would benefit from GPU rendering

In the context of intros, it's not about performance, but about code density: shader source code compresses very well, CPU machine code much less so.
added on the 2023-11-13 15:36:34 by KeyJ KeyJ
aaaaaand it was actually Oidos in that intro, not Clinkster, but hey, same deal regarding the precalc.
added on the 2023-11-13 17:23:42 by LJ LJ
Quote:
CPUs are so crazy fast that it's hard to think of an audio case that would benefit from GPU rendering.


Audio is floating-point samples at a 44.1 kHz sample rate, so 3 minutes of track contain 7,938,000 samples, which you could render on the GPU using a 2818x2818 texture target (2817x2817 falls just short). I'd say that is exactly the kind of thing GPUs are made for - the only thing you need to keep in mind is that all filter-based effects are feedback effects and need a multipass setup to work properly while keeping the parallelization benefits of the GPU.
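
A minimal fragment-shader sketch of that mapping, assuming a full-screen pass into a 2818x2818 float render target (uSampleRate and the test tone are illustrative names and values, not from any particular intro):

#version 330 core
out vec4 fragColor;
uniform float uSampleRate; // e.g. 44100.0

void main()
{
    // One pixel = one sample: recover the linear sample index.
    float idx = floor(gl_FragCoord.y) * 2818.0 + floor(gl_FragCoord.x);
    float t   = idx / uSampleRate;          // time in seconds
    float s   = sin(6.2831853 * 440.0 * t); // trivial test tone
    fragColor = vec4(s, s, 0.0, 1.0);       // left/right channels in r/g
}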

My 5 cents: GPU synths = insane speed-up, plus the size-coding advantage of being able to store shader source instead of a binary with x87 instructions.

It's not completely straightforward to implement, though. It has been done, however.
added on the 2023-11-13 17:59:03 by NR4 NR4
On the subject of audio precalc, I wonder how those render-the-whole-song-for-each-instrument-separately precalcs actually compare to a full 4klang precalc - the reason 4klang's precalc isn't so noticeable is that the intro can be started while the audio is still precalculating, and I don't think anyone using 4klang ever waits for it to finish first.

Anyway, similarly to what NR4 said, a simple synth with features basic enough that the music can be a pure function of time is embarrassingly parallel (filters can be done analytically for basic waveforms, or additive synthesis can be done with loops, atomics, or warp reductions), and so would fit a GPU very easily. Filters and reverb can be implemented as convolutions anyway, and convolution is a very common use case for a GPU. Though procedurally generating an impulse response is another matter...
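
A rough sketch of such a pure-function-of-time additive voice (the name, harmonic count, and envelope are arbitrary choices for illustration, not from any real synth):

// Each sample depends only on t, so every sample can be evaluated
// in parallel with no state carried between them.
float voice(float t, float freq)
{
    float s = 0.0;
    for (int h = 1; h <= 16; ++h) // 16 harmonics, arbitrary
        s += sin(6.2831853 * freq * float(h) * t) / float(h); // saw-like sum
    return s * exp(-3.0 * t);     // simple decay envelope
}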
added on the 2023-11-13 21:23:23 by fizzer fizzer
Sorry, but isn't using my GPU for my calculations, generally speaking, just using an extra piece of hardware to accelerate/render my code? Isn't this hardware acceleration??? I believe that, strictly speaking, a software renderer is just that: code that renders graphics using the CPU only (excluding the integrated GPU as well).
added on the 2023-11-13 23:38:30 by Defiance Defiance
I did it in my 2014 4k intro (Discovery). Admittedly, it's rather primitive (I was looking for a simple sound anyway), and the shader for it was written by hand, but it does have rudimentary low-pass filtering (through a redundant, moving-window convolution of samples). Interestingly, I also had to slice it into multiple draw calls (with viewport settings), because the driver would always panic when the single-call variant took too long :)
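
Roughly that idea, as a hedged sketch - rawOsc is a hypothetical stand-in for the unfiltered oscillator, and the window length is made up:

// "Redundant" moving-window low-pass: each sample re-evaluates the raw
// oscillator over the last N samples and averages them, trading extra
// computation for statelessness.
float rawOsc(float t) { return sin(6.2831853 * 440.0 * t); } // placeholder oscillator

float lowpassed(float idx, float sampleRate)
{
    const int N = 8; // window length, arbitrary
    float acc = 0.0;
    for (int k = 0; k < N; ++k)
        acc += rawOsc(max(idx - float(k), 0.0) / sampleRate);
    return acc / float(N);
}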
added on the 2023-11-13 23:38:32 by reptile reptile
Okay, never mind - people refer to compute shaders as 'software rendering on the GPU' (even though the GPU is still being used). An example of that can be found here.
added on the 2023-11-13 23:47:14 by Defiance Defiance
Bru, using a computer is hardware acceleration...
added on the 2023-11-14 00:27:44 by LJ LJ
Quote:
hi there,

i was wondering if it's feasible to build a software rendering engine using GPU acceleration like CUDA or OpenCL or whatever API is hot at the moment? maybe even for sound? would it make any sense?


if you have a specific task to efficiently offload to an external processor/processing unit then why not?

i think that's the all-encompassing simplest yet true answer to this question.

so: yes you can. try it if you want to learn something new, never wrong!

i'd involve my own projects in this conversation, were it not you who asked the question.
added on the 2023-11-14 02:51:33 by superplek superplek
Obviously hw vs sw acceleration is a relic of the past dating back to 3dfx and fixed pipelines. Now it's all software rendering on dedicated hardware ;P
added on the 2023-11-16 20:52:41 by tomkh tomkh
This was the first thing on my mind when I first learned about shaders, back then :)
added on the 2023-11-19 13:54:10 by Pete Pete
I combined my old 3D engine with the OpenGL fixed-function pipeline back in the previous millennium. Just a quick hack. Even back then, I was eager to use just a part of the pipeline.
added on the 2023-11-19 13:58:43 by Pete Pete
Quote:
i think that's the all-encompassing simplest yet true answer to this question.

so: yes you can. try it if you want to learn something new, never wrong!

i'd involve my own projects in this conversation, were it not you who asked the question.

At least for me, it would be a technically pointless endeavor. I just want to do it.
...
Back in the day (around 10 years ago), a tech lead had figured out that mobile GPUs could serve as the sole platform for a software codec. So, as asked, I implemented for example an 8x8 DCT in GLES2.
added on the 2023-11-19 14:11:32 by Pete Pete
... as a vertex shader :)
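
Not Pete's actual code, but for reference, one coefficient of an 8-point DCT-II in GLSL ES style looks roughly like this (dct8 is a hypothetical name; in a vertex-shader codec the inputs would come from attributes or texture fetches):

// One 8-point DCT-II coefficient: X_u = c_u * sum x_n * cos(pi*(2n+1)*u/16)
float dct8(float x[8], int u)
{
    float cu  = (u == 0) ? 0.3535533906 : 0.5; // 1/sqrt(8), sqrt(2/8)
    float acc = 0.0;
    for (int n = 0; n < 8; ++n)
        acc += x[n] * cos(0.19634954 * float(u) * (2.0 * float(n) + 1.0)); // pi/16
    return cu * acc;
}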
added on the 2023-11-19 14:13:14 by Pete Pete
