pouët.net

Windows 7 64 Bit Random Stuff

category: general [glöplog]
you have to wait some time 4 some cOOl & nice DX11 gfx cards, i think till dec. 09
added on the 2009-09-04 10:42:57 by .reEto .reEto
ryg: And lastly, my point to begin with wasn't that Vista or Win7 are "teh biggezt advancement yet", I completely agree that WinNT was a bigger one. It's that they aren't the smallest ones, like you suggested. "less interesting today than they have ever been." really does mean that, and to THAT I STRONGLY disagree.

And while you might not be IMPRESSED by some of the features I listed, they are still big advancements over previous windows versions. Sure, double-buffering should have been done a long time ago, but it doesn't make this change any less real.

Speaking of compute shaders, there's one BIG advantage of DX11 compute shaders: compatibility. CUDA is nVidia only, and OpenCL implementations are, well, pretty rough so far. With one common run-time, the bar is a lot lower for getting your programs to run on multiple vendors' hardware. BrookGPU isn't something I can say I know much about, but I suspect that Stanford isn't a big enough player in the field to pull it off successfully.

Oh, and believe me, I do value many of the new features of DX11, I just pointed out the compute shaders, because I believe those are what will revolutionize the world of graphics. Sure, there's other options for doing compute shaders, but I expect DX11 compute to be a very strong player, and I'm very glad I have one of these recent Windows versions that allow me to use it.
added on the 2009-09-04 11:02:51 by kusma kusma
.reEto: Nope. DX11 will run on DX10 hardware, only without the new hardware features. AFAIK, Microsoft haven't clarified if this means that the compute shaders will work or not, but as most DX10 hardware is capable of running OpenCL, and the OpenCL and DX compute feature-set is pretty much identical (apart from OpenGL vs D3D integration, of course), I suspect they will.
added on the 2009-09-04 11:06:58 by kusma kusma
Quote:
you have to wait some time 4 some cOOl & nice DX11 gfx cards, i think till dec. 09

Nah. The first 5xxx cards from ATi are scheduled for September. :)
added on the 2009-09-04 11:14:07 by tomaes tomaes
kusma: if you want to see DX11 effects you need a DX11 gfx card.. i know that dx11 will work with dx10 hardware, i've been testing win7 since feb. :)

tomaes: i am waiting for a GeForce DX11 card. i like the "zotac" cards very much, hope they will release a DX11 version of the "zotac gtx280-1024" :)
added on the 2009-09-04 11:33:31 by .reEto .reEto
.reEto: What does testing Win7 have to do with knowing stuff about DX11? Anyway, you might get some DX11-features even with a DX10 card, as you did in DX9 with some DX8 card. I'm suspecting the compute-shader to be one of those features that you'd get with gf8 and up.
added on the 2009-09-04 11:55:55 by kusma kusma
kusma: testing win7 has nothing to do with that, it's just the kind of small talk i do on pouët a lot, since we're not in contact all the time :). if you dislike small talk, just give me a tip :)
added on the 2009-09-04 12:42:37 by .reEto .reEto
.reEto: I don't dislike small-talk, I just thought it sounded like there was some connection that I didn't know of.
added on the 2009-09-04 14:01:58 by kusma kusma
Quote:
It's that they aren't the smallest ones, like you suggested. "less interesting today than they have ever been." really does mean that, and to THAT I STRONGLY disagree.

I didn't mean to suggest that every minor OS revision in the 80s or 90s was a big deal; it's just that for a long time major changes were quite frequent (once every 2-3 years; and I mean in the OS space in general, it's certainly not true if you limit yourself to PCs and Macs) while today this doesn't really seem to happen at all.

Quote:
And while you might not be IMPRESSED by some of the features I listed, they are still big advancements over previous windows versions. Sure, double-buffering should have been done a long time ago, but it doesn't make this change any less real.

It's an improvement for Windows, yes, but only because Windows was really antiquated in that regard; I'd count that as maintenance, not innovation, same as MacOS finally getting memory management that deserved the name in OS X.

Quote:
I just pointed out the compute shaders, because I believe those are what will revolutionize the world of graphics.

Hm, I see them as a stepping stone at most. Current models, including CS, are still way too low level to revolutionize anything. There's still tons of plumbing you need to deal with, lots of memory model details to be aware of, etc. Fun for people like me who enjoy working around such constraints, but not fit to start a real revolution.

Right now, GPGPU is an art. Compute Shaders have the potential to turn it into a science. For it to be revolutionary, you have to turn it into a commodity. "Here, use this compiler, and it will take all the data parallel parts of your program and make them Just Work(tm) on the GPU". There's undeniably a lot of progress, but we're nowhere near that yet, and as long as there remain a lot of weird limitations and special cases, GPUs will only be employed by people who like working around weird limitations and special cases (and those who really, really need the compute power, of course).
added on the 2009-09-04 19:13:11 by ryg ryg
I'm .. doubtful the "sufficiently smart compiler" will ever exist.

However, an algo inspired stream-computing language could succeed. The same way SQL has established itself on one end of the spectrum.
added on the 2009-09-04 19:17:42 by _-_-__ _-_-__
I meant "algol inspired"
added on the 2009-09-04 19:17:57 by _-_-__ _-_-__
Quote:
I'm .. doubtful the "sufficiently smart compiler" will ever exist.

For current GPU architectures, I certainly agree. After all, that's where the special cases and limitations come from.

But if the hardware doesn't place arbitrary limitations on which dataflows are allowed and which aren't, the underlying models really are very simple. GPUs and stream computers are "parallel for". Clusters and clouds are "process simultaneous independent requests". Shared-memory multiprocessors are "break a large problem into several mostly independent subproblems and combine the results" (emphasis on the mostly, the advantage of shared memory is that some amount of very low-latency communication between workers is possible).

For the last two, no canonical model has emerged yet, but the "parallel for" really is a pretty good abstraction for stream processors. The two big issues are a) handling memory (how does your data get to local memory, how does the runtime know what to copy, or can you do a "semi-shared" memory model where the source data is promised to stay constant while the loop runs and the GPU "pages" memory in as it's requested) and b) how well does the program map to the available GPU instruction set?

It seems pretty clear to me that it's not very hard to design a stream processor with an instruction set that is fairly well-suited to executing arbitrary code. And once stream processors are common, that's exactly what you want. The real difficulty is that the design also needs to work well as a GPU to make sure that stream processors DO get common. To me, it seems like a foregone conclusion that we'll eventually get there; the interesting question is going to be how exactly (Larrabee is certainly one candidate if it works out).

Ah well, time will tell :)
added on the 2009-09-04 19:58:52 by ryg ryg
Haven't really used Win7 much yet, but I'm actually quite excited by DX11 and some of the potential advancements. The multi-threading support really improves efficiency (and ease) by leaps and bounds: you can go beyond the standard D3D worker thread/command buffer manipulation, as well as reduce CPU/locking issues.

I'm also looking forward to compute shaders for post-processing and image filtering effects; the thread-local storage model should considerably help bandwidth-limited shaders. And beyond this simple application, there's a lot of possibilities that open up (one thing that I've looked into is modelling interactive, physically based fluid dispersion). I don't know much about the tessellation aspects, but I'm interested in seeing what people do with it beyond the standard LOD or smooth surfaces, e.g. theoretically adding procedural detail to surfaces, or possibly supplementing decal systems? That might still be far off though.

For CS, my understanding is that there's going to be multiple compute shader models, with the lower versions backwards compatible with DX10 and DX10.1 cards. There's some stuff in the latest DX SDK, but I haven't really played around with it :-)

added on the 2009-09-05 04:21:46 by Nezbie Nezbie
I think it ought to be mentioned that Crinkler was just updated to 1.2 which not only delivers Windows 7 compatible executables, but also re-compresses existing Crinkler-processed executables to work under Windows 7. Mucho gracias to Blueberry and Mentor!
added on the 2009-09-06 12:30:38 by gloom gloom
Brilliant, thanks guys!!
Oh bugger, I'm a bit stoopid! How on earth do you use it? I've set up a dos prompt to use it, but can't figure out the command lines to recompile 4k's for Win7 to use. :(
If you just want to recompress with the same parameters, type:
crinkler.exe /RECOMPRESS input.exe /OUT:output.exe

If you want to fiddle with the compression parameters you can add any of the HASHSIZE, HASHTRIES, COMPMODE and SUBSYSTEM switches. The remaining options such as ORDERTRIES, TRANSFORM:CALLS, RANGE, UNSAFEIMPORT etc. are not supported in recompression mode, as we have essentially lost all the benefits of being a linker at this point.
added on the 2009-09-06 13:22:15 by mentor mentor
That's great mentor, thanks. :)
LOL. I have seen Debris on an Intel Atom 330 based PC with an ATI card running on Win 7 64 bit ....>properly< ....!
added on the 2009-09-06 22:54:20 by emkay emkay
