Who has coded (asm) demos on the biggest nb of machines?

category: general [glöplog]
If Chaos/Sanity is the same as Chaos/Farbrausch, we may have a candidate here: Amiga+ST+PC (68k+x86).

Though I equal him in quantity (not in quality): ST+Acorn+PC (68k+Arm+x86) ;)
added on the 2008-07-23 22:54:48 by baah baah
6510/7510/8510, x86, PPC, 680x0 (all of 'em), PIC, AVR, MIPS, several ISAs I designed myself (accumulator, register, stack based)

You know one, you know them all. There is really nothing special about knowing many assembler dialects and it is not even that useful nowadays.
added on the 2008-07-23 23:01:09 by Calexico Calexico
Baah: he is the same person :)
added on the 2008-07-23 23:09:01 by magic magic
I once had the dream to code a demo for each machine or architecture possible.

So far x86, Z80, 6510. I'll go for ARM and 68000 next.
I have released demos on machines like the CPC, C64, PC, GP32, GP2X.
I'll go for the Atari ST and NDS or GBA next.
added on the 2008-07-23 23:14:04 by Optimonk Optimonk
I'll go for the Atari ST and NDS or GBA next.

We'll hold you to that one Optimus!
added on the 2008-07-23 23:27:36 by CiH CiH
Yup Calexico, you're right, it's often very similar. But you have to learn all the features of each machine again and again, which makes all your stuff impressive anyway. And some processors/architectures might be weird: transputers, Forth processors, FPGAs...

Oh, btw, apart from CISC+RISC, do you know about the SIC (Single Instruction Computer)? The only instruction is "subtract and jump if negative"! It's Turing-complete:
;primes - list all primes between 2 and 127
; Alain Brobecker (baah/Arm's Tech)
; 07-Aug-1999

;N is the number we are checking
;D is the opposite of the divisor
;M is the modulo

D D D:0 ;D=0
D 0 *+1 ;D=-2
M M M:0 ;M=0
M D N:2 ;M=-D
M N *+4 ;M=-D-N, if <0 then N>-D so continue
INC NUL PrintPrime ;Otherwise N is a prime
M D SearchModulo ;M-=D, if <0 then N>k*-D
M DEC NotAPrime ;If modulo is zero then N is not a prime
D DEC OneDivisor ;D-=1 is always negative

255 N DEC:1
N INC INC:255 ;N+=1, quit if <0 (ie bigger than 127)
INC NUL NUL:PrimeLoop ;INC=-1, so go to PrimeLoop

36 bytes... My Acorn port is here:
Otherwise look for OISC (One Instruction Set Computer), which is the original name. Devilish design by Ross Cunniff.
added on the 2008-07-23 23:29:21 by baah baah
Optimus, it's possible to make the same binary run on 68k+ARM, since the branch instruction for one of them does nothing on the other. I can give some help here. Maybe it could work on other machines too. ARM processors are very elegant (I don't like Thumb though).
added on the 2008-07-23 23:30:19 by baah baah
baah: there are actually two OISC paradigms:

1) the "subtract and branch if negative" machine that you mentioned
2) the move machine: basically the ALU and the PC are memory-mapped, and you access them as memory positions.

added on the 2008-07-23 23:54:45 by Calexico Calexico
I once had the dream to code a demo for each machine or architecture possible.

Yeah, I am also working on that!
So far I have covered:
C64, Amiga, PC DOS, PC Win/OpenGL, TI-83, J2ME, DTV, GP32 and PSP.
At the moment I am working on a demo for yet another strange platform. Fun stuff!

Actually I think it adds to the enjoyment of watching demos if you have at least coded something on a given platform, as you get a better understanding for what effects really are impressive.
added on the 2008-07-24 01:00:07 by Sdw Sdw
I have coded assembler on at least a dozen PC's :)
added on the 2008-07-24 01:15:04 by mentor mentor
Yeah having coded for two different CPUs isn't that amazing after all, all due respect to Chaos. ;) Also, what Calexico said. If you can code for one processor, learning a new instruction set is easy. Actually mastering ASM on modern processors is another matter, but there's really not much point these days anyway.
added on the 2008-07-24 02:47:00 by doomdoom doomdoom
But who has coded with most APIs ever?! Oh the excitement!
added on the 2008-07-24 05:28:16 by waffle waffle
baah, admit it, you started this thread to boast about how awesome you think you are.
added on the 2008-07-24 07:01:17 by skrebbel skrebbel
Assembly language is not harder or easier to learn than most other programming languages. If your brain can chew C, it can handle asm just as well.

The hard part is not the instruction set(s). The hard part is knowing where all the useful stuff is located (graphics, sound, memory, ...), which values in which I/O-mapped registers do what, etc., so decent hw documentation is really helpful. Only knowing the language is pretty much worthless. :)
added on the 2008-07-24 08:26:57 by tomaes tomaes
A smart man once said "Your compiler can generate better code in microseconds than you can do in hours".
added on the 2008-07-24 09:08:24 by kusma kusma
RBIL anyone?! ;)
added on the 2008-07-24 09:29:20 by raer raer

A smart man once said "Your compiler can generate better code in microseconds than you can do in hours".

That man probably never experienced AMIGGGGGAAAAAA compilers ;)
added on the 2008-07-24 09:37:28 by StingRay StingRay
does shader asm count? :)
added on the 2008-07-24 10:18:09 by Gargaj Gargaj
x86, Z80, 680x0
I think whoever has coded in assembler has a better intuition for how a compiler translates stuff in other languages. And on specific application boards there is no other choice than assembler.
added on the 2008-07-24 10:20:53 by seppjo seppjo
I could do MIPS at one point but I've forgotten the instruction names.
I can do add.l move.l dbra
I can do rep stosd

I usually don't, though. That's what compilers are for, and I can vouch that your average computer science graduate understands a lot more about compilers than your average self-professed ASM guru. Having looked into building compilers myself, the ASM code generation is the (still usually not-so) trivial part :)

But of course, writing assembly language is fun in a way that few things in this world are. That's why you have demoscene in the first place ;)
added on the 2008-07-24 10:39:06 by Preacher Preacher
seppjo: In general, I don't think so, no. I think most people who ever coded in assembler just cooked up results in a university course, never thinking about why and how. But sure, doing a bit of manual register allocation and scheduling might improve your C/C++ coding for that particular platform at least.
added on the 2008-07-24 10:44:18 by kusma kusma
I always combined high- and low-level languages, mostly using assembler when there was no other way (because of hardware, size or speed reasons). My preferred mixture was Basic, Pascal and Assembler.
I guess most of you can picture some kind of code in your mind while watching demos. Isn't it so?
added on the 2008-07-24 10:50:59 by seppjo seppjo
average computer science graduate

No. Your average compsci graduate can just barely manage Hello World in two or three languages. The ones that come out of college with a very deep understanding of compilers are the ones who were already self-taught coders to begin with.

As for compilers outperforming humans, it doesn't work that way. If the coder takes into account what sort of code the compiler is likely to generate, if he's careful with cache lines and memory/register usage, if he groups operations so that SIMD instructions get a chance to work properly, if he uses the right datatypes for the right tasks (as dictated by the specific CPU, not the programming language), and so on and so on, then the compiler will give "good enough" results much faster.

If all the coder knows is, say, C++, even if he's an expert in just that, he may write "good, clean code", but there's only so much the compiler is allowed to do, and even less that it can do, to turn it into efficient code.
added on the 2008-07-24 11:16:08 by doomdoom doomdoom
What Calexico and Doom said:

- So-called Average Computer Science Graduates don't know shit (we have to find that out every time when ppl are applying for a job; they routinely fail at the most trivial "what does thisandthat language feature do" questions)

- ... which is good news for compiler programmers, because of course a good compiler routinely outperforms an average programmer (I've seen VC2005 beat me on one or two occasions when testing out simple DSP stuff). BUT: A programmer with an understanding of the CPU architecture, the algorithms involved in his code and how to tweak them towards the hardware will always be better than the usual compiler output. Which is really practical for some critical inner loops, e.g. in game engines (on two of the three current consoles, the GPU's vertex throughput sucks ass), or, let's say, software synthesizers that had to output a whole song on a Pentium2@300MHz.

- And: Yes, if you know one ASM language, you basically know them all (6502, 65816, x86(-64), ARM, MIPS and PPC here, plus shader assembly and a custom ISA we designed at the uni :). They all have registers and a few simple commands to manipulate them. There are of course differences but those are a matter of perhaps one day to learn and figure out how to misuse :)
added on the 2008-07-24 11:38:03 by kb_ kb_

The hard part is to know where all the useful stuff is located (graphics, sound, memory, ...), what values in what i/o mapped functionality do what etc.

I'd say the hard part is understanding what happens in the processor for every instruction in such detail that you select the optimal instruction sequence and register usage.
Of course, that's what optimizing compilers are for, and it's really quite useless to write code for modern processors in assembly language, except in some very specific cases.

Having said that, I still find that assembly language programming can be fun (I suppose you have to be a bit of a nerd to understand that). x86 assembly was actually one of the first programming languages I learned, about a decade ago. I guess some of the fascination was taking all these instructions, each of which seemingly does pretty much nothing on their own, and putting them together in the right combination to get the desired result.

Since then I've worked with several different processors, and still write assembly code frequently for my own projects, even though I code C for a living :P

added on the 2008-07-24 11:48:43 by mic mic