pouët.net

AI is forcing the hand of the Demoscene.

category: general [glöplog]
Quote:
i think you all are vastly overestimating how much people care about ai art lol

https://www.pcgamer.com/firmament-ai-generated-content/
but that's beside the point - the point is what it DOES, regardless of whether people care or not.
added on the 2023-07-04 14:04:01 by Gargaj Gargaj
Quote:
I think in compos you can only appeal to the honesty of the artists. In the 80's/90's many GFX artists copied pictures of Boris Vallejo. In principle, the human implementation of an image in pixel style is no different from having an AI do it. It's just faster.


(I hope people get that in those days many saw computer GFX as a technical competition in pixelling skills, where the actual content of the image didn't matter)
@4gentE
Ok, so I see that your concern isn't about the demoscene alone. What I don't understand is your prediction for the future. Like I said in my previous post, at the moment AI is no more than about generating digital pictures which are stored/displayed on digital media.

My question is, at which level do you consider that an intangible picture prompted by a jockey will replace materials such as crayons (or for the same matter oils, stone, clay, marble, fabrics, metals... you name it)? Do you believe that AI will be able to create material things at some point in that future? Or is it a future about humans being permanently wired to networks and having very little connection with the physical world anyway, therefore materials will be irrelevant?

Because if we look at the current “digital picture generating” alone… then why are there still people using crayons today, when Photoshop and automated tools and drawing tablets and styluses have existed for so many years already?
Quote:
By the way I guess you guys are aware that LLMs will soon be able to produce hundreds of fake websites mimicking this one, all with fake "tradition", fake prods, fake human interactions in fake forums, whole nine yards. Per hour perhaps. Hundreds per hour. Gee, what could go wrong?


I think it would be cool to train an LLM with all our comments. In a few months, it might be hard even for you to tell whether it was you or "pseudo4gentE" who posted this or that. :]
added on the 2023-07-04 14:59:00 by ham ham
Quote:
By the way I guess you guys are aware that LLMs will soon be able to produce hundreds of fake websites mimicking this one, all with fake "tradition", fake prods, fake human interactions in fake forums, whole nine yards. Per hour perhaps. Hundreds per hour. Gee, what could go wrong?

Looking forward to it! The question is also what could go right. Maybe the Pouet API is getting improved along the way. :-)
added on the 2023-07-04 15:04:56 by bifat bifat
@ham: Haha, yes, the exact same thing occurred to me some posts ago. I was gonna ask you guys: “Wait, you didn’t realize that I was a chatbot?”
added on the 2023-07-04 15:15:36 by 4gentE 4gentE
@rexbeng/PA: It’s more likely that the Etsy-sellers don’t have the rights to sell his stuff.
added on the 2023-07-04 15:16:20 by gaspode gaspode
I liked it when smash said something about morals… then it all went downhill.
@🎀𝓀𝒶𝓃𝑒𝑒𝓁🎀
That's because morals are questionable. Plato used to favor craftsmen over artists and said the latter were immoral.
@rexbeng:
Look man, these days where I live I cannot find anyone to service electronics or mechanics. Do you understand why that is (possibly) so? It’s because of abstraction upon abstraction upon abstraction. It’s because uneducated and uninterested people are being employed and narrowly trained to go thru their official ‘manual’ and exchange whole modules like legos. They don’t know what the f*ck they’re doing. They are like trained monkeys. Like prompt jockeys. To them, even today’s devices are black boxes. LLMs are taking this a big step further. Deskilling.

It’s not that there will be no crayons. It’s that people will not be able to sustain their living by using crayons.
added on the 2023-07-04 15:24:59 by 4gentE 4gentE
@gaspode
Perhaps so. Still easy for BV to hunt down and close them if he wanted to, since they are on an otherwise legit platform.
Quote:
That's because morals are questionable. Plato used to favor craftsmen over artists and said the latter were immoral.


From my experience the demoscene also favors craftsmen, mistakenly identifying them as “artists”. Also from my experience, most of the time the demoscene couldn’t recognize “art” if it jumped up and took a bite off its ass. Exactly what la_mettrie correctly recognized in his/her post.
added on the 2023-07-04 15:38:02 by 4gentE 4gentE
Quote:
Plato used to favor craftsmen over artists and said the latter were immoral.


Well, Plato was a very practical man and he noticed the danger (for order in the city) inherent in the behavior of all these hippie artists and poets.

And of course he was right because art is above things as mundane as morality/ethics. :]
added on the 2023-07-04 15:40:56 by ham ham
@4gentE
Hey, I hear you and can totally relate, but this has been a thing since humans started using technology (perhaps even earlier, but not documented). At some point, for example, hunters were not needed to source food anymore; the same happened to various craftsmen during the industrial revolution. Likewise, not all people using crayons today or yesterday or 100 or 500 years ago are/were able to sustain their living.

Still, morals/ethics aside, at its current state I don't see the picture-generating AIs as anything more than a tool (like Photoshop, among dozens of others) which can actually be useful to artists. On the other hand I see how it may render a number of crafts obsolete (again, like Photoshop did).

Whether AI in general is threatening or not, I cannot predict. From what I have been reading, there's the concern that it will pretty much replace everything we do. So what's the point? If there are no people holding jobs and making money, how are the big corporations going to monetize all that huge effort of replacing everything? And then what? Is humanity going to live in a temporary Utopia where all we have to care about is our hobbies until we totally fade away? Some might even say this doesn't sound that bad. :)

About the 'demoscene vs art' thing. I see what you mean, and whenever I try to describe what the demoscene is about, it's hard to go beyond the "crafting" core of it. That said, making demos does fulfill a fundamental condition of the arts: it has no practical use and is plainly self-referential. So there! But I'd rather not get out of context.

@ham
Exactly! =D
@rexbeng:
As you said, we’ll all just have to wait and see; there’s really no stopping this onslaught. But I have to say that I don’t like it one bit. If nothing else, I have to say it, only to be able to nag and repeat “I told you so, you old dumbass” when the AI apocalypse finally comes and you and me end up half broken and deformed on an island full of trash. ;-) Don’t worry, I’ll have my crayons with me, so I don’t bore you to death. And they will work.

“Context”. Funny you mentioned the word. Because that word is critical in (my intimate) definition of art. Art has to rely on context. Art without context is mere decor. The act of expertly producing decor has a name. It’s called ‘craft’. Not ‘art’ but ‘craft’. And it’s very important for human societies. And it’s at least as worthy as ‘art’. I feel Warhol, one way or another, explained this long ago.
I’m aware that my ramblings stopped adhering to the subject at hand some time ago, so I’ll just try to wrap up and hand the mic to someone who has something to say on the actual subject.

Let me just share one more (almost completely subject-unrelated) story that relates to the continuity of technological ‘progress’ which you mention, in a field I know very well. There’s no speculation here, no projection, no interpolation; it’s all been seen and lived thru. It goes something like this:
Local graphic design achieved its heyday (quality-wise) in the 80s. That was before computers. Then (in the simplified version) computers came. In design, there’s this Big Monster that is called The Client. Different Big Monsters come with different levels of ability and willingness to sabotage good design work. But generally speaking, the best work is done when the monster is kept well at bay. As a rule.

When computers came, the quality of graphic design plummeted. Wanna know why? Because everybody was doing it. Because all of a sudden the Big Monsters’ ‘little talented nephews’ had computers and would readily churn out designs that adhered to the Big Monsters’ poor taste. Also, some bad designers with poor ethics joined in, let their hands be guided by the Big Monster, and took over the good designers’ gigs. The result was worthy of crying over. The profession regressed 30 years in 5-6 years’ time. Something similar is happening with journalism today, I think. Anyway. With time this situation got better. Universities finally reacted and modernised their curriculums. So that now, 30 years after this ‘great de-evolution’ of graphic design, we have come back to the level (quality-wise) at which we were in the 80s. Every designer uses a computer and a bunch of software these days. A lot of resources, a lot of money and time spent to achieve the same result. Maybe even still a little bit inferior compared to the heyday.

So what I ask you all is this: do we need to go thru that whole ordeal again? Do we absolutely have to watch new barbarians tear down everything and wait for them to become educated? So that in 30 years’ time we can climb back to where we are now? Can’t we just move on without ‘moving fast and breaking stuff’?
added on the 2023-07-04 18:34:15 by 4gentE 4gentE
"do we need to go thru that all ordeal again?" I guess we do :)

Also (because of context) I believe Warhol would be pro AI.

It's been an interesting talk nonetheless. Thanks. I will be bringing my crayons and whatever art supplies left to that island. We can share. ^_^
Hmm… you still seem not to be getting it.

You can create a generative model of virtually anything, including chip design or even rockets, by training it on existing designs. And to no surprise, once you have it, you can produce "infinite" variations of similar designs/art/code/whatever, but this has nothing to do with creativity or intelligence.

Ideally, you would just set a goal, like "produce a design that satisfies optimal design criteria, e.g. maximize CPU performance etc...", and the AI tool would try multiple designs and pick the best one etc... but nobody knows how to do it yet on a large scale. Reinforcement learning (say, in LLMs) is just a tuning mechanism for now; it is not actively executed when you use the model (at inference time).
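The "set a goal, try many designs, keep the best" loop described above can be sketched in a few lines of Python. To be clear, everything here is hypothetical: the "design" is just two made-up parameters and score() is a stand-in for a real performance model, not anything from an actual EDA flow — the hard part in practice is that no such cheap, trustworthy score function exists at scale.

```python
import random

def score(design):
    """Hypothetical figure of merit: reward larger caches (with
    diminishing returns) and a pipeline depth near 8."""
    cache_kb, pipeline_depth = design
    return cache_kb ** 0.5 - 2 * abs(pipeline_depth - 8)

def random_design(rng):
    # Sample one candidate from a tiny made-up design space.
    return (rng.choice([16, 32, 64, 128]), rng.randint(4, 16))

def search(n_trials=1000, seed=42):
    # Try many random designs and keep the one with the best score.
    rng = random.Random(seed)
    return max((random_design(rng) for _ in range(n_trials)), key=score)

print(search())  # the best design found, e.g. (128, 8)
```

The loop itself is trivial; the open problem tomkh points at is making score() both accurate and cheap, and shrinking the search space enough that sampling has a chance of covering it.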

Of course it's in the interest of AI startups, big tech etc. to paint doomsday pictures to draw attention and investment, but you don't have to fall for it.
added on the 2023-07-05 11:08:40 by tomkh tomkh
PS: the chip design effort you linked is rather automated optimization of some aspects of chip design. They put an AI label on it just to get more attention, but it has very little to do with any of the other "AI" forms under discussion here (generative models, Stable Diffusion, LLMs).
added on the 2023-07-05 11:31:06 by tomkh tomkh
@tomkh: And what is intelligence and creativity, according to you?

Take a look at the paper hitchhikr mentioned before. It's funny that this AI even discovered the von Neumann architecture by itself during the process of creating its own new design for a RISC-V processor. :]

Read the paper! It's not what you're describing, but something really new and revolutionary.

And these are just baby steps!
added on the 2023-07-05 12:34:52 by ham ham
All you need to do is ban AI-generated content in competitions.
added on the 2023-07-05 16:32:51 by dex46... dex46...
ham: dunno about prior art in the field, but from what I skimmed of this paper, it seems like a Monte Carlo search algorithm to me, searching for a fitting architecture (Boolean formulas?) given input-output examples. Maybe nobody has tried it in the CPU design field so far, but it doesn't look like something that can be even remotely called AI, or that has much to do with the topics discussed here. Please note you have to prepare those input-output examples by actually computing them on an existing CPU.
added on the 2023-07-05 20:12:53 by tomkh tomkh
As for my view on intelligence: you can emulate many narrow aspects of human capabilities today, but the human brain is so far the most versatile, has active memory (a very hard topic in LLMs) and, quite importantly, works in real time. I'm pretty sure it's possible to create human-like AI, say, within 30 years if we continue to make steady progress, but it's gonna be a shit-show as usual: lots of hype and overpromising, jumping on the AI train with no AI at all, then stagnation for years, then suddenly another revival etc... just like with every other tech.
added on the 2023-07-05 20:30:23 by tomkh tomkh
@tomkh:
Plus, I’m no expert on the subject, but from what I gather, I’m of the opinion that the current hysteria around LLMs, which I see as nothing more than marketing to bring big capital in, actually hurts progress toward building ‘true AI’. Maybe it's even stuffing a lot of research into a possible dead end, putting all eggs into one basket while holding back progress in other branches of AI research. After all this media noise quiets down, the next AI winter is bound to be a long and cold one.
added on the 2023-07-05 21:45:09 by 4gentE 4gentE
@tomkh:

Please read pages 26 and 27 of the paper to correct the error you have fallen into by assuming that the traditional methods of EDA tools are being used. See also Figures 1 (page 5), 5 (page 11) and 9 (page 26).

What this thing is about is creating an entire design for a complete CPU with more than 4 million logic gates. All completely generated by AI exploring a search space of unprecedented size (that's why a Monte Carlo-based search is needed), starting from a finite set of examples of the I/O of the desired circuit (and without needing to resort to any of the formal hardware description languages, such as Verilog).

The problem solved by the method described in that paper is as follows:

With only finite I/O examples, infer the digital circuits, in the form of Boolean functions, that generalize to infinite I/O examples with high accuracy (99.9999999% in the case of extrapolated behavior, and 100% on the behavior described in the finite set of examples).

It is necessary to use finite I/O examples because it is impossible to obtain all I/O examples for circuits as large-scale as a CPU (their number increases exponentially with each new input bit).

Also, the inferred Boolean function descriptions represent both the combinational logic and the sequential logic of the desired CPU.

That's it (read the paper for the details).
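For anyone who finds the abstract description hard to picture, here is a toy Python sketch of the general idea: infer a Boolean function from I/O examples via random, Monte Carlo-style search. This is my own illustration, not the paper's method — the hidden "circuit" is a made-up 3-input gate, and the real system scales this kind of search to millions of gates with a far smarter exploration strategy.

```python
import itertools
import random

# Hidden "circuit" we can only observe through I/O examples.
def hidden(a, b, c):
    return (a & b) | c

# I/O examples. For a real CPU you could never enumerate the full
# truth table: n input bits means 2**n rows, hence the need to work
# from a finite subset; here the toy gate lets us list all 8 rows.
examples = [(bits, hidden(*bits))
            for bits in itertools.product([0, 1], repeat=3)]

OPS = [lambda x, y: x & y, lambda x, y: x | y, lambda x, y: x ^ y]

def random_formula(rng, depth=3):
    """Grow a random Boolean expression tree over the three inputs."""
    if depth == 0 or rng.random() < 0.5:
        i = rng.randrange(3)
        return lambda bits, i=i: bits[i]   # leaf: one input bit
    op = rng.choice(OPS)
    left = random_formula(rng, depth - 1)
    right = random_formula(rng, depth - 1)
    return lambda bits: op(left(bits), right(bits))

def fit(examples, trials=50000, seed=1):
    """Monte Carlo search: keep sampling random formulas until one
    reproduces every given I/O example."""
    rng = random.Random(seed)
    for _ in range(trials):
        f = random_formula(rng)
        if all(f(bits) == out for bits, out in examples):
            return f
    return None

f = fit(examples)
print(f is not None)  # a matching formula was found
```

With all eight truth-table rows given, any formula that matches the examples is exactly equivalent to the hidden gate; the interesting (and hard) case in the paper is when only a tiny fraction of the rows is available and the inferred function still has to extrapolate correctly.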


And this has discovered from scratch not only the general von Neumann architecture but also some fine-grained architectural optimizations!

We are talking about the basis of machine self-evolution, no less.

This new RISC-V-compliant CPU was entirely generated in about 5 hours. Compare that to the time (weeks or months) that it takes just to verify the design of a CPU designed using previous (not completely automated) methods.

As I said before, this is all something really new and revolutionary, and not a mere optimization of previous EDA tools and techniques as you seem to believe.
added on the 2023-07-05 22:07:17 by ham ham